[
{
"msg_contents": "\nSimon has just pointed out to me that as a result of recent commits, a\nnumber of things now should move from the unsupported table to the\nsupported table in features.sgml. In particular, it looks to me like all\nof these should move:\n\nT811 Basic SQL/JSON constructor functions \nT812 SQL/JSON: JSON_OBJECTAGG \nT813 SQL/JSON: JSON_ARRAYAGG with ORDER BY \nT814 Colon in JSON_OBJECT or JSON_OBJECTAGG \nT821 Basic SQL/JSON query operators \nT822 SQL/JSON: IS JSON WITH UNIQUE KEYS predicate \nT823 SQL/JSON: PASSING clause \nT824 JSON_TABLE: specific PLAN clause \nT825 SQL/JSON: ON EMPTY and ON ERROR clauses \nT826 General value expression in ON ERROR or ON EMPTY clauses\n \nT827 JSON_TABLE: sibling NESTED COLUMNS clauses \nT828 JSON_QUERY \nT829 JSON_QUERY: array wrapper options \nT830 Enforcing unique keys in SQL/JSON constructor functions \nT838 JSON_TABLE: PLAN DEFAULT clause\n\n\nIf there's no objection I'll make it so.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 13 Apr 2022 16:43:07 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "SQL JSON compliance"
},
{
"msg_contents": "On 13.04.22 22:43, Andrew Dunstan wrote:\n> Simon has just pointed out to me that as a result of recent commits, a\n> number of things now should move from the unsupported table to the\n> supported table in features.sgml. In particular, it looks to me like all\n> of these should move:\n\nThis all looks correct to me. Please go ahead.\n\n> T811 Basic SQL/JSON constructor functions\n> T812 SQL/JSON: JSON_OBJECTAGG\n> T813 SQL/JSON: JSON_ARRAYAGG with ORDER BY\n> T814 Colon in JSON_OBJECT or JSON_OBJECTAGG\n> T821 Basic SQL/JSON query operators\n> T822 SQL/JSON: IS JSON WITH UNIQUE KEYS predicate\n> T823 SQL/JSON: PASSING clause\n> T824 JSON_TABLE: specific PLAN clause\n> T825 SQL/JSON: ON EMPTY and ON ERROR clauses\n> T826 General value expression in ON ERROR or ON EMPTY clauses\n> T827 JSON_TABLE: sibling NESTED COLUMNS clauses\n> T828 JSON_QUERY\n> T829 JSON_QUERY: array wrapper options\n> T830 Enforcing unique keys in SQL/JSON constructor functions\n> T838 JSON_TABLE: PLAN DEFAULT clause\n\n\n",
"msg_date": "Fri, 29 Apr 2022 10:13:48 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL JSON compliance"
},
{
"msg_contents": "\nOn 2022-04-29 Fr 04:13, Peter Eisentraut wrote:\n> On 13.04.22 22:43, Andrew Dunstan wrote:\n>> Simon has just pointed out to me that as a result of recent commits, a\n>> number of things now should move from the unsupported table to the\n>> supported table in features.sgml. In particular, it looks to me like all\n>> of these should move:\n>\n> This all looks correct to me. Please go ahead.\n\n\nThanks, done.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 29 Apr 2022 09:11:19 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: SQL JSON compliance"
}
]
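Two of the feature codes moved in the thread above, T822 (IS JSON WITH UNIQUE KEYS) and T830 (enforcing unique keys in constructors), both concern duplicate-key enforcement. As a rough model of those semantics only — this is illustrative Python, not PostgreSQL code, and the function name is made up — the predicate `value IS JSON WITH UNIQUE KEYS` is true when the input parses as JSON and no object, at any nesting level, repeats a key:

```python
import json

def is_json_with_unique_keys(s: str) -> bool:
    """Rough model of SQL/JSON's IS JSON WITH UNIQUE KEYS (T822):
    the value must parse as JSON and no object may repeat a key."""
    def no_dup_pairs(pairs):
        # object_pairs_hook receives every key/value pair of each object,
        # so duplicates are visible before dict() would silently drop them.
        keys = [k for k, _ in pairs]
        if len(keys) != len(set(keys)):
            raise ValueError("duplicate key")
        return dict(pairs)
    try:
        json.loads(s, object_pairs_hook=no_dup_pairs)
        return True
    except ValueError:  # includes json.JSONDecodeError
        return False

print(is_json_with_unique_keys('{"a": 1, "b": 2}'))  # True
print(is_json_with_unique_keys('{"a": 1, "a": 2}'))  # False
```

The hook fires for every object as it is parsed, innermost first, so a duplicate buried in a nested object is caught just as the standard requires.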
[
{
"msg_contents": "Hi hackers,\n\nAs of 15251c0, when a standby encounters an incompatible parameter change,\nit pauses replay so that read traffic can continue while the administrator\nfixes the parameters. Once the server is restarted, replay can continue.\nBefore this change, such incompatible parameter changes caused the standby\nto immediately shut down.\n\nI noticed that there was some suggestion in the thread associated with\n15251c0 [0] for making this behavior configurable, but there didn't seem to\nbe much interest at the time. I am interested in allowing administrators\nto specify the behavior before 15251c0 (i.e., immediately shut down the\nstandby when an incompatible parameter change is detected). The use-case I\nhave in mind is when an administrator has automation in place for adjusting\nthese parameters and would like to avoid stopping replay any longer than\nnecessary. FWIW this is what we do in RDS.\n\nI've attached a patch that adds a new GUC where users can specify the\naction to take when an incompatible parameter change is detected on a\nstandby. For now, there are just two options: 'pause' and 'shutdown'.\nThis new GUC is largely modeled after recovery_target_action.\n\nI initially set out to see if it was possible to automatically adjust these\nparameters on a standby, but that is considerably more difficult. It isn't\nenough to just hook into the restart_after_crash functionality since it\ndoesn't go back far enough in the postmaster logic. IIUC we'd need to\nreload preloaded libraries (which there is presently no support for),\nrecalculate MaxBackends, etc. Another option I considered was to\nautomatically adjust the parameters during startup so that you just need to\nrestart the server. However, we need to know for sure that the server is\ngoing to be a hot standby, and I don't believe we have that information\nwhere such GUC changes would need to occur (I could be wrong about this).\nAnyway, for now I'm just proposing the modest change described above, but\nI'd welcome any discussion about improving matters further in this area.\n\n[0] https://postgr.es/m/4ad69a4c-cc9b-0dfe-0352-8b1b0cd36c7b%402ndquadrant.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 13 Apr 2022 14:35:21 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "allow specifying action when standby encounters incompatible\n parameter settings"
},
{
"msg_contents": "At Wed, 13 Apr 2022 14:35:21 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> Hi hackers,\n> \n> As of 15251c0, when a standby encounters an incompatible parameter change,\n> it pauses replay so that read traffic can continue while the administrator\n> fixes the parameters. Once the server is restarted, replay can continue.\n> Before this change, such incompatible parameter changes caused the standby\n> to immediately shut down.\n> \n> I noticed that there was some suggestion in the thread associated with\n> 15251c0 [0] for making this behavior configurable, but there didn't seem to\n> be much interest at the time. I am interested in allowing administrators\n> to specify the behavior before 15251c0 (i.e., immediately shut down the\n> standby when an incompatible parameter change is detected). The use-case I\n> have in mind is when an administrator has automation in place for adjusting\n> these parameters and would like to avoid stopping replay any longer than\n> necessary. FWIW this is what we do in RDS.\n> \n> I've attached a patch that adds a new GUC where users can specify the\n> action to take when an incompatible parameter change is detected on a\n> standby. For now, there are just two options: 'pause' and 'shutdown'.\n> This new GUC is largely modeled after recovery_target_action.\n\nThe overall direction of going to shutdown without needing user\ninteraction seems fine. I think the same can be done by\ntimeout. That is, we provide a GUC named like\ninsufficient_standby_setting_shutdown_timeout (mmm. too long..), then\nrecovery sits down for the duration then shuts down. -1 means the\ncurrent behavior, 0 means what this patch is going to\nintroduce. However I don't see a concrete use case of the timeout.\n\n> I initially set out to see if it was possible to automatically adjust these\n> parameters on a standby, but that is considerably more difficult. It isn't\n> enough to just hook into the restart_after_crash functionality since it\n> doesn't go back far enough in the postmaster logic. IIUC we'd need to\n> reload preloaded libraries (which there is presently no support for),\n> recalculate MaxBackends, etc. Another option I considered was to\n\nSure.\n\n> automatically adjust the parameters during startup so that you just need to\n> restart the server. However, we need to know for sure that the server is\n> going to be a hot standby, and I don't believe we have that information\n> where such GUC changes would need to occur (I could be wrong about this).\n\nCouldn't we use AlterSystemSetConfigFile for this purpose in\nCheckRequiredParameterValues?\n\n> Anyway, for now I'm just proposing the modest change described above, but\n> I'd welcome any discussion about improving matters further in this area.\n> \n> [0] https://postgr.es/m/4ad69a4c-cc9b-0dfe-0352-8b1b0cd36c7b%402ndquadrant.com\n\nIs the reason for the enum the extensibility to add a new choice like\n\"auto-adjust\"?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 14 Apr 2022 11:36:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow specifying action when standby encounters incompatible\n parameter settings"
},
{
"msg_contents": "Thanks for taking a look!\n\nOn Thu, Apr 14, 2022 at 11:36:11AM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 13 Apr 2022 14:35:21 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n>> I initially set out to see if it was possible to automatically adjust these\n>> parameters on a standby, but that is considerably more difficult. It isn't\n>> enough to just hook into the restart_after_crash functionality since it\n>> doesn't go back far enough in the postmaster logic. IIUC we'd need to\n>> reload preloaded libraries (which there is presently no support for),\n>> recalculate MaxBackends, etc. Another option I considered was to\n> \n> Sure.\n> \n>> automatically adjust the parameters during startup so that you just need to\n>> restart the server. However, we need to know for sure that the server is\n>> going to be a hot standby, and I don't believe we have that information\n>> where such GUC changes would need to occur (I could be wrong about this).\n> \n> Couldn't we use AlterSystemSetConfigFile for this purpose in\n> CheckRequiredParameterValues?\n\nThat's an interesting idea. When an incompatible parameter change is\ndetected, the server would automatically run the equivalent of an ALTER\nSYSTEM SET command before shutting down or pausing replay. I might draft\nup a proof-of-concept to see what this looks like and how well it works.\n\n>> Anyway, for now I'm just proposing the modest change described above, but\n>> I'd welcome any discussion about improving matters further in this area.\n>> \n>> [0] https://postgr.es/m/4ad69a4c-cc9b-0dfe-0352-8b1b0cd36c7b%402ndquadrant.com\n> \n> Is the reason for the enum the extensibility to add a new choice like\n> \"auto-adjust\"?\n\nYes.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 14 Apr 2022 09:13:15 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow specifying action when standby encounters incompatible\n parameter settings"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHello\r\n\r\nThe patch applied and tested fine. I think for this kind of exception, it really is up to the administrator how he/she should proceed to resolve depending on his/her business application. Leaving things configurable by the user is generally a nice and modest change. I also like that you leave the parameters as enum entry so it is possible to extend other possible actions such as automatically adjust to match the new value. \r\n\r\n\r\n---------\r\nCary Huang\r\nHighGo Software Canada",
"msg_date": "Fri, 29 Apr 2022 18:35:52 +0000",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: allow specifying action when standby encounters incompatible\n parameter settings"
},
{
"msg_contents": "On Fri, Apr 29, 2022 at 06:35:52PM +0000, Cary Huang wrote:\n> The patch applied and tested fine. I think for this kind of exception, it really is up to the administrator how he/she should proceed to resolve depending on his/her business application. Leaving things configurable by the user is generally a nice and modest change. I also like that you leave the parameters as enum entry so it is possible to extend other possible actions such as automatically adjust to match the new value. \n\nThanks for reviewing! Do you think this patch can be marked as\nready-for-committer?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 2 May 2022 10:01:49 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow specifying action when standby encounters incompatible\n parameter settings"
},
{
"msg_contents": "On Wed, 13 Apr 2022 at 22:35, Nathan Bossart <nathandbossart@gmail.com> wrote:\n\n> As of 15251c0, when a standby encounters an incompatible parameter change,\n> it pauses replay so that read traffic can continue while the administrator\n> fixes the parameters. Once the server is restarted, replay can continue.\n> Before this change, such incompatible parameter changes caused the standby\n> to immediately shut down.\n>\n> I noticed that there was some suggestion in the thread associated with\n> 15251c0 [0] for making this behavior configurable, but there didn't seem to\n> be much interest at the time. I am interested in allowing administrators\n> to specify the behavior before 15251c0 (i.e., immediately shut down the\n> standby when an incompatible parameter change is detected). The use-case I\n> have in mind is when an administrator has automation in place for adjusting\n> these parameters and would like to avoid stopping replay any longer than\n> necessary. FWIW this is what we do in RDS.\n\nI don't understand why you need this patch at all.\n\nSince you have automation, you can use that layer to automatically\nrestart all standbys at once, if you choose, without any help or\nhindrance from PostgreSQL.\n\nBut I really don't want you to do that, since that results in loss of\navailability of the service. I'd like you to try a little harder and\nautomate this in a way that allows the service to continue with some\nstandbys available while others restart.\n\nAll this feature does is allow you to implement things in a lazy way\nthat causes a loss of availability for users. I don't think that is of\nbenefit to PostgreSQL users, so -1 from me, on this patch (only),\nsorry about that.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 23 Jun 2022 16:19:45 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: allow specifying action when standby encounters incompatible\n parameter settings"
},
{
"msg_contents": "Thanks for chiming in.\n\nOn Thu, Jun 23, 2022 at 04:19:45PM +0100, Simon Riggs wrote:\n> I don't understand why you need this patch at all.\n> \n> Since you have automation, you can use that layer to automatically\n> restart all standbys at once, if you choose, without any help or\n> hindrance from PostgreSQL.\n> \n> But I really don't want you to do that, since that results in loss of\n> availability of the service. I'd like you to try a little harder and\n> automate this in a way that allows the service to continue with some\n> standbys available while others restart.\n> \n> All this feature does is allow you to implement things in a lazy way\n> that causes a loss of availability for users. I don't think that is of\n> benefit to PostgreSQL users, so -1 from me, on this patch (only),\n> sorry about that.\n\nOverall, this is intended for users that care more about keeping WAL replay\ncaught up than a temporary loss of availability due to a restart. Without\nthis, I'd need to detect that WAL replay has paused due to insufficient\nparameters and restart Postgres. If I can configure Postgres to\nautomatically shut down in these scenarios, my automation can skip right to\nadjusting the parameters and starting Postgres up. Of course, if you care\nmore about availability, you'd keep this parameter set to the default\n(pause) and restart on your own schedule.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 23 Jun 2022 10:45:07 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow specifying action when standby encounters incompatible\n parameter settings"
},
{
"msg_contents": "On Thu, 23 Jun 2022 at 18:45, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> Thanks for chiming in.\n>\n> On Thu, Jun 23, 2022 at 04:19:45PM +0100, Simon Riggs wrote:\n> > I don't understand why you need this patch at all.\n> >\n> > Since you have automation, you can use that layer to automatically\n> > restart all standbys at once, if you choose, without any help or\n> > hindrance from PostgreSQL.\n> >\n> > But I really don't want you to do that, since that results in loss of\n> > availability of the service. I'd like you to try a little harder and\n> > automate this in a way that allows the service to continue with some\n> > standbys available while others restart.\n> >\n> > All this feature does is allow you to implement things in a lazy way\n> > that causes a loss of availability for users. I don't think that is of\n> > benefit to PostgreSQL users, so -1 from me, on this patch (only),\n> > sorry about that.\n>\n> Overall, this is intended for users that care more about keeping WAL replay\n> caught up than a temporary loss of availability due to a restart. Without\n> this, I'd need to detect that WAL replay has paused due to insufficient\n> parameters and restart Postgres. If I can configure Postgres to\n> automatically shut down in these scenarios, my automation can skip right to\n> adjusting the parameters and starting Postgres up. Of course, if you care\n> more about availability, you'd keep this parameter set to the default\n> (pause) and restart on your own schedule.\n\nThere are a few choices of how we can deal with this situation\n1. Make the change blindly and then pick up the pieces afterwards\n2. Check the configuration before changes are made, and make the\nchanges in the right order\n\nThis patch and the above argument assumes that you must do (1), but\nyou could easily do (2).\n\ni.e. If you know that changing specific parameters might affect\navailability, why not query those parameter values on all servers\nfirst and check whether the change will affect availability, before\nyou allow the changes? why rely on PostgreSQL to pick up the pieces\nbecause the orchestration code doesn't (yet) make configuration sanity\nchecks?\n\nThis patch would undo a very important change - to keep servers\navailable by default and go back to the old behavior for a huge fleet\nof Postgres databases. The old behavior of shutdown-on-change caused\ncatastrophe many times for users and in one case brought down a rather\nlarge and important service provider, whose CTO explained to me quite\nclearly how stupid he thought that old behavior was. So I will not\neasily agree to allowing it to be put back into PostgreSQL, simply to\navoid adding a small amount of easy code into an orchestration layer\nthat could and should implement documented best practice.\n\nI am otherwise very appreciative of your insightful contributions,\njust not this specific one.\n\nHappy to discuss how we might introduce new parameters/behavior to\nreduce the list of parameters that need to be kept in lock-step.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 24 Jun 2022 11:42:29 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: allow specifying action when standby encounters incompatible\n parameter settings"
},
{
"msg_contents": "On Fri, Jun 24, 2022 at 11:42:29AM +0100, Simon Riggs wrote:\n> This patch would undo a very important change - to keep servers\n> available by default and go back to the old behavior for a huge fleet\n> of Postgres databases. The old behavior of shutdown-on-change caused\n> catastrophe many times for users and in one case brought down a rather\n> large and important service provider, whose CTO explained to me quite\n> clearly how stupid he thought that old behavior was. So I will not\n> easily agree to allowing it to be put back into PostgreSQL, simply to\n> avoid adding a small amount of easy code into an orchestration layer\n> that could and should implement documented best practice.\n> \n> I am otherwise very appreciative of your insightful contributions,\n> just not this specific one.\n\nGiven this feedback, I intend to mark the associated commitfest entry as\nWithdrawn at the conclusion of the current commitfest.\n\n> Happy to discuss how we might introduce new parameters/behavior to\n> reduce the list of parameters that need to be kept in lock-step.\n\nMe, too. I don't have anything concrete to propose at the moment. I\nthought Horiguchi-san's idea of automatically running ALTER SYSTEM was\nintriguing, but I have not explored that in depth.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Jul 2022 15:17:10 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow specifying action when standby encounters incompatible\n parameter settings"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 03:17:10PM -0700, Nathan Bossart wrote:\n> Given this feedback, I intend to mark the associated commitfest entry as\n> Withdrawn at the conclusion of the current commitfest.\n\nDone.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 25 Jul 2022 09:22:58 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow specifying action when standby encounters incompatible\n parameter settings"
}
]
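Simon's position in the thread above is that the orchestration layer, not the server, should verify that every standby can absorb a primary-side increase of the hot-standby-critical settings before the change is applied, restarting only the standbys that need it. A minimal sketch of that sanity check in Python — illustrative only; the function and its inputs are made up, while the setting list matches the parameters CheckRequiredParameterValues compares against the primary's values:

```python
# Settings that must be at least as large on a hot standby as on the primary.
CRITICAL_SETTINGS = (
    "max_connections",
    "max_worker_processes",
    "max_wal_senders",
    "max_prepared_transactions",
    "max_locks_per_transaction",
)

def standbys_blocking_change(new_primary_settings, standby_settings):
    """Return {standby_name: [settings that must be raised first]}.

    new_primary_settings: {setting: int} planned values on the primary.
    standby_settings: {standby_name: {setting: int}} current standby values.
    A standby blocks the change if any critical setting sits below the
    primary's planned value; those standbys must be restarted with the
    higher value *before* the primary change is applied, so the rest of
    the fleet stays available throughout.
    """
    blockers = {}
    for name, settings in standby_settings.items():
        too_low = [s for s in CRITICAL_SETTINGS
                   if settings.get(s, 0) < new_primary_settings.get(s, 0)]
        if too_low:
            blockers[name] = too_low
    return blockers
```

In practice the inputs would come from querying pg_settings on each node; rolling restarts through only the standbys returned here, before touching the primary, avoids both the pre-15251c0 surprise shutdown and a fleet-wide replay pause.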
[
{
"msg_contents": "For the past five days or so, wrasse has been intermittently\nfailing due to unexpectedly not using an Index Only Scan plan\nin the create_index test [1], eg\n\n@@ -1910,11 +1910,15 @@\n SELECT unique1 FROM tenk1\n WHERE unique1 IN (1,42,7)\n ORDER BY unique1;\n- QUERY PLAN \n--------------------------------------------------------\n- Index Only Scan using tenk1_unique1 on tenk1\n- Index Cond: (unique1 = ANY ('{1,42,7}'::integer[]))\n-(2 rows)\n+ QUERY PLAN \n+-------------------------------------------------------------------\n+ Sort\n+ Sort Key: unique1\n+ -> Bitmap Heap Scan on tenk1\n+ Recheck Cond: (unique1 = ANY ('{1,42,7}'::integer[]))\n+ -> Bitmap Index Scan on tenk1_unique1\n+ Index Cond: (unique1 = ANY ('{1,42,7}'::integer[]))\n+(6 rows)\n \n SELECT unique1 FROM tenk1\n WHERE unique1 IN (1,42,7)\n\nThe most probable explanation for this seems to be that tenk1's\npg_class.relallvisible value hasn't been set high enough to make an IOS\nlook cheaper than the alternatives. Where that ought to be getting set\nis the \"VACUUM ANALYZE tenk1\" step in test_setup.sql. It's plausible\nI guess that a background autovacuum is preventing that command from\nsetting relallvisible as high as it ought to be --- but if so, why\nare we only seeing two plans changing, on only one animal?\n\nBut what I'm really confused about is that this test arrangement has\nbeen stable since early February. Why has wrasse suddenly started\nshowing a 25% failure rate when it never failed this way before that?\nSomebody has to have recently committed a change that affects this.\nChecking the commit log up to the onset of the failures on 8 April,\nI only see two plausible candidates:\n\n* shared-memory pgstats\n* Peter's recent VACUUM changes\n\nAny connection to pgstats is, um, pretty obscure. I'd finger the VACUUM\nchanges as a more likely trigger except that the last interesting-looking\none was f3c15cbe5 on 3 April, and wrasse got through \"make check\" 38 times\nafter that before its first failure of this kind. That doesn't square with\nthe 25% failure rate since then, so I'm kind of forced to the conclusion\nthat the pgstats work changed some behavior that it should not have.\n\nAny ideas?\n\nI'm tempted to add something like\n\nSELECT relallvisible = relpages FROM pg_class WHERE relname = 'tenk1';\n\nso that we can confirm or refute the theory that relallvisible is\nthe driving factor.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-04-08%2003%3A48%3A30\n\n\n",
"msg_date": "Wed, 13 Apr 2022 18:08:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 3:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm tempted to add something like\n>\n> SELECT relallvisible = relpages FROM pg_class WHERE relname = 'tenk1';\n>\n> so that we can confirm or refute the theory that relallvisible is\n> the driving factor.\n\nIt would be fairly straightforward to commit a temporary debugging\npatch that has the autovacuum logging stuff report directly on how\nVACUUM set new_rel_allvisible in pg_class. We should probably be doing\nthat already, just because it's useful information that is already\nclose at hand.\n\nMight be a bit trickier to make sure that wrasse reliably reported on\nall relevant VACUUMs, since that would have to include manual VACUUMs\n(which would really have to use VACUUM VERBOSE), as well as\nautovacuums.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 13 Apr 2022 15:20:30 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Wed, Apr 13, 2022 at 3:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm tempted to add something like\n>> SELECT relallvisible = relpages FROM pg_class WHERE relname = 'tenk1';\n>> so that we can confirm or refute the theory that relallvisible is\n>> the driving factor.\n\n> It would be fairly straightforward to commit a temporary debugging\n> patch that has the autovacuum logging stuff report directly on how\n> VACUUM set new_rel_allvisible in pg_class. We should probably be doing\n> that already, just because it's useful information that is already\n> close at hand.\n\nDoesn't look like wrasse has autovacuum logging enabled, though.\n\nAfter a bit more navel-contemplation I see a way that the pgstats\nwork could have changed timing in this area. We used to have a\nrate limit on how often stats reports would be sent to the\ncollector, which'd ensure half a second or so delay before a\ntransaction's change counts became visible to the autovac daemon.\nI've not looked at the new code, but I'm betting that that's gone\nand the autovac launcher might start a worker nearly immediately\nafter some foreground process finishes inserting some rows.\nSo that could result in autovac activity occurring concurrently\nwith test_setup where it didn't before.\n\nAs to what to do about it ... maybe apply the FREEZE and\nDISABLE_PAGE_SKIPPING options in test_setup's vacuums?\nIt seems like DISABLE_PAGE_SKIPPING is necessary but perhaps\nnot sufficient.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Apr 2022 18:54:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, 14 Apr 2022 at 10:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> After a bit more navel-contemplation I see a way that the pgstats\n> work could have changed timing in this area. We used to have a\n> rate limit on how often stats reports would be sent to the\n> collector, which'd ensure half a second or so delay before a\n> transaction's change counts became visible to the autovac daemon.\n> I've not looked at the new code, but I'm betting that that's gone\n> and the autovac launcher might start a worker nearly immediately\n> after some foreground process finishes inserting some rows.\n> So that could result in autovac activity occurring concurrently\n> with test_setup where it didn't before.\n\nIt's not quite clear to me why the manual vacuum wouldn't just cancel\nthe autovacuum and complete the job. I can't quite see how there's\nroom for competing page locks here. Also, see [1]. One of the\nreported failing tests there is the same as one of the failing tests\non wrasse. My investigation for the AIO branch found that\nrelallvisible was not equal to relpages. I don't recall the reason why\nthat was happening now.\n\n> As to what to do about it ... maybe apply the FREEZE and\n> DISABLE_PAGE_SKIPPING options in test_setup's vacuums?\n> It seems like DISABLE_PAGE_SKIPPING is necessary but perhaps\n> not sufficient.\n\nWe should likely try and confirm it's due to relallvisible first.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/20220224153339.pqn64kseb5gpgl74@alap3.anarazel.de\n\n\n",
"msg_date": "Thu, 14 Apr 2022 11:06:33 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 3:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> After a bit more navel-contemplation I see a way that the pgstats\n> work could have changed timing in this area. We used to have a\n> rate limit on how often stats reports would be sent to the\n> collector, which'd ensure half a second or so delay before a\n> transaction's change counts became visible to the autovac daemon.\n> I've not looked at the new code, but I'm betting that that's gone\n> and the autovac launcher might start a worker nearly immediately\n> after some foreground process finishes inserting some rows.\n> So that could result in autovac activity occurring concurrently\n> with test_setup where it didn't before.\n\nBut why should it matter? The test_setup.sql VACUUM of tenk1 should\nleave relallvisible and relpages in the same state, either way (or\nvery close to it).\n\nThe only way that it seems like it could matter is if OldestXmin was\nheld back during test_setup.sql's execution of the VACUUM command.\n\n> As to what to do about it ... maybe apply the FREEZE and\n> DISABLE_PAGE_SKIPPING options in test_setup's vacuums?\n> It seems like DISABLE_PAGE_SKIPPING is necessary but perhaps\n> not sufficient.\n\nBTW, the work on VACUUM for Postgres 15 probably makes VACUUM test\nflappiness issues less of a problem -- unless they're issues involving\nsomething holding back OldestXmin when it shouldn't (in which case it\nwon't have any effect on test stability). I would expect that to be\nthe case, at least, since VACUUM now does almost all of the same work\nfor any individual page that it cannot get a cleanup lock on. There is\nsurprisingly little difference between a page that gets processed by\nlazy_scan_prune and a page that gets processed by lazy_scan_noprune.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 13 Apr 2022 16:07:01 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi, \n\nOn April 13, 2022 7:06:33 PM EDT, David Rowley <dgrowleyml@gmail.com> wrote:\n>On Thu, 14 Apr 2022 at 10:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> After a bit more navel-contemplation I see a way that the pgstats\n>> work could have changed timing in this area. We used to have a\n>> rate limit on how often stats reports would be sent to the\n>> collector, which'd ensure half a second or so delay before a\n>> transaction's change counts became visible to the autovac daemon.\n>> I've not looked at the new code, but I'm betting that that's gone\n>> and the autovac launcher might start a worker nearly immediately\n>> after some foreground process finishes inserting some rows.\n>> So that could result in autovac activity occurring concurrently\n>> with test_setup where it didn't before.\n>\n>It's not quite clear to me why the manual vacuum wouldn't just cancel\n>the autovacuum and complete the job. I can't quite see how there's\n>room for competing page locks here. Also, see [1]. One of the\n>reported failing tests there is the same as one of the failing tests\n>on wrasse. My investigation for the AIO branch found that\n>relallvisible was not equal to relpages. I don't recall the reason why\n>that was happening now.\n>\n>> As to what to do about it ... maybe apply the FREEZE and\n>> DISABLE_PAGE_SKIPPING options in test_setup's vacuums?\n>> It seems like DISABLE_PAGE_SKIPPING is necessary but perhaps\n>> not sufficient.\n>\n>We should likely try and confirm it's due to relallvisible first.\n\nWe had this issue before, and not just on the aio branch. On my phone right now, so won't look up references.\n\nIIRC the problem in matter isn't skipped pages, but that the horizon simply isn't new enough to mark pages as all visible. An independent autovac worker starting is enough for that, for example. Previously the data load and vacuum were further apart, preventing this kind of issue.\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 13 Apr 2022 19:13:07 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 4:13 PM Andres Freund <andres@anarazel.de> wrote:\n> IIRC the problem in matter isn't skipped pages, but that the horizon simply isn't new enough to mark pages as all visible.\n\nSometimes OldestXmin can go backwards in VACUUM operations that are\nrun in close succession against the same table, due to activity from\nother databases in the same cluster (perhaps other factors are\ninvolved at times).\n\nThat's why the following assertion that I recently added to\nvacuumlazy.c will fail pretty quickly without the\n\"vacrel->NewRelfrozenXid == OldestXmin\" part of its test:\n\n Assert(vacrel->NewRelfrozenXid == OldestXmin ||\n TransactionIdPrecedesOrEquals(aggressive ? FreezeLimit :\n vacrel->relfrozenxid,\n vacrel->NewRelfrozenXid));\n\nIf you remove \"vacrel->NewRelfrozenXid == OldestXmin\", and run the\nregression tests, the remaining assertion will fail quite easily.\nThough perhaps not with a serial \"make check\".\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 13 Apr 2022 16:17:02 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Wed, Apr 13, 2022 at 4:13 PM Andres Freund <andres@anarazel.de> wrote:\n>> IIRC the problem in matter isn't skipped pages, but that the horizon simply isn't new enough to mark pages as all visible.\n\n> Sometimes OldestXmin can go backwards in VACUUM operations that are\n> run in close succession against the same table, due to activity from\n> other databases in the same cluster (perhaps other factors are\n> involved at times).\n\nI've been doing some testing locally by inserting commands to\nmanually set tenk1's relallvisible to zero. I first did that\nin test_setup.sql ... and it had no effect whatsoever. Further\nexperimentation showed that the \"CREATE INDEX ON tenk1\" steps\nin create_index.sql itself generally suffice to fix relallvisible;\nalthough if you force it back to zero after the last such command,\nyou get the same plan diffs wrasse is showing. And you don't\nget any others, which I thought curious until I realized that\nsanity_check.sql's database-wide VACUUM offers yet another\nopportunity to heal the incorrect value. If you force it back\nto zero again after that, a bunch of later tests start to show\nplan differences, which is what I'd been expecting.\n\nSo what seems to be happening on wrasse is that a background\nautovacuum (or really autoanalyze?) is preventing pages from\nbeing marked all-visible not only during test_setup.sql but\nalso create_index.sql; but it's gone by the time sanity_check.sql\nruns. Which is odd in itself because not that much time elapses\nbetween create_index and sanity_check, certainly less than the\ntime from test_setup to create_index.\n\nIt seems like a reliable fix might require test_setup to wait\nfor any background autovac to exit before it does its own\nvacuums. Ick.\n\nAnd we still lack an explanation of why this only now broke.\nI remain suspicious that pgstats is behaving unexpectedly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Apr 2022 19:38:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 4:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So what seems to be happening on wrasse is that a background\n> autovacuum (or really autoanalyze?) is preventing pages from\n> being marked all-visible not only during test_setup.sql but\n> also create_index.sql; but it's gone by the time sanity_check.sql\n> runs.\n\nI agree that it would need to be an autoanalyze (due to the\nPROC_IN_VACUUM optimization).\n\n> It seems like a reliable fix might require test_setup to wait\n> for any background autovac to exit before it does its own\n> vacuums. Ick.\n\nThis is hardly a new problem, really. I wonder if it's worth inventing\na comprehensive solution. Some kind of infrastructure that makes\nVACUUM establish a next XID up-front (by calling\nReadNextTransactionId()), and then find a way to run with an\nOldestXmin that's >= the earleir \"next\" XID value. If necessary by\nwaiting.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 13 Apr 2022 16:45:44 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Wed, Apr 13, 2022 at 4:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It seems like a reliable fix might require test_setup to wait\n>> for any background autovac to exit before it does its own\n>> vacuums. Ick.\n\n> This is hardly a new problem, really. I wonder if it's worth inventing\n> a comprehensive solution.\n\nYeah, we have band-aided around this type of problem repeatedly.\nMaking a fix that's readily accessible from any test script\nseems like a good idea.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Apr 2022 19:51:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 4:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, we have band-aided around this type of problem repeatedly.\n> Making a fix that's readily accessible from any test script\n> seems like a good idea.\n\nWe might even be able to consistently rely on this new option, given\n*any* problem involving test stability and VACUUM. Having a\none-size-fits-all solution to these kinds of stability problems would\nbe nice -- no more DISABLE_PAGE_SKIPPING bandaids.\n\nThat general approach will be possible provided an inability to\nacquire a cleanup lock during VACUUM (which can more or less occur at\nrandom in most environments) doesn't ever lead to unexpected test\nresults. There is good reason to think that it might work out that\nway. Simulating problems with acquiring cleanup locks during VACUUM\nleft me with the impression that that could really work:\n\nhttps://postgr.es/m/CAH2-WzkiB-qcsBmWrpzP0nxvrQExoUts1d7TYShg_DrkOHeg4Q@mail.gmail.com\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 13 Apr 2022 17:17:00 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-13 16:45:44 -0700, Peter Geoghegan wrote:\n> On Wed, Apr 13, 2022 at 4:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > So what seems to be happening on wrasse is that a background\n> > autovacuum (or really autoanalyze?) is preventing pages from\n> > being marked all-visible not only during test_setup.sql but\n> > also create_index.sql; but it's gone by the time sanity_check.sql\n> > runs.\n> \n> I agree that it would need to be an autoanalyze (due to the\n> PROC_IN_VACUUM optimization).\n\nThat's not a realiable protection - the snapshot is established normally\nat first, only after a while we set PROC_IN_VACUUM...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 13 Apr 2022 17:35:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Wed, Apr 13, 2022 at 4:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, we have band-aided around this type of problem repeatedly.\n>> Making a fix that's readily accessible from any test script\n>> seems like a good idea.\n\n> We might even be able to consistently rely on this new option, given\n> *any* problem involving test stability and VACUUM. Having a\n> one-size-fits-all solution to these kinds of stability problems would\n> be nice -- no more DISABLE_PAGE_SKIPPING bandaids.\n\nMy guess is that you'd need both this new wait-for-horizon behavior\n*and* DISABLE_PAGE_SKIPPING. But the two together ought to make\nfor pretty reproducible behavior. I noticed while scanning the\ncommit log that some patches have tried adding a FREEZE option,\nwhich seems more like waving a dead chicken than anything that'd\nimprove stability.\n\nWe'd not necessarily have to embed wait-for-horizon into VACUUM\nitself. It seems like a SQL-accessible function could be written\nand then called before any problematic VACUUM. I like this better\nfor something we're thinking of jamming in post-feature-freeze;\nwe'd not be committing to the feature quite as much as if we\nadded a VACUUM option.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Apr 2022 20:35:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-13 18:54:06 -0400, Tom Lane wrote:\n> We used to have a rate limit on how often stats reports would be sent\n> to the collector, which'd ensure half a second or so delay before a\n> transaction's change counts became visible to the autovac daemon.\n\nJust for posterity: That's not actually gone. But what is gone is the\nrate limiting in autovacuum about requesting recent stats for a table /\nautovac seeing slightly older stats.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 13 Apr 2022 17:59:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 5:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> My guess is that you'd need both this new wait-for-horizon behavior\n> *and* DISABLE_PAGE_SKIPPING. But the two together ought to make\n> for pretty reproducible behavior. I noticed while scanning the\n> commit log that some patches have tried adding a FREEZE option,\n> which seems more like waving a dead chicken than anything that'd\n> improve stability.\n\nI think that it's more likely that FREEZE will correct problems, out of the two:\n\n* FREEZE forces an aggressive VACUUM whose FreezeLimit is as recent a\ncutoff value as possible (FreezeLimit will be equal to OldestXmin).\n\n* DISABLE_PAGE_SKIPPING also forces an aggressive VACUUM. But unlike\nFREEZE it makes VACUUM not use the visibility map, even in the case of\nall-frozen pages. And it changes nothing about FreezeLimit.\n\nIt's also a certainty that VACUUM(FREEZE, DISABLE_PAGE_SKIPPING) is\nnot a sensible remedy for any problem with test stability, but there\nare still some examples of that combination in the regression tests.\nThe only way it could make sense is if the visibility map might be\ncorrupt, but surely we're not expecting that anyway (and if we were\nwe'd be testing it more directly).\n\nI recently argued that DISABLE_PAGE_SKIPPING should have nothing to do\nwith aggressive vacuuming -- that should all be left up to VACUUM\nFREEZE. It seems more logical to make DISABLE_PAGE_SKIPPING mean\n\"don't use the visibility map to skip anything\", without bringing\naggressiveness into it at all. That would be less confusing.\n\n> We'd not necessarily have to embed wait-for-horizon into VACUUM\n> itself. It seems like a SQL-accessible function could be written\n> and then called before any problematic VACUUM. I like this better\n> for something we're thinking of jamming in post-feature-freeze;\n> we'd not be committing to the feature quite as much as if we\n> added a VACUUM option.\n\nHmm. 
I would say that the feature has zero appeal to users anyway.\nMaybe it can and should be done through an SQL function for other\nreasons, though. Users already think that there are several different\nflavors of VACUUM, which isn't really true.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 13 Apr 2022 18:03:54 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-13 20:35:50 -0400, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > On Wed, Apr 13, 2022 at 4:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Yeah, we have band-aided around this type of problem repeatedly.\n> >> Making a fix that's readily accessible from any test script\n> >> seems like a good idea.\n> \n> > We might even be able to consistently rely on this new option, given\n> > *any* problem involving test stability and VACUUM. Having a\n> > one-size-fits-all solution to these kinds of stability problems would\n> > be nice -- no more DISABLE_PAGE_SKIPPING bandaids.\n> \n> My guess is that you'd need both this new wait-for-horizon behavior\n> *and* DISABLE_PAGE_SKIPPING. But the two together ought to make\n> for pretty reproducible behavior. I noticed while scanning the\n> commit log that some patches have tried adding a FREEZE option,\n> which seems more like waving a dead chicken than anything that'd\n> improve stability.\n\nI think most of those we've ended up replacing by using temp tables in\nthose tests instead, since they're not affected by the global horizon\nanymore.\n\n\n> We'd not necessarily have to embed wait-for-horizon into VACUUM\n> itself.\n\nI'm not sure it'd be quite reliable outside of vacuum though, due to the\nhorizon potentially going backwards (in otherwise harmless ways)?\n\n\n> It seems like a SQL-accessible function could be written\n> and then called before any problematic VACUUM. I like this better\n> for something we're thinking of jamming in post-feature-freeze;\n> we'd not be committing to the feature quite as much as if we\n> added a VACUUM option.\n\nWe could otherwise just disable IOS for that query, for now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 13 Apr 2022 18:05:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 6:03 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I think that it's more likely that FREEZE will correct problems, out of the two:\n>\n> * FREEZE forces an aggressive VACUUM whose FreezeLimit is as recent a\n> cutoff value as possible (FreezeLimit will be equal to OldestXmin).\n\nThe reason why that might have helped (at least in the past) is that\nit's enough to force us to wait for a cleanup lock to prune and\nfreeze, if necessary. Which was never something that\nDISABLE_PAGE_SKIPPING could do.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 13 Apr 2022 18:05:15 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 6:05 PM Andres Freund <andres@anarazel.de> wrote:\n> I think most of those we've ended up replacing by using temp tables in\n> those tests instead, since they're not affected by the global horizon\n> anymore.\n\nMaybe, but it's a pain to have to work that way. You can't do it in\ncases like this, because a temp table is not workable. So that's not\nan ideal long term solution.\n\n> > We'd not necessarily have to embed wait-for-horizon into VACUUM\n> > itself.\n>\n> I'm not sure it'd be quite reliable outside of vacuum though, due to the\n> horizon potentially going backwards (in otherwise harmless ways)?\n\nI agree, since vacuumlazy.c would need to either be given its own\nOldestXmin, or knowledge of a wait-up-to XID. Either way we have to\nmake non-trivial changes to vacuumlazy.c.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 13 Apr 2022 18:08:08 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-04-13 20:35:50 -0400, Tom Lane wrote:\n>> It seems like a SQL-accessible function could be written\n>> and then called before any problematic VACUUM. I like this better\n>> for something we're thinking of jamming in post-feature-freeze;\n>> we'd not be committing to the feature quite as much as if we\n>> added a VACUUM option.\n\n> We could otherwise just disable IOS for that query, for now.\n\nThe entire point of that test case is to verify the shape of the\nIOS plan, so no that's not an acceptable answer. But if we're\nlooking for quick hacks, we could do\n\nupdate pg_class set relallvisible = relpages where relname = 'tenk1';\n\njust before that test.\n\nI'm still suspicious of the pgstat changes, though. I checked into\nthings here by doing\n\n\tinitdb\n\tedit postgresql.conf to set log_autovacuum_min_duration = 0\n\tpg_ctl start && make installcheck-parallel\n\nand what I see is that the first reported autovacuum activity begins\nexactly one minute after the postmaster starts, which is what I'd\nexpect given the autovacuum naptime rules. On my machine, of course,\nthe installcheck-parallel run is long gone by then. But even on the\nmuch slower wrasse, we should be well past create_index by the time any\nautovac worker launches --- you can see from wrasse's reported test\nruntimes that only about 10 seconds have elapsed when it get to the end\nof create_index.\n\nThis suggests to me that what is holding the (presumed) conflicting\nsnapshot must be the autovac launcher, because what else could it be?\nSo I'm suspicious that we broke something in that area, though I am\nbaffled why only wrasse would be telling us so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Apr 2022 21:23:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\nNoah, any chance you could enable log_autovacuum_min_duration=0 on\nwrasse?\n\n\nOn 2022-04-13 21:23:12 -0400, Tom Lane wrote:\n> I'm still suspicious of the pgstat changes, though. I checked into\n> things here by doing\n> \n> \tinitdb\n> \tedit postgresql.conf to set log_autovacuum_min_duration = 0\n> \tpg_ctl start && make installcheck-parallel\n> \n> and what I see is that the first reported autovacuum activity begins\n> exactly one minute after the postmaster starts, which is what I'd\n> expect given the autovacuum naptime rules.\n\nIt'd not necessarily have to be autovacuum. A CREATE INDEX or VACUUM\nusing parallelism, could also cause this, I think. It'd be a narrow\nwindow, of course...\n\nDoes sparc have wider alignment rules for some types? Perhaps that'd be\nenough to put some tables to be sufficiently larger to trigger parallel\nvacuum?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 13 Apr 2022 18:51:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Noah, any chance you could enable log_autovacuum_min_duration=0 on\n> wrasse?\n\n+1\n\n> Does sparc have wider alignment rules for some types? Perhaps that'd be\n> enough to put some tables to be sufficiently larger to trigger parallel\n> vacuum?\n\nNo, the configure results on wrasse look pretty ordinary:\n\nchecking size of void *... 8\nchecking size of size_t... 8\nchecking size of long... 8\nchecking alignment of short... 2\nchecking alignment of int... 4\nchecking alignment of long... 8\nchecking alignment of double... 8\n\nI wondered for a moment about force_parallel_mode, but wrasse doesn't\nappear to be setting that, and in any case I'm pretty sure it only\naffects plannable statements.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Apr 2022 22:18:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 06:51:12PM -0700, Andres Freund wrote:\n> Noah, any chance you could enable log_autovacuum_min_duration=0 on\n> wrasse?\n\nDone. Also forced hourly builds.\n\n\n",
"msg_date": "Wed, 13 Apr 2022 19:51:22 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Wed, Apr 13, 2022 at 06:51:12PM -0700, Andres Freund wrote:\n>> Noah, any chance you could enable log_autovacuum_min_duration=0 on\n>> wrasse?\n\n> Done. Also forced hourly builds.\n\nThanks! We now have two failing runs with the additional info [1][2],\nand in both, it's clear that the first autovac worker doesn't launch\nuntil 1 minute after postmaster start, by which time we're long done\nwith the test scripts of interest. So whatever is breaking this is\nnot an autovac worker.\n\nI think I'm going to temporarily add a couple of queries to check\nwhat tenk1's relallvisible actually is, just so we can confirm\npositively that that's what's causing the plan change. (I'm also\ncurious about whether the CREATE INDEX steps manage to change it\nat all.)\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-04-14%2013%3A28%3A14\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-04-14%2004%3A48%3A13\n\n\n",
"msg_date": "Thu, 14 Apr 2022 12:01:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-14 12:01:23 -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > On Wed, Apr 13, 2022 at 06:51:12PM -0700, Andres Freund wrote:\n> >> Noah, any chance you could enable log_autovacuum_min_duration=0 on\n> >> wrasse?\n> \n> > Done. Also forced hourly builds.\n\nThanks! Can you repro the problem manually on wrasse, perhaps even\noutside the buildfarm script? That might be simpler than debugging via\nthe BF...\n\n\n> Thanks! We now have two failing runs with the additional info [1][2],\n> and in both, it's clear that the first autovac worker doesn't launch\n> until 1 minute after postmaster start, by which time we're long done\n> with the test scripts of interest. So whatever is breaking this is\n> not an autovac worker.\n\nI did some experiments around that too, and didn't find any related\nproblems.\n\nFor a second I was wondering if it's caused by the time of initdb (which\nends up with a working pgstat snapshot now, but didn't before), but\nthat's just a few more seconds. While the BF scripts don't show\ntimestamps for initdb, the previous step's log output confirms that it's\njust a few seconds...\n\n\n> I think I'm going to temporarily add a couple of queries to check\n> what tenk1's relallvisible actually is, just so we can confirm\n> positively that that's what's causing the plan change. (I'm also\n> curious about whether the CREATE INDEX steps manage to change it\n> at all.)\n\nI wonder if we should make VACUUM log the VERBOSE output at DEBUG1\nunconditionally. This is like the third bug where we needed that\ninformation, and it's practically impossible to include in regression\noutput. Then we'd know what the xid horizon is, whether pages were\nskipped, etc.\n\nIt also just generally seems like a good thing.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 14 Apr 2022 09:18:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 9:18 AM Andres Freund <andres@anarazel.de> wrote:\n> I wonder if we should make VACUUM log the VERBOSE output at DEBUG1\n> unconditionally. This is like the third bug where we needed that\n> information, and it's practically impossible to include in regression\n> output. Then we'd know what the xid horizon is, whether pages were\n> skipped, etc.\n\nI like the idea of making VACUUM log the VERBOSE output as a\nconfigurable user-visible feature. We'll then be able to log all\nVACUUM statements (not just autovacuum worker VACUUMs).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Apr 2022 09:21:54 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Thanks! Can you repro the problem manually on wrasse, perhaps even\n> outside the buildfarm script?\n\nI'm working on that right now, actually...\n\n> I wonder if we should make VACUUM log the VERBOSE output at DEBUG1\n> unconditionally. This is like the third bug where we needed that\n> information, and it's practically impossible to include in regression\n> output. Then we'd know what the xid horizon is, whether pages were\n> skipped, etc.\n\nRight at the moment it seems like we also need visibility into what\nCREATE INDEX is doing.\n\nI'm not sure I'd buy into permanent changes here (at least not ones made\nin haste), but temporarily adding more logging seems perfectly reasonable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Apr 2022 12:26:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-14 12:26:20 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Thanks! Can you repro the problem manually on wrasse, perhaps even\n> > outside the buildfarm script?\n\nAh, cool.\n\n\n> I'm working on that right now, actually...\n> \n> > I wonder if we should make VACUUM log the VERBOSE output at DEBUG1\n> > unconditionally. This is like the third bug where we needed that\n> > information, and it's practically impossible to include in regression\n> > output. Then we'd know what the xid horizon is, whether pages were\n> > skipped, etc.\n> \n> Right at the moment it seems like we also need visibility into what\n> CREATE INDEX is doing.\n\n> I'm not sure I'd buy into permanent changes here (at least not ones made\n> in haste), but temporarily adding more logging seems perfectly reasonable.\n\nI think it might be worth leaving in, but let's debate that separately?\nI'm thinking of something like the attached.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 14 Apr 2022 09:48:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 9:48 AM Andres Freund <andres@anarazel.de> wrote:\n> I think it might be worth leaving in, but let's debate that separately?\n> I'm thinking of something like the attached.\n\nThe current convention for the \"extra\" ereport()s that VACUUM VERBOSE\noutputs at INFO elevel is to use DEBUG2 elevel in all other cases\n(these extra messages are considered part of VACUUM VERBOSE output,\nbut are *not* considered part of the autovacuum log output).\n\nIt looks like you're changing the elevel convention for these \"extra\"\nmessages with this patch. That might be fine, but don't forget about\nsimilar ereports() in vacuumparallel.c. I think that the elevel should\nprobably remain uniform across all of these messages. Though I don't\nparticular care if it's DEBUG2 or DEBUG5.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Apr 2022 10:07:13 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 10:07 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> It looks like you're changing the elevel convention for these \"extra\"\n> messages with this patch. That might be fine, but don't forget about\n> similar ereports() in vacuumparallel.c. I think that the elevel should\n> probably remain uniform across all of these messages. Though I don't\n> particular care if it's DEBUG2 or DEBUG5.\n\nAlso, don't forget to do something here, with the assertion and with\nthe message:\n\n if (verbose)\n {\n /*\n * Aggressiveness already reported earlier, in dedicated\n * VACUUM VERBOSE ereport\n */\n Assert(!params->is_wraparound);\n msgfmt = _(\"finished vacuuming \\\"%s.%s.%s\\\": index\nscans: %d\\n\");\n }\n else if (params->is_wraparound)\n {\n\nPresumably we will need to report on antiwraparound-ness in the\nverbose-debug-elevel-for-autovacuum case (and not allow this assertion\nto fail).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Apr 2022 10:12:42 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "I wroteL\n> Andres Freund <andres@anarazel.de> writes:\n>> Thanks! Can you repro the problem manually on wrasse, perhaps even\n>> outside the buildfarm script?\n\n> I'm working on that right now, actually...\n\nSo far, reproducing it manually has been a miserable failure: I've\nrun about 180 cycles of the core regression tests with no error.\nNot sure what's different between my test scenario and wrasse's.\n\nMeanwhile, wrasse did fail with my relallvisible check in place [1],\nand what that shows is that relallvisible is *zero* to start with\nand remains so throughout the CREATE INDEX sequence. That pretty\ndefinitively proves that it's not a page-skipping problem but\nan xmin-horizon-too-old problem. We're no closer to understanding\nwhere that horizon value is coming from, though.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-04-14%2019%3A28%3A12\n\n\n",
"msg_date": "Thu, 14 Apr 2022 17:33:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 2:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Meanwhile, wrasse did fail with my relallvisible check in place [1],\n> and what that shows is that relallvisible is *zero* to start with\n> and remains so throughout the CREATE INDEX sequence. That pretty\n> definitively proves that it's not a page-skipping problem but\n> an xmin-horizon-too-old problem. We're no closer to understanding\n> where that horizon value is coming from, though.\n\nHave you looked at the autovacuum log output in more detail? It might\nbe possible to debug further, but looks like there are no XIDs to work\noff of in the log_line_prefix that's in use on wrasse.\n\nThe CITester log_line_prefix is pretty useful -- I wonder if we can\nstandardize on that within the buildfarm, too.\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Apr 2022 15:08:45 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Have you looked at the autovacuum log output in more detail?\n\nI don't think there's anything to be learned there. The first autovacuum\nin wrasse's log happens long after things went south:\n\n2022-04-14 22:49:15.177 CEST [9427:1] LOG: automatic vacuum of table \"regression.pg_catalog.pg_type\": index scans: 1\n\tpages: 0 removed, 49 remain, 49 scanned (100.00% of total)\n\ttuples: 539 removed, 1112 remain, 0 are dead but not yet removable\n\tremovable cutoff: 8915, older by 1 xids when operation ended\n\tindex scan needed: 34 pages from table (69.39% of total) had 1107 dead item identifiers removed\n\tindex \"pg_type_oid_index\": pages: 14 in total, 0 newly deleted, 0 currently deleted, 0 reusable\n\tindex \"pg_type_typname_nsp_index\": pages: 13 in total, 0 newly deleted, 0 currently deleted, 0 reusable\n\tavg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\n\tbuffer usage: 193 hits, 0 misses, 0 dirtied\n\tWAL usage: 116 records, 0 full page images, 14113 bytes\n\tsystem usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n\nIf we captured equivalent output from the manual VACUUM in test_setup,\nmaybe something could be learned. However, it seems virtually certain\nto me that the problematic xmin is in some background process\n(eg autovac launcher) and thus wouldn't show up in the postmaster log,\nlog_line_prefix or no.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Apr 2022 18:23:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 3:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If we captured equivalent output from the manual VACUUM in test_setup,\n> maybe something could be learned. However, it seems virtually certain\n> to me that the problematic xmin is in some background process\n> (eg autovac launcher) and thus wouldn't show up in the postmaster log,\n> log_line_prefix or no.\n\nA bunch of autovacuums that ran between \"2022-04-14 22:49:16.274\" and\n\"2022-04-14 22:49:19.088\" all have the same \"removable cutoff\".\n\nThe logs from this time show a period of around three seconds\n(likely more) where something held back OldestXmin generally.\nThat does seem a bit fishy to me, even though it happened about a\nminute after the failure itself took place.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Apr 2022 15:28:34 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 3:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> A bunch of autovacuums that ran between \"2022-04-14 22:49:16.274\" and\n> \"2022-04-14 22:49:19.088\" all have the same \"removable cutoff\".\n\nAre you aware of Andres' commit 02fea8fd? That work prevented exactly\nthe same set of symptoms (the same index-only scan create_index\nregressions), which was apparently necessary following the\nrearrangements to the regression tests to remove cross-script\ndependencies (Tom's commit cc50080a82).\n\nThis was the thread that led to Andres' commit, which was just over a month ago:\n\nhttps://www.postgresql.org/message-id/flat/CAJ7c6TPJNof1Q+vJsy3QebgbPgXdu2ErPvYkBdhD6_Ckv5EZRg@mail.gmail.com\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Apr 2022 18:18:40 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Are you aware of Andres' commit 02fea8fd? That work prevented exactly\n> the same set of symptoms (the same index-only scan create_index\n> regressions),\n\nHm. I'm starting to get the feeling that the real problem here is\nwe've \"optimized\" the system to the point where repeatable results\nfrom VACUUM are impossible :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Apr 2022 21:32:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 6:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hm. I'm starting to get the feeling that the real problem here is\n> we've \"optimized\" the system to the point where repeatable results\n> from VACUUM are impossible :-(\n\nI don't think that there is any fundamental reason why VACUUM cannot\nhave repeatable results.\n\nAnyway, I suppose it's possible that problems reappeared here due to\nsome other patch. Something else could have broken Andres' earlier\nband aid solution (which was to set synchronous_commit=on in\ntest_setup).\n\nIs there any patch that could plausibly have had that effect, whose\ncommit fits with our timeline for the problems seen on wrasse?\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Apr 2022 18:49:48 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-14 21:32:27 -0400, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > Are you aware of Andres' commit 02fea8fd? That work prevented exactly\n> > the same set of symptoms (the same index-only scan create_index\n> > regressions),\n> \n> Hm. I'm starting to get the feeling that the real problem here is\n> we've \"optimized\" the system to the point where repeatable results\n> from VACUUM are impossible :-(\n\nThe synchronous_commit issue is an old one. It might actually be worth\naddressing it by flushing out pending async commits out instead. It just\nstarted to be noticeable when tenk1 load and vacuum were moved closer.\n\n\nWhat do you think about applying a polished version of what I posted in\nhttps://postgr.es/m/20220414164830.63rk5zqsvtqqk7qz%40alap3.anarazel.de\n? That'd tell us a bit more about the horizon etc.\n\nIt doesn't have to be the autovacuum launcher. I think it shouldn't even\nbe taken into account - it's not database connected, and tenk1 isn't a\nshared relation. All very odd.\n\nIt's also interesting that it only happens in the installcheck cases,\nafaics, not the check ones. Although that might just be because there's\nmore of them...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 14 Apr 2022 18:52:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Anyway, I suppose it's possible that problems reappeared here due to\n> some other patch. Something else could have broken Andres' earlier\n> band aid solution (which was to set synchronous_commit=on in\n> test_setup).\n\nThat band-aid only addressed the situation of someone having turned\noff synchronous_commit in the first place; which is not the case\non wrasse or most/all other buildfarm animals. Whatever we're\ndealing with here is something independent of that.\n\n> Is there any patch that could plausibly have had that effect, whose\n> commit fits with our timeline for the problems seen on wrasse?\n\nI already enumerated my suspects, back at the top of this thread.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Apr 2022 21:53:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 6:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That band-aid only addressed the situation of someone having turned\n> off synchronous_commit in the first place; which is not the case\n> on wrasse or most/all other buildfarm animals. Whatever we're\n> dealing with here is something independent of that.\n\nThat was the intent, but that in itself doesn't mean that it isn't\nsomething to do with setting hint bits (not the OldestXmin horizon\nbeing held back). I'd really like to know what the removable cutoff\nlooks like for these VACUUM operations, which is something like\nAndres' VACUUM VERBOSE debug patch should tell us.\n\n> > Is there any patch that could plausibly have had that effect, whose\n> > commit fits with our timeline for the problems seen on wrasse?\n>\n> I already enumerated my suspects, back at the top of this thread.\n\nRight, but I thought that the syncronous_commit thing was new\ninformation that made that worth revisiting.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Apr 2022 18:59:14 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> That was the intent, but that in itself doesn't mean that it isn't\n> something to do with setting hint bits (not the OldestXmin horizon\n> being held back).\n\nOh! You mean that maybe the OldestXmin horizon was fine, but something\ndecided not to update hint bits (and therefore also not the all-visible\nbit) anyway? Worth investigating I guess.\n\n> I'd really like to know what the removable cutoff\n> looks like for these VACUUM operations, which is something like\n> Andres' VACUUM VERBOSE debug patch should tell us.\n\nYeah. I'd hoped to investigate this manually and not have to clutter\nthe main repo with debugging commits. However, since I've still\nutterly failed to reproduce the problem on wrasse's host, I think\nwe're going to be forced to do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Apr 2022 22:20:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 7:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Oh! You mean that maybe the OldestXmin horizon was fine, but something\n> decided not to update hint bits (and therefore also not the all-visible\n> bit) anyway? Worth investigating I guess.\n\nYes. That is starting to seem like a plausible alternative explanation.\n\n> > I'd really like to know what the removable cutoff\n> > looks like for these VACUUM operations, which is something like\n> > Andres' VACUUM VERBOSE debug patch should tell us.\n>\n> Yeah. I'd hoped to investigate this manually and not have to clutter\n> the main repo with debugging commits.\n\nSuppose that the bug was actually in 06f5295af6, \"Add single-item\ncache when looking at topmost XID of a subtrans XID\". Doesn't that fit\nyour timeline just as well?\n\nI haven't really started to investigate that theory (just putting\ndinner on here). Just a wild guess at this point.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Apr 2022 19:27:26 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Suppose that the bug was actually in 06f5295af6, \"Add single-item\n> cache when looking at topmost XID of a subtrans XID\". Doesn't that fit\n> your timeline just as well?\n\nI'd dismissed that on the grounds that there are no subtrans XIDs\ninvolved in tenk1's contents. However, if that patch was faulty\nenough, maybe it affected other cases besides the advertised one?\nI've not read it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Apr 2022 22:40:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 06:52:49PM -0700, Andres Freund wrote:\n> On 2022-04-14 21:32:27 -0400, Tom Lane wrote:\n> > Peter Geoghegan <pg@bowt.ie> writes:\n> > > Are you aware of Andres' commit 02fea8fd? That work prevented exactly\n> > > the same set of symptoms (the same index-only scan create_index\n> > > regressions),\n> > \n> > Hm. I'm starting to get the feeling that the real problem here is\n> > we've \"optimized\" the system to the point where repeatable results\n> > from VACUUM are impossible :-(\n> \n> The synchronous_commit issue is an old one. It might actually be worth\n> addressing it by flushing out pending async commits out instead. It just\n> started to be noticeable when tenk1 load and vacuum were moved closer.\n> \n> \n> What do you think about applying a polished version of what I posted in\n> https://postgr.es/m/20220414164830.63rk5zqsvtqqk7qz%40alap3.anarazel.de\n> ? That'd tell us a bit more about the horizon etc.\n\nNo objection.\n\n> It's also interesting that it only happens in the installcheck cases,\n> afaics, not the check ones. Although that might just be because there's\n> more of them...\n\nI suspect the failure is somehow impossible in \"check\". Yesterday, I cranked\nup the number of locales, so there are now a lot more installcheck. Before\nthat, each farm run had one \"check\" and two \"installcheck\". Those days saw\nten installcheck failures, zero check failures.\n\nLike Tom, I'm failing to reproduce this outside the buildfarm client. I wrote\na shell script to closely resemble the buildfarm installcheck sequence, but\nit's lasted a dozen runs without failing.\n\n\n",
"msg_date": "Thu, 14 Apr 2022 19:45:15 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 07:45:15PM -0700, Noah Misch wrote:\n> On Thu, Apr 14, 2022 at 06:52:49PM -0700, Andres Freund wrote:\n> > On 2022-04-14 21:32:27 -0400, Tom Lane wrote:\n> > > Peter Geoghegan <pg@bowt.ie> writes:\n> > > > Are you aware of Andres' commit 02fea8fd? That work prevented exactly\n> > > > the same set of symptoms (the same index-only scan create_index\n> > > > regressions),\n> > > \n> > > Hm. I'm starting to get the feeling that the real problem here is\n> > > we've \"optimized\" the system to the point where repeatable results\n> > > from VACUUM are impossible :-(\n> > \n> > The synchronous_commit issue is an old one. It might actually be worth\n> > addressing it by flushing out pending async commits out instead. It just\n> > started to be noticeable when tenk1 load and vacuum were moved closer.\n> > \n> > \n> > What do you think about applying a polished version of what I posted in\n> > https://postgr.es/m/20220414164830.63rk5zqsvtqqk7qz%40alap3.anarazel.de\n> > ? That'd tell us a bit more about the horizon etc.\n> \n> No objection.\n> \n> > It's also interesting that it only happens in the installcheck cases,\n> > afaics, not the check ones. Although that might just be because there's\n> > more of them...\n> \n> I suspect the failure is somehow impossible in \"check\". Yesterday, I cranked\n> up the number of locales, so there are now a lot more installcheck. Before\n> that, each farm run had one \"check\" and two \"installcheck\". Those days saw\n> ten installcheck failures, zero check failures.\n> \n> Like Tom, I'm failing to reproduce this outside the buildfarm client. I wrote\n> a shell script to closely resemble the buildfarm installcheck sequence, but\n> it's lasted a dozen runs without failing.\n\nBut 24s after that email, it did reproduce the problem. Same symptoms as the\nlast buildfarm runs, including visfrac=0. I'm attaching my script. (It has\nvarious references to my home directory, so it's not self-contained.)",
"msg_date": "Thu, 14 Apr 2022 19:50:19 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> Like Tom, I'm failing to reproduce this outside the buildfarm client.\n\nThis is far from the first time that I've failed to reproduce a buildfarm\nresult manually, even on the very machine hosting the animal. I would\nlike to identify the cause(s) of that. One obvious theory is that the\nenvironment under a cron job is different --- but the only thing I know\nof that should be different is possibly nice'ing the job priorities.\nI did try a fair number of test cycles under \"nice\" in this case.\nAnybody have other ideas?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Apr 2022 22:54:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> But 24s after that email, it did reproduce the problem.\n\nAin't that always the way.\n\n> Same symptoms as the\n> last buildfarm runs, including visfrac=0. I'm attaching my script. (It has\n> various references to my home directory, so it's not self-contained.)\n\nThat's interesting, because I see you used installcheck-parallel,\nwhich I'd not been using on the grounds that wrasse isn't parallelizing\nthese runs. That puts a big hole in my working assumption that the\nproblem is one of timing. I'm now suspecting that the key issue is\nsomething about how wrasse is building the executables that I did\nnot exactly reproduce. I did not try to copy the build details\ninvolving stuff under your home directory (like the private openldap\nversion), mainly because it didn't seem like openldap or uuid or\nperl could be involved at all. But maybe somehow?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Apr 2022 23:06:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 7:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This is far from the first time that I've failed to reproduce a buildfarm\n> result manually, even on the very machine hosting the animal. I would\n> like to identify the cause(s) of that. One obvious theory is that the\n> environment under a cron job is different --- but the only thing I know\n> of that should be different is possibly nice'ing the job priorities.\n> I did try a fair number of test cycles under \"nice\" in this case.\n> Anybody have other ideas?\n\nWell, Noah is running wrasse with 'fsync = off'. And did so in the\nscript as well.\n\nThat seems like it definitely could matter.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Apr 2022 20:06:16 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Well, Noah is running wrasse with 'fsync = off'. And did so in the\n> script as well.\n\nAs am I. I duplicated wrasse's config to the extent of\n\ncat >>$PGDATA/postgresql.conf <<EOF\nlog_line_prefix = '%m [%p:%l] %q%a '\nlog_connections = 'true'\nlog_disconnections = 'true'\nlog_statement = 'all'\nfsync = off\nlog_autovacuum_min_duration = 0\nEOF\n\nOne thing I'm eyeing now is that it looks like Noah is re-initdb'ing\neach time, whereas I'd just stopped and started the postmaster of\nan existing installation. That does not seem like it could matter\nbut ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Apr 2022 23:17:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 11:06:04PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > But 24s after that email, it did reproduce the problem.\n> \n> Ain't that always the way.\n\nQuite so.\n\n> > Same symptoms as the\n> > last buildfarm runs, including visfrac=0. I'm attaching my script. (It has\n> > various references to my home directory, so it's not self-contained.)\n> \n> That's interesting, because I see you used installcheck-parallel,\n> which I'd not been using on the grounds that wrasse isn't parallelizing\n> these runs. That puts a big hole in my working assumption that the\n> problem is one of timing.\n\nWith \"make installcheck-tests TESTS='test_setup create_index'\" it remains\nreproducible. The attached script reproduced it in 103s and then in 703s.\n\n> I'm now suspecting that the key issue is\n> something about how wrasse is building the executables that I did\n> not exactly reproduce. I did not try to copy the build details\n> involving stuff under your home directory (like the private openldap\n> version), mainly because it didn't seem like openldap or uuid or\n> perl could be involved at all. But maybe somehow?\n\nCan't rule it out entirely. I think I've now put world-read on all the\ndirectories referenced in the script, in the event you'd like to use them.",
"msg_date": "Thu, 14 Apr 2022 20:27:09 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "I wrote:\n> One thing I'm eyeing now is that it looks like Noah is re-initdb'ing\n> each time, whereas I'd just stopped and started the postmaster of\n> an existing installation. That does not seem like it could matter\n> but ...\n\nWell, damn. I changed my script that way and it failed on the tenth\niteration (versus a couple hundred successful iterations the other\nway). So somehow this is related to time-since-initdb, not\ntime-since-postmaster-start. Any ideas?\n\nAnyway, I'm too tired to do more tonight, but now that I can reproduce it\nI will stick some debugging logic in tomorrow. I no longer think we\nshould clutter the git repo with any more short-term hacks.\n\n\t\t\tregards, tom lane\n\nPS a bit later: I've not yet reproduced it a second time, so the\nfailure rate is unfortunately a lot less than one-in-ten. Still,\nthis eliminates the idea that there's some secret sauce in Noah's\nbuild details.\n\n\n",
"msg_date": "Thu, 14 Apr 2022 23:56:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-14 22:40:51 -0400, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > Suppose that the bug was actually in 06f5295af6, \"Add single-item\n> > cache when looking at topmost XID of a subtrans XID\". Doesn't that fit\n> > your timeline just as well?\n> \n> I'd dismissed that on the grounds that there are no subtrans XIDs\n> involved in tenk1's contents. However, if that patch was faulty\n> enough, maybe it affected other cases besides the advertised one?\n> I've not read it.\n\nI was planning to complain about that commit, fwiw. Without so much as\nan assertion verifying the cache is correct it seems quite dangerous to\nme.\n\nAnd looking at it, it has obvious wraparound issues... But that can't\nmatter here, obviously.\n\nWe also reach SubTransGetTopmostTransaction() from XactLockTableWait()\nbut I don't quite see how we reach that here either...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 14 Apr 2022 21:50:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 11:56:15PM -0400, Tom Lane wrote:\n> Anyway, I'm too tired to do more tonight, but now that I can reproduce it\n> I will stick some debugging logic in tomorrow. I no longer think we\n> should clutter the git repo with any more short-term hacks.\n\nSounds good. I've turned off the wrasse buildfarm client for the moment.\nIt's less useful than your local setup, and they'd compete for resources.\n\n\n",
"msg_date": "Thu, 14 Apr 2022 22:01:16 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-14 23:56:15 -0400, Tom Lane wrote:\n> I wrote:\n> > One thing I'm eyeing now is that it looks like Noah is re-initdb'ing\n> > each time, whereas I'd just stopped and started the postmaster of\n> > an existing installation. That does not seem like it could matter\n> > but ...\n> \n> Well, damn. I changed my script that way and it failed on the tenth\n> iteration (versus a couple hundred successful iterations the other\n> way).\n\nJust to make sure: This is also on wrasse?\n\nWhat DSM backend do we end up with on solaris? With shared memory stats\nwe're using DSM a lot earlier and more commonly than before.\n\nAnother thing that might be worth trying is to enable checksums. I've\ncaught weird bugs with that in the past. And it's possible that bgwriter\nwrites out a page that we then read back in quickly after, or something\nlike that.\n\n\n> So somehow this is related to time-since-initdb, not\n> time-since-postmaster-start. Any ideas?\n\nPerhaps it makes a difference that we start with a \"young\" database xid\nage wise? We've had bugs around subtracting xids and ending up on some\nspecial one in the past.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 14 Apr 2022 22:05:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-14 19:45:15 -0700, Noah Misch wrote:\n> I suspect the failure is somehow impossible in \"check\". Yesterday, I cranked\n> up the number of locales, so there are now a lot more installcheck. Before\n> that, each farm run had one \"check\" and two \"installcheck\". Those days saw\n> ten installcheck failures, zero check failures.\n\nI notice that the buildfarm appears to run initdb with syncing enabled\n(\"syncing data to disk ... ok\" in the initdb steps). Whereas pg_regress\nuses --no-sync.\n\nI wonder if that's what makes the difference? Now that you reproduced\nit, does it still reproduce with --no-sync added?\n\nAlso worth noting that pg_regress doesn't go through pg_ctl...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 14 Apr 2022 22:12:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-04-14 23:56:15 -0400, Tom Lane wrote:\n>> Well, damn. I changed my script that way and it failed on the tenth\n>> iteration (versus a couple hundred successful iterations the other\n>> way).\n\n> Just to make sure: This is also on wrasse?\n\nRight, gcc211 with a moderately close approximation to wrasse's\nbuild details. Why that shows the problem when we've not seen\nit elsewhere remains to be seen.\n\n> What DSM backend do we end up with on solaris? With shared memory stats\n> we're using DSM a lot earlier and more commonly than before.\n\nThat ... is an interesting point. It seems to be just \"posix\" though.\n\n>> So somehow this is related to time-since-initdb, not\n>> time-since-postmaster-start. Any ideas?\n\n> Perhaps it makes a difference that we start with a \"young\" database xid\n> age wise? We've had bugs around subtracting xids and ending up on some\n> special one in the past.\n\nIt does seem like it's got to be related to small XID and/or small\nLSN values. No clue right now, but more news tomorrow, I hope.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Apr 2022 01:18:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 10:12:05PM -0700, Andres Freund wrote:\n> On 2022-04-14 19:45:15 -0700, Noah Misch wrote:\n> > I suspect the failure is somehow impossible in \"check\". Yesterday, I cranked\n> > up the number of locales, so there are now a lot more installcheck. Before\n> > that, each farm run had one \"check\" and two \"installcheck\". Those days saw\n> > ten installcheck failures, zero check failures.\n> \n> I notice that the buildfarm appears to run initdb with syncing enabled\n> (\"syncing data to disk ... ok\" in the initdb steps). Whereas pg_regress\n> uses --no-sync.\n\nYep.\n\n> I wonder if that's what makes the difference? Now that you reproduced\n> it, does it still reproduce with --no-sync added?\n\nIt does; the last version of my script used \"initdb -N ...\".\n\n> Also worth noting that pg_regress doesn't go through pg_ctl...\n\nHmmm.\n\n\n",
"msg_date": "Thu, 14 Apr 2022 22:21:25 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "The morning's first result is that during a failing run,\nthe vacuum in test_setup sees\n\n2022-04-15 16:01:43.064 CEST [4436:75] pg_regress/test_setup LOG: statement: VACUUM ANALYZE tenk1; \n2022-04-15 16:01:43.064 CEST [4436:76] pg_regress/test_setup LOG: vacuuming \"regression.public.tenk1\" \n2022-04-15 16:01:43.064 CEST [4436:77] pg_regress/test_setup STATEMENT: VACUUM ANALYZE tenk1; \n2022-04-15 16:01:43.071 CEST [4436:78] pg_regress/test_setup LOG: finished vacuuming \"regression.public.tenk1\": index scans: 0\n pages: 0 removed, 345 remain, 345 scanned (100.00% of total) \n tuples: 0 removed, 10000 remain, 0 are dead but not yet removable \n removable cutoff: 724, older by 26 xids when operation ended\n index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed\n avg read rate: 2.189 MB/s, avg write rate: 2.189 MB/s\n buffer usage: 695 hits, 2 misses, 2 dirtied\n WAL usage: 1 records, 0 full page images, 188 bytes\n system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n2022-04-15 16:01:43.071 CEST [4436:79] pg_regress/test_setup STATEMENT: VACUUM ANALYZE tenk1;\n\nOldestXmin = 724 is too old to consider tenk1's contents as all-visible:\n\nregression=# select distinct xmin from tenk1;\n xmin \n------\n 749\n(1 row)\n\nIn fact, right after initdb pg_controldata shows\nLatest checkpoint's NextXID: 0:724\nLatest checkpoint's oldestXID: 716\n\nSo there's no longer any doubt that something is holding back OldestXmin.\nI will go put some instrumentation into the code that's computing that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Apr 2022 10:15:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-15 10:15:32 -0400, Tom Lane wrote:\n> The morning's first result is that during a failing run,\n> the vacuum in test_setup sees\n>\n> 2022-04-15 16:01:43.064 CEST [4436:75] pg_regress/test_setup LOG: statement: VACUUM ANALYZE tenk1;\n> 2022-04-15 16:01:43.064 CEST [4436:76] pg_regress/test_setup LOG: vacuuming \"regression.public.tenk1\"\n> 2022-04-15 16:01:43.064 CEST [4436:77] pg_regress/test_setup STATEMENT: VACUUM ANALYZE tenk1;\n> 2022-04-15 16:01:43.071 CEST [4436:78] pg_regress/test_setup LOG: finished vacuuming \"regression.public.tenk1\": index scans: 0\n> pages: 0 removed, 345 remain, 345 scanned (100.00% of total)\n> tuples: 0 removed, 10000 remain, 0 are dead but not yet removable\n> removable cutoff: 724, older by 26 xids when operation ended\n> index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed\n> avg read rate: 2.189 MB/s, avg write rate: 2.189 MB/s\n> buffer usage: 695 hits, 2 misses, 2 dirtied\n> WAL usage: 1 records, 0 full page images, 188 bytes\n> system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n> 2022-04-15 16:01:43.071 CEST [4436:79] pg_regress/test_setup STATEMENT: VACUUM ANALYZE tenk1;\n\nThe horizon advancing by 26 xids during tenk1's vacuum seems like quite\na bit, given there's no normal concurrent activity during test_setup.\n\n\n> In fact, right after initdb pg_controldata shows\n> Latest checkpoint's NextXID: 0:724\n> Latest checkpoint's oldestXID: 716\n\nSo that's the xmin that e.g. the autovac launcher ends up with during\nstart...\n\nIf I make get_database_list() sleep for 5s within the scan, I can\nreproduce on x86-64.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Apr 2022 08:12:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "I wrote:\n> the vacuum in test_setup sees\n> ...\n> removable cutoff: 724, older by 26 xids when operation ended\n> ...\n\nBTW, before I forget: the wording of this log message is just awful.\nOn first sight, I thought that it meant that we'd computed OldestXmin\na second time and discovered that it advanced by 26 xids while the VACUUM\nwas running. Looking at the code, I see that's not so:\n\n diff = (int32) (ReadNextTransactionId() - OldestXmin);\n appendStringInfo(&buf,\n _(\"removable cutoff: %u, older by %d xids when operation ended\\n\"),\n OldestXmin, diff);\n\nbut good luck understanding what it actually means from the message\ntext alone. I think more appropriate wording would be something like\n\n\"removable cutoff: %u, which was %d xids old when operation ended\\n\"\n\nAlso, is it really our practice to spell XID in lower-case in\nuser-facing messages?\n\nThoughts, better ideas?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Apr 2022 11:14:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi, \n\nOn April 15, 2022 11:12:10 AM EDT, Andres Freund <andres@anarazel.de> wrote:\n>Hi,\n>\n>On 2022-04-15 10:15:32 -0400, Tom Lane wrote:\n>> The morning's first result is that during a failing run,\n>> the vacuum in test_setup sees\n>>\n>> 2022-04-15 16:01:43.064 CEST [4436:75] pg_regress/test_setup LOG: statement: VACUUM ANALYZE tenk1;\n>> 2022-04-15 16:01:43.064 CEST [4436:76] pg_regress/test_setup LOG: vacuuming \"regression.public.tenk1\"\n>> 2022-04-15 16:01:43.064 CEST [4436:77] pg_regress/test_setup STATEMENT: VACUUM ANALYZE tenk1;\n>> 2022-04-15 16:01:43.071 CEST [4436:78] pg_regress/test_setup LOG: finished vacuuming \"regression.public.tenk1\": index scans: 0\n>> pages: 0 removed, 345 remain, 345 scanned (100.00% of total)\n>> tuples: 0 removed, 10000 remain, 0 are dead but not yet removable\n>> removable cutoff: 724, older by 26 xids when operation ended\n>> index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed\n>> avg read rate: 2.189 MB/s, avg write rate: 2.189 MB/s\n>> buffer usage: 695 hits, 2 misses, 2 dirtied\n>> WAL usage: 1 records, 0 full page images, 188 bytes\n>> system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n>> 2022-04-15 16:01:43.071 CEST [4436:79] pg_regress/test_setup STATEMENT: VACUUM ANALYZE tenk1;\n>\n>The horizon advancing by 26 xids during tenk1's vacuum seems like quite\n>a bit, given there's no normal concurrent activity during test_setup.\n>\n>\n>> In fact, right after initdb pg_controldata shows\n>> Latest checkpoint's NextXID: 0:724\n>> Latest checkpoint's oldestXID: 716\n>\n>So that's the xmin that e.g. the autovac launcher ends up with during\n>start...\n>\n>If I make get_database_list() sleep for 5s within the scan, I can\n>reproduce on x86-64.\n\nOff for a bit, but I realized that we likely don't exclude the launcher because it's not database associated...\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Fri, 15 Apr 2022 11:19:54 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "I wrote:\n> So there's no longer any doubt that something is holding back OldestXmin.\n> I will go put some instrumentation into the code that's computing that.\n\nThe something is the logical replication launcher. In the failing runs,\nit is advertising xmin = 724 (the post-initdb NextXID) and continues to\ndo so well past the point where tenk1 gets vacuumed.\n\nDiscuss.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Apr 2022 11:23:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Off for a bit, but I realized that we likely don't exclude the launcher because it's not database associated...\n\nYeah. I think this bit in ComputeXidHorizons needs rethinking:\n\n /*\n * Normally queries in other databases are ignored for anything but\n * the shared horizon. ...\n */\n if (in_recovery ||\n MyDatabaseId == InvalidOid || proc->databaseId == MyDatabaseId ||\n proc->databaseId == 0) /* always include WalSender */\n {\n\nThe \"proc->databaseId == 0\" business apparently means to include only\nwalsender processes, and it's broken because that condition doesn't\ninclude only walsender processes.\n\nAt this point we have the following conclusions:\n\n1. A slow transaction in the launcher's initial get_database_list()\ncall fully explains these failures. (I had been thinking that the\nlauncher's xact would have to persist as far as the create_index\nscript, but that's not so: it only has to last until test_setup\nbegins vacuuming tenk1. The CREATE INDEX steps are not doing any\nvisibility map changes of their own, but what they are doing is\nupdating relallvisible from the results of visibilitymap_count().\nThat's why they undid the effects of manually poking relallvisible,\nwithout actually inserting any data better than what the initial\nVACUUM computed.)\n\n2. We can probably explain why only wrasse sees this as some quirk\nof the Solaris scheduler. I'm satisfied to blame it-happens-in-\ninstallcheck-but-not-check on that too.\n\n3. It remains unclear why we suddenly started seeing this last week.\nI suppose it has to be a side-effect of the pgstats changes, but\nthe mechanism is obscure. Probably not worth the effort to pin\ndown exactly why.\n\nAs for fixing it, what I think would be the preferable answer is to\nfix the above-quoted logic so that it indeed includes only walsenders\nand not random other background workers. (Why does it need to include\nwalsenders, anyway? 
The commentary sucks.) Alternatively, or perhaps\nalso, we could do what was discussed previously and make a hack to\nallow delaying vacuum until the system is quiescent.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Apr 2022 12:16:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-04-15 10:15:32 -0400, Tom Lane wrote:\n>> removable cutoff: 724, older by 26 xids when operation ended\n\n> The horizon advancing by 26 xids during tenk1's vacuum seems like quite\n> a bit, given there's no normal concurrent activity during test_setup.\n\nHah, so you were taken in by this wording too. See my complaint\nabout it downthread.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Apr 2022 12:17:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi, \n\nOn April 15, 2022 11:23:40 AM EDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>I wrote:\n>> So there's no longer any doubt that something is holding back OldestXmin.\n>> I will go put some instrumentation into the code that's computing that.\n>\n>The something is the logical replication launcher. In the failing runs,\n>it is advertising xmin = 724 (the post-initdb NextXID) and continues to\n>do so well past the point where tenk1 gets vacuumed.\n>\n>Discuss.\n\nThat explains it. Before shmstat autovac needed to wait for the stats collector to write out stats. Now it's near instantaneous. So the issue probably existed before, just unlikely to ever be reached.\n\nWe can't just ignore database less xmins for non-shared rels, because walsender propagates hot_standby_feedback that way. But we can probably add a flag somewhere indicating whether a database less PGPROC has to be accounted in the horizon for non-shared rels.\n\nAndres\n\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Fri, 15 Apr 2022 12:22:41 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Fri, Apr 15, 2022 at 8:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> BTW, before I forget: the wording of this log message is just awful.\n> On first sight, I thought that it meant that we'd computed OldestXmin\n> a second time and discovered that it advanced by 26 xids while the VACUUM\n> was running.\n\n> \"removable cutoff: %u, which was %d xids old when operation ended\\n\"\n\nHow the output appears when placed right before the output describing\nhow VACUUM advanced relfrozenxid is an important consideration. I want\nthe format and wording that we use to imply a relationship between\nthese two things. Right now, that other line looks like this:\n\n\"new relfrozenxid: %u, which is %d xids ahead of previous value\\n\"\n\nDo you think that this juxtaposition works well?\n\n> Also, is it really our practice to spell XID in lower-case in\n> user-facing messages?\n\nThere are examples of both. This could easily be changed to \"XIDs\".\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 15 Apr 2022 09:29:20 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On April 15, 2022 11:23:40 AM EDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The something is the logical replication launcher. In the failing runs,\n>> it is advertising xmin = 724 (the post-initdb NextXID) and continues to\n>> do so well past the point where tenk1 gets vacuumed.\n\n> That explains it. Before shmstat autovac needed to wait for the stats collector to write out stats. Now it's near instantaneous. So the issue probably existed before, just unlikely to ever be reached.\n\nUm, this is the logical replication launcher, not the autovac launcher.\nYour observation that a sleep in get_database_list() reproduces it\nconfirms that, and I don't entirely see why the timing of the LR launcher\nwould have changed.\n\n(On thinking about it, I suppose the AV launcher might trigger this\ntoo, but that is not the PID I saw in testing.)\n\n> We can't just ignore database less xmins for non-shared rels, because walsender propagates hot_standby_feedback that way. But we can probably add a flag somewhere indicating whether a database less PGPROC has to be accounted in the horizon for non-shared rels.\n\nYeah, I was also thinking about a flag in PGPROC being a more reliable\nway to do this. Is there anything besides walsenders that should set\nthat flag?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Apr 2022 12:36:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Apr 15, 2022 at 8:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> BTW, before I forget: the wording of this log message is just awful.\n>> [ so how about ]\n>> \"removable cutoff: %u, which was %d xids old when operation ended\\n\"\n\n> How the output appears when placed right before the output describing\n> how VACUUM advanced relfrozenxid is an important consideration. I want\n> the format and wording that we use to imply a relationship between\n> these two things. Right now, that other line looks like this:\n\n> \"new relfrozenxid: %u, which is %d xids ahead of previous value\\n\"\n\n> Do you think that this juxtaposition works well?\n\nSeems all right to me; do you have a better suggestion?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Apr 2022 12:40:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Fri, Apr 15, 2022 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Do you think that this juxtaposition works well?\n>\n> Seems all right to me; do you have a better suggestion?\n\nNo. At first I thought that mixing \"which is\" and \"which was\" wasn't\nquite right. I changed my mind, though. Your new wording is fine.\n\nI'll update the log output some time today.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 15 Apr 2022 09:47:44 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-15 12:36:52 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On April 15, 2022 11:23:40 AM EDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> The something is the logical replication launcher. In the failing runs,\n> >> it is advertising xmin = 724 (the post-initdb NextXID) and continues to\n> >> do so well past the point where tenk1 gets vacuumed.\n>\n> > That explains it. Before shmstat autovac needed to wait for the stats collector to write out stats. Now it's near instantaneous. So the issue probably existed before, just unlikely to ever be reached.\n>\n> Um, this is the logical replication launcher, not the autovac\n> launcher.\n\nShort term confusion...\n\n\n> Your observation that a sleep in get_database_list() reproduces it\n> confirms that\n\nI don't understand what you mean here? get_database_list() is autovac\nlauncher code? So being able to reproduce the problem by putting in a\nsleep there doesn't seem like a confirm anything about the logical rep\nlauncher?\n\n\n> , and I don't entirely see why the timing of the LR launcher\n> would have changed.\n\nCould still be related to the autovac launcher not requesting / pgstats\nnot writing / launcher not reading the stats file(s). That obviously is\ngoing to have some scheduler impact.\n\n\n> > We can't just ignore database less xmins for non-shared rels, because walsender propagates hot_standby_feedback that way. But we can probably add a flag somewhere indicating whether a database less PGPROC has to be accounted in the horizon for non-shared rels.\n>\n> Yeah, I was also thinking about a flag in PGPROC being a more reliable\n> way to do this. Is there anything besides walsenders that should set\n> that flag?\n\nNot that I can think of. It's only because of hs_feedback that we need\nto. I guess it's possible that somebody build some extension that needs\nsomething similar, but then they'd need to set that flag...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Apr 2022 09:57:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\n(Sent again, somehow my editor started to sometimes screw up mail\nheaders, and ate the From:, sorry for the duplicate)\n\nOn 2022-04-15 12:36:52 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On April 15, 2022 11:23:40 AM EDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> The something is the logical replication launcher. In the failing runs,\n> >> it is advertising xmin = 724 (the post-initdb NextXID) and continues to\n> >> do so well past the point where tenk1 gets vacuumed.\n>\n> > That explains it. Before shmstat autovac needed to wait for the stats collector to write out stats. Now it's near instantaneous. So the issue probably existed before, just unlikely to ever be reached.\n>\n> Um, this is the logical replication launcher, not the autovac\n> launcher.\n\nShort term confusion...\n\n\n> Your observation that a sleep in get_database_list() reproduces it\n> confirms that\n\nI don't understand what you mean here? get_database_list() is autovac\nlauncher code? So being able to reproduce the problem by putting in a\nsleep there doesn't seem like a confirm anything about the logical rep\nlauncher?\n\n\n> , and I don't entirely see why the timing of the LR launcher\n> would have changed.\n\nCould still be related to the autovac launcher not requesting / pgstats\nnot writing / launcher not reading the stats file(s). That obviously is\ngoing to have some scheduler impact.\n\n\n> > We can't just ignore database less xmins for non-shared rels, because walsender propagates hot_standby_feedback that way. But we can probably add a flag somewhere indicating whether a database less PGPROC has to be accounted in the horizon for non-shared rels.\n>\n> Yeah, I was also thinking about a flag in PGPROC being a more reliable\n> way to do this. Is there anything besides walsenders that should set\n> that flag?\n\nNot that I can think of. It's only because of hs_feedback that we need\nto. 
I guess it's possible that somebody build some extension that needs\nsomething similar, but then they'd need to set that flag...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Apr 2022 09:58:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "I wrote:\n> Um, this is the logical replication launcher, not the autovac launcher.\n> Your observation that a sleep in get_database_list() reproduces it\n> confirms that, and I don't entirely see why the timing of the LR launcher\n> would have changed.\n\nOh, to clarify: I misread \"get_database_list()\" as\n\"get_subscription_list()\", which is the part of the LR launcher startup\nthat causes the problem for me. So what we actually have confirmed is\nthat BOTH of those launchers are problematic for this. And AFAICS\nneither of them needs to be causing horizon adjustments for non-shared\ntables.\n\n(It's possible that the AV launcher is responsible in some of wrasse's\nreports, but it's been the LR launcher in five out of five\nsufficiently-instrumented failures for me.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Apr 2022 13:05:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-15 09:29:20 -0700, Peter Geoghegan wrote:\n> On Fri, Apr 15, 2022 at 8:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > BTW, before I forget: the wording of this log message is just awful.\n> > On first sight, I thought that it meant that we'd computed OldestXmin\n> > a second time and discovered that it advanced by 26 xids while the VACUUM\n> > was running.\n> \n> > \"removable cutoff: %u, which was %d xids old when operation ended\\n\"\n> \n> How the output appears when placed right before the output describing\n> how VACUUM advanced relfrozenxid is an important consideration. I want\n> the format and wording that we use to imply a relationship between\n> these two things. Right now, that other line looks like this:\n>\n> \"new relfrozenxid: %u, which is %d xids ahead of previous value\\n\"\n> \n> Do you think that this juxtaposition works well?\n\nI don't think they're actually that comparable. One shows how much\nrelfrozenxid advanced, to a large degree influenced by the time between\naggressive (or \"unintentionally aggressive\") vacuums. The other shows\nthe age of OldestXmin at the end of the vacuum. Which is influenced by\nwhat's currently running.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Apr 2022 10:05:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Fri, Apr 15, 2022 at 10:05 AM Andres Freund <andres@anarazel.de> wrote:\n> I don't think they're actually that comparable. One shows how much\n> relfrozenxid advanced, to a large degree influenced by the time between\n> aggressive (or \"unintentionally aggressive\") vacuums.\n\nIt matters more in the extreme cases. The most recent possible value\nfor our new relfrozenxid is OldestXmin/removable cutoff. So when\nsomething holds back OldestXmin, it also holds back new relfrozenxid\nvalues.\n\n> The other shows\n> the age of OldestXmin at the end of the vacuum. Which is influenced by\n> what's currently running.\n\nAs well as the age of OldestXmin at the start of VACUUM.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 15 Apr 2022 10:11:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Apr 15, 2022 at 10:05 AM Andres Freund <andres@anarazel.de> wrote:\n>> The other shows\n>> the age of OldestXmin at the end of the vacuum. Which is influenced by\n>> what's currently running.\n\n> As well as the age of OldestXmin at the start of VACUUM.\n\nIs it worth capturing and logging both of those numbers? Why is\nthe age at the end more interesting than the age at the start?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Apr 2022 13:15:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Fri, Apr 15, 2022 at 10:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > As well as the age of OldestXmin at the start of VACUUM.\n>\n> Is it worth capturing and logging both of those numbers? Why is\n> the age at the end more interesting than the age at the start?\n\nAs Andres said, that's often more interesting because most of the time\nOldestXmin is not held back by much (not enough to matter).\n\nUsers will often look at the output of successive related VACUUM\noperations. Often the way things change over time is much more\ninteresting than the details at any particular point in time.\nEspecially in the kinds of extreme cases I'm thinking about.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 15 Apr 2022 10:23:56 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-15 10:23:56 -0700, Peter Geoghegan wrote:\n> On Fri, Apr 15, 2022 at 10:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > As well as the age of OldestXmin at the start of VACUUM.\n> >\n> > Is it worth capturing and logging both of those numbers? Why is\n> > the age at the end more interesting than the age at the start?\n> \n> As Andres said, that's often more interesting because most of the time\n> OldestXmin is not held back by much (not enough to matter).\n\nI think it'd be interesting - particularly for large relations or when\nlooking to adjust autovac cost limits. It's not rare for autovac to take\nlong enough that another autovac is necessary immediately again. Also\nhelps to interpret the \"dead but not yet removable\" counts.\n\nSomething like:\nremovable cutoff: %u, age at start: %u, age at end: %u...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Apr 2022 10:43:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Fri, Apr 15, 2022 at 10:43 AM Andres Freund <andres@anarazel.de> wrote:\n> I think it'd be interesting - particularly for large relations or when\n> looking to adjust autovac cost limits.\n\n> Something like:\n> removable cutoff: %u, age at start: %u, age at end: %u...\n\nPart of the problem here is that we determine VACUUM's FreezeLimit by\ncalculating `OldestXmin - vacuum_freeze_min_age` (more or less [1]).\nWhy should we do less freezing due to the presence of an old snapshot?\nSure, that has to happen with those XIDs that are fundamentally\nineligible for freezing due to the presence of the old snapshot -- but\nwhat about those XIDs that *are* eligible, and still don't get frozen\nat first?\n\nWe should determine FreezeLimit by calculating `NextXID -\nvacuum_freeze_min_age ` instead (and then clamp, to make sure that\nit's always <= OldestXmin). That approach would make our final\nFreezeLimit \"strictly age-based\".\n\n[1] We do something a bit like this when OldestXmin is already very\nold -- then FreezeLimit is the same value as OldestXmin (see WARNING\nfrom vacuum_set_xid_limits() function). That's better than nothing,\nbut doesn't change the fact that our general approach to calculating\nFreezeLimit makes little sense.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 15 Apr 2022 11:12:34 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-04-15 12:36:52 -0400, Tom Lane wrote:\n>> Yeah, I was also thinking about a flag in PGPROC being a more reliable\n>> way to do this. Is there anything besides walsenders that should set\n>> that flag?\n\n> Not that I can think of. It's only because of hs_feedback that we need\n> to. I guess it's possible that somebody build some extension that needs\n> something similar, but then they'd need to set that flag...\n\nHere's a WIP patch for that. The only exciting thing in it is that\nbecause of some undocumented cowboy programming in walsender.c, the\n\tAssert((proc->statusFlags & (~PROC_COPYABLE_FLAGS)) == 0);\nin ProcArrayInstallRestoredXmin fires unless we skip that.\n\nI could use some help filling in the XXX comments, because it's far\nfrom clear to me *why* walsenders need this to happen.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 15 Apr 2022 14:14:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi, \n\nOn April 15, 2022 2:14:47 PM EDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> On 2022-04-15 12:36:52 -0400, Tom Lane wrote:\n>>> Yeah, I was also thinking about a flag in PGPROC being a more reliable\n>>> way to do this. Is there anything besides walsenders that should set\n>>> that flag?\n>\n>> Not that I can think of. It's only because of hs_feedback that we need\n>> to. I guess it's possible that somebody build some extension that needs\n>> something similar, but then they'd need to set that flag...\n>\n>Here's a WIP patch for that. The only exciting thing in it is that\n>because of some undocumented cowboy programming in walsender.c, the\n>\tAssert((proc->statusFlags & (~PROC_COPYABLE_FLAGS)) == 0);\n>in ProcArrayInstallRestoredXmin fires unless we skip that.\n>\n>I could use some help filling in the XXX comments, because it's far\n>from clear to me *why* walsenders need this to happen.\n\nI'm out for the rest of the day due to family events (visiting my girlfriend's parents till Wednesday), I can take a stab at formulating something after. \n\nIf you want to commit before: The reason is that walsenders use their xmin to represent the xmin of standbys when using hot_standby_feedback. Since we're only transmitting global horizons up from standbys, it has to influence globally (and it would be hard to represent per db horizons anyway).\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Fri, 15 Apr 2022 15:01:38 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On April 15, 2022 2:14:47 PM EDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I could use some help filling in the XXX comments, because it's far\n>> from clear to me *why* walsenders need this to happen.\n\n> If you want to commit before: The reason is that walsenders use their xmin to represent the xmin of standbys when using hot_standby_feedback. Since we're only transmitting global horizons up from standbys, it has to influence globally (and it would be hard to represent per db horizons anyway).\n\nGot it. I rewrote the comments and pushed. Noah, it should be safe\nto turn wrasse back on.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Apr 2022 17:51:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-15 11:12:34 -0700, Peter Geoghegan wrote:\n> On Fri, Apr 15, 2022 at 10:43 AM Andres Freund <andres@anarazel.de> wrote:\n> > I think it'd be interesting - particularly for large relations or when\n> > looking to adjust autovac cost limits.\n> \n> > Something like:\n> > removable cutoff: %u, age at start: %u, age at end: %u...\n> \n> Part of the problem here is that we determine VACUUM's FreezeLimit by\n> calculating `OldestXmin - vacuum_freeze_min_age` (more or less [1]).\n\nWhat the message outputs is OldestXmin and not FreezeLimit though. And\nFreezeLimit doesn't affect \"dead but not yet removable\".\n\nIOW, the following might be right, but that seems independent of\nimproving the output of\n diff = (int32) (ReadNextTransactionId() - OldestXmin);\n appendStringInfo(&buf,\n _(\"removable cutoff: %u, which was %d XIDs old when operation ended\\n\"),\n OldestXmin, diff);\n\n\n> Why should we do less freezing due to the presence of an old snapshot?\n> Sure, that has to happen with those XIDs that are fundamentally\n> ineligible for freezing due to the presence of the old snapshot -- but\n> what about those XIDs that *are* eligible, and still don't get frozen\n> at first?\n> \n> We should determine FreezeLimit by calculating `NextXID -\n> vacuum_freeze_min_age ` instead (and then clamp, to make sure that\n> it's always <= OldestXmin). That approach would make our final\n> FreezeLimit \"strictly age-based\".\n> \n> [1] We do something a bit like this when OldestXmin is already very\n> old -- then FreezeLimit is the same value as OldestXmin (see WARNING\n> from vacuum_set_xid_limits() function). That's better than nothing,\n> but doesn't change the fact that our general approach to calculating\n> FreezeLimit makes little sense.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 17 Apr 2022 07:36:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Sun, Apr 17, 2022 at 7:36 AM Andres Freund <andres@anarazel.de> wrote:\n> > Part of the problem here is that we determine VACUUM's FreezeLimit by\n> > calculating `OldestXmin - vacuum_freeze_min_age` (more or less [1]).\n>\n> What the message outputs is OldestXmin and not FreezeLimit though.\n\nMy higher level point is that there is a general tendency to assume\nthat OldestXmin is the same thing as NextXID, which it isn't. It's an\neasy enough mistake to make, though, in part because they're usually\nquite close together. The \"Routine Vacuuming\" docs seem to suggest\nthat they're the same thing, or at least that's what I take away from\nthe following sentence:\n\n\"This implies that if a table is not otherwise vacuumed, autovacuum\nwill be invoked on it approximately once every\nautovacuum_freeze_max_age minus vacuum_freeze_min_age transactions\".\n\n> And FreezeLimit doesn't affect \"dead but not yet removable\".\n\nBut OldestXmin affects FreezeLimit.\n\nAnyway, I'm not opposed to showing the age at the start as well. But\nfrom the point of view of issues like this tenk1 issue, it would be\nmore useful to just report on new_rel_allvisible. It would also be\nmore useful to users.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 17 Apr 2022 08:29:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On 2022-Apr-15, Tom Lane wrote:\n\n> Here's a WIP patch for that. The only exciting thing in it is that\n> because of some undocumented cowboy programming in walsender.c, the\n> \tAssert((proc->statusFlags & (~PROC_COPYABLE_FLAGS)) == 0);\n> in ProcArrayInstallRestoredXmin fires unless we skip that.\n\nHmm, maybe a better use of that define is to use to select which flags\nto copy, rather than to ensure we they are the only ones set. What\nabout this?\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"¿Qué importan los años? Lo que realmente importa es comprobar que\na fin de cuentas la mejor edad de la vida es estar vivo\" (Mafalda)",
"msg_date": "Tue, 19 Apr 2022 18:37:13 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Apr-15, Tom Lane wrote:\n>> Here's a WIP patch for that. The only exciting thing in it is that\n>> because of some undocumented cowboy programming in walsender.c, the\n>> Assert((proc->statusFlags & (~PROC_COPYABLE_FLAGS)) == 0);\n>> in ProcArrayInstallRestoredXmin fires unless we skip that.\n\n> Hmm, maybe a better use of that define is to use to select which flags\n> to copy, rather than to ensure we they are the only ones set. What\n> about this?\n\nYeah, I thought about that too, but figured that the author probably\nhad a reason for writing the assertion the way it was. If we want\nto redefine PROC_COPYABLE_FLAGS as \"flags associated with xmin\",\nthat's fine by me. But I'd suggest that both the name of the macro\nand the comment for it in proc.h should be revised to match that\ndefinition.\n\nAnother point is that as you've coded it, the code doesn't so much\ncopy those flags as union them with whatever the recipient had,\nwhich seems wrong. I could go with either\n\n Assert(!(MyProc->statusFlags & PROC_XMIN_FLAGS));\n MyProc->statusFlags |= (proc->statusFlags & PROC_XMIN_FLAGS);\n\nor\n\n MyProc->statusFlags = (MyProc->statusFlags & ~PROC_XMIN_FLAGS) |\n (proc->statusFlags & PROC_XMIN_FLAGS);\n\nPerhaps the latter is more future-proof.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Apr 2022 14:29:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 3:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2022-Apr-15, Tom Lane wrote:\n> >> Here's a WIP patch for that. The only exciting thing in it is that\n> >> because of some undocumented cowboy programming in walsender.c, the\n> >> Assert((proc->statusFlags & (~PROC_COPYABLE_FLAGS)) == 0);\n> >> in ProcArrayInstallRestoredXmin fires unless we skip that.\n>\n> > Hmm, maybe a better use of that define is to use to select which flags\n> > to copy, rather than to ensure we they are the only ones set. What\n> > about this?\n>\n> Yeah, I thought about that too, but figured that the author probably\n> had a reason for writing the assertion the way it was.\n\nThe motivation behind the assertion was that when we copy whole\nstatusFlags from the leader process to the worker process we want to\nmake sure that the flags we're copying is a known subset of the flags\nthat are valid to copy from the leader.\n\n> If we want\n> to redefine PROC_COPYABLE_FLAGS as \"flags associated with xmin\",\n> that's fine by me. But I'd suggest that both the name of the macro\n> and the comment for it in proc.h should be revised to match that\n> definition.\n>\n> Another point is that as you've coded it, the code doesn't so much\n> copy those flags as union them with whatever the recipient had,\n> which seems wrong. I could go with either\n>\n> Assert(!(MyProc->statusFlags & PROC_XMIN_FLAGS));\n> MyProc->statusFlags |= (proc->statusFlags & PROC_XMIN_FLAGS);\n>\n> or\n>\n> MyProc->statusFlags = (MyProc->statusFlags & ~PROC_XMIN_FLAGS) |\n> (proc->statusFlags & PROC_XMIN_FLAGS);\n>\n> Perhaps the latter is more future-proof.\n\nCopying only xmin-related flags in this way also makes sense to me and\nthere is no problem at least for now. A note would be that when we\nintroduce a new flag that needs to be copied in the future, we need to\nmake sure to add it to PROC_XMIN_FLAGS so it is copied. 
Otherwise a\nsimilar issue we fixed by 0f0cfb494004befb0f6e could happen again.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 20 Apr 2022 13:55:48 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On 2022-Apr-20, Masahiko Sawada wrote:\n\n> > MyProc->statusFlags = (MyProc->statusFlags & ~PROC_XMIN_FLAGS) |\n> > (proc->statusFlags & PROC_XMIN_FLAGS);\n> >\n> > Perhaps the latter is more future-proof.\n\n> Copying only xmin-related flags in this way also makes sense to me and\n> there is no problem at least for now. A note would be that when we\n> introduce a new flag that needs to be copied in the future, we need to\n> make sure to add it to PROC_XMIN_FLAGS so it is copied. Otherwise a\n> similar issue we fixed by 0f0cfb494004befb0f6e could happen again.\n\nOK, done this way -- patch attached.\n\nReading the comment I wrote about it, I wonder if flags\nPROC_AFFECTS_ALL_HORIZONS and PROC_IN_LOGICAL_DECODING should also be\nincluded. I think the only reason we don't care at this point is that\nwalsenders (logical or otherwise) do not engage in snapshot copying.\nBut if we were to implement usage of parallel workers sharing a common\nsnapshot to do table sync in parallel, then it ISTM it would be\nimportant to copy at least the latter. Not sure there are any cases\nwere we might care about the former.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Every machine is a smoke machine if you operate it wrong enough.\"\nhttps://twitter.com/libseybieda/status/1541673325781196801",
"msg_date": "Sat, 14 May 2022 16:53:00 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "On Sun, May 15, 2022 at 12:29 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Apr-20, Masahiko Sawada wrote:\n>\n> > > MyProc->statusFlags = (MyProc->statusFlags & ~PROC_XMIN_FLAGS) |\n> > > (proc->statusFlags & PROC_XMIN_FLAGS);\n> > >\n> > > Perhaps the latter is more future-proof.\n>\n> > Copying only xmin-related flags in this way also makes sense to me and\n> > there is no problem at least for now. A note would be that when we\n> > introduce a new flag that needs to be copied in the future, we need to\n> > make sure to add it to PROC_XMIN_FLAGS so it is copied. Otherwise a\n> > similar issue we fixed by 0f0cfb494004befb0f6e could happen again.\n>\n> OK, done this way -- patch attached.\n\nThank you for updating the patch.\n\n>\n> Reading the comment I wrote about it, I wonder if flags\n> PROC_AFFECTS_ALL_HORIZONS and PROC_IN_LOGICAL_DECODING should also be\n> included. I think the only reason we don't care at this point is that\n> walsenders (logical or otherwise) do not engage in snapshot copying.\n> But if we were to implement usage of parallel workers sharing a common\n> snapshot to do table sync in parallel, then it ISTM it would be\n> important to copy at least the latter. Not sure there are any cases\n> were we might care about the former.\n\nYeah, it seems to be inconsistent between the comment (and the new\nname) and the flags actually included. I think we can include all\nxmin-related flags to PROC_XMIN_FLAGS as the comment says. That way,\nit would be useful also for other use cases, and I don't see any\ndownside for now.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 18 May 2022 16:49:04 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
},
{
"msg_contents": "I was looking at backpatching this to pg13. That made me realize that\ncommit dc7420c2c927 changed things in 14; and before that commit, the\nbitmask that is checked is PROCARRAY_FLAGS_VACUUM, which has a\ndefinition independently from whatever proc.h says. As far as I can\ntell, there's no problem with the patches I post here (the backpatched\nversion for pg13 and p14). But's it's something to be aware of; and if\nwe do want to add the additional bits to the bitmask, we should do that\nin a separate master-only commit.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Thu, 19 May 2022 12:21:58 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Intermittent buildfarm failures on wrasse"
}
] |
[
{
        "msg_contents": "Hey,\n\nI've been building in the git repo just fine but wanted to use vpath builds\nso I could keep both \"maked\" v14 and v15 binaries around, ready to be\ninstalled.\n\nThe attached log is result of (while in the versioned directory, a sibling\nof the git repo)\n`../postgresql/configure`\n`make`\n`tree`\n\nstdout and stderr output tee'd to a file.\n\nPer the instructions here:\n\nhttps://www.postgresql.org/docs/current/install-procedure.html\n\nThe last handful of lines for make are below:\n\nThanks!\n\nDavid J.\n\ncat: ../../src/timezone/objfiles.txt: No such file or directory\ncat: jit/objfiles.txt: No such file or directory\ngcc -Wall -Wmissing-prototypes [...see file...] -Wl,--as-needed\n-Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags -Wl,-E -lz -lpthread\n-lrt -ldl -lm -o postgres\ngcc: error: replication/backup_manifest.o: No such file or directory\ngcc: error: replication/basebackup.o: No such file or directory\ngcc: error: replication/basebackup_copy.o: No such file or directory\ngcc: error: replication/basebackup_gzip.o: No such file or directory\ngcc: error: replication/basebackup_lz4.o: No such file or directory\ngcc: error: replication/basebackup_zstd.o: No such file or directory\ngcc: error: replication/basebackup_progress.o: No such file or directory\ngcc: error: replication/basebackup_server.o: No such file or directory\ngcc: error: replication/basebackup_sink.o: No such file or directory\ngcc: error: replication/basebackup_target.o: No such file or directory\ngcc: error: replication/basebackup_throttle.o: No such file or directory\ngcc: error: replication/repl_gram.o: No such file or directory\ngcc: error: replication/slot.o: No such file or directory\ngcc: error: replication/slotfuncs.o: No such file or directory\ngcc: error: replication/syncrep.o: No such file or directory\ngcc: error: replication/syncrep_gram.o: No such file or directory\ngcc: error: replication/walreceiver.o: No such file or directory\ngcc: error: replication/walreceiverfuncs.o: No such file or directory\ngcc: error: replication/walsender.o: No such file or directory\ngcc: error: utils/fmgrtab.o: No such file or directory\nmake[2]: *** [Makefile:66: postgres] Error 1\nmake[2]: Leaving directory '/home/vagrant/pgsql15/src/backend'\nmake[1]: *** [Makefile:42: all-backend-recurse] Error 2\nmake[1]: Leaving directory '/home/vagrant/pgsql15/src'\nmake: *** [GNUmakefile:11: all-src-recurse] Error 2",
"msg_date": "Wed, 13 Apr 2022 17:17:31 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "VPath Build Errors"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> The attached log is result of (while in the versioned directory, a sibling\n> of the git repo)\n> `../postgresql/configure`\n> `make`\n> `tree`\n\nThe VPATH buildfarm members haven't been complaining, and I can't\nreproduce a failure here, so I'm inclined to suspect pilot error.\n\nOne point that's not very clearly documented is that your source\ndirectory needs to be clean; no build products in it, except\npossibly those that are included in tarballs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Apr 2022 20:44:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: VPath Build Errors"
},
{
        "msg_contents": "On Wed, Apr 13, 2022 at 5:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > The attached log is result of (while in the versioned directory, a\n> sibling\n> > of the git repo)\n> > `../postgresql/configure`\n> > `make`\n> > `tree`\n>\n> The VPATH buildfarm members haven't been complaining, and I can't\n> reproduce a failure here, so I'm inclined to suspect pilot error.\n>\n> One point that's not very clearly documented is that your source\n> directory needs to be clean; no build products in it, except\n> possibly those that are included in tarballs.\n>\n>\nI'll double-check but that would explain it. I know it was not clean when\nI tried this.\n\nDavid J.",
"msg_date": "Wed, 13 Apr 2022 17:48:49 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: VPath Build Errors"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 05:48:49PM -0700, David G. Johnston wrote:\n> On Wed, Apr 13, 2022 at 5:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > > The attached log is result of (while in the versioned directory, a\n> > sibling\n> > > of the git repo)\n> > > `../postgresql/configure`\n> > > `make`\n> > > `tree`\n> >\n> > The VPATH buildfarm members haven't been complaining, and I can't\n> > reproduce a failure here, so I'm inclined to suspect pilot error.\n> >\n> > One point that's not very clearly documented is that your source\n> > directory needs to be clean; no build products in it, except\n> > possibly those that are included in tarballs.\n> >\n> >\n> I'll double-check but that would explain it. I know it was not clean when\n> I tried this.\n\nNote that if you what you want is building multiple major versions at the same\ntime, the easiest way to do that is probably to use git worktrees [1] and\ncheckout multiple branches of the same repo at the same times, and then build\neach one as you'd normally do.\n\n[1] https://git-scm.com/docs/git-worktree\n\n\n",
"msg_date": "Thu, 14 Apr 2022 09:20:11 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VPath Build Errors"
}
] |
[
{
"msg_contents": "In the executor code, we mix use outerPlanState macro and referring to\nleffttree. Commit 40f42d2a tried to keep the code consistent by\nreplacing referring to lefftree with outerPlanState macro, but there are\nstill some outliers. This patch tries to clean them up.\n\nThanks\nRichard",
"msg_date": "Thu, 14 Apr 2022 14:49:23 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Use outerPlanState macro instead of referring to leffttree"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> In the executor code, we mix use outerPlanState macro and referring to\n> leffttree. Commit 40f42d2a tried to keep the code consistent by\n> replacing referring to lefftree with outerPlanState macro, but there are\n> still some outliers. This patch tries to clean them up.\n\nSeems generally reasonable, but what about righttree? I find a few\nof those too with \"grep\".\n\nBacking up a little bit, one thing not to like about the outerPlanState\nand innerPlanState macros is that they lose all semblance of type\nsafety:\n\n#define innerPlanState(node)\t\t(((PlanState *)(node))->righttree)\n#define outerPlanState(node)\t\t(((PlanState *)(node))->lefttree)\n\nYou can pass any pointer you want, and the compiler will not complain.\nI wonder if there's any trick (even a gcc-only one) that could improve\non that. In the absence of such a check, people might feel that\nincreasing our reliance on these macros isn't such a hot idea.\n\nNow, the typical coding pattern you've used:\n\n ExecReScanHash(HashState *node)\n {\n+\tPlanState *outerPlan = outerPlanState(node);\n\nis probably reasonably secure against wrong-pointer slip-ups. But\nI'm less convinced about that for in-line usages in the midst of\na function, particularly in the common case that the function has\na variable pointing to its Plan node as well as PlanState node.\nWould it make sense to try to use the local-variable style everywhere?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 01 Jul 2022 17:32:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use outerPlanState macro instead of referring to leffttree"
},
{
        "msg_contents": "Thanks for reviewing this patch.\n\nOn Sat, Jul 2, 2022 at 5:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > In the executor code, we mix use outerPlanState macro and referring to\n> > leffttree. Commit 40f42d2a tried to keep the code consistent by\n> > replacing referring to lefftree with outerPlanState macro, but there are\n> > still some outliers. This patch tries to clean them up.\n>\n> Seems generally reasonable, but what about righttree? I find a few\n> of those too with \"grep\".\n>\n\nYes. We may do the same trick for righttree.\n\n\n>\n> Backing up a little bit, one thing not to like about the outerPlanState\n> and innerPlanState macros is that they lose all semblance of type\n> safety:\n>\n> #define innerPlanState(node) (((PlanState *)(node))->righttree)\n> #define outerPlanState(node) (((PlanState *)(node))->lefttree)\n>\n> You can pass any pointer you want, and the compiler will not complain.\n> I wonder if there's any trick (even a gcc-only one) that could improve\n> on that. In the absence of such a check, people might feel that\n> increasing our reliance on these macros isn't such a hot idea.\n>\n\nYour concern makes sense. I think outerPlan and innerPlan macros share\nthe same issue. Not sure if there is a way to do the type check.\n\n\n>\n> Now, the typical coding pattern you've used:\n>\n> ExecReScanHash(HashState *node)\n> {\n> + PlanState *outerPlan = outerPlanState(node);\n>\n> is probably reasonably secure against wrong-pointer slip-ups. But\n> I'm less convinced about that for in-line usages in the midst of\n> a function, particularly in the common case that the function has\n> a variable pointing to its Plan node as well as PlanState node.\n> Would it make sense to try to use the local-variable style everywhere?\n>\n\nDo you mean the pattern like below?\n\n outerPlanState(hashstate) = ExecInitNode(outerPlan(node), estate, eflags);\n\nIt seems that this pattern is mostly used when initializing child nodes\nwith ExecInitNode(), and most calls to ExecInitNode() are using this\npattern as a convention. Not sure if it's better to change them to\nlocal-variable style.\n\nThanks\nRichard",
"msg_date": "Wed, 6 Jul 2022 17:41:44 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use outerPlanState macro instead of referring to leffttree"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Sat, Jul 2, 2022 at 5:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Backing up a little bit, one thing not to like about the outerPlanState\n>> and innerPlanState macros is that they lose all semblance of type\n>> safety:\n\n> Your concern makes sense. I think outerPlan and innerPlan macros share\n> the same issue. Not sure if there is a way to do the type check.\n\nYeah, I don't know of one either. It needn't hold up this patch.\n\n>> Would it make sense to try to use the local-variable style everywhere?\n\n> Do you mean the pattern like below?\n> outerPlanState(hashstate) = ExecInitNode(outerPlan(node), estate, eflags);\n> It seems that this pattern is mostly used when initializing child nodes\n> with ExecInitNode(), and most calls to ExecInitNode() are using this\n> pattern as a convention. Not sure if it's better to change them to\n> local-variable style.\n\nThat's probably fine, especially if it's a commonly used pattern.\n\nTypically, if one applies outerPlan() or outerPlanState() to the\nwrong pointer, the mistake will become obvious upon even minimal\ntesting. My concern here is more about usages in edge cases that\nperhaps escape testing, for instance in the arguments of an\nelog() for some nearly-can't-happen case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Jul 2022 10:48:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use outerPlanState macro instead of referring to leffttree"
},
{
"msg_contents": "On Wed, Jul 6, 2022 at 10:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Typically, if one applies outerPlan() or outerPlanState() to the\n> wrong pointer, the mistake will become obvious upon even minimal\n> testing. My concern here is more about usages in edge cases that\n> perhaps escape testing, for instance in the arguments of an\n> elog() for some nearly-can't-happen case.\n\n\nYeah, concur with that. For edge case usages maybe we can use the\nlocal-variable style to avoid wrong-pointer mistakes.\n\nUpdate the patch to include changes about righttree. But this doesn't\ninclude changes for edge case usages. (A rough look through shows to me\nthat the current usages should be able to be covered by tests.)\n\nThanks\nRichard",
"msg_date": "Thu, 7 Jul 2022 14:59:27 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use outerPlanState macro instead of referring to leffttree"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> Update the patch to include changes about righttree. But this doesn't\n> include changes for edge case usages. (A rough look through shows to me\n> that the current usages should be able to be covered by tests.)\n\nI found a couple other places by grepping, and adjusted those too.\nPushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 07 Jul 2022 11:25:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use outerPlanState macro instead of referring to leffttree"
}
] |
[
{
"msg_contents": "Hello, this is keshav. And I have changed my proposal for this project.\nKindly accept it.",
"msg_date": "Thu, 14 Apr 2022 14:48:11 +0530",
"msg_from": "\"S.R Keshav\" <srkeshav7@gmail.com>",
"msg_from_op": true,
"msg_subject": "GSOC'2022: New and improved website for pgjdbc (JDBC) (2022)"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 5:18 AM S.R Keshav <srkeshav7@gmail.com> wrote:\n> Hello, this is keshav. And I have changed my proposal for this project.\n> Kindly accept it.\n\nHi,\n\nWhen you post updates, please reply to the email you sent previously,\ninstead of sending a new one. That way, your messages will all be\ngrouped together into a single thread, which will make them easier for\npeople to find.\n\nThanks,\n\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Apr 2022 09:03:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GSOC'2022: New and improved website for pgjdbc (JDBC) (2022)"
},
{
        "msg_contents": "Ok, I will do that.\n\nOn Thu, 14 Apr, 2022, 6:33 pm Robert Haas, <robertmhaas@gmail.com> wrote:\n\n> On Thu, Apr 14, 2022 at 5:18 AM S.R Keshav <srkeshav7@gmail.com> wrote:\n> > Hello, this is keshav. And I have changed my proposal for this project.\n> > Kindly accept it.\n>\n> Hi,\n>\n> When you post updates, please reply to the email you sent previously,\n> instead of sending a new one. That way, your messages will all be\n> grouped together into a single thread, which will make them easier for\n> people to find.\n>\n> Thanks,\n>\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>",
"msg_date": "Thu, 14 Apr 2022 18:37:24 +0530",
"msg_from": "\"S.R Keshav\" <srkeshav7@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: GSOC'2022: New and improved website for pgjdbc (JDBC) (2022)"
}
] |
[
{
"msg_contents": "Hi Dave,\n\nMy name is Alena, I am a 4th year Software Engineering student. I'd like to\nrebuild and improve the website for pgjdbc.\n\nI have experience in frontend (Vanilla JS and React) and backend\n(Express.js) web development and understanding on how Jekyll works.\n\nBefore writing a proposal I launched the current website locally,\ninvestigated the code base, and dug into the latest versions of Jekyll,\nkramdown, and Liquid. Please find my proposal attached.\n\nLooking forward to hearing from you.\n\nHave a nice day,\nAlena Katkova",
"msg_date": "Thu, 14 Apr 2022 13:36:38 +0200",
"msg_from": "Alena Katkova <alena.a.katkova@gmail.com>",
"msg_from_op": true,
"msg_subject": "GSoC: New and improved website for pgjdbc (JDBC) (2022)"
}
] |
[
{
        "msg_contents": "Hello,\nHerewith attach my GSOC22 Proposal\n<https://docs.google.com/document/d/10xe2WXETWxqs7cOhLjkvGmIEigBNu98VGQuQ7lWelus/edit?usp=sharing>\nand request for reviews and comments\nLooking forward to constructive feedback\nRegards,\nArjun",
"msg_date": "Thu, 14 Apr 2022 21:03:59 +0530",
"msg_from": "Arjun Prashanth <arjunp0710@gmail.com>",
"msg_from_op": true,
"msg_subject": "[GSOC-22] Proposal Review"
},
{
        "msg_contents": "Thanks for the proposal, Arjun! This looks good to me.\n\nIlaria\n\nOn 14.04.22 17:33, Arjun Prashanth wrote:\n> Hello,\n> Herewith attach my GSOC22 Proposal \n> <https://docs.google.com/document/d/10xe2WXETWxqs7cOhLjkvGmIEigBNu98VGQuQ7lWelus/edit?usp=sharing> \n> and request for reviews and comments\n> Looking forward to constructive feedback\n> Regards,\n> Arjun\n>",
"msg_date": "Fri, 15 Apr 2022 22:40:42 +0200",
"msg_from": "Ilaria Battiston <ilaria.battiston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [GSOC-22] Proposal Review"
}
] |
[
{
"msg_contents": "Hi everyone.\n\nI develop postgresql's extension such as fdw in my work. \nI'm interested in using postgresql for OLAP. \nAfter [1] having been withdrawn, I reviewed [1].\nI think that this patch is realy useful when using OLAP queries.\nFurthermore, I think it would be more useful if this patch works on a foreign table.\nSo, I would like to ask you a question on this patch in this new thread.\n\nI changed this patch a little and confirmed that my idea is true.\nThe followings are things I found and differences of between my prototype and this patch. \n 1. Things I found\n I execute a query which contain join of postgres_fdw's foreign table and a table and aggregation of the join result.\n In my setting, my prototype reduce this query's response by 93%.\n 2. Differences between my prototype and this patch\n (1) Pushdown aggregation of foeign table if FDW pushdown partial aggregation\n (2) postgres_fdw pushdowns some partial aggregations\nI attached my prototype source code and content of my experiment.\nI want to resume development of this patch if there is some possibility of accept of this patch's function.\nI took a contact to Mr.Houska on resuming development of this patch.\nAs a result, Mr.Houska advised for me that I ask in pgsql-hackers whether any reviewers / committers are \ninterested to work on the patch.\nIs anyone interested in my work?\n\nSincerely yours.\nYuuki Fujii\n\n[1] https://commitfest.postgresql.org/32/\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation",
"msg_date": "Fri, 15 Apr 2022 07:33:26 +0000",
"msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>",
"msg_from_op": true,
"msg_subject": "WIP: Aggregation push-down - take2"
},
{
"msg_contents": "Hi everyone.\n\nI rebased the following patches which were submitted in [1].\n v17-0001-Introduce-RelInfoList-structure.patch\n v17-0002-Aggregate-push-down-basic-functionality.patch\n v17-0003-Use-also-partial-paths-as-the-input-for-grouped-path.patch\n\nI checked I can apply the rebased patch to commit 2cd2569c72b8920048e35c31c9be30a6170e1410.\n\nI'm going to register the rebased patch in next commitfest.\n\nSincerely yours, \nYuuki Fujii\n\n[1] https://commitfest.postgresql.org/32/\n\n--\nYuuki Fujii\nInformation Technology R&D Center Mitsubishi Electric Corporation\n\n> -----Original Message-----\n> From: Fujii.Yuki@df.MitsubishiElectric.co.jp\n> <Fujii.Yuki@df.MitsubishiElectric.co.jp>\n> Sent: Friday, April 15, 2022 4:33 PM\n> To: pgsql-hackers@lists.postgresql.org\n> Cc: david@pgmasters.net; ah@cybertec.at; tgl@sss.pgh.pa.us; Tomas Vondra\n> <tomas.vondra@enterprisedb.com>; zhihui.fan1213@gmail.com;\n> legrand_legrand@hotmail.com; daniel@yesql.se\n> Subject: [CAUTION!! MELCO?] WIP: Aggregation push-down - take2\n> \n> Hi everyone.\n> \n> I develop postgresql's extension such as fdw in my work.\n> I'm interested in using postgresql for OLAP.\n> After [1] having been withdrawn, I reviewed [1].\n> I think that this patch is realy useful when using OLAP queries.\n> Furthermore, I think it would be more useful if this patch works on a foreign\n> table.\n> So, I would like to ask you a question on this patch in this new thread.\n> \n> I changed this patch a little and confirmed that my idea is true.\n> The followings are things I found and differences of between my prototype and\n> this patch.\n> 1. Things I found\n> I execute a query which contain join of postgres_fdw's foreign table and a\n> table and aggregation of the join result.\n> In my setting, my prototype reduce this query's response by 93%.\n> 2. 
Differences between my prototype and this patch\n> (1) Pushdown aggregation of foeign table if FDW pushdown partial\n> aggregation\n> (2) postgres_fdw pushdowns some partial aggregations I attached my\n> prototype source code and content of my experiment.\n> I want to resume development of this patch if there is some possibility of\n> accept of this patch's function.\n> I took a contact to Mr.Houska on resuming development of this patch.\n> As a result, Mr.Houska advised for me that I ask in pgsql-hackers whether any\n> reviewers / committers are interested to work on the patch.\n> Is anyone interested in my work?\n> \n> Sincerely yours.\n> Yuuki Fujii\n> \n> [1] https://commitfest.postgresql.org/32/\n> \n> --\n> Yuuki Fujii\n> Information Technology R&D Center Mitsubishi Electric Corporation",
"msg_date": "Tue, 12 Jul 2022 06:49:16 +0000",
"msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: WIP: Aggregation push-down - take2"
},
{
"msg_contents": "Hi,\n\nOn 7/12/22 08:49, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> Hi everyone.\n> \n> I rebased the following patches which were submitted in [1].\n> v17-0001-Introduce-RelInfoList-structure.patch\n> v17-0002-Aggregate-push-down-basic-functionality.patch\n> v17-0003-Use-also-partial-paths-as-the-input-for-grouped-path.patch\n> \n> I checked I can apply the rebased patch to commit 2cd2569c72b8920048e35c31c9be30a6170e1410.\n> \n> I'm going to register the rebased patch in next commitfest.\n> \nI've started looking at this patch series again, but I wonder what's the\nplan. The last patch version no longer applies, so I rebased it - see\nthe attachment. The failures were pretty minor, but there're two warnings:\n\npathnode.c:3174:11: warning: variable 'agg_exprs' set but not used\n[-Wunused-but-set-variable]\n Node *agg_exprs;\n ^\npathnode.c:3252:11: warning: variable 'agg_exprs' set but not used\n[-Wunused-but-set-variable]\n Node *agg_exprs;\n ^\n\nso there seem to be some loose ends. Moreover, there are two failures in\nmake check, due to plan changes like this:\n\n+ Finalize GroupAggregate\n Group Key: p.i\n- -> Nested Loop\n- -> Partial HashAggregate\n- Group Key: c1.parent\n- -> Seq Scan on agg_pushdown_child1 c1\n- -> Index Scan using agg_pushdown_parent_pkey on ...\n- Index Cond: (i = c1.parent)\n-(8 rows)\n+ -> Sort\n+ Sort Key: p.i\n+ -> Nested Loop\n+ -> Partial HashAggregate\n+ Group Key: c1.parent\n+ -> Seq Scan on agg_pushdown_child1 c1\n+ -> Index Scan using agg_pushdown_parent_pkey on ...\n+ Index Cond: (i = c1.parent)\n+(10 rows)\n\nThis seems somewhat strange - maybe the plan is correct, but the extra\nsort seems unnecessary.\n\nHowever, maybe I'm confused/missing something? The above message says\nv17 having parts 0001-0003, but there's only one patch in v18. So maybe\nI failed to apply some prior patch?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 31 Oct 2022 20:00:20 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: Aggregation push-down - take2"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n\n> Hi,\n> \n> On 7/12/22 08:49, Fujii.Yuki@df.MitsubishiElectric.co.jp wrote:\n> > Hi everyone.\n> > \n> > I rebased the following patches which were submitted in [1].\n> > v17-0001-Introduce-RelInfoList-structure.patch\n> > v17-0002-Aggregate-push-down-basic-functionality.patch\n> > v17-0003-Use-also-partial-paths-as-the-input-for-grouped-path.patch\n> > \n> > I checked I can apply the rebased patch to commit 2cd2569c72b8920048e35c31c9be30a6170e1410.\n> > \n> > I'm going to register the rebased patch in next commitfest.\n> > \n> I've started looking at this patch series again, but I wonder what's the\n> plan. The last patch version no longer applies, so I rebased it - see\n> the attachment. The failures were pretty minor, but there're two warnings:\n> \n> pathnode.c:3174:11: warning: variable 'agg_exprs' set but not used\n> [-Wunused-but-set-variable]\n> Node *agg_exprs;\n> ^\n> pathnode.c:3252:11: warning: variable 'agg_exprs' set but not used\n> [-Wunused-but-set-variable]\n> Node *agg_exprs;\n> ^\n> \n> so there seem to be some loose ends. Moreover, there are two failures in\n> make check, due to plan changes like this:\n> \n> + Finalize GroupAggregate\n> Group Key: p.i\n> - -> Nested Loop\n> - -> Partial HashAggregate\n> - Group Key: c1.parent\n> - -> Seq Scan on agg_pushdown_child1 c1\n> - -> Index Scan using agg_pushdown_parent_pkey on ...\n> - Index Cond: (i = c1.parent)\n> -(8 rows)\n> + -> Sort\n> + Sort Key: p.i\n> + -> Nested Loop\n> + -> Partial HashAggregate\n> + Group Key: c1.parent\n> + -> Seq Scan on agg_pushdown_child1 c1\n> + -> Index Scan using agg_pushdown_parent_pkey on ...\n> + Index Cond: (i = c1.parent)\n> +(10 rows)\n> \n> This seems somewhat strange - maybe the plan is correct, but the extra\n> sort seems unnecessary.\n> \n> However, maybe I'm confused/missing something? The above message says\n> v17 having parts 0001-0003, but there's only one patch in v18. 
So maybe\n> I failed to apply some prior patch?\n\nI've rebased the last version I had on my workstation (v17), the regression\ntests just worked. Maybe v18 was messed up. v20 is attached.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Fri, 04 Nov 2022 15:17:15 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: WIP: Aggregation push-down - take2"
},
{
"msg_contents": "Hi,\n\nI did a quick initial review of the v20 patch series. I plan to do a\nmore thorough review over the next couple days, if time permits. In\ngeneral I think the patch is in pretty good shape.\n\nI've added a bunch of comments in a number of places - see the \"review\ncomments\" parts for each of the original parts. That should make it\neasier to deal with all the items. I'll go through the main stuff here:\n\n1) I was somewhat confused why we even need RelInfoList, when it merely\nwraps existing fields, but I guess it's because we need multiple such\npairs - one for joins, one for grouped rels. Correct?\n\n2) While reading the README, I was somewhat confused because it seems to\nsuggest we have to push the aggregate only to baserel level, but then it\nalso talks about pushing to other places (above joins). There's a couple\nother places in the README that confused me a bit, see the XXX comments.\n\nIn general, I think the README focuses on explaining the motivation,\ni.e. 
why we want to do this, but it's somewhat light on how it's done.\nThe other parts talk about the implementation in more detail.\n\n3) I tweaked a couple places in allpaths.c to make it more readable, but\nI admit that's a somewhat subjective measure, so feel free to undo that.\n\n4) setup_base_grouped_rels compares bitmaps before looking at\nreloptkind, which seems to be cheaper so maybe the checks should happen\nin the opposite order (not a huge difference, though)\n\n5) add_grouped_path seems to be a bit confusing, because the name makes\nit look like it does about the same stuff as add_path/add_partial_path,\nwhen that's not quite true\n\n6) 0002 failed to add enable_agg_pushdown to the sample file, which\nleads to a failure in regression tests\n\n7) when I change enable_agg_pushdown to true and run regression tests, I\nget a bunch of failures like\n\n ERROR: WindowFunc found where not expected\n\nSeems we don't handle window functions correctly somewhere, or maybe\nsetup_aggregate_pushdown should check/reject hasWindowFuncs too?\n\n8) create_ordinary_grouping_paths changes when set_cheapest() gets\ncalled, but I can't quite convince myself the change is correct. How\ncome it's correct to check pathlist instead of partial_pathlist (as before).\n\n9) I see create_agg_sorted_path is quite picky about the subpath\npathkeys, essentially requiring it to be a prefix of group_pathkeys.\nSeems unnecessary, no? Even if we sort/group on different pathkeys, that\nreduces the cardinality, and we may do sort later (or just finalize\nusing hashagg).\n\nFurthermore, we generally try creating a sort with the proper ordering\nin other places - why not here? I mean, if subpath has pathkeys=A and we\nneed [A,B], we could try adding suitable IncrementalSort, no? Or even\nfull Sort, or something. 
Or is that not beneficial here?\n\n10) I don't understand why create_agg_hashed_path limits the hashtable\nsize to work_mem - shouldn't it do something like cost_agg to account\nfor spilling to disk?\n\n11) There's an unnecessary/unrelated change in trigger.c.\n\n12) I improved/reworded a couple comments where I initially was unsure\nwhat exactly that means. Hopefully I got it right.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 13 Nov 2022 22:15:51 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: Aggregation push-down - take2"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n\n> Hi,\n> \n> I did a quick initial review of the v20 patch series. I plan to do a\n> more thorough review over the next couple days, if time permits. In\n> general I think the patch is in pretty good shape.\n\nThanks.\n\n> I've added a bunch of comments in a number of places - see the \"review\n> comments\" parts for each of the original parts. That should make it\n> easier to deal with all the items. I'll go through the main stuff here:\n\nUnless I miss something, all these items are covered in context below, except\nfor this one:\n\n> 7) when I change enable_agg_pushdown to true and run regression tests, I\n> get a bunch of failures like\n> \n> ERROR: WindowFunc found where not expected\n> \n> Seems we don't handle window functions correctly somewhere, or maybe\n> setup_aggregate_pushdown should check/reject hasWindowFuncs too?\n\nWe don't need to reject window functions, window functions are processed after\ngrouping/aggregation. The problem I noticed in the regression tests was that a\nwindow function referenced a (non-window) aggregate. We just need to ensure\nthat pull_var_clause() recurses into that window function in such cases:\n\nBesides the next version, v21-fixes.patch file is attached. It tries to\nsummarize all the changes between v21 and v22. 
(I wonder if this attachment\nmakes the cfbot fail.)\n\n\ndiff --git a/src/backend/optimizer/plan/initsplan.c b/src/backend/optimizer/plan/initsplan.c\nindex 8e913c92d8..8dc39765f2 100644\n--- a/src/backend/optimizer/plan/initsplan.c\n+++ b/src/backend/optimizer/plan/initsplan.c\n@@ -355,7 +355,8 @@ create_aggregate_grouped_var_infos(PlannerInfo *root)\n \tAssert(root->grouped_var_list == NIL);\n \n \ttlist_exprs = pull_var_clause((Node *) root->processed_tlist,\n-\t\t\t\t\t\t\t\t PVC_INCLUDE_AGGREGATES);\n+\t\t\t\t\t\t\t\t PVC_INCLUDE_AGGREGATES |\n+\t\t\t\t\t\t\t\t PVC_RECURSE_WINDOWFUNCS);\n \n \t/*\n \t * Although GroupingFunc is related to root->parse->groupingSets, this\n\n\n> ---\n> src/backend/optimizer/util/relnode.c | 11 +++++++++++\n> src/include/nodes/pathnodes.h | 3 +++\n> 2 files changed, 14 insertions(+)\n> \n> diff --git a/src/backend/optimizer/util/relnode.c b/src/backend/optimizer/util/relnode.c\n> index 94720865f47..d4367ba14a5 100644\n> --- a/src/backend/optimizer/util/relnode.c\n> +++ b/src/backend/optimizer/util/relnode.c\n> @@ -382,6 +382,12 @@ find_base_rel(PlannerInfo *root, int relid)\n> /*\n> * build_rel_hash\n> *\t Construct the auxiliary hash table for relation specific data.\n> + *\n> + * XXX Why is this renamed, leaving out the \"join\" part? Are we going to use\n> + * it for other purposes?\n\nYes, besides join relation, it's used to find the \"grouped relation\" by\nRelids. This change tries to follow the suggestion \"Maybe an appropriate\npreliminary patch ...\" in [1], but I haven't got any feedback whether my\nunderstanding was correct.\n\n> + * XXX Also, why change the API and not pass PlannerInfo? Seems pretty usual\n> + * for planner functions.\n\nI think that the reason was that, with the patch applied, PlannerInfo contains\nmultiple fields of the RelInfoList type, so build_rel_hash() needs an\ninformation which one it should process. 
Passing the exact field is simpler\nthan passing PlannerInfo plus some additional information.\n\n> */\n> static void\n> build_rel_hash(RelInfoList *list)\n> @@ -422,6 +428,11 @@ build_rel_hash(RelInfoList *list)\n> /*\n> * find_rel_info\n> *\t Find a base or join relation entry.\n> + *\n> + * XXX Why change the API and not pass PlannerInfo? Seems pretty usual\n> + * for planner functions.\n\nFor the same reason that build_rel_hash() receives the list explicitly, see\nabove.\n\n> + * XXX I don't understand why we need both this and find_join_rel.\n\nPerhaps I just wanted to keep the call sites of find_join_rel() untouched. I\nthink that\n\n find_join_rel(root, relids);\n\nis a little bit easier to read than\n\n (RelOptInfo *) find_rel_info(root->join_rel_list, relids);\n\n> */\n> static void *\n> find_rel_info(RelInfoList *list, Relids relids)\n> diff --git a/src/include/nodes/pathnodes.h b/src/include/nodes/pathnodes.h\n> index 0ca7d5ab51e..018ce755720 100644\n> --- a/src/include/nodes/pathnodes.h\n> +++ b/src/include/nodes/pathnodes.h\n> @@ -88,6 +88,9 @@ typedef enum UpperRelationKind\n> * present and valid when rel_hash is not NULL. Note that we still maintain\n> * the list even when using the hash table for lookups; this simplifies life\n> * for GEQO.\n> + *\n> + * XXX I wonder why we actually need a separate node, merely wrapping fields\n> + * that already existed ...\n\nThis is so that the existing fields can still be printed out\n(nodes/outfuncs.c).\n\n> diff --git a/src/backend/optimizer/README b/src/backend/optimizer/README\n> index 2fd1a962699..6f6b7d0b93b 100644\n> --- a/src/backend/optimizer/README\n> +++ b/src/backend/optimizer/README\n> @@ -1168,6 +1168,12 @@ input of Agg node. 
However, if the groups are large enough, it may be more\n> efficient to apply the partial aggregation to the output of base relation\n> scan, and finalize it when we have all relations of the query joined:\n> \n> +XXX review: Hmm, do we need to push it all the way down to base relations? Or\n> +would it make sense to do the agg on an intermediate level? Say, we're joining\n> +three tables A, B and C. Maybe the agg could/should be evaluated on top of join\n> +A+B, before joining with C? Say, maybe the aggregate references columns from\n> +both base relations?\n> +\n> EXPLAIN\n> SELECT a.i, avg(b.y)\n> FROM a JOIN b ON b.j = a.i\n\nAnother example below does show the partial aggregates at join level.\n\n> +XXX Perhaps mention this may also mean the partial ggregate could be pushed\n> +to a remote server with FDW partitions?\n\nEven if it's not implemented in the current patch version?\n\n> +\n> Note that there's often no GROUP BY expression to be used for the partial\n> aggregation, so we use equivalence classes to derive grouping expression: in\n> the example above, the grouping key \"b.j\" was derived from \"a.i\".\n> \n> +XXX I think this is slightly confusing - there is a GROUP BY expression for the\n> +partial aggregate, but as stated in the query it may not reference the side of\n> +a join explicitly.\n\nok, changed.\n\n> Also note that in this case the partial aggregate uses the \"b.j\" as grouping\n> column although the column does not appear in the query target list. The point\n> is that \"b.j\" is needed to evaluate the join condition, and there's no other\n> way for the partial aggregate to emit its values.\n> \n> +XXX Not sure I understand what this is trying to say. Firstly, maybe it'd be\n> +helpful to show targetlists in the EXPLAIN, i.e. do it as VERBOSE. 
But more\n> +importantly, isn't this a direct consequence of the equivalence classes stuff\n> +mentioned in the preceding paragraph?\n\nThe equivalence class is just a mechanism to derive expressions which are not\nexplicitly mentioned in the query, but there's always a question whether you\nneed to derive any expression for particular table or not. Here I tried to\nexplain that the choice of join columns is related to the choice of grouping\nkeys for the partial aggregate.\n\nI've deleted this paragraph and added a note to the previous one.\n\n> Besides base relation, the aggregation can also be pushed down to join:\n> \n> EXPLAIN\n> @@ -1217,6 +1235,10 @@ Besides base relation, the aggregation can also be pushed down to join:\n> \t -> Hash\n> \t\t-> Seq Scan on a\n> \n> +XXX Aha, so this is pretty-much an answer to my earlier comment, and matches\n> +my example with three tables. Maybe this suggests the initial reference to\n> +base relations is a bit confusing.\n\nI tried to use the simplest example to demonstrate the concepts, then extended\nit to the partially-aggregated joins.\n\n> +XXX I think this is a good explanation of the motivation for this patch, but\n> +maybe it'd be good to go into more details about how we decide if it's correct\n> +to actually do the pushdown, data structures etc. 
Similar to earlier parts of\n> +this README.\n\nAdded two paragraphs, see \"Regarding correctness...\".\n\n> diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c\n> index f00f900ff41..6d2c2f4fc36 100644\n> --- a/src/backend/optimizer/path/allpaths.c\n> +++ b/src/backend/optimizer/path/allpaths.c\n> @@ -196,9 +196,10 @@ make_one_rel(PlannerInfo *root, List *joinlist)\n> \t/*\n> \t * Now that the sizes are known, we can estimate the sizes of the grouped\n> \t * relations.\n> +\t *\n> +\t * XXX Seems more consistent with code nearby.\n> \t */\n> -\tif (root->grouped_var_list)\n> -\t\tsetup_base_grouped_rels(root);\n> +\tsetup_base_grouped_rels(root);\n\nIn general I prefer not calling a function if it's obvious that it's not\nneeded, but on the other hand the test of the 'grouped_var_list' field may be\nconsidered disturbing from the caller's perspective. I've got no strong\nopinion on this, so I can accept this proposal.\n\n> \n> /*\n> - * setup_based_grouped_rels\n> + * setup_base_grouped_rels\n> *\t For each \"plain\" relation build a grouped relation if aggregate pushdown\n> * is possible and if this relation is suitable for partial aggregation.\n> */\n\nFixed, thanks.\n\n> {\n> \tIndex\t\trti;\n> \n> +\t/* If there are no grouped relations, estimate their sizes. */\n> +\tif (!root->grouped_var_list)\n> +\t\treturn;\n> +\n\nAccepted, but with different wording (s/relations/expressions/).\n\n> +\t\t/* XXX Shouldn't this check be earlier? Seems cheaper than the check\n> +\t\t * calling bms_nonempty_difference, for example. */\n> \t\tif (brel->reloptkind != RELOPT_BASEREL)\n> \t\t\tcontinue;\n\nRight, moved.\n\n> \t\trel_grouped = build_simple_grouped_rel(root, brel->relid, &agg_info);\n> -\t\tif (rel_grouped)\n> -\t\t{\n> -\t\t\t/* Make the relation available for joining. */\n> -\t\t\tadd_grouped_rel(root, rel_grouped, agg_info);\n> -\t\t}\n> +\n> +\t\t/* XXX When does this happen? 
*/\n> +\t\tif (!rel_grouped)\n> +\t\t\tcontinue;\n> +\n> +\t\t/* Make the relation available for joining. */\n> +\t\tadd_grouped_rel(root, rel_grouped, agg_info);\n\nI'd use the \"continue\" statement if there was a lot of code in the \"if\n(rel_grouped) {...}\" branch, but no strong preference in this case, so\naccepted.\n\n> \t}\n> }\n> \n> @@ -560,6 +569,8 @@ set_rel_pathlist(PlannerInfo *root, RelOptInfo *rel,\n> \t\t\t\t\t/* Plain relation */\n> \t\t\t\t\tset_plain_rel_pathlist(root, rel, rte);\n> \n> +\t\t\t\t\t/* XXX Shouldn't this really be part of set_plain_rel_pathlist? */\n> +\n> \t\t\t\t\t/* Add paths to the grouped relation if one exists. */\n> \t\t\t\t\trel_grouped = find_grouped_rel(root, rel->relids,\n\nYes, it can. Moved.\n\n> @@ -3382,6 +3393,11 @@ generate_grouping_paths(PlannerInfo *root, RelOptInfo *rel_grouped,\n> \n> /*\n> * Apply partial aggregation to a subpath and add the AggPath to the pathlist.\n> + *\n> + * XXX I think this is potentially quite confusing, because the existing \"add\"\n> + * functions add_path and add_partial_path only check if the proposed path is\n> + * dominated by an existing path, pathkeys, etc. But this does more than that,\n> + * perhaps even constructing new path etc.\n> */\n> static void\n> add_grouped_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,\n\nMaybe, but I don't have a good idea of an alternative name.\ncreate_group_path() already exists and the create_*_path() functions are\nrather low-level. Maybe generate_grouped_path(), and at the same time rename\ngenerate_grouping_paths() to generate_grouped_paths()? In general, the\ngenerate_*_path*() functions do non-trivial things and eventually call\nadd_path().\n\n> @@ -3399,9 +3414,16 @@ add_grouped_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,\n> \telse\n> \t\telog(ERROR, \"unexpected strategy %d\", aggstrategy);\n> \n> +\t/*\n> +\t * Bail out if we failed to create a suitable aggregated path. This can\n> +\t * happen e.g. 
when the path does not support hashing (for AGG_HASHED),\n> +\t * or when the input path is not sorted.\n> +\t */\n> +\tif (agg_path == NULL)\n> +\t\treturn;\n> +\n> \t/* Add the grouped path to the list of grouped base paths. */\n> -\tif (agg_path != NULL)\n> -\t\tadd_path(rel, (Path *) agg_path);\n> +\tadd_path(rel, (Path *) agg_path);\n\nok, changed.\n\n> }\n> \n> /*\n> @@ -3545,7 +3567,6 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels)\n> \n> \tfor (lev = 2; lev <= levels_needed; lev++)\n> \t{\n> -\t\tRelOptInfo *rel_grouped;\n> \t\tListCell *lc;\n> \n> \t\t/*\n> @@ -3567,6 +3588,8 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels)\n> \t\t */\n> \t\tforeach(lc, root->join_rel_level[lev])\n> \t\t{\n> +\t\t\tRelOptInfo *rel_grouped;\n> +\n> \t\t\trel = (RelOptInfo *) lfirst(lc);\n\nSure, fixed.\n\n> diff --git a/src/backend/optimizer/plan/initsplan.c b/src/backend/optimizer/plan/initsplan.c\n> index 8e913c92d8b..d7a9de9645e 100644\n> --- a/src/backend/optimizer/plan/initsplan.c\n> +++ b/src/backend/optimizer/plan/initsplan.c\n> @@ -278,6 +278,8 @@ add_vars_to_targetlist(PlannerInfo *root, List *vars,\n> * each possible grouping expression.\n> *\n> * root->group_pathkeys must be setup before this function is called.\n> + *\n> + * XXX Perhaps this should check/reject hasWindowFuncs too?\n\ncreate_window_paths() is called after create_grouping_paths() (see\ngrouping_planner()), so it should not care whether the input (possibly\ngrouped) paths involve the aggregate push-down or not.\n\n> */\n> extern void\n> setup_aggregate_pushdown(PlannerInfo *root)\n> @@ -311,6 +313,12 @@ setup_aggregate_pushdown(PlannerInfo *root)\n> \tif (root->parse->hasTargetSRFs)\n> \t\treturn;\n> \n> +\t/*\n> +\t * XXX Maybe it'd be better to move create_aggregate_grouped_var_infos and\n> +\t * create_grouping_expr_grouped_var_infos to a function returning bool, and\n> +\t * only check that here.\n> +\t */\n> +\n\nHm, it looks 
to me like too much \"indirection\", and also a descriptive function\nname would be tricky to invent.\n\n> \t/* Create GroupedVarInfo per (distinct) aggregate. */\n> \tcreate_aggregate_grouped_var_infos(root);\n> \n> @@ -329,6 +337,8 @@ setup_aggregate_pushdown(PlannerInfo *root)\n> \t * Now that we know that grouping can be pushed down, search for the\n> \t * maximum sortgroupref. The base relations may need it if extra grouping\n> \t * expressions get added to them.\n> \t *\n> \t * XXX Shouldn't we do that only when adding extra grouping expressions?\n> \t */\n> \tAssert(root->max_sortgroupref == 0);\n> \tforeach(lc, root->processed_tlist)\n\nWe don't know at this (early) stage whether those \"extra grouping expression\"\nwill be needed for at least one relation. (max_sortgroupref is used by\ncreate_rel_agg_info())\n\n> diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c\n> index 0ada3ba3ebe..2f4db69c1f9 100644\n> --- a/src/backend/optimizer/plan/planner.c\n> +++ b/src/backend/optimizer/plan/planner.c\n> @@ -3899,6 +3899,10 @@ create_ordinary_grouping_paths(PlannerInfo *root, RelOptInfo *input_rel,\n> \t/*\n> \t * The non-partial paths can come either from the Gather above or from\n> \t * aggregate push-down.\n> \t *\n> \t * XXX I can't quite convince myself this is correct. How come it's fine\n> \t * to check pathlist and then call set_cheapest() on partially_grouped_rel?\n> \t * Maybe it's correct and the comment merely needs to explain this.\n\nIt's not clear to me what makes you confused. Without my patch, the code looks\nlike this:\n\n if (partially_grouped_rel && partially_grouped_rel->partial_pathlist)\n {\n gather_grouping_paths(root, partially_grouped_rel);\n set_cheapest(partially_grouped_rel);\n }\n\nHere gather_grouping_paths() adds paths to partially_grouped_rel->pathlist. 
My\npatch calls set_cheapest() independent from gather_grouping_paths() because\nthe paths requiring the aggregate finalization can also be generated by the\naggregate push-down feature.\n\n> \t */\n> \tif (partially_grouped_rel && partially_grouped_rel->pathlist)\n> \t\tset_cheapest(partially_grouped_rel);\n> @@ -6847,6 +6851,12 @@ create_partial_grouping_paths(PlannerInfo *root,\n> \t * push-down.\n> \t */\n> \tpartially_grouped_rel = find_grouped_rel(root, input_rel->relids, NULL);\n> +\n> +\t/*\n> +\t * If the relation already exists, it must have been created by aggregate\n> +\t * pushdown. We can't check how exactly it got created, but we can at least\n> +\t * check that aggregate pushdown is enabled.\n> +\t */\n> \tAssert(enable_agg_pushdown || partially_grouped_rel == NULL);\n\nok, done.\n\n> @@ -6872,6 +6882,8 @@ create_partial_grouping_paths(PlannerInfo *root,\n> \t * If we can't partially aggregate partial paths, and we can't partially\n> \t * aggregate non-partial paths, then don't bother creating the new\n> \t * RelOptInfo at all, unless the caller specified force_rel_creation.\n> +\t *\n> +\t * XXX Not sure why we're checking the partially_grouped_rel here?\n> \t */\n> \tif (cheapest_total_path == NULL &&\n> \t\tcheapest_partial_path == NULL &&\n\nI think (but not verified yet) that without this test the function could\nreturn NULL for reasons unrelated to the aggregate push-down. Nevertheless, I\nrealize now that there's no aggregate push-down specific processing in the\nfunction. I've adjusted it so that it does return, but the returned value is\npartially_grouped_rel rather than NULL.\n\n> @@ -6881,7 +6893,9 @@ create_partial_grouping_paths(PlannerInfo *root,\n> \n> \t/*\n> \t * Build a new upper relation to represent the result of partially\n> -\t * aggregating the rows from the input relation.\n> +\t * aggregating the rows from the input relation. 
The relation may\n> +\t * already exist due to aggregate pushdown, in which case we don't\n> +\t * need to create it.\n> \t */\n> \tif (partially_grouped_rel == NULL)\n> \t\tpartially_grouped_rel = fetch_upper_rel(root,\n\nok, done.\n\n> @@ -6903,6 +6917,8 @@ create_partial_grouping_paths(PlannerInfo *root,\n> \t *\n> \t * If the target was already created for the sake of aggregate push-down,\n> \t * it should be compatible with what we'd create here.\n> +\t *\n> +\t * XXX Why is this checking reltarget->exprs? What does that mean? \n> \t */\n> \tif (partially_grouped_rel->reltarget->exprs == NIL)\n> \t\tpartially_grouped_rel->reltarget =\n\nI've added this comment:\n\n\t * XXX If fetch_upper_rel() had to create a new relation (i.e. aggregate\n\t * push-down generated no paths), it created an empty target. Should we\n\t * change the convention and have it assign NULL to reltarget instead? Or\n\t * should we introduce a function like is_pathtarget_empty()?\n\n> diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c\n> index 7025ebf94be..395bd093d34 100644\n> --- a/src/backend/optimizer/util/pathnode.c\n> +++ b/src/backend/optimizer/util/pathnode.c\n> @@ -3163,9 +3163,21 @@ create_agg_path(PlannerInfo *root,\n> }\n> \n> /*\n> + * create_agg_sorted_path\n> + *\t\tCreates a pathnode performing sorted aggregation/grouping\n> + *\n> * Apply AGG_SORTED aggregation path to subpath if it's suitably sorted.\n> *\n> * NULL is returned if sorting of subpath output is not suitable.\n> + *\n> + * XXX I'm a bit confused why we need this? We now have create_agg_path and also\n> + * create_agg_sorted_path and create_agg_hashed_path.\n\nDo you mean that the function names are confusing? The functions\ncreate_agg_sorted_path() and create_agg_hashed_path() do some checks /\npreparation for the call of the existing function create_agg_path(), which is\nmore low-level. 
Should the names be something like\ncreate_partial_agg_sorted_path() and create_partial_agg_hashed_path() ?\n\n> + *\n> + * XXX This assumes the input path to be sorted in a suitable way, but for\n> + * regular aggregation we check that separately and then perhaps add sort\n> + * if needed (possibly incremental one). That is, we don't do such checks\n> + * in create_agg_path. Shouldn't we do the same thing before calling this\n> + * new functions?\n> */\n> AggPath *\n> create_agg_sorted_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,\n> @@ -3184,6 +3196,7 @@ create_agg_sorted_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,\n> \tagg_exprs = agg_info->agg_exprs;\n> \ttarget = agg_info->target;\n\nLikewise, it seems that you'd like to see different function name and maybe\ndifferent location of this function. Both create_agg_sorted_path() and\ncreate_agg_hashed_path() are rather wrappers for create_agg_path().\n\n> \n> +\t/* Bail out if the input path is not sorted at all. */\n> \tif (subpath->pathkeys == NIL)\n> \t\treturn NULL;\n\nok, done.\n\n> @@ -3192,6 +3205,18 @@ create_agg_sorted_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,\n> \n> \t/*\n> \t * Find all query pathkeys that our relation does affect.\n> +\t *\n> +\t * XXX Not sure what \"that our relation does affect\" means? Also, we\n> +\t * are not looking at query_pathkeys but group_pathkeys, so that's a\n> +\t * bit confusing. Perhaps something like this would be better:\n> +\t *\n\nIndeed, the check of pathkeys was weird, I've reworked it.\n\n> @@ -3210,10 +3235,21 @@ create_agg_sorted_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,\n> \t\t}\n> \t}\n> \n> +\t/* Bail out if the subquery has no pathkeys for the grouping. */\n> \tif (key_subset == NIL)\n> \t\treturn NULL;\n> \n> -\t/* Check if AGG_SORTED is useful for the whole query. 
*/\n> +\t/*\n> +\t * Check if AGG_SORTED is useful for the whole query.\n> +\t *\n> +\t * XXX So this means we require the group pathkeys matched to the\n> +\t * subpath have to be a prefix of subpath->pathkeys. Why is that\n> +\t * necessary? We'll reduce the cardinality, and in the worst case\n> +\t * we'll have to add a separate sort (full or incremental). Or we\n> +\t * could finalize using hashed aggregate.\n\nAlthough with different arguments, pathkeys_contained_in() is still used in\nthe new version of the patch. I've added a TODO comment about the incremental\nsort (it did not exist when I was writing the patch), but what do you mean by\n\"reducing the cardinality\"? Eventually the partial aggregate should reduce the\ncardinality, but for the AGG_SORT strategy to work, the input sorting must be\nsuch that the executor can recognize the group boundaries.\n\n> +\t *\n> +\t * XXX Doesn't seem to change any regression tests when disabled.\n> +\t */\n> \tif (!pathkeys_contained_in(key_subset, subpath->pathkeys))\n> \t\treturn NULL;\n\n\"disabled\" means removal of this part (including the return statement), or\nreturning NULL unconditionally? Whatever you mean, please check with the new\nversion.\n\n> @@ -3231,7 +3267,7 @@ create_agg_sorted_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,\n> \tresult = create_agg_path(root, rel, subpath, target,\n> \t\t\t\t\t\t\t AGG_SORTED, aggsplit,\n> \t\t\t\t\t\t\t agg_info->group_clauses,\n> -\t\t\t\t\t\t\t NIL,\n> +\t\t\t\t\t\t\t NIL,\t/* qual for HAVING clause */\n> \t\t\t\t\t\t\t &agg_costs,\n> \t\t\t\t\t\t\t dNumGroups);\n\nok, done here as well as in create_agg_hashed_path().\n\n> @@ -3283,6 +3319,9 @@ create_agg_hashed_path(PlannerInfo *root, RelOptInfo *rel,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t &agg_costs,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t dNumGroups);\n> \n> +\t\t/*\n> +\t\t * XXX But we can spill to disk in hashagg now, no?\n> +\t\t */\n> \t\tif (hashaggtablesize < work_mem * 1024L)\n> \t\t{\n\nYes, we can. 
It wasn't possible while I was writing the patch. Fixed.\n\n> diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample\n> index 868d21c351e..6e87ada684b 100644\n> --- a/src/backend/utils/misc/postgresql.conf.sample\n> +++ b/src/backend/utils/misc/postgresql.conf.sample\n> @@ -388,6 +388,7 @@\n> #enable_seqscan = on\n> #enable_sort = on\n> #enable_tidscan = on\n> +#enable_agg_pushdown = on\n\nDone.\n\n> diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c\n> index 1055ea70940..05192ca549a 100644\n> --- a/src/backend/optimizer/path/allpaths.c\n> +++ b/src/backend/optimizer/path/allpaths.c\n> @@ -3352,7 +3352,7 @@ generate_grouping_paths(PlannerInfo *root, RelOptInfo *rel_grouped,\n> \t\t\t\t\t\tRelOptInfo *rel_plain, RelAggInfo *agg_info)\n> {\n> \tListCell *lc;\n> -\tPath\t *path;\n> +\tPath\t *path;\t/* XXX why declare at this level, not in the loops */\n> \n\nI usually do it this way, not sure why. Perhaps because it's less typing :-) I\nchanged that in the next version so that we don't waste time arguing about\nunimportant things.\n\n[1] https://www.postgresql.org/message-id/9726.1542577439%40sss.pgh.pa.us\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Thu, 17 Nov 2022 12:05:39 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: WIP: Aggregation push-down - take2"
},
{
"msg_contents": "On Thu, 17 Nov 2022 at 16:34, Antonin Houska <ah@cybertec.at> wrote:\n>\n> Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n> > Hi,\n> >\n> > I did a quick initial review of the v20 patch series. I plan to do a\n> > more thorough review over the next couple days, if time permits. In\n> > general I think the patch is in pretty good shape.\n>\n> Thanks.\n>\n> > I've added a bunch of comments in a number of places - see the \"review\n> > comments\" parts for each of the original parts. That should make it\n> > easier to deal with all the items. I'll go through the main stuff here:\n>\n> Unless I miss something, all these items are covered in context below, except\n> for this one:\n>\n> > 7) when I change enable_agg_pushdown to true and run regression tests, I\n> > get a bunch of failures like\n> >\n> > ERROR: WindowFunc found where not expected\n> >\n> > Seems we don't handle window functions correctly somewhere, or maybe\n> > setup_aggregate_pushdown should check/reject hasWindowFuncs too?\n>\n> We don't need to reject window functions, window functions are processed after\n> grouping/aggregation. The problem I noticed in the regression tests was that a\n> window function referenced a (non-window) aggregate. We just need to ensure\n> that pull_var_clause() recurses into that window function in such cases:\n>\n> Besides the next version, v21-fixes.patch file is attached. It tries to\n> summarize all the changes between v21 and v22. 
(I wonder if this attachment\n> makes the cfbot fail.)\n>\n>\n> diff --git a/src/backend/optimizer/plan/initsplan.c b/src/backend/optimizer/plan/initsplan.c\n> index 8e913c92d8..8dc39765f2 100644\n> --- a/src/backend/optimizer/plan/initsplan.c\n> +++ b/src/backend/optimizer/plan/initsplan.c\n> @@ -355,7 +355,8 @@ create_aggregate_grouped_var_infos(PlannerInfo *root)\n> Assert(root->grouped_var_list == NIL);\n>\n> tlist_exprs = pull_var_clause((Node *) root->processed_tlist,\n> - PVC_INCLUDE_AGGREGATES);\n> + PVC_INCLUDE_AGGREGATES |\n> + PVC_RECURSE_WINDOWFUNCS);\n>\n> /*\n> * Although GroupingFunc is related to root->parse->groupingSets, this\n>\n>\n> > ---\n> > src/backend/optimizer/util/relnode.c | 11 +++++++++++\n> > src/include/nodes/pathnodes.h | 3 +++\n> > 2 files changed, 14 insertions(+)\n> >\n> > diff --git a/src/backend/optimizer/util/relnode.c b/src/backend/optimizer/util/relnode.c\n> > index 94720865f47..d4367ba14a5 100644\n> > --- a/src/backend/optimizer/util/relnode.c\n> > +++ b/src/backend/optimizer/util/relnode.c\n> > @@ -382,6 +382,12 @@ find_base_rel(PlannerInfo *root, int relid)\n> > /*\n> > * build_rel_hash\n> > * Construct the auxiliary hash table for relation specific data.\n> > + *\n> > + * XXX Why is this renamed, leaving out the \"join\" part? Are we going to use\n> > + * it for other purposes?\n>\n> Yes, besides join relation, it's used to find the \"grouped relation\" by\n> Relids. This change tries to follow the suggestion \"Maybe an appropriate\n> preliminary patch ...\" in [1], but I haven't got any feedback whether my\n> understanding was correct.\n>\n> > + * XXX Also, why change the API and not pass PlannerInfo? Seems pretty usual\n> > + * for planner functions.\n>\n> I think that the reason was that, with the patch applied, PlannerInfo contains\n> multiple fields of the RelInfoList type, so build_rel_hash() needs an\n> information which one it should process. 
Passing the exact field is simpler\n> than passing PlannerInfo plus some additional information.\n>\n> > */\n> > static void\n> > build_rel_hash(RelInfoList *list)\n> > @@ -422,6 +428,11 @@ build_rel_hash(RelInfoList *list)\n> > /*\n> > * find_rel_info\n> > * Find a base or join relation entry.\n> > + *\n> > + * XXX Why change the API and not pass PlannerInfo? Seems pretty usual\n> > + * for planner functions.\n>\n> For the same reason that build_rel_hash() receives the list explicitly, see\n> above.\n>\n> > + * XXX I don't understand why we need both this and find_join_rel.\n>\n> Perhaps I just wanted to keep the call sites of find_join_rel() untouched. I\n> think that\n>\n> find_join_rel(root, relids);\n>\n> is a little bit easier to read than\n>\n> (RelOptInfo *) find_rel_info(root->join_rel_list, relids);\n>\n> > */\n> > static void *\n> > find_rel_info(RelInfoList *list, Relids relids)\n> > diff --git a/src/include/nodes/pathnodes.h b/src/include/nodes/pathnodes.h\n> > index 0ca7d5ab51e..018ce755720 100644\n> > --- a/src/include/nodes/pathnodes.h\n> > +++ b/src/include/nodes/pathnodes.h\n> > @@ -88,6 +88,9 @@ typedef enum UpperRelationKind\n> > * present and valid when rel_hash is not NULL. Note that we still maintain\n> > * the list even when using the hash table for lookups; this simplifies life\n> > * for GEQO.\n> > + *\n> > + * XXX I wonder why we actually need a separate node, merely wrapping fields\n> > + * that already existed ...\n>\n> This is so that the existing fields can still be printed out\n> (nodes/outfuncs.c).\n>\n> > diff --git a/src/backend/optimizer/README b/src/backend/optimizer/README\n> > index 2fd1a962699..6f6b7d0b93b 100644\n> > --- a/src/backend/optimizer/README\n> > +++ b/src/backend/optimizer/README\n> > @@ -1168,6 +1168,12 @@ input of Agg node. 
However, if the groups are large enough, it may be more\n> > efficient to apply the partial aggregation to the output of base relation\n> > scan, and finalize it when we have all relations of the query joined:\n> >\n> > +XXX review: Hmm, do we need to push it all the way down to base relations? Or\n> > +would it make sense to do the agg on an intermediate level? Say, we're joining\n> > +three tables A, B and C. Maybe the agg could/should be evaluated on top of join\n> > +A+B, before joining with C? Say, maybe the aggregate references columns from\n> > +both base relations?\n> > +\n> > EXPLAIN\n> > SELECT a.i, avg(b.y)\n> > FROM a JOIN b ON b.j = a.i\n>\n> Another example below does show the partial aggregates at join level.\n>\n> > +XXX Perhaps mention this may also mean the partial ggregate could be pushed\n> > +to a remote server with FDW partitions?\n>\n> Even if it's not implemented in the current patch version?\n>\n> > +\n> > Note that there's often no GROUP BY expression to be used for the partial\n> > aggregation, so we use equivalence classes to derive grouping expression: in\n> > the example above, the grouping key \"b.j\" was derived from \"a.i\".\n> >\n> > +XXX I think this is slightly confusing - there is a GROUP BY expression for the\n> > +partial aggregate, but as stated in the query it may not reference the side of\n> > +a join explicitly.\n>\n> ok, changed.\n>\n> > Also note that in this case the partial aggregate uses the \"b.j\" as grouping\n> > column although the column does not appear in the query target list. The point\n> > is that \"b.j\" is needed to evaluate the join condition, and there's no other\n> > way for the partial aggregate to emit its values.\n> >\n> > +XXX Not sure I understand what this is trying to say. Firstly, maybe it'd be\n> > +helpful to show targetlists in the EXPLAIN, i.e. do it as VERBOSE. 
But more\n> > +importantly, isn't this a direct consequence of the equivalence classes stuff\n> > +mentioned in the preceding paragraph?\n>\n> The equivalence class is just a mechanism to derive expressions which are not\n> explicitly mentioned in the query, but there's always a question whether you\n> need to derive any expression for particular table or not. Here I tried to\n> explain that the choice of join columns is related to the choice of grouping\n> keys for the partial aggregate.\n>\n> I've deleted this paragraph and added a note to the previous one.\n>\n> > Besides base relation, the aggregation can also be pushed down to join:\n> >\n> > EXPLAIN\n> > @@ -1217,6 +1235,10 @@ Besides base relation, the aggregation can also be pushed down to join:\n> > -> Hash\n> > -> Seq Scan on a\n> >\n> > +XXX Aha, so this is pretty-much an answer to my earlier comment, and matches\n> > +my example with three tables. Maybe this suggests the initial reference to\n> > +base relations is a bit confusing.\n>\n> I tried to use the simplest example to demonstrate the concepts, then extended\n> it to the partially-aggregated joins.\n>\n> > +XXX I think this is a good explanation of the motivation for this patch, but\n> > +maybe it'd be good to go into more details about how we decide if it's correct\n> > +to actually do the pushdown, data structures etc. 
Similar to earlier parts of\n> > +this README.\n>\n> Added two paragraphs, see \"Regarding correctness...\".\n>\n> > diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c\n> > index f00f900ff41..6d2c2f4fc36 100644\n> > --- a/src/backend/optimizer/path/allpaths.c\n> > +++ b/src/backend/optimizer/path/allpaths.c\n> > @@ -196,9 +196,10 @@ make_one_rel(PlannerInfo *root, List *joinlist)\n> > /*\n> > * Now that the sizes are known, we can estimate the sizes of the grouped\n> > * relations.\n> > + *\n> > + * XXX Seems more consistent with code nearby.\n> > */\n> > - if (root->grouped_var_list)\n> > - setup_base_grouped_rels(root);\n> > + setup_base_grouped_rels(root);\n>\n> In general I prefer not calling a function if it's obvious that it's not\n> needed, but on the other hand the test of the 'grouped_var_list' field may be\n> considered disturbing from the caller's perspective. I've got no strong\n> opinion on this, so I can accept this proposal.\n>\n> >\n> > /*\n> > - * setup_based_grouped_rels\n> > + * setup_base_grouped_rels\n> > * For each \"plain\" relation build a grouped relation if aggregate pushdown\n> > * is possible and if this relation is suitable for partial aggregation.\n> > */\n>\n> Fixed, thanks.\n>\n> > {\n> > Index rti;\n> >\n> > + /* If there are no grouped relations, estimate their sizes. */\n> > + if (!root->grouped_var_list)\n> > + return;\n> > +\n>\n> Accepted, but with different wording (s/relations/expressions/).\n>\n> > + /* XXX Shouldn't this check be earlier? Seems cheaper than the check\n> > + * calling bms_nonempty_difference, for example. */\n> > if (brel->reloptkind != RELOPT_BASEREL)\n> > continue;\n>\n> Right, moved.\n>\n> > rel_grouped = build_simple_grouped_rel(root, brel->relid, &agg_info);\n> > - if (rel_grouped)\n> > - {\n> > - /* Make the relation available for joining. */\n> > - add_grouped_rel(root, rel_grouped, agg_info);\n> > - }\n> > +\n> > + /* XXX When does this happen? 
*/\n> > + if (!rel_grouped)\n> > + continue;\n> > +\n> > + /* Make the relation available for joining. */\n> > + add_grouped_rel(root, rel_grouped, agg_info);\n>\n> I'd use the \"continue\" statement if there was a lot of code in the \"if\n> (rel_grouped) {...}\" branch, but no strong preference in this case, so\n> accepted.\n>\n> > }\n> > }\n> >\n> > @@ -560,6 +569,8 @@ set_rel_pathlist(PlannerInfo *root, RelOptInfo *rel,\n> > /* Plain relation */\n> > set_plain_rel_pathlist(root, rel, rte);\n> >\n> > + /* XXX Shouldn't this really be part of set_plain_rel_pathlist? */\n> > +\n> > /* Add paths to the grouped relation if one exists. */\n> > rel_grouped = find_grouped_rel(root, rel->relids,\n>\n> Yes, it can. Moved.\n>\n> > @@ -3382,6 +3393,11 @@ generate_grouping_paths(PlannerInfo *root, RelOptInfo *rel_grouped,\n> >\n> > /*\n> > * Apply partial aggregation to a subpath and add the AggPath to the pathlist.\n> > + *\n> > + * XXX I think this is potentially quite confusing, because the existing \"add\"\n> > + * functions add_path and add_partial_path only check if the proposed path is\n> > + * dominated by an existing path, pathkeys, etc. But this does more than that,\n> > + * perhaps even constructing new path etc.\n> > */\n> > static void\n> > add_grouped_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,\n>\n> Maybe, but I don't have a good idea of an alternative name.\n> create_group_path() already exists and the create_*_path() functions are\n> rather low-level. Maybe generate_grouped_path(), and at the same time rename\n> generate_grouping_paths() to generate_grouped_paths()? In general, the\n> generate_*_path*() functions do non-trivial things and eventually call\n> add_path().\n>\n> > @@ -3399,9 +3414,16 @@ add_grouped_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,\n> > else\n> > elog(ERROR, \"unexpected strategy %d\", aggstrategy);\n> >\n> > + /*\n> > + * Bail out if we failed to create a suitable aggregated path. This can\n> > + * happen e.g. 
then the path does not support hashing (for AGG_HASHED),\n> > + * or when the input path is not sorted.\n> > + */\n> > + if (agg_path == NULL)\n> > + return;\n> > +\n> > /* Add the grouped path to the list of grouped base paths. */\n> > - if (agg_path != NULL)\n> > - add_path(rel, (Path *) agg_path);\n> > + add_path(rel, (Path *) agg_path);\n>\n> ok, changed.\n>\n> > }\n> >\n> > /*\n> > @@ -3545,7 +3567,6 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels)\n> >\n> > for (lev = 2; lev <= levels_needed; lev++)\n> > {\n> > - RelOptInfo *rel_grouped;\n> > ListCell *lc;\n> >\n> > /*\n> > @@ -3567,6 +3588,8 @@ standard_join_search(PlannerInfo *root, int levels_needed, List *initial_rels)\n> > */\n> > foreach(lc, root->join_rel_level[lev])\n> > {\n> > + RelOptInfo *rel_grouped;\n> > +\n> > rel = (RelOptInfo *) lfirst(lc);\n>\n> Sure, fixed.\n>\n> > diff --git a/src/backend/optimizer/plan/initsplan.c b/src/backend/optimizer/plan/initsplan.c\n> > index 8e913c92d8b..d7a9de9645e 100644\n> > --- a/src/backend/optimizer/plan/initsplan.c\n> > +++ b/src/backend/optimizer/plan/initsplan.c\n> > @@ -278,6 +278,8 @@ add_vars_to_targetlist(PlannerInfo *root, List *vars,\n> > * each possible grouping expression.\n> > *\n> > * root->group_pathkeys must be setup before this function is called.\n> > + *\n> > + * XXX Perhaps this should check/reject hasWindowFuncs too?\n>\n> create_window_paths() is called after create_grouping_paths() (see\n> grouping_planner()), so it should not care whether the input (possibly\n> grouped) paths involve the aggregate push-down or not.\n>\n> > */\n> > extern void\n> > setup_aggregate_pushdown(PlannerInfo *root)\n> > @@ -311,6 +313,12 @@ setup_aggregate_pushdown(PlannerInfo *root)\n> > if (root->parse->hasTargetSRFs)\n> > return;\n> >\n> > + /*\n> > + * XXX Maybe it'd be better to move create_aggregate_grouped_var_infos and\n> > + * create_grouping_expr_grouped_var_infos to a function returning bool, and\n> > + * only check 
that here.\n> > + */\n> > +\n>\n> Hm, it looks to me like too much \"indirection\", and also a decriptive function\n> name would be tricky to invent.\n>\n> > /* Create GroupedVarInfo per (distinct) aggregate. */\n> > create_aggregate_grouped_var_infos(root);\n> >\n> > @@ -329,6 +337,8 @@ setup_aggregate_pushdown(PlannerInfo *root)\n> > * Now that we know that grouping can be pushed down, search for the\n> > * maximum sortgroupref. The base relations may need it if extra grouping\n> > * expressions get added to them.\n> > + *\n> > + * XXX Shouldn't we do that only when adding extra grouping expressions?\n> > */\n> > Assert(root->max_sortgroupref == 0);\n> > foreach(lc, root->processed_tlist)\n>\n> We don't know at this (early) stage whether those \"extra grouping expression\"\n> will be needed for at least one relation. (max_sortgroupref is used by\n> create_rel_agg_info())\n>\n> > diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c\n> > index 0ada3ba3ebe..2f4db69c1f9 100644\n> > --- a/src/backend/optimizer/plan/planner.c\n> > +++ b/src/backend/optimizer/plan/planner.c\n> > @@ -3899,6 +3899,10 @@ create_ordinary_grouping_paths(PlannerInfo *root, RelOptInfo *input_rel,\n> > /*\n> > * The non-partial paths can come either from the Gather above or from\n> > * aggregate push-down.\n> > + *\n> > + * XXX I can't quite convince myself this is correct. How come it's fine\n> > + * to check pathlist and then call set_cheapest() on partially_grouped_rel?\n> > + * Maybe it's correct and the comment merely needs to explain this.\n>\n> It's not clear to me what makes you confused. Without my patch, the code looks\n> like this:\n>\n> if (partially_grouped_rel && partially_grouped_rel->partial_pathlist)\n> {\n> gather_grouping_paths(root, partially_grouped_rel);\n> set_cheapest(partially_grouped_rel);\n> }\n>\n> Here gather_grouping_paths() adds paths to partially_grouped_rel->pathlist. 
My\n> patch calls set_cheapest() independent from gather_grouping_paths() because\n> the paths requiring the aggregate finalization can also be generated by the\n> aggregate push-down feature.\n>\n> > */\n> > if (partially_grouped_rel && partially_grouped_rel->pathlist)\n> > set_cheapest(partially_grouped_rel);\n> > @@ -6847,6 +6851,12 @@ create_partial_grouping_paths(PlannerInfo *root,\n> > * push-down.\n> > */\n> > partially_grouped_rel = find_grouped_rel(root, input_rel->relids, NULL);\n> > +\n> > + /*\n> > + * If the relation already exists, it must have been created by aggregate\n> > + * pushdown. We can't check how exactly it got created, but we can at least\n> > + * check that aggregate pushdown is enabled.\n> > + */\n> > Assert(enable_agg_pushdown || partially_grouped_rel == NULL);\n>\n> ok, done.\n>\n> > @@ -6872,6 +6882,8 @@ create_partial_grouping_paths(PlannerInfo *root,\n> > * If we can't partially aggregate partial paths, and we can't partially\n> > * aggregate non-partial paths, then don't bother creating the new\n> > * RelOptInfo at all, unless the caller specified force_rel_creation.\n> > + *\n> > + * XXX Not sure why we're checking the partially_grouped_rel here?\n> > */\n> > if (cheapest_total_path == NULL &&\n> > cheapest_partial_path == NULL &&\n>\n> I think (but not verified yet) that without this test the function could\n> return NULL for reasons unrelated to the aggregate push-down. Nevertheless, I\n> realize now that there's no aggregate push-down specific processing in the\n> function. I've adjusted it so that it does return, but the returned value is\n> partially_grouped_rel rather than NULL.\n>\n> > @@ -6881,7 +6893,9 @@ create_partial_grouping_paths(PlannerInfo *root,\n> >\n> > /*\n> > * Build a new upper relation to represent the result of partially\n> > - * aggregating the rows from the input relation.\n> > + * aggregating the rows from the input relation. 
The relation may\n> > + * already exist due to aggregate pushdown, in which case we don't\n> > + * need to create it.\n> > */\n> > if (partially_grouped_rel == NULL)\n> > partially_grouped_rel = fetch_upper_rel(root,\n>\n> ok, done.\n>\n> > @@ -6903,6 +6917,8 @@ create_partial_grouping_paths(PlannerInfo *root,\n> > *\n> > * If the target was already created for the sake of aggregate push-down,\n> > * it should be compatible with what we'd create here.\n> > + *\n> > + * XXX Why is this checking reltarget->exprs? What does that mean?\n> > */\n> > if (partially_grouped_rel->reltarget->exprs == NIL)\n> > partially_grouped_rel->reltarget =\n>\n> I've added this comment:\n>\n> * XXX If fetch_upper_rel() had to create a new relation (i.e. aggregate\n> * push-down generated no paths), it created an empty target. Should we\n> * change the convention and have it assign NULL to reltarget instead? Or\n> * should we introduce a function like is_pathtarget_empty()?\n>\n> > diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c\n> > index 7025ebf94be..395bd093d34 100644\n> > --- a/src/backend/optimizer/util/pathnode.c\n> > +++ b/src/backend/optimizer/util/pathnode.c\n> > @@ -3163,9 +3163,21 @@ create_agg_path(PlannerInfo *root,\n> > }\n> >\n> > /*\n> > + * create_agg_sorted_path\n> > + * Creates a pathnode performing sorted aggregation/grouping\n> > + *\n> > * Apply AGG_SORTED aggregation path to subpath if it's suitably sorted.\n> > *\n> > * NULL is returned if sorting of subpath output is not suitable.\n> > + *\n> > + * XXX I'm a bit confused why we need this? We now have create_agg_path and also\n> > + * create_agg_sorted_path and create_agg_hashed_path.\n>\n> Do you mean that the function names are confusing? The functions\n> create_agg_sorted_path() and create_agg_hashed_path() do some checks /\n> preparation for the call of the existing function create_agg_path(), which is\n> more low-level. 
Should the names be something like\n> create_partial_agg_sorted_path() and create_partial_agg_hashed_path() ?\n>\n> > + *\n> > + * XXX This assumes the input path to be sorted in a suitable way, but for\n> > + * regular aggregation we check that separately and then perhaps add sort\n> > + * if needed (possibly incremental one). That is, we don't do such checks\n> > + * in create_agg_path. Shouldn't we do the same thing before calling this\n> > + * new functions?\n> > */\n> > AggPath *\n> > create_agg_sorted_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,\n> > @@ -3184,6 +3196,7 @@ create_agg_sorted_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,\n> > agg_exprs = agg_info->agg_exprs;\n> > target = agg_info->target;\n>\n> Likewise, it seems that you'd like to see different function name and maybe\n> different location of this function. Both create_agg_sorted_path() and\n> create_agg_hashed_path() are rather wrappers for create_agg_path().\n>\n> >\n> > + /* Bail out if the input path is not sorted at all. */\n> > if (subpath->pathkeys == NIL)\n> > return NULL;\n>\n> ok, done.\n>\n> > @@ -3192,6 +3205,18 @@ create_agg_sorted_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,\n> >\n> > /*\n> > * Find all query pathkeys that our relation does affect.\n> > + *\n> > + * XXX Not sure what \"that our relation does affect\" means? Also, we\n> > + * are not looking at query_pathkeys but group_pathkeys, so that's a\n> > + * bit confusing. Perhaps something like this would be better:\n> > + *\n>\n> Indeed, the check of pathkeys was weird, I've reworked it.\n>\n> > @@ -3210,10 +3235,21 @@ create_agg_sorted_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,\n> > }\n> > }\n> >\n> > + /* Bail out if the subquery has no pathkeys for the grouping. */\n> > if (key_subset == NIL)\n> > return NULL;\n> >\n> > - /* Check if AGG_SORTED is useful for the whole query. 
*/\n> > + /*\n> > + * Check if AGG_SORTED is useful for the whole query.\n> > + *\n> > + * XXX So this means we require the group pathkeys matched to the\n> > + * subpath have to be a prefix of subpath->pathkeys. Why is that\n> > + * necessary? We'll reduce the cardinality, and in the worst case\n> > + * we'll have to add a separate sort (full or incremental). Or we\n> > + * could finalize using hashed aggregate.\n>\n> Although with different arguments, pathkeys_contained_in() is still used in\n> the new version of the patch. I've added a TODO comment about the incremental\n> sort (it did not exist when I was writing the patch), but what do you mean by\n> \"reducing the cardinality\"? Eventually the partial aggregate should reduce the\n> cardinality, but for the AGG_SORT strategy to work, the input sorting must be\n> such that the executor can recognize the group boundaries.\n>\n> > + *\n> > + * XXX Doesn't seem to change any regression tests when disabled.\n> > + */\n> > if (!pathkeys_contained_in(key_subset, subpath->pathkeys))\n> > return NULL;\n>\n> \"disabled\" means removal of this part (including the return statement), or\n> returning NULL unconditionally? Whatever you mean, please check with the new\n> version.\n>\n> > @@ -3231,7 +3267,7 @@ create_agg_sorted_path(PlannerInfo *root, RelOptInfo *rel, Path *subpath,\n> > result = create_agg_path(root, rel, subpath, target,\n> > AGG_SORTED, aggsplit,\n> > agg_info->group_clauses,\n> > - NIL,\n> > + NIL, /* qual for HAVING clause */\n> > &agg_costs,\n> > dNumGroups);\n>\n> ok, done here as well as in create_agg_hashed_path().\n>\n> > @@ -3283,6 +3319,9 @@ create_agg_hashed_path(PlannerInfo *root, RelOptInfo *rel,\n> > &agg_costs,\n> > dNumGroups);\n> >\n> > + /*\n> > + * XXX But we can spill to disk in hashagg now, no?\n> > + */\n> > if (hashaggtablesize < work_mem * 1024L)\n> > {\n>\n> Yes, we can. It wasn't possible while I was writing the patch. 
Fixed.\n>\n> > diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample\n> > index 868d21c351e..6e87ada684b 100644\n> > --- a/src/backend/utils/misc/postgresql.conf.sample\n> > +++ b/src/backend/utils/misc/postgresql.conf.sample\n> > @@ -388,6 +388,7 @@\n> > #enable_seqscan = on\n> > #enable_sort = on\n> > #enable_tidscan = on\n> > +#enable_agg_pushdown = on\n>\n> Done.\n>\n> > diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c\n> > index 1055ea70940..05192ca549a 100644\n> > --- a/src/backend/optimizer/path/allpaths.c\n> > +++ b/src/backend/optimizer/path/allpaths.c\n> > @@ -3352,7 +3352,7 @@ generate_grouping_paths(PlannerInfo *root, RelOptInfo *rel_grouped,\n> > RelOptInfo *rel_plain, RelAggInfo *agg_info)\n> > {\n> > ListCell *lc;\n> > - Path *path;\n> > + Path *path; /* XXX why declare at this level, not in the loops */\n> >\n>\n> I usually do it this way, not sure why. Perhaps because it's less typing :-) I\n> changed that in the next version so that we don't waste time arguing about\n> unimportant things.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\n5212d447fa53518458cbe609092b347803a667c5 ===\n=== applying patch ./v21-fixes.patch\npatching file src/backend/optimizer/README\nHunk #1 FAILED at 1186.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/backend/optimizer/README.rej\npatching file src/backend/optimizer/path/allpaths.c\nHunk #1 FAILED at 197.\nHunk #2 FAILED at 341.\nHunk #3 succeeded at 339 with fuzz 1 (offset -11 lines).\nHunk #4 succeeded at 1014 with fuzz 2 (offset 647 lines).\nHunk #5 FAILED at 378.\nHunk #6 FAILED at 563.\nHunk #7 succeeded at 2793 with fuzz 1 (offset 1948 lines).\nHunk #8 FAILED at 867.\nHunk #9 FAILED at 3439.\nHunk #10 FAILED at 3590.\nHunk #11 succeeded at 3430 (offset -182 lines).\n7 out of 11 hunks FAILED -- saving rejects to 
file\nsrc/backend/optimizer/path/allpaths.c.rej\npatching file src/backend/optimizer/path/costsize.c\n\n[1] - http://cfbot.cputube.org/patch_41_3764.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 4 Jan 2023 15:42:21 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: Aggregation push-down - take2"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> wrote:\n\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\n> [1] - http://cfbot.cputube.org/patch_41_3764.log\n\nThis is the next version (only rebased, no other changes).\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Thu, 05 Jan 2023 08:59:30 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: WIP: Aggregation push-down - take2"
},
{
"msg_contents": "On Thu, 5 Jan 2023 at 02:59, Antonin Houska <ah@cybertec.at> wrote:\n>\n> vignesh C <vignesh21@gmail.com> wrote:\n>\n> > The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\nAnd again...\n\nSetting this to Waiting on Author for the moment.\n\nDo you think this patch is likely to be ready for this release or the\nnext one? Is there specific feedback you're looking for?\n\npatching file src/backend/optimizer/util/relnode.c\nHunk #1 FAILED at 18.\nHunk #2 succeeded at 85 (offset 8 lines).\nHunk #3 succeeded at 405 with fuzz 1 (offset 25 lines).\nHunk #4 succeeded at 595 (offset 63 lines).\nHunk #5 succeeded at 657 (offset 63 lines).\nHunk #6 succeeded at 692 (offset 63 lines).\nHunk #7 succeeded at 731 (offset 63 lines).\nHunk #8 succeeded at 849 (offset 62 lines).\nHunk #9 succeeded at 860 (offset 62 lines).\nHunk #10 succeeded at 873 (offset 62 lines).\nHunk #11 FAILED at 911.\nHunk #12 FAILED at 945.\nHunk #13 succeeded at 2585 (offset 310 lines).\n3 out of 13 hunks FAILED -- saving rejects to file\nsrc/backend/optimizer/util/relnode.c.rej\npatching file src/backend/optimizer/util/tlist.c\npatching file src/backend/utils/misc/guc_tables.c\nHunk #1 succeeded at 946 (offset 1 line).\npatching file src/backend/utils/misc/postgresql.conf.sample\nHunk #1 succeeded at 390 (offset 2 lines).\npatching file src/include/nodes/pathnodes.h\nHunk #1 succeeded at 386 (offset 10 lines).\nHunk #2 succeeded at 429 (offset 10 lines).\nHunk #3 succeeded at 477 (offset 38 lines).\nHunk #4 succeeded at 1084 (offset 37 lines).\nHunk #5 succeeded at 3117 (offset 146 lines).\npatching file src/include/optimizer/clauses.h\npatching file src/include/optimizer/pathnode.h\nHunk #2 FAILED at 311.\nHunk #3 FAILED at 344.\n2 out of 3 hunks FAILED -- saving rejects to file\nsrc/include/optimizer/pathnode.h.rej\n\n\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Wed, 1 Mar 2023 16:06:08 -0500",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: Aggregation push-down - take2"
},
{
"msg_contents": "It looks like in November 2022 Tomas Vondra said:\n\n> I did a quick initial review of the v20 patch series.\n> I plan to do a\nmore thorough review over the next couple days, if time permits.\n> In\ngeneral I think the patch is in pretty good shape.\n\nFollowing which Antonin Houska updated the patch responding to his\nreview comments.\n\nSince then this patch has demonstrated the unfortunate \"please rebase\nthx\" followed by the author rebasing and getting no feedback until\n\"please rebase again thx\"...\n\nSo while the patch doesn't currently apply it seems like it really\nshould be either Needs Review or Ready for Commit.\n\nThat said, I suspect this patch has missed the boat for this CF.\nHopefully it will get more attention next release.\n\nI'll move it to the next CF but set it to Needs Review even though it\nneeds a rebase.\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 3 Apr 2023 17:38:13 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: Aggregation push-down - take2"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, this patch was marked in CF as \"Needs Review\", but there has been\nno activity on this thread for 9+ months.\n\nSince there seems not much interest, I have changed the status to\n\"Returned with Feedback\" [1]. Feel free to propose a stronger use case\nfor the patch and add an entry for the same.\n\n======\n[1] https://commitfest.postgresql.org/46/3764/\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 12:22:37 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: Aggregation push-down - take2"
}
] |
[
{
"msg_contents": "mylodon just showed a new-to-me failure mode [1]:\n\nCore was generated by `postgres: cascade: startup recovering 000000010000000000000002 '.\nProgram terminated with signal SIGABRT, Aborted.\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:49\n49\t../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:49\n#1 0x00007f8b8db2d546 in __GI_abort () at abort.c:79\n#2 0x000000000098a4dc in ExceptionalCondition (conditionName=<optimized out>, errorType=0x9e8061 \"FailedAssertion\", fileName=0xaf811f \"/mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/backend/lib/dshash.c\", lineNumber=lineNumber@entry=744) at /mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/backend/utils/error/assert.c:69\n#3 0x00000000006dbe65 in dshash_delete_current (status=status@entry=0x7fffec732dc8) at /mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/backend/lib/dshash.c:744\n#4 0x000000000085f911 in pgstat_free_entry (shent=0x7f8b8b0fc320, hstat=0x7fffec732dc8) at /mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/backend/utils/activity/pgstat_shmem.c:741\n#5 pgstat_drop_entry_internal (shent=0x7f8b8b0fc320, hstat=hstat@entry=0x7fffec732dc8) at /mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/backend/utils/activity/pgstat_shmem.c:773\n#6 0x000000000085fa2e in pgstat_drop_all_entries () at /mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/backend/utils/activity/pgstat_shmem.c:887\n#7 0x0000000000859301 in pgstat_reset_after_failure () at /mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/backend/utils/activity/pgstat.c:1631\n#8 pgstat_discard_stats () at /mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/backend/utils/activity/pgstat.c:435\n#9 0x0000000000555ae0 in StartupXLOG () at /mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/backend/access/transam/xlog.c:5127\n#10 
0x00000000007a8ece in StartupProcessMain () at /mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/backend/postmaster/startup.c:267\n#11 0x000000000079f44e in AuxiliaryProcessMain (auxtype=auxtype@entry=StartupProcess) at /mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/backend/postmaster/auxprocess.c:141\n#12 0x00000000007a5891 in StartChildProcess (type=type@entry=StartupProcess) at /mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:5417\n#13 0x00000000007a3ea0 in PostmasterMain (argc=argc@entry=4, argv=<optimized out>, argv@entry=0x1d1cc90) at /mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:1457\n#14 0x00000000006f1b91 in main (argc=4, argv=0x1d1cc90) at /mnt/resource/bf/build/mylodon/HEAD/pgsql.build/../pgsql/src/backend/main/main.c:202\n$1 = {si_signo = 6, si_errno = 0, si_code = -6, _sifields = {_pad = {675836, 1001, 0 <repeats 26 times>}, _kill = {si_pid = 675836, si_uid = 1001}, _timer = {si_tid = 675836, si_overrun = 1001, si_sigval = {sival_int = 0, sival_ptr = 0x0}}, _rt = {si_pid = 675836, si_uid = 1001, si_sigval = {sival_int = 0, sival_ptr = 0x0}}, _sigchld = {si_pid = 675836, si_uid = 1001, si_status = 0, si_utime = 0, si_stime = 0}, _sigfault = {si_addr = 0x3e9000a4ffc, _addr_lsb = 0, _addr_bnd = {_lower = 0x0, _upper = 0x0}}, _sigpoll = {si_band = 4299262939132, si_fd = 0}}}\n\nProbably deserves investigation.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2022-04-15%2011%3A51%3A35\n\n\n",
"msg_date": "Fri, 15 Apr 2022 13:28:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Crash in new pgstats code"
},
{
"msg_contents": "I wrote:\n> mylodon just showed a new-to-me failure mode [1]:\n\nAnother occurrence here:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2022-04-15%2022%3A42%3A07\n\nI've added an open item.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Apr 2022 19:14:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Crash in new pgstats code"
},
{
"msg_contents": "Hi\n\nOn 2022-04-15 13:28:35 -0400, Tom Lane wrote:\n> mylodon just showed a new-to-me failure mode [1]:\n\nThanks. Found the bug (pgstat_drop_all_entries() passed the wrong lock\nlevel), with the obvious fix.\n\nThis failed to fail in other tests because they all end up resetting\nonly when there's no stats. It's not too hard to write a test for that,\nwhich is how I reproduced the issue.\n\nI'm planning to make it a bit easier to test by verifying that 'E' in\npgstat_read_statsfile() actually is just before EOF. That seems like a\ngood check anyway.\n\n\nWhat confuses me so far is what already had generated stats before\nreaching pgstat_reset_after_failure() (so that the bug could even be hit\nin t/025_stuck_on_old_timeline.pl).\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 16 Apr 2022 12:13:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Crash in new pgstats code"
},
{
"msg_contents": "Hi\n\nOn 2022-04-16 12:13:09 -0700, Andres Freund wrote:\n> What confuses me so far is what already had generated stats before\n> reaching pgstat_reset_after_failure() (so that the bug could even be hit\n> in t/025_stuck_on_old_timeline.pl).\n\nI see part of a problem - in archiver stats. Even in 14 (and presumably\nbefore), we do work that can generate archiver stats\n(e.g. ReadCheckpointRecord()) before pgstat_reset_all(). It's not the\nend of the world, but doesn't seem great.\n\nBut since archiver stats are fixed-numbered stats (and thus not in the\nhash table), they'd not trigger the backtrace we saw here.\n\n\nOne thing that's interesting is that the failing tests have:\n2022-04-15 12:07:48.828 UTC [675922][walreceiver][:0] FATAL: could not link file \"pg_wal/xlogtemp.675922\" to \"pg_wal/00000002.history\": File exists\n\nwhich I haven't seen locally. Looks like we have some race between\nstartup process and walreceiver? That seems not great. I'm a bit\nconfused that walreceiver and archiving are both active at the same time\nin the first place - that doesn't seem right as things are set up\ncurrently.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 16 Apr 2022 14:36:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Crash in new pgstats code"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-16 12:13:09 -0700, Andres Freund wrote:\n> On 2022-04-15 13:28:35 -0400, Tom Lane wrote:\n> > mylodon just showed a new-to-me failure mode [1]:\n> \n> Thanks. Found the bug (pgstat_drop_all_entries() passed the wrong lock\n> level), with the obvious fix.\n> \n> This failed to fail in other tests because they all end up resetting\n> only when there's no stats. It's not too hard to write a test for that,\n> which is how I reproduced the issue.\n> \n> I'm planning to make it a bit easier to test by verifying that 'E' in\n> pgstat_read_statsfile() actually is just before EOF. That seems like a\n> good check anyway.\n\nI've pushed that fix.\n\n\n> What confuses me so far is what already had generated stats before\n> reaching pgstat_reset_after_failure() (so that the bug could even be hit\n> in t/025_stuck_on_old_timeline.pl).\n\nBut there's still things I don't understand about that aspect.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 16 Apr 2022 15:07:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Crash in new pgstats code"
},
{
"msg_contents": "On Sat, Apr 16, 2022 at 02:36:33PM -0700, Andres Freund wrote:\n> which I haven't seen locally. Looks like we have some race between\n> startup process and walreceiver? That seems not great. I'm a bit\n> confused that walreceiver and archiving are both active at the same time\n> in the first place - that doesn't seem right as things are set up\n> currently.\n\nYeah, that should be exclusively one or the other, never both.\nWaitForWALToBecomeAvailable() would be a hot spot when it comes to\ndecide when a WAL receiver should be spawned by the startup process.\nExcept from the recent refactoring of xlog.c or the WAL prefetch work,\nthere has not been many changes in this area lately.\n--\nMichael",
"msg_date": "Mon, 18 Apr 2022 16:18:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Crash in new pgstats code"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 7:19 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Sat, Apr 16, 2022 at 02:36:33PM -0700, Andres Freund wrote:\n> > which I haven't seen locally. Looks like we have some race between\n> > startup process and walreceiver? That seems not great. I'm a bit\n> > confused that walreceiver and archiving are both active at the same time\n> > in the first place - that doesn't seem right as things are set up\n> > currently.\n>\n> Yeah, that should be exclusively one or the other, never both.\n> WaitForWALToBecomeAvailable() would be a hot spot when it comes to\n> decide when a WAL receiver should be spawned by the startup process.\n> Except from the recent refactoring of xlog.c or the WAL prefetch work,\n> there has not been many changes in this area lately.\n\nHmm, well I'm not sure what is happening here and will try to dig\ntomorrow, but one observation from some log scraping is that kestrel\nlogged similar output with \"could not link file\" several times before\nthe main prefetching commit (5dc0418). 
I looked back 3 months on\nkestrel/HEAD and found these:\n\n commit | log\n---------+-------------------------------------------------------------------------------------------------------------------\n 411b913 | https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=kestrel&dt=2022-03-27%2010:57:20&stg=recovery-check\n 3d067c5 | https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=kestrel&dt=2022-03-29%2017:52:32&stg=recovery-check\n cd7ea75 | https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=kestrel&dt=2022-03-30%2015:25:03&stg=recovery-check\n 8e053dc | https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=kestrel&dt=2022-03-30%2020:27:44&stg=recovery-check\n 4e34747 | https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=kestrel&dt=2022-04-04%2020:32:24&stg=recovery-check\n 01effb1 | https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=kestrel&dt=2022-04-06%2007:32:40&stg=recovery-check\n fbfe691 | https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=kestrel&dt=2022-04-07%2005:10:05&stg=recovery-check\n 5dc0418 | https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=kestrel&dt=2022-04-07%2007:51:00&stg=recovery-check\n bd037dc | https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=kestrel&dt=2022-04-11%2022:00:58&stg=recovery-check\n a4b5754 | https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=kestrel&dt=2022-04-12%2004:40:44&stg=recovery-check\n 7129a97 | https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=kestrel&dt=2022-04-15%2022:42:07&stg=recovery-check\n 9f4f0a0 | https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=kestrel&dt=2022-04-16%2020:05:34&stg=recovery-check\n\n\n",
"msg_date": "Mon, 18 Apr 2022 22:45:07 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Crash in new pgstats code"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-18 22:45:07 +1200, Thomas Munro wrote:\n> On Mon, Apr 18, 2022 at 7:19 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Sat, Apr 16, 2022 at 02:36:33PM -0700, Andres Freund wrote:\n> > > which I haven't seen locally. Looks like we have some race between\n> > > startup process and walreceiver? That seems not great. I'm a bit\n> > > confused that walreceiver and archiving are both active at the same time\n> > > in the first place - that doesn't seem right as things are set up\n> > > currently.\n> >\n> > Yeah, that should be exclusively one or the other, never both.\n> > WaitForWALToBecomeAvailable() would be a hot spot when it comes to\n> > decide when a WAL receiver should be spawned by the startup process.\n> > Except from the recent refactoring of xlog.c or the WAL prefetch work,\n> > there has not been many changes in this area lately.\n> \n> Hmm, well I'm not sure what is happening here and will try to dig\n> tomorrow, but one observation from some log scraping is that kestrel\n> logged similar output with \"could not link file\" several times before\n> the main prefetching commit (5dc0418). I looked back 3 months on\n> kestrel/HEAD and found these:\n\nKestrel won't go that far back even - I set it up 23 days ago...\n\nI'm formally on vacation till Thursday, I'll try to look at earlier\ninstances then. Unless it's already figured out :). I failed at\nreproducing it locally, despite a fair bit of effort.\n\nThe BF really should break out individual tests into their own stage\nlogs. The recovery-check stage is 13MB and 150k lines by now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 18 Apr 2022 07:50:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Crash in new pgstats code"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 2:50 AM Andres Freund <andres@anarazel.de> wrote:\n> Kestrel won't go that far back even - I set it up 23 days ago...\n\nHere's a ~6 month old example from mylodon (I can't see much further\nback than that with HTTP requests... I guess BF records are purged?):\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=mylodon&dt=2021-10-19%2022%3A57%3A54&stg=recovery-check\n\n\n",
"msg_date": "Tue, 19 Apr 2022 20:31:05 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Crash in new pgstats code"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 08:31:05PM +1200, Thomas Munro wrote:\n> On Tue, Apr 19, 2022 at 2:50 AM Andres Freund <andres@anarazel.de> wrote:\n> > Kestrel won't go that far back even - I set it up 23 days ago...\n> \n> Here's a ~6 month old example from mylodon (I can't see much further\n> back than that with HTTP requests... I guess BF records are purged?):\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=mylodon&dt=2021-10-19%2022%3A57%3A54&stg=recovery-check\n\nDo we have anything remaining on this thread in light of the upcoming\nbeta1? One fix has been pushed upthread, but it does not seem we are\ncompletely in the clear either.\n--\nMichael",
"msg_date": "Wed, 11 May 2022 15:46:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Crash in new pgstats code"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-11 15:46:13 +0900, Michael Paquier wrote:\n> On Tue, Apr 19, 2022 at 08:31:05PM +1200, Thomas Munro wrote:\n> > On Tue, Apr 19, 2022 at 2:50 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Kestrel won't go that far back even - I set it up 23 days ago...\n> > \n> > Here's a ~6 month old example from mylodon (I can't see much further\n> > back than that with HTTP requests... I guess BF records are purged?):\n> > \n> > https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=mylodon&dt=2021-10-19%2022%3A57%3A54&stg=recovery-check\n> \n> Do we have anything remaining on this thread in light of the upcoming\n> beta1? One fix has been pushed upthread, but it does not seem we are\n> completely in the clear either.\n\nI don't know what else there is to do, tbh.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 May 2022 08:38:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Crash in new pgstats code"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-05-11 15:46:13 +0900, Michael Paquier wrote:\n>> Do we have anything remaining on this thread in light of the upcoming\n>> beta1? One fix has been pushed upthread, but it does not seem we are\n>> completely in the clear either.\n\n> I don't know what else there is to do, tbh.\n\nWell, it was mostly you expressing misgivings upthread ;-). But we\nhave not seen any pgstat crashes lately, so I'm content to mark the\nopen item as resolved.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 May 2022 12:12:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Crash in new pgstats code"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-12 12:12:59 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-05-11 15:46:13 +0900, Michael Paquier wrote:\n> >> Do we have anything remaining on this thread in light of the upcoming\n> >> beta1? One fix has been pushed upthread, but it does not seem we are\n> >> completely in the clear either.\n> \n> > I don't know what else there is to do, tbh.\n> \n> Well, it was mostly you expressing misgivings upthread ;-).\n\nThose mostly were about stuff in 14 as well... I guess it'd be good to figure\nout precisely how the problem was triggered, but without further information\nI don't quite see how to figure it out...\n\n\n> But we have not seen any pgstat crashes lately, so I'm content to mark the\n> open item as resolved.\n\nCool.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 May 2022 09:33:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Crash in new pgstats code"
},
{
"msg_contents": "On Thu, May 12, 2022 at 09:33:05AM -0700, Andres Freund wrote:\n> On 2022-05-12 12:12:59 -0400, Tom Lane wrote:\n>> But we have not seen any pgstat crashes lately, so I'm content to mark the\n>> open item as resolved.\n> \n> Cool.\n\nOkay, thanks for the feedback. I have marked the item as resolved for\nthe time being. Let's revisit it later if necessary.\n--\nMichael",
"msg_date": "Fri, 13 May 2022 08:54:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Crash in new pgstats code"
}
] |
[
{
"msg_contents": "Hello,\nI am a pre-final year student of IIT Jodhpur, pursuing my BTech in computer\nscience and engineering.\nAs a part of GSOC 2022, I would like to work on the project *Improve\npgarchives*. I have prepared the final draft of the proposal and would like\nto receive suggestions/feedback on it.\n\nProposal link: pgarchives-proposal\n<https://docs.google.com/document/d/1A773BPhN5erqrvpMwU85YpfvtZ_q4VUAS2_KfxcRo-Y/edit?usp=sharing>\n\n\nRegards,\nSahil Harpal\n",
"msg_date": "Sat, 16 Apr 2022 04:06:27 +0530",
"msg_from": "Sahil Harpal <sahilharpal1234@gmail.com>",
"msg_from_op": true,
"msg_subject": "GSOC-2022 | Improve pgarchives proposal review"
}
] |
[
{
"msg_contents": "Hackers,\n\ninitdb is already pretty chatty, and the version of the cluster being\ninstalled seems useful to include as well. The data directory is probably\nless so - though I am thinking that the absolute path would be useful to\nreport, especially when a relative path is specified (I didn't figure that\npart out yet, figured I'd get the idea approved before working out how to\nmake it happen).\n\nMoving \"Success\" to that \"summary output\" line and leaving the optional\nshell command line just be the shell command made sense to me.\n\ndiff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c\nindex ab826da650..54a1d1fcac 100644\n--- a/src/bin/initdb/initdb.c\n+++ b/src/bin/initdb/initdb.c\n@@ -3119,6 +3119,9 @@ main(int argc, char *argv[])\n \"--auth-local and --auth-host, the next time you run initdb.\");\n }\n\n+ printf(_(\"\\nSuccess. PostgreSQL version %s cluster has been initialized\nat %s.\\n\"), PG_VERSION, pg_data);\n+ fflush(stdout);\n+\n if (!noinstructions)\n {\n /*\n@@ -3147,7 +3150,7 @@ main(int argc, char *argv[])\n /* translator: This is a placeholder in a shell command. */\n appendPQExpBuffer(start_db_cmd, \" -l %s start\", _(\"logfile\"));\n\n- printf(_(\"\\nSuccess. You can now start the database server using:\\n\\n\"\n+ printf(_(\"\\nYou can now start the database server using:\\n\\n\"\n \" %s\\n\\n\"),\n start_db_cmd->data);\n\n\nDavid J.\n",
"msg_date": "Fri, 15 Apr 2022 16:50:08 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add version and data directory to initdb output"
},
{
"msg_contents": "> On 16 Apr 2022, at 01:50, David G. Johnston <david.g.johnston@gmail.com> wrote:\n\n> initdb is already pretty chatty, and the version of the cluster being installed seems useful to include as well. \n\nThat seems quite reasonable.\n\n> The data directory is probably less so - though I am thinking that the absolute path would be useful to report, especially when a relative path is specified (I didn't figure that part out yet, figured I'd get the idea approved before working out how to make it happen).\n\nI'm less convinced that it will be worth the additional code to make it\nportable across *nix/Windows etc.\n\n> Moving \"Success\" to that \"summary output\" line and leaving the optional shell command line just be the shell command made sense to me.\n\nLooking at the output, couldn't it alternatively be printed grouped with the\nother info on the cluster, ie the final three rows in the example below:\n\n ./bin/initdb -D data\n The files belonging to this database system will be owned by user \"<username>\".\n This user must also own the server process.\n\n The database cluster will be initialized with locale \"en_US.UTF-8\".\n The default database encoding has accordingly been set to \"UTF8\".\n The default text search configuration will be set to \"english\".\n\nHow about 'The database cluster will be initialized with version \"14.2\".' added\nthere, which then can keep the \"Success\" line in place in case existing scripts\nare triggering on that line?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 19 Apr 2022 11:28:42 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Add version and data directory to initdb output"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 2:28 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 16 Apr 2022, at 01:50, David G. Johnston <david.g.johnston@gmail.com>\n> wrote:\n>\n> > initdb is already pretty chatty, and the version of the cluster being\n> installed seems useful to include as well.\n>\n> That seems quite reasonable.\n>\n> > The data directory is probably less so - though I am thinking that the\n> absolute path would be useful to report, especially when a relative path is\n> specified (I didn't figure that part out yet, figured I'd get the idea\n> approved before working out how to make it happen).\n>\n> I'm less convinced that it will be worth the additional code to make it\n> portable across *nix/Windows etc.\n>\n\nok\n\n>\n> > Moving \"Success\" to that \"summary output\" line and leaving the optional\n> shell command line just be the shell command made sense to me.\n>\n> Looking at the output, couldn't it alternatively be printed grouped with\n> the\n> other info on the cluster, ie the final three rows in the example below:\n>\n> ./bin/initdb -D data\n> The files belonging to this database system will be owned by user\n> \"<username>\".\n> This user must also own the server process.\n>\n> The database cluster will be initialized with locale \"en_US.UTF-8\".\n> The default database encoding has accordingly been set to \"UTF8\".\n> The default text search configuration will be set to \"english\".\n>\n> How about 'The database cluster will be initialized with version \"14.2\".'\n> added\n> there, which then can keep the \"Success\" line in place in case existing\n> scripts\n> are triggering on that line?\n>\n>\nThe motivating situation had me placing it as close to the last line as\npossible so my 8 line or so tmux panel would show it to me without\nscrolling. 
The version is all I cared about, but when writing the patch\nthe path seemed to be at least worth considering.\n\nAs for \"Success\", I'm confused about the --no-instructions choice to change\nit the way it did, but given that precedent I only felt it important to\nleave the word Success as the leading word on a line. Scripts should be\ntriggering on the exit code anyway and presently --no-instructions removes\nthe Success acknowledgement completely anyway.\n\nDavid J.\n",
"msg_date": "Tue, 19 Apr 2022 06:55:42 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add version and data directory to initdb output"
},
{
"msg_contents": "> On 19 Apr 2022, at 15:56, David G. Johnston <david.g.johnston@gmail.com> wrote:\n\n> The motivating situation had me placing it as close to the last line as possible so my 8 line or so tmux panel would show it to me without scrolling. The version is all I cared about, but when writing the patch the path seemed to be at least worth considering.\n> \n> As for \"Success\", I'm confused about the --no-instructions choice to change it the way it did, but given that precedent I only felt it important to leave the word Success as the leading word on a line. Scripts should be triggering on the exit code anyway and presently --no-instructions removes the Success acknowledgement completely anyway.\n\nGood point, I forgot about the no-instructions option.\n\n./daniel\n",
"msg_date": "Tue, 19 Apr 2022 16:27:57 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Add version and data directory to initdb output"
},
{
"msg_contents": "On 19.04.22 15:55, David G. Johnston wrote:\n> The motivating situation had me placing it as close to the last line as \n> possible so my 8 line or so tmux panel would show it to me without \n> scrolling. The version is all I cared about, but when writing the patch \n> the path seemed to be at least worth considering.\n> \n> As for \"Success\", I'm confused about the --no-instructions choice to \n> change it the way it did, but given that precedent I only felt it \n> important to leave the word Success as the leading word on a line. \n> Scripts should be triggering on the exit code anyway and presently \n> --no-instructions removes the Success acknowledgement completely anyway.\n\nThe order of outputs of initdb seems to be approximately\n\n1. These are the settings I will use based on what you told me.\n2. This is what I'm doing right now.\n3. Here's what you can do next.\n\nYour additions would appear to fall into bucket #1. So I think adding \nthem near the start of the output makes more sense. Otherwise, one \ncould also argue that all the locale information etc. should also be \nrepeated at the end, in case one forgot them or whatever.\n\n\n",
"msg_date": "Wed, 20 Apr 2022 23:04:05 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add version and data directory to initdb output"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 2:04 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 19.04.22 15:55, David G. Johnston wrote:\n> > The motivating situation had me placing it as close to the last line as\n> > possible so my 8 line or so tmux panel would show it to me without\n> > scrolling. The version is all I cared about, but when writing the patch\n> > the path seemed to be at least worth considering.\n> >\n> > As for \"Success\", I'm confused about the --no-instructions choice to\n> > change it the way it did, but given that precedent I only felt it\n> > important to leave the word Success as the leading word on a line.\n> > Scripts should be triggering on the exit code anyway and presently\n> > --no-instructions removes the Success acknowledgement completely anyway.\n>\n> The order of outputs of initdb seems to be approximately\n>\n> 1. These are the settings I will use based on what you told me.\n> 2. This is what I'm doing right now.\n> 3. Here's what you can do next.\n>\n> Your additions would appear to fall into bucket #1. So I think adding\n> them near the start of the output makes more sense. Otherwise, one\n> could also argue that all the locale information etc. should also be\n> repeated at the end, in case one forgot them or whatever.\n>\n\nI agree with the observation but it initdb is fast enough and\nnon-interactive and so that order isn't particularly appealing.\n\nThus either:\n\n1. Initialization is running ... Here's what we are doing.\n2. All done! Here's what we did.\n3. Here's what you can do next.\n\nor\n\n1. These are the settings I will use based on what you told me.\n2. This is what I'm doing right now.\n3. All done! Here's what you ended up with (can repeat items from 1 if\ndesired...)\n4. Here's what you can do next.\n\nI'd rather do the first proposal given buy-in. 
Though I would have\nconcerns about what the output looks like upon failure.\n\nI'm basically proposing the second option, add a formal \"All done!\" section\nand recap what the final result is. I'd be content with having the version\nappear in both 1 and 3 in that scenario. It isn't a frequently executed\ncommand, already is verbose, and when done interactively in development I\ndon't want to have to dedicate a 20 line panel so I can see \"All Done!\" and\nsome (one) key attribute(s) (locale and path seems useful though) without\nscrolling.\n\nIf the consensus is to place it before, and only before, the \"this is what\nI'm doing right now\" stuff, that is better than nothing, but the choice of\nnot doing so was intentional.\n\nDavid J.\n",
"msg_date": "Wed, 20 Apr 2022 14:21:29 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add version and data directory to initdb output"
},
{
"msg_contents": "On 20.04.22 23:21, David G. Johnston wrote:\n> I agree with the observation but it initdb is fast enough and \n> non-interactive and so that order isn't particularly appealing.\n\nI'm not a particular fan of the current initdb output and it could use a \ngeneral revision IMO. If you want to look into that, please do. But \nfor your particular proposed addition, let's put it somewhere it makes \nsense either in the current scheme or a future scheme when that is done.\n\n\n",
"msg_date": "Thu, 21 Apr 2022 14:15:41 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add version and data directory to initdb output"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I'm not a particular fan of the current initdb output and it could use a \n> general revision IMO. If you want to look into that, please do. But \n> for your particular proposed addition, let's put it somewhere it makes \n> sense either in the current scheme or a future scheme when that is done.\n\nTBH, I think we should reject the current proposal outright.\nThe target directory's name already appears twice in initdb's output;\nwe do not need a third time. And as for the version, if you want that\nyou can get it from \"initdb --version\".\n\nI agree that there could be scope for rethinking initdb's output\naltogether. It's fast enough nowadays that the former need for\nprogress reporting could probably be dropped. Maybe we could\ngo over to something that's more nearly intended to be a\nmachine-readable summary of the configuration, like\n\nData directory: ...\nOwning user ID: ...\nLocale: ...\nDefault server encoding: ...\netc etc\n\nEven if you like the current output for interactive usage,\nperhaps something like this could be selected by a switch for\nnon-interactive usage (or just repurpose --no-instructions).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Apr 2022 10:18:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add version and data directory to initdb output"
},
{
"msg_contents": "Hi,\n\nOn Thu, Apr 21, 2022 at 10:18:56AM -0400, Tom Lane wrote:\n> And as for the version, if you want that you can get it from \"initdb\n> --version\".\n\nI assumed the point in stamping the version in the output was that\npeople might want to pipe it to some logfile and then later on, when\nthey found some issues, be able to go back and know what version was\nused when initializing this data directory.\n\n\nMichael\n\n\n",
"msg_date": "Thu, 21 Apr 2022 16:24:43 +0200",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": false,
"msg_subject": "Re: Add version and data directory to initdb output"
},
{
"msg_contents": "On Thu, Apr 21, 2022 at 7:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > I'm not a particular fan of the current initdb output and it could use a\n> > general revision IMO. If you want to look into that, please do. But\n> > for your particular proposed addition, let's put it somewhere it makes\n> > sense either in the current scheme or a future scheme when that is done.\n>\n> TBH, I think we should reject the current proposal outright.\n> The target directory's name already appears twice in initdb's output;\n> we do not need a third time. And as for the version, if you want that\n> you can get it from \"initdb --version\".\n>\n>\nI don't really see a reason not to add the version to the log output, if\njust for simplicity and having a self-contained stream of content.\n\nI'm off my desire to have it be the nearly last thing to print though;\nhaving it print first actually works better since if you are interactive\nyou'll see it pop-up just after pressing enter. Subconsciously you'll know\nwhat you are expecting to see there and if it just happens to be different\nyou'll probably notice it. Solutions requiring additional commands/effort\nto retrieve the version presume one is expecting/caring about checking that\nvalue specifically, and while that may be true the simplicity combined with\nthe benefit to people not expecting there to be an issue make adding it\nalongside the various other key=value settings a no-brainer for me.\n\nDavid J.",
"msg_date": "Thu, 21 Apr 2022 07:39:17 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add version and data directory to initdb output"
}
] |
[
{
"msg_contents": "Hi,\n\nI get this crash running the attached test program. On my slow-disked \nand old desktop it occurs once in 20 or so runs (it is yet another \ninstallment of an old test that runs pgbench with logical replication).\n\n15devel compiled from d3609dd25.\n\n(The bash deletes stuff, and without my environment it will need some \ntweaking)\n\nThanks,\n\nErik Rijkers",
"msg_date": "Sat, 16 Apr 2022 09:37:55 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "On Sat, Apr 16, 2022 at 09:37:55AM +0200, Erik Rijkers wrote:\n> I get this crash running the attached test program. On my slow-disked and\n> old desktop it occurs once in 20 or so runs (it is yet another installment\n> of an old test that runs pgbench with logical replication).\n> \n> 15devel compiled from d3609dd25.\n> \n> (The bash deletes stuff, and without my environment it will need some\n> tweaking)\n\nThanks for the report, Erik. This one is new, likely related to the\nmove of the stats to shared memory. I have added an open item.\n--\nMichael",
"msg_date": "Sat, 16 Apr 2022 21:29:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-16 09:37:55 +0200, Erik Rijkers wrote:\n> I get this crash running the attached test program. On my slow-disked and\n> old desktop it occurs once in 20 or so runs (it is yet another installment\n> of an old test that runs pgbench with logical replication).\n>\n> 15devel compiled from d3609dd25.\n> \n> (The bash deletes stuff, and without my environment it will need some\n> tweaking)\n\nAny chance for a backtrace? I'll otherwise try to adjust the script, but ...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 16 Apr 2022 11:23:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "At Sat, 16 Apr 2022 11:23:23 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2022-04-16 09:37:55 +0200, Erik Rijkers wrote:\n> > I get this crash running the attached test program. On my slow-disked and\n> > old desktop it occurs once in 20 or so runs (it is yet another installment\n> > of an old test that runs pgbench with logical replication).\n> >\n> > 15devel compiled from d3609dd25.\n> > \n> > (The bash deletes stuff, and without my environment it will need some\n> > tweaking)\n> \n> Any chance for a backtrace? I'll otherwise try to adjust the script, but ...\n\nFWIW, the script kept successfully running more than 140 times for me\n(on master, Cent8), and I haven't found a hypothesis for the cause of\nthe symptom.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 18 Apr 2022 16:13:34 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "Op 18-04-2022 om 09:13 schreef Kyotaro Horiguchi:\n> At Sat, 16 Apr 2022 11:23:23 -0700, Andres Freund <andres@anarazel.de> wrote in\n>> Hi,\n>>\n>> On 2022-04-16 09:37:55 +0200, Erik Rijkers wrote:\n>>> I get this crash running the attached test program. On my slow-disked and\n>>> old desktop it occurs once in 20 or so runs (it is yet another installment\n>>> of an old test that runs pgbench with logical replication).\n>>>\n>>> 15devel compiled from d3609dd25.\n>>>\n>>> (The bash deletes stuff, and without my environment it will need some\n>>> tweaking)\n>>\n>> Any chance for a backtrace? I'll otherwise try to adjust the script, but ...\n> \n> FWIW, the script keep succussfully running more than 140 times for me.\n> (on master, Cent8) And I haven't find a hypothesis for the cause of\n> the symptom.\n\nHm. Just now I've recompiled and retried and after 5 runs got the same \ncrash. Then tried on another machine (also old, I'm afraid),\nand built 1a8b11053 and ran the same thing. That failed on the first \ntry, and made core dump from which I extracted:\n\n\ngdb ~/pg_stuff/pg_installations/pgsql.HEAD/bin/postgres \ncore-postgres-6-500-500-8289-1650269886 -ex bt -ex q\n\n\nGNU gdb (GDB) 7.6\nCopyright (C) 2013 Free Software Foundation, Inc.\nLicense GPLv3+: GNU GPL version 3 or later \n<http://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law. 
Type \"show copying\"\nand \"show warranty\" for details.\nThis GDB was configured as \"x86_64-unknown-linux-gnu\".\nFor bug reporting instructions, please see:\n<http://www.gnu.org/software/gdb/bugs/>...\nReading symbols from \n/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/bin/postgres...done.\n[New LWP 8289]\n\nwarning: Can't read pathname for load map: Input/output error.\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib64/libthread_db.so.1\".\nCore was generated by `postgres: logical replication worker for \nsubscription 16411 '.\nProgram terminated with signal 6, Aborted.\n#0 0x000000357d6324f5 in raise () from /lib64/libc.so.6\n#0 0x000000357d6324f5 in raise () from /lib64/libc.so.6\n#1 0x000000357d633cd5 in abort () from /lib64/libc.so.6\n#2 0x0000000000973fcb in ExceptionalCondition \n(conditionName=conditionName@entry=0xb20d76 \"tabstat->trans == trans\", \nerrorType=errorType@entry=0x9c7c2b \"FailedAssertion\",\n fileName=fileName@entry=0xb20d0b \"pgstat_relation.c\", \nlineNumber=lineNumber@entry=508) at assert.c:69\n#3 0x000000000086b77f in AtEOXact_PgStat_Relations \n(xact_state=xact_state@entry=0x26f0b50, isCommit=isCommit@entry=true) at \npgstat_relation.c:508\n#4 0x000000000086ec0f in AtEOXact_PgStat (isCommit=isCommit@entry=true, \nparallel=parallel@entry=false) at pgstat_xact.c:54\n#5 0x00000000005bd2a3 in CommitTransaction () at xact.c:2360\n#6 0x00000000005be5d5 in CommitTransactionCommand () at xact.c:3048\n#7 0x00000000007ee72b in apply_handle_commit_internal \n(commit_data=commit_data@entry=0x7ffe4606a7a0) at worker.c:1532\n#8 0x00000000007efac9 in apply_handle_commit (s=0x7ffe4606a940) at \nworker.c:845\n#9 apply_dispatch () at worker.c:2473\n#10 0x00000000007f11a7 in LogicalRepApplyLoop (last_received=74454600) \nat worker.c:2757\n#11 start_apply () at worker.c:3526\n#12 0x00000000007f175f in ApplyWorkerMain () at worker.c:3782\n#13 0x00000000007bdba3 in StartBackgroundWorker () at 
bgworker.c:858\n#14 0x00000000007c3241 in do_start_bgworker (rw=<optimized out>) at \npostmaster.c:5802\n#15 maybe_start_bgworkers () at postmaster.c:6026\n#16 0x00000000007c3b65 in sigusr1_handler \n(postgres_signal_arg=<optimized out>) at postmaster.c:5191\n#17 <signal handler called>\n#18 0x000000357d6e1683 in __select_nocancel () from /lib64/libc.so.6\n#19 0x00000000007c41d6 in ServerLoop () at postmaster.c:1757\n#20 0x00000000007c5c3b in PostmasterMain () at postmaster.c:1465\n#21 0x0000000000720cfe in main (argc=11, argv=0x2615590) at main.c:202\n\n\nI'm not sure that helps.\n\n\n> \n> regards.\n> \n\n\n",
"msg_date": "Mon, 18 Apr 2022 10:57:02 +0200",
"msg_from": "Erikjan Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "At Mon, 18 Apr 2022 10:57:02 +0200, Erikjan Rijkers <er@xs4all.nl> wrote in \n> Hm. Just now I've recompiled and retried and after 5 runs got the\n> same crash. Then tried on another machine (also old, I'm afraid),\n> and built 1a8b11053 and ran the same thing. That failed on the first\n> try, and made core dump from which I extracted:\n\nThanks!\n\n> gdb ~/pg_stuff/pg_installations/pgsql.HEAD/bin/postgres\n> core-postgres-6-500-500-8289-1650269886 -ex bt -ex q\n> \n> #2 0x0000000000973fcb in ExceptionalCondition\n> #(conditionName=conditionName@entry=0xb20d76 \"tabstat->trans == trans\",\n> #errorType=errorType@entry=0x9c7c2b \"FailedAssertion\",\n> fileName=fileName@entry=0xb20d0b \"pgstat_relation.c\",\n> lineNumber=lineNumber@entry=508) at assert.c:69\n> #3 0x000000000086b77f in AtEOXact_PgStat_Relations\n> #(xact_state=xact_state@entry=0x26f0b50, isCommit=isCommit@entry=true)\n> #at pgstat_relation.c:508\n\nCould you read tabstat, *tabstat, trans, *trans here?\n\n> #4 0x000000000086ec0f in AtEOXact_PgStat (isCommit=isCommit@entry=true,\n> #parallel=parallel@entry=false) at pgstat_xact.c:54\n> #5 0x00000000005bd2a3 in CommitTransaction () at xact.c:2360\n> #6 0x00000000005be5d5 in CommitTransactionCommand () at xact.c:3048\n> #7 0x00000000007ee72b in apply_handle_commit_internal\n> #(commit_data=commit_data@entry=0x7ffe4606a7a0) at worker.c:1532\n> #8 0x00000000007efac9 in apply_handle_commit (s=0x7ffe4606a940) at\n> #worker.c:845\n> #9 apply_dispatch () at worker.c:2473\n> #10 0x00000000007f11a7 in LogicalRepApplyLoop (last_received=74454600)\n> #at worker.c:2757\n> #11 start_apply () at worker.c:3526\n> #12 0x00000000007f175f in ApplyWorkerMain () at worker.c:3782\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 19 Apr 2022 09:15:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "Op 19-04-2022 om 02:15 schreef Kyotaro Horiguchi:\n> At Mon, 18 Apr 2022 10:57:02 +0200, Erikjan Rijkers <er@xs4all.nl> wrote in\n>> Hm. Just now I've recompiled and retried and after 5 runs got the\n>> same crash. Then tried on another machine (also old, I'm afraid),\n>> and built 1a8b11053 and ran the same thing. That failed on the first\n>> try, and made core dump from which I extracted:\n> \n> Thanks!\n> \n>> gdb ~/pg_stuff/pg_installations/pgsql.HEAD/bin/postgres\n>> core-postgres-6-500-500-8289-1650269886 -ex bt -ex q\n>>\n>> #2 0x0000000000973fcb in ExceptionalCondition\n>> #(conditionName=conditionName@entry=0xb20d76 \"tabstat->trans == trans\",\n>> #errorType=errorType@entry=0x9c7c2b \"FailedAssertion\",\n>> fileName=fileName@entry=0xb20d0b \"pgstat_relation.c\",\n>> lineNumber=lineNumber@entry=508) at assert.c:69\n>> #3 0x000000000086b77f in AtEOXact_PgStat_Relations\n>> #(xact_state=xact_state@entry=0x26f0b50, isCommit=isCommit@entry=true)\n>> #at pgstat_relation.c:508\n> \n> Could you read tabstat, *tabstat, trans, *trans here?\n\nTo be honest I'm not sure how to, but I gave it a try:\n\nGNU gdb (GDB) 7.6\nCopyright (C) 2013 Free Software Foundation, Inc.\nLicense GPLv3+: GNU GPL version 3 or later \n<http://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law. 
Type \"show copying\"\nand \"show warranty\" for details.\nThis GDB was configured as \"x86_64-unknown-linux-gnu\".\nFor bug reporting instructions, please see:\n<http://www.gnu.org/software/gdb/bugs/>...\nReading symbols from \n/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/bin/postgres...done.\n[New LWP 21839]\n\nwarning: Can't read pathname for load map: Input/output error.\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib64/libthread_db.so.1\".\nCore was generated by `postgres: logical replication worker for \nsubscription 16411 '.\nProgram terminated with signal 6, Aborted.\n#0 0x000000357d6324f5 in raise () from /lib64/libc.so.6\n(gdb) bt\n#0 0x000000357d6324f5 in raise () from /lib64/libc.so.6\n#1 0x000000357d633cd5 in abort () from /lib64/libc.so.6\n#2 0x000000000097400b in ExceptionalCondition \n(conditionName=conditionName@entry=0xb20df6 \"tabstat->trans == trans\", \nerrorType=errorType@entry=0x9c7cab \"FailedAssertion\",\n fileName=fileName@entry=0xb20d8b \"pgstat_relation.c\", \nlineNumber=lineNumber@entry=508) at assert.c:69\n#3 0x000000000086b7bf in AtEOXact_PgStat_Relations \n(xact_state=xact_state@entry=0x2d9ab50, isCommit=isCommit@entry=true) at \npgstat_relation.c:508\n#4 0x000000000086ec4f in AtEOXact_PgStat (isCommit=isCommit@entry=true, \nparallel=parallel@entry=false) at pgstat_xact.c:54\n#5 0x00000000005bd2a3 in CommitTransaction () at xact.c:2360\n#6 0x00000000005be5d5 in CommitTransactionCommand () at xact.c:3048\n#7 0x00000000007ee76b in apply_handle_commit_internal \n(commit_data=commit_data@entry=0x7fffb90aa8e0) at worker.c:1532\n#8 0x00000000007efb09 in apply_handle_commit (s=0x7fffb90aaa80) at \nworker.c:845\n#9 apply_dispatch () at worker.c:2473\n#10 0x00000000007f11e7 in LogicalRepApplyLoop (last_received=74695984) \nat worker.c:2757\n#11 start_apply () at worker.c:3526\n#12 0x00000000007f179f in ApplyWorkerMain () at worker.c:3782\n#13 0x00000000007bdbb3 in StartBackgroundWorker () at 
bgworker.c:858\n#14 0x00000000007c3251 in do_start_bgworker (rw=<optimized out>) at \npostmaster.c:5802\n#15 maybe_start_bgworkers () at postmaster.c:6026\n#16 0x00000000007c3b75 in sigusr1_handler \n(postgres_signal_arg=<optimized out>) at postmaster.c:5191\n#17 <signal handler called>\n#18 0x000000357d6e1683 in __select_nocancel () from /lib64/libc.so.6\n#19 0x00000000007c41e6 in ServerLoop () at postmaster.c:1757\n#20 0x00000000007c5c4b in PostmasterMain () at postmaster.c:1465\n#21 0x0000000000720d0e in main (argc=11, argv=0x2cbf590) at main.c:202\n(gdb) f 3\n#3 0x000000000086b7bf in AtEOXact_PgStat_Relations \n(xact_state=xact_state@entry=0x2d9ab50, isCommit=isCommit@entry=true) at \npgstat_relation.c:508\n508 Assert(tabstat->trans == trans);\n(gdb) p tabstat\n$1 = <optimized out>\n(gdb) p *tabstat\nvalue has been optimized out\n(gdb) p trans\n$2 = <optimized out>\n(gdb) p *trans\nvalue has been optimized out\n(gdb)\n\n\n> \n>> #4 0x000000000086ec0f in AtEOXact_PgStat (isCommit=isCommit@entry=true,\n>> #parallel=parallel@entry=false) at pgstat_xact.c:54\n>> #5 0x00000000005bd2a3 in CommitTransaction () at xact.c:2360\n>> #6 0x00000000005be5d5 in CommitTransactionCommand () at xact.c:3048\n>> #7 0x00000000007ee72b in apply_handle_commit_internal\n>> #(commit_data=commit_data@entry=0x7ffe4606a7a0) at worker.c:1532\n>> #8 0x00000000007efac9 in apply_handle_commit (s=0x7ffe4606a940) at\n>> #worker.c:845\n>> #9 apply_dispatch () at worker.c:2473\n>> #10 0x00000000007f11a7 in LogicalRepApplyLoop (last_received=74454600)\n>> #at worker.c:2757\n>> #11 start_apply () at worker.c:3526\n>> #12 0x00000000007f175f in ApplyWorkerMain () at worker.c:3782\n> \n> regards.\n> \n\n\n",
"msg_date": "Tue, 19 Apr 2022 07:00:30 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "Thanks, Erik.\n\nAt Tue, 19 Apr 2022 07:00:30 +0200, Erik Rijkers <er@xs4all.nl> wrote in \n> Op 19-04-2022 om 02:15 schreef Kyotaro Horiguchi:\n> > Could you read tabstat, *tabstat, trans, *trans here?\n> \n> To be honest I'm not sure how to, but I gave it a try:\n>\n> (gdb) p tabstat\n> $1 = <optimized out>\n\nGreat! It is that. But unfortunately they are optimized out. Could\nyou cause the crash with an -O0 binary? You will see the variables with\nit. You can rebuild with the option as follows.\n\n$ make clean; make install CUSTOM_COPT=\"-O0 -g\"\n\nYou can dump only the whole xact_state chain from the current core\nfile, but the result will give only an obscure hint for diagnosis.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 19 Apr 2022 18:25:18 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "Op 19-04-2022 om 11:25 schreef Kyotaro Horiguchi:\n> Thaks Erik.\n> \n> At Tue, 19 Apr 2022 07:00:30 +0200, Erik Rijkers <er@xs4all.nl> wrote in\n>> Op 19-04-2022 om 02:15 schreef Kyotaro Horiguchi:\n>>> Could you read tabstat, *tabstat, trans, *trans here?\n>>\n>> To be honest I'm not sure how to, but I gave it a try:\n>>\n\n\nI rebuilt newest master (a62bff74b135) with\n\nexport CUSTOM_COPT=\"-O0 -g\"\n\nThe 12th run of statbug.sh crashed and gave a corefile.\n\n\nGNU gdb (GDB) 7.6\nCopyright (C) 2013 Free Software Foundation, Inc.\nLicense GPLv3+: GNU GPL version 3 or later \n<http://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law. Type \"show copying\"\nand \"show warranty\" for details.\nThis GDB was configured as \"x86_64-unknown-linux-gnu\".\nFor bug reporting instructions, please see:\n<http://www.gnu.org/software/gdb/bugs/>...\nReading symbols from \n/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/bin/postgres...done.\n[New LWP 25058]\n\nwarning: Can't read pathname for load map: Input/output error.\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib64/libthread_db.so.1\".\nCore was generated by `postgres: logical replication worker for \nsubscription 16411 '.\nProgram terminated with signal 6, Aborted.\n#0 0x000000357d6324f5 in raise () from /lib64/libc.so.6\n(gdb) bt\n#0 0x000000357d6324f5 in raise () from /lib64/libc.so.6\n#1 0x000000357d633cd5 in abort () from /lib64/libc.so.6\n#2 0x0000000000b3bada in ExceptionalCondition (conditionName=0xd389a1 \n\"tabstat->trans == trans\", errorType=0xd388b2 \"FailedAssertion\", \nfileName=0xd388a0 \"pgstat_relation.c\", lineNumber=508) at assert.c:69\n#3 0x00000000009bf5dc in AtEOXact_PgStat_Relations \n(xact_state=0x31b1b50, isCommit=true) at pgstat_relation.c:508\n#4 0x00000000009c4107 in AtEOXact_PgStat (isCommit=true, \nparallel=false) at 
pgstat_xact.c:54\n#5 0x0000000000583764 in CommitTransaction () at xact.c:2360\n#6 0x0000000000584354 in CommitTransactionCommand () at xact.c:3048\n#7 0x000000000090b34e in apply_handle_commit_internal \n(commit_data=0x7ffd024b5940) at worker.c:1532\n#8 0x000000000090a287 in apply_handle_commit (s=0x7ffd024b59b0) at \nworker.c:845\n#9 0x000000000090ce3a in apply_dispatch (s=0x7ffd024b59b0) at worker.c:2473\n#10 0x000000000090d41c in LogicalRepApplyLoop (last_received=74680880) \nat worker.c:2757\n#11 0x000000000090e974 in start_apply (origin_startpos=0) at worker.c:3526\n#12 0x000000000090f156 in ApplyWorkerMain (main_arg=0) at worker.c:3782\n#13 0x00000000008c7623 in StartBackgroundWorker () at bgworker.c:858\n#14 0x00000000008d1557 in do_start_bgworker (rw=0x30ff0a0) at \npostmaster.c:5802\n#15 0x00000000008d1903 in maybe_start_bgworkers () at postmaster.c:6026\n#16 0x00000000008d09ba in sigusr1_handler (postgres_signal_arg=10) at \npostmaster.c:5191\n#17 <signal handler called>\n#18 0x000000357d6e1683 in __select_nocancel () from /lib64/libc.so.6\n#19 0x00000000008cc6c1 in ServerLoop () at postmaster.c:1757\n#20 0x00000000008cc0aa in PostmasterMain (argc=11, argv=0x30d6590) at \npostmaster.c:1465\n#21 0x00000000007c9256 in main (argc=11, argv=0x30d6590) at main.c:202\n(gdb) f 3\n#3 0x00000000009bf5dc in AtEOXact_PgStat_Relations \n(xact_state=0x31b1b50, isCommit=true) at pgstat_relation.c:508\n508 Assert(tabstat->trans == trans);\n(gdb) p tabstat\n$1 = (PgStat_TableStatus *) 0x319e630\n(gdb) p *tabstat\n$2 = {t_id = 2139062143, t_shared = 127, trans = 0x7f7f7f7f7f7f7f7f, \nt_counts = {t_numscans = 9187201950435737471, t_tuples_returned = \n9187201950435737471, t_tuples_fetched = 9187201950435737471,\n t_tuples_inserted = 9187201950435737471, t_tuples_updated = \n9187201950435737471, t_tuples_deleted = 9187201950435737471, \nt_tuples_hot_updated = 9187201950435737471, t_truncdropped = 127,\n t_delta_live_tuples = 9187201950435737471, t_delta_dead_tuples = 
\n9187201950435737471, t_changed_tuples = 9187201950435737471, \nt_blocks_fetched = 9187201950435737471, t_blocks_hit = 9187201950435737471},\n relation = 0x7f7f7f7f7f7f7f7f}\n(gdb) p trans\n$3 = (PgStat_TableXactStatus *) 0x31b1ba8\n(gdb) p *trans\n$4 = {tuples_inserted = 1, tuples_updated = 0, tuples_deleted = 0, \ntruncdropped = false, inserted_pre_truncdrop = 0, updated_pre_truncdrop \n= 0, deleted_pre_truncdrop = 0, nest_level = 1, upper = 0x0,\n parent = 0x319e630, next = 0x31b1ab8}\n(gdb)\n\n\n\nLooks like we're one step further, no?\n\n\nErik\n\n\n>> (gdb) p tabstat\n>> $1 = <optimized out>\n> \n> Great! It is that. But unfortunately they are optimized out.. Could\n> you cause the crash with -O0 binary? You will see the variable with\n> it. You can rebuild with the option as follows.\n> \n> $ make clean; make install CUSTOM_COPT=\"-O0 -g\"\n> \n> You can dump only the whole xact_state chain from the current core\n> file but the result will give a bit obscure hint for diagnosis.\n> \n> regards.\n> \n\n\n",
"msg_date": "Tue, 19 Apr 2022 13:50:25 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "On 2022-Apr-19, Erik Rijkers wrote:\n\n> (gdb) p tabstat\n> $1 = (PgStat_TableStatus *) 0x319e630\n> (gdb) p *tabstat\n> $2 = {t_id = 2139062143, t_shared = 127, trans = 0x7f7f7f7f7f7f7f7f,\n> t_counts = {t_numscans = 9187201950435737471, t_tuples_returned =\n> 9187201950435737471, t_tuples_fetched = 9187201950435737471,\n> t_tuples_inserted = 9187201950435737471, t_tuples_updated =\n> 9187201950435737471, t_tuples_deleted = 9187201950435737471,\n> t_tuples_hot_updated = 9187201950435737471, t_truncdropped = 127,\n> t_delta_live_tuples = 9187201950435737471, t_delta_dead_tuples =\n> 9187201950435737471, t_changed_tuples = 9187201950435737471,\n> t_blocks_fetched = 9187201950435737471, t_blocks_hit = 9187201950435737471},\n> relation = 0x7f7f7f7f7f7f7f7f}\n\nIt looks like this struct is freed or is in a memory context that was\nreset. Perhaps its lifetime wasn't carefully considered in the logical\nreplication code, which takes some shortcuts.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El que vive para el futuro es un iluso, y el que vive para el pasado,\nun imbécil\" (Luis Adler, \"Los tripulantes de la noche\")\n\n\n",
"msg_date": "Tue, 19 Apr 2022 14:39:30 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-19 13:50:25 +0200, Erik Rijkers wrote:\n> The 12th run of statbug.sh crashed and gave a corefile.\n\nI ran through quite a few iterations by now, without reproducing :(\n\nI guess there's some timing issue and you're hitting on your system\ndue to the slower disks.\n\n\n> Program terminated with signal 6, Aborted.\n> #0 0x000000357d6324f5 in raise () from /lib64/libc.so.6\n> (gdb) bt\n> #0 0x000000357d6324f5 in raise () from /lib64/libc.so.6\n> #1 0x000000357d633cd5 in abort () from /lib64/libc.so.6\n> #2 0x0000000000b3bada in ExceptionalCondition (conditionName=0xd389a1\n> \"tabstat->trans == trans\", errorType=0xd388b2 \"FailedAssertion\",\n> fileName=0xd388a0 \"pgstat_relation.c\", lineNumber=508) at assert.c:69\n> #3 0x00000000009bf5dc in AtEOXact_PgStat_Relations (xact_state=0x31b1b50,\n> isCommit=true) at pgstat_relation.c:508\n> #4 0x00000000009c4107 in AtEOXact_PgStat (isCommit=true, parallel=false) at\n> pgstat_xact.c:54\n> #5 0x0000000000583764 in CommitTransaction () at xact.c:2360\n> #6 0x0000000000584354 in CommitTransactionCommand () at xact.c:3048\n> #7 0x000000000090b34e in apply_handle_commit_internal\n> (commit_data=0x7ffd024b5940) at worker.c:1532\n> #8 0x000000000090a287 in apply_handle_commit (s=0x7ffd024b59b0) at\n> worker.c:845\n> #9 0x000000000090ce3a in apply_dispatch (s=0x7ffd024b59b0) at worker.c:2473\n> #10 0x000000000090d41c in LogicalRepApplyLoop (last_received=74680880) at\n> worker.c:2757\n> #11 0x000000000090e974 in start_apply (origin_startpos=0) at worker.c:3526\n> #12 0x000000000090f156 in ApplyWorkerMain (main_arg=0) at worker.c:3782\n> #13 0x00000000008c7623 in StartBackgroundWorker () at bgworker.c:858\n> #14 0x00000000008d1557 in do_start_bgworker (rw=0x30ff0a0) at\n> postmaster.c:5802\n> #15 0x00000000008d1903 in maybe_start_bgworkers () at postmaster.c:6026\n> #16 0x00000000008d09ba in sigusr1_handler (postgres_signal_arg=10) at\n> postmaster.c:5191\n> #17 <signal handler called>\n> #18 
0x000000357d6e1683 in __select_nocancel () from /lib64/libc.so.6\n> #19 0x00000000008cc6c1 in ServerLoop () at postmaster.c:1757\n> #20 0x00000000008cc0aa in PostmasterMain (argc=11, argv=0x30d6590) at\n> postmaster.c:1465\n> #21 0x00000000007c9256 in main (argc=11, argv=0x30d6590) at main.c:202\n> (gdb) f 3\n> #3 0x00000000009bf5dc in AtEOXact_PgStat_Relations (xact_state=0x31b1b50,\n> isCommit=true) at pgstat_relation.c:508\n> 508 Assert(tabstat->trans == trans);\n> (gdb) p tabstat\n> $1 = (PgStat_TableStatus *) 0x319e630\n> (gdb) p *tabstat\n> $2 = {t_id = 2139062143, t_shared = 127, trans = 0x7f7f7f7f7f7f7f7f,\n> t_counts = {t_numscans = 9187201950435737471, t_tuples_returned =\n> 9187201950435737471, t_tuples_fetched = 9187201950435737471,\n> t_tuples_inserted = 9187201950435737471, t_tuples_updated =\n> 9187201950435737471, t_tuples_deleted = 9187201950435737471,\n> t_tuples_hot_updated = 9187201950435737471, t_truncdropped = 127,\n> t_delta_live_tuples = 9187201950435737471, t_delta_dead_tuples =\n> 9187201950435737471, t_changed_tuples = 9187201950435737471,\n> t_blocks_fetched = 9187201950435737471, t_blocks_hit = 9187201950435737471},\n> relation = 0x7f7f7f7f7f7f7f7f}\n> (gdb) p trans\n> $3 = (PgStat_TableXactStatus *) 0x31b1ba8\n> (gdb) p *trans\n> $4 = {tuples_inserted = 1, tuples_updated = 0, tuples_deleted = 0,\n> truncdropped = false, inserted_pre_truncdrop = 0, updated_pre_truncdrop = 0,\n> deleted_pre_truncdrop = 0, nest_level = 1, upper = 0x0,\n> parent = 0x319e630, next = 0x31b1ab8}\n> (gdb)\n\nCould you print out\np xact_state\np *xact_state\np xact_state->first\np *xact_state->first\n\nDo you have the server log file for the failed run / instance?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 19 Apr 2022 10:36:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-19 10:36:24 -0700, Andres Freund wrote:\n> On 2022-04-19 13:50:25 +0200, Erik Rijkers wrote:\n> > The 12th run of statbug.sh crashed and gave a corefile.\n> \n> I ran through quite a few iterations by now, without reproducing :(\n> \n> I guess there's some timing issue and you're hitting on your system\n> due to the slower disks.\n\nAh. I found the issue. The new pgstat_report_stat(true) call in\nLogicalRepApplyLoop()'s \"timeout\" section doesn't check if we're in a\ntransaction. And the transactional stats code doesn't handle that (never\nhas).\n\nI think all that's needed is a if (IsTransactionState()) around that\npgstat_report_stat().\n\nIt might be possible to put an assertion into pgstat_report_stat(), but\nI need to look at the process exit code to see if it is.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 19 Apr 2022 10:55:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "Op 19-04-2022 om 19:36 schreef Andres Freund:\n> Hi,\n> \n> On 2022-04-19 13:50:25 +0200, Erik Rijkers wrote:\n>> The 12th run of statbug.sh crashed and gave a corefile.\n> \n> I ran through quite a few iterations by now, without reproducing :(\n> \n> I guess there's some timing issue and you're hitting on your system\n> due to the slower disks.\n> \n> \n>> Program terminated with signal 6, Aborted.\n>> #0 0x000000357d6324f5 in raise () from /lib64/libc.so.6\n>> (gdb) bt\n>> #0 0x000000357d6324f5 in raise () from /lib64/libc.so.6\n>> #1 0x000000357d633cd5 in abort () from /lib64/libc.so.6\n>> #2 0x0000000000b3bada in ExceptionalCondition (conditionName=0xd389a1\n>> \"tabstat->trans == trans\", errorType=0xd388b2 \"FailedAssertion\",\n>> fileName=0xd388a0 \"pgstat_relation.c\", lineNumber=508) at assert.c:69\n>> #3 0x00000000009bf5dc in AtEOXact_PgStat_Relations (xact_state=0x31b1b50,\n>> isCommit=true) at pgstat_relation.c:508\n>> #4 0x00000000009c4107 in AtEOXact_PgStat (isCommit=true, parallel=false) at\n>> pgstat_xact.c:54\n>> #5 0x0000000000583764 in CommitTransaction () at xact.c:2360\n>> #6 0x0000000000584354 in CommitTransactionCommand () at xact.c:3048\n>> #7 0x000000000090b34e in apply_handle_commit_internal\n>> (commit_data=0x7ffd024b5940) at worker.c:1532\n>> #8 0x000000000090a287 in apply_handle_commit (s=0x7ffd024b59b0) at\n>> worker.c:845\n>> #9 0x000000000090ce3a in apply_dispatch (s=0x7ffd024b59b0) at worker.c:2473\n>> #10 0x000000000090d41c in LogicalRepApplyLoop (last_received=74680880) at\n>> worker.c:2757\n>> #11 0x000000000090e974 in start_apply (origin_startpos=0) at worker.c:3526\n>> #12 0x000000000090f156 in ApplyWorkerMain (main_arg=0) at worker.c:3782\n>> #13 0x00000000008c7623 in StartBackgroundWorker () at bgworker.c:858\n>> #14 0x00000000008d1557 in do_start_bgworker (rw=0x30ff0a0) at\n>> postmaster.c:5802\n>> #15 0x00000000008d1903 in maybe_start_bgworkers () at postmaster.c:6026\n>> #16 0x00000000008d09ba in 
sigusr1_handler (postgres_signal_arg=10) at\n>> postmaster.c:5191\n>> #17 <signal handler called>\n>> #18 0x000000357d6e1683 in __select_nocancel () from /lib64/libc.so.6\n>> #19 0x00000000008cc6c1 in ServerLoop () at postmaster.c:1757\n>> #20 0x00000000008cc0aa in PostmasterMain (argc=11, argv=0x30d6590) at\n>> postmaster.c:1465\n>> #21 0x00000000007c9256 in main (argc=11, argv=0x30d6590) at main.c:202\n>> (gdb) f 3\n>> #3 0x00000000009bf5dc in AtEOXact_PgStat_Relations (xact_state=0x31b1b50,\n>> isCommit=true) at pgstat_relation.c:508\n>> 508 Assert(tabstat->trans == trans);\n>> (gdb) p tabstat\n>> $1 = (PgStat_TableStatus *) 0x319e630\n>> (gdb) p *tabstat\n>> $2 = {t_id = 2139062143, t_shared = 127, trans = 0x7f7f7f7f7f7f7f7f,\n>> t_counts = {t_numscans = 9187201950435737471, t_tuples_returned =\n>> 9187201950435737471, t_tuples_fetched = 9187201950435737471,\n>> t_tuples_inserted = 9187201950435737471, t_tuples_updated =\n>> 9187201950435737471, t_tuples_deleted = 9187201950435737471,\n>> t_tuples_hot_updated = 9187201950435737471, t_truncdropped = 127,\n>> t_delta_live_tuples = 9187201950435737471, t_delta_dead_tuples =\n>> 9187201950435737471, t_changed_tuples = 9187201950435737471,\n>> t_blocks_fetched = 9187201950435737471, t_blocks_hit = 9187201950435737471},\n>> relation = 0x7f7f7f7f7f7f7f7f}\n>> (gdb) p trans\n>> $3 = (PgStat_TableXactStatus *) 0x31b1ba8\n>> (gdb) p *trans\n>> $4 = {tuples_inserted = 1, tuples_updated = 0, tuples_deleted = 0,\n>> truncdropped = false, inserted_pre_truncdrop = 0, updated_pre_truncdrop = 0,\n>> deleted_pre_truncdrop = 0, nest_level = 1, upper = 0x0,\n>> parent = 0x319e630, next = 0x31b1ab8}\n>> (gdb)\n> \n> Could you print out\n> p xact_state\n> p *xact_state\n> p xact_state->first\n> p *xact_state->first\n> \n> Do you have the server log file for the failed run / instance?\n\n\n(gdb) p xact_state\n$5 = (PgStat_SubXactStatus *) 0x31b1b50\n\n(gdb) p *xact_state\n$6 = {nest_level = 1, prev = 0x0, pending_drops = {head = 
{prev = \n0x31b1b60, next = 0x31b1b60}}, pending_drops_count = 0, first = 0x31b1ba8}\n\n(gdb) p xact_state->first\n$7 = (PgStat_TableXactStatus *) 0x31b1ba8\n\n(gdb) p *xact_state->first\n$8 = {tuples_inserted = 1, tuples_updated = 0, tuples_deleted = 0, \ntruncdropped = false, inserted_pre_truncdrop = 0, updated_pre_truncdrop \n= 0, deleted_pre_truncdrop = 0, nest_level = 1, upper = 0x0,\n parent = 0x319e630, next = 0x31b1ab8}\n(gdb)\n\n\nThe logfile is attached.\n\n\nErik\n\n\n> Greetings,\n> \n> Andres Freund",
"msg_date": "Tue, 19 Apr 2022 20:02:24 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "At Tue, 19 Apr 2022 10:55:26 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2022-04-19 10:36:24 -0700, Andres Freund wrote:\n> > On 2022-04-19 13:50:25 +0200, Erik Rijkers wrote:\n> > > The 12th run of statbug.sh crashed and gave a corefile.\n> > \n> > I ran through quite a few iterations by now, without reproducing :(\n> > \n> > I guess there's some timing issue and you're hitting on your system\n> > due to the slower disks.\n> \n> Ah. I found the issue. The new pgstat_report_stat(true) call in\n> LogicalRepApplyLoop()'s \"timeout\" section doesn't check if we're in a\n> transaction. And the transactional stats code doesn't handle that (never\n> has).\n> \n> I think all that's needed is a if (IsTransactionState()) around that\n> pgstat_report_stat().\n\nif (!IsTransactinoState()) ?\n\n> It might be possible to put an assertion into pgstat_report_stat(), but\n> I need to look at the process exit code to see if it is.\n\nInserting a sleep in pgoutput_commit_txn reproduced this. 
Crashes with\nthe same stack trace with the similar variable state.\n\ndiff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c\nindex b197bfd565..def4d751d3 100644\n--- a/src/backend/replication/pgoutput/pgoutput.c\n+++ b/src/backend/replication/pgoutput/pgoutput.c\n@@ -568,6 +568,7 @@ pgoutput_commit_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n \t\treturn;\n \t}\n \n+\tsleep(2);\n \tOutputPluginPrepareWrite(ctx, true);\n \tlogicalrep_write_commit(ctx->out, txn, commit_lsn);\n \tOutputPluginWrite(ctx, true);\n\nThe following actually works for this.\n\ndiff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c\nindex 4171371296..f4e5359513 100644\n--- a/src/backend/replication/logical/worker.c\n+++ b/src/backend/replication/logical/worker.c\n@@ -2882,10 +2882,11 @@ LogicalRepApplyLoop(XLogRecPtr last_received)\n \t\t\tsend_feedback(last_received, requestReply, requestReply);\n \n \t\t\t/*\n-\t\t\t * Force reporting to ensure long idle periods don't lead to\n-\t\t\t * arbitrarily delayed stats.\n+\t\t\t * Force reporting to ensure long out-of-transaction idle periods\n+\t\t\t * don't lead to arbitrarily delayed stats.\n \t\t\t */\n-\t\t\tpgstat_report_stat(true);\n+\t\t\tif (!IsTransactionState())\n+\t\t\t\tpgstat_report_stat(true);\n \t\t}\n \t}\n \nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 20 Apr 2022 13:54:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "Op 20-04-2022 om 06:54 schreef Kyotaro Horiguchi:\n> At Tue, 19 Apr 2022 10:55:26 -0700, Andres Freund <andres@anarazel.de> wrote in\n>> Hi,\n>>\n>> On 2022-04-19 10:36:24 -0700, Andres Freund wrote:\n>>> On 2022-04-19 13:50:25 +0200, Erik Rijkers wrote:\n>>>> The 12th run of statbug.sh crashed and gave a corefile.\n>>>\n>>> I ran through quite a few iterations by now, without reproducing :(\n>>>\n>>> I guess there's some timing issue and you're hitting on your system\n>>> due to the slower disks.\n>>\n>> Ah. I found the issue. The new pgstat_report_stat(true) call in\n>> LogicalRepApplyLoop()'s \"timeout\" section doesn't check if we're in a\n>> transaction. And the transactional stats code doesn't handle that (never\n>> has).\n>>\n>> I think all that's needed is a if (IsTransactionState()) around that\n>> pgstat_report_stat().\n> \n> if (!IsTransactinoState()) ?\n> \n>> It might be possible to put an assertion into pgstat_report_stat(), but\n>> I need to look at the process exit code to see if it is.\n> \n> Inserting a sleep in pgoutput_commit_txn reproduced this. 
Crashes with\n> the same stack trace with the similar variable state.\n> \n> diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c\n> index b197bfd565..def4d751d3 100644\n> --- a/src/backend/replication/pgoutput/pgoutput.c\n> +++ b/src/backend/replication/pgoutput/pgoutput.c\n> @@ -568,6 +568,7 @@ pgoutput_commit_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n> \t\treturn;\n> \t}\n> \n> +\tsleep(2);\n> \tOutputPluginPrepareWrite(ctx, true);\n> \tlogicalrep_write_commit(ctx->out, txn, commit_lsn);\n> \tOutputPluginWrite(ctx, true);\n> \n> The following actually works for this.\n> \n> diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c\n> index 4171371296..f4e5359513 100644\n> --- a/src/backend/replication/logical/worker.c\n> +++ b/src/backend/replication/logical/worker.c\n> @@ -2882,10 +2882,11 @@ LogicalRepApplyLoop(XLogRecPtr last_received)\n> \t\t\tsend_feedback(last_received, requestReply, requestReply);\n> \n> \t\t\t/*\n> -\t\t\t * Force reporting to ensure long idle periods don't lead to\n> -\t\t\t * arbitrarily delayed stats.\n> +\t\t\t * Force reporting to ensure long out-of-transaction idle periods\n> +\t\t\t * don't lead to arbitrarily delayed stats.\n> \t\t\t */\n> -\t\t\tpgstat_report_stat(true);\n> +\t\t\tif (!IsTransactionState())\n> +\t\t\t\tpgstat_report_stat(true);\n> \t\t}\n> \t}\n> \n\nYes, that seems to fix it: I applied that latter patch, and ran my \nprogram 250x without errors. Then I removed it again and it gave the \nerror within 15x.\n\nthanks!\n\nErik\n\n\n> regards.\n> \n\n\n",
"msg_date": "Wed, 20 Apr 2022 13:03:20 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 01:03:20PM +0200, Erik Rijkers wrote:\n> Yes, that seems to fix it: I applied that latter patch, and ran my program\n> 250x without errors. Then I removed it again an it gave the error within\n> 15x.\n\nThat looks simple enough, indeed. Andres, are you planning to address\nthis issue?\n--\nMichael",
"msg_date": "Mon, 25 Apr 2022 15:18:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "On Mon, Apr 25, 2022 at 03:18:52PM +0900, Michael Paquier wrote:\n> On Wed, Apr 20, 2022 at 01:03:20PM +0200, Erik Rijkers wrote:\n>> Yes, that seems to fix it: I applied that latter patch, and ran my program\n>> 250x without errors. Then I removed it again an it gave the error within\n>> 15x.\n> \n> That looks simple enough, indeed. Andres, are you planning to address\n> this issue?\n\nPing. It looks annoying to release beta1 with that, as assertions are\nlikely going to be enabled in a lot of test builds.\n--\nMichael",
"msg_date": "Wed, 11 May 2022 15:48:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-11 15:48:40 +0900, Michael Paquier wrote:\n> On Mon, Apr 25, 2022 at 03:18:52PM +0900, Michael Paquier wrote:\n> > On Wed, Apr 20, 2022 at 01:03:20PM +0200, Erik Rijkers wrote:\n> >> Yes, that seems to fix it: I applied that latter patch, and ran my program\n> >> 250x without errors. Then I removed it again an it gave the error within\n> >> 15x.\n> > \n> > That looks simple enough, indeed. Andres, are you planning to address\n> > this issue?\n> \n> Ping. It looks annoying to release beta1 with that, as assertions are\n> likely going to be enabled in a lot of test builds.\n\nI'll try to fix it tomorrow... Sorry for the delay.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 11 May 2022 20:32:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "On Wed, May 11, 2022 at 08:32:14PM -0700, Andres Freund wrote:\n> I'll try to fix it tomorrow... Sorry for the delay.\n\nThanks, Andres.\n--\nMichael",
"msg_date": "Thu, 12 May 2022 14:21:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
},
{
"msg_contents": "Hi,\n\nI finally pushed the fix for this. Erik, thanks for the report! And thanks\nMichael for the ping...\n\nOn 2022-05-11 20:32:14 -0700, Andres Freund wrote:\n> On 2022-05-11 15:48:40 +0900, Michael Paquier wrote:\n\n> > Ping. It looks annoying to release beta1 with that, as assertions are\n> > likely going to be enabled in a lot of test builds.\n\nFWIW, it's somewhat hard to hit (basically the sender needs to stall while\nsending out a transaction / network being really slow), so it'd not have been\nlikely to be hit by all that many people.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 May 2022 19:02:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: TRAP: FailedAssertion(\"tabstat->trans == trans\", File:\n \"pgstat_relation.c\", Line: 508"
}
] |
[
{
"msg_contents": "Hi!\n\nMy name is Donglin Xie, a MSc. student at Zhejiang University, in China. I\nam interested in the project *pgmoneta: Write-Ahead Log (WAL)\ninfrastructure.*\n\nThe proposal is attached to this email. Looking forward to the suggestions!\n\nSincerely\nDonglin Xie",
"msg_date": "Sun, 17 Apr 2022 01:06:41 +0800",
"msg_from": "dl x <xray20161@gmail.com>",
"msg_from_op": true,
"msg_subject": "GSoC: pgmoneta: Write-Ahead Log (WAL) infrastructure (2022)"
},
{
"msg_contents": "Hi,\n\nOn 4/16/22 13:06, dl x wrote:\n> My name is Donglin Xie, a MSc. student at Zhejiang University, in China. I\n> am interested in the project *pgmoneta: Write-Ahead Log (WAL)\n> infrastructure.*\n>\n> The proposal is attached to this email. Looking forward to the suggestions!\n\n\nThanks for your proposal to Google Summer of Code 2022 !\n\nWe'll follow up off-list to get this finalized.\n\nBest regards,\n Jesper\n\n\n\n\n",
"msg_date": "Sat, 16 Apr 2022 15:42:05 -0400",
"msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: GSoC: pgmoneta: Write-Ahead Log (WAL) infrastructure (2022)"
}
] |
[
{
"msg_contents": "My pet dinosaur prairiedog just failed in the contrib/test_decoding\ntests [1]:\n\ndiff -U3 /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/expected/stream.out /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/results/stream.out\n--- /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/expected/stream.out\t2022-04-15 07:59:17.000000000 -0400\n+++ /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/results/stream.out\t2022-04-15 09:06:36.000000000 -0400\n@@ -77,10 +77,12 @@\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n+ closing a streamed block for transaction\n+ opening a streamed block for transaction\n streaming change for transaction\n closing a streamed block for transaction\n committing streamed transaction\n-(13 rows)\n+(15 rows)\n\nLooking at the postmaster log, it's obvious where this extra transaction\ncame from: auto-analyze ran on pg_type concurrently with the test step\njust before this one. That could only happen if the tests ran long enough\nfor autovacuum_naptime to elapse, but prairiedog is a pretty slow machine.\n(And I hasten to point out that some other animals, such as those running\nvalgrind or CLOBBER_CACHE_ALWAYS, are even slower.)\n\nWe've seen this sort of problem before [2], and attempted to fix it [3]\nby making these tests ignore empty transactions. But of course\nauto-analyze's transaction wasn't empty, so that didn't help.\n\nI think the most expedient way to prevent this type of failure is to run\nthe test_decoding tests with autovacuum_naptime cranked up so far as to\nmake it a non-issue, like maybe a day. 
Since test_decoding already adds\nsome custom settings to postgresql.conf, this'll take just a one-line\naddition to test_decoding/logical.conf.\n\nI wonder whether we ought to then revert these tests' use of\nskip-empty-xacts, or at least start having a mix of cases.\nIt seems to me that we'd rather know about it if there are unexpected\nempty transactions. Is there anything we're using that for other than\nto hide the effects of autovacuum?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2022-04-15%2011%3A59%3A16\n\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-02-12%2010%3A24%3A22\n\n[3] https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=b779d7d8fdae088d70da5ed9fcd8205035676df3\n\n\n",
"msg_date": "Sat, 16 Apr 2022 13:11:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Stabilizing the test_decoding checks, take N"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-16 13:11:59 -0400, Tom Lane wrote:\n> My pet dinosaur prairiedog just failed in the contrib/test_decoding\n> tests [1]:\n> \n> diff -U3 /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/expected/stream.out /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/results/stream.out\n> --- /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/expected/stream.out\t2022-04-15 07:59:17.000000000 -0400\n> +++ /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/results/stream.out\t2022-04-15 09:06:36.000000000 -0400\n> @@ -77,10 +77,12 @@\n> streaming change for transaction\n> streaming change for transaction\n> streaming change for transaction\n> + closing a streamed block for transaction\n> + opening a streamed block for transaction\n> streaming change for transaction\n> closing a streamed block for transaction\n> committing streamed transaction\n> -(13 rows)\n> +(15 rows)\n> \n> Looking at the postmaster log, it's obvious where this extra transaction\n> came from: auto-analyze ran on pg_type concurrently with the test step\n> just before this one. That could only happen if the tests ran long enough\n> for autovacuum_naptime to elapse, but prairiedog is a pretty slow machine.\n> (And I hasten to point out that some other animals, such as those running\n> valgrind or CLOBBER_CACHE_ALWAYS, are even slower.)\n> \n> We've seen this sort of problem before [2], and attempted to fix it [3]\n> by making these tests ignore empty transactions. But of course\n> auto-analyze's transaction wasn't empty, so that didn't help.\n\nI don't quite understand this bit - the logic test_decoding uses to\ndecide if a transaction is \"empty\" is just whether a tuple was\noutput. And there shouldn't be any as part of auto-analyze, because we\ndon't decode catalog changes. 
I suspect there's something broken in the\nstreaming logic (potentially just in test_decoding) around\nskip_empty_xacts.\n\n\n> I think the most expedient way to prevent this type of failure is to run\n> the test_decoding tests with autovacuum_naptime cranked up so far as to\n> make it a non-issue, like maybe a day. Since test_decoding already adds\n> some custom settings to postgresql.conf, this'll take just a one-line\n> addition to test_decoding/logical.conf.\n\nI'm a bit worried about this approach - we've IIRC had past bugs that\ncame only to light because of autovacuum starting. I wonder if we rather\nshould do the opposite and reduce naptime so it'll be seen on fast\nmachines, rather than very slow ones.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 17 Apr 2022 07:31:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Stabilizing the test_decoding checks, take N"
},
{
"msg_contents": "Hi,\n\nAdding Amit, I think this is stuff he worked on...\n\nOn 2022-04-17 07:31:04 -0700, Andres Freund wrote:\n> On 2022-04-16 13:11:59 -0400, Tom Lane wrote:\n> > My pet dinosaur prairiedog just failed in the contrib/test_decoding\n> > tests [1]:\n> > \n> > diff -U3 /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/expected/stream.out /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/results/stream.out\n> > --- /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/expected/stream.out\t2022-04-15 07:59:17.000000000 -0400\n> > +++ /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/results/stream.out\t2022-04-15 09:06:36.000000000 -0400\n> > @@ -77,10 +77,12 @@\n> > streaming change for transaction\n> > streaming change for transaction\n> > streaming change for transaction\n> > + closing a streamed block for transaction\n> > + opening a streamed block for transaction\n> > streaming change for transaction\n> > closing a streamed block for transaction\n> > committing streamed transaction\n> > -(13 rows)\n> > +(15 rows)\n> > \n> > Looking at the postmaster log, it's obvious where this extra transaction\n> > came from: auto-analyze ran on pg_type concurrently with the test step\n> > just before this one. That could only happen if the tests ran long enough\n> > for autovacuum_naptime to elapse, but prairiedog is a pretty slow machine.\n> > (And I hasten to point out that some other animals, such as those running\n> > valgrind or CLOBBER_CACHE_ALWAYS, are even slower.)\n> > \n> > We've seen this sort of problem before [2], and attempted to fix it [3]\n> > by making these tests ignore empty transactions. But of course\n> > auto-analyze's transaction wasn't empty, so that didn't help.\n> \n> I don't quite understand this bit - the logic test_decoding uses to\n> decide if a transaction is \"empty\" is just whether a tuple was\n> output. 
And there shouldn't be any as part of auto-analyze, because we\n> don't decode catalog changes. I suspect there's something broken in the\n> streaming logic (potentially just in test_decoding) around\n> skip_empty_xacts.\n> \n> \n> > I think the most expedient way to prevent this type of failure is to run\n> > the test_decoding tests with autovacuum_naptime cranked up so far as to\n> > make it a non-issue, like maybe a day. Since test_decoding already adds\n> > some custom settings to postgresql.conf, this'll take just a one-line\n> > addition to test_decoding/logical.conf.\n> \n> I'm a bit worried about this approach - we've IIRC had past bugs that\n> came only to light because of autovacuum starting. I wonder if we rather\n> should do the opposite and reduce naptime so it'll be seen on fast\n> machines, rather than very slow ones.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 17 Apr 2022 07:32:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Stabilizing the test_decoding checks, take N"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n>> On 2022-04-16 13:11:59 -0400, Tom Lane wrote:\n>>> We've seen this sort of problem before [2], and attempted to fix it [3]\n>>> by making these tests ignore empty transactions. But of course\n>>> auto-analyze's transaction wasn't empty, so that didn't help.\n\n>> I don't quite understand this bit - the logic test_decoding uses to\n>> decide if a transaction is \"empty\" is just whether a tuple was\n>> output. And there shouldn't be any as part of auto-analyze, because we\n>> don't decode catalog changes. I suspect there's something broken in the\n>> streaming logic (potentially just in test_decoding) around\n>> skip_empty_xacts.\n\nHmm, I'll defer to somebody who knows that code better about whether\nthere's an actual bug. However ...\n\n>>> I think the most expedient way to prevent this type of failure is to run\n>>> the test_decoding tests with autovacuum_naptime cranked up so far as to\n>>> make it a non-issue, like maybe a day.\n\n>> I'm a bit worried about this approach - we've IIRC had past bugs that\n>> came only to light because of autovacuum starting. I wonder if we rather\n>> should do the opposite and reduce naptime so it'll be seen on fast\n>> machines, rather than very slow ones.\n\nIt seems likely to me that trying to make a test like this one blind to\nautovacuum/autoanalyze activity will make it less useful, not more so.\nWhy is such blindness desirable?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 17 Apr 2022 12:01:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Stabilizing the test_decoding checks, take N"
},
{
"msg_contents": "Hi\n\nOn 2022-04-17 12:01:53 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> >> On 2022-04-16 13:11:59 -0400, Tom Lane wrote:\n> >>> We've seen this sort of problem before [2], and attempted to fix it [3]\n> >>> by making these tests ignore empty transactions. But of course\n> >>> auto-analyze's transaction wasn't empty, so that didn't help.\n> \n> >> I don't quite understand this bit - the logic test_decoding uses to\n> >> decide if a transaction is \"empty\" is just whether a tuple was\n> >> output. And there shouldn't be any as part of auto-analyze, because we\n> >> don't decode catalog changes. I suspect there's something broken in the\n> >> streaming logic (potentially just in test_decoding) around\n> >> skip_empty_xacts.\n> \n> Hmm, I'll defer to somebody who knows that code better about whether\n> there's an actual bug. However ...\n> \n> >>> I think the most expedient way to prevent this type of failure is to run\n> >>> the test_decoding tests with autovacuum_naptime cranked up so far as to\n> >>> make it a non-issue, like maybe a day.\n> \n> >> I'm a bit worried about this approach - we've IIRC had past bugs that\n> >> came only to light because of autovacuum starting. I wonder if we rather\n> >> should do the opposite and reduce naptime so it'll be seen on fast\n> >> machines, rather than very slow ones.\n> \n> It seems likely to me that trying to make a test like this one blind to\n> autovacuum/autoanalyze activity will make it less useful, not more so.\n> Why is such blindness desirable?\n\nMaybe I misunderstood - I thought you were proposing to prevent\nautovacuum by increasing naptime? Won't that precisely blind us to\nautovacuum/analyze? Hiding empty xacts happens \"very late\", so all the\ndecoding etc still happens.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 17 Apr 2022 14:52:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Stabilizing the test_decoding checks, take N"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-04-17 12:01:53 -0400, Tom Lane wrote:\n>> It seems likely to me that trying to make a test like this one blind to\n>> autovacuum/autoanalyze activity will make it less useful, not more so.\n>> Why is such blindness desirable?\n\n> Maybe I misunderstood - I thought you were proposing to prevent\n> autovacuum by increasing naptime? Won't that precisely blind us to\n> autovacuum/analyze? Hiding empty xacts happens \"very late\", so all the\n> decoding etc still happens.\n\nMy concern is basically that if we hack the code so it does not report\nautovacuum activity, that might result in it also not reporting other\nthings that are more interesting. So I think an external method of\nsuppressing test noise due to autovac is more advisable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 17 Apr 2022 17:55:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Stabilizing the test_decoding checks, take N"
},
{
"msg_contents": "On Sun, Apr 17, 2022 at 8:02 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Adding Amit, I think this is stuff he worked on...\n>\n\nI'll look into this and share my findings.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Apr 2022 08:25:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stabilizing the test_decoding checks, take N"
},
{
"msg_contents": "On Sat, Apr 16, 2022 at 10:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> My pet dinosaur prairiedog just failed in the contrib/test_decoding\n> tests [1]:\n>\n> diff -U3 /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/expected/stream.out /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/results/stream.out\n> --- /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/expected/stream.out 2022-04-15 07:59:17.000000000 -0400\n> +++ /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/results/stream.out 2022-04-15 09:06:36.000000000 -0400\n> @@ -77,10 +77,12 @@\n> streaming change for transaction\n> streaming change for transaction\n> streaming change for transaction\n> + closing a streamed block for transaction\n> + opening a streamed block for transaction\n> streaming change for transaction\n> closing a streamed block for transaction\n> committing streamed transaction\n> -(13 rows)\n> +(15 rows)\n>\n> Looking at the postmaster log, it's obvious where this extra transaction\n> came from: auto-analyze ran on pg_type concurrently with the test step\n> just before this one. That could only happen if the tests ran long enough\n> for autovacuum_naptime to elapse, but prairiedog is a pretty slow machine.\n> (And I hasten to point out that some other animals, such as those running\n> valgrind or CLOBBER_CACHE_ALWAYS, are even slower.)\n>\n> We've seen this sort of problem before [2], and attempted to fix it [3]\n> by making these tests ignore empty transactions. But of course\n> auto-analyze's transaction wasn't empty, so that didn't help.\n>\n\nThe possible reason here is that this extra (auto-analyze) transaction\ncauses the logical decoding work mem to reach before the last change\nof the test's transaction. As can be seen from the logs, it just\nclosed the stream before the last change and then opened a new stream\nfor the last change. 
Now, it is true that the auto-analyze changes\nwon't be decoded as they don't perform DML operation on any\nnon-catalog table but it could generate some invalidation message\nwhich needs to be processed even though we won't send anything related\nto it to the downstream.\n\nThis needs to be verified once by doing some manual testing as it may\nnot be easily reproducible every time. If this happens to be true then\nI think your suggestion related to increasing autovacuum_naptime would\nwork.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Apr 2022 11:19:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stabilizing the test_decoding checks, take N"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Sat, Apr 16, 2022 at 10:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > My pet dinosaur prairiedog just failed in the contrib/test_decoding\n> > tests [1]:\n> >\n> > diff -U3\n> /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/expected/stream.out\n> /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/results/stream.out\n> > ---\n> /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/expected/stream.out\n> 2022-04-15 07:59:17.000000000 -0400\n> > +++\n> /Users/buildfarm/bf-data/HEAD/pgsql.build/contrib/test_decoding/results/stream.out\n> 2022-04-15 09:06:36.000000000 -0400\n> > @@ -77,10 +77,12 @@\n> > streaming change for transaction\n> > streaming change for transaction\n> > streaming change for transaction\n> > + closing a streamed block for transaction\n> > + opening a streamed block for transaction\n> > streaming change for transaction\n> > closing a streamed block for transaction\n> > committing streamed transaction\n> > -(13 rows)\n> > +(15 rows)\n> >\n> > Looking at the postmaster log, it's obvious where this extra transaction\n> > came from: auto-analyze ran on pg_type concurrently with the test step\n> > just before this one. That could only happen if the tests ran long\n> enough\n> > for autovacuum_naptime to elapse, but prairiedog is a pretty slow\n> machine.\n> > (And I hasten to point out that some other animals, such as those running\n> > valgrind or CLOBBER_CACHE_ALWAYS, are even slower.)\n> >\n> > We've seen this sort of problem before [2], and attempted to fix it [3]\n> > by making these tests ignore empty transactions. But of course\n> > auto-analyze's transaction wasn't empty, so that didn't help.\n> >\n>\n> The possible reason here is that this extra (auto-analyze) transaction\n> causes the logical decoding work mem to reach before the last change\n> of the test's transaction. 
As can be seen from the logs, it just\n> closed the stream before the last change and then opened a new stream\n> for the last change. Now, it is true that the auto-analyze changes\n> won't be decoded as they don't perform DML operation on any\n> non-catalog table but it could generate some invalidation message\n> which needs to be processed even though we won't send anything related\n> to it to the downstream.\n>\n\nThis analysis seems right to me.\n\n\n> This needs to be verified once by doing some manual testing as it may\n> not be easily reproducible every time. If this happens to be true then\n> I think your suggestion related to increasing autovacuum_naptime would\n> work.\n>\n\nI will try to reproduce this, maybe by reducing the autovacuum_naptime or\nparallelly running some script that continuously performs DDL-only\ntransactions.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 18 Apr 2022 15:29:37 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stabilizing the test_decoding checks, take N"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 3:29 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n>\n> This needs to be verified once by doing some manual testing as it may\n> not be easily reproducible every time. If this happens to be true then\n> I think your suggestion related to increasing autovacuum_naptime would\n> work.\n>\n>\n> I will try to reproduce this, maybe by reducing the autovacuum_naptime or parallelly running some script that continuously performs DDL-only transactions.\n\nI have reproduced it [1] by repeatedly running the attached\nscript(stream.sql) from one session and parallely running the vacuum\nanalysis from the another session.\n\nI have also changed the config for testing decoding to set the\nautovacuum_naptime to 1d (patch attached)\n\n[1]\nResult without vacuum analyze:\n data\n------------------------------------------\n opening a streamed block for transaction\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n closing a streamed block for transaction\n committing streamed transaction\n(13 rows)\n\nResult with parallely running VACUUM ANALYZE\n\n data\n------------------------------------------\n opening a streamed block for transaction\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n streaming change for transaction\n closing a streamed block for transaction\n opening a streamed block for transaction\n streaming change for transaction\n closing a streamed block for transaction\n committing streamed 
transaction\n(15 rows)\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 19 Apr 2022 11:38:14 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stabilizing the test_decoding checks, take N"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 11:38 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Apr 18, 2022 at 3:29 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> >\n> > This needs to be verified once by doing some manual testing as it may\n> > not be easily reproducible every time. If this happens to be true then\n> > I think your suggestion related to increasing autovacuum_naptime would\n> > work.\n> >\n> >\n> > I will try to reproduce this, maybe by reducing the autovacuum_naptime or parallelly running some script that continuously performs DDL-only transactions.\n>\n> I have reproduced it [1] by repeatedly running the attached\n> script(stream.sql) from one session and parallely running the vacuum\n> analysis from the another session.\n>\n> I have also changed the config for testing decoding to set the\n> autovacuum_naptime to 1d (patch attached)\n>\n\nThanks, I am also able to see similar results. This shows the analysis\nwas right. I will push the autovacuum_naptime change in HEAD and 14\n(as both contains this test) tomorrow unless someone thinks otherwise.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 19 Apr 2022 15:16:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stabilizing the test_decoding checks, take N"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 3:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 19, 2022 at 11:38 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Apr 18, 2022 at 3:29 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > >\n> > > This needs to be verified once by doing some manual testing as it may\n> > > not be easily reproducible every time. If this happens to be true then\n> > > I think your suggestion related to increasing autovacuum_naptime would\n> > > work.\n> > >\n> > >\n> > > I will try to reproduce this, maybe by reducing the autovacuum_naptime or parallelly running some script that continuously performs DDL-only transactions.\n> >\n> > I have reproduced it [1] by repeatedly running the attached\n> > script(stream.sql) from one session and parallely running the vacuum\n> > analysis from the another session.\n> >\n> > I have also changed the config for testing decoding to set the\n> > autovacuum_naptime to 1d (patch attached)\n> >\n>\n> Thanks, I am also able to see similar results. This shows the analysis\n> was right. I will push the autovacuum_naptime change in HEAD and 14\n> (as both contains this test) tomorrow unless someone thinks otherwise.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 20 Apr 2022 11:30:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stabilizing the test_decoding checks, take N"
}
] |
[
{
"msg_contents": "Hi!\n\nMy name is Donglin Xie, a MSc. student at Zhejiang University, in China. I\nam interested in the project *pgexporter: Custom_file.*\n\nThe proposal is attached to this email. Looking forward to the suggestions!\n\nSincerely\nDonglin Xie",
"msg_date": "Sun, 17 Apr 2022 16:43:05 +0800",
"msg_from": "dl x <xray20161@gmail.com>",
"msg_from_op": true,
"msg_subject": "GsoC: pgexporter: Custom file"
},
{
"msg_contents": "Hi,\n\nOn 4/17/22 04:43, dl x wrote:\n> My name is Donglin Xie, a MSc. student at Zhejiang University, in China. I\n> am interested in the project *pgexporter: Custom_file.*\n>\n> The proposal is attached to this email. Looking forward to the suggestions!\n>\n>\n\nThanks for your proposal to Google Summer of Code 2022 !\n\nWe'll follow up off-list to get this finalized.\n\nBest regards,\n Jesper\n\n\n\n\n",
"msg_date": "Sun, 17 Apr 2022 08:05:19 -0400",
"msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: GsoC: pgexporter: Custom file"
}
] |
[
{
"msg_contents": "Hi,\n(I added Tomas in CC:.)\n\nOne thing I noticed while reviewing the patch for fast copying into\nforeign tables/partitions using batch insert [1] is that in\npostgres_fdw we allow batch-inserting into foreign tables/partitions\nwith before row triggers, but such triggers might query the target\ntable/partition and act differently if the tuples that have already\nbeen processed and prepared for batch-insertion are not there. Here\nis an example using HEAD:\n\ncreate extension postgres_fdw;\ncreate server loopback foreign data wrapper postgres_fdw options\n(dbname 'postgres');\ncreate user mapping for current_user server loopback;\ncreate table t (a int);\ncreate foreign table ft (a int) server loopback options (table_name 't');\ncreate function ft_rowcount_tf() returns trigger as $$ begin raise\nnotice '%: rows = %', tg_name, (select count(*) from ft); return new;\nend; $$ language plpgsql;\ncreate trigger ft_rowcount before insert on ft for each row execute\nfunction ft_rowcount_tf();\n\ninsert into ft select i from generate_series(1, 10) i;\nNOTICE: ft_rowcount: rows = 0\nNOTICE: ft_rowcount: rows = 1\nNOTICE: ft_rowcount: rows = 2\nNOTICE: ft_rowcount: rows = 3\nNOTICE: ft_rowcount: rows = 4\nNOTICE: ft_rowcount: rows = 5\nNOTICE: ft_rowcount: rows = 6\nNOTICE: ft_rowcount: rows = 7\nNOTICE: ft_rowcount: rows = 8\nNOTICE: ft_rowcount: rows = 9\nINSERT 0 10\n\nThis looks good, but when batch insert is enabled, the trigger\nproduces incorrect results:\n\nalter foreign table ft options (add batch_size '10');\ndelete from ft;\n\ninsert into ft select i from generate_series(1, 10) i;\nNOTICE: ft_rowcount: rows = 0\nNOTICE: ft_rowcount: rows = 0\nNOTICE: ft_rowcount: rows = 0\nNOTICE: ft_rowcount: rows = 0\nNOTICE: ft_rowcount: rows = 0\nNOTICE: ft_rowcount: rows = 0\nNOTICE: ft_rowcount: rows = 0\nNOTICE: ft_rowcount: rows = 0\nNOTICE: ft_rowcount: rows = 0\nNOTICE: ft_rowcount: rows = 0\nINSERT 0 10\n\nSo I think we should disable batch insert in 
such cases, just as we\ndisable multi insert when there are any before row triggers on the\ntarget (local) tables/partitions in copyfrom.c. Attached is a patch\nfor that.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/bc489202-9855-7550-d64c-ad2d83c24867%40postgrespro.ru",
"msg_date": "Sun, 17 Apr 2022 18:20:48 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "On Sun, Apr 17, 2022 at 6:20 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Here\n> is an example using HEAD:\n>\n> create extension postgres_fdw;\n> create server loopback foreign data wrapper postgres_fdw options\n> (dbname 'postgres');\n> create user mapping for current_user server loopback;\n> create table t (a int);\n> create foreign table ft (a int) server loopback options (table_name 't');\n> create function ft_rowcount_tf() returns trigger as $$ begin raise\n> notice '%: rows = %', tg_name, (select count(*) from ft); return new;\n> end; $$ language plpgsql;\n> create trigger ft_rowcount before insert on ft for each row execute\n> function ft_rowcount_tf();\n>\n> insert into ft select i from generate_series(1, 10) i;\n> NOTICE: ft_rowcount: rows = 0\n> NOTICE: ft_rowcount: rows = 1\n> NOTICE: ft_rowcount: rows = 2\n> NOTICE: ft_rowcount: rows = 3\n> NOTICE: ft_rowcount: rows = 4\n> NOTICE: ft_rowcount: rows = 5\n> NOTICE: ft_rowcount: rows = 6\n> NOTICE: ft_rowcount: rows = 7\n> NOTICE: ft_rowcount: rows = 8\n> NOTICE: ft_rowcount: rows = 9\n> INSERT 0 10\n>\n> This looks good, but when batch insert is enabled, the trigger\n> produces incorrect results:\n>\n> alter foreign table ft options (add batch_size '10');\n> delete from ft;\n>\n> insert into ft select i from generate_series(1, 10) i;\n> NOTICE: ft_rowcount: rows = 0\n> NOTICE: ft_rowcount: rows = 0\n> NOTICE: ft_rowcount: rows = 0\n> NOTICE: ft_rowcount: rows = 0\n> NOTICE: ft_rowcount: rows = 0\n> NOTICE: ft_rowcount: rows = 0\n> NOTICE: ft_rowcount: rows = 0\n> NOTICE: ft_rowcount: rows = 0\n> NOTICE: ft_rowcount: rows = 0\n> NOTICE: ft_rowcount: rows = 0\n> INSERT 0 10\n\nActually, the results are correct, as we do batch-insert here. 
But I\njust wanted to show that the trigger behaves *differently* when doing\nbatch-insert.\n\n> So I think we should disable batch insert in such cases, just as we\n> disable multi insert when there are any before row triggers on the\n> target (local) tables/partitions in copyfrom.c. Attached is a patch\n> for that.\n\nIf there are no objections from Tomas or anyone else, I'll commit the patch.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Tue, 19 Apr 2022 18:16:03 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "\n\nOn 4/19/22 11:16, Etsuro Fujita wrote:\n> On Sun, Apr 17, 2022 at 6:20 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>> Here\n>> is an example using HEAD:\n>>\n>> create extension postgres_fdw;\n>> create server loopback foreign data wrapper postgres_fdw options\n>> (dbname 'postgres');\n>> create user mapping for current_user server loopback;\n>> create table t (a int);\n>> create foreign table ft (a int) server loopback options (table_name 't');\n>> create function ft_rowcount_tf() returns trigger as $$ begin raise\n>> notice '%: rows = %', tg_name, (select count(*) from ft); return new;\n>> end; $$ language plpgsql;\n>> create trigger ft_rowcount before insert on ft for each row execute\n>> function ft_rowcount_tf();\n>>\n>> insert into ft select i from generate_series(1, 10) i;\n>> NOTICE: ft_rowcount: rows = 0\n>> NOTICE: ft_rowcount: rows = 1\n>> NOTICE: ft_rowcount: rows = 2\n>> NOTICE: ft_rowcount: rows = 3\n>> NOTICE: ft_rowcount: rows = 4\n>> NOTICE: ft_rowcount: rows = 5\n>> NOTICE: ft_rowcount: rows = 6\n>> NOTICE: ft_rowcount: rows = 7\n>> NOTICE: ft_rowcount: rows = 8\n>> NOTICE: ft_rowcount: rows = 9\n>> INSERT 0 10\n>>\n>> This looks good, but when batch insert is enabled, the trigger\n>> produces incorrect results:\n>>\n>> alter foreign table ft options (add batch_size '10');\n>> delete from ft;\n>>\n>> insert into ft select i from generate_series(1, 10) i;\n>> NOTICE: ft_rowcount: rows = 0\n>> NOTICE: ft_rowcount: rows = 0\n>> NOTICE: ft_rowcount: rows = 0\n>> NOTICE: ft_rowcount: rows = 0\n>> NOTICE: ft_rowcount: rows = 0\n>> NOTICE: ft_rowcount: rows = 0\n>> NOTICE: ft_rowcount: rows = 0\n>> NOTICE: ft_rowcount: rows = 0\n>> NOTICE: ft_rowcount: rows = 0\n>> NOTICE: ft_rowcount: rows = 0\n>> INSERT 0 10\n> \n> Actually, the results are correct, as we do batch-insert here. 
But I\n> just wanted to show that the trigger behaves *differently* when doing\n> batch-insert.\n> \n>> So I think we should disable batch insert in such cases, just as we\n>> disable multi insert when there are any before row triggers on the\n>> target (local) tables/partitions in copyfrom.c. Attached is a patch\n>> for that.\n> \n> If there are no objections from Tomas or anyone else, I'll commit the patch.\n> \n\n+1, I think it's a bug to do batch insert in this case.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 19 Apr 2022 14:00:25 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "Hi,\n\nOn Tue, Apr 19, 2022 at 9:00 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 4/19/22 11:16, Etsuro Fujita wrote:\n> > On Sun, Apr 17, 2022 at 6:20 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> >> So I think we should disable batch insert in such cases, just as we\n> >> disable multi insert when there are any before row triggers on the\n> >> target (local) tables/partitions in copyfrom.c. Attached is a patch\n> >> for that.\n> >\n> > If there are no objections from Tomas or anyone else, I'll commit the patch.\n\n> +1, I think it's a bug to do batch insert in this case.\n\nPushed and back-patched to v14, after tweaking a comment a little bit.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 21 Apr 2022 15:49:45 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "Hi,\n\nAnother thing I noticed while working on the \"Fast COPY FROM based on\nbatch insert\" patch is: batch inserts vs. WITH CHECK OPTION\nconstraints from parent views. Here is an example on a normal build\nproducing incorrect results.\n\nCREATE TABLE base_tbl (a int, b int);\nCREATE FUNCTION row_before_insert_trigfunc() RETURNS trigger AS\n$$BEGIN NEW.a := NEW.a + 10; RETURN NEW; END$$ LANGUAGE plpgsql;\nCREATE TRIGGER row_before_insert_trigger BEFORE INSERT ON base_tbl FOR\nEACH ROW EXECUTE PROCEDURE row_before_insert_trigfunc();\nCREATE FOREIGN TABLE foreign_tbl (a int, b int) SERVER loopback\nOPTIONS (table_name 'base_tbl');\nCREATE VIEW rw_view AS SELECT * FROM foreign_tbl WHERE a < b WITH CHECK OPTION;\nALTER SERVER loopback OPTIONS (ADD batch_size '10');\n\nEXPLAIN VERBOSE INSERT INTO rw_view VALUES (0, 15), (0, 5);\n QUERY PLAN\n--------------------------------------------------------------------------------\n Insert on public.foreign_tbl (cost=0.00..0.03 rows=0 width=0)\n Remote SQL: INSERT INTO public.base_tbl(a, b) VALUES ($1, $2) RETURNING a, b\n Batch Size: 10\n -> Values Scan on \"*VALUES*\" (cost=0.00..0.03 rows=2 width=8)\n Output: \"*VALUES*\".column1, \"*VALUES*\".column2\n(5 rows)\n\nINSERT INTO rw_view VALUES (0, 15), (0, 5);\nINSERT 0 2\n\nThis isn't correct; the INSERT query should abort because the\nsecond-inserted row violates the WCO constraint as it is changed to\n(10, 5) by the BEFORE ROW trigger.\n\nAlso, the query caused an assertion failure on an assert-enabled build, like:\n\nTRAP: FailedAssertion(\"*numSlots == 1\", File: \"postgres_fdw.c\", Line:\n4164, PID: 7775)\n\nI think the root cause for these is that WCO constraints are enforced\nlocally, but in batch-insert mode postgres_fdw cannot currently\nretrieve the data needed to enforce such constraints locally that was\nactually inserted on the remote side (except for the first-inserted\nrow). 
And I think this leads to the incorrect results on the normal\nbuild as the WCO constraint is enforced with the data passed from the\ncore for the second-inserted row, and leads to the assertion failure\non the assert-enabled build.\n\nTo fix, I modified postgresGetForeignModifyBatchSize() to disable\nbatch insert when there are any such constraints, like when there are\nany AFTER ROW triggers on the foreign table. Attached is a patch for\nthat.\n\nIf there are no objections, I'll commit the patch.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Wed, 3 Aug 2022 14:24:48 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "On Tue, 19 Apr 2022 at 14:00, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 4/19/22 11:16, Etsuro Fujita wrote:\n> > On Sun, Apr 17, 2022 at 6:20 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> >> Here\n> >> is an example using HEAD:\n> >>\n> >> create extension postgres_fdw;\n> >> create server loopback foreign data wrapper postgres_fdw options\n> >> (dbname 'postgres');\n> >> create user mapping for current_user server loopback;\n> >> create table t (a int);\n> >> create foreign table ft (a int) server loopback options (table_name 't');\n> >> create function ft_rowcount_tf() returns trigger as $$ begin raise\n> >> notice '%: rows = %', tg_name, (select count(*) from ft); return new;\n> >> end; $$ language plpgsql;\n> >> create trigger ft_rowcount before insert on ft for each row execute\n> >> function ft_rowcount_tf();\n> >>\n> >> insert into ft select i from generate_series(1, 10) i;\n> >> NOTICE: ft_rowcount: rows = 0\n> >> NOTICE: ft_rowcount: rows = 1\n> >> NOTICE: ft_rowcount: rows = 2\n> >> NOTICE: ft_rowcount: rows = 3\n> >> NOTICE: ft_rowcount: rows = 4\n> >> NOTICE: ft_rowcount: rows = 5\n> >> NOTICE: ft_rowcount: rows = 6\n> >> NOTICE: ft_rowcount: rows = 7\n> >> NOTICE: ft_rowcount: rows = 8\n> >> NOTICE: ft_rowcount: rows = 9\n> >> INSERT 0 10\n> >>\n> >> This looks good, but when batch insert is enabled, the trigger\n> >> produces incorrect results:\n> >>\n> >> alter foreign table ft options (add batch_size '10');\n> >> delete from ft;\n> >>\n> >> insert into ft select i from generate_series(1, 10) i;\n> >> NOTICE: ft_rowcount: rows = 0\n> >> NOTICE: ft_rowcount: rows = 0\n> >> NOTICE: ft_rowcount: rows = 0\n> >> NOTICE: ft_rowcount: rows = 0\n> >> NOTICE: ft_rowcount: rows = 0\n> >> NOTICE: ft_rowcount: rows = 0\n> >> NOTICE: ft_rowcount: rows = 0\n> >> NOTICE: ft_rowcount: rows = 0\n> >> NOTICE: ft_rowcount: rows = 0\n> >> NOTICE: ft_rowcount: rows = 0\n> >> INSERT 0 10\n> >\n> > Actually, the results are 
correct, as we do batch-insert here. But I\n> > just wanted to show that the trigger behaves *differently* when doing\n> > batch-insert.\n>\n> +1, I think it's a bug to do batch insert in this case.\n\nI don't have a current version of the SQL spec, but one preliminary\nversion of SQL:2012 I retrieved via the wiki details that all BEFORE\ntriggers on INSERT/UPDATE/DELETE statements are all executed before\n_any_ of that statements' affected data is modified.\n\nSee the \"SQL:2011 (preliminary)\" document you can grab on the wiki,\nPart 2: for INSERT, in 15.10 (4) the BEFORE triggers on the changeset\nare executed, and only after that in section 15.10 (5)(c) the\nchangeset is inserted into the target table. During the BEFORE-trigger\nthis table does not contain the rows of the changeset, thus a count(*)\non that table would result in a single value for all the BEFORE\ntriggers triggered on that statement, regardless of the FOR EACH ROW\nspecifier. The sections for DELETE are 15.7 (6) and 15.7 (7); and for\nUPDATE 15.13(7) and 15.13(9) respectively.\n\nI don't know about the semantics of triggers in the latest SQL\nstandard versions, but based on that sample it seems like we're\nnon-compliant on BEFORE trigger behaviour, and it doesn't seem like\nit's documented in the trigger documentation.\n\nI seem to recall a mail on this topic (changes in trigger execution\norder with respect to the DML it is triggered by in the newest SQL\nspec) but I can't seem to find that thread.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 3 Aug 2022 23:52:02 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> I don't have a current version of the SQL spec, but one preliminary\n> version of SQL:2012 I retrieved via the wiki details that all BEFORE\n> triggers on INSERT/UPDATE/DELETE statements are all executed before\n> _any_ of that statements' affected data is modified.\n> ...\n> I don't know about the semantics of triggers in the latest SQL\n> standard versions, but based on that sample it seems like we're\n> non-compliant on BEFORE trigger behaviour, and it doesn't seem like\n> it's documented in the trigger documentation.\n\nI think we're compliant if you declare the trigger functions as\nstable (or immutable, but in any case where this matters, I think\nyou'd be lying). They'll then run with the snapshot of the calling\nquery, in which those updates are not yet visible.\n\nThis is documented somewhere, but maybe not anywhere near triggers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Aug 2022 17:57:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "On Wed, 3 Aug 2022 at 23:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> > I don't have a current version of the SQL spec, but one preliminary\n> > version of SQL:2012 I retrieved via the wiki details that all BEFORE\n> > triggers on INSERT/UPDATE/DELETE statements are all executed before\n> > _any_ of that statements' affected data is modified.\n> > ...\n> > I don't know about the semantics of triggers in the latest SQL\n> > standard versions, but based on that sample it seems like we're\n> > non-compliant on BEFORE trigger behaviour, and it doesn't seem like\n> > it's documented in the trigger documentation.\n>\n> I think we're compliant if you declare the trigger functions as\n> stable (or immutable, but in any case where this matters, I think\n> you'd be lying). They'll then run with the snapshot of the calling\n> query, in which those updates are not yet visible.\n>\n> This is documented somewhere, but maybe not anywhere near triggers.\n\nThank you for this pointer.\n\nLooking around a bit, it seems like this behaviour for functions is\nindeed documented in xfunc.sgml, but rendered docs page [0] does not\nseem to mention triggers, nor does the triggers page link to that part\nof the xfunc document. This makes it quite easy to overlook that this\nis expected (?) behaviour for VOLATILE functions only.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/docs/current/xfunc-volatility.html\n\n\n",
"msg_date": "Thu, 4 Aug 2022 00:35:13 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "On Wed, Aug 3, 2022 at 2:24 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> To fix, I modified postgresGetForeignModifyBatchSize() to disable\n> batch insert when there are any such constraints, like when there are\n> any AFTER ROW triggers on the foreign table. Attached is a patch for\n> that.\n>\n> If there are no objections, I'll commit the patch.\n\nPushed after modifying the patch a bit so that in that function the\nWCO test in the if test is done before the trigger test, as the former\nwould be cheaper than the latter.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 5 Aug 2022 17:36:22 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "Hi,\n\nWhile working on something else, I notice some more oddities. Here is\nan example:\n\ncreate extension postgres_fdw;\ncreate server loopback foreign data wrapper postgres_fdw options\n(dbname 'postgres');\ncreate user mapping for current_user server loopback;\ncreate table t1 (a text, b int);\ncreate foreign table ft1 (a text, b int) server loopback options\n(table_name 't1');\ncreate table w1 (a text, b int);\ncreate function ft1_rowcount_trigf() returns trigger language plpgsql\nas $$ begin raise notice '%: there are % rows in ft1', tg_name,\n(select count(*) from ft1); return new; end; $$;\ncreate trigger ft1_rowcount_trigger before insert on w1 for each row\nexecute function ft1_rowcount_trigf();\nalter server loopback options (add batch_size '10');\n\nwith t as (insert into w1 values ('foo', 10), ('bar', 20) returning *)\ninsert into ft1 select * from t;\nNOTICE: ft1_rowcount_trigger: there are 0 rows in ft1\nNOTICE: ft1_rowcount_trigger: there are 0 rows in ft1\nINSERT 0 2\n\nThe command tag shows that two rows were inserted into ft1, but:\n\nselect * from ft1;\n a | b\n-----+----\n foo | 10\n bar | 20\n foo | 10\n bar | 20\n(4 rows)\n\nft1 has four rows, which is wrong. 
Also, when inserting the second\nrow (‘bar’, 20) into w1, the BEFORE ROW INSERT trigger should see the\nfirst row (‘foo’, 10) in ft1, but it reports no rows were visible\nthere.\n\nThe reason for the former is that this bit added by commit b663a4136\nis done not only when running the primary ModifyTable node but when\nrunning the secondary ModifyTable node (with the wrong\nModifyTableState).\n\n /*\n * Insert remaining tuples for batch insert.\n */\n if (proute)\n relinfos = estate->es_tuple_routing_result_relations;\n else\n relinfos = estate->es_opened_result_relations;\n\n foreach(lc, relinfos)\n {\n resultRelInfo = lfirst(lc);\n if (resultRelInfo->ri_NumSlots > 0)\n ExecBatchInsert(node, resultRelInfo,\n resultRelInfo->ri_Slots,\n resultRelInfo->ri_PlanSlots,\n resultRelInfo->ri_NumSlots,\n estate, node->canSetTag);\n }\n\nThe reason for the latter is that that commit fails to flush pending\ninserts before executing any BEFORE ROW triggers, so that rows are\nvisible to such triggers.\n\nAttached is a patch for fixing these issues. In the patch I added to\nthe EState struct a List member es_insert_pending_result_relations to\nstore ResultRelInfos for foreign tables on which batch inserts are to\nbe performed, so that we avoid scanning through\nes_tuple_routing_result_relations or es_opened_result_relations each\ntime when flushing pending inserts to the foreign tables.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Fri, 18 Nov 2022 20:46:59 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "On Fri, Nov 18, 2022 at 8:46 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Attached is a patch for fixing these issues.\n\nHere is an updated patch. In the attached, I added an assertion to\nExecInsert(). Also, I tweaked comments and test cases a little bit,\nfor consistency. Also, I noticed a copy-and-pasteo in a comment in\nExecBatchInsert(), so I fixed it as well.\n\nBarring objections, I will commit the patch.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Thu, 24 Nov 2022 20:19:10 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "On Thu, Nov 24, 2022 at 8:19 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Here is an updated patch. In the attached, I added an assertion to\n> ExecInsert(). Also, I tweaked comments and test cases a little bit,\n> for consistency. Also, I noticed a copy-and-pasteo in a comment in\n> ExecBatchInsert(), so I fixed it as well.\n>\n> Barring objections, I will commit the patch.\n\nI have committed the patch.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 25 Nov 2022 17:59:22 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> I have committed the patch.\n\nApologies for not having paid attention to this thread, but ...\n\nI don't think the committed patch is acceptable at all, at least\nnot in the back branches, because it creates a severe ABI break.\nSpecifically, by adding a field to ResultRelInfo you have changed\nthe array stride of es_result_relations, and that will break any\npreviously-compiled extension code that accesses that array.\n\nI'm not terribly pleased with it having added a field to EState\neither. That seems much more global than what we need here.\nCouldn't we add the field to ModifyTableState, instead?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 25 Nov 2022 11:57:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "On Sat, Nov 26, 2022 at 1:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't think the committed patch is acceptable at all, at least\n> not in the back branches, because it creates a severe ABI break.\n> Specifically, by adding a field to ResultRelInfo you have changed\n> the array stride of es_result_relations, and that will break any\n> previously-compiled extension code that accesses that array.\n\nUgh.\n\n> I'm not terribly pleased with it having added a field to EState\n> either. That seems much more global than what we need here.\n\nThe field stores pending buffered inserts, and I added it to Estate so\nthat it can be shared across primary/secondary ModifyTable nodes.\n(Re-)consider this:\n\ncreate extension postgres_fdw;\ncreate server loopback foreign data wrapper postgres_fdw options\n(dbname 'postgres');\ncreate user mapping for current_user server loopback;\ncreate table t1 (a text, b int);\ncreate foreign table ft1 (a text, b int) server loopback options\n(table_name 't1');\ncreate table w1 (a text, b int);\ncreate function ft1_rowcount_trigf() returns trigger language plpgsql as\n$$\nbegin\n raise notice '%: there are % rows in ft1',\n tg_name, (select count(*) from ft1);\n return new;\nend;\n$$;\ncreate trigger ft1_rowcount_trigger before insert on w1 for each row\nexecute function ft1_rowcount_trigf();\nalter server loopback options (add batch_size '10');\n\nwith t as (insert into w1 values ('foo', 10), ('bar', 20) returning *)\ninsert into ft1 select * from t;\nNOTICE: ft1_rowcount_trigger: there are 0 rows in ft1\nNOTICE: ft1_rowcount_trigger: there are 1 rows in ft1\nINSERT 0 2\n\nFor this query, the primary ModifyTable node doing batch insert is\nexecuted concurrently with the secondary ModifyTable node doing the\nmodifying CTE, and in the secondary ModifyTable node, any pending\nbuffered insert done in the primary ModifyTable node needs to be\nflushed before firing the BEFORE ROW trigger, so the row is visible to\nthe 
trigger. The field is useful for cases like this.\n\n> Couldn't we add the field to ModifyTableState, instead?\n\nWe could probably do so, but I thought having a global list would be\nmore efficient to handle pending buffered inserts than that.\n\nAnyway I will work on this further. Thanks for looking at this!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Sat, 26 Nov 2022 20:38:11 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> On Sat, Nov 26, 2022 at 1:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Couldn't we add the field to ModifyTableState, instead?\n\n> We could probably do so, but I thought having a global list would be\n> more efficient to handle pending buffered inserts than that.\n\nOK, as long as there's a reason for doing it that way, it's OK\nby me. I don't think that adding a field at the end of EState\nis an ABI problem.\n\nWe have to do something else than add to ResultRelInfo, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 26 Nov 2022 10:11:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "On Sun, Nov 27, 2022 at 12:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> > On Sat, Nov 26, 2022 at 1:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Couldn't we add the field to ModifyTableState, instead?\n>\n> > We could probably do so, but I thought having a global list would be\n> > more efficient to handle pending buffered inserts than that.\n>\n> OK, as long as there's a reason for doing it that way, it's OK\n> by me. I don't think that adding a field at the end of EState\n> is an ABI problem.\n>\n> We have to do something else than add to ResultRelInfo, though.\n\nOK, I removed from ResultRelInfo a field that I added in the commit to\nsave the owning ModifyTableState if insert-pending, and added to\nEState another List member to save such ModifyTableStates, instead. I\nam planning to apply this to not only back branches but HEAD, to make\nback-patching easy, if there are no objections.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Fri, 2 Dec 2022 16:54:34 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
},
{
"msg_contents": "On Fri, Dec 2, 2022 at 4:54 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Sun, Nov 27, 2022 at 12:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > OK, as long as there's a reason for doing it that way, it's OK\n> > by me. I don't think that adding a field at the end of EState\n> > is an ABI problem.\n> >\n> > We have to do something else than add to ResultRelInfo, though.\n>\n> OK, I removed from ResultRelInfo a field that I added in the commit to\n> save the owning ModifyTableState if insert-pending, and added to\n> EState another List member to save such ModifyTableStates, instead. I\n> am planning to apply this to not only back branches but HEAD, to make\n> back-patching easy, if there are no objections.\n\nThere seems to be no objection, so pushed.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 8 Dec 2022 16:39:31 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: batch inserts vs. before row triggers"
}
] |
[
{
"msg_contents": "Hello,\n\nPFA my proposal for GSoC 2022 Project: GUI representation of monitoring System Activity with the system_stats Extension in pgAdmin 4 \n\nPotential Mentor: Khushboo Vashi\n\nThank you\n\nKunal Kashyap",
"msg_date": "Sun, 17 Apr 2022 22:23:30 -0400",
"msg_from": "Kunal Kashyap <kk4564@nyu.edu>",
"msg_from_op": true,
"msg_subject": "GSoC: GUI representation of monitoring System Activity with the\n system_stats Extension in pgAdmin 4"
}
] |
[
{
"msg_contents": "Hello all,\n\nPFA my proposal for GSoC 2022 Project: Improve PgArchives\n\nPotential Mentors: Ilaria Battiston, Stephen Frost \n\nThank you\n\nKunal Kashyap",
"msg_date": "Sun, 17 Apr 2022 22:25:57 -0400",
"msg_from": "Kunal Kashyap <kk4564@nyu.edu>",
"msg_from_op": true,
"msg_subject": "GSoC: Improve PgArchives"
}
] |
[
{
"msg_contents": "Hello all,\n\nPFA my proposal for GSoC 2022 Project: Improve PgArchives\n\nPotential Mentor: Dave Cramer\n\nThank you\n\nKunal Kashyap",
"msg_date": "Mon, 18 Apr 2022 00:19:49 -0400",
"msg_from": "Kunal Kashyap <kk4564@nyu.edu>",
"msg_from_op": true,
"msg_subject": "GSoC: New & Improved Website for PgJDBC"
}
] |
[
{
"msg_contents": "The array sortgrouprefs[] inside PathTarget might be NULL if we have not\nidentified sort/group columns in this tlist. In that case we would have\na NULL pointer reference in _outPathTarget() when trying to print\nsortgrouprefs[] with WRITE_INDEX_ARRAY as we are using the length of\nPathTarget->exprs as its array length.\n\nAttached is a fix that can address this problem.\n\nThanks\nRichard",
"msg_date": "Mon, 18 Apr 2022 15:35:53 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix NULL pointer reference in _outPathTarget()"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> The array sortgrouprefs[] inside PathTarget might be NULL if we have not\n> identified sort/group columns in this tlist. In that case we would have\n> a NULL pointer reference in _outPathTarget() when trying to print\n> sortgrouprefs[] with WRITE_INDEX_ARRAY as we are using the length of\n> PathTarget->exprs as its array length.\n\nI wondered why we'd not noticed this long since, and the answer is that\nit got broken relatively recently by bdeb2c4ec, which removed the former\nconditionality of the code:\n\n@@ -2510,14 +2517,7 @@ _outPathTarget(StringInfo str, const PathTarget *node)\n WRITE_NODE_TYPE(\"PATHTARGET\");\n \n WRITE_NODE_FIELD(exprs);\n- if (node->sortgrouprefs)\n- {\n- int i;\n-\n- appendStringInfoString(str, \" :sortgrouprefs\");\n- for (i = 0; i < list_length(node->exprs); i++)\n- appendStringInfo(str, \" %u\", node->sortgrouprefs[i]);\n- }\n+ WRITE_INDEX_ARRAY(sortgrouprefs, list_length(node->exprs));\n WRITE_FLOAT_FIELD(cost.startup, \"%.2f\");\n WRITE_FLOAT_FIELD(cost.per_tuple, \"%.2f\");\n WRITE_INT_FIELD(width);\n\nA semantics-preserving conversion would have looked something like\n\n if (node->sortgrouprefs)\n WRITE_INDEX_ARRAY(sortgrouprefs, list_length(node->exprs));\n\nI suppose that Peter was trying to remove special cases from the\noutfuncs.c code, but do we want to put this one back? Richard's\nproposal would not accurately reflect the contents of the data\nstructure, so I'm not too thrilled with it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Apr 2022 14:53:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix NULL pointer reference in _outPathTarget()"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 2:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> A semantics-preserving conversion would have looked something like\n>\n> if (node->sortgrouprefs)\n> WRITE_INDEX_ARRAY(sortgrouprefs, list_length(node->exprs));\n>\n> I suppose that Peter was trying to remove special cases from the\n> outfuncs.c code, but do we want to put this one back? Richard's\n> proposal would not accurately reflect the contents of the data\n> structure, so I'm not too thrilled with it.\n>\n\nThe commit message in bdeb2c4ec mentions that:\n\n\"\nThis also changes the behavior slightly: Before, the field name was\nskipped if the length was zero. Now it prints the field name even in\nthat case. This is more consistent with how other array fields are\nhandled.\n\"\n\nSo I suppose we are trying to print the field name even if the length is\nzero. Should we keep this behavior in the fix?\n\nThanks\nRichard\n\nOn Tue, Apr 19, 2022 at 2:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\nA semantics-preserving conversion would have looked something like\n\n if (node->sortgrouprefs)\n WRITE_INDEX_ARRAY(sortgrouprefs, list_length(node->exprs));\n\nI suppose that Peter was trying to remove special cases from the\noutfuncs.c code, but do we want to put this one back? Richard's\nproposal would not accurately reflect the contents of the data\nstructure, so I'm not too thrilled with it.The commit message in bdeb2c4ec mentions that:\"This also changes the behavior slightly: Before, the field name wasskipped if the length was zero. Now it prints the field name even inthat case. This is more consistent with how other array fields arehandled.\"So I suppose we are trying to print the field name even if the length iszero. Should we keep this behavior in the fix?ThanksRichard",
"msg_date": "Tue, 19 Apr 2022 10:51:49 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix NULL pointer reference in _outPathTarget()"
},
{
"msg_contents": "On 2022-Apr-18, Tom Lane wrote:\n\n> I suppose that Peter was trying to remove special cases from the\n> outfuncs.c code, but do we want to put this one back? Richard's\n> proposal would not accurately reflect the contents of the data\n> structure, so I'm not too thrilled with it.\n\nYeah -- looking at the script to generate node support functions[1], it\nmight be better go back to the original formulation (i.e., your proposed\npatch), and then use a \"path_hack4\" for this struct member, which looks\nsimilar to other hacks already there for other cases that require\nbespoke handling.\n\n[1] https://postgr.es/m/bee9fdb0-cd10-5fdb-3027-c4b5a240bc74@enterprisedb.com\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 19 Apr 2022 11:53:33 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Fix NULL pointer reference in _outPathTarget()"
},
{
"msg_contents": "On 18.04.22 09:35, Richard Guo wrote:\n> The array sortgrouprefs[] inside PathTarget might be NULL if we have not\n> identified sort/group columns in this tlist. In that case we would have\n> a NULL pointer reference in _outPathTarget() when trying to print\n> sortgrouprefs[] with WRITE_INDEX_ARRAY as we are using the length of\n> PathTarget->exprs as its array length.\n\nDo you have a test case that triggers this issue?\n\n\n",
"msg_date": "Wed, 20 Apr 2022 18:02:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix NULL pointer reference in _outPathTarget()"
},
{
"msg_contents": "On 18.04.22 20:53, Tom Lane wrote:\n> A semantics-preserving conversion would have looked something like\n> \n> if (node->sortgrouprefs)\n> WRITE_INDEX_ARRAY(sortgrouprefs, list_length(node->exprs));\n> \n> I suppose that Peter was trying to remove special cases from the\n> outfuncs.c code, but do we want to put this one back? Richard's\n> proposal would not accurately reflect the contents of the data\n> structure, so I'm not too thrilled with it.\n\nI think we could put the if (node->fldname) inside the WRITE_INDEX_ARRAY \nmacro.\n\n\n",
"msg_date": "Wed, 20 Apr 2022 18:04:00 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix NULL pointer reference in _outPathTarget()"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 18.04.22 20:53, Tom Lane wrote:\n>> A semantics-preserving conversion would have looked something like\n>> \tif (node->sortgrouprefs)\n>> \t\tWRITE_INDEX_ARRAY(sortgrouprefs, list_length(node->exprs));\n\n> I think we could put the if (node->fldname) inside the WRITE_INDEX_ARRAY \n> macro.\n\nYeah, that's another way to do it. I think though that the unresolved\nquestion is whether or not we want the field name to appear in the output\nwhen the field is null. I believe that I intentionally made it not appear\noriginally, so that that case could readily be distinguished. You could\nargue that that would complicate life greatly for a _readPathTarget()\nfunction, which is true, but I don't foresee that we'll need one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Apr 2022 12:53:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix NULL pointer reference in _outPathTarget()"
},
{
"msg_contents": "On Thu, Apr 21, 2022 at 12:02 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 18.04.22 09:35, Richard Guo wrote:\n> > The array sortgrouprefs[] inside PathTarget might be NULL if we have not\n> > identified sort/group columns in this tlist. In that case we would have\n> > a NULL pointer reference in _outPathTarget() when trying to print\n> > sortgrouprefs[] with WRITE_INDEX_ARRAY as we are using the length of\n> > PathTarget->exprs as its array length.\n>\n> Do you have a test case that triggers this issue?\n>\n\nI don't have a test case. :( I triggered this issue while debugging\nwith gdb and I was printing a certain 'pathlist' with nodeToString().\n\nIf it helps, here is the backtrace:\n\n#0 in _outPathTarget (str=0x7fff683d7e50, node=0x56011e5cece0) at\noutfuncs.c:2672\n#1 in outNode (str=0x7fff683d7e50, obj=0x56011e5cece0) at outfuncs.c:4490\n#2 in _outPathInfo (str=0x7fff683d7e50, node=0x56011e5f3408) at\noutfuncs.c:1922\n#3 in _outPath (str=0x7fff683d7e50, node=0x56011e5f3408) at outfuncs.c:1957\n#4 in outNode (str=0x7fff683d7e50, obj=0x56011e5f3408) at outfuncs.c:4358\n#5 in _outProjectionPath (str=0x7fff683d7e50, node=0x56011e5f3890) at\noutfuncs.c:2154\n#6 in outNode (str=0x7fff683d7e50, obj=0x56011e5f3890) at outfuncs.c:4409\n#7 in _outAggPath (str=0x7fff683d7e50, node=0x56011e5f4550) at\noutfuncs.c:2224\n#8 in outNode (str=0x7fff683d7e50, obj=0x56011e5f4550) at outfuncs.c:4427\n#9 in _outGatherPath (str=0x7fff683d7e50, node=0x56011e5f45e8) at\noutfuncs.c:2142\n#10 in outNode (str=0x7fff683d7e50, obj=0x56011e5f45e8) at outfuncs.c:4406\n#11 in _outList (str=0x7fff683d7e50, node=0x56011e5f4680) at outfuncs.c:227\n#12 in outNode (str=0x7fff683d7e50, obj=0x56011e5f4680) at outfuncs.c:4028\n#13 in nodeToString (obj=0x56011e5f4680) at outfuncs.c:4782\n\n\nThanks\nRichard\n\nOn Thu, Apr 21, 2022 at 12:02 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:On 18.04.22 09:35, Richard Guo wrote:\n> The array 
sortgrouprefs[] inside PathTarget might be NULL if we have not\n> identified sort/group columns in this tlist. In that case we would have\n> a NULL pointer reference in _outPathTarget() when trying to print\n> sortgrouprefs[] with WRITE_INDEX_ARRAY as we are using the length of\n> PathTarget->exprs as its array length.\n\nDo you have a test case that triggers this issue?I don't have a test case. :( I triggered this issue while debuggingwith gdb and I was printing a certain 'pathlist' with nodeToString().If it helps, here is the backtrace:#0 in _outPathTarget (str=0x7fff683d7e50, node=0x56011e5cece0) at outfuncs.c:2672#1 in outNode (str=0x7fff683d7e50, obj=0x56011e5cece0) at outfuncs.c:4490#2 in _outPathInfo (str=0x7fff683d7e50, node=0x56011e5f3408) at outfuncs.c:1922#3 in _outPath (str=0x7fff683d7e50, node=0x56011e5f3408) at outfuncs.c:1957#4 in outNode (str=0x7fff683d7e50, obj=0x56011e5f3408) at outfuncs.c:4358#5 in _outProjectionPath (str=0x7fff683d7e50, node=0x56011e5f3890) at outfuncs.c:2154#6 in outNode (str=0x7fff683d7e50, obj=0x56011e5f3890) at outfuncs.c:4409#7 in _outAggPath (str=0x7fff683d7e50, node=0x56011e5f4550) at outfuncs.c:2224#8 in outNode (str=0x7fff683d7e50, obj=0x56011e5f4550) at outfuncs.c:4427#9 in _outGatherPath (str=0x7fff683d7e50, node=0x56011e5f45e8) at outfuncs.c:2142#10 in outNode (str=0x7fff683d7e50, obj=0x56011e5f45e8) at outfuncs.c:4406#11 in _outList (str=0x7fff683d7e50, node=0x56011e5f4680) at outfuncs.c:227#12 in outNode (str=0x7fff683d7e50, obj=0x56011e5f4680) at outfuncs.c:4028#13 in nodeToString (obj=0x56011e5f4680) at outfuncs.c:4782ThanksRichard",
"msg_date": "Thu, 21 Apr 2022 12:25:11 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix NULL pointer reference in _outPathTarget()"
},
{
"msg_contents": "\nOn 20.04.22 18:53, Tom Lane wrote:\n>> I think we could put the if (node->fldname) inside the WRITE_INDEX_ARRAY\n>> macro.\n> \n> Yeah, that's another way to do it. I think though that the unresolved\n> question is whether or not we want the field name to appear in the output\n> when the field is null. I believe that I intentionally made it not appear\n> originally, so that that case could readily be distinguished. You could\n> argue that that would complicate life greatly for a _readPathTarget()\n> function, which is true, but I don't foresee that we'll need one.\n\nWe could adapt the convention to print NULL values as \"<>\", like\n\ndiff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c\nindex 6a02f81ad5..4eb5be3787 100644\n--- a/src/backend/nodes/outfuncs.c\n+++ b/src/backend/nodes/outfuncs.c\n@@ -127,8 +127,11 @@ static void outChar(StringInfo str, char c);\n #define WRITE_INDEX_ARRAY(fldname, len) \\\n do { \\\n appendStringInfoString(str, \" :\" CppAsString(fldname) \" \"); \\\n- for (int i = 0; i < len; i++) \\\n- appendStringInfo(str, \" %u\", node->fldname[i]); \\\n+ if (node->fldname) \\\n+ for (int i = 0; i < len; i++) \\\n+ appendStringInfo(str, \" %u\", node->fldname[i]); \\\n+ else \\\n+ appendStringInfoString(str, \"<>\"); \\\n } while(0)\n\n #define WRITE_INT_ARRAY(fldname, len) \\\n\nThere is currently no read function for this that would need to be \nchanged. But looking at peers such as WRITE_INT_ARRAY/READ_INT_ARRAY it \nshouldn't be hard to sort out if it became necessary.\n\n\n",
"msg_date": "Fri, 22 Apr 2022 16:16:02 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix NULL pointer reference in _outPathTarget()"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 20.04.22 18:53, Tom Lane wrote:\n>> Yeah, that's another way to do it. I think though that the unresolved\n>> question is whether or not we want the field name to appear in the output\n>> when the field is null. I believe that I intentionally made it not appear\n>> originally, so that that case could readily be distinguished. You could\n>> argue that that would complicate life greatly for a _readPathTarget()\n>> function, which is true, but I don't foresee that we'll need one.\n\n> We could adapt the convention to print NULL values as \"<>\", like\n\nWorks for me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Apr 2022 10:18:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix NULL pointer reference in _outPathTarget()"
},
{
"msg_contents": "On 22.04.22 16:18, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 20.04.22 18:53, Tom Lane wrote:\n>>> Yeah, that's another way to do it. I think though that the unresolved\n>>> question is whether or not we want the field name to appear in the output\n>>> when the field is null. I believe that I intentionally made it not appear\n>>> originally, so that that case could readily be distinguished. You could\n>>> argue that that would complicate life greatly for a _readPathTarget()\n>>> function, which is true, but I don't foresee that we'll need one.\n> \n>> We could adapt the convention to print NULL values as \"<>\", like\n> \n> Works for me.\n\ndone\n\n\n",
"msg_date": "Wed, 27 Apr 2022 09:17:35 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix NULL pointer reference in _outPathTarget()"
}
] |
[
{
"msg_contents": "To whom it may concern:\n\nI'm Haitao Wang, interested in participating in GSOC 2022 PostgreSQL projects.\nAttached is my proposal. Please check!\n \nBest Regards,\nHaitao Wang",
"msg_date": "Mon, 18 Apr 2022 16:16:46 +0800 (GMT+08:00)",
"msg_from": "wanghaitao0125@zju.edu.cn",
"msg_from_op": true,
"msg_subject": "GSoC: pgagroal: SCRAM-SHA-256-PLUS support (2022)"
},
{
"msg_contents": "Hi,\n\nOn 4/18/22 04:16, wanghaitao0125@zju.edu.cn wrote:\n> I'm Haitao Wang, interested in participating in GSOC 2022 PostgreSQL projects.\n> Attached is my proposal. Please check!\n\nThanks for your proposal to Google Summer of Code 2022 !\n\n\nWe'll follow up off-list to get this finalized.\n\n\nBest regards,\n\n Jesper\n\n\n\n\n",
"msg_date": "Mon, 18 Apr 2022 07:10:46 -0400",
"msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: GSoC: pgagroal: SCRAM-SHA-256-PLUS support (2022)"
}
] |
[
{
"msg_contents": "subscribe pgsql-hackers \nsubscribe pgsql-hackers",
"msg_date": "Mon, 18 Apr 2022 19:37:16 +0800",
"msg_from": "\"=?UTF-8?B?5rGq5rSL?=\" <ocean.wy@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?c3Vic2NyaWJlIGhhY2tlcnM=?="
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 6:54 PM 汪洋 <ocean.wy@alibaba-inc.com> wrote:\n>\n> subscribe pgsql-hackers\n\nHi, this mailing list is not managed by subject line. To subscribe, please visit\n\nhttps://lists.postgresql.org/\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Apr 2022 23:55:06 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: subscribe hackers"
}
] |
[
{
"msg_contents": "Hi, hackers\n\nI found we defined PG_BINARY_R/W/A macros for opening files, however,\nthere are some places use the constant strings. IMO we should use\nthose macros instead of constant strings. Here is a patch for it.\nAny thoughts?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Mon, 18 Apr 2022 21:36:01 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Replace open mode with PG_BINARY_R/W/A macros"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> I found we defined PG_BINARY_R/W/A macros for opening files, however,\n> there are some places use the constant strings. IMO we should use\n> those macros instead of constant strings. Here is a patch for it.\n> Any thoughts?\n\nA lot of these changes look wrong to me: they are substituting \"rb\" for\n\"r\", etc, in places that mean to read text files. You have to think\nabout the Windows semantics.\n\nIf you think any of those changes are correct, then they are bug fixes\nthat need to be considered separately from cosmetic tidying.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Apr 2022 10:41:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Replace open mode with PG_BINARY_R/W/A macros"
},
{
"msg_contents": "\nOn Mon, 18 Apr 2022 at 22:41, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> I found we defined PG_BINARY_R/W/A macros for opening files, however,\n>> there are some places use the constant strings. IMO we should use\n>> those macros instead of constant strings. Here is a patch for it.\n>> Any thoughts?\n>\n> A lot of these changes look wrong to me: they are substituting \"rb\" for\n> \"r\", etc, in places that mean to read text files. You have to think\n> about the Windows semantics.\n>\n\nI do this substituting, since the comment says it can be used for opening\ntext files. Maybe I misunderstand the comment.\n\n\t/*\n\t * NOTE: this is also used for opening text files.\n\t * WIN32 treats Control-Z as EOF in files opened in text mode.\n\t * Therefore, we open files in binary mode on Win32 so we can read\n\t * literal control-Z. The other affect is that we see CRLF, but\n\t * that is OK because we can already handle those cleanly.\n\t */\n\t#if defined(WIN32) || defined(__CYGWIN__)\n\t#define PG_BINARY O_BINARY\n\t#define PG_BINARY_A \"ab\"\n\t#define PG_BINARY_R \"rb\"\n\t#define PG_BINARY_W \"wb\"\n\t#else\n\t#define PG_BINARY 0\n\t#define PG_BINARY_A \"a\"\n\t#define PG_BINARY_R \"r\"\n\t#define PG_BINARY_W \"w\"\n\t#endif\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Tue, 19 Apr 2022 13:29:18 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Replace open mode with PG_BINARY_R/W/A macros"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 01:29:18PM +0800, Japin Li wrote:\n> On Mon, 18 Apr 2022 at 22:41, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Japin Li <japinli@hotmail.com> writes:\n>>> I found we defined PG_BINARY_R/W/A macros for opening files, however,\n>>> there are some places use the constant strings. IMO we should use\n>>> those macros instead of constant strings. Here is a patch for it.\n>>> Any thoughts?\n>>\n>> A lot of these changes look wrong to me: they are substituting \"rb\" for\n>> \"r\", etc, in places that mean to read text files. You have to think\n>> about the Windows semantics.\n\nThis reminded me of the business from a couple of years ago in\npgwin32_open() to enforce the text mode in the frontend if O_BINARY is\nnot specified.\n\n> I do this substituting, since the comment says it can be used for opening\n> text files. Maybe I misunderstand the comment.\n\n'b' is normally ignored on POSIX platforms (per the Linux man page for\nfopen), but your patch has as effect to silently switch to binary mode\non Windows all those code paths. See _setmode() in pgwin32_open(),\nthat changes the behavior of CRLF when reading or writing such files,\nas described here:\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/setmode?view=msvc-170\n\nThe change in adminpack.c would be actually as 'b' should be ignored\non non-WIN32, but Tom's point is to not take lightly all the others.\n--\nMichael",
"msg_date": "Tue, 19 Apr 2022 15:14:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Replace open mode with PG_BINARY_R/W/A macros"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> On Mon, 18 Apr 2022 at 22:41, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> A lot of these changes look wrong to me: they are substituting \"rb\" for\n>> \"r\", etc, in places that mean to read text files. You have to think\n>> about the Windows semantics.\n\n> I do this substituting, since the comment says it can be used for opening\n> text files. Maybe I misunderstand the comment.\n\nI think the comment's at best misleading. See e.g. 66f8687a8.\nIt might be okay to use \"rb\" to read a text file when there\nis actually \\r-stripping logic present, but you need to check\nthat. Using \"wb\" to write a text file is flat wrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Apr 2022 02:20:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Replace open mode with PG_BINARY_R/W/A macros"
},
{
"msg_contents": "\nOn Tue, 19 Apr 2022 at 14:14, Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Apr 19, 2022 at 01:29:18PM +0800, Japin Li wrote:\n>> On Mon, 18 Apr 2022 at 22:41, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Japin Li <japinli@hotmail.com> writes:\n>>>> I found we defined PG_BINARY_R/W/A macros for opening files, however,\n>>>> there are some places use the constant strings. IMO we should use\n>>>> those macros instead of constant strings. Here is a patch for it.\n>>>> Any thoughts?\n>>>\n>>> A lot of these changes look wrong to me: they are substituting \"rb\" for\n>>> \"r\", etc, in places that mean to read text files. You have to think\n>>> about the Windows semantics.\n>\n> This reminded me of the business from a couple of years ago in\n> pgwin32_open() to enforce the text mode in the frontend if O_BINARY is\n> not specified.\n>\n>> I do this substituting, since the comment says it can be used for opening\n>> text files. Maybe I misunderstand the comment.\n>\n> 'b' is normally ignored on POSIX platforms (per the Linux man page for\n> fopen), but your patch has as effect to silently switch to binary mode\n> on Windows all those code paths. See _setmode() in pgwin32_open(),\n> that changes the behavior of CRLF when reading or writing such files,\n> as described here:\n> https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/setmode?view=msvc-170\n>\n> The change in adminpack.c would be actually as 'b' should be ignored\n> on non-WIN32, but Tom's point is to not take lightly all the others.\n\nOh, I understand your points. Thanks for the explanation.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Tue, 19 Apr 2022 15:53:36 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Replace open mode with PG_BINARY_R/W/A macros"
},
{
"msg_contents": "\nOn Tue, 19 Apr 2022 at 14:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> On Mon, 18 Apr 2022 at 22:41, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> A lot of these changes look wrong to me: they are substituting \"rb\" for\n>>> \"r\", etc, in places that mean to read text files. You have to think\n>>> about the Windows semantics.\n>\n>> I do this substituting, since the comment says it can be used for opening\n>> text files. Maybe I misunderstand the comment.\n>\n> I think the comment's at best misleading. See e.g. 66f8687a8.\n> It might be okay to use \"rb\" to read a text file when there\n> is actually \\r-stripping logic present, but you need to check\n> that. Using \"wb\" to write a text file is flat wrong.\n>\n\nThanks for the detail explanation. Should we remove the misleading comment?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Tue, 19 Apr 2022 15:56:25 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Replace open mode with PG_BINARY_R/W/A macros"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> On Tue, 19 Apr 2022 at 14:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think the comment's at best misleading. See e.g. 66f8687a8.\n>> It might be okay to use \"rb\" to read a text file when there\n>> is actually \\r-stripping logic present, but you need to check\n>> that. Using \"wb\" to write a text file is flat wrong.\n\n> Thanks for the detail explanation. Should we remove the misleading comment?\n\nWe should rewrite it, not just remove it. But I'm not 100% sure\nwhat to say instead. I wonder whether the comment's claims about\ncontrol-Z processing still apply on modern Windows.\n\nAnother question is whether we actually like the current shape of\nthe code. I can see at least two different directions we might\nprefer to the status quo:\n\n* Invent '#define PG_TEXT_R \"r\"' and so on, and use those in the\ncalls that currently use plain \"r\" etc, establishing a project\npolicy that you should use one of these six macros and never the\nunderlying strings directly. This perhaps has some advantages\nin greppability and clarity of intent, but I can't help wondering\nif it's mostly obsessive-compulsiveness.\n\n* In the other direction, decide that the PG_BINARY_X macros are\noffering no benefit at all and just rip 'em out, writing \"rb\" and\nso on in their place. POSIX specifies that the character \"b\" has\nno effect on Unix-oid systems, and it has said that for thirty years\nnow, so we do not really need the platform dependency that presently\nexists in the macro definitions. The presence or absence of \"b\"\nwould serve fine as an indicator of intent, and there would be one\nless PG-specific coding convention to remember.\n\nOr maybe it's fine as-is. Any sort of wide-ranging change like this\ncreates hazards for back-patching, so we shouldn't do it lightly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Apr 2022 10:21:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Replace open mode with PG_BINARY_R/W/A macros"
},
{
"msg_contents": "\nOn Tue, 19 Apr 2022 at 22:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> On Tue, 19 Apr 2022 at 14:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I think the comment's at best misleading. See e.g. 66f8687a8.\n>>> It might be okay to use \"rb\" to read a text file when there\n>>> is actually \\r-stripping logic present, but you need to check\n>>> that. Using \"wb\" to write a text file is flat wrong.\n>\n>> Thanks for the detail explanation. Should we remove the misleading comment?\n>\n> We should rewrite it, not just remove it. But I'm not 100% sure\n> what to say instead. I wonder whether the comment's claims about\n> control-Z processing still apply on modern Windows.\n>\n\nIt might be true [1].\n\n[1] https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/fopen-wfopen?view=msvc-170\n\n> Another question is whether we actually like the current shape of\n> the code. I can see at least two different directions we might\n> prefer to the status quo:\n>\n> * Invent '#define PG_TEXT_R \"r\"' and so on, and use those in the\n> calls that currently use plain \"r\" etc, establishing a project\n> policy that you should use one of these six macros and never the\n> underlying strings directly. This perhaps has some advantages\n> in greppability and clarity of intent, but I can't help wondering\n> if it's mostly obsessive-compulsiveness.\n>\n> * In the other direction, decide that the PG_BINARY_X macros are\n> offering no benefit at all and just rip 'em out, writing \"rb\" and\n> so on in their place. POSIX specifies that the character \"b\" has\n> no effect on Unix-oid systems, and it has said that for thirty years\n> now, so we do not really need the platform dependency that presently\n> exists in the macro definitions. 
The presence or absence of \"b\"\n> would serve fine as an indicator of intent, and there would be one\n> less PG-specific coding convention to remember.\n>\n\nI'm incline the second direction if we need to change this.\n\n> Or maybe it's fine as-is. Any sort of wide-ranging change like this\n> creates hazards for back-patching, so we shouldn't do it lightly.\n>\n\nAgreed. Thanks again for the explanation.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Wed, 20 Apr 2022 08:50:22 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Replace open mode with PG_BINARY_R/W/A macros"
},
{
"msg_contents": "\nOn 19.04.22 16:21, Tom Lane wrote:\n> * In the other direction, decide that the PG_BINARY_X macros are\n> offering no benefit at all and just rip 'em out, writing \"rb\" and\n> so on in their place. POSIX specifies that the character \"b\" has\n> no effect on Unix-oid systems, and it has said that for thirty years\n> now, so we do not really need the platform dependency that presently\n> exists in the macro definitions. The presence or absence of \"b\"\n> would serve fine as an indicator of intent, and there would be one\n> less PG-specific coding convention to remember.\n\nI can only imagine that there must have been some Unix systems that did \nnot understand the \"binary\" APIs required for Windows. (For example, \nneither the Linux nor the macOS open(2) man page mentions O_BINARY.) \nOtherwise, these macros don't make any sense, because then you could \njust write the thing directly on all platforms.\n\n\n",
"msg_date": "Wed, 20 Apr 2022 22:11:20 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Replace open mode with PG_BINARY_R/W/A macros"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 19.04.22 16:21, Tom Lane wrote:\n>> * In the other direction, decide that the PG_BINARY_X macros are\n>> offering no benefit at all and just rip 'em out, writing \"rb\" and\n>> so on in their place. POSIX specifies that the character \"b\" has\n>> no effect on Unix-oid systems, and it has said that for thirty years\n>> now, so we do not really need the platform dependency that presently\n>> exists in the macro definitions. The presence or absence of \"b\"\n>> would serve fine as an indicator of intent, and there would be one\n>> less PG-specific coding convention to remember.\n\n> I can only imagine that there must have been some Unix systems that did \n> not understand the \"binary\" APIs required for Windows. (For example, \n> neither the Linux nor the macOS open(2) man page mentions O_BINARY.) \n> Otherwise, these macros don't make any sense, because then you could \n> just write the thing directly on all platforms.\n\nPG_BINARY is useful for open(). It's the PG_BINARY_R/W/A macros for\nfopen() that are redundant per POSIX. Possibly someone generalized\ninappropriately; or maybe long ago we supported some platform that\nrejected the \"b\" option?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Apr 2022 16:29:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Replace open mode with PG_BINARY_R/W/A macros"
},
{
"msg_contents": "On 20.04.22 22:29, Tom Lane wrote:\n> PG_BINARY is useful for open(). It's the PG_BINARY_R/W/A macros for\n> fopen() that are redundant per POSIX. Possibly someone generalized\n> inappropriately; or maybe long ago we supported some platform that\n> rejected the \"b\" option?\n\nI think the latter was the case. I doubt it's still a problem.\n\nI see some of the new code in pg_basebackup uses \"wb\" directly. It \nwould probably be good to fix that to be consistent one way or the \nother. I vote for getting rid of the macros.\n\n\n\n",
"msg_date": "Thu, 21 Apr 2022 22:25:13 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Replace open mode with PG_BINARY_R/W/A macros"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 20.04.22 22:29, Tom Lane wrote:\n>> PG_BINARY is useful for open(). It's the PG_BINARY_R/W/A macros for\n>> fopen() that are redundant per POSIX. Possibly someone generalized\n>> inappropriately; or maybe long ago we supported some platform that\n>> rejected the \"b\" option?\n\n> I think the latter was the case. I doubt it's still a problem.\n\nWe could find that out with little effort, at least for machines in the\nbuildfarm, by modifying c.h to use the form with \"b\" always.\n\n> I see some of the new code in pg_basebackup uses \"wb\" directly. It \n> would probably be good to fix that to be consistent one way or the \n> other. I vote for getting rid of the macros.\n\nYeah, I suspect there have been other inconsistencies for years :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Apr 2022 16:38:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Replace open mode with PG_BINARY_R/W/A macros"
},
{
"msg_contents": "\nOn Fri, 22 Apr 2022 at 04:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 20.04.22 22:29, Tom Lane wrote:\n>>> PG_BINARY is useful for open(). It's the PG_BINARY_R/W/A macros for\n>>> fopen() that are redundant per POSIX. Possibly someone generalized\n>>> inappropriately; or maybe long ago we supported some platform that\n>>> rejected the \"b\" option?\n>\n>> I think the latter was the case. I doubt it's still a problem.\n>\n> We could find that out with little effort, at least for machines in the\n> buildfarm, by modifying c.h to use the form with \"b\" always.\n>\n\nI think we should also consider the popen() (see: OpenPipeStream() function),\non the Windows, it can use \"b\", however, for linux, it might be not right.\nSo, modifying c.h to use the form with \"b\" isn't always right.\n\n[1] https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/popen-wpopen?view=msvc-170\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 22 Apr 2022 09:51:03 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Replace open mode with PG_BINARY_R/W/A macros"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> I think we should also consider the popen() (see: OpenPipeStream() function),\n> on the Windows, it can use \"b\", however, for linux, it might be not right.\n\nOh, ugh ... POSIX says for popen():\n\n The behavior of popen() is specified for values of mode of r and\n w. Other modes such as rb and wb might be supported by specific\n implementations, but these would not be portable features. Note\n that historical implementations of popen() only check to see if\n the first character of mode is r. Thus, a mode of robert the robot\n would be treated as mode r, and a mode of anything else would be\n treated as mode w.\n\nMaybe it's best to leave well enough alone here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Apr 2022 23:21:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Replace open mode with PG_BINARY_R/W/A macros"
}
] |
[
{
"msg_contents": "Dear concerned,\n\nI hope you are doing well.\n\nI am Mohammad Zain Abbas, currently enrolled in Erasmus Mundus (BDMA)\nprogram. I would like you to have a look at my proposal for the \"*Database\nLoad Stress Benchmark\" *project.\n\nLink:\nhttps://docs.google.com/document/d/1TThl7ODGD301GkjITY2k4OU88fZIhc1XvJYGqPCnOns/edit?usp=sharing\n\nI would appreciate any feedback or guidance that you are able to provide.\n\nThank you!\n\nRegards,\n\nMohammad Zain Abbas\n\nDear concerned,I hope you are doing well.I am Mohammad Zain Abbas, currently enrolled in Erasmus Mundus (BDMA) program. I would like you to have a look at my proposal for the \"Database Load Stress Benchmark\" project.Link:https://docs.google.com/document/d/1TThl7ODGD301GkjITY2k4OU88fZIhc1XvJYGqPCnOns/edit?usp=sharingI would appreciate any feedback or guidance that you are able to provide. Thank you!Regards,Mohammad Zain Abbas",
"msg_date": "Mon, 18 Apr 2022 15:40:23 +0200",
"msg_from": "Mohammad Zain Abbas <mohammadzainabbas@gmail.com>",
"msg_from_op": true,
"msg_subject": "GSoC: Database Load Stress Benchmark (2022)"
},
{
"msg_contents": "Hi!\n\nOn Mon, Apr 18, 2022 at 03:40:23PM +0200, Mohammad Zain Abbas wrote:\n> Dear concerned,\n> \n> I hope you are doing well.\n> \n> I am Mohammad Zain Abbas, currently enrolled in Erasmus Mundus (BDMA)\n> program. I would like you to have a look at my proposal for the \"*Database\n> Load Stress Benchmark\" *project.\n> \n> Link:\n> https://docs.google.com/document/d/1TThl7ODGD301GkjITY2k4OU88fZIhc1XvJYGqPCnOns/edit?usp=sharing\n> \n> I would appreciate any feedback or guidance that you are able to provide.\n\nI think you've covered all the bases here. Good luck!\n\nRegards,\nMark\n\n\n",
"msg_date": "Tue, 19 Apr 2022 16:28:56 +0000",
"msg_from": "Mark Wong <markwkm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GSoC: Database Load Stress Benchmark (2022)"
}
] |
[
{
"msg_contents": "Commit 7c91a0364f standardized the approach we take to estimating\npg_class.reltuples, so that everybody agrees on what that means.\nFollow-up work by commit 3d351d91 defined a pg_class.reltuples of -1\nas \"unknown, probably never vacuumed\".\n\nThe former commit added this code and comment to vacuumlazy.c:\n\n /*\n * Now we can provide a better estimate of total number of surviving\n * tuples (we assume indexes are more interested in that than in the\n * number of nominally live tuples).\n */\n ivinfo.num_heap_tuples = vacrelstats->new_rel_tuples;\n ivinfo.strategy = vac_strategy;\n\nI don't see why it makes sense to treat indexes differently here. Why\nallow the special case? Why include dead tuples like this?\n\nWe make a general assumption that pg_class.reltuples only includes\nlive tuples, which this code contravenes. It's quite clear that\nindexes are no exception to the general rule, since CREATE INDEX quite\ndeliberately does reltuples accounting in a way that fits with the\nusual definition (live tuples only), per comments in\nheapam_index_build_range_scan. One of these code paths must be doing\nit wrong -- I think it's vacuumlazy.c.\n\nThis also confuses the index AM definitions. Whenever we call\nambulkdelete routines, IndexVacuumInfo.num_heap_tuples will always\ncome from the heap relation's existing pg_class.reltuples, which could\neven be -1 -- so clearly its value can only be a count of live tuples.\nOn the other hand IndexVacuumInfo.num_heap_tuples might include some\ndead tuples when we call amvacuumcleanup routines, since (as shown)\nthe value comes from vacuumlazy.c's vacrelstats->new_rel_tuples. It\nwould be more logical if IndexVacuumInfo.num_heap_tuples was always\nthe pg_class.reltuples for the table (either the original/existing\nvalue, or the value that it's just about to be updated to).\n\nThat said, I can see why we wouldn't want to allow pg_class.reltuples\nto ever be -1 in the case of an index. 
So I think we should bring\nvacuumlazy.c in line with everything else here, without allowing that\ncase. I believe that the \"pg_class.reltuples is -1 even after a\nVACUUM\" case is completely impossible following the Postgres 15 work\non VACUUM, but we should still clamp for safety in\nupdate_relstats_all_indexes (though not in the amvacuumcleanup path).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 18 Apr 2022 12:04:43 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Why does pg_class.reltuples count only live tuples in indexes (after\n VACUUM runs)?"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Commit 7c91a0364f standardized the approach we take to estimating\n> pg_class.reltuples, so that everybody agrees on what that means.\n> Follow-up work by commit 3d351d91 defined a pg_class.reltuples of -1\n> as \"unknown, probably never vacuumed\".\n\n> The former commit added this code and comment to vacuumlazy.c:\n\n> /*\n> * Now we can provide a better estimate of total number of surviving\n> * tuples (we assume indexes are more interested in that than in the\n> * number of nominally live tuples).\n> */\n> ivinfo.num_heap_tuples = vacrelstats->new_rel_tuples;\n> ivinfo.strategy = vac_strategy;\n\n> I don't see why it makes sense to treat indexes differently here. Why\n> allow the special case? Why include dead tuples like this?\n\nThe index has presumably got entries corresponding to dead tuples,\nso that the number of entries it has ought to be more or less\nnum_heap_tuples, not reltuples (with discrepancies for concurrent\ninsertions of course).\n\n> We make a general assumption that pg_class.reltuples only includes\n> live tuples, which this code contravenes.\n\nHuh? This is not pg_class.reltuples. If an index AM wants that, it\nknows where to find it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Apr 2022 15:15:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_class.reltuples count only live tuples in indexes\n (after VACUUM runs)?"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 12:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I don't see why it makes sense to treat indexes differently here. Why\n> > allow the special case? Why include dead tuples like this?\n>\n> The index has presumably got entries corresponding to dead tuples,\n> so that the number of entries it has ought to be more or less\n> num_heap_tuples, not reltuples (with discrepancies for concurrent\n> insertions of course).\n\nI guess that pg_class.reltuples has to include some \"recently dead\"\ntuples in the case of an index, just because of the impracticality of\naccurately counting index tuples while knowing if they're dead or\nalive. However, it would be practical to update pg_class.reltuples to\na value \"IndexBulkDeleteResult.num_index_tuples -\nrecently_dead_tuples\" in update_relstats_all_indexes to compensate.\nThen everything is consistent.\n\n> > We make a general assumption that pg_class.reltuples only includes\n> > live tuples, which this code contravenes.\n>\n> Huh? This is not pg_class.reltuples. If an index AM wants that, it\n> knows where to find it.\n\nIt's not, but it is how we calculate\nIndexBulkDeleteResult.num_index_tuples, which is related. Granted,\nthat won't be used to update pg_class for the index in the case where\nit's just an estimate anyway.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 18 Apr 2022 12:27:13 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Why does pg_class.reltuples count only live tuples in indexes\n (after VACUUM runs)?"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Mon, Apr 18, 2022 at 12:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Huh? This is not pg_class.reltuples. If an index AM wants that, it\n>> knows where to find it.\n\n> It's not, but it is how we calculate\n> IndexBulkDeleteResult.num_index_tuples, which is related.\n\nWell, the number of entries in an index needn't be exactly the\nsame as the number in the underlying heap. If we're setting\nan index's reltuples to the number of actual index entries\nincluding dead entries, I don't have a problem with that:\nunlike the case for table reltuples, it's not going to result\nin bad estimates of the number of rows resulting from a query.\nIf the planner looks at index reltuples at all, it's doing so\nfor cost estimation purposes, where the count including dead\nentries is probably the right thing to use.\n\nIf you want to make this cleaner, maybe there's a case for\nsplitting reltuples into two columns. But then index AMs\nwould be on the hook to determine how many of their entries\nare live, which is not really an index's concern.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Apr 2022 15:41:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_class.reltuples count only live tuples in indexes\n (after VACUUM runs)?"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 12:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If the planner looks at index reltuples at all, it's doing so\n> for cost estimation purposes, where the count including dead\n> entries is probably the right thing to use.\n\nThen why does heapam_index_build_range_scan do it the other way around?\n\nI think that it probably doesn't matter that much in practice. The\ninconsistency should be noted in update_relstats_all_indexes, though.\n\n> If you want to make this cleaner, maybe there's a case for\n> splitting reltuples into two columns. But then index AMs\n> would be on the hook to determine how many of their entries\n> are live, which is not really an index's concern.\n\nThe main concern behind this is that we're using\nvacrel->new_rel_tuples for the IndexVacuumInfo.num_heap_tuples value\nin amvacuumcleanup (but not in ambulkdelete), which is calculated\ntowards the end of lazy_scan_heap, like so:\n\n /*\n * Also compute the total number of surviving heap entries. In the\n * (unlikely) scenario that new_live_tuples is -1, take it as zero.\n */\n vacrel->new_rel_tuples =\n Max(vacrel->new_live_tuples, 0) + vacrel->recently_dead_tuples +\n vacrel->missed_dead_tuples;\n\nI think that this doesn't really belong here; new_rel_tuples should\nonly be used for VACUUM VERBOSE/server log output, once we return to\nheap_vacuum_rel from lazy_scan_heap. We should use\nvacrel->new_live_tuples as our IndexVacuumInfo.num_heap_tuples value\nin the amvacuumcleanup path (instead of new_rel_tuples). That way the\nrule about IndexVacuumInfo.num_heap_tuples is simple: it's always\ntaken from pg_class.reltuples (for the heap rel). Either the existing\nvalue, or the new value.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 18 Apr 2022 12:54:36 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Why does pg_class.reltuples count only live tuples in indexes\n (after VACUUM runs)?"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> I think that this doesn't really belong here; new_rel_tuples should\n> only be used for VACUUM VERBOSE/server log output, once we return to\n> heap_vacuum_rel from lazy_scan_heap. We should use\n> vacrel->new_live_tuples as our IndexVacuumInfo.num_heap_tuples value\n> in the amvacuumcleanup path (instead of new_rel_tuples). That way the\n> rule about IndexVacuumInfo.num_heap_tuples is simple: it's always\n> taken from pg_class.reltuples (for the heap rel). Either the existing\n> value, or the new value.\n\nThe places where index AMs refer to num_heap_tuples seem to be using\nit as a ceiling on estimated index tuple counts. Given that we should\nbe counting dead index entries, redefining it as you suggest would be\nwrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Apr 2022 16:10:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_class.reltuples count only live tuples in indexes\n (after VACUUM runs)?"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 1:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The places where index AMs refer to num_heap_tuples seem to be using\n> it as a ceiling on estimated index tuple counts. Given that we should\n> be counting dead index entries, redefining it as you suggest would be\n> wrong.\n\nI would argue that it would be correct for the first time -- at least\nif we take the behavior within heapam_index_build_range_scan (and\neverywhere else) as authoritative. That's a feature, not a bug.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 18 Apr 2022 13:12:33 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Why does pg_class.reltuples count only live tuples in indexes\n (after VACUUM runs)?"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 1:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I would argue that it would be correct for the first time -- at least\n> if we take the behavior within heapam_index_build_range_scan (and\n> everywhere else) as authoritative. That's a feature, not a bug.\n\nAttached draft patch shows what I have in mind. I think that the issue\nshould be treated as a bug, albeit a minor one that's not worth\nbackpatching. I would like to target Postgres 15 here.\n\nAddressing this issue allowed me to move more state out of vacrel.\nThis patch continues the trend of moving everything that deals with\npg_class, statistics, or instrumentation from lazy_scan_heap into its\ncaller, heap_vacuum_rel().\n\nI noticed GIN gives us another reason to go this way:\nginvacuumcleanup() always instructs lazyvacuum.c to set each GIN\nindex's pg_class.reltuples to whatever the value is that will be set\nin the heap relation, even with a partial index. So defining reltuples\ndifferently for indexes just doesn't seem reasonable to me. (Actually,\nlazyvacuum.c won't end up setting the value in the GIN index's\npg_class entry when IndexVacuumInfo.estimated_count is set to true.\nBut that hardly matters -- either way, num_index_tuples comes from\nnum_heap_tuples in a GIN index, no matter what.)\n\n-- \nPeter Geoghegan",
"msg_date": "Mon, 18 Apr 2022 18:13:37 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Why does pg_class.reltuples count only live tuples in indexes\n (after VACUUM runs)?"
}
] |
[
{
"msg_contents": "At the moment you cannot create a unique index other than a btree. (As\ndiscussed on other threads, I am pursuing unique hash indexes for\nPostgreSQL, one step at a time).\nYou get \"ERROR index foo_idx is not a btree\"\n\nAccording to parse_utilcmd.c line 2310, this is because it would break\npg_dump, which needs ADD CONSTRAINT to create the same kind of index\nagain. Fair enough.\n\nThis is needed because ADD CONSTRAINT just uses the defaults index\ntype. We could simply allow a GUC for\ndefault_primary_key_access_method, but that is overkill and there\nseems to be an easy and more general solution:\n\nI propose that we change pg_dump so that when it creates a PK it does\nso in 2 commands:\n1. CREATE [UNIQUE] INDEX iname ...\n2. ALTER TABLE .. ADD PRIMARY KEY USING INDEX iname;\n\nStep\n(1) recreates the index, respecting its AM, even if that is not a btree\n(2) works and there is no problem with defaults\n\nDoing this as 2 steps instead of one doesn't add any more time because\n(2) is just a metadata-only change, not an index build.\n\nAny objections to a patch to implement this thought?\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 18 Apr 2022 20:59:44 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Dump/Restore of non-default PKs"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 1:00 PM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> At the moment you cannot create a unique index other than a btree. (As\n> discussed on other threads, I am pursuing unique hash indexes for\n> PostgreSQL, one step at a time).\n> You get \"ERROR index foo_idx is not a btree\"\n>\n> According to parse_utilcmd.c line 2310, this is because it would break\n> pg_dump, which needs ADD CONSTRAINT to create the same kind of index\n> again. Fair enough.\n>\n> This is needed because ADD CONSTRAINT just uses the defaults index\n> type. We could simply allow a GUC for\n> default_primary_key_access_method, but that is overkill and there\n> seems to be an easy and more general solution:\n>\n> I propose that we change pg_dump so that when it creates a PK it does\n> so in 2 commands:\n> 1. CREATE [UNIQUE] INDEX iname ...\n> 2. ALTER TABLE .. ADD PRIMARY KEY USING INDEX iname;\n>\n> Step\n> (1) recreates the index, respecting its AM, even if that is not a btree\n> (2) works and there is no problem with defaults\n>\n> Doing this as 2 steps instead of one doesn't add any more time because\n> (2) is just a metadata-only change, not an index build.\n>\n> Any objections to a patch to implement this thought?\n>\n\nWhy not just get rid of the limitation that constraint definitions don't\nsupport non-default methods?\n\nI.e., add syntax to index_parameters so that the underlying index can be\ndefined directly.\n\nindex_parameters in UNIQUE, PRIMARY KEY, and EXCLUDE constraints are:\n\n[ INCLUDE ( column_name [, ... ] ) ]\n[ WITH ( storage_parameter [= value] [, ... ] ) ]\n[ USING INDEX TABLESPACE tablespace_name ]\n\nWe should add:\n\n[ USING INDEX METHOD index_method ]\n\nindex_method := { BTREE | GIN | GIST | HASH | SPGIST | BRIN }\n\nDavid J.",
"msg_date": "Mon, 18 Apr 2022 13:20:52 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dump/Restore of non-default PKs"
},
{
"msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> I propose that we change pg_dump so that when it creates a PK it does\n> so in 2 commands:\n> 1. CREATE [UNIQUE] INDEX iname ...\n> 2. ALTER TABLE .. ADD PRIMARY KEY USING INDEX iname;\n\n> Step\n> (1) recreates the index, respecting its AM, even if that is not a btree\n> (2) works and there is no problem with defaults\n\n> Doing this as 2 steps instead of one doesn't add any more time because\n> (2) is just a metadata-only change, not an index build.\n\nI don't believe the claim that this adds no cost. Maybe it's negligible\nin context, but you've provided no evidence of that. (Parallel restore,\nwhere the ALTER would have to be a separate worker task, would probably\nbe the worst case here.)\n\nAlso, I assume your ambition would extend to supporting UNIQUE (but\nnon-PKEY) constraints, so that would have to be done this way too.\n\nA potential advantage of doing things this way is that if we make\npg_dump treat the index and the constraint as fully independent\nobjects, that might allow some logic simplifications in pg_dump.\nRight now I think there are various weirdnesses in there that\nexist precisely because we don't want to dump them separately.\n\nOne concern is that this'd create a hard compatibility break for\nloading dump output into servers that predate whenever we added\nADD PRIMARY KEY USING INDEX. However, it looks like that syntax\nis accepted back to 9.1, so probably that's no issue in practice.\nMaybe a bigger concern for people who want to port to other\nRDBMSes?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Apr 2022 16:27:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dump/Restore of non-default PKs"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Mon, Apr 18, 2022 at 1:00 PM Simon Riggs <simon.riggs@enterprisedb.com>\n> wrote:\n>> I propose that we change pg_dump so that when it creates a PK it does\n>> so in 2 commands:\n>> 1. CREATE [UNIQUE] INDEX iname ...\n>> 2. ALTER TABLE .. ADD PRIMARY KEY USING INDEX iname;\n\n> Why not just get rid of the limitation that constraint definitions don't\n> support non-default methods?\n\nThat approach would be doubling down on the assumption that we can always\nshoehorn more custom options into SQL-standard constraint clauses, and\nwe'll never fall foul of shift/reduce problems or future spec additions.\nI think for example that USING INDEX TABLESPACE is a blot on humanity,\nand I'd be very glad to see pg_dump stop using it in favor of doing\nthings as Simon suggests.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Apr 2022 16:48:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dump/Restore of non-default PKs"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 1:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Mon, Apr 18, 2022 at 1:00 PM Simon Riggs <\n> simon.riggs@enterprisedb.com>\n> > wrote:\n> >> I propose that we change pg_dump so that when it creates a PK it does\n> >> so in 2 commands:\n> >> 1. CREATE [UNIQUE] INDEX iname ...\n> >> 2. ALTER TABLE .. ADD PRIMARY KEY USING INDEX iname;\n>\n> > Why not just get rid of the limitation that constraint definitions don't\n> > support non-default methods?\n>\n> That approach would be doubling down on the assumption that we can always\n> shoehorn more custom options into SQL-standard constraint clauses, and\n> we'll never fall foul of shift/reduce problems or future spec additions.\n> I think for example that USING INDEX TABLESPACE is a blot on humanity,\n> and I'd be very glad to see pg_dump stop using it in favor of doing\n> things as Simon suggests.\n>\n>\nI'm convinced.\n\nAs for portability - that would be something we could explicitly define and\nsupport through a pg_dump option. In compatibility mode you get whatever\nthe default index would be for your engine while by default we output the\nexisting index as defined and then alter-add it to the table.\n\nDavid J.",
"msg_date": "Mon, 18 Apr 2022 14:00:17 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dump/Restore of non-default PKs"
},
{
"msg_contents": "On Mon, 18 Apr 2022 at 21:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Mon, Apr 18, 2022 at 1:00 PM Simon Riggs <simon.riggs@enterprisedb.com>\n> > wrote:\n> >> I propose that we change pg_dump so that when it creates a PK it does\n> >> so in 2 commands:\n> >> 1. CREATE [UNIQUE] INDEX iname ...\n> >> 2. ALTER TABLE .. ADD PRIMARY KEY USING INDEX iname;\n>\n> > Why not just get rid of the limitation that constraint definitions don't\n> > support non-default methods?\n>\n> That approach would be doubling down on the assumption that we can always\n> shoehorn more custom options into SQL-standard constraint clauses, and\n> we'll never fall foul of shift/reduce problems or future spec additions.\n> I think for example that USING INDEX TABLESPACE is a blot on humanity,\n> and I'd be very glad to see pg_dump stop using it in favor of doing\n> things as Simon suggests.\n\nSigh, agreed. It's more work, but its cleaner in the longer term to\nseparate indexes from constraints.\n\nI'll look in more detail and come back here later.\n\nThanks both.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 18 Apr 2022 22:05:57 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Dump/Restore of non-default PKs"
},
{
"msg_contents": "On Mon, 18 Apr 2022 at 22:05, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Mon, 18 Apr 2022 at 21:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > > On Mon, Apr 18, 2022 at 1:00 PM Simon Riggs <simon.riggs@enterprisedb.com>\n> > > wrote:\n> > >> I propose that we change pg_dump so that when it creates a PK it does\n> > >> so in 2 commands:\n> > >> 1. CREATE [UNIQUE] INDEX iname ...\n> > >> 2. ALTER TABLE .. ADD PRIMARY KEY USING INDEX iname;\n> >\n> > > Why not just get rid of the limitation that constraint definitions don't\n> > > support non-default methods?\n> >\n> > That approach would be doubling down on the assumption that we can always\n> > shoehorn more custom options into SQL-standard constraint clauses, and\n> > we'll never fall foul of shift/reduce problems or future spec additions.\n> > I think for example that USING INDEX TABLESPACE is a blot on humanity,\n> > and I'd be very glad to see pg_dump stop using it in favor of doing\n> > things as Simon suggests.\n>\n> Sigh, agreed. It's more work, but its cleaner in the longer term to\n> separate indexes from constraints.\n>\n> I'll look in more detail and come back here later.\n>\n> Thanks both.\n\nMy original plan was to get pg_dump to generate\n\n--\n-- Name: foo foo_a_idx; Type: CONSTRAINT; Schema: public; Owner: postgres\n--\nCREATE UNIQUE INDEX foo_a_idx ON public.foo USING btree (a);\nALTER TABLE ONLY public.foo\n ADD CONSTRAINT foo_a_idx PRIMARY KEY USING INDEX foo_a_idx;\n\nso the index definition is generated as a CONSTRAINT, not an INDEX.\n\nSeparating things a bit more generates this output, which is what I\nthink we want:\n\n--\n-- Name: foo foo_a_idx; Type: CONSTRAINT; Schema: public; Owner: postgres\n--\nALTER TABLE ONLY public.foo\n ADD CONSTRAINT foo_a_idx PRIMARY KEY USING INDEX foo_a_idx;\n--\n-- Name: foo_a_idx; Type: INDEX; Schema: public; Owner: postgres\n--\nCREATE UNIQUE INDEX foo_a_idx ON public.foo USING btree (a);\n\nWhich is better, but there is still some ugly code for REPLICA\nIDENTITY and CLUSTER duplicated in dumpIndex() and dumpConstraint().\n\nThe attached patch includes a change to pg_dump_sort.c which changes\nthe priority of CONSTRAINT, but that doesn't seem to have any effect\non the output. I'm hoping that's a quick fix, but I haven't seen it\nyet, even after losing sanity points trying to read the priority code.\n\nAnyway, the main question is how should the code be structured?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 19 Apr 2022 17:13:52 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Dump/Restore of non-default PKs"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 9:14 AM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> On Mon, 18 Apr 2022 at 22:05, Simon Riggs <simon.riggs@enterprisedb.com>\n> wrote:\n> >\n> > On Mon, 18 Apr 2022 at 21:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > > > On Mon, Apr 18, 2022 at 1:00 PM Simon Riggs <\n> simon.riggs@enterprisedb.com>\n> > > > wrote:\n> > > >> I propose that we change pg_dump so that when it creates a PK it\n> does\n> > > >> so in 2 commands:\n> > > >> 1. CREATE [UNIQUE] INDEX iname ...\n> > > >> 2. ALTER TABLE .. ADD PRIMARY KEY USING INDEX iname;\n> > >\n> > > > Why not just get rid of the limitation that constraint definitions\n> don't\n> > > > support non-default methods?\n> > >\n> > > That approach would be doubling down on the assumption that we can\n> always\n> > > shoehorn more custom options into SQL-standard constraint clauses, and\n> > > we'll never fall foul of shift/reduce problems or future spec\n> additions.\n> > > I think for example that USING INDEX TABLESPACE is a blot on humanity,\n> > > and I'd be very glad to see pg_dump stop using it in favor of doing\n> > > things as Simon suggests.\n> >\n> > Sigh, agreed. It's more work, but its cleaner in the longer term to\n> > separate indexes from constraints.\n> >\n> > I'll look in more detail and come back here later.\n> >\n> > Thanks both.\n>\n> Anyway, the main question is how should the code be structured?\n>\n>\nI don't have a good answer to that question but the patch presently\nproduces the dump below for a partitioned table with one partition.\n\nAfter manually adjusting the order of operations you end up with:\n\npsql:/vagrant/pg_dump_indexattach.v1.txt:67: ERROR: index \"parent_pkey\" is\nnot valid\nLINE 2: ADD CONSTRAINT parent_pkey PRIMARY KEY USING INDEX paren...\n ^\nBecause:\n\nhttps://www.postgresql.org/docs/current/sql-altertable.html\nADD table_constraint_using_index\n...This form is not currently supported on partitioned tables.\n\nDavid J.\n\n===== pg_dump with manual re-ordering of create/alter index before alter\ntable\n\nCREATE TABLE public.parent (\n id integer NOT NULL,\n class text NOT NULL\n)\nPARTITION BY LIST (class);\n\nCREATE TABLE public.parent_a (\n id integer NOT NULL,\n class text NOT NULL\n);\n\nALTER TABLE public.parent_a OWNER TO vagrant;\n\nALTER TABLE ONLY public.parent ATTACH PARTITION public.parent_a FOR VALUES\nIN ('a');\n\nCREATE UNIQUE INDEX parent_pkey ON ONLY public.parent USING btree (id,\nclass);\n\nALTER TABLE ONLY public.parent\n ADD CONSTRAINT parent_pkey PRIMARY KEY USING INDEX parent_pkey;\n\nCREATE UNIQUE INDEX parent_a_pkey ON public.parent_a USING btree (id,\nclass);\n\nALTER INDEX public.parent_pkey ATTACH PARTITION public.parent_a_pkey;\n\nALTER TABLE ONLY public.parent_a\n ADD CONSTRAINT parent_a_pkey PRIMARY KEY USING INDEX parent_a_pkey;",
"msg_date": "Tue, 19 Apr 2022 19:05:29 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dump/Restore of non-default PKs"
},
{
"msg_contents": "On Wed, 20 Apr 2022 at 03:05, David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n\n> https://www.postgresql.org/docs/current/sql-altertable.html\n> ADD table_constraint_using_index\n> ...This form is not currently supported on partitioned tables.\n\nGood to know, thanks very much for pointing it out.\n\nThat needs to be fixed before we can progress further on this thread.\nWill look into it.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 20 Apr 2022 11:10:34 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Dump/Restore of non-default PKs"
},
{
"msg_contents": "On 18.04.22 22:48, Tom Lane wrote:\n>> Why not just get rid of the limitation that constraint definitions don't\n>> support non-default methods?\n> That approach would be doubling down on the assumption that we can always\n> shoehorn more custom options into SQL-standard constraint clauses, and\n> we'll never fall foul of shift/reduce problems or future spec additions.\n\nWhen we do get the ability to create a table with a primary key with an \nunderlying hash index, how would that be done? Would the only way be\n\n1. create the table without primary key\n2. create the index\n3. attach the index as primary key constraint\n\nThat doesn't sound attractive.\n\n\n",
"msg_date": "Wed, 20 Apr 2022 22:46:35 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Dump/Restore of non-default PKs"
},
{
"msg_contents": "On Wed, 20 Apr 2022 at 21:46, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 18.04.22 22:48, Tom Lane wrote:\n> >> Why not just get rid of the limitation that constraint definitions don't\n> >> support non-default methods?\n> > That approach would be doubling down on the assumption that we can always\n> > shoehorn more custom options into SQL-standard constraint clauses, and\n> > we'll never fall foul of shift/reduce problems or future spec additions.\n>\n> When we do get the ability to create a table with a primary key with an\n> underlying hash index, how would that be done? Would the only way be\n>\n> 1. create the table without primary key\n> 2. create the index\n> 3. attach the index as primary key constraint\n>\n> That doesn't sound attractive.\n\nCan you explain what you find unattractive about it?\n\nThe alternative is we have this\n\n1. create the table without primary key\n2. attach the index as primary key constraint (which must be extended\nto include ALL of the options available on create index)\n\nHaving to extend ALTER TABLE so it exactly matches CREATE INDEX is\npainful and maintaining it that way seems unattractive, to me.\n\n\nJust so we are clear this is not about hash indexes, this is about\nusing ANY kind of index (i.e. any index access method, extension or\notherwise) to enforce a constraint.\n\n\nAnother idea might be to allow some kind of statement embedding... so\nwe don't need to constantly fiddle with ALTER TABLE\nALTER TABLE foo ADD PRIMARY KEY USING INDEX (CREATE INDEX .... )\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 21 Apr 2022 12:43:10 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Dump/Restore of non-default PKs"
},
{
"msg_contents": "On 21.04.22 13:43, Simon Riggs wrote:\n>> 1. create the table without primary key\n>> 2. create the index\n>> 3. attach the index as primary key constraint\n>>\n>> That doesn't sound attractive.\n> Can you explain what you find unattractive about it?\n\nWell, if I want to create a table with a primary key, the established \nway is to say \"primary key\", not to have to assemble it from multiple \npieces.\n\nI think this case is very similar to exclusion constraints, which also \nhave syntax to specify the index access method.\n\n\n",
"msg_date": "Fri, 22 Apr 2022 15:38:35 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Dump/Restore of non-default PKs"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 21.04.22 13:43, Simon Riggs wrote:\n>> Can you explain what you find unattractive about it?\n\n> Well, if I want to create a table with a primary key, the established \n> way is to say \"primary key\", not to have to assemble it from multiple \n> pieces.\n\n> I think this case is very similar to exclusion constraints, which also \n> have syntax to specify the index access method.\n\nThat analogy would be compelling if exclusion constraints were a\nSQL-standard feature; but they aren't so their clause syntax is\nfully under our control. The scenario that worries me is that\nsomewhere down the pike, the SQL committee might extend the\nsyntax of PKEY/UNIQUE constraint clauses in a way that breaks\nour nonstandard extensions of them.\n\nHowever, independently of whether we offer a syntax option or not,\nit may still simplify pg_dump to make it treat the constraint and\nthe index as independent objects in all cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Apr 2022 10:14:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dump/Restore of non-default PKs"
},
{
"msg_contents": "On 22.04.22 16:14, Tom Lane wrote:\n> That analogy would be compelling if exclusion constraints were a\n> SQL-standard feature; but they aren't so their clause syntax is\n> fully under our control. The scenario that worries me is that\n> somewhere down the pike, the SQL committee might extend the\n> syntax of PKEY/UNIQUE constraint clauses in a way that breaks\n> our nonstandard extensions of them.\n\nSome syntax like\n\n PRIMARY KEY (x, y) USING ACCESS METHOD hash\n\nshould be able to avoid any future clashes.\n\n\n",
"msg_date": "Thu, 28 Apr 2022 16:09:18 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Dump/Restore of non-default PKs"
},
{
"msg_contents": "On Thu, 28 Apr 2022 at 15:09, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 22.04.22 16:14, Tom Lane wrote:\n> > That analogy would be compelling if exclusion constraints were a\n> > SQL-standard feature; but they aren't so their clause syntax is\n> > fully under our control. The scenario that worries me is that\n> > somewhere down the pike, the SQL committee might extend the\n> > syntax of PKEY/UNIQUE constraint clauses in a way that breaks\n> > our nonstandard extensions of them.\n>\n> Some syntax like\n>\n> PRIMARY KEY (x, y) USING ACCESS METHOD hash\n>\n> should be able to avoid any future clashes.\n\nThat seems to conflict with USING INDEX TABLESPACE. I've tried a few\nthings but have not found anything yet.\n\nAny other ideas on syntax?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 1 Aug 2022 17:00:41 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Dump/Restore of non-default PKs"
}
]
[
{
"msg_contents": "Hash index pages are stored in sorted order, but we don't prepare the\ndata correctly.\n\nWe sort the data as the first step of a hash index build, but we\nforget to sort the data by hash as well as by hash bucket. This causes\nthe normal insert path to do extra pushups to put the data in the\ncorrect sort order on each page, which wastes effort.\n\nAdding this patch makes a CREATE INDEX about 8-9% faster, on an unlogged table.\n\nThoughts?\n\n\nAside:\n\nI'm not very sure why tuplesort has private code in it dedicated to\nhash indexes, but it does.\n\nEven more strangely, the Tuplesortstate fixes the size of max_buckets\nat tuplesort_begin() time rather than tuplesort_performsort(), forcing\nus to estimate the number of tuples ahead of time rather than using\nthe exact number. Next trick would be to alter the APIs to allow exact\nvalues to be used for sorting, which would allow page at a time\nbuilds.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Mon, 18 Apr 2022 22:35:21 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 3:05 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> Hash index pages are stored in sorted order, but we don't prepare the\n> data correctly.\n>\n> We sort the data as the first step of a hash index build, but we\n> forget to sort the data by hash as well as by hash bucket.\n>\n\nI was looking into the nearby comments (Fetch hash keys and mask off\nbits we don't want to sort by.) and it sounds like we purposefully\ndon't want to sort by the hash key. I see that this comment was\noriginally introduced in the below commit:\n\ncommit 4adc2f72a4ccd6e55e594aca837f09130a6af62b\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Mon Sep 15 18:43:41 2008 +0000\n\n Change hash indexes to store only the hash code rather than the\nwhole indexed\n value.\n\nBut even before that, we seem to mask off the bits before comparison.\nIs it that we are doing so because we want to keep the order of hash\nkeys in a particular bucket so such masking was required? If so, then\nmaybe it is okay to compare the hash keys as you are proposing once we\nfind that the values fall in a particular bucket. Another thing to\nnote is that this code was again changed in ea69a0dead but it seems to\nbe following the intent of the original code.\n\nFew comments on the patch:\n1. I think it is better to use DatumGetUInt32 to fetch the hash key as\nthe nearby code is using.\n2. You may want to change the below comment in HSpool\n/*\n* We sort the hash keys based on the buckets they belong to. Below masks\n* are used in _hash_hashkey2bucket to determine the bucket of given hash\n* key.\n*/\n\n>\n> Aside:\n>\n> I'm not very sure why tuplesort has private code in it dedicated to\n> hash indexes, but it does.\n>\n\nAre you talking about\ntuplesort_begin_index_hash/comparetup_index_hash? I see the\ncorresponding functions for btree as well in that file.\n\n> Even more strangely, the Tuplesortstate fixes the size of max_buckets\n> at tuplesort_begin() time rather than tuplesort_performsort(), forcing\n> us to estimate the number of tuples ahead of time rather than using\n> the exact number. Next trick would be to alter the APIs to allow exact\n> values to be used for sorting, which would allow page at a time\n> builds.\n>\n\nIt is not clear to me what exactly you want to do here but maybe it is\na separate topic and we should discuss this separately.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 30 Apr 2022 16:42:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Sat, 30 Apr 2022 at 12:12, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 19, 2022 at 3:05 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> >\n> > Hash index pages are stored in sorted order, but we don't prepare the\n> > data correctly.\n> >\n> > We sort the data as the first step of a hash index build, but we\n> > forget to sort the data by hash as well as by hash bucket.\n> >\n>\n> I was looking into the nearby comments (Fetch hash keys and mask off\n> bits we don't want to sort by.) and it sounds like we purposefully\n> don't want to sort by the hash key. I see that this comment was\n> originally introduced in the below commit:\n>\n> commit 4adc2f72a4ccd6e55e594aca837f09130a6af62b\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Mon Sep 15 18:43:41 2008 +0000\n>\n> Change hash indexes to store only the hash code rather than the\n> whole indexed\n> value.\n>\n> But even before that, we seem to mask off the bits before comparison.\n> Is it that we are doing so because we want to keep the order of hash\n> keys in a particular bucket so such masking was required?\n\nWe need to sort by both hash bucket and hash value.\n\nHash bucket id so we can identify the correct hash bucket to insert into.\n\nBut then on each bucket/overflow page we store it sorted by hash value\nto make lookup faster, so inserts go faster if they are also sorted.\n\nThe pages are identical with/without this patch, its just the\ndifference between quicksort and insertion sort.\n\n> Few comments on the patch:\n> 1. I think it is better to use DatumGetUInt32 to fetch the hash key as\n> the nearby code is using.\n> 2. You may want to change the below comment in HSpool\n> /*\n> * We sort the hash keys based on the buckets they belong to. Below masks\n> * are used in _hash_hashkey2bucket to determine the bucket of given hash\n> * key.\n> */\n\nMany thanks, will do.\n\n> >\n> > Aside:\n\n...\n\n> It is not clear to me what exactly you want to do here but maybe it is\n> a separate topic and we should discuss this separately.\n\nAgreed, will open another thread.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 2 May 2022 16:58:08 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Mon, May 2, 2022 at 9:28 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Sat, 30 Apr 2022 at 12:12, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Apr 19, 2022 at 3:05 AM Simon Riggs\n> > <simon.riggs@enterprisedb.com> wrote:\n> > >\n> > > Hash index pages are stored in sorted order, but we don't prepare the\n> > > data correctly.\n> > >\n> > > We sort the data as the first step of a hash index build, but we\n> > > forget to sort the data by hash as well as by hash bucket.\n> > >\n> >\n> > I was looking into the nearby comments (Fetch hash keys and mask off\n> > bits we don't want to sort by.) and it sounds like we purposefully\n> > don't want to sort by the hash key. I see that this comment was\n> > originally introduced in the below commit:\n> >\n> > commit 4adc2f72a4ccd6e55e594aca837f09130a6af62b\n> > Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > Date: Mon Sep 15 18:43:41 2008 +0000\n> >\n> > Change hash indexes to store only the hash code rather than the\n> > whole indexed\n> > value.\n> >\n> > But even before that, we seem to mask off the bits before comparison.\n> > Is it that we are doing so because we want to keep the order of hash\n> > keys in a particular bucket so such masking was required?\n>\n> We need to sort by both hash bucket and hash value.\n>\n> Hash bucket id so we can identify the correct hash bucket to insert into.\n>\n> But then on each bucket/overflow page we store it sorted by hash value\n> to make lookup faster, so inserts go faster if they are also sorted.\n>\n\nI also think so. So, we should go with this unless someone else sees\nany flaw here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 4 May 2022 15:57:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Sat, 30 Apr 2022 at 12:12, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Few comments on the patch:\n> 1. I think it is better to use DatumGetUInt32 to fetch the hash key as\n> the nearby code is using.\n> 2. You may want to change the below comment in HSpool\n> /*\n> * We sort the hash keys based on the buckets they belong to. Below masks\n> * are used in _hash_hashkey2bucket to determine the bucket of given hash\n> * key.\n> */\n\nAddressed in new patch, v2.\n\nOn Wed, 4 May 2022 at 11:27, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> So, we should go with this unless someone else sees any flaw here.\n\nCool, thanks.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 10 May 2022 10:42:59 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Tue, May 10, 2022 5:43 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\r\n> \r\n> On Sat, 30 Apr 2022 at 12:12, Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > Few comments on the patch:\r\n> > 1. I think it is better to use DatumGetUInt32 to fetch the hash key as\r\n> > the nearby code is using.\r\n> > 2. You may want to change the below comment in HSpool\r\n> > /*\r\n> > * We sort the hash keys based on the buckets they belong to. Below\r\n> masks\r\n> > * are used in _hash_hashkey2bucket to determine the bucket of given\r\n> hash\r\n> > * key.\r\n> > */\r\n> \r\n> Addressed in new patch, v2.\r\n> \r\n\r\nI think your changes looks reasonable.\r\n\r\nBesides, I tried this patch with Simon's script, and index creation time was about\r\n7.5% faster after applying this patch on my machine, which looks good to me.\r\n\r\nRESULT - index creation time\r\n===================\r\nHEAD: 9513.466 ms\r\nPatched: 8796.75 ms\r\n\r\nI ran it 10 times and got the average, and here are the configurations used in\r\nthe test:\r\nshared_buffers = 2GB\r\ncheckpoint_timeout = 30min\r\nmax_wal_size = 20GB\r\nmin_wal_size = 10GB\r\nautovacuum = off\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Mon, 30 May 2022 08:13:07 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Monday, May 30, 2022 4:13 PMshiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com> wrote:\r\n> \r\n> On Tue, May 10, 2022 5:43 PM Simon Riggs <simon.riggs@enterprisedb.com>\r\n> wrote:\r\n> >\r\n> > On Sat, 30 Apr 2022 at 12:12, Amit Kapila <amit.kapila16@gmail.com>\r\n> > wrote:\r\n> > >\r\n> > > Few comments on the patch:\r\n> > > 1. I think it is better to use DatumGetUInt32 to fetch the hash key\r\n> > > as the nearby code is using.\r\n> > > 2. You may want to change the below comment in HSpool\r\n> > > /*\r\n> > > * We sort the hash keys based on the buckets they belong to. Below\r\n> > masks\r\n> > > * are used in _hash_hashkey2bucket to determine the bucket of given\r\n> > hash\r\n> > > * key.\r\n> > > */\r\n> >\r\n> > Addressed in new patch, v2.\r\n> >\r\n> \r\n> I think your changes looks reasonable.\r\n\r\n+1, the changes look good to me as well.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n\r\n",
"msg_date": "Fri, 22 Jul 2022 03:46:35 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Hash index build performance tweak from sorting"
},
{
"msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> [ hash_sort_by_hash.v2.patch ]\n\nThe cfbot says this no longer applies --- probably sideswiped by\nKorotkov's sorting-related commits last night.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Jul 2022 14:22:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Wed, 27 Jul 2022 at 19:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > [ hash_sort_by_hash.v2.patch ]\n>\n> The cfbot says this no longer applies --- probably sideswiped by\n> Korotkov's sorting-related commits last night.\n\nThanks for the nudge. New version attached.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Thu, 28 Jul 2022 13:47:10 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> Thanks for the nudge. New version attached.\n\nI also see a speed improvement from this, so pushed (after minor comment\nediting). I notice though that if I feed it random data,\n\n---\nDROP TABLE IF EXISTS hash_speed;\nCREATE unlogged TABLE hash_speed (x integer);\nINSERT INTO hash_speed SELECT random()*10000000 FROM\ngenerate_series(1,10000000) x;\nvacuum hash_speed;\n\\timing on\nCREATE INDEX ON hash_speed USING hash (x);\n---\n\nthen the speed improvement is only about 5% not the 7.5% I see\nwith your original test case. I don't have an explanation\nfor that, do you?\n\nAlso, it seems like we've left some money on the table by not\nexploiting downstream the knowledge that this sorting happened.\nDuring an index build, it's no longer necessary for\n_hash_pgaddtup to do _hash_binsearch, and therefore also not\n_hash_get_indextuple_hashkey: we could just always append the new\ntuple at the end. Perhaps checking it against the last existing\ntuple is worth the trouble as a bug guard, but for sure we don't\nneed the log2(N) comparisons that _hash_binsearch will do.\n\nAnother point that I noticed is that it's not really desirable to\nuse the same _hash_binsearch logic for insertions and searches.\n_hash_binsearch finds the first entry with hash >= target, which\nis necessary for searches, but for insertions we'd really rather\nfind the first entry with hash > target. As things stand, to\nthe extent that there are duplicate hash values we are still\nperforming unnecessary data motion within PageAddItem.\n\nI've not looked into how messy these things would be to implement,\nnor whether we get any noticeable speed gain thereby. 
But since\nyou've proven that cutting the PageAddItem data motion cost\nyields visible savings, these things might be visible too.\n\nAt this point the cfbot will start to bleat that the patch of\nrecord doesn't apply, so I'm going to mark the CF entry committed.\nIf anyone wants to produce a follow-on patch, please make a\nnew entry.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Jul 2022 14:50:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Thu, 28 Jul 2022 at 19:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > Thanks for the nudge. New version attached.\n>\n> I also see a speed improvement from this, so pushed (after minor comment\n> editing).\n\nThanks\n\n> I notice though that if I feed it random data,\n>\n> ---\n> DROP TABLE IF EXISTS hash_speed;\n> CREATE unlogged TABLE hash_speed (x integer);\n> INSERT INTO hash_speed SELECT random()*10000000 FROM\n> generate_series(1,10000000) x;\n> vacuum hash_speed;\n> \\timing on\n> CREATE INDEX ON hash_speed USING hash (x);\n> ---\n>\n> then the speed improvement is only about 5% not the 7.5% I see\n> with your original test case. I don't have an explanation\n> for that, do you?\n\nNo, sorry. It could be a data-based effect or a physical effect.\n\n> Also, it seems like we've left some money on the table by not\n> exploiting downstream the knowledge that this sorting happened.\n> During an index build, it's no longer necessary for\n> _hash_pgaddtup to do _hash_binsearch, and therefore also not\n> _hash_get_indextuple_hashkey: we could just always append the new\n> tuple at the end. Perhaps checking it against the last existing\n> tuple is worth the trouble as a bug guard, but for sure we don't\n> need the log2(N) comparisons that _hash_binsearch will do.\n\nHmm, I had that in an earlier version of the patch, not sure why it\ndropped out since I wrote it last year, but then I've got lots of\nfuture WIP patches in the area of hash indexes.\n\n> Another point that I noticed is that it's not really desirable to\n> use the same _hash_binsearch logic for insertions and searches.\n> _hash_binsearch finds the first entry with hash >= target, which\n> is necessary for searches, but for insertions we'd really rather\n> find the first entry with hash > target. 
As things stand, to\n> the extent that there are duplicate hash values we are still\n> performing unnecessary data motion within PageAddItem.\n\nThat thought is new to me, and will investigate.\n\n> I've not looked into how messy these things would be to implement,\n> nor whether we get any noticeable speed gain thereby. But since\n> you've proven that cutting the PageAddItem data motion cost\n> yields visible savings, these things might be visible too.\n\nIt's a clear follow-on thought, so will pursue. Thanks for the nudge.\n\n> At this point the cfbot will start to bleat that the patch of\n> record doesn't apply, so I'm going to mark the CF entry committed.\n> If anyone wants to produce a follow-on patch, please make a\n> new entry.\n\nWill do. Thanks.\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 29 Jul 2022 13:49:01 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Fri, 29 Jul 2022 at 13:49, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Thu, 28 Jul 2022 at 19:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > > Thanks for the nudge. New version attached.\n> >\n> > I also see a speed improvement from this\n\n> > ---\n> > DROP TABLE IF EXISTS hash_speed;\n> > CREATE unlogged TABLE hash_speed (x integer);\n> > INSERT INTO hash_speed SELECT random()*10000000 FROM\n> > generate_series(1,10000000) x;\n> > vacuum hash_speed;\n> > \\timing on\n> > CREATE INDEX ON hash_speed USING hash (x);\n> > ---\n\n> > Also, it seems like we've left some money on the table by not\n> > exploiting downstream the knowledge that this sorting happened.\n> > During an index build, it's no longer necessary for\n> > _hash_pgaddtup to do _hash_binsearch, and therefore also not\n> > _hash_get_indextuple_hashkey: we could just always append the new\n> > tuple at the end. Perhaps checking it against the last existing\n> > tuple is worth the trouble as a bug guard, but for sure we don't\n> > need the log2(N) comparisons that _hash_binsearch will do.\n>\n> Hmm, I had that in an earlier version of the patch, not sure why it\n> dropped out since I wrote it last year, but then I've got lots of\n> future WIP patches in the area of hash indexes.\n\n...\n\n> > At this point the cfbot will start to bleat that the patch of\n> > record doesn't apply, so I'm going to mark the CF entry committed.\n> > If anyone wants to produce a follow-on patch, please make a\n> > new entry.\n>\n> Will do. Thanks.\n\nUsing the above test case, I'm getting a further 4-7% improvement on\nalready committed code with the attached patch, which follows your\nproposal.\n\nThe patch passes info via a state object, useful to avoid API churn in\nlater patches.\n\nAdding to CFapp again.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Mon, 1 Aug 2022 16:37:22 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On 2022-08-01 8:37 a.m., Simon Riggs wrote:\n> Using the above test case, I'm getting a further 4-7% improvement on\n> already committed code with the attached patch, which follows your\n> proposal.\n\nI ran two test cases: for committed patch `hash_sort_by_hash.v3.patch`, I can see about 6 ~ 7% improvement; and after applied patch `hash_inserted_sorted.v2.patch`, I see about ~3% improvement. All the test results are based on 10 times average on two different machines.\n\nBest regards,\n\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca\n\n\n",
"msg_date": "Fri, 5 Aug 2022 12:46:27 -0700",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Fri, 5 Aug 2022 at 20:46, David Zhang <david.zhang@highgo.ca> wrote:\n>\n> On 2022-08-01 8:37 a.m., Simon Riggs wrote:\n> > Using the above test case, I'm getting a further 4-7% improvement on\n> > already committed code with the attached patch, which follows your\n> > proposal.\n>\n> I ran two test cases: for committed patch `hash_sort_by_hash.v3.patch`, I can see about 6 ~ 7% improvement; and after applied patch `hash_inserted_sorted.v2.patch`, I see about ~3% improvement. All the test results are based on 10 times average on two different machines.\n\nThanks for testing David.\n\nIt's a shame you only see 3%, but that's still worth it.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 30 Aug 2022 17:27:04 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Tue, 2 Aug 2022 at 03:37, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> Using the above test case, I'm getting a further 4-7% improvement on\n> already committed code with the attached patch, which follows your\n> proposal.\n>\n> The patch passes info via a state object, useful to avoid API churn in\n> later patches.\n\nHi Simon,\n\nI took this patch for a spin and saw a 2.5% performance increase using\nthe random INT test that Tom posted. The index took an average of\n7227.47 milliseconds on master and 7045.05 with the patch applied.\n\nOn making a pass of the changes, I noted down a few things.\n\n1. In _h_spoolinit() the HSpool is allocated with palloc and then\nyou're setting the istate field to a pointer to the HashInsertState\nwhich is allocated on the stack by the only calling function\n(hashbuild()). Looking at hashbuild(), it looks like the return value\nof _h_spoolinit is never put anywhere to make it available outside of\nthe function, so it does not seem like there is an actual bug there.\nHowever, it just seems like a bug waiting to happen. If _h_spoolinit()\nis pallocing memory, then we really shouldn't be setting pointer\nfields in that memory to point to something on the stack. It might be\nnicer if the istate field in HSpool was a HashInsertStateData and\n_h_spoolinit() just memcpy'd the contents of that parameter. That\nwould make HSpool 4 bytes smaller and save additional pointer\ndereferences in _hash_doinsert().\n\n2. There are quite a few casts that are not required. e.g:\n\n_hash_doinsert(rel, itup, heapRel, (HashInsertState) &istate);\nbuildstate.spool = _h_spoolinit(heap, index, num_buckets,\n(HashInsertState) &insertstate);\nbuildstate.istate = (HashInsertState) &insertstate;\n\nThis is just my opinion, but I don't really see the value in having a\ntypedef for a pointer to HashInsertStateData. 
I can understand that if\nthe struct was local to a .c file, but you've got the struct and\npointer typedef in the same header. I understand we often do this in\nthe code, but I feel like we do it less often in newer code. e.g we do\nit in aset.c but not generation.c (which is much newer than aset.c).\nMy personal preference would be just to name the struct\nHashInsertState and have no extra pointer typedefs.\n\n3. Just a minor nitpick. Line wraps at 80 chars. You're doing this\nsometimes but not others. This seems just to be due to the additional\nfunction parameters that have been added.\n\n4. I added the following Assert to _hash_pgaddtup() as I expected the\nitup_off to be set to the same thing before and after this change. I\nsee the Assert is failing in the regression tests.\n\nAssert(PageGetMaxOffsetNumber(page) + 1 ==\n _hash_binsearch(page, _hash_get_indextuple_hashkey(itup)));\n\nI think this is because _hash_binsearch() returns the offset with the\nfirst tuple with the given hashkey, so if there are duplicate hashkey\nvalues then it looks like PageAddItemExtended() will set needshuffle\nand memmove() the existing item(s) up one slot. I don't know this\nhash index building code very well, but I wonder if it's worth having\nanother version of _hash_binsearch() that can be used to make\n_hash_pgaddtup() put any duplicate hashkeys after the existing ones\nrather than before and shuffle the others up? It sounds like that\nmight speed up normal insertions when there are many duplicate values\nto hash.\n\nI wonder if this might be the reason the random INT test didn't come\nout as good as your original test which had unique values. The unique\nvalues test would do less shuffling during PageAddItemExtended(). If\nso, that implies that skipping the binary search is only part of the\ngains here and that not shuffling tuples accounts for quite a bit of\nthe gain you're seeing. 
If so, then it would be good to not have to\nshuffle duplicate hashkey tuples up in the page during normal\ninsertions as well as when building the index.\n\nIn any case, it would be nice to have some way to assert that we don't\naccidentally pass sorted==true to _hash_pgaddtup() when there's an\nexisting item on the page with a higher hash value. Maybe we could\njust look at the hash value of the last tuple on the page and ensure\nit's <= to the current one?\n\n5. I think it would be nicer to move the insertstate.sorted = false;\ninto the else branch in hashbuild(). However, you might have to do\nthat anyway if you were to do what I mentioned in #1.\n\nDavid\n\n\n",
"msg_date": "Wed, 21 Sep 2022 13:31:52 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Wed, 21 Sept 2022 at 02:32, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 2 Aug 2022 at 03:37, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > Using the above test case, I'm getting a further 4-7% improvement on\n> > already committed code with the attached patch, which follows your\n> > proposal.\n> >\n> > The patch passes info via a state object, useful to avoid API churn in\n> > later patches.\n>\n> Hi Simon,\n>\n> I took this patch for a spin and saw a 2.5% performance increase using\n> the random INT test that Tom posted. The index took an average of\n> 7227.47 milliseconds on master and 7045.05 with the patch applied.\n\nHi David,\n\nThanks for tests and review. I'm just jumping on a plane, so may not\nrespond in detail until next Mon.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 21 Sep 2022 12:43:15 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 12:43:15PM +0100, Simon Riggs wrote:\n> Thanks for tests and review. I'm just jumping on a plane, so may not\n> respond in detail until next Mon.\n\nOkay. If you have time to address that by next CF, that would be\ninteresting. For now I have marked the entry as returned with\nfeedback.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 14:43:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Wed, 21 Sept 2022 at 02:32, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n\n> I took this patch for a spin and saw a 2.5% performance increase using\n> the random INT test that Tom posted. The index took an average of\n> 7227.47 milliseconds on master and 7045.05 with the patch applied.\n\nThanks for the review, apologies for the delay in acting upon your comments.\n\nMy tests show the sorted and random tests are BOTH 4.6% faster with\nthe v3 changes using 5-test avg, but you'll be pleased to know your\nkit is about 15.5% faster than mine, comparing absolute execution\ntimes.\n\n> On making a pass of the changes, I noted down a few things.\n\n> 2. There are quite a few casts that are not required. e.g:\n>\n> _hash_doinsert(rel, itup, heapRel, (HashInsertState) &istate);\n> buildstate.spool = _h_spoolinit(heap, index, num_buckets,\n> (HashInsertState) &insertstate);\n> buildstate.istate = (HashInsertState) &insertstate;\n\nRemoved\n\n> 3. Just a minor nitpick. Line wraps at 80 chars. You're doing this\n> sometimes but not others. This seems just to be due to the additional\n> function parameters that have been added.\n\nDone\n\n> 4. I added the following Assert to _hash_pgaddtup() as I expected the\n> itup_off to be set to the same thing before and after this change. I\n> see the Assert is failing in the regression tests.\n>\n> Assert(PageGetMaxOffsetNumber(page) + 1 ==\n> _hash_binsearch(page, _hash_get_indextuple_hashkey(itup)));\n>\n> I think this is because _hash_binsearch() returns the offset with the\n> first tuple with the given hashkey, so if there are duplicate hashkey\n> values then it looks like PageAddItemExtended() will set needshuffle\n> and memmove() the existing item(s) up one slot. 
I don't know this\n> hash index building code very well, but I wonder if it's worth having\n> another version of _hash_binsearch() that can be used to make\n> _hash_pgaddtup() put any duplicate hashkeys after the existing ones\n> rather than before and shuffle the others up? It sounds like that\n> might speed up normal insertions when there are many duplicate values\n> to hash.\n\nSounds reasonable.\n\nI tried changing src/backend/access/hash/hashinsert.c, line 307 (on\npatched file) from\n\n- itup_off = _hash_binsearch(page, hashkey);\n\nto\n\n+ itup_off = _hash_binsearch_last(page, hashkey) + 1;\n\nsince exactly such a function already exists in code.\n\nBut this seems to cause a consistent ~1% regression in performance,\nwhich surprises me.\nTest was the random INSERT SELECT with 10E6 rows after the CREATE INDEX.\n\nNot sure what to suggest, but the above change is not included in v3.\n\n> I wonder if this might be the reason the random INT test didn't come\n> out as good as your original test which had unique values. The unique\n> values test would do less shuffling during PageAddItemExtended(). If\n> so, that implies that skipping the binary search is only part of the\n> gains here and that not shuffling tuples accounts for quite a bit of\n> the gain you're seeing. If so, then it would be good to not have to\n> shuffle duplicate hashkey tuples up in the page during normal\n> insertions as well as when building the index.\n\nThere is still a 1.4% lead for the sorted test over the random one, in my tests.\n\n> In any case, it would be nice to have some way to assert that we don't\n> accidentally pass sorted==true to _hash_pgaddtup() when there's an\n> existing item on the page with a higher hash value. Maybe we could\n> just look at the hash value of the last tuple on the page and ensure\n> it's <= to the current one?\n\nDone\n\n> 5. I think it would be nicer to move the insertstate.sorted = false;\n> into the else branch in hashbuild(). 
However, you might have to do\n> that anyway if you were to do what I mentioned in #1.\n\nDone\n\n> 1. In _h_spoolinit() the HSpool is allocated with palloc and then\n> you're setting the istate field to a pointer to the HashInsertState\n> which is allocated on the stack by the only calling function\n> (hashbuild()). Looking at hashbuild(), it looks like the return value\n> of _h_spoolinit is never put anywhere to make it available outside of\n> the function, so it does not seem like there is an actual bug there.\n> However, it just seems like a bug waiting to happen. If _h_spoolinit()\n> is pallocing memory, then we really shouldn't be setting pointer\n> fields in that memory to point to something on the stack. It might be\n> nicer if the istate field in HSpool was a HashInsertStateData and\n> _h_spoolinit() just memcpy'd the contents of that parameter. That\n> would make HSpool 4 bytes smaller and save additional pointer\n> dereferences in _hash_doinsert().\n\n> This is just my opinion, but I don't really see the value in having a\n> typedef for a pointer to HashInsertStateData. I can understand that if\n> the struct was local to a .c file, but you've got the struct and\n> pointer typedef in the same header. I understand we often do this in\n> the code, but I feel like we do it less often in newer code. e.g we do\n> it in aset.c but not generation.c (which is much newer than aset.c).\n> My personal preference would be just to name the struct\n> HashInsertState and have no extra pointer typedefs.\n\nNot done, but not disagreeing either, just not very comfortable\nactually making those changes.\n\n--\nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Wed, 16 Nov 2022 04:33:08 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "Hi,\n\nI did some simple benchmark with v2 and v3, using the attached script,\nwhich essentially just builds hash index on random data, with different\ndata types and maintenance_work_mem values. And what I see is this\n(median of 10 runs):\n\n machine data type m_w_m master v2 v3\n ------------------------------------------------------------\n i5 bigint 128MB 9652 9402 9669\n 32MB 9545 9291 9535\n 4MB 9599 9371 9741\n int 128MB 9666 9475 9676\n 32MB 9530 9347 9528\n 4MB 9595 9394 9624\n text 128MB 9755 9596 9897\n 32MB 9711 9547 9846\n 4MB 9808 9744 10024\n xeon bigint 128MB 10790 10555 10812\n 32MB 10690 10373 10579\n 4MB 10682 10351 10650\n int 128MB 11258 10550 10712\n 32MB 10963 10272 10410\n 4MB 11152 10366 10589\n text 128MB 10935 10694 10930\n 32MB 10822 10672 10861\n 4MB 10835 10684 10895\n\nOr, relative to master:\n\n machine data type memory v2 v3\n ----------------------------------------------------------\n i5 bigint 128MB 97.40% 100.17%\n 32MB 97.34% 99.90%\n 4MB 97.62% 101.48%\n int 128MB 98.03% 100.11%\n 32MB 98.08% 99.98%\n 4MB 97.91% 100.31%\n text 128MB 98.37% 101.46%\n 32MB 98.32% 101.40%\n 4MB 99.35% 102.20%\n xeon bigint 128MB 97.82% 100.20%\n 32MB 97.03% 98.95%\n 4MB 96.89% 99.70%\n int 128MB 93.71% 95.15%\n 32MB 93.70% 94.95%\n 4MB 92.95% 94.95%\n text 128MB 97.80% 99.96%\n 32MB 98.62% 100.36%\n 4MB 98.61% 100.55%\n\nSo to me it seems v2 performs demonstrably better, v3 is consistently\nslower - not only compared to v2, but often also to master.\n\nAttached is the script I used and the raw results - this includes also\nresults for logged tables - the improvement is smaller, but the\nconclusions are otherwise similar.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 17 Nov 2022 15:34:16 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Wed, 16 Nov 2022 at 17:33, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> Thanks for the review, apologies for the delay in acting upon your comments.\n>\n> My tests show the sorted and random tests are BOTH 4.6% faster with\n> the v3 changes using 5-test avg, but you'll be pleased to know your\n> kit is about 15.5% faster than mine, comparing absolute execution\n> times.\n\nThanks for the updated patch.\n\nI started to look at this again and I'm starting to think that the\nHashInsertState struct is the wrong approach for passing along the\nsorted flag to _hash_doinsert(). The reason I think this is that in\nhashbuild() when setting buildstate.spool to NULL, you're also making\nthe decision about what to set the sorted flag to. However, in\nreality, we already know what we should be passing *every* time we\ncall _hash_doinsert(). The only place where we can pass the sorted\noption as true is in _h_indexbuild() when we're doing the sorted\nversion of the index build. Trying to make that decision any sooner\nseems error-prone.\n\nI understand you have made HashInsertState so that we don't need to\nadd new parameters should we ever need to pass something else along,\nbut I'm just thinking that if we ever need to add more, then we should\njust reconsider this in the future. I think for today, the better\noption is just to add the bool sorted as a parameter to\n_hash_doinsert() and pass it as true in the single case where it's\nvalid to do so. 
That seems less likely that we'll inherit some\noptions from some other place after some future modification and end\nup passing sorted as true when it should be false.\n\nAnother reason I didn't like the HashInsertState idea is that in the\nv3 patch there's an HashInsertState in both HashBuildState and HSpool.\nBecause in the normal insert path (hashinsert), we've neither a\nHashBuildState nor an HSpool, you're having to fake up a\nHashInsertStateData to pass something along to _hash_doinsert() in\nhashinsert(). When we're building an index, in the non-sorted index\nbuild case, you're always passing the HashInsertStateData from the\nHashBuildState, but when we're doing the sorted index build the one\nfrom HSpool is passed. In other words, in each of the 3 calls to\n_hash_doinsert(), the HashInsertStateData comes from a different\nplace.\n\nNow, I do see that you've coded hashbuild() so both versions of the\nHashInsertState point to the same HashInsertStateData, but I find it\nunacceptable programming that in _h_spoolinit() the code palloc's the\nmemory for the HSpool and you're setting the istate field to the\nHashInsertStateData that's on the stack. That just seems like a great\nway to end up having istate pointing to junk should the HSpool ever\nlive beyond the hashbuild() call. 
If we really don't want HSpool to\nlive beyond hashbuild(), then it too should be a local variable to\nhashbuild() instead of being palloc'ed in _h_spoolinit().\n_h_spoolinit() could just be passed a pointer to the HSpool to\npopulate.\n\nAfter getting rid of the HashInsertState code and just adding bool\nsorted to _hash_doinsert() and _hash_pgaddtup(), the resulting patch\nis much more simple:\n\nv3:\n src/backend/access/hash/hash.c | 19 ++++++++++++++++---\n src/backend/access/hash/hashinsert.c | 40\n++++++++++++++++++++++++++++++++++------\n src/backend/access/hash/hashsort.c | 8 ++++++--\n src/include/access/hash.h | 14 +++++++++++---\n 4 files changed, 67 insertions(+), 14 deletions(-)\n\nv4:\nsrc/backend/access/hash/hash.c | 4 ++--\nsrc/backend/access/hash/hashinsert.c | 40 ++++++++++++++++++++++++++++--------\nsrc/backend/access/hash/hashsort.c | 3 ++-\nsrc/include/access/hash.h | 6 ++++--\n4 files changed, 40 insertions(+), 13 deletions(-)\n\nand v4 includes 7 extra lines in hashinsert.c for the Assert() I\nmentioned in my previous email plus a bunch of extra comments.\n\nI'd rather see this solved like v4 is doing it.\n\nDavid",
"msg_date": "Thu, 24 Nov 2022 02:04:21 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Fri, 18 Nov 2022 at 03:34, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> I did some simple benchmark with v2 and v3, using the attached script,\n> which essentially just builds hash index on random data, with different\n> data types and maintenance_work_mem values. And what I see is this\n> (median of 10 runs):\n\n> So to me it seems v2 performs demonstrably better, v3 is consistently\n> slower - not only compared to v2, but often also to master.\n\nCould this just be down to code alignment changes? There does not\nreally seem to be any fundamental differences which would explain\nthis.\n\nDavid\n\n\n",
"msg_date": "Thu, 24 Nov 2022 02:07:05 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Wed, 23 Nov 2022 at 13:04, David Rowley <dgrowleyml@gmail.com> wrote:\n\n> After getting rid of the HashInsertState code and just adding bool\n> sorted to _hash_doinsert() and _hash_pgaddtup(), the resulting patch\n> is much more simple:\n\nSeems good to me and I wouldn't argue with any of your comments.\n\n> and v4 includes 7 extra lines in hashinsert.c for the Assert() I\n> mentioned in my previous email plus a bunch of extra comments.\n\nOh, I did already include that in v3 as requested.\n\n> I'd rather see this solved like v4 is doing it.\n\nPlease do. No further comments. Thanks for your help\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 23 Nov 2022 13:27:42 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "\n\nOn 11/23/22 14:07, David Rowley wrote:\n> On Fri, 18 Nov 2022 at 03:34, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> I did some simple benchmark with v2 and v3, using the attached script,\n>> which essentially just builds hash index on random data, with different\n>> data types and maintenance_work_mem values. And what I see is this\n>> (median of 10 runs):\n> \n>> So to me it seems v2 performs demonstrably better, v3 is consistently\n>> slower - not only compared to v2, but often also to master.\n> \n> Could this just be down to code alignment changes? There does not\n> really seem to be any fundamental differences which would explain\n> this.\n> \n\nCould be, but then how do we know the speedup with v2 is not due to code\nalignment too?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 23 Nov 2022 20:08:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Thu, 24 Nov 2022 at 08:08, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> >> So to me it seems v2 performs demonstrably better, v3 is consistently\n> >> slower - not only compared to v2, but often also to master.\n> >\n> > Could this just be down to code alignment changes? There does not\n> > really seem to be any fundamental differences which would explain\n> > this.\n> >\n>\n> Could be, but then how do we know the speedup with v2 is not due to code\n> alignment too?\n\nIt's a good question. Back when I was working on 913ec71d6, I had\nsimilar problems that I saw wildly different performance gains\ndepending on which commit I patched with. I sorted that out by just\nbenchmarking on a bunch of different commits both patched and\nunpatched.\n\nI've attached a crude bash script which looks at every commit since\n1st November 2022 that's changed anything in src/backend/* and runs a\nbenchmark with and without the v4 patch. That was 76 commits when I\ntested. In each instance, with the test I ran, I saw between a 5 and\n15% performance improvement with the v4 patch. No commit showed any\nperformance regression. That makes me fairly happy that there's a\ngenuine win with this patch.\n\nI've attached the script and the benchmark files along with the\nresults and a chart.\n\nDavid",
"msg_date": "Thu, 24 Nov 2022 15:47:19 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Hash index build performance tweak from sorting"
},
{
"msg_contents": "On Thu, 24 Nov 2022 at 02:27, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Wed, 23 Nov 2022 at 13:04, David Rowley <dgrowleyml@gmail.com> wrote:\n\n> > I'd rather see this solved like v4 is doing it.\n>\n> Please do. No further comments. Thanks for your help\n\nThanks. I pushed the v4 patch with some minor comment adjustments and\nalso renamed _hash_pgaddtup()'s new parameter to \"appendtup\". I felt\nthat better reflected what the parameter does in that function.\n\nDavid\n\n\n",
"msg_date": "Thu, 24 Nov 2022 17:24:54 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Hash index build performance tweak from sorting"
}
] |
[
{
"msg_contents": "Hi list,\n\nI have a case where Postgres chooses the wrong index and I'm not sure what\nto do about it:\n\nhttps://dbfiddle.uk/?rdbms=postgres_14&fiddle=f356fd56a920ea8a93c192f5a8c16b\n1c\n\nSetup:\n\nCREATE TABLE t (\n filename int,\n cropped bool not null default false,\n resized bool not null default false,\n create_date date not null default '1970-01-01'\n);\n\nINSERT INTO t\nSELECT generate_series(1, 1000000);\n\nUPDATE t SET cropped = true, resized = true\nWHERE filename IN (SELECT filename FROM t ORDER BY random() LIMIT 900000);\nUPDATE t SET resized = false\nWHERE filename IN (SELECT filename FROM t WHERE cropped = true ORDER BY\nrandom() LIMIT 1000);\n\nVACUUM FULL t;\nANALYZE t;\n\nData now looks like this:\n\nSELECT cropped, resized, count(*)\nFROM t\nGROUP BY 1,2;\n\nI create two indexes:\n\nCREATE INDEX idx_resized ON t(resized) WHERE NOT resized;\nCREATE INDEX specific ON t(cropped,resized) WHERE cropped AND NOT resized;\n\nAnd then run my query:\n\nEXPLAIN ANALYZE\n SELECT count(*) FROM t WHERE cropped AND NOT resized AND create_date <\nCURRENT_DATE;\n\nAggregate (cost=4001.25..4001.26 rows=1 width=8) (actual\ntime=478.557..478.558 rows=1 loops=1)\n -> Index Scan using idx_resized on t (cost=0.29..3777.71 rows=89415\nwidth=0) (actual time=478.177..478.480 rows=1000 loops=1)\n Filter: (cropped AND (create_date < CURRENT_DATE))\n Rows Removed by Filter: 100000\n\nIt takes 478 ms on dbfiddle.uk (on my machine it's faster but the difference\nis still visible).\n\nNow I delete an index:\n\nDROP INDEX idx_resized;\n\nand run the same query again and I get a much better plan:\n\nAggregate (cost=11876.27..11876.28 rows=1 width=8) (actual\ntime=0.315..0.316 rows=1 loops=1)\n -> Bitmap Heap Scan on t (cost=35.50..11652.73 rows=89415 width=0)\n(actual time=0.054..0.250 rows=1000 loops=1)\n Recheck Cond: (cropped AND (NOT resized))\n Filter: (create_date < CURRENT_DATE)\n Heap Blocks: exact=6\n -> Bitmap Index Scan on specific (cost=0.00..13.15 
rows=89415\nwidth=0) (actual time=0.040..0.040 rows=1000 loops=1)\n\nwhich uses the index specific and completes in less than a ms on both\ndbfiddle.uk and my machine.\n\nAdditional mystery - when I set the values not with an UPDATE but with a\nDEFAULT, then the correct index is chosen. What is going on?\nhttps://dbfiddle.uk/?rdbms=postgres_14&fiddle=dc7d8aea14e90f08ab6537a855f34d\n8c\n\nRegards,\nAndré\n\n\n\n",
"msg_date": "Tue, 19 Apr 2022 13:25:21 +0200",
"msg_from": "André Hänsel <andre@webkr.de>",
"msg_from_op": true,
"msg_subject": "Bad estimate with partial index"
},
{
"msg_contents": "=?iso-8859-1?Q?Andr=E9_H=E4nsel?= <andre@webkr.de> writes:\n> I have a case where Postgres chooses the wrong index and I'm not sure what\n> to do about it:\n\nThe core problem here seems to be a poor estimate for the selectivity\nof \"WHERE cropped AND NOT resized\":\n\nregression=# EXPLAIN ANALYZE\nSELECT count(*) FROM t\nWHERE cropped AND NOT resized ;\n...\n -> Bitmap Heap Scan on t (cost=35.26..6352.26 rows=91100 width=0) (actual time=0.121..0.190 rows=1000 loops=1)\n Recheck Cond: (cropped AND (NOT resized))\n...\n\nI think this is because the planner expects those two columns to be\nindependent, which they are completely not in your test data. Perhaps\nthat assumption is more true in your real-world data, but since you're\nhere complaining, I suppose not :-(. What you can do about that, in\nrecent Postgres versions, is to create extended statistics on the\ncombination of the columns:\n\nregression=# create statistics t_stats on cropped, resized from t;\nCREATE STATISTICS\nregression=# analyze t;\nANALYZE\nregression=# EXPLAIN ANALYZE \nSELECT count(*) FROM t\nWHERE cropped AND NOT resized AND create_date < CURRENT_DATE;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=3145.15..3145.16 rows=1 width=8) (actual time=9.765..9.766 rows=1 loops=1)\n -> Index Scan using idx_resized on t (cost=0.29..3142.65 rows=1000 width=0) (actual time=9.608..9.735 rows=1000 loops=1)\n Filter: (cropped AND (create_date < CURRENT_DATE))\n Rows Removed by Filter: 100000\n Planning Time: 0.115 ms\n Execution Time: 9.779 ms\n\nBetter estimate, but it's still using the wrong index :-(. 
If we force\nuse of the other one:\n\nregression=# drop index idx_resized;\nDROP INDEX\nregression=# EXPLAIN ANALYZE\nregression-# SELECT count(*) FROM t\nregression-# WHERE cropped AND NOT resized AND create_date < CURRENT_DATE;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=6795.38..6795.39 rows=1 width=8) (actual time=0.189..0.191 rows=1 loops=1)\n -> Bitmap Heap Scan on t (cost=13.40..6792.88 rows=1000 width=0) (actual time=0.047..0.147 rows=1000 loops=1)\n Recheck Cond: (cropped AND (NOT resized))\n Filter: (create_date < CURRENT_DATE)\n Heap Blocks: exact=6\n -> Bitmap Index Scan on specific (cost=0.00..13.15 rows=91565 width=0) (actual time=0.035..0.035 rows=1000 loops=1)\n ^^^^^^^^^^\n Planning Time: 0.154 ms\n Execution Time: 0.241 ms\n\nit looks like the problem is that the extended stats haven't been used\nwhile forming the estimate of the number of index entries retrieved,\nso we overestimate the cost of using this index.\n\nThat seems like a bug. Tomas?\n\nIn the meantime, maybe you could dodge the problem by combining\n\"cropped\" and \"resized\" into one multivalued column, so that there's\nnot a need to depend on extended stats to arrive at a decent estimate.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Apr 2022 14:01:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bad estimate with partial index"
},
{
"msg_contents": "I wrote:\n> it looks like the problem is that the extended stats haven't been used\n> while forming the estimate of the number of index entries retrieved, so\n> we overestimate the cost of using this index.\n> That seems like a bug. Tomas?\n\nI dug into this enough to locate the source of the problem.\nbtcostestimate includes the partial index clauses in what it\nsends to clauselist_selectivity, but per the comments for\nadd_predicate_to_index_quals:\n\n * Note that indexQuals contains RestrictInfo nodes while the indpred\n * does not, so the output list will be mixed. This is OK for both\n * predicate_implied_by() and clauselist_selectivity(), but might be\n * problematic if the result were passed to other things.\n\nThat comment was true when it was written, but it's been falsified\nby the extended-stats patches, which have added a whole lot of logic\nin and under clauselist_selectivity that ignores clauses that are not\nRestrictInfos.\n\nWhile we could perhaps fix this by having add_predicate_to_index_quals\nadd RestrictInfos, I'm inclined to feel that the extended-stats code\nis in the wrong. The contract for clauselist_selectivity has always\nbeen that it could optimize if given RestrictInfos rather than bare\nclauses, not that it would fail to work entirely without them.\nThere are probably more places besides add_predicate_to_index_quals\nthat are relying on that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Apr 2022 17:08:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bad estimate with partial index"
},
{
"msg_contents": "On 4/19/22 23:08, Tom Lane wrote:\n> I wrote:\n>> it looks like the problem is that the extended stats haven't been used\n>> while forming the estimate of the number of index entries retrieved, so\n>> we overestimate the cost of using this index.\n>> That seems like a bug. Tomas?\n> \n> I dug into this enough to locate the source of the problem.\n> btcostestimate includes the partial index clauses in what it\n> sends to clauselist_selectivity, but per the comments for\n> add_predicate_to_index_quals:\n> \n> * Note that indexQuals contains RestrictInfo nodes while the indpred\n> * does not, so the output list will be mixed. This is OK for both\n> * predicate_implied_by() and clauselist_selectivity(), but might be\n> * problematic if the result were passed to other things.\n> \n> That comment was true when it was written, but it's been falsified\n> by the extended-stats patches, which have added a whole lot of logic\n> in and under clauselist_selectivity that ignores clauses that are not\n> RestrictInfos.\n> \n> While we could perhaps fix this by having add_predicate_to_index_quals\n> add RestrictInfos, I'm inclined to feel that the extended-stats code\n> is in the wrong. The contract for clauselist_selectivity has always\n> been that it could optimize if given RestrictInfos rather than bare\n> clauses, not that it would fail to work entirely without them.\n> There are probably more places besides add_predicate_to_index_quals\n> that are relying on that.\n> \n\nYes, that seems like a fair assessment. I'll look into fixing this, not\nsure how invasive it will get, though.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 20 Apr 2022 09:58:07 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Bad estimate with partial index"
},
{
"msg_contents": "On 4/20/22 09:58, Tomas Vondra wrote:\n> On 4/19/22 23:08, Tom Lane wrote:\n>> I wrote:\n>>> it looks like the problem is that the extended stats haven't been used\n>>> while forming the estimate of the number of index entries retrieved, so\n>>> we overestimate the cost of using this index.\n>>> That seems like a bug. Tomas?\n>>\n>> I dug into this enough to locate the source of the problem.\n>> btcostestimate includes the partial index clauses in what it\n>> sends to clauselist_selectivity, but per the comments for\n>> add_predicate_to_index_quals:\n>>\n>> * Note that indexQuals contains RestrictInfo nodes while the indpred\n>> * does not, so the output list will be mixed. This is OK for both\n>> * predicate_implied_by() and clauselist_selectivity(), but might be\n>> * problematic if the result were passed to other things.\n>>\n>> That comment was true when it was written, but it's been falsified\n>> by the extended-stats patches, which have added a whole lot of logic\n>> in and under clauselist_selectivity that ignores clauses that are not\n>> RestrictInfos.\n>>\n>> While we could perhaps fix this by having add_predicate_to_index_quals\n>> add RestrictInfos, I'm inclined to feel that the extended-stats code\n>> is in the wrong. The contract for clauselist_selectivity has always\n>> been that it could optimize if given RestrictInfos rather than bare\n>> clauses, not that it would fail to work entirely without them.\n>> There are probably more places besides add_predicate_to_index_quals\n>> that are relying on that.\n>>\n> \n> Yes, that seems like a fair assessment. 
I'll look into fixing this, not\n> sure how invasive it will get, though.\n> \n\nSo, here's a WIP fix that improves the example shared by Andre, and does\nnot seem to break anything (or at least not any regression test).\n\nThe whole idea is that instead of bailing out for non-RestrictInfo case,\nit calculates the necessary information for the clause from scratch.\nThis means relids and pseudoconstant flag, which are checked to decide\nif the clause is compatible with extended stats.\n\nBut when inspecting how to calculate pseudoconstant, I realized that\nmaybe that's not really needed. Per distribute_qual_to_rels() we only\nset it to 'true' when bms_is_empty(relids), and we already check that\nrelids is a singleton, so it can't be empty - which means pseudoconstant\ncan't be true either.\n\n\nAndre, are you in position to test this fix with your application? Which\nPostgres version are you using, actually?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 20 Apr 2022 15:39:25 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Bad estimate with partial index"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> The whole idea is that instead of bailing out for non-RestrictInfo case,\n> it calculates the necessary information for the clause from scratch.\n> This means relids and pseudoconstant flag, which are checked to decide\n> if the clause is compatible with extended stats.\n\nRight.\n\n> But when inspecting how to calculate pseudoconstant, I realized that\n> maybe that's not really needed. Per distribute_qual_to_rels() we only\n> set it to 'true' when bms_is_empty(relids), and we already check that\n> relids is a singleton, so it can't be empty - which means pseudoconstant\n> can't be true either.\n\nYeah, I would not bother with the pseudoconstant-related tests for a\nbare clause. Patch looks reasonably sane in a quick once-over otherwise,\nand the fact that it fixes the presented test case is promising.\n(If you set enable_indexscan = off, you can verify that the estimate\nfor the number of index entries retrieved is now sane.) I did not look\nto see if there were any other RestrictInfo dependencies, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Apr 2022 10:15:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bad estimate with partial index"
},
{
"msg_contents": "Tomas Vondra wrote:\n> Andre, are you in position to test this fix with your application? Which Postgres version are you using, actually?\n\nThere's a test case in my original email, which obviously was synthetic, but I could also test this with my original\napplication data if I can get a Postgres running with your patch.\n\nI guess I could probably run the official Dockerfile and apply your patch somewhere between these lines?\nhttps://github.com/docker-library/postgres/blob/e8ebf74e50128123a8d0220b85e357ef2d73a7ec/14/bullseye/Dockerfile#L138\n\nAndré\n\n\n\n",
"msg_date": "Wed, 20 Apr 2022 16:23:26 +0200",
"msg_from": "André Hänsel <andre@webkr.de>",
"msg_from_op": false,
"msg_subject": "RE: Bad estimate with partial index"
},
{
"msg_contents": "On 4/20/22 16:15, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> The whole idea is that instead of bailing out for non-RestrictInfo case,\n>> it calculates the necessary information for the clause from scratch.\n>> This means relids and pseudoconstant flag, which are checked to decide\n>> if the clause is compatible with extended stats.\n> \n> Right.\n> \n>> But when inspecting how to calculate pseudoconstant, I realized that\n>> maybe that's not really needed. Per distribute_qual_to_rels() we only\n>> set it to 'true' when bms_is_empty(relids), and we already check that\n>> relids is a singleton, so it can't be empty - which means pseudoconstant\n>> can't be true either.\n> \n> Yeah, I would not bother with the pseudoconstant-related tests for a\n> bare clause. Patch looks reasonably sane in a quick once-over otherwise,\n> and the fact that it fixes the presented test case is promising.\n\nAttached is a slightly more polished patch, adding a couple comments and\nremoving the unnecessary pseudoconstant checks.\n\n> (If you set enable_indexscan = off, you can verify that the estimate\n> for the number of index entries retrieved is now sane.) \n\nI did that. Sorry for not mentioning that explicitly in my message.\n\n> I did not look to see if there were any other RestrictInfo\n> dependencies, though.\n\nI checked the places messing with RestrictInfo in clausesel.c and\nsrc/backend/statistics. Code in clausesel.c seems fine and mcv.c\nseems fine too (it merely strips the RestrictInfo).\n\nBut dependencies.c might need a fix too, although the issue is somewhat\ninverse to this one, because it looks like this:\n\n    if (IsA(clause, RestrictInfo))\n    {\n        ... do some checks ...\n    }\n\nso if there's no RestrictInfo on top, we just accept the clause. 
I guess\nthis should do the same thing with checking relids like the fix, but\nI've been unable to construct an example demonstrating the issue (it'd\nhave to be either pseudoconstant or reference multiple rels, which seems\nhard to get in btcostestimate).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 20 Apr 2022 20:27:40 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Bad estimate with partial index"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> But dependencies.c might need a fix too, although the issue is somewhat\n> inverse to this one, because it looks like this:\n\n> if (IsA(clause, RestrictInfo))\n> {\n> ... do some checks ...\n> }\n\n> so if there's no RestrictInfo on top, we just accept the clause. I guess\n> this should do the same thing with checking relids like the fix, but\n> I've been unable to construct an example demonstrating the issue (it'd\n> have to be either pseudoconstant or reference multiple rels, which seems\n> hard to get in btcostestimate).\n\nHm. You could get an indexqual referencing other rels when considering\ndoing a join via a nestloop with parameterized inner indexscan. However,\nthat would always be a query WHERE clause, which'd have a RestrictInfo.\nAt least in this code path, a bare clause would have to be a partial\nindex's predicate, which could not reference any other rels. The\npseudoconstant case would require a predicate reducing to WHERE FALSE\nor WHERE TRUE, which is at best pointless, though I'm not sure that\nwe prevent it.\n\nYou might have to go looking for other code paths that can pass a\nbare clause if you want a test case for this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Apr 2022 15:03:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bad estimate with partial index"
}
] |
[
{
"msg_contents": "Hi PostgreSQL Project team,\n\nHope this message finds you well. I have submitted my proposal through the GSoC website. I am sending this email for the convenience of future discussions.\n\nThis project would aim to update the pgjdbc website. Currently it is based on a very old PostgreSQL template and uses jekyll to build it. Ideally it will be buildable from github actions so that updating it is automated. I propose to rewrite the pgjdbc website in React because of its advantages and my experience.\n\nA list of deliverables:\n1, Website Wireframe and prototyping pages\n2, Better performance in the UI\n3, Updated PostgreSQL\n4, New and improved awesome website for the pgjdbc project\n\nQuestions or discussions are welcome. We will have a great collaboration journey.\n\nBest wishes,\nRui",
"msg_date": "Tue, 19 Apr 2022 13:11:50 +0000",
"msg_from": "Rui Huang <ruihuang789@gmail.com>",
"msg_from_op": true,
"msg_subject": "Rui Huang Google Summer of Code 2022 Proposal"
}
] |
[
{
"msg_contents": "I was reading the contributor guidelines and it mentioned sending the\nproposal to this email address. The guidelines also mentioned that you must\nbe subscribed to the mailing list. Please let me know if something more has\nto be done.\n\nThe proposal is attached to this email\n\nThank You.\n\nRegards\n\nVedant Gokhale",
"msg_date": "Tue, 19 Apr 2022 19:03:08 +0530",
"msg_from": "Vedant Gokhale <gokhalevedant06@gmail.com>",
"msg_from_op": true,
"msg_subject": "Proposal for New and improved website for pgjdbc (JDBC) for GSOC 2022"
},
{
"msg_contents": "Greetings,\n\n* Vedant Gokhale (gokhalevedant06@gmail.com) wrote:\n> I was reading the contributor guidelines and it mentioned sending the\n> proposal to this email address. The guidelines also mentioned that you must\n> be subscribed to the mailing list. Please let me know if something more has\n> to be done.\n> \n> The proposal is attached to this email\n\nThanks!\n\nStephen",
"msg_date": "Tue, 19 Apr 2022 10:13:15 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposal for New and improved website for pgjdbc (JDBC) for GSOC\n 2022"
}
] |
[
{
"msg_contents": "There's a complain from Coverity about outer_plan being referenced while\npossibly NULL, which can be silenced by using an existing local\nvariable. 0001 does that.\n\n0002 and 0003 are unrelated: in the former, we avoid creating a separate\nlocal variable planSlot when we can just refer to the eponymous member\nof ModifyTableContext. In the latter, we reduce the scope where\n'lockmode' is defined by moving it from ModifyTableContext to\nUpdateContext, which means we can save initializing it in a few spots;\nthis makes the code more natural.\n\nI expect these fixups in new code should be uncontroversial.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 19 Apr 2022 15:45:22 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "minor MERGE cleanups"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 03:45:22PM +0200, Alvaro Herrera wrote:\n> I expect these fixups in new code should be uncontroversial.\n\nThe whole set looks rather sane to me.\n--\nMichael",
"msg_date": "Wed, 20 Apr 2022 10:29:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: minor MERGE cleanups"
},
{
"msg_contents": "On 2022-Apr-20, Michael Paquier wrote:\n\n> On Tue, Apr 19, 2022 at 03:45:22PM +0200, Alvaro Herrera wrote:\n> > I expect these fixups in new code should be uncontroversial.\n> \n> The whole set looks rather sane to me.\n\nThank you, I have pushed them.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 20 Apr 2022 13:45:10 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: minor MERGE cleanups"
}
] |
[
{
"msg_contents": "Sent from Mail for Windows",
"msg_date": "Tue, 19 Apr 2022 17:00:07 +0300",
"msg_from": "Israa Odeh <israa.k.odeh@gmail.com>",
"msg_from_op": true,
"msg_subject": "GSoC Proposal Submission."
},
{
"msg_contents": "Greetings,\n\nAn actual message would be better when sending to this list in the\nfuture. Thanks for your GSoC proposal.\n\nStephen",
"msg_date": "Tue, 19 Apr 2022 10:13:51 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: GSoC Proposal Submission."
}
] |
[
{
"msg_contents": "The error handling for pqRowProcessor is described as\n\n * Add the received row to the current async result (conn->result).\n * Returns 1 if OK, 0 if error occurred.\n *\n * On error, *errmsgp can be set to an error string to be returned.\n * If it is left NULL, the error is presumed to be \"out of memory\".\n\nI find that this doesn't work anymore. If you set *errmsgp = \"some \nmessage\" and return 0, then psql will just print a result set with zero \nrows.\n\nBisecting points to\n\ncommit 618c16707a6d6e8f5c83ede2092975e4670201ad\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Fri Feb 18 15:35:15 2022 -0500\n\n Rearrange libpq's error reporting to avoid duplicated error text.\n\nIt is very uncommon to get an error from pqRowProcessor(). To \nreproduce, I inserted this code:\n\ndiff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c\nindex c7c48d07dc..9c1b33c6e2 100644\n--- a/src/interfaces/libpq/fe-exec.c\n+++ b/src/interfaces/libpq/fe-exec.c\n@@ -1124,6 +1124,12 @@ pqRowProcessor(PGconn *conn, const char **errmsgp)\n return 0;\n }\n\n+ if (nfields == 7)\n+ {\n+ *errmsgp = \"gotcha\";\n+ goto fail;\n+ }\n+\n /*\n * Basically we just allocate space in the PGresult for each field and\n * copy the data over.\n\nThis will produce assorted failures in the regression tests that \nillustrate the effect.\n\n(Even before the above commit, the handling of the returned message was \na bit weird: The error output was just the message string, without any \nprefix like \"ERROR:\".)\n\n\n",
"msg_date": "Tue, 19 Apr 2022 16:34:36 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "error handling in pqRowProcessor broken"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> The error handling for pqRowProcessor is described as\n> * Add the received row to the current async result (conn->result).\n> * Returns 1 if OK, 0 if error occurred.\n> *\n> * On error, *errmsgp can be set to an error string to be returned.\n> * If it is left NULL, the error is presumed to be \"out of memory\".\n\n> I find that this doesn't work anymore.\n\nWill look into it, thanks for reporting.\n\n(Hmm, seems like this API spec is deficient anyway. Is the error\nstring to be freed later? Is it already translated?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Apr 2022 10:54:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: error handling in pqRowProcessor broken"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I find that this doesn't work anymore. If you set *errmsgp = \"some\n> message\" and return 0, then psql will just print a result set with zero\n> rows.\n\nAh, I see the problem: a few places in fe-protocol3 didn't get the memo\nthat conn->error_result represents a \"pending\" PGresult that hasn't\nbeen constructed yet. The attached fixes it for me --- can you try it\non whatever test case led you to this?\n\n> (Even before the above commit, the handling of the returned message was\n> a bit weird: The error output was just the message string, without any\n> prefix like \"ERROR:\".)\n\nlibpq's always acted that way for internally-generated messages.\nMost of them are so rare that we're probably not used to seeing 'em.\nPerhaps there's a case for making it more verbose, but right now\ndoesn't seem like the time to undertake that.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 19 Apr 2022 15:16:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: error handling in pqRowProcessor broken"
},
{
"msg_contents": "On 19.04.22 21:16, Tom Lane wrote:\n> Peter Eisentraut<peter.eisentraut@enterprisedb.com> writes:\n>> I find that this doesn't work anymore. If you set *errmsgp = \"some\n>> message\" and return 0, then psql will just print a result set with zero\n>> rows.\n> Ah, I see the problem: a few places in fe-protocol3 didn't get the memo\n> that conn->error_result represents a \"pending\" PGresult that hasn't\n> been constructed yet. The attached fixes it for me --- can you try it\n> on whatever test case led you to this?\n\nYour patch fixes it for me.\n\n\n",
"msg_date": "Thu, 21 Apr 2022 22:18:52 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: error handling in pqRowProcessor broken"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 19.04.22 21:16, Tom Lane wrote:\n>> Ah, I see the problem: a few places in fe-protocol3 didn't get the memo\n>> that conn->error_result represents a \"pending\" PGresult that hasn't\n>> been constructed yet. The attached fixes it for me --- can you try it\n>> on whatever test case led you to this?\n\n> Your patch fixes it for me.\n\nThanks for testing. I pushed a cosmetically-polished version of that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Apr 2022 17:13:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: error handling in pqRowProcessor broken"
}
] |
[
{
"msg_contents": "Sent from Mail for Windows",
"msg_date": "Tue, 19 Apr 2022 17:47:10 +0300",
"msg_from": "Israa Odeh <israa.k.odeh@gmail.com>",
"msg_from_op": true,
"msg_subject": "GSoC Proposal Submission."
}
] |
[
{
"msg_contents": "Dear all,\n\nPlease review the attached for my jerry-rigged project proposal. I am\nseeking to continually refactor the proposal as I can!\n\nThanks,\nMahesh",
"msg_date": "Tue, 19 Apr 2022 14:01:54 -0400",
"msg_from": "Mahesh Gouru <mahesh.gouru@gmail.com>",
"msg_from_op": true,
"msg_subject": "DBT-5 Stored Procedure Development (2022)"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 11:02 AM Mahesh Gouru <mahesh.gouru@gmail.com> wrote:\n> Please review the attached for my jerry-rigged project proposal. I am seeking to continually refactor the proposal as I can!\n\nI for one see a lot of value in this proposal. I think it would be\ngreat to revive DBT-5, since TPC-E has a number of interesting\nbottlenecks that we'd likely learn something from. It's particularly\ngood at stressing concurrency control, which TPC-C really doesn't do.\nIt's also a lot easier to run smaller benchmarks that don't require\nlots of storage space, but are nevertheless correct according to the\nspec.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 19 Apr 2022 11:07:14 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: DBT-5 Stored Procedure Development (2022)"
},
{
"msg_contents": "Hi Mahesh,\n\nOn Tue, Apr 19, 2022 at 02:01:54PM -0400, Mahesh Gouru wrote:\n> Dear all,\n> \n> Please review the attached for my jerry-rigged project proposal. I am\n> seeking to continually refactor the proposal as I can!\n\nMy comments might be briefer than they should be, but I need to write this\nquickly. :)\n\n* The 4 steps in the description aren't needed, they already exist.\n* May 20: I think this should be more about reviewing the TPC-E\n  specification rather than industry research, as we want to try to\n  follow specification guidelines.\n* June 20: Random data generation and scaling are provided by and\n  already defined by the spec\n* Aug 01: A report generator already exists, but I think time could be\n  allocated to redoing the raw HTML generation with something like\n  reStructuredText, something that is easier to generate with scripts\n  and convertible into other formats with other tools\n\nAs some of the tasks proposed are actually in place, one other task could be\nupdating egen (the TPC supplied code.) The kit was last developed against\n1.12 and 1.14 is current as of this email.\n\nRegards,\nMark",
"msg_date": "Tue, 19 Apr 2022 18:30:56 +0000",
"msg_from": "Mark Wong <markwkm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DBT-5 Stored Procedure Development (2022)"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 11:31 AM Mark Wong <markwkm@gmail.com> wrote:\n> As some of tasks proposed are actually in place, one other task could be\n> updating egen (the TPC supplied code.) The kit was last developed again\n> 1.12 and 1.14 is current as this email.\n\nAs you know, I have had some false starts with using DBT5 on a modern\nLinux distribution. Perhaps I gave up too easily at the time, but I'm\ndefinitely still interested. Has there been work on that since?\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 19 Apr 2022 17:20:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: DBT-5 Stored Procedure Development (2022)"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 05:20:50PM -0700, Peter Geoghegan wrote:\n> On Tue, Apr 19, 2022 at 11:31 AM Mark Wong <markwkm@gmail.com> wrote:\n> > As some of tasks proposed are actually in place, one other task could be\n> > updating egen (the TPC supplied code.) The kit was last developed again\n> > 1.12 and 1.14 is current as this email.\n> \n> As you know, I have had some false starts with using DBT5 on a modern\n> Linux distribution. Perhaps I gave up too easily at the time, but I'm\n> definitely still interested. Has there been work on that since?\n\nI'm afraid not. I'm guessing that pulling in egen 1.14 would address\nthat. Maybe it would make sense to put that on the top of todo list if\nthis project is accepted...\n\nRegards,\nMark\n\n\n",
"msg_date": "Tue, 26 Apr 2022 17:36:17 +0000",
"msg_from": "Mark Wong <markwkm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DBT-5 Stored Procedure Development (2022)"
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 10:36 AM Mark Wong <markwkm@gmail.com> wrote:\n> I'm afraid not. I'm guessing that pulling in egen 1.14 would address\n> that. Maybe it would make sense to put that on the top of todo list if\n> this project is accepted...\n\nWouldn't it be a prerequisite here? I don't actually have any reason\nto prefer the old function-based code to the new stored procedure\nbased code. Really, all I'm looking for is a credible implementation\nof TPC-E that I can use to model some aspects of OLTP performance for\nmy own purposes.\n\nTPC-C (which I have plenty of experience with) has only two secondary\nindexes (in typical configurations), and doesn't really stress\nconcurrency control at all. Plus there are no low cardinality indexes\nin TPC-C, while TPC-E has quite a few. Chances are high that I'd learn\nsomething from TPC-E, which has all of these things -- I'm really\nlooking for bottlenecks, where Postgres does entirely the wrong thing.\nIt's especially interesting to me as somebody that focuses on B-Tree\nindexing.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 26 Apr 2022 10:44:45 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: DBT-5 Stored Procedure Development (2022)"
},
{
"msg_contents": "On Mon, May 02, 2022 at 07:14:28AM -0700, Mark Wong wrote:\n> On Tue, Apr 26, 2022, 10:45 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> > On Tue, Apr 26, 2022 at 10:36 AM Mark Wong <markwkm@gmail.com> wrote:\n> > > I'm afraid not. I'm guessing that pulling in egen 1.14 would address\n> > > that. Maybe it would make sense to put that on the top of todo list if\n> > > this project is accepted...\n> >\n> > Wouldn't it be a prerequisite here? I don't actually have any reason\n> > to prefer the old function-based code to the new stored procedure\n> > based code. Really, all I'm looking for is a credible implementation\n> > of TPC-E that I can use to model some aspects of OLTP performance for\n> > my own purposes.\n> >\n> > TPC-C (which I have plenty of experience with) has only two secondary\n> > indexes (in typical configurations), and doesn't really stress\n> > concurrency control at all. Plus there are no low cardinality indexes\n> > in TPC-C, while TPC-E has quite a few. Chances are high that I'd learn\n> > something from TPC-E, which has all of these things -- I'm really\n> > looking for bottlenecks, where Postgres does entirely the wrong thing.\n> > It's especially interesting to me as somebody that focuses on B-Tree\n> > indexing.\n\nI think it could be done in either order.\n\nWhile it's not ideal that the kit seems to work most reliably as-is on\nRHEL/Centos/etc. 6, I think that could provide some confidence in\ngetting familiar with something on a working platform. The updates to\nthe stored functions/procedures would be the same regardless of egen\nversion.\n\nIf we get the project slot, we can talk further about what to actually\ntackle first.\n\nRegards,\nMark\n\n\n",
"msg_date": "Mon, 2 May 2022 07:30:23 -0700",
"msg_from": "Mark Wong <markwkm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DBT-5 Stored Procedure Development (2022)"
}
] |
[
{
"msg_contents": "view bc is just a joining wrapper around pg_buffercache.\n\nregression=# select datname, relname, count(*), sum(count(*)) over () AS\ntotal from bc where isdirty group by datname, relname;\n datname | relname | count | total\n---------+---------+-------+-------\n(0 rows)\n\nregression=# update tenk1 set stringu1 = stringu1 || '' where (unique1 %\n384) = 3;\nUPDATE 27\nregression=# select datname, relname, count(*), sum(count(*)) over () AS\ntotal from bc where isdirty group by datname, relname;\n datname | relname | count | total\n------------+---------+-------+-------\n regression | tenk1 | 3 | 3\n(1 row)\n\nregression=# checkpoint;\nCHECKPOINT\n\n2022-04-19 23:17:08.256 UTC [161084] LOG: checkpoint starting: immediate\nforce wait\n2022-04-19 23:17:08.264 UTC [161084] LOG: checkpoint complete: wrote 4\nbuffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s,\nsync=0.002 s, total=0.009 s; sync files=2, longest=0.002 s, average=0.001\ns; distance=12 kB, estimate=72358 kB\n\nI've done this four times in a row and while the number of dirty buffers\nshown each time vary (see below) I see that \"wrote N buffers\" is always\nexactly one more than the total count of dirty buffers. 
I'm just curious\nif anyone has a quick answer for this unusual correspondence.\n\nDavid J.\n\nregression=# update tenk1 set stringu1 = stringu1 || '' where (unique1 %\n384) = 3;\nUPDATE 27\nregression=# select datname, relname, count(*), sum(count(*)) over () AS\ntotal from bc where isdirty group by datname, relname;\n  datname   |       relname        | count | total\n------------+----------------------+-------+-------\n regression | tenk1                |    33 |   102\n regression | tenk1_hundred        |     9 |   102\n regression | tenk1_thous_tenthous |    18 |   102\n regression | tenk1_unique1        |    27 |   102\n regression | tenk1_unique2        |    15 |   102\n(5 rows)\n\n2022-04-19 23:13:03.480 UTC [161084] LOG: checkpoint starting: immediate\nforce wait\n2022-04-19 23:13:03.523 UTC [161084] LOG: checkpoint complete: wrote 103\nbuffers (0.6%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.004 s,\nsync=0.014 s, total=0.044 s; sync files=8, longest=0.008 s, average=0.002\ns; distance=721 kB, estimate=110165 kB",
"msg_date": "Tue, 19 Apr 2022 16:21:21 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Odd off-by-one dirty buffers and checkpoint buffers written"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 04:21:21PM -0700, David G. Johnston wrote:\n> I've done this four times in a row and while the number of dirty buffers\n> shown each time vary (see below) I see that \"wrote N buffers\" is always\n> exactly one more than the total count of dirty buffers. I'm just curious\n> if anyone has a quick answer for this unusual correspondence.\n\nI see that SlruInternalWritePage() increments ckpt_bufs_written, so my\nfirst guess would be that it's due to something like CheckPointCLOG().\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 19 Apr 2022 16:36:51 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Odd off-by-one dirty buffers and checkpoint buffers written"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 4:36 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Tue, Apr 19, 2022 at 04:21:21PM -0700, David G. Johnston wrote:\n> > I've done this four times in a row and while the number of dirty buffers\n> > shown each time vary (see below) I see that \"wrote N buffers\" is always\n> > exactly one more than the total count of dirty buffers. I'm just curious\n> > if anyone has a quick answer for this unusual correspondence.\n>\n> I see that SlruInternalWritePage() increments ckpt_bufs_written, so my\n> first guess would be that it's due to something like CheckPointCLOG().\n>\n>\nI peeked at pg_stat_bgwriter and see an increase in buffers_checkpoint\nmatching the dirty buffers number.\n\nI also looked at pg_stat_slru to try and find the corresponding change\ncaused by:\n\nslru.c:766 (SlruPhysicalWritePage)\npgstat_count_slru_page_written(shared->slru_stats_idx);\n\nI do see (Xact) blks_hit change during this process (after the\nupdate/commit, not the checkpoint, though) but it increases by 2 when dirty\nbuffers is 4. I was expecting 4, thinking that blocks and buffers and\npages are basically the same things (which [1] seems to affirm).\n\nhttps://www.postgresql.org/message-id/13563.1044552279%40sss.pgh.pa.us\n\nDavid J.",
"msg_date": "Tue, 19 Apr 2022 17:51:24 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Odd off-by-one dirty buffers and checkpoint buffers written"
},
{
"msg_contents": "At Tue, 19 Apr 2022 17:51:24 -0700, \"David G. Johnston\" <david.g.johnston@gmail.com> wrote in \n> On Tue, Apr 19, 2022 at 4:36 PM Nathan Bossart <nathandbossart@gmail.com>\n> wrote:\n> \n> > On Tue, Apr 19, 2022 at 04:21:21PM -0700, David G. Johnston wrote:\n> > > I've done this four times in a row and while the number of dirty buffers\n> > > shown each time vary (see below) I see that \"wrote N buffers\" is always\n> > > exactly one more than the total count of dirty buffers. I'm just curious\n> > > if anyone has a quick answer for this unusual correspondence.\n> >\n> > I see that SlruInternalWritePage() increments ckpt_bufs_written, so my\n> > first guess would be that it's due to something like CheckPointCLOG().\n> >\n> >\n> I peeked at pg_stat_bgwriter and see an increase in buffers_checkpoint\n> matching the dirty buffers number.\n> \n> I also looked at pg_stat_slru to try and find the corresponding change\n> caused by:\n> \n> slru.c:766 (SlruPhysicalWritePage)\n> pgstat_count_slru_page_written(shared->slru_stats_idx);\n> \n> I do see (Xact) blks_hit change during this process (after the\n> update/commit, not the checkpoint, though) but it increases by 2 when dirty\n> buffers is 4. I was expecting 4, thinking that blocks and buffers and\n> pages are basically the same things (which [1] seems to affirm).\n> \n> https://www.postgresql.org/message-id/13563.1044552279%40sss.pgh.pa.us\n\nIf I understand you point correctly..\n\nXact SLRU is so-called CLOG, on which transaction statuses\n(running/committed/aborted) are recorded. Its pages are separate\nobjects from table pages, which are out-of-sight of pg_bufferchace.\nHowever, the same relationship between pages, blocks and buffers\napplies to the both cases in parallel.\n\nThe reason for the 2 hits of Xact SLRU is that once for visibility\n(MVCC) check and another for commit.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 20 Apr 2022 17:03:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Odd off-by-one dirty buffers and checkpoint buffers written"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 1:03 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n>\n> The reason for the 2 hits of Xact SLRU is that once for visibility\n> (MVCC) check and another for commit.\n>\n>\nMakes sense. Thanks. Now, is the lack of such a detail when looking at\npg_stat_slru (for this and the other 6 named caches) an omission by intent\nor just no one has taken the time to write up what the different caches are\nholding? I would think a brief sentence for each followed by a link to the\nmain section describing the feature would be decent content to add to the\nintroduction for the view in 28.2.21.\n\nAlso, is \"other\" ever expected to be something besides all zeros?\n\nThanks!\n\nDavid J.",
"msg_date": "Wed, 20 Apr 2022 06:50:36 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Odd off-by-one dirty buffers and checkpoint buffers written"
}
] |
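The `bc` view used in the thread above is described only as "a joining wrapper around pg_buffercache"; its actual definition was never posted. A minimal sketch of such a view (hypothetical — the column choices are assumptions, and the relfilenode join can only resolve relations belonging to the current database) might look like:

```sql
-- Hypothetical reconstruction of the "bc" view from the thread above.
-- Requires: CREATE EXTENSION pg_buffercache;
CREATE VIEW bc AS
SELECT d.datname,
       c.relname,
       b.isdirty,
       b.usagecount
FROM pg_buffercache b
     LEFT JOIN pg_database d ON b.reldatabase = d.oid
     -- pg_relation_filenode() maps a pg_class row to its on-disk filenode;
     -- buffers from other databases will show a NULL relname.
     LEFT JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid);
```

With a view shaped like this, the query shown in the thread (`select datname, relname, count(*), sum(count(*)) over () AS total from bc where isdirty group by datname, relname`) works as posted.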
[
{
"msg_contents": "NVMe devices have a maximum queue length of 64k:\n\n\thttps://blog.westerndigital.com/nvme-queues-explained/\n\nbut our effective_io_concurrency maximum is 1,000:\n\n\ttest=> set effective_io_concurrency = 1001;\n\tERROR: 1001 is outside the valid range for parameter \"effective_io_concurrency\" (0 .. 1000)\n\nShould we increase its maximum to 64k? Backpatched? (SATA has a\nmaximum queue length of 256.)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 19 Apr 2022 22:56:05 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "effective_io_concurrency and NVMe devices"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 10:56:05PM -0400, Bruce Momjian wrote:\n> NVMe devices have a maximum queue length of 64k:\n> \n> \thttps://blog.westerndigital.com/nvme-queues-explained/\n> \n> but our effective_io_concurrency maximum is 1,000:\n> \n> \ttest=> set effective_io_concurrency = 1001;\n> \tERROR: 1001 is outside the valid range for parameter \"effective_io_concurrency\" (0 .. 1000)\n> \n> Should we increase its maximum to 64k? Backpatched? (SATA has a\n> maximum queue length of 256.)\n\nIf there are demonstrable improvements with higher values, this seems\nreasonable to me. I would even suggest removing the limit completely so\nthis doesn't need to be revisited in the future.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Apr 2022 10:58:58 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: effective_io_concurrency and NVMe devices"
},
{
"msg_contents": "On Wed, 20 Apr 2022 at 14:56, Bruce Momjian <bruce@momjian.us> wrote:\n> NVMe devices have a maximum queue length of 64k:\n\n> Should we increase its maximum to 64k? Backpatched? (SATA has a\n> maximum queue length of 256.)\n\nI have a machine here with 1 x PCIe 3.0 NVMe SSD and also 1 x PCIe 4.0\nNVMe SSD. I ran a few tests to see how different values of\neffective_io_concurrency would affect performance. I tried to come up\nwith a query that did little enough CPU processing to ensure that I/O\nwas the clear bottleneck.\n\nThe test was with a 128GB table on a machine with 64GB of RAM. I\npadded the tuples out so there were 4 per page so that the aggregation\ndidn't have much work to do.\n\nThe query I ran was: explain (analyze, buffers, timing off) select\ncount(p) from r where a = 1;\n\nHere's what I saw:\n\nNVME PCIe 3.0 (Samsung 970 Evo 1TB)\ne_i_c query_time_ms\n0 88627.221\n1 652915.192\n5 271536.054\n10 141168.986\n100 67340.026\n1000 70686.596\n10000 70027.938\n100000 70106.661\n\nSaw a max of 991 MB/sec in iotop\n\nNVME PCIe 4.0 (Samsung 980 Pro 1TB)\ne_i_c query_time_ms\n0 59306.960\n1 956170.704\n5 237879.121\n10 135004.111\n100 55662.030\n1000 51513.717\n10000 59807.824\n100000 53443.291\n\nSaw a max of 1126 MB/sec in iotop\n\nI'm not pretending that this is the best query and table size to show\nit, but at least this test shows that there's not much to gain by\nprefetching further. I imagine going further than we need to is\nlikely to have negative consequences due to populating the kernel page\ncache with buffers that won't be used for a while. I also imagine\ngoing too far out likely increases the risk that buffers we've\nprefetched are evicted before they're used.\n\nThis does also highlight that an effective_io_concurrency of 1 (the\ndefault) is pretty terrible in this test. The bitmap contained every\n2nd page. I imagine that would break normal page prefetching by the\nkernel. 
If that's true, then it does not explain why e_i_c = 0 was so\nfast.\n\nI've attached the test setup that I did. I'm open to modifying the\ntest and running again if someone has an idea that might show benefits\nto larger values for effective_io_concurrency.\n\nDavid",
"msg_date": "Thu, 21 Apr 2022 20:14:28 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: effective_io_concurrency and NVMe devices"
},
{
"msg_contents": "On 4/21/22 10:14, David Rowley wrote:\n> On Wed, 20 Apr 2022 at 14:56, Bruce Momjian <bruce@momjian.us> wrote:\n>> NVMe devices have a maximum queue length of 64k:\n> \n>> Should we increase its maximum to 64k? Backpatched? (SATA has a\n>> maximum queue length of 256.)\n> \n> I have a machine here with 1 x PCIe 3.0 NVMe SSD and also 1 x PCIe 4.0\n> NVMe SSD. I ran a few tests to see how different values of\n> effective_io_concurrency would affect performance. I tried to come up\n> with a query that did little enough CPU processing to ensure that I/O\n> was the clear bottleneck.\n> \n> The test was with a 128GB table on a machine with 64GB of RAM. I\n> padded the tuples out so there were 4 per page so that the aggregation\n> didn't have much work to do.\n> \n> The query I ran was: explain (analyze, buffers, timing off) select\n> count(p) from r where a = 1;\n> \n> Here's what I saw:\n> \n> NVME PCIe 3.0 (Samsung 970 Evo 1TB)\n> e_i_c query_time_ms\n> 0 88627.221\n> 1 652915.192\n> 5 271536.054\n> 10 141168.986\n> 100 67340.026\n> 1000 70686.596\n> 10000 70027.938\n> 100000 70106.661\n> \n> Saw a max of 991 MB/sec in iotop\n> \n> NVME PCIe 4.0 (Samsung 980 Pro 1TB)\n> e_i_c query_time_ms\n> 0 59306.960\n> 1 956170.704\n> 5 237879.121\n> 10 135004.111\n> 100 55662.030\n> 1000 51513.717\n> 10000 59807.824\n> 100000 53443.291\n> \n> Saw a max of 1126 MB/sec in iotop\n> \n> I'm not pretending that this is the best query and table size to show\n> it, but at least this test shows that there's not much to gain by\n> prefetching further. I imagine going further than we need to is\n> likely to have negative consequences due to populating the kernel page\n> cache with buffers that won't be used for a while. 
I also imagine\n> going too far out likely increases the risk that buffers we've\n> prefetched are evicted before they're used.\n> \n\nNot sure.\n\nI don't think the risk of polluting the cache is very high, because the\n1k buffers is 8MB and 64k would be 512MB. That's significant, but likely\njust a tiny fraction of the available memory in machines with NVME.\nSure, there may be multiple sessions doing prefetch, but the chances\nthat the sessions touch the same data seem low.\n\n> This does also highlight that an effective_io_concurrency of 1 (the\n> default) is pretty terrible in this test. The bitmap contained every\n> 2nd page. I imagine that would break normal page prefetching by the\n> kernel. If that's true, then it does not explain why e_i_c = 0 was so\n> fast.\n> \n\nYeah, this default is clearly pretty unfortunate. I think the problem is\nthat an async request is not free, i.e. prefetching means\n\n async request + read\n\nand the prefetch trick is in assuming that\n\n cost(async request) << cost(read)\n\nand moving the read to a background thread. But NVMe makes reads\ncheaper, so the amount of work moved to the background thread gets\nlower, while the cost of the async request remains roughly the same.\nWhich means the difference (benefit) decreases over time.\n\nAlso, recent NVMe devices (like Intel Optane) aim to require lower queue\ndepths, so although the NVMe spec supports 64k queues and 64k commands\nper queue, that does not mean you need to use that many requests to get\ngood performance.\n\nAs for the strange behavior with e_i_c=0, I think this can be explained\nby how NVMe devices work internally. A simplified model of an NVMe\ndevice is \"slow\" flash with a DRAM cache, and AFAIK the data is not\nread from flash into DRAM in 8kB pages but larger chunks. So even if\nthere's no explicit OS readahead, the device may still cache larger\nchunks in the DRAM buffer.\n\n\n> I've attached the test setup that I did. 
I'm open to modifying the\n> test and running again if someone has an idea that might show benefits\n> to larger values for effective_io_concurrency.\n> \n\nI think it'd be interesting to test different / less regular patterns,\nnot just every 2nd page etc.\n\nThe other idea I had while looking at batching a while back, is that we\nshould batch the prefetches. The current logic interleaves prefetches\nwith other work - prefetch one page, process one page, ... But once\nreading a page gets sufficiently fast, this means the queues never get\ndeep enough for optimizations. So maybe we should think about batching\nthe prefetches, in some way. Unfortunately posix_fadvise does not allow\nbatching of requests, but we can at least stop interleaving the requests.\n\nThe attached patch is a trivial version that waits until we're at least\n32 pages behind the target, and then prefetches all of them. Maybe give\nit a try? (This pretty much disables prefetching for e_i_c below 32, but\nfor an experimental patch that's enough.)\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 21 Apr 2022 15:49:21 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: effective_io_concurrency and NVMe devices"
},
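The experimental patch described above batches prefetches: it waits until the scan is at least 32 pages behind the target, then issues all the posix_fadvise() advisories at once instead of interleaving one advisory per page read. A rough sketch of that access pattern against an ordinary file (this is NOT PostgreSQL code — the function name and constants are invented for illustration only):

```python
import os

PAGE_SIZE = 8192   # PostgreSQL's default block size
BATCH_SIZE = 32    # mirrors the patch's "at least 32 pages behind" threshold

def read_pages_batched(path, page_numbers):
    """Read the given pages, issuing prefetch advisories a batch at a time
    instead of interleaving one advisory per read."""
    results = []
    with open(path, "rb") as f:
        fd = f.fileno()
        for start in range(0, len(page_numbers), BATCH_SIZE):
            batch = page_numbers[start:start + BATCH_SIZE]
            # Queue all advisories for the batch up front, so the kernel
            # sees some queue depth rather than one request at a time.
            if hasattr(os, "posix_fadvise"):  # POSIX-only
                for pageno in batch:
                    os.posix_fadvise(fd, pageno * PAGE_SIZE, PAGE_SIZE,
                                     os.POSIX_FADV_WILLNEED)
            # Only after the whole batch is advised do we actually read.
            for pageno in batch:
                f.seek(pageno * PAGE_SIZE)
                results.append(f.read(PAGE_SIZE))
    return results
```

Whether batching actually helps depends on the device and kernel; the sketch only illustrates the ordering of advisories versus reads that the patch changes.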
{
"msg_contents": "Hi,\n\nI've been looking at this a bit more, investigating the regression. I\nwas wondering how come no one noticed/reported this issue before, since\nwe have \"1\" as the default value since 9.5.\n\nSo either this behaves very differently on modern flash/NVMe storage, or\nmaybe it somehow depends on the dataset / access pattern.\n\nNote: I don't have access to a machine with NVMe at the moment, so I did\nall the tests on my usual machine with SATA SSDs. I plan to run the same\ntests on NVMe once the bigger machine is available, but I think that'll\nlead mostly to the same conclusions. So let me present the results now,\nincluding the scripts so David can run those tests on their machine.\n\n\n From now on, I'll refer to two storage devices:\n\n1) SSD RAID - 6x Intel S3600 100GB SATA, in RAID0\n\n2) SSD SINGLE - Intel Series 320, 120GB\n\nThe machine is pretty small, with just 8GB of RAM and i5-2500k (4C) CPU.\n\n\nFirstly, I remembered there were some prefetching benchmarks [1], so I\nrepeated those. 
I don't have the same SSD as Merlin, but the general\nbehavior should be similar.\n\n e_i_c 1 2 4 8 16 32 64 128 256\n -------------------------------------------------------------\n timing 46.3 49.3 29.1 23.2 22.1 20.7 20.0 19.3 19.2\n diff 100% 106% 63% 50% 48% 45% 43% 42% 41%\n\nThe second line is simply the timing relative to the first column.\nMerlin did not include timing for e_i_c=0 (I think that was a valid value,\nmeaning \"disabled\", even back then).\n\nIn any case, those results show significant improvements compared to\ne_i_c=1 as prefetch increases.\n\nWhen I run the same query on scale 3000, including eic=0:\n\n e_i_c 0 1 2 4 8 16 32 64 128 256\n ---------------------------------------------------------------------\n ssd 29.4 49.4 33.9 25.2 31.9 27.2 28.0 29.3 27.6 27.6\n ssd 100% 168% 115% 86% 108% 92% 95% 100% 94% 94%\n ---------------------------------------------------------------------\n ssd raid 10.7 74.2 51.2 30.6 24.0 13.8 14.6 14.3 14.1 14.0\n ssd raid 100% 691% 477% 285% 224% 129% 137% 134% 132% 131%\n\nNotice that ignoring the eic=0 value (no prefetch), the behavior is\npretty similar to what Merlin reported - consistent improvements as the\neic value increases. Ultimately it gets close to eic=0, but not faster\n(at least not significantly).\n\nFWIW I actually tried running this on 9.3, and the behavior is the same.\n\nSo I guess the behavior is the same, but it misses that eic=1 actually\nmay be making it much worse (compared to eic=0). The last para in [1]\nactually says:\n\n > Interesting that at setting of '2' (the lowest possible setting with\n > the feature actually working) is pessimal.\n\nwhich sounds a bit like '1' does nothing (no prefetch). But that's not\n(and was not) the case, I think. 
But we don't have the results for eic=0\nunfortunately.\n\nNote: We stopped using the complex prefetch distance calculation since\nthen, but we can ignore that here I think.\n\n\nThe other problem with reproducing/interpreting those results is it's\nunclear whether the query was executed immediately after \"pgbench -i\" or\nsometime later (after a bunch of transactions were done). Consider the\nquery is:\n\n select * from pgbench_accounts\n where aid between 1000 and 50000000 and abalance != 0;\n\nand right after initialization the accounts will be almost perfectly\nsequential. So the query will match a continuous range of pages\n(roughly 1/6 of the whole table). But updates may be shuffling rows\naround, making the I/O access pattern more random (but I'm not sure how\nmuch, I'd expect most updates to fit on the same page).\n\nThis might explain the poor results (compared to eic=0). Sequential\naccess is great for readahead (in the OS and also internal in SSD),\nwhich makes our prefetch pretty unnecessary / perhaps even actively harmful.\n\nAnd the same explanation applies to David's query - that's also almost\nperfectly sequential, AFAICS.\n\nBut that just raises the question - how does the prefetch work for other\naccess patterns, with pages not this sequential, but spread randomly\nthrough the table.\n\nSo I constructed a couple datasets, with different patterns, generated\nby the attached bash script. The table has this structure:\n\n CREATE TABLE t (a int, padding text)\n\nand \"a\" has values between 0 and 1000, and the script generates data so\nthat each page contains 27 rows with the same \"a\" value. This allows us\nto write queries matching an arbitrary fraction of the table. 
For example\nwe can say \"a BETWEEN 10 AND 20\" which matches 1%, etc.\n\nFurthermore, the pages are either independent (each with a different\nvalue) or with longer streaks of the same value.\n\nThe script generates these data sets:\n\n random: each page gets a random \"a\" value\n random-8: each sequence of 8 pages gets a random value\n random-32: each sequence of 32 pages gets a random value\n sequential: split into 1000 sequences, values 0, 1, 2, ...\n\nAnd then the script runs queries matching a random subset of the table,\nwith fractions 1%, 5%, 10%, 25% and 50% (queries with different\nselectivity). The ranges are generated at random, it's just the length\nof the range that matters.\n\nThe script also restarts the database and drops caches, so that the\nprefetch actually does something.\n\nAttached are CSV files with a complete run from the two SSD devices, if\nyou want to dig in. But the two PDFs are a better \"visualization\" of\nperformance compared to \"no prefetch\" (eic=0).\n\nThe \"tables\" PDF shows timing compared to eic=0, so 100% means \"the\nsame\" and 200% means \"twice as slow\". Or by color - red is \"slower\" (bad)\nwhile green is \"faster\" (good).\n\nThe \"charts\" PDF shows essentially the same thing (duration compared to\neic=0), but as a chart with \"eic\" on the x-axis. In principle, we want\nall the values to be \"below\" the 100% line.\n\n\nI think there are three obvious observations we can make from the tables\nand charts:\n\n1) The higher the selectivity, the worse.\n\n2) The more sequential the data, the worse.\n\n3) These two things \"combine\".\n\n\nFor example on the \"random\" data, prefetching works perfectly fine for\nqueries matching 1%, 5% and 10% even for eic=1. But queries matching 25%\nand 50% get much slower with eic=1 and need much higher values to even\nbreak even.\n\nThe less random data sets make it worse and worse. 
With random-32 all\nquery cases (even 1%) require at least eic=4 or more to break even,\nand with \"sequential\" it never happens.\n\nI'd bet the NVMe devices will behave mostly the same way, after all\nDavid showed the same issue for prefetching on sequential data. I'm not\nsure about the \"more random\" cases, because one of the supposed\nadvantages of modern NVMe devices is they require lower queue depth.\n\nThis may also explain why we haven't received any reports - most queries\nprobably match either a tiny fraction of data, or the data is mostly\nrandom. So prefetching either helps, or at least is not too harmful.\n\n\nI think this can be explained mostly by OS read-ahead and/or internal\ncaching on SSD devices, which works pretty well for sequential accesses,\nwhile \"our\" prefetching may be either unnecessary (essentially a little\nbit of extra overhead) or interfering with it - changing the access\npattern so that the OS does not recognize/trigger the read-ahead, or\nmaybe evicting the interesting pages from the internal device cache.\n\n\nWhat can we do about this? AFAICS it shouldn't be difficult to look at\nthe bitmap generated by the bitmap index scan, and analyze it - that\nwill tell us what fraction of pages match, and also how sequential the\npatterns are. And based on that we can either adjust the prefetching\ndistance, or maybe even disable prefetching for cases matching too many\npages or \"too sequential\" patterns. Of course, that'll require some\nheuristics or a simple \"cost model\".\n\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 6 May 2022 00:58:57 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: effective_io_concurrency and NVMe devices"
},
{
"msg_contents": "Hi Nathan,\n\n> > NVMe devices have a maximum queue length of 64k:\n[..]\n> > but our effective_io_concurrency maximum is 1,000:\n[..]\n> > Should we increase its maximum to 64k? Backpatched? (SATA has a\n> > maximum queue length of 256.)\n> \n> If there are demonstrable improvements with higher values, this seems\n> reasonable to me. I would even suggest removing the limit completely so\n> this doesn't need to be revisited in the future.\n\nWell, are there any? I remember playing with this (although for Stephen's ANALYZE case [1]) and got quite contrary results [2] -- see how going from 8 to 16 actually degraded performance.\nI somehow struggle to understand how 1000+ fadvise() syscalls would be a net benefit on storage with latency of ~0.1.. 0.3ms, as each syscall on its own is overhead (on the contrary, it should help on a very slow one?) \nPardon if I'm wrong (I don't have time to look up the code now), but maybe the Bitmap Scan/fadvise() logic would first need to aggregate fadvise() offsets/lengths into bigger fadvise() syscalls, and in the end the real hardware-observable I/O concurrency would be bigger (assuming that the fs/LVM/dm/mq layer would split that into more parallel IOs).\n\n-J.\n\n[1] - https://commitfest.postgresql.org/30/2799/\n[2] - https://www.postgresql.org/message-id/flat/VI1PR0701MB69603A433348EDCF783C6ECBF6EF0@VI1PR0701MB6960.eurprd07.prod.outlook.com\n",
"msg_date": "Thu, 2 Jun 2022 07:59:45 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: effective_io_concurrency and NVMe devices"
},
{
"msg_contents": "Hi Tomas,\n\n> > I have a machine here with 1 x PCIe 3.0 NVMe SSD and also 1 x PCIe 4.0\n> > NVMe SSD. I ran a few tests to see how different values of\n> > effective_io_concurrency would affect performance. I tried to come up\n> > with a query that did little enough CPU processing to ensure that I/O\n> > was the clear bottleneck.\n> >\n> > The test was with a 128GB table on a machine with 64GB of RAM. I\n> > padded the tuples out so there were 4 per page so that the aggregation\n> > didn't have much work to do.\n> >\n> > The query I ran was: explain (analyze, buffers, timing off) select\n> > count(p) from r where a = 1;\n \n> The other idea I had while looking at batching a while back, is that we should\n> batch the prefetches. The current logic interleaves prefetches with other work -\n> prefetch one page, process one page, ... But once reading a page gets\n> sufficiently fast, this means the queues never get deep enough for\n> optimizations. So maybe we should think about batching the prefetches, in some\n> way. Unfortunately posix_fadvise does not allow batching of requests, but we\n> can at least stop interleaving the requests.\n\n.. for now it doesn't, but IORING_OP_FADVISE is on the long-term horizon. \n\n> The attached patch is a trivial version that waits until we're at least\n> 32 pages behind the target, and then prefetches all of them. Maybe give it a try?\n> (This pretty much disables prefetching for e_i_c below 32, but for an\n> experimental patch that's enough.)\n\nI've tried it at e_i_c=10 initially on David's setup.sql, and most defaults s_b=128MB, dbsize=8kb but with forced disabled parallel query (for easier inspection with strace just to be sure//so please don't compare times). 
\n\nrun:\na) master (e_i_c=10) 181760ms, 185680ms, 185384ms @ ~ 340MB/s and 44k IOPS (~122k IOPS practical max here for libaio)\nb) patched(e_i_c=10) 237774ms, 236326ms, ..as you stated it disabled prefetching, fadvise() not occurring\nc) patched(e_i_c=128) 90430ms, 88354ms, 85446ms, 78475ms, 74983ms, 81432ms (mean=83186ms +/- 5947ms) @ ~570MB/s and 75k IOPS (it even peaked for a second on ~122k)\nd) master (e_i_c=128) 116865ms, 101178ms, 89529ms, 95024ms, 89942ms 99939ms (mean=98746ms +/- 10118ms) @ ~510MB/s and 65k IOPS (rare peaks to 90..100k IOPS)\n\n~16% benefit sounds good (help me understand: L1i cache?). Maybe it is worth throwing that patch onto more advanced / complete performance test farm too ? (although it's only for bitmap heap scans)\n\nrun a: looked interleaved as you said:\nfadvise64(160, 1064157184, 8192, POSIX_FADV_WILLNEED) = 0\npread64(160, \"@\\0\\0\\0\\200\\303/_\\0\\0\\4\\0(\\0\\200\\0\\0 \\4 \\0\\0\\0\\0 \\230\\300\\17@\\220\\300\\17\"..., 8192, 1064009728) = 8192\nfadvise64(160, 1064173568, 8192, POSIX_FADV_WILLNEED) = 0\npread64(160, \"@\\0\\0\\0\\0\\0040_\\0\\0\\4\\0(\\0\\200\\0\\0 \\4 \\0\\0\\0\\0 \\230\\300\\17@\\220\\300\\17\"..., 8192, 1064026112) = 8192\n[..]\n\nBTW: interesting note, for run b, the avgrq-sz from extended iostat jumps is flipping between 16(*512=8kB) to ~256(*512=~128kB!) as if kernel was doing some own prefetching heuristics on and off in cycles, while when calling e_i_c/fadvise() is in action then it seems to be always 8kB requests. So with disabled fadivse() one IMHO might have problems deterministically benchmarking short queries as kernel voodoo might be happening (?)\n\n-J.\n\n\n",
"msg_date": "Tue, 7 Jun 2022 13:29:21 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: effective_io_concurrency and NVMe devices"
},
{
"msg_contents": "On 6/7/22 15:29, Jakub Wartak wrote:\n> Hi Tomas,\n> \n>>> I have a machine here with 1 x PCIe 3.0 NVMe SSD and also 1 x PCIe 4.0\n>>> NVMe SSD. I ran a few tests to see how different values of\n>>> effective_io_concurrency would affect performance. I tried to come up\n>>> with a query that did little enough CPU processing to ensure that I/O\n>>> was the clear bottleneck.\n>>>\n>>> The test was with a 128GB table on a machine with 64GB of RAM. I\n>>> padded the tuples out so there were 4 per page so that the aggregation\n>>> didn't have much work to do.\n>>>\n>>> The query I ran was: explain (analyze, buffers, timing off) select\n>>> count(p) from r where a = 1;\n> \n>> The other idea I had while looking at batching a while back, is that we should\n>> batch the prefetches. The current logic interleaves prefetches with other work -\n>> prefetch one page, process one page, ... But once reading a page gets\n>> sufficiently fast, this means the queues never get deep enough for\n>> optimizations. So maybe we should think about batching the prefetches, in some\n>> way. Unfortunately posix_fadvise does not allow batching of requests, but we\n>> can at least stop interleaving the requests.\n> \n> .. for now it doesn't, but IORING_OP_FADVISE is on the long-term horizon. \n> \n\nInteresting! Will take time to get into real systems, though.\n\n>> The attached patch is a trivial version that waits until we're at least\n>> 32 pages behind the target, and then prefetches all of them. Maybe give it a try?\n>> (This pretty much disables prefetching for e_i_c below 32, but for an\n>> experimental patch that's enough.)\n> \n> I've tried it at e_i_c=10 initially on David's setup.sql, and most defaults s_b=128MB, dbsize=8kb but with forced disabled parallel query (for easier inspection with strace just to be sure//so please don't compare times). 
\n> \n> run:\n> a) master (e_i_c=10) 181760ms, 185680ms, 185384ms @ ~ 340MB/s and 44k IOPS (~122k IOPS practical max here for libaio)\n> b) patched(e_i_c=10) 237774ms, 236326ms, ..as you stated it disabled prefetching, fadvise() not occurring\n> c) patched(e_i_c=128) 90430ms, 88354ms, 85446ms, 78475ms, 74983ms, 81432ms (mean=83186ms +/- 5947ms) @ ~570MB/s and 75k IOPS (it even peaked for a second on ~122k)\n> d) master (e_i_c=128) 116865ms, 101178ms, 89529ms, 95024ms, 89942ms 99939ms (mean=98746ms +/- 10118ms) @ ~510MB/s and 65k IOPS (rare peaks to 90..100k IOPS)\n> \n> ~16% benefit sounds good (help me understand: L1i cache?). Maybe it is worth throwing that patch onto more advanced / complete performance test farm too ? (although it's only for bitmap heap scans)\n> \n> run a: looked interleaved as you said:\n> fadvise64(160, 1064157184, 8192, POSIX_FADV_WILLNEED) = 0\n> pread64(160, \"@\\0\\0\\0\\200\\303/_\\0\\0\\4\\0(\\0\\200\\0\\0 \\4 \\0\\0\\0\\0 \\230\\300\\17@\\220\\300\\17\"..., 8192, 1064009728) = 8192\n> fadvise64(160, 1064173568, 8192, POSIX_FADV_WILLNEED) = 0\n> pread64(160, \"@\\0\\0\\0\\0\\0040_\\0\\0\\4\\0(\\0\\200\\0\\0 \\4 \\0\\0\\0\\0 \\230\\300\\17@\\220\\300\\17\"..., 8192, 1064026112) = 8192\n> [..]\n> \n> BTW: interesting note, for run b, the avgrq-sz from extended iostat jumps is flipping between 16(*512=8kB) to ~256(*512=~128kB!) as if kernel was doing some own prefetching heuristics on and off in cycles, while when calling e_i_c/fadvise() is in action then it seems to be always 8kB requests. So with disabled fadivse() one IMHO might have problems deterministically benchmarking short queries as kernel voodoo might be happening (?)\n> \n\nYes, kernel certainly does it's own read-ahead, which works pretty well\nfor sequential patterns. What does\n\n blockdev --getra /dev/...\n\nsay?\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 7 Jun 2022 17:12:43 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: effective_io_concurrency and NVMe devices"
},
{
"msg_contents": "> >> The attached patch is a trivial version that waits until we're at\n> >> least\n> >> 32 pages behind the target, and then prefetches all of them. Maybe give it a\n> try?\n> >> (This pretty much disables prefetching for e_i_c below 32, but for an\n> >> experimental patch that's enough.)\n> >\n> > I've tried it at e_i_c=10 initially on David's setup.sql, and most defaults\n> s_b=128MB, dbsize=8kb but with forced disabled parallel query (for easier\n> inspection with strace just to be sure//so please don't compare times).\n> >\n> > run:\n> > a) master (e_i_c=10) 181760ms, 185680ms, 185384ms @ ~ 340MB/s and 44k\n> > IOPS (~122k IOPS practical max here for libaio)\n> > b) patched(e_i_c=10) 237774ms, 236326ms, ..as you stated it disabled\n> > prefetching, fadvise() not occurring\n> > c) patched(e_i_c=128) 90430ms, 88354ms, 85446ms, 78475ms, 74983ms,\n> > 81432ms (mean=83186ms +/- 5947ms) @ ~570MB/s and 75k IOPS (it even\n> > peaked for a second on ~122k)\n> > d) master (e_i_c=128) 116865ms, 101178ms, 89529ms, 95024ms, 89942ms\n> > 99939ms (mean=98746ms +/- 10118ms) @ ~510MB/s and 65k IOPS (rare peaks\n> > to 90..100k IOPS)\n> >\n> > ~16% benefit sounds good (help me understand: L1i cache?). Maybe it is\n> > worth throwing that patch onto more advanced / complete performance\n> > test farm too ? (although it's only for bitmap heap scans)\n\nI hope you have some future plans for this patch :)\n\n> Yes, kernel certainly does it's own read-ahead, which works pretty well for\n> sequential patterns. What does\n> \n> blockdev --getra /dev/...\n> \n> say?\n\nIt's default, 256 sectors (128kb) so it matches.\n\n-J.\n\n\n",
"msg_date": "Wed, 8 Jun 2022 06:29:01 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: effective_io_concurrency and NVMe devices"
},
{
"msg_contents": "On 6/8/22 08:29, Jakub Wartak wrote:\n>>>> The attached patch is a trivial version that waits until we're at\n>>>> least\n>>>> 32 pages behind the target, and then prefetches all of them. Maybe give it a\n>> try?\n>>>> (This pretty much disables prefetching for e_i_c below 32, but for an\n>>>> experimental patch that's enough.)\n>>>\n>>> I've tried it at e_i_c=10 initially on David's setup.sql, and most defaults\n>> s_b=128MB, dbsize=8kb but with forced disabled parallel query (for easier\n>> inspection with strace just to be sure//so please don't compare times).\n>>>\n>>> run:\n>>> a) master (e_i_c=10) 181760ms, 185680ms, 185384ms @ ~ 340MB/s and 44k\n>>> IOPS (~122k IOPS practical max here for libaio)\n>>> b) patched(e_i_c=10) 237774ms, 236326ms, ..as you stated it disabled\n>>> prefetching, fadvise() not occurring\n>>> c) patched(e_i_c=128) 90430ms, 88354ms, 85446ms, 78475ms, 74983ms,\n>>> 81432ms (mean=83186ms +/- 5947ms) @ ~570MB/s and 75k IOPS (it even\n>>> peaked for a second on ~122k)\n>>> d) master (e_i_c=128) 116865ms, 101178ms, 89529ms, 95024ms, 89942ms\n>>> 99939ms (mean=98746ms +/- 10118ms) @ ~510MB/s and 65k IOPS (rare peaks\n>>> to 90..100k IOPS)\n>>>\n>>> ~16% benefit sounds good (help me understand: L1i cache?). Maybe it is\n>>> worth throwing that patch onto more advanced / complete performance\n>>> test farm too ? (although it's only for bitmap heap scans)\n> \n> I hope you have some future plans for this patch :)\n> \n\nI think the big challenge is to make this adaptive, i.e. work well for\naccess patterns that are not known in advance. The existing prefetching\nworks fine for our random stuff (even for nvme devices), not so much for\nsequential (as demonstrated by David's example).\n\n>> Yes, kernel certainly does it's own read-ahead, which works pretty well for\n>> sequential patterns. What does\n>>\n>> blockdev --getra /dev/...\n>>\n>> say?\n> \n> It's default, 256 sectors (128kb) so it matches.\n> \n\nRight. 
I think this is pretty much why (our) prefetching performs so\npoorly on sequential access patterns - the kernel read-ahead works very\nwell in this case, and our prefetching can't help but can interfere.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 8 Jun 2022 10:59:38 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: effective_io_concurrency and NVMe devices"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nI am writing a program that behaves like a Postgres backend and can see\nthat if I select oid from pg_type that the type oids could be returned in\nthe Row Description message for the field's data type and that seems to\nwork.\n\nHowever, I didn't read anywhere that these are guaranteed to be\nconstant/stable so I'd like to know if this is the case. For example: is\nthe oid for pg_type bool = 16 on every instance of Postgres?\n\nAlso does there exist any documentation decoding what the pg_type fields\nall mean?\n\nPlease let me know, thanks!\n\n-Tyler",
"msg_date": "Wed, 20 Apr 2022 15:18:22 +0000",
"msg_from": "Tyler Brock <tyler.brock@gmail.com>",
"msg_from_op": true,
"msg_subject": "Are OIDs for pg_types constant?"
},
{
"msg_contents": "Tyler Brock <tyler.brock@gmail.com> writes:\n> I am writing a program that behaves like a Postgres backend and can see\n> that if I select oid from pg_type that the type old’s could be returned in\n> the Row Description message for the field’s data type and that seems to\n> work.\n> However, I didn’t read anywhere that these are guaranteed to be\n> constant/stable so I’d like to know if this is the case. For example: is\n> the old for pg_type bool = 16 on every instance of Postgres?\n\nHand-assigned OIDs (those below 10000) are stable in released versions.\nThose above 10K might vary across installations or PG versions. You\nmight find it interesting to read\n\nhttps://www.postgresql.org/docs/current/system-catalog-initial-data.html#SYSTEM-CATALOG-OID-ASSIGNMENT\n\n> Also does there exist any documentation decoding what the pg_type fields\n> all mean?\n\nhttps://www.postgresql.org/docs/current/catalog-pg-type.html\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Apr 2022 11:23:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Are OIDs for pg_types constant?"
},
{
"msg_contents": "Thank you Tom, this is exactly what I was looking for.\n\n-Tyler\n\n\nOn Apr 20, 2022 at 11:23:59 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Tyler Brock <tyler.brock@gmail.com> writes:\n>\n> I am writing a program that behaves like a Postgres backend and can see\n>\n> that if I select oid from pg_type that the type old's could be returned in\n>\n> the Row Description message for the field's data type and that seems to\n>\n> work.\n>\n> However, I didn't read anywhere that these are guaranteed to be\n>\n> constant/stable so I'd like to know if this is the case. For example: is\n>\n> the old for pg_type bool = 16 on every instance of Postgres?\n>\n>\n> Hand-assigned OIDs (those below 10000) are stable in released versions.\n> Those above 10K might vary across installations or PG versions. You\n> might find it interesting to read\n>\n>\n> https://www.postgresql.org/docs/current/system-catalog-initial-data.html#SYSTEM-CATALOG-OID-ASSIGNMENT\n>\n> Also does there exist any documentation decoding what the pg_type fields\n>\n> all mean?\n>\n>\n> https://www.postgresql.org/docs/current/catalog-pg-type.html\n>\n> regards, tom lane\n>",
"msg_date": "Wed, 20 Apr 2022 15:30:51 +0000",
"msg_from": "Tyler Brock <tyler.brock@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Are OIDs for pg_types constant?"
}
] |
[
{
"msg_contents": "Allow db.schema.table patterns, but complain about random garbage.\n\npsql, pg_dump, and pg_amcheck share code to process object name\npatterns like 'foo*.bar*' to match all tables with names starting in\n'bar' that are in schemas starting with 'foo'. Before v14, any number\nof extra name parts were silently ignored, so a command line '\\d\nfoo.bar.baz.bletch.quux' was interpreted as '\\d bletch.quux'. In v14,\nas a result of commit 2c8726c4b0a496608919d1f78a5abc8c9b6e0868, we\ninstead treated this as a request for table quux in a schema named\n'foo.bar.baz.bletch'. That caused problems for people like Justin\nPryzby who were accustomed to copying strings of the form\ndb.schema.table from messages generated by PostgreSQL itself and using\nthem as arguments to \\d.\n\nAccordingly, revise things so that if an object name pattern contains\nmore parts than we're expecting, we throw an error, unless there's\nexactly one extra part and it matches the current database name.\nThat way, thisdb.myschema.mytable is accepted as meaning just\nmyschema.mytable, but otherdb.myschema.mytable is an error, and so\nis some.random.garbage.myschema.mytable.\n\nMark Dilger, per report from Justin Pryzby and discussion among\nvarious people.\n\nDiscussion: https://www.postgresql.org/message-id/20211013165426.GD27491%40telsasoft.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/d2d35479796c3510e249d6fc72adbd5df918efbf\n\nModified Files\n--------------\ndoc/src/sgml/ref/psql-ref.sgml | 17 +-\nsrc/bin/pg_amcheck/pg_amcheck.c | 27 +-\nsrc/bin/pg_amcheck/t/002_nonesuch.pl | 99 ++++-\nsrc/bin/pg_dump/pg_dump.c | 65 ++-\nsrc/bin/pg_dump/pg_dumpall.c | 13 +-\nsrc/bin/pg_dump/t/002_pg_dump.pl | 107 +++++\nsrc/bin/psql/describe.c | 504 ++++++++++++++--------\nsrc/fe_utils/string_utils.c | 129 ++++--\nsrc/include/fe_utils/string_utils.h | 6 +-\nsrc/test/regress/expected/psql.out | 804 
+++++++++++++++++++++++++++++++++++\nsrc/test/regress/sql/psql.sql | 242 +++++++++++\n11 files changed, 1796 insertions(+), 217 deletions(-)",
"msg_date": "Wed, 20 Apr 2022 15:52:12 +0000",
"msg_from": "Robert Haas <rhaas@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Allow db.schema.table patterns,\n but complain about random garbag"
},
{
"msg_contents": "\nOn 2022-04-20 We 11:52, Robert Haas wrote:\n> Allow db.schema.table patterns, but complain about random garbage.\n>\n> psql, pg_dump, and pg_amcheck share code to process object name\n> patterns like 'foo*.bar*' to match all tables with names starting in\n> 'bar' that are in schemas starting with 'foo'. Before v14, any number\n> of extra name parts were silently ignored, so a command line '\\d\n> foo.bar.baz.bletch.quux' was interpreted as '\\d bletch.quux'. In v14,\n> as a result of commit 2c8726c4b0a496608919d1f78a5abc8c9b6e0868, we\n> instead treated this as a request for table quux in a schema named\n> 'foo.bar.baz.bletch'. That caused problems for people like Justin\n> Pryzby who were accustomed to copying strings of the form\n> db.schema.table from messages generated by PostgreSQL itself and using\n> them as arguments to \\d.\n>\n> Accordingly, revise things so that if an object name pattern contains\n> more parts than we're expecting, we throw an error, unless there's\n> exactly one extra part and it matches the current database name.\n> That way, thisdb.myschema.mytable is accepted as meaning just\n> myschema.mytable, but otherdb.myschema.mytable is an error, and so\n> is some.random.garbage.myschema.mytable.\n\n\nThis has upset the buildfarm's msys2 animals. There appears to be some\nwildcard expansion going on that causes the problem. I don't know why it\nshould here when it's not causing trouble elsewhere. I have tried\nchanging the way the tests are quoted, without success. Likewise,\nsetting SHELLOPTS=noglob didn't work.\n\nAt this stage I'm fresh out of ideas to fix it. It's also quite possible\nthat my diagnosis is wrong.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 22 Apr 2022 09:15:31 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow db.schema.table patterns, but complain about random\n garbag"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> This has upset the buildfarm's msys2 animals. There appears to be some\n> wildcard expansion going on that causes the problem. I don't know why it\n> should here when it's not causing trouble elsewhere. I have tried\n> changing the way the tests are quoted, without success. Likewise,\n> setting SHELLOPTS=noglob didn't work.\n\n> At this stage I'm fresh out of ideas to fix it. It's also quite possible\n> that my diagnosis is wrong.\n\nWhen I was looking at this patch, I thought the number of test cases\nwas very substantially out of line anyway. I suggest that rather\nthan investing a bunch of brain cells trying to work around this,\nwe just remove the failing test cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Apr 2022 10:04:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow db.schema.table patterns,\n but complain about random garbag"
},
{
"msg_contents": "\nOn 2022-04-22 Fr 10:04, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> This has upset the buildfarm's msys2 animals. There appears to be some\n>> wildcard expansion going on that causes the problem. I don't know why it\n>> should here when it's not causing trouble elsewhere. I have tried\n>> changing the way the tests are quoted, without success. Likewise,\n>> setting SHELLOPTS=noglob didn't work.\n>> At this stage I'm fresh out of ideas to fix it. It's also quite possible\n>> that my diagnosis is wrong.\n> When I was looking at this patch, I thought the number of test cases\n> was very substantially out of line anyway. I suggest that rather\n> than investing a bunch of brain cells trying to work around this,\n> we just remove the failing test cases.\n\nWFM.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 22 Apr 2022 10:24:27 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow db.schema.table patterns, but complain about random\n garbag"
},
{
"msg_contents": "On Fri, Apr 22, 2022 at 10:24 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2022-04-22 Fr 10:04, Tom Lane wrote:\n> > Andrew Dunstan <andrew@dunslane.net> writes:\n> >> This has upset the buildfarm's msys2 animals. There appears to be some\n> >> wildcard expansion going on that causes the problem. I don't know why it\n> >> should here when it's not causing trouble elsewhere. I have tried\n> >> changing the way the tests are quoted, without success. Likewise,\n> >> setting SHELLOPTS=noglob didn't work.\n> >> At this stage I'm fresh out of ideas to fix it. It's also quite possible\n> >> that my diagnosis is wrong.\n> > When I was looking at this patch, I thought the number of test cases\n> > was very substantially out of line anyway. I suggest that rather\n> > than investing a bunch of brain cells trying to work around this,\n> > we just remove the failing test cases.\n>\n> WFM.\n\nSure, see also http://postgr.es/m/CA+TgmoYRGUcFBy6VgN0+Pn4f6Wv=2H0HZLuPHqSy6VC8Ba7vdg@mail.gmail.com\nwhere Andrew's opinion on how to fix this was sought.\n\nI have to say the fact that IPC::Run does shell-glob expansion of its\narguments on some machines and not others seems ludicrous to me. This\npatch may be overtested, but such a radical behavior difference is\ncompletely nuts. How is anyone supposed to write reliable tests for\nany feature in the face of such wildly inconsistent behavior?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Apr 2022 16:06:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow db.schema.table patterns,\n but complain about random garbag"
},
{
"msg_contents": "On Sat, Apr 23, 2022 at 8:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Sure, see also http://postgr.es/m/CA+TgmoYRGUcFBy6VgN0+Pn4f6Wv=2H0HZLuPHqSy6VC8Ba7vdg@mail.gmail.com\n> where Andrew's opinion on how to fix this was sought.\n>\n> I have to say the fact that IPC::Run does shell-glob expansion of its\n> arguments on some machines and not others seems ludicrous to me. This\n> patch may be overtested, but such a radical behavior difference is\n> completely nuts. How is anyone supposed to write reliable tests for\n> any feature in the face of such wildly inconsistent behavior?\n\nYeah, I was speculating that it's a bug in IPC::Run that has been\nfixed (by our very own Noah), and some of the machines are still\nrunning the buggy version.\n\n(Not a Windows person, but I speculate the reason that such a stupid\nbug is even possible may be that Windows lacks a way to 'exec' stuff\nwith a passed-in unadulterated argv[] array, so you always need to\nbuild a full shell command subject to interpolation, so if you're\ntrying to emulate an argv[]-style interface you have to write the code\nto do the escaping, and so everyone gets a chance to screw that up.)\n\n\n",
"msg_date": "Sat, 23 Apr 2022 09:12:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow db.schema.table patterns,\n but complain about random garbag"
},
{
"msg_contents": "On Sat, Apr 23, 2022 at 09:12:20AM +1200, Thomas Munro wrote:\n> On Sat, Apr 23, 2022 at 8:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I have to say the fact that IPC::Run does shell-glob expansion of its\n> > arguments on some machines and not others seems ludicrous to me. This\n> > patch may be overtested, but such a radical behavior difference is\n> > completely nuts. How is anyone supposed to write reliable tests for\n> > any feature in the face of such wildly inconsistent behavior?\n\nThe MinGW gcc crt*.o files do shell-glob expansion on the arguments before\nentering main(). See https://google.com/search?q=mingw+command+line+glob for\nvarious discussion of that behavior. I suspect you experienced that, not any\nIPC::Run behavior. (I haven't tested, though.) Commit 11e9caf likely had the\nsame cause, though the commit message attributed it to the msys shell rather\nthan to crt*.o.\n\nLet's disable that MinGW compiler behavior.\nhttps://willus.com/mingw/_globbing.shtml lists two ways of achieving that.\n\n> Yeah, I was speculating that it's a bug in IPC::Run that has been\n> fixed (by our very own Noah), and some of the machines are still\n> running the buggy version.\n\nThat change affected arguments containing double quote characters, but mere\nasterisks shouldn't need it. It also has not yet been released, so few or no\nbuildfarm machines are using it.\n\n> (Not a Windows person, but I speculate the reason that such a stupid\n> bug is even possible may be that Windows lacks a way to 'exec' stuff\n> with a passed-in unadulterated argv[] array, so you always need to\n> build a full shell command subject to interpolation, so if you're\n> trying to emulate an argv[]-style interface you have to write the code\n> to do the escaping, and so everyone gets a chance to screw that up.)\n\nYou needn't involve any shell. Other than that, your description pretty much\nsays it. CreateProcessA() is the Windows counterpart of execve(). 
There's no\nargv[] array, just a single string. (The following trivia does not affect\nPostgreSQL.) Worse, while there's a typical way to convert that string to\nargv[] at program start, some programs do it differently. You need to know\nyour callee in order to construct the string:\nhttps://github.com/toddr/IPC-Run/pull/148/commits/c299a86c9a292375fbfc39fb756883c80adac4b0#diff-5833a343d19ba684779743e2c90516fc65479609274731785364d2d2b49e2211\n\n\n\n",
"msg_date": "Fri, 22 Apr 2022 19:59:27 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow db.schema.table patterns, but complain about random\n garbag"
},
{
"msg_contents": "\nOn 2022-04-22 Fr 22:59, Noah Misch wrote:\n> On Sat, Apr 23, 2022 at 09:12:20AM +1200, Thomas Munro wrote:\n>> On Sat, Apr 23, 2022 at 8:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>>> I have to say the fact that IPC::Run does shell-glob expansion of its\n>>> arguments on some machines and not others seems ludicrous to me. This\n>>> patch may be overtested, but such a radical behavior difference is\n>>> completely nuts. How is anyone supposed to write reliable tests for\n>>> any feature in the face of such wildly inconsistent behavior?\n\n\n(I missed seeing the part where I was asked for help earlier on this thread)\n\n\n\n> The MinGW gcc crt*.o files do shell-glob expansion on the arguments before\n> entering main(). See https://google.com/search?q=mingw+command+line+glob for\n> various discussion of that behavior. I suspect you experienced that, not any\n> IPC::Run behavior. (I haven't tested, though.) Commit 11e9caf likely had the\n> same cause, though the commit message attributed it to the msys shell rather\n> than to crt*.o.\n>\n> Let's disable that MinGW compiler behavior.\n> https://willus.com/mingw/_globbing.shtml lists two ways of achieving that.\n\n\n\nYeah. I can definitely confirm that this is the proximate cause of the\nissue, and not either IPC::Run or the shell, which is why all my\nexperiments on this failed. With this patch\n\n\ndiff --git a/src/include/port/win32.h b/src/include/port/win32.h\nindex c6213c77c3..456c3f31f1 100644\n--- a/src/include/port/win32.h\n+++ b/src/include/port/win32.h\n@@ -77,3 +77,7 @@ struct sockaddr_un\n char sun_path[108];\n };\n #define HAVE_STRUCT_SOCKADDR_UN 1\n+\n+#ifndef _MSC_VER\n+extern int _CRT_glob = 0; /* 0 turns off globbing; 1 turns it on */\n+#endif\n\n\nfairywren happily passes the tests that Robert has since reverted.\n\n\nI'm rather tempted to call this CRT behaviour a mis-feature, especially\nas a default. 
I think we should certainly disable it in the development\nbranch, and consider back-patching it, although it is a slight change in\nbehaviour, albeit one that we didn't know about much less want or\ndocument. Still, we been building with mingw compilers for about 20\nyears and haven't hit this before so far as we know, so maybe not.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 24 Apr 2022 13:09:08 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow db.schema.table patterns, but complain about random\n garbag"
},
{
"msg_contents": "On Sun, Apr 24, 2022 at 01:09:08PM -0400, Andrew Dunstan wrote:\n> On 2022-04-22 Fr 22:59, Noah Misch wrote:\n> > The MinGW gcc crt*.o files do shell-glob expansion on the arguments before\n> > entering main(). See https://google.com/search?q=mingw+command+line+glob for\n> > various discussion of that behavior. I suspect you experienced that, not any\n> > IPC::Run behavior. (I haven't tested, though.) Commit 11e9caf likely had the\n> > same cause, though the commit message attributed it to the msys shell rather\n> > than to crt*.o.\n> >\n> > Let's disable that MinGW compiler behavior.\n> > https://willus.com/mingw/_globbing.shtml lists two ways of achieving that.\n> \n> Yeah. I can definitely confirm that this is the proximate cause of the\n> issue, and not either IPC::Run or the shell, which is why all my\n> experiments on this failed. With this patch\n\nThanks for confirming.\n\n> diff --git a/src/include/port/win32.h b/src/include/port/win32.h\n> index c6213c77c3..456c3f31f1 100644\n> --- a/src/include/port/win32.h\n> +++ b/src/include/port/win32.h\n> @@ -77,3 +77,7 @@ struct sockaddr_un\n> char sun_path[108];\n> };\n> #define HAVE_STRUCT_SOCKADDR_UN 1\n> +\n> +#ifndef _MSC_VER\n> +extern int _CRT_glob = 0; /* 0 turns off globbing; 1 turns it on */\n> +#endif\n> \n> \n> fairywren happily passes the tests that Robert has since reverted.\n> \n> \n> I'm rather tempted to call this CRT behaviour a mis-feature, especially\n> as a default. I think we should certainly disable it in the development\n> branch, and consider back-patching it, although it is a slight change in\n> behaviour, albeit one that we didn't know about much less want or\n> document. Still, we been building with mingw compilers for about 20\n> years and haven't hit this before so far as we know, so maybe not.\n\nI'd lean toward back-patching. We position MSVC and MinGW as two ways to\nbuild roughly the same PostgreSQL, not as two routes to different user-facing\nbehavior. 
Since the postgresql.org/download binaries use MSVC, aligning with\ntheir behavior is good. It's fair to mention in the release notes, of course.\n\nDoes your win32.h patch build without warnings or errors? Even if MinGW has\nsome magic to make that work, I suspect we'll want a non-header home. Perhaps\nsrc/common/exec.c? It's best to keep this symbol out of libpq and other\nDLLs, though I bet exports.txt would avoid functional problems.\n\n\n",
"msg_date": "Sun, 24 Apr 2022 11:19:34 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow db.schema.table patterns, but complain about random\n garbag"
},
{
"msg_contents": "\nOn 2022-04-24 Su 14:19, Noah Misch wrote:\n> On Sun, Apr 24, 2022 at 01:09:08PM -0400, Andrew Dunstan wrote:\n>> On 2022-04-22 Fr 22:59, Noah Misch wrote:\n>>> The MinGW gcc crt*.o files do shell-glob expansion on the arguments before\n>>> entering main(). See https://google.com/search?q=mingw+command+line+glob for\n>>> various discussion of that behavior. I suspect you experienced that, not any\n>>> IPC::Run behavior. (I haven't tested, though.) Commit 11e9caf likely had the\n>>> same cause, though the commit message attributed it to the msys shell rather\n>>> than to crt*.o.\n>>>\n>>> Let's disable that MinGW compiler behavior.\n>>> https://willus.com/mingw/_globbing.shtml lists two ways of achieving that.\n>> Yeah. I can definitely confirm that this is the proximate cause of the\n>> issue, and not either IPC::Run or the shell, which is why all my\n>> experiments on this failed. With this patch\n> Thanks for confirming.\n>\n>> diff --git a/src/include/port/win32.h b/src/include/port/win32.h\n>> index c6213c77c3..456c3f31f1 100644\n>> --- a/src/include/port/win32.h\n>> +++ b/src/include/port/win32.h\n>> @@ -77,3 +77,7 @@ struct sockaddr_un\n>> char sun_path[108];\n>> };\n>> #define HAVE_STRUCT_SOCKADDR_UN 1\n>> +\n>> +#ifndef _MSC_VER\n>> +extern int _CRT_glob = 0; /* 0 turns off globbing; 1 turns it on */\n>> +#endif\n>>\n>>\n>> fairywren happily passes the tests that Robert has since reverted.\n>>\n>>\n>> I'm rather tempted to call this CRT behaviour a mis-feature, especially\n>> as a default. I think we should certainly disable it in the development\n>> branch, and consider back-patching it, although it is a slight change in\n>> behaviour, albeit one that we didn't know about much less want or\n>> document. Still, we been building with mingw compilers for about 20\n>> years and haven't hit this before so far as we know, so maybe not.\n> I'd lean toward back-patching. 
We position MSVC and MinGW as two ways to\n> build roughly the same PostgreSQL, not as two routes to different user-facing\n> behavior. Since the postgresql.org/download binaries use MSVC, aligning with\n> their behavior is good. It's fair to mention in the release notes, of course.\n\n\n\nOK, good point.\n\n\n> Does your win32.h patch build without warnings or errors? \n\n\nYes.\n\n\n> Even if MinGW has\n> some magic to make that work, I suspect we'll want a non-header home. Perhaps\n> src/common/exec.c? It's best to keep this symbol out of libpq and other\n> DLLs, though I bet exports.txt would avoid functional problems.\n\n\nexec.c looks like it should work fine.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 24 Apr 2022 15:37:06 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow db.schema.table patterns, but complain about random\n garbag"
},
{
"msg_contents": "\nOn 2022-04-24 Su 15:37, Andrew Dunstan wrote:\n> On 2022-04-24 Su 14:19, Noah Misch wrote:\n>\n>> Even if MinGW has\n>> some magic to make that work, I suspect we'll want a non-header home. Perhaps\n>> src/common/exec.c? It's best to keep this symbol out of libpq and other\n>> DLLs, though I bet exports.txt would avoid functional problems.\n>\n> exec.c looks like it should work fine.\n>\n>\n\n\nOK, in the absence of further comment I'm going to do it that way.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 25 Apr 2022 14:57:12 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow db.schema.table patterns, but complain about random\n garbag"
}
] |
[
{
"msg_contents": "Hi All -\n\nI was implementing the infinity time constants in DuckDB when I ran into an infinite loop. It seems that PG has the same problem for the same reason (adding an interval to an infinite timestamp produces the same timestamp, so the increment operation never goes anywhere.) Here is the query:\nselect COUNT(*) \nFROM generate_series('-infinity'::TIMESTAMP, 'epoch'::TIMESTAMP, INTERVAL '1 DAY');\n\nThis seems like a DoS great attack, so we are disallowing infinities as bounds for both table and scalar series generation. As an upper bound, it eventually gives an error, so it seems there is not much utility anyway.\n\n\nMet vriendelijke groet, best regards, mit freundlichen Grüßen,\nRichard Wesley\nGroup-By Therapist\nrichard@duckdblabs.com <mailto:richard@duckdblabs.com>",
"msg_date": "Wed, 20 Apr 2022 09:17:32 -0700",
"msg_from": "Richard Wesley <richard@duckdblabs.com>",
"msg_from_op": true,
"msg_subject": "Query generates infinite loop"
},
{
"msg_contents": "Hi\n\nst 20. 4. 2022 v 18:42 odesílatel Richard Wesley <richard@duckdblabs.com>\nnapsal:\n\n> Hi All -\n>\n> I was implementing the infinity time constants in DuckDB when I ran into\n> an infinite loop. It seems that PG has the same problem for the same reason\n> (adding an interval to an infinite timestamp produces the same timestamp,\n> so the increment operation never goes anywhere.) Here is the query:\n>\n> 1.\n>\n> select COUNT(*) FROM generate_series('-infinity'::TIMESTAMP, 'epoch'::TIMESTAMP, INTERVAL '1 DAY');\n>\n>\n>\n> This seems like a DoS great attack, so we are disallowing infinities as\n> bounds for both table and scalar series generation. As an upper bound, it\n> eventually gives an error, so it seems there is not much utility anyway.\n>\n\nThere are more ways to achieve the same effect. The protection is safe\nsetting of temp_file_limit\n\n2022-04-20 09:59:54) postgres=# set temp_file_limit to '1MB';\nSET\n(2022-04-20 18:51:48) postgres=# select COUNT(*)\nFROM generate_series('-infinity'::TIMESTAMP, 'epoch'::TIMESTAMP, INTERVAL\n'1 DAY');\nERROR: temporary file size exceeds temp_file_limit (1024kB)\n(2022-04-20 18:51:50) postgres=#\n\nRegards\n\nPavel\n\n\n\n>\n> Met vriendelijke groet, best regards, mit freundlichen Grüßen,\n>\n> *Richard Wesley*\n> Group-By Therapist\n> richard@duckdblabs.com\n>\n>\n>\n>\n>",
"msg_date": "Wed, 20 Apr 2022 18:53:33 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query generates infinite loop"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> st 20. 4. 2022 v 18:42 odesílatel Richard Wesley <richard@duckdblabs.com>\n> napsal:\n>> select COUNT(*) FROM generate_series('-infinity'::TIMESTAMP, 'epoch'::TIMESTAMP, INTERVAL '1 DAY');\n>>\n>> This seems like a DoS great attack, so we are disallowing infinities as\n>> bounds for both table and scalar series generation. As an upper bound, it\n>> eventually gives an error, so it seems there is not much utility anyway.\n\n> There are more ways to achieve the same effect. The protection is safe\n> setting of temp_file_limit\n\nWell, there are any number of ways to DOS a database you can issue\narbitrary queries to. For instance, cross joining a number of very\nlarge tables. So I'm not excited about that aspect of it. Still,\nit's true that infinities as generate_series endpoints are going\nto work pretty oddly, so I agree with the idea of forbidding 'em.\n\nNumeric has infinity as of late, so the numeric variant would\nneed to do this too.\n\nI think we can allow infinity as the step, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Apr 2022 13:03:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query generates infinite loop"
},
{
"msg_contents": "I wrote:\n> it's true that infinities as generate_series endpoints are going\n> to work pretty oddly, so I agree with the idea of forbidding 'em.\n\n> Numeric has infinity as of late, so the numeric variant would\n> need to do this too.\n\nOh --- looks like numeric generate_series() already throws error for\nthis, so we should just make the timestamp variants do the same.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Apr 2022 17:43:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query generates infinite loop"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 5:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > it's true that infinities as generate_series endpoints are going\n> > to work pretty oddly, so I agree with the idea of forbidding 'em.\n>\n> > Numeric has infinity as of late, so the numeric variant would\n> > need to do this too.\n>\n> Oh --- looks like numeric generate_series() already throws error for\n> this, so we should just make the timestamp variants do the same.\n>\n\nThe regression test you added for this change causes an infinite loop when\nrun against an unpatched server with --install-check. That is a bit\nunpleasant. Is there something we can and should do about that? I was\nexpecting regression test failures of course but not an infinite loop\nleading towards disk exhaustion.\n\nCheers,\n\nJeff",
"msg_date": "Wed, 4 May 2022 15:01:27 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query generates infinite loop"
},
{
"msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> The regression test you added for this change causes an infinite loop when\n> run against an unpatched server with --install-check. That is a bit\n> unpleasant. Is there something we can and should do about that? I was\n> expecting regression test failures of course but not an infinite loop\n> leading towards disk exhaustion.\n\nWe very often add regression test cases that will cause unpleasant\nfailures on unpatched code. I categorically reject the idea that\nthat's not a good thing, and question why you think that running\nknown-broken code against a regression suite is an important use case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 May 2022 15:24:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query generates infinite loop"
},
{
"msg_contents": "On Wed, May 4, 2022 at 3:01 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n\n> On Wed, Apr 20, 2022 at 5:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> I wrote:\n>> > it's true that infinities as generate_series endpoints are going\n>> > to work pretty oddly, so I agree with the idea of forbidding 'em.\n>>\n>> > Numeric has infinity as of late, so the numeric variant would\n>> > need to do this too.\n>>\n>> Oh --- looks like numeric generate_series() already throws error for\n>> this, so we should just make the timestamp variants do the same.\n>>\n>\n> The regression test you added for this change causes an infinite loop when\n> run against an unpatched server with --install-check. That is a bit\n> unpleasant. Is there something we can and should do about that? I was\n> expecting regression test failures of course but not an infinite loop\n> leading towards disk exhaustion.\n>\n> Cheers,\n>\n> Jeff\n>\n\nThis came up once before\nhttps://www.postgresql.org/message-id/CAB7nPqQUuUh_W3s55eSiMnt901Ud3meF7f_96yPkKcqfd1ZaMg%40mail.gmail.com",
"msg_date": "Sun, 8 May 2022 23:44:33 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query generates infinite loop"
},
{
"msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> On Wed, May 4, 2022 at 3:01 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n>> On Wed, Apr 20, 2022 at 5:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Oh --- looks like numeric generate_series() already throws error for\n>>> this, so we should just make the timestamp variants do the same.\n\n> This came up once before\n> https://www.postgresql.org/message-id/CAB7nPqQUuUh_W3s55eSiMnt901Ud3meF7f_96yPkKcqfd1ZaMg%40mail.gmail.com\n\nOh! I'd totally forgotten that thread, but given that discussion,\nand particularly the counterexample at\n\nhttps://www.postgresql.org/message-id/16807.1456091547%40sss.pgh.pa.us\n\nit now feels to me like maybe this change was a mistake. Perhaps\ninstead of the committed change, we ought to go the other way and\nrip out the infinity checks in numeric generate_series().\n\nIn view of tomorrow's minor-release wrap, there is not time for\nthe sort of more leisured discussion that I now think this topic\nneeds. I propose to revert eafdf9de0 et al before the wrap,\nand think about this at more length before doing anything.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 May 2022 00:02:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query generates infinite loop"
},
{
"msg_contents": "On Mon, May 9, 2022 at 12:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Corey Huinker <corey.huinker@gmail.com> writes:\n> > On Wed, May 4, 2022 at 3:01 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n> >> On Wed, Apr 20, 2022 at 5:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> Oh --- looks like numeric generate_series() already throws error for\n> >>> this, so we should just make the timestamp variants do the same.\n>\n> > This came up once before\n> >\n> https://www.postgresql.org/message-id/CAB7nPqQUuUh_W3s55eSiMnt901Ud3meF7f_96yPkKcqfd1ZaMg%40mail.gmail.com\n>\n> Oh! I'd totally forgotten that thread, but given that discussion,\n> and particularly the counterexample at\n>\n> https://www.postgresql.org/message-id/16807.1456091547%40sss.pgh.pa.us\n>\n> it now feels to me like maybe this change was a mistake. Perhaps\n> instead of the committed change, we ought to go the other way and\n> rip out the infinity checks in numeric generate_series().\n>\n\nThe infinite-upper-bound-withlimit-pushdown counterexample makes sense, but\nseems like we're using generate_series() only because we lack a function\nthat generates a series of N elements, without a specified upper bound,\nsomething like\n\n generate_finite_series( start, step, num_elements )\n\nAnd if we did that, I'd lobby that we have one that takes dates as well as\none that takes timestamps, because that was my reason for starting the\nthread above.",
"msg_date": "Mon, 9 May 2022 02:19:30 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query generates infinite loop"
},
{
"msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> The infinite-upper-bound-withlimit-pushdown counterexample makes sense, but\n> seems like we're using generate_series() only because we lack a function\n> that generates a series of N elements, without a specified upper bound,\n> something like\n\n> generate_finite_series( start, step, num_elements )\n\nYeah, that could be a reasonable thing to add.\n\n> And if we did that, I'd lobby that we have one that takes dates as well as\n> one that takes timestamps, because that was my reason for starting the\n> thread above.\n\nLess sure about that. ISTM the reason that the previous proposal failed\nwas that it introduced too much ambiguity about how to resolve\nunknown-type arguments. Wouldn't the same problems arise here?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 May 2022 12:42:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query generates infinite loop"
},
{
"msg_contents": ">\n> Less sure about that. ISTM the reason that the previous proposal failed\n> was that it introduced too much ambiguity about how to resolve\n> unknown-type arguments. Wouldn't the same problems arise here?\n>\n\nIf I recall, the problem was that the lack of a date-specific\ngenerate_series function would result in a date value being coerced to\ntimestamp, and thus adding generate_series(date, date, step) would change\nbehavior of existing code, and that was a POLA violation (among other bad\nthings).\n\nBy adding a different function, there is no prior behavior to worry about.\nSo we should be safe with the following signatures doing the right thing,\nyes?:\n generate_finite_series(start timestamp, step interval, num_elements\ninteger)\n generate_finite_series(start date, step integer, num_elements integer)\n generate_finite_series(start date, step interval year to month,\nnum_elements integer)",
"msg_date": "Tue, 10 May 2022 19:24:15 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query generates infinite loop"
},
{
"msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n>> Less sure about that. ISTM the reason that the previous proposal failed\n>> was that it introduced too much ambiguity about how to resolve\n>> unknown-type arguments. Wouldn't the same problems arise here?\n\n> By adding a different function, there is no prior behavior to worry about.\n\nTrue, that's one less thing to worry about.\n\n> So we should be safe with the following signatures doing the right thing,\n> yes?:\n> generate_finite_series(start timestamp, step interval, num_elements\n> integer)\n> generate_finite_series(start date, step integer, num_elements integer)\n> generate_finite_series(start date, step interval year to month,\n> num_elements integer)\n\nNo. You can experiment with it easily enough using stub functions:\n\nregression=# create function generate_finite_series(start timestamp, step interval, num_elements\nregression(# integer) returns timestamp as 'select $1' language sql;\nCREATE FUNCTION\nregression=# create function generate_finite_series(start date, step integer, num_elements integer) returns timestamp as 'select $1' language sql;\nCREATE FUNCTION\nregression=# create function generate_finite_series(start date, step interval year to month,\nregression(# num_elements integer) returns timestamp as 'select $1' language sql;;\nCREATE FUNCTION\n\nregression=# select generate_finite_series(current_date, '1 day', 10);\nERROR: function generate_finite_series(date, unknown, integer) is not unique\nLINE 1: select generate_finite_series(current_date, '1 day', 10);\n ^\nHINT: Could not choose a best candidate function. 
You might need to add explicit type casts.\n\nIt's even worse if the first argument is also an unknown-type literal.\nSure, you could add explicit casts to force the choice of variant,\nbut then ease of use went out the window somewhere --- and IMO this\nproposal is mostly about ease of use, since there's no fundamentally\nnew functionality.\n\nIt looks like you could make it work with just these three variants:\n\nregression=# \\df generate_finite_series\n List of functions\n Schema | Name | Result data type | Argument data types | Type \n--------+------------------------+-----------------------------+------------------------------------------------------------------------+------\n public | generate_finite_series | timestamp without time zone | start date, step interval, num_elements integer | func\n public | generate_finite_series | timestamp with time zone | start timestamp with time zone, step interval, num_elements integer | func\n public | generate_finite_series | timestamp without time zone | start timestamp without time zone, step interval, num_elements integer | func\n(3 rows)\n\nI get non-error results with these:\n\nregression=# select generate_finite_series(current_date, '1 day', 10);\n generate_finite_series \n------------------------\n 2022-05-10 00:00:00\n(1 row)\n\nregression=# select generate_finite_series('now', '1 day', 10);\n generate_finite_series \n-------------------------------\n 2022-05-10 19:35:33.773738-04\n(1 row)\n\nThat shows that an unknown-type literal in the first argument will default\nto timestamptz given these choices, which seems like a sane default.\n\nBTW, you don't get to say \"interval year to month\" as a function argument,\nor at least it won't do anything useful. If you want to restrict the\ncontents of the interval it'll have to be a runtime check inside the\nfunction.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 May 2022 19:42:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query generates infinite loop"
}
] |
[
{
"msg_contents": "The following code doesn't make a lot of sense to me:\n\n /*\n * All done with end-of-recovery actions.\n *\n * Now allow backends to write WAL and update the control file status in\n * consequence. SharedRecoveryState, that controls if backends can write\n * WAL, is updated while holding ControlFileLock to prevent other backends\n * to look at an inconsistent state of the control file in shared memory.\n * There is still a small window during which backends can write WAL and\n * the control file is still referring to a system not in DB_IN_PRODUCTION\n * state while looking at the on-disk control file.\n *\n * Also, we use info_lck to update SharedRecoveryState to ensure that\n * there are no race conditions concerning visibility of other recent\n * updates to shared memory.\n */\n LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n ControlFile->state = DB_IN_PRODUCTION;\n\n SpinLockAcquire(&XLogCtl->info_lck);\n XLogCtl->SharedRecoveryState = RECOVERY_STATE_DONE;\n SpinLockRelease(&XLogCtl->info_lck);\n\n UpdateControlFile();\n LWLockRelease(ControlFileLock);\n\nBefore ebdf5bf7d1c97a926e2b0cb6523344c2643623c7 (2016) we changed the\ncontrol file state first, then did a bunch of stuff like StartupCLOG()\nand TrimCLOG() and RecoverPreparedTransactions(), and then set\nRECOVERY_STATE_DONE. Judging by the commit message for the\naforementioned commit, some people didn't like the fact that\nDB_IN_PRODUCTION would show up in the control file potentially some\ntime in advance of when the server was actually able to accept\nread-write connections. However, it seems to me that we now have the\nopposite problem: as soon as we set RECOVERY_STATE_DONE in shared\nmemory, some other backend can write WAL, which I think means that we\nmight start writing WAL before the control file says we're in\nproduction. 
Which might not be an entirely cosmetic problem, because\nthere's code that looks at whether the control file state is\nDB_SHUTDOWNED to figure out whether we crashed previously -- and it's\nhard to imagine that it would be OK to write some WAL, crash before\nupdating the control file state, and then observe the control file\nstate on restart to still be indicative of a clean shutdown. I imagine\nthe race is rather narrow, but I don't see what prevents it in theory.\n\nIt seems reasonable to me to want these changes to happen as close\ntogether as possible, and before\nebdf5bf7d1c97a926e2b0cb6523344c2643623c7 that wasn't the case, so I do\nacknowledge that there was a legitimate reason to make a change. But I\ndon't think the change was correct in detail. True simultaneity is\nimpossible, and one change or the other must happen first, rather than\ninterleaving them as this code does. And I think the one that should\nhappen first is the control file update, including the on-disk copy,\nbecause no WAL should be generated until that's fully complete. If\nthat's not good enough, we could update the control file twice, once\njust before allowing connections to say that we're about to allow new\nWAL, and once just after to confirm that we did. The first update\nwould be for the benefit of the server itself, so that it can be\ncertain whether any WAL might have been generated, and the second one\nwould just be for the benefit of observers.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Apr 2022 16:29:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "when should we set DB_IN_PRODUCTION?"
}
] |
[
{
"msg_contents": "I did a test run of renumber_oids.pl to see if there would be any\nproblems when the time comes (pretty soon!) to run it for v15.\nDepressingly enough, I found two problems:\n\n1. When commit dfb75e478 invented DECLARE_UNIQUE_INDEX_PKEY,\nit neglected to teach renumber_oids.pl about it. I'm surprised\nwe did not notice this last year.\n\n2. renumber_oids.pl failed miserably on pg_parameter_acl.h:\n\n@@ -48,11 +48,11 @@ CATALOG(pg_parameter_acl,8924,ParameterAclRelationId) BKI_SHARED_RELATION\n */\n typedef FormData_pg_parameter_acl *Form_pg_parameter_acl;\n \n-DECLARE_TOAST(pg_parameter_acl, 8925, 8926);\n+DECLARE_TOAST(pg_parameter_acl, 6244, 6245);\n #define PgParameterAclToastTable 8925\n #define PgParameterAclToastIndex 8926\n\nbecause of course it didn't know it should update the\nPgParameterAclToastTable and PgParameterAclToastIndex macro definitions.\n(We have this same coding pattern elsewhere, but I guess that\nrenumber_oids.pl has never previously been asked to renumber a shared\ncatalog.)\n\nI think the right way to fix #2 is to put the responsibility for\ngenerating the #define's into genbki.pl, instead of this mistake-prone\napproach of duplicating the OID constants in the source code.\n\nThe attached proposed patch invents a variant macro\nDECLARE_TOAST_WITH_MACRO for the relatively small number of cases\nwhere we need such OID macros. A different idea could be to require\nall the catalog headers to define C macros for their toast tables\nand change DECLARE_TOAST to a five-argument macro across the board.\nHowever, that would require touching a bunch more places and inventing\na bunch more macro names, and it didn't really seem useful.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 20 Apr 2022 16:45:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "renumber_oids.pl needs some updates"
},
{
"msg_contents": "On 20.04.22 22:45, Tom Lane wrote:\n> I think the right way to fix #2 is to put the responsibility for\n> generating the #define's into genbki.pl, instead of this mistake-prone\n> approach of duplicating the OID constants in the source code.\n> \n> The attached proposed patch invents a variant macro\n> DECLARE_TOAST_WITH_MACRO for the relatively small number of cases\n> where we need such OID macros.\n\nThis makes sense.\n\nA more elaborate (future) project would be to have genbki.pl generate \nall of IsSharedRelation(), which is the only place these toast-table-OID \nmacros are used, AFAICT.\n\n\n\n",
"msg_date": "Wed, 20 Apr 2022 22:56:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: renumber_oids.pl needs some updates"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 20.04.22 22:45, Tom Lane wrote:\n>> The attached proposed patch invents a variant macro\n>> DECLARE_TOAST_WITH_MACRO for the relatively small number of cases\n>> where we need such OID macros.\n\n> This makes sense.\n\n> A more elaborate (future) project would be to have genbki.pl generate \n> all of IsSharedRelation(), which is the only place these toast-table-OID \n> macros are used, AFAICT.\n\nPerhaps. We invent shared catalogs at a slow enough rate that I'm\nnot sure the effort would ever pay for itself in person-hours,\nbut maybe making such invention a trifle less error-prone is\nworth something.\n\nI'd still want to keep this form of DECLARE_TOAST, in case someone\ncomes up with a different reason to want macro names for toast OIDs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Apr 2022 17:10:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: renumber_oids.pl needs some updates"
}
] |
[
{
"msg_contents": "Hi,\n\nit seems there's something wrong with CTE inlining when there's a view\ncontaining a correlated subquery referencing the CTE. Consider a simple\nexample like this:\n\n create table results (\n id serial primary key,\n run text,\n tps float4\n );\n\n create view results_agg as\n with base_tps as (\n select run, tps from results\n )\n select\n run,\n count(*) as runs,\n\n (select tps from base_tps b where b.run = r.run) AS base_tps\n\n from results r\n group by\n run\n order by\n run;\n\n explain SELECT run FROM results_agg ORDER BY 1;\n\n\nThis crashes on this assert in inline_cte():\n\n Assert(context.refcount == 0);\n\nbecause the refcount value remains 1. There's a backtrace attached.\n\nI don't know why exactly this happens, my knowledge of CTE inlining is\nsomewhat limited. The counter is clearly out of sync\n\n\nbut a couple more observations:\n\n1) it fails all the way back to PG12, where CTE inlining was added\n\n2) it does not happen if the CTE is defined as MATERIALIZED\n\n QUERY PLAN\n -----------------------------------------\n Subquery Scan on results_agg\n -> Sort\n Sort Key: r.run\n CTE base_tps\n -> Seq Scan on results\n -> HashAggregate\n Group Key: r.run\n -> Seq Scan on results r\n (8 rows)\n\n3) without asserts, it seems to work and the query generates this plan\n\n QUERY PLAN\n -----------------------------------------\n Subquery Scan on results_agg\n -> Sort\n Sort Key: r.run\n -> HashAggregate\n Group Key: r.run\n -> Seq Scan on results r\n (6 rows)\n\n4) it does not seem to happen without the view, i.e. 
this works\n\n explain\n with base_tps as (\n select run, tps from results\n )\n select run from (\n select\n run,\n count(*) as runs,\n\n (select tps from base_tps b where b.run = r.run) AS base_tps\n\n from results r\n group by\n run\n order by\n run\n ) results_agg order by 1;\n\nThe difference between plans in (2) and (3) is interesting, because it\nseems the CTE got inlined, so why was the refcount not decremented?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 20 Apr 2022 23:33:19 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Assert failure in CTE inlining with view and correlated subquery"
},
{
"msg_contents": "On Thu, Apr 21, 2022 at 5:33 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n>\n> The difference between plans in (2) and (3) is interesting, because it\n> seems the CTE got inlined, so why was the refcount not decremented?\n>\n\nThe query is not actually referencing the cte. So the cte range table\nentry would not appear anywhere in the query tree. That's why refcount\nis not decremented after inline cte walker.\n\nIf we explicitly reference the cte in the query, say in the targetlist,\nit would then work.\n\n# explain (costs off) SELECT * FROM results_agg ORDER BY 1;\n QUERY PLAN\n---------------------------------------\n Sort\n Sort Key: r.run\n -> HashAggregate\n Group Key: r.run\n -> Seq Scan on results r\n SubPlan 1\n -> Seq Scan on results\n Filter: (run = r.run)\n(8 rows)\n\nIMO the culprit is that we incorrectly set cterefcount to one while\nactually the cte is not referenced at all.\n\nThanks\nRichard\n\nOn Thu, Apr 21, 2022 at 5:33 AM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\nThe difference between plans in (2) and (3) is interesting, because it\nseems the CTE got inlined, so why was the refcount not decremented?The query is not actually referencing the cte. So the cte range tableentry would not appear anywhere in the query tree. That's why refcountis not decremented after inline cte walker.If we explicitly reference the cte in the query, say in the targetlist,it would then work.# explain (costs off) SELECT * FROM results_agg ORDER BY 1; QUERY PLAN--------------------------------------- Sort Sort Key: r.run -> HashAggregate Group Key: r.run -> Seq Scan on results r SubPlan 1 -> Seq Scan on results Filter: (run = r.run)(8 rows)IMO the culprit is that we incorrectly set cterefcount to one whileactually the cte is not referenced at all.ThanksRichard",
"msg_date": "Thu, 21 Apr 2022 15:28:32 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure in CTE inlining with view and correlated subquery"
},
{
"msg_contents": "On Thu, Apr 21, 2022 at 5:33 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n>\n> it seems there's something wrong with CTE inlining when there's a view\n> containing a correlated subquery referencing the CTE.\n>\n\nBTW, seems view is not a necessary condition to reproduce this issue.\nFor instance:\n\ncreate table t (a int, b int);\n\nexplain (costs off) select a from\n(\n with t_cte as (select a, b from t)\n select\n a,\n (select b from t_cte where t_cte.a = t.a) AS t_sub\n from t\n) sub;\n\nThanks\nRichard\n\nOn Thu, Apr 21, 2022 at 5:33 AM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\nit seems there's something wrong with CTE inlining when there's a view\ncontaining a correlated subquery referencing the CTE. BTW, seems view is not a necessary condition to reproduce this issue.For instance:create table t (a int, b int);explain (costs off) select a from( with t_cte as (select a, b from t) select a, (select b from t_cte where t_cte.a = t.a) AS t_sub from t) sub;ThanksRichard",
"msg_date": "Thu, 21 Apr 2022 15:51:53 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure in CTE inlining with view and correlated subquery"
},
{
"msg_contents": "On Thu, Apr 21, 2022 at 3:51 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n>\n> On Thu, Apr 21, 2022 at 5:33 AM Tomas Vondra <\n> tomas.vondra@enterprisedb.com> wrote:\n>\n>>\n>> it seems there's something wrong with CTE inlining when there's a view\n>> containing a correlated subquery referencing the CTE.\n>>\n>\n> BTW, seems view is not a necessary condition to reproduce this issue.\n> For instance:\n>\n> create table t (a int, b int);\n>\n> explain (costs off) select a from\n> (\n> with t_cte as (select a, b from t)\n> select\n> a,\n> (select b from t_cte where t_cte.a = t.a) AS t_sub\n> from t\n> ) sub;\n>\n\nFurther debugging shows that in this repro the reference to the CTE is\nremoved when generating paths for the subquery 'sub', where we would try\nto remove subquery targetlist items that are not needed. So for the\nitems we are to remove, maybe we need to check if they contain CTEs and\nif so decrease cterefcount of the CTEs correspondingly.\n\nThanks\nRichard\n\nOn Thu, Apr 21, 2022 at 3:51 PM Richard Guo <guofenglinux@gmail.com> wrote:On Thu, Apr 21, 2022 at 5:33 AM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\nit seems there's something wrong with CTE inlining when there's a view\ncontaining a correlated subquery referencing the CTE. BTW, seems view is not a necessary condition to reproduce this issue.For instance:create table t (a int, b int);explain (costs off) select a from( with t_cte as (select a, b from t) select a, (select b from t_cte where t_cte.a = t.a) AS t_sub from t) sub;Further debugging shows that in this repro the reference to the CTE isremoved when generating paths for the subquery 'sub', where we would tryto remove subquery targetlist items that are not needed. So for theitems we are to remove, maybe we need to check if they contain CTEs andif so decrease cterefcount of the CTEs correspondingly.ThanksRichard",
"msg_date": "Thu, 21 Apr 2022 16:29:01 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure in CTE inlining with view and correlated subquery"
},
{
"msg_contents": "On 4/21/22 10:29, Richard Guo wrote:\n> \n> On Thu, Apr 21, 2022 at 3:51 PM Richard Guo <guofenglinux@gmail.com\n> <mailto:guofenglinux@gmail.com>> wrote:\n> \n> \n> On Thu, Apr 21, 2022 at 5:33 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com\n> <mailto:tomas.vondra@enterprisedb.com>> wrote:\n> \n> \n> it seems there's something wrong with CTE inlining when there's\n> a view\n> containing a correlated subquery referencing the CTE. \n> \n> \n> BTW, seems view is not a necessary condition to reproduce this issue.\n> For instance:\n> \n> create table t (a int, b int);\n> \n> explain (costs off) select a from\n> (\n> with t_cte as (select a, b from t)\n> select\n> a,\n> (select b from t_cte where t_cte.a = t.a) AS t_sub\n> from t\n> ) sub;\n> \n> \n> Further debugging shows that in this repro the reference to the CTE is\n> removed when generating paths for the subquery 'sub', where we would try\n> to remove subquery targetlist items that are not needed. So for the\n> items we are to remove, maybe we need to check if they contain CTEs and\n> if so decrease cterefcount of the CTEs correspondingly.\n> \n\nRight, at some point we remove the unnecessary targetlist entries, but\nthat ignores the entry may reference a CTE. That's pretty much what I\nmeant by the counter being \"out of sync\".\n\nUpdating the counter while removing the entry is one option, but maybe\nwe could simply delay counting the CTE references until after that?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 21 Apr 2022 13:48:16 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure in CTE inlining with view and correlated subquery"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 4/21/22 10:29, Richard Guo wrote:\n>> Further debugging shows that in this repro the reference to the CTE is\n>> removed when generating paths for the subquery 'sub', where we would try\n>> to remove subquery targetlist items that are not needed. So for the\n>> items we are to remove, maybe we need to check if they contain CTEs and\n>> if so decrease cterefcount of the CTEs correspondingly.\n\n> Right, at some point we remove the unnecessary targetlist entries, but\n> that ignores the entry may reference a CTE. That's pretty much what I\n> meant by the counter being \"out of sync\".\n> Updating the counter while removing the entry is one option, but maybe\n> we could simply delay counting the CTE references until after that?\n\nI think we should just drop this cross-check altogether; it is not nearly\nuseful enough to justify the work that'd be involved in maintaining\ncterefcount accurately for all such transformations. All it's really\nthere for is to be sure that we don't need to make a subplan for the\ninlined CTE.\n\nThere are two downstream consumers of cte_plan_ids, which currently just\nhave Asserts that we made a subplan. I think it'd be worth converting\nthose to real run-time tests, which leads me to something more or less as\nattached.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 21 Apr 2022 15:03:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure in CTE inlining with view and correlated subquery"
},
{
"msg_contents": "On 4/21/22 21:03, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> On 4/21/22 10:29, Richard Guo wrote:\n>>> Further debugging shows that in this repro the reference to the CTE is\n>>> removed when generating paths for the subquery 'sub', where we would try\n>>> to remove subquery targetlist items that are not needed. So for the\n>>> items we are to remove, maybe we need to check if they contain CTEs and\n>>> if so decrease cterefcount of the CTEs correspondingly.\n> \n>> Right, at some point we remove the unnecessary targetlist entries, but\n>> that ignores the entry may reference a CTE. That's pretty much what I\n>> meant by the counter being \"out of sync\".\n>> Updating the counter while removing the entry is one option, but maybe\n>> we could simply delay counting the CTE references until after that?\n> \n> I think we should just drop this cross-check altogether; it is not nearly\n> useful enough to justify the work that'd be involved in maintaining\n> cterefcount accurately for all such transformations. All it's really\n> there for is to be sure that we don't need to make a subplan for the\n> inlined CTE.\n> \n> There are two downstream consumers of cte_plan_ids, which currently just\n> have Asserts that we made a subplan. I think it'd be worth converting\n> those to real run-time tests, which leads me to something more or less as\n> attached.\n> \n\nWFM. I'm not particularly attached to the assert, so if you say it's not\nworth it let's get rid of it.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 21 Apr 2022 22:45:16 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure in CTE inlining with view and correlated subquery"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 4/21/22 21:03, Tom Lane wrote:\n>> I think we should just drop this cross-check altogether; it is not nearly\n>> useful enough to justify the work that'd be involved in maintaining\n>> cterefcount accurately for all such transformations. All it's really\n>> there for is to be sure that we don't need to make a subplan for the\n>> inlined CTE.\n\n> WFM. I'm not particularly attached to the assert, so if you say it's not\n> worth it let's get rid of it.\n\nDone.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Apr 2022 17:59:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure in CTE inlining with view and correlated subquery"
}
] |
[
{
"msg_contents": "My recent bugfix commit d3609dd2 addressed an issue with VACUUM\nVERBOSE. It would aggregate buffers hit/missed/dirtied counts\nincorrectly (by double counting), though only when there are multiple\nheap rels processed by the same VACUUM command. It failed to account\nfor the fact that the VacuumPageHit, VacuumPageMiss, and\nVacuumPageDirty global variables were really only designed to work in\nautovacuum (see 2011 commit 9d3b5024).\n\nI just realized that there is one remaining problem: parallel VACUUM\ndoesn't care about these global variables, so there will still be\ndiscrepancies there. I can't really blame that on parallel VACUUM,\nthough, because vacuumparallel.c at least copies buffer usage counters\nfrom parallel workers (stored in its PARALLEL_VACUUM_KEY_BUFFER_USAGE\nspace). I wonder why we still have these seemingly redundant global\nvariables, which are maintained by bufmgr.c (alongside the\npgBufferUsage stuff). It looks like recent commit 5dc0418fab\n(\"Prefetch data referenced by the WAL, take II\") taught bufmgr.c to\ninstrument all buffer accesses. So it looks like we just don't need\nVacuumPageHit and friends anymore.\n\nWouldn't it be better if every VACUUM used the same generic approach,\nusing pgBufferUsage? As a general rule code that only runs in\nautovacuum is a recipe for bugs. It looks like VacuumPageHit is\nmaintained based on different rules to pgBufferUsage.shared_blks_hit\nin bufmgr.c (just as an example), which seems like a bad sign.\nBesides, the pgBufferUsage counters have more information, which seems\nlike it might be useful to the lazyvacuum.c instrumentation.\n\nOne question for the author of the WAL prefetch patch, Thomas (CC'd):\nIt's not 100% clear what the expectation is with pgBufferUsage when\ntrack_io_timing is off, so are fields like\npgBufferUsage.shared_blks_hit (i.e. 
those that don't have a\ntime/duration component) officially okay to rely on across the board?\nIt looks like they are okay to rely on (even when track_io_timing is\noff), but it would be nice to put that on a formal footing, if it\nisn't already.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 20 Apr 2022 15:03:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "More problems with VacuumPageHit style global variables"
},
{
"msg_contents": "On Thu, Apr 21, 2022 at 10:03 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I just realized that there is one remaining problem: parallel VACUUM\n> doesn't care about these global variables, so there will still be\n> discrepancies there. I can't really blame that on parallel VACUUM,\n> though, because vacuumparallel.c at least copies buffer usage counters\n> from parallel workers (stored in its PARALLEL_VACUUM_KEY_BUFFER_USAGE\n> space). I wonder why we still have these seemingly redundant global\n> variables, which are maintained by bufmgr.c (alongside the\n> pgBufferUsage stuff). It looks like recent commit 5dc0418fab\n> (\"Prefetch data referenced by the WAL, take II\") taught bufmgr.c to\n> instrument all buffer accesses. So it looks like we just don't need\n> VacuumPageHit and friends anymore.\n\nYeah, that sounds right.\n\n> Wouldn't it be better if every VACUUM used the same generic approach,\n> using pgBufferUsage? As a general rule code that only runs in\n> autovacuum is a recipe for bugs. It looks like VacuumPageHit is\n> maintained based on different rules to pgBufferUsage.shared_blks_hit\n> in bufmgr.c (just as an example), which seems like a bad sign.\n> Besides, the pgBufferUsage counters have more information, which seems\n> like it might be useful to the lazyvacuum.c instrumentation.\n\n+1\n\n> One question for the author of the WAL prefetch patch, Thomas (CC'd):\n> It's not 100% clear what the expectation is with pgBufferUsage when\n> track_io_timing is off, so are fields like\n> pgBufferUsage.shared_blks_hit (i.e. 
those that don't have a\n> time/duration component) officially okay to rely on across the board?\n> It looks like they are okay to rely on (even when track_io_timing is\n> off), but it would be nice to put that on a formal footing, if it\n> isn't already.\n\nRight, that commit did this, plus the local variant:\n\n@@ -680,6 +682,8 @@ ReadRecentBuffer(RelFileNode rnode, ForkNumber\nforkNum, BlockNumber blockNum,\n else\n PinBuffer_Locked(bufHdr); /* pin\nfor first time */\n\n+ pgBufferUsage.shared_blks_hit++;\n+\n return true;\n }\n\nI should perhaps have committed those changes separately with their\nown explanation, since it was really an oversight in commit\n2f27f8c5114 that this type of hit wasn't counted (as noted by Julien\nin review of the WAL prefetcher). I doubt anyone else has discovered\nthat function, which has no caller in PG14.\n\nAs for your general question, I think you must be right. From a quick\nrummage around in the commit log, it does appear that commit cddca5ec\n(2009), which introduced pgBufferUsage, always bumped the counters\nunconditionally. It predated track_io_timing by years (40b9b957694\n(2012)), and long before that the Berkeley code already had a simpler\nthing along those lines (ReadBufferCount, BufferHitCount etc). I\ndidn't look up the discussion, but I wonder if the reason commit\n9d3b5024435 (2011) introduced VacuumPage{Hit,Miss,Dirty} instead of\nmeasuring level changes in pgBufferUsage is that pgBufferUsage didn't\nhave a dirty count until commit 2254367435f (2012), and once the\nauthors had decided they'd need a new special counter for that, they\ncontinued down that path and added the others too?\n\n\n",
"msg_date": "Thu, 21 Apr 2022 14:50:21 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: More problems with VacuumPageHit style global variables"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 7:50 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> As for your general question, I think you must be right. From a quick\n> rummage around in the commit log, it does appear that commit cddca5ec\n> (2009), which introduced pgBufferUsage, always bumped the counters\n> unconditionally. It predated track_io_timing by years (40b9b957694\n> (2012)), and long before that the Berkeley code already had a simpler\n> thing along those lines (ReadBufferCount, BufferHitCount etc). I\n> didn't look up the discussion, but I wonder if the reason commit\n> 9d3b5024435 (2011) introduced VacuumPage{Hit,Miss,Dirty} instead of\n> measuring level changes in pgBufferUsage is that pgBufferUsage didn't\n> have a dirty count until commit 2254367435f (2012), and once the\n> authors had decided they'd need a new special counter for that, they\n> continued down that path and added the others too?\n\nI knew about pgBufferUsage, and I knew about\nVacuumPage{Hit,Miss,Dirty} for a long time. But somehow I didn't make\nthe very obvious connection between the two until today. I am probably\nnot the only one.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 20 Apr 2022 20:00:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: More problems with VacuumPageHit style global variables"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 8:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I knew about pgBufferUsage, and I knew about\n> VacuumPage{Hit,Miss,Dirty} for a long time. But somehow I didn't make\n> the very obvious connection between the two until today. I am probably\n> not the only one.\n\nWhat about pgStatBlockWriteTime/pgstat_count_buffer_write_time(),\nwhich also seem redundant? These were added by commit 64482890 back in\n2012 (though under slightly different names), and are still used today\nby code that aggregates database-level timing stats -- see\npgstat_update_dbstats().\n\nCode like FlushBuffer() maintains both pgStatBlockWriteTime and\npgBufferUsage.blk_write_time (iff track_io_timing is on). So looking\nat both the consumer side and the produce side makes it no more clear\nwhy both are needed.\n\nI suspect pgStatBlockWriteTime exists because of a similar kind of\nhistoric confusion, or losing track of things. There are\nsimilar-looking variables named things like pgStatXactCommit, which\nare not redundant (since pgBufferUsage doesn't have any of that, just\ngranular I/O timing stuff). It would have been easy to miss the fact\nthat only a subset of these pgStat* variables were redundant. Also\nseems possible that there was confusion about which variable was owned\nby what subsystem, with the pgStat* stuff appearing to be a stats\ncollector thing, while pgBufferUsage appeared to be an executor thing.\n\nI don't think that there is any risk of one user of either variable\n\"clobbering\" some other user -- the current values of the variables\nare not actually meaningful at all. They're only useful as a way that\nan arbitrary piece of code instruments an arbitrary operation, by\nmaking their own copies, running whatever the operation is, and then\nreporting on the deltas. Which makes it even more surprising that this\nwas overlooked until now.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 21 Apr 2022 16:28:01 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: More problems with VacuumPageHit style global variables"
},
{
"msg_contents": "On Thu, Apr 21, 2022 at 4:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I don't think that there is any risk of one user of either variable\n> \"clobbering\" some other user -- the current values of the variables\n> are not actually meaningful at all. They're only useful as a way that\n> an arbitrary piece of code instruments an arbitrary operation, by\n> making their own copies, running whatever the operation is, and then\n> reporting on the deltas. Which makes it even more surprising that this\n> was overlooked until now.\n\nI suppose code like pgstat_update_dbstats() would need to copy\npgBufferUsage somewhere if we were to get rid of pgStatBlockReadTime\nand pgStatBlockWriteTime. That might not have been acceptable back\nwhen we had the old stats collector; frequent copying of pgBufferUsage\nmight have non-trivial overhead. The relevant struct (BufferUsage) has\nover 10 64-bit integers, versus only 2 for pgStatBlockReadTime and\npgStatBlockWriteTime.\n\nBut does that matter anymore now that we have the cumulative stats\nsystem? Doesn't the redundancy seem like a problem?\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 21 Apr 2022 16:53:08 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: More problems with VacuumPageHit style global variables"
}
] |
[
{
"msg_contents": "I think this makes sense but I wanted to get confirmation:\n\nI created a table with a column having the type int4 (integer). When I\ninsert a row with a number into that column and get it back out I've\nobserved a discrepancy:\n\nThe DataRow message has the field encoded as an ASCII ‘7’ with a column\nlength of 1 despite the RowDescription having a column length 4. I assume\nthat this is because it’s a simple query (Q) and therefore the format code\nfor all columns is 0 (for text format).\n\nIt makes sense that at the time the RowDescription is written out that it\ncan’t possibly know how many bytes the textual representation of each int\nwill take so it just uses the length of the underlying type.\n\nIs this accurate?\n\n-Tyler\n\nI think this makes sense but I wanted to get confirmation:I created a table with a column having the type int4 (integer). When I insert a row with a number into that column and get it back out I've observed a discrepancy:The DataRow message has the field encoded as an ASCII ‘7’ with a column length of 1 despite the RowDescription having a column length 4. I assume that this is because it’s a simple query (Q) and therefore the format code for all columns is 0 (for text format).It makes sense that at the time the RowDescription is written out that it can’t possibly know how many bytes the textual representation of each int will take so it just uses the length of the underlying type.Is this accurate? -Tyler",
"msg_date": "Wed, 20 Apr 2022 23:39:15 +0000",
"msg_from": "Tyler Brock <tyler.brock@gmail.com>",
"msg_from_op": true,
"msg_subject": "DataRow message for Integer(int4) returns result as text?"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 4:39 PM Tyler Brock <tyler.brock@gmail.com> wrote:\n\n> I think this makes sense but I wanted to get confirmation:\n>\n> I created a table with a column having the type int4 (integer). When I\n> insert a row with a number into that column and get it back out I've\n> observed a discrepancy:\n>\n> The DataRow message has the field encoded as an ASCII ‘7’ with a column\n> length of 1 despite the RowDescription having a column length 4. I assume\n> that this is because it’s a simple query (Q) and therefore the format code\n> for all columns is 0 (for text format).\n>\n> It makes sense that at the time the RowDescription is written out that it\n> can’t possibly know how many bytes the textual representation of each int\n> will take so it just uses the length of the underlying type.\n>\n> Is this accurate?\n>\n>\nYou probably shouldn't think of DataRow as giving you a \"column length\" -\nit is simply giving you the number of bytes you need to read to retrieve\nall of the bytes for the column and thus position your read pointer at the\ndata length Int32 for the subsequent column (which you do iteratively Int16\ncolumn count times).\n\nYou now have bytes for columnN - which you need to interpret via\nRowDescription to transform the raw protocol bytes into a meaningful datum.\n\nYou don't care whether the source API was simple or not - RowDescription\nwill tell you what you need to know to interpret the value - it is all\nself-contained. But yes, because it is a simple query the RowDescription\nmeta-data will inform you that all of the bytes represent (in aggregate ?)\nthe textual representation of the data.\n\nDavid J.\n\nOn Wed, Apr 20, 2022 at 4:39 PM Tyler Brock <tyler.brock@gmail.com> wrote:I think this makes sense but I wanted to get confirmation:I created a table with a column having the type int4 (integer). 
When I insert a row with a number into that column and get it back out I've observed a discrepancy:The DataRow message has the field encoded as an ASCII ‘7’ with a column length of 1 despite the RowDescription having a column length 4. I assume that this is because it’s a simple query (Q) and therefore the format code for all columns is 0 (for text format).It makes sense that at the time the RowDescription is written out that it can’t possibly know how many bytes the textual representation of each int will take so it just uses the length of the underlying type.Is this accurate? You probably shouldn't think of DataRow as giving you a \"column length\" - it is simply giving you the number of bytes you need to read to retrieve all of the bytes for the column and thus position your read pointer at the data length Int32 for the subsequent column (which you do iteratively Int16 column count times).You now have bytes for columnN - which you need to interpret via RowDescription to transform the raw protocol bytes into a meaningful datum.You don't care whether the source API was simple or not - RowDescription will tell you what you need to know to interpret the value - it is all self-contained. But yes, because it is a simple query the RowDescription meta-data will inform you that all of the bytes represent (in aggregate ?) the textual representation of the data.David J.",
"msg_date": "Wed, 20 Apr 2022 17:00:18 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DataRow message for Integer(int4) returns result as text?"
},
{
"msg_contents": "For sure, I’m thinking of it that way. Thanks for confirming.\n\nWhat I don’t understand is that if I respond to psql with the\nRowDescription indicating the format code is 1 for binary (and encode it\nthat way, with 4 bytes, in the DataRow) it doesn’t render the number in the\nresults.\n\n-Tyler\n\n\nOn Apr 20, 2022 at 8:00:18 PM, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Wed, Apr 20, 2022 at 4:39 PM Tyler Brock <tyler.brock@gmail.com> wrote:\n>\n>> I think this makes sense but I wanted to get confirmation:\n>>\n>> I created a table with a column having the type int4 (integer). When I\n>> insert a row with a number into that column and get it back out I've\n>> observed a discrepancy:\n>>\n>> The DataRow message has the field encoded as an ASCII ‘7’ with a column\n>> length of 1 despite the RowDescription having a column length 4. I assume\n>> that this is because it’s a simple query (Q) and therefore the format code\n>> for all columns is 0 (for text format).\n>>\n>> It makes sense that at the time the RowDescription is written out that it\n>> can’t possibly know how many bytes the textual representation of each int\n>> will take so it just uses the length of the underlying type.\n>>\n>> Is this accurate?\n>>\n>>\n> You probably shouldn't think of DataRow as giving you a \"column length\" -\n> it is simply giving you the number of bytes you need to read to retrieve\n> all of the bytes for the column and thus position your read pointer at the\n> data length Int32 for the subsequent column (which you do iteratively Int16\n> column count times).\n>\n> You now have bytes for columnN - which you need to interpret via\n> RowDescription to transform the raw protocol bytes into a meaningful datum.\n>\n> You don't care whether the source API was simple or not - RowDescription\n> will tell you what you need to know to interpret the value - it is all\n> self-contained. 
But yes, because it is a simple query the RowDescription\n> meta-data will inform you that all of the bytes represent (in aggregate ?)\n> the textual representation of the data.\n>\n> David J.\n>\n>\n",
"msg_date": "Thu, 21 Apr 2022 00:11:47 +0000",
"msg_from": "Tyler Brock <tyler.brock@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DataRow message for Integer(int4) returns result as text?"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 5:11 PM Tyler Brock <tyler.brock@gmail.com> wrote:\n\n> For sure, I’m thinking of it that way. Thanks for confirming.\n>\n> What I don’t understand is that if I respond to psql with the\n> RowDescription indicating the format code is 1 for binary (and encode it\n> that way, with 4 bytes, in the DataRow) it doesn’t render the number in the\n> results.\n>\n>>\n>>\nPlease don't top-post.\n\npsql is a command line program, the server is PostgreSQL or postgres.\n\nI'm not familiar with interacting with the server in C or at the protocol\nlevel; I have no idea what that sentence is supposed to mean. But\nRowDescription seems to be strictly informative so how would you \"respond\nto psql with [it]\"?\n\nDavid J.\n\nOn Wed, Apr 20, 2022 at 5:11 PM Tyler Brock <tyler.brock@gmail.com> wrote:\n For sure, I’m thinking of it that way. Thanks for confirming.What I don’t understand is that if I respond to psql with the RowDescription indicating the format code is 1 for binary (and encode it that way, with 4 bytes, in the DataRow) it doesn’t render the number in the results.Please don't top-post.psql is a command line program, the server is PostgreSQL or postgres.I'm not familiar with interacting with the server in C or at the protocol level; I have no idea what that sentence is supposed to mean. But RowDescription seems to be strictly informative so how would you \"respond to psql with [it]\"?David J.",
"msg_date": "Wed, 20 Apr 2022 17:16:28 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DataRow message for Integer(int4) returns result as text?"
},
{
"msg_contents": "I’m not sure what top-posting is?\n\nI’m talking about responding to psql the command line program.\n\n-Tyler\n\n\nOn Apr 20, 2022 at 8:16:28 PM, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Wed, Apr 20, 2022 at 5:11 PM Tyler Brock <tyler.brock@gmail.com> wrote:\n>\n>> For sure, I’m thinking of it that way. Thanks for confirming.\n>>\n>> What I don’t understand is that if I respond to psql with the\n>> RowDescription indicating the format code is 1 for binary (and encode it\n>> that way, with 4 bytes, in the DataRow) it doesn’t render the number in the\n>> results.\n>>\n>>>\n>>>\n> Please don't top-post.\n>\n> psql is a command line program, the server is PostgreSQL or postgres.\n>\n> I'm not familiar with interacting with the server in C or at the protocol\n> level; I have no idea what that sentence is supposed to mean. But\n> RowDescription seems to be strictly informative so how would you \"respond\n> to psql with [it]\"?\n>\n> David J.\n>\n>\n>\n\n\n I’m not sure what top-posting is?I’m talking about responding to psql the command line program.-Tyler\n\nOn Apr 20, 2022 at 8:16:28 PM, David G. Johnston <david.g.johnston@gmail.com> wrote:\n\nOn Wed, Apr 20, 2022 at 5:11 PM Tyler Brock <tyler.brock@gmail.com> wrote:\n For sure, I’m thinking of it that way. Thanks for confirming.What I don’t understand is that if I respond to psql with the RowDescription indicating the format code is 1 for binary (and encode it that way, with 4 bytes, in the DataRow) it doesn’t render the number in the results.Please don't top-post.psql is a command line program, the server is PostgreSQL or postgres.I'm not familiar with interacting with the server in C or at the protocol level; I have no idea what that sentence is supposed to mean. But RowDescription seems to be strictly informative so how would you \"respond to psql with [it]\"?David J.",
"msg_date": "Thu, 21 Apr 2022 00:21:41 +0000",
"msg_from": "Tyler Brock <tyler.brock@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DataRow message for Integer(int4) returns result as text?"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 5:21 PM Tyler Brock <tyler.brock@gmail.com> wrote:\n\n> I’m not sure what top-posting is?\n>\n\nIt's when you place your replies before what you are replying to.\nhttps://en.wikipedia.org/wiki/Posting_style\n\nUnlike mine, which is inline-posting, where the reply is after the thing\nbeing replied to, trimming unneeded context as appropriate.\n\n>\n> I’m talking about responding to psql the command line program.\n>\n>\nOk. I'm outside my league then.\n\nDavid J.\n\nOn Wed, Apr 20, 2022 at 5:21 PM Tyler Brock <tyler.brock@gmail.com> wrote:\n I’m not sure what top-posting is?It's when you place your replies before what you are replying to.https://en.wikipedia.org/wiki/Posting_styleUnlike mine, which is inline-posting, where the reply is after the thing being replied to, trimming unneeded context as appropriate.I’m talking about responding to psql the command line program.Ok. I'm outside my league then.David J.",
"msg_date": "Wed, 20 Apr 2022 17:26:26 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DataRow message for Integer(int4) returns result as text?"
},
{
"msg_contents": "Tyler Brock <tyler.brock@gmail.com> writes:\n> I think this makes sense but I wanted to get confirmation:\n> I created a table with a column having the type int4 (integer). When I\n> insert a row with a number into that column and get it back out I've\n> observed a discrepancy:\n\n> The DataRow message has the field encoded as an ASCII ‘7’ with a column\n> length of 1 despite the RowDescription having a column length 4. I assume\n> that this is because it’s a simple query (Q) and therefore the format code\n> for all columns is 0 (for text format).\n\nIf you mean the \"data type size\" (typlen) field of RowDescription, that\nis arguably completely irrelevant; it's there for historical reasons,\nI think. The contents of a DataRow field will either be a textual\nconversion of the value or the on-the-wire binary representation defined\nby the type's typsend routine. In either case, the actual length of\nthe value as it appears in DataRow is given right there in the DataRow\nmessage. And in either case, the typlen value doesn't necessarily have\nanything to do with the length of the DataRow representation. typlen\ndoes happen to match up with the length that'd appear in DataRow for\nsimple integral types sent in binary format ... but for other cases,\nnot so much.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Apr 2022 23:09:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DataRow message for Integer(int4) returns result as text?"
},
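(Editor's note: the DataRow/RowDescription framing described in this thread can be sketched in code. The following parser is an illustrative example written for this archive, not code from the thread; the message layout — type byte `'D'`, Int32 message length, Int16 column count, then per column an Int32 value length (-1 for NULL) followed by that many bytes — follows the PostgreSQL frontend/backend protocol documentation. Whether each value is text or binary must be decided from the matching RowDescription format codes, exactly as the replies above explain.)

```python
import struct

def parse_data_row(buf: bytes):
    """Split a PostgreSQL DataRow ('D') message into per-column byte strings.

    buf is the complete message: the type byte, an Int32 length, an Int16
    column count, then for each column an Int32 value length (-1 for NULL)
    followed by that many raw bytes.  The DataRow itself never says whether
    those bytes are text or binary -- that comes from RowDescription.
    """
    assert buf[0:1] == b'D', "not a DataRow message"
    (ncols,) = struct.unpack_from('!H', buf, 5)   # Int16 column count at offset 5
    offset = 7
    values = []
    for _ in range(ncols):
        (vlen,) = struct.unpack_from('!i', buf, offset)  # Int32 value length
        offset += 4
        if vlen == -1:            # NULL column: no data bytes follow
            values.append(None)
        else:
            values.append(buf[offset:offset + vlen])
            offset += vlen
    return values

# The int4 value 7 sent in text format, as in the original question:
# one column, value length 1, the single ASCII byte '7'.
# (Message length 11 = 4 len + 2 count + 4 value length + 1 data byte.)
msg = b'D' + struct.pack('!IHi', 11, 1, 1) + b'7'
```

Feeding `msg` to `parse_data_row` yields one column containing the single byte `b'7'` — the "column length 1" the original poster observed, independent of the typlen 4 reported in RowDescription.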
{
"msg_contents": "On 21.04.22 02:11, Tyler Brock wrote:\n> What I don’t understand is that if I respond to psql with the \n> RowDescription indicating the format code is 1 for binary (and encode it \n> that way, with 4 bytes, in the DataRow) it doesn’t render the number in \n> the results.\n\npsql only handles results in text format.\n\n\n\n",
"msg_date": "Thu, 21 Apr 2022 15:56:58 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: DataRow message for Integer(int4) returns result as text?"
}
]
[
{
"msg_contents": "Hackers,\n\nThe new cumulative stats subsystem no longer has a \"lost under heavy load\"\nproblem so that parenthetical should go (or at least be modified).\n\nThese stats can be reset so some discussion about how the system uses them\ngiven that possibility seems like it would be good to add here. I'm not\nsure what that should look like though.\n\nDavid J.\n\n\ndiff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml\nindex 04a04e0e5f..360807c8f9 100644\n--- a/doc/src/sgml/maintenance.sgml\n+++ b/doc/src/sgml/maintenance.sgml\n@@ -652,9 +652,8 @@ vacuum insert threshold = vacuum base insert threshold\n+ vacuum insert scale fac\n tuples to be frozen by earlier vacuums. The number of obsolete tuples\nand\n the number of inserted tuples are obtained from the cumulative\nstatistics system;\n it is a semi-accurate count updated by each <command>UPDATE</command>,\n- <command>DELETE</command> and <command>INSERT</command> operation.\n (It is\n- only semi-accurate because some information might be lost under heavy\n- load.) If the <structfield>relfrozenxid</structfield> value of the\ntable\n+ <command>DELETE</command> and <command>INSERT</command> operation.\n+ If the <structfield>relfrozenxid</structfield> value of the table\n is more than <varname>vacuum_freeze_table_age</varname> transactions\nold,\n an aggressive vacuum is performed to freeze old tuples and advance\n <structfield>relfrozenxid</structfield>; otherwise, only pages that\nhave been modified\n\nHackers,The new cumulative stats subsystem no longer has a \"lost under heavy load\" problem so that parenthetical should go (or at least be modified).These stats can be reset so some discussion about how the system uses them given that possibility seems like it would be good to add here. 
I'm not sure what that should look like though.David J.diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgmlindex 04a04e0e5f..360807c8f9 100644--- a/doc/src/sgml/maintenance.sgml+++ b/doc/src/sgml/maintenance.sgml@@ -652,9 +652,8 @@ vacuum insert threshold = vacuum base insert threshold + vacuum insert scale fac tuples to be frozen by earlier vacuums. The number of obsolete tuples and the number of inserted tuples are obtained from the cumulative statistics system; it is a semi-accurate count updated by each <command>UPDATE</command>,- <command>DELETE</command> and <command>INSERT</command> operation. (It is- only semi-accurate because some information might be lost under heavy- load.) If the <structfield>relfrozenxid</structfield> value of the table+ <command>DELETE</command> and <command>INSERT</command> operation.+ If the <structfield>relfrozenxid</structfield> value of the table is more than <varname>vacuum_freeze_table_age</varname> transactions old, an aggressive vacuum is performed to freeze old tuples and advance <structfield>relfrozenxid</structfield>; otherwise, only pages that have been modified",
"msg_date": "Wed, 20 Apr 2022 16:40:44 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "doc: New cumulative stats subsystem obsoletes comment in\n maintenance.sgml"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 04:40:44PM -0700, David G. Johnston wrote:\n> Hackers,\n> \n> The new cumulative stats subsystem no longer has a \"lost under heavy load\"\n> problem so that parenthetical should go (or at least be modified).\n> \n> These stats can be reset so some discussion about how the system uses them\n> given that possibility seems like it would be good to add here. I'm not sure\n> what that should look like though.\n> \n> diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml\n> index 04a04e0e5f..360807c8f9 100644\n> --- a/doc/src/sgml/maintenance.sgml\n> +++ b/doc/src/sgml/maintenance.sgml\n> @@ -652,9 +652,8 @@ vacuum insert threshold = vacuum base insert threshold +\n> vacuum insert scale fac\n> tuples to be frozen by earlier vacuums. The number of obsolete tuples and\n> the number of inserted tuples are obtained from the cumulative statistics\n> system;\n> it is a semi-accurate count updated by each <command>UPDATE</command>,\n> - <command>DELETE</command> and <command>INSERT</command> operation. (It is\n> - only semi-accurate because some information might be lost under heavy\n> - load.) If the <structfield>relfrozenxid</structfield> value of the table\n> + <command>DELETE</command> and <command>INSERT</command> operation.\n> + If the <structfield>relfrozenxid</structfield> value of the table\n> is more than <varname>vacuum_freeze_table_age</varname> transactions old,\n> an aggressive vacuum is performed to freeze old tuples and advance\n> <structfield>relfrozenxid</structfield>; otherwise, only pages that have\n> been modified\n\nYes, I agree and plan to apply this patch soon.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 14 Jul 2022 18:58:09 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: New cumulative stats subsystem obsoletes comment in\n maintenance.sgml"
},
{
"msg_contents": "Hi,\n\nI had missed David's original email on this topic...\n\nOn 2022-07-14 18:58:09 -0400, Bruce Momjian wrote:\n> On Wed, Apr 20, 2022 at 04:40:44PM -0700, David G. Johnston wrote:\n> > The new cumulative stats subsystem no longer has a \"lost under heavy load\"\n> > problem so that parenthetical should go (or at least be modified).\n> > \n> > These stats can be reset so some discussion about how the system uses them\n> > given that possibility seems like it would be good to add here.� I'm not sure\n> > what�that should look like though.\n> > \n> > diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml\n> > index 04a04e0e5f..360807c8f9 100644\n> > --- a/doc/src/sgml/maintenance.sgml\n> > +++ b/doc/src/sgml/maintenance.sgml\n> > @@ -652,9 +652,8 @@ vacuum insert threshold = vacuum base insert threshold +\n> > vacuum insert scale fac\n> > � � �tuples to be frozen by earlier vacuums.� The number of obsolete tuples and\n> > � � �the number of inserted tuples are obtained from the cumulative statistics\n> > system;\n> > � � �it is a semi-accurate count updated by each <command>UPDATE</command>,\n> > - � �<command>DELETE</command> and <command>INSERT</command> operation. �(It is\n> > - � �only semi-accurate because some information might be lost under heavy\n> > - � �load.) 
�If the <structfield>relfrozenxid</structfield> value of the table\n> > + � �<command>DELETE</command> and <command>INSERT</command> operation.\n> > + � �If the <structfield>relfrozenxid</structfield> value of the table\n> > � � �is more than <varname>vacuum_freeze_table_age</varname> transactions old,\n> > � � �an aggressive vacuum is performed to freeze old tuples and advance\n> > � � �<structfield>relfrozenxid</structfield>; otherwise, only pages that have\n> > been modified\n> \n> Yes, I agree and plan to apply this patch soon.\n\nIt might make sense to still say semi-accurate, but adjust the explanation to\nsay that stats reporting is not instantaneous?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 14 Jul 2022 16:31:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: New cumulative stats subsystem obsoletes comment in\n maintenance.sgml"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 4:31 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> I had missed David's original email on this topic...\n>\n> On 2022-07-14 18:58:09 -0400, Bruce Momjian wrote:\n> > On Wed, Apr 20, 2022 at 04:40:44PM -0700, David G. Johnston wrote:\n> > > The new cumulative stats subsystem no longer has a \"lost under heavy\n> load\"\n> > > problem so that parenthetical should go (or at least be modified).\n> > >\n> > > These stats can be reset so some discussion about how the system uses\n> them\n> > > given that possibility seems like it would be good to add here. I'm\n> not sure\n> > > what that should look like though.\n> > >\n> > > diff --git a/doc/src/sgml/maintenance.sgml\n> b/doc/src/sgml/maintenance.sgml\n> > > index 04a04e0e5f..360807c8f9 100644\n> > > --- a/doc/src/sgml/maintenance.sgml\n> > > +++ b/doc/src/sgml/maintenance.sgml\n> > > @@ -652,9 +652,8 @@ vacuum insert threshold = vacuum base insert\n> threshold +\n> > > vacuum insert scale fac\n> > > tuples to be frozen by earlier vacuums. The number of obsolete\n> tuples and\n> > > the number of inserted tuples are obtained from the cumulative\n> statistics\n> > > system;\n> > > it is a semi-accurate count updated by each\n> <command>UPDATE</command>,\n> > > - <command>DELETE</command> and <command>INSERT</command>\n> operation. (It is\n> > > - only semi-accurate because some information might be lost under\n> heavy\n> > > - load.) 
If the <structfield>relfrozenxid</structfield> value of\n> the table\n> > > + <command>DELETE</command> and <command>INSERT</command> operation.\n> > > + If the <structfield>relfrozenxid</structfield> value of the table\n> > > is more than <varname>vacuum_freeze_table_age</varname>\n> transactions old,\n> > > an aggressive vacuum is performed to freeze old tuples and advance\n> > > <structfield>relfrozenxid</structfield>; otherwise, only pages\n> that have\n> > > been modified\n> >\n> > Yes, I agree and plan to apply this patch soon.\n>\n> It might make sense to still say semi-accurate, but adjust the explanation\n> to\n> say that stats reporting is not instantaneous?\n>\n>\nUnless that delay manifests in executing an UPDATE in a session then\nlooking at these views in the same session and not seeing that update\nreflected I wouldn't mention it. Concurrency aspects are reasonably\nexpected here. But if we do want to mention it maybe:\n\n \"...it is an eventually-consistent count updated by...\"\n\nDavid J.",
"msg_date": "Mon, 18 Jul 2022 19:47:39 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: New cumulative stats subsystem obsoletes comment in\n maintenance.sgml"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-18 19:47:39 -0700, David G. Johnston wrote:\n> On Thu, Jul 14, 2022 at 4:31 PM Andres Freund <andres@anarazel.de> wrote:\n> > It might make sense to still say semi-accurate, but adjust the explanation\n> > to\n> > say that stats reporting is not instantaneous?\n> >\n> >\n> Unless that delay manifests in executing an UPDATE in a session then\n> looking at these views in the same session and not seeing that update\n> reflected I wouldn't mention it.\n\nDepending on which stats you're looking at, yes, that could totally happen. I\ndon't think the issue is not seeing changes from the current transaction\nthough - it's that *after* commit you might not see them for a while (the're\ntransmitted not more than once a second, and can be delayed up to 60s if\nthere's contention).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 18 Jul 2022 20:04:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: New cumulative stats subsystem obsoletes comment in\n maintenance.sgml"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 08:04:12PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-07-18 19:47:39 -0700, David G. Johnston wrote:\n> > On Thu, Jul 14, 2022 at 4:31 PM Andres Freund <andres@anarazel.de> wrote:\n> > > It might make sense to still say semi-accurate, but adjust the explanation\n> > > to\n> > > say that stats reporting is not instantaneous?\n> > >\n> > >\n> > Unless that delay manifests in executing an UPDATE in a session then\n> > looking at these views in the same session and not seeing that update\n> > reflected I wouldn't mention it.\n> \n> Depending on which stats you're looking at, yes, that could totally happen. I\n> don't think the issue is not seeing changes from the current transaction\n> though - it's that *after* commit you might not see them for a while (the're\n> transmitted not more than once a second, and can be delayed up to 60s if\n> there's contention).\n\nSo the docs don't need any changes, I assume.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 12 Aug 2022 15:48:47 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: New cumulative stats subsystem obsoletes comment in\n maintenance.sgml"
},
{
"msg_contents": "On Fri, Aug 12, 2022 at 12:48 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Mon, Jul 18, 2022 at 08:04:12PM -0700, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2022-07-18 19:47:39 -0700, David G. Johnston wrote:\n> > > On Thu, Jul 14, 2022 at 4:31 PM Andres Freund <andres@anarazel.de>\n> wrote:\n> > > > It might make sense to still say semi-accurate, but adjust the\n> explanation\n> > > > to\n> > > > say that stats reporting is not instantaneous?\n> > > >\n> > > >\n> > > Unless that delay manifests in executing an UPDATE in a session then\n> > > looking at these views in the same session and not seeing that update\n> > > reflected I wouldn't mention it.\n> >\n> > Depending on which stats you're looking at, yes, that could totally\n> happen. I\n> > don't think the issue is not seeing changes from the current transaction\n> > though - it's that *after* commit you might not see them for a while\n> (the're\n> > transmitted not more than once a second, and can be delayed up to 60s if\n> > there's contention).\n>\n> So the docs don't need any changes, I assume.\n>\n>\nI dislike using the word accurate here now, it will be accurate, but we\ndon't promise perfect timeliness. So it needs to change:\n\n diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml\nindex 04a04e0e5f..360807c8f9 100644\n--- a/doc/src/sgml/maintenance.sgml\n+++ b/doc/src/sgml/maintenance.sgml\n@@ -652,9 +652,8 @@ vacuum insert threshold = vacuum base insert threshold\n+ vacuum insert scale fac\n tuples to be frozen by earlier vacuums. The number of obsolete tuples\nand\n the number of inserted tuples are obtained from the cumulative\nstatistics system;\n it is a semi-accurate count updated by each <command>UPDATE</command>,\n- <command>DELETE</command> and <command>INSERT</command> operation.\n (It is\n- only semi-accurate because some information might be lost under heavy\n- load.) 
If the <structfield>relfrozenxid</structfield> value of the\ntable\n+ <command>DELETE</command> and <command>INSERT</command> operation.\n+ If the <structfield>relfrozenxid</structfield> value of the table\n is more than <varname>vacuum_freeze_table_age</varname> transactions\nold,\n an aggressive vacuum is performed to freeze old tuples and advance\n <structfield>relfrozenxid</structfield>; otherwise, only pages that\nhave been modified\n\nHowever, also replace the remaining instance of \"a semi-accurate count\"\nwith \"an eventually-consistent count\".\n\n...it is an eventually-consistent count updated by each UPDATE, DELETE, and\nINSERT operation.\n\nDavid J.",
"msg_date": "Fri, 12 Aug 2022 12:58:28 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: New cumulative stats subsystem obsoletes comment in\n maintenance.sgml"
},
{
"msg_contents": "On Fri, Aug 12, 2022 at 12:58:28PM -0700, David G. Johnston wrote:\n> I dislike using the word accurate here now, it will be accurate, but we don't\n> promise perfect timeliness. So it needs to change:\n> \n> diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml\n> index 04a04e0e5f..360807c8f9 100644\n> --- a/doc/src/sgml/maintenance.sgml\n> +++ b/doc/src/sgml/maintenance.sgml\n> @@ -652,9 +652,8 @@ vacuum insert threshold = vacuum base insert threshold +\n> vacuum insert scale fac\n> tuples to be frozen by earlier vacuums. The number of obsolete tuples and\n> the number of inserted tuples are obtained from the cumulative statistics\n> system;\n> it is a semi-accurate count updated by each <command>UPDATE</command>,\n> - <command>DELETE</command> and <command>INSERT</command> operation. (It is\n> - only semi-accurate because some information might be lost under heavy\n> - load.) If the <structfield>relfrozenxid</structfield> value of the table\n> + <command>DELETE</command> and <command>INSERT</command> operation.\n> + If the <structfield>relfrozenxid</structfield> value of the table\n> is more than <varname>vacuum_freeze_table_age</varname> transactions old,\n> an aggressive vacuum is performed to freeze old tuples and advance\n> <structfield>relfrozenxid</structfield>; otherwise, only pages that have\n> been modified\n\nDone in master.\n\n> However, also replace the remaining instance of \"a semi-accurate count\" with\n> \"an eventually-consistent count\".\n\n> ...it is an eventually-consistent count updated by each UPDATE, DELETE, and\n> INSERT operation.\n\nAlso done.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 27 Oct 2023 21:23:29 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: New cumulative stats subsystem obsoletes comment in\n maintenance.sgml"
}
] |
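The thread above quotes the autovacuum insert formula ("vacuum insert threshold = vacuum base insert threshold + vacuum insert scale factor * number of tuples", with the inserted-tuple count coming from the cumulative statistics system). A minimal sketch of that arithmetic, assuming PostgreSQL's default GUC values (`autovacuum_vacuum_insert_threshold = 1000`, `autovacuum_vacuum_insert_scale_factor = 0.2`):

```python
def insert_vacuum_threshold(reltuples, base_threshold=1000, scale_factor=0.2):
    """Number of tuples inserted since the last vacuum that triggers an
    insert-driven autovacuum, mirroring:
    vacuum insert threshold = base threshold + scale factor * reltuples
    """
    return base_threshold + scale_factor * reltuples

# A table with 1,000,000 rows is vacuumed once roughly 201,000 tuples
# have been inserted since the last vacuum.
print(insert_vacuum_threshold(1_000_000))  # 201000.0
```

Because the count is eventually consistent (reported to the stats collector with up to tens of seconds of delay), the trigger point in practice is approximate, not exact.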
[
{
"msg_contents": "Hackers,\n\nI posted all of these elsewhere (docs, bugs) but am consolidating them here\ngoing forward.\n\n\nv0001-database-default-name (-bugs, with a related cleanup suggestion as\nwell)\nhttps://www.postgresql.org/message-id/flat/CAKFQuwZvHH1HVSOu7EYjvshynk4pnDwC5RwkF%3DVfZJvmUskwrQ%40mail.gmail.com#0e6d799478d88aee93402bec35fa64a2\n\n\nv0002-doc-extension-dependent-routine-behavior (-general, reply to user\nconfusion)\nhttps://www.postgresql.org/message-id/CAKFQuwb_QtY25feLeh%3D8uNdnyo1H%3DcN4R3vENsUwQzJP4-0xZg%40mail.gmail.com\n\n\nv0001-doc-savepoint-name-reuse (-docs, reply to user request for\nimprovement)\nhttps://www.postgresql.org/message-id/CAKFQuwYzSb9OW5qTFgc0v9RWMN8bX83wpe8okQ7x6vtcmfA2KQ%40mail.gmail.com\n\n\nv0001-on-conflict-excluded-is-name-not-table (-docs, figured out while\ntrying to improve the docs to reduce user confusion in this area)\nhttps://www.postgresql.org/message-id/flat/CAKFQuwYN20c0%2B7kKvm3PBgibu77BzxDvk9RvoXBb1%3Dj1mDODPw%40mail.gmail.com#ea79c88b55fdccecbd2c4fe549f321c9\n\n\nv0001-doc-make-row-estimation-example-match-prose (-docs, reply to user\npointing of an inconsistency)\nhttps://www.postgresql.org/message-id/CAKFQuwax7V5R_rw%3DEOWmy%3DTBON6v3sveBx_WvwsENskCL5CLQQ%40mail.gmail.com\n\nThanks!\n\nDavid J.",
"msg_date": "Wed, 20 Apr 2022 17:59:01 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Assorted small doc patches"
},
{
"msg_contents": "On 2022-Apr-20, David G. Johnston wrote:\n\n> v0001-doc-savepoint-name-reuse (-docs, reply to user request for\n> improvement)\n> https://www.postgresql.org/message-id/CAKFQuwYzSb9OW5qTFgc0v9RWMN8bX83wpe8okQ7x6vtcmfA2KQ%40mail.gmail.com\n\nThis one is incorrect; rolling back to a savepoint does not remove the\nsavepoint, so if you ROLLBACK TO it again afterwards, you'll get the\nsame one again. In fact, Your proposed example doesn't work as your\ncomments intend.\n\nThe way to get the effect you show is to first RELEASE the second\nsavepoint, then roll back to the earliest one. Maybe like this:\n\nBEGIN;\n INSERT INTO table1 VALUES (1);\n SAVEPOINT my_savepoint;\n INSERT INTO table1 VALUES (2);\n SAVEPOINT my_savepoint;\n INSERT INTO table1 VALUES (3);\n ROLLBACK TO SAVEPOINT my_savepoint;\n SELECT * FROM table1; -- shows rows 1, 2\n\n RELEASE SAVEPOINT my_savepoint;\t-- gets rid of the latest one without rolling back anything\n ROLLBACK TO SAVEPOINT my_savepoint;\t-- rolls back to the earliest one\n SELECT * FROM table1; -- just 1\nCOMMIT;\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 21 Apr 2022 19:46:05 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Assorted small doc patches"
},
{
"msg_contents": "On Thu, Apr 21, 2022 at 10:46 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Apr-20, David G. Johnston wrote:\n>\n> > v0001-doc-savepoint-name-reuse (-docs, reply to user request for\n> > improvement)\n> >\n> https://www.postgresql.org/message-id/CAKFQuwYzSb9OW5qTFgc0v9RWMN8bX83wpe8okQ7x6vtcmfA2KQ%40mail.gmail.com\n>\n> This one is incorrect; rolling back to a savepoint does not remove the\n> savepoint, so if you ROLLBACK TO it again afterwards, you'll get the\n> same one again. In fact, Your proposed example doesn't work as your\n> comments intend.\n>\n\nYeah, my bad for not testing things.\n\n\n>\n> The way to get the effect you show is to first RELEASE the second\n> savepoint, then roll back to the earliest one. Maybe like this:\n>\n> BEGIN;\n> INSERT INTO table1 VALUES (1);\n> SAVEPOINT my_savepoint;\n> INSERT INTO table1 VALUES (2);\n> SAVEPOINT my_savepoint;\n> INSERT INTO table1 VALUES (3);\n> ROLLBACK TO SAVEPOINT my_savepoint;\n> SELECT * FROM table1; -- shows rows 1, 2\n>\n> RELEASE SAVEPOINT my_savepoint; -- gets rid of the latest one\n> without rolling back anything\n> ROLLBACK TO SAVEPOINT my_savepoint; -- rolls back to the earliest one\n> SELECT * FROM table1; -- just 1\n> COMMIT;\n>\n>\nI'm ok with that, though I decided to experiment a bit. I decided to use\ncomments to make the example understandable without needing a server;\nself-contained AND easier to follow the status of both the table and the\nsavepoint reference.\n\nI explicitly demonstrate both release and rollback here along with the\nchoice to use just a single savepoint name. 
We could make even more\nexamples in a \"unit test\" type style but with the commentary I think this\ncommunicates the pertinent points quite well.\n\nBEGIN;\n INSERT INTO table1 VALUES (1);\n SAVEPOINT my_savepoint;\n -- Savepoint: [1]; Table: [1]\n\n INSERT INTO table1 VALUES (2);\n SAVEPOINT my_savepoint;\n -- Savepoint: [1,2]; Table: [1,2]\n\n INSERT INTO table1 VALUES (3);\n SAVEPOINT my_savepoint;\n -- Savepoint: [1,2,3]; Table: [1,2,3]\n\n INSERT INTO table1 VALUES (4);\n -- Savepoint: [1,2,3]; Table: [1,2,3,4]\n\n ROLLBACK TO SAVEPOINT my_savepoint;\n -- Savepoint: [1,2,3]; Table: [1,2,3]\n\n ROLLBACK TO SAVEPOINT my_savepoint; -- No Change\n -- Savepoint: [1,2,3]; Table: [1,2,3]\n SELECT * FROM table1;\n\n RELEASE my_savepoint;\n RELEASE my_savepoint;\n -- Savepoint: [1]; Table: [1,2,3]\n\n SELECT * FROM table1;\n\n ROLLBACK TO SAVEPOINT my_savepoint;\n -- Savepoint: [1]; Table: [1]\n\n SELECT * FROM table1;\nCOMMIT;\n\nDavid J.\n",
"msg_date": "Thu, 21 Apr 2022 12:15:51 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assorted small doc patches"
},
{
    "msg_contents": "Updated status of the set.\n\nOn Wed, Apr 20, 2022 at 5:59 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n>\n> v0001-database-default-name (-bugs, with a related cleanup suggestion as\n> well)\n>\n> https://www.postgresql.org/message-id/flat/CAKFQuwZvHH1HVSOu7EYjvshynk4pnDwC5RwkF%3DVfZJvmUskwrQ%40mail.gmail.com#0e6d799478d88aee93402bec35fa64a2\n>\n>\n> v0002-doc-extension-dependent-routine-behavior (-general, reply to user\n> confusion)\n>\n> https://www.postgresql.org/message-id/CAKFQuwb_QtY25feLeh%3D8uNdnyo1H%3DcN4R3vENsUwQzJP4-0xZg%40mail.gmail.com\n>\n>\n> v0001-doc-savepoint-name-reuse (-docs, reply to user request for\n> improvement)\n>\n> https://www.postgresql.org/message-id/CAKFQuwYzSb9OW5qTFgc0v9RWMN8bX83wpe8okQ7x6vtcmfA2KQ%40mail.gmail.com\n>\n\nPending discussion of alternate presentation of transaction sequence; if\nnot favorable, I can just go with a simple factual fix of the mistake in\nv0001 (see this thread).\n\n\n>\n> v0001-on-conflict-excluded-is-name-not-table (-docs, figured out while\n> trying to improve the docs to reduce user confusion in this area)\n>\n> https://www.postgresql.org/message-id/flat/CAKFQuwYN20c0%2B7kKvm3PBgibu77BzxDvk9RvoXBb1%3Dj1mDODPw%40mail.gmail.com#ea79c88b55fdccecbd2c4fe549f321c9\n>\n>\n> v0001-doc-make-row-estimation-example-match-prose (-docs, reply to user\n> pointing of an inconsistency)\n>\n> https://www.postgresql.org/message-id/CAKFQuwax7V5R_rw%3DEOWmy%3DTBON6v3sveBx_WvwsENskCL5CLQQ%40mail.gmail.com\n>\n\nDavid J.\n",
"msg_date": "Fri, 29 Apr 2022 06:52:57 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assorted small doc patches"
},
{
"msg_contents": "Anything I should be doing differently here to get a bit of\nreviewer/committer time on these? I'll add them to the commitfest for next\nmonth if needed but I'm seeing quick patches going in every week and the\nbatch format done at the beginning of the month got processed through\nwithout issue.\n\nPer: https://wiki.postgresql.org/wiki/Submitting_a_Patch\nI was hoping for Workflow A especially as I acquit myself more than\nadequately on the \"How do you get someone to respond to you?\" items.\n\nI was going to chalk it up to bad timing but the volume of doc patches this\nmonth hasn't really dipped even with the couple of bad bugs being worked on.\n\nThank you!\n\nDavid J.\n\nOn Fri, Apr 29, 2022 at 6:52 AM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> Updated status of the set.\n>\n> On Wed, Apr 20, 2022 at 5:59 PM David G. Johnston <\n> david.g.johnston@gmail.com> wrote:\n>\n>>\n>> v0001-database-default-name (-bugs, with a related cleanup suggestion as\n>> well)\n>>\n>> https://www.postgresql.org/message-id/flat/CAKFQuwZvHH1HVSOu7EYjvshynk4pnDwC5RwkF%3DVfZJvmUskwrQ%40mail.gmail.com#0e6d799478d88aee93402bec35fa64a2\n>>\n>>\n>> v0002-doc-extension-dependent-routine-behavior (-general, reply to user\n>> confusion)\n>>\n>> https://www.postgresql.org/message-id/CAKFQuwb_QtY25feLeh%3D8uNdnyo1H%3DcN4R3vENsUwQzJP4-0xZg%40mail.gmail.com\n>>\n>>\n>> v0001-doc-savepoint-name-reuse (-docs, reply to user request for\n>> improvement)\n>>\n>> https://www.postgresql.org/message-id/CAKFQuwYzSb9OW5qTFgc0v9RWMN8bX83wpe8okQ7x6vtcmfA2KQ%40mail.gmail.com\n>>\n>\n> Pending discussion of alternate presentation of transaction sequence; if\n> not favorable, I can just go with a simple factual fix of the mistake in\n> v0001 (see this thread).\n>\n>\n>>\n>> v0001-on-conflict-excluded-is-name-not-table (-docs, figured out while\n>> trying to improve the docs to reduce user confusion in this area)\n>>\n>> 
https://www.postgresql.org/message-id/flat/CAKFQuwYN20c0%2B7kKvm3PBgibu77BzxDvk9RvoXBb1%3Dj1mDODPw%40mail.gmail.com#ea79c88b55fdccecbd2c4fe549f321c9\n>>\n>>\n>> v0001-doc-make-row-estimation-example-match-prose (-docs, reply to user\n>> pointing of an inconsistency)\n>>\n>> https://www.postgresql.org/message-id/CAKFQuwax7V5R_rw%3DEOWmy%3DTBON6v3sveBx_WvwsENskCL5CLQQ%40mail.gmail.com\n>>\n>\n> David J.\n>\n",
"msg_date": "Tue, 31 May 2022 13:12:23 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assorted small doc patches"
},
{
"msg_contents": "On 31.05.22 22:12, David G. Johnston wrote:\n> Anything I should be doing differently here to get a bit of \n> reviewer/committer time on these? I'll add them to the commitfest for \n> next month if needed but I'm seeing quick patches going in every week \n> and the batch format done at the beginning of the month got processed \n> through without issue.\n\nThese patches appear to have merit, but they address various nontrivial \nareas of functionality, so either a) I just pick out a few that I \nunderstand and deal with those and leave the rest open, or b) I'm \noverwhelmed and do none. It might have been better to post these \nseparately.\n\nI'll start with one though: v0001-database-default-name.patch\n\nI don't understand why you propose this change. It appears to reduce \nprecision.\n\n\n",
"msg_date": "Wed, 1 Jun 2022 16:05:39 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Assorted small doc patches"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 7:05 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 31.05.22 22:12, David G. Johnston wrote:\n> > Anything I should be doing differently here to get a bit of\n> > reviewer/committer time on these? I'll add them to the commitfest for\n> > next month if needed but I'm seeing quick patches going in every week\n> > and the batch format done at the beginning of the month got processed\n> > through without issue.\n>\n> These patches appear to have merit, but they address various nontrivial\n> areas of functionality, so either a) I just pick out a few that I\n> understand and deal with those and leave the rest open, or b) I'm\n> overwhelmed and do none. It might have been better to post these\n> separately.\n>\n\nI did for quite a few of them, per the links provided. But I get your\npoint, the originals weren't on -hackers for many of them and moving them\nover singly probably would have worked out better.\n\n\n> I'll start with one though: v0001-database-default-name.patch\n>\n> I don't understand why you propose this change. It appears to reduce\n> precision.\n>\n\nAs the proposed commit message says we don't tend to say \"database user\nname\" elsewhere in the documentation (or the error messages shown) so the\nremoval of the word database doesn't actually change anything. We only\nneed to provide a qualifier for user name when it is not the database user\nname that is being referred to, or basically when it is the operating\nsystem user name.\n\nThe last hunk is the actual bug - the existing wording effectively reads:\n\n\"The default database name is the operating system user name.\"\n\nThat is incorrect. 
It happens to be the same value when no other user name\nis specified, otherwise whatever gets resolved is used for the database\nname (but it is still a \"default\" because the database name was not\nexplicitly specified).\n\nDavid J.\n",
"msg_date": "Wed, 1 Jun 2022 10:35:55 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assorted small doc patches"
},
{
    "msg_contents": "On Wed, Jun 1, 2022 at 7:05 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 31.05.22 22:12, David G. Johnston wrote:\n> > Anything I should be doing differently here to get a bit of\n> > reviewer/committer time on these? I'll add them to the commitfest for\n> > next month if needed but I'm seeing quick patches going in every week\n> > and the batch format done at the beginning of the month got processed\n> > through without issue.\n>\n\n\n> It might have been better to post these\n> separately.\n>\n>\nDoing so hasn't seemed to make a difference. Still not getting even a\nsingle committer on a single patch willing to either work with me to get it\ncommitted, or decide otherwise, despite my prompt responses on the few\ncomments that have been given.\n\nGiven the lack of feedback and my track record I don't think it\nunreasonable for someone to simply commit these without comment and let any\nlate naysayers voice their opinions after it hits the repo.\n\nIn any case, I've added these and more to the commitfest at this point so\nhopefully the changed mindset of committers when it comes to clearing out\nthe commitfest backlog will come into play in a couple of weeks and I might\nsee some action.\n\nDavid J.\n",
"msg_date": "Mon, 20 Jun 2022 10:59:04 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assorted small doc patches"
}
] |
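The savepoint-reuse semantics worked through earlier in the thread above (a reused name resolves to the most recently established savepoint; ROLLBACK TO keeps the savepoint it rolls back to, while RELEASE destroys it together with any later savepoints and keeps the changes) can be modelled outside the server. This is a hypothetical Python sketch of that name-resolution stack, not PostgreSQL code:

```python
class Transaction:
    """Toy model of savepoint name reuse inside one transaction."""

    def __init__(self):
        self.table = []
        self._savepoints = []  # stack of (name, snapshot of table)

    def insert(self, value):
        self.table.append(value)

    def savepoint(self, name):
        self._savepoints.append((name, list(self.table)))

    def _latest(self, name):
        # a reused name resolves to the most recently established savepoint
        for i in range(len(self._savepoints) - 1, -1, -1):
            if self._savepoints[i][0] == name:
                return i
        raise LookupError(name)

    def rollback_to(self, name):
        i = self._latest(name)
        self.table = list(self._savepoints[i][1])
        # savepoints established later are destroyed; this one survives,
        # so rolling back to the same name again is a no-op
        del self._savepoints[i + 1:]

    def release(self, name):
        # keeps the changes, destroys this savepoint and all later ones
        del self._savepoints[self._latest(name):]


tx = Transaction()
tx.insert(1); tx.savepoint("my_savepoint")
tx.insert(2); tx.savepoint("my_savepoint")
tx.insert(3)
tx.rollback_to("my_savepoint")   # back to the *second* savepoint
print(tx.table)                  # [1, 2]
tx.release("my_savepoint")       # drop the second savepoint
tx.rollback_to("my_savepoint")   # now the first one is reachable
print(tx.table)                  # [1]
```

This reproduces Alvaro's corrected example: the first ROLLBACK TO lands on the latest `my_savepoint`, and only after a RELEASE does the same name reach the earlier one.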
[
{
"msg_contents": "\nHi, hackers\n\nI try to use gdb to debug the jit-ed code, however, there isn't\nmuch useful information. I tried the way from [1], however, it\ndoesn't work for me (llvm-10 in my environment).\n\nHow can I do this debugging? Any suggestions? Thanks in advance!\n\n\n[1] https://releases.llvm.org/8.0.1/docs/DebuggingJITedCode.html\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 21 Apr 2022 12:46:08 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "How to debug JIT-ed code in PostgreSQL using GDB"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nWhile reading the latest master branch code, I found something that we\nmay be able to improve.\n\n1. The am table_relation_copy_for_cluster() interface.\n\n static inline void\n table_relation_copy_for_cluster(Relation OldTable, Relation NewTable,\n Relation OldIndex,\n bool use_sort,\n TransactionId OldestXmin,\n TransactionId *xid_cutoff,\n MultiXactId *multi_cutoff,\n double *num_tuples,\n double *tups_vacuumed,\n double *tups_recently_dead)\n\n- Should add a line for parameter num_tuples below \"Output parameters\n\" in comment\n- Look at the caller code, i.e. copy_table_data(). It does initialize\n*num_tuples,\n *tups_vacuumed and *tups_recently_dead at first. This does not seem\nto be a good\n API design or implementation. We'd better let the am api return the\nvalues without\n initializing from callers, right?\n\n2. For CTAS (create table as) with no data. It seems that we won't run\ninto intorel_receive().\n intorel_startup() could be run into for \"create table as t1\nexecute with no data\". So it looks\n like we do not need to judge for into->skipData in\nintorel_receive(). If we really want to\n check into->skipData we could add an assert check there or if I\nmissed some code paths\n in which we could be run into the code branch, we could instead\ncall below code in\n intorel_receive() to stop early, right?\n\n if (myState->into->skipData)\n return false;\n\nRegards,\nPaul\n\n\n",
"msg_date": "Thu, 21 Apr 2022 14:29:59 +0800",
"msg_from": "Paul Guo <paulguo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Two small issues related to table_relation_copy_for_cluster() and\n CTAS with no data."
}
] |
[
{
    "msg_contents": "Hi,\n\nwould it be possible to add Apache Arrow streaming format to the copy\nbackend + frontend?\nThe use case is fetching (or storing) tens or hundreds of millions of rows\nfor client side data science purposes (Pandas, Apache Arrow compute\nkernels, Parquet conversion etc). It looks like the serialization overhead\nwhen using the postgresql wire format can be significant.\n\nBest regards,\nAdam Lippai\n",
"msg_date": "Thu, 21 Apr 2022 10:41:17 -0400",
"msg_from": "Adam Lippai <adam@rigo.sk>",
"msg_from_op": true,
"msg_subject": "COPY TO STDOUT Apache Arrow support"
},
{
    "msg_contents": "Hi,\n\nThere are two bigger developments in this topic:\n\n 1. Pandas 2.0 is released and it can use Apache Arrow as a backend\n 2. Apache Arrow ADBC is released which standardizes the client API.\n Currently it uses the postgresql wire protocol underneath\n\nBest regards,\nAdam Lippai\n\nOn Thu, Apr 21, 2022 at 10:41 AM Adam Lippai <adam@rigo.sk> wrote:\n\n> Hi,\n>\n> would it be possible to add Apache Arrow streaming format to the copy\n> backend + frontend?\n> The use case is fetching (or storing) tens or hundreds of millions of rows\n> for client side data science purposes (Pandas, Apache Arrow compute\n> kernels, Parquet conversion etc). It looks like the serialization overhead\n> when using the postgresql wire format can be significant.\n>\n> Best regards,\n> Adam Lippai\n>\n",
"msg_date": "Thu, 13 Apr 2023 14:35:48 -0400",
"msg_from": "Adam Lippai <adam@rigo.sk>",
"msg_from_op": true,
"msg_subject": "Re: COPY TO STDOUT Apache Arrow support"
},
{
    "msg_contents": "Hi,\n\nThere is also a new Arrow C library (one .h and one .c file) which makes it\neasier to use it from the postgresql codebase.\n\nhttps://arrow.apache.org/blog/2023/03/07/nanoarrow-0.1.0-release/\nhttps://github.com/apache/arrow-nanoarrow/tree/main/dist\n\nBest regards,\nAdam Lippai\n\nOn Thu, Apr 13, 2023 at 2:35 PM Adam Lippai <adam@rigo.sk> wrote:\n\n> Hi,\n>\n> There are two bigger developments in this topic:\n>\n> 1. Pandas 2.0 is released and it can use Apache Arrow as a backend\n> 2. Apache Arrow ADBC is released which standardizes the client API.\n> Currently it uses the postgresql wire protocol underneath\n>\n> Best regards,\n> Adam Lippai\n>\n> On Thu, Apr 21, 2022 at 10:41 AM Adam Lippai <adam@rigo.sk> wrote:\n>\n>> Hi,\n>>\n>> would it be possible to add Apache Arrow streaming format to the copy\n>> backend + frontend?\n>> The use case is fetching (or storing) tens or hundreds of millions of\n>> rows for client side data science purposes (Pandas, Apache Arrow compute\n>> kernels, Parquet conversion etc). It looks like the serialization overhead\n>> when using the postgresql wire format can be significant.\n>>\n>> Best regards,\n>> Adam Lippai\n>>\n>\n",
"msg_date": "Tue, 2 May 2023 23:14:44 -0400",
"msg_from": "Adam Lippai <adam@rigo.sk>",
"msg_from_op": true,
"msg_subject": "Re: COPY TO STDOUT Apache Arrow support"
},
{
"msg_contents": "Hi\n\nOn Wed, May 3, 2023 at 5:15 AM Adam Lippai <adam@rigo.sk> wrote:\n\n> Hi,\n>\n> There is also a new Arrow C library (one .h and one .c file) which makes\n> it easier to use it from the postgresql codebase.\n>\n> https://arrow.apache.org/blog/2023/03/07/nanoarrow-0.1.0-release/\n> https://github.com/apache/arrow-nanoarrow/tree/main/dist\n>\n> Best regards,\n> Adam Lippai\n>\n\nWith 9fcdf2c787ac6da330165ea3cd50ec5155943a2b it can be implemented in\nextension\n\nRegards\n\nPavel\n\n\n> On Thu, Apr 13, 2023 at 2:35 PM Adam Lippai <adam@rigo.sk> wrote:\n>\n>> Hi,\n>>\n>> There are two bigger developments in this topic:\n>>\n>>    1. Pandas 2.0 is released and it can use Apache Arrow as a backend\n>>    2. Apache Arrow ADBC is released which standardizes the client API.\n>>    Currently it uses the postgresql wire protocol underneath\n>>\n>> Best regards,\n>> Adam Lippai\n>>\n>> On Thu, Apr 21, 2022 at 10:41 AM Adam Lippai <adam@rigo.sk> wrote:\n>>\n>>> Hi,\n>>>\n>>> would it be possible to add Apache Arrow streaming format to the copy\n>>> backend + frontend?\n>>> The use case is fetching (or storing) tens or hundreds of millions of\n>>> rows for client side data science purposes (Pandas, Apache Arrow compute\n>>> kernels, Parquet conversion etc). It looks like the serialization overhead\n>>> when using the postgresql wire format can be significant.\n>>>\n>>> Best regards,\n>>> Adam Lippai\n>>>\n>>",
"msg_date": "Wed, 3 May 2023 06:01:27 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY TO STDOUT Apache Arrow support"
}
] |
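The "serialization overhead" point raised in this thread can be illustrated with a rough, self-contained sketch. This is not the real wire protocol or Arrow IPC code — it only models the per-field framing that PostgreSQL's binary COPY format uses (a 16-bit field count per row plus a 32-bit length prefix per field, ignoring the file header and trailer) against a flat Arrow-style column buffer:

```python
import struct

def copy_binary_rows(values):
    """Frame int32 values row by row, mimicking the per-tuple framing of
    PostgreSQL's binary COPY format: a 16-bit field count, then for each
    field a 32-bit length prefix followed by the field data."""
    out = bytearray()
    for v in values:
        out += struct.pack("!h", 1)   # field count: one column
        out += struct.pack("!i", 4)   # field length: int4 is 4 bytes
        out += struct.pack("!i", v)   # field data
    return bytes(out)

def arrow_style_column(values):
    """A flat contiguous buffer of int32 values, roughly the data buffer
    of an Arrow fixed-width array (validity bitmap and IPC framing
    ignored)."""
    return struct.pack(f"!{len(values)}i", *values)

values = list(range(100_000))
row_bytes = len(copy_binary_rows(values))    # 10 bytes per value
col_bytes = len(arrow_style_column(values))  # 4 bytes per value
print(row_bytes, col_bytes, row_bytes / col_bytes)  # 1000000 400000 2.5
```

For a single int4 column the framing alone costs 10 bytes per value against 4 — a 2.5x difference before any parsing work — which is consistent with the intuition above that a columnar format like Arrow can cut both bytes on the wire and client-side deserialization cost.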
[
{
"msg_contents": "git.postgresql.org Git - postgresql.git/commit\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=d6f96ed94e73052f99a2e545ed17a8b2fdc1fb8a>\ngit.postgresql.org Git - postgresql.git/blob -\nsrc/test/regress/expected/foreign_key.out\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/expected/foreign_key.out;h=4c5274983d48b21ff7d4eaee192672d05f9b7c08;hb=d6f96ed94e73052f99a2e545ed17a8b2fdc1fb8a>\n\n> -- could fail with only 2 changes to make, if row was already updated\n> BEGIN;\n> UPDATE tasks set id=id WHERE id=2;\n> SELECT * FROM tasks;\n>  id | owner | worker | checked_by\n> ----+-------+--------+------------\n>   1 |     1 |        |\n>   3 |       |        |\n>   2 |     2 |      2 |\n> (3 rows)\n>\n> DELETE FROM users WHERE id = 2;\n> SELECT * FROM tasks;\n>  id | owner | worker | checked_by\n> ----+-------+--------+------------\n>   1 |     1 |        |\n>   3 |       |        |\n>   2 |       |        |\n> (3 rows)\n>\n> COMMIT;\n\nI don't understand what the comment *-- could fail with only 2 changes to\nmake, if row was already updated* means, since right now the code block\ndoesn't yield any error.",
"msg_date": "Fri, 22 Apr 2022 10:36:02 +0530",
"msg_from": "alias <postgres.rocks@gmail.com>",
"msg_from_op": true,
"msg_subject": "Allow specifying column list for foreign key ON DELETE SET:\n don't understand the comment in src/test/regress/sql/foreign_key.sql"
}
] |
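For context on the feature this question is about: commit d6f96ed94e7 extends the SET NULL and SET DEFAULT referential actions with an optional column list, so that deleting a referenced row can clear only part of a composite foreign key. A hedged sketch of the syntax (the table and column names here are illustrative, not taken from the regression test quoted above):

```sql
-- A composite foreign key where deleting the referenced user should
-- clear only author_id; tenant_id keeps its value so the row still
-- belongs to the right tenant.
CREATE TABLE tenants (tenant_id int PRIMARY KEY);
CREATE TABLE users (
    tenant_id int REFERENCES tenants ON DELETE CASCADE,
    user_id   int NOT NULL,
    PRIMARY KEY (tenant_id, user_id)
);
CREATE TABLE posts (
    tenant_id int REFERENCES tenants ON DELETE CASCADE,
    post_id   int NOT NULL,
    author_id int,
    PRIMARY KEY (tenant_id, post_id),
    FOREIGN KEY (tenant_id, author_id) REFERENCES users
        ON DELETE SET NULL (author_id)  -- column list limits what is nulled
);
```

Without the column list, ON DELETE SET NULL would null both referencing columns, tenant_id included.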
[
{
"msg_contents": "Hi hackers,\n\nThis is a follow-up thread to `RFC: compression dictionaries for JSONB`\n[1]. I would like to share my current progress in order to get early\nfeedback. The patch is currently in a draft state but implements the basic\nfunctionality. I did my best to account for all the great feedback I\npreviously got from Alvaro and Matthias.\n\nUsage example:\n\n```\nCREATE TYPE mydict AS DICTIONARY OF jsonb ('aaa', 'bbb');\n\nSELECT '{\"aaa\":\"bbb\"}' :: mydict;\n mydict\n----------------\n {\"aaa\": \"bbb\"}\n\nSELECT ('{\"aaa\":\"bbb\"}' :: mydict) -> 'aaa';\n ?column?\n----------\n \"bbb\"\n```\n\nHere `mydict` works as a transparent replacement for `jsonb`. However, its\ninternal representation differs. The provided dictionary entries ('aaa',\n'bbb') are stored in the new catalog table:\n\n```\nSELECT * FROM pg_dict;\n oid | dicttypid | dictentry\n-------+-----------+-----------\n 39476 | 39475 | aaa\n 39477 | 39475 | bbb\n(2 rows)\n```\n\nWhen `mydict` sees 'aaa' in the document, it replaces it with the\ncorresponding code, in this case - 39476. For more details regarding the\ncompression algorithm and choosen compromises please see the comments in\nthe patch.\n\nIn pg_type `mydict` has typtype = TYPTYPE_DICT. It works the same way as\nTYPTYPE_BASE with only difference: corresponding `<type>_in`\n(pg_type.typinput) and `<another-type>_<type>` (pg_cast.castfunc)\nprocedures receive the dictionary Oid as a `typmod` argument. This way the\nprocedures can distinguish `mydict1` from `mydict2` and use the proper\ncompression dictionary.\n\nThe approach with alternative `typmod` role is arguably a bit hacky, but it\nwas the less invasive way to implement the feature I've found. 
I'm open to\nalternative suggestions.\n\nCurrent limitations (todo):\n- ALTER TYPE is not implemented\n- Tests and documentation are missing\n- Autocomplete is missing\n\nFuture work (out of scope of this patch):\n- Support types other than JSONB: TEXT, XML, etc\n- Automatically updated dictionaries, e.g. during VACUUM\n- Alternative compression algorithms. Note that this will not require any\nfurther changes in the catalog, only the values we write to pg_type and\npg_cast will differ.\n\nOpen questions:\n- Dictionary entries are currently stored as NameData, the same type that\nis used for enums. Are we OK with the accompanying limitations? Any\nalternative suggestions?\n- All in all, am I moving the right direction?\n\nYour feedback is very much welcomed!\n\n[1]:\nhttps://postgr.es/m/CAJ7c6TPx7N-bVw0dZ1ASCDQKZJHhBYkT6w4HV1LzfS%2BUUTUfmA%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 22 Apr 2022 11:30:01 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "On Fri, Apr 22, 2022 at 1:30 AM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi hackers,\n>\n> This is a follow-up thread to `RFC: compression dictionaries for JSONB`\n> [1]. I would like to share my current progress in order to get early\n> feedback. The patch is currently in a draft state but implements the basic\n> functionality. I did my best to account for all the great feedback I\n> previously got from Alvaro and Matthias.\n>\n> Usage example:\n>\n> ```\n> CREATE TYPE mydict AS DICTIONARY OF jsonb ('aaa', 'bbb');\n>\n> SELECT '{\"aaa\":\"bbb\"}' :: mydict;\n> mydict\n> ----------------\n> {\"aaa\": \"bbb\"}\n>\n> SELECT ('{\"aaa\":\"bbb\"}' :: mydict) -> 'aaa';\n> ?column?\n> ----------\n> \"bbb\"\n> ```\n>\n> Here `mydict` works as a transparent replacement for `jsonb`. However, its\n> internal representation differs. The provided dictionary entries ('aaa',\n> 'bbb') are stored in the new catalog table:\n>\n> ```\n> SELECT * FROM pg_dict;\n> oid | dicttypid | dictentry\n> -------+-----------+-----------\n> 39476 | 39475 | aaa\n> 39477 | 39475 | bbb\n> (2 rows)\n> ```\n>\n> When `mydict` sees 'aaa' in the document, it replaces it with the\n> corresponding code, in this case - 39476. For more details regarding the\n> compression algorithm and choosen compromises please see the comments in\n> the patch.\n>\n> In pg_type `mydict` has typtype = TYPTYPE_DICT. It works the same way as\n> TYPTYPE_BASE with only difference: corresponding `<type>_in`\n> (pg_type.typinput) and `<another-type>_<type>` (pg_cast.castfunc)\n> procedures receive the dictionary Oid as a `typmod` argument. This way the\n> procedures can distinguish `mydict1` from `mydict2` and use the proper\n> compression dictionary.\n>\n> The approach with alternative `typmod` role is arguably a bit hacky, but\n> it was the less invasive way to implement the feature I've found. 
I'm open\n> to alternative suggestions.\n>\n> Current limitations (todo):\n> - ALTER TYPE is not implemented\n> - Tests and documentation are missing\n> - Autocomplete is missing\n>\n> Future work (out of scope of this patch):\n> - Support types other than JSONB: TEXT, XML, etc\n> - Automatically updated dictionaries, e.g. during VACUUM\n> - Alternative compression algorithms. Note that this will not require any\n> further changes in the catalog, only the values we write to pg_type and\n> pg_cast will differ.\n>\n> Open questions:\n> - Dictionary entries are currently stored as NameData, the same type that\n> is used for enums. Are we OK with the accompanying limitations? Any\n> alternative suggestions?\n> - All in all, am I moving the right direction?\n>\n> Your feedback is very much welcomed!\n>\n> [1]:\n> https://postgr.es/m/CAJ7c6TPx7N-bVw0dZ1ASCDQKZJHhBYkT6w4HV1LzfS%2BUUTUfmA%40mail.gmail.com\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\nHi,\nFor src/backend/catalog/pg_dict.c, please add license header.\n\n+ elog(ERROR, \"skipbytes > decoded_size - outoffset\");\n\nInclude the values for skipbytes, decoded_size and outoffset.\n\nCheers",
"msg_date": "Fri, 22 Apr 2022 08:21:03 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi Zhihong,\n\nMany thanks for your feedback!\n\n> For src/backend/catalog/pg_dict.c, please add license header.\n\nFixed.\n\n> + elog(ERROR, \"skipbytes > decoded_size - outoffset\");\n>\n> Include the values for skipbytes, decoded_size and outoffset.\n\nIn fact, this code should never be executed, and if somehow it will\nbe, this information will not help us much to debug the issue. I made\ncorresponding changes to the error message and added the comments.\n\nHere it the 2nd version of the patch:\n\n- Includes changes named above\n- Fixes a warning reported by cfbot\n- Fixes some FIXME's\n- The path includes some simple tests now\n- A proper commit message was added\n\nPlease note that this is still a draft. Feedback is welcome.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 25 Apr 2022 16:15:55 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi hackers,\n\n> Here it the 2nd version of the patch:\n>\n> - Includes changes named above\n> - Fixes a warning reported by cfbot\n> - Fixes some FIXME's\n> - The path includes some simple tests now\n> - A proper commit message was added\n\nHere is the rebased version of the patch. Changes compared to v2 are minimal.\n\n> Open questions:\n> - Dictionary entries are currently stored as NameData, the same type that is\n> used for enums. Are we OK with the accompanying limitations? Any alternative\n> suggestions?\n> - All in all, am I moving the right direction?\n\nI would like to receive a little bit more feedback before investing more time\ninto this effort. This will allow me, if necessary, to alter the overall design\nmore easily.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 13 May 2022 11:08:53 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 1:44 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> This is a follow-up thread to `RFC: compression dictionaries for JSONB` [1]. I would like to share my current progress in order to get early feedback. The patch is currently in a draft state but implements the basic functionality. I did my best to account for all the great feedback I previously got from Alvaro and Matthias.\n\nI'm coming up to speed with this set of threads -- the following is\nnot a complete review by any means, and please let me know if I've\nmissed some of the history.\n\n> SELECT * FROM pg_dict;\n> oid | dicttypid | dictentry\n> -------+-----------+-----------\n> 39476 | 39475 | aaa\n> 39477 | 39475 | bbb\n> (2 rows)\n\nI saw there was some previous discussion about dictionary size. It\nlooks like this approach would put all dictionaries into a shared OID\npool. Since I don't know what a \"standard\" use case is, is there any\nrisk of OID exhaustion for larger deployments with many dictionaries?\nOr is 2**32 so comparatively large that it's not really a serious\nconcern?\n\n> When `mydict` sees 'aaa' in the document, it replaces it with the corresponding code, in this case - 39476. For more details regarding the compression algorithm and choosen compromises please see the comments in the patch.\n\nI see the algorithm description, but I'm curious to know whether it's\nbased on some other existing compression scheme, for the sake of\ncomparison. It seems like it shares similarities with the Snappy\nscheme?\n\nCould you talk more about what the expected ratios and runtime\ncharacteristics are? Best I can see is that compression runtime is\nsomething like O(n * e * log d) where n is the length of the input, e\nis the maximum length of a dictionary entry, and d is the number of\nentries in the dictionary. 
Since e and d are constant for a given\nstatic dictionary, how well the dictionary is constructed is\npresumably important.\n\n> In pg_type `mydict` has typtype = TYPTYPE_DICT. It works the same way as TYPTYPE_BASE with only difference: corresponding `<type>_in` (pg_type.typinput) and `<another-type>_<type>` (pg_cast.castfunc) procedures receive the dictionary Oid as a `typmod` argument. This way the procedures can distinguish `mydict1` from `mydict2` and use the proper compression dictionary.\n>\n> The approach with alternative `typmod` role is arguably a bit hacky, but it was the less invasive way to implement the feature I've found. I'm open to alternative suggestions.\n\nHaven't looked at this closely enough to develop an opinion yet.\n\n> Current limitations (todo):\n> - ALTER TYPE is not implemented\n\nThat reminds me. How do people expect to generate a \"good\" dictionary\nin practice? Would they somehow get the JSONB representations out of\nPostgres and run a training program over the blobs? I see some\nreference to training functions in the prior threads but don't see any\nbreadcrumbs in the code.\n\n> - Alternative compression algorithms. Note that this will not require any further changes in the catalog, only the values we write to pg_type and pg_cast will differ.\n\nCould you expand on this? I.e. why would alternative algorithms not\nneed catalog changes? It seems like the only schemes that could be\nused with pg_catalog.pg_dict are those that expect to map a byte\nstring to a number. Is that general enough to cover other standard\ncompression algorithms?\n\n> Open questions:\n> - Dictionary entries are currently stored as NameData, the same type that is used for enums. Are we OK with the accompanying limitations? Any alternative suggestions?\n\nIt does feel a little weird to have a hard limit on the entry length,\nsince that also limits the compression ratio. 
But it also limits the\ncompression runtime, so maybe it's a worthwhile tradeoff.\n\nIt also seems strange to use a dictionary of C strings to compress\nbinary data; wouldn't we want to be able to compress zero bytes too?\n\nHope this helps,\n--Jacob\n\n\n",
"msg_date": "Wed, 1 Jun 2022 14:29:06 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi Jacob,\n\nMany thanks for your feedback!\n\n> I saw there was some previous discussion about dictionary size. It\n> looks like this approach would put all dictionaries into a shared OID\n> pool. Since I don't know what a \"standard\" use case is, is there any\n> risk of OID exhaustion for larger deployments with many dictionaries?\n> Or is 2**32 so comparatively large that it's not really a serious\n> concern?\n\nI agree, this is a drawback of the current implementation. To be honest,\nI simply followed the example of how ENUMs are implemented. I'm not 100% sure\nif we should be worried here (apparently, freed OIDs are reused). I'm OK with\nusing a separate sequence if someone could second this. This is the first time\nI'm altering the catalog so I'm not certain what the best practices are.\n\n> I see the algorithm description, but I'm curious to know whether it's\n> based on some other existing compression scheme, for the sake of\n> comparison. It seems like it shares similarities with the Snappy\n> scheme?\n>\n> Could you talk more about what the expected ratios and runtime\n> characteristics are? Best I can see is that compression runtime is\n> something like O(n * e * log d) where n is the length of the input, e\n> is the maximum length of a dictionary entry, and d is the number of\n> entries in the dictionary. Since e and d are constant for a given\n> static dictionary, how well the dictionary is constructed is\n> presumably important.\n\nThe algorithm is almost identical to the one I used in ZSON extension [1]\nexcept the fact that ZSON uses 16-bit codes. In docs/benchmark.md you will find\napproximate ratios to expect, etc. The reasons why this particular algorithm\nwas chosen are:\n\n1. It was extensively tested in the past and seem to work OK for existing\n ZSON users.\n2. It doesn't use any knowledge regarding the data structure and thus can be\n reused for TEXT/XML/etc as-is.\n3. 
Previously we agreed that at some point users will be able to change the\n algorithm (the same way as they can do it for TOAST now) so which algorithm\n will be used in the first implementation is not that important. I simply\n choose the already existing one.\n\n> > Current limitations (todo):\n> > - ALTER TYPE is not implemented\n>\n> That reminds me. How do people expect to generate a \"good\" dictionary\n> in practice? Would they somehow get the JSONB representations out of\n> Postgres and run a training program over the blobs? I see some\n> reference to training functions in the prior threads but don't see any\n> breadcrumbs in the code.\n\nSo far we agreed that in the first implementation it will be done manually.\nIn the future it will be possible to update the dictionaries automatically\nduring VACUUM. The idea of something similar to zson_learn() procedure, as\nI recall, didn't get much support, so we probably will not have it, or at least\nit is not a priority.\n\n> > - Alternative compression algorithms. Note that this will not require any\n> > further changes in the catalog, only the values we write to pg_type and\n> > pg_cast will differ.\n>\n> Could you expand on this? I.e. why would alternative algorithms not\n> need catalog changes? It seems like the only schemes that could be\n> used with pg_catalog.pg_dict are those that expect to map a byte\n> string to a number. Is that general enough to cover other standard\n> compression algorithms?\n\nSure. 
When creating a new dictionary pg_type and pg_cast are modified like this:\n\n =# CREATE TYPE mydict AS DICTIONARY OF JSONB ('abcdef', 'ghijkl');\nCREATE TYPE\n =# SELECT * FROM pg_type WHERE typname = 'mydict';\n-[ RECORD 1 ]--+---------------\noid | 16397\ntypname | mydict\ntypnamespace | 2200\n...\ntyparray | 16396\ntypinput | dictionary_in\ntypoutput | dictionary_out\n...\n\n=# SELECT c.*, p.proname FROM pg_cast AS c\n LEFT JOIN pg_proc AS p\n ON p.oid = c.castfunc\n WHERE c.castsource = 16397 or c.casttarget = 16397;\n-[ RECORD 1 ]-----------------\noid | 16400\ncastsource | 3802\ncasttarget | 16397\ncastfunc | 9866\ncastcontext | a\ncastmethod | f\nproname | jsonb_dictionary\n-[ RECORD 2 ]-----------------\noid | 16401\ncastsource | 16397\ncasttarget | 3802\ncastfunc | 9867\ncastcontext | i\ncastmethod | f\nproname | dictionary_jsonb\n-[ RECORD 3 ]-----------------\noid | 16402\ncastsource | 16397\ncasttarget | 17\ncastfunc | 9868\ncastcontext | e\ncastmethod | f\nproname | dictionary_bytea\n\nIn order to add a new algorithm you simply need to provide alternatives\nto dictionary_in / dictionary_out / jsonb_dictionary / dictionary_jsonb and\nspecify them in the catalog instead. The catalog schema will remain the same.\n\n> It also seems strange to use a dictionary of C strings to compress\n> binary data; wouldn't we want to be able to compress zero bytes too?\n\nThat's a good point. Again, here I simply followed the example of the ENUMs\nimplementation. Since compression dictionaries are intended to be used with\ntext-like types such as JSONB, (and also JSON, TEXT and XML in the future),\nchoosing Name type seemed to be a reasonable compromise. Dictionary entries are\nmost likely going to store JSON keys, common words used in the TEXT, etc.\nHowever, I'm fine with any alternative scheme if somebody experienced with the\nPostgreSQL catalog could second this.\n\n[1]: https://github.com/afiskon/zson\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 2 Jun 2022 16:30:20 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "On Thu, Jun 2, 2022 at 6:30 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> > I saw there was some previous discussion about dictionary size. It\n> > looks like this approach would put all dictionaries into a shared OID\n> > pool. Since I don't know what a \"standard\" use case is, is there any\n> > risk of OID exhaustion for larger deployments with many dictionaries?\n> > Or is 2**32 so comparatively large that it's not really a serious\n> > concern?\n>\n> I agree, this is a drawback of the current implementation. To be honest,\n> I simply followed the example of how ENUMs are implemented. I'm not 100% sure\n> if we should be worried here (apparently, freed OIDs are reused). I'm OK with\n> using a separate sequence if someone could second this. This is the first time\n> I'm altering the catalog so I'm not certain what the best practices are.\n\nI think reuse should be fine (if a bit slower, but offhand that\ndoesn't seem like an important bottleneck). Users may be unamused to\nfind that one large dictionary has prevented the creation of any new\nentries in other dictionaries, though. But again, I have no intuition\nfor the size of a production-grade compression dictionary, and maybe\nit's silly to assume that normal use would ever reach the OID limit.\n\n> > I see the algorithm description, but I'm curious to know whether it's\n> > based on some other existing compression scheme, for the sake of\n> > comparison. It seems like it shares similarities with the Snappy\n> > scheme?\n> >\n> > Could you talk more about what the expected ratios and runtime\n> > characteristics are? Best I can see is that compression runtime is\n> > something like O(n * e * log d) where n is the length of the input, e\n> > is the maximum length of a dictionary entry, and d is the number of\n> > entries in the dictionary. 
Since e and d are constant for a given\n> > static dictionary, how well the dictionary is constructed is\n> > presumably important.\n>\n> The algorithm is almost identical to the one I used in ZSON extension [1]\n> except the fact that ZSON uses 16-bit codes. In docs/benchmark.md you will find\n> approximate ratios to expect, etc.\n\nThat's assuming a machine-trained dictionary, though, which isn't part\nof the proposal now. Is there a performance/ratio sample for a \"best\npractice\" hand-written dictionary?\n\n> > That reminds me. How do people expect to generate a \"good\" dictionary\n> > in practice? Would they somehow get the JSONB representations out of\n> > Postgres and run a training program over the blobs? I see some\n> > reference to training functions in the prior threads but don't see any\n> > breadcrumbs in the code.\n>\n> So far we agreed that in the first implementation it will be done manually.\n> In the future it will be possible to update the dictionaries automatically\n> during VACUUM. The idea of something similar to zson_learn() procedure, as\n> I recall, didn't get much support, so we probably will not have it, or at least\n> it is not a priority.\n\nHm... I'm skeptical that a manually-constructed set of compression\ndictionaries would be maintainable over time or at scale. But I'm not\nthe target audience so I will let others weigh in here instead.\n\n> > > - Alternative compression algorithms. Note that this will not require any\n> > > further changes in the catalog, only the values we write to pg_type and\n> > > pg_cast will differ.\n> >\n> > Could you expand on this? I.e. why would alternative algorithms not\n> > need catalog changes? It seems like the only schemes that could be\n> > used with pg_catalog.pg_dict are those that expect to map a byte\n> > string to a number. Is that general enough to cover other standard\n> > compression algorithms?\n>\n> Sure. 
When creating a new dictionary pg_type and pg_cast are modified like this:\n>\n> =# CREATE TYPE mydict AS DICTIONARY OF JSONB ('abcdef', 'ghijkl');\n> CREATE TYPE\n> =# SELECT * FROM pg_type WHERE typname = 'mydict';\n> -[ RECORD 1 ]--+---------------\n> oid | 16397\n> typname | mydict\n> typnamespace | 2200\n> ...\n> typarray | 16396\n> typinput | dictionary_in\n> typoutput | dictionary_out\n> ...\n>\n> =# SELECT c.*, p.proname FROM pg_cast AS c\n> LEFT JOIN pg_proc AS p\n> ON p.oid = c.castfunc\n> WHERE c.castsource = 16397 or c.casttarget = 16397;\n> -[ RECORD 1 ]-----------------\n> oid | 16400\n> castsource | 3802\n> casttarget | 16397\n> castfunc | 9866\n> castcontext | a\n> castmethod | f\n> proname | jsonb_dictionary\n> -[ RECORD 2 ]-----------------\n> oid | 16401\n> castsource | 16397\n> casttarget | 3802\n> castfunc | 9867\n> castcontext | i\n> castmethod | f\n> proname | dictionary_jsonb\n> -[ RECORD 3 ]-----------------\n> oid | 16402\n> castsource | 16397\n> casttarget | 17\n> castfunc | 9868\n> castcontext | e\n> castmethod | f\n> proname | dictionary_bytea\n>\n> In order to add a new algorithm you simply need to provide alternatives\n> to dictionary_in / dictionary_out / jsonb_dictionary / dictionary_jsonb and\n> specify them in the catalog instead. The catalog schema will remain the same.\n\nThe catalog schemas for pg_type and pg_cast would. But would the\ncurrent pg_dict schema be generally applicable to other cross-table\ncompression schemes? It seems narrowly tailored -- which is not a\nproblem for a proof of concept patch; I'm just not seeing how other\nstandard compression schemes might make use of an OID-to-NameData map.\nMy naive understanding is that they have their own dictionary\nstructures.\n\n(You could of course hack in any general structure you needed by\ntreating pg_dict like a list of chunks, but that seems wasteful and\nslow, especially given the 63-byte chunk limit, and even more likely\nto exhaust the shared OID pool. 
I think LZMA dictionaries can be huge,\nas one example.)\n\n> > It also seems strange to use a dictionary of C strings to compress\n> > binary data; wouldn't we want to be able to compress zero bytes too?\n>\n> That's a good point. Again, here I simply followed the example of the ENUMs\n> implementation. Since compression dictionaries are intended to be used with\n> text-like types such as JSONB, (and also JSON, TEXT and XML in the future),\n> choosing Name type seemed to be a reasonable compromise. Dictionary entries are\n> most likely going to store JSON keys, common words used in the TEXT, etc.\n> However, I'm fine with any alternative scheme if somebody experienced with the\n> PostgreSQL catalog could second this.\n\nI think Matthias back in the first thread was hoping for the ability\nto compress duplicated JSON objects as well; it seems like that\nwouldn't be possible with the current scheme. (Again I have no\nintuition for which use cases are must-haves.) I'm wondering if\npg_largeobject would be an alternative catalog to draw inspiration\nfrom... specifically the use of bytea as the stored value, and of a\ntwo-column primary key.\n\nBut take all my suggestions with a dash of salt :D I'm new to this space.\n\nThanks!\n--Jacob\n\n\n",
"msg_date": "Thu, 2 Jun 2022 15:21:26 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "On Fri, 13 May 2022 at 10:09, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi hackers,\n>\n> > Here it the 2nd version of the patch:\n> >\n> > - Includes changes named above\n> > - Fixes a warning reported by cfbot\n> > - Fixes some FIXME's\n> > - The path includes some simple tests now\n> > - A proper commit message was added\n>\n> Here is the rebased version of the patch. Changes compared to v2 are minimal.\n>\n> > Open questions:\n> > - Dictionary entries are currently stored as NameData, the same type that is\n> > used for enums. Are we OK with the accompanying limitations? Any alternative\n> > suggestions?\n> > - All in all, am I moving the right direction?\n>\n> I would like to receive a little bit more feedback before investing more time\n> into this effort. This will allow me, if necessary, to alter the overall design\n> more easily.\n\nSorry for the delayed reply. After the last thread, I've put some time\nin looking into the \"pluggable toaster\" patches, which appears to want\nto provide related things: Compressing typed data using an extensible\nAPI. I think that that API is a better approach to increase the\ncompression ratio for JSONB.\n\nThat does not mean that I think that the basis of this patch is\nincorrect, just that the current API (through new entries in the\npg_type and pg_casts catalogs) is not the right direction if/when\nwe're going to have a pluggable toaster API. The bulk of the patch\nshould still be usable, but I think that the way it interfaces with\nthe CREATE TABLE (column ...) APIs would need reworking to build on\ntop of the api's of the \"pluggable toaster\" patches (so, creating\ntoasters instead of types). I think that would allow for an overall\nbetter user experience and better performance due to decreased need\nfor fully decompressed type casting.\n\nKind regards,\n\nMatthias van de Meent.\n\n\n",
"msg_date": "Sun, 5 Jun 2022 21:51:47 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi Matthias,\n\n> The bulk of the patch\n> should still be usable, but I think that the way it interfaces with\n> the CREATE TABLE (column ...) APIs would need reworking to build on\n> top of the api's of the \"pluggable toaster\" patches (so, creating\n> toasters instead of types). I think that would allow for an overall\n> better user experience and better performance due to decreased need\n> for fully decompressed type casting.\n\nMany thanks for the feedback.\n\nThe \"pluggable TOASTer\" patch looks very interesting indeed. I'm\ncurrently trying to make heads and tails of it and trying to figure\nout if it can be used as a base for compression dictionaries,\nespecially for implementing the partial decompression. Hopefully I\nwill be able to contribute to it and to the dependent patch [1] in the\nupcoming CF, at least as a tester/reviewer. Focusing our efforts on\n[1] for now seems to be a good strategy.\n\nMy current impression of your idea is somewhat mixed at this point though.\n\nTeodor's goal is to allow creating _extensions_ that implement\nalternative TOAST strategies, which use alternative compression\nalgorithms and/or use the knowledge of the binary representation of\nthe particular type. For sure, this would be a nice thing to have.\nHowever, during the discussion of the \"compression dictionaries\" RFC\nthe consensus was reached that the community wants to see it as a\n_built_in_ functionality rather than an extension. Otherwise we could\nsimply add ZSON to /contrib/ as it was originally proposed.\n\nSo if we are going to keep \"compression dictionaries\" a built-in\nfunctionality, putting artificial constraints on its particular\nimplementation, or adding artificial dependencies of two rather\ncomplicated patches, is arguably a controversial idea. 
Especially\nconsidering the fact that it was shown that the feature can be\nimplemented without these dependencies, in a very non-invasive way.\n\nThese are just my initial thoughts I would like to share though. I may\nchange my mind after diving deeper into a \"pluggable TOASTer\" patch.\n\nI cc:'ed Teodor in case he would like to share his insights on the topic.\n\n[1]: https://commitfest.postgresql.org/38/3479/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 15 Jun 2022 15:38:23 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi Matthias,\n\n> These are just my initial thoughts I would like to share though. I may\n> change my mind after diving deeper into a \"pluggable TOASTer\" patch.\n\nI familiarized myself with the \"pluggable TOASTer\" thread and joined\nthe discussion [1].\n\nI'm afraid so far I failed to understand your suggestion to base\n\"compression dictionaries\" patch on \"pluggable TOASTer\", considering\nthe fair amount of push-back it got from the community, not to mention\na somewhat raw state of the patchset. It's true that Teodor and I are\ntrying to address similar problems. This however doesn't mean that\nthere should be a dependency between these patches.\n\nAlso, I completely agree with Tomas [2]:\n\n> My main point is that we should not be making too many radical\n> changes at once - it makes it much harder to actually get anything done.\n\nIMO the patches don't depend on each other but rather complement each\nother. The user can switch between different TOAST methods, and the\ncompression dictionaries can work on top of different TOAST methods.\nAlthough there is also a high-level idea (according to the\npresentations) to share common data between different TOASTed values,\nsimilarly to what compression dictionaries do, by looking at the\ncurrent feedback and considering the overall complexity and the amount\nof open questions (e.g. interaction with different TableAMs, etc), I\nseriously doubt that this particular part of \"pluggable TOASTer\" will\nend-up in the core.\n\n[1]: https://postgr.es/m/CAJ7c6TOMPiRs-CZ%3DA9hyzxOyqHhKXxLD8qCF5%2BGJuLjQBzOX4A%40mail.gmail.com\n[2]: https://postgr.es/m/9ef14537-b33b-c63a-9938-e2b413db0a4c%40enterprisedb.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 17 Jun 2022 18:04:11 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "On Thu, 2 Jun 2022 at 14:30, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n\n> > I saw there was some previous discussion about dictionary size. It\n> > looks like this approach would put all dictionaries into a shared OID\n> > pool. Since I don't know what a \"standard\" use case is, is there any\n> > risk of OID exhaustion for larger deployments with many dictionaries?\n> > Or is 2**32 so comparatively large that it's not really a serious\n> > concern?\n>\n> I agree, this is a drawback of the current implementation. To be honest,\n> I simply followed the example of how ENUMs are implemented. I'm not 100% sure\n> if we should be worried here (apparently, freed OIDs are reused). I'm OK with\n> using a separate sequence if someone could second this. This is the first time\n> I'm altering the catalog so I'm not certain what the best practices are.\n\nThe goal of this patch is great, thank you for working on this (and ZSON).\n\nThe approach chosen has a few downsides that I'm not happy with yet.\n\n* Assigning OIDs for each dictionary entry is not a great idea. I\ndon't see why you would need to do that; just assign monotonically\nascending keys for each dictionary, as we do for AttrNums.\n\n* There is a limit on SQL statement size, which will effectively limit\nthe size of dictionaries, but the examples are unrealistically small,\nso this isn't clear as a limitation, but it would be in practice. It\nwould be better to specify a filename, which can be read in when the\nDDL executes. This can be put into pg_dump output in a similar way to\nthe COPY data for a table is, so once read in it stays static.\n\n* The dictionaries are only allowed for certain datatypes. This should\nnot be specifically limited by this patch, i.e. user defined types\nshould not be rejected.\n\n* Dictionaries have no versioning. Any list of data items changes over\ntime, so how do we express that? 
Enums were also invented as static\nlists originally, then had to be modified later to accommodate\nadditions and revisions, so let's think about that now, even if we\ndon't add all of the commands in one go. Currently we would have to\ncreate a whole new dictionary if even one word changes. Ideally, we\nwant the dictionary to have a top-level name and then have multiple\nversions over time. Let's agree how we are going to do these things, so\nwe can make sure the design and code allow for those future\nenhancements.\ni.e. how will we do ALTER TABLE ... UPGRADE DICTIONARY without causing\na table rewrite?\n\n* Does the order of entries in the dictionary allow us to express a\npriority? i.e. to allow Huffman coding.\n\nThanks for your efforts - this is a very important patch.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 23 Jun 2022 16:48:48 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi Simon,\n\nMany thanks for your feedback!\n\nI'm going to submit an updated version of the patch in a bit. I just\nwanted to reply to some of your questions / comments.\n\n> Dictionaries have no versioning. [...]\n\n> Does the order of entries in the dictionary allow us to express a priority? i.e. to allow Huffman coding. [...]\n\nThis is something we discussed in the RFC thread. I got an impression\nthat the consensus was reached:\n\n1. To simply use 32-bit codes in the compressed documents, instead of\n16-bit ones as it was done in ZSON;\n2. Not to use any sort of variable-length coding;\n3. Not to use dictionary versions. New codes can be added to the\nexisting dictionaries by executing ALTER TYPE mydict ADD ENTRY. (This\nalso may answer your comment regarding a limit on SQL statement size.)\n4. The compression scheme can be altered in the future if needed.\nEvery compressed document stores algorithm_version (1 byte).\n\nDoes this plan of action sound OK to you? At this point it is not too\ndifficult to make design changes.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 28 Jun 2022 15:37:14 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi Alexander,\n\nOn Fri, 17 Jun 2022 at 17:04, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>> These are just my initial thoughts I would like to share though. I may\n>> change my mind after diving deeper into a \"pluggable TOASTer\" patch.\n>\n> I familiarized myself with the \"pluggable TOASTer\" thread and joined\n> the discussion [1].\n>\n> I'm afraid so far I failed to understand your suggestion to base\n> \"compression dictionaries\" patch on \"pluggable TOASTer\", considering\n> the fair amount of push-back it got from the community, not to mention\n> a somewhat raw state of the patchset. It's true that Teodor and I are\n> trying to address similar problems. This however doesn't mean that\n> there should be a dependency between these patches.\n\nThe reason I think this is better implemented as a pluggable toaster\nis because casts are necessarily opaque and require O(sizeofdata)\ncopies or processing. The toaster infrastructure that is proposed in\n[0] seems to improve on the O(sizeofdata) requirement for toast, but\nthat will not work with casts.\n\n> Also, I completely agree with Tomas [2]:\n>\n>> My main point is that we should not be making too many radical\n>> changes at once - it makes it much harder to actually get anything done.\n>\n> IMO the patches don't depend on each other but rather complement each\n> other. The user can switch between different TOAST methods, and the\n> compression dictionaries can work on top of different TOAST methods.\n\nI don't think that is possible (or at least, not as performant). To\ntreat type X' as type X and use it as a stored medium instead, you\nmust have either the whole binary representation of X, or have access\nto the internals of type X. 
I find it difficult to believe that casts\ncan be done without a full detoast (or otherwise without deep\nknowledge about internal structure of the data type such as 'type A is\nbinary compatible with type X'), and as such I think this feature\n'compression dictionaries' is competing with the 'pluggable toaster'\nfeature, if the one is used on top of the other. That is, the\ndictionary is still created like in the proposed patches (though\npreferably without the 64-byte NAMELEN limit), but the usage will be\nthrough \"TOASTER my_dict_enabled_toaster\".\n\nAdditionally, I don't think we've ever accepted two different\nimplementations of the same concept, at least not without first having\ngood arguments why both competing implementations have obvious\nbenefits over the other, and both implementations being incompatible.\n\n> Although there is also a high-level idea (according to the\n> presentations) to share common data between different TOASTed values,\n> similarly to what compression dictionaries do, by looking at the\n> current feedback and considering the overall complexity and the amount\n> of open questions (e.g. interaction with different TableAMs, etc), I\n> seriously doubt that this particular part of \"pluggable TOASTer\" will\n> end-up in the core.\n\nYes, and that's why I think that this where this dictionary\ninfrastructure could provide value, as an alternative or extension to\nthe proposed jsonb toaster in the 'pluggable toaster' thread.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 4 Jul 2022 14:45:22 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi Matthias,\n\n> > Although there is also a high-level idea (according to the\n> > presentations) to share common data between different TOASTed values,\n> > similarly to what compression dictionaries do, by looking at the\n> > current feedback and considering the overall complexity and the amount\n> > of open questions (e.g. interaction with different TableAMs, etc), I\n> > seriously doubt that this particular part of \"pluggable TOASTer\" will\n> > end-up in the core.\n>\n> Yes, and that's why I think that this where this dictionary\n> infrastructure could provide value, as an alternative or extension to\n> the proposed jsonb toaster in the 'pluggable toaster' thread.\n\nOK, I see your point now. And I think this is a very good point.\nBasing \"Compression dictionaries\" on the API provided by \"pluggable\nTOASTer\" can also be less hacky than what I'm currently doing with\n`typmod` argument. I'm going to switch the implementation at some\npoint, unless anyone will object to the idea.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 4 Jul 2022 17:00:19 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi hackers,\n\n> OK, I see your point now. And I think this is a very good point.\n> Basing \"Compression dictionaries\" on the API provided by \"pluggable\n> TOASTer\" can also be less hacky than what I'm currently doing with\n> `typmod` argument. I'm going to switch the implementation at some\n> point, unless anyone will object to the idea.\n\nHere is the rebased patch. I reworked the memory management a bit but\nother than that there are no new changes.\n\nSo far we seem to have a consensus to:\n\n1. Use bytea instead of NameData to store dictionary entries;\n\n2. Assign monotonically ascending IDs to the entries instead of using\nOids, as it is done with pg_class.relnatts. In order to do this we\nshould either add a corresponding column to pg_type, or add a new\ncatalog table, e.g. pg_dict_meta. Personally I don't have a strong\nopinion on what is better. Thoughts?\n\nBoth changes should be straightforward to implement and also are a\ngood exercise to newcomers.\n\nI invite anyone interested to join this effort as a co-author! (since,\nhonestly, rewriting the same feature over and over again alone is\nquite boring :D).\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 11 Jul 2022 17:44:42 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi hackers,\n\n> I invite anyone interested to join this effort as a co-author!\n\nHere is v5. Same as v4 but with a fixed compiler warning (thanks,\ncfbot). Sorry for the noise.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 11 Jul 2022 18:41:14 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi hackers!\n\nAleksander, please point me in the right direction if it was mentioned\nbefore, I have a few questions:\n\n1) It is not clear for me, how do you see the life cycle of such a\ndictionary? If it is meant to keep growing without\ncleaning up/rebuilding it could affect performance in an undesirable way,\nalong with keeping unused data without\nany means to get rid of them.\nAlso, I agree with Simon Riggs, using OIDs from the general pool for\ndictionary entries is a bad idea.\n\n2) From (1) follows another question - I haven't seen any means for getting\nrid of unused keys (or any other means\nfor dictionary cleanup). How could it be done?\n\n3) Is the possible scenario legal - by some means a dictionary does not\ncontain some keys for entries? What happens then?\n\n4) If one dictionary is used by several tables - I see future issues in\nconcurrent dictionary updates. This will for sure\naffect performance and can cause unpredictable behavior for queries.\n\nIf you have any questions on Pluggable TOAST don't hesitate to ask me and\non JSONB Toaster you can ask Nikita Glukhov.\n\nThank you!\n\nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\nOn Mon, Jul 11, 2022 at 6:41 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi hackers,\n>\n> > I invite anyone interested to join this effort as a co-author!\n>\n> Here is v5. Same as v4 but with a fixed compiler warning (thanks,\n> cfbot). Sorry for the noise.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>",
"msg_date": "Tue, 12 Jul 2022 13:25:43 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi Nikita,\n\n> Aleksander, please point me in the right direction if it was mentioned before, I have a few questions:\n\nThanks for your feedback. These are good questions indeed.\n\n> 1) It is not clear for me, how do you see the life cycle of such a dictionary? If it is meant to keep growing without\n> cleaning up/rebuilding it could affect performance in an undesirable way, along with keeping unused data without\n> any means to get rid of them.\n> 2) From (1) follows another question - I haven't seen any means for getting rid of unused keys (or any other means\n> for dictionary cleanup). How could it be done?\n\nGood point. This was not a problem for ZSON since the dictionary size\nwas limited to 2**16 entries, the dictionary was immutable, and the\ndictionaries had versions. For compression dictionaries we removed the\n2**16 entries limit and also decided to get rid of versions. The idea\nwas that you can simply continue adding new entries, but no one\nthought about the fact that this will consume the memory required to\ndecompress the document indefinitely.\n\nMaybe we should return to the idea of limited dictionary size and\nversions. Objections?\n\n> 4) If one dictionary is used by several tables - I see future issues in concurrent dictionary updates. This will for sure\n> affect performance and can cause unpredictable behavior for queries.\n\nYou are right. Another reason to return to the idea of dictionary versions.\n\n> Also, I agree with Simon Riggs, using OIDs from the general pool for dictionary entries is a bad idea.\n\nYep, we agreed to stop using OIDs for this, however this was not\nchanged in the patch at this point. Please don't hesitate joining the\neffort if you want to. I wouldn't mind taking a short break from this\npatch.\n\n> 3) Is the possible scenario legal - by some means a dictionary does not contain some keys for entries? 
What happens then?\n\nNo, we should either forbid removing dictionary entries or check that\nall the existing documents are not using the entries being removed.\n\n> If you have any questions on Pluggable TOAST don't hesitate to ask me and on JSONB Toaster you can ask Nikita Glukhov.\n\nWill do! Thanks for working on this and I'm looking forward to the\nnext version of the patch for the next round of review.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 12 Jul 2022 15:15:17 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi hackers!\n\nAleksander, I've carefully gone over discussion and still have some\nquestions to ask -\n\n1) Is there any means of measuring overhead of dictionaries over vanilla\nimplementation? IMO it is a must because\nJSON is a widely used functionality. Also, as it was mentioned before, to\ncheck the dictionary value must be detoasted;\n\n2) Storing dictionaries in one table. As I wrote before, this will surely\nlead to locks and waits while inserting and updating\ndictionaries, and could cause serious performance issues. And vacuuming\nthis table will lead to locks for all tables using\ndictionaries until vacuum is complete;\n\n3) JSON documents in production environments could be very complex and use\nthousands of keys, so creating dictionary\ndirectly in SQL statement is not very good approach, so it's another reason\nto have means for creating dictionaries as a\nseparate tables and/or passing them as files or so;\n\n4) Suggested mechanics, if put on top of the TOAST, could not benefit from\nknowledge if internal JSON structure, which\nis seen as important drawback in spite of extensive research work done on\nworking with JSON schema (storing, validating,\netc.), and also it cannot recognize and help to compress duplicated parts\nof JSON document;\n\n5) A small test issue - if dictionaried' JSON has a key which is equal to\nOID used in a dictionary for some other key?\n\nIn Pluggable TOAST we suggest that as an improvement compression should be\nput inside the Toaster as an option,\nthus the Toaster could have maximum benefits from knowledge of data\ninternal structure (and in future use JSON Schema).\nFor using in special Toaster for JSON datatype compression dictionaries\nseem to be very valuable addition, but now I\nhave to agree that this feature in current state is competing with\nPluggable TOAST.\n\nThank you!\n\nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nOn Tue, Jul 12, 2022 at 3:15 PM Aleksander 
Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Nikita,\n>\n> > Aleksander, please point me in the right direction if it was mentioned\n> before, I have a few questions:\n>\n> Thanks for your feedback. These are good questions indeed.\n>\n> > 1) It is not clear for me, how do you see the life cycle of such a\n> dictionary? If it is meant to keep growing without\n> > cleaning up/rebuilding it could affect performance in an undesirable\n> way, along with keeping unused data without\n> > any means to get rid of them.\n> > 2) From (1) follows another question - I haven't seen any means for\n> getting rid of unused keys (or any other means\n> > for dictionary cleanup). How could it be done?\n>\n> Good point. This was not a problem for ZSON since the dictionary size\n> was limited to 2**16 entries, the dictionary was immutable, and the\n> dictionaries had versions. For compression dictionaries we removed the\n> 2**16 entries limit and also decided to get rid of versions. The idea\n> was that you can simply continue adding new entries, but no one\n> thought about the fact that this will consume the memory required to\n> decompress the document indefinitely.\n>\n> Maybe we should return to the idea of limited dictionary size and\n> versions. Objections?\n>\n> > 4) If one dictionary is used by several tables - I see future issues in\n> concurrent dictionary updates. This will for sure\n> > affect performance and can cause unpredictable behavior for queries.\n>\n> You are right. Another reason to return to the idea of dictionary versions.\n>\n> > Also, I agree with Simon Riggs, using OIDs from the general pool for\n> dictionary entries is a bad idea.\n>\n> Yep, we agreed to stop using OIDs for this, however this was not\n> changed in the patch at this point. Please don't hesitate joining the\n> effort if you want to. 
I wouldn't mind taking a short break from this\n> patch.\n>\n> > 3) Is the possible scenario legal - by some means a dictionary does not\n> contain some keys for entries? What happens then?\n>\n> No, we should either forbid removing dictionary entries or check that\n> all the existing documents are not using the entries being removed.\n>\n> > If you have any questions on Pluggable TOAST don't hesitate to ask me\n> and on JSONB Toaster you can ask Nikita Glukhov.\n>\n> Will do! Thanks for working on this and I'm looking forward to the\n> next version of the patch for the next round of review.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>",
"msg_date": "Sun, 17 Jul 2022 21:15:03 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi Nikita,\n\nThanks for your feedback!\n\n> Aleksander, I've carefully gone over discussion and still have some questions to ask -\n>\n> 1) Is there any means of measuring overhead of dictionaries over vanilla implementation? IMO it is a must because\n> JSON is a widely used functionality. Also, as it was mentioned before, to check the dictionary value must be detoasted;\n\nNot sure what overhead you have in mind. The patch doesn't affect the\nvanilla JSONB implementation.\n\n> 2) Storing dictionaries in one table. As I wrote before, this will surely lead to locks and waits while inserting and updating\n> dictionaries, and could cause serious performance issues. And vacuuming this table will lead to locks for all tables using\n> dictionaries until vacuum is complete;\n\nI believe this is true to some degree. But doesn't the same generally\napply to the rest of catalog tables?\n\nI'm not that concerned about inserting/updating since this is a rare\noperation. Vacuuming shouldn't be such a problem unless the user\ncreates/deletes dictionaries all the time.\n\nAm I missing something?\n\n> 3) JSON documents in production environments could be very complex and use thousands of keys, so creating dictionary\n> directly in SQL statement is not very good approach, so it's another reason to have means for creating dictionaries as a\n> separate tables and/or passing them as files or so;\n\nYes, it was proposed to update dictionaries automatically e.g. during\nthe VACUUM of the table that contains compressed documents. This is\nsimply out of scope of this particular patch. 
It was argued that the\nmanual update should be supported too, which is implemented in this\npatch.\n\n> 4) Suggested mechanics, if put on top of the TOAST, could not benefit from knowledge if internal JSON structure, which\n> is seen as important drawback in spite of extensive research work done on working with JSON schema (storing, validating,\n> etc.), and also it cannot recognize and help to compress duplicated parts of JSON document;\n\nCould you please elaborate on this a bit and/or maybe give an example? ...\n\n> In Pluggable TOAST we suggest that as an improvement compression should be put inside the Toaster as an option,\n> thus the Toaster could have maximum benefits from knowledge of data internal structure (and in future use JSON Schema).\n\n... Current implementation doesn't use the knowledge of JSONB format,\nthat's true. This is because previously we agreed there is no \"one\nsize fits all\" compression method, thus several are going to be\nsupported eventually. The current algorithm was chosen merely as the\none that is going to work good enough for any data type, not just\nJSONB. Nothing prevents an alternative compression method from using\nthe knowledge of JSONB structure.\n\nAs, I believe, Matthias pointed out above, only partial decompression\nwould be a challenge. This is indeed something that would be better to\nimplement somewhere closer to the TOAST level. Other than that I'm not\nsure what you mean.\n\n> 5) A small test issue - if dictionaried' JSON has a key which is equal to OID used in a dictionary for some other key?\n\nAgain, I'm having difficulties understanding the case you are\ndescribing. Could you give a specific example?\n\n> For using in special Toaster for JSON datatype compression dictionaries seem to be very valuable addition, but now I\n> have to agree that this feature in current state is competing with Pluggable TOAST.\n\nI disagree with the word \"competing\" here. 
Again, Matthias had a very\ngood point about this above.\n\nIn short, pluggable TOAST is a low-level internal mechanism, but it\ndoesn't provide a good interface for the end user and has several open\nissues. The most important one IMO is how it is supposed to work with\npluggable AMs in the general case. \"Compression dictionaries\" have a\ngood user interface, and the implementation is not that important. The\ncurrent implementation uses casts, as the only option available at the\nmoment. But nothing prevents it from using Pluggable TOAST if this\nwill produce a cleaner code (I believe it will) and will allow\ndelivering partial decompression (this is yet to be figured out).\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 18 Jul 2022 15:26:47 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "On Sun, 17 Jul 2022 at 19:15, Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> we suggest that as an improvement compression should be put inside the Toaster as an option,\n> thus the Toaster could have maximum benefits from knowledge of data internal structure (and in future use JSON Schema).\n\nVery much agreed.\n\n> For using in special Toaster for JSON datatype compression dictionaries seem to be very valuable addition, but now I\n> have to agree that this feature in current state is competing with Pluggable TOAST.\n\nBut I don't understand this.\n\nWhy does storing a compression dictionary in the catalog prevent that\ndictionary from being used within the toaster?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 27 Jul 2022 08:36:13 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "On Wed, 27 Jul 2022 at 09:36, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Sun, 17 Jul 2022 at 19:15, Nikita Malakhov <hukutoc@gmail.com> wrote:\n>\n> > For using in special Toaster for JSON datatype compression dictionaries seem to be very valuable addition, but now I\n> > have to agree that this feature in current state is competing with Pluggable TOAST.\n>\n> But I don't understand this.\n>\n> Why does storing a compression dictionary in the catalog prevent that\n> dictionary from being used within the toaster?\n\nThe point is not that compression dictionaries in the catalog are bad\n- I think it makes a lot of sense - but that the typecast -based usage\nof those dictionaries in user tables (like the UI provided by zson)\neffectively competes with the toaster: It tries to store the data in a\nmore compressed manner than the toaster currently can because it has\nadditional knowledge about the values being toasted.\n\nThe main difference between casting and toasting however is that\ncasting is fairly expensive because it has a significantly higher memory\noverhead: both the fully decompressed and the compressed values are\nstored in memory at the same time at some point when you cast a value,\nwhile only the decompressed value is stored in full in memory when\n(de)toasting.\n\nAnd, considering that there is an open proposal for extending the\ntoaster mechanism, I think that it is not specifically efficient to\nwork with the relatively expensive typecast -based infrastructure if\nthis dictionary compression can instead be added using the proposed\nextensible toasting mechanism at relatively low overhead.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 27 Jul 2022 10:30:18 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi hackers,\n\n> So far we seem to have a consensus to:\n>\n> 1. Use bytea instead of NameData to store dictionary entries;\n>\n> 2. Assign monotonically ascending IDs to the entries instead of using\n> Oids, as it is done with pg_class.relnatts. In order to do this we\n> should either add a corresponding column to pg_type, or add a new\n> catalog table, e.g. pg_dict_meta. Personally I don't have a strong\n> opinion on what is better. Thoughts?\n>\n> Both changes should be straightforward to implement and also are a\n> good exercise to newcomers.\n>\n> I invite anyone interested to join this effort as a co-author! (since,\n> honestly, rewriting the same feature over and over again alone is\n> quite boring :D).\n\ncfbot complained that v5 doesn't apply anymore. Here is the rebased\nversion of the patch.\n\n> Good point. This was not a problem for ZSON since the dictionary size\n> was limited to 2**16 entries, the dictionary was immutable, and the\n> dictionaries had versions. For compression dictionaries we removed the\n> 2**16 entries limit and also decided to get rid of versions. The idea\n> was that you can simply continue adding new entries, but no one\n> thought about the fact that this will consume the memory required to\n> decompress the document indefinitely.\n>\n> Maybe we should return to the idea of limited dictionary size and\n> versions. Objections?\n> [ ...]\n> You are right. Another reason to return to the idea of dictionary versions.\n\nSince no one objected so far and/or proposed a better idea I assume\nthis can be added to the list of TODOs as well.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 1 Aug 2022 14:25:32 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi hackers!\n\nI've got a partly question, partly proposal for the future development of\nthis\nfeature:\nWhat if we use pg_dict table not to store dictionaries but to store\ndictionaries'\nmeta, and actual dictionaries to be stored in separate tables like it is\ndone with\nTOAST tables (i.e. pg_dict.<dictionary 1 entry> --> pg_dict_16385 table)?\nThus we can kill several birds with one stone - we deal with concurrent\ndictionaries' updates - which looks like very serious issue for now, they\ndo not\naffect each other and overall DB performance while using, we get around SQL\nstatement size restriction, could effectively deal with versions in\ndictionaries\nand even dictionaries' versions, as well as dictionary size restriction, we\ncan\nuse it for duplicated JSON parts, and even we can provide an API to work\nwith dictionaries and dictionary tables which later could be usable even\nfor\nworking with JSON schemas as well (maybe, with some extension)?\n\nOverall structure could look like this:\npg_dict\n |\n |---- dictionary 1 meta\n | |--name\n | |--size\n | |--etc\n | |--dictionary table name (i.e. pg_dict_16385)\n | |\n | |----> pg_dict_16385\n |\n |---- dictionary 2 meta\n | |--name\n | |--size\n | |--etc\n | |--dictionary table name (i.e. pg_dict_16386)\n | |\n | |----> pg_dict_16386\n ...\n\nwhere dictionary table could look like\npg_dict_16385\n |\n |---- key 1\n | |-value\n |\n |---- key 2\n | |-value\n ...\n\nAnd with a special DICT API we would have means to access, cache, store our\ndictionaries in a uniform way from different levels. In this implementation\nit also\nlooks as a very valuable addition for our JSONb Toaster.\n\nJSON schema processing is a very promising feature and we have to keep up\nwith major competitors like Oracle which are already working on it.\n\nOn Mon, Aug 1, 2022 at 2:25 PM Aleksander Alekseev <aleksander@timescale.com>\nwrote:\n\n> Hi hackers,\n>\n> > So far we seem to have a consensus to:\n> >\n> > 1. 
Use bytea instead of NameData to store dictionary entries;\n> >\n> > 2. Assign monotonically ascending IDs to the entries instead of using\n> > Oids, as it is done with pg_class.relnatts. In order to do this we\n> > should either add a corresponding column to pg_type, or add a new\n> > catalog table, e.g. pg_dict_meta. Personally I don't have a strong\n> > opinion on what is better. Thoughts?\n> >\n> > Both changes should be straightforward to implement and also are a\n> > good exercise to newcomers.\n> >\n> > I invite anyone interested to join this effort as a co-author! (since,\n> > honestly, rewriting the same feature over and over again alone is\n> > quite boring :D).\n>\n> cfbot complained that v5 doesn't apply anymore. Here is the rebased\n> version of the patch.\n>\n> > Good point. This was not a problem for ZSON since the dictionary size\n> > was limited to 2**16 entries, the dictionary was immutable, and the\n> > dictionaries had versions. For compression dictionaries we removed the\n> > 2**16 entries limit and also decided to get rid of versions. The idea\n> > was that you can simply continue adding new entries, but no one\n> > thought about the fact that this will consume the memory required to\n> > decompress the document indefinitely.\n> >\n> > Maybe we should return to the idea of limited dictionary size and\n> > versions. Objections?\n> > [ ...]\n> > You are right. Another reason to return to the idea of dictionary\n> versions.\n>\n> Since no one objected so far and/or proposed a better idea I assume\n> this can be added to the list of TODOs as well.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nhttps://postgrespro.ru/",
"msg_date": "Fri, 19 Aug 2022 10:57:42 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi hackers,\n\nHere is the rebased version of the patch.\n\n> I invite anyone interested to join this effort as a co-author! (since,\n> honestly, rewriting the same feature over and over again alone is\n> quite boring :D).\n\n> Overall structure could look like this:\n> pg_dict\n> |\n> |---- dictionary 1 meta\n> | |--name\n> | |--size\n> | |--etc\n> | |--dictionary table name (i.e. pg_dict_16385)\n> | |\n> | |----> pg_dict_16385\n> |\n> |---- dictionary 2 meta\n> | |--name\n> | |--size\n> | |--etc\n> | |--dictionary table name (i.e. pg_dict_16386)\n> | |\n> | |----> pg_dict_16386\n\nFor the record, Nikita and I agreed offlist that Nikita will join this\neffort as a co-author in order to implement the suggested improvements\n(and perhaps some improvements that were not suggested yet). Meanwhile\nI'm going to keep the current version of the patch up to date with the\n`master` branch.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 2 Sep 2022 13:50:12 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi hackers,\n\n> For the record, Nikita and I agreed offlist that Nikita will join this\n> effort as a co-author in order to implement the suggested improvements\n> (and perhaps some improvements that were not suggested yet). Meanwhile\n> I'm going to keep the current version of the patch up to date with the\n> `master` branch.\n\nHere is an updated patch with added Meson support.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 6 Oct 2022 13:29:44 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi hackers,\n\n> For the record, Nikita and I agreed offlist that Nikita will join this\n> effort as a co-author in order to implement the suggested improvements\n> (and perhaps some improvements that were not suggested yet). Meanwhile\n> I'm going to keep the current version of the patch up to date with the\n> `master` branch.\n\n8272749e added a few more arguments to CastCreate(). Here is the rebased patch.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 4 Nov 2022 11:37:02 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi hackers,\n\n> 8272749e added a few more arguments to CastCreate(). Here is the rebased patch.\n\nAfter merging afbfc029 [1] the patch needed a rebase. PFA v10.\n\nThe patch is still in a PoC state and this is exactly why comments and\nsuggestions from the community are most welcome! Particularly I would\nlike to know:\n\n1. Would you call it a wanted feature considering the existence of\nPluggable TOASTer patchset which (besides other things) tries to\nintroduce type-aware TOASTers for EXTERNAL attributes? I know what\nSimon's [2] and Nikita's latest answers were, and I know my personal\nopinion on this [3][4], but I would like to hear from the rest of the\ncommunity.\n\n2. How should we make sure a dictionary will not consume all the\navailable memory? Limiting the amount of dictionary entries to pow(2,\n16) and having dictionary versions seems to work OK for ZSON. However\nit was pointed out that this may be an unwanted limitation for the\nin-core implementation.\n\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=c727f511;hp=afbfc02983f86c4d71825efa6befd547fe81a926\n[2]: https://www.postgresql.org/message-id/CANbhV-HpCF852WcZuU0wyh1jMU4p6XLbV6rCRkZpnpeKQ9OenQ%40mail.gmail.com\n[3]: https://www.postgresql.org/message-id/CAJ7c6TN-N3%3DPSykmOjmW1EAf9YyyHFDHEznX-5VORsWUvVN-5w%40mail.gmail.com\n[4]: https://www.postgresql.org/message-id/CAJ7c6TO2XTTk3cu5w6ePHfhYQkoNpw7u1jeqHf%3DGwn%2BoWci8eA%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 17 Nov 2022 13:36:36 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "This patch came up at the developer meeting in Brussels yesterday.\nhttps://wiki.postgresql.org/wiki/FOSDEM/PGDay_2023_Developer_Meeting#v16_Patch_Triage\n\nFirst, as far as I can tell, there is a large overlap between this patch\nand \"Pluggable toaster\" patch. The approaches are completely different,\nbut they seem to be trying to fix the same problem: the fact that the\ndefault TOAST stuff isn't good enough for JSONB. I think before asking\ndevelopers of both patches to rebase over and over, we should take a\nstep back and decide which one we dislike the less, and how to fix that\none into a shape that we no longer dislike.\n\n(Don't get me wrong. I'm all for having better JSONB compression.\nHowever, for one thing, both patches require action from the user to set\nup a compression mechanism by hand. Perhaps it would be even better if\nthe system determines that a JSONB column uses a different compression\nimplementation, without the user doing anything explicitly; or maybe we\nwant to give the user *some* agency for specific columns if they want,\nbut not force them into it for every single jsonb column.)\n\nNow, I don't think either of these patches can get to a committable\nshape in time for v16 -- even assuming we had an agreed design, which\nAFAICS we don't. But I encourage people to continue discussion and try\nto find consensus.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Doing what he did amounts to sticking his fingers under the hood of the\nimplementation; if he gets his fingers burnt, it's his problem.\" (Tom Lane)\n\n\n",
"msg_date": "Fri, 3 Feb 2023 10:55:40 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "On Fri, 3 Feb 2023 at 14:04, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> This patch came up at the developer meeting in Brussels yesterday.\n> https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2023_Developer_Meeting#v16_Patch_Triage\n>\n> First, as far as I can tell, there is a large overlap between this patch\n> and \"Pluggable toaster\" patch. The approaches are completely different,\n> but they seem to be trying to fix the same problem: the fact that the\n> default TOAST stuff isn't good enough for JSONB. I think before asking\n> developers of both patches to rebase over and over, we should take a\n> step back and decide which one we dislike the less, and how to fix that\n> one into a shape that we no longer dislike.\n>\n> (Don't get me wrong. I'm all for having better JSONB compression.\n> However, for one thing, both patches require action from the user to set\n> up a compression mechanism by hand. Perhaps it would be even better if\n> the system determines that a JSONB column uses a different compression\n> implementation, without the user doing anything explicitly; or maybe we\n> want to give the user *some* agency for specific columns if they want,\n> but not force them into it for every single jsonb column.)\n>\n> Now, I don't think either of these patches can get to a committable\n> shape in time for v16 -- even assuming we had an agreed design, which\n> AFAICS we don't. But I encourage people to continue discussion and try\n> to find consensus.\n>\nHi, Alvaro!\n\nI'd like to give my +1 in favor of implementing a pluggable toaster\ninterface first. Then we can work on custom toast engines for\ndifferent scenarios, not limited to JSON(b).\n\nFor example, I find it useful to decrease WAL overhead on the\nreplication of TOAST updates. It is quite a pain now that we need to\nrewrite all toast chunks at any TOAST update. Also, it could be good\nfor implementing undo access methods etc., etc. 
Now, these kinds of\nactivities in extensions face the fact that core has only one TOAST\nwhich is quite inefficient in many scenarios.\n\nSo overall I value the extensibility part of this activity as the most\nimportant one and will be happy to see it completed first.\n\nKind regards,\nPavel Borisov,\nSupabase.\n\n\n",
"msg_date": "Fri, 3 Feb 2023 14:39:31 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-03 14:39:31 +0400, Pavel Borisov wrote:\n> On Fri, 3 Feb 2023 at 14:04, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > This patch came up at the developer meeting in Brussels yesterday.\n> > https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2023_Developer_Meeting#v16_Patch_Triage\n> >\n> > First, as far as I can tell, there is a large overlap between this patch\n> > and \"Pluggable toaster\" patch. The approaches are completely different,\n> > but they seem to be trying to fix the same problem: the fact that the\n> > default TOAST stuff isn't good enough for JSONB. I think before asking\n> > developers of both patches to rebase over and over, we should take a\n> > step back and decide which one we dislike the less, and how to fix that\n> > one into a shape that we no longer dislike.\n> >\n> > (Don't get me wrong. I'm all for having better JSONB compression.\n> > However, for one thing, both patches require action from the user to set\n> > up a compression mechanism by hand. Perhaps it would be even better if\n> > the system determines that a JSONB column uses a different compression\n> > implementation, without the user doing anything explicitly; or maybe we\n> > want to give the user *some* agency for specific columns if they want,\n> > but not force them into it for every single jsonb column.)\n> >\n> > Now, I don't think either of these patches can get to a committable\n> > shape in time for v16 -- even assuming we had an agreed design, which\n> > AFAICS we don't. But I encourage people to continue discussion and try\n> > to find consensus.\n> >\n> Hi, Alvaro!\n>\n> I'd like to give my +1 in favor of implementing a pluggable toaster\n> interface first. Then we can work on custom toast engines for\n> different scenarios, not limited to JSON(b).\n\nI don't think the approaches in either of these threads is\npromising. 
They add a lot of complexity, require implementation effort\nfor each type, manual work by the administrator for each column, etc.\n\n\nOne of the major justifications for work in this area is the cross-row\nredundancy for types like jsonb. I think there's ways to improve that\nacross types, instead of requiring per-type work. We could e.g. use\ncompression dictionaries to achieve much higher compression\nrates. Training of the dictionary could even happen automatically by\nanalyze, if we wanted to. It's unlikely to get you everything a very\nsophisticated per-type compression is going to give you, but it's going\nto be a lot better than today, and it's going to work across types.\n\n\n> For example, I find it useful to decrease WAL overhead on the\n> replication of TOAST updates. It is quite a pain now that we need to\n> rewrite all toast chunks at any TOAST update. Also, it could be good\n> for implementing undo access methods etc., etc. Now, these kinds of\n> activities in extensions face the fact that core has only one TOAST\n> which is quite inefficient in many scenarios.\n>\n> So overall I value the extensibility part of this activity as the most\n> important one and will be happy to see it completed first.\n\nI think the complexity will just make improving toast in-core harder,\nwithout much benefit.\n\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Sat, 4 Feb 2023 05:31:23 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi,\n\n> I don't think the approaches in either of these threads is\n> promising. They add a lot of complexity, require implementation effort\n> for each type, manual work by the administrator for column, etc.\n\nI would like to point out that compression dictionaries don't require\nper-type work.\n\nThe current implementation is artificially limited to JSONB because it's a\nPoC. I was hoping to get more feedback from the community before\nproceeding further. Internally it uses type-agnostic compression and\ndoesn't care whether it compresses JSON(B), XML, TEXT, BYTEA or\narrays. This choice was explicitly done in order to support types\nother than JSONB.\n\n> One of the major justifications for work in this area is the cross-row\n> redundancy for types like jsonb. I think there's ways to improve that\n> across types, instead of requiring per-type work.\n\nTo be fair, there are advantages in using type-aware compression. The\ncompression algorithm can be more efficient than a general one and in\ntheory one can implement lazy decompression, e.g. the one that\ndecompresses only the accessed fields of a JSONB document.\n\nI agree though that particularly for PostgreSQL this is not\nnecessarily the right path, especially considering the accompanying\ncomplexity.\n\nIf the user cares about disk space consumption, why store JSONB\nin a relational DBMS in the first place? We already have a great\nsolution for compacting the data; it was invented in the 70s and is\ncalled normalization.\n\nSince PostgreSQL is not a specialized document-oriented DBMS I think we\nbetter focus our (far from being infinite) resources on something more\npeople would benefit from: AIO/DIO [1] or perhaps getting rid of\nfreezing [2], to name a few examples.\n\n> [...]\n> step back and decide which one we dislike the less, and how to fix that\n> one into a shape that we no longer dislike.\n\nFor the sake of completeness, doing neither type-aware TOASTing nor\ncompression dictionaries and leaving this area to the extension\nauthors (e.g. ZSON) is also a possible choice, for the same reasons\nnamed above. However, having a built-in type-agnostic dictionary\ncompression IMO is too attractive an idea to completely ignore.\nEspecially considering the fact that the implementation was proven to\nbe fairly simple and there was even no need to rebase the patch since\nNovember :)\n\nI know that there were concerns [3] regarding the typmod hack. I don't\nlike it either and am 100% open to suggestions here. This is merely a\ncurrent implementation detail used in a PoC, not a fundamental design\ndecision.\n\n[1]: https://postgr.es/m/20210223100344.llw5an2aklengrmn%40alap3.anarazel.de\n[2]: https://postgr.es/m/CAJ7c6TOk1mx4KfF0AHkvXi%2BpkdjFqwTwvRE-JmdczZMAYnRQ0w%40mail.gmail.com\n[3]: https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2023_Developer_Meeting#v16_Patch_Triage\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Sun, 5 Feb 2023 13:41:17 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-05 13:41:17 +0300, Aleksander Alekseev wrote:\n> > I don't think the approaches in either of these threads is\n> > promising. They add a lot of complexity, require implementation effort\n> > for each type, manual work by the administrator for column, etc.\n> \n> I would like to point out that compression dictionaries don't require\n> per-type work.\n> \n> Current implementation is artificially limited to JSONB because it's a\n> PoC. I was hoping to get more feedback from the community before\n> proceeding further. Internally it uses type-agnostic compression and\n> doesn't care whether it compresses JSON(B), XML, TEXT, BYTEA or\n> arrays. This choice was explicitly done in order to support types\n> other than JSONB.\n\nI don't think we'd want much of the infrastructure introduced in the\npatch for type agnostic cross-row compression. A dedicated \"dictionary\"\ntype as a wrapper around other types IMO is the wrong direction. This\nshould be a relation-level optimization option, possibly automatic, not\nsomething visible to every user of the table.\n\nI assume that manually specifying dictionary entries is a consequence of\nthe prototype state? I don't think this is something humans are very\ngood at, just analyzing the data to see what's useful to dictionarize\nseems more promising.\n\nI also suspect that we'd have to spend a lot of effort to make\ncompression/decompression fast if we want to handle dictionaries\nourselves, rather than using the dictionary support in libraries like\nlz4/zstd.\n\n\n> > One of the major justifications for work in this area is the cross-row\n> > redundancy for types like jsonb. I think there's ways to improve that\n> > across types, instead of requiring per-type work.\n> \n> To be fair, there are advantages in using type-aware compression. The\n> compression algorithm can be more efficient than a general one and in\n> theory one can implement lazy decompression, e.g. 
the one that\n> decompresses only the accessed fields of a JSONB document.\n\n> I agree though that particularly for PostgreSQL this is not\n> necessarily the right path, especially considering the accompanying\n> complexity.\n\nI agree with both those paragraphs.\n\n\n> above. However having a built-in type-agnostic dictionary compression\n> IMO is a too attractive idea to completely ignore it. Especially\n> considering the fact that the implementation was proven to be fairly\n> simple and there was even no need to rebase the patch since November\n> :)\n\nI don't think a prototype-y patch not needing a rebase in two months is a\ngood measure of complexity :)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 5 Feb 2023 06:50:50 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi,\n\n> I assume that manually specifying dictionary entries is a consequence of\n> the prototype state? I don't think this is something humans are very\n> good at, just analyzing the data to see what's useful to dictionarize\n> seems more promising.\n\nNo, humans are not good at it. The idea was to automate the process\nand build the dictionaries automatically e.g. during the VACUUM.\n\n> I don't think we'd want much of the infrastructure introduced in the\n> patch for type agnostic cross-row compression. A dedicated \"dictionary\"\n> type as a wrapper around other types IMO is the wrong direction. This\n> should be a relation-level optimization option, possibly automatic, not\n> something visible to every user of the table.\n\nSo to clarify, are we talking about tuple-level compression? Or\nperhaps page-level compression?\n\nImplementing page-level compression should be *relatively*\nstraightforward. As an example this was previously done for InnoDB.\nBasically InnoDB compresses the entire page, then rounds the result to\n1K, 2K, 4K, 8K, etc and stores the result in a corresponding fork\n(\"fork\" in PG terminology), similarly to how a SLAB allocator works.\nAdditionally a page_id -> fork_id map should be maintained, probably\nin yet another fork, similarly to visibility map. A compressed page\ncan change the fork after being modified since this may change the\nsize of a compressed page. The buffer manager is unaffected and deals\nonly with uncompressed pages. (I'm not an expert in InnoDB and this is\nmy very rough understanding of how its compression works.)\n\nI believe this can be implemented as a TAM. Whether this would be a\n\"dictionary\" compression is debatable but it gives the users similar\nbenefits, give or take. 
The advantage is that users don't have to define\nany dictionaries manually, nor does the DBMS during VACUUM or\notherwise.\n\n> I also suspect that we'd have to spend a lot of effort to make\n> compression/decompression fast if we want to handle dictionaries\n> ourselves, rather than using the dictionary support in libraries like\n> lz4/zstd.\n\nThat's a reasonable concern, can't argue with that.\n\n> I don't think a prototype-y patch not needing a rebase two months is a\n> good measure of complexity :)\n\nIt's worth noting that I also invested quite some time into reviewing\ntype-aware TOASTers :) I just chose to keep my personal opinion about\nthe complexity of that patch to myself this time since obviously I'm a\nbit biased. However if you are curious it's all in the corresponding\nthread.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Sun, 5 Feb 2023 20:05:51 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
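The InnoDB-style scheme described in the message above — compress the whole 8 kB page, then round the result up to a fixed size class that decides which "fork" it lands in — can be sketched in a few lines. This is a hypothetical illustration, not InnoDB or PostgreSQL code: Python's stdlib zlib stands in for the page compressor, and the size classes and function name are made up.

```python
import os
import zlib

# Hypothetical size classes, mirroring the described rounding to
# 1K / 2K / 4K / 8K. A compressed page is stored in the smallest
# class that fits it.
SIZE_CLASSES = [1024, 2048, 4096, 8192]

def size_class_for_page(page):
    """Return the smallest size class the compressed page fits into,
    falling back to the full 8 kB class when compression does not help."""
    assert len(page) == 8192
    compressed = zlib.compress(page, 6)
    for cls in SIZE_CLASSES:
        if len(compressed) <= cls:
            return cls
    return 8192  # effectively incompressible: keep the page as-is

repetitive_page = b'{"key": "value"}' * 512  # 8192 bytes of similar "JSON"
random_page = os.urandom(8192)               # incompressible noise
```

Since modifying a page can move it to a different size class, the scheme also needs the page_id -> fork_id map mentioned above; this sketch only shows the size-class decision.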
{
"msg_contents": "Hi,\n\nOn 2023-02-05 20:05:51 +0300, Aleksander Alekseev wrote:\n> > I don't think we'd want much of the infrastructure introduced in the\n> > patch for type agnostic cross-row compression. A dedicated \"dictionary\"\n> > type as a wrapper around other types IMO is the wrong direction. This\n> > should be a relation-level optimization option, possibly automatic, not\n> > something visible to every user of the table.\n>\n> So to clarify, are we talking about tuple-level compression? Or\n> perhaps page-level compression?\n\nTuple level.\n\nWhat I think we should do is basically this:\n\nWhen we compress datums, we know the table being targeted. If there's a\npg_attribute parameter indicating we should, we can pass a prebuilt\ndictionary to the LZ4/zstd [de]compression functions.\n\nIt's possible we'd need to use a somewhat extended header for such\ncompressed datums, to reference the dictionary \"id\" to be used when\ndecompressing, if the compression algorithms don't already have that in\none of their headers, but that's entirely doable.\n\n\nA quick demo of the effect size:\n\n# create data to train dictionary with, use subset to increase realism\nmkdir /tmp/pg_proc_as_json/;\nCREATE EXTENSION adminpack;\nSELECT pg_file_write('/tmp/pg_proc_as_json/'||oid||'.raw', to_json(pp)::text, true)\nFROM pg_proc pp\nLIMIT 2000;\n\n\n# build dictionary\nzstd --train -o /tmp/pg_proc_as_json.dict /tmp/pg_proc_as_json/*.raw\n\n# create more data\nSELECT pg_file_write('/tmp/pg_proc_as_json/'||oid||'.raw', to_json(pp)::text, true) FROM pg_proc pp;\n\n\n# compress without dictionary\nlz4 -k -m /tmp/pg_proc_as_json/*.raw\nzstd -k /tmp/pg_proc_as_json/*.raw\n\n# measure size\ncat /tmp/pg_proc_as_json/*.raw|wc -c; cat /tmp/pg_proc_as_json/*.lz4|wc -c; cat /tmp/pg_proc_as_json/*.zst|wc -c\n\n\n# compress with dictionary\nrm -f /tmp/pg_proc_as_json/*.{lz4,zst};\nlz4 -k -D /tmp/pg_proc_as_json.dict -m /tmp/pg_proc_as_json/*.raw\nzstd -k -D /tmp/pg_proc_as_json.dict 
/tmp/pg_proc_as_json/*.raw\n\ndid the same with zstd.\n\nHere's the results:\n\n lz4 zstd\tuncompressed\nno dict 1328794 982497 3898498\ndict 375070 267194\n\nI'd say the effect of the dictionary is pretty impressive. And remember,\nthis is with the dictionary having been trained on a subset of the data.\n\n\nAs a comparison, here's all the data compressed compressed at once:\n\n lz4 zstd\nno dict 180231 104913\ndict 179814 106444\n\nUnsurprisingly the dictionary doesn't help much, because the compression\nalgorithm can \"natively\" see the duplication.\n\n\n- Andres\n\n\n",
"msg_date": "Sun, 5 Feb 2023 11:06:02 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
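The shell demo above can be reproduced qualitatively with nothing but the Python standard library, since zlib supports preset dictionaries (the `zdict` parameter) much as zstd and lz4 do. The sample documents below are made up, and the crude "one representative document" dictionary stands in for a real trainer like `zstd --train`, which picks common substrings across the whole sample.

```python
import json
import zlib

# Many small, similar JSON documents, compressed one datum at a time.
docs = [
    json.dumps({"proname": "func_%d" % i, "pronamespace": 11,
                "proowner": 10, "prolang": 12, "prokind": "f"}).encode()
    for i in range(200)
]

zdict = docs[0]  # crude preset dictionary: one representative document

def compress_datum(datum, use_dict):
    c = zlib.compressobj(zdict=zdict) if use_dict else zlib.compressobj()
    return c.compress(datum) + c.flush()

def decompress_datum(payload):
    d = zlib.decompressobj(zdict=zdict)
    return d.decompress(payload) + d.flush()

plain = sum(len(compress_datum(d, False)) for d in docs)
dicted = sum(len(compress_datum(d, True)) for d in docs)
```

Even this crude dictionary typically shrinks each datum considerably, for exactly the reason given above: the common keys never have to appear inside each individually compressed datum.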
{
"msg_contents": "Hi,\n\n> > So to clarify, are we talking about tuple-level compression? Or\n> > perhaps page-level compression?\n>\n> Tuple level.\n>\n> What I think we should do is basically this:\n>\n> When we compress datums, we know the table being targeted. If there's a\n> pg_attribute parameter indicating we should, we can pass a prebuilt\n> dictionary to the LZ4/zstd [de]compression functions.\n>\n> It's possible we'd need to use a somewhat extended header for such\n> compressed datums, to reference the dictionary \"id\" to be used when\n> decompressing, if the compression algorithms don't already have that in\n> one of their headers, but that's entirely doable.\n>\n> A quick demo of the effect size:\n> [...]\n> Here's the results:\n>\n> lz4 zstd uncompressed\n> no dict 1328794 982497 3898498\n> dict 375070 267194\n>\n> I'd say the effect of the dictionary is pretty impressive. And remember,\n> this is with the dictionary having been trained on a subset of the data.\n\nI see your point regarding the fact that creating dictionaries on a\ntraining set is too beneficial to neglect it. Can't argue with this.\n\nWhat puzzles me though is: what prevents us from doing this on a page\nlevel as suggested previously?\n\nMore similar data you compress the more space and disk I/O you save.\nAdditionally you don't have to compress/decompress the data every time\nyou access it. Everything that's in shared buffers is uncompressed.\nNot to mention the fact that you don't care what's in pg_attribute,\nthe fact that schema may change, etc. There is a table and a\ndictionary for this table that you refresh from time to time. Very\nsimple.\n\nOf course the disadvantage here is that we are not saving the memory,\nunlike the case of tuple-level compression. But we are saving a lot of\nCPU cycles and doing less disk IOs. I would argue that saving CPU\ncycles is generally more preferable. 
CPUs are still often a bottleneck\nwhile memory becomes more and more available, e.g. there are\nrelatively affordable (for a company, not an individual) 1 TB RAM\ninstances, etc.\n\nSo it seems to me that doing page-level compression would be simpler\nand more beneficial in the long run (10+ years). Don't you agree?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 6 Feb 2023 17:03:07 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "On Mon, 6 Feb 2023 at 15:03, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi,\n>\n> I see your point regarding the fact that creating dictionaries on a\n> training set is too beneficial to neglect it. Can't argue with this.\n>\n> What puzzles me though is: what prevents us from doing this on a page\n> level as suggested previously?\n\nThe complexity of page-level compression is significant, as pages are\ncurrently a base primitive of our persistency and consistency scheme.\nTOAST builds on top of these low level primitives and has access to\ncatalogs, but smgr doesn't do that and can't have that, respectively,\nbecause it needs to be accessible and usable without access to the\ncatalogs during replay in recovery.\n\nI would like to know how you envision we would provide consistency\nwhen page-level compression would be implemented - wouldn't it\nincrease WAL overhead (and WAL synchronization overhead) when writing\nout updated pages to a new location due to it changing compressed\nsize?\n\n> More similar data you compress the more space and disk I/O you save.\n> Additionally you don't have to compress/decompress the data every time\n> you access it. Everything that's in shared buffers is uncompressed.\n> Not to mention the fact that you don't care what's in pg_attribute,\n> the fact that schema may change, etc. There is a table and a\n> dictionary for this table that you refresh from time to time. Very\n> simple.\n\nYou cannot \"just\" refresh a dictionary used once to compress an\nobject, because you need it to decompress the object too.\n\nAdditionally, I don't think block-level compression is related to this\nthread in a meaningful way: TOAST and datatype-level compression\nreduce the on-page size of attributes, and would benefit from improved\ncompression regardless of the size of pages when stored on disk, but a\npage will always use 8kB when read into memory. A tuple that uses less\nspace on pages will thus always be the better option when you're\noptimizing for memory usage, while also reducing storage size.\n\n> Of course the disadvantage here is that we are not saving the memory,\n> unlike the case of tuple-level compression. But we are saving a lot of\n> CPU cycles\n\nDo you have any indication for how attribute-level compares against\npage-level compression in cpu cycles?\n\n> and doing less disk IOs.\n\nLess IO bandwidth, but I doubt it uses fewer operations, as each page\nwould still need to be read; which currently happens on a page-by-page\nIO operation. 10 page read operations use 10 syscalls to read data\nfrom disk - 10 IO ops.\n\n> I would argue that saving CPU\n> cycles is generally more preferable. CPUs are still often a bottleneck\n> while the memory becomes more and more available, e.g there are\n> relatively affordable (for a company, not an individual) 1 TB RAM\n> instances, etc.\n\nBut not all systems have that 1TB RAM, and we cannot expect all users\nto increase their RAM.\n\n> So it seems to me that doing page-level compression would be simpler\n> and more beneficial in the long run (10+ years). Don't you agree?\n\nPage-level compression cannot compress patterns that have a length of\nmore than 1 page. TOAST is often used to store values larger than 8kB,\nwhich we'd prefer to compress to the greatest extent possible. So, a\nvalue-level compression method specialized to the type of the value\ndoes make a lot of sense, too.\n\nI'm not trying to say that compressing pages doesn't make sense or is\nuseless, I just don't think that we should ignore attribute-level\ncompression just because page-level compression could at some point be\nimplemented too.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 6 Feb 2023 16:16:41 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-06 16:16:41 +0100, Matthias van de Meent wrote:\n> On Mon, 6 Feb 2023 at 15:03, Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> >\n> > Hi,\n> >\n> > I see your point regarding the fact that creating dictionaries on a\n> > training set is too beneficial to neglect it. Can't argue with this.\n> >\n> > What puzzles me though is: what prevents us from doing this on a page\n> > level as suggested previously?\n> \n> The complexity of page-level compression is significant, as pages are\n> currently a base primitive of our persistency and consistency scheme.\n\n+many\n\nIt's also not all a panacea performance-wise, datum-level decompression can\noften be deferred much longer than page level decompression. For things like\njson[b], you'd hopefully normally have some \"pre-filtering\" based on proper\ncolumns, before you need to dig into the json datum.\n\nIt's also not necessarily that good, compression ratio wise. Particularly for\nwider datums you're not going to be able to remove much duplication, because\nthere's only a handful of tuples. Consider the case of json keys - the\ndictionary will often do better than page level compression, because it'll\nhave the common keys in the dictionary, which means the \"full\" keys never will\nhave to appear on a page, whereas page-level compression will have the keys on\nit, at least once.\n\nOf course you can use a dictionary for page-level compression too, but the\ngains when it works well will often be limited, because in most OLTP usable\npage-compression schemes I'm aware of, you can't compress a page all that far\ndown, because you need a small number of possible \"compressed page sizes\".\n\n\n> > More similar data you compress the more space and disk I/O you save.\n> > Additionally you don't have to compress/decompress the data every time\n> > you access it. 
Everything that's in shared buffers is uncompressed.\n> > Not to mention the fact that you don't care what's in pg_attribute,\n> > the fact that schema may change, etc. There is a table and a\n> > dictionary for this table that you refresh from time to time. Very\n> > simple.\n> \n> You cannot \"just\" refresh a dictionary used once to compress an\n> object, because you need it to decompress the object too.\n\nRight. That's what I was trying to refer to when mentioning that we might need\nto add a bit of additional information to the varlena header for datums\ncompressed with a dictionary.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 Feb 2023 11:33:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
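The "bit of additional information" in the compressed-datum header that the exchange above mentions can be sketched as a small dictionary-id prefix, so decompression can look the right dictionary up. The 4-byte little-endian header and the in-memory registry below are hypothetical stand-ins, not PostgreSQL's actual varlena layout; zlib's preset-dictionary support again stands in for zstd/lz4.

```python
import struct
import zlib

# Hypothetical registry of prebuilt dictionaries, keyed by id.
dictionaries = {
    1: b'{"proname": "", "pronamespace": 11, "proowner": 10, "prolang": 12}',
}

def pack_datum(raw, dict_id):
    """Compress raw bytes with the given dictionary and prefix the
    result with the dictionary id, so it can be decompressed later."""
    c = zlib.compressobj(zdict=dictionaries[dict_id])
    return struct.pack("<I", dict_id) + c.compress(raw) + c.flush()

def unpack_datum(stored):
    """Read the dictionary id back out of the header and decompress."""
    (dict_id,) = struct.unpack_from("<I", stored)
    d = zlib.decompressobj(zdict=dictionaries[dict_id])
    return d.decompress(stored[4:]) + d.flush()
```

The point of the id is exactly the one made above: a dictionary used to compress a value must remain available to decompress it, so stored values have to say which one they used.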
{
"msg_contents": "Hi,\n\nOn updating dictionary -\n\n>You cannot \"just\" refresh a dictionary used once to compress an\n>object, because you need it to decompress the object too.\n\nand when you have many - updating an existing dictionary requires\ngoing through all objects compressed with it in the whole database.\nIt's a very tricky question how to implement this feature correctly.\nAlso, there are some thoughts on using JSON schema to optimize\nstorage for JSON objects.\n(That's applied to the TOAST too, so at first glance we've decided\nto forbid dropping or changing TOAST implementations already\nregistered in a particular database.)\n\nIn my experience, in modern world, even with fast SSD storage\narrays, with large database (about 40-50 Tb) we had disk access\nas a bottleneck more often than CPU, except for the cases with\na lot of parallel execution threads for a single query (Oracle).\n\nOn Mon, Feb 6, 2023 at 10:33 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-02-06 16:16:41 +0100, Matthias van de Meent wrote:\n> > On Mon, 6 Feb 2023 at 15:03, Aleksander Alekseev\n> > <aleksander@timescale.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > I see your point regarding the fact that creating dictionaries on a\n> > > training set is too beneficial to neglect it. Can't argue with this.\n> > >\n> > > What puzzles me though is: what prevents us from doing this on a page\n> > > level as suggested previously?\n> >\n> > The complexity of page-level compression is significant, as pages are\n> > currently a base primitive of our persistency and consistency scheme.\n>\n> +many\n>\n> It's also not all a panacea performance-wise, datum-level decompression can\n> often be deferred much longer than page level decompression. For things\n> like\n> json[b], you'd hopefully normally have some \"pre-filtering\" based on proper\n> columns, before you need to dig into the json datum.\n>\n> It's also not necessarily that good, compression ratio wise. 
Particularly\n> for\n> wider datums you're not going to be able to remove much duplication,\n> because\n> there's only a handful of tuples. Consider the case of json keys - the\n> dictionary will often do better than page level compression, because it'll\n> have the common keys in the dictionary, which means the \"full\" keys never\n> will\n> have to appear on a page, whereas page-level compression will have the\n> keys on\n> it, at least once.\n>\n> Of course you can use a dictionary for page-level compression too, but the\n> gains when it works well will often be limited, because in most OLTP usable\n> page-compression schemes I'm aware of, you can't compress a page all that\n> far\n> down, because you need a small number of possible \"compressed page sizes\".\n>\n>\n> > > More similar data you compress the more space and disk I/O you save.\n> > > Additionally you don't have to compress/decompress the data every time\n> > > you access it. Everything that's in shared buffers is uncompressed.\n> > > Not to mention the fact that you don't care what's in pg_attribute,\n> > > the fact that schema may change, etc. There is a table and a\n> > > dictionary for this table that you refresh from time to time. Very\n> > > simple.\n> >\n> > You cannot \"just\" refresh a dictionary used once to compress an\n> > object, because you need it to decompress the object too.\n>\n> Right. 
That's what I was trying to refer to when mentioning that we might\n> need\n> to add a bit of additional information to the varlena header for datums\n> compressed with a dictionary.\n>\n> Greetings,\n>\n> Andres Freund\n>\n\n\n-- \nRegards,\n\n--\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Tue, 7 Feb 2023 09:11:52 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "On 2023-Feb-05, Aleksander Alekseev wrote:\n\n> Since PostgreSQL is not a specified document-oriented DBMS I think we\n> better focus our (far from being infinite) resources on something more\n> people would benefit from: AIO/DIO [1] or perhaps getting rid of\n> freezing [2], to name a few examples.\n\nFor what it's worth -- one of the reasons Postgres is successful, at\nleast in my opinion, is that each developer does more or less what they\nsee fit (or what their employer sees fit), without following any sort of\ngrand plan or roadmap. This has allowed us to expand in many directions\nsimultaneously. There's a group working on AIO; others are interested\nin improving partitioning, or logical replication, adding new SQL\nfeatures, and so on. I don't think we should stop thinking about TOAST\n(or more precisely JSON compression) just because we want to have all\nthese other things. Not being a document database didn't stop us from\nadding JSON many years back and JSONB stuff later. When we did, it was\nan enormous enabler of new use cases.\n\nEveryone, from customers of large Postgres support companies, to those\nof small or one-man Postgres support shops, to individual users doing\nstuff on their free time, benefits from everything that happens in the\nPostgres development group. Let's keep improving Postgres for everyone.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La espina, desde que nace, ya pincha\" (Proverbio africano)\n\n\n",
"msg_date": "Tue, 7 Feb 2023 11:55:57 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi,\n\n> > The complexity of page-level compression is significant, as pages are\n> > currently a base primitive of our persistency and consistency scheme.\n>\n> +many\n>\n> It's also not all a panacea performance-wise, datum-level decompression can\n> often be deferred much longer than page level decompression. For things like\n> json[b], you'd hopefully normally have some \"pre-filtering\" based on proper\n> columns, before you need to dig into the json datum.\n\nThis is actually a good point.\n\n> It's also not necessarily that good, compression ratio wise. Particularly for\n> wider datums you're not going to be able to remove much duplication, because\n> there's only a handful of tuples. Consider the case of json keys - the\n> dictionary will often do better than page level compression, because it'll\n> have the common keys in the dictionary, which means the \"full\" keys never will\n> have to appear on a page, whereas page-level compression will have the keys on\n> it, at least once.\n\nTo clarify, what I meant was applying an idea of compression with\nshared dictionaries to the pages instead of tuples. Just to make sure\nwe are on the same page.\n\n> Page-level compression can not compress patterns that have a length of\n> more than 1 page. TOAST is often used to store values larger than 8kB,\n> which we'd prefer to compress to the greatest extent possible. So, a\n> value-level compression method specialized to the type of the value\n> does make a lot of sense, too.\n\nLet's not forget that TOAST table is a table too. Page-level\ncompression applies to it as well as to a regular one.\n\n> Of course you can use a dictionary for page-level compression too, but the\n> gains when it works well will often be limited, because in most OLTP usable\n> page-compression schemes I'm aware of, you can't compress a page all that far\n> down, because you need a small number of possible \"compressed page sizes\".\n\nThat's true. 
However compressing an 8 KB page to, let's say, 1 KB, is\nnot a bad result as well.\n\nIn any case, there seems to be advantages and disadvantages of either\napproach. Personally I don't care that much which one to choose. In\nfact, although my own patch proposed attribute-level compression, not\ntuple-level one, it is arguably closer to tuple-level approach than\npage-level one. So to a certain extent I would be contradicting myself\nby trying to prove that page-level compression is the way to go. Also\nMatthias has a reasonable concern that page-level compression may have\nimplications for the WAL size. (Maybe it will not but I'm not ready to\nprove it right now, nor am I convinced this is necessarily true.)\n\nSo, let's focus on tuple-level compression then.\n\n> > > More similar data you compress the more space and disk I/O you save.\n> > > Additionally you don't have to compress/decompress the data every time\n> > > you access it. Everything that's in shared buffers is uncompressed.\n> > > Not to mention the fact that you don't care what's in pg_attribute,\n> > > the fact that schema may change, etc. There is a table and a\n> > > dictionary for this table that you refresh from time to time. Very\n> > > simple.\n> >\n> > You cannot \"just\" refresh a dictionary used once to compress an\n> > object, because you need it to decompress the object too.\n>\n> Right. That's what I was trying to refer to when mentioning that we might need\n> to add a bit of additional information to the varlena header for datums\n> compressed with a dictionary.\n\n> [...]\n> and when you have many - updating an existing dictionary requires\n> going through all objects compressed with it in the whole database.\n> It's a very tricky question how to implement this feature correctly.\n\nYep, that's one of the challenges.\n\nOne approach would be to extend the existing dictionary. Not sure if\nZSTD / LZ4 support this, they probably don't. 
In any case this is a\nsub-optimal approach because the dictionary will grow indefinitely.\n\nWe could create a dictionary once per table and forbid modifying it.\nUsers will have to re-create and refill a table manually if they\nwant to update the dictionary by using `INSERT INTO .. SELECT ..`.\nAlthough this is a possible solution I don't think this is what Andres\nmeant above by being invisible to the user. Also it would mean that\nthe new dictionary should be learned on the old table before creating\nthe new one with a new dictionary, which is awkward.\n\nThis is why we need something like dictionary versions. A dictionary\ncan't be erased as long as there is data that uses this version of a\ndictionary. The old data should be decompressed and compressed again\nwith the most recent dictionary, e.g. during VACUUM or perhaps VACUUM\nFULL. This is an idea I ended up using in ZSON.\n\nThere may be alternative solutions, but I'm not aware of\nany. (There are JSON Schema, Protobuf etc, but they don't work for\ngeneral-purpose compression algorithms and/or arbitrary data types.)\n\n> Let's keep improving Postgres for everyone.\n\nAmen.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
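The effect discussed in this message (common keys living in a shared dictionary, so they never have to appear in full next to each compressed datum) can be sketched with Python's standard zlib module, whose preset-dictionary support is analogous to the LZ4/ZSTD dictionaries under discussion. The document shape and dictionary contents below are made up for illustration:

```python
# Illustration of dictionary-based compression using only the standard
# library: zlib (DEFLATE) accepts a caller-supplied preset dictionary,
# analogous to the LZ4/ZSTD dictionaries discussed in this thread.
import json
import zlib

# A typical small JSONB-like document: mostly repeated keys.
doc = json.dumps({
    'user_id': 12345,
    'status': 'active',
    'created_at': '2023-02-07T16:39:45+03:00',
    'tags': ['compression', 'dictionaries'],
}).encode()

# Preset dictionary holding the common key fragments. zlib resolves
# back-references against the tail of the dictionary, so the most
# frequent fragments go last.
zdict = b'"tags": ["created_at": ""status": "active", "user_id": '

def deflate(data, zdict=None):
    c = zlib.compressobj(zdict=zdict) if zdict is not None else zlib.compressobj()
    return c.compress(data) + c.flush()

plain = deflate(doc)
shared = deflate(doc, zdict)

# Decompression requires the very same dictionary, which is why a
# dictionary version must be kept as long as any datum references it.
d = zlib.decompressobj(zdict=zdict)
restored = d.decompress(shared) + d.flush()

assert restored == doc
assert len(shared) < len(plain)
print(len(doc), len(plain), len(shared))
```

Note how decompression fails without the dictionary bytes: the compressed stream only carries back-references into it, which is exactly the reason a dictionary version cannot simply be refreshed in place.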
"msg_date": "Tue, 7 Feb 2023 16:39:45 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi Andres,\n\n> > So to clarify, are we talking about tuple-level compression? Or\n> > perhaps page-level compression?\n>\n> Tuple level.\n\n> although my own patch proposed attribute-level compression, not\n> tuple-level one, it is arguably closer to tuple-level approach than\n> page-level one\n\nJust wanted to make sure that by tuple-level we mean the same thing.\n\nWhen saying tuple-level do you mean that the entire tuple should be\ncompressed as one large binary (i.e. similarly to page-level\ncompression but more granularly), or every single attribute should be\ncompressed separately (similarly to how TOAST does this)?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 9 Feb 2023 13:50:57 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi, \n\nOn February 9, 2023 2:50:57 AM PST, Aleksander Alekseev <aleksander@timescale.com> wrote:\n>Hi Andres,\n>\n>> > So to clarify, are we talking about tuple-level compression? Or\n>> > perhaps page-level compression?\n>>\n>> Tuple level.\n>\n>> although my own patch proposed attribute-level compression, not\n>> tuple-level one, it is arguably closer to tuple-level approach than\n>> page-level one\n>\n>Just wanted to make sure that by tuple-level we mean the same thing.\n>\n>When saying tuple-level do you mean that the entire tuple should be\n>compressed as one large binary (i.e. similarly to page-level\n>compression but more granularly), or every single attribute should be\n>compressed separately (similarly to how TOAST does this)?\n\nGood point - should have been clearer. I meant attribute wise compression. Like we do today, except that we would use a dictionary to increase compression rates.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Thu, 09 Feb 2023 03:01:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi!\n\nIf I understand Andres' message correctly - the proposition is to\nmake use of compression dictionaries automatic, possibly just setting\na parameter when the table is created, something like\nCREATE TABLE t ( ..., t JSONB USE DICTIONARY);\nThe question is then how to create such dictionaries automatically\nand extend them while data is being added to the table. Because\nit is not something unusual when after a time circumstances change\nand a rather small table is started to be loaded with huge amounts\nof data.\n\nI prefer extending a dictionary over re-creating it because while\ndictionary is recreated we leave users two choices - to wait until\ndictionary creation is over or to use the old version (say, kept as\nas a snapshot while a new one is created). Keeping many versions\nsimultaneously does not make sense and would extend DB size.\n\nAlso, compressing small data with a large dictionary (the case for\none-for-many tables dictionary), I think, would add some considerable\noverhead to the INSERT/UPDATE commands, so the most reasonable\nchoice is a per-table dictionary.\n\nAm I right?\n\nAny ideas on how to create and extend such dictionaries automatically?\n\nOn Thu, Feb 9, 2023 at 2:01 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On February 9, 2023 2:50:57 AM PST, Aleksander Alekseev <\n> aleksander@timescale.com> wrote:\n> >Hi Andres,\n> >\n> >> > So to clarify, are we talking about tuple-level compression? Or\n> >> > perhaps page-level compression?\n> >>\n> >> Tuple level.\n> >\n> >> although my own patch proposed attribute-level compression, not\n> >> tuple-level one, it is arguably closer to tuple-level approach than\n> >> page-level one\n> >\n> >Just wanted to make sure that by tuple-level we mean the same thing.\n> >\n> >When saying tuple-level do you mean that the entire tuple should be\n> >compressed as one large binary (i.e. 
similarly to page-level\n> >compression but more granularly), or every single attribute should be\n> >compressed separately (similarly to how TOAST does this)?\n>\n> Good point - should have been clearer. I meant attribute wise compression.\n> Like we do today, except that we would use a dictionary to increase\n> compression rates.\n>\n> Andres\n> --\n> Sent from my Android device with K-9 Mail. Please excuse my brevity.\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\n",
"msg_date": "Fri, 10 Feb 2023 21:22:14 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-10 21:22:14 +0300, Nikita Malakhov wrote:\n> If I understand Andres' message correctly - the proposition is to\n> make use of compression dictionaries automatic, possibly just setting\n> a parameter when the table is created, something like\n> CREATE TABLE t ( ..., t JSONB USE DICTIONARY);\n\nI definitely wouldn't make it automatic initially, and then later see how well\nthat works.\n\nWhether automatic or not, it probably makes sense to integrate building\ndictionaries with analyze. We can build dictionaries from sampled datums, we\ncan check how efficient a dictionary is when compressing sampled datums, we\ncan compare the efficiency of a new dictionary with the existing dictionary to\nsee whether it's worth a new one.\n\n\n> The question is then how to create such dictionaries automatically\n> and extend them while data is being added to the table. Because\n> it is not something unusual when after a time circumstances change\n> and a rather small table is started to be loaded with huge amounts\n> of data.\n\nIt doesn't really make sense to create the dictionaries with small tables,\nanyway. For them to be efficient, you need a reasonable amount of data to\nbuild a dictionary from.\n\n\n> I prefer extending a dictionary over re-creating it because while\n> dictionary is recreated we leave users two choices - to wait until\n> dictionary creation is over or to use the old version (say, kept as\n> as a snapshot while a new one is created). Keeping many versions\n> simultaneously does not make sense and would extend DB size.\n\nI don't think you really can extend dictionaries. The references into the\ndictionary are as small as possible, based on the contents of the\ndictionary. And you normally want to keep the dictionary size bounded for that\nreason alone, but also for [de]compression speed.\n\nSo you'd need to build a new dictionary, and use that going forward. And yes,\nyou'd not be able to delete the old dictionary, because there will still be\nreferences to the old one.\n\nWe could add a command to scan the data to see if an old dictionary is still\nused, or even remove references to it. But I don't think that's particularly\nimportant: It only makes sense to create the initial dictionary once some data\nhas accumulated, and creating further dictionaries only makes sense once the\ntable is a good bit larger. At that point the size of another dictionary to be\nstored isn't relevant in relation to the table size (otherwise you'd just give\nup after building a new dictionary and evaluating its effectiveness).\n\n\n> Also, compressing small data with a large dictionary (the case for\n> one-for-many tables dictionary), I think, would add some considerable\n> overhead to the INSERT/UPDATE commands, so the most reasonable\n> choice is a per-table dictionary.\n\nLikely even per-column, but I can see some advantages in either approach.\n\n\n> Any ideas on how to create and extend such dictionaries automatically?\n\nAs I said, I don't think we should extend dictionaries. For this to work we'll\nlikely need a new / extended compressed toast datum header of some form, with\na reference to the dictionary. That'd likely be needed even with updatable\ndictionaries, as we IIRC don't know which column a toasted datum is for, but\nwe need to know, to identify the dictionary. As we need that field anyway, we\ncan easily have multiple dictionaries.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Feb 2023 12:01:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi Andres,\n\n> As I said, I don't think we should extend dictionaries. For this to work we'll\n> likely need a new / extended compressed toast datum header of some form, with\n> a reference to the dictionary. That'd likely be needed even with updatable\n> dictionaries, as we IIRC don't know which column a toasted datum is for, but\n> we need to know, to identify the dictionary. As we need that field anyway, we\n> can easily have multiple dictionaries.\n\nSo I summarized the requirements we agreed on so far and ended up with\nthe following list:\n\n* This is going to be a PostgreSQL feature, not an extension, not a\nbunch of hooks, etc;\n* We are not going to support lazy/partial decompression since this is\ntoo complicated in a general case and Postgres is not a specialized\ndocument-oriented DBMS (there is a normalization after all);\n* This should be a relation-level optimization option, not something\nvisible to every user of the table (not a custom type, etc);\n* This is going to be attribute-level compression;\n* The dictionaries should be created automatically (maybe not in a PoC\nbut in the final implementation) since people are not good at it;\n* We are going to be using the existing compression algorithms like\nLZ4/ZSTD, not to invent new ones;\n* When created, a dictionary version is immutable, i.e. no new entries\ncan be added. A new version of a dictionary can be created when the data\nevolves. The compressed data stores the dictionary version used for\ncompression. A dictionary version can't be deleted while data exists\nthat uses this version of a dictionary;\n* Dictionaries are created automatically from sampled data during\nANALYZE. We compare the efficiency of a new dictionary vs the\nefficiency of the old one (or the lack of such) on sampled data and\ndepending on the results decide whether it's worth creating a new\nversion of a dictionary;\n* This is going to work for multiple types: TEXT, JSON, JSONB, XML,\nBYTEA etc. Ideally for user-defined types too;\n\nHopefully I didn't miss anything.\n\nWhile thinking about what a user interface could look like, it occurred\nto me that what we are discussing could be merely a new STORAGE\nstrategy. Currently there are PLAIN, MAIN, EXTERNAL and EXTENDED.\nLet's call a new strategy DICTIONARY, with typstorage = d.\n\nWhen a user wants a given attribute to be compressed, he/she says:\n\nALTER TABLE foo ALTER COLUMN bar SET STORAGE DICTIONARY;\n\nAnd the compression algorithm is chosen as usual:\n\nALTER TABLE foo ALTER COLUMN bar SET COMPRESSION lz4;\n\nWhen there are no dictionaries yet, DICTIONARY works the same as\nEXTENDED. When a dictionary is trained the data is compressed using\nthe latest version of this dictionary. For visibility we are going to\nneed some sort of pg_stat_dictionaries view that shows the existing\ndictionaries, how much space they consume, etc.\n\nIf we choose this approach there are a couple of questions/notes that\ncome to mind:\n\n* The default compression algorithm is PGLZ and unlike LZ4 it doesn't\nsupport training dictionaries yet. This should be straightforward to\nimplement though, or alternatively shared dictionaries could work only\nfor LZ4;\n* Currently users have control of toast_tuple_target but not\nTOAST_TUPLE_THRESHOLD. Which means for tuples smaller than 1/4 of the\npage size shared dictionaries are not going to be triggered. Which is\nnot necessarily a bad thing. Alternatively we could give the users a\ntoast_tuple_threshold setting. This shouldn't necessarily be part of\nthis CF entry discussion however, we can always discuss it separately;\n* Should we allow setting DICTIONARY storage strategy for a given\ntype, i.e. CREATE TYPE baz STORAGE = DICTIONARY? I suggest we forbid\nit in the first implementation, just for the sake of simplicity.\n* It looks like we won't be able to share a dictionary between\nmultiple columns. Which again is not necessarily a bad thing: data in\nthese columns can be completely different (e.g. BYTEA and XML),\ncolumns can be dropped independently, etc. If a user is interested in\nsharing a dictionary between several columns he/she can combine these\ncolumns into a single JSONB column.\n* TOAST currently doesn't support ZSTD. IMO this is not a big deal and\nadding the corresponding support can be discussed separately.\n* If memory serves, there were not so many free bits left in TOAST\npointers. The pointers don't store a storage strategy though so\nhopefully this will not be a problem. We'll see.\n\nPlease let me know what you think about all this. I'm going to prepare\nan updated patch for the next CF so I could use early feedback.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
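The versioning rules in the summary above (immutable dictionary versions, a version identifier stored with each compressed datum, no deletion while references exist) can be modeled with a small registry. This is an illustrative sketch, not the proposed C implementation; zlib stands in for LZ4/ZSTD and the registry/refcount names are made up:

```python
# Toy model of immutable dictionary versions: each compressed datum
# records the dictionary version used, and a version cannot be dropped
# while datums still reference it.
import zlib

class DictionaryRegistry:
    def __init__(self):
        self.versions = {}   # version id -> dictionary bytes (immutable)
        self.refcount = {}   # version id -> datums compressed with it
        self.latest = 0

    def add_version(self, zdict):
        # A new version is appended; existing versions are never modified.
        self.latest += 1
        self.versions[self.latest] = zdict
        self.refcount[self.latest] = 0
        return self.latest

    def compress(self, data):
        v = self.latest
        c = zlib.compressobj(zdict=self.versions[v])
        self.refcount[v] += 1
        return v, c.compress(data) + c.flush()   # version travels with datum

    def decompress(self, datum):
        v, payload = datum
        d = zlib.decompressobj(zdict=self.versions[v])
        return d.decompress(payload) + d.flush()

    def drop_version(self, v):
        if self.refcount[v]:
            raise ValueError('dictionary version still referenced')
        del self.versions[v]

reg = DictionaryRegistry()
reg.add_version(b'"status": "active"')
old = reg.compress(b'{"status": "active", "n": 1}')

reg.add_version(b'"status": "disabled""status": "active"')  # data evolved
new = reg.compress(b'{"status": "disabled", "n": 2}')

# Old datums still decompress with the dictionary version they recorded.
assert reg.decompress(old) == b'{"status": "active", "n": 1}'
assert reg.decompress(new) == b'{"status": "disabled", "n": 2}'
```

In the actual feature the version reference would live in the (extended) compressed datum header and the refcounting role would be played by VACUUM-style scans or dependency tracking, as discussed earlier in the thread.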
"msg_date": "Tue, 18 Apr 2023 17:27:48 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "On Tue, 18 Apr 2023 at 17:28, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi Andres,\n>\n> > As I said, I don't think we should extend dictionaries. For this to work we'll\n> > likely need a new / extended compressed toast datum header of some form, with\n> > a reference to the dictionary. That'd likely be needed even with updatable\n> > dictionaries, as we IIRC don't know which column a toasted datum is for, but\n> > we need to know, to identify the dictionary. As we need that field anyway, we\n> > can easily have multiple dictionaries.\n>\n> So I summarized the requirements we agreed on so far and ended up with\n> the following list:\n>\n> * This is going to be a PostgreSQL feature, not an extension, not a\n> bunch of hooks, etc;\n> * We are not going to support lazy/partial decompression since this is\n> too complicated in a general case and Postgres is not a specialized\n> document-oriented DBMS (there is a normalization after all);\n> * This should be a relation-level optimization option, not something\n> visible to every user of the table (not a custom type, etc);\n> * This is going to be an attribute-level compression;\n> * The dictionaries should be created automatically (maybe not in a PoC\n> but in the final implementation) since people are not good at it;\n> * We are going to be using the existing compression algorithms like\n> LZ4/ZSTD, not to invent new ones;\n> * When created, a dictionary version is immutable, i.e. no new entries\n> can be added. New version of a dictionary can be created when the data\n> evolves. The compressed data stores the dictionary version used for\n> compression. A dictionary version can't be deleted while data exists\n> that uses this version of a dictionary;\n> * Dictionaries are created automatically from sampled data during\n> ANALIZE. 
We compare the efficiency of a new dictionary vs the\n> efficiency of the old one (or the lack of such) on sampled data and\n> depending on the results decide whether it's worth creating a new\n> version of a dictionary;\n> * This is going to work for multiple types: TEXT, JSON, JSONB, XML,\n> BYTEA etc. Ideally for user-defined types too;\n\nAny type with typlen < 0 should work, right?\n\n> Hopefully I didn't miss anything.\n>\n> While thinking about how a user interface could look like it occured\n> to me that what we are discussing could be merely a new STORAGE\n> strategy. Currently there are PLAIN, MAIN, EXTERNAL and EXTENDED.\n> Let's call a new strategy DICTIONARY, with typstorage = d.\n\nThe use of dictionaries should be dependent on only the use of a\ncompression method that supports pre-computed compression\ndictionaries. I think storage=MAIN + compression dictionaries should\nbe supported, to make sure there is no expensive TOAST lookup for the\nattributes of the tuple; but that doesn't seem to be an option with\nthat design.\n\n> When user wants a given attribute to be compressed, he/she says:\n>\n> ALTER TABLE foo ALTER COLUMN bar SET STORAGE DICTIONARY;\n>\n> And the compression algorithms is chosen as usual:\n>\n> ALTER TABLE foo ALTER COLUMN bar SET COMPRESSION lz4;\n>\n> When there are no dictionaries yet, DICTIONARY works the same as\n> EXTENDED. When a dictionary is trained the data is compressed using\n> the latest version of this dictionary. 
For visibility we are going to\n> need some sort of pg_stat_dictionaries view that shows the existing\n> dictionaries, how much space they consume, etc.\n\nI think \"AT_AC SET COMPRESSION lz4 {[WITH | WITHOUT] DICTIONARY}\",\n\"AT_AC SET COMPRESSION lz4-dictionary\", or \"AT_AC SET\ncompression_dictionary = on\" would be better from a design\nperspective.\n\n> If we choose this approach there are a couple of questions/notes that\n> come to mind:\n>\n> * The default compression algorithm is PGLZ and unlike LZ4 it doesn't\n> support training dictionaries yet. This should be straightforward to\n> implement though, or alternatively shared dictionaries could work only\n> for LZ4;\n\nDidn't we get zstd support recently as well?\n\n> * Currently users have control of toast_tuple_target but not\n> TOAST_TUPLE_THRESHOLD. Which means for tuples smaller than 1/4 of the\n> page size shared dictionaries are not going to be triggered. Which is\n> not necessarily a bad thing. Alternatively we could give the users\n> toast_tuple_threshold setting. This shouldn't necessarily be part of\n> this CF entry discussion however, we can always discuss it separately;\n\nThat makes a lot of sense, but as you said handling that separately\nwould probably be better and easier to review.\n\n> * Should we allow setting DICTIONARY storage strategy for a given\n> type, i.e. CREATE TYPE baz STORAGE = DICTIONARY? I suggest we forbid\n> it in the first implementation, just for the sake of simplicity.\n\nCan we specify a default compression method for each postgresql type,\njust like how we specify the default storage? If not, then the setting\ncould realistically be in conflict with a default_toast_compression\nsetting, assuming that dictionary support is not a requirement for\ncolumn compression methods.\n\n> * It looks like we won't be able to share a dictionary between\n> multiple columns. Which again is not necessarily a bad thing: data in\n> these columns can be completely different (e.g. 
BYTEA and XML),\n> columns can be dropped independently, etc.\n\nYes\n\n> If a user is interested in\n> sharing a dictionary between several columns he/she can join these\n> columns in a single JSONB column.\n\nIt is unreasonable to expect this to be possible, because e.g.\npartitioning can result in columns that share compressible patterns\nending up on different physical tables.\n\n> * TOAST currently doesn't support ZSTD. IMO this is not a big deal and\n> adding the corresponding support can be discussed separately.\n> * If memory serves, there were not so many free bits left in TOAST\n> pointers. The pointers don't store a storage strategy though so\n> hopefully this will not be a problem. We'll see.\n\nThe toast pointer must store enough info about the compression used to\ndecompress the datum, which implies it needs to store the compression\nalgorithm used, and a reference to the compression dictionary (if\nany). I think the idea about introducing a new toast pointer type (in\nthe custom toast patch) wasn't bad per se, and that change would allow\nus to carry more or different info in the header.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 18 Apr 2023 18:40:10 +0300",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi,\n\nI don't think it's a good idea to interfere with the storage strategies.\nDictionary should be a kind of storage option, like compression, not a\nstrategy that excludes all the others.\n\n>> While thinking about how a user interface could look like it occured\n>> to me that what we are discussing could be merely a new STORAGE\n>> strategy. Currently there are PLAIN, MAIN, EXTERNAL and EXTENDED.\n>> Let's call a new strategy DICTIONARY, with typstorage = d.\n\n>I think \"AT_AC SET COMPRESSION lz4 {[WITH | WITHOUT] DICTIONARY}\",\n>\"AT_AC SET COMPRESSION lz4-dictionary\", or \"AT_AC SET\n>compression_dictionary = on\" would be better from a design\n>perspective.\n\nAgree with Matthias on the above.\n\nAbout the TOAST pointer:\n\n>The toast pointer must store enough info about the compression used to\n>decompress the datum, which implies it needs to store the compression\n>algorithm used, and a reference to the compression dictionary (if\n>any). I think the idea about introducing a new toast pointer type (in\n>the custom toast patch) wasn't bad per se, and that change would allow\n>us to carry more or different info in the header.\n\nThe External TOAST pointer is very limited in the amount of service data\nit can keep; that's why we introduced the Custom TOAST pointers in the\nPluggable TOAST. But keep in mind that changing the TOAST pointer\nstructure requires a lot of quite heavy modifications in the core: along\nwith the obvious places like datum insert/update/delete, there is a very\nserious issue with logical replication.\nThe Pluggable TOAST was rejected, but we have a lot of improvements\nbased on changing the TOAST pointer structure.\n\nOn Tue, Apr 18, 2023 at 6:40 PM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> On Tue, 18 Apr 2023 at 17:28, Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> >\n> > Hi Andres,\n> >\n> > > As I said, I don't think we should extend dictionaries. 
For this to\n> work we'll\n> > > likely need a new / extended compressed toast datum header of some\n> form, with\n> > > a reference to the dictionary. That'd likely be needed even with\n> updatable\n> > > dictionaries, as we IIRC don't know which column a toasted datum is\n> for, but\n> > > we need to know, to identify the dictionary. As we need that field\n> anyway, we\n> > > can easily have multiple dictionaries.\n> >\n> > So I summarized the requirements we agreed on so far and ended up with\n> > the following list:\n> >\n> > * This is going to be a PostgreSQL feature, not an extension, not a\n> > bunch of hooks, etc;\n> > * We are not going to support lazy/partial decompression since this is\n> > too complicated in a general case and Postgres is not a specialized\n> > document-oriented DBMS (there is a normalization after all);\n> > * This should be a relation-level optimization option, not something\n> > visible to every user of the table (not a custom type, etc);\n> > * This is going to be an attribute-level compression;\n> > * The dictionaries should be created automatically (maybe not in a PoC\n> > but in the final implementation) since people are not good at it;\n> > * We are going to be using the existing compression algorithms like\n> > LZ4/ZSTD, not to invent new ones;\n> > * When created, a dictionary version is immutable, i.e. no new entries\n> > can be added. New version of a dictionary can be created when the data\n> > evolves. The compressed data stores the dictionary version used for\n> > compression. A dictionary version can't be deleted while data exists\n> > that uses this version of a dictionary;\n> > * Dictionaries are created automatically from sampled data during\n> > ANALIZE. 
We compare the efficiency of a new dictionary vs the\n> > efficiency of the old one (or the lack of such) on sampled data and\n> > depending on the results decide whether it's worth creating a new\n> > version of a dictionary;\n> > * This is going to work for multiple types: TEXT, JSON, JSONB, XML,\n> > BYTEA etc. Ideally for user-defined types too;\n>\n> Any type with typlen < 0 should work, right?\n>\n> > Hopefully I didn't miss anything.\n> >\n> > While thinking about how a user interface could look like it occured\n> > to me that what we are discussing could be merely a new STORAGE\n> > strategy. Currently there are PLAIN, MAIN, EXTERNAL and EXTENDED.\n> > Let's call a new strategy DICTIONARY, with typstorage = d.\n>\n> The use of dictionaries should be dependent on only the use of a\n> compression method that supports pre-computed compression\n> dictionaries. I think storage=MAIN + compression dictionaries should\n> be supported, to make sure there is no expensive TOAST lookup for the\n> attributes of the tuple; but that doesn't seem to be an option with\n> that design.\n>\n> > When user wants a given attribute to be compressed, he/she says:\n> >\n> > ALTER TABLE foo ALTER COLUMN bar SET STORAGE DICTIONARY;\n> >\n> > And the compression algorithms is chosen as usual:\n> >\n> > ALTER TABLE foo ALTER COLUMN bar SET COMPRESSION lz4;\n> >\n> > When there are no dictionaries yet, DICTIONARY works the same as\n> > EXTENDED. When a dictionary is trained the data is compressed using\n> > the latest version of this dictionary. 
For visibility we are going to\n> > need some sort of pg_stat_dictionaries view that shows the existing\n> > dictionaries, how much space they consume, etc.\n>\n> I think \"AT_AC SET COMPRESSION lz4 {[WITH | WITHOUT] DICTIONARY}\",\n> \"AT_AC SET COMPRESSION lz4-dictionary\", or \"AT_AC SET\n> compression_dictionary = on\" would be better from a design\n> perspective.\n>\n> > If we choose this approach there are a couple of questions/notes that\n> > come to mind:\n> >\n> > * The default compression algorithm is PGLZ and unlike LZ4 it doesn't\n> > support training dictionaries yet. This should be straightforward to\n> > implement though, or alternatively shared dictionaries could work only\n> > for LZ4;\n>\n> Didn't we get zstd support recently as well?\n>\n> > * Currently users have control of toast_tuple_target but not\n> > TOAST_TUPLE_THRESHOLD. Which means for tuples smaller than 1/4 of the\n> > page size shared dictionaries are not going to be triggered. Which is\n> > not necessarily a bad thing. Alternatively we could give the users\n> > toast_tuple_threshold setting. This shouldn't necessarily be part of\n> > this CF entry discussion however, we can always discuss it separately;\n>\n> That makes a lot of sense, but as you said handling that separately\n> would probably be better and easier to review.\n>\n> > * Should we allow setting DICTIONARY storage strategy for a given\n> > type, i.e. CREATE TYPE baz STORAGE = DICTIONARY? I suggest we forbid\n> > it in the first implementation, just for the sake of simplicity.\n>\n> Can we specify a default compression method for each postgresql type,\n> just like how we specify the default storage? If not, then the setting\n> could realistically be in conflict with a default_toast_compression\n> setting, assuming that dictionary support is not a requirement for\n> column compression methods.\n>\n> > * It looks like we won't be able to share a dictionary between\n> > multiple columns. 
Which again is not necessarily a bad thing: data in\n> > these columns can be completely different (e.g. BYTEA and XML),\n> > columns can be dropped independently, etc.\n>\n> Yes\n>\n> > If a user is interested in\n> > sharing a dictionary between several columns he/she can join these\n> > columns in a single JSONB column.\n>\n> It is unreasonable to expect this to be possible, due to e.g.\n> partitioning resulting in columns that share compressible patterns to\n> be on different physical tables.\n>\n> > * TOAST currently doesn't support ZSTD. 
IMO this is not a big deal and\n> > adding the corresponding support can be discussed separately.\n> > * If memory serves, there were not so many free bits left in TOAST\n> > pointers. 
The pointers don't store a storage strategy though so\n> > hopefully this will not be a problem. 
We'll see.\n>\n> The toast pointer must store enough info about the compression used to\n> decompress the datum, which implies it needs to store the compression\n> algorithm used, and a reference to the compression dictionary (if\n> any). 
I think the idea about introducing a new toast pointer type (in\n> the custom toast patch) wasn't bad per se, and that change would allow\n> us to carry more or different info in the header.\n>\n> Kind regards,\n>\n> Matthias van de Meent\n>\n\nHi,\n\nI don't think it's a good idea to interfere with the storage strategies.\nDictionary should be a kind of storage option, like a compression, but not\nthe strategy declining all others.\n\n>> While thinking about how a user interface could look like it occurred\n>> to me that what we are discussing could be merely a new STORAGE\n>> strategy. 
Currently there are PLAIN, MAIN, EXTERNAL and EXTENDED.\n>> Let's call a new strategy DICTIONARY, with typstorage = d.\n>\n> I think \"AT_AC SET COMPRESSION lz4 {[WITH | WITHOUT] DICTIONARY}\",\n> \"AT_AC SET COMPRESSION lz4-dictionary\", or \"AT_AC SET\n> compression_dictionary = on\" would be better from a design\n> perspective.\n\nAgree with Matthias on above.\n\nAbout the TOAST pointer:\n\n> The toast pointer must store enough info about the compression used to\n> decompress the datum, which implies it needs to store the compression\n> algorithm used, and a reference to the compression dictionary (if\n> any). 
I think the idea about introducing a new toast pointer type (in\n> the custom toast patch) wasn't bad per se, and that change would allow\n> us to carry more or different info in the header.\n\nThe External TOAST pointer is very limited to the amount of service data\nit could keep, that's why we introduced the Custom TOAST pointers in the\nPluggable TOAST. 
But keep in mind that changing the TOAST pointer\nstructure requires a lot of quite heavy modifications in the core - along\nwith some obvious places like insert/update/delete datum there is very\nserious issue with logical replication.\nThe Pluggable TOAST was rejected, but we have a lot of improvements\nbased on changing the TOAST pointer structure.\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n",
"msg_date": "Tue, 18 Apr 2023 19:12:08 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Matthias, Nikita,\n\nMany thanks for the feedback!\n\n> Any type with typlen < 0 should work, right?\n\nRight.\n\n> The use of dictionaries should be dependent on only the use of a\n> compression method that supports pre-computed compression\n> dictionaries. I think storage=MAIN + compression dictionaries should\n> be supported, to make sure there is no expensive TOAST lookup for the\n> attributes of the tuple; but that doesn't seem to be an option with\n> that design.\n\n> I don't think it's a good idea to interfere with the storage strategies. Dictionary\n> should be a kind of storage option, like a compression, but not the strategy\n> declining all others.\n\nMy reasoning behind this proposal was as follows.\n\nLet's not forget that MAIN attributes *can* be stored in a TOAST table\nas a final resort, and also that EXTENDED attributes are compressed\nin-place first, and are stored in a TOAST table *only* if this is\nneeded to fit a tuple in toast_tuple_target bytes (which additionally\nuser can change). So whether in practice it's going to be advantageous\nto distinguish MAIN+dict.compressed and EXTENDED+dict.compressed\nattributes seems to be debatable.\n\nBasically the only difference between MAIN and EXTENDED is the\npriority the four-stage TOASTing algorithm gives to the corresponding\nattributes. I would assume if the user wants dictionary compression,\nthe attribute should be highly compressible and thus always EXTENDED.\n(We seem to use MAIN for types that are not that well compressible.)\n\nThis being said, if the majority believes we should introduce a new\nentity and keep storage strategies as is, I'm fine with that. This\nperhaps is not going to be the most convenient interface for the user.\nOn the flip side it's going to be flexible. 
It's all about compromise.\n\n> I think \"AT_AC SET COMPRESSION lz4 {[WITH | WITHOUT] DICTIONARY}\",\n> \"AT_AC SET COMPRESSION lz4-dictionary\", or \"AT_AC SET\n> compression_dictionary = on\" would be better from a design\n> perspective.\n\n> Agree with Matthias on above.\n\nOK, unless someone will object, we have a consensus here.\n\n> Didn't we get zstd support recently as well?\n\nUnfortunately, it is not used for TOAST. In fact I vaguely recall that\nZSTD support for TOAST may have been explicitly rejected. Don't quote\nme on that however...\n\nI think it's going to be awkward to support PGLZ/LZ4 for COMPRESSION\nand LZ4/ZSTD for dictionary compression. As a user personally I would\nprefer having one set of compression algorithms that can be used with\nTOAST.\n\nPerhaps for PoC we could focus on LZ4, and maybe PGLZ, if we choose to\nuse PGLZ for compression dictionaries too. We can always discuss ZSTD\nseparately.\n\n> Can we specify a default compression method for each postgresql type,\n> just like how we specify the default storage? If not, then the setting\n> could realistically be in conflict with a default_toast_compression\n> setting, assuming that dictionary support is not a requirement for\n> column compression methods.\n\nNo, only STORAGE can be specified [1].\n\n> The toast pointer must store enough info about the compression used to\n> decompress the datum, which implies it needs to store the compression\n> algorithm used, and a reference to the compression dictionary (if\n> any). I think the idea about introducing a new toast pointer type (in\n> the custom toast patch) wasn't bad per se, and that change would allow\n> us to carry more or different info in the header.\n\n> The Pluggable TOAST was rejected, but we have a lot of improvements\n> based on changing the TOAST pointer structure.\n\nInterestingly it looks like we ended up working on TOAST improvement\nafter all. 
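\n\nA quick aside on why pre-trained dictionaries are worth the trouble at all:\nthe effect is easy to demonstrate even with stock Python, whose zlib module\naccepts a preset dictionary via zdict. This is purely illustrative -- the\nfeature discussed in this thread targets LZ4/ZSTD, and the sample data below\nis made up:\n\n```python
# Toy demonstration of preset-dictionary compression with zlib's zdict
# parameter (illustrative only; not the LZ4/ZSTD code paths discussed here).
import zlib

# Short documents sharing a lot of structure, like repeated JSONB keys would.
docs = [b'customer_id=%d;status=active;plan=enterprise;region=eu-west-1' % i
        for i in range(16)]

# A dictionary pre-trained on sampled data; here we simply concatenate a few
# sample documents, which is roughly what zstd's dictionary trainer automates.
dictionary = b''.join(docs[:4])

def compressed_size(doc, zdict=None):
    c = zlib.compressobj(zdict=zdict) if zdict else zlib.compressobj()
    return len(c.compress(doc) + c.flush())

plain = sum(compressed_size(d) for d in docs)
with_dict = sum(compressed_size(d, dictionary) for d in docs)

# Round trip: the decompressor must be given the very same dictionary.
c = zlib.compressobj(zdict=dictionary)
blob = c.compress(docs[5]) + c.flush()
d = zlib.decompressobj(zdict=dictionary)
assert d.decompress(blob) == docs[5]

# For short repetitive values the dictionary helps considerably.
assert with_dict < plain
```\n\nThe round-trip requirement is exactly why a dictionary version has to stay\nimmutable and be referenced from the compressed datum.\n\n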
I'm almost certain that we will have to modify TOAST\npointers to a certain degree in order to make it work. Hopefully it's\nnot going to be too invasive.\n\n[1]: https://www.postgresql.org/docs/current/sql-createtype.html\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 18 Apr 2023 20:21:14 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi Nikita,\n\n> The External TOAST pointer is very limited to the amount of service data\n> it could keep, that's why we introduced the Custom TOAST pointers in the\n> Pluggable TOAST. But keep in mind that changing the TOAST pointer\n> structure requires a lot of quite heavy modifications in the core - along with\n> some obvious places like insert/update/delete datum there is very serious\n> issue with logical replication.\n> The Pluggable TOAST was rejected, but we have a lot of improvements\n> based on changing the TOAST pointer structure.\n\nNow I see what you meant [1]. I agree that we should focus on\nrefactoring TOAST pointers first. So I suggest we continue discussing\nthis in a corresponding thread and return to this one later.\n\n[1]: https://www.postgresql.org/message-id/CAJ7c6TPSvR2rKpoVX5TSXo_kMxXF%2B-SxLtrpPaMf907tX%3DnVCw%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 26 Apr 2023 16:00:08 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi,\n\nI think I should open a new thread related to TOAST pointer refactoring\nbased on Pluggable TOAST, COPY and looping in retrieving new TOAST\nvalue OID issues.\n\nOn Wed, Apr 26, 2023 at 4:00 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Nikita,\n>\n> > The External TOAST pointer is very limited to the amount of service data\n> > it could keep, that's why we introduced the Custom TOAST pointers in the\n> > Pluggable TOAST. 
But keep in mind that changing the TOAST pointer\n> > structure requires a lot of quite heavy modifications in the core -\n> along with\n> > some obvious places like insert/update/delete datum there is very serious\n> > issue with logical replication.\n> > The Pluggable TOAST was rejected, but we have a lot of improvements\n> > based on changing the TOAST pointer structure.\n>\n> Now I see what you meant [1]. 
I agree that we should focus on\n> refactoring TOAST pointers first. So I suggest we continue discussing\n> this in a corresponding thread and return to this one later.\n>\n> [1]:\n> https://www.postgresql.org/message-id/CAJ7c6TPSvR2rKpoVX5TSXo_kMxXF%2B-SxLtrpPaMf907tX%3DnVCw%40mail.gmail.com\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\n\n--\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n",
"msg_date": "Thu, 27 Apr 2023 13:43:59 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi hackers!\n\nAs discussed above, I've created a new thread on the Extension of the TOAST\npointer subject -\nhttps://www.postgresql.org/message-id/flat/CAN-LCVMq2X%3Dfhx7KLxfeDyb3P%2BBXuCkHC0g%3D9GF%2BJD4izfVa0Q%40mail.gmail.com\nPlease check and comment.\n\nOn Thu, Apr 27, 2023 at 1:43 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi,\n>\n> I think I should open a new thread related to TOAST pointer refactoring\n> based on Pluggable TOAST, COPY and looping in retrieving new TOAST\n> value OID issues.\n>\n> On Wed, Apr 26, 2023 at 4:00 PM Aleksander Alekseev <\n> aleksander@timescale.com> wrote:\n>\n>> Hi Nikita,\n>>\n>> > The External TOAST pointer is very limited to the amount of service data\n>> > it could keep, that's why we introduced the Custom TOAST pointers in the\n>> > Pluggable TOAST. But keep in mind that changing the TOAST pointer\n>> > structure requires a lot of quite heavy modifications in the core -\n>> along with\n>> > some obvious places like insert/update/delete datum there is very\n>> serious\n>> > issue with logical replication.\n>> > The Pluggable TOAST was rejected, but we have a lot of improvements\n>> > based on changing the TOAST pointer structure.\n>>\n>> Now I see what you meant [1]. I agree that we should focus on\n>> refactoring TOAST pointers first. 
So I suggest we continue discussing\n>> this in a corresponding thread and return to this one later.\n>>\n>> [1]:\n>> https://www.postgresql.org/message-id/CAJ7c6TPSvR2rKpoVX5TSXo_kMxXF%2B-SxLtrpPaMf907tX%3DnVCw%40mail.gmail.com\n>>\n>> --\n>> Best regards,\n>> Aleksander Alekseev\n>>\n>\n>\n> --\n> Regards,\n>\n> --\n> Nikita Malakhov\n> Postgres Professional\n> The Russian Postgres Company\n> https://postgrespro.ru/\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n",
"msg_date": "Wed, 17 May 2023 14:32:13 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi hackers,\n\nI would like to continue discussing compression dictionaries.\n\n> So I summarized the requirements we agreed on so far and ended up with\n> the following list: [...]\n\nAgain, here is the summary of our current agreements, at least how I\nunderstand them. 
Please feel free to correct me where I'm wrong.\n\nWe are going to focus on supporting the:\n\n```\nSET COMPRESSION lz4 [WITH|WITHOUT] DICTIONARY\n```\n\n... syntax for now. 
From the UI perspective the rest of the agreements\ndidn't change compared to the previous summary.\n\nIn the [1] discussion (cc: Robert) we agreed to use va_tag != 18 for\nthe on-disk TOAST pointer representation to make TOAST pointers\nextendable. 
If va_tag has a different value (currently it's always\n18), the TOAST pointer is followed by a utf8-like varint bitmask.\nThis bitmask determines the rest of the content of the TOAST pointer\nand its overall size. 
This will allow us to extend TOAST pointers to\ninclude dictionary_id and also to extend them in the future, e.g. to\nsupport ZSTD and other compression algorithms, use 64-bit TOAST\npointers, etc.\n\nSeveral things occurred to me:\n\n- Does anyone believe that va_tag should be part of the utf8-like\nbitmask in order to save a byte or two?\n\n- The described approach means that compression dictionaries are not\ngoing to be used when data is compressed in-place (i.e. within a\ntuple), since no TOAST pointer is involved in this case. 
Also we will\nbe unable to add additional compression algorithms here. Does anyone\nhave problems with this? Should we use the reserved compression\nalgorithm id instead as a marker of an extended TOAST?\n\n- It would be nice to decompose the feature in several independent\npatches, e.g. modify TOAST first, then add compression dictionaries\nwithout automatic update of the dictionaries, then add the automatic\nupdate. 
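\n\nTo make the bitmask idea a bit more concrete, here is a rough sketch of such\nan encoding in Python. This is purely illustrative -- the flag names, field\nwidths and values are made up for the example and are not part of the patch:\n\n```python
# Hypothetical sketch of a utf8-like varint bitmask following the TOAST
# pointer header (illustrative only; not PostgreSQL code).
import struct

HAS_DICTIONARY_ID = 1 << 0    # a 4-byte dictionary id follows the bitmask
HAS_COMPRESSION_ALG = 1 << 1  # a 1-byte compression algorithm id follows

def encode_bitmask(flags):
    # 7 flag bits per byte; the high bit of each byte means one more
    # bitmask byte follows, so the bitmask itself can grow later.
    out = bytearray()
    while True:
        byte = flags & 0x7F
        flags >>= 7
        if flags:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def decode_bitmask(buf):
    # Returns (flags, number of bitmask bytes consumed).
    flags, shift, pos = 0, 0, 0
    while True:
        byte = buf[pos]
        pos += 1
        flags |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            return flags, pos

def encode_pointer_tail(dictionary_id=None, compression_alg=None):
    # Optional fields are appended in flag-bit order after the bitmask.
    flags, tail = 0, b''
    if dictionary_id is not None:
        flags |= HAS_DICTIONARY_ID
        tail += struct.pack('<I', dictionary_id)
    if compression_alg is not None:
        flags |= HAS_COMPRESSION_ALG
        tail += struct.pack('<B', compression_alg)
    return encode_bitmask(flags) + tail

tail = encode_pointer_tail(dictionary_id=42, compression_alg=2)
flags, n = decode_bitmask(tail)
assert flags == HAS_DICTIONARY_ID | HAS_COMPRESSION_ALG
assert struct.unpack('<I', tail[n:n + 4])[0] == 42
```\n\nOne question such a sketch doesn't answer is how a reader that does not know\na given flag bit can still compute that field's length in order to skip it.\n\n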
I find it difficult to imagine however how to modify TOAST\npointers and test the code properly without a dependency on a larger\nfeature. Could anyone think of a trivial test case for extendable\nTOAST? Maybe something we could add to src/test/modules similarly to\nhow we test SLRU, background workers, etc.\n\n[1]: https://www.postgresql.org/message-id/flat/CAN-LCVMq2X%3Dfhx7KLxfeDyb3P%2BBXuCkHC0g%3D9GF%2BJD4izfVa0Q%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 12 Oct 2023 13:28:48 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 4:16 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi hackers,\n>\n> > 8272749e added a few more arguments to CastCreate(). Here is the rebased patch.\n>\n> After merging afbfc029 [1] the patch needed a rebase. PFA v10.\n>\n> The patch is still in a PoC state and this is exactly why comments and\n> suggestions from the community are most welcome! Particularly I would\n> like to know:\n>\n> 1. Would you call it a wanted feature considering the existence of\n> Pluggable TOASTer patchset which (besides other things) tries to\n> introduce type-aware TOASTers for EXTERNAL attributes? I know what\n> Simon's [2] and Nikita's latest answers were, and I know my personal\n> opinion on this [3][4], but I would like to hear from the rest of the\n> community.\n>\n> 2. How should we make sure a dictionary will not consume all the\n> available memory? Limiting the amount of dictionary entries to pow(2,\n> 16) and having dictionary versions seems to work OK for ZSON. However\n> it was pointed out that this may be an unwanted limitation for the\n> in-core implementation.\n>\n> [1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=c727f511;hp=afbfc02983f86c4d71825efa6befd547fe81a926\n> [2]: https://www.postgresql.org/message-id/CANbhV-HpCF852WcZuU0wyh1jMU4p6XLbV6rCRkZpnpeKQ9OenQ%40mail.gmail.com\n> [3]: https://www.postgresql.org/message-id/CAJ7c6TN-N3%3DPSykmOjmW1EAf9YyyHFDHEznX-5VORsWUvVN-5w%40mail.gmail.com\n> [4]: https://www.postgresql.org/message-id/CAJ7c6TO2XTTk3cu5w6ePHfhYQkoNpw7u1jeqHf%3DGwn%2BoWci8eA%40mail.gmail.com\n\nI tried to apply the patch but it is failing at the Head. 
It is giving\nthe following error:\npatching file src/test/regress/expected/dict.out\npatching file src/test/regress/expected/oidjoins.out\npatching file src/test/regress/expected/opr_sanity.out\npatching file src/test/regress/parallel_schedule\nHunk #1 FAILED at 111.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/test/regress/parallel_schedule.rej\npatching file src/test/regress/sql/dict.sql\nPlease send a rebased version of the patch.\n\nThanks and Regards,\nShubham Khanna.\n\n\n",
"msg_date": "Wed, 17 Jan 2024 16:20:22 +0530",
"msg_from": "Shubham Khanna <khannashubham1197@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi Shubham,\n\n> > > 8272749e added a few more arguments to CastCreate(). Here is the rebased patch.\n> >\n> > After merging afbfc029 [1] the patch needed a rebase. 
PFA v10.\n> >\n> > The patch is still in a PoC state and this is exactly why comments and\n> > suggestions from the community are most welcome! Particularly I would\n> > like to know:\n> >\n> > 1. 
Would you call it a wanted feature considering the existence of\n> > Pluggable TOASTer patchset which (besides other things) tries to\n> > introduce type-aware TOASTers for EXTERNAL attributes? I know what\n> > Simon's [2] and Nikita's latest answers were, and I know my personal\n> > opinion on this [3][4], but I would like to hear from the rest of the\n> > community.\n> >\n> > 2. 
How should we make sure a dictionary will not consume all the\n> > available memory? Limiting the amount of dictionary entries to pow(2,\n> > 16) and having dictionary versions seems to work OK for ZSON. However\n> > it was pointed out that this may be an unwanted limitation for the\n> > in-core implementation.\n> >\n> > [1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=c727f511;hp=afbfc02983f86c4d71825efa6befd547fe81a926\n> > [2]: https://www.postgresql.org/message-id/CANbhV-HpCF852WcZuU0wyh1jMU4p6XLbV6rCRkZpnpeKQ9OenQ%40mail.gmail.com\n> > [3]: https://www.postgresql.org/message-id/CAJ7c6TN-N3%3DPSykmOjmW1EAf9YyyHFDHEznX-5VORsWUvVN-5w%40mail.gmail.com\n> > [4]: https://www.postgresql.org/message-id/CAJ7c6TO2XTTk3cu5w6ePHfhYQkoNpw7u1jeqHf%3DGwn%2BoWci8eA%40mail.gmail.com\n>\n> I tried to apply the patch but it is failing at HEAD. 
It is giving\n> the following error:\n\nYes it does for a while now. Until we reach any agreement regarding\nquestions (1) and (2) personally I don't see the point in submitting\nrebased patches. We can continue the discussion, but marking the CF entry\nas RwF for now would be helpful.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 17 Jan 2024 17:21:54 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi again,\n\n> Yes it does for a while now. Until we reach any agreement regarding\n> questions (1) and (2) personally I don't see the point in submitting\n> rebased patches. We can continue the discussion but mark the CF entry\n> as RwF for now it will be helpful.\n\nSorry, what I actually meant were the following questions:\n\n\"\"\"\nSeveral things occured to me:\n\n- Does anyone believe that va_tag should be part of the utf8-like\nbitmask in order to save a byte or two?\n\n- The described approach means that compression dictionaries are not\ngoing to be used when data is compressed in-place (i.e. within a\ntuple), since no TOAST pointer is involved in this case. Also we will\nbe unable to add additional compression algorithms here. Does anyone\nhave problems with this? Should we use the reserved compression\nalgorithm id instead as a marker of an extended TOAST?\n\n- It would be nice to decompose the feature in several independent\npatches, e.g. modify TOAST first, then add compression dictionaries\nwithout automatic update of the dictionaries, then add the automatic\nupdate. I find it difficult to imagine however how to modify TOAST\npointers and test the code properly without a dependency on a larger\nfeature. Could anyone think of a trivial test case for extendable\nTOAST? Maybe something we could add to src/test/modules similarly to\nhow we test SLRU, background workers, etc.\n\"\"\"\n\nSince there was not much activity since then (for 3 months) I don't\nreally see how to process further.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 17 Jan 2024 17:27:17 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "Hi,\n\nAleksander, there was a quite straightforward answer regarding Pluggable\nTOAST\nin other thread - the Pluggable TOAST feature is not desired by the\ncommunity,\nand advanced TOAST mechanics would be accepted as parts of problematic\ndatatypes extended functionality, on a par with in and out functions, so\nwhat I am\nactually doing now - re-writing JSONb TOAST improvements to be called as\nseparate\nfunctions without Pluggable TOAST API. This takes some time because there\nis a large\nand complex code base left by Nikita Glukhov who has lost interest in this\nwork due\nto some reasons.\n\nI, personally, think that these two features could benefit from each other,\nbut they could\nbe adapted to each other after I would introduce JSONb Toaster in v17\nmaster.\n\nIf you don't mind please check the thread on extending the TOAST pointer -\nit is important\nfor improving TOAST mechanics.\n\n\nOn Wed, Jan 17, 2024 at 5:27 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi again,\n>\n> > Yes it does for a while now. Until we reach any agreement regarding\n> > questions (1) and (2) personally I don't see the point in submitting\n> > rebased patches. We can continue the discussion but mark the CF entry\n> > as RwF for now it will be helpful.\n>\n> Sorry, what I actually meant were the following questions:\n>\n> \"\"\"\n> Several things occured to me:\n>\n> - Does anyone believe that va_tag should be part of the utf8-like\n> bitmask in order to save a byte or two?\n>\n> - The described approach means that compression dictionaries are not\n> going to be used when data is compressed in-place (i.e. within a\n> tuple), since no TOAST pointer is involved in this case. Also we will\n> be unable to add additional compression algorithms here. Does anyone\n> have problems with this? Should we use the reserved compression\n> algorithm id instead as a marker of an extended TOAST?\n>\n> - It would be nice to decompose the feature in several independent\n> patches, e.g. modify TOAST first, then add compression dictionaries\n> without automatic update of the dictionaries, then add the automatic\n> update. I find it difficult to imagine however how to modify TOAST\n> pointers and test the code properly without a dependency on a larger\n> feature. Could anyone think of a trivial test case for extendable\n> TOAST? Maybe something we could add to src/test/modules similarly to\n> how we test SLRU, background workers, etc.\n> \"\"\"\n>\n> Since there was not much activity since then (for 3 months) I don't\n> really see how to process further.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi,Aleksander, there was a quite straightforward answer regarding Pluggable TOASTin other thread - the Pluggable TOAST feature is not desired by the community,and advanced TOAST mechanics would be accepted as parts of problematicdatatypes extended functionality, on a par with in and out functions, so what I amactually doing now - re-writing JSONb TOAST improvements to be called as separatefunctions without Pluggable TOAST API. This takes some time because there is a largeand complex code base left by Nikita Glukhov who has lost interest in this work dueto some reasons.I, personally, think that these two features could benefit from each other, but they couldbe adapted to each other after I would introduce JSONb Toaster in v17 master.If you don't mind please check the thread on extending the TOAST pointer - it is importantfor improving TOAST mechanics.On Wed, Jan 17, 2024 at 5:27 PM Aleksander Alekseev <aleksander@timescale.com> wrote:Hi again,\n\n> Yes it does for a while now. Until we reach any agreement regarding\n> questions (1) and (2) personally I don't see the point in submitting\n> rebased patches. We can continue the discussion but mark the CF entry\n> as RwF for now it will be helpful.\n\nSorry, what I actually meant were the following questions:\n\n\"\"\"\nSeveral things occured to me:\n\n- Does anyone believe that va_tag should be part of the utf8-like\nbitmask in order to save a byte or two?\n\n- The described approach means that compression dictionaries are not\ngoing to be used when data is compressed in-place (i.e. within a\ntuple), since no TOAST pointer is involved in this case. Also we will\nbe unable to add additional compression algorithms here. Does anyone\nhave problems with this? Should we use the reserved compression\nalgorithm id instead as a marker of an extended TOAST?\n\n- It would be nice to decompose the feature in several independent\npatches, e.g. modify TOAST first, then add compression dictionaries\nwithout automatic update of the dictionaries, then add the automatic\nupdate. I find it difficult to imagine however how to modify TOAST\npointers and test the code properly without a dependency on a larger\nfeature. Could anyone think of a trivial test case for extendable\nTOAST? Maybe something we could add to src/test/modules similarly to\nhow we test SLRU, background workers, etc.\n\"\"\"\n\nSince there was not much activity since then (for 3 months) I don't\nreally see how to process further.\n\n-- \nBest regards,\nAleksander Alekseev\n-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Wed, 17 Jan 2024 17:43:30 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
},
{
"msg_contents": "On Wed, 17 Jan 2024 at 19:52, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi Shubham,\n>\n> > > > 8272749e added a few more arguments to CastCreate(). Here is the rebased patch.\n> > >\n> > > After merging afbfc029 [1] the patch needed a rebase. PFA v10.\n> > >\n> > > The patch is still in a PoC state and this is exactly why comments and\n> > > suggestions from the community are most welcome! Particularly I would\n> > > like to know:\n> > >\n> > > 1. Would you call it a wanted feature considering the existence of\n> > > Pluggable TOASTer patchset which (besides other things) tries to\n> > > introduce type-aware TOASTers for EXTERNAL attributes? I know what\n> > > Simon's [2] and Nikita's latest answers were, and I know my personal\n> > > opinion on this [3][4], but I would like to hear from the rest of the\n> > > community.\n> > >\n> > > 2. How should we make sure a dictionary will not consume all the\n> > > available memory? Limiting the amount of dictionary entries to pow(2,\n> > > 16) and having dictionary versions seems to work OK for ZSON. However\n> > > it was pointed out that this may be an unwanted limitation for the\n> > > in-core implementation.\n> > >\n> > > [1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=c727f511;hp=afbfc02983f86c4d71825efa6befd547fe81a926\n> > > [2]: https://www.postgresql.org/message-id/CANbhV-HpCF852WcZuU0wyh1jMU4p6XLbV6rCRkZpnpeKQ9OenQ%40mail.gmail.com\n> > > [3]: https://www.postgresql.org/message-id/CAJ7c6TN-N3%3DPSykmOjmW1EAf9YyyHFDHEznX-5VORsWUvVN-5w%40mail.gmail.com\n> > > [4]: https://www.postgresql.org/message-id/CAJ7c6TO2XTTk3cu5w6ePHfhYQkoNpw7u1jeqHf%3DGwn%2BoWci8eA%40mail.gmail.com\n> >\n> > I tried to apply the patch but it is failing at the Head. It is giving\n> > the following error:\n>\n> Yes it does for a while now. Until we reach any agreement regarding\n> questions (1) and (2) personally I don't see the point in submitting\n> rebased patches. We can continue the discussion but mark the CF entry\n> as RwF for now it will be helpful.\n\nThanks. I have updated the status to \"Returned with feedback\". Feel\nfree to create a new entry after we agree on the approach to take it\nforward.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 26 Jan 2024 18:53:43 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Compression dictionaries for JSONB"
}
] |
[
{
"msg_contents": "In trying out an OpenSSL 3.1 build with FIPS enabled I realized that our\ncryptohash code had a small issue. Calling a banned cipher generated two\ndifferent error messages interleaved:\n\n postgres=# select md5('foo');\n ERROR: could not compute MD5 hash: unsupported\n postgres=# select md5('foo');\n ERROR: could not compute MD5 hash: initialization error\n\nIt turns out that OpenSSL places two errors in the queue for this operation,\nand we only consume one without clearing the queue in between, so we grab an\nerror from the previous run.\n\nConsuming all (both) errors and creating a concatenated string seems overkill\nas it would alter the API from a const error string to something that needs\nfreeing etc (also, very few OpenSSL consumers actually drain the queue, OpenSSL\nthemselves don't). Skimming the OpenSSL code I was unable to find another\nexample of two errors generated. The attached calls ERR_clear_error() as how\nwe do in libpq in order to avoid consuming earlier errors.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Fri, 22 Apr 2022 16:56:38 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Cryptohash OpenSSL error queue in FIPS enabled builds"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> It turns out that OpenSSL places two errors in the queue for this operation,\n> and we only consume one without clearing the queue in between, so we grab an\n> error from the previous run.\n\nUgh.\n\n> Consuming all (both) errors and creating a concatenated string seems overkill\n> as it would alter the API from a const error string to something that needs\n> freeing etc (also, very few OpenSSL consumers actually drain the queue, OpenSSL\n> themselves don't). Skimming the OpenSSL code I was unable to find another\n> example of two errors generated. The attached calls ERR_clear_error() as how\n> we do in libpq in order to avoid consuming earlier errors.\n\nThis seems quite messy. How would clearing the queue *before* creating\nthe object improve matters? It seems like that solution means you're\nleaving an extra error in the queue to break unrelated code. Wouldn't\nit be better to clear after grabbing the error? (Or maybe do both.)\n\nAlso, a comment seems advisable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Apr 2022 13:01:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cryptohash OpenSSL error queue in FIPS enabled builds"
},
{
"msg_contents": "> On 22 Apr 2022, at 19:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n\n>> Consuming all (both) errors and creating a concatenated string seems overkill\n>> as it would alter the API from a const error string to something that needs\n>> freeing etc (also, very few OpenSSL consumers actually drain the queue, OpenSSL\n>> themselves don't). Skimming the OpenSSL code I was unable to find another\n>> example of two errors generated. The attached calls ERR_clear_error() as how\n>> we do in libpq in order to avoid consuming earlier errors.\n> \n> This seems quite messy. How would clearing the queue *before* creating\n> the object improve matters? \n\nWe know there won't be any leftovers which would make us display the wrong\nmessage.\n\n> It seems like that solution means you're leaving an extra error in the queue to\n> break unrelated code. Wouldn't it be better to clear after grabbing the error?\n> (Or maybe do both.)\n\nThat's a very good point, doing it in both ends of the operation is better\nhere.\n\n> Also, a comment seems advisable.\n\nAgreed.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sat, 23 Apr 2022 23:40:19 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Cryptohash OpenSSL error queue in FIPS enabled builds"
},
{
"msg_contents": "On Sat, Apr 23, 2022 at 11:40:19PM +0200, Daniel Gustafsson wrote:\n> On 22 Apr 2022, at 19:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> Consuming all (both) errors and creating a concatenated string seems overkill\n>>> as it would alter the API from a const error string to something that needs\n>>> freeing etc (also, very few OpenSSL consumers actually drain the queue, OpenSSL\n>>> themselves don't). Skimming the OpenSSL code I was unable to find another\n>>> example of two errors generated. The attached calls ERR_clear_error() as how\n>>> we do in libpq in order to avoid consuming earlier errors.\n\nIt looks like the initialization error would come only from\nevp_md_init_internal() in digest.c.\n\n>> This seems quite messy. How would clearing the queue *before* creating\n>> the object improve matters? \n> \n> We know there won't be any leftovers which would make us display the wrong\n> message.\n\nYeah.\n\n>> It seems like that solution means you're leaving an extra error in the queue to\n>> break unrelated code. Wouldn't it be better to clear after grabbing the error?\n>> (Or maybe do both.)\n> \n> That's a very good point, doing it in both ends of the operation is better\n> here.\n\nError queues are cleaned with ERR_clear_error() before specific SSL\ncalls in the frontend and the backend, never after the fact. If we\nassume that multiple errors can be stacked in the OpenSSL error queue,\nshouldn't we worry about cleaning up the error queue in code paths\nlike pgtls_read/write(), be_tls_read/write() and be_tls_open_server()?\nSo it seems to me that SSLerrmessage() should be treated the same way\nfor the backend and the frontend. Any opinions?\n\npgcrypto's openssl.c has the same problem under FIPS as it includes\nEVP calls. Saying that, putting a cleanup in pg_cryptohash_create()\nbefore the fact, and one in SSLerrmessage() after consuming the error\ncode should be fine to keep a clean queue.\n\nDaniel, were you planning to write a patch? The other parts of the\ncode are older than the hmac and cryptohash business, but I would not\nmind writing something for the whole.\n--\nMichael",
"msg_date": "Mon, 25 Apr 2022 09:50:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Cryptohash OpenSSL error queue in FIPS enabled builds"
},
{
"msg_contents": "> On 25 Apr 2022, at 02:50, Michael Paquier <michael@paquier.xyz> wrote:\n> On Sat, Apr 23, 2022 at 11:40:19PM +0200, Daniel Gustafsson wrote:\n>> On 22 Apr 2022, at 19:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>>> It seems like that solution means you're leaving an extra error in the queue to\n>>> break unrelated code. Wouldn't it be better to clear after grabbing the error?\n>>> (Or maybe do both.)\n>> \n>> That's a very good point, doing it in both ends of the operation is better\n>> here.\n> \n> Error queues are cleaned with ERR_clear_error() before specific SSL\n> calls in the frontend and the backend, never after the fact. If we\n> assume that multiple errors can be stacked in the OpenSSL error queue,\n> shouldn't we worry about cleaning up the error queue in code paths\n> like pgtls_read/write(), be_tls_read/write() and be_tls_open_server()?\n> So it seems to me that SSLerrmessage() should be treated the same way\n> for the backend and the frontend. Any opinions?\n\nWell, clearing the queue before calling into OpenSSL is the programming pattern\nwhich is quite universally followed so I'm not sure we need to litter the\ncodepaths with calls to clearing the queue as we leave.\n\nIn this particular codepath I think we can afford clearing it on the way out,\nwith a comment explaining why. It's easily reproducible and adding a call and\na comment is a good documentation for ourselves of this OpenSSL behavior. That\nbeing said, clearing on the way in is the important bit.\n\n> pgcrypto's openssl.c has the same problem under FIPS as it includes\n> EVP calls. Saying that, putting a cleanup in pg_cryptohash_create()\n> before the fact, and one in SSLerrmessage() after consuming the error\n> code should be fine to keep a clean queue.\n\npgcrypto doesn't really consume or even inspect the OpenSSL errors, but pass on\na PXE error based on the context of the operation. We could clear the queue as\nwe leave, but as you say we already clear it before calling in other places so\nit's not clear that it's useful. We've had EVP in pgcrypto for some time\nwithout seeing issues from error queues, one error left isn't that different\nfrom two when not consumed.\n\nThe attached 0002 does however correctly (IMO) report the error as an init\nerror instead of the non-descript generic error, which isn't really all that\nhelpful. I think that also removes the last consumer of the generic error, but\nI will take another look with fresh eyes to confirm that.\n\n0003 removes what I think is a weirdly placed questionmark from the message\nthat make it seem strangely ambiguous. This needs to update the test answer\nfiles as well, but I first wanted to float the idea before doing that.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Tue, 26 Apr 2022 00:07:32 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Cryptohash OpenSSL error queue in FIPS enabled builds"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> In this particular codepath I think we can afford clearing it on the way out,\n> with a comment explaining why.\n\nYeah. It seems out of the ordinary for an OpenSSL call to stack\ntwo error conditions, so treating a known case of that specially\nseems reasonable. Patches seem sane from here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Apr 2022 18:44:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cryptohash OpenSSL error queue in FIPS enabled builds"
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 12:07:32AM +0200, Daniel Gustafsson wrote:\n> In this particular codepath I think we can afford clearing it on the way out,\n> with a comment explaining why. It's easily reproducible and adding a call and\n> a comment is a good documentation for ourselves of this OpenSSL behavior. That\n> being said, clearing on the way in is the important bit.\n\n+ * consumed an error, but cipher initialization can in FIPS enabled\nIt seems to me that this comment needs a hyphen, as of\n\"FIPS-enabled\".\n\nI am a bit annoyed to assume that having only a localized\nERR_clear_error() in the error code path of the init() call is the\nonly problem that would occur, only because that's the first one we'd\nsee in a hash computation. So my choice would be to call\nERR_get_error() within SSLerrmessage() and clear the queue after\nfetching the error code via ERR_get_error() for both\ncryptohash_openssl.c and hmac_openssl.c, but I won't fight hard\nagainst both of you on this point, either.\n\nPerhaps this should be reported to the upstream folks? We'd still\nneed this code for already released versions, but getting two errors\nlooks like a mistake.\n\n> pgcrypto doesn't really consume or even inspect the OpenSSL errors, but pass on\n> a PXE error based on the context of the operation. We could clear the queue as\n> we leave, but as you say we already clear it before calling in other places so\n> it's not clear that it's useful. We've had EVP in pgcrypto for some time\n> without seeing issues from error queues, one error left isn't that different\n> from two when not consumed.\n\nOkay. I did not recall the full error stack used in pgcrypto. It is\nannoying to not get from OpenSSL the details of the error, though.\nWith FIPS enabled, one computing a hash with pgcrypto would just know\nabout the initialization error, but would miss why the computation\nfailed. It looks like we could use a new error code to tell\npx_strerror() to look at the OpenSSL error queue instead of one of the\nhardcoded strings. Just saying.\n\n> The attached 0002 does however correctly (IMO) report the error as an init\n> error instead of the non-descript generic error, which isn't really all that\n> helpful. I think that also removes the last consumer of the generic error, but\n> I will take another look with fresh eyes to confirm that.\n>\n> 0003 removes what I think is a weirdly placed questionmark from the message\n> that make it seem strangely ambiguous. This needs to update the test answer\n> files as well, but I first wanted to float the idea before doing that.\n\nGood catches.\n--\nMichael",
"msg_date": "Tue, 26 Apr 2022 10:55:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Cryptohash OpenSSL error queue in FIPS enabled builds"
},
{
"msg_contents": "> On 26 Apr 2022, at 03:55, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Apr 26, 2022 at 12:07:32AM +0200, Daniel Gustafsson wrote:\n>> In this particular codepath I think we can afford clearing it on the way out,\n>> with a comment explaining why. It's easily reproducible and adding a call and\n>> a comment is a good documentation for ourselves of this OpenSSL behavior. That\n>> being said, clearing on the way in is the important bit.\n> \n> + * consumed an error, but cipher initialization can in FIPS enabled\n> It seems to me that this comment needs a hyphen, as of\n> \"FIPS-enabled\".\n\nWill fix.\n\n> I am a bit annoyed to assume that having only a localized\n> ERR_clear_error() in the error code path of the init() call is the\n> only problem that would occur, only because that's the first one we'd\n> see in a hash computation.\n\nIt's also the only one in this case since the computation won't get past the\ninit step with the error no? The queue will be cleared for each computation so\nthe risk of cross contamination is removed.\n\n> Perhaps this should be reported to the upstream folks? We'd still\n> need this code for already released versions, but getting two errors\n> looks like a mistake.\n\nNot really, the error system in OpenSSL has been defined as a queue with\nmultiple errors per call possible at least since SSLeay 0.9.1. I think this is\nvery much intentional, but a rare case of it.\n\n>> pgcrypto doesn't really consume or even inspect the OpenSSL errors, but pass on\n>> a PXE error based on the context of the operation. We could clear the queue as\n>> we leave, but as you say we already clear it before calling in other places so\n>> it's not clear that it's useful. We've had EVP in pgcrypto for some time\n>> without seeing issues from error queues, one error left isn't that different\n>> from two when not consumed.\n> \n> Okay. I did not recall the full error stack used in pgcrypto. It is\n> annoying to not get from OpenSSL the details of the error, though.\n> With FIPS enabled, one computing a hash with pgcrypto would just know\n> about the initialization error, but would miss why the computation\n> failed. It looks like we could use a new error code to tell\n> px_strerror() to look at the OpenSSL error queue instead of one of the\n> hardcoded strings. Just saying.\n\nI looked at that briefly, and might revisit it during the 16 cycle, but it does\nhave a smell of diminishing returns to it. With non-OpenSSL code ripped out\nfrom pgcrypto it's clearly more interesting than before.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 26 Apr 2022 15:15:24 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Cryptohash OpenSSL error queue in FIPS enabled builds"
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 03:15:24PM +0200, Daniel Gustafsson wrote:\n> On 26 Apr 2022, at 03:55, Michael Paquier <michael@paquier.xyz> wrote:\n>> I am a bit annoyed to assume that having only a localized\n>> ERR_clear_error() in the error code path of the init() call is the\n>> only problem that would occur, only because that's the first one we'd\n>> see in a hash computation.\n> \n> It's also the only one in this case since the computation won't get past the\n> init step with the error no? The queue will be cleared for each computation so\n> the risk of cross contamination is removed.\n\nI was wondering about the case where an error is applied while\nupdating or finishing the cryptohash, not just the creation or the\ninitialization. But cleaning up the queue when beginning a\ncomputation is fine enough.\n\n>> Okay. I did not recall the full error stack used in pgcrypto. It is\n>> annoying to not get from OpenSSL the details of the error, though.\n>> With FIPS enabled, one computing a hash with pgcrypto would just know\n>> about the initialization error, but would miss why the computation\n>> failed. It looks like we could use a new error code to tell\n>> px_strerror() to look at the OpenSSL error queue instead of one of the\n>> hardcoded strings. Just saying.\n> \n> I looked at that briefly, and might revisit it during the 16 cycle, but it does\n> have a smell of diminishing returns to it. With non-OpenSSL code ripped out\n> from pgcrypto it's clearly more interesting than before.\n\nClearly.\n\nFor the sake of the archives, this patch series has been applied as\n17ec5fa, 0250a16 and ee97d46.\n--\nMichael",
"msg_date": "Mon, 9 May 2022 09:28:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Cryptohash OpenSSL error queue in FIPS enabled builds"
}
] |
[
{
"msg_contents": "Hi,\n\nThe default behavior on Postgres is to grant EXECUTE to PUBLIC on any\nfunction or procedure that is created.\n\nI feel this this is a security concern, especially for procedures and\nfunctions defined with the \"SECURITY DEFINER\" clause.\n\nNormally, we don’t want everyone on the database to be able to run\nprocedures or function without explicitly granting them the privilege\nto do so.\n\nIs there any reason to keep grant EXECUTE to PUBLIC on routines as the default?\n\nBest,\nJacek Trocinski\n\n\n",
"msg_date": "Fri, 22 Apr 2022 19:31:29 +0200",
"msg_from": "Jacek Trocinski <jacek@hedgehog.app>",
"msg_from_op": true,
"msg_subject": "Why is EXECUTE granted to PUBLIC for all routines?"
},
{
"msg_contents": "Jacek Trocinski <jacek@hedgehog.app> writes:\n> The default behavior on Postgres is to grant EXECUTE to PUBLIC on any\n> function or procedure that is created.\n\n> I feel this this is a security concern, especially for procedures and\n> functions defined with the \"SECURITY DEFINER\" clause.\n\nThere is zero security concern for non-SECURITY-DEFINER functions,\nsince they do nothing callers couldn't do for themselves. For those,\nyou typically do want to grant out permissions. As for SECURITY DEFINER\nfunctions, there is no reason to make one unless it is meant to be called\nby someone besides the owner. Perhaps PUBLIC isn't the scope you want to\ngrant it to, but no-privileges wouldn't be a useful default there either.\n\nIn any case, changing this decision now would cause lots of problems,\nsuch as breaking existing dump files. We're unlikely to revisit it.\n\nAs noted in the docs, best practice is to adjust the permissions\nas you want them in the same transaction that creates the function.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Apr 2022 13:44:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why is EXECUTE granted to PUBLIC for all routines?"
},
{
"msg_contents": "On Fri, 22 Apr 2022 at 13:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n\n> There is zero security concern for non-SECURITY-DEFINER functions,\n> since they do nothing callers couldn't do for themselves. For those,\n> you typically do want to grant out permissions. As for SECURITY DEFINER\n> functions, there is no reason to make one unless it is meant to be called\n> by someone besides the owner. Perhaps PUBLIC isn't the scope you want to\n> grant it to, but no-privileges wouldn't be a useful default there either.\n>\n\nNo privileges would be a safe default, not entirely unlike the default \"can\nonly connect from localhost\" pg_hba.conf, …\n\n\n> In any case, changing this decision now would cause lots of problems,\n> such as breaking existing dump files. We're unlikely to revisit it.\n>\n\n… but, yeah, this would be rather hard to change without causing more\ntrouble.\n\n\n> As noted in the docs, best practice is to adjust the permissions\n> as you want them in the same transaction that creates the function.\n>\n\nI wrote a function which resets the permissions on all objects in the\nspecified schemas to default. Then for each project I have a\nprivileges-granting file which starts by resetting all permissions, then\ngrants exactly the permissions I want. Most of the resetting is done by\nchecking the existing privileges and revoking them; then it ASSERTs that\nthis leaves an empty ACL, and finally does an UPDATE on the relevant system\ntable to change the ACL from empty to NULL. For SECURITY DEFINER functions,\nthe reset function then revokes PUBLIC privileges, leaving it to the\nspecific project to grant the appropriate privileges.\n\nBTW, the reg* types are amazing for writing this kind of stuff. Makes all\nsorts of things so much easier.\n\nOn Fri, 22 Apr 2022 at 13:44, Tom Lane <tgl@sss.pgh.pa.us> wrote: \nThere is zero security concern for non-SECURITY-DEFINER functions,\nsince they do nothing callers couldn't do for themselves. For those,\nyou typically do want to grant out permissions. As for SECURITY DEFINER\nfunctions, there is no reason to make one unless it is meant to be called\nby someone besides the owner. Perhaps PUBLIC isn't the scope you want to\ngrant it to, but no-privileges wouldn't be a useful default there either.No privileges would be a safe default, not entirely unlike the default \"can only connect from localhost\" pg_hba.conf, … \nIn any case, changing this decision now would cause lots of problems,\nsuch as breaking existing dump files. We're unlikely to revisit it.… but, yeah, this would be rather hard to change without causing more trouble. \nAs noted in the docs, best practice is to adjust the permissions\nas you want them in the same transaction that creates the function.\nI wrote a function which resets the permissions on all objects in the specified schemas to default. Then for each project I have a privileges-granting file which starts by resetting all permissions, then grants exactly the permissions I want. Most of the resetting is done by checking the existing privileges and revoking them; then it ASSERTs that this leaves an empty ACL, and finally does an UPDATE on the relevant system table to change the ACL from empty to NULL. For SECURITY DEFINER functions, the reset function then revokes PUBLIC privileges, leaving it to the specific project to grant the appropriate privileges.BTW, the reg* types are amazing for writing this kind of stuff. Makes all sorts of things so much easier.",
"msg_date": "Sat, 23 Apr 2022 22:43:33 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why is EXECUTE granted to PUBLIC for all routines?"
}
] |
[
{
"msg_contents": "Hi -hackers,\n\nEnclosed is a patch to allow extraction/saving of FPI from the WAL\nstream via pg_waldump.\n\nDescription from the commit:\n\nExtracts full-page images from the WAL stream into a target directory,\nwhich must be empty or not\nexist. These images are subject to the same filtering rules as normal\ndisplay in pg_waldump, which\nmeans that you can isolate the full page writes to a target relation,\namong other things.\n\nFiles are saved with the filename: <lsn>.<ts>.<db>.<rel>.<blk> with\nformatting to make things\nsomewhat sortable; for instance:\n\n00000000-010000C0.1663.1.6117.0\n00000000-01000150.1664.0.6115.0\n00000000-010001E0.1664.0.6114.0\n00000000-01000270.1663.1.6116.0\n00000000-01000300.1663.1.6113.0\n00000000-01000390.1663.1.6112.0\n00000000-01000420.1663.1.8903.0\n00000000-010004B0.1663.1.8902.0\n00000000-01000540.1663.1.6111.0\n00000000-010005D0.1663.1.6110.0\n\nIt's noteworthy that the raw images do not have the current LSN stored\nwith them in the WAL\nstream (as would be true for on-heap versions of the blocks), nor\nwould the checksum be valid in\nthem (though WAL itself has checksums, so there is some protection\nthere). This patch chooses to\nplace the LSN and calculate the proper checksum (if non-zero in the\nsource image) in the outputted\nblock. (This could perhaps be a targetted flag if we decide we don't\nalways want this.)\n\nThese images could be loaded/inspected via `pg_read_binary_file()` and\nused in the `pageinspect`\nsuite of tools to perform detailed analysis on the pages in question,\nbased on historical\ninformation, and may come in handy for forensics work.\n\nBest,\n\nDavid",
"msg_date": "Fri, 22 Apr 2022 17:51:24 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On 2022-Apr-22, David Christensen wrote:\n\n> Hi -hackers,\n> \n> Enclosed is a patch to allow extraction/saving of FPI from the WAL\n> stream via pg_waldump.\n\nI already wrote and posted a patch to do exactly this, and found it the\nonly way to fix a customer problem, so +1 on having this feature. I\nhaven't reviewed David's patch.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sat, 23 Apr 2022 15:39:18 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Sat, 23 Apr 2022 at 00:51, David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> Hi -hackers,\n>\n> Enclosed is a patch to allow extraction/saving of FPI from the WAL\n> stream via pg_waldump.\n>\n> Description from the commit:\n>\n> Extracts full-page images from the WAL stream into a target directory,\n> which must be empty or not\n> exist. These images are subject to the same filtering rules as normal\n> display in pg_waldump, which\n> means that you can isolate the full page writes to a target relation,\n> among other things.\n>\n> Files are saved with the filename: <lsn>.<ts>.<db>.<rel>.<blk> with\n> formatting to make things\n\nRegardless of my (lack of) opinion on the inclusion of this patch in\nPG (I did not significantly review this patch); I noticed that you do\nnot yet identify the 'fork' of the FPI in the file name.\n\nA lack of fork identifier in the exported file names would make\ndebugging much more difficult due to the relatively difficult to\nidentify data contained in !main forks, so I think this oversight\nshould be fixed, be it through `_forkname` postfix like normal fork\nsegments, or be it through `.<forknum>` numerical in- or postfix in\nthe filename.\n\n-Matthias\n\n\n",
"msg_date": "Sat, 23 Apr 2022 16:49:03 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Sat, Apr 23, 2022 at 9:49 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n\n> Regardless of my (lack of) opinion on the inclusion of this patch in\n> PG (I did not significantly review this patch); I noticed that you do\n> not yet identify the 'fork' of the FPI in the file name.\n>\n> A lack of fork identifier in the exported file names would make\n> debugging much more difficult due to the relatively difficult to\n> identify data contained in !main forks, so I think this oversight\n> should be fixed, be it through `_forkname` postfix like normal fork\n> segments, or be it through `.<forknum>` numerical in- or postfix in\n> the filename.\n>\n> -Matthias\n\nHi Matthias, great point. Enclosed is a revised version of the patch\nthat adds the fork identifier to the end if it's a non-main fork.\n\nBest,\n\nDavid",
"msg_date": "Sat, 23 Apr 2022 13:43:36 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Sat, Apr 23, 2022 at 01:43:36PM -0500, David Christensen wrote:\n> Hi Matthias, great point. Enclosed is a revised version of the patch\n> that adds the fork identifier to the end if it's a non-main fork.\n\nLike Alvaro, I have seen cases where this would have been really\nhandy. So +1 from me, as well, to have more tooling like what you are\nproposing. Fine for me to use one file for each block with a name\nlike what you are suggesting for each one of them. \n\n+ /* we accept an empty existing directory */\n+ if (stat(config.save_fpw_path, &st) == 0 && S_ISDIR(st.st_mode))\n+ {\nI don't think that there is any need to rely on a new logic if there\nis already some code in place able to do the same work. See\nverify_dir_is_empty_or_create() in pg_basebackup.c, as one example,\nthat relies on pg_check_dir(). I think that you'd better rely at\nleast on what pgcheckdir.c offers.\n\n+ {\"raw-fpi\", required_argument, NULL, 'W'},\nI think that we'd better rename this option. \"fpi\", that is not used\nmuch in the user-facing docs, is additionally not adapted when we have\nan other option called -w/--fullpage. I can think of\n--save-fullpage.\n\n+ PageSetLSN(page, record->ReadRecPtr);\n+ /* if checksum field is non-zero then we have checksums enabled,\n+ * so recalculate the checksum with new LSN (yes, this is a hack)\n+ */\nYeah, that looks like a hack, but putting in place a page on a cluster\nthat has checksums enabled would be more annoying with\nzero_damaged_pages enabled if we don't do that, so that's fine by me\nas-is. Perhaps you should mention that FPWs don't have their\npd_checksum updated when written.\n\n+ /* we will now extract the fullpage image from the XLogRecord and save\n+ * it to a calculated filename */\nThe format of this comment is incorrect.\n\n+ <entry>The LSN of the record with this block, formatted\n+ as <literal>%08x-%08X</literal> instead of the\n+ conventional <literal>%X/%X</literal> due to filesystem naming\n+ limits</entry>\nThe last part of the sentence about %X/%X could just be removed. That\ncould be confusing, at worse.\n\n+ PageSetLSN(page, record->ReadRecPtr);\nWhy is pd_lsn set?\n\ngit diff --check complains a bit.\n\nThis stuff should include some tests. With --end, the tests can\nbe cheap.\n--\nMichael",
"msg_date": "Mon, 25 Apr 2022 15:11:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "Hi,\n\nOn 4/25/22 8:11 AM, Michael Paquier wrote:\n> On Sat, Apr 23, 2022 at 01:43:36PM -0500, David Christensen wrote:\n>> Hi Matthias, great point. Enclosed is a revised version of the patch\n>> that adds the fork identifier to the end if it's a non-main fork.\n> Like Alvaro, I have seen cases where this would have been really\n> handy. So +1 from me, as well, to have more tooling like what you are\n> proposing.\n\n+1 on the idea.\n\nFWIW, there is an extension doing this [1] but having the feature \nincluded in pg_waldump would be great.\n\nBertrand\n\n[1]: https://github.com/bdrouvot/pg_wal_fp_extract\n\n\n\n",
"msg_date": "Mon, 25 Apr 2022 09:00:16 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Sat, Apr 23, 2022 at 4:21 AM David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> Hi -hackers,\n>\n> Enclosed is a patch to allow extraction/saving of FPI from the WAL\n> stream via pg_waldump.\n>\n> Description from the commit:\n>\n> Extracts full-page images from the WAL stream into a target directory,\n> which must be empty or not\n> exist. These images are subject to the same filtering rules as normal\n> display in pg_waldump, which\n> means that you can isolate the full page writes to a target relation,\n> among other things.\n>\n> Files are saved with the filename: <lsn>.<ts>.<db>.<rel>.<blk> with\n> formatting to make things\n> somewhat sortable; for instance:\n>\n> 00000000-010000C0.1663.1.6117.0\n> 00000000-01000150.1664.0.6115.0\n> 00000000-010001E0.1664.0.6114.0\n> 00000000-01000270.1663.1.6116.0\n> 00000000-01000300.1663.1.6113.0\n> 00000000-01000390.1663.1.6112.0\n> 00000000-01000420.1663.1.8903.0\n> 00000000-010004B0.1663.1.8902.0\n> 00000000-01000540.1663.1.6111.0\n> 00000000-010005D0.1663.1.6110.0\n>\n> It's noteworthy that the raw images do not have the current LSN stored\n> with them in the WAL\n> stream (as would be true for on-heap versions of the blocks), nor\n> would the checksum be valid in\n> them (though WAL itself has checksums, so there is some protection\n> there). This patch chooses to\n> place the LSN and calculate the proper checksum (if non-zero in the\n> source image) in the outputted\n> block. (This could perhaps be a targetted flag if we decide we don't\n> always want this.)\n>\n> These images could be loaded/inspected via `pg_read_binary_file()` and\n> used in the `pageinspect`\n> suite of tools to perform detailed analysis on the pages in question,\n> based on historical\n> information, and may come in handy for forensics work.\n\nThanks for working on this. I'm just thinking if we can use these FPIs\nto repair the corrupted pages? I would like to understand more\ndetailed usages of the FPIs other than inspecting with pageinspect.\n\nGiven that others have realistic use-cases (of course I would like to\nknow more about those), +1 for the idea. However, I would suggest\nadding a function to extract raw FPI data to the pg_walinspect\nextension that got recently committed in PG 15, the out of which can\ndirectly be fed to pageinspect functions or\n\nFew comments:\n1) I think it's good to mention the stored file name format.\n+ printf(_(\" -W, --raw-fpi=path save found full page images to\ngiven path\\n\"));\n2)\n+ for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)\n+ {\n+ /* we will now extract the fullpage image from the XLogRecord and save\n+ * it to a calculated filename */\n+\n+ if (XLogRecHasBlockImage(record, block_id))\n\nI think we need XLogRecHasBlockRef to be true to check\nXLogRecHasBlockImage otherwise, we will see some build farms failing,\nrecently I've seen this failure for pg_walinspect..\n\n for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)\n {\n if (!XLogRecHasBlockRef(record, block_id))\n continue;\n\n if (XLogRecHasBlockImage(record, block_id))\n *fpi_len += XLogRecGetBlock(record, block_id)->bimg_len;\n }\n3) Please correct the commenting format:\n+ /* we will now extract the fullpage image from the XLogRecord and save\n+ * it to a calculated filename */\n4) Usually we start errors with lower case letters \"could not .....\"\n+ pg_fatal(\"Couldn't open file for output: %s\", filename);\n+ pg_fatal(\"Couldn't write out complete FPI to file: %s\", filename);\nAnd the variable name too:\n+ FILE *OPF;\n5) Not sure how the FPIs of TOASTed tables get stored, but it would be\ngood to check.\n6) Good to specify the known usages of FPIs in the documentation.\n7) Isn't it good to emit an error if RestoreBlockImage returns false?\n+ if (RestoreBlockImage(record, block_id, page))\n+ {\n8) I think I don't mind if a non-empty directory is specified - IMO\nbetter usability is this - if the directory is non-empty, just go add\nthe FPI files if FPI file exists just replace it, if the directory\nisn't existing, create and write the FPI files.\n+ /* we accept an empty existing directory */\n+ if (stat(config.save_fpw_path, &st) == 0 && S_ISDIR(st.st_mode))\n+ {\n 9) Instead of following:\n+ if (XLogRecordHasFPW(xlogreader_state))\n+ XLogRecordSaveFPWs(xlogreader_state, config.save_fpw_path);\nI will just do this in XLogRecordSaveFPWs:\n for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)\n {\n if (!XLogRecHasBlockRef(record, block_id))\n continue;\n\n if (XLogRecHasBlockImage(record, block_id))\n {\n\n }\n }\n10) Along with pg_pwrite(), can we also fsync the files (of course\nusers can choose it optionally) so that the writes will be durable for\nthe OS crashes?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 25 Apr 2022 16:33:33 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Mon, Apr 25, 2022 at 1:11 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Apr 23, 2022 at 01:43:36PM -0500, David Christensen wrote:\n> > Hi Matthias, great point. Enclosed is a revised version of the patch\n> > that adds the fork identifier to the end if it's a non-main fork.\n>\n> Like Alvaro, I have seen cases where this would have been really\n> handy. So +1 from me, as well, to have more tooling like what you are\n> proposing. Fine for me to use one file for each block with a name\n> like what you are suggesting for each one of them.\n>\n> + /* we accept an empty existing directory */\n> + if (stat(config.save_fpw_path, &st) == 0 && S_ISDIR(st.st_mode))\n> + {\n> I don't think that there is any need to rely on a new logic if there\n> is already some code in place able to do the same work. See\n> verify_dir_is_empty_or_create() in pg_basebackup.c, as one example,\n> that relies on pg_check_dir(). I think that you'd better rely at\n> least on what pgcheckdir.c offers.\n\nYeah, though I am tending towards what another user had suggested and\njust accepting an existing directory rather than requiring it to be\nempty, so thinking I might just skip this one. (Will review the file\nyou've pointed out to see if there is a relevant function though.)\n\n> + {\"raw-fpi\", required_argument, NULL, 'W'},\n> I think that we'd better rename this option. \"fpi\", that is not used\n> much in the user-facing docs, is additionally not adapted when we have\n> an other option called -w/--fullpage. I can think of\n> --save-fullpage.\n\nMakes sense.\n\n> + PageSetLSN(page, record->ReadRecPtr);\n> + /* if checksum field is non-zero then we have checksums enabled,\n> + * so recalculate the checksum with new LSN (yes, this is a hack)\n> + */\n> Yeah, that looks like a hack, but putting in place a page on a cluster\n> that has checksums enabled would be more annoying with\n> zero_damaged_pages enabled if we don't do that, so that's fine by me\n> as-is. Perhaps you should mention that FPWs don't have their\n> pd_checksum updated when written.\n\nCan make a mention; you thinking just a comment in the code is\nsufficient, or want there to be user-visible docs as well?\n\n> + /* we will now extract the fullpage image from the XLogRecord and save\n> + * it to a calculated filename */\n> The format of this comment is incorrect.\n>\n> + <entry>The LSN of the record with this block, formatted\n> + as <literal>%08x-%08X</literal> instead of the\n> + conventional <literal>%X/%X</literal> due to filesystem naming\n> + limits</entry>\n> The last part of the sentence about %X/%X could just be removed. That\n> could be confusing, at worse.\n\nMakes sense.\n\n> + PageSetLSN(page, record->ReadRecPtr);\n> Why is pd_lsn set?\n\nThis was to make the page as extracted equivalent to the effect of\napplying the WAL record block on replay (the LSN and checksum both);\nsince recovery calls this function to mark the LSN where the page came\nfrom this is why I did this as well. (I do see perhaps a case for\n--raw output that doesn't munge the page whatsoever, just made\ncomparisons easier.)\n\n> git diff --check complains a bit.\n\nCan look into this.\n\n> This stuff should include some tests. With --end, the tests can\n> be cheap.\n\nYeah, makes sense, will include some in the next version.\n\nDavid\n\n\n",
"msg_date": "Mon, 25 Apr 2022 10:11:10 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Mon, Apr 25, 2022 at 2:00 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 4/25/22 8:11 AM, Michael Paquier wrote:\n> > On Sat, Apr 23, 2022 at 01:43:36PM -0500, David Christensen wrote:\n> >> Hi Matthias, great point. Enclosed is a revised version of the patch\n> >> that adds the fork identifier to the end if it's a non-main fork.\n> > Like Alvaro, I have seen cases where this would have been really\n> > handy. So +1 from me, as well, to have more tooling like what you are\n> > proposing.\n>\n> +1 on the idea.\n>\n> FWIW, there is an extension doing this [1] but having the feature\n> included in pg_waldump would be great.\n\nCool, glad to see there is some interest; definitely some overlap in\nforensics inside and outside the database both, as there are different\nuse cases for both.\n\nDavid\n\n\n",
"msg_date": "Mon, 25 Apr 2022 10:12:17 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Mon, Apr 25, 2022 at 6:03 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks for working on this. I'm just thinking if we can use these FPIs\n> to repair the corrupted pages? I would like to understand more\n> detailed usages of the FPIs other than inspecting with pageinspect.\n\nMy main use case was for being able to look at potential corruption,\neither in the WAL stream, on heap, or in tools associated with the WAL\nstream. I suppose you could use the page images to replace corrupted\non-disk pages (and in fact I think I've heard of a tool or two that\ntry to do that), though don't know that I consider this the primary\npurpose (and having toast tables and the list, as well as clog would\nmake it potentially hard to just drop-in a page version without\nissues). Might help in extreme situations though.\n\n> Given that others have realistic use-cases (of course I would like to\n> know more about those), +1 for the idea. However, I would suggest\n> adding a function to extract raw FPI data to the pg_walinspect\n> extension that got recently committed in PG 15, the out of which can\n> directly be fed to pageinspect functions or\n\nYeah, makes sense to have some overlap here; will review what is there\nand see if there is some shared code base we can utilize. (ISTR some\nwork towards getting these two tools using more of the same code, and\nthis seems like another such instance.)\n\n> Few comments:\n> 1) I think it's good to mention the stored file name format.\n> + printf(_(\" -W, --raw-fpi=path save found full page images to\n> given path\\n\"));\n\n+1, though I've also thought there could be uses to have multiple\npossible output formats here (most immediately, there may be cases\nwhere we want *each* FPI for a block vs the *latest*, so files name\nwith/without the LSN component seem the easiest way forward here).\nThat would introduce some additional complexity though, so might need\nto see if others think that makes any sense.\n\n> 2)\n> + for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)\n> + {\n> + /* we will now extract the fullpage image from the XLogRecord and save\n> + * it to a calculated filename */\n> +\n> + if (XLogRecHasBlockImage(record, block_id))\n>\n> I think we need XLogRecHasBlockRef to be true to check\n> XLogRecHasBlockImage otherwise, we will see some build farms failing,\n> recently I've seen this failure for pg_walinspect..\n>\n> for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)\n> {\n> if (!XLogRecHasBlockRef(record, block_id))\n> continue;\n>\n> if (XLogRecHasBlockImage(record, block_id))\n> *fpi_len += XLogRecGetBlock(record, block_id)->bimg_len;\n> }\n\nGood point; my previous patch that got committed here (127aea2a65)\nprobably also needed this treatment.\n\n> 3) Please correct the commenting format:\n> + /* we will now extract the fullpage image from the XLogRecord and save\n> + * it to a calculated filename */\n\nAck.\n\n> 4) Usually we start errors with lower case letters \"could not .....\"\n> + pg_fatal(\"Couldn't open file for output: %s\", filename);\n> + pg_fatal(\"Couldn't write out complete FPI to file: %s\", filename);\n> And the variable name too:\n> + FILE *OPF;\n\nAck.\n\n> 5) Not sure how the FPIs of TOASTed tables get stored, but it would be\n> good to check.\n\nWhat would be different here? Are there issues you can think of, or\njust more from the pageinspect side of things?\n\n> 6) Good to specify the known usages of FPIs in the documentation.\n\nAck. Prob good to get additional info/use cases from others, as mine\nis fairly short. :-)\n\n> 7) Isn't it good to emit an error if RestoreBlockImage returns false?\n> + if (RestoreBlockImage(record, block_id, page))\n> + {\n\nAck.\n\n> 8) I think I don't mind if a non-empty directory is specified - IMO\n> better usability is this - if the directory is non-empty, just go add\n> the FPI files if FPI file exists just replace it, if the directory\n> isn't existing, create and write the FPI files.\n> + /* we accept an empty existing directory */\n> + if (stat(config.save_fpw_path, &st) == 0 && S_ISDIR(st.st_mode))\n> + {\n\nAgreed; was mainly trying to prevent accidental expansion inside\n`pg_wal` when an earlier version of the patch implied `.` as the\ncurrent dir with an optional path, but I've since made the path\nnon-optional and agree that this is unnecessarily restrictive.\n\n> 9) Instead of following:\n> + if (XLogRecordHasFPW(xlogreader_state))\n> + XLogRecordSaveFPWs(xlogreader_state, config.save_fpw_path);\n> I will just do this in XLogRecordSaveFPWs:\n> for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)\n> {\n> if (!XLogRecHasBlockRef(record, block_id))\n> continue;\n>\n> if (XLogRecHasBlockImage(record, block_id))\n> {\n>\n> }\n> }\n\nYeah, a little redundant.\n\n> 10) Along with pg_pwrite(), can we also fsync the files (of course\n> users can choose it optionally) so that the writes will be durable for\n> the OS crashes?\n\nCan add; you thinking a separate flag to disable this with default true?\n\nBest,\n\nDavid\n\n\n",
"msg_date": "Mon, 25 Apr 2022 10:24:52 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Mon, Apr 25, 2022 at 10:24:52AM -0500, David Christensen wrote:\n> On Mon, Apr 25, 2022 at 6:03 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> Thanks for working on this. I'm just thinking if we can use these FPIs\n>> to repair the corrupted pages? I would like to understand more\n>> detailed usages of the FPIs other than inspecting with pageinspect.\n> \n> My main use case was for being able to look at potential corruption,\n> either in the WAL stream, on heap, or in tools associated with the WAL\n> stream. I suppose you could use the page images to replace corrupted\n> on-disk pages (and in fact I think I've heard of a tool or two that\n> try to do that), though don't know that I consider this the primary\n> purpose (and having toast tables and the list, as well as clog would\n> make it potentially hard to just drop-in a page version without\n> issues). Might help in extreme situations though.\n\nYou could do a bunch of things with those images, even make things\nworse if you are not careful enough.\n\n>> 10) Along with pg_pwrite(), can we also fsync the files (of course\n>> users can choose it optionally) so that the writes will be durable for\n>> the OS crashes?\n> \n> Can add; you thinking a separate flag to disable this with default true?\n\nWe expect data generated by tools like pg_dump, pg_receivewal\n(depending on the use --synchronous) or pg_basebackup to be consistent\nwhen we exit from the call. FWIW, flushing this data does not seem\nlike a strong requirement for something aimed at being used page-level\nchirurgy or lookups, because the WAL segments should still be around\neven if the host holding the archives is unplugged.\n--\nMichael",
"msg_date": "Tue, 26 Apr 2022 11:42:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Mon, Apr 25, 2022 at 10:11:10AM -0500, David Christensen wrote:\n> On Mon, Apr 25, 2022 at 1:11 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> I don't think that there is any need to rely on a new logic if there\n>> is already some code in place able to do the same work. See\n>> verify_dir_is_empty_or_create() in pg_basebackup.c, as one example,\n>> that relies on pg_check_dir(). I think that you'd better rely at\n>> least on what pgcheckdir.c offers.\n> \n> Yeah, though I am tending towards what another user had suggested and\n> just accepting an existing directory rather than requiring it to be\n> empty, so thinking I might just skip this one. (Will review the file\n> you've pointed out to see if there is a relevant function though.)\n\nOK. FWIW, pg_check_dir() is used in initdb and pg_basebackup because\nthese care about the behavior to use when specifying a target path.\nYou could reuse it, but use a different policy depending on its\nreturned result for the needs of what you see fit in this case.\n\n>> + PageSetLSN(page, record->ReadRecPtr);\n>> + /* if checksum field is non-zero then we have checksums enabled,\n>> + * so recalculate the checksum with new LSN (yes, this is a hack)\n>> + */\n>> Yeah, that looks like a hack, but putting in place a page on a cluster\n>> that has checksums enabled would be more annoying with\n>> zero_damaged_pages enabled if we don't do that, so that's fine by me\n>> as-is. Perhaps you should mention that FPWs don't have their\n>> pd_checksum updated when written.\n> \n> Can make a mention; you thinking just a comment in the code is\n> sufficient, or want there to be user-visible docs as well?\n\nI was thinking about a comment, at least.\n\n> This was to make the page as extracted equivalent to the effect of\n> applying the WAL record block on replay (the LSN and checksum both);\n> since recovery calls this function to mark the LSN where the page came\n> from this is why I did this as well. (I do see perhaps a case for\n> --raw output that doesn't munge the page whatsoever, just made\n> comparisons easier.)\n\nHm. Okay. The argument goes both ways, I guess, depending on what we\nwant to do with those raw pages. Still you should not need pd_lsn if\nthe point is to be able to stick the pages back in place to attempt to\nget back as much data as possible when loading it back to the shared\nbuffers?\n--\nMichael",
"msg_date": "Tue, 26 Apr 2022 11:53:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Mon, Apr 25, 2022 at 9:54 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Apr 25, 2022 at 10:11:10AM -0500, David Christensen wrote:\n> > On Mon, Apr 25, 2022 at 1:11 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> I don't think that there is any need to rely on a new logic if there\n> >> is already some code in place able to do the same work. See\n> >> verify_dir_is_empty_or_create() in pg_basebackup.c, as one example,\n> >> that relies on pg_check_dir(). I think that you'd better rely at\n> >> least on what pgcheckdir.c offers.\n> >\n> > Yeah, though I am tending towards what another user had suggested and\n> > just accepting an existing directory rather than requiring it to be\n> > empty, so thinking I might just skip this one. (Will review the file\n> > you've pointed out to see if there is a relevant function though.)\n>\n> OK. FWIW, pg_check_dir() is used in initdb and pg_basebackup because\n> these care about the behavior to use when specifying a target path.\n> You could reuse it, but use a different policy depending on its\n> returned result for the needs of what you see fit in this case.\n\nI have a new version of the patch (pending tests) that uses\npg_check_dir's return value to handle things appropriately, so at\nleast some code reuse now. It did end up simplifying a lot.\n\n> >> + PageSetLSN(page, record->ReadRecPtr);\n> >> + /* if checksum field is non-zero then we have checksums enabled,\n> >> + * so recalculate the checksum with new LSN (yes, this is a hack)\n> >> + */\n> >> Yeah, that looks like a hack, but putting in place a page on a cluster\n> >> that has checksums enabled would be more annoying with\n> >> zero_damaged_pages enabled if we don't do that, so that's fine by me\n> >> as-is. Perhaps you should mention that FPWs don't have their\n> >> pd_checksum updated when written.\n> >\n> > Can make a mention; you thinking just a comment in the code is\n> > sufficient, or want there to be user-visible docs as well?\n>\n> I was thinking about a comment, at least.\n\nNew patch version has significantly more comments.\n\n> > This was to make the page as extracted equivalent to the effect of\n> > applying the WAL record block on replay (the LSN and checksum both);\n> > since recovery calls this function to mark the LSN where the page came\n> > from this is why I did this as well. (I do see perhaps a case for\n> > --raw output that doesn't munge the page whatsoever, just made\n> > comparisons easier.)\n>\n> Hm. Okay. The argument goes both ways, I guess, depending on what we\n> want to do with those raw pages. Still you should not need pd_lsn if\n> the point is to be able to stick the pages back in place to attempt to\n> get back as much data as possible when loading it back to the shared\n> buffers?\n\nYeah, I can see that too; I think there's at least enough of an\nargument for a flag to apply the fixups or just extract only the raw\npage pre-modification. Not sure which should be the \"default\"\nbehavior; either `--raw` or `--fixup-metadata` or something could\nwork. (Naming suggestions welcomed.)\n\nDavid\n\n\n",
"msg_date": "Tue, 26 Apr 2022 13:13:04 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Mon, Apr 25, 2022 at 9:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Apr 25, 2022 at 10:24:52AM -0500, David Christensen wrote:\n> > On Mon, Apr 25, 2022 at 6:03 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> Thanks for working on this. I'm just thinking if we can use these FPIs\n> >> to repair the corrupted pages? I would like to understand more\n> >> detailed usages of the FPIs other than inspecting with pageinspect.\n> >\n> > My main use case was for being able to look at potential corruption,\n> > either in the WAL stream, on heap, or in tools associated with the WAL\n> > stream. I suppose you could use the page images to replace corrupted\n> > on-disk pages (and in fact I think I've heard of a tool or two that\n> > try to do that), though don't know that I consider this the primary\n> > purpose (and having toast tables and the list, as well as clog would\n> > make it potentially hard to just drop-in a page version without\n> > issues). Might help in extreme situations though.\n>\n> You could do a bunch of things with those images, even make things\n> worse if you are not careful enough.\n\nTrue. :-) This does seem like a tool geared towards \"expert mode\", so\nmaybe we just assume if you need it you know what you're doing?\n\n> >> 10) Along with pg_pwrite(), can we also fsync the files (of course\n> >> users can choose it optionally) so that the writes will be durable for\n> >> the OS crashes?\n> >\n> > Can add; you thinking a separate flag to disable this with default true?\n>\n> We expect data generated by tools like pg_dump, pg_receivewal\n> (depending on the use --synchronous) or pg_basebackup to be consistent\n> when we exit from the call. FWIW, flushing this data does not seem\n> like a strong requirement for something aimed at being used page-level\n> chirurgy or lookups, because the WAL segments should still be around\n> even if the host holding the archives is unplugged.\n\nI have added the fsync to the latest path (forthcoming), but I have no\nstrong preferences here as to the correct/expected behavior.\n\nBest,\n\nDavid\n\n\n",
"msg_date": "Tue, 26 Apr 2022 13:15:05 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 01:15:05PM -0500, David Christensen wrote:\n> True. :-) This does seem like a tool geared towards \"expert mode\", so\n> maybe we just assume if you need it you know what you're doing?\n\nThis is definitely an expert mode toy.\n--\nMichael",
"msg_date": "Wed, 27 Apr 2022 10:25:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "Enclosed is v3 of this patch; this adds two modes for this feature,\none with the raw page `--save-fullpage/-W` and one with the\nLSN+checksum fixups `--save-fullpage-fixup/-X`.\n\nI've added at least some basic sanity-checking of the underlying\nfeature, as well as run the test file and the changes to pg_waldump.c\nthrough pgindent/perltidy to make them adhere to project standards.\nThrew in a rebase as well.\n\nWould appreciate any additional feedback here.\n\nBest,\n\nDavid",
"msg_date": "Mon, 2 May 2022 08:42:13 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "...and pushing a couple fixups pointed out by cfbot, so here's v4.\n\n\nOn Mon, May 2, 2022 at 8:42 AM David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> Enclosed is v3 of this patch; this adds two modes for this feature,\n> one with the raw page `--save-fullpage/-W` and one with the\n> LSN+checksum fixups `--save-fullpage-fixup/-X`.\n>\n> I've added at least some basic sanity-checking of the underlying\n> feature, as well as run the test file and the changes to pg_waldump.c\n> through pgindent/perltidy to make them adhere to project standards.\n> Threw in a rebase as well.\n>\n> Would appreciate any additional feedback here.\n>\n> Best,\n>\n> David",
"msg_date": "Mon, 2 May 2022 18:44:45 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On 2022-Apr-27, Michael Paquier wrote:\n\n> On Tue, Apr 26, 2022 at 01:15:05PM -0500, David Christensen wrote:\n> > True. :-) This does seem like a tool geared towards \"expert mode\", so\n> > maybe we just assume if you need it you know what you're doing?\n> \n> This is definitely an expert mode toy.\n\nI remember Greg Mullane posted a tool that attempts to correct page CRC\nmismatches[1]. This new tool might be useful to feed healing attempts,\ntoo. (It's of course not in any way a *solution*, because the page\nmight have been modified by other WAL records since the last FPI, but it\ncould be a start towards building a solution that scoops page contents\nfrom WAL.)\n\n[1] https://github.com/turnstep/pg_healer\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nSubversion to GIT: the shortest path to happiness I've ever heard of\n (Alexey Klyukin)\n\n\n",
"msg_date": "Tue, 3 May 2022 10:34:41 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Tue, May 03, 2022 at 10:34:41AM +0200, Alvaro Herrera wrote:\n> I remember Greg Mullane posted a tool that attempts to correct page CRC\n> mismatches[1]. This new tool might be useful to feed healing attempts,\n> too. (It's of course not in any way a *solution*, because the page\n> might have been modified by other WAL records since the last FPI, but it\n> could be a start towards building a solution that scoops page contents\n> from WAL.)\n> \n> [1] https://github.com/turnstep/pg_healer\n\nFun. Thanks for mentioning that.\n--\nMichael",
"msg_date": "Wed, 15 Jun 2022 14:57:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Tue, 3 May 2022 at 08:45, David Christensen <david.christensen@crunchydata.com> wrote:\n>\n> ...and pushing a couple fixups pointed out by cfbot, so here's v4.\n\nHi\n\ncfbot reports the patch no longer applies [1]. As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time to update the patch.\n\n[1] http://cfbot.cputube.org/patch_40_3628.log\n\nThanks\n\nIan Barwick\n\n\n",
"msg_date": "Fri, 4 Nov 2022 11:52:59 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Fri, Nov 04, 2022 at 11:52:59AM +0900, Ian Lawrence Barwick wrote:\n> On Tue, 3 May 2022 at 08:45, David Christensen <david.christensen@crunchydata.com> wrote:\n> >\n> > ...and pushing a couple fixups pointed out by cfbot, so here's v4.\n> \n> cfbot reports the patch no longer applies [1]. As CommitFest 2022-11 is\n> currently underway, this would be an excellent time to update the patch.\n\nMore important than needing to be rebased, the patch has never passed\nits current tests on Windows.\n\nAs I recall, that's due to relying on \"cp\" (and \"rsync\", which\nshouldn't be assumed to exist by regression tests).\n\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/38/3628\n\n\n",
"msg_date": "Fri, 4 Nov 2022 09:02:38 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Nov 4, 2022, at 9:02 AM, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> On Fri, Nov 04, 2022 at 11:52:59AM +0900, Ian Lawrence Barwick wrote:\n>> On Tue, 3 May 2022 at 08:45, David Christensen <david.christensen@crunchydata.com> wrote:\n>>> \n>>> ...and pushing a couple fixups pointed out by cfbot, so here's v4.\n>> \n>> cfbot reports the patch no longer applies [1]. As CommitFest 2022-11 is\n>> currently underway, this would be an excellent time to update the patch.\n> \n> More important than needing to be rebased, the patch has never passed\n> its current tests on Windows.\n> \n> As I recall, that's due to relying on \"cp\" (and \"rsync\", which\n> shouldn't be assumed to exist by regression tests).\n\nI will work on supporting Windows compatibility here. Is there some list of guidelines for what you can and can’t use? I don’t have a Windows machine available to develop on. \n\nWas it failing on Windows? I was attempting to skip it as I recall. \n\nBest,\n\nDavid\n\n\n",
"msg_date": "Fri, 4 Nov 2022 09:16:29 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Fri, Nov 04, 2022 at 09:16:29AM -0500, David Christensen wrote:\n> On Nov 4, 2022, at 9:02 AM, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Fri, Nov 04, 2022 at 11:52:59AM +0900, Ian Lawrence Barwick wrote:\n> >> On Tue, 3 May 2022 at 08:45, David Christensen <david.christensen@crunchydata.com> wrote:\n> >>> \n> >>> ...and pushing a couple fixups pointed out by cfbot, so here's v4.\n> >> \n> >> cfbot reports the patch no longer applies [1]. As CommitFest 2022-11 is\n> >> currently underway, this would be an excellent time to update the patch.\n> > \n> > More important than needing to be rebased, the patch has never passed\n> > its current tests on Windows.\n> > \n> > As I recall, that's due to relying on \"cp\" (and \"rsync\", which\n> > shouldn't be assumed to exist by regression tests).\n> \n> I will work on supporting Windows compatibility here. Is there some list of guidelines for what you can and can’t use? I don’t have a Windows machine available to develop on. \n\nI think a lot (most?) developers here don't have a Windows environment\navailable, so now have been using cirrusci's tests to verify. If you\nhaven't used cirrusci directly (not via cfbot) before, start at:\nsrc/tools/ci/README\n\nThere's not much assumed about the build environment, and not much more\nassumed about the test environment. Most of the portability is handled\nby using C and perl. I think there's even no assumption that \"tar\" is\navailable (except maybe for building releases). This patch should avoid\nrelying on tools that aren't already required.\n\nAs a practical matter, cfbot needs to pass, not only to demonstrate that\nthe patch consistently passes tests, but also because if the patch were\nmerged while it failed tests in cfbot, it would cause every other patch\nto start to fail, too.\n\n> Was it failing on Windows? I was attempting to skip it as I recall. \n\nI don't see anything about skipping, and cirrus's logs from 2\ncommitfests ago were pruned. I looked at this patch earlier this year,\nbut never got around to replacing the calls to rsync and cp.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 4 Nov 2022 13:38:38 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Fri, Nov 4, 2022 at 1:38 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > As I recall, that's due to relying on \"cp\". And \"rsync\", which\n> > > shouldn't be assumed to exist by regression tests).\n\nWill poke around other TAP tests to see if there's a more consistent\ninterface, what perl version we can assume and available modules, etc.\nIf there's not some trivial wrapper at this point so all TAP tests\ncould use it regardless of OS, it would definitely be good to\nintroduce such a method.\n\n> > I will work on supporting the windows compatibility here. Is there some list of guidelines for what you can and can’t use? I don’t have a windows machine available to develop on.\n>\n> I think a lot (most?) developers here don't have a windows environment\n> available, so now have been using cirrusci's tests to verify. If you\n> haven't used cirrusci directly (not via cfbot) before, start at:\n> src/tools/ci/README\n\nThanks, good starting point.\n\n> > Was it failing on windows? I was attempting to skip it as I recall.\n>\n> I don't see anything about skipping, and cirrus's logs from 2\n> commitfests ago were pruned. I looked at this patch earlier this year,\n> but never got around to replacing the calls to rsync and cp.\n\nAh, it's skipped (not fixed) in my git repo, but never got around to\nsubmitting that version through email. That explains it.\n\nDavid\n\n\n",
"msg_date": "Fri, 4 Nov 2022 14:43:15 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "Hi Justin et al,\n\nEnclosed is v5 of this patch which now passes the CirrusCI checks for\nall supported OSes. I went ahead and reworked the test a bit so it's a\nlittle more amenable to the OS-agnostic approach for testing.\n\nBest,\n\nDavid",
"msg_date": "Mon, 7 Nov 2022 17:01:01 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Mon, Nov 07, 2022 at 05:01:01PM -0600, David Christensen wrote:\n> Hi Justin et al,\n> \n> Enclosed is v5 of this patch which now passes the CirrusCI checks for\n> all supported OSes. I went ahead and reworked the test a bit so it's a\n> little more amenable to the OS-agnostic approach for testing.\n\nGreat, thanks.\n\nThis includes the changes that I'd started a few months ago.\nPlus adding the test which was missing for meson.\n\n+ format: <literal><replaceable>LSN</replaceable>.<replaceable>TSOID</replaceable>.<replaceable>DBOID</replaceable>.<replaceable>RELNODE</replaceable>.<replaceable>BLKNO</replaceable></literal>\n\nI'd prefer if the abbreviations were \"reltablespace\" and \"datoid\"\n\nAlso, should the test case call pg_relation_filenode() rather than using\nrelfilenode directly ? Is it a problem that the test code assumes\npagesize=8192 ?\n\n-- \nJustin",
"msg_date": "Tue, 8 Nov 2022 16:45:19 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Tue, Nov 8, 2022 at 4:45 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Nov 07, 2022 at 05:01:01PM -0600, David Christensen wrote:\n> > Hi Justin et al,\n> >\n> > Enclosed is v5 of this patch which now passes the CirrusCI checks for\n> > all supported OSes. I went ahead and reworked the test a bit so it's a\n> > little more amenable to the OS-agnostic approach for testing.\n>\n> Great, thanks.\n>\n> This includes the changes that I'd started a few months ago.\n> Plus adding the test which was missing for meson.\n\nCool, will review, thanks.\n\n> + format: <literal><replaceable>LSN</replaceable>.<replaceable>TSOID</replaceable>.<replaceable>DBOID</replaceable>.<replaceable>RELNODE</replaceable>.<replaceable>BLKNO</replaceable></literal>\n>\n> I'd prefer if the abbreviations were \"reltablespace\" and \"datoid\"\n\nSure, no issues there.\n\n> Also, should the test case call pg_relation_filenode() rather than using\n> relfilenode directly ? Is it a problem that the test code assumes\n> pagesize=8192 ?\n\nBoth good points. Is pagesize just exposed via\n`current_setting('block_size')` or is there a different approach?\n\nDavid\n\n\n",
"msg_date": "Tue, 8 Nov 2022 16:49:19 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "Enclosed is v6, which squashes your refactor and adds the additional\nrecent suggestions.\n\nThanks!",
"msg_date": "Tue, 8 Nov 2022 17:37:55 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, failed\nSpec compliant: tested, failed\nDocumentation: tested, failed\n\nHello,\r\n\r\nI tested this patch on Linux and there is no problem.\r\nAlso, I reviewed this patch and commented below.\r\n\r\n@@ -439,6 +447,107 @@ XLogRecordHasFPW(XLogReaderState *record)\r\n+ if (fork >= 0 && fork <= MAX_FORKNUM)\r\n+ {\r\n+ if (fork)\r\n+ sprintf(forkname, \"_%s\", forkNames[fork]);\r\n+ else\r\n+ forkname[0] = 0;\r\n+ }\r\n+ else\r\n+ pg_fatal(\"found invalid fork number: %u\", fork);\r\n\r\nShould we add the comment if the main fork is saved without \"_main\" suffix for code readability?\r\n\r\n@@ -679,6 +788,9 @@ usage(void)\r\n \" (default: 1 or the value used in STARTSEG)\\n\"));\r\n printf(_(\" -V, --version output version information, then exit\\n\"));\r\n printf(_(\" -w, --fullpage only show records with a full page write\\n\"));\r\n+ printf(_(\" -W, --save-fpi=path save full page images to given path as\\n\"\r\n+ \" LSN.T.D.R.B_F\\n\"));\r\n+ printf(_(\" -X, --fixup-fpi=path like --save-fpi but apply LSN fixups to saved page\\n\"));\r\n printf(_(\" -x, --xid=XID only show records with transaction ID XID\\n\"));\r\n printf(_(\" -z, --stats[=record] show statistics instead of records\\n\"\r\n \" (optionally, show per-record statistics)\\n\"));\r\n\r\nSince lower-case options are displayed at the top, should we switch the order of -x and -X?\r\n\r\n@@ -972,6 +1093,25 @@ main(int argc, char **argv)\r\n }\r\n }\r\n\r\n+ int dir_status = pg_check_dir(config.save_fpw_path);\r\n+\r\n+ if (dir_status < 0)\r\n+ {\r\n+ pg_log_error(\"could not access output directory: %s\", config.save_fpw_path);\r\n+ goto bad_argument;\r\n+ }\r\n\r\nShould we output %s enclosed with \\\"?\r\n\r\nRegards,\r\nSho Kato",
"msg_date": "Wed, 09 Nov 2022 02:33:10 +0000",
"msg_from": "sho kato <kato-sho@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Wed, Nov 9, 2022 at 5:08 AM David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> Enclosed is v6, which squashes your refactor and adds the additional\n> recent suggestions.\n\nThanks for working on this feature. Here are some comments for now. I\nhaven't looked at the tests, I will take another look at the code and\ntests after these and all other comments are addressed.\n\n1. For ease of review, please split the test patch to 0002.\n\n2. I'm unable to understand the use-case for --fixup-fpi option.\npg_waldump is supposed to be just WAL reader, and must not return any\nmodified information, with --fixup-fpi option, the patch violates this\nprinciple i.e. it sets page LSN and returns. Without actually\nreplaying the WAL record on the page, how is it correct to just set\nthe LSN? How will it be useful? ISTM, we must ignore this option\nunless there's a strong use-case.\n\n3.\n+ if (fork >= 0 && fork <= MAX_FORKNUM)\n+ {\n+ if (fork)\n+ sprintf(forkname, \"_%s\", forkNames[fork]);\n+ else\n+ forkname[0] = 0;\n+ }\n+ else\n+ pg_fatal(\"found invalid fork number: %u\", fork);\n\nIsn't the above complex? What's the problem with something like below?\nWhy do we need if (fork) - else block?\n\nif (fork >= 0 && fork <= MAX_FORKNUM)\n sprintf(forkname, \"_%s\", forkNames[fork]);\nelse\n pg_fatal(\"found invalid fork number: %u\", fork);\n\n3.\n+ char page[BLCKSZ] = {0};\nI think when writing to a file, we need PGAlignedBlock rather than a\nsimple char array of bytes, see the description around PGAlignedBlock\nfor why it is so.\n\n4.\n+ if (pg_pwrite(fileno(OPF), page, BLCKSZ, 0) != BLCKSZ)\nWhy pg_pwrite(), why not just fwrite()? If fwrite() is used, you can\navoid fileno() system calls, no? Do you need the file position to\nremain the same after writing, hence pg_pwrite()?\n\n5.\n+ if (!RestoreBlockImage(record, block_id, page))\n+ continue;\n+\n+ /* we have our extracted FPI, let's save it now */\nAfter extracting the page from the WAL record, do we need to perform a\nchecksum on it?\n\n6.\n+ if (dir_status == 0 && mkdir(config.save_fpw_path, 0700) < 0)\nUse pg_dir_create_mode instead of hard-coded 0007?\n\n7.\n+ if (dir_status == 0 && mkdir(config.save_fpw_path, 0700) < 0)\n+ fsync(fileno(OPF));\n+ fclose(OPF);\nSince you're creating the directory in case it's not available, you\nneed to fsync the directory too?\n\n8.\n+ case 'W':\n+ case 'X':\n+ config.fixup_fpw = (option == 'X');\n+ config.save_fpw_path = pg_strdup(optarg);\n+ break;\nJust set config.fixup_fpw = false before the switch block starts,\nlike the other variables, and then perhaps doing like below is more\nreadable:\ncase 'W':\n config.save_fpw_path = pg_strdup(optarg);\ncase 'X':\n config.fixup_fpw = true;\n config.save_fpw_path = pg_strdup(optarg);\n\n9.\n+ if (dir_status == 0 && mkdir(config.save_fpw_path, 0700) < 0)\nShould we use pg_mkdir_p() instead of mkdir()?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 9 Nov 2022 18:00:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Wed, Nov 09, 2022 at 06:00:40PM +0530, Bharath Rupireddy wrote:\n> On Wed, Nov 9, 2022 at 5:08 AM David Christensen <david.christensen@crunchydata.com> wrote:\n> >\n> > Enclosed is v6, which squashes your refactor and adds the additional\n> > recent suggestions.\n> \n> Thanks for working on this feature. Here are some comments for now. I\n> haven't looked at the tests, I will take another look at the code and\n> tests after these and all other comments are addressed.\n> \n> 1. For ease of review, please split the test patch to 0002.\n\nThis is just my opinion, but .. why ? Since it's easy to\nfilter/skip/display a file, I don't think it's usually useful to have\nseparate patches for tests or docs.\n\n> 6.\n> + if (dir_status == 0 && mkdir(config.save_fpw_path, 0700) < 0)\n> Use pg_dir_create_mode instead of hard-coded 0007?\n\nI think I thought of that when I first looked at the patch ... but, I'm\nnot sure, since it says:\n\nsrc/include/common/file_perm.h-/* Modes for creating directories and files IN THE DATA DIRECTORY */\nsrc/include/common/file_perm.h:extern PGDLLIMPORT int pg_dir_create_mode;\n\nI was wondering if there's any reason to do \"CREATE DATABASE\". The vast\nmajority of TAP tests don't.\n\n$ git grep -ho 'safe_psql[^ ]*' '*pl' |sort |uniq -c |sort -nr |head\n 1435 safe_psql('postgres',\n 335 safe_psql(\n 23 safe_psql($connect_db,\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 9 Nov 2022 08:14:47 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On 2022-Nov-09, Justin Pryzby wrote:\n\n> On Wed, Nov 09, 2022 at 06:00:40PM +0530, Bharath Rupireddy wrote:\n\n> > 1. For ease of review, please split the test patch to 0002.\n> \n> This is just my opinion, but .. why ? Since it's easy to\n> filter/skip/display a file, I don't think it's usually useful to have\n> separate patches for tests or docs.\n\nI concur with Justin. When a patch is a bugfix and a test is added that\nverifies it, I like to keep the test in a separate commit (for submit\npurposes and in my personal repo -- not for the official push!) so that\nI can git-checkout to just the test and make sure it fails ahead of\npushing the fix commit. But for a new feature, there's no reason to do\nso.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 9 Nov 2022 15:32:07 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Wed, Nov 9, 2022 at 6:30 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Nov 9, 2022 at 5:08 AM David Christensen\n> <david.christensen@crunchydata.com> wrote:\n> >\n> > Enclosed is v6, which squashes your refactor and adds the additional\n> > recent suggestions.\n>\n> Thanks for working on this feature. Here are some comments for now. I\n> haven't looked at the tests, I will take another look at the code and\n> tests after these and all other comments are addressed.\n>\n> 1. For ease of review, please split the test patch to 0002.\n\nPer later discussion seems like new feature tests are fine in the same\npatch, yes?\n\n> 2. I'm unable to understand the use-case for --fixup-fpi option.\n> pg_waldump is supposed to be just WAL reader, and must not return any\n> modified information, with --fixup-fpi option, the patch violates this\n> principle i.e. it sets page LSN and returns. Without actually\n> replaying the WAL record on the page, how is it correct to just set\n> the LSN? How will it be useful? ISTM, we must ignore this option\n> unless there's a strong use-case.\n\nHow I was envisioning this was for cases like extreme surgery for\ncorrupted pages, where you extract the page from WAL but it has lsn\nand checksum set so you could do something like `dd if=fixup-block\nof=relation ...`, so it *simulates* the replay of said fullpage blocks\nin cases where for some reason you can't play the intermediate\nrecords; since this is always a fullpage block, it's capturing what\nwould be the snapshot so you could manually insert somewhere as needed\nwithout needing to replay (say if dealing with an incomplete or\ncorrupted WAL stream).\n\n> 3.\n> + if (fork >= 0 && fork <= MAX_FORKNUM)\n> + {\n> + if (fork)\n> + sprintf(forkname, \"_%s\", forkNames[fork]);\n> + else\n> + forkname[0] = 0;\n> + }\n> + else\n> + pg_fatal(\"found invalid fork number: %u\", fork);\n>\n> Isn't the above complex? What's the problem with something like below?\n> Why do we need if (fork) - else block?\n>\n> if (fork >= 0 && fork <= MAX_FORKNUM)\n> sprintf(forkname, \"_%s\", forkNames[fork]);\n> else\n> pg_fatal(\"found invalid fork number: %u\", fork);\n\nThis was to suppress any suffix for main forks, but yes, could\nsimplify and include the `_` in the suffix name. Will include such a\nchange.\n\n> 3.\n> + char page[BLCKSZ] = {0};\n> I think when writing to a file, we need PGAlignedBlock rather than a\n> simple char array of bytes, see the description around PGAlignedBlock\n> for why it is so.\n\nEasy enough change, and makes sense.\n\n> 4.\n> + if (pg_pwrite(fileno(OPF), page, BLCKSZ, 0) != BLCKSZ)\n> Why pg_pwrite(), why not just fwrite()? If fwrite() is used, you can\n> avoid fileno() system calls, no? Do you need the file position to\n> remain the same after writing, hence pg_pwrite()?\n\nI don't recall the original motivation, TBH.\n\n> 5.\n> + if (!RestoreBlockImage(record, block_id, page))\n> + continue;\n> +\n> + /* we have our extracted FPI, let's save it now */\n> After extracting the page from the WAL record, do we need to perform a\n> checksum on it?\n\nThat is there in fixup mode (or should be). Are you thinking this\nshould also be set if not in fixup mode? That defeats the purpose of\nthe raw page extract, which is to see *exactly* what the WAL stream\nhas.\n\n> 6.\n> + if (dir_status == 0 && mkdir(config.save_fpw_path, 0700) < 0)\n> Use pg_dir_create_mode instead of hard-coded 0007?\n\nSure.\n\n> 7.\n> + if (dir_status == 0 && mkdir(config.save_fpw_path, 0700) < 0)\n> + fsync(fileno(OPF));\n> + fclose(OPF);\n> Since you're creating the directory in case it's not available, you\n> need to fsync the directory too?\n\nSure.\n\n> 8.\n> + case 'W':\n> + case 'X':\n> + config.fixup_fpw = (option == 'X');\n> + config.save_fpw_path = pg_strdup(optarg);\n> + break;\n> Just set config.fixup_fpw = false before the switch block starts,\n> like the other variables, and then perhaps doing like below is more\n> readable:\n> case 'W':\n> config.save_fpw_path = pg_strdup(optarg);\n> case 'X':\n> config.fixup_fpw = true;\n> config.save_fpw_path = pg_strdup(optarg);\n\nLike separate opt processing with their own `break` statement?\nProbably a bit more readable/conventional.\n\n> 9.\n> + if (dir_status == 0 && mkdir(config.save_fpw_path, 0700) < 0)\n> Should we use pg_mkdir_p() instead of mkdir()?\n\nSure.\n\nThanks,\n\nDavid\n\n\n",
"msg_date": "Wed, 9 Nov 2022 14:01:14 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "> > 6.\n> > + if (dir_status == 0 && mkdir(config.save_fpw_path, 0700) < 0)\n> > Use pg_dir_create_mode instead of hard-coded 0007?\n>\n> I think I thought of that when I first looked at the patch ... but, I'm\n> not sure, since it says:\n>\n> src/include/common/file_perm.h-/* Modes for creating directories and files IN THE DATA DIRECTORY */\n> src/include/common/file_perm.h:extern PGDLLIMPORT int pg_dir_create_mode;\n\nLooks like it's pretty evenly split in src/bin:\n\n$ git grep -o -E -w '(pg_mkdir_p|mkdir)' '**.c' | sort | uniq -c\n 3 initdb/initdb.c:mkdir\n 3 initdb/initdb.c:pg_mkdir_p\n 1 pg_basebackup/bbstreamer_file.c:mkdir\n 2 pg_basebackup/pg_basebackup.c:pg_mkdir_p\n 1 pg_dump/pg_backup_directory.c:mkdir\n 1 pg_rewind/file_ops.c:mkdir\n 4 pg_upgrade/pg_upgrade.c:mkdir\n\nSo if that is the preferred approach I'll go ahead and use it.\n\n> I was wondering if there's any reason to do \"CREATE DATABASE\". The vast\n> majority of TAP tests don't.\n>\n> $ git grep -ho 'safe_psql[^ ]*' '*pl' |sort |uniq -c |sort -nr |head\n> 1435 safe_psql('postgres',\n> 335 safe_psql(\n> 23 safe_psql($connect_db,\n\nIf there was a reason, I don't recall offhand; I will test removing it\nand if things still work will consider it good enough.\n\nDavid\n\n\n",
"msg_date": "Wed, 9 Nov 2022 14:08:11 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Wed, Nov 9, 2022 at 2:08 PM David Christensen\n<david.christensen@crunchydata.com> wrote:\n> Justin sez:\n> > I was wondering if there's any reason to do \"CREATE DATABASE\". The vast\n> > majority of TAP tests don't.\n> >\n> > $ git grep -ho 'safe_psql[^ ]*' '*pl' |sort |uniq -c |sort -nr |head\n> > 1435 safe_psql('postgres',\n> > 335 safe_psql(\n> > 23 safe_psql($connect_db,\n>\n> If there was a reason, I don't recall offhand; I will test removing it\n> and if things still work will consider it good enough.\n\nThings blew up when I did that; rather than hunt it down, I just left it in. :-)\n\nEnclosed is v7, with changes thus suggested thus far.",
"msg_date": "Wed, 9 Nov 2022 14:37:29 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Thu, Nov 10, 2022 at 1:31 AM David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n\nThanks for providing the v7 patch, please see my comments and responses below.\n\n> > 2. I'm unable to understand the use-case for --fixup-fpi option.\n> > pg_waldump is supposed to be just WAL reader, and must not return any\n> > modified information, with --fixup-fpi option, the patch violates this\n> > principle i.e. it sets page LSN and returns. Without actually\n> > replaying the WAL record on the page, how is it correct to just set\n> > the LSN? How will it be useful? ISTM, we must ignore this option\n> > unless there's a strong use-case.\n>\n> How I was envisioning this was for cases like extreme surgery for\n> corrupted pages, where you extract the page from WAL but it has lsn\n> and checksum set so you could do something like `dd if=fixup-block\n> of=relation ...`, so it *simulates* the replay of said fullpage blocks\n> in cases where for some reason you can't play the intermediate\n> records; since this is always a fullpage block, it's capturing what\n> would be the snapshot so you could manually insert somewhere as needed\n> without needing to replay (say if dealing with an incomplete or\n> corrupted WAL stream).\n\nRecovery sets the page LSN after it replayed the WAL record on the\npage right? Recovery does this - base_page/FPI +\napply_WAL_record_and_then_set_applied_WAL_record's_LSN =\nnew_version_of_page. Essentially, in your patch, you are just setting\nthe WAL record LSN with the page contents being the base page's. I'm\nstill not sure what's the real use-case here. 
We don't have an\nindependent function in postgres, given a base page and a WAL record\nthat just replays the WAL record and output's the new version of the\npage, so I think what you do in the patch with fixup option seems\nwrong to me.\n\n> > 5.\n> > + if (!RestoreBlockImage(record, block_id, page))\n> > + continue;\n> > +\n> > + /* we have our extracted FPI, let's save it now */\n> > After extracting the page from the WAL record, do we need to perform a\n> > checksum on it?\n\nI think you just need to do the following, this will ensure the\nauthenticity of the page that pg_waldump returns.\nif ((PageHeader) page)->pd_checksum != pg_checksum_page((char *) page, blk))\n{\n pg_fatal(\"page checksum failed\");\n}\n\n> > case 'W':\n> > config.save_fpw_path = pg_strdup(optarg);\n> > case 'X':\n> > config.fixup_fpw = true;\n> > config.save_fpw_path = pg_strdup(optarg);\n>\n> Like separate opt processing with their own `break` statement?\n> Probably a bit more readable/conventional.\n\nYes.\n\nSome more comments:\n\n1.\n+ PGAlignedBlock zerobuf;\nEssentially, it's not a zero buffer, please rename the variable to\nsomething like 'buf' or 'page_buf' or someother?\n\n2.\n+ if (pg_pwrite(fileno(OPF), page, BLCKSZ, 0) != BLCKSZ)\nReplace pg_pwrite with fwrite() and avoid fileno() system calls that\nshould suffice here, AFICS, we don't need pg_pwrite.\n\n3.\n+ if (config.save_fpw_path != NULL)\n+ {\n+ /* Create the dir if it doesn't exist */\n+ if (pg_mkdir_p(config.save_fpw_path, pg_dir_create_mode) < 0)\nI think you still need pg_check_dir() here, how about something like below?\n\nif (pg_check_dir(config.save_fpw_path) == 0)\n{\n if (pg_mkdir_p(config.save_fpw_path, pg_dir_create_mode) < 0)\n {\n /* error */\n }\n}\n\n4.\n+ /* Create the dir if it doesn't exist */\n+ if (pg_mkdir_p(config.save_fpw_path, pg_dir_create_mode) < 0)\n+ {\n+ pg_log_error(\"could not create output directory \\\"%s\\\": %m\",\n+ config.save_fpw_path);\n+ goto bad_argument;\nWhy is the directory 
creation error a bad_argument? I think you need\njust pg_fatal() here.\n\n5.\n+ fsync(fileno(OPF));\n+ fclose(OPF);\nI think just the fsync() isn't enough, you still need fsync_fname()\nand/or fsync_parent_path(), perhaps after for (block_id = 0; block_id\n<= XLogRecMaxBlockId(record); block_id++) loop.\n\n6. Speaking of which, do we need to do fsync()'s optionally? If we\nwere to write many such FPI files, aren't there going to be more\nfsync() calls and imagine this feature being used on a production\nserver and a lot of fsync() will definitely make running server\nfsync() ops slower. I think we need a new option whether pg_waldump\never do fsync() or not, something similar to --no-sync of\npg_receivewal/pg_upgrade/pg_dump/pg_initdb/pg_checksums etc. I would\nlike it if the pg_waldump's --no-sync is coded as 0001 and 0002 can\nmake use of it.\n\n7.\n+ pg_fatal(\"couldn't write out complete fullpage image to\nfile: %s\", filename);\nWe typically use \"full page image\" in the output strings, please correct.\n\n8.\n+\n+ if (((PageHeader) page)->pd_checksum)\n+ ((PageHeader) page)->pd_checksum =\npg_checksum_page((char *) page, blk);\nWhy do you need to set the page's checksum by yourself? 
I don't think\nthis is the right way, pg_waldump should just return what it sees in\nthe WAL record, of course, after verifying a few checks (like checksum\nis correct or not), but it mustn't set or compute anything new in the\nreturned page.\n\nFew comments on the tests:\n1.\n+$primary->init('-k');\n+$primary->append_conf('postgresql.conf', \"max_wal_size='100MB'\");\n+$primary->append_conf('postgresql.conf', \"wal_level='replica'\");\n+$primary->append_conf('postgresql.conf', 'archive_mode=on');\n+$primary->append_conf('postgresql.conf', \"archive_command='/bin/false'\");\n+$primary->start;\nI don't think we need these many append_conf calls here, see how\nothers are doing it with just one single call:\n\n$node->append_conf(\n 'postgresql.conf', qq{\nlisten_addresses = '$hostaddr'\nkrb_server_keyfile = '$keytab'\nlog_connections = on\nlc_messages = 'C'\n});\n\n2.\n+$primary->append_conf('postgresql.conf', \"max_wal_size='100MB'\");\nDo you really need 100MB max_wal_size? Why can't you just initialize\nyour cluster with 1MB wal files instead of 16MB and set max_wal_size\nto 4MB or a bit more, something like 019_replslot_limit.pl does?\n\n3.\n+# generate data/wal to examine\n+$primary->safe_psql('postgres', q(CREATE DATABASE db1));\n+$primary->safe_psql('db1', <<EOF);\n+CREATE TABLE test_table AS SELECT generate_series(1,100) a;\n+CHECKPOINT;\n+SELECT pg_switch_wal();\n+UPDATE test_table SET a = a + 1;\n+CHECKPOINT;\n+SELECT pg_switch_wal();\n+UPDATE test_table SET a = a + 1;\n+CHECKPOINT;\n+SELECT pg_switch_wal();\n+EOF\nI don't think you need these many complex things to generate WAL,\nmultiple CHECKPOINT;s can make tests slower. 
To keep it simple, you\ncan just create a table, insert a single row, checkpoint, update the\nrow, switch the wal - no need to test if your feature generates\nmultiple WAL files, it's enough to test if it generates just one.\nPlease simplify the tests.\n\n4.\n+$primary->append_conf('postgresql.conf', \"wal_level='replica'\");\n+$primary->append_conf('postgresql.conf', 'archive_mode=on');\n+$primary->append_conf('postgresql.conf', \"archive_command='/bin/false'\");\nWhy do you need to set wal_level to replica, out of the box your\ncluster comes with replica only no?\nAnd why do you need archive_mode on and set the command to do nothing?\nWhy archiving is needed for testing your feature firstly?\n\n5.\n+my $primary = PostgreSQL::Test::Cluster->new('primary');\nCan you rename your node to other than primary? Because this isn't a\ntest of replication where primary and standby nodes get created. How\nabout just 'node'?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 10 Nov 2022 14:03:29 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Thu, Nov 10, 2022 at 2:33 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Nov 10, 2022 at 1:31 AM David Christensen\n> <david.christensen@crunchydata.com> wrote:\n> >\n>\n> Thanks for providing the v7 patch, please see my comments and responses below.\n\nHi Bharath, Thanks for the feedback.\n\n> > > 2. I'm unable to understand the use-case for --fixup-fpi option.\n> > > pg_waldump is supposed to be just WAL reader, and must not return any\n> > > modified information, with --fixup-fpi option, the patch violates this\n> > > principle i.e. it sets page LSN and returns. Without actually\n> > > replaying the WAL record on the page, how is it correct to just set\n> > > the LSN? How will it be useful? ISTM, we must ignore this option\n> > > unless there's a strong use-case.\n> >\n> > How I was envisioning this was for cases like extreme surgery for\n> > corrupted pages, where you extract the page from WAL but it has lsn\n> > and checksum set so you could do something like `dd if=fixup-block\n> > of=relation ...`, so it *simulates* the replay of said fullpage blocks\n> > in cases where for some reason you can't play the intermediate\n> > records; since this is always a fullpage block, it's capturing what\n> > would be the snapshot so you could manually insert somewhere as needed\n> > without needing to replay (say if dealing with an incomplete or\n> > corrupted WAL stream).\n>\n> Recovery sets the page LSN after it replayed the WAL record on the\n> page right? Recovery does this - base_page/FPI +\n> apply_WAL_record_and_then_set_applied_WAL_record's_LSN =\n> new_version_of_page. Essentially, in your patch, you are just setting\n> the WAL record LSN with the page contents being the base page's. I'm\n> still not sure what's the real use-case here. 
We don't have an\n> independent function in postgres, given a base page and a WAL record\n> that just replays the WAL record and output's the new version of the\n> page, so I think what you do in the patch with fixup option seems\n> wrong to me.\n\nWell if it's not the same output then I guess you're right and there's\nnot a use for the `--fixup` mode. By the same token, I'd say\ncalculating/setting the checksum also wouldn't need to be done, we\nshould just include the page as included in the WAL stream.\n\n> > > 5.\n> > > + if (!RestoreBlockImage(record, block_id, page))\n> > > + continue;\n> > > +\n> > > + /* we have our extracted FPI, let's save it now */\n> > > After extracting the page from the WAL record, do we need to perform a\n> > > checksum on it?\n>\n> I think you just need to do the following, this will ensure the\n> authenticity of the page that pg_waldump returns.\n> if ((PageHeader) page)->pd_checksum != pg_checksum_page((char *) page, blk))\n> {\n> pg_fatal(\"page checksum failed\");\n> }\n\nThe WAL already has a checksum, so not certain this makes sense on its\nown. Also I'm inclined to make it a warning if it doesn't match rather\nthan a fatal. 
(I'd also have to verify that the checksum is properly\nset on the page prior to copying the FPI into WAL, which I'm pretty\nsure it is but not certain.)\n\n> > > case 'W':\n> > > config.save_fpw_path = pg_strdup(optarg);\n> > > case 'X':\n> > > config.fixup_fpw = true;\n> > > config.save_fpw_path = pg_strdup(optarg);\n> >\n> > Like separate opt processing with their own `break` statement?\n> > Probably a bit more readable/conventional.\n>\n> Yes.\n\nMoot with the removal of the --fixup mode.\n\n> Some more comments:\n>\n> 1.\n> + PGAlignedBlock zerobuf;\n> Essentially, it's not a zero buffer, please rename the variable to\n> something like 'buf' or 'page_buf' or someother?\n\nSure.\n\n> 2.\n> + if (pg_pwrite(fileno(OPF), page, BLCKSZ, 0) != BLCKSZ)\n> Replace pg_pwrite with fwrite() and avoid fileno() system calls that\n> should suffice here, AFICS, we don't need pg_pwrite.\n\nSure.\n\n> 3.\n> + if (config.save_fpw_path != NULL)\n> + {\n> + /* Create the dir if it doesn't exist */\n> + if (pg_mkdir_p(config.save_fpw_path, pg_dir_create_mode) < 0)\n> I think you still need pg_check_dir() here, how about something like below?\n\nI was assuming pg_mkdir_p() acted just like mkdir -p, where it's just\nan idempotent action, so an existing dir is just treated the same.\nWhat's the benefit here? Would assume if a non-dir file existed at\nthat path or other permissions issues arose we'd just get an error\nfrom pg_mkdir_p(). (Will review the code there and confirm.)\n\n> 4.\n> + /* Create the dir if it doesn't exist */\n> + if (pg_mkdir_p(config.save_fpw_path, pg_dir_create_mode) < 0)\n> + {\n> + pg_log_error(\"could not create output directory \\\"%s\\\": %m\",\n> + config.save_fpw_path);\n> + goto bad_argument;\n> Why is the directory creation error a bad_argument? I think you need\n> just pg_fatal() here.\n\nSure. 
Was just following the other patterns I'd seen for argument handling.\n\n> 5.\n> + fsync(fileno(OPF));\n> + fclose(OPF);\n> I think just the fsync() isn't enough, you still need fsync_fname()\n> and/or fsync_parent_path(), perhaps after for (block_id = 0; block_id\n> <= XLogRecMaxBlockId(record); block_id++) loop.\n\nI'm not sure I get the value of the fsyncs here; if you are using this\ntool at this capacity you're by definition doing some sort of\ntransient investigative steps. Since the WAL was fsync'd, you could\nalways rerun/recreate as needed in the unlikely event of an OS crash\nin the middle of this investigation. Since this is outside the\npurview of the database operations proper (unlike, say, initdb) seems\nlike it's unnecessary (or definitely shouldn't need to be selectable).\nMy thoughts are that if we're going to fsync, just do the fsyncs\nunconditionally rather than complicate the interface further.\n\n> 6. Speaking of which, do we need to do fsync()'s optionally? If we\n> were to write many such FPI files, aren't there going to be more\n> fsync() calls and imagine this feature being used on a production\n> server and a lot of fsync() will definitely make running server\n> fsync() ops slower. I think we need a new option whether pg_waldump\n> ever do fsync() or not, something similar to --no-sync of\n> pg_receivewal/pg_upgrade/pg_dump/pg_initdb/pg_checksums etc. I would\n> like it if the pg_waldump's --no-sync is coded as 0001 and 0002 can\n> make use of it.\n\nSee my thoughts on #5; basically I'm -0.5 on the fsyncs at all.\n\n> 7.\n> + pg_fatal(\"couldn't write out complete fullpage image to\n> file: %s\", filename);\n> We typically use \"full page image\" in the output strings, please correct.\n\nSure.\n\n> 8.\n> +\n> + if (((PageHeader) page)->pd_checksum)\n> + ((PageHeader) page)->pd_checksum =\n> pg_checksum_page((char *) page, blk);\n> Why do you need to set the page's checksum by yourself? 
I don't think\n> this is the right way, pg_waldump should just return what it sees in\n> the WAL record, of course, after verifying a few checks (like checksum\n> is correct or not), but it mustn't set or compute anything new in the\n> returned page.\n\nThis was in the --fixup codepath only, so will go away.\n\n> Few comments on the tests:\n> 1.\n> +$primary->init('-k');\n> +$primary->append_conf('postgresql.conf', \"max_wal_size='100MB'\");\n> +$primary->append_conf('postgresql.conf', \"wal_level='replica'\");\n> +$primary->append_conf('postgresql.conf', 'archive_mode=on');\n> +$primary->append_conf('postgresql.conf', \"archive_command='/bin/false'\");\n> +$primary->start;\n> I don't think we need these many append_conf calls here, see how\n> others are doing it with just one single call:\n>\n> $node->append_conf(\n> 'postgresql.conf', qq{\n> listen_addresses = '$hostaddr'\n> krb_server_keyfile = '$keytab'\n> log_connections = on\n> lc_messages = 'C'\n> });\n\nKk.\n\n> 2.\n> +$primary->append_conf('postgresql.conf', \"max_wal_size='100MB'\");\n> Do you really need 100MB max_wal_size? Why can't you just initialize\n> your cluster with 1MB wal files instead of 16MB and set max_wal_size\n> to 4MB or a bit more, something like 019_replslot_limit.pl does?\n\nYeah, that could work; I wasn't aware that other tests were modifying\nthose params. In order to get a test that wouldn't barf when it hit\nthe end of the WAL stream (and so fail the test) I needed to ensure\nthere were multiple WAL files generated that would not be recycled so\nI could dump a single WAL file without error. This was the approach I\nwas able to come up with. 
:-)\n\n> 3.\n> +# generate data/wal to examine\n> +$primary->safe_psql('postgres', q(CREATE DATABASE db1));\n> +$primary->safe_psql('db1', <<EOF);\n> +CREATE TABLE test_table AS SELECT generate_series(1,100) a;\n> +CHECKPOINT;\n> +SELECT pg_switch_wal();\n> +UPDATE test_table SET a = a + 1;\n> +CHECKPOINT;\n> +SELECT pg_switch_wal();\n> +UPDATE test_table SET a = a + 1;\n> +CHECKPOINT;\n> +SELECT pg_switch_wal();\n> +EOF\n> I don't think you need these many complex things to generate WAL,\n> multiple CHECKPOINT;s can make tests slower. To keep it simple, you\n> can just create a table, insert a single row, checkpoint, update the\n> row, switch the wal - no need to test if your feature generates\n> multiple WAL files, it's enough to test if it generates just one.\n> Please simplfiy the tests.\n\nCan probably simplify more, but see the rationale on my last point.\n\n> 4.\n> +$primary->append_conf('postgresql.conf', \"wal_level='replica'\");\n> +$primary->append_conf('postgresql.conf', 'archive_mode=on');\n> +$primary->append_conf('postgresql.conf', \"archive_command='/bin/false'\");\n> Why do you need to set wal_level to replica, out of the box your\n> cluster comes with replica only no?\n> And why do you need archive_mode on and set the command to do nothing?\n> Why archiving is needed for testing your feature firstly?\n\nI think it had shown \"minimal\" in my testing; I was purposefully\nfailing archives so the WAL would stick around. Maybe a custom\narchive command that just copied a single WAL file into a known\nlocation so I could use that instead of the current approach would\nwork, though not sure how Windows support would work with that. Open\nto other ideas to more cleanly get a single WAL file that isn't the\nlast one. 
(Earlier versions of this test were using /all/ of the\ngenerated WAL files rather than a single one, so maybe I am\novercomplicating things for a single WAL file case.)\n\n> 5.\n> +my $primary = PostgreSQL::Test::Cluster->new('primary');\n> Can you rename your node to other than primary? Because this isn't a\n> test of replication where primary and standby nodes get created. How\n> about just 'node'?\n\nSure, np.\n\nBest,\n\nDavid\n\n\n",
"msg_date": "Thu, 10 Nov 2022 10:22:28 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Thu, Nov 10, 2022 at 9:52 PM David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> > > > 2. I'm unable to understand the use-case for --fixup-fpi option.\n> > > > pg_waldump is supposed to be just WAL reader, and must not return any\n> > > > modified information, with --fixup-fpi option, the patch violates this\n> > > > principle i.e. it sets page LSN and returns. Without actually\n> > > > replaying the WAL record on the page, how is it correct to just set\n> > > > the LSN? How will it be useful? ISTM, we must ignore this option\n> > > > unless there's a strong use-case.\n> > >\n> > > How I was envisioning this was for cases like extreme surgery for\n> > > corrupted pages, where you extract the page from WAL but it has lsn\n> > > and checksum set so you could do something like `dd if=fixup-block\n> > > of=relation ...`, so it *simulates* the replay of said fullpage blocks\n> > > in cases where for some reason you can't play the intermediate\n> > > records; since this is always a fullpage block, it's capturing what\n> > > would be the snapshot so you could manually insert somewhere as needed\n> > > without needing to replay (say if dealing with an incomplete or\n> > > corrupted WAL stream).\n> >\n> > Recovery sets the page LSN after it replayed the WAL record on the\n> > page right? Recovery does this - base_page/FPI +\n> > apply_WAL_record_and_then_set_applied_WAL_record's_LSN =\n> > new_version_of_page. Essentially, in your patch, you are just setting\n> > the WAL record LSN with the page contents being the base page's. I'm\n> > still not sure what's the real use-case here. 
We don't have an\n> > independent function in postgres, given a base page and a WAL record\n> > that just replays the WAL record and output's the new version of the\n> > page, so I think what you do in the patch with fixup option seems\n> > wrong to me.\n>\n> Well if it's not the same output then I guess you're right and there's\n> not a use for the `--fixup` mode. By the same token, I'd say\n> calculating/setting the checksum also wouldn't need to be done, we\n> should just include the page as included in the WAL stream.\n\nLet's hear from others, we may be missing something here. I recommend\nkeeping the --fixup patch as 0002, in case if we decide to discard\nit's easier, however I'll leave that to you.\n\n> > > > 5.\n> > > > + if (!RestoreBlockImage(record, block_id, page))\n> > > > + continue;\n> > > > +\n> > > > + /* we have our extracted FPI, let's save it now */\n> > > > After extracting the page from the WAL record, do we need to perform a\n> > > > checksum on it?\n> >\n> > I think you just need to do the following, this will ensure the\n> > authenticity of the page that pg_waldump returns.\n> > if ((PageHeader) page)->pd_checksum != pg_checksum_page((char *) page, blk))\n> > {\n> > pg_fatal(\"page checksum failed\");\n> > }\n>\n> The WAL already has a checksum, so not certain this makes sense on its\n> own. Also I'm inclined to make it a warning if it doesn't match rather\n> than a fatal. 
(I'd also have to verify that the checksum is properly\n> set on the page prior to copying the FPI into WAL, which I'm pretty\n> sure it is but not certain.)\n\nHow about having it as an Assert()?\n\n> > 5.\n> > + fsync(fileno(OPF));\n> > + fclose(OPF);\n> > I think just the fsync() isn't enough, you still need fsync_fname()\n> > and/or fsync_parent_path(), perhaps after for (block_id = 0; block_id\n> > <= XLogRecMaxBlockId(record); block_id++) loop.\n>\n> I'm not sure I get the value of the fsyncs here; if you are using this\n> tool at this capacity you're by definition doing some sort of\n> transient investigative steps. Since the WAL was fsync'd, you could\n> always rerun/recreate as needed in the unlikely event of an OS crash\n> in the middle of this investigation. Since this is outside the\n> purview of the database operations proper (unlike, say, initdb) seems\n> like it's unnecessary (or definitely shouldn't need to be selectable).\n> My thoughts are that if we're going to fsync, just do the fsyncs\n> unconditionally rather than complicate the interface further.\n\n-1 for fsync() per file created as it can create a lot of sync load on\nproduction servers impacting performance. How about just syncing the\ndirectory at the end assuming it doesn't cost as much as fsync() per\nFPI file created would?\n\n> > 4.\n> > +$primary->append_conf('postgresql.conf', \"wal_level='replica'\");\n> > +$primary->append_conf('postgresql.conf', 'archive_mode=on');\n> > +$primary->append_conf('postgresql.conf', \"archive_command='/bin/false'\");\n> > Why do you need to set wal_level to replica, out of the box your\n> > cluster comes with replica only no?\n> > And why do you need archive_mode on and set the command to do nothing?\n> > Why archiving is needed for testing your feature firstly?\n>\n> I think it had shown \"minimal\" in my testing; I was purposefully\n> failing archives so the WAL would stick around. 
Maybe a custom\n> archive command that just copied a single WAL file into a known\n> location so I could use that instead of the current approach would\n> work, though not sure how Windows support would work with that. Open\n> to other ideas to more cleanly get a single WAL file that isn't the\n> last one. (Earlier versions of this test were using /all/ of the\n> generated WAL files rather than a single one, so maybe I am\n> overcomplicating things for a single WAL file case.)\n\nTypically we create a physical replication slot at the beginning so\nthat the server keeps the WAL required for you in pg_wal itself, for\ninstance, please see pg_walinspect:\n\n-- Make sure checkpoints don't interfere with the test.\nSELECT 'init' FROM\npg_create_physical_replication_slot('regress_pg_walinspect_slot',\ntrue, false);\n\n> > 2.\n> > +$primary->append_conf('postgresql.conf', \"max_wal_size='100MB'\");\n> > Do you really need 100MB max_wal_size? Why can't you just initialize\n> > your cluster with 1MB wal files instead of 16MB and set max_wal_size\n> > to 4MB or a bit more, something like 019_replslot_limit.pl does?\n>\n> Yeah, that could work; I wasn't aware that other tests were modifying\n> those params. In order to get a test that wouldn't barf when it hit\n> the end of the WAL stream (and so fail the test) I needed to ensure\n> there were multiple WAL files generated that would not be recycled so\n> I could dump a single WAL file without error. This was the approach I\n> was able to come up with. :-)\n\nIf you do trick specified in the above comment i.e. using replication\nslot to hold the WAL, I think you don't need to set max_wal_size at\nall, if that's true, I think the tests can just spin up a node and run\nthe tests without bothering max_wal_size, archive_mode etc.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 11 Nov 2022 16:27:14 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Wed, Nov 09, 2022 at 02:37:29PM -0600, David Christensen wrote:\n> On Wed, Nov 9, 2022 at 2:08 PM David Christensen <david.christensen@crunchydata.com> wrote:\n> > Justin sez:\n> > > I was wondering if there's any reason to do \"CREATE DATABASE\". The vast\n> > > majority of TAP tests don't.\n> > >\n> > > $ git grep -ho 'safe_psql[^ ]*' '*pl' |sort |uniq -c |sort -nr |head\n> > > 1435 safe_psql('postgres',\n> > > 335 safe_psql(\n> > > 23 safe_psql($connect_db,\n> >\n> > If there was a reason, I don't recall offhand; I will test removing it\n> > and if things still work will consider it good enough.\n> \n> Things blew up when I did that; rather than hunt it down, I just left it in. :-)\n\n> +$primary->safe_psql('db1', <<EOF);\n\nIt worked for me when I removed the 3 references to db1.\nThat's good for efficiency of the test.\n\n> +my $blocksize = 8192;\n\nI think this should be just \"my $blocksize;\" rather than setting a value\nwhich is later overwritten.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 11 Nov 2022 08:15:04 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Fri, Nov 11, 2022 at 8:15 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Nov 09, 2022 at 02:37:29PM -0600, David Christensen wrote:\n> > On Wed, Nov 9, 2022 at 2:08 PM David Christensen <david.christensen@crunchydata.com> wrote:\n> > > Justin sez:\n> > > > I was wondering if there's any reason to do \"CREATE DATABASE\". The vast\n> > > > majority of TAP tests don't.\n> > > >\n> > > > $ git grep -ho 'safe_psql[^ ]*' '*pl' |sort |uniq -c |sort -nr |head\n> > > > 1435 safe_psql('postgres',\n> > > > 335 safe_psql(\n> > > > 23 safe_psql($connect_db,\n> > >\n> > > If there was a reason, I don't recall offhand; I will test removing it\n> > > and if things still work will consider it good enough.\n> >\n> > Things blew up when I did that; rather than hunt it down, I just left it in. :-)\n>\n> > +$primary->safe_psql('db1', <<EOF);\n>\n> It worked for me when I removed the 3 references to db1.\n> That's good for efficiency of the test.\n\nI did figure that out later; fixed in git.\n\n> > +my $blocksize = 8192;\n>\n> I think this should be just \"my $blocksize;\" rather than setting a value\n> which is later overwriten.\n\nYep. Fixed in git.\n\nDavid\n\n\n",
"msg_date": "Fri, 11 Nov 2022 08:27:51 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Fri, Nov 11, 2022 at 4:57 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Well if it's not the same output then I guess you're right and there's\n> > not a use for the `--fixup` mode. By the same token, I'd say\n> > calculating/setting the checksum also wouldn't need to be done, we\n> > should just include the page as included in the WAL stream.\n>\n> Let's hear from others, we may be missing something here. I recommend\n> keeping the --fixup patch as 0002, in case if we decide to discard\n> it's easier, however I'll leave that to you.\n\nI've whacked in `git` for now; I can resurrect if people consider it useful.\n\n> > > > > 5.\n> > > > > + if (!RestoreBlockImage(record, block_id, page))\n> > > > > + continue;\n> > > > > +\n> > > > > + /* we have our extracted FPI, let's save it now */\n> > > > > After extracting the page from the WAL record, do we need to perform a\n> > > > > checksum on it?\n> > >\n> > > I think you just need to do the following, this will ensure the\n> > > authenticity of the page that pg_waldump returns.\n> > > if ((PageHeader) page)->pd_checksum != pg_checksum_page((char *) page, blk))\n> > > {\n> > > pg_fatal(\"page checksum failed\");\n> > > }\n> >\n> > The WAL already has a checksum, so not certain this makes sense on its\n> > own. Also I'm inclined to make it a warning if it doesn't match rather\n> > than a fatal. 
(I'd also have to verify that the checksum is properly\n> > set on the page prior to copying the FPI into WAL, which I'm pretty\n> > sure it is but not certain.)\n>\n> How about having it as an Assert()?\n\nBased on empirical testing, the checksums don't match, so\nasserting/alerting on each block extracted seems next to useless, so\ngoing to just remove that.\n\n> > > 5.\n> > > + fsync(fileno(OPF));\n> > > + fclose(OPF);\n> > > I think just the fsync() isn't enough, you still need fsync_fname()\n> > > and/or fsync_parent_path(), perhaps after for (block_id = 0; block_id\n> > > <= XLogRecMaxBlockId(record); block_id++) loop.\n> >\n> > I'm not sure I get the value of the fsyncs here; if you are using this\n> > tool at this capacity you're by definition doing some sort of\n> > transient investigative steps. Since the WAL was fsync'd, you could\n> > always rerun/recreate as needed in the unlikely event of an OS crash\n> > in the middle of this investigation. Since this is outside the\n> > purview of the database operations proper (unlike, say, initdb) seems\n> > like it's unnecessary (or definitely shouldn't need to be selectable).\n> > My thoughts are that if we're going to fsync, just do the fsyncs\n> > unconditionally rather than complicate the interface further.\n>\n> -1 for fysnc() per file created as it can create a lot of sync load on\n> production servers impacting performance. 
How about just syncing the\n> directory at the end assuming it doesn't cost as much as fsync() per\n> FPI file created would?\n\nI can fsync the dir if that's a useful compromise.\n\n> > > 4.\n> > > +$primary->append_conf('postgresql.conf', \"wal_level='replica'\");\n> > > +$primary->append_conf('postgresql.conf', 'archive_mode=on');\n> > > +$primary->append_conf('postgresql.conf', \"archive_command='/bin/false'\");\n> > > Why do you need to set wal_level to replica, out of the box your\n> > > cluster comes with replica only no?\n> > > And why do you need archive_mode on and set the command to do nothing?\n> > > Why archiving is needed for testing your feature firstly?\n> >\n> > I think it had shown \"minimal\" in my testing; I was purposefully\n> > failing archives so the WAL would stick around. Maybe a custom\n> > archive command that just copied a single WAL file into a known\n> > location so I could use that instead of the current approach would\n> > work, though not sure how Windows support would work with that. Open\n> > to other ideas to more cleanly get a single WAL file that isn't the\n> > last one. (Earlier versions of this test were using /all/ of the\n> > generated WAL files rather than a single one, so maybe I am\n> > overcomplicating things for a single WAL file case.)\n>\n> Typically we create a physical replication slot at the beginning so\n> that the server keeps the WAL required for you in pg_wal itself, for\n> instance, please see pg_walinspect:\n>\n> -- Make sure checkpoints don't interfere with the test.\n> SELECT 'init' FROM\n> pg_create_physical_replication_slot('regress_pg_walinspect_slot',\n> true, false);\n\nWill see if I can get something like this to work; I'm currently\nstopping the server before running the file-based tests, but I suppose\nthere's no reason to do so, so a temporary slot that holds it around\nuntil the test is complete is probably fine.\n\nDavid\n\n\n",
"msg_date": "Mon, 14 Nov 2022 12:41:22 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "Enclosed is v8, which uses the replication slot method to retain WAL\nas well as fsync'ing the output directory when everything is done.",
"msg_date": "Mon, 14 Nov 2022 13:59:04 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 1:29 AM David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> Enclosed is v8, which uses the replication slot method to retain WAL\n> as well as fsync'ing the output directory when everything is done.\n\nThanks. It mostly is in good shape. However, few more comments:\n\n1.\n+ if it does not exist. The images saved will be subject to the same\n+ filtering and limiting criteria as display records, but in this\n+ mode <application>pg_waldump</application> will not output any other\n+ information.\nMay I know what's the intention of the statement 'The images saved\n....'? If it's not necessary and convey anything useful to the user,\ncan we remove it?\n\n2.\n+#include \"storage/checksum.h\"\n+#include \"storage/checksum_impl.h\"\nI think we don't need the above includes as we got rid of verifying\npage checksums. The patch compiles without them for me.\n\n3.\n+ char *save_fpw_path;\nCan we rename the above variable to save_fpi_path, just to be in sync\nwith what we expose to the user, the option name 'save-fpi'?\n\n4.\n+ if (config.save_fpw_path != NULL)\n+ {\n+ /* Fsync our output directory */\n+ fsync_fname(config.save_fpw_path, true);\n+ }\nI guess adding a comment there as to why we aren't fsyncing for every\nfile that gets created, but once per the directory at the end. That'd\nhelp clarify doubts that other members might get while looking at the\ncode.\n\n5.\n+ if (config.save_fpw_path != NULL)\n+ {\n+ /* Fsync our output directory */\n+ fsync_fname(config.save_fpw_path, true);\n+ }\nSo, are we sure that we don't want to fsync for time_to_stop exit(0)\ncases, say when CTRL+C'ed. 
Looks like we handle time_to_stop safely\nmeaning exiting with return code 0, shouldn't we fsync the directory?\n\n6.\n+ else if (config.save_fpw_path)\nLet's use the same convention to check non-NULLness,\nconfig.save_fpw_path != NULL.\n\n7.\n+CHECKPOINT;\n+SELECT pg_switch_wal();\n+UPDATE test_table SET a = a + 1;\n+SELECT pg_switch_wal();\nI don't think switching WAL after checkpoint is necessary here,\nbecause the checkpoint ensures all the WAL gets flushed to disk.\nPlease remove it.\n\nPS: I've seen the following code:\n+my $walfile = [sort { $a <=> $b } glob(\"$waldir/00*\")]->[1]; # we\nwant the second WAL file, which will be a complete WAL file with\nfull-page writes for our specific relation.\n\n8.\n+$node->safe_psql('postgres', <<EOF);\n+EOF\nWhy EOF is used here? Can't we do something like below to execute\nmultiple statements?\n$node->safe_psql(\n 'postgres', qq[\n SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL,\n NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL,\n NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n SELECT data FROM pg_logical_slot_get_changes('regression_slot3', NULL,\n NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n SELECT data FROM pg_logical_slot_get_changes('regression_slot4', NULL,\n NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n]);\n\nSame here:\n+$node->safe_psql('postgres', <<EOQ);\n+SELECT pg_drop_replication_slot('regress_pg_waldump_slot');\n+EOQ\n\n9.\n+my $walfile = [sort { $a <=> $b } glob(\"$waldir/00*\")]->[1]; # we\nwant the second WAL file, which will be a complete WAL file with\nfull-page writes for our specific relation.\nIs it guaranteed that just looking at the second WAL file in the\npg_wal directory assures WAL file with FPIs? I think we have to save\nthe WAL file that contains FPIs, that is the file after, CHECKPOINT,\nUPDATE and pg_switch_wal. 
I think you can store the output LSN of\npg_switch_wal and use it to identify that WAL file.\n\n10.\n+$node->safe_psql('postgres', <<EOQ);\n+SELECT pg_drop_replication_slot('regress_pg_waldump_slot');\n+EOQ\n+done_testing();\n\nDo we need to explicitly drop the slot here? I think we don't\nspecifically drop the replication slot in all the places, my guess is\nafter done_testing(), the node would get destroyed and also the slot.\nI think it's not required.\n\n11.\n+# verify filename formats matches w/--save-fpi\n+for my $fullpath (glob \"$tmp_folder/raw/*\")\nDo we need to look for the exact match of the file that gets created\nin the save-fpi path? While checking for this is great, it makes the\ntest code non-portable (may not work on Windows or other platforms,\nno?) and complex? This way, you can get rid of get_block_info() as\nwell? And +for my $fullpath (glob \"$tmp_folder/raw/*\")\nwill also get simplified.\n\nI think you can further simplify the tests by:\ncreate the node\ngenerate an FPI\ncall pg_waldump with save-fpi option\ncheck the target directory for a file that contains the relid,\nsomething like '%relid%'.\n\nThe above would still serve the purpose, tests the code without much complexity.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 15 Nov 2022 16:11:05 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 4:41 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Nov 15, 2022 at 1:29 AM David Christensen\n> <david.christensen@crunchydata.com> wrote:\n> >\n> > Enclosed is v8, which uses the replication slot method to retain WAL\n> > as well as fsync'ing the output directory when everything is done.\n>\n> Thanks. It mostly is in good shape. However, few more comments:\n>\n> 1.\n> + if it does not exist. The images saved will be subject to the same\n> + filtering and limiting criteria as display records, but in this\n> + mode <application>pg_waldump</application> will not output any other\n> + information.\n> May I know what's the intention of the statement 'The images saved\n> ....'? If it's not necessary and convey anything useful to the user,\n> can we remove it?\n\nBasically I mean if you're limiting to a specific relation or rmgr\ntype, etc, it only saves those FPIs. (So filtering is applied first\nbefore considering whether to save the FPI or not.)\n\n> 2.\n> +#include \"storage/checksum.h\"\n> +#include \"storage/checksum_impl.h\"\n> I think we don't need the above includes as we got rid of verifying\n> page checksums. The patch compiles without them for me.\n\nGood catch.\n\n> 3.\n> + char *save_fpw_path;\n> Can we rename the above variable to save_fpi_path, just to be in sync\n> with what we expose to the user, the option name 'save-fpi'?\n\nSure.\n\n> 4.\n> + if (config.save_fpw_path != NULL)\n> + {\n> + /* Fsync our output directory */\n> + fsync_fname(config.save_fpw_path, true);\n> + }\n> I guess adding a comment there as to why we aren't fsyncing for every\n> file that gets created, but once per the directory at the end. 
That'd\n> help clarify doubts that other members might get while looking at the\n> code.\n\nCan do.\n\n> 5.\n> + if (config.save_fpw_path != NULL)\n> + {\n> + /* Fsync our output directory */\n> + fsync_fname(config.save_fpw_path, true);\n> + }\n> So, are we sure that we don't want to fsync for time_to_stop exit(0)\n> cases, say when CTRL+C'ed. Looks like we handle time_to_stop safely\n> meaning exiting with return code 0, shouldn't we fsync the directory?\n\nWe can. Like I've said before, since these aren't production parts of\nthe cluster I don't personally have much of an opinion if fsync() is\nappropriate at all, so don't have strong feelings here.\n\n> 6.\n> + else if (config.save_fpw_path)\n> Let's use the same convention to check non-NULLness,\n> config.save_fpw_path != NULL.\n\nGood catch.\n\n> 7.\n> +CHECKPOINT;\n> +SELECT pg_switch_wal();\n> +UPDATE test_table SET a = a + 1;\n> +SELECT pg_switch_wal();\n> I don't think switching WAL after checkpoint is necessary here,\n> because the checkpoint ensures all the WAL gets flushed to disk.\n> Please remove it.\n\nThe point is to ensure we have a clean WAL segment that we know will\ncontain the relation we are filtering by. Will test if this still\nholds without the extra pg_switch_wal(), but that's the rationale.\n\n> PS: I've seen the following code:\n> +my $walfile = [sort { $a <=> $b } glob(\"$waldir/00*\")]->[1]; # we\n> want the second WAL file, which will be a complete WAL file with\n> full-page writes for our specific relation.\n\nI don't understand the question.\n\n> 8.\n> +$node->safe_psql('postgres', <<EOF);\n> +EOF\n> Why EOF is used here? 
Can't we do something like below to execute\n> multiple statements?\n> $node->safe_psql(\n> 'postgres', qq[\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot3', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot4', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> ]);\n>\n> Same here:\n> +$node->safe_psql('postgres', <<EOQ);\n> +SELECT pg_drop_replication_slot('regress_pg_waldump_slot');\n> +EOQ\n\nAs a long-time perl programmer, heredocs seem more natural and easier\nto read rather than a string that accomplishes the same function. If\nthere is an established project style I'll stick with it, but it just\nrolled out that way. :-)\n\n> 9.\n> +my $walfile = [sort { $a <=> $b } glob(\"$waldir/00*\")]->[1]; # we\n> want the second WAL file, which will be a complete WAL file with\n> full-page writes for our specific relation.\n> Is it guaranteed that just looking at the second WAL file in the\n> pg_wal directory assures WAL file with FPIs? I think we have to save\n> the WAL file that contains FPIs, that is the file after, CHECKPOINT,\n> UPDATE and pg_switch_wal. I think you can store output LSN of\n> pg_switch_wal\n\nYeah, I could look at that approach; originally this test was doing a\nlot more, so this is sort of residual from that original\nimplementation. For a single file, this would probably be an\nacceptable route.\n\n> 10.\n> +$node->safe_psql('postgres', <<EOQ);\n> +SELECT pg_drop_replication_slot('regress_pg_waldump_slot');\n> +EOQ\n> +done_testing();\n>\n> Do we need to explicitly drop the slot here? 
I think we don't\n> specifically drop the replication slot in all the places, my guess is\n> after done_testing(), the node would get destroyed and also the slot.\n> I think it's not required.\n\nMaybe not required, but seems good form (and the other test I based\nthis on did do the cleanup).\n\n> 11.\n> +# verify filename formats matches w/--save-fpi\n> +for my $fullpath (glob \"$tmp_folder/raw/*\")\n> Do we need to look for the exact match of the file that gets created\n> in the save-fpi path? While checking for this is great, it makes the\n> test code non-portable (may not work on Windows or other platforms,\n> no?) and complex? This way, you can get rid of get_block_info() as\n> well? And +for my $fullpath (glob \"$tmp_folder/raw/*\")\n> will also get simplified.\n>\n> I think you can further simplify the tests by:\n> create the node\n> generate an FPI\n> call pg_waldump with save-fpi option\n> check the target directory for a file that contains the relid,\n> something like '%relid%'.\n>\n> The above would still serve the purpose, tests the code without much complexity.\n\nI disagree; I think there is utility in keeping the validation of the\nexpected output. Since we have the code that works for it (and does\nwork on Windows, per passing the CI tests) I'm not seeing why we\nwouldn't want to continue to validate as much as possible.\n\nThanks,\n\nDavid\n\n\n",
"msg_date": "Tue, 15 Nov 2022 11:51:42 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "Enclosed is v9.\n\n- code style consistency (FPI instead of FPW) internally.\n- cleanup of no-longer needed checksum-related pieces from code and tests.\n- test cleanup/simplification.\n- other comment cleanup.\n\nPasses all CI checks.\n\nBest,\n\nDavid",
"msg_date": "Tue, 15 Nov 2022 13:50:20 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 1:20 AM David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> Enclosed is v9.\n>\n> - code style consistency (FPI instead of FPW) internally.\n> - cleanup of no-longer needed checksum-related pieces from code and tests.\n> - test cleanup/simplification.\n> - other comment cleanup.\n>\n> Passes all CI checks.\n\nThanks for the updated patch.\n\n1.\n- if (config.filter_by_fpw && !XLogRecordHasFPW(xlogreader_state))\n+ if (config.filter_by_fpw && !XLogRecordHasFPI(xlogreader_state))\nThese changes are not related to this feature, hence renaming those\nvariables/function names must be dealt with separately. If required,\nanother patch can be proposed to change filter_by_fpw to\nfilter_by_fpi and XLogRecordHasFPW() to XLogRecordHasFPI().\n\n2.\n+ /* We fsync our output directory only; since these files are not part\n+ * of the production database we do not require the performance hit\n+ * that fsyncing every FPI would entail, so are doing this as a\n+ * compromise. */\nThe commenting style doesn't match the standard that we follow\nelsewhere in postgres, please refer to other multi-line comments.\n\n3.\n+ fsync_fname(config.save_fpi_path, true);\n+ }\nIt looks like fsync_fname()/fsync() in general isn't recursive, in the\nsense that it doesn't fsync the files under the directory, but the\ndirectory only. So, the idea of directory fsync doesn't seem worth it.\nWe either 1) get rid of fsync entirely or 2) fsync all the files after\nthey are created and the directory at the end or 3) do option (2) with\n--no-sync option similar to its friends. Since option (2) is a no go,\nwe can either choose option (1) or option (3). My vote at this point\nis for option (1).\n\n4.\n+($walfile_name, $blocksize) = split '\\|' =>\n$node->safe_psql('postgres',\"SELECT pg_walfile_name(pg_switch_wal()),\ncurrent_setting('block_size')\");\n+my $walfile = $node->basedir . '/pgdata/pg_wal/' . 
$walfile_name;\nI think there's something wrong with this, no? pg_switch_wal() can, at\ntimes, return end+1 of the prior segment (see below snippet) and I'm\nnot sure if such a case can happen here.\n\n * The return value is either the end+1 address of the switch record,\n * or the end+1 address of the prior segment if we did not need to\n * write a switch record because we are already at segment start.\n */\nXLogRecPtr\nRequestXLogSwitch(bool mark_unimportant)\n\n5.\n+my $walfile = $node->basedir . '/pgdata/pg_wal/' . $walfile_name;\n+ok(-f $walfile, \"Got a WAL file\");\nIs this checking if the WAL file is present or not in PGDATA/pg_wal?\nIf yes, I think this isn't required as pg_switch_wal() ensures that\nthe WAL is written and flushed to disk.\n\n6.\n+my $walfile = $node->basedir . '/pgdata/pg_wal/' . $walfile_name;\nIsn't \"pgdata\" hardcoded here? I think you might need to do the following:\n$node->data_dir . '/pg_wal/' . $walfile_name;;\n\n7.\n+ # save filename for later verification\n+ $files{$file}++;\n\n+# validate that we ended up with some FPIs saved\n+ok(keys %files > 0, 'verify we processed some files');\nWhy do we need to store filenames in an array when we later just check\nthe size of the array? Can't we use a boolean (file_found) or an int\nvariable (file_count) to verify that we found the file.\n\n8.\n+$node->safe_psql('postgres', <<EOF);\n+SELECT 'init' FROM\npg_create_physical_replication_slot('regress_pg_waldump_slot', true,\nfalse);\n+CREATE TABLE test_table AS SELECT generate_series(1,100) a;\n+CHECKPOINT; -- required to force FPI for next writes\n+UPDATE test_table SET a = a + 1;\n+EOF\nThe EOF with append_conf() is being used in 4 files and elsewhere in\nthe TAP test files (more than 100?) qq[] or quotes is being used. 
I\nhave no strong opinion here, I'll leave it to the other reviewers or\ncommitter.\n\n> > 11.\n> > +# verify filename formats matches w/--save-fpi\n> > +for my $fullpath (glob \"$tmp_folder/raw/*\")\n> > Do we need to look for the exact match of the file that gets created\n> > in the save-fpi path? While checking for this is great, it makes the\n> > test code non-portable (may not work on Windows or other platforms,\n> > no?) and complex? This way, you can get rid of get_block_info() as\n> > well? And +for my $fullpath (glob \"$tmp_folder/raw/*\")\n> > will also get simplified.\n> >\n> > I think you can further simplify the tests by:\n> > create the node\n> > generate an FPI\n> > call pg_waldump with save-fpi option\n> > check the target directory for a file that contains the relid,\n> > something like '%relid%'.\n> >\n> > The above would still serve the purpose, tests the code without much complexity.\n>\n> I disagree; I think there is utility in keeping the validation of the\n> expected output. Since we have the code that works for it (and does\n> work on Windows, per passing the CI tests) I'm not seeing why we\n> wouldn't want to continue to validate as much as possible.\n\nMy intention is to simplify the tests further and I still stick to it.\nIt looks like the majority of test code is to form the file name in\nthe format that pg_waldump outputs and match with the file name in the\ntarget directory - for instance, in get_block_info(), and in the loop\nfor my $fullpath (glob \"$tmp_folder/raw/*\"). I don't think the tests\nneed to aim for file format checks, it's enough to look for the\nwritten file with '%relid%' by pg_waldump, if needed, the contents of\nthe files written/FPI can also be verified with, say, pg_checksum\ntool. Others may have different opinions though.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 16 Nov 2022 15:00:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 3:30 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> 1.\n> - if (config.filter_by_fpw && !XLogRecordHasFPW(xlogreader_state))\n> + if (config.filter_by_fpw && !XLogRecordHasFPI(xlogreader_state))\n> These changes are not related to this feature, hence renaming those\n> variables/function names must be dealt with separately. If required,\n> proposing another patch can be submitted to change filter_by_fpw to\n> filter_by_fpi and XLogRecordHasFPW() to XLogRecordHasFPI().\n\nNot required; can revert the changes unrelated to this specific patch.\n(I'd written the original ones for it, so didn't really think anything\nof it... :-))\n\n> 2.\n> + /* We fsync our output directory only; since these files are not part\n> + * of the production database we do not require the performance hit\n> + * that fsyncing every FPI would entail, so are doing this as a\n> + * compromise. */\n> The commenting style doesn't match the standard that we follow\n> elsewhere in postgres, please refer to other multi-line comments.\n\nWill fix.\n\n> 3.\n> + fsync_fname(config.save_fpi_path, true);\n> + }\n> It looks like fsync_fname()/fsync() in general isn't recursive, in the\n> sense that it doesn't fsync the files under the directory, but the\n> directory only. So, the idea of directory fsync doesn't seem worth it.\n> We either 1) get rid of fsync entirely or 2) fsync all the files after\n> they are created and the directory at the end or 3) do option (2) with\n> --no-sync option similar to its friends. Since option (2) is a no go,\n> we can either choose option (1) or option (2). My vote at this point\n> is for option (1).\n\nAgree to remove.\n\n> 4.\n> +($walfile_name, $blocksize) = split '\\|' =>\n> $node->safe_psql('postgres',\"SELECT pg_walfile_name(pg_switch_wal()),\n> current_setting('block_size')\");\n> +my $walfile = $node->basedir . '/pgdata/pg_wal/' . $walfile_name;\n> I think there's something wrong with this, no? 
pg_switch_wal() can, at\n> times, return end+1 of the prior segment (see below snippet) and I'm\n> not sure if such a case can happen here.\n>\n> * The return value is either the end+1 address of the switch record,\n> * or the end+1 address of the prior segment if we did not need to\n> * write a switch record because we are already at segment start.\n> */\n> XLogRecPtr\n> RequestXLogSwitch(bool mark_unimportant)\n\nI think this approach is pretty common to get the walfile name, no?\nWhile there might be an edge case here, since the rest of the test is\na controlled environment I'm inclined to just not worry about it; this\nwould require the changes prior to this to exactly fill a WAL segment\nwhich strikes me as extremely unlikely to the point of impossible in\nthis specific scenario.\n\n> 5.\n> +my $walfile = $node->basedir . '/pgdata/pg_wal/' . $walfile_name;\n> +ok(-f $walfile, \"Got a WAL file\");\n> Is this checking if the WAL file is present or not in PGDATA/pg_wal?\n> If yes, I think this isn't required as pg_switch_wal() ensures that\n> the WAL is written and flushed to disk.\n\nYou are correct, probably another artifact of the earlier version.\nThat said, not sure I see the harm in keeping it as a sanity-check.\n\n> 6.\n> +my $walfile = $node->basedir . '/pgdata/pg_wal/' . $walfile_name;\n> Isn't \"pgdata\" hardcoded here? I think you might need to do the following:\n> $node->data_dir . '/pg_wal/' . $walfile_name;;\n\nCan fix.\n\n> 7.\n> + # save filename for later verification\n> + $files{$file}++;\n>\n> +# validate that we ended up with some FPIs saved\n> +ok(keys %files > 0, 'verify we processed some files');\n> Why do we need to store filenames in an array when we later just check\n> the size of the array? 
Can't we use a boolean (file_found) or an int\n> variable (file_count) to verify that we found the file.\n\nAnother artifact; we were comparing the files output between two\nseparate lists of arbitrary numbers of pages being written out and\nverifying the raw/fixup versions had the same lists.\n\n> 8.\n> +$node->safe_psql('postgres', <<EOF);\n> +SELECT 'init' FROM\n> pg_create_physical_replication_slot('regress_pg_waldump_slot', true,\n> false);\n> +CREATE TABLE test_table AS SELECT generate_series(1,100) a;\n> +CHECKPOINT; -- required to force FPI for next writes\n> +UPDATE test_table SET a = a + 1;\n> +EOF\n> The EOF with append_conf() is being used in 4 files and elsewhere in\n> the TAP test files (more than 100?) qq[] or quotes is being used. I\n> have no strong opinion here, I'll leave it to the other reviewers or\n> committer.\n\nI'm inclined to leave it just for (personal) readability, but can\nchange if there's a strong consensus against.\n\n> > > 11.\n> > > +# verify filename formats matches w/--save-fpi\n> > > +for my $fullpath (glob \"$tmp_folder/raw/*\")\n> > > Do we need to look for the exact match of the file that gets created\n> > > in the save-fpi path? While checking for this is great, it makes the\n> > > test code non-portable (may not work on Windows or other platforms,\n> > > no?) and complex? This way, you can get rid of get_block_info() as\n> > > well? And +for my $fullpath (glob \"$tmp_folder/raw/*\")\n> > > will also get simplified.\n> > >\n> > > I think you can further simplify the tests by:\n> > > create the node\n> > > generate an FPI\n> > > call pg_waldump with save-fpi option\n> > > check the target directory for a file that contains the relid,\n> > > something like '%relid%'.\n> > >\n> > > The above would still serve the purpose, tests the code without much complexity.\n> >\n> > I disagree; I think there is utility in keeping the validation of the\n> > expected output. 
Since we have the code that works for it (and does\n> > work on Windows, per passing the CI tests) I'm not seeing why we\n> > wouldn't want to continue to validate as much as possible.\n>\n> My intention is to simplify the tests further and I still stick to it.\n> It looks like the majority of test code is to form the file name in\n> the format that pg_waldump outputs and match with the file name in the\n> target directory - for instance, in get_block_info(), and in the loop\n> for my $fullpath (glob \"$tmp_folder/raw/*\"). I don't think the tests\n> need to aim for file format checks, it's enough to look for the\n> written file with '%relid%' by pg_waldump, if needed, the contents of\n> the files written/FPI can also be verified with, say, pg_checksum\n> tool. Others may have different opinions though.\n\nI would like to get broader feedback before changing this.\n\nDavid\n\n\n",
"msg_date": "Thu, 17 Nov 2022 10:32:05 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Thu, Nov 17, 2022 at 10:02 PM David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> On Wed, Nov 16, 2022 at 3:30 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > 1.\n> > - if (config.filter_by_fpw && !XLogRecordHasFPW(xlogreader_state))\n> > + if (config.filter_by_fpw && !XLogRecordHasFPI(xlogreader_state))\n> > These changes are not related to this feature, hence renaming those\n> > variables/function names must be dealt with separately. If required,\n> > proposing another patch can be submitted to change filter_by_fpw to\n> > filter_by_fpi and XLogRecordHasFPW() to XLogRecordHasFPI().\n>\n> Not required; can revert the changes unrelated to this specific patch.\n> (I'd written the original ones for it, so didn't really think anything\n> of it... :-))\n>\n> > 2.\n> > + /* We fsync our output directory only; since these files are not part\n> > + * of the production database we do not require the performance hit\n> > + * that fsyncing every FPI would entail, so are doing this as a\n> > + * compromise. */\n> > The commenting style doesn't match the standard that we follow\n> > elsewhere in postgres, please refer to other multi-line comments.\n>\n> Will fix.\n>\n> > 3.\n> > + fsync_fname(config.save_fpi_path, true);\n> > + }\n> > It looks like fsync_fname()/fsync() in general isn't recursive, in the\n> > sense that it doesn't fsync the files under the directory, but the\n> > directory only. So, the idea of directory fsync doesn't seem worth it.\n> > We either 1) get rid of fsync entirely or 2) fsync all the files after\n> > they are created and the directory at the end or 3) do option (2) with\n> > --no-sync option similar to its friends. Since option (2) is a no go,\n> > we can either choose option (1) or option (2). 
My vote at this point\n> > is for option (1).\n>\n> Agree to remove.\n>\n> > 4.\n> > +($walfile_name, $blocksize) = split '\\|' =>\n> > $node->safe_psql('postgres',\"SELECT pg_walfile_name(pg_switch_wal()),\n> > current_setting('block_size')\");\n> > +my $walfile = $node->basedir . '/pgdata/pg_wal/' . $walfile_name;\n> > I think there's something wrong with this, no? pg_switch_wal() can, at\n> > times, return end+1 of the prior segment (see below snippet) and I'm\n> > not sure if such a case can happen here.\n> >\n> > * The return value is either the end+1 address of the switch record,\n> > * or the end+1 address of the prior segment if we did not need to\n> > * write a switch record because we are already at segment start.\n> > */\n> > XLogRecPtr\n> > RequestXLogSwitch(bool mark_unimportant)\n>\n> I think this approach is pretty common to get the walfile name, no?\n> While there might be an edge case here, since the rest of the test is\n> a controlled environment I'm inclined to just not worry about it; this\n> would require the changes prior to this to exactly fill a WAL segment\n> which strikes me as extremely unlikely to the point of impossible in\n> this specific scenario.\n>\n> > 5.\n> > +my $walfile = $node->basedir . '/pgdata/pg_wal/' . $walfile_name;\n> > +ok(-f $walfile, \"Got a WAL file\");\n> > Is this checking if the WAL file is present or not in PGDATA/pg_wal?\n> > If yes, I think this isn't required as pg_switch_wal() ensures that\n> > the WAL is written and flushed to disk.\n>\n> You are correct, probably another artifact of the earlier version.\n> That said, not sure I see the harm in keeping it as a sanity-check.\n>\n> > 6.\n> > +my $walfile = $node->basedir . '/pgdata/pg_wal/' . $walfile_name;\n> > Isn't \"pgdata\" hardcoded here? I think you might need to do the following:\n> > $node->data_dir . '/pg_wal/' . 
$walfile_name;;\n>\n> Can fix.\n>\n> > 7.\n> > + # save filename for later verification\n> > + $files{$file}++;\n> >\n> > +# validate that we ended up with some FPIs saved\n> > +ok(keys %files > 0, 'verify we processed some files');\n> > Why do we need to store filenames in an array when we later just check\n> > the size of the array? Can't we use a boolean (file_found) or an int\n> > variable (file_count) to verify that we found the file.\n>\n> Another artifact; we were comparing the files output between two\n> separate lists of arbitrary numbers of pages being written out and\n> verifying the raw/fixup versions had the same lists.\n>\n> > 8.\n> > +$node->safe_psql('postgres', <<EOF);\n> > +SELECT 'init' FROM\n> > pg_create_physical_replication_slot('regress_pg_waldump_slot', true,\n> > false);\n> > +CREATE TABLE test_table AS SELECT generate_series(1,100) a;\n> > +CHECKPOINT; -- required to force FPI for next writes\n> > +UPDATE test_table SET a = a + 1;\n> > +EOF\n> > The EOF with append_conf() is being used in 4 files and elsewhere in\n> > the TAP test files (more than 100?) qq[] or quotes is being used. I\n> > have no strong opinion here, I'll leave it to the other reviewers or\n> > committer.\n>\n> I'm inclined to leave it just for (personal) readability, but can\n> change if there's a strong consensus against.\n>\n> > > > 11.\n> > > > +# verify filename formats matches w/--save-fpi\n> > > > +for my $fullpath (glob \"$tmp_folder/raw/*\")\n> > > > Do we need to look for the exact match of the file that gets created\n> > > > in the save-fpi path? While checking for this is great, it makes the\n> > > > test code non-portable (may not work on Windows or other platforms,\n> > > > no?) and complex? This way, you can get rid of get_block_info() as\n> > > > well? 
And +for my $fullpath (glob \"$tmp_folder/raw/*\")\n> > > > will also get simplified.\n> > > >\n> > > > I think you can further simplify the tests by:\n> > > > create the node\n> > > > generate an FPI\n> > > > call pg_waldump with save-fpi option\n> > > > check the target directory for a file that contains the relid,\n> > > > something like '%relid%'.\n> > > >\n> > > > The above would still serve the purpose, tests the code without much complexity.\n> > >\n> > > I disagree; I think there is utility in keeping the validation of the\n> > > expected output. Since we have the code that works for it (and does\n> > > work on Windows, per passing the CI tests) I'm not seeing why we\n> > > wouldn't want to continue to validate as much as possible.\n> >\n> > My intention is to simplify the tests further and I still stick to it.\n> > It looks like the majority of test code is to form the file name in\n> > the format that pg_waldump outputs and match with the file name in the\n> > target directory - for instance, in get_block_info(), and in the loop\n> > for my $fullpath (glob \"$tmp_folder/raw/*\"). I don't think the tests\n> > need to aim for file format checks, it's enough to look for the\n> > written file with '%relid%' by pg_waldump, if needed, the contents of\n> > the files written/FPI can also be verified with, say, pg_checksum\n> > tool. Others may have different opinions though.\n>\n> I would like to get broader feedback before changing this.\n\nDavid, is there a plan to provide an updated patch in this commitfest?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Dec 2022 16:58:48 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "Hi Bharath,\n\nI can get one sent in tomorrow.\n\nThanks,\n\nDavid\n\n\n",
"msg_date": "Wed, 14 Dec 2022 16:44:34 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 04:44:34PM -0600, David Christensen wrote:\n> I can get one sent in tomorrow.\n\n-XLogRecordHasFPW(XLogReaderState *record)\n+XLogRecordHasFPI(XLogReaderState *record)\nThis still refers to a FPW, so let's leave that out as well as any\nrenamings of this kind..\n\n+ if (config.save_fpi_path != NULL)\n+ {\n+ /* Create the dir if it doesn't exist */\n+ if (pg_mkdir_p(config.save_fpi_path, pg_dir_create_mode) < 0)\n+ {\n+ pg_log_error(\"could not create output directory \\\"%s\\\": %m\",\n+ config.save_fpi_path);\n+ goto bad_argument;\n+ }\n+ }\nIt seems to me that you could allow things to work even if the\ndirectory exists and is empty. See for example\nverify_dir_is_empty_or_create() in pg_basebackup.c.\n\n+my $file_re =\n+ qr/^([0-9A-F]{8})-([0-9A-F]{8})[.][0-9]+[.][0-9]+[.][0-9]+[.][0-9]+(?:_vm|_init|_fsm|_main)?$/;\nThis is artistic to parse for people not used to regexps (I do, a\nlittle). Perhaps this could use a small comment with an example of\nname or a reference describing this format?\n\n+# Set umask so test directories and files are created with default permissions\n+umask(0077);\nIncorrect copy-paste coming from elsewhere like the TAP tests of\npg_basebackup with group permissions? Doesn't \nPostgreSQL::Test::Utils::tempdir give already enough protection in\nterms of umask() and permissions?\n\n+ if (config.save_fpi_path != NULL)\n+ {\n+ /* We fsync our output directory only; since these files are not part\n+ * of the production database we do not require the performance hit\n+ * that fsyncing every FPI would entail, so are doing this as a\n+ * compromise. 
*/\n+\n+ fsync_fname(config.save_fpi_path, true);\n+ }\nWhy is it necessary to flush anything at all, then?\n\n+my $relation = $node->safe_psql('postgres',\n+ q{SELECT format('%s/%s/%s', CASE WHEN reltablespace = 0 THEN\ndattablespace ELSE reltablespace END, pg_database.oid,\npg_relation_filenode(pg_class.oid)) FROM pg_class, pg_database WHERE\nrelname = 'test_table' AND datname = current_database()}\nCould you rewrite that to be on multiple lines?\n\n+diag \"using walfile: $walfile\";\nYou should avoid the use of \"diag\", as this would cause extra output\nnoise with a simple make check.\n\n+$node->safe_psql('postgres', \"SELECT pg_drop_replication_slot('regress_pg_waldump_slot')\")\nThat's not really necessary, the nodes are wiped out at the end of the\ntest. \n\n+$node->safe_psql('postgres', <<EOF);\n+SELECT 'init' FROM pg_create_physical_replication_slot('regress_pg_waldump_slot', true, false);\n+CREATE TABLE test_table AS SELECT generate_series(1,100) a;\n+CHECKPOINT; -- required to force FPI for next writes\n+UPDATE test_table SET a = a + 1;\nUsing an EOF to execute a multi-line query would be a first. Couldn't\nyou use the same thing as anywhere else? 009_twophase.pl just to\nmention one. (Mentioned by Bharath upthread, where he asked for an\nextra opinion so here it is.)\n--\nMichael",
"msg_date": "Thu, 15 Dec 2022 15:36:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 12:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Dec 14, 2022 at 04:44:34PM -0600, David Christensen wrote:\n> > I can get one sent in tomorrow.\n\nThis v10 should incorporate your feedback as well as Bharath's.\n\n> -XLogRecordHasFPW(XLogReaderState *record)\n> +XLogRecordHasFPI(XLogReaderState *record)\n> This still refers to a FPW, so let's leave that out as well as any\n> renamings of this kind..\n\nReverted those changes.\n\n> + if (config.save_fpi_path != NULL)\n> + {\n> + /* Create the dir if it doesn't exist */\n> + if (pg_mkdir_p(config.save_fpi_path, pg_dir_create_mode) < 0)\n> + {\n> + pg_log_error(\"could not create output directory \\\"%s\\\": %m\",\n> + config.save_fpi_path);\n> + goto bad_argument;\n> + }\n> + }\n> It seems to me that you could allow things to work even if the\n> directory exists and is empty. See for example\n> verify_dir_is_empty_or_create() in pg_basebackup.c.\n\nThe `pg_mkdir_p()` supports an existing directory (and I don't think\nwe want to require it to be empty first), so this only errors when it\ncan't create a directory for some reason.\n\n> +my $file_re =\n> + qr/^([0-9A-F]{8})-([0-9A-F]{8})[.][0-9]+[.][0-9]+[.][0-9]+[.][0-9]+(?:_vm|_init|_fsm|_main)?$/;\n> This is artistic to parse for people not used to regexps (I do, a\n> little). Perhaps this could use a small comment with an example of\n> name or a reference describing this format?\n\nAdded a description of what this is looking for.\n\n> +# Set umask so test directories and files are created with default permissions\n> +umask(0077);\n> Incorrect copy-paste coming from elsewhere like the TAP tests of\n> pg_basebackup with group permissions? Doesn't\n> PostgreSQL::Test::Utils::tempdir give already enough protection in\n> terms of umask() and permissions?\n\nI'd expect that's where that came from. 
Removed.\n\n> + if (config.save_fpi_path != NULL)\n> + {\n> + /* We fsync our output directory only; since these files are not part\n> + * of the production database we do not require the performance hit\n> + * that fsyncing every FPI would entail, so are doing this as a\n> + * compromise. */\n> +\n> + fsync_fname(config.save_fpi_path, true);\n> + }\n> Why is it necessary to flush anything at all, then?\n\nI personally don't think it is, but added it per Bharath's request.\nRemoved in this revision.\n\n> +my $relation = $node->safe_psql('postgres',\n> + q{SELECT format('%s/%s/%s', CASE WHEN reltablespace = 0 THEN\n> dattablespace ELSE reltablespace END, pg_database.oid,\n> pg_relation_filenode(pg_class.oid)) FROM pg_class, pg_database WHERE\n> relname = 'test_table' AND datname = current_database()}\n> Could you rewrite that to be on multiple lines?\n\nSure, reformated.\n\n> +diag \"using walfile: $walfile\";\n> You should avoid the use of \"diag\", as this would cause extra output\n> noise with a simple make check.\n\nHad been using it for debugging and didn't realize it'd cause issues.\nRemoved both instances.\n\n> +$node->safe_psql('postgres', \"SELECT pg_drop_replication_slot('regress_pg_waldump_slot')\")\n> That's not really necessary, the nodes are wiped out at the end of the\n> test.\n\nRemoved.\n\n> +$node->safe_psql('postgres', <<EOF);\n> +SELECT 'init' FROM pg_create_physical_replication_slot('regress_pg_waldump_slot', true, false);\n> +CREATE TABLE test_table AS SELECT generate_series(1,100) a;\n> +CHECKPOINT; -- required to force FPI for next writes\n> +UPDATE test_table SET a = a + 1;\n> Using an EOF to execute a multi-line query would be a first. Couldn't\n> you use the same thing as anywhere else? 009_twophase.pl just to\n> mention one. (Mentioned by Bharath upthread, where he asked for an\n> extra opinion so here it is.)\n\nFair enough, while idiomatic perl to me, not a hill to die on;\nconverted to a standard multiline string.\n\nBest,\n\nDavid",
"msg_date": "Thu, 15 Dec 2022 17:17:46 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 05:17:46PM -0600, David Christensen wrote:\n> On Thu, Dec 15, 2022 at 12:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n> This v10 should incorporate your feedback as well as Bharath's.\n\nThanks for the new version. I have minor comments.\n\n>> It seems to me that you could allow things to work even if the\n>> directory exists and is empty. See for example\n>> verify_dir_is_empty_or_create() in pg_basebackup.c.\n> \n> The `pg_mkdir_p()` supports an existing directory (and I don't think\n> we want to require it to be empty first), so this only errors when it\n> can't create a directory for some reason.\n\nSure, but things can also be made so as we don't fail if the directory\nexists and is empty? This would be more consistent with the base\ndirectories created by pg_basebackup and initdb.\n\n>> +$node->safe_psql('postgres', <<EOF);\n>> +SELECT 'init' FROM pg_create_physical_replication_slot('regress_pg_waldump_slot', true, false);\n>> +CREATE TABLE test_table AS SELECT generate_series(1,100) a;\n>> +CHECKPOINT; -- required to force FPI for next writes\n>> +UPDATE test_table SET a = a + 1;\n>> Using an EOF to execute a multi-line query would be a first. Couldn't\n>> you use the same thing as anywhere else? 009_twophase.pl just to\n>> mention one. 
(Mentioned by Bharath upthread, where he asked for an\n>> extra opinion so here it is.)\n> \n> Fair enough, while idiomatic perl to me, not a hill to die on;\n> converted to a standard multiline string.\n\nBy the way, knowing that we have an option called --fullpage, could be\nbe better to use --save-fullpage for the option name?\n\n+ OPF = fopen(filename, PG_BINARY_W);\n+ if (!OPF)\n+ pg_fatal(\"couldn't open file for output: %s\", filename);\n[..]\n+ if (fwrite(page, BLCKSZ, 1, OPF) != 1)\n+ pg_fatal(\"couldn't write out complete full page image to file: %s\", filename);\nThese should more more generic, as of \"could not open file \\\"%s\\\"\" and\n\"could not write file \\\"%s\\\"\" as the file name provides all the\ninformation about what this writes.\n--\nMichael",
"msg_date": "Mon, 19 Dec 2022 15:22:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Fri, Dec 16, 2022 at 4:47 AM David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> On Thu, Dec 15, 2022 at 12:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Dec 14, 2022 at 04:44:34PM -0600, David Christensen wrote:\n> > > I can get one sent in tomorrow.\n>\n> This v10 should incorporate your feedback as well as Bharath's.\n\nThanks for the patch. Here're some minor comments:\n\n1. +my $node = PostgreSQL::Test::Cluster->new('primary');\nCan the name be other than 'primary' because we don't create a standby\nfor this test? Something like - 'node_a' or 'node_extract_fpi' or some\nother.\n\n2. +$node->init(extra => ['-k'], allows_streaming => 1);\nWhen enabled with allows_streaming, there are a bunch of things that\nhappen to the node while initializing, I don't think we need all of\nthem for this.\n\n3. +$node->init(extra => ['-k'], allows_streaming => 1);\nCan we use --data-checksums instead of -k for more readability?\nPerhaps, a comment on why we need that option helps greatly.\n\n4.\n+ page = (Page) buf.data;\n+\n+ if (!XLogRecHasBlockRef(record, block_id))\n+ continue;\n+\n+ if (!XLogRecHasBlockImage(record, block_id))\n+ continue;\n+\n+ if (!RestoreBlockImage(record, block_id, page))\n+ continue;\nCan you shift page = (Page) buf.data; just before the last if\ncondition RestoreBlockImage() so that it doesn't get executed for the\nother two continue statements?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Dec 2022 17:17:01 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 5:47 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Dec 16, 2022 at 4:47 AM David Christensen\n> <david.christensen@crunchydata.com> wrote:\n> >\n> > On Thu, Dec 15, 2022 at 12:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Wed, Dec 14, 2022 at 04:44:34PM -0600, David Christensen wrote:\n> > > > I can get one sent in tomorrow.\n> >\n> > This v10 should incorporate your feedback as well as Bharath's.\n>\n> Thanks for the patch. Here're some minor comments:\n>\n> 1. +my $node = PostgreSQL::Test::Cluster->new('primary');\n> Can the name be other than 'primary' because we don't create a standby\n> for this test? Something like - 'node_a' or 'node_extract_fpi' or some\n> other.\n\nSure, no issues.\n\n> 2. +$node->init(extra => ['-k'], allows_streaming => 1);\n> When enabled with allows_streaming, there are a bunch of things that\n> happen to the node while initializing, I don't think we need all of\n> them for this.\n\nI think the \"allows_streaming\" was required to ensure the WAL files\nwere preserved properly, and was the approach we ended up taking\nrather than trying to fail the archive_command or other approaches I'd\ntaken earlier. I'd rather keep this if we can, unless you can propose\na different approach that would continue to work in the same way.\n\n> 3. 
+$node->init(extra => ['-k'], allows_streaming => 1);\n> Can we use --data-checksums instead of -k for more readability?\n> Perhaps, a comment on why we need that option helps greatly.\n\nYeah, can spell out; don't recall exactly why we needed it offhand,\nbut will confirm or remove if insignificant.\n\n> 4.\n> + page = (Page) buf.data;\n> +\n> + if (!XLogRecHasBlockRef(record, block_id))\n> + continue;\n> +\n> + if (!XLogRecHasBlockImage(record, block_id))\n> + continue;\n> +\n> + if (!RestoreBlockImage(record, block_id, page))\n> + continue;\n> Can you shift page = (Page) buf.data; just before the last if\n> condition RestoreBlockImage() so that it doesn't get executed for the\n> other two continue statements?\n\nSure; since it was just setting a pointer value I didn't consider it\nto be a hotspot for optimization.\n\nBest,\n\nDavid\n\n\n",
"msg_date": "Fri, 23 Dec 2022 12:57:30 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 12:23 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Dec 15, 2022 at 05:17:46PM -0600, David Christensen wrote:\n> > On Thu, Dec 15, 2022 at 12:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > This v10 should incorporate your feedback as well as Bharath's.\n>\n> Thanks for the new version. I have minor comments.\n>\n> >> It seems to me that you could allow things to work even if the\n> >> directory exists and is empty. See for example\n> >> verify_dir_is_empty_or_create() in pg_basebackup.c.\n> >\n> > The `pg_mkdir_p()` supports an existing directory (and I don't think\n> > we want to require it to be empty first), so this only errors when it\n> > can't create a directory for some reason.\n>\n> Sure, but things can also be made so as we don't fail if the directory\n> exists and is empty? This would be more consistent with the base\n> directories created by pg_basebackup and initdb.\n\nI guess I'm feeling a little dense here; how is this failing if there\nis an existing empty directory?\n\n> >> +$node->safe_psql('postgres', <<EOF);\n> >> +SELECT 'init' FROM pg_create_physical_replication_slot('regress_pg_waldump_slot', true, false);\n> >> +CREATE TABLE test_table AS SELECT generate_series(1,100) a;\n> >> +CHECKPOINT; -- required to force FPI for next writes\n> >> +UPDATE test_table SET a = a + 1;\n> >> Using an EOF to execute a multi-line query would be a first. Couldn't\n> >> you use the same thing as anywhere else? 009_twophase.pl just to\n> >> mention one. (Mentioned by Bharath upthread, where he asked for an\n> >> extra opinion so here it is.)\n> >\n> > Fair enough, while idiomatic perl to me, not a hill to die on;\n> > converted to a standard multiline string.\n>\n> By the way, knowing that we have an option called --fullpage, could be\n> be better to use --save-fullpage for the option name?\n\nWorks for me. 
I think I'd just wanted to avoid reformatting the\nentire usage message which is why I'd gone with the shorter version.\n\n> + OPF = fopen(filename, PG_BINARY_W);\n> + if (!OPF)\n> + pg_fatal(\"couldn't open file for output: %s\", filename);\n> [..]\n> + if (fwrite(page, BLCKSZ, 1, OPF) != 1)\n> + pg_fatal(\"couldn't write out complete full page image to file: %s\", filename);\n> These should more more generic, as of \"could not open file \\\"%s\\\"\" and\n> \"could not write file \\\"%s\\\"\" as the file name provides all the\n> information about what this writes.\n\nSure, will update.\n\nBest,\n\nDavid\n\n\n",
"msg_date": "Fri, 23 Dec 2022 12:58:45 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 12:57 PM David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> On Wed, Dec 21, 2022 at 5:47 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n\n[snip]\n\n> > 2. +$node->init(extra => ['-k'], allows_streaming => 1);\n> > When enabled with allows_streaming, there are a bunch of things that\n> > happen to the node while initializing, I don't think we need all of\n> > them for this.\n>\n> I think the \"allows_streaming\" was required to ensure the WAL files\n> were preserved properly, and was the approach we ended up taking\n> rather than trying to fail the archive_command or other approaches I'd\n> taken earlier. I'd rather keep this if we can, unless you can propose\n> a different approach that would continue to work in the same way.\n\nConfirmed that we needed this in order to create the replication slot,\nso this /is/ required for the test to work.\n\nEnclosing v11 with yours and Michael's latest feedback.\n\nBest,\n\nDavid",
"msg_date": "Fri, 23 Dec 2022 13:28:27 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Sat, Dec 24, 2022 at 12:58 AM David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> On Fri, Dec 23, 2022 at 12:57 PM David Christensen\n> <david.christensen@crunchydata.com> wrote:\n> >\n> > On Wed, Dec 21, 2022 at 5:47 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > > 2. +$node->init(extra => ['-k'], allows_streaming => 1);\n> > > When enabled with allows_streaming, there are a bunch of things that\n> > > happen to the node while initializing, I don't think we need all of\n> > > them for this.\n> >\n> > I think the \"allows_streaming\" was required to ensure the WAL files\n> > were preserved properly, and was the approach we ended up taking\n> > rather than trying to fail the archive_command or other approaches I'd\n> > taken earlier. I'd rather keep this if we can, unless you can propose\n> > a different approach that would continue to work in the same way.\n>\n> Confirmed that we needed this in order to create the replication slot,\n> so this /is/ required for the test to work.\n\nThe added test needs wal_level to be replica, but the TAP tests set it\nto minimal if allows_streaming isn't passed. However, if passed\nallows_streaming, it sets a bunch of other parameters which are not\nrequired for this test (see note->init function in cluster.pm), hence\nwe could just set the required parameters wal_level = replica and\nmax_wal_senders for the replication slot to be created.\n\n> Enclosing v11 with yours and Michael's latest feedback.\n\nThanks for the patch. I've made the above change as well as renamed\nthe test file name to be save_fpi.pl, everything else remains the same\nas v11. Here's the v12 patch which LGTM. I'll mark it as RfC -\nhttps://commitfest.postgresql.org/41/3628/.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 24 Dec 2022 18:23:29 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Sat, Dec 24, 2022 at 06:23:29PM +0530, Bharath Rupireddy wrote:\n> Thanks for the patch. I've made the above change as well as renamed\n> the test file name to be save_fpi.pl, everything else remains the same\n> as v11. Here's the v12 patch which LGTM. I'll mark it as RfC -\n> https://commitfest.postgresql.org/41/3628/.\n\nI have done a review of that, and here are my notes:\n- The variable names were a bit inconsistent, so I have switched most\nof the new code to use \"fullpage\".\n- The code was not able to handle the case of a target directory\nexisting but empty, so I have added a wrapper on pg_check_dir().\n- XLogRecordHasFPW() could be checked directly in the function saving\nthe blocks. Still, there is no need for it as we apply the same\nchecks again in the inner loop of the routine.\n- The new test has been renamed.\n- RestoreBlockImage() would report a failure and the code would just\nskip it and continue its work. This could point out to a compression\nfailure for example, so like any code paths calling this routine I\nthink that we'd better do a pg_fatal() and fail hard.\n- I did not understand why there is a reason to make this option\nconditional on the record prints or even the stats, so I have moved\nthe FPW save routine into a separate code path. The other two could\nbe silenced (or not) using --quiet for example, for the same result as\nv12 without impacting the usability of this feature.\n- Few tweaks to the docs, the --help output, the comments and the\ntests.\n- Indentation applied.\n\nBeing able to filter the blocks saved using start/end LSNs or just\n--relation is really cool, especially as the file names use the same\norder as what's needed for this option.\n\nComments?\n--\nMichael",
"msg_date": "Mon, 26 Dec 2022 16:28:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Mon, Dec 26, 2022 at 12:59 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> I have done a review of that, and here are my notes:\n> - The variable names were a bit inconsistent, so I have switched most\n> of the new code to use \"fullpage\".\n>\n> - The new test has been renamed.\n>\n> - RestoreBlockImage() would report a failure and the code would just\n> skip it and continue its work. This could point out to a compression\n> failure for example, so like any code paths calling this routine I\n> think that we'd better do a pg_fatal() and fail hard.\n>\n> - XLogRecordHasFPW() could be checked directly in the function saving\n> the blocks. Still, there is no need for it as we apply the same\n> checks again in the inner loop of the routine.\n>\n> - Few tweaks to the docs, the --help output, the comments and the\n> tests.\n> - Indentation applied.\n>\n> - I did not understand why there is a reason to make this option\n> conditional on the record prints or even the stats, so I have moved\n> the FPW save routine into a separate code path. The other two could\n> be silenced (or not) using --quiet for example, for the same result as\n> v12 without impacting the usability of this feature.\n\nLooks good.\n\n> - The code was not able to handle the case of a target directory\n> existing but empty, so I have added a wrapper on pg_check_dir().\n\nLooks okay and with the following, we impose the user-provided target\ndirectory must be empty.\n\n+ case 4:\n+ /* Exists and not empty */\n+ pg_fatal(\"directory \\\"%s\\\" exists but is not empty\", path);\n\n> Being able to filter the blocks saved using start/end LSNs or just\n> --relation is really cool, especially as the file names use the same\n> order as what's needed for this option.\n>\n> Comments?\n\n+1. 
I think this feature will also be useful in pg_walinspect.\nHowever, I'm a bit concerned that it can flood the running database\ndisk - if someone generates a lot of FPI files.\n\nOverall, the v13 patch LGTM.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 26 Dec 2022 16:18:18 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "> On Dec 26, 2022, at 1:29 AM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Sat, Dec 24, 2022 at 06:23:29PM +0530, Bharath Rupireddy wrote:\n>> Thanks for the patch. I've made the above change as well as renamed\n>> the test file name to be save_fpi.pl, everything else remains the same\n>> as v11. Here's the v12 patch which LGTM. I'll mark it as RfC -\n>> https://commitfest.postgresql.org/41/3628/.\n> \n> I have done a review of that, and here are my notes:\n> - The variable names were a bit inconsistent, so I have switched most\n> of the new code to use \"fullpage\".\n> - The code was not able to handle the case of a target directory\n> existing but empty, so I have added a wrapper on pg_check_dir().\n> - XLogRecordHasFPW() could be checked directly in the function saving\n> the blocks. Still, there is no need for it as we apply the same\n> checks again in the inner loop of the routine.\n> - The new test has been renamed.\n> - RestoreBlockImage() would report a failure and the code would just\n> skip it and continue its work. This could point out to a compression\n> failure for example, so like any code paths calling this routine I\n> think that we'd better do a pg_fatal() and fail hard.\n> - I did not understand why there is a reason to make this option\n> conditional on the record prints or even the stats, so I have moved\n> the FPW save routine into a separate code path. The other two could\n> be silenced (or not) using --quiet for example, for the same result as\n> v12 without impacting the usability of this feature.\n> - Few tweaks to the docs, the --help output, the comments and the\n> tests.\n> - Indentation applied.\n> \n> Being able to filter the blocks saved using start/end LSNs or just\n> --relation is really cool, especially as the file names use the same\n> order as what's needed for this option.\n\nSounds good, definitely along the ideas of what I’d originally envisioned.\n\nThanks,\n\nDavid\n\n\n\n",
"msg_date": "Mon, 26 Dec 2022 14:00:30 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Mon, Dec 26, 2022 at 04:28:52PM +0900, Michael Paquier wrote:\n> Comments?\n\n> +\t\tfile = fopen(filename, PG_BINARY_W);\n> +\t\tif (!file)\n> +\t\t\tpg_fatal(\"could not open file \\\"%s\\\": %m\", filename);\n> +\n> +\t\tif (fwrite(page, BLCKSZ, 1, file) != 1)\n> +\t\t\tpg_fatal(\"could not write file \\\"%s\\\": %m\", filename);\n> +\n> +\t\tfclose(file);\n\nfclose() should be tested, too:\n\n> +\t\tif (fwrite(page, BLCKSZ, 1, file) != 1 || fclose(file) != 0)\n> +\t\t\tpg_fatal(\"could not write file \\\"%s\\\": %m\", filename);\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 26 Dec 2022 14:39:03 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Mon, Dec 26, 2022 at 02:39:03PM -0600, Justin Pryzby wrote:\n> fclose() should be tested, too:\n\nSure. Done that too, and applied the change after a last lookup.\n--\nMichael",
"msg_date": "Tue, 27 Dec 2022 08:32:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Mon, Dec 26, 2022 at 04:18:18PM +0530, Bharath Rupireddy wrote:\n> +1. I think this feature will also be useful in pg_walinspect.\n> However, I'm a bit concerned that it can flood the running database\n> disk - if someone generates a lot of FPI files.\n\npg_read_file() and pg_waldump can be fed absolute paths outside the\ndata folder. So, well, just don't do that then :)\n--\nMichael",
"msg_date": "Tue, 27 Dec 2022 12:31:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
},
{
"msg_contents": "On Mon, Dec 26, 2022 at 4:18 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Dec 26, 2022 at 12:59 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> +1. I think this feature will also be useful in pg_walinspect.\n\nJust for the record - here's the pg_walinspect function to extract\nFPIs from WAL records -\nhttps://www.postgresql.org/message-id/CALj2ACVCcvzd7WiWvD%3D6_7NBvVB_r6G0EGSxL4F8vosAi6Se4g%40mail.gmail.com.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 27 Dec 2022 17:22:29 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Teach pg_waldump to extract FPIs from the WAL"
}
] |
[
{
"msg_contents": "select arraymultirange(arrayrange(array[1,2], array[2,1]));\n\nERROR: 42883: function arrayrange(integer[], integer[]) does not exist\n> LINE 1: select arraymultirange(arrayrange(array[1,2], array[2,1]));\n> ^\n> HINT: No function matches the given name and argument types. You might\n> need to add explicit type casts.\n> LOCATION: ParseFuncOrColumn, parse_func.c:629\n\n\ntested on postgresql 14.\ngit.postgresql.org Git - postgresql.git/blob -\nsrc/test/regress/sql/multirangetypes.sql\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/multirangetypes.sql;h=9be26f10d381f4af61a48f55606a80e49706b959;hb=6df7a9698bb036610c1e8c6d375e1be38cb26d5f>\nline:590\nto 600.\ngit.postgresql.org Git - postgresql.git/blob -\nsrc/test/regress/expected/multirangetypes.out\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/expected/multirangetypes.out;h=ac2eb84c3af90db38a14609749a5607653d0b26e;hb=7ae1619bc5b1794938c7387a766b8cae34e38d8a#l2849>\nline\n3200 to line 3210\n\nI search log: git.postgresql.org Git - postgresql.git/log\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=log;h=6df7a9698bb036610c1e8c6d375e1be38cb26d5f>\nthere\nis no mention of arrayrange. So this feature is available, but now seems\nnot working?\ndb fiddle: Postgres 14 | db<>fiddle (dbfiddle.uk)\n<https://dbfiddle.uk/?rdbms=postgres_14&fiddle=b3cb2a726ff5cedbf8aa05e11c5287e4>\n\nselect arraymultirange(arrayrange(array[1,2], array[2,1]));ERROR: 42883: function arrayrange(integer[], integer[]) does not existLINE 1: select arraymultirange(arrayrange(array[1,2], array[2,1])); ^HINT: No function matches the given name and argument types. 
You might need to add explicit type casts.LOCATION: ParseFuncOrColumn, parse_func.c:629tested on postgresql 14.git.postgresql.org Git - postgresql.git/blob - src/test/regress/sql/multirangetypes.sql line:590 to 600.git.postgresql.org Git - postgresql.git/blob - src/test/regress/expected/multirangetypes.out line 3200 to line 3210I search log: git.postgresql.org Git - postgresql.git/log there is no mention of arrayrange. So this feature is available, but now seems not working?db fiddle: Postgres 14 | db<>fiddle (dbfiddle.uk)",
"msg_date": "Sat, 23 Apr 2022 11:56:55 +0530",
"msg_from": "Jian He <hejian.mark@gmail.com>",
"msg_from_op": true,
"msg_subject": "multirange of arrays not working on postgresql 14"
},
{
"msg_contents": "On Friday, April 22, 2022, Jian He <hejian.mark@gmail.com> wrote:\n\n> select arraymultirange(arrayrange(array[1,2], array[2,1]));\n>\n> ERROR: 42883: function arrayrange(integer[], integer[]) does not exist\n>> LINE 1: select arraymultirange(arrayrange(array[1,2], array[2,1]));\n>> ^\n>> HINT: No function matches the given name and argument types. You might\n>> need to add explicit type casts.\n>> LOCATION: ParseFuncOrColumn, parse_func.c:629\n>\n>\n> tested on postgresql 14.\n> git.postgresql.org Git - postgresql.git/blob - src/test/regress/sql/\n> multirangetypes.sql\n> <https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/multirangetypes.sql;h=9be26f10d381f4af61a48f55606a80e49706b959;hb=6df7a9698bb036610c1e8c6d375e1be38cb26d5f> line:590\n> to 600.\n> git.postgresql.org Git - postgresql.git/blob - src/test/regress/expected/\n> multirangetypes.out\n> <https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/expected/multirangetypes.out;h=ac2eb84c3af90db38a14609749a5607653d0b26e;hb=7ae1619bc5b1794938c7387a766b8cae34e38d8a#l2849> line\n> 3200 to line 3210\n>\n> I search log: git.postgresql.org Git - postgresql.git/log\n> <https://git.postgresql.org/gitweb/?p=postgresql.git;a=log;h=6df7a9698bb036610c1e8c6d375e1be38cb26d5f> there\n> is no mention of arrayrange. So this feature is available, but now seems\n> not working?\n> db fiddle: Postgres 14 | db<>fiddle (dbfiddle.uk)\n> <https://dbfiddle.uk/?rdbms=postgres_14&fiddle=b3cb2a726ff5cedbf8aa05e11c5287e4>\n>\n>\nThe regression tests you link to (check out rangetypes.sql) define those\ntypes using the user extensible type system PostgreSQL offers. 
As\nevidenced by their absence from the documentation, those types are not part\nof the core system.\n\nDavid J.\n\nOn Friday, April 22, 2022, Jian He <hejian.mark@gmail.com> wrote:select arraymultirange(arrayrange(array[1,2], array[2,1]));ERROR: 42883: function arrayrange(integer[], integer[]) does not existLINE 1: select arraymultirange(arrayrange(array[1,2], array[2,1])); ^HINT: No function matches the given name and argument types. You might need to add explicit type casts.LOCATION: ParseFuncOrColumn, parse_func.c:629tested on postgresql 14.git.postgresql.org Git - postgresql.git/blob - src/test/regress/sql/multirangetypes.sql line:590 to 600.git.postgresql.org Git - postgresql.git/blob - src/test/regress/expected/multirangetypes.out line 3200 to line 3210I search log: git.postgresql.org Git - postgresql.git/log there is no mention of arrayrange. So this feature is available, but now seems not working?db fiddle: Postgres 14 | db<>fiddle (dbfiddle.uk)The regression tests you link to (check out rangetypes.sql) define those types using the user extensible type system PostgreSQL offers. As evidenced by their absence from the documentation, those types are not part of the core system.David J.",
"msg_date": "Sat, 23 Apr 2022 00:03:20 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: multirange of arrays not working on postgresql 14"
}
] |
[
{
"msg_contents": "select json_objectagg(\n k:v absent on null with unique keys returning text )\nfrom (\n values(1,1),(0, null),(3, null),(2,2),(4,null)\n) foo(k, v);\n\nreturn\n\n json_objectagg\n----------------------\n { \"1\" : 1, \"2\" : 2 }\n--------------------\n\nselect json_objectagg(k:v absent on null with unique keys)\nfrom (\n values(1,1),(0, null),(3, null),(2,2),(4,null)\n) foo(k, v);\n\nreturn\n\njson_objectagg ---------------------- { \"1\" : 1, \"2\" : 2 }\n\n*But*\n\nselect json_objectagg(\n k:v absent on null with unique keys returning jsonb )\nfrom (\n values(1,1),(0, null),(3, null),(2,2),(4,null)\n) foo(k, v);\n\nreturn\njson_objectagg ----------------------------- {\"0\": null, \"1\": 1, \"2\": 2}\n\nthe last query \"returning jsonb\" should be { \"1\" : 1, \"2\" : 2 } ?\n\nversion:\n\n> PostgreSQL 15devel (Ubuntu\n> 15~~devel~20220407.0430-1~713.git79b716c.pgdg20.04+1) on\n> x86_64-pc-linux-gnu,\n> compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit\n\nselect json_objectagg( k:v absent on null with unique keys returning text )from ( values(1,1),(0, null),(3, null),(2,2),(4,null)) foo(k, v);return json_objectagg---------------------- { \"1\" : 1, \"2\" : 2 }--------------------select json_objectagg(k:v absent on null with unique keys)from ( values(1,1),(0, null),(3, null),(2,2),(4,null)) foo(k, v);return json_objectagg\n----------------------\n { \"1\" : 1, \"2\" : 2 }Butselect json_objectagg( k:v absent on null with unique keys returning jsonb )from ( values(1,1),(0, null),(3, null),(2,2),(4,null)) foo(k, v);return json_objectagg\n-----------------------------\n {\"0\": null, \"1\": 1, \"2\": 2}the last query \"returning jsonb\" should be { \"1\" : 1, \"2\" : 2 } ?version:PostgreSQL 15devel (Ubuntu 15~~devel~20220407.0430-1~713.git79b716c.pgdg20.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit",
"msg_date": "Mon, 25 Apr 2022 10:41:42 +0530",
"msg_from": "alias <postgres.rocks@gmail.com>",
"msg_from_op": true,
"msg_subject": "json_object returning jsonb reuslt different from returning json,\n returning text"
},
{
"msg_contents": "seems it's a bug around value 0.\n\nSELECT JSON_OBJECTAGG(k: v ABSENT ON NULL WITH UNIQUE KEYS RETURNING jsonb)\nFROM (VALUES (1, 1), (10, NULL),(4, null), (5, null),(6, null),(2, 2))\nfoo(k, v);\nreturn:\n{\"1\": 1, \"2\": 2}\n\nSELECT JSON_OBJECTAGG(k: v ABSENT ON NULL WITH UNIQUE KEYS RETURNING jsonb)\nFROM (VALUES (1, 1), (0, NULL),(4, null), (5, null),(6, null),(2, 2))\nfoo(k, v);\n\nreturn\n {\"0\": null, \"1\": 1, \"2\": 2}\n\n\n\nOn Mon, Apr 25, 2022 at 10:41 AM alias <postgres.rocks@gmail.com> wrote:\n\n> select json_objectagg(\n> k:v absent on null with unique keys returning text )\n> from (\n> values(1,1),(0, null),(3, null),(2,2),(4,null)\n> ) foo(k, v);\n>\n> return\n>\n> json_objectagg\n> ----------------------\n> { \"1\" : 1, \"2\" : 2 }\n> --------------------\n>\n> select json_objectagg(k:v absent on null with unique keys)\n> from (\n> values(1,1),(0, null),(3, null),(2,2),(4,null)\n> ) foo(k, v);\n>\n> return\n>\n> json_objectagg ---------------------- { \"1\" : 1, \"2\" : 2 }\n>\n> *But*\n>\n> select json_objectagg(\n> k:v absent on null with unique keys returning jsonb )\n> from (\n> values(1,1),(0, null),(3, null),(2,2),(4,null)\n> ) foo(k, v);\n>\n> return\n> json_objectagg ----------------------------- {\"0\": null, \"1\": 1, \"2\": 2}\n>\n> the last query \"returning jsonb\" should be { \"1\" : 1, \"2\" : 2 } ?\n>\n> version:\n>\n>> PostgreSQL 15devel (Ubuntu\n>> 15~~devel~20220407.0430-1~713.git79b716c.pgdg20.04+1) on\n>> x86_64-pc-linux-gnu,\n>> compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit\n>\n>",
"msg_date": "Mon, 25 Apr 2022 10:49:23 +0530",
"msg_from": "alias <postgres.rocks@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: json_object returning jsonb reuslt different from returning json,\n returning text"
},
{
"msg_contents": "On 2022-04-25 Mo 01:19, alias wrote:\n>\n> seems it's a bug around value 0.\n>\n> SELECT JSON_OBJECTAGG(k: v ABSENT ON NULL WITH UNIQUE KEYS RETURNING\n> jsonb)\n> FROM (VALUES (1, 1), (10, NULL),(4, null), (5, null),(6, null),(2, 2))\n> foo(k, v);\n> return:\n> {\"1\": 1, \"2\": 2}\n>\n> SELECT JSON_OBJECTAGG(k: v ABSENT ON NULL WITH UNIQUE KEYS RETURNING\n> jsonb)\n> FROM (VALUES (1, 1), (0, NULL),(4, null), (5, null),(6, null),(2, 2))\n> foo(k, v);\n>\n> return\n> {\"0\": null, \"1\": 1, \"2\": 2}\n\n\nThanks for the report.\n\nI don't think there's anything special about '0' except that it sorts\nfirst. There appears to be a bug in the uniquefying code where the first\nitem(s) have nulls. The attached appears to fix it. Please test and see\nif you can break it.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 25 Apr 2022 10:14:41 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: json_object returning jsonb reuslt different from returning json,\n returning text"
},
{
"msg_contents": "\nOn 2022-04-25 Mo 10:14, Andrew Dunstan wrote:\n> On 2022-04-25 Mo 01:19, alias wrote:\n>> seems it's a bug around value 0.\n>>\n>> SELECT JSON_OBJECTAGG(k: v ABSENT ON NULL WITH UNIQUE KEYS RETURNING\n>> jsonb)\n>> FROM (VALUES (1, 1), (10, NULL),(4, null), (5, null),(6, null),(2, 2))\n>> foo(k, v);\n>> return:\n>> {\"1\": 1, \"2\": 2}\n>>\n>> SELECT JSON_OBJECTAGG(k: v ABSENT ON NULL WITH UNIQUE KEYS RETURNING\n>> jsonb)\n>> FROM (VALUES (1, 1), (0, NULL),(4, null), (5, null),(6, null),(2, 2))\n>> foo(k, v);\n>>\n>> return\n>> {\"0\": null, \"1\": 1, \"2\": 2}\n>\n> Thanks for the report.\n>\n> I don't think there's anything special about '0' except that it sorts\n> first. There appears to be a bug in the uniquefying code where the first\n> item(s) have nulls. The attached appears to fix it. Please test and see\n> if you can break it.\n\n\nFix pushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 28 Apr 2022 15:35:24 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: json_object returning jsonb reuslt different from returning json,\n returning text"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have found maybe buggy behaviour (of psql parser?) when using psql \\copy\nwith psql variable used for filename.\n\nSQL copy is working fine:\n\ncontrib_regression=# \\set afile '/writable_dir/out.csv'\ncontrib_regression=# select :'afile' as filename;\n filename\n-----------------------\n /writable_dir/out.csv\n(1 row)\n\ncontrib_regression=# copy (select 1) to :'afile';\nCOPY 1\n\nbut psql \\copy is returning error:\n\ncontrib_regression=# \\copy (select 1) to :'afile';\nERROR: syntax error at or near \"'afile'\"\nLINE 1: COPY ( select 1 ) TO STDOUT 'afile';\n ^\nwhen used without quotes it works, but it will create file in actual\ndirectory and name ':afile'\n\ncontrib_regression=# \\copy (select 1) to :afile;\nCOPY 1\n\nvagrant@nfiesta_dev_pg:~/npg$ cat :afile\n1\n\nworkaround (suggested by Pavel Stěhule) is here:\n\ncontrib_regression=# \\set afile '/writable_dir/out2.csv'\ncontrib_regression=# \\set cmd '\\\\copy (SELECT 1) to ':afile\ncontrib_regression=# :cmd\nCOPY 1\n\nMy PG versin:\n\ncontrib_regression=# select version();\n version\n-------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 12.10 (Debian 12.10-1.pgdg110+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\n(1 row)\n\nBest regards, Jiří Fejfar.",
"msg_date": "Mon, 25 Apr 2022 10:24:49 +0200",
"msg_from": "=?UTF-8?B?SmnFmcOtIEZlamZhcg==?= <jurafejfar@gmail.com>",
"msg_from_op": true,
"msg_subject": "variable filename for psql \\copy"
},
{
"msg_contents": "\tJiří Fejfar wrote:\n\n> I have found maybe buggy behaviour (of psql parser?) when using psql \\copy\n> with psql variable used for filename.\n\nWhile it's annoying that it doesn't work as you tried it, this behavior is \ndocumented, so in that sense it's not a bug.\nThe doc also suggests a workaround in a tip section:\n\nFrom psql manpage:\n\n The syntax of this command is similar to that of the SQL COPY\n command. All options other than the data source/destination are as\n specified for COPY. Because of this, special parsing rules apply to\n the \\copy meta-command. Unlike most other meta-commands, the entire\n remainder of the line is always taken to be the arguments of \\copy,\n and neither variable interpolation nor backquote expansion are\n performed in the arguments.\n\n Tip\n Another way to obtain the same result as \\copy ... to is to use\n the SQL COPY ... TO STDOUT command and terminate it with \\g\n filename or \\g |program. Unlike \\copy, this method allows the\n command to span multiple lines; also, variable interpolation\n and backquote expansion can be used.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Mon, 25 Apr 2022 12:24:47 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: variable filename for psql \\copy"
},
{
"msg_contents": "On Mon, Apr 25, 2022 at 1:24 AM Jiří Fejfar <jurafejfar@gmail.com> wrote:\n\n> contrib_regression=# copy (select 1) to :'afile';\n>\n\nHopefully you realize that COPY is going to place that file on the server,\nnot send it to the psql client to be placed on the local machine.\n\nThe best way to do copy in psql is:\n\\set afile '...'\n\\o :'afile'\ncopy ... to stdout; --or the variant where you one-shot the \\o ( \\g with\narguments )\n\nNot only do you get variable expansion but you can write the COPY command\non multiple lines just like any other SQL command.\n\nAdditionally, we have a list, and even an online form, for submitting bug\nreports. That would have been the more appropriate place to direct this\nemail.\n\nDavid J.",
"msg_date": "Mon, 25 Apr 2022 09:07:38 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: variable filename for psql \\copy"
},
{
"msg_contents": "Dear Daniel, David\n\nOn Mon, 25 Apr 2022 at 18:07, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Mon, Apr 25, 2022 at 1:24 AM Jiří Fejfar <jurafejfar@gmail.com> wrote:\n>\n>> contrib_regression=# copy (select 1) to :'afile';\n>>\n>\n> Hopefully you realize that COPY is going to place that file on the server,\n> not send it to the psql client to be placed on the local machine.\n>\n> The best way to do copy in psql is:\n> \\set afile '...'\n> \\o :'afile'\n> copy ... to stdout; --or the variant where you one-shot the \\o ( \\g with\n> arguments )\n>\n> Not only do you get variable expansion but you can write the COPY command\n> on multiple lines just like any other SQL command.\n>\n>\nthank you for your advice, \\g works pretty well in my case\n\n\n> Additionally, we have a list, and even an online form, for submitting bug\n> reports. That would have been the more appropriate place to direct this\n> email.\n>\n>\nsorry, I didn't realize that, next time I will send report there\n\nJ.\n\n> David J.\n>\n>",
"msg_date": "Tue, 26 Apr 2022 13:55:14 +0200",
"msg_from": "=?UTF-8?B?SmnFmcOtIEZlamZhcg==?= <jurafejfar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: variable filename for psql \\copy"
}
] |
[
{
"msg_contents": "Hi,\n\nWith synchronous replication typically all the transactions (txns)\nfirst locally get committed, then streamed to the sync standbys and\nthe backend that generated the transaction will wait for ack from sync\nstandbys. While waiting for ack, it may happen that the query or the\ntxn gets canceled (QueryCancelPending is true) or the waiting backend\nis asked to exit (ProcDiePending is true). In either of these cases,\nthe wait for ack gets canceled and leaves the txn in an inconsistent\nstate (as in the client thinks that the txn would have replicated to\nsync standbys) - \"The transaction has already committed locally, but\nmight not have been replicated to the standby.\". Upon restart after\nthe crash or in the next txn after the old locally committed txn was\ncanceled, the client will be able to see the txns that weren't\nactually streamed to sync standbys. Also, if the client fails over to\none of the sync standbys after the crash (either by choice or because\nof automatic failover management after crash), the locally committed\ntxns on the crashed primary would be lost which isn't good in a true\nHA solution.\n\nHere's a proposal (mentioned previously by Satya [1]) to avoid the\nabove problems:\n1) Wait a configurable amount of time before canceling the sync\nreplication by the backends i.e. delay processing of\nQueryCancelPending and ProcDiePending in Introduced a new timeout GUC\nsynchronous_replication_naptime_before_cancel, when set, it will let\nthe backends wait for the ack before canceling the synchronous\nreplication so that the transaction can be available in sync standbys\nas well. If the ack isn't received even within this time frame, the\nbackend cancels the wait and goes ahead as it does today. 
In\nproduction HA environments, the GUC can be set to a reasonable value\nto avoid missing transactions during failovers.\n2) Wait for sync standbys to catch up upon restart after the crash or\nin the next txn after the old locally committed txn was canceled. One\nway to achieve this is to let the backend, that's making the first\nconnection, wait for sync standbys to catch up in ClientAuthentication\nright after successful authentication. However, I'm not sure this is\nthe best way to do it at this point.\n\nThoughts?\n\nHere's a WIP patch implementing the (1), I'm yet to code for (2). I\nhaven't added tests, I'm yet to figure out how to add one as there's\nno way we can delay the WAL sender so that we can reliably hit this\ncode. I will think more about this.\n\n[1] https://www.postgresql.org/message-id/CAHg%2BQDdTdPsqtu0QLG8rMg3Xo%3D6Xo23TwHPYsUgGNEK13wTY5g%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Mon, 25 Apr 2022 19:51:03 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Mon, Apr 25, 2022 at 07:51:03PM +0530, Bharath Rupireddy wrote:\n> With synchronous replication typically all the transactions (txns)\n> first locally get committed, then streamed to the sync standbys and\n> the backend that generated the transaction will wait for ack from sync\n> standbys. While waiting for ack, it may happen that the query or the\n> txn gets canceled (QueryCancelPending is true) or the waiting backend\n> is asked to exit (ProcDiePending is true). In either of these cases,\n> the wait for ack gets canceled and leaves the txn in an inconsistent\n> state (as in the client thinks that the txn would have replicated to\n> sync standbys) - \"The transaction has already committed locally, but\n> might not have been replicated to the standby.\". Upon restart after\n> the crash or in the next txn after the old locally committed txn was\n> canceled, the client will be able to see the txns that weren't\n> actually streamed to sync standbys. Also, if the client fails over to\n> one of the sync standbys after the crash (either by choice or because\n> of automatic failover management after crash), the locally committed\n> txns on the crashed primary would be lost which isn't good in a true\n> HA solution.\n\nThis topic has come up a few times recently [0] [1] [2].\n\n> Thoughts?\n\nI think this will require a fair amount of discussion. I'm personally in\nfavor of just adding a GUC that can be enabled to block canceling\nsynchronous replication waits, but I know folks have concerns with that\napproach. When I looked at this stuff previously [2], it seemed possible\nto handle the other data loss scenarios (e.g., forcing failover whenever\nthe primary shut down, turning off restart_after_crash). 
However, I'm not\nwedded to this approach.\n\n[0] https://postgr.es/m/C1F7905E-5DB2-497D-ABCC-E14D4DEE506C%40yandex-team.ru\n[1] https://postgr.es/m/cac4b9df-92c6-77aa-687b-18b86cb13728%40stratox.cz\n[2] https://postgr.es/m/FDE157D7-3F35-450D-B927-7EC2F82DB1D6%40amazon.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 25 Apr 2022 09:48:13 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in synchronous\n replication"
},
{
"msg_contents": "On Mon, 2022-04-25 at 19:51 +0530, Bharath Rupireddy wrote:\n> With synchronous replication typically all the transactions (txns)\n> first locally get committed, then streamed to the sync standbys and\n> the backend that generated the transaction will wait for ack from sync\n> standbys. While waiting for ack, it may happen that the query or the\n> txn gets canceled (QueryCancelPending is true) or the waiting backend\n> is asked to exit (ProcDiePending is true). In either of these cases,\n> the wait for ack gets canceled and leaves the txn in an inconsistent\n> state [...]\n> \n> Here's a proposal (mentioned previously by Satya [1]) to avoid the\n> above problems:\n> 1) Wait a configurable amount of time before canceling the sync\n> replication by the backends i.e. delay processing of\n> QueryCancelPending and ProcDiePending in Introduced a new timeout GUC\n> synchronous_replication_naptime_before_cancel, when set, it will let\n> the backends wait for the ack before canceling the synchronous\n> replication so that the transaction can be available in sync standbys\n> as well.\n> 2) Wait for sync standbys to catch up upon restart after the crash or\n> in the next txn after the old locally committed txn was canceled.\n\nWhile this may mitigate the problem, I don't think it will deal with\nall the cases which could cause a transaction to end up committed locally,\nbut not on the synchronous standby. I think that only using the full\npower of two-phase commit can make this bulletproof.\n\nIs it worth adding additional complexity that is not a complete solution?\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 26 Apr 2022 08:26:59 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in synchronous\n replication"
},
{
"msg_contents": "\n\n> 25 апр. 2022 г., в 21:48, Nathan Bossart <nathandbossart@gmail.com> написал(а):\n> \n> I'm personally in\n> favor of just adding a GUC that can be enabled to block canceling\n> synchronous replication waits\n\n+1. I think it's the only option to provide quorum commit guarantees.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 26 Apr 2022 16:22:38 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in synchronous\n replication"
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 11:57 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Mon, 2022-04-25 at 19:51 +0530, Bharath Rupireddy wrote:\n> > With synchronous replication typically all the transactions (txns)\n> > first locally get committed, then streamed to the sync standbys and\n> > the backend that generated the transaction will wait for ack from sync\n> > standbys. While waiting for ack, it may happen that the query or the\n> > txn gets canceled (QueryCancelPending is true) or the waiting backend\n> > is asked to exit (ProcDiePending is true). In either of these cases,\n> > the wait for ack gets canceled and leaves the txn in an inconsistent\n> > state [...]\n> >\n> > Here's a proposal (mentioned previously by Satya [1]) to avoid the\n> > above problems:\n> > 1) Wait a configurable amount of time before canceling the sync\n> > replication by the backends i.e. delay processing of\n> > QueryCancelPending and ProcDiePending in Introduced a new timeout GUC\n> > synchronous_replication_naptime_before_cancel, when set, it will let\n> > the backends wait for the ack before canceling the synchronous\n> > replication so that the transaction can be available in sync standbys\n> > as well.\n> > 2) Wait for sync standbys to catch up upon restart after the crash or\n> > in the next txn after the old locally committed txn was canceled.\n>\n> While this may mitigate the problem, I don't think it will deal with\n> all the cases which could cause a transaction to end up committed locally,\n> but not on the synchronous standby. I think that only using the full\n> power of two-phase commit can make this bulletproof.\n\nNot sure if it's recommended to use 2PC in postgres HA with sync\nreplication where the documentation says that \"PREPARE TRANSACTION\"\nand other 2PC commands are \"intended for use by external transaction\nmanagement systems\" and with explicit transactions. 
Whereas, the txns\nwithin a postgres HA with sync replication always don't have to be\nexplicit txns. Am I missing something here?\n\n> Is it worth adding additional complexity that is not a complete solution?\n\nThe proposed approach helps to avoid some common possible problems\nthat arise with simple scenarios (like cancelling a long running query\nwhile in SyncRepWaitForLSN) within sync replication.\n\n[1] https://www.postgresql.org/docs/devel/sql-prepare-transaction.html\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 9 May 2022 14:50:21 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Mon, May 9, 2022 at 2:50 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Apr 26, 2022 at 11:57 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> >\n> > On Mon, 2022-04-25 at 19:51 +0530, Bharath Rupireddy wrote:\n> > > With synchronous replication typically all the transactions (txns)\n> > > first locally get committed, then streamed to the sync standbys and\n> > > the backend that generated the transaction will wait for ack from sync\n> > > standbys. While waiting for ack, it may happen that the query or the\n> > > txn gets canceled (QueryCancelPending is true) or the waiting backend\n> > > is asked to exit (ProcDiePending is true). In either of these cases,\n> > > the wait for ack gets canceled and leaves the txn in an inconsistent\n> > > state [...]\n> > >\n> > > Here's a proposal (mentioned previously by Satya [1]) to avoid the\n> > > above problems:\n> > > 1) Wait a configurable amount of time before canceling the sync\n> > > replication by the backends i.e. delay processing of\n> > > QueryCancelPending and ProcDiePending in Introduced a new timeout GUC\n> > > synchronous_replication_naptime_before_cancel, when set, it will let\n> > > the backends wait for the ack before canceling the synchronous\n> > > replication so that the transaction can be available in sync standbys\n> > > as well.\n> > > 2) Wait for sync standbys to catch up upon restart after the crash or\n> > > in the next txn after the old locally committed txn was canceled.\n> >\n> > While this may mitigate the problem, I don't think it will deal with\n> > all the cases which could cause a transaction to end up committed locally,\n> > but not on the synchronous standby. 
I think that only using the full\n> > power of two-phase commit can make this bulletproof.\n>\n> Not sure if it's recommended to use 2PC in postgres HA with sync\n> replication where the documentation says that \"PREPARE TRANSACTION\"\n> and other 2PC commands are \"intended for use by external transaction\n> management systems\" and with explicit transactions. Whereas, the txns\n> within a postgres HA with sync replication always don't have to be\n> explicit txns. Am I missing something here?\n>\n> > Is it worth adding additional complexity that is not a complete solution?\n>\n> The proposed approach helps to avoid some common possible problems\n> that arise with simple scenarios (like cancelling a long running query\n> while in SyncRepWaitForLSN) within sync replication.\n\nIMHO, making it wait for some amount of time, based on GUC is not a\ncomplete solution. It is just a hack to avoid the problem in some\ncases.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 May 2022 15:14:03 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "\n\n> On 9 May 2022, at 14:20, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> On Tue, Apr 26, 2022 at 11:57 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>> \n>> While this may mitigate the problem, I don't think it will deal with\n>> all the cases which could cause a transaction to end up committed locally,\n>> but not on the synchronous standby. I think that only using the full\n>> power of two-phase commit can make this bulletproof.\n> \n> Not sure if it's recommended to use 2PC in postgres HA with sync\n> replication where the documentation says that \"PREPARE TRANSACTION\"\n> and other 2PC commands are \"intended for use by external transaction\n> management systems\" and with explicit transactions. Whereas, the txns\n> within a postgres HA with sync replication always don't have to be\n> explicit txns. Am I missing something here?\n\nCOMMIT PREPARED needs to be replicated as well, thus encountering the very same problem as usual COMMIT: if done during failover it can be canceled and committed data can be wrongfully reported durably written. 2PC is not a remedy to the fact that PG silently cancels awaiting of sync replication. The problem arise in presence of any \"commit\". And \"commit\" is there if transactions are there.\n\n> On 9 May 2022, at 14:44, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> \n> IMHO, making it wait for some amount of time, based on GUC is not a\n> complete solution. It is just a hack to avoid the problem in some\n> cases.\n\nDisallowing cancelation of locally committed transactions is not a hack. It's removing of a hack that was erroneously installed to make backend responsible to Ctrl+C (or client side statement timeout).\n\n> On 26 Apr 2022, at 11:26, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> \n> Is it worth adding additional complexity that is not a complete solution?\n\nIts not additional complexity. It is removing additional complexity that made sync rep interruptible. 
(But I'm surely talking not about GUCs like synchronous_replication_naptime_before_cancel - wait of sync rep must be indefinite until synchrous_commit\\synchronous_standby_names are satisfied )\n\nAnd yes, we need additional complexity - but in some other place. Transaction can also be locally committed in presence of a server crash. But this another difficult problem. Crashed server must not allow data queries until LSN of timeline end is successfully replicated to synchronous_standby_names.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 9 May 2022 16:09:30 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in synchronous\n replication"
},
{
"msg_contents": "On Mon, May 9, 2022 at 4:39 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n> > On 9 May 2022, at 14:44, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > IMHO, making it wait for some amount of time, based on GUC is not a\n> > complete solution. It is just a hack to avoid the problem in some\n> > cases.\n>\n> Disallowing cancelation of locally committed transactions is not a hack. It's removing of a hack that was erroneously installed to make backend responsible to Ctrl+C (or client side statement timeout).\n\nI might be missing something but based on my understanding the\napproach is not disallowing the query cancellation but it is just\nadding the configuration for how much to delay before canceling the\nquery. That's the reason I mentioned that this is not a guarenteed\nsolution. I mean with this configuration value also you can not avoid\nproblems in all the cases, right?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 May 2022 13:18:20 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Tue, May 10, 2022 at 1:18 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, May 9, 2022 at 4:39 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> > > On 9 May 2022, at 14:44, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > IMHO, making it wait for some amount of time, based on GUC is not a\n> > > complete solution. It is just a hack to avoid the problem in some\n> > > cases.\n> >\n> > Disallowing cancelation of locally committed transactions is not a hack. It's removing of a hack that was erroneously installed to make backend responsible to Ctrl+C (or client side statement timeout).\n>\n> I might be missing something but based on my understanding the\n> approach is not disallowing the query cancellation but it is just\n> adding the configuration for how much to delay before canceling the\n> query. That's the reason I mentioned that this is not a guarenteed\n> solution. I mean with this configuration value also you can not avoid\n> problems in all the cases, right?\n\nYes Dilip, the proposed GUC in v1 patch doesn't allow waiting forever\nfor sync repl ack, in other words, doesn't allow blocking the pending\nquery cancels or proc die interrupts forever. The backends may linger\nin case repl ack isn't received or sync replicas aren't reachable?\nUsers may have to set the GUC to a 'reasonable value'.\n\nIf okay, I can make the GUC behave this way - value 0 existing\nbehaviour i.e. no wait for sync repl ack, just process query cancels\nand proc die interrupts immediately; value -1 wait unboundedly for the\nack; value > 0 wait for specified milliseconds for the ack.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Tue, 10 May 2022 13:29:59 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "\n\n> 10 мая 2022 г., в 12:59, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> написал(а):\n> \n> If okay, I can make the GUC behave this way - value 0 existing\n> behaviour i.e. no wait for sync repl ack, just process query cancels\n> and proc die interrupts immediately; value -1 wait unboundedly for the\n> ack; value > 0 wait for specified milliseconds for the ack.\n+1 if we make -1 and 0 only valid values.\n\n> query cancels or proc die interrupts\n\nPlease note, that typical HA tool would need to handle query cancels and proc die interrupts differently.\n\nWhen the network is partitioned and somewhere standby is promoted you definitely want infinite wait for cancels. Yet once upon a time you want to shutdown postgres without coredump - thus proc die needs to be processed.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 10 May 2022 17:25:01 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in synchronous\n replication"
},
{
"msg_contents": "On Tue, May 10, 2022 at 5:55 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> > 10 мая 2022 г., в 12:59, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> написал(а):\n> >\n> > If okay, I can make the GUC behave this way - value 0 existing\n> > behaviour i.e. no wait for sync repl ack, just process query cancels\n> > and proc die interrupts immediately; value -1 wait unboundedly for the\n> > ack; value > 0 wait for specified milliseconds for the ack.\n> +1 if we make -1 and 0 only valid values.\n>\n> > query cancels or proc die interrupts\n>\n> Please note, that typical HA tool would need to handle query cancels and proc die interrupts differently.\n\nAgree.\n\n> When the network is partitioned and somewhere standby is promoted you definitely want infinite wait for cancels.\n\nWhen standby is promoted, no point the old primary waiting forever for\nack assuming we are going to discard it.\n\n> Yet once upon a time you want to shutdown postgres without coredump - thus proc die needs to be processed.\n\nI think users can still have the flexibility to set the right amounts\nof time to process cancel and proc die interrupts.\n\nIMHO, the GUC can still have 0, -1 and value > 0 in milliseconds, let\nthe users decide on what they want. Do you see any problems with this?\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 13 May 2022 17:39:12 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Tue, May 10, 2022 at 5:55 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> > 10 мая 2022 г., в 12:59, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> написал(а):\n> >\n> > If okay, I can make the GUC behave this way - value 0 existing\n> > behaviour i.e. no wait for sync repl ack, just process query cancels\n> > and proc die interrupts immediately; value -1 wait unboundedly for the\n> > ack; value > 0 wait for specified milliseconds for the ack.\n> +1 if we make -1 and 0 only valid values.\n>\n> > query cancels or proc die interrupts\n>\n> Please note, that typical HA tool would need to handle query cancels and proc die interrupts differently.\n\nHm, after thinking for a while, I tend to agree with the above\napproach - meaning, query cancel interrupt processing can completely\nbe disabled in SyncRepWaitForLSN() and process proc die interrupt\nimmediately, this approach requires no GUC as opposed to the proposed\nv1 patch upthread.\n\nHowever, it's good to see what other hackers think about this.\n\n> When the network is partitioned and somewhere standby is promoted you definitely want infinite wait for cancels. Yet once upon a time you want to shutdown postgres without coredump - thus proc die needs to be processed.\n\nAgree.\n\nOn Mon, May 9, 2022 at 4:39 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> > On 26 Apr 2022, at 11:26, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> >\n> > Is it worth adding additional complexity that is not a complete solution?\n>\n> Its not additional complexity. It is removing additional complexity that made sync rep interruptible. (But I'm surely talking not about GUCs like synchronous_replication_naptime_before_cancel - wait of sync rep must be indefinite until synchrous_commit\\synchronous_standby_names are satisfied )\n>\n> And yes, we need additional complexity - but in some other place. Transaction can also be locally committed in presence of a server crash. 
But this another difficult problem. Crashed server must not allow data queries until LSN of timeline end is successfully replicated to synchronous_standby_names.\n\nHm, that needs to be done anyways. How about doing as proposed\ninitially upthread [1]? Also, quoting the idea here [2].\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CALj2ACUrOB59QaE6=jF2cFAyv1MR7fzD8tr4YM5+OwEYG1SNzA@mail.gmail.com\n[2] 2) Wait for sync standbys to catch up upon restart after the crash or\nin the next txn after the old locally committed txn was canceled. One\nway to achieve this is to let the backend, that's making the first\nconnection, wait for sync standbys to catch up in ClientAuthentication\nright after successful authentication. However, I'm not sure this is\nthe best way to do it at this point.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 25 Jul 2022 14:59:02 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "\n\n> 25 июля 2022 г., в 14:29, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> написал(а):\n> \n> Hm, after thinking for a while, I tend to agree with the above\n> approach - meaning, query cancel interrupt processing can completely\n> be disabled in SyncRepWaitForLSN() and process proc die interrupt\n> immediately, this approach requires no GUC as opposed to the proposed\n> v1 patch upthread.\nGUC was proposed here[0] to maintain compatibility with previous behaviour. But I think that having no GUC here is fine too. If we do not allow cancelation of unreplicated backends, of course.\n\n\n>> \n>> And yes, we need additional complexity - but in some other place. Transaction can also be locally committed in presence of a server crash. But this another difficult problem. Crashed server must not allow data queries until LSN of timeline end is successfully replicated to synchronous_standby_names.\n> \n> Hm, that needs to be done anyways. How about doing as proposed\n> initially upthread [1]? Also, quoting the idea here [2].\n> \n> Thoughts?\n> \n> [1] https://www.postgresql.org/message-id/CALj2ACUrOB59QaE6=jF2cFAyv1MR7fzD8tr4YM5+OwEYG1SNzA@mail.gmail.com\n> [2] 2) Wait for sync standbys to catch up upon restart after the crash or\n> in the next txn after the old locally committed txn was canceled. One\n> way to achieve this is to let the backend, that's making the first\n> connection, wait for sync standbys to catch up in ClientAuthentication\n> right after successful authentication. However, I'm not sure this is\n> the best way to do it at this point.\n\n\nI think ideally startup process should not allow read only connections in CheckRecoveryConsistency() until WAL is not replicated to quorum al least up until new timeline LSN.\n\nThanks!\n\n[0] https://commitfest.postgresql.org/34/2402/\n\n\n\n",
"msg_date": "Mon, 25 Jul 2022 15:50:25 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in synchronous\n replication"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 4:20 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> > 25 июля 2022 г., в 14:29, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> написал(а):\n> >\n> > Hm, after thinking for a while, I tend to agree with the above\n> > approach - meaning, query cancel interrupt processing can completely\n> > be disabled in SyncRepWaitForLSN() and process proc die interrupt\n> > immediately, this approach requires no GUC as opposed to the proposed\n> > v1 patch upthread.\n> GUC was proposed here[0] to maintain compatibility with previous behaviour. But I think that having no GUC here is fine too. If we do not allow cancelation of unreplicated backends, of course.\n>\n> >>\n> >> And yes, we need additional complexity - but in some other place. Transaction can also be locally committed in presence of a server crash. But this another difficult problem. Crashed server must not allow data queries until LSN of timeline end is successfully replicated to synchronous_standby_names.\n> >\n> > Hm, that needs to be done anyways. How about doing as proposed\n> > initially upthread [1]? Also, quoting the idea here [2].\n> >\n> > Thoughts?\n> >\n> > [1] https://www.postgresql.org/message-id/CALj2ACUrOB59QaE6=jF2cFAyv1MR7fzD8tr4YM5+OwEYG1SNzA@mail.gmail.com\n> > [2] 2) Wait for sync standbys to catch up upon restart after the crash or\n> > in the next txn after the old locally committed txn was canceled. One\n> > way to achieve this is to let the backend, that's making the first\n> > connection, wait for sync standbys to catch up in ClientAuthentication\n> > right after successful authentication. However, I'm not sure this is\n> > the best way to do it at this point.\n>\n>\n> I think ideally startup process should not allow read only connections in CheckRecoveryConsistency() until WAL is not replicated to quorum al least up until new timeline LSN.\n\nWe can't do it in CheckRecoveryConsistency() unless I'm missing\nsomething. 
Because, the walsenders (required for sending the remaining\nWAL to sync standbys to achieve quorum) can only be started after the\nserver reaches a consistent state, after all walsenders are\nspecialized backends.\n\n\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Thu, 4 Aug 2022 13:42:02 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 1:42 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Jul 25, 2022 at 4:20 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> >\n> > > 25 июля 2022 г., в 14:29, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> написал(а):\n> > >\n> > > Hm, after thinking for a while, I tend to agree with the above\n> > > approach - meaning, query cancel interrupt processing can completely\n> > > be disabled in SyncRepWaitForLSN() and process proc die interrupt\n> > > immediately, this approach requires no GUC as opposed to the proposed\n> > > v1 patch upthread.\n> > GUC was proposed here[0] to maintain compatibility with previous behaviour. But I think that having no GUC here is fine too. If we do not allow cancelation of unreplicated backends, of course.\n> >\n> > >>\n> > >> And yes, we need additional complexity - but in some other place. Transaction can also be locally committed in presence of a server crash. But this another difficult problem. Crashed server must not allow data queries until LSN of timeline end is successfully replicated to synchronous_standby_names.\n> > >\n> > > Hm, that needs to be done anyways. How about doing as proposed\n> > > initially upthread [1]? Also, quoting the idea here [2].\n> > >\n> > > Thoughts?\n> > >\n> > > [1] https://www.postgresql.org/message-id/CALj2ACUrOB59QaE6=jF2cFAyv1MR7fzD8tr4YM5+OwEYG1SNzA@mail.gmail.com\n> > > [2] 2) Wait for sync standbys to catch up upon restart after the crash or\n> > > in the next txn after the old locally committed txn was canceled. One\n> > > way to achieve this is to let the backend, that's making the first\n> > > connection, wait for sync standbys to catch up in ClientAuthentication\n> > > right after successful authentication. 
However, I'm not sure this is\n> > > the best way to do it at this point.\n> >\n> >\n> > I think ideally startup process should not allow read only connections in CheckRecoveryConsistency() until WAL is not replicated to quorum al least up until new timeline LSN.\n>\n> We can't do it in CheckRecoveryConsistency() unless I'm missing\n> something. Because, the walsenders (required for sending the remaining\n> WAL to sync standbys to achieve quorum) can only be started after the\n> server reaches a consistent state, after all walsenders are\n> specialized backends.\n\nContinuing on the above thought (I inadvertently clicked the send\nbutton previously): A simple approach would be to check for quorum in\nPostgresMain() before entering the query loop for (;;) for\nnon-walsender cases. A disadvantage of this would be that all the\nbackends will be waiting here in the worst case if it takes time for\nachieving the sync quorum after restart - roughly we can do the\nfollowing in PostgresMain(), of course we need locking mechanism so\nthat all the backends whoever reaches here will wait for the same lsn:\n\nif (sync_replicaion_defined == true &&\nshmem->wait_for_sync_repl_upon_restart == true)\n{\n SyncRepWaitForLSN(pg_current_wal_flush_lsn(), false);\n shmem->wait_for_sync_repl_upon_restart = false;\n}\n\nThoughts?\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Thu, 4 Aug 2022 14:01:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "At Tue, 26 Apr 2022 08:26:59 +0200, Laurenz Albe <laurenz.albe@cybertec.at> wrote in \n> While this may mitigate the problem, I don't think it will deal with\n> all the cases which could cause a transaction to end up committed locally,\n> but not on the synchronous standby. I think that only using the full\n> power of two-phase commit can make this bulletproof.\n> \n> Is it worth adding additional complexity that is not a complete solution?\n\nI would agree to this. Likewise 2PC, whatever we do to make it\nperfect, we're left with unresolvable problems at least for now.\n\nDoesn't it meet your requirements if we have a means to know the last\ntransaction on the current session is locally committed or aborted?\n\nWe are already internally managing last committed LSN. I think we can\ndo the same thing about transaction abort and last inserted LSN and we\ncan expose them any way. This is way simpler than the (maybe)\nuncompletable attempt to fill up the deep gap.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 05 Aug 2022 11:49:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in\n synchronous replication"
},
{
"msg_contents": "On Fri, Aug 5, 2022 at 8:19 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 26 Apr 2022 08:26:59 +0200, Laurenz Albe <laurenz.albe@cybertec.at> wrote in\n> > While this may mitigate the problem, I don't think it will deal with\n> > all the cases which could cause a transaction to end up committed locally,\n> > but not on the synchronous standby. I think that only using the full\n> > power of two-phase commit can make this bulletproof.\n> >\n> > Is it worth adding additional complexity that is not a complete solution?\n>\n> I would agree to this. Likewise 2PC, whatever we do to make it\n> perfect, we're left with unresolvable problems at least for now.\n>\n> Doesn't it meet your requirements if we have a means to know the last\n> transaction on the current session is locally committed or aborted?\n>\n> We are already internally managing last committed LSN. I think we can\n> do the same thing about transaction abort and last inserted LSN and we\n> can expose them any way. This is way simpler than the (maybe)\n> uncompletable attempt to fill up the deep gap.\n\nThere can be more txns that are\nlocally-committed-but-not-yet-replicated. Even if we have that\ninformation stored somewhere, what do we do with it? Those txns are\ncommitted from the client perspective but not committed from the\nserver's perspective.\n\nCan you please explain more about your idea, I may be missing something?\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Mon, 8 Aug 2022 19:13:25 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "At Mon, 8 Aug 2022 19:13:25 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Fri, Aug 5, 2022 at 8:19 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Tue, 26 Apr 2022 08:26:59 +0200, Laurenz Albe <laurenz.albe@cybertec.at> wrote in\n> > > While this may mitigate the problem, I don't think it will deal with\n> > > all the cases which could cause a transaction to end up committed locally,\n> > > but not on the synchronous standby. I think that only using the full\n> > > power of two-phase commit can make this bulletproof.\n> > >\n> > > Is it worth adding additional complexity that is not a complete solution?\n> >\n> > I would agree to this. Likewise 2PC, whatever we do to make it\n> > perfect, we're left with unresolvable problems at least for now.\n> >\n> > Doesn't it meet your requirements if we have a means to know the last\n> > transaction on the current session is locally committed or aborted?\n> >\n> > We are already internally managing last committed LSN. I think we can\n> > do the same thing about transaction abort and last inserted LSN and we\n> > can expose them any way. This is way simpler than the (maybe)\n> > uncompletable attempt to fill up the deep gap.\n> \n> There can be more txns that are\n> locally-committed-but-not-yet-replicated. Even if we have that\n> information stored somewhere, what do we do with it? Those txns are\n> committed from the client perspective but not committed from the\n> server's perspective.\n> \n> Can you please explain more about your idea, I may be missing something?\n\n(I'm not sure I understand the requirements here..)\n\nI understand that it is about query cancellation. In the case of\nprimary crash/termination, client cannot even know whether the commit\nof the ongoing transaction, if any, has been recorded. Anyway no way\nother than to somehow confirm that the change by the transaction has\nbeen actually made after restart. 
I believe it is the standard\npractice of the applications that work on HA clusters.\n\nThe same is true in the case of query cancellation since commit\nresponse doesn't reach the client, too. But even in this case if we\nhad functions/views that tells us the\nlast-committed/last-aborted/last-inserted LSN on a session, we can\nknow whether the last transaction has been committed along with the\ncommit LSN maybe more easily.\n\n# In fact, I see those functions rather as a means to know whether a\n# change by the last transaction on a session is available on some\n# replica.\n\nFor example, the below heavily simplified pseudo code might display\nhow the fucntions (if they were functions) work.\n\n try {\n s.execute(\"INSERT ..\");\n c.commit();\n } catch (Exception x) {\n c.commit();\n if (s.execute(\"SELECT pg_session_last_committed_lsn() = \"\n \"pg_session_last_inserted_lsn()\"))\n {\n /* the transaction has been locally committed */\n if (s.execute(\"SELECT replay_lsn >= pg_session_last_committed_lsn() \"\n \"FROM pg_stat_replication where xxxx\")\n\t /* the commit has been replicated to xxx, LSN is known */\n } else {\n /* the transaction has not been locally committed */\n <retry?>\n }\n }\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 09 Aug 2022 16:12:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in\n synchronous replication"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 12:42 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> > Can you please explain more about your idea, I may be missing something?\n>\n> (I'm not sure I understand the requirements here..)\n\nI've explained the problem with the current HA setup with synchronous\nreplication upthread at [1]. Let me reiterate it here once again.\n\nWhen a query is cancelled (a simple stroke of CTRL+C or\npg_cancel_backend() call) while the txn is waiting for ack in\nSyncRepWaitForLSN(); for the client, the txn is actually committed\n(locally-committed-but-not-yet-replicated to all of sync standbys)\nlike any other txn, a warning is emitted into server logs but it is of\nno use for the client (think of client as applications). There can be\nmany such txns waiting for ack in SyncRepWaitForLSN() and query cancel\ncan be issued on all of those sessions. The problem is that the\nsubsequent reads will then be able to read all of those\nlocally-committed-but-not-yet-replicated to all of sync standbys txns\ndata - this is what I call an inconsistency (can we call this a\nread-after-write inconsistency?) because of lack of proper query\ncancel handling. And if the sync standbys are down or unable to come\nup for some reason, until then, the primary will be serving clients\nwith the inconsistent data. BTW, I found a report of this problem here\n[2].\n\nThe solution proposed for the above problem is to just 'not honor\nquery cancels at all while waiting for ack in SyncRepWaitForLSN()'.\n\nWhen a proc die is pending, then also, there can be\nlocally-committed-but-not-yet-replicated to all of sync standbys txns.\nTypically, there are two choices for the clients 1) reuse the primary\ninstance after restart 2) failover to one of sync standbys. 
For case\n(1), there might be read-after-write inconsistency as explained above.\nFor case (2), those txns might get lost completely if the failover\ntarget sync standby or the new primary didn't receive them and the\nother sync standbys that have received them are now ahead and need a\nspecial treatment (run pg_rewind) for them to be able to connect to\nnew primary.\n\nThe solution proposed for case (1) of the above problem is to 'process\nthe ProcDiePending immediately and upon restart the first backend can\nwait until the sync standbys are caught up to ensure no inconsistent\nreads'.\nThe solution proposed for case (2) of the above problem is to 'either\nrun pg_rewind for the sync standbys that are ahead or use the idea\nproposed at [3]'.\n\nI hope the above explanation helps.\n\n[1] https://www.postgresql.org/message-id/flat/CALj2ACUrOB59QaE6%3DjF2cFAyv1MR7fzD8tr4YM5%2BOwEYG1SNzA%40mail.gmail.com\n[2] https://stackoverflow.com/questions/42686097/how-to-disable-uncommited-reads-in-postgres-synchronous-replication\n[3] https://www.postgresql.org/message-id/CALj2ACX-xO-ZenQt1MWazj0Z3ziSXBMr24N_X2c0dYysPQghrw%40mail.gmail.com\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Tue, 9 Aug 2022 14:16:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 2:16 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I've explained the problem with the current HA setup with synchronous\n> replication upthread at [1]. Let me reiterate it here once again.\n>\n> When a query is cancelled (a simple stroke of CTRL+C or\n> pg_cancel_backend() call) while the txn is waiting for ack in\n> SyncRepWaitForLSN(); for the client, the txn is actually committed\n> (locally-committed-but-not-yet-replicated to all of sync standbys)\n> like any other txn, a warning is emitted into server logs but it is of\n> no use for the client (think of client as applications). There can be\n> many such txns waiting for ack in SyncRepWaitForLSN() and query cancel\n> can be issued on all of those sessions. The problem is that the\n> subsequent reads will then be able to read all of those\n> locally-committed-but-not-yet-replicated to all of sync standbys txns\n> data - this is what I call an inconsistency (can we call this a\n> read-after-write inconsistency?) because of lack of proper query\n> cancel handling. And if the sync standbys are down or unable to come\n> up for some reason, until then, the primary will be serving clients\n> with the inconsistent data. BTW, I found a report of this problem here\n> [2].\n>\n> The solution proposed for the above problem is to just 'not honor\n> query cancels at all while waiting for ack in SyncRepWaitForLSN()'.\n>\n> When a proc die is pending, then also, there can be\n> locally-committed-but-not-yet-replicated to all of sync standbys txns.\n> Typically, there are two choices for the clients 1) reuse the primary\n> instance after restart 2) failover to one of sync standbys. 
For case\n> (1), there might be read-after-write inconsistency as explained above.\n> For case (2), those txns might get lost completely if the failover\n> target sync standby or the new primary didn't receive them and the\n> other sync standbys that have received them are now ahead and need a\n> special treatment (run pg_rewind) for them to be able to connect to\n> new primary.\n>\n> The solution proposed for case (1) of the above problem is to 'process\n> the ProcDiePending immediately and upon restart the first backend can\n> wait until the sync standbys are caught up to ensure no inconsistent\n> reads'.\n> The solution proposed for case (2) of the above problem is to 'either\n> run pg_rewind for the sync standbys that are ahead or use the idea\n> proposed at [3]'.\n>\n> I hope the above explanation helps.\n>\n> [1] https://www.postgresql.org/message-id/flat/CALj2ACUrOB59QaE6%3DjF2cFAyv1MR7fzD8tr4YM5%2BOwEYG1SNzA%40mail.gmail.com\n> [2] https://stackoverflow.com/questions/42686097/how-to-disable-uncommited-reads-in-postgres-synchronous-replication\n> [3] https://www.postgresql.org/message-id/CALj2ACX-xO-ZenQt1MWazj0Z3ziSXBMr24N_X2c0dYysPQghrw%40mail.gmail.com\n\nI'm attaching the v2 patch rebased on the latest HEAD. Please note\nthat there are still some open points, I'm yet to find time to think\nmore about them. Meanwhile, I'm posting the v2 patch for making cfbot\nhappy. Any further thoughts on the overall design of the patch are\nmost welcome. Thanks.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 27 Sep 2022 18:52:21 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 06:52:21PM +0530, Bharath Rupireddy wrote:\n> On Tue, Aug 9, 2022 at 2:16 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > I've explained the problem with the current HA setup with synchronous\n> > replication upthread at [1]. Let me reiterate it here once again.\n> >\n> > When a query is cancelled (a simple stroke of CTRL+C or\n> > pg_cancel_backend() call) while the txn is waiting for ack in\n> > SyncRepWaitForLSN(); for the client, the txn is actually committed\n> > (locally-committed-but-not-yet-replicated to all of sync standbys)\n> > like any other txn, a warning is emitted into server logs but it is of\n> > no use for the client (think of client as applications). There can be\n> > many such txns waiting for ack in SyncRepWaitForLSN() and query cancel\n> > can be issued on all of those sessions. The problem is that the\n> > subsequent reads will then be able to read all of those\n> > locally-committed-but-not-yet-replicated to all of sync standbys txns\n> > data - this is what I call an inconsistency (can we call this a\n> > read-after-write inconsistency?) because of lack of proper query\n> > cancel handling. And if the sync standbys are down or unable to come\n> > up for some reason, until then, the primary will be serving clients\n> > with the inconsistent data. BTW, I found a report of this problem here\n> > [2].\n> >\n> > The solution proposed for the above problem is to just 'not honor\n> > query cancels at all while waiting for ack in SyncRepWaitForLSN()'.\n> >\n> > When a proc die is pending, then also, there can be\n> > locally-committed-but-not-yet-replicated to all of sync standbys txns.\n> > Typically, there are two choices for the clients 1) reuse the primary\n> > instance after restart 2) failover to one of sync standbys. 
For case\n> > (1), there might be read-after-write inconsistency as explained above.\n> > For case (2), those txns might get lost completely if the failover\n> > target sync standby or the new primary didn't receive them and the\n> > other sync standbys that have received them are now ahead and need a\n> > special treatment (run pg_rewind) for them to be able to connect to\n> > new primary.\n> >\n> > The solution proposed for case (1) of the above problem is to 'process\n> > the ProcDiePending immediately and upon restart the first backend can\n> > wait until the sync standbys are caught up to ensure no inconsistent\n> > reads'.\n> > The solution proposed for case (2) of the above problem is to 'either\n> > run pg_rewind for the sync standbys that are ahead or use the idea\n> > proposed at [3]'.\n> >\n> > I hope the above explanation helps.\n> >\n> > [1] https://www.postgresql.org/message-id/flat/CALj2ACUrOB59QaE6%3DjF2cFAyv1MR7fzD8tr4YM5%2BOwEYG1SNzA%40mail.gmail.com\n> > [2] https://stackoverflow.com/questions/42686097/how-to-disable-uncommited-reads-in-postgres-synchronous-replication\n> > [3] https://www.postgresql.org/message-id/CALj2ACX-xO-ZenQt1MWazj0Z3ziSXBMr24N_X2c0dYysPQghrw%40mail.gmail.com\n> \n> I'm attaching the v2 patch rebased on the latest HEAD. Please note\n> that there are still some open points, I'm yet to find time to think\n> more about them. Meanwhile, I'm posting the v2 patch for making cfbot\n> happy. Any further thoughts on the overall design of the patch are\n> most welcome. Thanks.\n... \n> In PostgreSQL high availability setup with synchronous replication,\n> typically all the transactions first locally get committed, then\n> streamed to the synchronous standbys and the backends that generated\n> the transaction will wait for acknowledgement from synchronous\n> standbys. 
While waiting for acknowledgement, it may happen that the\n> query or the transaction gets canceled or the backend that's waiting\n> for acknowledgement is asked to exit. In either of these cases, the\n> wait for acknowledgement gets canceled and leaves transaction in an\n> inconsistent state as it got committed locally but not on the\n> standbys. When set the GUC synchronous_replication_naptime_before_cancel\n> introduced in this patch, it will let the backends wait for the\n> acknowledgement before canceling the wait for acknowledgement so\n> that the transaction can be available in synchronous standbys as\n> well.\n\nI don't think this patch is going in the right direction, and I think we\nneed to step back to see why.\n\nFirst, let's see how synchronous replication works. Each backend waits\nfor one or more synchronous replicas to acknowledge the WAL related to\nits commit and then it marks the commit as done in PGPROC and then to\nthe client; I wrote a blog about it:\n\n\thttps://momjian.us/main/blogs/pgblog/2020.html#June_3_2020\n\nSo, what happens when an insufficient number of synchronous replicas\nreply? Sessions hang because the synchronous behavior cannot be\nguaranteed. We then _allow_ query cancel so the user or administrator\ncan get out of the hung sessions and perhaps modify\nsynchronous_standby_names.\n\nWhat the proposed patch effectively does is to prevent the ability to\nrecovery the hung sessions or auto-continue the sessions if an\ninsufficient number of synchronous replicas respond. So, in a way, it\nis allowing for more strict and less strict behavior of\nsynchronous_standby_names.\n\nHowever, I think trying to solve this at the session level is the wrong\napproach. If you set a timeout to continue stuck sessions, odds are the\ntimeout will be too long for each session and performance will be\nunacceptable anyway, so you haven't gained much. 
If you prevent\ncancels, you effectively lock up the system with fewer options of\nrecovery.\n\nI have always felt this has to be done at the server level, meaning when\na synchronous_standby_names replica is not responding after a certain\ntimeout, the administrator must be notified by calling a shell command\ndefined in a GUC and all sessions will ignore the replica. This gives a\nmuch more predictable and useful behavior than the one in the patch ---\nwe have discussed this approach many times on the email lists.\n\nOnce we have that, we can consider removing the cancel ability while\nwaiting for synchronous replicas (since we have the timeout) or make it\noptional. We can also consider how do notify the administrator during\nquery cancel (if we allow it), backend abrupt exit/crash, and if we\nshould allow users to specify a retry interval to resynchronize the\nsynchronous replicas.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 29 Sep 2022 18:53:16 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in synchronous\n replication"
},
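The commit flow Bruce describes above — flush the commit record to local WAL first, then wait for standby acknowledgements — is exactly what creates the hazard this thread is about. A minimal Python sketch of that ordering (not PostgreSQL's actual implementation; `commit_with_sync_wait` and its arguments are illustrative only):

```python
import enum


class CommitOutcome(enum.Enum):
    REPLICATED = "replicated"   # quorum of sync standbys acknowledged
    LOCAL_ONLY = "local-only"   # wait was cancelled after the local flush


def commit_with_sync_wait(acks_received, quorum_size, cancel_requested):
    """Schematic model of the flow described above: the commit record is
    flushed to local WAL *before* the backend waits for standby acks, so a
    cancel during the wait leaves a transaction that is durable locally but
    possibly absent on every standby.  Returns (local_wal_flushed, outcome);
    outcome is None while the backend would still be waiting."""
    local_wal_flushed = True  # step 1: commit record already on local disk
    if acks_received >= quorum_size:
        return local_wal_flushed, CommitOutcome.REPLICATED
    if cancel_requested:
        # The transaction is NOT rolled back here -- this is the hazard.
        return local_wal_flushed, CommitOutcome.LOCAL_ONLY
    return local_wal_flushed, None
```

Note that in every branch the local flush has already happened; cancellation only changes whether the standbys saw the record, which is why the thread treats cancel-during-wait as the dangerous case.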
{
"msg_contents": "On Fri, Sep 30, 2022 at 4:23 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I don't think this patch is going in the right direction, and I think we\n> need to step back to see why.\n\nThanks a lot Bruce for providing insights.\n\n> First, let's see how synchronous replication works. Each backend waits\n> for one or more synchronous replicas to acknowledge the WAL related to\n> its commit and then it marks the commit as done in PGPROC and then to\n> the client; I wrote a blog about it:\n>\n> https://momjian.us/main/blogs/pgblog/2020.html#June_3_2020\n\nGreat!\n\n> So, what happens when an insufficient number of synchronous replicas\n> reply? Sessions hang because the synchronous behavior cannot be\n> guaranteed. We then _allow_ query cancel so the user or administrator\n> can get out of the hung sessions and perhaps modify\n> synchronous_standby_names.\n\nRight.\n\n> What the proposed patch effectively does is to prevent the ability to\n> recovery the hung sessions or auto-continue the sessions if an\n> insufficient number of synchronous replicas respond. So, in a way, it\n> is allowing for more strict and less strict behavior of\n> synchronous_standby_names.\n\nYes. I do agree that it makes the sessions further wait and closes the\nquick exit path that admins/users have when the problem occurs. But it\ndisallows users cancelling queries or terminating backends only to end\nup in locally-committed-but-not-replicated transactions on a server\nsetup where sync standbys all are working fine. I agree that we don't\nwant to further wait on cancels or proc dies as it makes the system\nmore unresponsive [1].\n\n> However, I think trying to solve this at the session level is the wrong\n> approach. If you set a timeout to continue stuck sessions, odds are the\n> timeout will be too long for each session and performance will be\n> unacceptable anyway, so you haven't gained much. 
If you prevent\n> cancels, you effectively lock up the system with fewer options of\n> recovery.\n\nYes.\n\n> I have always felt this has to be done at the server level, meaning when\n> a synchronous_standby_names replica is not responding after a certain\n> timeout, the administrator must be notified by calling a shell command\n> defined in a GUC and all sessions will ignore the replica. This gives a\n> much more predictable and useful behavior than the one in the patch ---\n> we have discussed this approach many times on the email lists.\n\nIIUC, each walsender serving a sync standby will determine that the\nsync standby isn't responding for a configurable amount of time (less\nthan wal_sender_timeout) and calls shell command to notify the admin\nif there are any backends waiting for sync replication in\nSyncRepWaitForLSN(). The shell command then provides the unresponsive\nsync standby name at the bare minimum for the admin to ignore it as\nsync standby/remove it from synchronous_standby_names to continue\nfurther. This still requires manual intervention which is a problem if\nrunning postgres server instances at scale. Also, having a new shell\ncommand in any form may pose security risks. I'm not sure at this\npoint how this new timeout is going to work alongside\nwal_sender_timeout.\n\nI'm thinking about the possible options that an admin has to get out\nof this situation:\n1) Removing the standby from synchronous_standby_names.\n2) Fixing the sync standby, by restarting or restoring the lost part\n(such as network or some other).\n\n(1) is something that postgres can help admins get out of the problem\neasily and automatically without any intervention. (2) is something\npostgres can't do much about.\n\nHow about we let postgres automatically remove an unresponsive (for a\npre-configured time) sync standby from synchronous_standby_names and\ninform the user (via log message and via new walsender property and\npg_stat_replication for monitoring purposes)? 
The users can then\ndetect such standbys and later try to bring them back to the sync\nstandbys group or do other things. I believe that a production level\npostgres HA with sync standbys will have monitoring to detect the\nreplication lag, failover decision etc via monitoring\npg_stat_replication. With this approach, a bit more monitoring is\nneeded. This solution requires less or no manual intervention and\nscales well. Please note that I haven't studied the possibilities of\nimplementing it yet.\n\nThoughts?\n\n> Once we have that, we can consider removing the cancel ability while\n> waiting for synchronous replicas (since we have the timeout) or make it\n> optional. We can also consider how do notify the administrator during\n> query cancel (if we allow it), backend abrupt exit/crash, and\n\nYeah. If we have the\ntimeout-and-auto-removal-of-standby-from-sync-standbys-list solution,\nthe users can then choose to disable processing query cancels/proc\ndies while waiting for sync replication in SyncRepWaitForLSN().\n\n> if we\n> should allow users to specify a retry interval to resynchronize the\n> synchronous replicas.\n\nThis is another interesting thing to consider if we were to make the\nauto-removed (by the above approach) standby a sync standby again\nwithout manual intervention.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoaCBwgMDkeBDOgtPgHcbfSYq%2BzORjL5DoU3pJyjALxtoQ%40mail.gmail.com\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 1 Oct 2022 06:59:26 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
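Bharath's auto-removal idea — drop a standby from the sync set once it has been unresponsive longer than a configurable timeout — can be sketched as a pure function over last-acknowledgement ages. `partition_standbys` is a hypothetical helper, not an existing PostgreSQL API, and as noted in the thread the removal timeout would have to stay below wal_sender_timeout:

```python
def partition_standbys(last_ack_age_s, removal_timeout_s):
    """Hypothetical sketch: split standbys into those still eligible for the
    sync quorum and those to be auto-demoted to async because their last
    acknowledgement is older than removal_timeout_s seconds.  Input is a
    mapping of standby name -> seconds since last ack."""
    still_sync, demoted = [], []
    for name, age in sorted(last_ack_age_s.items()):
        (demoted if age > removal_timeout_s else still_sync).append(name)
    return still_sync, demoted
```

A monitoring system watching pg_stat_replication would then see the demoted names reported (e.g. via a new state), which is the "no manual intervention" property the proposal is after.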
{
"msg_contents": "On Sat, Oct 1, 2022 at 06:59:26AM +0530, Bharath Rupireddy wrote:\n> > I have always felt this has to be done at the server level, meaning when\n> > a synchronous_standby_names replica is not responding after a certain\n> > timeout, the administrator must be notified by calling a shell command\n> > defined in a GUC and all sessions will ignore the replica. This gives a\n ------------------------------------\n> > much more predictable and useful behavior than the one in the patch ---\n> > we have discussed this approach many times on the email lists.\n> \n> IIUC, each walsender serving a sync standby will determine that the\n> sync standby isn't responding for a configurable amount of time (less\n> than wal_sender_timeout) and calls shell command to notify the admin\n> if there are any backends waiting for sync replication in\n> SyncRepWaitForLSN(). The shell command then provides the unresponsive\n> sync standby name at the bare minimum for the admin to ignore it as\n> sync standby/remove it from synchronous_standby_names to continue\n> further. This still requires manual intervention which is a problem if\n> running postgres server instances at scale. Also, having a new shell\n\nAs I highlighted above, by default you notify the administrator that a\nsychronous replica is not responding and then ignore it. If it becomes\nresponsive again, you notify the administrator again and add it back as\na sychronous replica.\n\n> command in any form may pose security risks. I'm not sure at this\n> point how this new timeout is going to work alongside\n> wal_sender_timeout.\n\nWe have archive_command, so I don't see a problem with another shell\ncommand.\n\n> I'm thinking about the possible options that an admin has to get out\n> of this situation:\n> 1) Removing the standby from synchronous_standby_names.\n\nYes, see above. We might need a read-only GUC that reports which\nsychronous replicas are active. 
As you can see, there is a lot of API\ndesign required here, but this is the most effective approach.\n\n> 2) Fixing the sync standby, by restarting or restoring the lost part\n> (such as network or some other).\n> \n> (1) is something that postgres can help admins get out of the problem\n> easily and automatically without any intervention. (2) is something\n> postgres can't do much about.\n> \n> How about we let postgres automatically remove an unresponsive (for a\n> pre-configured time) sync standby from synchronous_standby_names and\n> inform the user (via log message and via new walsender property and\n> pg_stat_replication for monitoring purposes)? The users can then\n> detect such standbys and later try to bring them back to the sync\n> standbys group or do other things. I believe that a production level\n> postgres HA with sync standbys will have monitoring to detect the\n> replication lag, failover decision etc via monitoring\n> pg_stat_replication. With this approach, a bit more monitoring is\n> needed. This solution requires less or no manual intervention and\n> scales well. Please note that I haven't studied the possibilities of\n> implementing it yet.\n> \n> Thoughts?\n\nYes, see above.\n\n> > Once we have that, we can consider removing the cancel ability while\n> > waiting for synchronous replicas (since we have the timeout) or make it\n> > optional. We can also consider how do notify the administrator during\n> > query cancel (if we allow it), backend abrupt exit/crash, and\n> \n> Yeah. If we have the\n> timeout-and-auto-removal-of-standby-from-sync-standbys-list solution,\n> the users can then choose to disable processing query cancels/proc\n> dies while waiting for sync replication in SyncRepWaitForLSN().\n\nYes. We might also change things so a query cancel that happens during \nsychronous replica waiting can only be done by an administrator, not the\nsession owner. 
Again, lots of design needed here.\n\n> > if we\n> > should allow users to specify a retry interval to resynchronize the\n> > synchronous replicas.\n> \n> This is another interesting thing to consider if we were to make the\n> auto-removed (by the above approach) standby a sync standby again\n> without manual intervention.\n\nYes, see above. You are addressing the right questions here. :-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 5 Oct 2022 17:00:03 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in synchronous\n replication"
},
{
"msg_contents": "On Thu, Oct 6, 2022 at 2:30 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> As I highlighted above, by default you notify the administrator that a\n> sychronous replica is not responding and then ignore it. If it becomes\n> responsive again, you notify the administrator again and add it back as\n> a sychronous replica.\n>\n> > command in any form may pose security risks. I'm not sure at this\n> > point how this new timeout is going to work alongside\n> > wal_sender_timeout.\n>\n> We have archive_command, so I don't see a problem with another shell\n> command.\n\nWhy do we need a new command to inform the admin/user about a sync\nreplication being ignored (from sync quorum) for not responding or\nacknowledging for a certain amount of time in SyncRepWaitForLSN()?\nCan't we just add an extra column or use existing sync_state in\npg_stat_replication()? We can either introduce a new state such as\ntemporary_async or just use the existing state 'potential' [1]. A\nproblem is that the server has to be monitored for this extra, new\nstate. If we do this, we don't need another command to report.\n\n> > I'm thinking about the possible options that an admin has to get out\n> > of this situation:\n> > 1) Removing the standby from synchronous_standby_names.\n>\n> Yes, see above. We might need a read-only GUC that reports which\n> sychronous replicas are active. As you can see, there is a lot of API\n> design required here, but this is the most effective approach.\n\nIf we use the above approach to report via pg_stat_replication(), we\ndon't need this.\n\n> > > Once we have that, we can consider removing the cancel ability while\n> > > waiting for synchronous replicas (since we have the timeout) or make it\n> > > optional. We can also consider how do notify the administrator during\n> > > query cancel (if we allow it), backend abrupt exit/crash, and\n> >\n> > Yeah. 
If we have the\n> > timeout-and-auto-removal-of-standby-from-sync-standbys-list solution,\n> > the users can then choose to disable processing query cancels/proc\n> > dies while waiting for sync replication in SyncRepWaitForLSN().\n>\n> Yes. We might also change things so a query cancel that happens during\n> sychronous replica waiting can only be done by an administrator, not the\n> session owner. Again, lots of design needed here.\n\nYes, we need infrastructure to track who issued the query cancel or\nproc die and so on. IMO, it's not a good way to allow/disallow query\ncancels or CTRL+C based on role types - superusers or users with\nreplication roles or users who are members of any of predefined roles.\n\nIn general, it is the walsender serving sync standby that has to mark\nitself as async standby by removing itself from\nsynchronous_standby_names, reloading config variables and waking up\nthe backends that are waiting in syncrep wait queue for it to update\nLSN.\n\nAnd, the new auto removal timeout should always be set to less than\nwal_sender_timeout.\n\nAll that said, imagine we have\ntimeout-and-auto-removal-of-standby-from-sync-standbys-list solution\nin one or the other forms with auto removal timeout set to 5 minutes,\nany of following can happen:\n\n1) query is stuck waiting for sync standby ack in SyncRepWaitForLSN(),\nno query cancel or proc die interrupt is arrived, the sync standby is\nmade as async standy after the timeout i.e. 5 minutes.\n2) query is stuck waiting for sync standby ack in SyncRepWaitForLSN(),\nsay for about 3 minutes, then query cancel or proc die interrupt is\narrived, should we immediately process it or wait for timeout to\nhappen (2 more minutes) and then process the interrupt? 
If we\nimmediately process the interrupts, then the\nlocally-committed-but-not-replicated-to-sync-standby problems\ndescribed upthread [2] are left unresolved.\n\n[1] https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-VIEW\nsync_state text\nSynchronous state of this standby server. Possible values are:\nasync: This standby server is asynchronous.\npotential: This standby server is now asynchronous, but can\npotentially become synchronous if one of current synchronous ones\nfails.\nsync: This standby server is synchronous.\nquorum: This standby server is considered as a candidate for quorum standbys.\n\n[2] https://www.postgresql.org/message-id/CALj2ACXmMWtpmuT-%3Dv8F%2BLk4QCbdkeN%2ByHKXeRGKFfjG96YbKA%40mail.gmail.com\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 6 Oct 2022 13:33:33 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Thu, Oct 6, 2022 at 01:33:33PM +0530, Bharath Rupireddy wrote:\n> On Thu, Oct 6, 2022 at 2:30 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > As I highlighted above, by default you notify the administrator that a\n> > sychronous replica is not responding and then ignore it. If it becomes\n> > responsive again, you notify the administrator again and add it back as\n> > a sychronous replica.\n> >\n> > > command in any form may pose security risks. I'm not sure at this\n> > > point how this new timeout is going to work alongside\n> > > wal_sender_timeout.\n> >\n> > We have archive_command, so I don't see a problem with another shell\n> > command.\n> \n> Why do we need a new command to inform the admin/user about a sync\n> replication being ignored (from sync quorum) for not responding or\n> acknowledging for a certain amount of time in SyncRepWaitForLSN()?\n> Can't we just add an extra column or use existing sync_state in\n> pg_stat_replication()? We can either introduce a new state such as\n> temporary_async or just use the existing state 'potential' [1]. A\n> problem is that the server has to be monitored for this extra, new\n> state. If we do this, we don't need another command to report.\n\nYes, that is a good point. I assumed people would want notification\nimmediately rather than waiting for monitoring to notice it. Consider\nif you monitor every five seconds but the primary loses sync and goes\ndown during that five-second interval --- there would be no way to know\nif sync stopped and reported committed transactions to the client before\nthe primary went down. I would love to just rely on monitoring but I am\nnot sure that is sufficient for this use-case.\n\nOf course, if email is being sent it might be still in the email queue\nwhen the primary goes down, but I guess if I was doing it I would make\nsure the email was delivered _before_ returning. 
The point is that we\nwould not disable the sync and acknowledge the commit to the client\nuntil the notification command returns success --- that kind of\nguarantee is hard to do with monitoring.\n\nThese are good discussions to have --- maybe I am wrong.\n\n> > > > Once we have that, we can consider removing the cancel ability while\n> > > > waiting for synchronous replicas (since we have the timeout) or make it\n> > > > optional. We can also consider how do notify the administrator during\n> > > > query cancel (if we allow it), backend abrupt exit/crash, and\n> > >\n> > > Yeah. If we have the\n> > > timeout-and-auto-removal-of-standby-from-sync-standbys-list solution,\n> > > the users can then choose to disable processing query cancels/proc\n> > > dies while waiting for sync replication in SyncRepWaitForLSN().\n> >\n> > Yes. We might also change things so a query cancel that happens during\n> > sychronous replica waiting can only be done by an administrator, not the\n> > session owner. Again, lots of design needed here.\n> \n> Yes, we need infrastructure to track who issued the query cancel or\n> proc die and so on. 
IMO, it's not a good way to allow/disallow query\n> cancels or CTRL+C based on role types - superusers or users with\n> replication roles or users who are members of any of predefined roles.\n> \n> In general, it is the walsender serving sync standby that has to mark\n> itself as async standby by removing itself from\n> synchronous_standby_names, reloading config variables and waking up\n> the backends that are waiting in syncrep wait queue for it to update\n> LSN.\n> \n> And, the new auto removal timeout should always be set to less than\n> wal_sender_timeout.\n> \n> All that said, imagine we have\n> timeout-and-auto-removal-of-standby-from-sync-standbys-list solution\n> in one or the other forms with auto removal timeout set to 5 minutes,\n> any of following can happen:\n> \n> 1) query is stuck waiting for sync standby ack in SyncRepWaitForLSN(),\n> no query cancel or proc die interrupt is arrived, the sync standby is\n> made as async standy after the timeout i.e. 5 minutes.\n> 2) query is stuck waiting for sync standby ack in SyncRepWaitForLSN(),\n> say for about 3 minutes, then query cancel or proc die interrupt is\n> arrived, should we immediately process it or wait for timeout to\n> happen (2 more minutes) and then process the interrupt? If we\n> immediately process the interrupts, then the\n> locally-committed-but-not-replicated-to-sync-standby problems\n> described upthread [2] are left unresolved.\n\nI have a feeling once we have the timeout, we would disable query cancel\nwhen we are in this stage since it is canceling a committed query. The\ntimeout would cancel the sync but at least the administrator would know.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 6 Oct 2022 11:42:28 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in synchronous\n replication"
},
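The guarantee Bruce argues for above — that the notification command must return success *before* the standby is ignored and the commit acknowledged — is an ordering constraint. A hedged sketch, where every callable is a hypothetical stand-in (the notification would really be a shell command defined in a GUC):

```python
def demote_unresponsive_standby(standby, notify_admin, drop_from_sync_set,
                                release_waiters):
    """Sketch of the ordering argued for above: the admin notification must
    succeed *before* the quorum is relaxed and waiting commits are released,
    so no client ever sees an acknowledged commit that the administrator was
    not told about.  All callables here are hypothetical stand-ins."""
    if not notify_admin(standby):
        return False              # notification failed: keep everyone waiting
    drop_from_sync_set(standby)   # only now is the standby ignored
    release_waiters()             # backends in the sync-rep queue proceed
    return True
```

The point of the sketch is purely the sequencing: swapping the first two steps reintroduces the monitoring hole Bruce describes, where the primary could die in the interval between relaxing the quorum and the admin finding out.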
{
"msg_contents": "On Thu, Sep 29, 2022 at 3:53 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> So, what happens when an insufficient number of synchronous replicas\n> reply?\n\nIt's a failover.\n\n> Sessions hang because the synchronous behavior cannot be\n> guaranteed. We then _allow_ query cancel so the user or administrator\n> can get out of the hung sessions and perhaps modify\n> synchronous_standby_names.\n\nAdministrators should not modify synchronous_standby_names.\nAdministrator must shoot this node in the head.\n\n> I have always felt this has to be done at the server level, meaning when\n> a synchronous_standby_names replica is not responding after a certain\n> timeout, the administrator must be notified by calling a shell command\n> defined in a GUC and all sessions will ignore the replica.\n\nStandbys are expelled from the waitlist according to quorum rules. I'd\npropose not to invent more quorum rules involving shell scripts.\nThe Administrator expressed what number of standbys can be offline by\nsetting synchronous_standby_names. They actively asked for hanging\nqueries in case of insufficient standbys.\n\nWe have reserved administrator connections for the case when all\nconnection slots are used by hanging queries.\n\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Tue, 8 Nov 2022 21:06:36 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
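The quorum rules Andrey refers to are the `ANY n (...)` / `FIRST n (...)` forms of synchronous_standby_names. They can be modelled roughly as below — a deliberate simplification, since real FIRST-n semantics track the priorities of *connected* standbys (with the rest shown as "potential" in pg_stat_replication), not just list order:

```python
def quorum_satisfied(policy, acked):
    """Rough sketch of quorum evaluation for a synchronous_standby_names-style
    rule.  policy is a hypothetical parsed form ("ANY" | "FIRST", n, members):
    ANY n is satisfied by acks from any n listed members; FIRST n is modelled
    here as requiring acks from the first n listed members specifically."""
    kind, n, members = policy
    if kind == "ANY":
        return len([m for m in members if m in acked]) >= n
    # FIRST (simplified): the n earliest-listed members must all have acked.
    return all(m in acked for m in members[:n])
```

This is the sense in which standbys are "expelled according to quorum rules": with `ANY 2 (s1, s2, s3)`, one unresponsive standby does not block commits at all, which is why Andrey argues the administrator has already expressed, via the setting itself, how many standbys may be offline.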
{
"msg_contents": "On Tue, Nov 8, 2022 at 9:06 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n>\n> On Thu, Sep 29, 2022 at 3:53 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > So, what happens when an insufficient number of synchronous replicas\n> > reply?\n>\n> It's a failover.\n>\n> > Sessions hang because the synchronous behavior cannot be\n> > guaranteed. We then _allow_ query cancel so the user or administrator\n> > can get out of the hung sessions and perhaps modify\n> > synchronous_standby_names.\n>\n> Administrators should not modify synchronous_standby_names.\n> Administrator must shoot this not in the head.\n>\n\nSome funny stuff. If a user tries to cancel a non-replicated transaction\nAzure Postgres will answer: \"user requested cancel while waiting for\nsynchronous replication ack. The COMMIT record has already flushed to\nWAL locally and might not have been replicatead to the standby. We\nmust wait here.\"\nAWS RDS will answer: \"ignoring request to cancel wait for synchronous\nreplication\"\nYandex Managed Postgres will answer: \"canceling wait for synchronous\nreplication due requested, but cancelation is not allowed. The\ntransaction has already committed locally and might not have been\nreplicated to the standby. We must wait here.\"\n\nSo, for many services providing Postgres as a service it's only a\nmatter of wording.\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Sun, 27 Nov 2022 11:26:50 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 12:57 AM Andrey Borodin <amborodin86@gmail.com> wrote:\n>\n> Some funny stuff. If a user tries to cancel a non-replicated transaction\n> Azure Postgres will answer: \"user requested cancel while waiting for\n> synchronous replication ack. The COMMIT record has already flushed to\n> WAL locally and might not have been replicatead to the standby. We\n> must wait here.\"\n> AWS RDS will answer: \"ignoring request to cancel wait for synchronous\n> replication\"\n> Yandex Managed Postgres will answer: \"canceling wait for synchronous\n> replication due requested, but cancelation is not allowed. The\n> transaction has already committed locally and might not have been\n> replicated to the standby. We must wait here.\"\n>\n> So, for many services providing Postgres as a service it's only a\n> matter of wording.\n\nThanks for verifying the behaviour. And many thanks for an off-list chat.\n\nFWIW, I'm planning to prepare a patch as per the below idea which is\nsomething similar to the initial proposal in this thread. Meanwhile,\nthoughts are welcome.\n\n1. Disable query cancel/CTRL+C/SIGINT when a backend is waiting for\nsync replication acknowledgement.\n2. Process proc die immediately when a backend is waiting for sync\nreplication acknowledgement, as it does today, however, upon restart,\ndon't open up for business (don't accept ready-only connections)\nunless the sync standbys have caught up.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 28 Nov 2022 12:03:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
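Point (2) of the plan above — after a restart, hold read-only connections until the sync standbys have caught up — amounts to an LSN comparison at connection-accept time. `accept_readonly_connections` is an illustrative sketch with plain integers standing in for LSNs:

```python
def accept_readonly_connections(local_flush_lsn, standby_flush_lsns,
                                quorum_size):
    """Sketch of the restart gate described above: read-only connections are
    admitted only once at least quorum_size sync standbys have flushed up to
    the local end of WAL, so no client can observe a locally-committed
    transaction that the standbys might still lose on failover.  LSNs are
    modelled as comparable integers for illustration."""
    caught_up = [lsn for lsn in standby_flush_lsns if lsn >= local_flush_lsn]
    return len(caught_up) >= quorum_size
```

The trade-off the later replies raise applies here too: gating *all* connections this way would also block monitoring queries, which is why allowing connections while blocking only ordinary reads is suggested as the friendlier variant.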
{
"msg_contents": "On Sun, Nov 27, 2022 at 11:26:50AM -0800, Andrey Borodin wrote:\n> Some funny stuff. If a user tries to cancel a non-replicated transaction\n> Azure Postgres will answer: \"user requested cancel while waiting for\n> synchronous replication ack. The COMMIT record has already flushed to\n> WAL locally and might not have been replicatead to the standby. We\n> must wait here.\"\n> AWS RDS will answer: \"ignoring request to cancel wait for synchronous\n> replication\"\n> Yandex Managed Postgres will answer: \"canceling wait for synchronous\n> replication due requested, but cancelation is not allowed. The\n> transaction has already committed locally and might not have been\n> replicated to the standby. We must wait here.\"\n> \n> So, for many services providing Postgres as a service it's only a\n> matter of wording.\n\nWow, you are telling me all three cloud vendors changed how query cancel\nbehaves on an unresponsive synchronous replica? That is certainly a\ntestament that the community needs to change or at least review our\nbehavior.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Mon, 28 Nov 2022 15:55:07 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in synchronous\n replication"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 12:03:06PM +0530, Bharath Rupireddy wrote:\n> Thanks for verifying the behaviour. And many thanks for an off-list chat.\n> \n> FWIW, I'm planning to prepare a patch as per the below idea which is\n> something similar to the initial proposal in this thread. Meanwhile,\n> thoughts are welcome.\n> \n> 1. Disable query cancel/CTRL+C/SIGINT when a backend is waiting for\n> sync replication acknowledgement.\n> 2. Process proc die immediately when a backend is waiting for sync\n> replication acknowledgement, as it does today, however, upon restart,\n> don't open up for business (don't accept ready-only connections)\n> unless the sync standbys have caught up.\n\nYou can prepare a patch, but it unlikely to get much interest until you\nget agreement on what the behavior should be. The optimal order of\ndeveloper actions is:\n\n\tDesirability -> Design -> Implement -> Test -> Review -> Commit\n\thttps://wiki.postgresql.org/wiki/Todo#Development_Process\n\nTelling us what other cloud vendors do is not sufficient.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Mon, 28 Nov 2022 15:59:41 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in synchronous\n replication"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 12:59 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> You can prepare a patch, but it unlikely to get much interest until you\n> get agreement on what the behavior should be.\n\nWe discussed the approach on 2020's Unconference [0]. And there kind\nof was an agreement.\nThen I made a presentation on FOSDEM with all the details [1].\nThe patch had been on commitfest since 2019 [2]. There were reviewers\nin the CF entry, and we kind of had an agreement.\nJeff Davis proposed a similar patch [3]. And we certainly agreed about cancels.\nAnd now Bharath is proposing the same.\n\nWe have the interest and agreement.\n\n\nBest regards, Andrey Borodin.\n\n[0] https://wiki.postgresql.org/wiki/PgCon_2020_Developer_Unconference/Edge_cases_of_synchronous_replication_in_HA_solutions\n[1] https://archive.fosdem.org/2021/schedule/event/postgresql_caveats_of_replication/attachments/slides/4365/export/events/attachments/postgresql_caveats_of_replication/slides/4365/sides.pdf\n[2] https://commitfest.postgresql.org/26/2402/\n[3] https://www.postgresql.org/message-id/flat/6a052e81060824a8286148b1165bafedbd7c86cd.camel%40j-davis.com#415dc2f7d41b8a251b419256407bb64d\n\n\n",
"msg_date": "Mon, 28 Nov 2022 13:31:39 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 01:31:39PM -0800, Andrey Borodin wrote:\n> On Mon, Nov 28, 2022 at 12:59 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > You can prepare a patch, but it unlikely to get much interest until you\n> > get agreement on what the behavior should be.\n> \n> We discussed the approach on 2020's Unconference [0]. And there kind\n> of was an agreement.\n> Then I made a presentation on FOSDEM with all the details [1].\n> The patch had been on commitfest since 2019 [2]. There were reviewers\n> in the CF entry, and we kind of had an agreement.\n> Jeff Davis proposed a similar patch [3]. And we certainly agreed about cancels.\n> And now Bharath is proposing the same.\n> \n> We have the interest and agreement.\n\nOkay, I was not aware we had such broad agreement.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Mon, 28 Nov 2022 16:53:10 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in synchronous\n replication"
},
{
"msg_contents": "On Sun, Nov 27, 2022 at 10:33 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Mon, Nov 28, 2022 at 12:57 AM Andrey Borodin <amborodin86@gmail.com>\n> wrote:\n> >\n> > Some funny stuff. If a user tries to cancel a non-replicated transaction\n> > Azure Postgres will answer: \"user requested cancel while waiting for\n> > synchronous replication ack. The COMMIT record has already flushed to\n> > WAL locally and might not have been replicatead to the standby. We\n> > must wait here.\"\n> > AWS RDS will answer: \"ignoring request to cancel wait for synchronous\n> > replication\"\n> > Yandex Managed Postgres will answer: \"canceling wait for synchronous\n> > replication due requested, but cancelation is not allowed. The\n> > transaction has already committed locally and might not have been\n> > replicated to the standby. We must wait here.\"\n> >\n> > So, for many services providing Postgres as a service it's only a\n> > matter of wording.\n>\n> Thanks for verifying the behaviour. And many thanks for an off-list chat.\n>\n> FWIW, I'm planning to prepare a patch as per the below idea which is\n> something similar to the initial proposal in this thread. Meanwhile,\n> thoughts are welcome.\n>\n> 1. Disable query cancel/CTRL+C/SIGINT when a backend is waiting for\n> sync replication acknowledgement.\n>\n\n+1\n\n\n> 2. Process proc die immediately when a backend is waiting for sync\n> replication acknowledgement, as it does today, however, upon restart,\n> don't open up for business (don't accept ready-only connections)\n> unless the sync standbys have caught up.\n>\n\nAre you planning to block connections or queries to the database? It would\nbe good to allow connections and let them query the monitoring views but\nblock the queries until sync standby have caught up. Otherwise, this leaves\na monitoring hole. In cloud, I presume superusers are allowed to connect\nand monitor (end customers are not the role members and can't query the\ndata). The same can't be true for all the installations. Could you please\nadd more details on your approach?\n\n\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n",
"msg_date": "Tue, 29 Nov 2022 08:14:10 -0800",
"msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Tue, Nov 29, 2022 at 08:14:10AM -0800, SATYANARAYANA NARLAPURAM wrote:\n> 2. Process proc die immediately when a backend is waiting for sync\n> replication acknowledgement, as it does today, however, upon restart,\n> don't open up for business (don't accept ready-only connections)\n> unless the sync standbys have caught up.\n> \n> \n> Are you planning to block connections or queries to the database? It would be\n> good to allow connections and let them query the monitoring views but block the\n> queries until sync standby have caught up. Otherwise, this leaves a monitoring\n> hole. In cloud, I presume superusers are allowed to connect and monitor (end\n> customers are not the role members and can't query the data). The same can't be\n> true for all the installations. Could you please add more details on your\n> approach?\n\nI think ALTER SYSTEM should be allowed, particularly so you can modify\nsynchronous_standby_names, no?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Tue, 29 Nov 2022 11:29:13 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in synchronous\n replication"
},
{
"msg_contents": "On Tue, Nov 29, 2022 at 8:29 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Nov 29, 2022 at 08:14:10AM -0800, SATYANARAYANA NARLAPURAM wrote:\n> > 2. Process proc die immediately when a backend is waiting for sync\n> > replication acknowledgement, as it does today, however, upon restart,\n> > don't open up for business (don't accept ready-only connections)\n> > unless the sync standbys have caught up.\n> >\n> >\n> > Are you planning to block connections or queries to the database? It\n> would be\n> > good to allow connections and let them query the monitoring views but\n> block the\n> > queries until sync standby have caught up. Otherwise, this leaves a\n> monitoring\n> > hole. In cloud, I presume superusers are allowed to connect and monitor\n> (end\n> > customers are not the role members and can't query the data). The same\n> can't be\n> > true for all the installations. Could you please add more details on your\n> > approach?\n>\n> I think ALTER SYSTEM should be allowed, particularly so you can modify\n> synchronous_standby_names, no?\n\n\nYes, Change in synchronous_standby_names is expected in this situation.\nIMHO, blocking all the connections is not a recommended approach.\n\n\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Embrace your flaws. They make you human, rather than perfect,\n> which you will never be.\n>\n",
"msg_date": "Tue, 29 Nov 2022 08:42:01 -0800",
"msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Tue, Nov 29, 2022 at 8:42 AM SATYANARAYANA NARLAPURAM <\nsatyanarlapuram@gmail.com> wrote:\n\n>\n>\n> On Tue, Nov 29, 2022 at 8:29 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n>> On Tue, Nov 29, 2022 at 08:14:10AM -0800, SATYANARAYANA NARLAPURAM wrote:\n>> > 2. Process proc die immediately when a backend is waiting for sync\n>> > replication acknowledgement, as it does today, however, upon\n>> restart,\n>> > don't open up for business (don't accept ready-only connections)\n>> > unless the sync standbys have caught up.\n>> >\n>> >\n>> > Are you planning to block connections or queries to the database? It\n>> would be\n>> > good to allow connections and let them query the monitoring views but\n>> block the\n>> > queries until sync standby have caught up. Otherwise, this leaves a\n>> monitoring\n>> > hole. In cloud, I presume superusers are allowed to connect and monitor\n>> (end\n>> > customers are not the role members and can't query the data). The same\n>> can't be\n>> > true for all the installations. Could you please add more details on\n>> your\n>> > approach?\n>>\n>> I think ALTER SYSTEM should be allowed, particularly so you can modify\n>> synchronous_standby_names, no?\n>\n>\n> Yes, Change in synchronous_standby_names is expected in this situation.\n> IMHO, blocking all the connections is not a recommended approach.\n>\n\nHow about allowing superusers (they can still read locally committed data)\nand users part of pg_monitor role?\n\n\n>\n>>\n>> --\n>> Bruce Momjian <bruce@momjian.us> https://momjian.us\n>> EDB https://enterprisedb.com\n>>\n>> Embrace your flaws. They make you human, rather than perfect,\n>> which you will never be.\n>>\n>\n",
"msg_date": "Tue, 29 Nov 2022 09:15:21 -0800",
"msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Tue, Nov 29, 2022 at 8:29 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, Nov 29, 2022 at 08:14:10AM -0800, SATYANARAYANA NARLAPURAM wrote:\n> > 2. Process proc die immediately when a backend is waiting for sync\n> > replication acknowledgement, as it does today, however, upon restart,\n> > don't open up for business (don't accept ready-only connections)\n> > unless the sync standbys have caught up.\n> >\n> >\n> > Are you planning to block connections or queries to the database? It would be\n> > good to allow connections and let them query the monitoring views but block the\n> > queries until sync standby have caught up. Otherwise, this leaves a monitoring\n> > hole. In cloud, I presume superusers are allowed to connect and monitor (end\n> > customers are not the role members and can't query the data). The same can't be\n> > true for all the installations. Could you please add more details on your\n> > approach?\n>\n> I think ALTER SYSTEM should be allowed, particularly so you can modify\n> synchronous_standby_names, no?\n\nWe don't allow SQL access during crash recovery until it's caught up\nto consistency point. And that's for a reason - the cluster may have\ninvalid system catalog.\nSo no, after crash without a quorum of standbys you can only change\nauto.conf and send SIGHUP. Accessing the system catalog during crash\nrecovery is another unrelated problem.\n\nBut I'd propose to treat these two points differently, they possess\ndrastically different scales of danger. Query Cancels are issued here\nand there during failovers\\switchovers. Crash amidst network\npartitioning is not that common.\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Tue, 29 Nov 2022 10:52:24 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Tue, Nov 29, 2022 at 10:52 AM Andrey Borodin <amborodin86@gmail.com>\nwrote:\n\n> On Tue, Nov 29, 2022 at 8:29 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Tue, Nov 29, 2022 at 08:14:10AM -0800, SATYANARAYANA NARLAPURAM wrote:\n> > > 2. Process proc die immediately when a backend is waiting for sync\n> > > replication acknowledgement, as it does today, however, upon\n> restart,\n> > > don't open up for business (don't accept ready-only connections)\n> > > unless the sync standbys have caught up.\n> > >\n> > >\n> > > Are you planning to block connections or queries to the database? It\n> would be\n> > > good to allow connections and let them query the monitoring views but\n> block the\n> > > queries until sync standby have caught up. Otherwise, this leaves a\n> monitoring\n> > > hole. In cloud, I presume superusers are allowed to connect and\n> monitor (end\n> > > customers are not the role members and can't query the data). The same\n> can't be\n> > > true for all the installations. Could you please add more details on\n> your\n> > > approach?\n> >\n> > I think ALTER SYSTEM should be allowed, particularly so you can modify\n> > synchronous_standby_names, no?\n>\n> We don't allow SQL access during crash recovery until it's caught up\n> to consistency point. And that's for a reason - the cluster may have\n> invalid system catalog.\n> So no, after crash without a quorum of standbys you can only change\n> auto.conf and send SIGHUP. Accessing the system catalog during crash\n> recovery is another unrelated problem.\n>\n\nIn the crash recovery case, catalog is inconsistent but in this case, the\ncluster has remote uncommitted changes (consistent). Accepting a superuser\nconnection is no harm. The auth checks performed are still valid after\nstandbys fully caught up. I don't see a reason why superuser / pg_monitor\nconnections are required to be blocked.\n\n\n> But I'd propose to treat these two points differently, they possess\n> drastically different scales of danger. Query Cancels are issued here\n> and there during failovers\\switchovers. Crash amidst network\n> partitioning is not that common.\n>\n\nSupportability and operability are more important in corner cases to\nquickly troubleshoot an issue,\n\n\n>\n> Best regards, Andrey Borodin.\n>\n",
"msg_date": "Tue, 29 Nov 2022 11:20:19 -0800",
"msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Tue, Nov 29, 2022 at 11:20 AM SATYANARAYANA NARLAPURAM <\nsatyanarlapuram@gmail.com> wrote:\n\n>\n>\n> On Tue, Nov 29, 2022 at 10:52 AM Andrey Borodin <amborodin86@gmail.com>\n> wrote:\n>\n>> On Tue, Nov 29, 2022 at 8:29 AM Bruce Momjian <bruce@momjian.us> wrote:\n>> >\n>> > On Tue, Nov 29, 2022 at 08:14:10AM -0800, SATYANARAYANA NARLAPURAM\n>> wrote:\n>> > > 2. Process proc die immediately when a backend is waiting for sync\n>> > > replication acknowledgement, as it does today, however, upon\n>> restart,\n>> > > don't open up for business (don't accept ready-only connections)\n>> > > unless the sync standbys have caught up.\n>> > >\n>> > >\n>> > > Are you planning to block connections or queries to the database? It\n>> would be\n>> > > good to allow connections and let them query the monitoring views but\n>> block the\n>> > > queries until sync standby have caught up. Otherwise, this leaves a\n>> monitoring\n>> > > hole. In cloud, I presume superusers are allowed to connect and\n>> monitor (end\n>> > > customers are not the role members and can't query the data). The\n>> same can't be\n>> > > true for all the installations. Could you please add more details on\n>> your\n>> > > approach?\n>> >\n>> > I think ALTER SYSTEM should be allowed, particularly so you can modify\n>> > synchronous_standby_names, no?\n>>\n>> We don't allow SQL access during crash recovery until it's caught up\n>> to consistency point. And that's for a reason - the cluster may have\n>> invalid system catalog.\n>> So no, after crash without a quorum of standbys you can only change\n>> auto.conf and send SIGHUP. Accessing the system catalog during crash\n>> recovery is another unrelated problem.\n>>\n>\n> In the crash recovery case, catalog is inconsistent but in this case, the\n> cluster has remote uncommitted changes (consistent). Accepting a superuser\n> connection is no harm. The auth checks performed are still valid after\n> standbys fully caught up. I don't see a reason why superuser / pg_monitor\n> connections are required to be blocked.\n>\n\nIf blocking queries is harder, and superuser is not allowed to connect as\nit can read remote uncommitted data, how about adding a new role that can\nupdate and reload the server configuration?\n\n>\n>\n>> But I'd propose to treat these two points differently, they possess\n>> drastically different scales of danger. Query Cancels are issued here\n>> and there during failovers\\switchovers. Crash amidst network\n>> partitioning is not that common.\n>>\n>\n> Supportability and operability are more important in corner cases to\n> quickly troubleshoot an issue,\n>\n>\n>>\n>> Best regards, Andrey Borodin.\n>>\n>\n",
"msg_date": "Tue, 29 Nov 2022 11:37:35 -0800",
"msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "On Tue, Nov 29, 2022 at 10:45 PM SATYANARAYANA NARLAPURAM\n<satyanarlapuram@gmail.com> wrote:\n>\n> On Tue, Nov 29, 2022 at 8:42 AM SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote:\n>>\n>> On Tue, Nov 29, 2022 at 8:29 AM Bruce Momjian <bruce@momjian.us> wrote:\n>>>\n>>> On Tue, Nov 29, 2022 at 08:14:10AM -0800, SATYANARAYANA NARLAPURAM wrote:\n>>> > 2. Process proc die immediately when a backend is waiting for sync\n>>> > replication acknowledgement, as it does today, however, upon restart,\n>>> > don't open up for business (don't accept ready-only connections)\n>>> > unless the sync standbys have caught up.\n>>> >\n>>> > Are you planning to block connections or queries to the database? It would be\n>>> > good to allow connections and let them query the monitoring views but block the\n>>> > queries until sync standby have caught up. Otherwise, this leaves a monitoring\n>>> > hole. In cloud, I presume superusers are allowed to connect and monitor (end\n>>> > customers are not the role members and can't query the data). The same can't be\n>>> > true for all the installations. Could you please add more details on your\n>>> > approach?\n>>>\n>>> I think ALTER SYSTEM should be allowed, particularly so you can modify\n>>> synchronous_standby_names, no?\n>>\n>> Yes, Change in synchronous_standby_names is expected in this situation. IMHO, blocking all the connections is not a recommended approach.\n>\n> How about allowing superusers (they can still read locally committed data) and users part of pg_monitor role?\n\nI started to spend time on this feature again. Thanks all for your\ncomments so far.\n\nPer latest comments, it looks like we're mostly okay to emit a warning\nand ignore query cancel interrupts while waiting for sync replication\nACK.\n\nFor proc die, it looks like the suggestion was to process it\nimmediately and upon next restart, don't allow user connections unless\nall sync standbys were caught up. 
However, we need to be able to allow\nreplication connections from standbys so that they'll be able to\nstream the needed WAL and catch up with primary, allow superuser or\nusers with pg_monitor role to connect to perform ALTER SYSTEM to\nremove the unresponsive sync standbys if any from the list or disable\nsync replication altogether or monitor for flush lsn/catch up status.\nAnd block all other connections. Note that replication, superuser and\nusers with pg_monitor role connections are allowed only after the\nserver reaches a consistent state not before that to not read any\ninconsistent data.\n\nThe trickiest part of doing the above is how we detect upon restart\nthat the server received proc die while waiting for sync replication\nACK. One idea might be to set a flag in the control file before the\ncrash. Second idea might be to write a marker file (although I don't\nfavor this idea); presence indicates that the server was waiting for\nsync replication ACK before the crash. However, we may not detect all\nsorts of crashes in a backend when it is waiting for sync replication\nACK to do any of these two ideas. Therefore, this may not be a\ncomplete solution.\n\nThird idea might be to just let the primary wait for sync standbys to\ncatch up upon restart irrespective of whether it was crashed or not\nwhile waiting for sync replication ACK. While this idea works well\nwithout having to detect all sorts of crashes, the primary may not\ncome up if any unresponsive standbys are present (currently, the\nprimary continues to be operational for read-only queries at least\nirrespective of whether sync standbys have caught up or not).\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 30 Jan 2023 11:25:23 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions\n in synchronous replication"
},
{
"msg_contents": "> On 30 Jan 2023, at 06:55, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n\n> I started to spend time on this feature again. Thanks all for your\n> comments so far.\n\nSince there hasn't been any updates for the past six months, and the patch\nhasn't applied for a few months, I am marking this returned with feedback for\nnow. Please feel free to open a new entry in a future CF for this patch when\nthere is a new version.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 2 Aug 2023 21:47:51 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: An attempt to avoid\n locally-committed-but-not-replicated-to-standby-transactions in synchronous\n replication"
}
] |
[
{
"msg_contents": "Hi,\n\nBoth the location and name of the linked to section make no sense to me:\n\nhttps://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBOBJECT\n\nNeither of the tables listed there manage (cause to change) anything. They\nare pure informational functions - size and path of objects respectively.\nIt belongs in the previous chapter \"System Information Functions and\nOperators\" with a different name.\n\nDavid J.\n",
"msg_date": "Mon, 25 Apr 2022 08:33:47 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Move Section 9.27.7 (Data Object Management Functions) to System\n Information Chapter"
},
{
"msg_contents": "On Mon, Apr 25, 2022 at 08:33:47AM -0700, David G. Johnston wrote:\n> Hi,\n> \n> Both the location and name of the linked to section make no sense to me:\n> \n> https://www.postgresql.org/docs/current/functions-admin.html#\n> FUNCTIONS-ADMIN-DBOBJECT\n> \n> Neither of the tables listed there manage (cause to change) anything. They are\n> pure informational functions - size and path of objects respectively. It\n> belongs in the previous chapter \"System Information Functions and Operators\"\n> with a different name.\n\nSo, the section title is:\n\n\t9.27.7. Database Object Management Functions\n\nI think the idea is that they _help_ to manage database objects by\nreporting their size or location. I do think it is in the right\nchapter, but maybe needs a better title? I can't think of one.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 14 Jul 2022 18:43:24 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Move Section 9.27.7 (Data Object Management Functions) to System\n Information Chapter"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Mon, Apr 25, 2022 at 08:33:47AM -0700, David G. Johnston wrote:\n>> Both the location and name of the linked to section make no sense to me:\n>> https://www.postgresql.org/docs/current/functions-admin.html#\n>> FUNCTIONS-ADMIN-DBOBJECT\n>> Neither of the tables listed there manage (cause to change) anything. They are\n>> pure informational functions - size and path of objects respectively. It\n>> belongs in the previous chapter \"System Information Functions and Operators\"\n>> with a different name.\n\n> So, the section title is:\n> \t9.27.7. Database Object Management Functions\n> I think the idea is that they _help_ to manage database objects by\n> reporting their size or location. I do think it is in the right\n> chapter, but maybe needs a better title? I can't think of one.\n\nI'm hesitant to move functions to a different documentation page\nwithout a really solid reason. Just a couple days ago I fielded a\ncomplaint from somebody who couldn't find string_to_array anymore\nbecause we'd moved it from \"array functions\" to \"string functions\".\n\nI'd be the first to say that the division between 9.26 and 9.27 is\npretty arbitrary ... but without a clearer demarcation rule,\nmoving functions between the two pages seems more likely to\nadd confusion than subtract it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Jul 2022 18:57:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move Section 9.27.7 (Data Object Management Functions) to System\n Information Chapter"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 3:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Mon, Apr 25, 2022 at 08:33:47AM -0700, David G. Johnston wrote:\n> >> Both the location and name of the linked to section make no sense to me:\n> >> https://www.postgresql.org/docs/current/functions-admin.html#\n> >> FUNCTIONS-ADMIN-DBOBJECT\n> >> Neither of the tables listed there manage (cause to change) anything.\n> They are\n> >> pure informational functions - size and path of objects respectively.\n> It\n> >> belongs in the previous chapter \"System Information Functions and\n> Operators\"\n> >> with a different name.\n>\n> > So, the section title is:\n> > 9.27.7. Database Object Management Functions\n> > I think the idea is that they _help_ to manage database objects by\n> > reporting their size or location. I do think it is in the right\n> > chapter, but maybe needs a better title? I can't think of one.\n>\n> I'm hesitant to move functions to a different documentation page\n> without a really solid reason. Just a couple days ago I fielded a\n> complaint from somebody who couldn't find string_to_array anymore\n> because we'd moved it from \"array functions\" to \"string functions\".\n>\n> I'd be the first to say that the division between 9.26 and 9.27 is\n> pretty arbitrary ... but without a clearer demarcation rule,\n> moving functions between the two pages seems more likely to\n> add confusion than subtract it.\n>\n\nI'm not going to fight the prevailing winds on this one, much...but I've\nprobably been sitting on this annoyance for years since I use the ToC to\nfind stuff fairly quickly in the docs. This seems much more clear to me\nthan deciding whether a function that converts a string\ninto an array belongs in the string chapter or the array chapter.\n\nOn a related note, why itemize 9.27 in the table of contents but not 9.26?\n\nI would ask that we at least rename it to:\n\nDisk Usage Functions\n\nSince this would show in the ToC, the name of the functions that\nallow one to compute disk usage (which is a question I probably see once a\nyear, and what motivates this request) would be more likely to be found\nwithout skimming the entire 9.26 chapter (since I cannot see those table\nheadings in the ToC), not finding it, and then stumbling upon it in a\ntable that only deals with sizes but whose headers say nothing about sizes.\n\nDavid J.",
"msg_date": "Fri, 15 Jul 2022 12:36:38 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Move Section 9.27.7 (Data Object Management Functions) to System\n Information Chapter"
},
{
"msg_contents": "On Fri, Jul 15, 2022 at 12:36 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n>\n> I would ask that we at least rename it to:\n>\n> Disk Usage Functions\n>\n>\nNevermind...I identified the scope of that header incorrectly and the\nrename wouldn't be appropriate for the other tables in that section.\n\nDavid J.",
"msg_date": "Sat, 16 Jul 2022 16:50:28 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Move Section 9.27.7 (Data Object Management Functions) to System\n Information Chapter"
}
]
[
{
"msg_contents": "I just noticed that publishing tables on multiple publications with\ndifferent row filters and column lists has somewhat surprising behavior.\nTo wit: if a column is published in any row-filtered publication, then\nthe values for that column are sent to the subscriber even for rows that\ndon't match the row filter, as long as the row matches the row filter\nfor any other publication, even if that other publication doesn't\ninclude the column.\n\nHere's an example.\n\nPublisher:\n\ncreate table uno (a int primary key, b int, c int);\ncreate publication uno for table uno (a, b) where (a > 0);\ncreate publication dos for table uno (a, c) where (a < 0);\n\nHere, we specify: publish columns a,b for rows with positive a, and\npublish columns a,c for rows with negative a.\n\nWhat happened next will surprise you! Well, maybe not. On subscriber:\n\ncreate table uno (a int primary key, b int, c int);\ncreate subscription sub_uno connection 'port=55432 dbname=alvherre' publication uno,dos;\n\nPublisher:\ninsert into uno values (1, 2, 3), (-1, 3, 4);\n\nPublication 'uno' only has columns a and b, so row with a=1 should not\nhave value c=3. And publication 'dos' only has columns a and c, so row\nwith a=-1 should not have value b=3. But, on subscriber:\n\ntable uno;\n a │ b │ c \n────┼───┼───\n 1 │ 2 │ 3\n -1 │ 3 │ 4\n\nq.e.d.\n\nI think results from a too simplistic view on how to mix multiple\npublications with row filters and column lists. IIRC we are saying \"if\ncolumn X appears in *any* publication, then the value is published\",\nperiod, and don't stop to evaluate the row filter corresponding to each\nof those publications. \n\nThe desired result on subscriber is:\n\ntable uno;\n a │ b │ c \n────┼───┼───\n 1 │ 2 │\n -1 │ │ 4\n\n\nThoughts?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 25 Apr 2022 17:48:18 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 4/25/22 17:48, Alvaro Herrera wrote:\n> I just noticed that publishing tables on multiple publications with\n> different row filters and column lists has somewhat surprising behavior.\n> To wit: if a column is published in any row-filtered publication, then\n> the values for that column are sent to the subscriber even for rows that\n> don't match the row filter, as long as the row matches the row filter\n> for any other publication, even if that other publication doesn't\n> include the column.\n> \n> Here's an example.\n> \n> Publisher:\n> \n> create table uno (a int primary key, b int, c int);\n> create publication uno for table uno (a, b) where (a > 0);\n> create publication dos for table uno (a, c) where (a < 0);\n> \n> Here, we specify: publish columns a,b for rows with positive a, and\n> publish columns a,c for rows with negative a.\n> \n> What happened next will surprise you! Well, maybe not. On subscriber:\n> \n> create table uno (a int primary key, b int, c int);\n> create subscription sub_uno connection 'port=55432 dbname=alvherre' publication uno,dos;\n> \n> Publisher:\n> insert into uno values (1, 2, 3), (-1, 3, 4);\n> \n> Publication 'uno' only has columns a and b, so row with a=1 should not\n> have value c=3. And publication 'dos' only has columns a and c, so row\n> with a=-1 should not have value b=3. But, on subscriber:\n> \n> table uno;\n> a │ b │ c \n> ────┼───┼───\n> 1 │ 2 │ 3\n> -1 │ 3 │ 4\n> \n> q.e.d.\n> \n> I think results from a too simplistic view on how to mix multiple\n> publications with row filters and column lists. IIRC we are saying \"if\n> column X appears in *any* publication, then the value is published\",\n> period, and don't stop to evaluate the row filter corresponding to each\n> of those publications. \n> \n\nRight.\n\n> The desired result on subscriber is:\n> \n> table uno;\n> a │ b │ c \n> ────┼───┼───\n> 1 │ 2 │\n> -1 │ │ 4\n> \n> \n> Thoughts?\n> \n\nI'm not quite sure which of the two behaviors is more \"desirable\". In a\nway, it's somewhat similar to publish_as_relid, which is also calculated\nnot considering which of the row filters match?\n\nBut maybe you're right and it should behave the way you propose ... the\nexample I have in mind is a use case replicating table with two types of\nrows - sensitive and non-sensitive. For sensitive, we replicate only\nsome of the columns, for non-sensitive we replicate everything. Which\ncould be implemented as two publications\n\ncreate publication sensitive_rows\n for table t (a, b) where (is_sensitive);\n\ncreate publication non_sensitive_rows\n for table t where (not is_sensitive);\n\nBut the way it's implemented now, we'll always replicate all columns,\nbecause the second publication has no column list.\n\nChanging this to behave the way you expect would be quite difficult,\nbecause at the moment we build a single OR expression from all the row\nfilters. We'd have to keep the individual expressions, so that we can\nbuild a column list for each of them (in order to ignore those that\ndon't match).\n\nWe'd have to remove various other optimizations - for example we can't\njust discard row filters if we found \"no_filter\" publication. Or more\nprecisely, we'd have to consider column lists too.\n\nIn other words, we'd have to merge pgoutput_column_list_init into\npgoutput_row_filter_init, and then modify pgoutput_row_filter to\nevaluate the row filters one by one, and build the column list.\n\nI can take a stab at it, but it seems strange to not apply the same\nlogic to evaluation of publish_as_relid. I wonder what Amit thinks about\nthis, as he wrote the row filter stuff.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 26 Apr 2022 00:30:21 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 4:00 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 4/25/22 17:48, Alvaro Herrera wrote:\n>\n> > The desired result on subscriber is:\n> >\n> > table uno;\n> > a │ b │ c\n> > ────┼───┼───\n> > 1 │ 2 │\n> > -1 │ │ 4\n> >\n> >\n> > Thoughts?\n> >\n>\n> I'm not quite sure which of the two behaviors is more \"desirable\". In a\n> way, it's somewhat similar to publish_as_relid, which is also calculated\n> not considering which of the row filters match?\n>\n\nRight, or in other words, we check all publications to decide it and\nsimilar is the case for publication actions which are also computed\nindependently for all publications.\n\n> But maybe you're right and it should behave the way you propose ... the\n> example I have in mind is a use case replicating table with two types of\n> rows - sensitive and non-sensitive. For sensitive, we replicate only\n> some of the columns, for non-sensitive we replicate everything. Which\n> could be implemented as two publications\n>\n> create publication sensitive_rows\n> for table t (a, b) where (is_sensitive);\n>\n> create publication non_sensitive_rows\n> for table t where (not is_sensitive);\n>\n> But the way it's implemented now, we'll always replicate all columns,\n> because the second publication has no column list.\n>\n> Changing this to behave the way you expect would be quite difficult,\n> because at the moment we build a single OR expression from all the row\n> filters. We'd have to keep the individual expressions, so that we can\n> build a column list for each of them (in order to ignore those that\n> don't match).\n>\n> We'd have to remove various other optimizations - for example we can't\n> just discard row filters if we found \"no_filter\" publication.\n>\n\nI don't think that is the right way. We need some way to combine\nexpressions and I feel the current behavior is sane. I mean to say\nthat even if there is one publication that has no filter (column/row),\nwe should publish all rows with all columns. Now, as mentioned above\ncombining row filters or column lists for all publications appears to\nbe consistent with what we already do and seems correct behavior to\nme.\n\nTo me, it appears that the method used to decide whether a particular\ntable is published or not is also similar to what we do for row\nfilters or column lists. Even if there is one publication that\npublishes all tables, we consider the current table to be published\nirrespective of whether other publications have published that table\nor not.\n\n> Or more\n> precisely, we'd have to consider column lists too.\n>\n> In other words, we'd have to merge pgoutput_column_list_init into\n> pgoutput_row_filter_init, and then modify pgoutput_row_filter to\n> evaluate the row filters one by one, and build the column list.\n>\n\nHmm, I think even if we want to do something here, we also need to\nthink about how to achieve similar behavior for initial tablesync\nwhich will be more tricky.\n\n> I can take a stab at it, but it seems strange to not apply the same\n> logic to evaluation of publish_as_relid.\n>\n\nYeah, the current behavior seems to be consistent with what we already do.\n\n> I wonder what Amit thinks about\n> this, as he wrote the row filter stuff.\n>\n\nI feel we can explain a bit more about this in docs. We already have\nsome explanation of how row filters are combined [1]. We can probably\nadd a few examples for column lists.\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication-row-filter.html#LOGICAL-REPLICATION-ROW-FILTER-COMBINING\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 27 Apr 2022 10:25:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Wed, Apr 27, 2022 at 10:25:50AM +0530, Amit Kapila wrote:\n> I feel we can explain a bit more about this in docs. We already have\n> some explanation of how row filters are combined [1]. We can probably\n> add a few examples for column lists.\n\nI am not completely sure exactly what we should do here, but this\nstuff needs to be at least discussed. I have added an open item.\n--\nMichael",
"msg_date": "Wed, 27 Apr 2022 15:12:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 2022-Apr-26, Tomas Vondra wrote:\n\n> I'm not quite sure which of the two behaviors is more \"desirable\". In a\n> way, it's somewhat similar to publish_as_relid, which is also calculated\n> not considering which of the row filters match?\n\nI grepped doc/src/sgml for `publish_as_relid` and found no hits, so\nI suppose it's not a user-visible feature as such.\n\n> But maybe you're right and it should behave the way you propose ... the\n> example I have in mind is a use case replicating table with two types of\n> rows - sensitive and non-sensitive. For sensitive, we replicate only\n> some of the columns, for non-sensitive we replicate everything.\n\nExactly. If we blindly publish row/column values that aren't in *any*\npublications, this may lead to leaking protected values.\n\n> Changing this to behave the way you expect would be quite difficult,\n> because at the moment we build a single OR expression from all the row\n> filters. We'd have to keep the individual expressions, so that we can\n> build a column list for each of them (in order to ignore those that\n> don't match).\n\nI think we should do that, yeah.\n\n> I can take a stab at it, but it seems strange to not apply the same\n> logic to evaluation of publish_as_relid. I wonder what Amit thinks about\n> this, as he wrote the row filter stuff.\n\nBy grepping publicationcmds.c, it seems that publish_as_relid refers to\nthe ancestor partitioned table that is used for column list and\nrowfilter determination, when a partition is being published as part of\nit. I don't think these things are exactly parallel.\n\n... In fact I think they are quite orthogonal: probably you should be\nable to publish a partitioned table in two publications, with different\nrowfilters and different column lists (which can come from the\ntopmost partitioned table), and each partition should still work in the\nway I describe above.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"[PostgreSQL] is a great group; in my opinion it is THE best open source\ndevelopment communities in existence anywhere.\" (Lamar Owen)",
"msg_date": "Wed, 27 Apr 2022 11:43:16 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 2022-Apr-27, Amit Kapila wrote:\n\n> On Tue, Apr 26, 2022 at 4:00 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n\n> > I can take a stab at it, but it seems strange to not apply the same\n> > logic to evaluation of publish_as_relid.\n> \n> Yeah, the current behavior seems to be consistent with what we already\n> do.\n\nSorry, this argument makes no sense to me. The combination of both\nfeatures is not consistent, and both features are new.\n'publish_as_relid' is an implementation detail. If the implementation\nfails to follow the feature design, then the implementation must be\nfixed ... not the design!\n\n\nIMO, we should first determine how we want row filters and column lists\nto work when used in conjunction -- for relations (sets of rows) in a\ngeneral sense. After we have done that, then we can use that design to\ndrive how we want partitioned tables to be handled for it. Keep in mind\nthat when users see a partitioned table, what they first see is a table.\nThey want all their tables to work in pretty much the same way --\npartitioned or not partitioned. The fact that a table is partitioned\nshould affect as little as possible the way it interacts with other\nfeatures.\n\n\nNow, another possibility is to say \"naah, this is too hard\", or even\n\"naah, there's no time to write all that for this release\". That might\nbe okay, but in that case let's add an implementation restriction to\nensure that we don't paint ourselves in a corner regarding what is\nreasonable behavior. For example, an easy restriction might be: if a\ntable is in multiple publications with mismatching row filters/column\nlists, then a subscriber is not allowed to subscribe to both\npublications. (Maybe this restriction isn't exactly what we need so\nthat it actually implements what we need, not sure). Then, if/when in\nthe future we implement this correctly, we can lift the restriction.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"The conclusion we can draw from these studies is that\nwe can draw no conclusion from them\" (Tanenbaum)",
"msg_date": "Wed, 27 Apr 2022 11:53:50 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Wednesday, April 27, 2022 12:56 PM From: Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Apr 26, 2022 at 4:00 AM Tomas Vondra\r\n> <tomas.vondra@enterprisedb.com> wrote:\r\n> >\r\n> > On 4/25/22 17:48, Alvaro Herrera wrote:\r\n> >\r\n> > > The desired result on subscriber is:\r\n> > >\r\n> > > table uno;\r\n> > > a │ b │ c\r\n> > > ────┼───┼───\r\n> > > 1 │ 2 │\r\n> > > -1 │ │ 4\r\n> > >\r\n> > >\r\n> > > Thoughts?\r\n> > >\r\n> >\r\n> > I'm not quite sure which of the two behaviors is more \"desirable\". In a\r\n> > way, it's somewhat similar to publish_as_relid, which is also calculated\r\n> > not considering which of the row filters match?\r\n> >\r\n> \r\n> Right, or in other words, we check all publications to decide it and\r\n> similar is the case for publication actions which are also computed\r\n> independently for all publications.\r\n> \r\n> > But maybe you're right and it should behave the way you propose ... the\r\n> > example I have in mind is a use case replicating table with two types of\r\n> > rows - sensitive and non-sensitive. For sensitive, we replicate only\r\n> > some of the columns, for non-sensitive we replicate everything. Which\r\n> > could be implemented as two publications\r\n> >\r\n> > create publication sensitive_rows\r\n> > for table t (a, b) where (is_sensitive);\r\n> >\r\n> > create publication non_sensitive_rows\r\n> > for table t where (not is_sensitive);\r\n> >\r\n> > But the way it's implemented now, we'll always replicate all columns,\r\n> > because the second publication has no column list.\r\n> >\r\n> > Changing this to behave the way you expect would be quite difficult,\r\n> > because at the moment we build a single OR expression from all the row\r\n> > filters. We'd have to keep the individual expressions, so that we can\r\n> > build a column list for each of them (in order to ignore those that\r\n> > don't match).\r\n> >\r\n> > We'd have to remove various other optimizations - for example we can't\r\n> > just discard row filters if we found \"no_filter\" publication.\r\n> >\r\n> \r\n> I don't think that is the right way. We need some way to combine\r\n> expressions and I feel the current behavior is sane. I mean to say\r\n> that even if there is one publication that has no filter (column/row),\r\n> we should publish all rows with all columns. Now, as mentioned above\r\n> combining row filters or column lists for all publications appears to\r\n> be consistent with what we already do and seems correct behavior to\r\n> me.\r\n> \r\n> To me, it appears that the method used to decide whether a particular\r\n> table is published or not is also similar to what we do for row\r\n> filters or column lists. Even if there is one publication that\r\n> publishes all tables, we consider the current table to be published\r\n> irrespective of whether other publications have published that table\r\n> or not.\r\n> \r\n> > Or more\r\n> > precisely, we'd have to consider column lists too.\r\n> >\r\n> > In other words, we'd have to merge pgoutput_column_list_init into\r\n> > pgoutput_row_filter_init, and then modify pgoutput_row_filter to\r\n> > evaluate the row filters one by one, and build the column list.\r\n> >\r\n> \r\n> Hmm, I think even if we want to do something here, we also need to\r\n> think about how to achieve similar behavior for initial tablesync\r\n> which will be more tricky.\r\n\r\nI think it could be difficult to make the initial tablesync behave the same.\r\nCurrently, we make a \"COPY\" command to do the table sync, I am not sure\r\nhow to change the \"COPY\" query to achieve the expected behavior here.\r\n\r\nBTW, For the use case mentioned here:\r\n\"\"\"\r\nreplicating table with two types of\r\nrows - sensitive and non-sensitive. For sensitive, we replicate only\r\nsome of the columns, for non-sensitive we replicate everything.\r\n\"\"\" \r\n\r\nOne approach to do this is to create two subscriptions and two\r\npublications which seems a workaround.\r\n-----\r\ncreate publication uno for table uno (a, b) where (a > 0);\r\ncreate publication dos for table uno (a, c) where (a < 0);\r\n\r\ncreate subscription sub_uno connection 'port=55432 dbname=alvherre' publication uno;\r\ncreate subscription sub_dos connection 'port=55432 dbname=alvherre' publication dos;\r\n-----\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Wed, 27 Apr 2022 10:08:12 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Wed, Apr 27, 2022 at 3:13 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Apr-26, Tomas Vondra wrote:\n>\n> > I'm not quite sure which of the two behaviors is more \"desirable\". In a\n> > way, it's somewhat similar to publish_as_relid, which is also calculated\n> > not considering which of the row filters match?\n>\n> I grepped doc/src/sgml for `publish_as_relid` and found no hits, so\n> I suppose it's not a user-visible feature as such.\n>\n\n`publish_as_relid` is computed based on 'publish_via_partition_root'\nsetting of publication which is a user-visible feature.\n\n> > But maybe you're right and it should behave the way you propose ... the\n> > example I have in mind is a use case replicating table with two types of\n> > rows - sensitive and non-sensitive. For sensitive, we replicate only\n> > some of the columns, for non-sensitive we replicate everything.\n>\n> Exactly. If we blindly publish row/column values that aren't in *any*\n> publications, this may lead to leaking protected values.\n>\n> > Changing this to behave the way you expect would be quite difficult,\n> > because at the moment we build a single OR expression from all the row\n> > filters. We'd have to keep the individual expressions, so that we can\n> > build a column list for each of them (in order to ignore those that\n> > don't match).\n>\n> I think we should do that, yeah.\n>\n\nThis can hit the performance as we need to evaluate each expression\nfor each row.\n\n> > I can take a stab at it, but it seems strange to not apply the same\n> > logic to evaluation of publish_as_relid. I wonder what Amit thinks about\n> > this, as he wrote the row filter stuff.\n>\n> By grepping publicationcmds.c, it seems that publish_as_relid refers to\n> the ancestor partitioned table that is used for column list and\n> rowfilter determination, when a partition is being published as part of\n> it.\n>\n\nYeah, this is true when the corresponding publication has set\n'publish_via_partition_root' as true.\n\n> I don't think these things are exactly parallel.\n>\n\nCurrently, when the subscription has multiple publications, we combine\nthe objects, and actions of those publications. It happens for\n'publish_via_partition_root', publication actions, tables, column\nlists, or row filters. I think the whole design works on this idea\neven the initial table sync. I think it might need a major change\n(which I am not sure about at this stage) if we want to make the\ninitial sync also behave similar to what you are proposing.\n\nI feel it would be much easier to create two different subscriptions\nas mentioned by Hou-San [1] for the case you are talking about if the\nuser really needs something like that.\n\n> ... In fact I think they are quite orthogonal: probably you should be\n> able to publish a partitioned table in two publications, with different\n> rowfilters and different column lists (which can come from the\n> topmost partitioned table), and each partition should still work in the\n> way I describe above.\n>\n\nWe consider the column lists or row filters for either the partition\n(on which the current operation is performed) or partitioned table\nbased on 'publish_via_partition_root' parameter of publication.\n\n[1] - https://www.postgresql.org/message-id/OS0PR01MB5716B82315A067F1D78F247E94FA9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 27 Apr 2022 16:03:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 2022-Apr-27, Amit Kapila wrote:\n\n> On Wed, Apr 27, 2022 at 3:13 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > > Changing this to behave the way you expect would be quite difficult,\n> > > because at the moment we build a single OR expression from all the row\n> > > filters. We'd have to keep the individual expressions, so that we can\n> > > build a column list for each of them (in order to ignore those that\n> > > don't match).\n> >\n> > I think we should do that, yeah.\n> \n> This can hit the performance as we need to evaluate each expression\n> for each row.\n\nSo we do things because they are easy and fast, rather than because they\nwork correctly?\n\n> > ... In fact I think they are quite orthogonal: probably you should be\n> > able to publish a partitioned table in two publications, with different\n> > rowfilters and different column lists (which can come from the\n> > topmost partitioned table), and each partition should still work in the\n> > way I describe above.\n> \n> We consider the column lists or row filters for either the partition\n> (on which the current operation is performed) or partitioned table\n> based on 'publish_via_partition_root' parameter of publication.\n\nOK, but this isn't relevant to what I wrote.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 27 Apr 2022 12:57:26 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Wed, Apr 27, 2022 at 4:27 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Apr-27, Amit Kapila wrote:\n>\n> > On Wed, Apr 27, 2022 at 3:13 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > > > Changing this to behave the way you expect would be quite difficult,\n> > > > because at the moment we build a single OR expression from all the row\n> > > > filters. We'd have to keep the individual expressions, so that we can\n> > > > build a column list for each of them (in order to ignore those that\n> > > > don't match).\n> > >\n> > > I think we should do that, yeah.\n> >\n> > This can hit the performance as we need to evaluate each expression\n> > for each row.\n>\n> So we do things because they are easy and fast, rather than because they\n> work correctly?\n>\n\nThe point is I am not sure if what you are saying is better behavior\nthan current but if others feel it is better then we can try to do\nsomething for it. In the above sentence, I just wanted to say that it\nwill impact performance but if that is required then sure we should do\nit that way.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 27 Apr 2022 17:18:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "Hi,\n\nso I've been looking at tweaking the code so that the behavior matches\nAlvaro's expectations. It passes check-world but I'm not claiming it's\nnowhere near commitable - the purpose is mostly to give better idea of\nhow invasive the change is etc.\n\nAs described earlier, this abandons the idea of building a single OR\nexpression from all the row filters (per action), and replaces that with\na list of per-publication info (struct PublicationInfo), combining info\nabout both row filters and column lists.\n\nThis means we can't initialize the row filters and column lists\nseparately, but at the same time. So pgoutput_row_filter_init was\nmodified to initialize both, and pgoutput_column_list_init was removed.\n\nWith this info, we can calculate column lists only for publications with\nmatching row filters, which is what the modified pgoutput_row_filter\ndoes (the calculated column list is returned through a parameter).\n\n\nThis however does not remove the 'columns' from RelationSyncEntry\nentirely. We still need that \"superset\" column list when sending schema.\n\nImagine two publications, one replicating (a,b) and the other (a,c),\nmaybe depending on row filter. send_relation_and_attrs() needs to send\ninfo about all three attributes (a,b,c), i.e. about any attribute that\nmight end up being replicated.\n\nWe might try to be smarter and send the exact schema needed by the next\noperation, i.e. when inserting (a,b) we'd make sure the last schema we\nsent was (a,b) and invalidate/resend it otherwise. But that might easily\nresult in \"trashing\" where we send the schema and the next operation\ninvalidates it right away because it needs a different schema.\n\nBut there's another reason to do it like this - it seems desirable to\nactually reset columns don't match the calculated column list. 
Using\nAlvaro's example, it seems reasonable to expect these two transactions\nto produce the same result on the subscriber:\n\n1) insert (a,b) + update to (a,c)\n\n insert into uno values (1, 2, 3);\n update uno set a = -1 where a = 1;\n\n2) insert (a,c)\n\n insert into uno values (-1, 2, 3);\n\nBut to do this, the update actually needs to send (-1,NULL,3).\n\nSo in this case we'll have (a,b,c) column list in RelationSyncEntry, and\nonly attributes on this list will be sent as part of schema. And DML\nactions we'll calculate either (a,b) or (a,c) depending on the row\nfilter, and missing attributes will be replicated as NULL.\n\n\nI haven't done any tests how this affect performance, but I have a\ncouple thoughts regarding that:\n\na) I kinda doubt the optimizations would really matter in practice,\nbecause how likely is it that one relation is in many publications (in\nthe same subscription)?\n\nb) Did anyone actually do some benchmarks that I could repeat, to see\nhow much worse this is?\n\nc) AFAICS we could optimize this in at least some common cases. For\nexample we could combine the entries with matching row filters, and/or\ncolumn filters.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 27 Apr 2022 23:56:41 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Thu, Apr 28, 2022 at 3:26 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> so I've been looking at tweaking the code so that the behavior matches\n> Alvaro's expectations. It passes check-world but I'm not claiming it's\n> nowhere near commitable - the purpose is mostly to give better idea of\n> how invasive the change is etc.\n>\n\nI was just skimming through the patch and didn't find anything related\nto initial sync handling. I feel the behavior should be same for\ninitial sync and replication.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 28 Apr 2022 08:47:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 27.04.22 11:53, Alvaro Herrera wrote:\n> Now, another possibility is to say \"naah, this is too hard\", or even\n> \"naah, there's no time to write all that for this release\". That might\n> be okay, but in that case let's add an implementation restriction to\n> ensure that we don't paint ourselves in a corner regarding what is\n> reasonable behavior. For example, an easy restriction might be: if a\n> table is in multiple publications with mismatching row filters/column\n> lists, then a subscriber is not allowed to subscribe to both\n> publications. (Maybe this restriction isn't exactly what we need so\n> that it actually implements what we need, not sure). Then, if/when in\n> the future we implement this correctly, we can lift the restriction.\n\nMy feeling is also that we should prohibit the combinations that we \ncannot make work correctly.\n\n\n\n",
"msg_date": "Thu, 28 Apr 2022 14:13:25 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 27.04.22 12:33, Amit Kapila wrote:\n> Currently, when the subscription has multiple publications, we combine\n> the objects, and actions of those publications. It happens for\n> 'publish_via_partition_root', publication actions, tables, column\n> lists, or row filters. I think the whole design works on this idea\n> even the initial table sync. I think it might need a major change\n> (which I am not sure about at this stage) if we want to make the\n> initial sync also behave similar to what you are proposing.\n\nIf one publication says \"publish if insert\" and another publication says \n\"publish if update\", then the combination of that is clearly \"publish if \ninsert or update\". Similarly, if one publication says \"WHERE (foo)\" and \none says \"WHERE (bar)\", then the combination is \"WHERE (foo OR bar)\".\n\nBut if one publication says \"publish columns a and b if condition-X\" and \nanother publication says \"publish columns a and c if not-condition-X\", \nthen the combination is clearly *not* \"publish columns a, b, c if true\". \n That is not logical, in the literal sense of that word.\n\nI wonder how we handle the combination of\n\npub1: publish=insert WHERE (foo)\npub2: publish=update WHERE (bar)\n\nI think it would be incorrect if the combination is\n\npub1, pub2: publish=insert,update WHERE (foo OR bar).\n\n\n",
"msg_date": "Thu, 28 Apr 2022 14:26:25 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 4/28/22 05:17, Amit Kapila wrote:\n> On Thu, Apr 28, 2022 at 3:26 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> so I've been looking at tweaking the code so that the behavior matches\n>> Alvaro's expectations. It passes check-world but I'm not claiming it's\n>> nowhere near commitable - the purpose is mostly to give better idea of\n>> how invasive the change is etc.\n>>\n> \n> I was just skimming through the patch and didn't find anything related\n> to initial sync handling. I feel the behavior should be same for\n> initial sync and replication.\n> \n\nYeah, sorry for not mentioning that - my goal was to explore and try\ngetting the behavior in regular replication right first, before\nattempting to do the same thing in tablesync.\n\nAttached is a patch doing the same thing in tablesync. The overall idea\nis to generate copy statement with CASE expressions, applying filters to\nindividual columns. For Alvaro's example, this generates something like\n\n SELECT\n (CASE WHEN (a < 0) OR (a > 0) THEN a ELSE NULL END) AS a,\n (CASE WHEN (a > 0) THEN b ELSE NULL END) AS b,\n (CASE WHEN (a < 0) THEN c ELSE NULL END) AS c\n FROM uno WHERE (a < 0) OR (a > 0)\n\nAnd that seems to work fine. Similarly to regular replication we have to\nuse both the \"total\" column list (union of per-publication lists) and\nper-publication (row filter + column list), but that's expected.\n\nThere's a couple options how we might optimize this for common cases.\nFor example if there's just a single publication, there's no need to\ngenerate the CASE expressions - the WHERE filter will do the trick.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 28 Apr 2022 17:35:09 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 4/28/22 14:26, Peter Eisentraut wrote:\n> On 27.04.22 12:33, Amit Kapila wrote:\n>> Currently, when the subscription has multiple publications, we combine\n>> the objects, and actions of those publications. It happens for\n>> 'publish_via_partition_root', publication actions, tables, column\n>> lists, or row filters. I think the whole design works on this idea\n>> even the initial table sync. I think it might need a major change\n>> (which I am not sure about at this stage) if we want to make the\n>> initial sync also behave similar to what you are proposing.\n> \n> If one publication says \"publish if insert\" and another publication says\n> \"publish if update\", then the combination of that is clearly \"publish if\n> insert or update\". Similarly, if one publication says \"WHERE (foo)\" and\n> one says \"WHERE (bar)\", then the combination is \"WHERE (foo OR bar)\".\n> \n> But if one publication says \"publish columns a and b if condition-X\" and\n> another publication says \"publish columns a and c if not-condition-X\",\n> then the combination is clearly *not* \"publish columns a, b, c if true\".\n> That is not logical, in the literal sense of that word.\n> \n> I wonder how we handle the combination of\n> \n> pub1: publish=insert WHERE (foo)\n> pub2: publish=update WHERE (bar)\n> \n> I think it would be incorrect if the combination is\n> \n> pub1, pub2: publish=insert,update WHERE (foo OR bar).\n\nThat's a good question, actually. No, we don't combine the publications\nlike this, the row filters are kept \"per action\". 
But the exact behavior\nturns out to be rather confusing in this case.\n\n(Note: This has nothing to do with column lists.)\n\nConsider an example similar to what Alvaro posted earlier:\n\n create table uno (a int primary key, b int, c int);\n\n create publication uno for table uno where (a > 0)\n with (publish='insert');\n\n create publication dos for table uno where (a < 0)\n with (publish='update');\n\nAnd do this:\n\n insert into uno values (1, 2, 3), (-1, 3, 4)\n\nwhich on the subscriber produces just one row, because (a<0) replicates\nonly updates:\n\n a | b | c\n ---+---+---\n 1 | 2 | 3\n (1 row)\n\nNow, let's update the (a<0) row.\n\n update uno set a = 2 where a = -1;\n\nIt might seem reasonable to expect the updated row (2,3,4) to appear on\nthe subscriber, but no - that's not what happens. Because we have (a<0)\nfor UPDATE, and we evaluate this on the old row (matches) and new row\n(does not match). And pgoutput_row_filter() decides the update needs to\nbe converted to DELETE, despite the old row was not replicated at all.\n\nI'm not sure if pgoutput_row_filter() can even make reasonable decisions\nwith such configuration (combination of row filters, actions ...). But\nit sure seems confusing, because if you just inserted the updated row,\nit would get replicated.\n\nWhich brings me to a second problem, related to this one. Imagine you\ncreate the subscription *after* inserting the two rows. In that case you\nget this:\n\n a | b | c\n ----+---+---\n 1 | 2 | 3\n -1 | 3 | 4\n (2 rows)\n\nbecause tablesync.c ignores which actions is the publication (and thus\nthe rowfilter) defined for.\n\nI think it's natural to expect that (INSERT + sync) and (sync + INSERT)\nproduce the same output on the subscriber.\n\n\nI'm not sure we can actually make this perfectly sane with arbitrary\ncombinations of filters and actions. It would probably depend on whether\nthe actions are commutative, associative and stuff like that. 
But maybe\nwe can come up with restrictions that'd make this sane?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 28 Apr 2022 19:30:08 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Thu, Apr 28, 2022 at 11:00 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 4/28/22 14:26, Peter Eisentraut wrote:\n> > On 27.04.22 12:33, Amit Kapila wrote:\n> >\n> > I wonder how we handle the combination of\n> >\n> > pub1: publish=insert WHERE (foo)\n> > pub2: publish=update WHERE (bar)\n> >\n> > I think it would be incorrect if the combination is\n> >\n> > pub1, pub2: publish=insert,update WHERE (foo OR bar).\n>\n> That's a good question, actually. No, we don't combine the publications\n> like this, the row filters are kept \"per action\".\n>\n\nRight, and it won't work even if try to combine in this case because\nof replica identity restrictions.\n\n> But the exact behavior\n> turns out to be rather confusing in this case.\n>\n> (Note: This has nothing to do with column lists.)\n>\n> Consider an example similar to what Alvaro posted earlier:\n>\n> create table uno (a int primary key, b int, c int);\n>\n> create publication uno for table uno where (a > 0)\n> with (publish='insert');\n>\n> create publication dos for table uno where (a < 0)\n> with (publish='update');\n>\n> And do this:\n>\n> insert into uno values (1, 2, 3), (-1, 3, 4)\n>\n> which on the subscriber produces just one row, because (a<0) replicates\n> only updates:\n>\n> a | b | c\n> ---+---+---\n> 1 | 2 | 3\n> (1 row)\n>\n> Now, let's update the (a<0) row.\n>\n> update uno set a = 2 where a = -1;\n>\n> It might seem reasonable to expect the updated row (2,3,4) to appear on\n> the subscriber, but no - that's not what happens. Because we have (a<0)\n> for UPDATE, and we evaluate this on the old row (matches) and new row\n> (does not match). 
And pgoutput_row_filter() decides the update needs to\n> be converted to DELETE, despite the old row was not replicated at all.\n>\n\nRight, but we don't know what previously would have happened maybe the\nuser would have altered the publication action after the initial row\nis published in which case this DELETE is required as is shown in the\nexample below. We can only make the decision based on the current\ntuple. For example:\n\ncreate table uno (a int primary key, b int, c int);\n\n create publication uno for table uno where (a > 0)\n with (publish='insert');\n\n create publication dos for table uno where (a < 0)\n with (publish='insert');\n\n-- create subscription for both these publications.\n\ninsert into uno values (1, 2, 3), (-1, 3, 4);\n\nAlter publication dos set (publish='update');\n\nupdate uno set a = 2 where a = -1;\n\nNow, in this case, the old row was replicated and we would need a\nDELETE corresponding to it.\n\n> I'm not sure if pgoutput_row_filter() can even make reasonable decisions\n> with such configuration (combination of row filters, actions ...). But\n> it sure seems confusing, because if you just inserted the updated row,\n> it would get replicated.\n>\n\nTrue, but that is what the combination of publications suggests. The\npublication that publishes inserts have different criteria than\nupdates, so such behavior (a particular row when inserted will be\nreplicated but when it came as a result of an update it won't be\nreplicated) is expected.\n\n> Which brings me to a second problem, related to this one. Imagine you\n> create the subscription *after* inserting the two rows. In that case you\n> get this:\n>\n> a | b | c\n> ----+---+---\n> 1 | 2 | 3\n> -1 | 3 | 4\n> (2 rows)\n>\n> because tablesync.c ignores which actions is the publication (and thus\n> the rowfilter) defined for.\n>\n\nYeah, this is the behavior of tablesync.c with or without rowfilter.\nIt ignores publication actions. 
So, if you update any tuple before\ncreation of subscription it will be replicated but the same update\nwon't be replicated after initial sync if the publication just\npublishes 'insert'. I think we can't decide which data to copy based\non publication actions as COPY wouldn't know if a particular row is\ndue to a fresh insert or due to an update. In your example, it is\npossible that row (-1, 3, 4) would have been there due to an update.\n\n\n> I think it's natural to expect that (INSERT + sync) and (sync + INSERT)\n> produce the same output on the subscriber.\n>\n>\n> I'm not sure we can actually make this perfectly sane with arbitrary\n> combinations of filters and actions. It would probably depend on whether\n> the actions are commutative, associative and stuff like that. But maybe\n> we can come up with restrictions that'd make this sane?\n>\n\nTrue, I think to some extent we rely on users to define it sanely\notherwise currently also it can easily lead to even replication being\nstuck. This can happen when the user is trying to operate on the same\ntable and define publication/subscription on multiple nodes for it.\nSee [1] where we trying to deal with such a problem.\n\n[1] - https://commitfest.postgresql.org/38/3610/\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 29 Apr 2022 10:18:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Thu, Apr 28, 2022 at 5:56 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 27.04.22 12:33, Amit Kapila wrote:\n> > Currently, when the subscription has multiple publications, we combine\n> > the objects, and actions of those publications. It happens for\n> > 'publish_via_partition_root', publication actions, tables, column\n> > lists, or row filters. I think the whole design works on this idea\n> > even the initial table sync. I think it might need a major change\n> > (which I am not sure about at this stage) if we want to make the\n> > initial sync also behave similar to what you are proposing.\n>\n> If one publication says \"publish if insert\" and another publication says\n> \"publish if update\", then the combination of that is clearly \"publish if\n> insert or update\". Similarly, if one publication says \"WHERE (foo)\" and\n> one says \"WHERE (bar)\", then the combination is \"WHERE (foo OR bar)\".\n>\n> But if one publication says \"publish columns a and b if condition-X\" and\n> another publication says \"publish columns a and c if not-condition-X\",\n> then the combination is clearly *not* \"publish columns a, b, c if true\".\n> That is not logical, in the literal sense of that word.\n>\n\nSo, what should be the behavior in the below cases:\n\nCase-1:\npub1: \"publish columns a and b if condition-X\"\npub2: \"publish column c if condition-X\"\n\nIsn't it okay to combine these?\n\nCase-2:\npub1: \"publish columns a and b if condition-X\"\npub2: \"publish columns c if condition-Y\"\n\nHere Y is subset of condition X (say something like condition-X: \"col1\n> 5\" and condition-Y: \"col1 > 10\").\n\nWhat should we do in such a case?\n\nI think if there are some cases where combining them is okay but in\nother cases, it is not okay then it is better to prohibit 'not-okay'\ncases if that is feasible.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 29 Apr 2022 10:35:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 4/29/22 06:48, Amit Kapila wrote:\n> On Thu, Apr 28, 2022 at 11:00 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 4/28/22 14:26, Peter Eisentraut wrote:\n>>> On 27.04.22 12:33, Amit Kapila wrote:\n>>>\n>>> I wonder how we handle the combination of\n>>>\n>>> pub1: publish=insert WHERE (foo)\n>>> pub2: publish=update WHERE (bar)\n>>>\n>>> I think it would be incorrect if the combination is\n>>>\n>>> pub1, pub2: publish=insert,update WHERE (foo OR bar).\n>>\n>> That's a good question, actually. No, we don't combine the publications\n>> like this, the row filters are kept \"per action\".\n>>\n> \n> Right, and it won't work even if try to combine in this case because\n> of replica identity restrictions.\n> \n>> But the exact behavior\n>> turns out to be rather confusing in this case.\n>>\n>> (Note: This has nothing to do with column lists.)\n>>\n>> Consider an example similar to what Alvaro posted earlier:\n>>\n>> create table uno (a int primary key, b int, c int);\n>>\n>> create publication uno for table uno where (a > 0)\n>> with (publish='insert');\n>>\n>> create publication dos for table uno where (a < 0)\n>> with (publish='update');\n>>\n>> And do this:\n>>\n>> insert into uno values (1, 2, 3), (-1, 3, 4)\n>>\n>> which on the subscriber produces just one row, because (a<0) replicates\n>> only updates:\n>>\n>> a | b | c\n>> ---+---+---\n>> 1 | 2 | 3\n>> (1 row)\n>>\n>> Now, let's update the (a<0) row.\n>>\n>> update uno set a = 2 where a = -1;\n>>\n>> It might seem reasonable to expect the updated row (2,3,4) to appear on\n>> the subscriber, but no - that's not what happens. Because we have (a<0)\n>> for UPDATE, and we evaluate this on the old row (matches) and new row\n>> (does not match). 
And pgoutput_row_filter() decides the update needs to\n>> be converted to DELETE, despite the old row was not replicated at all.\n>>\n> \n> Right, but we don't know what previously would have happened maybe the\n> user would have altered the publication action after the initial row\n> is published in which case this DELETE is required as is shown in the\n> example below. We can only make the decision based on the current\n> tuple. For example:\n> \n> create table uno (a int primary key, b int, c int);\n> \n> create publication uno for table uno where (a > 0)\n> with (publish='insert');\n> \n> create publication dos for table uno where (a < 0)\n> with (publish='insert');\n> \n> -- create subscription for both these publications.\n> \n> insert into uno values (1, 2, 3), (-1, 3, 4);\n> \n> Alter publication dos set (publish='update');\n> \n> update uno set a = 2 where a = -1;\n> \n> Now, in this case, the old row was replicated and we would need a\n> DELETE corresponding to it.\n> \n\nI think such issues due to ALTER of the publication are somewhat\nexpected, and I think users will understand they might need to resync\nthe subscription or something like that.\n\nA similar example might be just changing the where condition,\n\n create publication p for table t where (a > 10);\n\nand then\n\n alter publication p set table t where (a > 15);\n\nIf we replicated any rows with (a > 10) and (a <= 15), we'll just stop\nreplicating them. But if we re-create the subscription, we end up with a\ndifferent set of rows on the subscriber, omitting rows with (a <= 15).\n\nIn principle we'd need to replicate the ALTER somehow, to delete or\ninsert the rows that start/stop matching the row filter. It's a bit\nsimilar to not replicating DDL, perhaps.\n\nBut I think the issue I've described is different, because you don't\nhave to change the subscriptions at all and you'll still have the\nproblem. 
I mean, imagine doing this:\n\n-- publisher\ncreate table t (a int primary key, b int);\ncreate publication p for table t where (a > 10) with (publish='update');\n\n-- subscriber\ncreate table t (a int primary key, b int);\ncreate subscription s connection '...' publication p;\n\n-- publisher\ninsert into t select i, i from generate_series(1,20) s(i);\nupdate t set b = b * 10;\n\n-- subscriber\n--> has no rows in \"t\"\n--> recreate the subscription\ndrop subscription s;\ncreate subscription s connection '...' publication p;\n\n--> now it has all the rows with (a>10), because tablesync ignores\npublication actions\n\n\nThe reason why I find this really annoying is that it makes it almost\nimpossible to setup two logical replicas that'd be \"consistent\", unless\nyou create them at the same time (= without any writes in between). And\nit's damn difficult to think about the inconsistencies.\n\n\nIMHO this all stems from allowing row filters and restricting pubactions\nat the same time (notice this only used a single publication). So maybe\nthe best option would be to disallow combining these two features? That\nwould ensure the row filter filter is always applied to all actions in a\nconsistent manner, preventing all these issues.\n\nMaybe that's not possible - maybe there are valid use cases that would\nneed such combination, and you mentioned replica identity might be an\nissue (and maybe requiring RIF with row filters is not desirable).\n\nSo maybe we should at least warn against this in the documentation?\n\n\n>> I'm not sure if pgoutput_row_filter() can even make reasonable decisions\n>> with such configuration (combination of row filters, actions ...). But\n>> it sure seems confusing, because if you just inserted the updated row,\n>> it would get replicated.\n>>\n> \n> True, but that is what the combination of publications suggests. 
The\n> publication that publishes inserts have different criteria than\n> updates, so such behavior (a particular row when inserted will be\n> replicated but when it came as a result of an update it won't be\n> replicated) is expected.\n> \n>> Which brings me to a second problem, related to this one. Imagine you\n>> create the subscription *after* inserting the two rows. In that case you\n>> get this:\n>>\n>> a | b | c\n>> ----+---+---\n>> 1 | 2 | 3\n>> -1 | 3 | 4\n>> (2 rows)\n>>\n>> because tablesync.c ignores which actions is the publication (and thus\n>> the rowfilter) defined for.\n>>\n> \n> Yeah, this is the behavior of tablesync.c with or without rowfilter.\n> It ignores publication actions. So, if you update any tuple before\n> creation of subscription it will be replicated but the same update\n> won't be replicated after initial sync if the publication just\n> publishes 'insert'. I think we can't decide which data to copy based\n> on publication actions as COPY wouldn't know if a particular row is\n> due to a fresh insert or due to an update. In your example, it is\n> possible that row (-1, 3, 4) would have been there due to an update.\n> \n\nRight. Which is why I think disallowing these two features (filtering\nactions and row filters) might prevent this, because it eliminates this\nambiguity. It would not matter if a row was INSERTed or UPDATEd when\nevaluating the row filter.\n\n> \n>> I think it's natural to expect that (INSERT + sync) and (sync + INSERT)\n>> produce the same output on the subscriber.\n>>\n>>\n>> I'm not sure we can actually make this perfectly sane with arbitrary\n>> combinations of filters and actions. It would probably depend on whether\n>> the actions are commutative, associative and stuff like that. But maybe\n>> we can come up with restrictions that'd make this sane?\n>>\n> \n> True, I think to some extent we rely on users to define it sanely\n> otherwise currently also it can easily lead to even replication being\n> stuck. 
This can happen when the user is trying to operate on the same\n> table and define publication/subscription on multiple nodes for it.\n> See [1] where we trying to deal with such a problem.\n> \n> [1] - https://commitfest.postgresql.org/38/3610/\n> \n\nThat seems to deal with a circular replication, i.e. two logical\nreplication links - a bit like a multi-master. Not sure how is that\nrelated to the issue we're discussing here?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 29 Apr 2022 22:31:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Sat, Apr 30, 2022 at 2:02 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 4/29/22 06:48, Amit Kapila wrote:\n> > On Thu, Apr 28, 2022 at 11:00 PM Tomas Vondra\n>\n> I think such issues due to ALTER of the publication are somewhat\n> expected, and I think users will understand they might need to resync\n> the subscription or something like that.\n>\n> A similar example might be just changing the where condition,\n>\n> create publication p for table t where (a > 10);\n>\n> and then\n>\n> alter publication p set table t where (a > 15);\n>\n> If we replicated any rows with (a > 10) and (a <= 15), we'll just stop\n> replicating them. But if we re-create the subscription, we end up with a\n> different set of rows on the subscriber, omitting rows with (a <= 15).\n>\n> In principle we'd need to replicate the ALTER somehow, to delete or\n> insert the rows that start/stop matching the row filter. It's a bit\n> similar to not replicating DDL, perhaps.\n>\n> But I think the issue I've described is different, because you don't\n> have to change the subscriptions at all and you'll still have the\n> problem. I mean, imagine doing this:\n>\n> -- publisher\n> create table t (a int primary key, b int);\n> create publication p for table t where (a > 10) with (publish='update');\n>\n> -- subscriber\n> create table t (a int primary key, b int);\n> create subscription s connection '...' publication p;\n>\n> -- publisher\n> insert into t select i, i from generate_series(1,20) s(i);\n> update t set b = b * 10;\n>\n> -- subscriber\n> --> has no rows in \"t\"\n> --> recreate the subscription\n> drop subscription s;\n> create subscription s connection '...' 
publication p;\n>\n> --> now it has all the rows with (a>10), because tablesync ignores\n> publication actions\n>\n>\n> The reason why I find this really annoying is that it makes it almost\n> impossible to setup two logical replicas that'd be \"consistent\", unless\n> you create them at the same time (= without any writes in between). And\n> it's damn difficult to think about the inconsistencies.\n>\n\nI understood your case related to the initial sync and it is with or\nwithout rowfilter.\n\n>\n> IMHO this all stems from allowing row filters and restricting pubactions\n> at the same time (notice this only used a single publication). So maybe\n> the best option would be to disallow combining these two features? That\n> would ensure the row filter filter is always applied to all actions in a\n> consistent manner, preventing all these issues.\n>\n> Maybe that's not possible - maybe there are valid use cases that would\n> need such combination, and you mentioned replica identity might be an\n> issue\n>\n\nYes, that is the reason we can't combine the row filters for all pubactions.\n\n> (and maybe requiring RIF with row filters is not desirable).\n>\n> So maybe we should at least warn against this in the documentation?\n>\n\nYeah, I find this as the most suitable thing to do to address your\nconcern. I would like to add this information to the 'Initial\nSnapshot' page with some examples (both with and without a row\nfilter).\n\n> >\n> > True, I think to some extent we rely on users to define it sanely\n> > otherwise currently also it can easily lead to even replication being\n> > stuck. This can happen when the user is trying to operate on the same\n> > table and define publication/subscription on multiple nodes for it.\n> > See [1] where we trying to deal with such a problem.\n> >\n> > [1] - https://commitfest.postgresql.org/38/3610/\n> >\n>\n> That seems to deal with a circular replication, i.e. two logical\n> replication links - a bit like a multi-master. 
Not sure how is that\n> related to the issue we're discussing here?\n>\n\nIt is not directly related to what we are discussing here but I was\ntrying to emphasize the point that users need to define the logical\nreplication via pub/sub sanely otherwise they might see some weird\nbehaviors like that.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 30 Apr 2022 06:50:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 2022-Apr-28, Tomas Vondra wrote:\n\n> Attached is a patch doing the same thing in tablesync. The overall idea\n> is to generate copy statement with CASE expressions, applying filters to\n> individual columns. For Alvaro's example, this generates something like\n> \n> SELECT\n> (CASE WHEN (a < 0) OR (a > 0) THEN a ELSE NULL END) AS a,\n> (CASE WHEN (a > 0) THEN b ELSE NULL END) AS b,\n> (CASE WHEN (a < 0) THEN c ELSE NULL END) AS c\n> FROM uno WHERE (a < 0) OR (a > 0)\n\nI've been reading the tablesync.c code you propose and the idea seems\ncorrect. (I was distracted by wondering if a different data structure\nwould be more appropriate, because what's there looks slightly\nuncomfortable to work with. But after playing around I can't find\nanything that feels better in an obvious way.)\n\n(I confess I'm a bit bothered by the fact that there are now three\ndifferent data structures in our code called PublicationInfo).\n\nI propose some comment changes in the attached patch, and my\ninterpretation (untested) of the idea of optimizing for a single\npublication. (In there I also rename logicalrep_relmap_free_entry\nbecause it's confusing. That should be a separate patch but I didn't\nsplit it before posting, apologies.)\n\n> There's a couple options how we might optimize this for common cases.\n> For example if there's just a single publication, there's no need to\n> generate the CASE expressions - the WHERE filter will do the trick.\n\nRight.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Sat, 30 Apr 2022 11:28:47 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 2022-Apr-30, Amit Kapila wrote:\n\n> On Sat, Apr 30, 2022 at 2:02 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n\n> > That seems to deal with a circular replication, i.e. two logical\n> > replication links - a bit like a multi-master. Not sure how is that\n> > related to the issue we're discussing here?\n> \n> It is not directly related to what we are discussing here but I was\n> trying to emphasize the point that users need to define the logical\n> replication via pub/sub sanely otherwise they might see some weird\n> behaviors like that.\n\nI agree with that.\n\nMy proposal is that if users want to define multiple publications, and\ntheir definitions conflict in a way that would behave ridiculously (==\nbound to cause data inconsistencies eventually), an error should be\nthrown. Maybe we will not be able to catch all bogus cases, but we can\nbe prepared for the most obvious ones, and patch later when we find\nothers.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"In the depths of our unconscious there is an obsessive need for a\nlogical and coherent universe. But the real universe is always one\nstep beyond logic\" (Irulan)\n\n\n",
"msg_date": "Sat, 30 Apr 2022 11:31:04 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Sat, Apr 30, 2022 at 3:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Apr-30, Amit Kapila wrote:\n>\n> > On Sat, Apr 30, 2022 at 2:02 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n>\n> > > That seems to deal with a circular replication, i.e. two logical\n> > > replication links - a bit like a multi-master. Not sure how is that\n> > > related to the issue we're discussing here?\n> >\n> > It is not directly related to what we are discussing here but I was\n> > trying to emphasize the point that users need to define the logical\n> > replication via pub/sub sanely otherwise they might see some weird\n> > behaviors like that.\n>\n> I agree with that.\n>\n> My proposal is that if users want to define multiple publications, and\n> their definitions conflict in a way that would behave ridiculously (==\n> bound to cause data inconsistencies eventually), an error should be\n> thrown. Maybe we will not be able to catch all bogus cases, but we can\n> be prepared for the most obvious ones, and patch later when we find\n> others.\n>\n\nI agree with throwing errors for obvious/known bogus cases but do we\nwant to throw errors or restrict the combining of column lists when\nrow filters are present in all cases? See some examples [1 ] where it\nmay be valid to combine them.\n\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1K%2BPkkC6_FDemGMC_i%2BAakx%2B3%3DQG-g4We3BdCK7dK_bgA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 30 Apr 2022 15:41:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 2022-Apr-30, Amit Kapila wrote:\n\n> I agree with throwing errors for obvious/known bogus cases but do we\n> want to throw errors or restrict the combining of column lists when\n> row filters are present in all cases? See some examples [1 ] where it\n> may be valid to combine them.\n\nI agree we should try to combine things when it is sensible to do so.\nAnother case that may make sense is if there are two or more publications\nwith identical column lists but different row filters -- in such cases,\nas Tomas suggested, we should combine the filters with OR.\n\nAlso, if only INSERTs are published and not UPDATE/DELETEs, then it\nmight be sensible to combine everything, regardless of whether or not\nthe column lists and row filters match.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"He who admits that he is a coward has courage\" (Fernandel)\n\n\n",
"msg_date": "Sat, 30 Apr 2022 18:40:46 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "\n\nOn 4/30/22 11:28, Alvaro Herrera wrote:\n> On 2022-Apr-28, Tomas Vondra wrote:\n> \n>> Attached is a patch doing the same thing in tablesync. The overall idea\n>> is to generate copy statement with CASE expressions, applying filters to\n>> individual columns. For Alvaro's example, this generates something like\n>>\n>> SELECT\n>> (CASE WHEN (a < 0) OR (a > 0) THEN a ELSE NULL END) AS a,\n>> (CASE WHEN (a > 0) THEN b ELSE NULL END) AS b,\n>> (CASE WHEN (a < 0) THEN c ELSE NULL END) AS c\n>> FROM uno WHERE (a < 0) OR (a > 0)\n> \n> I've been reading the tablesync.c code you propose and the idea seems\n> correct. (I was distracted by wondering if a different data structure\n> would be more appropriate, because what's there looks slightly\n> uncomfortable to work with. But after playing around I can't find\n> anything that feels better in an obvious way.)\n> \n> (I confess I'm a bit bothered by the fact that there are now three\n> different data structures in our code called PublicationInfo).\n> \n\nTrue. I haven't really thought about naming of the data structures, so\nmaybe we should name them differently.\n\n> I propose some comment changes in the attached patch, and my\n> interpretation (untested) of the idea of optimizing for a single\n> publication. (In there I also rename logicalrep_relmap_free_entry\n> because it's confusing. That should be a separate patch but I didn't\n> split it before posting, apologies.)\n> \n>> There's a couple options how we might optimize this for common cases.\n>> For example if there's just a single publication, there's no need to\n>> generate the CASE expressions - the WHERE filter will do the trick.\n> \n> Right.\n> \n\nOK, now that we agree on the approach in general, I'll look into these\noptimizations (and the comments from your patch).\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 1 May 2022 22:53:46 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "\n\nOn 4/29/22 07:05, Amit Kapila wrote:\n> On Thu, Apr 28, 2022 at 5:56 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 27.04.22 12:33, Amit Kapila wrote:\n>>> Currently, when the subscription has multiple publications, we combine\n>>> the objects, and actions of those publications. It happens for\n>>> 'publish_via_partition_root', publication actions, tables, column\n>>> lists, or row filters. I think the whole design works on this idea\n>>> even the initial table sync. I think it might need a major change\n>>> (which I am not sure about at this stage) if we want to make the\n>>> initial sync also behave similar to what you are proposing.\n>>\n>> If one publication says \"publish if insert\" and another publication says\n>> \"publish if update\", then the combination of that is clearly \"publish if\n>> insert or update\". Similarly, if one publication says \"WHERE (foo)\" and\n>> one says \"WHERE (bar)\", then the combination is \"WHERE (foo OR bar)\".\n>>\n>> But if one publication says \"publish columns a and b if condition-X\" and\n>> another publication says \"publish columns a and c if not-condition-X\",\n>> then the combination is clearly *not* \"publish columns a, b, c if true\".\n>> That is not logical, in the literal sense of that word.\n>>\n> \n> So, what should be the behavior in the below cases:\n> \n> Case-1:\n> pub1: \"publish columns a and b if condition-X\"\n> pub2: \"publish column c if condition-X\"\n> \n> Isn't it okay to combine these?\n> \n\nYes, I think it's reasonable to combine those. 
So the whole\npublication\nwill have\n\n WHERE (condition-X)\n\nand the column list will be (a,b,c).\n\n> Case-2:\n> pub1: \"publish columns a and b if condition-X\"\n> pub2: \"publish columns c if condition-Y\"\n> \n\nIn this case the publication will have\n\n WHERE (condition-X or condition-Y)\n\nand there will be different column filters for different row sets:\n\n if (condition-X and condition-Y)\n => (a,b,c)\n else if (condition-X and NOT condition-Y)\n => (a,b)\n else if (condition-Y and NOT condition-X)\n => (c)\n\nI think this behavior is reasonable, and it's what the patch does.\n\n> Here Y is subset of condition X (say something like condition-X:\n> \"col > 5\" and condition-Y: \"col1 > 10\").>\n> What should we do in such a case?\n> \n> I think if there are some cases where combining them is okay but in\n> other cases, it is not okay then it is better to prohibit 'not-okay'\n> cases if that is feasible.\n> \n\nNot sure I understand what's the (supposed) issue with this example.\nWe'll simply do this:\n\n if (col1 > 5 and col1 > 10)\n => (a,b,c)\n else if (col1 > 5 and col1 <= 10)\n => (a,b)\n else if (col1 > 10 and col1 <= 5)\n => (c)\n\nObviously, the third branch is unreachable, because the if condition can\nnever be satisfied, so we can never see only column list (c). But that's\nfine IMO. When phrased using the CASE expressions (as in tablesync) it's\nprobably somewhat less cumbersome.\n\nI think it's easier to think about this using \"data redaction\" example\nwhere you specify which columns can be replicated under what condition.\nObviously, that's \"orthogonal\" in the sense that we specify column list\nfor a row filter condition, not row filter for a column. 
But in principle\nit's the same thing, just different grammar.\n\nAnd in that case it makes perfect sense that you don't blindly combine\nthe column lists from all publications, because that'd defeat the whole\npoint of filtering columns based on row filters.\n\nImagine you have a table with customers from different regions, and you want\nto replicate the data somewhere else, but for some reason you can only\nreplicate details for one particular region, and a subset of columns for\neveryone else. So you'd do something like this:\n\nCREATE PUBLICATION p1 FOR TABLE customers (... all columns ...)\n WHERE region = 'USA';\n\nCREATE PUBLICATION p2 FOR TABLE customers (... subset of columns ...)\n WHERE region != 'USA';\n\nI think ignoring the row filters and just merging the column lists makes\nno sense for this use case.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 1 May 2022 23:42:34 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 4/30/22 12:11, Amit Kapila wrote:\n> On Sat, Apr 30, 2022 at 3:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>\n>> On 2022-Apr-30, Amit Kapila wrote:\n>>\n>>> On Sat, Apr 30, 2022 at 2:02 AM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>>>> That seems to deal with a circular replication, i.e. two logical\n>>>> replication links - a bit like a multi-master. Not sure how is that\n>>>> related to the issue we're discussing here?\n>>>\n>>> It is not directly related to what we are discussing here but I was\n>>> trying to emphasize the point that users need to define the logical\n>>> replication via pub/sub sanely otherwise they might see some weird\n>>> behaviors like that.\n>>\n>> I agree with that.\n>>\n>> My proposal is that if users want to define multiple publications, and\n>> their definitions conflict in a way that would behave ridiculously (==\n>> bound to cause data inconsistencies eventually), an error should be\n>> thrown. Maybe we will not be able to catch all bogus cases, but we can\n>> be prepared for the most obvious ones, and patch later when we find\n>> others.\n>>\n> \n> I agree with throwing errors for obvious/known bogus cases but do we\n> want to throw errors or restrict the combining of column lists when\n> row filters are present in all cases? See some examples [1 ] where it\n> may be valid to combine them.\n> \n\nI think there are three challenges:\n\n(a) Deciding what's an obvious bug or an unsupported case (e.g. because\nit's not clear what's the correct behavior / way to merge column lists).\n\n(b) When / where to detect the issue.\n\n(c) Making sure this does not break/prevent existing use cases.\n\n\nAs I said before [1], I think the issue stems from essentially allowing\nDML to have different row filters / column lists. So we could forbid\npublications to specify WITH (publish=...) 
and one of the two features,\nor make sure subscription does not combine multiple such publications.\n\nThe second option has the annoying consequence that it makes this\nuseless for the \"data redaction\" use case I described in [2], because\nthat relies on combining multiple publications.\n\nFurthermore, what if the publications change after the subscriptions get\ncreated? Will we be able to detect the error etc.?\n\nSo I'd prefer the first option, but maybe that prevents some useful use\ncases too ...\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/45d27a8a-7c7a-88e8-a3db-c7c1d144df5e%40enterprisedb.com\n\n[2]\nhttps://www.postgresql.org/message-id/338e719c-4bc8-f40a-f701-e29543a264e4%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 1 May 2022 23:57:16 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Mon, May 2, 2022 at 3:27 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 4/30/22 12:11, Amit Kapila wrote:\n> > On Sat, Apr 30, 2022 at 3:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >>\n> >> My proposal is that if users want to define multiple publications, and\n> >> their definitions conflict in a way that would behave ridiculously (==\n> >> bound to cause data inconsistencies eventually), an error should be\n> >> thrown. Maybe we will not be able to catch all bogus cases, but we can\n> >> be prepared for the most obvious ones, and patch later when we find\n> >> others.\n> >>\n> >\n> > I agree with throwing errors for obvious/known bogus cases but do we\n> > want to throw errors or restrict the combining of column lists when\n> > row filters are present in all cases? See some examples [1 ] where it\n> > may be valid to combine them.\n> >\n>\n> I think there are three challenges:\n>\n> (a) Deciding what's an obvious bug or an unsupported case (e.g. because\n> it's not clear what's the correct behavior / way to merge column lists).\n>\n> (b) When / where to detect the issue.\n>\n> (c) Making sure this does not break/prevent existing use cases.\n>\n>\n> As I said before [1], I think the issue stems from essentially allowing\n> DML to have different row filters / column lists. So we could forbid\n> publications to specify WITH (publish=...) and one of the two features,\n>\n\nI don't think this is feasible for row filters because that would mean\npublishing all actions because we have a restriction that all columns\nreferenced in the row filter expression are part of the REPLICA\nIDENTITY index. This restriction is only valid for updates/deletes, so\nif we allow all pubactions then this will be imposed on inserts as\nwell. A similar restriction is there for column lists as well, so I\ndon't think we can do it there as well. 
Do you have some idea to\naddress it?\n\n> or make sure subscription does not combine multiple such publications.\n>\n\nYeah, or don't allow to define such publications in the first place so\nthat different subscriptions can't combine them but I guess that might\nforbid some useful cases as well where publication may not get\ncombined with other publications.\n\n> The second option has the annoying consequence that it makes this\n> useless for the \"data redaction\" use case I described in [2], because\n> that relies on combining multiple publications.\n>\n\nTrue, but as a workaround users can create different subscriptions for\ndifferent publications.\n\n> Furthermore, what if the publications change after the subscriptions get\n> created? Will we be able to detect the error etc.?\n>\n\nI think from that apart from 'Create Subscription', the same check\nneeds to be added for Alter Subscription ... Refresh, Alter\nSubscription ... Enable.\n\nIn the publication side, we need an additional check in Alter\nPublication ... SET table variant. One idea is that we get all other\npublications for which the corresponding relation is defined. And then\nif we find anything which we don't want to allow then we can throw an\nerror. This will forbid some useful cases as well as mentioned above.\nSo, the other possibility is to expose all publications for a\nwalsender, and then we can find the exact set of publications where\nthe current publication is used with other publications and we can\ncheck only those publications. So, if we have three walsenders\n(walsnd1: pub1, pub2; walsnd2 pub2; walsnd3: pub2, pub3) in the system\nand we are currently altering publication pub1 then we need to check\nonly pub3 for any conflicting conditions. 
Yet another simple way could\nbe that we don't allow to change column list via Alter Publication ...\nSet variant because the other variants anyway need REFRESH publication\nwhich we have covered.\n\nI think it is tricky to decide what exactly we want to forbid, so, we\nmay want to follow something simple like if the column list and row\nfilters for a table are different in the required set of publications\nthen we treat it as an unsupported case. I think this will prohibit\nsome useful cases but should probably forbid the cases we are worried\nabout here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 2 May 2022 11:01:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Mon, May 2, 2022 at 11:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 2, 2022 at 3:27 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > On 4/30/22 12:11, Amit Kapila wrote:\n> > > On Sat, Apr 30, 2022 at 3:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >>\n> > >> My proposal is that if users want to define multiple publications, and\n> > >> their definitions conflict in a way that would behave ridiculously (==\n> > >> bound to cause data inconsistencies eventually), an error should be\n> > >> thrown. Maybe we will not be able to catch all bogus cases, but we can\n> > >> be prepared for the most obvious ones, and patch later when we find\n> > >> others.\n> > >>\n> > >\n> > > I agree with throwing errors for obvious/known bogus cases but do we\n> > > want to throw errors or restrict the combining of column lists when\n> > > row filters are present in all cases? See some examples [1 ] where it\n> > > may be valid to combine them.\n> > >\n> >\n> > I think there are three challenges:\n> >\n> > (a) Deciding what's an obvious bug or an unsupported case (e.g. because\n> > it's not clear what's the correct behavior / way to merge column lists).\n> >\n> > (b) When / where to detect the issue.\n> >\n> > (c) Making sure this does not break/prevent existing use cases.\n> >\n> >\n> > As I said before [1], I think the issue stems from essentially allowing\n> > DML to have different row filters / column lists. So we could forbid\n> > publications to specify WITH (publish=...) and one of the two features,\n> >\n>\n> I don't think this is feasible for row filters because that would mean\n> publishing all actions because we have a restriction that all columns\n>\n\nRead the above sentence as: \"publishing all actions and we have a\nrestriction that all columns ...\"\n\n> referenced in the row filter expression are part of the REPLICA\n> IDENTITY index. 
This restriction is only valid for updates/deletes, so\n> if we allow all pubactions then this will be imposed on inserts as\n> well. A similar restriction is there for column lists as well, so I\n> don't think we can do it there as well. Do you have some idea to\n> address it?\n>\n> > or make sure subscription does not combine multiple such publications.\n> >\n>\n> Yeah, or don't allow to define such publications in the first place so\n> that different subscriptions can't combine them but I guess that might\n> forbid some useful cases as well where publication may not get\n> combined with other publications.\n>\n> > The second option has the annoying consequence that it makes this\n> > useless for the \"data redaction\" use case I described in [2], because\n> > that relies on combining multiple publications.\n> >\n>\n> True, but as a workaround users can create different subscriptions for\n> different publications.\n>\n> > Furthermore, what if the publications change after the subscriptions get\n> > created? Will we be able to detect the error etc.?\n> >\n>\n> I think from that apart from 'Create Subscription', the same check\n> needs to be added for Alter Subscription ... Refresh, Alter\n> Subscription ... Enable.\n>\n> In the publication side, we need an additional check in Alter\n> Publication ... SET table variant. One idea is that we get all other\n> publications for which the corresponding relation is defined. And then\n> if we find anything which we don't want to allow then we can throw an\n> error. This will forbid some useful cases as well as mentioned above.\n> So, the other possibility is to expose all publications for a\n> walsender, and then we can find the exact set of publications where\n> the current publication is used with other publications and we can\n> check only those publications. 
So, if we have three walsenders\n> (walsnd1: pub1, pub2; walsnd2 pub2; walsnd3: pub2, pub3) in the system\n> and we are currently altering publication pub1 then we need to check\n> only pub3 for any conflicting conditions.\n>\n\nTypo, it should be pub2 instead of pub3 in the above sentence.\n\n> Yet another simple way could\n> be that we don't allow to change column list via Alter Publication ...\n> Set variant because the other variants anyway need REFRESH publication\n> which we have covered.\n>\n> I think it is tricky to decide what exactly we want to forbid, so, we\n> may want to follow something simple like if the column list and row\n> filters for a table are different in the required set of publications\n> then we treat it as an unsupported case. I think this will prohibit\n> some useful cases but should probably forbid the cases we are worried\n> about here.\n>\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 2 May 2022 11:06:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "\n\nOn 5/2/22 07:31, Amit Kapila wrote:\n> On Mon, May 2, 2022 at 3:27 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 4/30/22 12:11, Amit Kapila wrote:\n>>> On Sat, Apr 30, 2022 at 3:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>>>\n>>>> My proposal is that if users want to define multiple publications, and\n>>>> their definitions conflict in a way that would behave ridiculously (==\n>>>> bound to cause data inconsistencies eventually), an error should be\n>>>> thrown. Maybe we will not be able to catch all bogus cases, but we can\n>>>> be prepared for the most obvious ones, and patch later when we find\n>>>> others.\n>>>>\n>>>\n>>> I agree with throwing errors for obvious/known bogus cases but do we\n>>> want to throw errors or restrict the combining of column lists when\n>>> row filters are present in all cases? See some examples [1 ] where it\n>>> may be valid to combine them.\n>>>\n>>\n>> I think there are three challenges:\n>>\n>> (a) Deciding what's an obvious bug or an unsupported case (e.g. because\n>> it's not clear what's the correct behavior / way to merge column lists).\n>>\n>> (b) When / where to detect the issue.\n>>\n>> (c) Making sure this does not break/prevent existing use cases.\n>>\n>>\n>> As I said before [1], I think the issue stems from essentially allowing\n>> DML to have different row filters / column lists. So we could forbid\n>> publications to specify WITH (publish=...) and one of the two features,\n>>\n> \n> I don't think this is feasible for row filters because that would mean\n> publishing all actions because we have a restriction that all columns\n> referenced in the row filter expression are part of the REPLICA\n> IDENTITY index. This restriction is only valid for updates/deletes, so\n> if we allow all pubactions then this will be imposed on inserts as\n> well. A similar restriction is there for column lists as well, so I\n> don't think we can do it there as well. 
Do you have some idea to\n> address it?\n> \n\nNo, I haven't thought about how exactly to implement this, and I have\nnot thought about how to deal with the replica identity issues. My\nthoughts were that we'd only really need this for tables with row\nfilters and/or column lists, treating it as a cost of those features.\n\nBut yeah, it seems annoying.\n\n>> or make sure subscription does not combine multiple such publications.\n>>\n> \n> Yeah, or don't allow to define such publications in the first place so\n> that different subscriptions can't combine them but I guess that might\n> forbid some useful cases as well where publication may not get\n> combined with other publications.\n> \n\nBut how would you check that? You don't know which publications will be\ncombined by a subscription until you create the subscription, right?\n\n>> The second option has the annoying consequence that it makes this\n>> useless for the \"data redaction\" use case I described in [2], because\n>> that relies on combining multiple publications.\n>>\n> \n> True, but as a workaround users can create different subscriptions for\n> different publications.\n> \n\nWon't that replicate duplicate data, when the row filters are not\nmutually exclusive?\n\n>> Furthermore, what if the publications change after the subscriptions get\n>> created? Will we be able to detect the error etc.?\n>>\n> \n> I think from that apart from 'Create Subscription', the same check\n> needs to be added for Alter Subscription ... Refresh, Alter\n> Subscription ... Enable.\n> \n> In the publication side, we need an additional check in Alter\n> Publication ... SET table variant. One idea is that we get all other\n> publications for which the corresponding relation is defined. And then\n> if we find anything which we don't want to allow then we can throw an\n> error. 
This will forbid some useful cases as well as mentioned above.\n> So, the other possibility is to expose all publications for a\n> walsender, and then we can find the exact set of publications where\n> the current publication is used with other publications and we can\n> check only those publications. So, if we have three walsenders\n> (walsnd1: pub1, pub2; walsnd2 pub2; walsnd3: pub2, pub3) in the system\n> and we are currently altering publication pub1 then we need to check\n> only pub3 for any conflicting conditions. Yet another simple way could\n> be that we don't allow to change column list via Alter Publication ...\n> Set variant because the other variants anyway need REFRESH publication\n> which we have covered.\n> \n> I think it is tricky to decide what exactly we want to forbid, so, we\n> may want to follow something simple like if the column list and row\n> filters for a table are different in the required set of publications\n> then we treat it as an unsupported case. I think this will prohibit\n> some useful cases but should probably forbid the cases we are worried\n> about here.\n> \n\nI don't have a clear idea on what the right tradeoff is :-(\n\nMaybe we're digressing a bit from the stuff Alvaro complained about\ninitially. Arguably the existing column list behavior is surprising and\nwould not work with reasonable use cases. So let's fix it.\n\nBut maybe you're right validating row filters is a step too far. Yes,\nusers may define strange combinations of publications, but is that\nreally an issue we have to solve?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 2 May 2022 11:35:00 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 2022-May-02, Tomas Vondra wrote:\n> On 5/2/22 07:31, Amit Kapila wrote:\n\n> > Yeah, or don't allow to define such publications in the first place so\n> > that different subscriptions can't combine them but I guess that might\n> > forbid some useful cases as well where publication may not get\n> > combined with other publications.\n> \n> But how would you check that? You don't know which publications will be\n> combined by a subscription until you create the subscription, right?\n\n... and I think this poses a problem: if the publisher has multiple\npublications and the subscriber later uses those to create a combined\nsubscription, we can check at CREATE SUBSCRIPTION time that they can be\ncombined correctly. But if the publisher decides to change the\npublications changing the rules and they are no longer consistent, can\nwe throw an error at ALTER PUBLICATION point? If the publisher can\ndetect that they are being used together by some subscription, then\nmaybe we can check consistency in the publication side and everything is\nall right. But I'm not sure that the publisher knows who is subscribed\nto what, so this might not be an option.\n\nThe latter ultimately means that we aren't sure that a combined\nsubscription is safe. And in turn this means that a pg_dump of such a\ndatabase cannot be restored (because the CREATE SUBSCRIPTION will be\nrejected as being inconsistent).\n\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 2 May 2022 12:17:54 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "\n\nOn 5/2/22 12:17, Alvaro Herrera wrote:\n> On 2022-May-02, Tomas Vondra wrote:\n>> On 5/2/22 07:31, Amit Kapila wrote:\n> \n>>> Yeah, or don't allow to define such publications in the first place so\n>>> that different subscriptions can't combine them but I guess that might\n>>> forbid some useful cases as well where publication may not get\n>>> combined with other publications.\n>>\n>> But how would you check that? You don't know which publications will be\n>> combined by a subscription until you create the subscription, right?\n> \n> ... and I think this poses a problem: if the publisher has multiple\n> publications and the subscriber later uses those to create a combined\n> subscription, we can check at CREATE SUBSCRIPTION time that they can be\n> combined correctly. But if the publisher decides to change the\n> publications changing the rules and they are no longer consistent, can\n> we throw an error at ALTER PUBLICATION point? If the publisher can\n> detect that they are being used together by some subscription, then\n> maybe we can check consistency in the publication side and everything is\n> all right. But I'm not sure that the publisher knows who is subscribed\n> to what, so this might not be an option.\n> \n\nAFAIK we don't track that (publication/subscription mapping). The\npublications are listed in publication_names parameter of the\nSTART_REPLICATION command.\n\n> The latter ultimately means that we aren't sure that a combined\n> subscription is safe. And in turn this means that a pg_dump of such a\n> database cannot be restored (because the CREATE SUBSCRIPTION will be\n> rejected as being inconsistent).\n> \n\nWe could do this check when executing the START_REPLICATION command, no?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 2 May 2022 12:23:16 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
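[Editor's note] To make the START_REPLICATION-time idea above concrete, here is a minimal sketch in plain Python (not PostgreSQL source). The catalog layout, function name, and error wording are all illustrative assumptions; the point is only that the combination of publications named in publication_names can be validated for conflicting column lists at connection time.

```python
# Hypothetical sketch: validate the publications requested by a
# subscriber at START_REPLICATION time. `catalog` maps publication
# name -> {table: frozenset of columns}, with None meaning "all
# columns". Raises on the first table whose column list differs
# between two of the requested publications.

def check_combined_publications(publication_names, catalog):
    seen = {}  # table -> (column list, publication that introduced it)
    for pub in publication_names:
        for table, cols in catalog[pub].items():
            if table in seen and seen[table][0] != cols:
                raise ValueError(
                    f"publications {seen[table][1]!r} and {pub!r} use "
                    f"different column lists for table {table!r}")
            seen.setdefault(table, (cols, pub))
    # return the agreed-upon column list per table
    return {t: cols for t, (cols, _) in seen.items()}
```

Under this sketch, a replica asking for a consistent set of publications connects normally, while an inconsistent set fails exactly the way discussed above: the error surfaces when the walsender starts, not when the publications are defined.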
{
"msg_contents": "On 2022-May-02, Tomas Vondra wrote:\n\n> On 5/2/22 12:17, Alvaro Herrera wrote:\n\n> > The latter ultimately means that we aren't sure that a combined\n> > subscription is safe. And in turn this means that a pg_dump of such a\n> > database cannot be restored (because the CREATE SUBSCRIPTION will be\n> > rejected as being inconsistent).\n> \n> We could do this check when executing the START_REPLICATION command, no?\n\nAh! That sounds like it might work: we throw WARNINGs at CREATE\nSUBSCRIPTION (so that users are immediately aware in case something is\ngoing to fail later, but the objects are still created and they can fix\nthe publications afterwards), but the real ERROR is in START_REPLICATION.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Uno puede defenderse de los ataques; contra los elogios se esta indefenso\"\n\n\n",
"msg_date": "Mon, 2 May 2022 12:55:10 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Mon, May 2, 2022 at 3:53 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/2/22 12:17, Alvaro Herrera wrote:\n> > On 2022-May-02, Tomas Vondra wrote:\n> >> On 5/2/22 07:31, Amit Kapila wrote:\n> >\n> >>> Yeah, or don't allow to define such publications in the first place so\n> >>> that different subscriptions can't combine them but I guess that might\n> >>> forbid some useful cases as well where publication may not get\n> >>> combined with other publications.\n> >>\n> >> But how would you check that? You don't know which publications will be\n> >> combined by a subscription until you create the subscription, right?\n> >\n\nYeah, I was thinking to check for all publications where the same\nrelation is published but as mentioned that may not be a very good\noption as that would unnecessarily block many valid cases.\n\n> > ... and I think this poses a problem: if the publisher has multiple\n> > publications and the subscriber later uses those to create a combined\n> > subscription, we can check at CREATE SUBSCRIPTION time that they can be\n> > combined correctly. But if the publisher decides to change the\n> > publications changing the rules and they are no longer consistent, can\n> > we throw an error at ALTER PUBLICATION point? If the publisher can\n> > detect that they are being used together by some subscription, then\n> > maybe we can check consistency in the publication side and everything is\n> > all right. But I'm not sure that the publisher knows who is subscribed\n> > to what, so this might not be an option.\n> >\n>\n> AFAIK we don't track that (publication/subscription mapping). The\n> publications are listed in publication_names parameter of the\n> START_REPLICATION command.\n>\n\nWe don't do that currently but we can as mentioned in my previous\nemail [1]. Let me write the relevant part again. We need to expose all\npublications for a walsender, and then we can find the exact set of\npublications where the current publication is used with other\npublications and we can check only those publications. So, if we have\nthree walsenders (walsnd1: pub1, pub2; walsnd2 pub2; walsnd3: pub2,\npub3) in the system and we are currently altering publication pub1\nthen we need to check only pub3 for any conflicting conditions.\n\nI think it is possible to expose a list of publications for each\nwalsender as it is stored in each walsender's\nLogicalDecodingContext->output_plugin_private. AFAIK, each walsender\ncan have one such LogicalDecodingContext and we can probably share it\nvia shared memory?\n\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LGX-ig%3D%3DQyL%2B%3D%3DnKvcAS3qFU7%3DNiKL77ukUT-Q_4XncQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 2 May 2022 16:44:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Mon, May 2, 2022 at 3:05 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/2/22 07:31, Amit Kapila wrote:\n> > On Mon, May 2, 2022 at 3:27 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n>\n> >> The second option has the annoying consequence that it makes this\n> >> useless for the \"data redaction\" use case I described in [2], because\n> >> that relies on combining multiple publications.\n> >>\n> >\n> > True, but as a workaround users can create different subscriptions for\n> > different publications.\n> >\n>\n> Won't that replicate duplicate data, when the row filters are not\n> mutually exclusive?\n>\n\nTrue, but this is a recommendation for mutually exclusive data, and as\nfar as I can understand the example given by you [1] and Alvaro has\nmutually exclusive conditions. In your example, one of the\npublications has a condition (region = 'USA') and the other\npublication has a condition (region != 'USA'), so will there be a\nproblem in using different subscriptions for such cases?\n\n[1] - https://www.postgresql.org/message-id/338e719c-4bc8-f40a-f701-e29543a264e4@enterprisedb.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 2 May 2022 16:53:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 2022-May-02, Amit Kapila wrote:\n\n> We don't do that currently but we can as mentioned in my previous\n> email [1]. Let me write the relevant part again. We need to expose all\n> publications for a walsender, and then we can find the exact set of\n> publications where the current publication is used with other\n> publications and we can check only those publications. So, if we have\n> three walsenders (walsnd1: pub1, pub2; walsnd2 pub2; walsnd3: pub2,\n> pub3) in the system and we are currently altering publication pub1\n> then we need to check only pub3 for any conflicting conditions.\n\nHmm ... so what happens in the current system, if you have a running\nwalsender and modify the publication concurrently? Will the subscriber\nstart getting the changes with the new publication definition, at some\narbitrary point in the middle of their stream? If that's what we do,\nmaybe we should have a signalling system which disconnects all\nwalsenders using that publication, so that they can connect and receive\nthe new definition.\n\nI don't see anything in the publication DDL that interacts with\nwalsenders -- perhaps I'm overlooking something.\n\n> I think it is possible to expose a list of publications for each\n> walsender as it is stored in each walsender's\n> LogicalDecodingContext->output_plugin_private. AFAIK, each walsender\n> can have one such LogicalDecodingContext and we can probably share it\n> via shared memory?\n\nI guess we need to create a DSM each time a walsender opens a\nconnection, at START_REPLICATION time. Then ALTER PUBLICATION needs to\nconnect to all DSMs of all running walsenders and see if they are\nreading from it. Is that what you have in mind? Alternatively, we\ncould have one DSM per publication with a PID array of all walsenders\nthat are sending it (each walsender needs to add its PID as it starts).\nThe latter might be better.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La rebeldía es la virtud original del hombre\" (Arthur Schopenhauer)\n\n\n",
"msg_date": "Mon, 2 May 2022 13:44:15 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 2022-Apr-28, Tomas Vondra wrote:\n\n> SELECT\n> (CASE WHEN (a < 0) OR (a > 0) THEN a ELSE NULL END) AS a,\n> (CASE WHEN (a > 0) THEN b ELSE NULL END) AS b,\n> (CASE WHEN (a < 0) THEN c ELSE NULL END) AS c\n> FROM uno WHERE (a < 0) OR (a > 0)\n\nBTW, looking at the new COPY commands, the idea of \"COPY table_foo\n(PUBLICATION pub1, pub2)\" is looking more and more attractive, as a\nreplacement for having the replica cons up an ad-hoc subquery to COPY\nfrom. Something to think about for pg16, maybe.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"You're _really_ hosed if the person doing the hiring doesn't understand\nrelational systems: you end up with a whole raft of programmers, none of\nwhom has had a Date with the clue stick.\" (Andrew Sullivan)\n\n\n",
"msg_date": "Mon, 2 May 2022 18:30:45 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
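[Editor's note] The ad-hoc subquery quoted two messages up (the CASE-based SELECT that NULLs out columns a row's matching publications do not replicate) can be generated mechanically from the per-publication column lists and row filters. Here is a sketch of that generation in Python; it is not the actual tablesync code, and details such as identifier quoting are deliberately glossed over.

```python
# Hypothetical sketch: build the initial-sync query for one table from
# a list of (column_set, row_filter) pairs, where column_set is a set
# of column names (None = all columns) and row_filter is a SQL boolean
# expression string (None = no filter).

def build_copy_query(table, all_columns, pubs):
    items = []
    for col in all_columns:
        # row filters of the publications that replicate this column
        filters = [f for cols, f in pubs if cols is None or col in cols]
        if not filters:
            continue  # no publication replicates this column at all
        if None in filters:
            items.append(col)  # some matching publication has no filter
        else:
            cond = " OR ".join(f"({f})" for f in filters)
            items.append(f"(CASE WHEN {cond} THEN {col} ELSE NULL END) AS {col}")
    query = "SELECT\n  " + ",\n  ".join(items) + f"\nFROM {table}"
    row_filters = [f for _, f in pubs]
    if None not in row_filters:  # every publication filters rows
        query += " WHERE " + " OR ".join(f"({f})" for f in row_filters)
    return query
```

Fed the two publications from Tomas's `uno` example (columns {a, b} with filter `a > 0`, and columns {a, c} with filter `a < 0`), this reproduces the quoted query, which also illustrates why a first-class `COPY table (PUBLICATION ...)` would be a tidier interface than having the replica cons up this string.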
{
"msg_contents": "\n\nOn 5/2/22 13:44, Alvaro Herrera wrote:\n> On 2022-May-02, Amit Kapila wrote:\n> \n>> We don't do that currently but we can as mentioned in my previous\n>> email [1]. Let me write the relevant part again. We need to expose all\n>> publications for a walsender, and then we can find the exact set of\n>> publications where the current publication is used with other\n>> publications and we can check only those publications. So, if we have\n>> three walsenders (walsnd1: pub1, pub2; walsnd2 pub2; walsnd3: pub2,\n>> pub3) in the system and we are currently altering publication pub1\n>> then we need to check only pub3 for any conflicting conditions.\n> \n> Hmm ... so what happens in the current system, if you have a running\n> walsender and modify the publication concurrently? Will the subscriber\n> start getting the changes with the new publication definition, at some\n> arbitrary point in the middle of their stream? If that's what we do,\n> maybe we should have a signalling system which disconnects all\n> walsenders using that publication, so that they can connect and receive\n> the new definition.\n> \n> I don't see anything in the publication DDL that interacts with\n> walsenders -- perhaps I'm overlooking something.\n> \n\npgoutput.c relies on relcache callbacks to get notified of changes.\nSee the stuff that touches replicate_valid and publications_valid. So\nthe walsender should notice the changes immediately.\n\nMaybe you have some particular case in mind, though?\n\n\n>> I think it is possible to expose a list of publications for each\n>> walsender as it is stored in each walsender's\n>> LogicalDecodingContext->output_plugin_private. AFAIK, each walsender\n>> can have one such LogicalDecodingContext and we can probably share it\n>> via shared memory?\n> \n> I guess we need to create a DSM each time a walsender opens a\n> connection, at START_REPLICATION time. Then ALTER PUBLICATION needs to\n> connect to all DSMs of all running walsenders and see if they are\n> reading from it. Is that what you have in mind? Alternatively, we\n> could have one DSM per publication with a PID array of all walsenders\n> that are sending it (each walsender needs to add its PID as it starts).\n> The latter might be better.\n> \n\nI don't quite follow what we're trying to build here. The walsender\nalready knows which publications it works with - how else would\npgoutput.c know that? So the walsender should be able to validate the\nstuff it's supposed to replicate is OK.\n\nWhy would we need to know publications replicated by other walsenders?\nAnd what if the subscriber is not connected at the moment? In that case\nthere'll be no walsender.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 2 May 2022 19:36:54 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 2022-May-02, Tomas Vondra wrote:\n\n> pgoutput.c relies on relcache callbacks to get notified of changes.\n> See the stuff that touches replicate_valid and publications_valid. So\n> the walsender should notice the changes immediately.\n\nHmm, I suppose that makes any changes easy enough to detect. We don't\nneed a separate signalling mechanism.\n\nBut it does mean that the walsender needs to test the consistency of\n[rowfilter, column list, published actions] whenever they change for any\nof the current publications and it is working for more than one, and\ndisconnect if the combination no longer complies with the rules. By the\nnext time the replica tries to connect, START_REPLICATION will throw the\nerror.\n\n> Why would we need to know publications replicated by other walsenders?\n> And what if the subscriber is not connected at the moment? In that case\n> there'll be no walsender.\n\nSure, if the replica is not connected then there's no issue -- as you\nsay, that replica will fail at START_REPLICATION time.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La gente vulgar sólo piensa en pasar el tiempo;\nel que tiene talento, en aprovecharlo\"\n\n\n",
"msg_date": "Mon, 2 May 2022 19:51:54 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "\n\nOn 5/2/22 13:23, Amit Kapila wrote:\n> On Mon, May 2, 2022 at 3:05 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 5/2/22 07:31, Amit Kapila wrote:\n>>> On Mon, May 2, 2022 at 3:27 AM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>\n>>>> The second option has the annoying consequence that it makes this\n>>>> useless for the \"data redaction\" use case I described in [2], because\n>>>> that relies on combining multiple publications.\n>>>>\n>>>\n>>> True, but as a workaround users can create different subscriptions for\n>>> different publications.\n>>>\n>>\n>> Won't that replicate duplicate data, when the row filters re not\n>> mutually exclusive?\n>>\n> \n> True, but this is a recommendation for mutually exclusive data, and as\n> far as I can understand the example given by you [1] and Alvaro has\n> mutually exclusive conditions. In your example, one of the\n> publications has a condition (region = 'USA') and the other\n> publication has a condition (region != 'USA'), so will there be a\n> problem in using different subscriptions for such cases?\n> \n\nI kept that example intentionally simple, but I'm sure we could come up\nwith more complex use cases. Following the \"data redaction\" idea, we\ncould also apply the \"deny all\" approach, and do something like this:\n\n-- replicate the minimal column list by default (replica identity)\nCREATE PUBLICATION p1 FOR TABLE t (id, region);\n\n-- replicate more columns for the selected region\nCREATE PUBLICATION p2 FOR TABLE t (...) WHERE (region = 'USA')\n\nNow, I admit this is something I just made up, but I think it seems like\na pretty common approach.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 2 May 2022 20:37:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
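[Editor's note] A short sketch of the row-filter-aware merging this "deny all" example relies on, as described upthread for the v2 patch: for each row, a column is replicated iff at least one publication both matches the row and lists the column. This is illustrative Python, not the patch itself, and the column names (`name`, `card`) are made up for the example.

```python
# Hypothetical sketch of per-row column merging. `pubs` is a list of
# (column_set, predicate) pairs; predicate is a function of the row
# dict, or None meaning "no row filter". Returns the union of the
# column sets of the matching publications, or None when no
# publication matches (the row is not replicated at all).

def replicated_columns(row, pubs):
    cols, matched = set(), False
    for colset, pred in pubs:
        if pred is None or pred(row):
            matched = True
            cols |= colset
    return cols if matched else None
```

With p1 publishing the minimal (id, region) list unconditionally and p2 publishing all columns only WHERE region = 'USA', rows for the selected region carry the full column set while everyone else is reduced to the replica identity, which is exactly the behaviour the example wants.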
{
"msg_contents": "\n\nOn 5/2/22 19:51, Alvaro Herrera wrote:\n> On 2022-May-02, Tomas Vondra wrote:\n> \n>> pgoutput.c is relies on relcache callbacks to get notified of changes.\n>> See the stuff that touches replicate_valid and publications_valid. So\n>> the walsender should notice the changes immediately.\n> \n> Hmm, I suppose that makes any changes easy enough to detect. We don't\n> need a separate signalling mechanism.\n> \n> But it does mean that the walsender needs to test the consistency of\n> [rowfilter, column list, published actions] whenever they change for any\n> of the current publications and it is working for more than one, and\n> disconnect if the combination no longer complies with the rules. By the\n> next time the replica tries to connect, START_REPLICATION will throw the\n> error.\n> \n>> Why would we need to know publications replicated by other walsenders?\n>> And what if the subscriber is not connected at the moment? In that case\n>> there'll be no walsender.\n> \n> Sure, if the replica is not connected then there's no issue -- as you\n> say, that replica will fail at START_REPLICATION time.\n> \n\nRight, I got confused a bit.\n\nAnyway, I think the main challenge is defining what exactly we want to\ncheck, in order to ensure \"sensible\" behavior, without preventing way\ntoo many sensible use cases.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 2 May 2022 20:40:08 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 01.05.22 23:42, Tomas Vondra wrote:\n> Imagine have a table with customers from different regions, and you want\n> to replicate the data somewhere else, but for some reason you can only\n> replicate details for one particular region, and subset of columns for\n> everyone else. So you'd do something like this:\n> \n> CREATE PUBLICATION p1 FOR TABLE customers (... all columns ...)\n> WHERE region = 'USA';\n> \n> CREATE PUBLICATION p1 FOR TABLE customers (... subset of columns ...)\n> WHERE region != 'USA';\n> \n> I think ignoring the row filters and just merging the column lists makes\n> no sense for this use case.\n\nI'm thinking now the underlying problem is that we shouldn't combine \ncolumn lists at all. Examples like the above where you want to redact \nvalues somehow are better addressed with something like triggers or an \nactual \"column filter\" that works dynamically or some other mechanism.\n\nThe main purpose, in my mind, of column lists is if the tables \nstatically have different shapes on publisher and subscriber. Perhaps \nfor space reasons or regulatory reasons you don't want to replicate \neverything. But then it doesn't make sense to combine column lists. If \nyou decide over here that the subscriber table has this shape and over \nthere that the subscriber table has that other shape, then the \ncombination of the two will be a table that has neither shape and so \nwill not work for anything.\n\nI think in general we should be much more restrictive in how we combine \npublications. Unless we are really sure it makes sense, we should \ndisallow it. Users can always make a new publication with different \nsettings and subscribe to that directly.\n\n\n",
"msg_date": "Mon, 2 May 2022 22:34:48 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Tue, May 3, 2022 at 12:10 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/2/22 19:51, Alvaro Herrera wrote:\n> >> Why would we need to know publications replicated by other walsenders?\n> >> And what if the subscriber is not connected at the moment? In that case\n> >> there'll be no walsender.\n> >\n> > Sure, if the replica is not connected then there's no issue -- as you\n> > say, that replica will fail at START_REPLICATION time.\n> >\n>\n> Right, I got confused a bit.\n>\n> Anyway, I think the main challenge is defining what exactly we want to\n> check, in order to ensure \"sensible\" behavior, without preventing way\n> too many sensible use cases.\n>\n\nI could think of below two options:\n1. Forbid any case where column list is different for the same table\nwhen combining publications.\n2. Forbid if the column list and row filters for a table are different\nin the set of publications we are planning to combine. This means we\nwill allow combining column lists when row filters are not present or\nwhen column list is the same (we don't get anything additional by\ncombining but the idea is we won't forbid such cases) and row filters\nare different.\n\nNow, I think the points in favor of (1) are that the main purpose of\nintroducing a column list are: (a) the structure/schema of the\nsubscriber is different from the publisher, (b) want to hide sensitive\ncolumns data. In both cases, it should be fine if we follow (1) and\nfrom Peter E.'s latest email [1] he also seems to be indicating the\nsame. If we want to be slightly more relaxed then we can probably (2).\nWe can decide on something else as well but I feel it should be such\nthat it is easy to explain.\n\n[1] - https://www.postgresql.org/message-id/47dd2cb9-4e96-169f-15ac-f9407fb54d43%40enterprisedb.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 3 May 2022 09:00:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
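[Editor's note] The two options above can be sketched as predicates over the set of (column list, row filter) pairs that the combined publications define for a single table. Column lists are frozensets (None = all columns) and row filters are expression strings (None = no filter); the function names and shapes are illustrative, not proposed code.

```python
# Hypothetical sketch of the two proposed restrictions.

def option1_ok(pubs):
    """Option 1: forbid any case where the column list differs for the
    same table when combining publications."""
    return len({cols for cols, _ in pubs}) <= 1

def option2_ok(pubs):
    """Option 2: additionally allow differing column lists as long as
    none of the combined publications carries a row filter; differing
    lists together with differing row filters stay forbidden."""
    return option1_ok(pubs) or all(f is None for _, f in pubs)
```

Option 2 is strictly more permissive than option 1; everything option 1 accepts, option 2 accepts as well.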
{
"msg_contents": "On Mon, May 2, 2022 at 6:11 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-May-02, Amit Kapila wrote:\n>\n> > I think it is possible to expose a list of publications for each\n> > walsender as it is stored in each walsenders\n> > LogicalDecodingContext->output_plugin_private. AFAIK, each walsender\n> > can have one such LogicalDecodingContext and we can probably share it\n> > via shared memory?\n>\n> I guess we need to create a DSM each time a walsender opens a\n> connection, at START_REPLICATION time. Then ALTER PUBLICATION needs to\n> connect to all DSMs of all running walsenders and see if they are\n> reading from it. Is that what you have in mind?\n>\n\nYes, something on these lines. We need a way to get the list of\npublications each walsender is publishing data for.\n\n> Alternatively, we\n> could have one DSM per publication with a PID array of all walsenders\n> that are sending it (each walsender needs to add its PID as it starts).\n>\n\nI think for this we need to check DSM for all the publications and I\nfeel in general publications should be more than the number of\nwalsenders, so the previous approach seems better to me. However, any\none of these or similar ideas should be okay.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 3 May 2022 09:23:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
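[Editor's note] One possible shape for the bookkeeping discussed above, in plain Python rather than dynamic shared memory: each walsender registers the publications it serves when replication starts, so ALTER PUBLICATION can ask which other publications some active walsender combines with the one being altered. Everything here is an illustrative assumption ("combined" is read as "served by the same walsender"), not PostgreSQL code.

```python
# Hypothetical sketch of a publication -> walsender-PID registry.

class PublicationRegistry:
    def __init__(self):
        self.pids_by_pub = {}  # publication -> set of walsender PIDs
        self.pubs_by_pid = {}  # walsender PID -> set of publications

    def register(self, pid, publications):
        """Called when a walsender starts (START_REPLICATION time)."""
        self.pubs_by_pid[pid] = set(publications)
        for pub in publications:
            self.pids_by_pub.setdefault(pub, set()).add(pid)

    def unregister(self, pid):
        """Called when the walsender exits."""
        for pub in self.pubs_by_pid.pop(pid, set()):
            self.pids_by_pub[pub].discard(pid)

    def combined_with(self, publication):
        """Publications served together with `publication` by at least
        one currently active walsender."""
        combined = set()
        for pid in self.pids_by_pub.get(publication, ()):
            combined |= self.pubs_by_pid[pid]
        combined.discard(publication)
        return combined
```

This mirrors the tradeoff in the message above: keying the structure by publication makes the ALTER PUBLICATION-side lookup a single access instead of a scan over every walsender's list.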
{
"msg_contents": "\n\nOn 5/2/22 22:34, Peter Eisentraut wrote:\n> On 01.05.22 23:42, Tomas Vondra wrote:\n>> Imagine having a table with customers from different regions, and you want\n>> to replicate the data somewhere else, but for some reason you can only\n>> replicate details for one particular region, and subset of columns for\n>> everyone else. So you'd do something like this:\n>>\n>> CREATE PUBLICATION p1 FOR TABLE customers (... all columns ...)\n>> WHERE region = 'USA';\n>>\n>> CREATE PUBLICATION p1 FOR TABLE customers (... subset of columns ...)\n>> WHERE region != 'USA';\n>>\n>> I think ignoring the row filters and just merging the column lists makes\n>> no sense for this use case.\n> \n> I'm thinking now the underlying problem is that we shouldn't combine\n> column lists at all. Examples like the above where you want to redact\n> values somehow are better addressed with something like triggers or an\n> actual \"column filter\" that works dynamically or some other mechanism.\n> \n\nSo what's wrong with merging the column lists as implemented in the v2\npatch, posted a couple days ago?\n\nI don't think triggers are a suitable alternative, as they execute on the\nsubscriber node. So you have to first copy the data to the remote node,\nwhere it gets filtered. With column filters the data gets redacted on\nthe publisher.\n\n\n> The main purpose, in my mind, of column lists is if the tables\n> statically have different shapes on publisher and subscriber. Perhaps\n> for space reasons or regulatory reasons you don't want to replicate\n> everything. But then it doesn't make sense to combine column lists. If\n> you decide over here that the subscriber table has this shape and over\n> there that the subscriber table has that other shape, then the\n> combination of the two will be a table that has neither shape and so\n> will not work for anything.\n> \n\nYeah. If we intend to use column lists only to adapt to a different\nschema on the subscriber node, then maybe it'd be fine to not merge\ncolumn lists. It'd probably be reasonable to allow at least cases with\nmultiple publications using the same column list, though. In that case\nthere's no ambiguity.\n\n> I think in general we should be much more restrictive in how we combine\n> publications. Unless we are really sure it makes sense, we should\n> disallow it. Users can always make a new publication with different\n> settings and subscribe to that directly.\n\nI agree with that in principle - correct first, flexibility second. If\nthe behavior is not correct, it doesn't matter how flexible it is.\n\nI still think the data redaction use case is valid/interesting, but if\nwe want to impose some restrictions I'm OK with that, as long as it's\ndone in a way that we can relax in the future to allow that use case\n(that is, without introducing any incompatibilities).\n\nHowever, what's the definition of \"correctness\" in this context? Without\nthat it's hard to say if the restrictions make the behavior any more\ncorrect. It'd be unfortunate to impose restrictions, which will prevent\nsome use cases, only to discover we haven't actually made it correct.\n\nFor example, is it enough to restrict column lists, or does it need to\nrestrict e.g. row filters too? And does it need to consider other stuff,\nlike publications replicating different actions?\n\nFor example, if we allow different column lists (or row filters) for\ndifferent actions (one publication for insert, another one for update),\nwe still have the strange behavior described before.\n\nAnd if we force users to use separate subscriptions, I'm not sure that\nreally improves the situation for users who actually need that. They'll\ndo that, and aside from all the problems they'll also face issues with\ntiming between the two concurrent subscriptions, having to decode stuff\nmultiple times, etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 3 May 2022 21:40:04 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "\nOn 03.05.22 21:40, Tomas Vondra wrote:\n> So what's wrong with merging the column lists as implemented in the v2\n> patch, posted a couple days ago?\n\nMerging the column lists is ok if all other publication attributes \nmatch. Otherwise, I think not.\n\n> I don't think triggers are a suitable alternative, as it executes on the\n> subscriber node. So you have to first copy the data to the remote node,\n> where it gets filtered. With column filters the data gets redacted on\n> the publisher.\n\nRight, triggers are not currently a solution. But you could imagine a \nredaction filter system that runs on the publisher that modifies rows \nbefore they are sent out.\n\n\n",
"msg_date": "Wed, 4 May 2022 15:56:13 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Tuesday, May 3, 2022 11:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Tue, May 3, 2022 at 12:10 AM Tomas Vondra\r\n> <tomas.vondra@enterprisedb.com> wrote:\r\n> >\r\n> > On 5/2/22 19:51, Alvaro Herrera wrote:\r\n> > >> Why would we need to know publications replicated by other\r\n> walsenders?\r\n> > >> And what if the subscriber is not connected at the moment? In that case\r\n> > >> there'll be no walsender.\r\n> > >\r\n> > > Sure, if the replica is not connected then there's no issue -- as you\r\n> > > say, that replica will fail at START_REPLICATION time.\r\n> > >\r\n> >\r\n> > Right, I got confused a bit.\r\n> >\r\n> > Anyway, I think the main challenge is defining what exactly we want to\r\n> > check, in order to ensure \"sensible\" behavior, without preventing way\r\n> > too many sensible use cases.\r\n> >\r\n> \r\n> I could think of below two options:\r\n> 1. Forbid any case where column list is different for the same table\r\n> when combining publications.\r\n> 2. Forbid if the column list and row filters for a table are different\r\n> in the set of publications we are planning to combine. This means we\r\n> will allow combining column lists when row filters are not present or\r\n> when column list is the same (we don't get anything additional by\r\n> combining but the idea is we won't forbid such cases) and row filters\r\n> are different.\r\n> \r\n> Now, I think the points in favor of (1) are that the main purpose of\r\n> introducing a column list are: (a) the structure/schema of the\r\n> subscriber is different from the publisher, (b) want to hide sensitive\r\n> columns data. In both cases, it should be fine if we follow (1) and\r\n> from Peter E.'s latest email [1] he also seems to be indicating the\r\n> same. If we want to be slightly more relaxed then we can probably (2).\r\n> We can decide on something else as well but I feel it should be such\r\n> that it is easy to explain.\r\n\r\nI also think it makes sense to add a restriction like (1). I am planning to\r\nimplement the restriction if no one objects.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Fri, 6 May 2022 03:23:55 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "\n\nOn 5/6/22 05:23, houzj.fnst@fujitsu.com wrote:\n> On Tuesday, May 3, 2022 11:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Tue, May 3, 2022 at 12:10 AM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> On 5/2/22 19:51, Alvaro Herrera wrote:\n>>>>> Why would we need to know publications replicated by other\n>> walsenders?\n>>>>> And what if the subscriber is not connected at the moment? In that case\n>>>>> there'll be no walsender.\n>>>>\n>>>> Sure, if the replica is not connected then there's no issue -- as you\n>>>> say, that replica will fail at START_REPLICATION time.\n>>>>\n>>>\n>>> Right, I got confused a bit.\n>>>\n>>> Anyway, I think the main challenge is defining what exactly we want to\n>>> check, in order to ensure \"sensible\" behavior, without preventing way\n>>> too many sensible use cases.\n>>>\n>>\n>> I could think of below two options:\n>> 1. Forbid any case where column list is different for the same table\n>> when combining publications.\n>> 2. Forbid if the column list and row filters for a table are different\n>> in the set of publications we are planning to combine. This means we\n>> will allow combining column lists when row filters are not present or\n>> when column list is the same (we don't get anything additional by\n>> combining but the idea is we won't forbid such cases) and row filters\n>> are different.\n>>\n>> Now, I think the points in favor of (1) are that the main purpose of\n>> introducing a column list are: (a) the structure/schema of the\n>> subscriber is different from the publisher, (b) want to hide sensitive\n>> columns data. In both cases, it should be fine if we follow (1) and\n>> from Peter E.'s latest email [1] he also seems to be indicating the\n>> same. 
If we want to be slightly more relaxed then we can probably (2).\n>> We can decide on something else as well but I feel it should be such\n>> that it is easy to explain.\n> \n> I also think it makes sense to add a restriction like (1). I am planning to\n> implement the restriction if no one objects.\n> \n\nI'm not going to block that approach if that's the consensus here,\nthough I'm not convinced.\n\nLet me point out (1) does *not* work for data redaction use case,\ncertainly not the example Alvaro and me presented, because that relies\non a combination of row filters and column filters. Requiring all column\nlists to be the same (and not specific to row filter) prevents that\nexample from working. Yes, you can create multiple subscriptions, but\nthat brings it's own set of challenges too.\n\nI doubt forcing users to use the more complex setup is good idea, and\ncombining the column lists per [1] seems sound to me.\n\nThat being said, the good thing is this restriction seems it might be\nrelaxed in the future to work per [1], without causing any backwards\ncompatibility issues.\n\nShould we do something similar for row filters, though? It seems quite\nweird we're so concerned about unexpected behavior due to combining\ncolumn lists (despite having a patch that makes it behave sanely), and\nat the same time wave off similarly strange behavior due to combining\nrow filters because \"that's what you get if you define the publications\nin a strange way\".\n\n\nregards\n\n[1]\nhttps://www.postgresql.org/message-id/5a85b8b7-fc1c-364b-5c62-0bb3e1e25824%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 6 May 2022 14:26:27 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Fri, May 6, 2022 at 5:56 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> >>\n> >> I could think of below two options:\n> >> 1. Forbid any case where column list is different for the same table\n> >> when combining publications.\n> >> 2. Forbid if the column list and row filters for a table are different\n> >> in the set of publications we are planning to combine. This means we\n> >> will allow combining column lists when row filters are not present or\n> >> when column list is the same (we don't get anything additional by\n> >> combining but the idea is we won't forbid such cases) and row filters\n> >> are different.\n> >>\n> >> Now, I think the points in favor of (1) are that the main purpose of\n> >> introducing a column list are: (a) the structure/schema of the\n> >> subscriber is different from the publisher, (b) want to hide sensitive\n> >> columns data. In both cases, it should be fine if we follow (1) and\n> >> from Peter E.'s latest email [1] he also seems to be indicating the\n> >> same. If we want to be slightly more relaxed then we can probably (2).\n> >> We can decide on something else as well but I feel it should be such\n> >> that it is easy to explain.\n> >\n> > I also think it makes sense to add a restriction like (1). I am planning to\n> > implement the restriction if no one objects.\n> >\n>\n> I'm not going to block that approach if that's the consensus here,\n> though I'm not convinced.\n>\n> Let me point out (1) does *not* work for data redaction use case,\n> certainly not the example Alvaro and me presented, because that relies\n> on a combination of row filters and column filters.\n>\n\nThis should just forbid the case presented by Alvaro in his first\nemail in this thread [1].\n\n> Requiring all column\n> lists to be the same (and not specific to row filter) prevents that\n> example from working. 
Yes, you can create multiple subscriptions, but\n> that brings it's own set of challenges too.\n>\n> I doubt forcing users to use the more complex setup is good idea, and\n> combining the column lists per [1] seems sound to me.\n>\n> That being said, the good thing is this restriction seems it might be\n> relaxed in the future to work per [1], without causing any backwards\n> compatibility issues.\n>\n\nThese are my thoughts as well. Even, if we decide to go via the column\nlist merging approach (in selective cases), we need to do some\nperformance testing of that approach as it does much more work per\ntuple. It is possible that the impact is not much but still worth\nevaluating, so let's try to see the patch to prohibit combining the\ncolumn lists then we can decide.\n\n> Should we do something similar for row filters, though? It seems quite\n> weird we're so concerned about unexpected behavior due to combining\n> column lists (despite having a patch that makes it behave sanely), and\n> at the same time wave off similarly strange behavior due to combining\n> row filters because \"that's what you get if you define the publications\n> in a strange way\".\n>\n\nDuring development, we found that we can't combine the row-filters for\n'insert' and 'update'/'delete' because of replica identity\nrestrictions, so we have kept them separate. But if we came across\nother such things then we can either try to fix those or forbid them.\n\n[1] - https://www.postgresql.org/message-id/202204251548.mudq7jbqnh7r%40alvherre.pgsql\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 6 May 2022 19:10:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "\n\nOn 5/6/22 15:40, Amit Kapila wrote:\n> On Fri, May 6, 2022 at 5:56 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>>>>\n>>>> I could think of below two options:\n>>>> 1. Forbid any case where column list is different for the same table\n>>>> when combining publications.\n>>>> 2. Forbid if the column list and row filters for a table are different\n>>>> in the set of publications we are planning to combine. This means we\n>>>> will allow combining column lists when row filters are not present or\n>>>> when column list is the same (we don't get anything additional by\n>>>> combining but the idea is we won't forbid such cases) and row filters\n>>>> are different.\n>>>>\n>>>> Now, I think the points in favor of (1) are that the main purpose of\n>>>> introducing a column list are: (a) the structure/schema of the\n>>>> subscriber is different from the publisher, (b) want to hide sensitive\n>>>> columns data. In both cases, it should be fine if we follow (1) and\n>>>> from Peter E.'s latest email [1] he also seems to be indicating the\n>>>> same. If we want to be slightly more relaxed then we can probably (2).\n>>>> We can decide on something else as well but I feel it should be such\n>>>> that it is easy to explain.\n>>>\n>>> I also think it makes sense to add a restriction like (1). I am planning to\n>>> implement the restriction if no one objects.\n>>>\n>>\n>> I'm not going to block that approach if that's the consensus here,\n>> though I'm not convinced.\n>>\n>> Let me point out (1) does *not* work for data redaction use case,\n>> certainly not the example Alvaro and me presented, because that relies\n>> on a combination of row filters and column filters.\n>>\n> \n> This should just forbid the case presented by Alvaro in his first\n> email in this thread [1].\n> \n>> Requiring all column\n>> lists to be the same (and not specific to row filter) prevents that\n>> example from working. 
Yes, you can create multiple subscriptions, but\n>> that brings it's own set of challenges too.\n>>\n>> I doubt forcing users to use the more complex setup is good idea, and\n>> combining the column lists per [1] seems sound to me.\n>>\n>> That being said, the good thing is this restriction seems it might be\n>> relaxed in the future to work per [1], without causing any backwards\n>> compatibility issues.\n>>\n> \n> These are my thoughts as well. Even, if we decide to go via the column\n> list merging approach (in selective cases), we need to do some\n> performance testing of that approach as it does much more work per\n> tuple. It is possible that the impact is not much but still worth\n> evaluating, so let's try to see the patch to prohibit combining the\n> column lists then we can decide.\n> \n\nSurely we could do some performance testing now. I doubt it's very\nexpensive - sure, you can construct cases with many row filters / column\nlists, but how likely is that in practice?\n\nMoreover, it's not like this would affect existing setups, so even if\nit's a bit expensive, we may interpret that as cost of the feature.\n\n>> Should we do something similar for row filters, though? It seems quite\n>> weird we're so concerned about unexpected behavior due to combining\n>> column lists (despite having a patch that makes it behave sanely), and\n>> at the same time wave off similarly strange behavior due to combining\n>> row filters because \"that's what you get if you define the publications\n>> in a strange way\".\n>>\n> \n> During development, we found that we can't combine the row-filters for\n> 'insert' and 'update'/'delete' because of replica identity\n> restrictions, so we have kept them separate. But if we came across\n> other such things then we can either try to fix those or forbid them.\n> \n\nI understand how we got to the current state. 
I'm just saying that this\nallows defining separate publications for insert, update and delete\nactions, and set different row filters for each of them. Which results\nin behavior that is hard to explain/understand, especially when it comes\nto tablesync.\n\nIt seems quite strange to prohibit merging column lists because there\nmight be some strange behavior that no one described, and allow setups\nwith different row filters that definitely have strange behavior.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 6 May 2022 15:57:26 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Mon, May 2, 2022 at 6:11 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-May-02, Amit Kapila wrote:\n>\n>\n> > I think it is possible to expose a list of publications for each\n> > walsender as it is stored in each walsenders\n> > LogicalDecodingContext->output_plugin_private. AFAIK, each walsender\n> > can have one such LogicalDecodingContext and we can probably share it\n> > via shared memory?\n>\n> I guess we need to create a DSM each time a walsender opens a\n> connection, at START_REPLICATION time. Then ALTER PUBLICATION needs to\n> connect to all DSMs of all running walsenders and see if they are\n> reading from it. Is that what you have in mind? Alternatively, we\n> could have one DSM per publication with a PID array of all walsenders\n> that are sending it (each walsender needs to add its PID as it starts).\n> The latter might be better.\n>\n\nWhile thinking about using DSM here, I came across one of your commits\nf2f9fcb303 which seems to indicate that it is not a good idea to rely\non it but I think you have changed dynamic shared memory to fixed\nshared memory usage because that was more suitable rather than DSM is\nnot portable. Because I see a commit bcbd940806 where we have removed\nthe 'none' option of dynamic_shared_memory_type. So, I think it should\nbe okay to use DSM in this context. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 7 May 2022 11:06:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 5/7/22 07:36, Amit Kapila wrote:\n> On Mon, May 2, 2022 at 6:11 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>\n>> On 2022-May-02, Amit Kapila wrote:\n>>\n>>\n>>> I think it is possible to expose a list of publications for each\n>>> walsender as it is stored in each walsenders\n>>> LogicalDecodingContext->output_plugin_private. AFAIK, each walsender\n>>> can have one such LogicalDecodingContext and we can probably share it\n>>> via shared memory?\n>>\n>> I guess we need to create a DSM each time a walsender opens a\n>> connection, at START_REPLICATION time. Then ALTER PUBLICATION needs to\n>> connect to all DSMs of all running walsenders and see if they are\n>> reading from it. Is that what you have in mind? Alternatively, we\n>> could have one DSM per publication with a PID array of all walsenders\n>> that are sending it (each walsender needs to add its PID as it starts).\n>> The latter might be better.\n>>\n> \n> While thinking about using DSM here, I came across one of your commits\n> f2f9fcb303 which seems to indicate that it is not a good idea to rely\n> on it but I think you have changed dynamic shared memory to fixed\n> shared memory usage because that was more suitable rather than DSM is\n> not portable. Because I see a commit bcbd940806 where we have removed\n> the 'none' option of dynamic_shared_memory_type. So, I think it should\n> be okay to use DSM in this context. What do you think?\n> \n\nWhy would any of this be needed?\n\nALTER PUBLICATION will invalidate the RelationSyncEntry entries in all\nwalsenders, no? So AFAICS it should be enough to enforce the limitations\nin get_rel_sync_entry, which is necessary anyway because the subscriber\nmay not be connected when ALTER PUBLICATION gets executed.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 8 May 2022 20:11:08 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Sun, May 8, 2022 at 11:41 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/7/22 07:36, Amit Kapila wrote:\n> > On Mon, May 2, 2022 at 6:11 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >>\n> >> On 2022-May-02, Amit Kapila wrote:\n> >>\n> >>\n> >>> I think it is possible to expose a list of publications for each\n> >>> walsender as it is stored in each walsenders\n> >>> LogicalDecodingContext->output_plugin_private. AFAIK, each walsender\n> >>> can have one such LogicalDecodingContext and we can probably share it\n> >>> via shared memory?\n> >>\n> >> I guess we need to create a DSM each time a walsender opens a\n> >> connection, at START_REPLICATION time. Then ALTER PUBLICATION needs to\n> >> connect to all DSMs of all running walsenders and see if they are\n> >> reading from it. Is that what you have in mind? Alternatively, we\n> >> could have one DSM per publication with a PID array of all walsenders\n> >> that are sending it (each walsender needs to add its PID as it starts).\n> >> The latter might be better.\n> >>\n> >\n> > While thinking about using DSM here, I came across one of your commits\n> > f2f9fcb303 which seems to indicate that it is not a good idea to rely\n> > on it but I think you have changed dynamic shared memory to fixed\n> > shared memory usage because that was more suitable rather than DSM is\n> > not portable. Because I see a commit bcbd940806 where we have removed\n> > the 'none' option of dynamic_shared_memory_type. So, I think it should\n> > be okay to use DSM in this context. What do you think?\n> >\n>\n> Why would any of this be needed?\n>\n> ALTER PUBLICATION will invalidate the RelationSyncEntry entries in all\n> walsenders, no? So AFAICS it should be enough to enforce the limitations\n> in get_rel_sync_entry,\n>\n\nYes, that should be sufficient to enforce limitations in\nget_rel_sync_entry() but it will lead to the following behavior:\na. 
The Alter Publication command will be successful but later in the\nlogs, the error will be logged and the user needs to check it and take\nappropriate action. Till that time the walsender will be in an error\nloop which means it will restart and again lead to the same error till\nthe user takes some action.\nb. As we use historic snapshots, so even after the user takes action\nsay by changing publication, it won't be reflected. So, the option for\nthe user would be to drop their subscription.\n\nAm, I missing something? If not, then are we okay with such behavior?\nIf yes, then I think it would be much easier implementation-wise and\nprobably advisable at this point. We can document it so that users are\ncareful and can take necessary action if they get into such a\nsituation. Any way we can improve this in future as you also suggested\nearlier.\n\n> which is necessary anyway because the subscriber\n> may not be connected when ALTER PUBLICATION gets executed.\n>\n\nIf we are not okay with the resultant behavior of detecting this in\nget_rel_sync_entry(), then we can solve this in some other way as\nAlvaro has indicated in one of his responses which is to detect that\nat start replication time probably in the subscriber-side.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 9 May 2022 09:15:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "\n\nOn 5/9/22 05:45, Amit Kapila wrote:\n> On Sun, May 8, 2022 at 11:41 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 5/7/22 07:36, Amit Kapila wrote:\n>>> On Mon, May 2, 2022 at 6:11 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>>>\n>>>> On 2022-May-02, Amit Kapila wrote:\n>>>>\n>>>>\n>>>>> I think it is possible to expose a list of publications for each\n>>>>> walsender as it is stored in each walsenders\n>>>>> LogicalDecodingContext->output_plugin_private. AFAIK, each walsender\n>>>>> can have one such LogicalDecodingContext and we can probably share it\n>>>>> via shared memory?\n>>>>\n>>>> I guess we need to create a DSM each time a walsender opens a\n>>>> connection, at START_REPLICATION time. Then ALTER PUBLICATION needs to\n>>>> connect to all DSMs of all running walsenders and see if they are\n>>>> reading from it. Is that what you have in mind? Alternatively, we\n>>>> could have one DSM per publication with a PID array of all walsenders\n>>>> that are sending it (each walsender needs to add its PID as it starts).\n>>>> The latter might be better.\n>>>>\n>>>\n>>> While thinking about using DSM here, I came across one of your commits\n>>> f2f9fcb303 which seems to indicate that it is not a good idea to rely\n>>> on it but I think you have changed dynamic shared memory to fixed\n>>> shared memory usage because that was more suitable rather than DSM is\n>>> not portable. Because I see a commit bcbd940806 where we have removed\n>>> the 'none' option of dynamic_shared_memory_type. So, I think it should\n>>> be okay to use DSM in this context. What do you think?\n>>>\n>>\n>> Why would any of this be needed?\n>>\n>> ALTER PUBLICATION will invalidate the RelationSyncEntry entries in all\n>> walsenders, no? 
So AFAICS it should be enough to enforce the limitations\n>> in get_rel_sync_entry,\n>>\n> \n> Yes, that should be sufficient to enforce limitations in\n> get_rel_sync_entry() but it will lead to the following behavior:\n> a. The Alter Publication command will be successful but later in the\n> logs, the error will be logged and the user needs to check it and take\n> appropriate action. Till that time the walsender will be in an error\n> loop which means it will restart and again lead to the same error till\n> the user takes some action.\n> b. As we use historic snapshots, so even after the user takes action\n> say by changing publication, it won't be reflected. So, the option for\n> the user would be to drop their subscription.\n> \n> Am, I missing something? If not, then are we okay with such behavior?\n> If yes, then I think it would be much easier implementation-wise and\n> probably advisable at this point. We can document it so that users are\n> careful and can take necessary action if they get into such a\n> situation. Any way we can improve this in future as you also suggested\n> earlier.\n> \n>> which is necessary anyway because the subscriber\n>> may not be connected when ALTER PUBLICATION gets executed.\n>>\n> \n> If we are not okay with the resultant behavior of detecting this in\n> get_rel_sync_entry(), then we can solve this in some other way as\n> Alvaro has indicated in one of his responses which is to detect that\n> at start replication time probably in the subscriber-side.\n> \n\nIMO that behavior is acceptable. We have to do that check anyway, and\nthe subscription may start failing after ALTER PUBLICATION for a number\nof other reasons anyway so the user needs/should check the logs.\n\nAnd if needed, we can improve this and start doing the proactive-checks\nduring ALTER PUBLICATION too.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 10 May 2022 21:05:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Wed, May 11, 2022 at 12:35 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/9/22 05:45, Amit Kapila wrote:\n> > On Sun, May 8, 2022 at 11:41 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> On 5/7/22 07:36, Amit Kapila wrote:\n> >>> On Mon, May 2, 2022 at 6:11 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >>>>\n> >>>> On 2022-May-02, Amit Kapila wrote:\n> >>>>\n> >>>>\n> >>>>> I think it is possible to expose a list of publications for each\n> >>>>> walsender as it is stored in each walsenders\n> >>>>> LogicalDecodingContext->output_plugin_private. AFAIK, each walsender\n> >>>>> can have one such LogicalDecodingContext and we can probably share it\n> >>>>> via shared memory?\n> >>>>\n> >>>> I guess we need to create a DSM each time a walsender opens a\n> >>>> connection, at START_REPLICATION time. Then ALTER PUBLICATION needs to\n> >>>> connect to all DSMs of all running walsenders and see if they are\n> >>>> reading from it. Is that what you have in mind? Alternatively, we\n> >>>> could have one DSM per publication with a PID array of all walsenders\n> >>>> that are sending it (each walsender needs to add its PID as it starts).\n> >>>> The latter might be better.\n> >>>>\n> >>>\n> >>> While thinking about using DSM here, I came across one of your commits\n> >>> f2f9fcb303 which seems to indicate that it is not a good idea to rely\n> >>> on it but I think you have changed dynamic shared memory to fixed\n> >>> shared memory usage because that was more suitable rather than DSM is\n> >>> not portable. Because I see a commit bcbd940806 where we have removed\n> >>> the 'none' option of dynamic_shared_memory_type. So, I think it should\n> >>> be okay to use DSM in this context. What do you think?\n> >>>\n> >>\n> >> Why would any of this be needed?\n> >>\n> >> ALTER PUBLICATION will invalidate the RelationSyncEntry entries in all\n> >> walsenders, no? 
So AFAICS it should be enough to enforce the limitations\n> >> in get_rel_sync_entry,\n> >>\n> >\n> > Yes, that should be sufficient to enforce limitations in\n> > get_rel_sync_entry() but it will lead to the following behavior:\n> > a. The Alter Publication command will be successful but later in the\n> > logs, the error will be logged and the user needs to check it and take\n> > appropriate action. Till that time the walsender will be in an error\n> > loop which means it will restart and again lead to the same error till\n> > the user takes some action.\n> > b. As we use historic snapshots, so even after the user takes action\n> > say by changing publication, it won't be reflected. So, the option for\n> > the user would be to drop their subscription.\n> >\n> > Am, I missing something? If not, then are we okay with such behavior?\n> > If yes, then I think it would be much easier implementation-wise and\n> > probably advisable at this point. We can document it so that users are\n> > careful and can take necessary action if they get into such a\n> > situation. 
Any way we can improve this in future as you also suggested\n> > earlier.\n> >\n> >> which is necessary anyway because the subscriber\n> >> may not be connected when ALTER PUBLICATION gets executed.\n> >>\n> >\n> > If we are not okay with the resultant behavior of detecting this in\n> > get_rel_sync_entry(), then we can solve this in some other way as\n> > Alvaro has indicated in one of his responses which is to detect that\n> > at start replication time probably in the subscriber-side.\n> >\n>\n> IMO that behavior is acceptable.\n>\n\nFair enough, then we should go with a simpler approach to detect it in\npgoutput.c (get_rel_sync_entry).\n\n> We have to do that check anyway, and\n> the subscription may start failing after ALTER PUBLICATION for a number\n> of other reasons anyway so the user needs/should check the logs.\n>\n\nI think ALTER PUBLICATION won't ever lead to failure in walsender.\nSure, users can do something due to which subscriber-side failures can\nhappen due to constraint failures. Do you have some specific cases in\nmind?\n\n> And if needed, we can improve this and start doing the proactive-checks\n> during ALTER PUBLICATION too.\n>\n\nAgreed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 11 May 2022 09:03:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Wednesday, May 11, 2022 11:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, May 11, 2022 at 12:35 AM Tomas Vondra\r\n> <tomas.vondra@enterprisedb.com> wrote:\r\n> >\r\n> > On 5/9/22 05:45, Amit Kapila wrote:\r\n> > > On Sun, May 8, 2022 at 11:41 PM Tomas Vondra\r\n> > > <tomas.vondra@enterprisedb.com> wrote:\r\n> > >>\r\n> > >> On 5/7/22 07:36, Amit Kapila wrote:\r\n> > >>> On Mon, May 2, 2022 at 6:11 PM Alvaro Herrera\r\n> <alvherre@alvh.no-ip.org> wrote:\r\n> > >>>>\r\n> > >>>> On 2022-May-02, Amit Kapila wrote:\r\n> > >>>>\r\n> > >>>>\r\n> > >>>>> I think it is possible to expose a list of publications for each\r\n> > >>>>> walsender as it is stored in each walsenders\r\n> > >>>>> LogicalDecodingContext->output_plugin_private. AFAIK, each\r\n> > >>>>> LogicalDecodingContext->walsender\r\n> > >>>>> can have one such LogicalDecodingContext and we can probably\r\n> > >>>>> share it via shared memory?\r\n> > >>>>\r\n> > >>>> I guess we need to create a DSM each time a walsender opens a\r\n> > >>>> connection, at START_REPLICATION time. Then ALTER PUBLICATION\r\n> > >>>> needs to connect to all DSMs of all running walsenders and see if\r\n> > >>>> they are reading from it. Is that what you have in mind?\r\n> > >>>> Alternatively, we could have one DSM per publication with a PID\r\n> > >>>> array of all walsenders that are sending it (each walsender needs to\r\n> add its PID as it starts).\r\n> > >>>> The latter might be better.\r\n> > >>>>\r\n> > >>>\r\n> > >>> While thinking about using DSM here, I came across one of your\r\n> > >>> commits\r\n> > >>> f2f9fcb303 which seems to indicate that it is not a good idea to\r\n> > >>> rely on it but I think you have changed dynamic shared memory to\r\n> > >>> fixed shared memory usage because that was more suitable rather\r\n> > >>> than DSM is not portable. 
Because I see a commit bcbd940806 where\r\n> > >>> we have removed the 'none' option of dynamic_shared_memory_type.\r\n> > >>> So, I think it should be okay to use DSM in this context. What do you\r\n> think?\r\n> > >>>\r\n> > >>\r\n> > >> Why would any of this be needed?\r\n> > >>\r\n> > >> ALTER PUBLICATION will invalidate the RelationSyncEntry entries in\r\n> > >> all walsenders, no? So AFAICS it should be enough to enforce the\r\n> > >> limitations in get_rel_sync_entry,\r\n> > >>\r\n> > >\r\n> > > Yes, that should be sufficient to enforce limitations in\r\n> > > get_rel_sync_entry() but it will lead to the following behavior:\r\n> > > a. The Alter Publication command will be successful but later in the\r\n> > > logs, the error will be logged and the user needs to check it and\r\n> > > take appropriate action. Till that time the walsender will be in an\r\n> > > error loop which means it will restart and again lead to the same\r\n> > > error till the user takes some action.\r\n> > > b. As we use historic snapshots, so even after the user takes action\r\n> > > say by changing publication, it won't be reflected. So, the option\r\n> > > for the user would be to drop their subscription.\r\n> > >\r\n> > > Am, I missing something? If not, then are we okay with such behavior?\r\n> > > If yes, then I think it would be much easier implementation-wise and\r\n> > > probably advisable at this point. We can document it so that users\r\n> > > are careful and can take necessary action if they get into such a\r\n> > > situation. 
Any way we can improve this in future as you also\r\n> > > suggested earlier.\r\n> > >\r\n> > >> which is necessary anyway because the subscriber may not be\r\n> > >> connected when ALTER PUBLICATION gets executed.\r\n> > >>\r\n> > >\r\n> > > If we are not okay with the resultant behavior of detecting this in\r\n> > > get_rel_sync_entry(), then we can solve this in some other way as\r\n> > > Alvaro has indicated in one of his responses which is to detect that\r\n> > > at start replication time probably in the subscriber-side.\r\n> > >\r\n> >\r\n> > IMO that behavior is acceptable.\r\n> >\r\n> \r\n> Fair enough, then we should go with a simpler approach to detect it in\r\n> pgoutput.c (get_rel_sync_entry).\r\n\r\nOK, here is the patch that try to check column list in that way. The patch also\r\ncheck the column list when CREATE SUBSCRIPTION and when starting initial copy.\r\n\r\nBest regards,\r\nHou zj",
"msg_date": "Wed, 11 May 2022 07:25:03 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Wed, May 11, 2022 at 12:55 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, May 11, 2022 11:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Fair enough, then we should go with a simpler approach to detect it in\n> > pgoutput.c (get_rel_sync_entry).\n>\n> OK, here is the patch that try to check column list in that way. The patch also\n> check the column list when CREATE SUBSCRIPTION and when starting initial copy.\n>\n\nFew comments:\n===============\n1.\ninitStringInfo(&cmd);\n- appendStringInfoString(&cmd, \"SELECT DISTINCT t.schemaname, t.tablename\\n\"\n- \" FROM pg_catalog.pg_publication_tables t\\n\"\n+ appendStringInfoString(&cmd,\n+ \"SELECT DISTINCT t.schemaname,\\n\"\n+ \" t.tablename,\\n\"\n+ \" (CASE WHEN (array_length(pr.prattrs, 1) = t.relnatts)\\n\"\n+ \" THEN NULL ELSE pr.prattrs END)\\n\"\n+ \" FROM (SELECT P.pubname AS pubname,\\n\"\n+ \" N.nspname AS schemaname,\\n\"\n+ \" C.relname AS tablename,\\n\"\n+ \" P.oid AS pubid,\\n\"\n+ \" C.oid AS reloid,\\n\"\n+ \" C.relnatts\\n\"\n+ \" FROM pg_publication P,\\n\"\n+ \" LATERAL pg_get_publication_tables(P.pubname) GPT,\\n\"\n+ \" pg_class C JOIN pg_namespace N\\n\"\n+ \" ON (N.oid = C.relnamespace)\\n\"\n+ \" WHERE C.oid = GPT.relid) t\\n\"\n+ \" LEFT OUTER JOIN pg_publication_rel pr\\n\"\n+ \" ON (t.pubid = pr.prpubid AND\\n\"\n+ \" pr.prrelid = reloid)\\n\"\n\nCan we modify pg_publication_tables to get the row filter and column\nlist and then use it directly instead of constructing this query?\n\n2.\n+ if (list_member(tablelist, rv))\n+ ereport(WARNING,\n+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot use different column lists for table \\\"%s.%s\\\" in\ndifferent publications\",\n+ nspname, relname));\n+ else\n\nCan we write comments to explain why we are using WARNING here instead of ERROR?\n\n3.\nstatic void\n-pgoutput_ensure_entry_cxt(PGOutputData *data, RelationSyncEntry *entry)\n+pgoutput_ensure_entry_cxt(PGOutputData 
*data, RelationSyncEntry *entry,\n+ Relation relation)\n\nWhat is the need to change this interface as part of this patch?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 12 May 2022 12:15:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Thu, May 12, 2022 at 12:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, May 11, 2022 at 12:55 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Wednesday, May 11, 2022 11:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Fair enough, then we should go with a simpler approach to detect it in\n> > > pgoutput.c (get_rel_sync_entry).\n> >\n> > OK, here is the patch that try to check column list in that way. The patch also\n> > check the column list when CREATE SUBSCRIPTION and when starting initial copy.\n> >\n>\n> Few comments:\n> ===============\n...\n\nOne more point, I think we should update the docs for this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 12 May 2022 14:02:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Thursday, May 12, 2022 2:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Wed, May 11, 2022 at 12:55 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Wednesday, May 11, 2022 11:33 AM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > >\r\n> > > Fair enough, then we should go with a simpler approach to detect it\r\n> > > in pgoutput.c (get_rel_sync_entry).\r\n> >\r\n> > OK, here is the patch that try to check column list in that way. The\r\n> > patch also check the column list when CREATE SUBSCRIPTION and when\r\n> starting initial copy.\r\n> >\r\n> \r\n> Few comments:\r\n> ===============\r\n> 1.\r\n> initStringInfo(&cmd);\r\n> - appendStringInfoString(&cmd, \"SELECT DISTINCT t.schemaname,\r\n> t.tablename\\n\"\r\n> - \" FROM pg_catalog.pg_publication_tables t\\n\"\r\n> + appendStringInfoString(&cmd,\r\n> + \"SELECT DISTINCT t.schemaname,\\n\"\r\n> + \" t.tablename,\\n\"\r\n> + \" (CASE WHEN (array_length(pr.prattrs, 1) = t.relnatts)\\n\"\r\n> + \" THEN NULL ELSE pr.prattrs END)\\n\"\r\n> + \" FROM (SELECT P.pubname AS pubname,\\n\"\r\n> + \" N.nspname AS schemaname,\\n\"\r\n> + \" C.relname AS tablename,\\n\"\r\n> + \" P.oid AS pubid,\\n\"\r\n> + \" C.oid AS reloid,\\n\"\r\n> + \" C.relnatts\\n\"\r\n> + \" FROM pg_publication P,\\n\"\r\n> + \" LATERAL pg_get_publication_tables(P.pubname) GPT,\\n\"\r\n> + \" pg_class C JOIN pg_namespace N\\n\"\r\n> + \" ON (N.oid = C.relnamespace)\\n\"\r\n> + \" WHERE C.oid = GPT.relid) t\\n\"\r\n> + \" LEFT OUTER JOIN pg_publication_rel pr\\n\"\r\n> + \" ON (t.pubid = pr.prpubid AND\\n\"\r\n> + \" pr.prrelid = reloid)\\n\"\r\n> \r\n> Can we modify pg_publication_tables to get the row filter and column list and\r\n> then use it directly instead of constructing this query?\r\n\r\nAgreed. If we can get columnlist and rowfilter from pg_publication_tables, it\r\nwill be more convenient. 
And I think users that want to fetch the columnlist\r\nand rowfilter of table can also benefit from it.\r\n\r\n> 2.\r\n> + if (list_member(tablelist, rv))\r\n> + ereport(WARNING,\r\n> + errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\r\n> + errmsg(\"cannot use different column lists for table \\\"%s.%s\\\" in\r\n> different publications\",\r\n> + nspname, relname));\r\n> + else\r\n> \r\n> Can we write comments to explain why we are using WARNING here instead of\r\n> ERROR?\r\n> \r\n> 3.\r\n> static void\r\n> -pgoutput_ensure_entry_cxt(PGOutputData *data, RelationSyncEntry *entry)\r\n> +pgoutput_ensure_entry_cxt(PGOutputData *data, RelationSyncEntry *entry,\r\n> + Relation relation)\r\n> \r\n> What is the need to change this interface as part of this patch?\r\n\r\nAttached is the new version patch, which addresses these comments and updates the\r\ndocument. The 0001 patch is to extend the view and the 0002 patch is to add the\r\nrestriction for column lists.\r\n\r\nBest regards,\r\nHou zj",
"msg_date": "Fri, 13 May 2022 06:02:47 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Fri, May 13, 2022 at 11:32 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thursday, May 12, 2022 2:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, May 11, 2022 at 12:55 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> >\n> > Few comments:\n> > ===============\n> > 1.\n> > initStringInfo(&cmd);\n> > - appendStringInfoString(&cmd, \"SELECT DISTINCT t.schemaname,\n> > t.tablename\\n\"\n> > - \" FROM pg_catalog.pg_publication_tables t\\n\"\n> > + appendStringInfoString(&cmd,\n> > + \"SELECT DISTINCT t.schemaname,\\n\"\n> > + \" t.tablename,\\n\"\n> > + \" (CASE WHEN (array_length(pr.prattrs, 1) = t.relnatts)\\n\"\n> > + \" THEN NULL ELSE pr.prattrs END)\\n\"\n> > + \" FROM (SELECT P.pubname AS pubname,\\n\"\n> > + \" N.nspname AS schemaname,\\n\"\n> > + \" C.relname AS tablename,\\n\"\n> > + \" P.oid AS pubid,\\n\"\n> > + \" C.oid AS reloid,\\n\"\n> > + \" C.relnatts\\n\"\n> > + \" FROM pg_publication P,\\n\"\n> > + \" LATERAL pg_get_publication_tables(P.pubname) GPT,\\n\"\n> > + \" pg_class C JOIN pg_namespace N\\n\"\n> > + \" ON (N.oid = C.relnamespace)\\n\"\n> > + \" WHERE C.oid = GPT.relid) t\\n\"\n> > + \" LEFT OUTER JOIN pg_publication_rel pr\\n\"\n> > + \" ON (t.pubid = pr.prpubid AND\\n\"\n> > + \" pr.prrelid = reloid)\\n\"\n> >\n> > Can we modify pg_publication_tables to get the row filter and column list and\n> > then use it directly instead of constructing this query?\n>\n> Agreed. If we can get columnlist and rowfilter from pg_publication_tables, it\n> will be more convenient. And I think users that want to fetch the columnlist\n> and rowfilter of table can also benefit from it.\n>\n\nAfter the change for this, we will give an error on combining\npublications where one of the publications specifies all columns in\nthe table and the other doesn't provide any columns. 
We should not\ngive an error as both mean all columns.\n\n>\n> Attach the new version patch which addressed these comments and update the\n> document. 0001 patch is to extent the view and 0002 patch is to add restriction\n> for column list.\n>\n\nFew comments:\n=================\n1.\npostgres=# select * from pg_publication_tables;\n pubname | schemaname | tablename | columnlist | rowfilter\n---------+------------+-----------+------------+-----------\n pub1 | public | t1 | |\n pub2 | public | t1 | 1 2 | (c3 < 10)\n(2 rows)\n\nI think it is better to display column names for columnlist in the\nexposed view similar to attnames in the pg_stats_ext view. I think\nthat will make it easier for users to understand this information.\n\n2.\n { oid => '6119', descr => 'get OIDs of tables in a publication',\n proname => 'pg_get_publication_tables', prorows => '1000', proretset => 't',\n- provolatile => 's', prorettype => 'oid', proargtypes => 'text',\n- proallargtypes => '{text,oid}', proargmodes => '{i,o}',\n- proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },\n+ provolatile => 's', prorettype => 'record', proargtypes => 'text',\n+ proallargtypes => '{text,oid,int2vector,pg_node_tree}', proargmodes\n=> '{i,o,o,o}',\n\nI think we should change the \"descr\" to something like: 'get\ninformation of tables in a publication'\n\n3.\n+\n+ /*\n+ * We only throw a warning here so that the subcription can still be\n+ * created and let user aware that something is going to fail later and\n+ * they can fix the publications afterwards.\n+ */\n+ if (list_member(tablelist, rv))\n+ ereport(WARNING,\n+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot use different column lists for table \\\"%s.%s\\\" in\ndifferent publications\",\n+ nspname, relname));\n\nCan we extend this comment to explain the case where after Alter\nPublication, if the user dumps and restores back the subscription,\nthere is a possibility that \"CREATE SUBSCRIPTION\" won't work if 
we\ngive ERROR here instead of WARNING?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 16 May 2022 11:40:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Monday, May 16, 2022 2:10 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> \r\n> On Fri, May 13, 2022 at 11:32 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Thursday, May 12, 2022 2:45 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Wed, May 11, 2022 at 12:55 PM houzj.fnst@fujitsu.com\r\n> > > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > Few comments:\r\n> > > ===============\r\n> > > 1.\r\n> > > initStringInfo(&cmd);\r\n> > > - appendStringInfoString(&cmd, \"SELECT DISTINCT t.schemaname,\r\n> > > t.tablename\\n\"\r\n> > > - \" FROM pg_catalog.pg_publication_tables t\\n\"\r\n> > > + appendStringInfoString(&cmd,\r\n> > > + \"SELECT DISTINCT t.schemaname,\\n\"\r\n> > > + \" t.tablename,\\n\"\r\n> > > + \" (CASE WHEN (array_length(pr.prattrs, 1) =\r\n> t.relnatts)\\n\"\r\n> > > + \" THEN NULL ELSE pr.prattrs END)\\n\"\r\n> > > + \" FROM (SELECT P.pubname AS pubname,\\n\"\r\n> > > + \" N.nspname AS schemaname,\\n\"\r\n> > > + \" C.relname AS tablename,\\n\"\r\n> > > + \" P.oid AS pubid,\\n\"\r\n> > > + \" C.oid AS reloid,\\n\"\r\n> > > + \" C.relnatts\\n\"\r\n> > > + \" FROM pg_publication P,\\n\"\r\n> > > + \" LATERAL pg_get_publication_tables(P.pubname) GPT,\\n\"\r\n> > > + \" pg_class C JOIN pg_namespace N\\n\"\r\n> > > + \" ON (N.oid = C.relnamespace)\\n\"\r\n> > > + \" WHERE C.oid = GPT.relid) t\\n\"\r\n> > > + \" LEFT OUTER JOIN pg_publication_rel pr\\n\"\r\n> > > + \" ON (t.pubid = pr.prpubid AND\\n\"\r\n> > > + \" pr.prrelid = reloid)\\n\"\r\n> > >\r\n> > > Can we modify pg_publication_tables to get the row filter and column list\r\n> and\r\n> > > then use it directly instead of constructing this query?\r\n> >\r\n> > Agreed. If we can get columnlist and rowfilter from pg_publication_tables, it\r\n> > will be more convenient. 
And I think users that want to fetch the columnlist\r\n> > and rowfilter of table can also benefit from it.\r\n> >\r\n> \r\n> After the change for this, we will give an error on combining\r\n> publications where one of the publications specifies all columns in\r\n> the table and the other doesn't provide any columns. We should not\r\n> give an error as both mean all columns.\r\n\r\nThanks for the comments. Fixed.\r\n\r\n> >\r\n> > Attach the new version patch which addressed these comments and update\r\n> the\r\n> > document. 0001 patch is to extent the view and 0002 patch is to add\r\n> restriction\r\n> > for column list.\r\n> >\r\n> \r\n> Few comments:\r\n> =================\r\n> 1.\r\n> postgres=# select * from pg_publication_tables;\r\n> pubname | schemaname | tablename | columnlist | rowfilter\r\n> ---------+------------+-----------+------------+-----------\r\n> pub1 | public | t1 | |\r\n> pub2 | public | t1 | 1 2 | (c3 < 10)\r\n> (2 rows)\r\n> \r\n> I think it is better to display column names for columnlist in the\r\n> exposed view similar to attnames in the pg_stats_ext view. 
I think\r\n> that will make it easier for users to understand this information.\r\n\r\nAgreed and changed.\r\n\r\n \r\n> 2.\r\n> { oid => '6119', descr => 'get OIDs of tables in a publication',\r\n> proname => 'pg_get_publication_tables', prorows => '1000', proretset =>\r\n> 't',\r\n> - provolatile => 's', prorettype => 'oid', proargtypes => 'text',\r\n> - proallargtypes => '{text,oid}', proargmodes => '{i,o}',\r\n> - proargnames => '{pubname,relid}', prosrc => 'pg_get_publication_tables' },\r\n> + provolatile => 's', prorettype => 'record', proargtypes => 'text',\r\n> + proallargtypes => '{text,oid,int2vector,pg_node_tree}', proargmodes\r\n> => '{i,o,o,o}',\r\n> \r\n> I think we should change the \"descr\" to something like: 'get\r\n> information of tables in a publication'\r\n\r\nChanged.\r\n\r\n> 3.\r\n> +\r\n> + /*\r\n> + * We only throw a warning here so that the subcription can still be\r\n> + * created and let user aware that something is going to fail later and\r\n> + * they can fix the publications afterwards.\r\n> + */\r\n> + if (list_member(tablelist, rv))\r\n> + ereport(WARNING,\r\n> + errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\r\n> + errmsg(\"cannot use different column lists for table \\\"%s.%s\\\" in\r\n> different publications\",\r\n> + nspname, relname));\r\n> \r\n> Can we extend this comment to explain the case where after Alter\r\n> Publication, if the user dumps and restores back the subscription,\r\n> there is a possibility that \"CREATE SUBSCRIPTION\" won't work if we\r\n> give ERROR here instead of WARNING?\r\n\r\nAfter rethinking about this, it seems OK to report an ERROR here, as the pg_dump\r\nof a subscription always sets (connect = false). So, we won't hit the check when\r\nrestoring the dump, which means the restore can be successful even if the user\r\nchanges the publication afterwards. Based on this, I have changed the warning to\r\nan error.\r\n\r\nAttached is the new version patch.\r\n\r\nBest regards,\r\nHou zj",
"msg_date": "Mon, 16 May 2022 12:34:25 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On 2022-May-16, Amit Kapila wrote:\n\n> > Agreed. If we can get columnlist and rowfilter from pg_publication_tables, it\n> > will be more convenient. And I think users that want to fetch the columnlist\n> > and rowfilter of table can also benefit from it.\n> \n> After the change for this, we will give an error on combining\n> publications where one of the publications specifies all columns in\n> the table and the other doesn't provide any columns. We should not\n> give an error as both mean all columns.\n\nBut don't we need to behave the same way for both column lists and row\nfilters? I understand that some cases with different row filters for\ndifferent publications have shown to have weird behavior, so I think\nit'd make sense to restrict it in the same way. That would allow us to\nextend the behavior in a sensible way when we develop that, instead of\nsetting in stone now behavior that we regret later.\n\n> Few comments:\n> =================\n> 1.\n> postgres=# select * from pg_publication_tables;\n> pubname | schemaname | tablename | columnlist | rowfilter\n> ---------+------------+-----------+------------+-----------\n> pub1 | public | t1 | |\n> pub2 | public | t1 | 1 2 | (c3 < 10)\n> (2 rows)\n> \n> I think it is better to display column names for columnlist in the\n> exposed view similar to attnames in the pg_stats_ext view. 
I think\n> that will make it easier for users to understand this information.\n\n+1\n\n> I think we should change the \"descr\" to something like: 'get\n> information of tables in a publication'\n\n+1\n\n> 3.\n> +\n> + /*\n> + * We only throw a warning here so that the subcription can still be\n> + * created and let user aware that something is going to fail later and\n> + * they can fix the publications afterwards.\n> + */\n> + if (list_member(tablelist, rv))\n> + ereport(WARNING,\n> + errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"cannot use different column lists for table \\\"%s.%s\\\" in\n> different publications\",\n> + nspname, relname));\n> \n> Can we extend this comment to explain the case where after Alter\n> Publication, if the user dumps and restores back the subscription,\n> there is a possibility that \"CREATE SUBSCRIPTION\" won't work if we\n> give ERROR here instead of WARNING?\n\nYeah, and not only the comment — I think we need to have more in the\nwarning message itself. How about:\n\nERROR: cannot use different column lists for table \"...\" in different publications\nDETAIL: The subscription \"...\" cannot currently be used for replication.\n\n\nI think this whole affair is a bit sad TBH and I'm sure it'll give us\nsome grief -- similar to replication slots becoming inactive and nobody\nnoticing. A user changing a publication in a way that prevents some\nreplica from working and the warnings are hidden, they could have\ntrouble noticing that the replica is stuck.\n\nBut I have no better ideas.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 16 May 2022 15:20:33 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Mon, May 16, 2022 8:34 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> Attach the new version patch.\r\n> \r\n\r\nThanks for your patch. Here are some comments:\r\n\r\n1. (0001 patch)\r\n/*\r\n * Returns Oids of tables in a publication.\r\n */\r\nDatum\r\npg_get_publication_tables(PG_FUNCTION_ARGS)\r\n\r\nShould we modify the comment of pg_get_publication_tables() to \"Returns\r\ninformation of tables in a publication\"?\r\n\r\n2. (0002 patch)\r\n\r\n+\t * Note that we don't support the case where column list is different for\r\n+\t * the same table when combining publications. But we still need to check\r\n+\t * all the given publication-table mappings and report an error if any\r\n+\t * publications have different column list.\r\n \t *\r\n \t * Multiple publications might have multiple column lists for this\r\n \t * relation.\r\n\r\nI think it would be better if we swap the order of these two paragraphs. \r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Tue, 17 May 2022 03:25:29 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Mon, May 16, 2022 at 6:50 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-May-16, Amit Kapila wrote:\n>\n> > > Agreed. If we can get columnlist and rowfilter from pg_publication_tables, it\n> > > will be more convenient. And I think users that want to fetch the columnlist\n> > > and rowfilter of table can also benefit from it.\n> >\n> > After the change for this, we will give an error on combining\n> > publications where one of the publications specifies all columns in\n> > the table and the other doesn't provide any columns. We should not\n> > give an error as both mean all columns.\n>\n> But don't we need to behave the same way for both column lists and row\n> filters? I understand that some cases with different row filters for\n> different publications have shown to have weird behavior, so I think\n> it'd make sense to restrict it in the same way.\n>\n\nI think the case where we are worried about row filter behavior is for\ninitial table sync where we ignore publication actions and that is\ntrue with and without row filters. See email [1]. We are planning to\ndocument that behavior as a separate patch. The idea we have used for\nrow filters is similar to what IBM DB2 [2] and Oracle [3] uses where\nthey allow combining filters with pub-action (operation (insert,\nupdate, delete) in their case).\n\nI think both column lists and row filters have a different purpose and\nwe shouldn't try to make them behave in the same way. The main purpose\nof introducing a column list is to have statically different shapes on\npublisher and subscriber or hide sensitive column data. In both cases,\nit doesn't seem to make sense to combine column lists and we didn't\nfind any other database doing so. 
OTOH, for row filters, it makes\nsense to combine filters for each pub-action as both IBM DB2 and\nOracle seems to be doing.\n\n>\n> > 3.\n> > +\n> > + /*\n> > + * We only throw a warning here so that the subcription can still be\n> > + * created and let user aware that something is going to fail later and\n> > + * they can fix the publications afterwards.\n> > + */\n> > + if (list_member(tablelist, rv))\n> > + ereport(WARNING,\n> > + errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > + errmsg(\"cannot use different column lists for table \\\"%s.%s\\\" in\n> > different publications\",\n> > + nspname, relname));\n> >\n> > Can we extend this comment to explain the case where after Alter\n> > Publication, if the user dumps and restores back the subscription,\n> > there is a possibility that \"CREATE SUBSCRIPTION\" won't work if we\n> > give ERROR here instead of WARNING?\n>\n> Yeah, and not only the comment — I think we need to have more in the\n> warning message itself.\n>\n\nBut as mentioned by Hou-San in his last email (pg_dump of subscription\nalways set (connect = false) which means it won't try to fetch column\nlist), I think we don't need to give a WARNING here, instead, we can\nuse ERROR. So, do we need the extra DETAIL (The subscription \"...\"\ncannot currently be used for replication.) as that is implicit for the\nERROR case?\n\n>\n> I think this whole affair is a bit sad TBH and I'm sure it'll give us\n> some grief -- similar to replication slots becoming inactive and nobody\n> noticing. 
A user changing a publication in a way that prevents some\n> replica from working and the warnings are hidden, they could have\n> trouble noticing that the replica is stuck.\n>\n\nI agree and it seems this is the best we can do for now.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1L_98LF7Db4yFY1PhKKRzoT83xtN41jTS5X%2B8OeGrAkLw%40mail.gmail.com\n[2] - https://www.ibm.com/docs/en/idr/11.4.0?topic=rows-log-record-variables\n[3] - https://docs.oracle.com/en/cloud/paas/goldengate-cloud/gwuad/selecting-and-filtering-rows.html#GUID-11296A70-D953-4426-8EAA-37C2B4432446\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 17 May 2022 08:56:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Monday, May 16, 2022 9:34 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> Attach the new version patch.\r\nHi,\r\n\r\n\r\nI have few minor comments.\r\n\r\nFor v2-0001.\r\n\r\n(1) Unnecessary period at the end of column explanation\r\n\r\n+ <row>\r\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n+ <structfield>rowfilter</structfield> <type>text</type>\r\n+ </para>\r\n+ <para>\r\n+ Expression for the table's publication qualifying condition.\r\n+ </para></entry>\r\n+ </row>\r\n\r\n\r\nIt seems when we write a simple noun to explain a column,\r\nwe don't need to put a period at the end of the explanation.\r\nKindly change\r\nFROM:\r\n\"Expression for the table's publication qualifying condition.\"\r\nTO:\r\n\"Expression for the table's publication qualifying condition\"\r\n\r\n\r\nFor v2-0002.\r\n\r\n(a) typo in the commit message\r\n\r\nKindly change\r\nFROM:\r\n\"In both cases, it doesn't seems to make sense to combine column lists.\"\r\nTO:\r\n\"In both cases, it doesn't seem to make sense to combine column lists.\"\r\nor \"In both cases, it doesn't make sense to combine column lists.\"\r\n\r\n\r\n(b) fetch_table_list\r\n\r\n+ if (list_member(tablelist, rv))\r\n+ ereport(ERROR,\r\n+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\r\n+ errmsg(\"cannot use different column lists for table \\\"%s.%s\\\" in different publications\",\r\n+ nspname, relname));\r\n\r\n\r\nKindly add tests for new error paths, when we add them.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 17 May 2022 06:49:20 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Mon, May 16, 2022 at 6:04 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Attach the new version patch.\n>\n\nFew minor comments:\n==================\n1.\n+ <para>\n+ Names of table columns included in the publication. This contains all\n+ the columns of table when user didn't specify column list for the\n+ table.\n+ </para></entry>\n\nCan we slightly change it to: \"Names of table columns included in the\npublication. This contains all the columns of the table when the user\ndidn't specify the column list for the table.\"\n\n2. Below comments needs to be removed from tablesync.c as we don't\ncombine column lists after this patch.\n\n * For initial synchronization, column lists can be ignored in following\n* cases:\n*\n* 1) one of the subscribed publications for the table hasn't specified\n* any column list\n*\n* 2) one of the subscribed publications has puballtables set to true\n*\n* 3) one of the subscribed publications is declared as ALL TABLES IN\n* SCHEMA that includes this relation\n\n3.\nNote that we don't support the case where column list is different for\n+ * the same table when combining publications. But we still need to check\n+ * all the given publication-table mappings and report an error if any\n+ * publications have different column list.\n\nCan we change this comment to: \"Note that we don't support the case\nwhere the column list is different for the same table when combining\npublications. But one can later change the publication so we still\nneed to check all the given publication-table mappings and report an\nerror if any publications have a different column list.\"?\n\n4. Can we add a test for different column lists if it is not already there?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 17 May 2022 12:22:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Tuesday, May 17, 2022 2:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> Few minor comments:\r\n> ==================\r\n> 1.\r\n> + <para>\r\n> + Names of table columns included in the publication. This contains all\r\n> + the columns of table when user didn't specify column list for the\r\n> + table.\r\n> + </para></entry>\r\n> \r\n> Can we slightly change it to: \"Names of table columns included in the\r\n> publication. This contains all the columns of the table when the user\r\n> didn't specify the column list for the table.\"\r\n> \r\n> 2. Below comments needs to be removed from tablesync.c as we don't\r\n> combine column lists after this patch.\r\n> \r\n> * For initial synchronization, column lists can be ignored in following\r\n> * cases:\r\n> *\r\n> * 1) one of the subscribed publications for the table hasn't specified\r\n> * any column list\r\n> *\r\n> * 2) one of the subscribed publications has puballtables set to true\r\n> *\r\n> * 3) one of the subscribed publications is declared as ALL TABLES IN\r\n> * SCHEMA that includes this relation\r\n> \r\n> 3.\r\n> Note that we don't support the case where column list is different for\r\n> + * the same table when combining publications. But we still need to check\r\n> + * all the given publication-table mappings and report an error if any\r\n> + * publications have different column list.\r\n> \r\n> Can we change this comment to: \"Note that we don't support the case\r\n> where the column list is different for the same table when combining\r\n> publications. But one can later change the publication so we still\r\n> need to check all the given publication-table mappings and report an\r\n> error if any publications have a different column list.\"?\r\n> \r\n> 4. 
Can we add a test for different column lists if it is not already there?\r\n\r\nThanks for the comments.\r\n\r\nAttached is the new version patch, which addresses all the above comments and\r\nthe comments from Shi yu[1] and Osumi-san[2].\r\n\r\n[1] https://www.postgresql.org/message-id/OSZPR01MB6310F32344884F9C12F45071FDCE9%40OSZPR01MB6310.jpnprd01.prod.outlook.com\r\n[2] https://www.postgresql.org/message-id/TYCPR01MB83736AEC2493FCBB75CC7556EDCE9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\nBest regards,\r\nHou zj",
"msg_date": "Tue, 17 May 2022 09:10:00 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Tue, May 17, 2022 at 2:40 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Attach the new version patch which addressed all the above comments and\n> comments from Shi yu[1] and Osumi-san[2].\n>\n\nThanks, your first patch looks good to me. I'll commit that tomorrow\nunless there are more comments on the same. The second one is also in\ngood shape but I would like to test it a bit more and also see if\nothers have any suggestions/objections on the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 18 May 2022 07:58:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Mon, May 16, 2022 at 6:50 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-May-16, Amit Kapila wrote:\n>\n> > Few comments:\n> > =================\n> > 1.\n> > postgres=# select * from pg_publication_tables;\n> > pubname | schemaname | tablename | columnlist | rowfilter\n> > ---------+------------+-----------+------------+-----------\n> > pub1 | public | t1 | |\n> > pub2 | public | t1 | 1 2 | (c3 < 10)\n> > (2 rows)\n> >\n> > I think it is better to display column names for columnlist in the\n> > exposed view similar to attnames in the pg_stats_ext view. I think\n> > that will make it easier for users to understand this information.\n>\n> +1\n>\n\nI have committed the first patch after fixing this part. It seems Tom\nis not very happy doing this after beta-1 [1]. The reason we get this\ninformation via this view (and underlying function) is that it\nsimplifies the queries on the subscriber-side as you can see in the\nsecond patch. The query change is as below:\n@@ -1761,17 +1762,18 @@ fetch_table_list(WalReceiverConn *wrconn, List\n*publications)\n WalRcvExecResult *res;\n StringInfoData cmd;\n TupleTableSlot *slot;\n- Oid tableRow[2] = {TEXTOID, TEXTOID};\n+ Oid tableRow[3] = {TEXTOID, TEXTOID, NAMEARRAYOID};\n List *tablelist = NIL;\n\n initStringInfo(&cmd);\n- appendStringInfoString(&cmd, \"SELECT DISTINCT t.schemaname, t.tablename\\n\"\n+ appendStringInfoString(&cmd, \"SELECT DISTINCT t.schemaname, t.tablename, \\n\"\n+ \" t.attnames\\n\"\n \" FROM pg_catalog.pg_publication_tables t\\n\"\n \" WHERE t.pubname IN (\");\n\n\nNow, there is another way to change this query as well as done by\nHou-San in his first version [2] of the patch. 
The changed query with\nthat approach will be something like:\n@@ -1761,17 +1762,34 @@ fetch_table_list(WalReceiverConn *wrconn, List\n*publications)\n WalRcvExecResult *res;\n StringInfoData cmd;\n TupleTableSlot *slot;\n- Oid tableRow[2] = {TEXTOID, TEXTOID};\n+ Oid tableRow[3] = {TEXTOID, TEXTOID, INT2VECTOROID};\n List *tablelist = NIL;\n\n initStringInfo(&cmd);\n- appendStringInfoString(&cmd, \"SELECT DISTINCT t.schemaname, t.tablename\\n\"\n- \" FROM pg_catalog.pg_publication_tables t\\n\"\n+ appendStringInfoString(&cmd,\n+ \"SELECT DISTINCT t.schemaname,\\n\"\n+ \" t.tablename,\\n\"\n+ \" (CASE WHEN (array_length(pr.prattrs, 1) = t.relnatts)\\n\"\n+ \" THEN NULL ELSE pr.prattrs END)\\n\"\n+ \" FROM (SELECT P.pubname AS pubname,\\n\"\n+ \" N.nspname AS schemaname,\\n\"\n+ \" C.relname AS tablename,\\n\"\n+ \" P.oid AS pubid,\\n\"\n+ \" C.oid AS reloid,\\n\"\n+ \" C.relnatts\\n\"\n+ \" FROM pg_publication P,\\n\"\n+ \" LATERAL pg_get_publication_tables(P.pubname) GPT,\\n\"\n+ \" pg_class C JOIN pg_namespace N\\n\"\n+ \" ON (N.oid = C.relnamespace)\\n\"\n+ \" WHERE C.oid = GPT.relid) t\\n\"\n+ \" LEFT OUTER JOIN pg_publication_rel pr\\n\"\n+ \" ON (t.pubid = pr.prpubid AND\\n\"\n+ \" pr.prrelid = reloid)\\n\"\n\nIt appeared slightly complex and costly to me, so I have given the\nsuggestion to change it as we have now in the second patch as shown\nabove. Now, I can think of below ways to proceed here:\n\na. Revert the change in view (and underlying function) as done in\ncommit 0ff20288e1 and consider the alternate way (using a slightly\ncomplex query) to fix. Then maybe for PG-16, we can simplify it by\nchanging the underlying function and view.\nb. 
Proceed with the current approach of using a simplified query.\n\nWhat do you think?\n\n[1] - https://www.postgresql.org/message-id/91075.1652929852%40sss.pgh.pa.us\n[2] - https://www.postgresql.org/message-id/OS0PR01MB5716A594C58DE4FFD1F8100B94C89%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 19 May 2022 10:33:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Thu, May 19, 2022 at 10:33:13AM +0530, Amit Kapila wrote:\n> I have committed the first patch after fixing this part. It seems Tom\n> is not very happy doing this after beta-1 [1]. The reason we get this\n> information via this view (and underlying function) is that it\n> simplifies the queries on the subscriber-side as you can see in the\n> second patch. The query change is as below:\n> [1] - https://www.postgresql.org/message-id/91075.1652929852%40sss.pgh.pa.us\n\nI think Tom's concern is that adding information to a view seems like adding a\nfeature that hadn't previously been contemplated.\n(Catalog changes themselves are not prohibited during the beta period).\n\n> a. Revert the change in view (and underlying function) as done in\n> commit 0ff20288e1 and consider the alternate way (using a slightly\n> complex query) to fix. Then maybe for PG-16, we can simplify it by\n> changing the underlying function and view.\n\nBut, ISTM that it makes no sense to do it differently for v15 just to avoid the\nappearance of adding a new feature, only to re-do it in 2 weeks for v16...\nSo (from a passive observer) +0.1 to keep the current patch.\n\nI have some minor language fixes to that patch.\n\ndiff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml\nindex d96c72e5310..82aa84e96e1 100644\n--- a/doc/src/sgml/catalogs.sgml\n+++ b/doc/src/sgml/catalogs.sgml\n@@ -9691,7 +9691,7 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l\n \n <row>\n <entry><link linkend=\"view-pg-publication-tables\"><structname>pg_publication_tables</structname></link></entry>\n- <entry>publications and information of their associated tables</entry>\n+ <entry>publications and information about their associated tables</entry>\n </row>\n \n <row>\n@@ -11635,7 +11635,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx\n \n <para>\n The view <structname>pg_publication_tables</structname> provides\n- information about the mapping 
between publications and information of\n+ information about the mapping between publications and information about\n tables they contain. Unlike the underlying catalog\n <link linkend=\"catalog-pg-publication-rel\"><structname>pg_publication_rel</structname></link>,\n this view expands publications defined as <literal>FOR ALL TABLES</literal>\n@@ -11695,7 +11695,7 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx\n </para>\n <para>\n Names of table columns included in the publication. This contains all\n- the columns of the table when the user didn't specify the column list\n+ the columns of the table when the user didn't specify a column list\n for the table.\n </para></entry>\n </row>\ndiff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c\nindex 8c7fca62de3..2f706f638ce 100644\n--- a/src/backend/catalog/pg_publication.c\n+++ b/src/backend/catalog/pg_publication.c\n@@ -1077,7 +1077,7 @@ get_publication_name(Oid pubid, bool missing_ok)\n }\n \n /*\n- * Returns information of tables in a publication.\n+ * Returns information about tables in a publication.\n */\n Datum\n pg_get_publication_tables(PG_FUNCTION_ARGS)\ndiff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat\nindex 87aa571a331..86f13293090 100644\n--- a/src/include/catalog/pg_proc.dat\n+++ b/src/include/catalog/pg_proc.dat\n@@ -11673,7 +11673,7 @@\n prosrc => 'pg_show_replication_origin_status' },\n \n # publications\n-{ oid => '6119', descr => 'get information of tables in a publication',\n+{ oid => '6119', descr => 'get information about tables in a publication',\n proname => 'pg_get_publication_tables', prorows => '1000', proretset => 't',\n provolatile => 's', prorettype => 'record', proargtypes => 'text',\n proallargtypes => '{text,oid,int2vector,pg_node_tree}', proargmodes => '{i,o,o,o}',\n\n\n",
"msg_date": "Thu, 19 May 2022 07:07:24 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, May 19, 2022 at 10:33:13AM +0530, Amit Kapila wrote:\n>> I have committed the first patch after fixing this part. It seems Tom\n>> is not very happy doing this after beta-1 [1]. The reason we get this\n>> information via this view (and underlying function) is that it\n>> simplifies the queries on the subscriber-side as you can see in the\n>> second patch. The query change is as below:\n>> [1] - https://www.postgresql.org/message-id/91075.1652929852%40sss.pgh.pa.us\n\n> I think Tom's concern is that adding information to a view seems like adding a\n> feature that hadn't previously been contemplated.\n> (Catalog changes themselves are not prohibited during the beta period).\n\nIt certainly smells like a new feature, but my concern was more around the\npost-beta catalog change. We do those only if really forced to, and the\nexplanation in the commit message didn't satisfy me as to why it was\nnecessary. This explanation isn't much better --- if we're trying to\nprohibit a certain class of publication definitions, what good does it do\nto check that on the subscriber side? Even more to the point, how can we\nhave a subscriber do that by relying on view columns that don't exist in\nolder versions? I'm also quite concerned about anything that involves\nsubscribers examining row filter conditions; that sounds like a pretty\ndirect route to bugs involving unsolvability and the halting problem.\n\n(But I've not read very much of this thread ... been a bit under the\nweather the last couple weeks. Maybe this actually is a sane solution.\nIt just doesn't sound like one at this level of detail.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 May 2022 10:24:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Thu, May 19, 2022 at 7:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Thu, May 19, 2022 at 10:33:13AM +0530, Amit Kapila wrote:\n> >> I have committed the first patch after fixing this part. It seems Tom\n> >> is not very happy doing this after beta-1 [1]. The reason we get this\n> >> information via this view (and underlying function) is that it\n> >> simplifies the queries on the subscriber-side as you can see in the\n> >> second patch. The query change is as below:\n> >> [1] - https://www.postgresql.org/message-id/91075.1652929852%40sss.pgh.pa.us\n>\n> > I think Tom's concern is that adding information to a view seems like adding a\n> > feature that hadn't previously been contemplated.\n> > (Catalog changes themselves are not prohibited during the beta period).\n>\n> It certainly smells like a new feature, but my concern was more around the\n> post-beta catalog change. We do those only if really forced to, and the\n> explanation in the commit message didn't satisfy me as to why it was\n> necessary. This explanation isn't much better --- if we're trying to\n> prohibit a certain class of publication definitions, what good does it do\n> to check that on the subscriber side?\n>\n\nIt is required on the subscriber side because prohibition is only for\nthe cases where multiple publications are combined. We disallow the\ncases where the column list is different for the same table when\ncombining publications. For example:\n\nPublisher-side:\nCreate table tab(c1 int, c2 int);\nCreate Publication pub1 for table tab(c1);\nCreate Publication pub1 for table tab(c2);\n\nSubscriber-side:\nCreate Subscription sub1 Connection 'dbname=postgres' Publication pub1, pub2;\n\nWe want to prohibit such cases. So, it would be better to check at the\ntime of 'Create Subscription' to validate such combinations and\nprohibit them. 
To achieve that we extended the existing function\npg_get_publication_tables() and view pg_publication_tables to expose\nthe column list and verify such a combination. We primarily need\ncolumn list information for this prohibition but it appeared natural\nto expose the row filter.\n\nAs mentioned in my previous email, we can fetch the required\ninformation directly from system table pg_publication_rel and extend\nthe query in fetch_table_list to achieve the desired purpose but\nextending the existing function/view for this appears to be a simpler\nway.\n\n> Even more to the point, how can we\n> have a subscriber do that by relying on view columns that don't exist in\n> older versions?\n>\n\nWe need a version check like (if\n(walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)) for that.\n\n> I'm also quite concerned about anything that involves\n> subscribers examining row filter conditions; that sounds like a pretty\n> direct route to bugs involving unsolvability and the halting problem.\n>\n\nWe examine only the column list for the purpose of this prohibition.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 20 May 2022 08:36:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Friday, May 20, 2022 11:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, May 19, 2022 at 7:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n> \r\n> > Even more to the point, how can we\r\n> > have a subscriber do that by relying on view columns that don't exist\r\n> > in older versions?\r\n> >\r\n> \r\n> We need a version check like (if\r\n> (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)) for that.\r\n\r\nThanks for pointing it out. Here is the new version patch which add this version check.\r\n\r\nBest regards,\r\nHou zj",
"msg_date": "Tue, 24 May 2022 05:33:50 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Fri, May 20, 2022 at 8:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 19, 2022 at 7:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Justin Pryzby <pryzby@telsasoft.com> writes:\n> > > On Thu, May 19, 2022 at 10:33:13AM +0530, Amit Kapila wrote:\n> > >> I have committed the first patch after fixing this part. It seems Tom\n> > >> is not very happy doing this after beta-1 [1]. The reason we get this\n> > >> information via this view (and underlying function) is that it\n> > >> simplifies the queries on the subscriber-side as you can see in the\n> > >> second patch. The query change is as below:\n> > >> [1] - https://www.postgresql.org/message-id/91075.1652929852%40sss.pgh.pa.us\n> >\n> > > I think Tom's concern is that adding information to a view seems like adding a\n> > > feature that hadn't previously been contemplated.\n> > > (Catalog changes themselves are not prohibited during the beta period).\n> >\n> > It certainly smells like a new feature, but my concern was more around the\n> > post-beta catalog change. We do those only if really forced to, and the\n> > explanation in the commit message didn't satisfy me as to why it was\n> > necessary. This explanation isn't much better --- if we're trying to\n> > prohibit a certain class of publication definitions, what good does it do\n> > to check that on the subscriber side?\n> >\n>\n> It is required on the subscriber side because prohibition is only for\n> the cases where multiple publications are combined. We disallow the\n> cases where the column list is different for the same table when\n> combining publications. For example:\n>\n> Publisher-side:\n> Create table tab(c1 int, c2 int);\n> Create Publication pub1 for table tab(c1);\n> Create Publication pub1 for table tab(c2);\n>\n> Subscriber-side:\n> Create Subscription sub1 Connection 'dbname=postgres' Publication pub1, pub2;\n>\n> We want to prohibit such cases. 
So, it would be better to check at the\n> time of 'Create Subscription' to validate such combinations and\n> prohibit them. To achieve that we extended the existing function\n> pg_get_publication_tables() and view pg_publication_tables to expose\n> the column list and verify such a combination. We primarily need\n> column list information for this prohibition but it appeared natural\n> to expose the row filter.\n>\n\nI still feel that the current approach to extend the underlying\nfunction and view is a better idea but if you and or others are not\nconvinced then we can try to achieve it by extending the existing\nquery on the subscriber side as mentioned in my previous email [1].\nKindly let me know your opinion?\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1KfL%3Dez5fKPB-0Nrgf7wiqN9bXP-YHHj2YH5utXAmjYug%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 24 May 2022 15:19:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Tue, May 24, 2022 at 3:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 20, 2022 at 8:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, May 19, 2022 at 7:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Justin Pryzby <pryzby@telsasoft.com> writes:\n> > > > On Thu, May 19, 2022 at 10:33:13AM +0530, Amit Kapila wrote:\n> > > >> I have committed the first patch after fixing this part. It seems Tom\n> > > >> is not very happy doing this after beta-1 [1]. The reason we get this\n> > > >> information via this view (and underlying function) is that it\n> > > >> simplifies the queries on the subscriber-side as you can see in the\n> > > >> second patch. The query change is as below:\n> > > >> [1] - https://www.postgresql.org/message-id/91075.1652929852%40sss.pgh.pa.us\n> > >\n> > > > I think Tom's concern is that adding information to a view seems like adding a\n> > > > feature that hadn't previously been contemplated.\n> > > > (Catalog changes themselves are not prohibited during the beta period).\n> > >\n> > > It certainly smells like a new feature, but my concern was more around the\n> > > post-beta catalog change. We do those only if really forced to, and the\n> > > explanation in the commit message didn't satisfy me as to why it was\n> > > necessary. This explanation isn't much better --- if we're trying to\n> > > prohibit a certain class of publication definitions, what good does it do\n> > > to check that on the subscriber side?\n> > >\n> >\n> > It is required on the subscriber side because prohibition is only for\n> > the cases where multiple publications are combined. We disallow the\n> > cases where the column list is different for the same table when\n> > combining publications. 
For example:\n> >\n> > Publisher-side:\n> > Create table tab(c1 int, c2 int);\n> > Create Publication pub1 for table tab(c1);\n> > Create Publication pub2 for table tab(c2);\n> >\n> > Subscriber-side:\n> > Create Subscription sub1 Connection 'dbname=postgres' Publication pub1, pub2;\n> >\n> > We want to prohibit such cases. So, it would be better to check at the\n> > time of 'Create Subscription' to validate such combinations and\n> > prohibit them. To achieve that we extended the existing function\n> > pg_get_publication_tables() and view pg_publication_tables to expose\n> > the column list and verify such a combination. We primarily need\n> > column list information for this prohibition but it appeared natural\n> > to expose the row filter.\n> >\n>\n> I still feel that the current approach to extend the underlying\n> function and view is a better idea but if you and or others are not\n> convinced then we can try to achieve it by extending the existing\n> query on the subscriber side as mentioned in my previous email [1].\n> Kindly let me know your opinion?\n>\n\nUnless someone has objections or thinks otherwise, I am planning to\nproceed with the approach of extending the function/view (patch for\nwhich is already committed) and using it to prohibit the combinations\nof publications having different column lists as is done in the\ncurrently proposed patch [1].\n\n[1] - https://www.postgresql.org/message-id/OS0PR01MB5716AD7C0FE7386630BDBAAB94D79%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 26 May 2022 08:56:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Tue, May 24, 2022 at 11:03 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, May 20, 2022 11:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Thanks for pointing it out. Here is the new version patch which add this version check.\n>\n\nI have added/edited a few comments and ran pgindent. The attached\nlooks good to me. I'll push this early next week unless there are more\ncomments/suggestions.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 27 May 2022 11:17:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Fri, May 27, 2022 at 11:17:00AM +0530, Amit Kapila wrote:\n> On Tue, May 24, 2022 at 11:03 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Friday, May 20, 2022 11:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Thanks for pointing it out. Here is the new version patch which add this version check.\n> \n> I have added/edited a few comments and ran pgindent. The attached\n> looks good to me. I'll push this early next week unless there are more\n> comments/suggestions.\n\nA minor doc review.\nNote that I also sent some doc comments at 20220519120724.GO19626@telsasoft.com.\n\n+ lists among publications in which case <command>ALTER PUBLICATION</command>\n+ command will be successful but later the WalSender in publisher or the\n\nCOMMA in which\n\nremove \"command\" ?\n\ns/in publisher/on the publisher/\n\n+ Subscription having several publications in which the same table has been\n+ published with different column lists is not supported.\n\nEither \"Subscriptions having .. are not supported\"; or,\n\"A subscription having .. is not supported\".\n\n\n",
"msg_date": "Fri, 27 May 2022 00:53:31 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Friday, May 27, 2022 1:54 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> On Fri, May 27, 2022 at 11:17:00AM +0530, Amit Kapila wrote:\n> > On Tue, May 24, 2022 at 11:03 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Friday, May 20, 2022 11:06 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > >\n> > > Thanks for pointing it out. Here is the new version patch which add this\n> version check.\n> >\n> > I have added/edited a few comments and ran pgindent. The attached\n> > looks good to me. I'll push this early next week unless there are more\n> > comments/suggestions.\n> \n> A minor doc review.\n> Note that I also sent some doc comments at\n> 20220519120724.GO19626@telsasoft.com.\n> \n> + lists among publications in which case <command>ALTER\n> PUBLICATION</command>\n> + command will be successful but later the WalSender in publisher\n> + or the\n> \n> COMMA in which\n> \n> remove \"command\" ?\n> \n> s/in publisher/on the publisher/\n> \n> + Subscription having several publications in which the same table has been\n> + published with different column lists is not supported.\n> \n> Either \"Subscriptions having .. are not supported\"; or, \"A subscription having ..\n> is not supported\".\n\nThanks for the comments. Here is the new version patch set which fixes these.\n\nBest regards,\nHou zj",
"msg_date": "Fri, 27 May 2022 07:34:32 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Fri, May 27, 2022 at 1:04 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, May 27, 2022 1:54 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Fri, May 27, 2022 at 11:17:00AM +0530, Amit Kapila wrote:\n> > > On Tue, May 24, 2022 at 11:03 AM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > > On Friday, May 20, 2022 11:06 AM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > > >\n> > > > Thanks for pointing it out. Here is the new version patch which add this\n> > version check.\n> > >\n> > > I have added/edited a few comments and ran pgindent. The attached\n> > > looks good to me. I'll push this early next week unless there are more\n> > > comments/suggestions.\n> >\n> > A minor doc review.\n> > Note that I also sent some doc comments at\n> > 20220519120724.GO19626@telsasoft.com.\n> >\n> > + lists among publications in which case <command>ALTER\n> > PUBLICATION</command>\n> > + command will be successful but later the WalSender in publisher\n> > + or the\n> >\n> > COMMA in which\n> >\n> > remove \"command\" ?\n> >\n> > s/in publisher/on the publisher/\n> >\n> > + Subscription having several publications in which the same table has been\n> > + published with different column lists is not supported.\n> >\n> > Either \"Subscriptions having .. are not supported\"; or, \"A subscription having ..\n> > is not supported\".\n>\n> Thanks for the comments. Here is the new version patch set which fixes these.\n>\n\nI have pushed the bug-fix patch. I'll look at the language\nimprovements patch next.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Jun 2022 17:28:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Thu, Jun 2, 2022 at 9:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 27, 2022 at 1:04 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Friday, May 27, 2022 1:54 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > On Fri, May 27, 2022 at 11:17:00AM +0530, Amit Kapila wrote:\n> > > > On Tue, May 24, 2022 at 11:03 AM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > > >\n> > > > > On Friday, May 20, 2022 11:06 AM Amit Kapila <amit.kapila16@gmail.com>\n> > > wrote:\n> > > > >\n> > > > > Thanks for pointing it out. Here is the new version patch which add this\n> > > version check.\n> > > >\n> > > > I have added/edited a few comments and ran pgindent. The attached\n> > > > looks good to me. I'll push this early next week unless there are more\n> > > > comments/suggestions.\n> > >\n> > > A minor doc review.\n> > > Note that I also sent some doc comments at\n> > > 20220519120724.GO19626@telsasoft.com.\n> > >\n> > > + lists among publications in which case <command>ALTER\n> > > PUBLICATION</command>\n> > > + command will be successful but later the WalSender in publisher\n> > > + or the\n> > >\n> > > COMMA in which\n> > >\n> > > remove \"command\" ?\n> > >\n> > > s/in publisher/on the publisher/\n> > >\n> > > + Subscription having several publications in which the same table has been\n> > > + published with different column lists is not supported.\n> > >\n> > > Either \"Subscriptions having .. are not supported\"; or, \"A subscription having ..\n> > > is not supported\".\n> >\n> > Thanks for the comments. Here is the new version patch set which fixes these.\n> >\n>\n> I have pushed the bug-fix patch. 
I'll look at the language\n> improvements patch next.\n\n\nI noticed the patch \"0001-language-fixes-on-HEAD-from-Justin.patch\" says:\n\n@@ -11673,7 +11673,7 @@\n prosrc => 'pg_show_replication_origin_status' },\n\n # publications\n-{ oid => '6119', descr => 'get information of tables in a publication',\n+{ oid => '6119', descr => 'get information about tables in a publication',\n\n~~~\n\nBut, this grammar website [1] says:\n\nWhat Does Of Mean\nAs defined by Cambridge dictionary Of is basically used “to show\npossession, belonging, or origin”.\n\nWhat Does About Mean\nSimilarly about primarily indicates ‘On the subject of; concerning’ as\ndefined by the Oxford dictionary. Or about in brief highlights some\nfact ‘on the subject of, or connected with’\n\nThe main difference between of and about is that of implies a\npossessive quality while about implies concerning or on the subject of\nsomething.\n\n~~~\n\n From which I guess\n\n1. 'get information of tables in a publication' ~= 'get information\nbelonging to tables in a publication'\n\n2. 'get information about tables in a publication' ~= 'get information\non the subject of tables in a publication'\n\n\nThe 'pg_publication_tables' view contains various attributes\n(tablename, attnames, rowfilter, etc) BELONGING TO each table of the\npublication, so the current description (using 'of') was already the\nmore accurate one wasn't it?\n\n------\n[1] https://pediaa.com/difference-between-of-and-about/\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 6 Jun 2022 15:42:31 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Mon, Jun 06, 2022 at 03:42:31PM +1000, Peter Smith wrote:\n> I noticed the patch \"0001-language-fixes-on-HEAD-from-Justin.patch\" says:\n> \n> @@ -11673,7 +11673,7 @@\n> prosrc => 'pg_show_replication_origin_status' },\n> \n> # publications\n> -{ oid => '6119', descr => 'get information of tables in a publication',\n> +{ oid => '6119', descr => 'get information about tables in a publication',\n> \n> ~~~\n> \n> But, this grammar website [1] says:\n...\n> From which I guess\n> \n> 1. 'get information of tables in a publication' ~= 'get information\n> belonging to tables in a publication'\n\nBut the information doesn't \"belong to\" the tables.\n\nThe information is \"regarding\" the tables (or \"associated with\" or \"concerned\nwith\" or \"respecting\" or \"on the subject of\" the tables).\n\nI think my change is correct based on the grammar definition, as well as its\nintuitive \"feel\".\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 7 Jun 2022 22:25:21 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Wed, Jun 8, 2022 at 1:25 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Jun 06, 2022 at 03:42:31PM +1000, Peter Smith wrote:\n> > I noticed the patch \"0001-language-fixes-on-HEAD-from-Justin.patch\" says:\n> >\n> > @@ -11673,7 +11673,7 @@\n> > prosrc => 'pg_show_replication_origin_status' },\n> >\n> > # publications\n> > -{ oid => '6119', descr => 'get information of tables in a publication',\n> > +{ oid => '6119', descr => 'get information about tables in a publication',\n> >\n> > ~~~\n> >\n> > But, this grammar website [1] says:\n> ...\n> > From which I guess\n> >\n> > 1. 'get information of tables in a publication' ~= 'get information\n> > belonging to tables in a publication'\n>\n> But the information doesn't \"belong to\" the tables.\n>\n> The information is \"regarding\" the tables (or \"associated with\" or \"concerned\n> with\" or \"respecting\" or \"on the subject of\" the tables).\n>\n> I think my change is correct based on the grammar definition, as well as its\n> intuitive \"feel\".\n>\n\nActually, I have no problem with this being worded either way. My\npoint was mostly to question if it was really worth changing it at\nthis time - e.g. I think there is a reluctance to change anything to\ndo with the catalogs during beta (even when a catversion bump may not\nbe required).\n\nI agree that \"about\" seems better if the text said, \"get information\nabout tables\". But it does not say that - it says \"get information\nabout tables in a publication\" which I felt made a subtle difference.\n\ne.g.1 \"... on the subject of / concerned with tables.\"\n- sounds like attributes about each table (col names, row filter etc)\n\nversus\n\ne.g.2 \"... 
on the subject of / concerned with tables in a publication.\"\n- sounds less like information PER table, and more like information\nabout the table membership of the publication.\n\n~~\n\nAny ambiguities can be eliminated if this text was just fixed to be\nconsistent with the wording of catalogs.sgml:\ne.g. \"publications and information about their associated tables\"\n\nBut then this comes full circle back to my question if during beta is\na good time to be making such a change.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 8 Jun 2022 15:35:05 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Wed, Jun 8, 2022 at 11:05 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Jun 8, 2022 at 1:25 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Mon, Jun 06, 2022 at 03:42:31PM +1000, Peter Smith wrote:\n> > > I noticed the patch \"0001-language-fixes-on-HEAD-from-Justin.patch\" says:\n> > >\n> > > @@ -11673,7 +11673,7 @@\n> > > prosrc => 'pg_show_replication_origin_status' },\n> > >\n> > > # publications\n> > > -{ oid => '6119', descr => 'get information of tables in a publication',\n> > > +{ oid => '6119', descr => 'get information about tables in a publication',\n> > >\n> > > ~~~\n> > >\n> > > But, this grammar website [1] says:\n> > ...\n> > > From which I guess\n> > >\n> > > 1. 'get information of tables in a publication' ~= 'get information\n> > > belonging to tables in a publication'\n> >\n> > But the information doesn't \"belong to\" the tables.\n> >\n> > The information is \"regarding\" the tables (or \"associated with\" or \"concerned\n> > with\" or \"respecting\" or \"on the subject of\" the tables).\n> >\n> > I think my change is correct based on the grammar definition, as well as its\n> > intuitive \"feel\".\n> >\n>\n> Actually, I have no problem with this being worded either way. My\n> point was mostly to question if it was really worth changing it at\n> this time - e.g. I think there is a reluctance to change anything to\n> do with the catalogs during beta (even when a catversion bump may not\n> be required).\n>\n> I agree that \"about\" seems better if the text said, \"get information\n> about tables\". But it does not say that - it says \"get information\n> about tables in a publication\" which I felt made a subtle difference.\n>\n> e.g.1 \"... on the subject of / concerned with tables.\"\n> - sounds like attributes about each table (col names, row filter etc)\n>\n> versus\n>\n> e.g.2 \"... 
on the subject of / concerned with tables in a publication.\"\n> - sounds less like information PER table, and more like information\n> about the table membership of the publication.\n>\n> ~~\n>\n> Any ambiguities can be eliminated if this text was just fixed to be\n> consistent with the wording of catalogs.sgml:\n> e.g. \"publications and information about their associated tables\"\n>\n\nI don't know if this is better than the current text for this view:\n'get information of tables in a publication' and unless we have a\nconsensus on any change here, I think it is better to retain the\ncurrent text as it is.\n\nI would like to close the Open item listed corresponding to this\nthread [1] as the fix for the reported issue is committed\n(fd0b9dcebd). Do let me know if you or others think otherwise?\n\n[1] - https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 13 Jun 2022 08:54:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 8:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I would like to close the Open item listed corresponding to this\n> thread [1] as the fix for the reported issue is committed\n> (fd0b9dcebd). Do let me know if you or others think otherwise?\n>\n\nSeeing no objections, I have closed this item.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Jun 2022 09:53:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus: logical replication rows/cols combinations"
}
] |
[
{
"msg_contents": "Newly promoted primary may leave an invalid checkpoint.\r\n\r\nIn function CreateRestartPoint, control file is updated and old wals are removed. But in some situations, control file is not updated, old wals are still removed. Thus produces an invalid checkpoint with nonexistent wal. Crucial log: \"invalid primary checkpoint record\", \"could not locate a valid checkpoint record\".\r\n\r\n\r\n\r\n\r\nThe following timeline reproduces above situation:\r\n\r\ntl1: standby begins to create restart point (time or wal triggered).\r\n\r\ntl2: standby promotes and control file state is updated to DB_IN_PRODUCTION. Control file will not update (xlog.c:9690). But old wals are still removed (xlog.c:9719).\r\n\r\ntl3: standby becomes primary. primary may crash before the next complete checkpoint (OOM in my situation). primary will crash continually with invalid checkpoint.\r\n\r\n\r\n\r\n\r\nThe attached patch reproduces this problem using standard postgresql perl test, you can run with \r\n\r\n./configure --enable-tap-tests; make -j; make -C src/test/recovery/ check PROVE_TESTS=t/027_invalid_checkpoint_after_promote.pl\r\n\r\nThe attached patch also fixes this problem by ensuring that remove old wals only after control file is updated.",
"msg_date": "Tue, 26 Apr 2022 15:16:13 +0800",
"msg_from": "\"=?ISO-8859-1?B?WmhhbyBSdWk=?=\" <875941708@qq.com>",
"msg_from_op": true,
"msg_subject": "Fix primary crash continually with invalid checkpoint after promote"
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 03:16:13PM +0800, Zhao Rui wrote:\n> In function CreateRestartPoint, control file is updated and old wals are removed. But in some situations, control file is not updated, old wals are still removed. Thus produces an invalid checkpoint with nonexistent wal. Crucial log: \"invalid primary checkpoint record\", \"could not locate a valid checkpoint record\".\n\nI think this is the same issue tracked here: [0].\n\n[0] https://postgr.es/m/20220316.102444.2193181487576617583.horikyota.ntt%40gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 26 Apr 2022 11:16:29 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix primary crash continually with invalid checkpoint after\n promote"
},
{
"msg_contents": "\"=?ISO-8859-1?B?WmhhbyBSdWk=?=\" <875941708@qq.com> writes:\n> Newly promoted primary may leave an invalid checkpoint.\n> In function CreateRestartPoint, control file is updated and old wals are removed. But in some situations, control file is not updated, old wals are still removed. Thus produces an invalid checkpoint with nonexistent wal. Crucial log: \"invalid primary checkpoint record\", \"could not locate a valid checkpoint record\".\n\nI believe this is the same issue being discussed here:\n\nhttps://www.postgresql.org/message-id/flat/20220316.102444.2193181487576617583.horikyota.ntt%40gmail.com\n\nbut Horiguchi-san's proposed fix looks quite different from yours.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Apr 2022 15:47:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix primary crash continually with invalid checkpoint after\n promote"
},
{
"msg_contents": "At Tue, 26 Apr 2022 15:47:13 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> \"=?ISO-8859-1?B?WmhhbyBSdWk=?=\" <875941708@qq.com> writes:\n> > Newly promoted primary may leave an invalid checkpoint.\n> > In function CreateRestartPoint, control file is updated and old wals are removed. But in some situations, control file is not updated, old wals are still removed. Thus produces an invalid checkpoint with nonexistent wal. Crucial log: \"invalid primary checkpoint record\", \"could not locate a valid checkpoint record\".\n> \n> I believe this is the same issue being discussed here:\n> \n> https://www.postgresql.org/message-id/flat/20220316.102444.2193181487576617583.horikyota.ntt%40gmail.com\n> \n> but Horiguchi-san's proposed fix looks quite different from yours.\n\nThe root cause is that CreateRestartPoint omits to update last\ncheckpoint in control file if archiver recovery exits at an\nunfortunate timing. So my proposal is going to fix the root cause.\n\nZhao Rui's proposal is retension of WAL files according to (the wrong\ncontent of) control file.\n\nAside from the fact that it may let slots be invalidated ealier, It's\nnot great that an acutally performed restartpoint is forgotten, which\nmay cause the next crash recovery starts from an already performed\ncheckpoint.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 27 Apr 2022 11:24:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix primary crash continually with invalid checkpoint after\n promote"
},
{
"msg_contents": "On Wed, Apr 27, 2022 at 11:24:11AM +0900, Kyotaro Horiguchi wrote:\n> Zhao Rui's proposal is retension of WAL files according to (the wrong\n> content of) control file.\n> \n> Aside from the fact that it may let slots be invalidated ealier, It's\n> not great that an acutally performed restartpoint is forgotten, which\n> may cause the next crash recovery starts from an already performed\n> checkpoint.\n\nYeah, I was analyzing this problem and took a look at what's proposed\nhere, and I agree that what is proposed on this thread would just do\nsome unnecessary work if we find ourselves in a situation where we\nwe need to replay from a point earlier than necessary, aka the\ncheckpoint that should have been already finished.\n--\nMichael",
"msg_date": "Wed, 27 Apr 2022 12:00:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix primary crash continually with invalid checkpoint after\n promote"
}
] |
[
{
"msg_contents": "When I create a new table, and then I evaluate the execution of the SELECT query, I see a strange rows count in EXPLAIN\nCREATE TABLE test1(f INTEGER PRIMARY KEY NOT NULL);\nANALYZE test1;\nEXPLAIN SELECT * FROM test1;\n QUERY PLAN\n---------------------------------------------------------\n Seq Scan on test1 (cost=0.00..35.50 rows=2550 width=4)\n(1 row)\n\nTable is empty but rows=2550. Seem like it was calculated from some default values.\nIs this normal behavior or a bug? Can it lead to a poor choice of the plan of a query in general?\n\n\n\n\n\n\n\n\n\n\nWhen I create a new table, and then I evaluate the execution of the SELECT query, I see a strange rows count in EXPLAIN\nCREATE TABLE test1(f INTEGER PRIMARY KEY NOT NULL);\nANALYZE test1;\nEXPLAIN SELECT * FROM test1;\n QUERY PLAN\n---------------------------------------------------------\n Seq Scan on test1 (cost=0.00..35.50 rows=2550 width=4)\n(1 row)\n\n\nTable is empty but rows=2550. Seem like it was calculated from some default values.\nIs this normal behavior or a bug? Can it lead to a poor choice of the plan of a query in general?",
"msg_date": "Tue, 26 Apr 2022 08:45:24 +0000",
"msg_from": "=?koi8-r?B?8MHO1MDbyc4g4czFy9PBzsTSIOnXwc7P18ne?=\n\t<AI.Pantyushin@gaz-is.ru>",
"msg_from_op": true,
"msg_subject": "Wrong rows count in EXPLAIN"
},
{
"msg_contents": "Hi!\n\n> 26 апр. 2022 г., в 13:45, Пантюшин Александр Иванович <AI.Pantyushin@gaz-is.ru> написал(а):\n> \n> When I create a new table, and then I evaluate the execution of the SELECT query, I see a strange rows count in EXPLAIN\n> CREATE TABLE test1(f INTEGER PRIMARY KEY NOT NULL);\n> ANALYZE test1;\n> EXPLAIN SELECT * FROM test1;\n> QUERY PLAN\n> ---------------------------------------------------------\n> Seq Scan on test1 (cost=0.00..35.50 rows=2550 width=4)\n> (1 row)\n> \n> Table is empty but rows=2550. Seem like it was calculated from some default values.\n> Is this normal behavior or a bug? Can it lead to a poor choice of the plan of a query in general?\n\nWhich Postgres version do you use?\n\nI observe:\npostgres=# CREATE TABLE test1(f INTEGER PRIMARY KEY NOT NULL);\nCREATE TABLE\npostgres=# ANALYZE test1;\nANALYZE\npostgres=# EXPLAIN SELECT * FROM test1;\n QUERY PLAN \n-----------------------------------------------------------------------------\n Index Only Scan using test1_pkey on test1 (cost=0.12..8.14 rows=1 width=4)\n(1 row)\n\npostgres=# select version();\n version \n----------------------------------------------------------------------------------------------------------------------\n PostgreSQL 15devel on x86_64-apple-darwin19.6.0, compiled by Apple clang version 11.0.3 (clang-1103.0.32.62), 64-bit\n(1 row)\n\nWithout \"ANALYZE test1;\" table_block_relation_estimate_size() assumes relation size is 10 blocks.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 27 Apr 2022 14:08:22 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows count in EXPLAIN"
},
{
"msg_contents": "Hi,\n>Which Postgres version do you use?\nI checked this on PG 11\npostgres=# select version();\n version\n-------------------------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 11.5 on x86_64-w64-mingw32, compiled by x86_64-w64-mingw32-gcc.exe (x86_64-win32-seh-rev0, Built by MinGW-W64 project) 8.1.0, 64-bit\n(1 row)\n\nand on PG 13\npostgres=# select version();\n version\n-----------------------------------------------------------------------------------------------------\n PostgreSQL 13.5 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 8.4.0-1ubuntu1~18.04) 8.4.0, 64-bit\n(1 row)\n\nBoth versions shows same strange rows counting in EXPLAINE\n\n________________________________\nОт: Andrey Borodin <x4mmm@yandex-team.ru>\nОтправлено: 27 апреля 2022 г. 13:08:22\nКому: Пантюшин Александр Иванович\nКопия: pgsql-hackers@postgresql.org; Тарасов Георгий Витальевич\nТема: Re: Wrong rows count in EXPLAIN\n\nHi!\n\n> 26 апр. 2022 г., в 13:45, Пантюшин Александр Иванович <AI.Pantyushin@gaz-is.ru> написал(а):\n>\n> When I create a new table, and then I evaluate the execution of the SELECT query, I see a strange rows count in EXPLAIN\n> CREATE TABLE test1(f INTEGER PRIMARY KEY NOT NULL);\n> ANALYZE test1;\n> EXPLAIN SELECT * FROM test1;\n> QUERY PLAN\n> ---------------------------------------------------------\n> Seq Scan on test1 (cost=0.00..35.50 rows=2550 width=4)\n> (1 row)\n>\n> Table is empty but rows=2550. Seem like it was calculated from some default values.\n> Is this normal behavior or a bug? 
Can it lead to a poor choice of the plan of a query in general?\n\nWhich Postgres version do you use?\n\nI observe:\npostgres=# CREATE TABLE test1(f INTEGER PRIMARY KEY NOT NULL);\nCREATE TABLE\npostgres=# ANALYZE test1;\nANALYZE\npostgres=# EXPLAIN SELECT * FROM test1;\n QUERY PLAN\n-----------------------------------------------------------------------------\n Index Only Scan using test1_pkey on test1 (cost=0.12..8.14 rows=1 width=4)\n(1 row)\n\npostgres=# select version();\n version\n----------------------------------------------------------------------------------------------------------------------\n PostgreSQL 15devel on x86_64-apple-darwin19.6.0, compiled by Apple clang version 11.0.3 (clang-1103.0.32.62), 64-bit\n(1 row)\n\nWithout \"ANALYZE test1;\" table_block_relation_estimate_size() assumes relation size is 10 blocks.\n\nBest regards, Andrey Borodin.\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi,\n>Which Postgres version do you use?\nI checked this on PG 11\npostgres=# select version();\n version\n-------------------------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 11.5 on x86_64-w64-mingw32, compiled by x86_64-w64-mingw32-gcc.exe (x86_64-win32-seh-rev0, Built by MinGW-W64 project) 8.1.0, 64-bit\n(1 row)\n\nand on PG 13\npostgres=# select version();\n version \n-----------------------------------------------------------------------------------------------------\n PostgreSQL 13.5 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 8.4.0-1ubuntu1~18.04) 8.4.0, 64-bit\n(1 row)\n\n\nBoth versions shows same strange rows counting in EXPLAINE\n\n\n\nОт: Andrey Borodin <x4mmm@yandex-team.ru>\nОтправлено: 27 апреля 2022 г. 13:08:22\nКому: Пантюшин Александр Иванович\nКопия: pgsql-hackers@postgresql.org; Тарасов Георгий Витальевич\nТема: Re: Wrong rows count in EXPLAIN\n \n\n\n\nHi!\n\n> 26 апр. 
2022 г., в 13:45, Пантюшин Александр Иванович <AI.Pantyushin@gaz-is.ru> написал(а):\n> \n> When I create a new table, and then I evaluate the execution of the SELECT query, I see a strange rows count in EXPLAIN\n> CREATE TABLE test1(f INTEGER PRIMARY KEY NOT NULL);\n> ANALYZE test1;\n> EXPLAIN SELECT * FROM test1;\n> QUERY PLAN\n> ---------------------------------------------------------\n> Seq Scan on test1 (cost=0.00..35.50 rows=2550 width=4)\n> (1 row)\n> \n> Table is empty but rows=2550. Seem like it was calculated from some default values.\n> Is this normal behavior or a bug? Can it lead to a poor choice of the plan of a query in general?\n\nWhich Postgres version do you use?\n\nI observe:\npostgres=# CREATE TABLE test1(f INTEGER PRIMARY KEY NOT NULL);\nCREATE TABLE\npostgres=# ANALYZE test1;\nANALYZE\npostgres=# EXPLAIN SELECT * FROM test1;\n QUERY PLAN \n-----------------------------------------------------------------------------\n Index Only Scan using test1_pkey on test1 (cost=0.12..8.14 rows=1 width=4)\n(1 row)\n\npostgres=# select version();\n version \n\n----------------------------------------------------------------------------------------------------------------------\n PostgreSQL 15devel on x86_64-apple-darwin19.6.0, compiled by Apple clang version 11.0.3 (clang-1103.0.32.62), 64-bit\n(1 row)\n\nWithout \"ANALYZE test1;\" table_block_relation_estimate_size() assumes relation size is 10 blocks.\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 27 Apr 2022 10:17:38 +0000",
"msg_from": "=?koi8-r?B?8MHO1MDbyc4g4czFy9PBzsTSIOnXwc7P18ne?=\n\t<AI.Pantyushin@gaz-is.ru>",
"msg_from_op": true,
"msg_subject": "Re: Wrong rows count in EXPLAIN"
},
{
"msg_contents": "\n\n> 27 апр. 2022 г., в 15:17, Пантюшин Александр Иванович <AI.Pantyushin@gaz-is.ru> написал(а):\n> \n> Hi,\n> >Which Postgres version do you use?\n> I checked this on PG 11\n> ...\n\n> and on PG 13\n\nYes, I think before 3d351d91 it was impossible to distinguish between actually empty and never analyzed table.\nBut now it is working just as you would expect. There's an interesting relevant discussion linked to the commit message.\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/postgres/postgres/commit/3d351d916b20534f973eda760cde17d96545d4c4\n\n\n\n",
"msg_date": "Wed, 27 Apr 2022 15:39:00 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows count in EXPLAIN"
},
{
"msg_contents": "On Wed, 27 Apr 2022 at 21:08, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> Which Postgres version do you use?\n\n3d351d91 changed things so we could tell the difference between a\nrelation which was analyzed and is empty vs a relation that's never\nbeen analyzed. That's why you're not seeing the same behaviour as the\nOP.\n\nTom's commit message [1] also touches on the \"safety measure\". Here\nhe's referring to the 2550 estimate, or more accurately, 10 pages\nfilled with tuples of that width. This is intended so that newly\ncreated tables that quickly subsequently are loaded with data then\nqueried before auto-analyze gets a chance to run are not assumed to be\nempty. The problem, if we assumed these non-analyzed tables were\nempty, would be that the planner would likely choose plans containing\nnodes like Seq Scans and non-parameterized Nested Loops rather than\nmaybe Index Scans and Merge or Hash joins. The 10-page thing is aimed\nto try and avoid the planner from making that mistake. Generally, the\nplanner underestimating the number of rows causes worse problems than\nwhen it overestimates the row counts. So 10 seems much better than 0.\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3d351d916b20534f973eda760cde17d96545d4c4\n\n\n",
"msg_date": "Wed, 27 Apr 2022 22:43:43 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows count in EXPLAIN"
},
{
"msg_contents": "=?koi8-r?B?8MHO1MDbyc4g4czFy9PBzsTSIOnXwc7P18ne?= <AI.Pantyushin@gaz-is.ru> writes:\n> When I create a new table, and then I evaluate the execution of the SELECT query, I see a strange rows count in EXPLAIN\n> CREATE TABLE test1(f INTEGER PRIMARY KEY NOT NULL);\n> ANALYZE test1;\n> EXPLAIN SELECT * FROM test1;\n> QUERY PLAN\n> ---------------------------------------------------------\n> Seq Scan on test1 (cost=0.00..35.50 rows=2550 width=4)\n> (1 row)\n\n> Table is empty but rows=2550.\n\nThis is intentional, arising from the planner's unwillingness to\nassume that a table is empty. It assumes that such a table actually\ncontains (from memory) 10 pages, and then backs into a rowcount\nestimate from that depending on the data-type-dependent width of\nthe table rows.\n\nWithout this provision, we'd produce very bad plans for cases\nwhere a newly-populated table hasn't been analyzed yet.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Apr 2022 09:44:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows count in EXPLAIN"
},
{
"msg_contents": "On Wed, Apr 27, 2022 at 09:44:21AM -0400, Tom Lane wrote:\n> =?koi8-r?B?8MHO1MDbyc4g4czFy9PBzsTSIOnXwc7P18ne?= <AI.Pantyushin@gaz-is.ru> writes:\n> > When I create a new table, and then I evaluate the execution of the SELECT query, I see a strange rows count in EXPLAIN\n> > CREATE TABLE test1(f INTEGER PRIMARY KEY NOT NULL);\n> > ANALYZE test1;\n> > EXPLAIN SELECT * FROM test1;\n> > QUERY PLAN\n> > ---------------------------------------------------------\n> > Seq Scan on test1 (cost=0.00..35.50 rows=2550 width=4)\n> > (1 row)\n> \n> > Table is empty but rows=2550.\n> \n> This is intentional, arising from the planner's unwillingness to\n> assume that a table is empty. It assumes that such a table actually\n> contains (from memory) 10 pages, and then backs into a rowcount\n> estimate from that depending on the data-type-dependent width of\n> the table rows.\n> \n> Without this provision, we'd produce very bad plans for cases\n> where a newly-populated table hasn't been analyzed yet.\n\nWe could have a noice mode that warns when a table without statistics is\nused.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 27 Apr 2022 09:51:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Wrong rows count in EXPLAIN"
}
] |
[
{
"msg_contents": "range of composite types. I found this would be a great idea!!!\nQuestion on stackoverflow\n<https://stackoverflow.com/questions/71996169/some-of-range-composite-type-operator-only-check-the-elements-of-composite-type>\nDB Fiddle\n<https://dbfiddle.uk/?rdbms=postgres_14&fiddle=cdffa53650e8df576bc82d0ae2e1beef>\n\n source code regress test\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/rangetypes.sql;h=b69efede3ae4977e322c1349c02a5dc2f74b7cc4;hb=6df7a9698bb036610c1e8c6d375e1be38cb26d5f>\nranges of composite types code part:\n\n 504 --\n> 505 -- Ranges of composites\n> 506 --\n> 507\n> 508 create type two_ints as (a int, b int);\n> 509 create type two_ints_range as range (subtype = two_ints);\n> 510\n> 511 -- with force_parallel_mode on, this exercises tqueue.c's range\n> remapping\n> 512 select *, row_to_json(upper(t)) as u from\n> 513 (values (two_ints_range(row(1,2), row(3,4))),\n> 514 (two_ints_range(row(5,6), row(7,8)))) v(t);\n>\n\n-- composite type range.\n> create type mytype as (t1 int, t2 date);\n> -- create type my_interval as (t1 int, t2 interval);\n> select (2,'2022-01-02')::mytype ;\n> create type mytyperange as range(subtype = mytype);\n>\n\nI am thinking construct a composite type range that would be equivalent as:\n\n> select a, b::datefrom generate_series(1,8) a,\n> generate_series('2022-01-01'::timestamp,\n> '2022-01-31'::timestamp, interval '1 day') b;\n>\n> for that means the following sql queries should return* false:*\n\nselect mytyperange (\n> (1,'2022-01-01')::mytype,\n> (8, '2022-01-31')::mytype, '[]') @> (2, '2020-01-19')::mytype;\n>\n\n\n> select\n> (2, '2020-01-19')::mytype <@\n> mytyperange(\n> (1,'2022-01-01')::mytype,\n> (8, '2022-01-31')::mytype, '[]') ;\n>\n\n\n> --does the range overlaps, that is, have any common element.\n> select\n> mytyperange ((2,'2020-12-30')::mytype,\n> (2, '2020-12-31')::mytype)\n> &&\n> mytyperange(\n> (1,'2022-01-01')::mytype,\n> (8, 
'2022-01-31')::mytype) ;\n>\n\nfrom the db fiddle link, so far I failed.\nIf this is possible then we may need a *subtype_diff *function and *canonical\n*function.\n\n\nrange of composite types. I found this would be a great idea!!! Question on stackoverflowDB Fiddle\n source code regress test ranges of composite types code part: 504 -- 505 -- Ranges of composites 506 -- 507 508 create type two_ints as (a int, b int); 509 create type two_ints_range as range (subtype = two_ints); 510 511 -- with force_parallel_mode on, this exercises tqueue.c's range remapping 512 select *, row_to_json(upper(t)) as u from 513 (values (two_ints_range(row(1,2), row(3,4))), 514 (two_ints_range(row(5,6), row(7,8)))) v(t);-- composite type range.create type mytype as (t1 int, t2 date);-- create type my_interval as (t1 int, t2 interval);select (2,'2022-01-02')::mytype ;create type mytyperange as range(subtype = mytype); I am thinking construct a composite type range that would be equivalent as: \nselect a, b::date\nfrom generate_series(1,8) a,\ngenerate_series('2022-01-01'::timestamp,\n '2022-01-31'::timestamp, interval '1 day') b;for that means the following sql queries should return false:select mytyperange ( (1,'2022-01-01')::mytype, (8, '2022-01-31')::mytype, '[]') @> (2, '2020-01-19')::mytype; select (2, '2020-01-19')::mytype <@ mytyperange( (1,'2022-01-01')::mytype, (8, '2022-01-31')::mytype, '[]') ;\n --does the range overlaps, that is, have any common element.select mytyperange ((2,'2020-12-30')::mytype, (2, '2020-12-31')::mytype) && mytyperange( (1,'2022-01-01')::mytype, (8, '2022-01-31')::mytype) ;from the db fiddle link, so far I failed.If this is possible then we may need a subtype_diff function and canonical function.",
"msg_date": "Tue, 26 Apr 2022 14:46:13 +0530",
"msg_from": "Jian He <hejian.mark@gmail.com>",
"msg_from_op": true,
"msg_subject": "range of composite types!"
},
{
"msg_contents": "Hello.\nJust wondering if this is possible or not..\n\n---------- Forwarded message ---------\nFrom: Jian He <hejian.mark@gmail.com>\nDate: Tue, Apr 26, 2022 at 2:46 PM\nSubject: range of composite types!\nTo: pgsql-general <pgsql-general@lists.postgresql.org>\n\nrange of composite types. I found this would be a great idea!!!\nQuestion on stackoverflow\n<https://stackoverflow.com/questions/71996169/some-of-range-composite-type-operator-only-check-the-elements-of-composite-type>\nDB Fiddle\n<https://dbfiddle.uk/?rdbms=postgres_14&fiddle=cdffa53650e8df576bc82d0ae2e1beef>\n\n source code regress test\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/rangetypes.sql;h=b69efede3ae4977e322c1349c02a5dc2f74b7cc4;hb=6df7a9698bb036610c1e8c6d375e1be38cb26d5f>\nranges of composite types code part:\n\n 504 --\n> 505 -- Ranges of composites\n> 506 --\n> 507\n> 508 create type two_ints as (a int, b int);\n> 509 create type two_ints_range as range (subtype = two_ints);\n> 510\n> 511 -- with force_parallel_mode on, this exercises tqueue.c's range\n> remapping\n> 512 select *, row_to_json(upper(t)) as u from\n> 513 (values (two_ints_range(row(1,2), row(3,4))),\n> 514 (two_ints_range(row(5,6), row(7,8)))) v(t);\n>\n\n-- composite type range.\n> create type mytype as (t1 int, t2 date);\n> -- create type my_interval as (t1 int, t2 interval);\n> select (2,'2022-01-02')::mytype ;\n> create type mytyperange as range(subtype = mytype);\n>\n\nI am thinking construct a composite type range that would be equivalent as:\n\n> select a, b::datefrom generate_series(1,8) a,\n> generate_series('2022-01-01'::timestamp,\n> '2022-01-31'::timestamp, interval '1 day') b;\n>\n> for that means the following sql queries should return* false:*\n\nselect mytyperange (\n> (1,'2022-01-01')::mytype,\n> (8, '2022-01-31')::mytype, '[]') @> (2, '2020-01-19')::mytype;\n>\n\n\n> select\n> (2, '2020-01-19')::mytype <@\n> mytyperange(\n> (1,'2022-01-01')::mytype,\n> 
(8, '2022-01-31')::mytype, '[]') ;\n>\n\n\n> --does the range overlaps, that is, have any common element.\n> select\n> mytyperange ((2,'2020-12-30')::mytype,\n> (2, '2020-12-31')::mytype)\n> &&\n> mytyperange(\n> (1,'2022-01-01')::mytype,\n> (8, '2022-01-31')::mytype) ;\n>\n\nfrom the db fiddle link, so far I failed.\nIf this is possible then we may need a *subtype_diff *function and *canonical\n*function.\n\n Hello. Just wondering if this is possible or not.. ---------- Forwarded message ---------From: Jian He <hejian.mark@gmail.com>Date: Tue, Apr 26, 2022 at 2:46 PMSubject: range of composite types!To: pgsql-general <pgsql-general@lists.postgresql.org>\nrange of composite types. I found this would be a great idea!!! Question on stackoverflowDB Fiddle\n source code regress test ranges of composite types code part: 504 -- 505 -- Ranges of composites 506 -- 507 508 create type two_ints as (a int, b int); 509 create type two_ints_range as range (subtype = two_ints); 510 511 -- with force_parallel_mode on, this exercises tqueue.c's range remapping 512 select *, row_to_json(upper(t)) as u from 513 (values (two_ints_range(row(1,2), row(3,4))), 514 (two_ints_range(row(5,6), row(7,8)))) v(t);-- composite type range.create type mytype as (t1 int, t2 date);-- create type my_interval as (t1 int, t2 interval);select (2,'2022-01-02')::mytype ;create type mytyperange as range(subtype = mytype); I am thinking construct a composite type range that would be equivalent as: \nselect a, b::date\nfrom generate_series(1,8) a,\ngenerate_series('2022-01-01'::timestamp,\n '2022-01-31'::timestamp, interval '1 day') b;for that means the following sql queries should return false:select mytyperange ( (1,'2022-01-01')::mytype, (8, '2022-01-31')::mytype, '[]') @> (2, '2020-01-19')::mytype; select (2, '2020-01-19')::mytype <@ mytyperange( (1,'2022-01-01')::mytype, (8, '2022-01-31')::mytype, '[]') ;\n --does the range overlaps, that is, have any common element.select mytyperange 
((2,'2020-12-30')::mytype, (2, '2020-12-31')::mytype) && mytyperange( (1,'2022-01-01')::mytype, (8, '2022-01-31')::mytype) ;from the db fiddle link, so far I failed.If this is possible then we may need a subtype_diff function and canonical function.",
"msg_date": "Wed, 27 Apr 2022 10:33:49 +0530",
"msg_from": "Jian He <hejian.mark@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fwd: range of composite types!"
},
{
"msg_contents": "Jian He <hejian.mark@gmail.com> writes:\n>> for that means the following sql queries should return* false:*\n\n>> select mytyperange (\n>> (1,'2022-01-01')::mytype,\n>> (8, '2022-01-31')::mytype, '[]') @> (2, '2020-01-19')::mytype;\n\nWhy should that return false? The comparison rules for composites\nsay that you compare the first column, only if that's equal\ncompare the second, etc. Here, \"2\" is between \"1\" and \"8\" so\nthe contents of the second column don't matter.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Apr 2022 01:26:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: range of composite types!"
},
{
"msg_contents": "On Tuesday, April 26, 2022, Jian He <hejian.mark@gmail.com> wrote:\n\n>\n> -- composite type range.\n>> create type mytype as (t1 int, t2 date);\n>> -- create type my_interval as (t1 int, t2 interval);\n>> select (2,'2022-01-02')::mytype ;\n>> create type mytyperange as range(subtype = mytype);\n>>\n>\n> I am thinking construct a composite type range that would be equivalent\n> as:\n>\n>> select a, b::datefrom generate_series(1,8) a,\n>> generate_series('2022-01-01'::timestamp,\n>> '2022-01-31'::timestamp, interval '1 day') b;\n>>\n>> Ranges have to be ordered. How do you propose to order the above?\nComposite type comparisons have defined ordering semantics. Your results\ndemonstrate what those are (namely, subsequent fields are used only to\nbreak ties). If you want different behavior you will have to code it\nyourself - possibly including ignoring the generic composite type\ninfrastructure and make a formal base type of whatever it is you need.\n\nDavid J.\n\nOn Tuesday, April 26, 2022, Jian He <hejian.mark@gmail.com> wrote:\n-- composite type range.create type mytype as (t1 int, t2 date);-- create type my_interval as (t1 int, t2 interval);select (2,'2022-01-02')::mytype ;create type mytyperange as range(subtype = mytype); I am thinking construct a composite type range that would be equivalent as: \nselect a, b::date\nfrom generate_series(1,8) a,\ngenerate_series('2022-01-01'::timestamp,\n '2022-01-31'::timestamp, interval '1 day') b;Ranges have to be ordered. How do you propose to order the above? Composite type comparisons have defined ordering semantics. Your results demonstrate what those are (namely, subsequent fields are used only to break ties). If you want different behavior you will have to code it yourself - possibly including ignoring the generic composite type infrastructure and make a formal base type of whatever it is you need.David J.",
"msg_date": "Tue, 26 Apr 2022 22:29:28 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: range of composite types!"
}
] |
[
{
"msg_contents": "Hi,\n\nI see that we have included '-p' flex flag twice in the commands used\nto generate the scanner files. See below:\n\nsrc/backend/parser/Makefile:60: scan.c: FLEXFLAGS = -CF -p -p\nsrc/backend/utils/adt/Makefile:122: jsonpath_scan.c: FLEXFLAGS = -CF -p -p\nsrc/bin/psql/Makefile:61: psqlscanslash.c: FLEXFLAGS = -Cfe -p -p\nsrc/fe_utils/Makefile:43: psqlscan.c: FLEXFLAGS = -Cfe -p -p\nsrc/backend/utils/adt/Makefile:122: jsonpath_scan.c: FLEXFLAGS = -CF -p -p\nsrc/bin/psql/Makefile:61: psqlscanslash.c: FLEXFLAGS = -Cfe -p -p\n\nDo we need this or can the extra -p flag be removed?\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n",
"msg_date": "Tue, 26 Apr 2022 17:46:39 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "double inclusion of '-p' flex flag"
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 7:16 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi,\n>\n> I see that we have included '-p' flex flag twice in the commands used\n> to generate the scanner files. See below:\n>\n> src/backend/parser/Makefile:60: scan.c: FLEXFLAGS = -CF -p -p\n> src/backend/utils/adt/Makefile:122: jsonpath_scan.c: FLEXFLAGS = -CF -p -p\n> src/bin/psql/Makefile:61: psqlscanslash.c: FLEXFLAGS = -Cfe -p -p\n> src/fe_utils/Makefile:43: psqlscan.c: FLEXFLAGS = -Cfe -p -p\n> src/backend/utils/adt/Makefile:122: jsonpath_scan.c: FLEXFLAGS = -CF -p -p\n> src/bin/psql/Makefile:61: psqlscanslash.c: FLEXFLAGS = -Cfe -p -p\n>\n> Do we need this or can the extra -p flag be removed?\n\n From the Flex manual:\n\n\"generates a performance report to stderr. The report consists of\ncomments regarding features of the flex input file which will cause a\nserious loss of performance in the resulting scanner. If you give the\nflag twice, you will also get comments regarding features that lead to\nminor performance losses.\"\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Apr 2022 19:25:28 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: double inclusion of '-p' flex flag"
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 5:55 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> On Tue, Apr 26, 2022 at 7:16 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I see that we have included '-p' flex flag twice in the commands used\n> > to generate the scanner files. See below:\n> >\n> > src/backend/parser/Makefile:60: scan.c: FLEXFLAGS = -CF -p -p\n> > src/backend/utils/adt/Makefile:122: jsonpath_scan.c: FLEXFLAGS = -CF -p -p\n> > src/bin/psql/Makefile:61: psqlscanslash.c: FLEXFLAGS = -Cfe -p -p\n> > src/fe_utils/Makefile:43: psqlscan.c: FLEXFLAGS = -Cfe -p -p\n> > src/backend/utils/adt/Makefile:122: jsonpath_scan.c: FLEXFLAGS = -CF -p -p\n> > src/bin/psql/Makefile:61: psqlscanslash.c: FLEXFLAGS = -Cfe -p -p\n> >\n> > Do we need this or can the extra -p flag be removed?\n>\n> From the Flex manual:\n>\n> \"generates a performance report to stderr. The report consists of\n> comments regarding features of the flex input file which will cause a\n> serious loss of performance in the resulting scanner. If you give the\n> flag twice, you will also get comments regarding features that lead to\n> minor performance losses.\"\n>\n\nAhh. I see. This information is missing in the man page. thanks.!\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n",
"msg_date": "Tue, 26 Apr 2022 18:28:23 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: double inclusion of '-p' flex flag"
}
] |
[
{
"msg_contents": "Hi,\n\nThe below files in orafce contrib module are generated at build time.\nHowever, these are checked into the repository. Shouldn't these files\nbe removed from the repository and added to the .gitignore file so\nthat they get ignored in the future commits.\n\nsqlparse.c\nsqlscan.c\nsqlparse.h\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n",
"msg_date": "Tue, 26 Apr 2022 17:49:35 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "orafce: some of the build time generated files are not present in\n .gitignore and also checked into the repository"
},
{
"msg_contents": "> On 26 Apr 2022, at 14:19, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n\n> The below files in orafce contrib module are generated at build time.\n> However, these are checked into the repository. Shouldn't these files\n> be removed from the repository and added to the .gitignore file so\n> that they get ignored in the future commits.\n\nYou should probably take this to the orafce project instead, possibly with a PR\nagainst their repo?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 26 Apr 2022 14:21:36 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: orafce: some of the build time generated files are not present in\n .gitignore and also checked into the repository"
},
{
"msg_contents": "Sure, I'll do that.\n\nThanks,\nAshutosh\n\nOn Tue, Apr 26, 2022 at 5:51 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 26 Apr 2022, at 14:19, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> > The below files in orafce contrib module are generated at build time.\n> > However, these are checked into the repository. Shouldn't these files\n> > be removed from the repository and added to the .gitignore file so\n> > that they get ignored in the future commits.\n>\n> You should probably take this to the orafce project instead, possibly with a PR\n> against their repo?\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n\n\n",
"msg_date": "Tue, 26 Apr 2022 18:34:54 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: orafce: some of the build time generated files are not present in\n .gitignore and also checked into the repository"
},
{
"msg_contents": "út 26. 4. 2022 v 14:19 odesílatel Ashutosh Sharma <ashu.coek88@gmail.com>\nnapsal:\n\n> Hi,\n>\n> The below files in orafce contrib module are generated at build time.\n> However, these are checked into the repository. Shouldn't these files\n> be removed from the repository and added to the .gitignore file so\n> that they get ignored in the future commits.\n>\n> sqlparse.c\n> sqlscan.c\n> sqlparse.h\n>\n\nWithout these files there is a problem with MSVC build.\n\n\n> --\n> With Regards,\n> Ashutosh Sharma.\n>\n>\n>\n\nút 26. 4. 2022 v 14:19 odesílatel Ashutosh Sharma <ashu.coek88@gmail.com> napsal:Hi,\n\nThe below files in orafce contrib module are generated at build time.\nHowever, these are checked into the repository. Shouldn't these files\nbe removed from the repository and added to the .gitignore file so\nthat they get ignored in the future commits.\n\nsqlparse.c\nsqlscan.c\nsqlparse.hWithout these files there is a problem with MSVC build. \n\n--\nWith Regards,\nAshutosh Sharma.",
"msg_date": "Tue, 26 Apr 2022 18:17:41 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: orafce: some of the build time generated files are not present in\n .gitignore and also checked into the repository"
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 06:17:41PM +0200, Pavel Stehule wrote:\n> \n> út 26. 4. 2022 v 14:19 odesílatel Ashutosh Sharma <ashu.coek88@gmail.com>\n> napsal:\n> \n> Hi,\n> \n> The below files in orafce contrib module are generated at build time.\n> However, these are checked into the repository. Shouldn't these files\n> be removed from the repository and added to the .gitignore file so\n> that they get ignored in the future commits.\n> \n> sqlparse.c\n> sqlscan.c\n> sqlparse.h\n> \n> Without these files there is a problem with MSVC build. \n\nUh, I am kind of lost here. Why was this reported to the Postgres\nserver email lists and not to the orafce email lists? Is there\nconfusion who develops/support orafce? Which MSVC build is broken? The\nPostgres server or orafce?\n\nI assume the problem is that the 'makefile' system generates these\nfiles, but the MSVC build doesn't generate them, so it is just easier to\ncheck them in after a make build so MSVC can use them, and another make\nrun will just overwrite them.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 26 Apr 2022 12:36:44 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: orafce: some of the build time generated files are not present\n in .gitignore and also checked into the repository"
},
{
"msg_contents": "út 26. 4. 2022 v 18:36 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n\n> On Tue, Apr 26, 2022 at 06:17:41PM +0200, Pavel Stehule wrote:\n> >\n> > út 26. 4. 2022 v 14:19 odesílatel Ashutosh Sharma <ashu.coek88@gmail.com\n> >\n> > napsal:\n> >\n> > Hi,\n> >\n> > The below files in orafce contrib module are generated at build time.\n> > However, these are checked into the repository. Shouldn't these files\n> > be removed from the repository and added to the .gitignore file so\n> > that they get ignored in the future commits.\n> >\n> > sqlparse.c\n> > sqlscan.c\n> > sqlparse.h\n> >\n> > Without these files there is a problem with MSVC build.\n>\n> Uh, I am kind of lost here. Why was this reported to the Postgres\n> server email lists and not to the orafce email lists? Is there\n> confusion who develops/support orafce? Which MSVC build is broken? The\n> Postgres server or orafce?\n>\n\nI am sorry. This is just Orafce topic.\n\nRegards\n\nPavel\n\n\n>\n> I assume the problem is that the 'makefile' system generates these\n> files, but the MSVC build doesn't generate them, so it is just easier to\n> check them in after a make build so MSVC can use them, and another make\n> run will just overwrite them.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Indecision is a decision. Inaction is an action. Mark Batterson\n>\n>\n\nút 26. 4. 2022 v 18:36 odesílatel Bruce Momjian <bruce@momjian.us> napsal:On Tue, Apr 26, 2022 at 06:17:41PM +0200, Pavel Stehule wrote:\n> \n> út 26. 4. 2022 v 14:19 odesílatel Ashutosh Sharma <ashu.coek88@gmail.com>\n> napsal:\n> \n> Hi,\n> \n> The below files in orafce contrib module are generated at build time.\n> However, these are checked into the repository. 
Shouldn't these files\n> be removed from the repository and added to the .gitignore file so\n> that they get ignored in the future commits.\n> \n> sqlparse.c\n> sqlscan.c\n> sqlparse.h\n> \n> Without these files there is a problem with MSVC build. \n\nUh, I am kind of lost here. Why was this reported to the Postgres\nserver email lists and not to the orafce email lists? Is there\nconfusion who develops/support orafce? Which MSVC build is broken? The\nPostgres server or orafce?I am sorry. This is just Orafce topic.RegardsPavel \n\nI assume the problem is that the 'makefile' system generates these\nfiles, but the MSVC build doesn't generate them, so it is just easier to\ncheck them in after a make build so MSVC can use them, and another make\nrun will just overwrite them.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Tue, 26 Apr 2022 18:42:16 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: orafce: some of the build time generated files are not present in\n .gitignore and also checked into the repository"
}
] |
[
{
"msg_contents": "Kyotaro's patch seems good to me and fixes the test case in my patch.\r\nDo you have interest in adding a test like one in my patch?\r\n\r\n\r\n> +\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE); > + > \t/* > \t * Remember the prior checkpoint's redo ptr for > \t * UpdateCheckPointDistanceEstimate() > \t */ > \tPriorRedoPtr = ControlFile->checkPointCopy.redo; > > +\tAssert (PriorRedoPtr < RedoRecPtr);Maybe PriorRedoPtr does not need to be under LWLockAcquire?\r\nregards. -- Zhao Rui Alibaba Cloud: https://www.aliyun.com/\r\n------------------ Original ------------------\r\nFrom: \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com>;\r\nDate: Wed, Mar 16, 2022 09:24 AM\r\nTo: \"pgsql-hackers\"<pgsql-hackers@lists.postgresql.org>;\r\nCc: \"masao.fujii\"<masao.fujii@oss.nttdata.com>;\r\nSubject: Possible corruption by CreateRestartPoint at promotion\r\n\r\n\r\n\r\nHello, (Cc:ed Fujii-san)\r\n\r\nThis is a diverged topic from [1], which is summarized as $SUBJECT.\r\n\r\nTo recap:\r\n\r\nWhile discussing on additional LSNs in checkpoint log message,\r\nFujii-san pointed out [2] that there is a case where\r\nCreateRestartPoint leaves unrecoverable database when concurrent\r\npromotion happens. That corruption is \"fixed\" by the next checkpoint\r\nso it is not a severe corruption.\r\n\r\nAFAICS since 9.5, no check(/restart)pionts won't run concurrently with\r\nrestartpoint [3]. So I propose to remove the code path as attached.\r\n\r\nregards.\r\n\r\n\r\n[1] https://www.postgresql.org/message-id/20220316.091913.806120467943749797.horikyota.ntt%40gmail.com\r\n\r\n[2] https://www.postgresql.org/message-id/7bfad665-db9c-0c2a-2604-9f54763c5f9e%40oss.nttdata.com\r\n\r\n[3] https://www.postgresql.org/message-id/20220222.174401.765586897814316743.horikyota.ntt%40gmail.com\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center",
"msg_date": "Wed, 27 Apr 2022 12:36:10 +0800",
"msg_from": "\"=?ISO-8859-1?B?UnVpIFpoYW8=?=\" <875941708@qq.com>",
"msg_from_op": true,
"msg_subject": "Re:Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "On Wed, Apr 27, 2022 at 12:36:10PM +0800, Rui Zhao wrote:\n> Do you have interest in adding a test like one in my patch?\n\nI have studied the test case you are proposing, and I am afraid that\nit is too expensive as designed. And it is actually racy as you\nexpect the restart point to take longer than the promotion with a\ntiming based on an arbitrary (and large!) amount of data inserted into\nthe primary. Well, the promotion should be shorter than the restart \npoint in any case, but such tests should be designed so as they would\nwork reliably on slow machines while being able to complete quickly on\nfast machines.\n\nIt would much better if the test is designed so as the restart point\nis stopped at an arbitrary step rather than throttled, moving on when\nthe promotion of the standby is done. A well-known method, that would\nnot work on Windows, is to rely on SIGSTOP that could be used on the\ncheckpointer for such things. Anyway, we don't have any mean to\nreliably stop a restart point while in the middle of its processing,\ndo we?\n--\nMichael",
"msg_date": "Wed, 27 Apr 2022 14:27:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Apr 27, 2022 at 12:36:10PM +0800, Rui Zhao wrote:\n>> Do you have interest in adding a test like one in my patch?\n\n> I have studied the test case you are proposing, and I am afraid that\n> it is too expensive as designed.\n\nThat was my feeling too. It's certainly a useful test for verifying\nthat we fixed the problem, but that doesn't mean that it's worth the\ncycles to add it as a permanent fixture in check-world, even if we\ncould get rid of the timing assumptions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Apr 2022 01:31:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
},
{
"msg_contents": "At Wed, 27 Apr 2022 01:31:55 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Wed, Apr 27, 2022 at 12:36:10PM +0800, Rui Zhao wrote:\n> >> Do you have interest in adding a test like one in my patch?\n> \n> > I have studied the test case you are proposing, and I am afraid that\n> > it is too expensive as designed.\n> \n> That was my feeling too. It's certainly a useful test for verifying\n> that we fixed the problem, but that doesn't mean that it's worth the\n> cycles to add it as a permanent fixture in check-world, even if we\n> could get rid of the timing assumptions.\n\nMy first feeling is the same. And I don't find a way to cause this\ncheap and reliably without inserting a dedicate debugging-aid code.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 28 Apr 2022 11:50:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible corruption by CreateRestartPoint at promotion"
}
] |
[
{
"msg_contents": "In function ItemPointerEquals, the ItemPointerGetBlockNumber\nalready checked the ItemPointer if valid, there is no need\nto check it again in ItemPointerGetOffset, so use\nItemPointerGetOffsetNumberNoCheck instead.\n\nSigned-off-by: Junwang Zhao <zhjwpku@gmail.com>\n---\n src/backend/storage/page/itemptr.c | 4 ++--\n 1 file changed, 2 insertions(+), 2 deletions(-)\n\ndiff --git a/src/backend/storage/page/itemptr.c b/src/backend/storage/page/itemptr.c\nindex 9011337aa8..61ad727b1d 100644\n--- a/src/backend/storage/page/itemptr.c\n+++ b/src/backend/storage/page/itemptr.c\n@@ -37,8 +37,8 @@ ItemPointerEquals(ItemPointer pointer1, ItemPointer pointer2)\n \n \tif (ItemPointerGetBlockNumber(pointer1) ==\n \t\tItemPointerGetBlockNumber(pointer2) &&\n-\t\tItemPointerGetOffsetNumber(pointer1) ==\n-\t\tItemPointerGetOffsetNumber(pointer2))\n+\t\tItemPointerGetOffsetNumberNoCheck(pointer1) ==\n+\t\tItemPointerGetOffsetNumberNoCheck(pointer2))\n \t\treturn true;\n \telse\n \t\treturn false;\n-- \n2.33.0\n\n\n\n",
"msg_date": "Wed, 27 Apr 2022 20:04:00 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH v1] remove redundant check of item pointer"
},
{
"msg_contents": "On Wed, Apr 27, 2022 at 08:04:00PM +0800, Junwang Zhao wrote:\n> In function ItemPointerEquals, the ItemPointerGetBlockNumber\n> already checked the ItemPointer if valid, there is no need\n> to check it again in ItemPointerGetOffset, so use\n> ItemPointerGetOffsetNumberNoCheck instead.\n> \n> Signed-off-by: Junwang Zhao <zhjwpku@gmail.com>\n> ---\n> src/backend/storage/page/itemptr.c | 4 ++--\n> 1 file changed, 2 insertions(+), 2 deletions(-)\n> \n> diff --git a/src/backend/storage/page/itemptr.c b/src/backend/storage/page/itemptr.c\n> index 9011337aa8..61ad727b1d 100644\n> --- a/src/backend/storage/page/itemptr.c\n> +++ b/src/backend/storage/page/itemptr.c\n> @@ -37,8 +37,8 @@ ItemPointerEquals(ItemPointer pointer1, ItemPointer pointer2)\n> \n> \tif (ItemPointerGetBlockNumber(pointer1) ==\n> \t\tItemPointerGetBlockNumber(pointer2) &&\n> -\t\tItemPointerGetOffsetNumber(pointer1) ==\n> -\t\tItemPointerGetOffsetNumber(pointer2))\n> +\t\tItemPointerGetOffsetNumberNoCheck(pointer1) ==\n> +\t\tItemPointerGetOffsetNumberNoCheck(pointer2))\n> \t\treturn true;\n> \telse\n> \t\treturn false;\n\nLooking at the code:\n\n\t/*\n\t * ItemPointerGetOffsetNumberNoCheck\n\t * Returns the offset number of a disk item pointer.\n\t */\n\tstatic inline OffsetNumber\n\tItemPointerGetOffsetNumberNoCheck(const ItemPointerData *pointer)\n\t{\n\t return pointer->ip_posid;\n\t}\n\t\n\t/*\n\t * ItemPointerGetOffsetNumber\n\t * As above, but verifies that the item pointer looks valid.\n\t */\n\tstatic inline OffsetNumber\n\tItemPointerGetOffsetNumber(const ItemPointerData *pointer)\n\t{\n\t Assert(ItemPointerIsValid(pointer));\n\t return ItemPointerGetOffsetNumberNoCheck(pointer);\n\t}\n\nfor non-Assert builds, ItemPointerGetOffsetNumberNoCheck() and\nItemPointerGetOffsetNumber() are the same, so I don't see the point to\nmaking this change. Frankly, I don't know why we even have two\nfunctions for this. 
I am guessing ItemPointerGetOffsetNumberNoCheck is\nfor cases where you have an Assert build and do not want the check.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 14 Jul 2022 18:31:31 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] remove redundant check of item pointer"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 3:31 PM Bruce Momjian <bruce@momjian.us> wrote:\n> On Wed, Apr 27, 2022 at 08:04:00PM +0800, Junwang Zhao wrote:\n> for non-Assert builds, ItemPointerGetOffsetNumberNoCheck() and\n> ItemPointerGetOffsetNumber() are the same, so I don't see the point to\n> making this change. Frankly, I don't know why we even have two\n> functions for this. I am guessing ItemPointerGetOffsetNumberNoCheck is\n> for cases where you have an Assert build and do not want the check.\n\nSometimes we use ItemPointerData for things that aren't actually TIDs.\nFor example, both GIN and B-Tree type-pun the ItemPointerData field\nfrom the Indextuple struct. Plus we do something like that with\nUPDATEs that affect a partitioning key in a partitioned table.\n\nThe proposal doesn't seem like an improvement. Technically the\nassertion cannot possibly fail here because the earlier assertion\nwould always fail instead, so strictly speaking it is redundant -- at\nleast right now. That is true. But it seems much more important to be\nconsistent about which variant to use. Especially because there is\nobviously no overhead in builds without assertions enabled.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Jul 2022 15:51:07 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] remove redundant check of item pointer"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> The proposal doesn't seem like an improvement. Technically the\n> assertion cannot possibly fail here because the earlier assertion\n> would always fail instead, so strictly speaking it is redundant -- at\n> least right now. That is true. But it seems much more important to be\n> consistent about which variant to use. Especially because there is\n> obviously no overhead in builds without assertions enabled.\n\nEven in an assert-enabled build, wouldn't you expect the compiler to\noptimize away the second assertion as unreachable code?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Jul 2022 18:59:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] remove redundant check of item pointer"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 3:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Even in an assert-enabled build, wouldn't you expect the compiler to\n> optimize away the second assertion as unreachable code?\n\nI think that it probably would, even at -O0 (GCC doesn't really allow\nyou to opt out of all optimizations). I did think of that myself, but\nit seemed rather beside the point.\n\nThere have been individual cases where individual assertions were\ndeemed a bit too heavyweight. But those have been few and far between.\nI myself tend to use *lots* of technically-redundant assertions like\nthis for preconditions and postconditions. At worst they're code\ncomments that are all but guaranteed to stay current.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Jul 2022 16:10:20 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] remove redundant check of item pointer"
},
{
"msg_contents": "On Fri, 15 Jul 2022 at 10:31, Bruce Momjian <bruce@momjian.us> wrote:\n> for non-Assert builds, ItemPointerGetOffsetNumberNoCheck() and\n> ItemPointerGetOffsetNumber() are the same, so I don't see the point to\n> making this change. Frankly, I don't know why we even have two\n> functions for this. I am guessing ItemPointerGetOffsetNumberNoCheck is\n> for cases where you have an Assert build and do not want the check.\n\nWe'll want to use ItemPointerGetOffsetNumberNoCheck() where the TID\ncomes from sources we can't verify. e.g user input... '(2,0)'::tid.\nWe want to use ItemPointerGetOffsetNumber() for item pointers that\ncome from locations that we want to ensure are correct. e.g TIDs\nwe're storing in an index.\n\nDavid\n\n\n",
"msg_date": "Fri, 15 Jul 2022 14:13:01 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] remove redundant check of item pointer"
}
] |
[
{
"msg_contents": "fork: <20220325000933.vgazz7pjk2ytj65d@alap3.anarazel.de>\n\nOn Thu, Mar 24, 2022 at 05:09:33PM -0700, Andres Freund wrote:\n> On 2022-03-24 18:51:30 -0400, Andrew Dunstan wrote:\n> > I wonder if we should add these compile flags to the cfbot's setup?\n> \n> Yes, I think we should. There's a bit of discussion of that in and below\n> https://postgr.es/m/20220213051937.GO31460%40telsasoft.com - that veered a bit\n> of course, so I haven't done anything about it yet. Perhaps one build\n> COPY_PARSE_PLAN_TREES and RAW_EXPRESSION_COVERAGE_TEST another\n> WRITE_READ_PARSE_PLAN_TREES? We should add the slower to the macos build,\n> that's plenty fast and I'm intending to slow the linux test by using ubsan,\n> which works better on linux.\n\nWhy would you put them on different tasks ?\nto avoid slowing down one task too much ?\nThat doesn't seem to be an issue, at least for those three defines.\n\nWhat about adding RELCACHE_FORCE_RELEASE, too ?\nEven with that, macos is only ~1min slower.\n\nhttps://cirrus-ci.com/task/5456727205216256\n\ncommit 53480b8db63b5cd2476142e28ed3f9fe8480f9f3\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Thu Apr 14 06:27:07 2022 -0500\n\n cirrus/macos: enable various runtime checks\n \n cirrus CI can take a while to be schedule on macos, but the instance always has\n many cores, so this is a good platform to enable options which will slow it\n down.\n \n See:\n https://www.postgresql.org/message-id/20211217193159.pwrelhiyx7kevgsn@alap3.anarazel.de\n https://www.postgresql.org/message-id/20211213211223.vkgg3wwiss2tragj%40alap3.anarazel.de\n https://www.postgresql.org/message-id/CAH2-WzmevBhKNEtqX3N-Tkb0gVBHH62C0KfeTxXzqYES_PiFiA%40mail.gmail.com\n https://www.postgresql.org/message-id/20220325000933.vgazz7pjk2ytj65d@alap3.anarazel.de\n \n ci-os-only: macos\n\ndiff --git a/.cirrus.yml b/.cirrus.yml\nindex e0264929c74..4a6511115fc 100644\n--- a/.cirrus.yml\n+++ b/.cirrus.yml\n@@ -337,6 +337,7 @@ task:\n CLANG=\"ccache 
${brewpath}/llvm/bin/ccache\" \\\n CFLAGS=\"-Og -ggdb\" \\\n CXXFLAGS=\"-Og -ggdb\" \\\n+ CPPFLAGS=\"-DRELCACHE_FORCE_RELEASE -DCOPY_PARSE_PLAN_TREES -DWRITE_READ_PARSE_PLAN_TREES -DRAW_EXPRESSION_COVERAGE_TEST\" \\\n \\\n LLVM_CONFIG=${brewpath}/llvm/bin/llvm-config \\\n PYTHON=python3\n\n\n",
"msg_date": "Wed, 27 Apr 2022 09:53:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "cirrus: run macos with COPY_PARSE_PLAN_TREES etc"
}
] |
[
{
"msg_contents": "In function ItemPointerEquals, the ItemPointerGetBlockNumber\nalready checked the ItemPointer if valid, there is no need\nto check it again in ItemPointerGetOffset, so use\nItemPointerGetOffsetNumberNoCheck instead.\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Wed, 27 Apr 2022 23:11:26 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "remove redundant check of item pointer"
},
{
"msg_contents": "Junwang Zhao <zhjwpku@gmail.com> writes:\n> In function ItemPointerEquals, the ItemPointerGetBlockNumber\n> already checked the ItemPointer if valid, there is no need\n> to check it again in ItemPointerGetOffset, so use\n> ItemPointerGetOffsetNumberNoCheck instead.\n\nI do not think this change is worth making. The point of\nItemPointerGetOffsetNumberNoCheck is not to save some cycles,\nit's to be able to fetch the offset field in cases where it might\nvalidly be zero. The assertion will be compiled out anyway in\nproduction builds --- and even in assert-enabled builds, I'd kind\nof expect the compiler to optimize away the duplicated tests.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Apr 2022 11:34:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: remove redundant check of item pointer"
},
{
"msg_contents": "got it, thanks for the explanation.\n\nOn Wed, Apr 27, 2022 at 11:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Junwang Zhao <zhjwpku@gmail.com> writes:\n> > In function ItemPointerEquals, the ItemPointerGetBlockNumber\n> > already checked the ItemPointer if valid, there is no need\n> > to check it again in ItemPointerGetOffset, so use\n> > ItemPointerGetOffsetNumberNoCheck instead.\n>\n> I do not think this change is worth making. The point of\n> ItemPointerGetOffsetNumberNoCheck is not to save some cycles,\n> it's to be able to fetch the offset field in cases where it might\n> validly be zero. The assertion will be compiled out anyway in\n> production builds --- and even in assert-enabled builds, I'd kind\n> of expect the compiler to optimize away the duplicated tests.\n>\n> regards, tom lane\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Thu, 28 Apr 2022 09:54:10 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: remove redundant check of item pointer"
}
] |
[
{
"msg_contents": "MULTI-MASTER LOGICAL REPLICATION\n\n1.0 BACKGROUND\n\nLet’s assume that a user wishes to set up a multi-master environment\nso that a set of PostgreSQL instances (nodes) use logical replication\nto share tables with every other node in the set.\n\nWe define this as a multi-master logical replication (MMLR) node-set.\n\n<please refer to the attached node-set diagram>\n\n1.1 ADVANTAGES OF MMLR\n\n- Increases write scalability (e.g., all nodes can write arbitrary data).\n- Allows load balancing\n- Allows rolling updates of nodes (e.g., logical replication works\nbetween different major versions of PostgreSQL).\n- Improves the availability of the system (e.g., no single point of failure)\n- Improves performance (e.g., lower latencies for geographically local nodes)\n\n2.0 MMLR AND POSTGRESQL\n\nIt is already possible to configure a kind of MMLR set in PostgreSQL\n15 using PUB/SUB, but it is very restrictive because it can only work\nwhen no two nodes operate on the same table. This is because when two\nnodes try to share the same table then there becomes a circular\nrecursive problem where Node1 replicates data to Node2 which is then\nreplicated back to Node1 and so on.\n\nTo prevent the circular recursive problem Vignesh is developing a\npatch [1] that introduces new SUBSCRIPTION options \"local_only\" (for\npublishing only data originating at the publisher node) and\n\"copy_data=force\". Using this patch, we have created a script [2]\ndemonstrating how to set up all the above multi-node examples. An\noverview of the necessary steps is given in the next section.\n\n2.1 STEPS – Adding a new node N to an existing node-set\n\nstep 1. Prerequisites – Apply Vignesh’s patch [1]. All nodes in the\nset must be visible to each other by a known CONNECTION. All shared\ntables must already be defined on all nodes.\n\nstep 2. On node N do CREATE PUBLICATION pub_N FOR ALL TABLES\n\nstep 3. 
All other nodes then CREATE SUBSCRIPTION to PUBLICATION pub_N\nwith \"local_only=on, copy_data=on\" (this will replicate initial data\nfrom the node N tables to every other node).\n\nstep 4. On node N, temporarily ALTER PUBLICATION pub_N to prevent\nreplication of 'truncate', then TRUNCATE all tables of node N, then\nre-allow replication of 'truncate'.\n\nstep 5. On node N do CREATE SUBSCRIPTION to the publications of all\nother nodes in the set\n5a. Specify \"local_only=on, copy_data=force\" for exactly one of the\nsubscriptions (this will make the node N tables now have the same\ndata as the other nodes)\n5b. Specify \"local_only=on, copy_data=off\" for all other subscriptions.\n\nstep 6. Result - Now changes to any table on any node should be\nreplicated to every other node in the set.\n\nNote: Steps 4 and 5 need to be done within the same transaction to\navoid loss of data in case of some command failure. (Because we can't\nperform create subscription in a transaction, we need to create the\nsubscription in a disabled mode first and then enable it in the\ntransaction).\n\n2.2 DIFFICULTIES\n\nNotice that it becomes increasingly complex to configure MMLR manually\nas the number of nodes in the set increases. 
There are also some\ndifficulties such as\n- dealing with initial table data\n- coordinating the timing to avoid concurrent updates\n- getting the SUBSCRIPTION options for copy_data exactly right.\n\n3.0 PROPOSAL\n\nTo make the MMLR setup simpler, we propose to create a new API that\nwill hide all the step details and remove the burden on the user to\nget it right without mistakes.\n\n3.1 MOTIVATION\n- MMLR (sharing the same tables) is not currently possible\n- Vignesh's patch [1] makes MMLR possible, but the manual setup is\nstill quite difficult\n- An MMLR implementation can solve the timing problems (e.g., using\nDatabase Locking)\n\n3.2 API\n\nPreferably the API would be implemented as new SQL functions in\nPostgreSQL core, however, implementation using a contrib module or\nsome new SQL syntax may also be possible.\n\nSQL functions will be like below:\n- pg_mmlr_set_create = create a new set, and give it a name\n- pg_mmlr_node_attach = attach the current node to a specified set\n- pg_mmlr_node_detach = detach a specified node from a specified set\n- pg_mmlr_set_delete = delete a specified set\n\nFor example, internally the pg_mmlr_node_attach API function would\nexecute the equivalent of all the CREATE PUBLICATION, CREATE\nSUBSCRIPTION, and TRUNCATE steps described above.\n\nNotice this proposal has some external API similarities with the BDR\nextension [3] (which also provides multi-master logical replication),\nalthough we plan to implement it entirely using PostgreSQL’s PUB/SUB.\n\n4.0 ACKNOWLEDGEMENTS\n\nThe following people have contributed to this proposal – Hayato\nKuroda, Vignesh C, Peter Smith, Amit Kapila.\n\n5.0 REFERENCES\n\n[1] https://www.postgresql.org/message-id/flat/CALDaNm0gwjY_4HFxvvty01BOT01q_fJLKQ3pWP9%3D9orqubhjcQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAHut%2BPvY2P%3DUL-X6maMA5QxFKdcdciRRCKDH3j%3D_hO8u2OyRYg%40mail.gmail.com\n[3] https://www.enterprisedb.com/docs/bdr/latest/\n\n[END]\n\n~~~\n\nOne of my colleagues 
will post more detailed information later.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 28 Apr 2022 09:49:56 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Multi-Master Logical Replication"
},
{
"msg_contents": "On Thu, 2022-04-28 at 09:49 +1000, Peter Smith wrote:\n> To prevent the circular recursive problem Vignesh is developing a\n> patch [1] that introduces new SUBSCRIPTION options \"local_only\" (for\n> publishing only data originating at the publisher node) and\n> \"copy_data=force\". Using this patch, we have created a script [2]\n> demonstrating how to set up all the above multi-node examples. An\n> overview of the necessary steps is given in the next section.\n\nI am missing a discussion how replication conflicts are handled to\nprevent replication from breaking or the databases from drifting apart.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Thu, 28 Apr 2022 08:48:36 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "Dear Laurenz,\r\n\r\nThank you for your interest in our works!\r\n\r\n> I am missing a discussion how replication conflicts are handled to\r\n> prevent replication from breaking or the databases from drifting apart.\r\n\r\nActually we don't have plans for developing the feature that avoids conflict.\r\nWe think that it should be done as core PUB/SUB feature, and\r\nthis module will just use that.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 28 Apr 2022 08:34:23 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Multi-Master Logical Replication"
},
{
"msg_contents": "В Чт, 28/04/2022 в 09:49 +1000, Peter Smith пишет:\n\n> 1.1 ADVANTAGES OF MMLR\n> \n> - Increases write scalability (e.g., all nodes can write arbitrary data).\n\nI've never heard how transactional-aware multimaster increases\nwrite scalability. More over, usually even non-transactional\nmultimaster doesn't increase write scalability. At the best it\ndoesn't decrease.\n\nThat is because all hosts have to write all changes anyway. But\nside cost increases due to increased network interchange and\ninterlocking (for transaction-aware MM) and increased latency.\n\nВ Чт, 28/04/2022 в 08:34 +0000, kuroda.hayato@fujitsu.com пишет:\n> Dear Laurenz,\n> \n> Thank you for your interest in our works!\n> \n> > I am missing a discussion how replication conflicts are handled to\n> > prevent replication from breaking\n> \n> Actually we don't have plans for developing the feature that avoids conflict.\n> We think that it should be done as core PUB/SUB feature, and\n> this module will just use that.\n\nIf you really want to have some proper isolation levels (\nRead Committed? Repeatable Read?) and/or want to have\nsame data on each \"master\", there is no easy way. If you\nthink it will be \"easy\", you are already wrong.\n\nOur company has MultiMaster which is built on top of\nlogical replication. It is even partially open source\n( https://github.com/postgrespro/mmts ) , although some\ncore patches that have to be done for are not up to\ndate.\n\nAnd it is second iteration of MM. First iteration were\nnot \"simple\" or \"easy\" already. But even that version had\nthe hidden bug: rare but accumulating data difference\nbetween nodes. Attempt to fix this bug led to almost\nfull rewrite of multi-master.\n\n(Disclaimer: I had no relation to both MM versions,\nI just work in the same firm).\n\n\nregards\n\n---------\n\nYura Sokolov\n\n\n\n",
"msg_date": "Thu, 28 Apr 2022 13:54:09 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Thu, Apr 28, 2022 at 4:24 PM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n>\n> В Чт, 28/04/2022 в 09:49 +1000, Peter Smith пишет:\n>\n> > 1.1 ADVANTAGES OF MMLR\n> >\n> > - Increases write scalability (e.g., all nodes can write arbitrary data).\n>\n> I've never heard how transactional-aware multimaster increases\n> write scalability. More over, usually even non-transactional\n> multimaster doesn't increase write scalability. At the best it\n> doesn't decrease.\n>\n> That is because all hosts have to write all changes anyway. But\n> side cost increases due to increased network interchange and\n> interlocking (for transaction-aware MM) and increased latency.\n\nI agree it won't increase in all cases, but it will be better in a few\ncases when the user works on different geographical regions operating\non independent schemas in asynchronous mode. Since the write node is\ncloser to the geographical zone, the performance will be better in a\nfew cases.\n\n> В Чт, 28/04/2022 в 08:34 +0000, kuroda.hayato@fujitsu.com пишет:\n> > Dear Laurenz,\n> >\n> > Thank you for your interest in our works!\n> >\n> > > I am missing a discussion how replication conflicts are handled to\n> > > prevent replication from breaking\n> >\n> > Actually we don't have plans for developing the feature that avoids conflict.\n> > We think that it should be done as core PUB/SUB feature, and\n> > this module will just use that.\n>\n> If you really want to have some proper isolation levels (\n> Read Committed? Repeatable Read?) and/or want to have\n> same data on each \"master\", there is no easy way. If you\n> think it will be \"easy\", you are already wrong.\n\nThe synchronous_commit and synchronous_standby_names configuration\nparameters will help in getting the same data across the nodes. Can\nyou give an example for the scenario where it will be difficult?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 28 Apr 2022 17:37:45 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "В Чт, 28/04/2022 в 17:37 +0530, vignesh C пишет:\n> On Thu, Apr 28, 2022 at 4:24 PM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> > В Чт, 28/04/2022 в 09:49 +1000, Peter Smith пишет:\n> > \n> > > 1.1 ADVANTAGES OF MMLR\n> > > \n> > > - Increases write scalability (e.g., all nodes can write arbitrary data).\n> > \n> > I've never heard how transactional-aware multimaster increases\n> > write scalability. More over, usually even non-transactional\n> > multimaster doesn't increase write scalability. At the best it\n> > doesn't decrease.\n> > \n> > That is because all hosts have to write all changes anyway. But\n> > side cost increases due to increased network interchange and\n> > interlocking (for transaction-aware MM) and increased latency.\n> \n> I agree it won't increase in all cases, but it will be better in a few\n> cases when the user works on different geographical regions operating\n> on independent schemas in asynchronous mode. Since the write node is\n> closer to the geographical zone, the performance will be better in a\n> few cases.\n\n From EnterpriseDB BDB page [1]:\n\n> Adding more master nodes to a BDR Group does not result in\n> significant write throughput increase when most tables are\n> replicated because BDR has to replay all the writes on all nodes.\n> Because BDR writes are in general more effective than writes coming\n> from Postgres clients via SQL, some performance increase can be\n> achieved. 
Read throughput generally scales linearly with the number\n> of nodes.\n\nAnd I'm sure EnterpriseDB does their best.\n\n> > В Чт, 28/04/2022 в 08:34 +0000, kuroda.hayato@fujitsu.com пишет:\n> > > Dear Laurenz,\n> > > \n> > > Thank you for your interest in our works!\n> > > \n> > > > I am missing a discussion how replication conflicts are handled to\n> > > > prevent replication from breaking\n> > > \n> > > Actually we don't have plans for developing the feature that avoids conflict.\n> > > We think that it should be done as core PUB/SUB feature, and\n> > > this module will just use that.\n> > \n> > If you really want to have some proper isolation levels (\n> > Read Committed? Repeatable Read?) and/or want to have\n> > same data on each \"master\", there is no easy way. If you\n> > think it will be \"easy\", you are already wrong.\n> \n> The synchronous_commit and synchronous_standby_names configuration\n> parameters will help in getting the same data across the nodes. Can\n> you give an example for the scenario where it will be difficult?\n\nSo, synchronous or asynchronous?\nSynchronous commit on every master, on every alive master, or on a quorum\nof masters?\n\nAnd it is not about synchronicity. It is about determinism in\nconflict handling.\n\nIf you have fully deterministic conflict resolution that works\nexactly the same way on each host, then it is possible to have the same\ndata on each host. (But it will not be transactional.) And it seems EDB BDR achieved this.\n\nOr if you have fully and correctly implemented one of the distributed\ntransaction protocols.\n\n[1] https://www.enterprisedb.com/docs/bdr/latest/overview/#characterising-bdr-performance\n\nregards\n\n------\n\nYura Sokolov\n\n\n\n",
"msg_date": "Fri, 29 Apr 2022 07:16:44 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Fri, Apr 29, 2022 at 2:16 PM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n>\n> В Чт, 28/04/2022 в 17:37 +0530, vignesh C пишет:\n> > On Thu, Apr 28, 2022 at 4:24 PM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> > > В Чт, 28/04/2022 в 09:49 +1000, Peter Smith пишет:\n> > >\n> > > > 1.1 ADVANTAGES OF MMLR\n> > > >\n> > > > - Increases write scalability (e.g., all nodes can write arbitrary data).\n> > >\n> > > I've never heard how transactional-aware multimaster increases\n> > > write scalability. More over, usually even non-transactional\n> > > multimaster doesn't increase write scalability. At the best it\n> > > doesn't decrease.\n> > >\n> > > That is because all hosts have to write all changes anyway. But\n> > > side cost increases due to increased network interchange and\n> > > interlocking (for transaction-aware MM) and increased latency.\n> >\n> > I agree it won't increase in all cases, but it will be better in a few\n> > cases when the user works on different geographical regions operating\n> > on independent schemas in asynchronous mode. Since the write node is\n> > closer to the geographical zone, the performance will be better in a\n> > few cases.\n>\n> From EnterpriseDB BDB page [1]:\n>\n> > Adding more master nodes to a BDR Group does not result in\n> > significant write throughput increase when most tables are\n> > replicated because BDR has to replay all the writes on all nodes.\n> > Because BDR writes are in general more effective than writes coming\n> > from Postgres clients via SQL, some performance increase can be\n> > achieved. 
Read throughput generally scales linearly with the number\n> > of nodes.\n>\n> And I'm sure EnterpriseDB does the best.\n>\n> > > В Чт, 28/04/2022 в 08:34 +0000, kuroda.hayato@fujitsu.com пишет:\n> > > > Dear Laurenz,\n> > > >\n> > > > Thank you for your interest in our works!\n> > > >\n> > > > > I am missing a discussion how replication conflicts are handled to\n> > > > > prevent replication from breaking\n> > > >\n> > > > Actually we don't have plans for developing the feature that avoids conflict.\n> > > > We think that it should be done as core PUB/SUB feature, and\n> > > > this module will just use that.\n> > >\n> > > If you really want to have some proper isolation levels (\n> > > Read Committed? Repeatable Read?) and/or want to have\n> > > same data on each \"master\", there is no easy way. If you\n> > > think it will be \"easy\", you are already wrong.\n> >\n> > The synchronous_commit and synchronous_standby_names configuration\n> > parameters will help in getting the same data across the nodes. Can\n> > you give an example for the scenario where it will be difficult?\n>\n> So, synchronous or asynchronous?\n> Synchronous commit on every master, every alive master or on quorum\n> of masters?\n>\n> And it is not about synchronicity. It is about determinism at\n> conflicts.\n>\n> If you have fully determenistic conflict resolution that works\n> exactly same way on each host, then it is possible to have same\n> data on each host. 
(But it will not be transactional.)And it seems EDB BDB achieved this.\n>\n> Or if you have fully and correctly implemented one of distributed\n> transactions protocols.\n>\n> [1] https://www.enterprisedb.com/docs/bdr/latest/overview/#characterising-bdr-performance\n>\n> regards\n>\n> ------\n>\n> Yura Sokolov\n\nThanks for your feedback.\n\nThis MMLR proposal was mostly just to create an interface making it\neasier to use PostgreSQL core logical replication CREATE\nPUBLICATION/SUBSCRIPTION for table sharing among a set of nodes.\nOtherwise, this is difficult for a user to do manually. (e.g.\ndifficulties as mentioned in section 2.2 of the original post [1] -\ndealing with initial table data, coordinating the timing/locking to\navoid concurrent updates, getting the SUBSCRIPTION options for\ncopy_data exactly right etc)\n\nAt this time we have no provision for HA, nor for transaction\nconsistency awareness, conflict resolutions, node failure detections,\nDDL replication etc. Some of the features like DDL replication are\ncurrently being implemented [2], so when committed it will become\navailable in the core, and can then be integrated into this module.\n\nOnce the base feature of the current MMLR proposal is done, perhaps it\ncan be extended in subsequent versions.\n\nProbably our calling this “Multi-Master” has been\nmisleading/confusing, because that term implies much more to other\nreaders. We really only intended it to mean the ability to set up\nlogical replication across a set of nodes. Of course, we can rename\nthe proposal (and API) to something different if there are better\nsuggestions.\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPuwRAoWY9pz%3DEubps3ooQCOBFiYPU9Yi%3DVB-U%2ByORU7OA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/45d0d97c-3322-4054-b94f-3c08774bbd90%40www.fastmail.com#db6e810fc93f17b0a5585bac25fb3d4b\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 29 Apr 2022 19:05:11 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Fri, Apr 29, 2022 at 2:35 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Apr 29, 2022 at 2:16 PM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> >\n> > В Чт, 28/04/2022 в 17:37 +0530, vignesh C пишет:\n> > > On Thu, Apr 28, 2022 at 4:24 PM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> > > > В Чт, 28/04/2022 в 09:49 +1000, Peter Smith пишет:\n> > > >\n> > > > > 1.1 ADVANTAGES OF MMLR\n> > > > >\n> > > > > - Increases write scalability (e.g., all nodes can write arbitrary data).\n> > > >\n> > > > I've never heard how transactional-aware multimaster increases\n> > > > write scalability. More over, usually even non-transactional\n> > > > multimaster doesn't increase write scalability. At the best it\n> > > > doesn't decrease.\n> > > >\n> > > > That is because all hosts have to write all changes anyway. But\n> > > > side cost increases due to increased network interchange and\n> > > > interlocking (for transaction-aware MM) and increased latency.\n> > >\n> > > I agree it won't increase in all cases, but it will be better in a few\n> > > cases when the user works on different geographical regions operating\n> > > on independent schemas in asynchronous mode. Since the write node is\n> > > closer to the geographical zone, the performance will be better in a\n> > > few cases.\n> >\n> > From EnterpriseDB BDB page [1]:\n> >\n> > > Adding more master nodes to a BDR Group does not result in\n> > > significant write throughput increase when most tables are\n> > > replicated because BDR has to replay all the writes on all nodes.\n> > > Because BDR writes are in general more effective than writes coming\n> > > from Postgres clients via SQL, some performance increase can be\n> > > achieved. 
Read throughput generally scales linearly with the number\n> > > of nodes.\n> >\n> > And I'm sure EnterpriseDB does the best.\n> >\n> > > > В Чт, 28/04/2022 в 08:34 +0000, kuroda.hayato@fujitsu.com пишет:\n> > > > > Dear Laurenz,\n> > > > >\n> > > > > Thank you for your interest in our works!\n> > > > >\n> > > > > > I am missing a discussion how replication conflicts are handled to\n> > > > > > prevent replication from breaking\n> > > > >\n> > > > > Actually we don't have plans for developing the feature that avoids conflict.\n> > > > > We think that it should be done as core PUB/SUB feature, and\n> > > > > this module will just use that.\n> > > >\n> > > > If you really want to have some proper isolation levels (\n> > > > Read Committed? Repeatable Read?) and/or want to have\n> > > > same data on each \"master\", there is no easy way. If you\n> > > > think it will be \"easy\", you are already wrong.\n> > >\n> > > The synchronous_commit and synchronous_standby_names configuration\n> > > parameters will help in getting the same data across the nodes. Can\n> > > you give an example for the scenario where it will be difficult?\n> >\n> > So, synchronous or asynchronous?\n> > Synchronous commit on every master, every alive master or on quorum\n> > of masters?\n> >\n> > And it is not about synchronicity. It is about determinism at\n> > conflicts.\n> >\n> > If you have fully determenistic conflict resolution that works\n> > exactly same way on each host, then it is possible to have same\n> > data on each host. 
(But it will not be transactional.)And it seems EDB BDB achieved this.\n> >\n> > Or if you have fully and correctly implemented one of distributed\n> > transactions protocols.\n> >\n> > [1] https://www.enterprisedb.com/docs/bdr/latest/overview/#characterising-bdr-performance\n> >\n> > regards\n> >\n> > ------\n> >\n> > Yura Sokolov\n>\n> Thanks for your feedback.\n>\n> This MMLR proposal was mostly just to create an interface making it\n> easier to use PostgreSQL core logical replication CREATE\n> PUBLICATION/SUBSCRIPTION for table sharing among a set of nodes.\n> Otherwise, this is difficult for a user to do manually. (e.g.\n> difficulties as mentioned in section 2.2 of the original post [1] -\n> dealing with initial table data, coordinating the timing/locking to\n> avoid concurrent updates, getting the SUBSCRIPTION options for\n> copy_data exactly right etc)\n\nThe different problems and how to solve each scenario are described in detail in [1].\nIt gets even more complex as more nodes are involved; let's\nconsider the 3-node case:\nTo add a new node node3 to the existing node1 and node2 when data is\nalready present in nodes node1 and node2, the following steps are\nrequired:\nCreate a publication in node3:\nCREATE PUBLICATION pub_node3 FOR ALL TABLES;\n\nCreate a subscription in node1 to subscribe to the changes from node3:\nCREATE SUBSCRIPTION sub_node1_node3 CONNECTION 'dbname=foo host=node3\nuser=repuser' PUBLICATION pub_node3 WITH (copy_data = off, local_only\n= on);\n\nCreate a subscription in node2 to subscribe to the changes from node3:\nCREATE SUBSCRIPTION sub_node2_node3 CONNECTION 'dbname=foo host=node3\nuser=repuser' PUBLICATION pub_node3 WITH (copy_data = off, local_only\n= on);\n\nLock the database at node2 and wait till the walsender sends WAL to node1 (up to\nthe current LSN) to avoid any data loss because of node2's WAL not being\nsent to node1. 
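The "wait till the walsender has sent WAL up to the current LSN" check above amounts to comparing two LSNs, e.g. node2's current WAL position against the position node1 confirms having received. A minimal sketch of just the comparison, assuming the textual "hi/lo" LSN format (where the values would come from, such as pg_stat_replication, is outside this sketch):

```python
def parse_lsn(lsn):
    """Parse a PostgreSQL LSN of the form 'XLOGID/XRECOFF' (e.g. '0/16B3748',
    both halves hexadecimal) into a single 64-bit integer so LSNs compare
    numerically."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def caught_up(current_lsn, received_lsn):
    """True once the peer has received WAL at least up to current_lsn."""
    return parse_lsn(received_lsn) >= parse_lsn(current_lsn)
```
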
This lock needs to be held till the setup is complete.\n\nCreate a subscription in node3 to subscribe to the changes from node1;\nhere copy_data is specified as force so that the existing table data\nis copied during initial sync:\nCREATE SUBSCRIPTION sub_node3_node1\nCONNECTION 'dbname=foo host=node1 user=repuser'\nPUBLICATION pub_node1\n WITH (copy_data = force, local_only = on);\n\nCreate a subscription in node3 to subscribe to the changes from node2:\nCREATE SUBSCRIPTION sub_node3_node2\n CONNECTION 'dbname=foo host=node2 user=repuser'\n PUBLICATION pub_node2\n WITH (copy_data = off, local_only = on);\n\nIf data is present in node3, a few additional steps are required: a)\ncopying node3 data to node1 b) copying node3 data to node2 c) altering the\npublication not to send the truncate operation d) truncating the data in\nnode3 e) altering the publication to again include sending of truncate.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2Bco2cd8a6okgUD_pcFEHcc7mVc0k_RE2%3D6ahyv3WPRMg%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 29 Apr 2022 16:08:18 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Thu, Apr 28, 2022 at 5:20 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> MULTI-MASTER LOGICAL REPLICATION\n>\n> 1.0 BACKGROUND\n>\n> Let’s assume that a user wishes to set up a multi-master environment\n> so that a set of PostgreSQL instances (nodes) use logical replication\n> to share tables with every other node in the set.\n>\n> We define this as a multi-master logical replication (MMLR) node-set.\n>\n> <please refer to the attached node-set diagram>\n>\n> 1.1 ADVANTAGES OF MMLR\n>\n> - Increases write scalability (e.g., all nodes can write arbitrary data).\n> - Allows load balancing\n> - Allows rolling updates of nodes (e.g., logical replication works\n> between different major versions of PostgreSQL).\n> - Improves the availability of the system (e.g., no single point of failure)\n> - Improves performance (e.g., lower latencies for geographically local nodes)\n>\n> 2.0 MMLR AND POSTGRESQL\n>\n> It is already possible to configure a kind of MMLR set in PostgreSQL\n> 15 using PUB/SUB, but it is very restrictive because it can only work\n> when no two nodes operate on the same table. This is because when two\n> nodes try to share the same table then there becomes a circular\n> recursive problem where Node1 replicates data to Node2 which is then\n> replicated back to Node1 and so on.\n>\n> To prevent the circular recursive problem Vignesh is developing a\n> patch [1] that introduces new SUBSCRIPTION options \"local_only\" (for\n> publishing only data originating at the publisher node) and\n> \"copy_data=force\". Using this patch, we have created a script [2]\n> demonstrating how to set up all the above multi-node examples. An\n> overview of the necessary steps is given in the next section.\n>\n> 2.1 STEPS – Adding a new node N to an existing node-set\n>\n> step 1. Prerequisites – Apply Vignesh’s patch [1]. All nodes in the\n> set must be visible to each other by a known CONNECTION. 
All shared\n> tables must already be defined on all nodes.\n>\n> step 2. On node N do CREATE PUBLICATION pub_N FOR ALL TABLES\n>\n> step 3. All other nodes then CREATE SUBSCRIPTION to PUBLICATION pub_N\n> with \"local_only=on, copy_data=on\" (this will replicate initial data\n> from the node N tables to every other node).\n>\n> step 4. On node N, temporarily ALTER PUBLICATION pub_N to prevent\n> replication of 'truncate', then TRUNCATE all tables of node N, then\n> re-allow replication of 'truncate'.\n>\n> step 5. On node N do CREATE SUBSCRIPTION to the publications of all\n> other nodes in the set\n> 5a. Specify \"local_only=on, copy_data=force\" for exactly one of the\n> subscriptions (this will make the node N tables now have the same\n> data as the other nodes)\n> 5b. Specify \"local_only=on, copy_data=off\" for all other subscriptions.\n>\n> step 6. Result - Now changes to any table on any node should be\n> replicated to every other node in the set.\n>\n> Note: Steps 4 and 5 need to be done within the same transaction to\n> avoid loss of data in case of some command failure. (Because we can't\n> perform create subscription in a transaction, we need to create the\n> subscription in a disabled mode first and then enable it in the\n> transaction).\n>\n> 2.2 DIFFICULTIES\n>\n> Notice that it becomes increasingly complex to configure MMLR manually\n> as the number of nodes in the set increases. 
There are also some\n> difficulties such as\n> - dealing with initial table data\n> - coordinating the timing to avoid concurrent updates\n> - getting the SUBSCRIPTION options for copy_data exactly right.\n>\n> 3.0 PROPOSAL\n>\n> To make the MMLR setup simpler, we propose to create a new API that\n> will hide all the step details and remove the burden on the user to\n> get it right without mistakes.\n>\n> 3.1 MOTIVATION\n> - MMLR (sharing the same tables) is not currently possible\n> - Vignesh's patch [1] makes MMLR possible, but the manual setup is\n> still quite difficult\n> - An MMLR implementation can solve the timing problems (e.g., using\n> Database Locking)\n>\n> 3.2 API\n>\n> Preferably the API would be implemented as new SQL functions in\n> PostgreSQL core, however, implementation using a contrib module or\n> some new SQL syntax may also be possible.\n>\n> SQL functions will be like below:\n> - pg_mmlr_set_create = create a new set, and give it a name\n> - pg_mmlr_node_attach = attach the current node to a specified set\n> - pg_mmlr_node_detach = detach a specified node from a specified set\n> - pg_mmlr_set_delete = delete a specified set\n>\n> For example, internally the pg_mmlr_node_attach API function would\n> execute the equivalent of all the CREATE PUBLICATION, CREATE\n> SUBSCRIPTION, and TRUNCATE steps described above.\n>\n> Notice this proposal has some external API similarities with the BDR\n> extension [3] (which also provides multi-master logical replication),\n> although we plan to implement it entirely using PostgreSQL’s PUB/SUB.\n>\n> 4.0 ACKNOWLEDGEMENTS\n>\n> The following people have contributed to this proposal – Hayato\n> Kuroda, Vignesh C, Peter Smith, Amit Kapila.\n>\n> 5.0 REFERENCES\n>\n> [1] https://www.postgresql.org/message-id/flat/CALDaNm0gwjY_4HFxvvty01BOT01q_fJLKQ3pWP9%3D9orqubhjcQ%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/CAHut%2BPvY2P%3DUL-X6maMA5QxFKdcdciRRCKDH3j%3D_hO8u2OyRYg%40mail.gmail.com\n> [3] 
https://www.enterprisedb.com/docs/bdr/latest/\n>\n> [END]\n>\n> ~~~\n>\n> One of my colleagues will post more detailed information later.\n\nMMLR has been renamed to LRG (Logical Replication Group) to avoid confusion.\n\nThe LRG functionality will be implemented as given below:\nThe lrg contrib module provides a set of APIs to allow setting up\nbi-directional logical replication among different nodes. The name lrg\nstands for Logical Replication Group.\nTo use this functionality, shared_preload_libraries must be set to lrg, like:\nshared_preload_libraries = lrg\nA new process \"lrg launcher\" is added, which will be launched when the\nextension is created. This process is responsible for checking whether a user\nhas created a new logical replication group, is attaching a new node to\na logical replication group, is detaching a node, or is dropping a\nlogical replication group, and if so, it launches a new “lrg\nworker” for the corresponding database.\nThe new process \"lrg worker\" is responsible for handling the core\ntasks of the lrg_create, lrg_node_attach, lrg_node_detach and lrg_drop\nfunctionality.\nThe “lrg worker” is required here because there are a lot of steps\ninvolved in this process, like create publication, create subscription,\nalter publication, lock table, etc. 
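Since the worker can be stopped between any two of those steps, the natural shape is a resumable sequence keyed by the per-node status persisted in pg_lrg_nodes (the init -> createpublication -> ready progression described below for lrg_create). A hedged sketch of that resume logic, with stand-in callbacks for the real catalog and DDL calls:

```python
# Statuses recorded in pg_lrg_nodes.status, in the order lrg_create moves
# through them (a sketch of the idea, not the actual lrg implementation).
STATUS_ORDER = ["init", "createpublication", "ready"]

def remaining_steps(stored_status):
    """Statuses the worker still has to reach, given the last status it
    persisted before being (possibly) restarted."""
    return STATUS_ORDER[STATUS_ORDER.index(stored_status) + 1:]

def run_lrg_create(stored_status, do_step, set_status):
    """Drive lrg_create to completion from wherever it left off.
    do_step(status) performs the work that reaching `status` represents
    (e.g. 'createpublication' -> CREATE PUBLICATION on this node);
    set_status(status) persists the new status, so a crash between steps
    loses no progress and completed steps are never repeated."""
    for status in remaining_steps(stored_status):
        do_step(status)
        set_status(status)
```
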
If there is a failure during any\nof these steps, the worker will be restarted and is responsible for\ncontinuing the operation from where it left off to completion.\nThe following new tables were added to maintain the logical\nreplication group related information:\n-- pg_lrg_info table to maintain the logical replication group information.\nCREATE TABLE lrg.pg_lrg_info\n(\n    groupname text PRIMARY KEY, -- name of the logical replication group\n    pubtype text -- type of publication (ALL TABLES, SCHEMA, TABLE);\ncurrently only “ALL TABLES” is supported\n);\n\n-- pg_lrg_nodes table to maintain information about the nodes that are\nmembers of the logical replication group.\nCREATE TABLE lrg.pg_lrg_nodes\n(\n    nodeid text PRIMARY KEY, -- node id (actual node_id format is\nstill not finalized)\n    groupname text REFERENCES pg_lrg_info(groupname), -- name of the\nlogical replication group\n    dbid oid NOT NULL, -- db id\n    status text NOT NULL, -- status of the node\n    nodename text, -- node name\n    localconn text NOT NULL, -- local connection string\n    upstreamconn text -- upstream connection string to connect to\nanother node already in the logical replication group\n);\n\n-- pg_lrg_pub table to maintain the publications that were created\nfor this node.\nCREATE TABLE lrg.pg_lrg_pub\n(\n    groupname text REFERENCES pg_lrg_info(groupname), -- name of the\nlogical replication group\n    pubid oid NOT NULL -- oid of the publication\n);\n\n-- pg_lrg_sub table to maintain the subscriptions that were created\nfor this node.\nCREATE TABLE lrg.pg_lrg_sub\n(\n    groupname text REFERENCES pg_lrg_info(groupname), -- name of the\nlogical replication group\n    subid oid NOT NULL -- oid of the subscription\n);\n\nThe following functions were added to support the various logical\nreplication group functionalities:\nlrg_create(group_name text, pub_type text, local_connection_string\ntext, node_name text)\nlrg_node_attach(group_name text, local_connection_string text,\nupstream_connection_string text, node_name 
text)\nlrg_node_detach(group_name text, node_name text)\nlrg_drop(group_name text)\n-----------------------------------------------------------------------------------------------------------------------------------\n\nlrg_create – This function creates a logical replication group as\nspecified in group_name.\nexample:\npostgres=# SELECT lrg.lrg_create('test', 'FOR ALL TABLES',\n'user=postgres port=5432', 'testnode1');\n\nThis function adds a logical replication group “test” with pubtype as\n“FOR ALL TABLES” to pg_lrg_info like given below:\npostgres=# select * from lrg. pg_lrg_info;\ngroupname | pubtype\n----------+------------------\n test | FOR ALL TABLES\n(1 row)\n\nIt adds node information which includes the node id, database id,\nstatus, node name, connection string and upstream connection string to\npg_lrg_nodes like given below:\npostgres=# select * from lrg.pg_lrg_nodes ;\n nodeid | groupname |\ndbid | status | nodename | localconn | upstreamconn\n-------------------------------------------------------------+------+--------+-----------+-----------------------------------------+-----------------------------------------\n 70934590432710321605user=postgres port=5432 | test | 5 | ready |\ntestnode1 | user=postgres port=5432 |\n(1 row)\n\nThe “lrg worker” will perform the following:\n1) It will lock the pg_lrg_info and pg_lrg_nodes tables.\n2) It will create the publication in the current node.\n3) It will change the (pg_lrg_nodes) status from init to createpublication.\n4) It will unlock the pg_lrg_info and pg_lrg_nodes tables\n5) It will change the (pg_lrg_nodes) status from createpublication to ready.\n-----------------------------------------------------------------------------------------------------------------------------------\n\nlrg_node_attach – Attach the specified node to the specified logical\nreplication group.\nexample:\npostgres=# SELECT lrg.lrg_node_attach('test', 'user=postgres\nport=9999', 'user=postgres port=5432', 'testnode2')\nThis 
function adds logical replication group "test" with pubtype as\n"FOR ALL TABLES" to pg_lrg_info in the new node like given below:\npostgres=# select * from pg_lrg_info;\n groupname | pubtype\n----------+------------------\n test | FOR ALL TABLES\n(1 row)\n\nThis is the same group name that was added during lrg_create in the\ncreate node. Now this information will be available in the new node\ntoo. This information will help the user to attach to any of the nodes\npresent in the logical replication group.\nIt adds node information which includes the node id, database id,\nstatus, node name, connection string and upstream connection string of\nthe current node and the other nodes that are part of the logical\nreplication group to pg_lrg_nodes like given below:\npostgres=# select * from lrg.pg_lrg_nodes ;\n nodeid | groupname |\ndbid | status | nodename | localconn | upstreamconn\n-------------------------------------------------------------+------+--------+-----------+-----------------------------------------+-----------------------------------------\n 70937999584732760095user=vignesh dbname=postgres port=9999 | test |\n 5 | ready | testnode2 | user=vignesh dbname=postgres port=9999 |\nuser=vignesh dbname=postgres port=5432\n 70937999523629205245user=vignesh dbname=postgres port=5432 | test |\n 5 | ready | testnode1 | user=vignesh dbname=postgres port=5432 |\n(2 rows)\n\nIt will use the upstream connection to connect to the upstream node\nand get the nodes that are part of the logical replication group.\nNote: The nodeid used here is for illustrative purpose, actual nodeid\nformat is still not finalized.\nFor this API the "lrg worker" will perform the following:\n1) It will lock the pg_lrg_info and pg_lrg_nodes tables.\n2) It will connect to the upstream node specified and get the list of\nother nodes present in the logical replication group.\n3) It will connect to the remaining nodes and lock the database so\nthat no new operations are performed.\n4) It will wait in 
the upstream node till it reaches the latest lsn of\nthe remaining nodes, this is somewhat similar to wait_for_catchup\nfunction in tap tests.\n5) It will change the status (pg_lrg_nodes) from init to waitforlsncatchup.\n6) It will create the publication in the current node.\n7) It will change the status (pg_lrg_nodes) from waitforlsncatchup to\ncreatepublication.\n8) It will create a subscription in all the remaining nodes to get the\ndata from new node.\n9) It will change the status (pg_lrg_nodes) from createpublication to\ncreatesubscription.\n10) It will alter the publication not to replicate truncate operation.\n11) It will truncate the table.\n12) It will alter the publication to include sending the truncate operation.\n13) It will create a subscription in the current node to subscribe the\ndata with copy_data force.\n14) It will create a subscription in the remaining nodes to subscribe\nthe data with copy_data off.\n15) It will unlock the database in all the remaining nodes.\n16) It will unlock the pg_lrg_info and pg_lrg_nodes tables.\n17) It will change the status (pg_lrg_nodes) from createsubscription to ready.\n\nThe status will be useful to display the progress of the operation to\nthe user and help in failure handling to continue the operation from\nthe state it had failed.\n-----------------------------------------------------------------------------------------------------------------------------------\n\nlrg_node_detach - detach a node from the logical replication group.\nexample:\npostgres=# SELECT lrg.lrg_node_detach('test', 'testnode');\nFor this API the "lrg worker" will perform the following:\n1) It will lock the pg_lrg_info and pg_lrg_nodes tables.\n2) It will get the list of other nodes present in the logical replication group.\n3) It will connect to the remaining nodes and lock the database so\nthat no new operations are performed.\n4) It will drop the subscription in all the nodes corresponding to\nthis node of the cluster.\n5) It will drop 
the publication in the current node.\n6) It will remove all the data associated with this logical\nreplication group from pg_lrg_* tables.\n7) It will unlock the pg_lrg_info and pg_lrg_nodes tables.\n-----------------------------------------------------------------------------------------------------------------------------------\n\nlrg_drop - drop a group from logical replication groups.\nexample:\npostgres=# SELECT lrg.lrg_drop('test');\n\nThis function removes the group specified from the logical replication\ngroups. This function must be executed at the member of a given\nlogical replication group.\nFor this API the "lrg worker" will perform the following:\n1) It will lock the pg_lrg_info and pg_lrg_nodes tables.\n2) DROP PUBLICATION of this node that was created for this logical\nreplication group.\n3) Remove all data from the logical replication group system table\nassociated with the logical replication group.\n4) It will unlock the pg_lrg_info and pg_lrg_nodes tables.\n\nIf there are no objections the API can be implemented as SQL functions\nin PostgreSQL core and the new tables can be created as system tables.\n\nThoughts?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 11 May 2022 15:46:04 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Fri, Apr 29, 2022 at 07:05:11PM +1000, Peter Smith wrote:\n> This MMLR proposal was mostly just to create an interface making it\n> easier to use PostgreSQL core logical replication CREATE\n> PUBLICATION/SUBSCRIPTION for table sharing among a set of nodes.\n> Otherwise, this is difficult for a user to do manually. (e.g.\n> difficulties as mentioned in section 2.2 of the original post [1] -\n> dealing with initial table data, coordinating the timing/locking to\n> avoid concurrent updates, getting the SUBSCRIPTION options for\n> copy_data exactly right etc)\n> \n> At this time we have no provision for HA, nor for transaction\n> consistency awareness, conflict resolutions, node failure detections,\n> DDL replication etc. Some of the features like DDL replication are\n> currently being implemented [2], so when committed it will become\n> available in the core, and can then be integrated into this module.\n\nUh, without these features, what workload would this help with? I think\nyou made the mistake of jumping too far into implementation without\nexplaining the problem you are trying to solve. The TODO list has this\nordering:\n\n\thttps://wiki.postgresql.org/wiki/Todo\n\tDesirability -> Design -> Implement -> Test -> Review -> Commit\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 13 May 2022 15:02:45 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Sat, May 14, 2022 at 12:33 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> Uh, without these features, what workload would this help with?\n>\n\nTo allow replication among multiple nodes when some of the nodes may\nhave pre-existing data. This work plans to provide simple APIs to\nachieve that. Now, let me try to explain the difficulties users can\nface with the existing interface. It is simple to set up replication\namong various nodes when they don't have any pre-existing data but\neven in that case if the user operates on the same table at multiple\nnodes, the replication will lead to an infinite loop and won't\nproceed. The example in email [1] demonstrates that and the patch in\nthat thread attempts to solve it. I have mentioned that problem\nbecause this work will need that patch.\n\nNow, let's take a simple case where two nodes have the same table\nwhich has some pre-existing data:\n\nNode-1:\nTable t1 (c1 int) has data\n1, 2, 3, 4\n\nNode-2:\nTable t1 (c1 int) has data\n5, 6, 7, 8\n\nIf we have to set up replication among the above two nodes using\nexisting interfaces, it could be very tricky. Say user performs\noperations like below:\n\nNode-1\n#Publication for t1\nCreate Publication pub1 For Table t1;\n\nNode-2\n#Publication for t1,\nCreate Publication pub1_2 For Table t1;\n\nNode-1:\nCreate Subscription sub1 Connection '<node-2 details>' Publication pub1_2;\n\nNode-2:\nCreate Subscription sub1_2 Connection '<node-1 details>' Publication pub1;\n\nAfter this the data will be something like this:\nNode-1:\n1, 2, 3, 4, 5, 6, 7, 8\n\nNode-2:\n1, 2, 3, 4, 5, 6, 7, 8, 5, 6, 7, 8\n\nSo, you can see that data on Node-2 (5, 6, 7, 8) is duplicated. In\ncase, table t1 has a unique key, it will lead to a unique key\nviolation and replication won't proceed. Here, I have assumed that we\nalready have functionality for the patch in email [1], otherwise,\nreplication will be an infinite loop replicating the above data again\nand again. 
Now one way to achieve this could be that we can ask users\nto stop all operations on both nodes before starting replication\nbetween those and take data dumps of tables from each node they want\nto replicate and restore them to other nodes. Then use the above\ncommands to set up replication and allow to start operations on those\nnodes. The other possibility for users could be as below. Assume, we\nhave already created publications as in the above example, and then:\n\nNode-2:\nCreate Subscription sub1_2 Connection '<node-1 details>' Publication pub1;\n\n#Wait for the initial sync of table t1 to finish. Users can ensure\nthat by checking 'srsubstate' in pg_subscription_rel.\n\nNode-1:\nBegin;\n# Disallow truncates to be published and then truncate the table\nAlter Publication pub1 Set (publish = 'insert, update, delete');\nTruncate t1;\nCreate Subscription sub1 Connection '<node-2 details>' Publication pub1_2;\nAlter Publication pub1 Set (publish = 'insert, update, delete, truncate');\nCommit;\n\nThis will become more complicated when more than two nodes are\ninvolved, see the example provided for the three nodes case [2]. Can\nyou think of some other simpler way to achieve the same? If not, I\ndon't think the current way is ideal and even users won't prefer that.\nI am not telling that the APIs proposed in this thread is the only or\nbest way to achieve the desired purpose but I think we should do\nsomething to allow users to easily set up replication among multiple\nnodes.\n\n[1] - https://www.postgresql.org/message-id/CALDaNm0gwjY_4HFxvvty01BOT01q_fJLKQ3pWP9%3D9orqubhjcQ%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CALDaNm3aD3nZ0HWXA8V435AGMvORyR5-mq2FzqQdKQ8CPomB5Q%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 14 May 2022 12:20:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "Hi hackers,\r\n\r\nI created a small PoC. Please see the attached patches.\r\n\r\nREQUIREMENT\r\n\r\nBefore patching them, patches in [1] must also be applied.\r\n\r\n\r\nDIFFERENCES FROM PREVIOUS DESCRIPTIONS\r\n\r\n* LRG is now implemented as SQL functions, not as a contrib module.\r\n* New tables are added as system catalogs. Therefore, added tables have oid column.\r\n* The node_id is the strcat of system identifier and dbid.\r\n\r\n\r\nHOW TO USE\r\n\r\nIn the document patch, a subsection 'Example' was added for understanding LRG. In short, we can do\r\n\r\n1. lrg_create on one node\r\n2. lrg_node_attach on another node\r\n\r\nAlso attached is a test script that constructs a three-nodes system.\r\n\r\n\r\nLIMITATIONS\r\n\r\nThis feature is under development, so there are many limitations for use case.\r\n\r\n* The function for detaching a node from a group is not implemented.\r\n* The function for removing a group is not implemented.\r\n* LRG does not lock system catalogs and databases. Concurrent operations may cause inconsistent state.\r\n* LRG does not wait until the upstream node reaches the latest lsn of the remaining nodes.\r\n* LRG does not support initial data sync. That is, it can work well only when all nodes do not have initial data.\r\n\r\n\r\n[1]: https://commitfest.postgresql.org/38/3610/\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 19 May 2022 02:20:23 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Multi-Master Logical Replication"
},
{
"msg_contents": "Hi hackers,\r\n\r\n[1] has changed the name of the parameter, so I rebased the patch.\r\nFurthermore I implemented the first version of lrg_node_detach and lrg_drop functions,\r\nand some code comments are fixed.\r\n\r\n0001 and 0002 were copied from the [1], they were attached for the cfbot.\r\nPlease see 0003 and 0004 for LRG related codes.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 23 May 2022 09:16:38 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Multi-Master Logical Replication"
},
{
"msg_contents": "Sorry, I forgot to attach the test script.\r\nFor cfbot I attached again all files. Sorry for the noise.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 23 May 2022 09:30:25 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Multi-Master Logical Replication"
},
{
"msg_contents": "On Sat, May 14, 2022 at 12:20:05PM +0530, Amit Kapila wrote:\n> On Sat, May 14, 2022 at 12:33 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > Uh, without these features, what workload would this help with?\n> >\n> \n> To allow replication among multiple nodes when some of the nodes may\n> have pre-existing data. This work plans to provide simple APIs to\n> achieve that. Now, let me try to explain the difficulties users can\n> face with the existing interface. It is simple to set up replication\n> among various nodes when they don't have any pre-existing data but\n> even in that case if the user operates on the same table at multiple\n> nodes, the replication will lead to an infinite loop and won't\n> proceed. The example in email [1] demonstrates that and the patch in\n> that thread attempts to solve it. I have mentioned that problem\n> because this work will need that patch.\n...\n> This will become more complicated when more than two nodes are\n> involved, see the example provided for the three nodes case [2]. Can\n> you think of some other simpler way to achieve the same? If not, I\n> don't think the current way is ideal and even users won't prefer that.\n> I am not telling that the APIs proposed in this thread is the only or\n> best way to achieve the desired purpose but I think we should do\n> something to allow users to easily set up replication among multiple\n> nodes.\n\nYou still have not answered my question above. \"Without these features,\nwhat workload would this help with?\" You have only explained how the\npatch would fix one of the many larger problems.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 24 May 2022 08:27:44 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Tue, May 24, 2022 at 5:57 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Sat, May 14, 2022 at 12:20:05PM +0530, Amit Kapila wrote:\n> > On Sat, May 14, 2022 at 12:33 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > >\n> > > Uh, without these features, what workload would this help with?\n> > >\n> >\n> > To allow replication among multiple nodes when some of the nodes may\n> > have pre-existing data. This work plans to provide simple APIs to\n> > achieve that. Now, let me try to explain the difficulties users can\n> > face with the existing interface. It is simple to set up replication\n> > among various nodes when they don't have any pre-existing data but\n> > even in that case if the user operates on the same table at multiple\n> > nodes, the replication will lead to an infinite loop and won't\n> > proceed. The example in email [1] demonstrates that and the patch in\n> > that thread attempts to solve it. I have mentioned that problem\n> > because this work will need that patch.\n> ...\n> > This will become more complicated when more than two nodes are\n> > involved, see the example provided for the three nodes case [2]. Can\n> > you think of some other simpler way to achieve the same? If not, I\n> > don't think the current way is ideal and even users won't prefer that.\n> > I am not telling that the APIs proposed in this thread is the only or\n> > best way to achieve the desired purpose but I think we should do\n> > something to allow users to easily set up replication among multiple\n> > nodes.\n>\n> You still have not answered my question above. \"Without these features,\n> what workload would this help with?\" You have only explained how the\n> patch would fix one of the many larger problems.\n>\n\nIt helps with setting up logical replication among two or more nodes\n(data flows both ways) which is important for use cases where\napplications are data-aware. 
For such apps, it will be beneficial to\nalways send and retrieve data to local nodes in a geographically\ndistributed database. Now, for such apps, to get 100% consistent data\namong nodes, one needs to enable synchronous_mode (aka set\nsynchronous_standby_names) but if that hurts performance and the data\nis for analytical purposes then one can use it in asynchronous mode.\nNow, for such cases, if the local node goes down, the other master\nnode can be immediately available to use, sure it may slow down the\noperations for some time till the local node come-up. For such apps,\nlater it will be also easier to perform online upgrades.\n\nWithout this, if the user tries to achieve the same via physical\nreplication by having two local nodes, it can take quite long before\nthe standby can be promoted to master and local reads/writes will be\nmuch costlier.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 25 May 2022 12:13:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Wed, May 25, 2022 at 4:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 24, 2022 at 5:57 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Sat, May 14, 2022 at 12:20:05PM +0530, Amit Kapila wrote:\n> > > On Sat, May 14, 2022 at 12:33 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > >\n> > > > Uh, without these features, what workload would this help with?\n> > > >\n> > >\n> > > To allow replication among multiple nodes when some of the nodes may\n> > > have pre-existing data. This work plans to provide simple APIs to\n> > > achieve that. Now, let me try to explain the difficulties users can\n> > > face with the existing interface. It is simple to set up replication\n> > > among various nodes when they don't have any pre-existing data but\n> > > even in that case if the user operates on the same table at multiple\n> > > nodes, the replication will lead to an infinite loop and won't\n> > > proceed. The example in email [1] demonstrates that and the patch in\n> > > that thread attempts to solve it. I have mentioned that problem\n> > > because this work will need that patch.\n> > ...\n> > > This will become more complicated when more than two nodes are\n> > > involved, see the example provided for the three nodes case [2]. Can\n> > > you think of some other simpler way to achieve the same? If not, I\n> > > don't think the current way is ideal and even users won't prefer that.\n> > > I am not telling that the APIs proposed in this thread is the only or\n> > > best way to achieve the desired purpose but I think we should do\n> > > something to allow users to easily set up replication among multiple\n> > > nodes.\n> >\n> > You still have not answered my question above. 
\"Without these features,\n> > what workload would this help with?\" You have only explained how the\n> > patch would fix one of the many larger problems.\n> >\n>\n> It helps with setting up logical replication among two or more nodes\n> (data flows both ways) which is important for use cases where\n> applications are data-aware. For such apps, it will be beneficial to\n> always send and retrieve data to local nodes in a geographically\n> distributed database. Now, for such apps, to get 100% consistent data\n> among nodes, one needs to enable synchronous_mode (aka set\n> synchronous_standby_names) but if that hurts performance and the data\n> is for analytical purposes then one can use it in asynchronous mode.\n> Now, for such cases, if the local node goes down, the other master\n> node can be immediately available to use, sure it may slow down the\n> operations for some time till the local node come-up. For such apps,\n> later it will be also easier to perform online upgrades.\n>\n> Without this, if the user tries to achieve the same via physical\n> replication by having two local nodes, it can take quite long before\n> the standby can be promoted to master and local reads/writes will be\n> much costlier.\n>\n\nAs mentioned above, the LRG idea might be a useful addition to logical\nreplication for configuring certain types of \"data-aware\"\napplications.\n\nLRG for data-aware apps (e.g. sensor data)\n------------------------------------------\nConsider an example where there are multiple weather stations for a\ncountry. Each weather station is associated with a PostgreSQL node and\ninserts the local sensor data (e.g wind/rain/sunshine etc) once a\nminute to some local table. 
The row data is identified by some station\nID.\n\n- Perhaps there are many nodes.\n\n- Loss of a single row of replicated sensor data if some node goes\ndown is not a major problem for this sort of application.\n\n- Benefits of processing data locally can be realised.\n\n- Using LRG simplifies the setup/sharing of the data across all group\nnodes via a common table.\n\n~~\n\nLRG makes setup easier\n----------------------\nAlthough it is possible already (using Vignesh's \"infinite recursion\"\nWIP patch [1]) to set up this kind of environment using logical\nreplication, as the number of nodes grows it becomes more and more\ndifficult to do it. For each new node, there needs to be N-1 x CREATE\nSUBSCRIPTION for the other group nodes, meaning the connection details\nfor every other node also must be known up-front for the script.\n\nOTOH, the LRG API can simplify all this, removing the user's burden\nand risk of mistakes. Also, LRG only needs to know how to reach just 1\nother node in the group (the implementation will discover all the\nother node connection details internally).\n\n~~\n\nLRG can handle initial table data\n--------------------------------\nIf the joining node (e.g. a new weather station) already has some\ninitial local sensor data then sharing that initial data manually with\nall the other nodes requires some tricky steps. LRG can hide all this\ncomplexity behind the API, so it is not a user problem anymore.\n\n------\n[1] https://www.postgresql.org/message-id/flat/CALDaNm0gwjY_4HFxvvty01BOT01q_fJLKQ3pWP9%3D9orqubhjcQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 25 May 2022 18:11:38 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Wed, May 25, 2022 at 12:13:17PM +0530, Amit Kapila wrote:\n> > You still have not answered my question above. \"Without these features,\n> > what workload would this help with?\" You have only explained how the\n> > patch would fix one of the many larger problems.\n> >\n> \n> It helps with setting up logical replication among two or more nodes\n> (data flows both ways) which is important for use cases where\n> applications are data-aware. For such apps, it will be beneficial to\n\nThat does make sense, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 25 May 2022 22:32:50 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "Dear hackers,\n\nI added documentation more and tap-tests about LRG.\nSame as previous e-mail, 0001 and 0002 are copied from [1].\n\nFollowing lists are the TODO of patches, they will be solved one by one.\n\n## Functional\n\n* implement a new state \"waitforlsncatchup\",\n  that waits until the upstream node receives the latest lsn of the remaining nodes,\n* implement an over-node locking mechanism\n* implement operations that shares initial data\n* implement mechanisms to avoid concurrent API execution\n\nNote that tap-test must be also added if above are added.\n\n## Implemental\n\n* consider failure-handling while executing APIs\n* add error codes for LRG\n* move elog() to ereport() for native language support\n* define pg_lrg_nodes that has NULL-able attribute as proper style\n\n\n[1]: https://commitfest.postgresql.org/38/3610/\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Mon, 30 May 2022 09:49:58 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Multi-Master Logical Replication"
},
{
"msg_contents": "On Wed, May 25, 2022 at 10:32:50PM -0400, Bruce Momjian wrote:\n> On Wed, May 25, 2022 at 12:13:17PM +0530, Amit Kapila wrote:\n> > > You still have not answered my question above. \"Without these features,\n> > > what workload would this help with?\" You have only explained how the\n> > > patch would fix one of the many larger problems.\n> > >\n> > \n> > It helps with setting up logical replication among two or more nodes\n> > (data flows both ways) which is important for use cases where\n> > applications are data-aware. For such apps, it will be beneficial to\n> \n> That does make sense, thanks.\n\nUh, thinking some more, why would anyone set things up this way ---\nhaving part of a table being primary on one server and a different part\nof the table be a subscriber. Seems it would be simpler and safer to\ncreate two child tables and have one be primary on only one server. \nUsers can access both tables using the parent.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 31 May 2022 10:06:35 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Tue, May 31, 2022 at 7:36 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, May 25, 2022 at 10:32:50PM -0400, Bruce Momjian wrote:\n> > On Wed, May 25, 2022 at 12:13:17PM +0530, Amit Kapila wrote:\n> > >\n> > > It helps with setting up logical replication among two or more nodes\n> > > (data flows both ways) which is important for use cases where\n> > > applications are data-aware. For such apps, it will be beneficial to\n> >\n> > That does make sense, thanks.\n>\n> Uh, thinking some more, why would anyone set things up this way ---\n> having part of a table being primary on one server and a different part\n> of the table be a subscriber. Seems it would be simpler and safer to\n> create two child tables and have one be primary on only one server.\n> Users can access both tables using the parent.\n>\n\nYes, users can choose to do that way but still, to keep the nodes in\nsync and continuity of operations, it will be very difficult to manage\nthe operations without the LRG APIs. Let us consider a simple two-node\nexample where on each node there is Table T that has partitions P1 and\nP2. As far as I can understand, one needs to have the below kind of\nset-up to allow local operations on geographically distributed nodes.\n\nNode-1:\nnode1 writes to P1\nnode1 publishes P1\nnode2 subscribes to P1 of node1\n\nNode-2:\nnode2 writes to P2\nnode2 publishes P2\nnode1 subscribes to P2 on node2\n\nIn this setup, we need to publish individual partitions, otherwise, we\nwill face the loop problem where the data sent by node-1 to node-2 via\nlogical replication will again come back to it causing problems like\nconstraints violations, duplicate data, etc. 
There could be other ways\nto do this set up with current logical replication commands (for ex.\npublishing via root table) but that would require ways to avoid loops\nand could have other challenges.\n\nNow, in such a setup/scheme, consider a scenario (scenario-1), where\nnode-2 went off (either it crashes, went out of network, just died,\netc.) and comes up after some time. Now, one can either make the\nnode-2 available by fixing the problem it has or can promote standby\nin that location (if any) to become master, both might require some\ntime. In the meantime to continue the operations (which provides a\nseamless experience to users), users will be connected to node-1 to\nperform the required write operations. Now, to achieve this without\nLRG APIs, it will be quite complex for users to keep the data in sync.\nOne needs to perform various steps to get the partition P2 data that\nwent to node-1 till the time node-2 was not available. On node-1, it\nhas to publish P2 changes for the time node-2 becomes available with\nthe help of Create/Drop Publication APIs. And when node-2 comes back,\nit has to create a subscription for the above publication pub-2 to get\nthat data, ensure both the nodes and in sync, and then allow\noperations on node-2.\n\nNot only this, but if there are more nodes in this set-up (say-10), it\nhas to change (drop/create) subscriptions corresponding to partition\nP2 on all other nodes as each individual node is the owner of some\npartition.\n\nAnother possibility is that the entire data center where node-2 was\npresent was gone due to some unfortunate incident in which case they\nneed to set up a new data center and hence a new node. 
Now, in such a\ncase, the user needs to do all the steps mentioned in the previous\nscenario and additionally, it needs to ensure that it set up the node\nto sync all the existing data (of all partitions) before this node\nagain starts receiving write changes for partition P2.\n\nI think all this should be relatively simpler with LRG APIs wherein\nfor the second scenario user ideally just needs to use the lrg_attach*\nAPI and in the first scenario, it should automatically sync the\nmissing data once the node-2 comes back.\n\nNow, the other important point that we should also consider for these\nLRG APIs is the ease of setup even in the normal case where we are\njust adding a new node as mentioned by Peter Smith in his email [1]\n(LRG makes setup easier). e.g. even if there are many nodes we only\nneed a single lrg_attach by the joining node instead of needing N-1\nsubscriptions on all the existing nodes.\n\n[1] - https://www.postgresql.org/message-id/CAHut%2BPsvvfTWWwE8vkgUg4q%2BQLyoCyNE7NU%3DmEiYHcMcXciXdg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 1 Jun 2022 10:27:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 10:27:27AM +0530, Amit Kapila wrote:\n> On Tue, May 31, 2022 at 7:36 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Uh, thinking some more, why would anyone set things up this way ---\n> > having part of a table being primary on one server and a different part\n> > of the table be a subscriber. Seems it would be simpler and safer to\n> > create two child tables and have one be primary on only one server.\n> > Users can access both tables using the parent.\n> \n> Yes, users can choose to do that way but still, to keep the nodes in\n> sync and continuity of operations, it will be very difficult to manage\n> the operations without the LRG APIs. Let us consider a simple two-node\n> example where on each node there is Table T that has partitions P1 and\n> P2. As far as I can understand, one needs to have the below kind of\n> set-up to allow local operations on geographically distributed nodes.\n> \n> Node-1:\n> node1 writes to P1\n> node1 publishes P1\n> node2 subscribes to P1 of node1\n> \n> Node-2:\n> node2 writes to P2\n> node2 publishes P2\n> node1 subscribes to P2 on node2\n\nYes, that is how you would set it up.\n\n> In this setup, we need to publish individual partitions, otherwise, we\n> will face the loop problem where the data sent by node-1 to node-2 via\n> logical replication will again come back to it causing problems like\n> constraints violations, duplicate data, etc. There could be other ways\n> to do this set up with current logical replication commands (for ex.\n> publishing via root table) but that would require ways to avoid loops\n> and could have other challenges.\n\nRight, individual paritions.\n\n> Now, in such a setup/scheme, consider a scenario (scenario-1), where\n> node-2 went off (either it crashes, went out of network, just died,\n> etc.) and comes up after some time. 
Now, one can either make the\n> node-2 available by fixing the problem it has or can promote standby\n> in that location (if any) to become master, both might require some\n> time. In the meantime to continue the operations (which provides a\n> seamless experience to users), users will be connected to node-1 to\n> perform the required write operations. Now, to achieve this without\n> LRG APIs, it will be quite complex for users to keep the data in sync.\n> One needs to perform various steps to get the partition P2 data that\n> went to node-1 till the time node-2 was not available. On node-1, it\n> has to publish P2 changes for the time node-2 becomes available with\n> the help of Create/Drop Publication APIs. And when node-2 comes back,\n> it has to create a subscription for the above publication pub-2 to get\n> that data, ensure both the nodes and in sync, and then allow\n> operations on node-2.\n\nWell, you are going to need to modify the app so it knows it can write\nto both partitions on failover anyway. I just don't see how adding this\ncomplexity is wise.\n\nMy big point is that you should not be showing up with a patch but\nrather have these discussions to get agreement that this is the\ndirection the community wants to go.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 1 Jun 2022 10:03:18 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
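The per-partition setup quoted in the message above can be sketched in plain SQL. This is a minimal sketch only; the host names, database names, and publication/subscription names are illustrative assumptions, not part of the original proposal:

```sql
-- On node1: publish only the partition that node1 writes to.
CREATE PUBLICATION pub_p1 FOR TABLE p1;

-- On node2: publish only the partition that node2 writes to.
CREATE PUBLICATION pub_p2 FOR TABLE p2;

-- On node1: pull node2's local writes (partition P2).
CREATE SUBSCRIPTION sub_p2
    CONNECTION 'host=node2 dbname=postgres'
    PUBLICATION pub_p2;

-- On node2: pull node1's local writes (partition P1).
CREATE SUBSCRIPTION sub_p1
    CONNECTION 'host=node1 dbname=postgres'
    PUBLICATION pub_p1;
```

Because each node publishes only the partition it writes to, rows that arrive via replication are never re-published, which is what avoids the loop problem described in the thread.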
{
"msg_contents": "On Wed, Jun 1, 2022 at 7:33 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, Jun 1, 2022 at 10:27:27AM +0530, Amit Kapila wrote:\n> > On Tue, May 31, 2022 at 7:36 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > Uh, thinking some more, why would anyone set things up this way ---\n> > > having part of a table being primary on one server and a different part\n> > > of the table be a subscriber. Seems it would be simpler and safer to\n> > > create two child tables and have one be primary on only one server.\n> > > Users can access both tables using the parent.\n> >\n> > Yes, users can choose to do that way but still, to keep the nodes in\n> > sync and continuity of operations, it will be very difficult to manage\n> > the operations without the LRG APIs. Let us consider a simple two-node\n> > example where on each node there is Table T that has partitions P1 and\n> > P2. As far as I can understand, one needs to have the below kind of\n> > set-up to allow local operations on geographically distributed nodes.\n> >\n> > Node-1:\n> > node1 writes to P1\n> > node1 publishes P1\n> > node2 subscribes to P1 of node1\n> >\n> > Node-2:\n> > node2 writes to P2\n> > node2 publishes P2\n> > node1 subscribes to P2 on node2\n>\n> Yes, that is how you would set it up.\n>\n> > In this setup, we need to publish individual partitions, otherwise, we\n> > will face the loop problem where the data sent by node-1 to node-2 via\n> > logical replication will again come back to it causing problems like\n> > constraints violations, duplicate data, etc. There could be other ways\n> > to do this set up with current logical replication commands (for ex.\n> > publishing via root table) but that would require ways to avoid loops\n> > and could have other challenges.\n>\n> Right, individual paritions.\n>\n> > Now, in such a setup/scheme, consider a scenario (scenario-1), where\n> > node-2 went off (either it crashes, went out of network, just died,\n> > etc.) 
and comes up after some time. Now, one can either make the\n> > node-2 available by fixing the problem it has or can promote standby\n> > in that location (if any) to become master, both might require some\n> > time. In the meantime to continue the operations (which provides a\n> > seamless experience to users), users will be connected to node-1 to\n> > perform the required write operations. Now, to achieve this without\n> > LRG APIs, it will be quite complex for users to keep the data in sync.\n> > One needs to perform various steps to get the partition P2 data that\n> > went to node-1 till the time node-2 was not available. On node-1, it\n> > has to publish P2 changes for the time node-2 becomes available with\n> > the help of Create/Drop Publication APIs. And when node-2 comes back,\n> > it has to create a subscription for the above publication pub-2 to get\n> > that data, ensure both the nodes and in sync, and then allow\n> > operations on node-2.\n>\n> Well, you are going to need to modify the app so it knows it can write\n> to both partitions on failover anyway.\n>\n\nI am not sure if this point is clear to me. From what I can understand\nthere are two possibilities for the app in this case and both seem to\nbe problematic.\n\n(a) The app can be taught to write to the P2 partition in node-1 till\nthe time node-2 is not available. If so, how will we get the partition\nP2 data that went to node-1 till the time node-2 was unavailable? If\nwe don't get the data to node-2 then the operations on node-2 (once it\ncomes back) can return incorrect results. Also, we need to ensure all\nthe data for P2 that went to node-1 should be replicated to all other\nnodes in the system and for that also we need to create new\nsubscriptions pointing to node-1. It is easier to think of doing this\nfor physical replication where after failover the old master node can\nstart following the new node and the app just need to be taught to\nwrite to the new master node. 
I can't see how we can achieve that by\ncurrent logical replication APIs (apart from doing the complex steps\nshared by me). One of the purposes of these new LRG APIs is to ensure\nthat users don't need to follow those complex steps after failover.\n\n(b) The other possibility is that the app is responsible to ensure\nthat the same data is written on both node-1 and node-2 for the time\none of those is not available. For that app needs to store the data at\nsomeplace for the time one of the nodes is unavailable and then write\nit once the other node becomes available? Also, it won't be practical\nwhen there are more partitions (say 10 or more) as all the partitions\ndata needs to be present on each node. I think it is the\nresponsibility of the database to keep the data in sync among nodes\nwhen one or more of the nodes are not available.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Jun 2022 11:38:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Thu, Jun 2, 2022 at 12:03 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, Jun 1, 2022 at 10:27:27AM +0530, Amit Kapila wrote:\n...\n\n> My big point is that you should not be showing up with a patch but\n> rather have these discussions to get agreement that this is the\n> direction the community wants to go.\n\nThe purpose of posting the POC patch was certainly not to present a\nfait accompli design/implementation.\n\nWe wanted to solicit some community feedback about the desirability of\nthe feature, but because LRG is complicated to describe we felt that\nhaving a basic functional POC might help to better understand the\nproposal. Also, we thought the ability to experiment with the proposed\nAPI could help people to decide whether LRG is something worth\npursuing or not.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 2 Jun 2022 17:12:49 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Thu, Jun 2, 2022 at 05:12:49PM +1000, Peter Smith wrote:\n> On Thu, Jun 2, 2022 at 12:03 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Wed, Jun 1, 2022 at 10:27:27AM +0530, Amit Kapila wrote:\n> ...\n> \n> > My big point is that you should not be showing up with a patch but\n> > rather have these discussions to get agreement that this is the\n> > direction the community wants to go.\n> \n> The purpose of posting the POC patch was certainly not to present a\n> fait accompli design/implementation.\n> \n> We wanted to solicit some community feedback about the desirability of\n> the feature, but because LRG is complicated to describe we felt that\n> having a basic functional POC might help to better understand the\n> proposal. Also, we thought the ability to experiment with the proposed\n> API could help people to decide whether LRG is something worth\n> pursuing or not.\n\nI don't think the POC is helping, and I am not sure we really want to\nsupport this style of architecture due to its complexity vs other\noptions.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 2 Jun 2022 21:42:48 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Fri, Jun 3, 2022 at 7:12 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Jun 2, 2022 at 05:12:49PM +1000, Peter Smith wrote:\n> > On Thu, Jun 2, 2022 at 12:03 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > >\n> > > On Wed, Jun 1, 2022 at 10:27:27AM +0530, Amit Kapila wrote:\n> > ...\n> >\n> > > My big point is that you should not be showing up with a patch but\n> > > rather have these discussions to get agreement that this is the\n> > > direction the community wants to go.\n> >\n> > The purpose of posting the POC patch was certainly not to present a\n> > fait accompli design/implementation.\n> >\n> > We wanted to solicit some community feedback about the desirability of\n> > the feature, but because LRG is complicated to describe we felt that\n> > having a basic functional POC might help to better understand the\n> > proposal. Also, we thought the ability to experiment with the proposed\n> > API could help people to decide whether LRG is something worth\n> > pursuing or not.\n>\n> I don't think the POC is helping, and I am not sure we really want to\n> support this style of architecture due to its complexity vs other\n> options.\n>\n\nNone of the other options discussed on this thread appears to be\nbetter or can serve the intent. What other options do you have in mind\nand how are they simpler than this? As far as I can understand this\nprovides a simple way to set up n-way replication among nodes.\n\nI see that other databases provide similar ways to set up n-way\nreplication. See [1] and in particular [2][3][4] provides a way to set\nup n-way replication via APIs. 
Yet another way is via configuration,\nas seems to be provided by MySQL [5] (Group Replication Settings).\nMost of the advantages have already been shared, but let me summarize\nagain the benefits it brings: (a) more localized database access for\ngeographically distributed databases, (b) ensuring continuous\navailability in case the primary site becomes unavailable due to a\nsystem or network outage or a natural disaster at the site, (c)\nenvironments that require a fluid replication infrastructure, where\nthe number of servers has to grow or shrink dynamically and with as\nfew side effects as possible, for instance, database services for the\ncloud, and (d) load balancing. Some of these can probably be served in\nother ways but not everything.\n\nI see your point about the POC not helping here, and it can also sometimes\ndiscourage the OP if we decide not to do this feature or do it in an\nentirely different way. But OTOH, I don't see it stopping us from\ndiscussing the desirability or design of this feature.\n\n[1] - https://docs.oracle.com/cd/E18283_01/server.112/e10707/rarrcatpac.htm\n[2] - https://docs.oracle.com/cd/E18283_01/server.112/e10707/rarrcatpac.htm#i96251\n[3] - https://docs.oracle.com/cd/E18283_01/server.112/e10707/rarrcatpac.htm#i94500\n[4] - https://docs.oracle.com/cd/E18283_01/server.112/e10707/rarrcatpac.htm#i97185\n[5] - https://dev.mysql.com/doc/refman/8.0/en/group-replication-configuring-instances.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 4 Jun 2022 16:20:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "Dear hackers,\r\n\r\nI found another use-case for LRG. It might be helpful for migration.\r\n\r\n\r\nLRG for migration\r\n------------------------------------------\r\nLRG may be helpful for machine migration, OS upgrades,\r\nor PostgreSQL version upgrades.\r\n\r\nAssume that users want to migrate a database to another environment,\r\ne.g., PG16 on RHEL7 to PG18 on RHEL8.\r\nUsers must copy all data into the new server and catch up on all subsequent changes.\r\nIn this case streaming replication cannot be used\r\nbecause it requires the same OS and the same PostgreSQL major version.\r\nMoreover, it is desirable to be able to return to the original environment at any time\r\nin case of application or other environmental deficiencies.\r\n\r\n\r\nOperation steps with LRG\r\n------------------------------------------\r\n\r\nLRG is appropriate for this situation. The following is the workflow that users must follow:\r\n\r\n1. Copy the table definitions to the newer node (PG18) via pg_dump/pg_restore\r\n2. Execute lrg_create() on the older node (PG16)\r\n3. Execute lrg_node_attach() on PG18\r\n\r\n=== data will be shared here ===\r\n\r\n4. Change the connection of the user application to PG18\r\n5. Check whether any ERROR is raised. If some ERRORs are raised,\r\n users can switch the connection back to PG16.\r\n6. Remove the created node group if the application works well.\r\n\r\nThese operations may reduce system downtime\r\ndue to incompatibilities associated with version upgrades.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 6 Jun 2022 10:54:21 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Multi-Master Logical Replication"
},
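A rough sketch of the migration workflow described above, using the lrg_* API proposed by the POC. The argument lists shown here (group name, connection strings, node names) are assumptions for illustration only; the actual POC signatures may differ:

```sql
-- Step 1 (shell, on PG18): restore the schema only, e.g.
--   pg_dump --schema-only -h pg16host mydb | psql -h pg18host mydb

-- Step 2 (on the older PG16 node): create the replication group.
SELECT lrg_create('migration_group',
                  'host=pg16host dbname=mydb',  -- local connection (assumed)
                  'node_pg16');

-- Step 3 (on the newer PG18 node): attach to the existing group.
SELECT lrg_node_attach('migration_group',
                       'host=pg18host dbname=mydb',   -- local connection (assumed)
                       'host=pg16host dbname=mydb',   -- upstream connection (assumed)
                       'node_pg18');

-- Steps 4-6: repoint the application at PG18, watch for errors,
-- and once the new node is proven, remove the group, e.g.
--   SELECT lrg_drop('migration_group');
```

The point of the sketch is that the two nodes stay writable and in sync during the cutover, so the application can be switched back to PG16 at any time before the group is dropped.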
{
"msg_contents": "On Thu, Apr 28, 2022 at 5:20 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> MULTI-MASTER LOGICAL REPLICATION\n>\n> 1.0 BACKGROUND\n>\n> Let’s assume that a user wishes to set up a multi-master environment\n> so that a set of PostgreSQL instances (nodes) use logical replication\n> to share tables with every other node in the set.\n>\n> We define this as a multi-master logical replication (MMLR) node-set.\n>\n> <please refer to the attached node-set diagram>\n>\n> 1.1 ADVANTAGES OF MMLR\n>\n> - Increases write scalability (e.g., all nodes can write arbitrary data).\n> - Allows load balancing\n> - Allows rolling updates of nodes (e.g., logical replication works\n> between different major versions of PostgreSQL).\n> - Improves the availability of the system (e.g., no single point of failure)\n> - Improves performance (e.g., lower latencies for geographically local nodes)\n\nThanks for working on this proposal. I have a few high-level thoughts,\nplease bear with me if I repeat any of them:\n\n1. Are you proposing to use logical replication subscribers to be in\nsync quorum? In other words, in an N-masters node, M (M >= N)-node\nconfiguration, will each master be part of the sync quorum in the\nother master?\n2. Is there any mention of reducing the latencies that logical\nreplication will have generally (initial table sync and\nafter-caught-up decoding and replication latencies)?\n3. What if \"some\" postgres provider assures an SLA of very few seconds\nfor failovers in typical HA set up with primary and multiple sync and\nasync standbys? In this context, where does the multi-master\narchitecture sit in the broad range of postgres use-cases?\n4. Can the design proposed here be implemented as an extension instead\nof a core postgres solution?\n5. Why should one use logical replication for multi master\nreplication? 
If logical replication is used, isn't it going to be\nsomething like logically decode and replicate every WAL record from\none master to all other masters? Instead, can't it be achieved via\nstreaming/physical replication?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 9 Jun 2022 18:03:44 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Thu, Jun 9, 2022 at 6:04 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Apr 28, 2022 at 5:20 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > MULTI-MASTER LOGICAL REPLICATION\n> >\n> > 1.0 BACKGROUND\n> >\n> > Let’s assume that a user wishes to set up a multi-master environment\n> > so that a set of PostgreSQL instances (nodes) use logical replication\n> > to share tables with every other node in the set.\n> >\n> > We define this as a multi-master logical replication (MMLR) node-set.\n> >\n> > <please refer to the attached node-set diagram>\n> >\n> > 1.1 ADVANTAGES OF MMLR\n> >\n> > - Increases write scalability (e.g., all nodes can write arbitrary data).\n> > - Allows load balancing\n> > - Allows rolling updates of nodes (e.g., logical replication works\n> > between different major versions of PostgreSQL).\n> > - Improves the availability of the system (e.g., no single point of failure)\n> > - Improves performance (e.g., lower latencies for geographically local nodes)\n>\n> Thanks for working on this proposal. I have a few high-level thoughts,\n> please bear with me if I repeat any of them:\n>\n> 1. Are you proposing to use logical replication subscribers to be in\n> sync quorum? In other words, in an N-masters node, M (M >= N)-node\n> configuration, will each master be part of the sync quorum in the\n> other master?\n>\n\nWhat exactly do you mean by sync quorum here? If you mean to say that\neach master node will be allowed to wait till the commit happens on\nall other nodes similar to how our current synchronous_commit and\nsynchronous_standby_names work, then yes, it could be achieved. I\nthink the patch currently doesn't support this but it could be\nextended to support the same. Basically, one can be allowed to set up\nasync and sync nodes in combination depending on its use case.\n\n> 2. 
Is there any mention of reducing the latencies that logical\n> replication will have generally (initial table sync and\n> after-caught-up decoding and replication latencies)?\n>\n\nNo, this won't change under the hood replication mechanism.\n\n> 3. What if \"some\" postgres provider assures an SLA of very few seconds\n> for failovers in typical HA set up with primary and multiple sync and\n> async standbys? In this context, where does the multi-master\n> architecture sit in the broad range of postgres use-cases?\n>\n\nI think this is one of the primary use cases of the n-way logical\nreplication solution where in there shouldn't be any noticeable wait\ntime when one or more of the nodes goes down. All nodes have the\ncapability to allow writes so the app just needs to connect to another\nnode. I feel some analysis is required to find out and state exactly\nhow the users can achieve this but seems doable. The other use cases\nare discussed in this thread and are summarized in emails [1][2].\n\n> 4. Can the design proposed here be implemented as an extension instead\n> of a core postgres solution?\n>\n\nYes, I think it could be. I think this proposal introduces some system\ntables, so need to analyze what to do about that. BTW, do you see any\nadvantages to doing so?\n\n> 5. Why should one use logical replication for multi master\n> replication? If logical replication is used, isn't it going to be\n> something like logically decode and replicate every WAL record from\n> one master to all other masters? Instead, can't it be achieved via\n> streaming/physical replication?\n>\n\nThe failover/downtime will be much lesser in a solution based on\nlogical replication because all nodes are master nodes and users will\nbe allowed to write on other nodes instead of waiting for the physical\nstandby to become writeable. Then it will allow more localized\ndatabase access for geographically distributed databases, see the\nemail for further details on this [3]. 
Also, the benefiting scenarios\nare the same as all usual Logical Replication quoted benefits - e.g\nversion independence, getting selective/required data, etc.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BZP9c6q1BQWSQC__w09WQ-qGt22dTmajDmTxR_CAUyJQ%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/TYAPR01MB58660FCFEC7633E15106C94BF5A29%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n[3] - https://www.postgresql.org/message-id/CAA4eK1%2BDRHCNLongM0stsVBY01S-s%3DEa_yjBFnv_Uz3m3Hky-w%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 10 Jun 2022 09:54:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Fri, Jun 10, 2022 at 9:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jun 9, 2022 at 6:04 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Thu, Apr 28, 2022 at 5:20 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > MULTI-MASTER LOGICAL REPLICATION\n> > >\n> > > 1.0 BACKGROUND\n> > >\n> > > Let’s assume that a user wishes to set up a multi-master environment\n> > > so that a set of PostgreSQL instances (nodes) use logical replication\n> > > to share tables with every other node in the set.\n> > >\n> > > We define this as a multi-master logical replication (MMLR) node-set.\n> > >\n> > > <please refer to the attached node-set diagram>\n> > >\n> > > 1.1 ADVANTAGES OF MMLR\n> > >\n> > > - Increases write scalability (e.g., all nodes can write arbitrary data).\n> > > - Allows load balancing\n> > > - Allows rolling updates of nodes (e.g., logical replication works\n> > > between different major versions of PostgreSQL).\n> > > - Improves the availability of the system (e.g., no single point of failure)\n> > > - Improves performance (e.g., lower latencies for geographically local nodes)\n> >\n> > Thanks for working on this proposal. I have a few high-level thoughts,\n> > please bear with me if I repeat any of them:\n> >\n> > 1. Are you proposing to use logical replication subscribers to be in\n> > sync quorum? In other words, in an N-masters node, M (M >= N)-node\n> > configuration, will each master be part of the sync quorum in the\n> > other master?\n> >\n>\n> What exactly do you mean by sync quorum here? If you mean to say that\n> each master node will be allowed to wait till the commit happens on\n> all other nodes similar to how our current synchronous_commit and\n> synchronous_standby_names work, then yes, it could be achieved. I\n> think the patch currently doesn't support this but it could be\n> extended to support the same. 
Basically, one can be allowed to set up\n> async and sync nodes in combination depending on its use case.\n\nYes, I meant each master node will be in synchronous_commit with\nothers. In this setup, do you see any problems such as deadlocks if\nwrite-txns on the same table occur on all the masters at a time?\n\nIf the master nodes are not in synchronous_commit i.e. connected in\nasynchronous mode, don't we have data synchronous problems because of\nlogical decoding and replication latencies? Say, I do a bulk-insert to\na table foo on master 1, Imagine there's a latency with which the\ninserted rows get replicated to master 2 and meanwhile I do update on\nthe same table foo on master 2 based on the rows inserted in master 1\n- master 2 doesn't have all the inserted rows on master 1 - how does\nthe solution proposed here address this problem?\n\n> > 3. What if \"some\" postgres provider assures an SLA of very few seconds\n> > for failovers in typical HA set up with primary and multiple sync and\n> > async standbys? In this context, where does the multi-master\n> > architecture sit in the broad range of postgres use-cases?\n> >\n>\n> I think this is one of the primary use cases of the n-way logical\n> replication solution where in there shouldn't be any noticeable wait\n> time when one or more of the nodes goes down. All nodes have the\n> capability to allow writes so the app just needs to connect to another\n> node. I feel some analysis is required to find out and state exactly\n> how the users can achieve this but seems doable. The other use cases\n> are discussed in this thread and are summarized in emails [1][2].\n\nIIUC, the main goals of this feature are - zero failover times and\nless write latencies, right? How is it going to solve the data\nsynchronization problem (stated above) with the master nodes connected\nto each other in asynchronous mode?\n\n> > 4. 
Can the design proposed here be implemented as an extension instead\n> > of a core postgres solution?\n> >\n>\n> Yes, I think it could be. I think this proposal introduces some system\n> tables, so need to analyze what to do about that. BTW, do you see any\n> advantages to doing so?\n\nIMO, yes, doing it the extension way has many advantages - it doesn't\nhave to touch the core part of postgres, usability will be good -\nwhoever requires this solution will use and we can avoid code chunks\nwithin the core such as if (feature_enabled) { do foo} else { do bar}\nsorts. Since this feature is based on core postgres logical\nreplication infrastructure, I think it's worth implementing it as an\nextension first, maybe the extension as a PoC?\n\n> > 5. Why should one use logical replication for multi master\n> > replication? If logical replication is used, isn't it going to be\n> > something like logically decode and replicate every WAL record from\n> > one master to all other masters? Instead, can't it be achieved via\n> > streaming/physical replication?\n> >\n>\n> The failover/downtime will be much lesser in a solution based on\n> logical replication because all nodes are master nodes and users will\n> be allowed to write on other nodes instead of waiting for the physical\n> standby to become writeable.\n\nI don't think that's a correct statement unless the design proposed\nhere addresses the data synchronization problem (stated above) with\nthe master nodes connected to each other in asynchronous mode.\n\n> Then it will allow more localized\n> database access for geographically distributed databases, see the\n> email for further details on this [3]. 
Also, the benefiting scenarios\n> are the same as all usual Logical Replication quoted benefits - e.g\n> version independence, getting selective/required data, etc.\n>\n> [1] - https://www.postgresql.org/message-id/CAA4eK1%2BZP9c6q1BQWSQC__w09WQ-qGt22dTmajDmTxR_CAUyJQ%40mail.gmail.com\n> [2] - https://www.postgresql.org/message-id/TYAPR01MB58660FCFEC7633E15106C94BF5A29%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n> [3] - https://www.postgresql.org/message-id/CAA4eK1%2BDRHCNLongM0stsVBY01S-s%3DEa_yjBFnv_Uz3m3Hky-w%40mail.gmail.com\n\nIMHO, geographically distributed databases are \"different sorts in\nthemselves\" and have different ways and means to address data\nsynchronization, latencies, replication, failovers, conflict\nresolutions etc. (I'm no expert there, others may have better\nthoughts).\n\nHaving said that, it will be great to know if there are any notable or\nmentionable customer typical scenarios or use-cases for multi master\nsolutions within postgres.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 10 Jun 2022 12:40:15 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "On Fri, Jun 10, 2022 at 12:40 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Jun 10, 2022 at 9:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > 1. Are you proposing to use logical replication subscribers to be in\n> > > sync quorum? In other words, in an N-masters node, M (M >= N)-node\n> > > configuration, will each master be part of the sync quorum in the\n> > > other master?\n> > >\n> >\n> > What exactly do you mean by sync quorum here? If you mean to say that\n> > each master node will be allowed to wait till the commit happens on\n> > all other nodes similar to how our current synchronous_commit and\n> > synchronous_standby_names work, then yes, it could be achieved. I\n> > think the patch currently doesn't support this but it could be\n> > extended to support the same. Basically, one can be allowed to set up\n> > async and sync nodes in combination depending on its use case.\n>\n> Yes, I meant each master node will be in synchronous_commit with\n> others. In this setup, do you see any problems such as deadlocks if\n> write-txns on the same table occur on all the masters at a time?\n>\n\nI have not tried but I don't see in theory why this should happen\nunless someone tries to update a similar set of rows in conflicting\norder similar to how it can happen in a single node. If so, it will\nerror out and one of the conflicting transactions needs to be retried.\nIOW, I think the behavior should be the same as on a single node. Do\nyou have any particular examples in mind?\n\n> If the master nodes are not in synchronous_commit i.e. connected in\n> asynchronous mode, don't we have data synchronous problems because of\n> logical decoding and replication latencies? 
Say, I do a bulk-insert to\n> a table foo on master 1, Imagine there's a latency with which the\n> inserted rows get replicated to master 2 and meanwhile I do update on\n> the same table foo on master 2 based on the rows inserted in master 1\n> - master 2 doesn't have all the inserted rows on master 1 - how does\n> the solution proposed here address this problem?\n>\n\nI don't think that is possible even in theory and none of the other\nn-way replication solutions I have read seems to be claiming to have\nsomething like that. It is quite possible that I am missing something\nhere but why do we want to have such a requirement from asynchronous\nreplication? I think in such cases even for load balancing we can\ndistribute reads where eventually consistent data is acceptable and\nwrites on separate tables/partitions can be distributed.\n\nI haven't responded to some of your other points as they are\nassociated with the above theory.\n\n>\n> > > 4. Can the design proposed here be implemented as an extension instead\n> > > of a core postgres solution?\n> > >\n> >\n> > Yes, I think it could be. I think this proposal introduces some system\n> > tables, so need to analyze what to do about that. BTW, do you see any\n> > advantages to doing so?\n>\n> IMO, yes, doing it the extension way has many advantages - it doesn't\n> have to touch the core part of postgres, usability will be good -\n> whoever requires this solution will use and we can avoid code chunks\n> within the core such as if (feature_enabled) { do foo} else { do bar}\n> sorts. Since this feature is based on core postgres logical\n> replication infrastructure, I think it's worth implementing it as an\n> extension first, maybe the extension as a PoC?\n>\n\nI don't know if it requires the kind of code you are thinking but I\nagree that it is worth considering implementing it as an extension.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 10 Jun 2022 14:59:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multi-Master Logical Replication"
},
{
"msg_contents": "Hi,\r\n\r\n\r\nIn addition to the use cases mentioned above, some users want to use n-way\r\nreplication of a partial database.\r\n\r\nThe following is a typical use case:\r\n\r\n* There are several data centers.\r\n (ex. Japan and India)\r\n* The database in each data center has its own unique data.\r\n (ex. the database in Japan has the data related to Japan)\r\n* There is some common data.\r\n (ex. the shipment data from Japan to India should be stored in both databases)\r\n* To replicate the common data, users want to use n-way replication.\r\n\r\n\r\nThe current POC patch seems to support only n-way replication of the entire database,\r\nbut I think we should also support n-way replication of a partial database to achieve\r\nthe above use case.\r\n\r\n\r\n> I don't know if it requires the kind of code you are thinking but I\r\n> agree that it is worth considering implementing it as an extension.\r\n\r\nI think the other advantage of implementing it as an extension is that users could\r\ninstall the extension on older Postgres versions.\r\n\r\nAs mentioned in a previous email, one use case of n-way replication is migration\r\nfrom older Postgres to newer Postgres.\r\n\r\nIf we implement it as an extension, users could use n-way replication for migration\r\nfrom PG10 to PG16.\r\n\r\n\r\nRegards,\r\nRyohei Takahashi\r\n",
"msg_date": "Mon, 13 Jun 2022 11:02:40 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Multi-Master Logical Replication"
},
{
"msg_contents": "Dear Takahashi-san,\r\n\r\nThanks for giving feedbacks!\r\n\r\n> > I don't know if it requires the kind of code you are thinking but I\r\n> > agree that it is worth considering implementing it as an extension.\r\n> \r\n> I think the other advantage to implement as an extension is that users could\r\n> install the extension to older Postgres.\r\n> \r\n> As mentioned in previous email, the one use case of n-way replication is migration\r\n> from older Postgres to newer Postgres.\r\n> \r\n> If we implement as an extension, users could use n-way replication for migration\r\n> from PG10 to PG16.\r\n>\r\n\r\nI think even if LRG is implemented as contrib modules or any extensions,\r\nit will deeply depend on the subscription option \"origin\" proposed in [1].\r\nSo LRG cannot be used for older version, only PG16 or later.\r\n\r\n[1]: https://www.postgresql.org/message-id/CALDaNm3Pt1CpEb3y9pE7ff91gZVpNXr91y4ZtWiw6h+GAyG4Gg@mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 14 Jun 2022 09:33:27 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Multi-Master Logical Replication"
},
{
"msg_contents": "Hi Kuroda san,\r\n\r\n\r\n> I think even if LRG is implemented as contrib modules or any extensions,\r\n> it will deeply depend on the subscription option \"origin\" proposed in [1].\r\n> So LRG cannot be used for older version, only PG16 or later.\r\n\r\nSorry, I misunderstood.\r\nI understand now.\r\n\r\nRegards,\r\nRyohei Takahashi\r\n",
"msg_date": "Tue, 14 Jun 2022 10:40:51 +0000",
"msg_from": "\"r.takahashi_2@fujitsu.com\" <r.takahashi_2@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Multi-Master Logical Replication"
},
{
"msg_contents": "Hi hackers,\r\n\r\nWhile analyzing about failure handling in the N-way logical replication,\r\nI found that in previous PoC detaching API cannot detach a node which has failed.\r\n\r\nI though lack of the feature was not suitable for testing purpose, so I would like to post a new version.\r\nAlso this patch was adjusted to new version of the infinite recursive patch[1]. \r\n0001-0004 were copied from the thread.\r\n\r\nNote that LRG has been still implemented as the core feature.\r\nWe have not yet compared advantages for implementing as contrib modules.\r\n\r\n\r\n[1]: https://www.postgresql.org/message-id/CALDaNm0PYba4dJPO9YAnQmuCFHgLEfOBFwbfidB1-pOS3pBCXA@mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 30 Jun 2022 01:50:15 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Multi-Master Logical Replication"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile reading worker.c, I noticed that the referred SQL command was wrong.\nALTER SUBSCRIPTION ... REFRESH PUBLICATION instead of ALTER TABLE ... REFRESH\nPUBLICATION. Trivial fix attached.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 27 Apr 2022 21:27:08 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": true,
"msg_subject": "trivial comment fix"
},
{
"msg_contents": "On Thu, Apr 28, 2022 at 7:27 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> Hi,\n>\n> While reading worker.c, I noticed that the referred SQL command was wrong.\n> ALTER SUBSCRIPTION ... REFRESH PUBLICATION instead of ALTER TABLE ... REFRESH\n> PUBLICATION. Trivial fix attached.\n\nPushed, thanks!\n\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 28 Apr 2022 09:32:18 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: trivial comment fix"
}
] |
[
{
"msg_contents": "Hi\r\n\r\nI am learning new JSON API, and I am not sure, how the result of JSON_QUERY\r\nin one case is correct. So I am asking here\r\n\r\n(2022-04-28 10:13:26) postgres=# SELECT JSON_QUERY(jsonb '[{\"a\":10, \"b\":\r\n20}, {\"a\": 30, \"b\":100}]', '$.**.a' with wrapper);\r\n┌──────────────────┐\r\n│ json_query │\r\n╞══════════════════╡\r\n│ [10, 30, 10, 30] │\r\n└──────────────────┘\r\n(1 row)\r\n\r\nIs this result correct? I am expecting just [10, 30]\r\n\r\nRegards\r\n\r\nPavel\r\n\nHi I am learning new JSON API, and I am not sure, how the result of JSON_QUERY in one case is correct. So I am asking here(2022-04-28 10:13:26) postgres=# SELECT JSON_QUERY(jsonb '[{\"a\":10, \"b\": 20}, {\"a\": 30, \"b\":100}]', '$.**.a' with wrapper);┌──────────────────┐│ json_query │╞══════════════════╡│ [10, 30, 10, 30] │└──────────────────┘(1 row)Is this result correct? I am expecting just [10, 30]RegardsPavel",
"msg_date": "Thu, 28 Apr 2022 10:16:41 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "json_query - redundant result"
},
{
"msg_contents": "\nOn 2022-04-28 Th 04:16, Pavel Stehule wrote:\n> Hi\n>\n> I am learning new JSON API, and I am not sure, how the result of\n> JSON_QUERY in one case is correct. So I am asking here\n>\n> (2022-04-28 10:13:26) postgres=# SELECT JSON_QUERY(jsonb '[{\"a\":10,\n> \"b\": 20}, {\"a\": 30, \"b\":100}]', '$.**.a' with wrapper);\n> ┌──────────────────┐\n> │ json_query │\n> ╞══════════════════╡\n> │ [10, 30, 10, 30] │\n> └──────────────────┘\n> (1 row)\n>\n> Is this result correct? I am expecting just [10, 30]\n\n\nIt's just a wrapper around jsonb_path_query, which hasn't changed.\n\n\n# SELECT jsonb_path_query(jsonb '[{\"a\":10, \"b\": 20}, {\"a\": 30,\n\"b\":100}]', '$.**.a');\n jsonb_path_query\n------------------\n 10\n 30\n 10\n 30\n(4 rows)\n\n\nIf that's a bug it's not a new one - release 14 gives the same result.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 28 Apr 2022 09:49:15 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: json_query - redundant result"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-04-28 Th 04:16, Pavel Stehule wrote:\n>> Is this result correct? I am expecting just [10, 30]\n\n> It's just a wrapper around jsonb_path_query, which hasn't changed.\n\n> # SELECT jsonb_path_query(jsonb '[{\"a\":10, \"b\": 20}, {\"a\": 30,\n> \"b\":100}]', '$.**.a');\n> jsonb_path_query\n> ------------------\n> 10\n> 30\n> 10\n> 30\n> (4 rows)\n\n> If that's a bug it's not a new one - release 14 gives the same result.\n\nI'm pretty clueless in this area, but I think this might have to do with\nthe \"lax mode\" described in 9.16.2.1:\n\nhttps://www.postgresql.org/docs/devel/functions-json.html#FUNCTIONS-SQLJSON-PATH\n\nregression=# SELECT jsonb_path_query(jsonb '[{\"a\":10, \"b\": 20}, {\"a\": 30,\nregression'# \"b\":100}]', '$.**.a');\n jsonb_path_query \n------------------\n 10\n 30\n 10\n 30\n(4 rows)\n\nregression=# SELECT jsonb_path_query(jsonb '[{\"a\":10, \"b\": 20}, {\"a\": 30,\n\"b\":100}]', 'strict $.**.a');\n jsonb_path_query \n------------------\n 10\n 30\n(2 rows)\n\nMaybe these SQL-standard syntaxes ought to default to strict mode?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Apr 2022 10:00:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: json_query - redundant result"
},
{
"msg_contents": "čt 28. 4. 2022 v 16:00 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 2022-04-28 Th 04:16, Pavel Stehule wrote:\n> >> Is this result correct? I am expecting just [10, 30]\n>\n> > It's just a wrapper around jsonb_path_query, which hasn't changed.\n>\n> > # SELECT jsonb_path_query(jsonb '[{\"a\":10, \"b\": 20}, {\"a\": 30,\n> > \"b\":100}]', '$.**.a');\n> > jsonb_path_query\n> > ------------------\n> > 10\n> > 30\n> > 10\n> > 30\n> > (4 rows)\n>\n> > If that's a bug it's not a new one - release 14 gives the same result.\n>\n> I'm pretty clueless in this area, but I think this might have to do with\n> the \"lax mode\" described in 9.16.2.1:\n>\n>\n> https://www.postgresql.org/docs/devel/functions-json.html#FUNCTIONS-SQLJSON-PATH\n>\n> regression=# SELECT jsonb_path_query(jsonb '[{\"a\":10, \"b\": 20}, {\"a\": 30,\n> regression'# \"b\":100}]', '$.**.a');\n> jsonb_path_query\n> ------------------\n> 10\n> 30\n> 10\n> 30\n> (4 rows)\n>\n> regression=# SELECT jsonb_path_query(jsonb '[{\"a\":10, \"b\": 20}, {\"a\": 30,\n> \"b\":100}]', 'strict $.**.a');\n> jsonb_path_query\n> ------------------\n> 10\n> 30\n> (2 rows)\n>\n> Maybe these SQL-standard syntaxes ought to default to strict mode?\n>\n\nIt looks like a perfect trap, although it is documented.\n\nI don't think the default strict mode is better. Maybe disallow .** in lax\nmode?\n\nRegards\n\nPavel\n\n\n\n>\n> regards, tom lane\n>\n\nčt 28. 4. 2022 v 16:00 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-04-28 Th 04:16, Pavel Stehule wrote:\n>> Is this result correct? 
I am expecting just [10, 30]\n\n> It's just a wrapper around jsonb_path_query, which hasn't changed.\n\n> # SELECT jsonb_path_query(jsonb '[{\"a\":10, \"b\": 20}, {\"a\": 30,\n> \"b\":100}]', '$.**.a');\n> jsonb_path_query\n> ------------------\n> 10\n> 30\n> 10\n> 30\n> (4 rows)\n\n> If that's a bug it's not a new one - release 14 gives the same result.\n\nI'm pretty clueless in this area, but I think this might have to do with\nthe \"lax mode\" described in 9.16.2.1:\n\nhttps://www.postgresql.org/docs/devel/functions-json.html#FUNCTIONS-SQLJSON-PATH\n\nregression=# SELECT jsonb_path_query(jsonb '[{\"a\":10, \"b\": 20}, {\"a\": 30,\nregression'# \"b\":100}]', '$.**.a');\n jsonb_path_query \n------------------\n 10\n 30\n 10\n 30\n(4 rows)\n\nregression=# SELECT jsonb_path_query(jsonb '[{\"a\":10, \"b\": 20}, {\"a\": 30,\n\"b\":100}]', 'strict $.**.a');\n jsonb_path_query \n------------------\n 10\n 30\n(2 rows)\n\nMaybe these SQL-standard syntaxes ought to default to strict mode?It looks like a perfect trap, although it is documented.I don't think the default strict mode is better. Maybe disallow .** in lax mode?RegardsPavel \n\n regards, tom lane",
"msg_date": "Thu, 28 Apr 2022 16:06:28 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: json_query - redundant result"
},
{
"msg_contents": "\nOn 2022-04-28 Th 10:06, Pavel Stehule wrote:\n>\n>\n> čt 28. 4. 2022 v 16:00 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 2022-04-28 Th 04:16, Pavel Stehule wrote:\n> >> Is this result correct? I am expecting just [10, 30]\n>\n> > It's just a wrapper around jsonb_path_query, which hasn't changed.\n>\n> > # SELECT jsonb_path_query(jsonb '[{\"a\":10, \"b\": 20}, {\"a\": 30,\n> > \"b\":100}]', '$.**.a');\n> > jsonb_path_query\n> > ------------------\n> > 10\n> > 30\n> > 10\n> > 30\n> > (4 rows)\n>\n> > If that's a bug it's not a new one - release 14 gives the same\n> result.\n>\n> I'm pretty clueless in this area, but I think this might have to\n> do with\n> the \"lax mode\" described in 9.16.2.1 <http://9.16.2.1>:\n>\n> https://www.postgresql.org/docs/devel/functions-json.html#FUNCTIONS-SQLJSON-PATH\n>\n> regression=# SELECT jsonb_path_query(jsonb '[{\"a\":10, \"b\": 20},\n> {\"a\": 30,\n> regression'# \"b\":100}]', '$.**.a');\n> jsonb_path_query\n> ------------------\n> 10\n> 30\n> 10\n> 30\n> (4 rows)\n>\n> regression=# SELECT jsonb_path_query(jsonb '[{\"a\":10, \"b\": 20},\n> {\"a\": 30,\n> \"b\":100}]', 'strict $.**.a');\n> jsonb_path_query\n> ------------------\n> 10\n> 30\n> (2 rows)\n>\n> Maybe these SQL-standard syntaxes ought to default to strict mode?\n>\n>\n> It looks like a perfect trap, although it is documented.\n>\n> I don't think the default strict mode is better. Maybe disallow .** in\n> lax mode?\n>\n>\n\n\nYeah, having strict the default for json_query and lax the default for\njsonb_path_query seems like a recipe for serious confusion.\n\n\nI have no opinion about .** in lax mode.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 28 Apr 2022 10:20:30 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: json_query - redundant result"
}
] |
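The duplicated matches discussed in this thread fall out of lax mode's implicit array unwrapping. Below is a toy Python model of `$.**.a` (a deliberate simplification, not the real jsonpath engine; strict-mode error reporting on non-objects is ignored) that reproduces both outputs:

```python
def descendants(node):
    # jsonpath .** : the node itself plus every descendant, in document order
    yield node
    if isinstance(node, list):
        for item in node:
            yield from descendants(item)
    elif isinstance(node, dict):
        for value in node.values():
            yield from descendants(value)

def member_a(nodes, lax):
    # jsonpath .a : in lax mode an array is implicitly unwrapped before
    # member access, so each object inside the top-level array is reached
    # twice by '$.**.a': once through the unwrapped array, and once because
    # the object is itself a .** descendant.
    out = []
    for node in nodes:
        candidates = node if (lax and isinstance(node, list)) else [node]
        for c in candidates:
            if isinstance(c, dict) and "a" in c:
                out.append(c["a"])
    return out

doc = [{"a": 10, "b": 20}, {"a": 30, "b": 100}]
print(member_a(descendants(doc), lax=True))   # [10, 30, 10, 30]
print(member_a(descendants(doc), lax=False))  # [10, 30]
```

The model makes the trap visible: the duplicates come from visiting the same objects along two different paths, which is why `strict $.**.a` (no unwrapping) returns each value once.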
[
{
"msg_contents": "1\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/triggers.sql;h=83cd00f54f0f45ffc73e7ffc3f02506f346cfcdd;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c#l1>\n--\n2\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/triggers.sql;h=83cd00f54f0f45ffc73e7ffc3f02506f346cfcdd;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c#l2>\n-- TRIGGERS\n3\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/triggers.sql;h=83cd00f54f0f45ffc73e7ffc3f02506f346cfcdd;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c#l3>\n--\n4\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/triggers.sql;h=83cd00f54f0f45ffc73e7ffc3f02506f346cfcdd;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c#l4>\n5\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/triggers.sql;h=83cd00f54f0f45ffc73e7ffc3f02506f346cfcdd;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c#l5>\n-- directory paths and dlsuffix are passed to us in environment variables\n6\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/triggers.sql;h=83cd00f54f0f45ffc73e7ffc3f02506f346cfcdd;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c#l6>\n\\getenv libdir PG_LIBDIR\n7\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/triggers.sql;h=83cd00f54f0f45ffc73e7ffc3f02506f346cfcdd;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c#l7>\n\\getenv dlsuffix PG_DLSUFFIX\n8\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/triggers.sql;h=83cd00f54f0f45ffc73e7ffc3f02506f346cfcdd;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c#l8>\n9\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/triggers.sql;h=83cd00f54f0f45ffc73e7ffc3f02506f346cfcdd;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c#l9>\n\\set autoinclib :libdir '/autoinc' 
:dlsuffix\n10\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/triggers.sql;h=83cd00f54f0f45ffc73e7ffc3f02506f346cfcdd;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c#l10>\n\\set refintlib :libdir '/refint' :dlsuffix\n\ngit.postgresql.org Git - postgresql.git/blob -\nsrc/test/regress/sql/triggers.sql\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/triggers.sql;h=83cd00f54f0f45ffc73e7ffc3f02506f346cfcdd;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c>\n\nI want to play around with src\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=src;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c>\n / test\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=src/test;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c>\n / regress\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=src/test/regress;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c>\n / sql\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=src/test/regress/sql;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c>\n / triggers.sql\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/test/regress/sql/triggers.sql;hb=7103ebb7aae8ab8076b7e85f335ceb8fe799097c>\nNow I am not sure what the first 10 lines mean.\nShould I just copy these 10 lines in the terminal and not worry about it?\nOr to get the same result as triggers.out,\nI need to properly understand these 10 lines first.\n\n 1 -- 2 -- TRIGGERS 3 -- 4 5 -- directory paths and dlsuffix are passed to us in environment variables 6 \\getenv libdir PG_LIBDIR 7 \\getenv dlsuffix PG_DLSUFFIX 8 9 \\set autoinclib :libdir '/autoinc' :dlsuffix 10 \\set refintlib :libdir '/refint' :dlsuffixgit.postgresql.org Git - postgresql.git/blob - src/test/regress/sql/triggers.sqlI want to play around with src / test / regress / sql / triggers.sqlNow I am not sure what the first 10 lines mean. Should I just copy these 10 lines in the terminal and not worry about it? 
Or to get the same result as triggers.out, I need to properly understand these 10 lines first.",
"msg_date": "Thu, 28 Apr 2022 17:20:56 +0530",
"msg_from": "alias <postgres.rocks@gmail.com>",
"msg_from_op": true,
"msg_subject": "src / test / regress / sql / triggers.sql first 10 lines."
},
{
"msg_contents": "alias <postgres.rocks@gmail.com> writes:\n> Now I am not sure what the first 10 lines mean.\n\nThose are computing the file pathnames of regress.so and a couple of\nother .so files that contain the C functions referred to in the\nCREATE commands just below here. We can't just hard-wire those file\nnames into the script; they have to be computed at run-time because\neverybody's paths will be different.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Apr 2022 09:55:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: src / test / regress / sql / triggers.sql first 10 lines."
}
] |
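The four psql meta-commands Tom explains just build shared-library paths from environment variables at run time. A rough Python rendering of the same computation (the PG_LIBDIR and PG_DLSUFFIX values below are hypothetical; in a real regression run pg_regress exports the machine-specific ones before invoking psql):

```python
import os

# Hypothetical values for illustration; everybody's real paths differ.
os.environ["PG_LIBDIR"] = "/usr/lib/postgresql/lib"
os.environ["PG_DLSUFFIX"] = ".so"

libdir = os.environ["PG_LIBDIR"]      # \getenv libdir PG_LIBDIR
dlsuffix = os.environ["PG_DLSUFFIX"]  # \getenv dlsuffix PG_DLSUFFIX

# \set concatenates its arguments, so  :libdir '/autoinc' :dlsuffix
# becomes one string: the full filename of the shared library.
autoinclib = libdir + "/autoinc" + dlsuffix
refintlib = libdir + "/refint" + dlsuffix

print(autoinclib)  # /usr/lib/postgresql/lib/autoinc.so
print(refintlib)   # /usr/lib/postgresql/lib/refint.so
```

So the script's CREATE FUNCTION commands further down can refer to `:'autoinclib'` and `:'refintlib'` without hard-wiring any installation path.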
[
{
"msg_contents": "Current unnaccent dictionary does not include many popular numeric symbols,\nin example: \"m²\" -> \"m2\"\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Thu, 28 Apr 2022 18:50:57 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "[PATCH] Completed unaccent dictionary with many missing characters"
},
{
"msg_contents": "On 28.04.22 18:50, Przemysław Sztoch wrote:\n> Current unnaccent dictionary does not include many popular numeric symbols,\n> in example: \"m²\" -> \"m2\"\n\nSeems reasonable.\n\nCan you explain what your patch does to achieve this?\n\n\n",
"msg_date": "Wed, 4 May 2022 17:17:34 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 28.04.22 18:50, Przemysław Sztoch wrote:\n>> Current unnaccent dictionary does not include many popular numeric symbols,\n>> in example: \"m²\" -> \"m2\"\n\n> Seems reasonable.\n\nIt kinda feels like this is outside the charter of an \"unaccent\"\ndictionary. I don't object to having these conversions available\nbut it seems like it ought to be a separate feature.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 May 2022 11:32:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "Peter Eisentraut wrote on 5/4/2022 5:17 PM:\n> On 28.04.22 18:50, Przemysław Sztoch wrote:\n>> Current unnaccent dictionary does not include many popular numeric \n>> symbols,\n>> in example: \"m²\" -> \"m2\"\n> Seems reasonable.\n>\n> Can you explain what your patch does to achieve this?\nI used an existing python implementation of the generator.\nIt is based on ready-made unicode dictionary: \nsrc/common/unicode/UnicodeData.txt.\nThe current generator was filtering UnicodeData.txt too much.\nI relaxed these conditions, because the previous implementation focused \nonly on selected character types.\n\nBrowsing the unaccent.rules file is the easiest way to see how many and \nwhat missing characters have been completed.\n\nFor FTS, the addition of these characters is very much needed.\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66\n\n\n\nPeter Eisentraut wrote on 5/4/2022 \n5:17 PM:\nOn \n28.04.22 18:50, Przemysław Sztoch wrote:\n \nCurrent unnaccent dictionary does not include \nmany popular numeric symbols,\nin example: \"m²\" -> \"m2\"\n\n\nSeems reasonable.\n \n\nCan you explain what your patch does to achieve this?\n \n\nI used an existing python implementation of \nthe generator.\n\n\nIt is based on ready-made unicode dictionary: \nsrc/common/unicode/UnicodeData.txt.\n\n\nThe current generator was filtering UnicodeData.txt too much.\n\n\nI relaxed these conditions, because the previous implementation focused \nonly on selected character types.\n\nBrowsing the unaccent.rules file is the easiest way to see how many and \nwhat missing characters have been completed.\n\n\n\nFor FTS, the addition of these characters is very much needed.\n\n-- Przemysław\n Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Thu, 5 May 2022 21:40:09 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "Tom Lane wrote on 5/4/2022 5:32 PM:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 28.04.22 18:50, Przemysław Sztoch wrote:\n>>> Current unnaccent dictionary does not include many popular numeric symbols,\n>>> in example: \"m²\" -> \"m2\"\n>> Seems reasonable.\n> It kinda feels like this is outside the charter of an \"unaccent\"\n> dictionary. I don't object to having these conversions available\n> but it seems like it ought to be a separate feature.\n>\n> \t\t\tregards, tom lane\nTom, I disagree with you because many similar numerical conversions are \nalready taking place, e.g. 1/2, 1/4...\n\nToday Unicode is ubiquitous and we use a lot more weird characters.\nI just completed these less common characters.\n\nTherefore, the problem of missing characters in unaccent.rules affects \nthe correct operation of the FTS mechanisms.\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66\n\n\n\nTom Lane wrote on 5/4/2022 5:32 PM:\n\nPeter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\nOn 28.04.22 18:50, Przemysław Sztoch wrote:\nCurrent unnaccent dictionary does not include many popular numeric symbols,\nin example: \"m²\" -> \"m2\"\n\n\n\nSeems reasonable.\n\n\nIt kinda feels like this is outside the charter of an \"unaccent\"\ndictionary. I don't object to having these conversions available\nbut it seems like it ought to be a separate feature.\n\n\t\t\tregards, tom lane\n\nTom, I disagree with you because many similar numerical conversions are \nalready taking place, e.g. 1/2, 1/4...\n\nToday \nUnicode is ubiquitous and we use a lot more weird characters.\nI just completed these less common characters.\n\n\nTherefore, the problem of missing characters in unaccent.rules affects \nthe correct operation of the FTS mechanisms.\n-- Przemysław\n Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Thu, 5 May 2022 21:44:15 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "On Thu, May 05, 2022 at 09:44:15PM +0200, Przemysław Sztoch wrote:\n> Tom, I disagree with you because many similar numerical conversions are\n> already taking place, e.g. 1/2, 1/4...\n\nThis part sounds like a valid argument to me. unaccent.rules does\nalready the conversion of some mathematical signs, and the additions\nproposed in the patch don't look that weird to me. I agree with Peter\nand Przemysław that this is reasonable.\n--\nMichael",
"msg_date": "Tue, 17 May 2022 16:11:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "Two fixes (bad comment and fixed Latin-ASCII.xml).\n\nMichael Paquier wrote on 17.05.2022 09:11:\n> On Thu, May 05, 2022 at 09:44:15PM +0200, Przemysław Sztoch wrote:\n>> Tom, I disagree with you because many similar numerical conversions are\n>> already taking place, e.g. 1/2, 1/4...\n> This part sounds like a valid argument to me. unaccent.rules does\n> already the conversion of some mathematical signs, and the additions\n> proposed in the patch don't look that weird to me. I agree with Peter\n> and Przemysław that this is reasonable.\n> --\n> Michael\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Wed, 15 Jun 2022 13:01:37 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "On Wed, Jun 15, 2022 at 01:01:37PM +0200, Przemysław Sztoch wrote:\n> Two fixes (bad comment and fixed Latin-ASCII.xml).\n\n if codepoint.general_category.startswith('L') and \\\n- len(codepoint.combining_ids) > 1:\n+ len(codepoint.combining_ids) > 0:\nSo, this one checks for the case where a codepoint is within the\nletter category. As far as I can see this indeed adds a couple of\ncharacters, with a combination of Greek and Latin letters. So that\nlooks fine.\n\n+ elif codepoint.general_category.startswith('N') and \\\n+ len(codepoint.combining_ids) > 0 and \\\n+ args.noLigaturesExpansion is False and is_ligature(codepoint, table):\n+ charactersSet.add((codepoint.id,\n+ \"\".join(chr(combining_codepoint.id)\n+ for combining_codepoint\n+ in get_plain_letters(codepoint, table))))\nAnd this one is for the numerical part of the change. Do you actually\nneed to apply is_ligature() here? I would have thought that this only\napplies to letters.\n\n- assert(False)\n+ assert False, 'Codepoint U+%0.2X' % codepoint.id\n[...]\n- assert(is_ligature(codepoint, table))\n+ assert is_ligature(codepoint, table), 'Codepoint U+%0.2X' % codepoint.id\nThese two are a good idea for debugging.\n\n- return all(is_letter(table[i], table) for i in codepoint.combining_ids)\n+ return all(i in table and is_letter(table[i], table) for i in codepoint.combining_ids)\nIt looks like this makes the code weaker, as we would silently skip\ncharacters that are not part of the table rather than checking for\nthem all the time?\n\nWhile recreating unaccent.rules with your patch, I have noticed what\nlooks like an error. An extra rule mapping U+210C (black-letter\ncapital h) to \"x\" gets added on top of te existing one for \"H\", but\nthe correct answer is the existing rule, not the one added by the\npatch.\n--\nMichael",
"msg_date": "Mon, 20 Jun 2022 10:49:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "Michael Paquier wrote on 20.06.2022 03:49:\n> On Wed, Jun 15, 2022 at 01:01:37PM +0200, Przemysław Sztoch wrote:\n>> Two fixes (bad comment and fixed Latin-ASCII.xml).\n> if codepoint.general_category.startswith('L') and \\\n> - len(codepoint.combining_ids) > 1:\n> + len(codepoint.combining_ids) > 0:\n> So, this one checks for the case where a codepoint is within the\n> letter category. As far as I can see this indeed adds a couple of\n> characters, with a combination of Greek and Latin letters. So that\n> looks fine.\nPreviously, there were only multi-letter conversions. Now we also have \nsingle letters.\n>\n> + elif codepoint.general_category.startswith('N') and \\\n> + len(codepoint.combining_ids) > 0 and \\\n> + args.noLigaturesExpansion is False and is_ligature(codepoint, table):\n> + charactersSet.add((codepoint.id,\n> + \"\".join(chr(combining_codepoint.id)\n> + for combining_codepoint\n> + in get_plain_letters(codepoint, table))))\n> And this one is for the numerical part of the change. Do you actually\n> need to apply is_ligature() here? I would have thought that this only\n> applies to letters.\nBut ligature check is performed on combining_ids (result of \ntranslation), not on base codepoint.\nWithout it, you will get assertions in get_plain_letters.\n\nThe idea is that we take translations that turn into normal letters. \nOthers (strange) are rejected.\nMaybe it could be done better. 
I didn't like it as much as you did, but \nI couldn't do better.\nIn the end, I left it just like in the original script.\n\nNote that the plain letter list (PLAIN_LETTER_RANGES) has now been \nexpanded with numbers.\n> - assert(False)\n> + assert False, 'Codepoint U+%0.2X' % codepoint.id\n> [...]\n> - assert(is_ligature(codepoint, table))\n> + assert is_ligature(codepoint, table), 'Codepoint U+%0.2X' % codepoint.id\n> These two are a good idea for debugging.\n>\n> - return all(is_letter(table[i], table) for i in codepoint.combining_ids)\n> + return all(i in table and is_letter(table[i], table) for i in codepoint.combining_ids)\n> It looks like this makes the code weaker, as we would silently skip\n> characters that are not part of the table rather than checking for\n> them all the time?\nUnfortunately, there are entries in combining_ids that are not in the \ncharacter table being used.\nThis protection is necessary so that there is no error. But unfamiliar \ncharacters are omitted.\n> While recreating unaccent.rules with your patch, I have noticed what\n> looks like an error. 
An extra rule mapping U+210C (black-letter\n> capital h) to \"x\" gets added on top of te existing one for \"H\", but\n> the correct answer is the existing rule, not the one added by the\n> patch.\nThe problem with the sign of U+210C is that there are conflicting \ntranslations for it.\nAs the name suggests \"(black-letter capital h)\", it should be converted \nto a capital H.\nHowever, the current Latin-ASCII.xml suggests a conversion to x.\nI found an open discussion on the internet about this and the suggestion \nthat the Latin-ASCII.xml file should be corrected for this letter.\nBut I wouldn't expect that Unicode makes the revised Latin-ASCII.xml \nquickly into the official repo.\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Mon, 20 Jun 2022 10:37:57 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "On Mon, Jun 20, 2022 at 10:37:57AM +0200, Przemysław Sztoch wrote:\n> But ligature check is performed on combining_ids (result of translation),\n> not on base codepoint.\n> Without it, you will get assertions in get_plain_letters.\n\nHmm. I am wondering if we could make the whole logic a bit more\nintuitive here. The loop that builds the set of mappings gets now\nmuch more complicated with the addition of the categories beginning by\nN for the numbers, and that's mostly the same set of checks as the\nones applied for T.\n\n> However, the current Latin-ASCII.xml suggests a conversion to x.\n> I found an open discussion on the internet about this and the suggestion\n> that the Latin-ASCII.xml file should be corrected for this letter.\n> But I wouldn't expect that Unicode makes the revised Latin-ASCII.xml quickly\n> into the official repo.\n\nYeah, Latin-ASCII.xml is getting it wrong here, then. unaccent\nfetches the thing from this URL currently:\nhttps://raw.githubusercontent.com/unicode-org/cldr/release-41/common/transforms/Latin-ASCII.xml\n\nCould it be better to handle that as an exception in\ngenerate_unaccent_rules.py, documenting why we are doing it this way\nthen? My concern is somebody re-running the script without noticing\nthis exception, and the set of rules would be blindly, and\nincorrectly, updated.\n--\nMichael",
"msg_date": "Tue, 21 Jun 2022 09:11:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "On Tue, Jun 21, 2022 at 12:11 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Yeah, Latin-ASCII.xml is getting it wrong here, then. unaccent\n> fetches the thing from this URL currently:\n> https://raw.githubusercontent.com/unicode-org/cldr/release-41/common/transforms/Latin-ASCII.xml\n\nOh, we're using CLDR 41, which reminds me: CLDR 36 added SOUND\nRECORDING COPYRIGHT[1] so we could drop it from special_cases().\n\nHmm, is it possible to get rid of CYRILLIC CAPITAL LETTER IO and\nCYRILLIC SMALL LETTER IO by adding Cyrillic to PLAIN_LETTER_RANGES?\n\nThat'd leave just DEGREE CELSIUS and DEGREE FAHRENHEIT. Not sure how\nto kill those last two special cases -- they should be directly\nreplaced by their decomposition.\n\n[1] https://unicode-org.atlassian.net/browse/CLDR-11383\n\n\n",
"msg_date": "Tue, 21 Jun 2022 12:53:43 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "Michael Paquier wrote on 21.06.2022 02:11:\n> On Mon, Jun 20, 2022 at 10:37:57AM +0200, Przemysław Sztoch wrote:\n>> But ligature check is performed on combining_ids (result of translation),\n>> not on base codepoint.\n>> Without it, you will get assertions in get_plain_letters.\n> Hmm. I am wondering if we could make the whole logic a bit more\n> intuitive here. The loop that builds the set of mappings gets now\n> much more complicated with the addition of the categories beginning by\n> N for the numbers, and that's mostly the same set of checks as the\n> ones applied for T.\nI'm sorry, but I can't correct this condition.\nI have tried, but there are further exceptions and errors.\n>\n>> However, the current Latin-ASCII.xml suggests a conversion to x.\n>> I found an open discussion on the internet about this and the suggestion\n>> that the Latin-ASCII.xml file should be corrected for this letter.\n>> But I wouldn't expect that Unicode makes the revised Latin-ASCII.xml quickly\n>> into the official repo.\n> Yeah, Latin-ASCII.xml is getting it wrong here, then. unaccent\n> fetches the thing from this URL currently:\n> https://raw.githubusercontent.com/unicode-org/cldr/release-41/common/transforms/Latin-ASCII.xml\n>\n> Could it be better to handle that as an exception in\n> generate_unaccent_rules.py, documenting why we are doing it this way\n> then? My concern is somebody re-running the script without noticing\n> this exception, and the set of rules would be blindly, and\n> incorrectly, updated.\nI replaced python set with python dictionary.\nIt resolve problem with duplicated entry.\nI left the conversion to \"x\". It was like that before and I leave it as \nit was.\nThe conversion to \"x\" is probably due to the phonetic interpretation of \nthis sign.\nIf they correct the Latin-ASCII.xml file, it will change.\n> --\n> Michael\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Tue, 21 Jun 2022 15:36:49 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
    "msg_contents": "Thomas Munro wrote on 21.06.2022 02:53:\n> On Tue, Jun 21, 2022 at 12:11 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Yeah, Latin-ASCII.xml is getting it wrong here, then. unaccent\n>> fetches the thing from this URL currently:\n>> https://raw.githubusercontent.com/unicode-org/cldr/release-41/common/transforms/Latin-ASCII.xml\n> Oh, we're using CLDR 41, which reminds me: CLDR 36 added SOUND\n> RECORDING COPYRIGHT[1] so we could drop it from special_cases().\n>\n> Hmm, is it possible to get rid of CYRILLIC CAPITAL LETTER IO and\n> CYRILLIC SMALL LETTER IO by adding Cyrillic to PLAIN_LETTER_RANGES?\n>\n> That'd leave just DEGREE CELSIUS and DEGREE FAHRENHEIT. Not sure how\n> to kill those last two special cases -- they should be directly\n> replaced by their decomposition.\n>\n> [1] https://unicode-org.atlassian.net/browse/CLDR-11383\nI patch v3 support for cirilic is added.\nSpecial character function has been purged.\nAdded support for category: So - Other Symbol. This category include \ncharacters from special_cases().\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Tue, 21 Jun 2022 15:41:48 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "On Tue, Jun 21, 2022 at 03:41:48PM +0200, Przemysław Sztoch wrote:\n> Thomas Munro wrote on 21.06.2022 02:53:\n>> Oh, we're using CLDR 41, which reminds me: CLDR 36 added SOUND\n>> RECORDING COPYRIGHT[1] so we could drop it from special_cases().\n\nIndeed.\n\n>> Hmm, is it possible to get rid of CYRILLIC CAPITAL LETTER IO and\n>> CYRILLIC SMALL LETTER IO by adding Cyrillic to PLAIN_LETTER_RANGES?\n\nThat's a good point. There are quite a bit of cyrillic characters\nmissing a conversion, visibly.\n\n>> That'd leave just DEGREE CELSIUS and DEGREE FAHRENHEIT. Not sure how\n>> to kill those last two special cases -- they should be directly\n>> replaced by their decomposition.\n>> \n>> [1] https://unicode-org.atlassian.net/browse/CLDR-11383\n>\n> I patch v3 support for cirilic is added.\n> Special character function has been purged.\n> Added support for category: So - Other Symbol. This category include\n> characters from special_cases().\n\nI think that we'd better split v3 into more patches to keep each\nimprovement isolated. The addition of cyrillic characters in the\nrange of letters and the removal of the sound copyright from the\nspecial cases can be done on their own, before considering the\noriginal case tackled by this thread.\n--\nMichael",
"msg_date": "Thu, 23 Jun 2022 13:39:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "Michael Paquier wrote on 23.06.2022 06:39:\n>>> That'd leave just DEGREE CELSIUS and DEGREE FAHRENHEIT. Not sure how\n>>> to kill those last two special cases -- they should be directly\n>>> replaced by their decomposition.\n>>>\n>>> [1] https://unicode-org.atlassian.net/browse/CLDR-11383\n>> I patch v3 support for cirilic is added.\n>> Special character function has been purged.\n>> Added support for category: So - Other Symbol. This category include\n>> characters from special_cases().\n> I think that we'd better split v3 into more patches to keep each\n> improvement isolated. The addition of cyrillic characters in the\n> range of letters and the removal of the sound copyright from the\n> special cases can be done on their own, before considering the\n> original case tackled by this thread.\n> --\n> Michael\nThe only division that is probably possible is the one attached.\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Thu, 23 Jun 2022 14:10:42 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "On Thu, Jun 23, 2022 at 02:10:42PM +0200, Przemysław Sztoch wrote:\n> The only division that is probably possible is the one attached.\n\nWell, the addition of cyrillic does not make necessary the removal of\nSOUND RECORDING COPYRIGHT or the DEGREEs, that implies the use of a\ndictionnary when manipulating the set of codepoints, but that's me\nbeing too picky. Just to say that I am fine with what you are\nproposing here.\n\nBy the way, could you add a couple of regressions tests for each\npatch with a sample of the characters added? U+210C is a particularly\nsensitive case, as we should really make sure that it maps to what we\nwant even if Latin-ASCII.xml tells a different story. This requires\nthe addition of a couple of queries in unaccent.sql with the expected\noutput updated in unaccent.out.\n--\nMichael",
"msg_date": "Tue, 28 Jun 2022 14:14:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "Michael Paquier wrote on 6/28/2022 7:14 AM:\n> On Thu, Jun 23, 2022 at 02:10:42PM +0200, Przemysław Sztoch wrote:\n>> The only division that is probably possible is the one attached.\n> Well, the addition of cyrillic does not make necessary the removal of\n> SOUND RECORDING COPYRIGHT or the DEGREEs, that implies the use of a\n> dictionnary when manipulating the set of codepoints, but that's me\n> being too picky. Just to say that I am fine with what you are\n> proposing here.\n>\n> By the way, could you add a couple of regressions tests for each\n> patch with a sample of the characters added? U+210C is a particularly\n> sensitive case, as we should really make sure that it maps to what we\n> want even if Latin-ASCII.xml tells a different story. This requires\n> the addition of a couple of queries in unaccent.sql with the expected\n> output updated in unaccent.out.\n> --\n> Michael\nRegression tests has been added.\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Sun, 3 Jul 2022 22:51:56 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "On Tue, Jun 28, 2022 at 02:14:53PM +0900, Michael Paquier wrote:\n> Well, the addition of cyrillic does not make necessary the removal of\n> SOUND RECORDING COPYRIGHT or the DEGREEs, that implies the use of a\n> dictionnary when manipulating the set of codepoints, but that's me\n> being too picky. Just to say that I am fine with what you are\n> proposing here.\n\nSo, I have been looking at the change for cyrillic letters, and are\nyou sure that the range of codepoints [U+0410,U+044f] is right when it\ncomes to consider all those letters as plain letters? There are a\ncouple of characters that itch me a bit with this range:\n- What of the letter CAPITAL SHORT I (U+0419) and SMALL SHORT I\n(U+0439)? Shouldn't U+0439 be translated to U+0438 and U+0419\ntranslated to U+0418? That's what I get while looking at\nUnicodeData.txt, and it would mean that the range of plain letters\nshould not include both of them.\n- It seems like we are missing a couple of letters after U+044F, like\nU+0454, U+0456 or U+0455 just to name three of them?\n\nI have extracted from 0001 and applied the parts about the regression\ntests for degree signs, while adding two more for SOUND RECORDING\nCOPYRIGHT (U+2117) and Black-Letter Capital H (U+210C) translated to\n'x', while it should be probably 'H'.\n--\nMichael",
"msg_date": "Tue, 5 Jul 2022 16:22:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
    "msg_contents": "Michael Paquier wrote on 7/5/2022 9:22 AM:\n> On Tue, Jun 28, 2022 at 02:14:53PM +0900, Michael Paquier wrote:\n>> Well, the addition of cyrillic does not make necessary the removal of\n>> SOUND RECORDING COPYRIGHT or the DEGREEs, that implies the use of a\n>> dictionnary when manipulating the set of codepoints, but that's me\n>> being too picky. Just to say that I am fine with what you are\n>> proposing here.\n> So, I have been looking at the change for cyrillic letters, and are\n> you sure that the range of codepoints [U+0410,U+044f] is right when it\n> comes to consider all those letters as plain letters? There are a\n> couple of characters that itch me a bit with this range:\n> - What of the letter CAPITAL SHORT I (U+0419) and SMALL SHORT I\n> (U+0439)? Shouldn't U+0439 be translated to U+0438 and U+0419\n> translated to U+0418? That's what I get while looking at\n> UnicodeData.txt, and it would mean that the range of plain letters\n> should not include both of them.\n1. It's good that you noticed it. I missed it. But it doesn't affect the \ngenerated rule list.\n> - It seems like we are missing a couple of letters after U+044F, like\n> U+0454, U+0456 or U+0455 just to name three of them?\n2. I added a few more letters that are used in languages other than \nRussian: Byelorussian or Ukrainian.\n\n- (0x0410, 0x044f), # Cyrillic capital and \nsmall letters\n+ (0x0402, 0x0402), # Cyrillic capital and small letters\n+ (0x0404, 0x0406), #\n+ (0x0408, 0x040b), #\n+ (0x040f, 0x0418), #\n+ (0x041a, 0x0438), #\n+ (0x043a, 0x044f), #\n+ (0x0452, 0x0452), #\n+ (0x0454, 0x0456), #\n\nI do not add more, because they probably concern older languages.\nAn alternative might be to rely entirely on Unicode decomposition ...\nHowever, after the change, only one additional Ukrainian letter with an \naccent was added to the rule file.\n>\n> I have extracted from 0001 and applied the parts about the regression\n> tests for degree signs, while adding two more for SOUND RECORDING\n> COPYRIGHT (U+2117) and Black-Letter Capital H (U+210C) translated to\n> 'x', while it should be probably 'H'.\n3. The matter is not that simple. When I change priorities (ie \nLatin-ASCII.xml is less important than Unicode decomposition),\nthen \"U + 33D7\" changes not to pH but to PH.\nIn the end, I left it like it was before ...\n\nIf you decide what to do with point 3, I will correct it and send new \npatches.\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Tue, 5 Jul 2022 21:24:49 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
    "msg_contents": "Dear Michael P.,\n> 3. The matter is not that simple. When I change priorities (ie \n> Latin-ASCII.xml is less important than Unicode decomposition),\n> then \"U + 33D7\" changes not to pH but to PH.\n> In the end, I left it like it was before ...\n>\n> If you decide what to do with point 3, I will correct it and send new \n> patches.\nWhat is your decision?\nOption 1: We leave x as in Latin-ASCII.xml and we also have full \ncompatibility with previous PostgreSQL versions.\nIf they fix Latin-ASCII.xml at Unicode, it will fix itself.\n\nOption 2: We choose a lower priority for entries in Latin-ASCII.xml\n\nI would choose option 1.\n\nP.S. I will be going on vacation and it would be nice to close this \npatch soon. TIA.\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66",
"msg_date": "Wed, 13 Jul 2022 12:12:43 +0200",
"msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "On Tue, Jul 05, 2022 at 09:24:49PM +0200, Przemysław Sztoch wrote:\n> I do not add more, because they probably concern older languages.\n> An alternative might be to rely entirely on Unicode decomposition ...\n> However, after the change, only one additional Ukrainian letter with an\n> accent was added to the rule file.\n\nHmm. I was wondering about the decomposition part, actually. How\nmuch would it make things simpler if we treat the full range of the\ncyrillic characters, aka from U+0400 to U+4FF, scanning all of them\nand building rules only if there are decompositions? Is it worth\nconsidering the Cyrillic supplement, as of U+0500-U+052F?\n\nI was also thinking about the regression tests, and as unaccent\ncharacters are more spread than for Latin and Greek, it could be a\ngood thing to have a complete coverage. We could for example use a\nquery like that to check if a character is treated properly or not:\nSELECT chr(i.a) = unaccent(chr(i.a))\n FROM generate_series(1024, 1327) AS i(a); -- range of Cyrillic.\n--\nMichael",
"msg_date": "Thu, 14 Jul 2022 14:41:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "2022年7月13日(水) 19:13 Przemysław Sztoch <przemyslaw@sztoch.pl>:\n>\n> Dear Michael P.,\n>\n> 3. The matter is not that simple. When I change priorities (ie Latin-ASCII.xml is less important than Unicode decomposition),\n> then \"U + 33D7\" changes not to pH but to PH.\n> In the end, I left it like it was before ...\n>\n> If you decide what to do with point 3, I will correct it and send new patches.\n>\n> What is your decision?\n> Option 1: We leave x as in Latin-ASCII.xml and we also have full compatibility with previous PostgreSQL versions.\n> If they fix Latin-ASCII.xml at Unicode, it will fix itself.\n>\n> Option 2: We choose a lower priority for entries in Latin-ASCII.xml\n>\n> I would choose option 1.\n>\n> P.S. I will be going on vacation and it would be nice to close this patch soon. TIA.\n\nHi\n\nThis entry was marked as \"Needs review\" in the CommitFest app but cfbot\nreports the patch no longer applies.\n\nWe've marked it as \"Waiting on Author\". As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time update the patch.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can move the patch entry forward by visiting\n\n https://commitfest.postgresql.org/40/3631/\n\nand changing the status to \"Needs review\".\n\n\nThanks\n\nIan Barwick\n\n\n",
"msg_date": "Fri, 4 Nov 2022 08:28:51 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "On Fri, 4 Nov 2022 at 04:59, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n>\n> 2022年7月13日(水) 19:13 Przemysław Sztoch <przemyslaw@sztoch.pl>:\n> >\n> > Dear Michael P.,\n> >\n> > 3. The matter is not that simple. When I change priorities (ie Latin-ASCII.xml is less important than Unicode decomposition),\n> > then \"U + 33D7\" changes not to pH but to PH.\n> > In the end, I left it like it was before ...\n> >\n> > If you decide what to do with point 3, I will correct it and send new patches.\n> >\n> > What is your decision?\n> > Option 1: We leave x as in Latin-ASCII.xml and we also have full compatibility with previous PostgreSQL versions.\n> > If they fix Latin-ASCII.xml at Unicode, it will fix itself.\n> >\n> > Option 2: We choose a lower priority for entries in Latin-ASCII.xml\n> >\n> > I would choose option 1.\n> >\n> > P.S. I will be going on vacation and it would be nice to close this patch soon. TIA.\n>\n> Hi\n>\n> This entry was marked as \"Needs review\" in the CommitFest app but cfbot\n> reports the patch no longer applies.\n>\n> We've marked it as \"Waiting on Author\". As CommitFest 2022-11 is\n> currently underway, this would be an excellent time update the patch.\n>\n> Once you think the patchset is ready for review again, you (or any\n> interested party) can move the patch entry forward by visiting\n>\n> https://commitfest.postgresql.org/40/3631/\n>\n> and changing the status to \"Needs review\".\n\nI was not sure if you will be planning to post an updated version of\npatch as the patch has been awaiting your attention from last\ncommitfest, please post an updated version for it soon or update the\ncommitfest entry accordingly. As CommitFest 2023-01 is currently\nunderway, this would be an excellent time to update the patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 16 Jan 2023 20:07:24 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
},
{
"msg_contents": "On Mon, 16 Jan 2023 at 20:07, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, 4 Nov 2022 at 04:59, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> >\n> > 2022年7月13日(水) 19:13 Przemysław Sztoch <przemyslaw@sztoch.pl>:\n> > >\n> > > Dear Michael P.,\n> > >\n> > > 3. The matter is not that simple. When I change priorities (ie Latin-ASCII.xml is less important than Unicode decomposition),\n> > > then \"U + 33D7\" changes not to pH but to PH.\n> > > In the end, I left it like it was before ...\n> > >\n> > > If you decide what to do with point 3, I will correct it and send new patches.\n> > >\n> > > What is your decision?\n> > > Option 1: We leave x as in Latin-ASCII.xml and we also have full compatibility with previous PostgreSQL versions.\n> > > If they fix Latin-ASCII.xml at Unicode, it will fix itself.\n> > >\n> > > Option 2: We choose a lower priority for entries in Latin-ASCII.xml\n> > >\n> > > I would choose option 1.\n> > >\n> > > P.S. I will be going on vacation and it would be nice to close this patch soon. TIA.\n> >\n> > Hi\n> >\n> > This entry was marked as \"Needs review\" in the CommitFest app but cfbot\n> > reports the patch no longer applies.\n> >\n> > We've marked it as \"Waiting on Author\". As CommitFest 2022-11 is\n> > currently underway, this would be an excellent time update the patch.\n> >\n> > Once you think the patchset is ready for review again, you (or any\n> > interested party) can move the patch entry forward by visiting\n> >\n> > https://commitfest.postgresql.org/40/3631/\n> >\n> > and changing the status to \"Needs review\".\n>\n> I was not sure if you will be planning to post an updated version of\n> patch as the patch has been awaiting your attention from last\n> commitfest, please post an updated version for it soon or update the\n> commitfest entry accordingly. 
As CommitFest 2023-01 is currently\n> underway, this would be an excellent time to update the patch.\n\nThere has been no updates on this thread for some time, so this has\nbeen switched as Returned with Feedback. Feel free to open it in the\nnext commitfest if you plan to continue on this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 31 Jan 2023 23:01:30 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Completed unaccent dictionary with many missing\n characters"
}
] |
[
{
"msg_contents": "I happened to notice that there are a couple of places in plpgsql\nthat will let you assign a new value to a variable that's marked\nCONSTANT:\n\n* We don't complain if an output parameter in a CALL statement\nis constant.\n\n* We don't complain if a refcursor variable is constant, even\nthough OPEN may assign a new value to it.\n\nThe attached quick-hack patch closes both of these oversights.\n\nPerhaps the OPEN change is a little too aggressive, since if\nyou give the refcursor variable some non-null initial value,\nOPEN won't change it; in that usage a CONSTANT marking could\nbe allowed. But I really seriously doubt that anybody out\nthere is marking such variables as constants, so I thought\nthrowing the error at compile time was better than postponing\nit to runtime so we could handle that.\n\nRegardless of which way we handle that point, I'm inclined to\nchange this only in HEAD. Probably people wouldn't thank us\nfor making the back branches more strict.\n\n\t\t\tregards, tom lane\n\nPS: I didn't do it here, but I'm kind of tempted to pull out\nall the cursor-related tests in plpgsql.sql and move them to\na new test file under src/pl/plpgsql/src/sql/. They look\npretty self-contained, and I doubt they're worth keeping in\nthe core tests.",
"msg_date": "Thu, 28 Apr 2022 17:52:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Missing can't-assign-to-constant checks in plpgsql"
},
{
    "msg_contents": "čt 28. 4. 2022 v 23:52 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> I happened to notice that there are a couple of places in plpgsql\n> that will let you assign a new value to a variable that's marked\n> CONSTANT:\n>\n> * We don't complain if an output parameter in a CALL statement\n> is constant.\n>\n> * We don't complain if a refcursor variable is constant, even\n> though OPEN may assign a new value to it.\n>\n> The attached quick-hack patch closes both of these oversights.\n>\n> Perhaps the OPEN change is a little too aggressive, since if\n> you give the refcursor variable some non-null initial value,\n> OPEN won't change it; in that usage a CONSTANT marking could\n> be allowed. But I really seriously doubt that anybody out\n> there is marking such variables as constants, so I thought\n> throwing the error at compile time was better than postponing\n> it to runtime so we could handle that.\n>\n> Regardless of which way we handle that point, I'm inclined to\n> change this only in HEAD. Probably people wouldn't thank us\n> for making the back branches more strict.\n>\n\n+1\n\nI can implement these checks in plpgsql_check. So possible issues can be\ndetected and fixed on older versions by using plpgsql_check.\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n> PS: I didn't do it here, but I'm kind of tempted to pull out\n> all the cursor-related tests in plpgsql.sql and move them to\n> a new test file under src/pl/plpgsql/src/sql/. They look\n> pretty self-contained, and I doubt they're worth keeping in\n> the core tests.\n>\n>",
"msg_date": "Fri, 29 Apr 2022 00:11:09 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing can't-assign-to-constant checks in plpgsql"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> čt 28. 4. 2022 v 23:52 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> Perhaps the OPEN change is a little too aggressive, since if\n>> you give the refcursor variable some non-null initial value,\n>> OPEN won't change it; in that usage a CONSTANT marking could\n>> be allowed. But I really seriously doubt that anybody out\n>> there is marking such variables as constants, so I thought\n>> throwing the error at compile time was better than postponing\n>> it to runtime so we could handle that.\n>> \n>> Regardless of which way we handle that point, I'm inclined to\n>> change this only in HEAD. Probably people wouldn't thank us\n>> for making the back branches more strict.\n\n> +1\n\nAfter sleeping on it, I got cold feet about breaking arguably\nlegal code, so I made OPEN check at runtime instead. Which\nwas probably a good thing anyway, because it made me notice\nthat exec_stmt_forc() needed a check too. AFAICS there are no\nother places in pl_exec.c that are performing assignments to\nvariables not checked at parse time.\n\nPushed that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Apr 2022 11:57:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Missing can't-assign-to-constant checks in plpgsql"
},
{
"msg_contents": "Hi\n\n\n>> Regardless of which way we handle that point, I'm inclined to\n>> change this only in HEAD. Probably people wouldn't thank us\n>> for making the back branches more strict.\n>>\n>\n> +1\n>\n> I can implement these checks in plpgsql_check. So possible issues can be\n> detected and fixed on older versions by using plpgsql_check.\n>\n\nnew related checks are implemented on plpgsql_check 2.1.4\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>> regards, tom lane\n>>\n>> PS: I didn't do it here, but I'm kind of tempted to pull out\n>> all the cursor-related tests in plpgsql.sql and move them to\n>> a new test file under src/pl/plpgsql/src/sql/. They look\n>> pretty self-contained, and I doubt they're worth keeping in\n>> the core tests.\n>>\n>>\n\nHi\n\nRegardless of which way we handle that point, I'm inclined to\nchange this only in HEAD. Probably people wouldn't thank us\nfor making the back branches more strict.+1I can implement these checks in plpgsql_check. So possible issues can be detected and fixed on older versions by using plpgsql_check.new related checks are implemented on plpgsql_check 2.1.4RegardsPavel RegardsPavel\n\n regards, tom lane\n\nPS: I didn't do it here, but I'm kind of tempted to pull out\nall the cursor-related tests in plpgsql.sql and move them to\na new test file under src/pl/plpgsql/src/sql/. They look\npretty self-contained, and I doubt they're worth keeping in\nthe core tests.",
"msg_date": "Sun, 1 May 2022 19:25:17 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing can't-assign-to-constant checks in plpgsql"
}
] |
[
{
"msg_contents": "Hi,\n\nAt times, there can be many temp files (under pgsql_tmp) and temp\nrelation files (under removal which after crash may take longer during\nwhich users have no clue about what's going on in the server before it\ncomes up online.\n\nHere's a proposal to use ereport_startup_progress to report the\nprogress of the file removal.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Sat, 30 Apr 2022 11:07:55 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Progress report removal of temp files and temp relation files using\n ereport_startup_progress"
},
{
"msg_contents": "Hi Bharath,\n\n\nOn Sat, Apr 30, 2022 at 11:08 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> At times, there can be many temp files (under pgsql_tmp) and temp\n> relation files (under removal which after crash may take longer during\n> which users have no clue about what's going on in the server before it\n> comes up online.\n>\n> Here's a proposal to use ereport_startup_progress to report the\n> progress of the file removal.\n>\n> Thoughts?\n\nThe patch looks good to me.\n\nWith this patch, the user would at least know which directory is being\nscanned and how much time has elapsed. It would be better to know how\nmuch work is remaining. I could not find a way to estimate the number\nof files in the directory so that we can extrapolate elapsed time and\nestimate the remaining time. Well, we could loop the output of\nopendir() twice, first to estimate and then for the actual work. This\nmight actually work, if the time to delete all the files is very high\ncompared to the time it takes to scan all the files/directories.\n\nAnother possibility is to scan the sorted output of opendir() thus\nusing the current file name to estimate remaining files in a very\ncrude and inaccurate way. That doesn't look attractive either. I can't\nthink of any better way to estimate the remaining time.\n\nBut at least with this patch, a user knows which files have been\ndeleted, guessing how far, in the directory structure, the process has\nreached. S/he can then take a look at the remaining contents of the\ndirectory to estimate how much it should wait.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 2 May 2022 18:26:33 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report removal of temp files and temp relation files\n using ereport_startup_progress"
},
{
"msg_contents": "On Mon, May 2, 2022 at 6:26 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Hi Bharath,\n>\n>\n> On Sat, Apr 30, 2022 at 11:08 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > At times, there can be many temp files (under pgsql_tmp) and temp\n> > relation files (under removal which after crash may take longer during\n> > which users have no clue about what's going on in the server before it\n> > comes up online.\n> >\n> > Here's a proposal to use ereport_startup_progress to report the\n> > progress of the file removal.\n> >\n> > Thoughts?\n>\n> The patch looks good to me.\n>\n> With this patch, the user would at least know which directory is being\n> scanned and how much time has elapsed.\n\nThere's a problem with the patch, the timeout mechanism isn't being\nused by the postmaster process. Postmaster doesn't\nInitializeTimeouts() and doesn't register STARTUP_PROGRESS_TIMEOUT, I\ntried to make postmaster do that (attached a v2 patch) but make check\nfails.\n\nNow, I'm thinking if it's a good idea to let postmaster use timeouts at all?\n\n> It would be better to know how\n> much work is remaining. I could not find a way to estimate the number\n> of files in the directory so that we can extrapolate elapsed time and\n> estimate the remaining time. Well, we could loop the output of\n> opendir() twice, first to estimate and then for the actual work. This\n> might actually work, if the time to delete all the files is very high\n> compared to the time it takes to scan all the files/directories.\n>\n> Another possibility is to scan the sorted output of opendir() thus\n> using the current file name to estimate remaining files in a very\n> crude and inaccurate way. That doesn't look attractive either. 
I can't\n> think of any better way to estimate the remaining time.\n\nI think 'how much work/how many files remaining to process' is a\ngeneric problem, for instance, snapshot, mapping files, old WAL file\nprocessing and so on. I don't think we can do much about it.\n\n> But at least with this patch, a user knows which files have been\n> deleted, guessing how far, in the directory structure, the process has\n> reached. S/he can then take a look at the remaining contents of the\n> directory to estimate how much it should wait.\n\nNot sure we will be able to use the timeout mechanism within\npostmaster. Another idea is to have a generic GUC something like\nlog_file_processing_traffic = {none, medium, high} (similar idea is\nproposed for WAL files processing while replaying/recovering at [1]),\ndefault being none, when set to medium a log message gets emitted for\nevery say 128 or 256 (just a random number) files processed. when set\nto high, log messages get emitted for every file processed (too\nverbose). I think this generic GUC log_file_processing_traffic can be\nused in many other file processing areas.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CALj2ACVnhbx4pLZepvdqOfeOekvZXJ2F%3DwJeConGzok%2B6kgCVA%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Thu, 5 May 2022 12:11:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Progress report removal of temp files and temp relation files\n using ereport_startup_progress"
},
{
"msg_contents": "On Thu, May 5, 2022 at 12:11 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, May 2, 2022 at 6:26 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Hi Bharath,\n> >\n> >\n> > On Sat, Apr 30, 2022 at 11:08 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > At times, there can be many temp files (under pgsql_tmp) and temp\n> > > relation files (under removal which after crash may take longer during\n> > > which users have no clue about what's going on in the server before it\n> > > comes up online.\n> > >\n> > > Here's a proposal to use ereport_startup_progress to report the\n> > > progress of the file removal.\n> > >\n> > > Thoughts?\n> >\n> > The patch looks good to me.\n> >\n> > With this patch, the user would at least know which directory is being\n> > scanned and how much time has elapsed.\n>\n> There's a problem with the patch, the timeout mechanism isn't being\n> used by the postmaster process. Postmaster doesn't\n> InitializeTimeouts() and doesn't register STARTUP_PROGRESS_TIMEOUT, I\n> tried to make postmaster do that (attached a v2 patch) but make check\n> fails.\n>\n> Now, I'm thinking if it's a good idea to let postmaster use timeouts at all?\n\nHere's the v3 patch, which adds progress reports for temp file removal\nunder the pgsql_tmp directory and temporary relation files under the\npg_tblspc directory, regression tests pass with it.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Tue, 2 Aug 2022 11:52:02 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Progress report removal of temp files and temp relation files\n using ereport_startup_progress"
}
] |
[
{
"msg_contents": "Hi\n\nThis function should produce form_urlencoded data from a row.\nIt works on the first invoction.\nbut the second fails and references the table from the previous invocation.\n\nIn this case I am using pgbouncer but I have tested it without\nand also without the urldecode on several platforms (pg13)\n\nthanks,\ndh\n-------to reproduce --------------------------------------\nCREATE OR REPLACE FUNCTION record_to_form_data(p_r record)\n RETURNS text\n LANGUAGE plpgsql\nAS $function$\nbegin\nreturn (\nselect string_agg(format('%s=%s',key,urlencode(value)),'&')\n from\n\t(select p_r.*) i,\n\thstore(i.*) as h,each(h) );\n end;\n$function$;\n\ncreate table fruit1(id varchar not null,name varchar not null,color varchar);\ncreate table fruit2(id varchar not null,name varchar not null);\n\ninsert into fruit1 values('1','apple','red');\ninsert into fruit2 values('1','apple');\n\nselect record_to_form_data(f.*) from fruit1 f;\nselect record_to_form_data(f.*) from fruit2 f;\n\n\n--------------------------------\ntestit6=# select record_to_form_data(f.*) from fruit1 f;\n record_to_form_data \n---------------------------\n id=1&name=apple&color=red\n(1 row)\n\ntestit6=# select record_to_form_data(f.*) from fruit2 f;\nERROR: type of parameter 1 (fruit2) does not match that when preparing the plan (fruit1)\nCONTEXT: SQL statement \"SELECT (\nselect string_agg(format('%s=%s',key,urlencode(value)),'&')\n from\n\t(select p_r.*) i,\n\thstore(i.*) as h,each(h) )\"\nPL/pgSQL function record_to_form_data(record) line 6 at RETURN\ntestit6=# \\c\npsql (13.5 (Debian 13.5-0+deb11u1), server 13.6 (Debian 13.6-1.pgdg110+1))\nSSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)\nYou are now connected to database \"testit6\" as user \"david\".\ntestit6=# select record_to_form_data(f.*) from fruit2 f;\n record_to_form_data \n---------------------\n id=1&name=apple\n(1 row)\n\n\n",
"msg_date": "Sat, 30 Apr 2022 16:23:19 -0700",
"msg_from": "d <dchuck@yurfish.com>",
"msg_from_op": true,
"msg_subject": "ERROR: type of parameter 1 (fruit2) does not match that when\n preparing the plan (fruit1)"
},
{
"msg_contents": "On Sun, May 1, 2022 at 8:44 AM d <dchuck@yurfish.com> wrote:\n\n> -------to reproduce --------------------------------------\n> CREATE OR REPLACE FUNCTION record_to_form_data(p_r record)\n> RETURNS text\n> LANGUAGE plpgsql\n> AS $function$\n> begin\n> return (\n> select string_agg(format('%s=%s',key,urlencode(value)),'&')\n> from\n> (select p_r.*) i,\n> hstore(i.*) as h,each(h) );\n> end;\n> $function$;\n>\n\nNot a bug, it is a documented limitation.\n\nIt is your use of \"(select p_r.*)\" that is problematic.\n\nhttps://www.postgresql.org/docs/current/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING\n\n\"\"\"\nThe mutable nature of record variables presents another problem in this\nconnection. When fields of a record variable are used in expressions or\nstatements, the data types of the fields must not change from one call of\nthe function to the next, since each expression will be analyzed using the\ndata type that is present when the expression is first reached. EXECUTE can\nbe used to get around this problem when necessary.\n\"\"\"\n\nDavid J.\n\nOn Sun, May 1, 2022 at 8:44 AM d <dchuck@yurfish.com> wrote:-------to reproduce --------------------------------------\nCREATE OR REPLACE FUNCTION record_to_form_data(p_r record)\n RETURNS text\n LANGUAGE plpgsql\nAS $function$\nbegin\nreturn (\nselect string_agg(format('%s=%s',key,urlencode(value)),'&')\n from\n (select p_r.*) i,\n hstore(i.*) as h,each(h) );\n end;\n$function$;Not a bug, it is a documented limitation.It is your use of \"(select p_r.*)\" that is problematic.https://www.postgresql.org/docs/current/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING\"\"\"The mutable nature of record variables presents another problem in this connection. 
When fields of a record variable are used in expressions or statements, the data types of the fields must not change from one call of the function to the next, since each expression will be analyzed using the data type that is present when the expression is first reached. EXECUTE can be used to get around this problem when necessary.\"\"\"David J.",
"msg_date": "Sun, 1 May 2022 08:58:38 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: type of parameter 1 (fruit2) does not match that when\n preparing the plan (fruit1)"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Sun, May 1, 2022 at 8:44 AM d <dchuck@yurfish.com> wrote:\n>> CREATE OR REPLACE FUNCTION record_to_form_data(p_r record)\n\n> Not a bug, it is a documented limitation.\n\nFWIW, it does seem to work as desired if you declare the argument as\n\"anyelement\".\n\nMaybe we could improve this situation by treating a \"record\" parameter\nas polymorphic, though that might cause some odd inconsistencies with\nplpgsql's historical treatment of \"record\" local variables.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 May 2022 13:08:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: type of parameter 1 (fruit2) does not match that when\n preparing the plan (fruit1)"
},
{
"msg_contents": "On Sun, May 1, 2022 at 10:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Sun, May 1, 2022 at 8:44 AM d <dchuck@yurfish.com> wrote:\n> >> CREATE OR REPLACE FUNCTION record_to_form_data(p_r record)\n>\n> > Not a bug, it is a documented limitation.\n>\n> FWIW, it does seem to work as desired if you declare the argument as\n> \"anyelement\".\n>\n\n+1\n\n\n>\n> Maybe we could improve this situation by treating a \"record\" parameter\n> as polymorphic, though that might cause some odd inconsistencies with\n> plpgsql's historical treatment of \"record\" local variables.\n>\n>\nThe extent of needing to treat \"record\" as polymorphic-like seems like it\nwould be limited to resolve_polymorphic_argtype in funcapi.c. Namely, in\ncomputing the hash key for the compiled hash entry for the function.\nSimilar to how we append the trigger oid in compute_function_hashkey in\npl.compile (which ultimately calls the former) so trigger invocations\nbecome per-table.\n\nDavid J.\n\nOn Sun, May 1, 2022 at 10:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Sun, May 1, 2022 at 8:44 AM d <dchuck@yurfish.com> wrote:\n>> CREATE OR REPLACE FUNCTION record_to_form_data(p_r record)\n\n> Not a bug, it is a documented limitation.\n\nFWIW, it does seem to work as desired if you declare the argument as\n\"anyelement\".+1 \n\nMaybe we could improve this situation by treating a \"record\" parameter\nas polymorphic, though that might cause some odd inconsistencies with\nplpgsql's historical treatment of \"record\" local variables.The extent of needing to treat \"record\" as polymorphic-like seems like it would be limited to resolve_polymorphic_argtype in funcapi.c. Namely, in computing the hash key for the compiled hash entry for the function. 
Similar to how we append the trigger oid in compute_function_hashkey in pl.compile (which ultimately calls the former) so trigger invocations become per-table.David J.",
"msg_date": "Sun, 1 May 2022 12:34:43 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: type of parameter 1 (fruit2) does not match that when\n preparing the plan (fruit1)"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Sun, May 1, 2022 at 10:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Maybe we could improve this situation by treating a \"record\" parameter\n>> as polymorphic, though that might cause some odd inconsistencies with\n>> plpgsql's historical treatment of \"record\" local variables.\n\n> The extent of needing to treat \"record\" as polymorphic-like seems like it\n> would be limited to resolve_polymorphic_argtype in funcapi.c. Namely, in\n> computing the hash key for the compiled hash entry for the function.\n> Similar to how we append the trigger oid in compute_function_hashkey in\n> pl.compile (which ultimately calls the former) so trigger invocations\n> become per-table.\n\nI'm hesitant to touch funcapi.c for this; the scope of potential\nside-effects becomes enormous as soon as you do.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 May 2022 15:46:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: type of parameter 1 (fruit2) does not match that when\n preparing the plan (fruit1)"
},
{
"msg_contents": "Moving discussion to -hackers\n\nOn Sun, May 1, 2022 at 12:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Sun, May 1, 2022 at 10:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Maybe we could improve this situation by treating a \"record\" parameter\n> >> as polymorphic, though that might cause some odd inconsistencies with\n> >> plpgsql's historical treatment of \"record\" local variables.\n>\n> > The extent of needing to treat \"record\" as polymorphic-like seems like it\n> > would be limited to resolve_polymorphic_argtype in funcapi.c. Namely, in\n> > computing the hash key for the compiled hash entry for the function.\n> > Similar to how we append the trigger oid in compute_function_hashkey in\n> > pl.compile (which ultimately calls the former) so trigger invocations\n> > become per-table.\n>\n> I'm hesitant to touch funcapi.c for this; the scope of potential\n> side-effects becomes enormous as soon as you do.\n>\n>\nAgreed, though the only caller of this particular function seems to be in\nplpgsql/pl_comp.c anyway...one more-or-less directly from do_compile and\nthe other via compute_function_hashkey, so its presence in funcapi.c seems\nunusual.\n\nBut, to get rid of the cache error it seems sufficient to simply ensure we\ncompute a hash key that considers the called types.\n\nThat do_compile doesn't see these substitutions is a positive for minimal\npotential impact with no apparent downside.\n\nI added the test case from the -bug email (I'd probably change it to match\nthe \"scenario\" for the file for a final version).\n\n I have some questions in the code that I could probably get answers for\nvia tests - would including those tests be acceptable (defaults,\nnamed-argument-calling, output argmode)?\n\nI did check-world without issue - but I suspect this to be under-tested and\nnot naturally used in the course of unrelated testing.\n\nDavid J.\n\n+ /*\n+ * Saved compiled 
functions with record-typed input args to a\nhashkey\n+ * that substitutes all known actual composite type OIDs in the\n+ * call signature for the corresponding generic record OID from\n+ * the definition signature. This avoids a probable error:\n+ * \"type of parameter ... does not match that when preparing the\nplan\"\n+ * when such a record variable is used in a query within the\nfunction.\n+ */\n+ for (int i = 0; i < procStruct->pronargs; i++)\n+ {\n+ if (hashkey->argtypes[i] != RECORDOID)\n+ continue;\n+\n+ // ??? I presume other parts of the system synchronize these\ntwo arrays\n+ // In particular for default arguments not specified in the\nfunction call\n+ // or named-argument function call syntax.\n+\n+ /* Don't bother trying to substitute once we run out of input\narguments */\n+ if (i > fcinfo->nargs - 1)\n+ break;\n+\n+ hashkey->argtypes[i] =\n+ get_call_expr_argtype(fcinfo->flinfo->fn_expr, i);\n+ }\n\n+select record_to_form_data(fruit1) from fruit1;\n+ record_to_form_data\n+---------------------\n+ (1,apple,red)\n+(1 row)\n+\n+select record_to_form_data(fruit2) from fruit2;\n+ record_to_form_data\n+---------------------\n+ (1,apple)\n+(1 row)\n+",
"msg_date": "Sun, 1 May 2022 19:22:43 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: type of parameter 1 (fruit2) does not match that when\n preparing the plan (fruit1)"
}
] |
[
{
"msg_contents": "This is ok:\n\ngit clone ssh://git@gitmaster.postgresql.org/postgresql.git\n\nBut this fails:\n--------------------------------------------------------\n$ git clone ssh://git@git.postgresql.org/postgresql.git\nCloning into 'postgresql'...\nPermission denied on repository for user ishii\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n--------------------------------------------------------\n\nIs accessing git.postgresql.org wrong and should we access\ngitmaster.postgresql.org instead?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sun, 01 May 2022 16:47:49 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Accessing git.postgresql.org fails"
},
{
"msg_contents": "Tatsuo Ishii <ishii@sraoss.co.jp> writes:\n> This is ok:\n> git clone ssh://git@gitmaster.postgresql.org/postgresql.git\n\nThat's the thing to use if you're a committer.\n\n> But this fails:\n> $ git clone ssh://git@git.postgresql.org/postgresql.git\n\nPer [1], the recommended git URL for non-committers is\n\nhttps://git.postgresql.org/git/postgresql.git\n\nnot ssh:. I'm not sure that ssh: has ever worked --- wouldn't it\nrequire an account on the target machine?\n\n\t\t\tregards, tom lane\n\n[1] https://wiki.postgresql.org/wiki/Working_with_Git\n\n\n",
"msg_date": "Sun, 01 May 2022 10:52:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Accessing git.postgresql.org fails"
},
{
"msg_contents": "On Sun, May 1, 2022 at 4:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Tatsuo Ishii <ishii@sraoss.co.jp> writes:\n> > This is ok:\n> > git clone ssh://git@gitmaster.postgresql.org/postgresql.git\n>\n> That's the thing to use if you're a committer.\n>\n> > But this fails:\n> > $ git clone ssh://git@git.postgresql.org/postgresql.git\n>\n> Per [1], the recommended git URL for non-committers is\n>\n> https://git.postgresql.org/git/postgresql.git\n>\n> not ssh:. I'm not sure that ssh: has ever worked --- wouldn't it\n> require an account on the target machine?\n>\n\nThat's correct.\n\nssh works if you have committer access on the repo at git.postgresql.org.\nSince the main postgresql.git repo there is a mirror only, nobody has\ncommit access there, so it doesn't work (but there are other repos hosted\non the same server that does have committers). But for the postgresql.git\nrepo, it has never worked on that server.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sun, May 1, 2022 at 4:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Tatsuo Ishii <ishii@sraoss.co.jp> writes:\n> This is ok:\n> git clone ssh://git@gitmaster.postgresql.org/postgresql.git\n\nThat's the thing to use if you're a committer.\n\n> But this fails:\n> $ git clone ssh://git@git.postgresql.org/postgresql.git\n\nPer [1], the recommended git URL for non-committers is\n\nhttps://git.postgresql.org/git/postgresql.git\n\nnot ssh:. I'm not sure that ssh: has ever worked --- wouldn't it\nrequire an account on the target machine?That's correct.ssh works if you have committer access on the repo at git.postgresql.org. Since the main postgresql.git repo there is a mirror only, nobody has commit access there, so it doesn't work (but there are other repos hosted on the same server that does have committers). But for the postgresql.git repo, it has never worked on that server. 
-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sun, 1 May 2022 17:53:16 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Accessing git.postgresql.org fails"
},
{
"msg_contents": "> Tatsuo Ishii <ishii@sraoss.co.jp> writes:\n>> This is ok:\n>> git clone ssh://git@gitmaster.postgresql.org/postgresql.git\n> \n> That's the thing to use if you're a committer.\n> \n>> But this fails:\n>> $ git clone ssh://git@git.postgresql.org/postgresql.git\n> \n> Per [1], the recommended git URL for non-committers is\n> \n> https://git.postgresql.org/git/postgresql.git\n> \n> not ssh:. I'm not sure that ssh: has ever worked --- wouldn't it\n> require an account on the target machine?\n\nI know. My point is, if ssh://git@git.postgresql.org/postgresql.git\ndoes not work for even committers, shouldn't descriptions at:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=summary be\nchanged?\n\n---------------------------------------------------------\ndescription\tThis is the main PostgreSQL git repository.\nowner\tMagnus Hagander\nlast change\tSat, 30 Apr 2022 16:05:32 +0000 (09:05 -0700)\nURL\tgit://git.postgresql.org/git/postgresql.git\n\thttps://git.postgresql.org/git/postgresql.git\n\tssh://git@git.postgresql.org/postgresql.git\n---------------------------------------------------------\n\nI think:\n\tssh://git@git.postgresql.org/postgresql.git\nneeds to be changed to:\n\tssh://git@gitmaster.postgresql.org/postgresql.git\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 02 May 2022 07:17:01 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Accessing git.postgresql.org fails"
}
] |
[
{
"msg_contents": "My annual audit for executables missing Windows icons turned up these:\n\n pginstall/bin/testclient.exe\n pginstall/bin/uri-regress.exe\n\nI was going to add the icons, but I felt the testclient.exe name is too\ngeneric-sounding to be installed. testclient originated in commit ebc8b7d. I\nrecommend ceasing to install both programs under MSVC. (The GNU make build\nsystem does not install them.) If that's unwanted for some reason, could you\nrename testclient to something like libpq_test?\n\nThanks,\nnm\n\n\n",
"msg_date": "Sun, 1 May 2022 01:07:06 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "testclient.exe installed under MSVC"
},
{
"msg_contents": "On Sun, May 01, 2022 at 01:07:06AM -0700, Noah Misch wrote:\n> My annual audit for executables missing Windows icons turned up these:\n> \n> pginstall/bin/testclient.exe\n> pginstall/bin/uri-regress.exe\n> \n> I was going to add the icons, but I felt the testclient.exe name is too\n> generic-sounding to be installed. testclient originated in commit ebc8b7d. I\n> recommend ceasing to install both programs under MSVC. (The GNU make build\n> system does not install them.)\n\nBut MSVC works differently. vcregress.pl does a TempInstall(), which\nis a simple Install(), so isn't it going to be an issue for the tests\nif these two tools are not installed anymore?\n\n> If that's unwanted for some reason, could you\n> rename testclient to something like libpq_test?\n\nYes, the renaming makes sense. I'd say to do more, and also rename\nuri-regress, removing the hyphen from the binary name and prefix both\nbinaries with a \"pg_\".\n--\nMichael",
"msg_date": "Sun, 1 May 2022 22:23:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: testclient.exe installed under MSVC"
},
{
"msg_contents": "On Sun, May 01, 2022 at 10:23:18PM +0900, Michael Paquier wrote:\n> On Sun, May 01, 2022 at 01:07:06AM -0700, Noah Misch wrote:\n> > My annual audit for executables missing Windows icons turned up these:\n> > \n> > pginstall/bin/testclient.exe\n> > pginstall/bin/uri-regress.exe\n> > \n> > I was going to add the icons, but I felt the testclient.exe name is too\n> > generic-sounding to be installed. testclient originated in commit ebc8b7d. I\n> > recommend ceasing to install both programs under MSVC. (The GNU make build\n> > system does not install them.)\n\nSee also:\na17fd67d2f2861ae0ce00d1aeefdf2facc47cd5e Build libpq test programs under MSVC.\nhttps://www.postgresql.org/message-id/74952229-b3b0-fe47-f958-4088529a3f21@dunslane.net MSVC build system installs extra executables\nhttps://www.postgresql.org/message-id/e4233934-98a6-6f76-46a0-992c0f4f1208@dunslane.net Re: set TESTDIR from perl rather than Makefile\n\nI'm not really sure what the plan is for the TESTDIR patches. Is \"vcregress\nalltaptests\" still an interesting patch to pursue, or is that going to be\nobsoleted by meson build ? \n\n> But MSVC works differently. vcregress.pl does a TempInstall(), which\n> is a simple Install(), so isn't it going to be an issue for the tests\n> if these two tools are not installed anymore?\n\nAndrew didn't propose any mechanism for avoiding installation of the\nexecutables, so it would break the tests. However, at least cfbot currently\ndoesn't run them anyway.\n\nOne idea is if \"vcregress install\" accepted an option like\n\"vcregress install check\", which would mean \"install extra binaries needed for\nrunning tests\". Something maybe not much more elegant than this.\n\n next\n if ($insttype eq \"client\" && !grep { $_ eq $pf }\n @client_program_files);\n \n+ next if ($pf =~ /testclient|uri-regress/);\n\n\n\n",
"msg_date": "Sun, 1 May 2022 11:52:09 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: testclient.exe installed under MSVC"
},
{
"msg_contents": "> On 1 May 2022, at 15:23, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Sun, May 01, 2022 at 01:07:06AM -0700, Noah Misch wrote:\n>> My annual audit for executables missing Windows icons turned up these:\n>> \n>> pginstall/bin/testclient.exe\n>> pginstall/bin/uri-regress.exe\n>> \n>> I was going to add the icons, but I felt the testclient.exe name is too\n>> generic-sounding to be installed. testclient originated in commit ebc8b7d. I\n>> recommend ceasing to install both programs under MSVC. (The GNU make build\n>> system does not install them.)\n> \n> But MSVC works differently. vcregress.pl does a TempInstall(), which\n> is a simple Install(), so isn't it going to be an issue for the tests\n> if these two tools are not installed anymore?\n> \n>> If that's unwanted for some reason, could you\n>> rename testclient to something like libpq_test?\n> \n> Yes, the renaming makes sense. I'd say to do more, and also rename\n> uri-regress, removing the hyphen from the binary name and prefix both\n> binaries with a \"pg_\".\n\nRenaming is probably the best option given how MSVC works. Using a pg_ prefix\nmakes them sound like actual useful tools though with (albeit small) risk for\nconfusion? Noah's suggestion of libpq_ is perhaps better: libpq_testclient.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 2 May 2022 15:14:50 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: testclient.exe installed under MSVC"
},
{
"msg_contents": "On Mon, 2022-05-02 at 15:14 +0200, Daniel Gustafsson wrote:\r\n> Using a pg_ prefix\r\n> makes them sound like actual useful tools though with (albeit small) risk for\r\n> confusion? Noah's suggestion of libpq_ is perhaps better: libpq_testclient.\r\n\r\n+1\r\n\r\nI also like Justin's idea of only installing the test executables when\r\nasked to explicitly, but I don't know enough about our existing MSVC\r\nconventions to have a strong opinion there.\r\n\r\nThanks,\r\n--Jacob\r\n",
"msg_date": "Mon, 2 May 2022 15:21:23 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: testclient.exe installed under MSVC"
},
{
"msg_contents": "On Mon, May 02, 2022 at 03:14:50PM +0200, Daniel Gustafsson wrote:\n> > On 1 May 2022, at 15:23, Michael Paquier <michael@paquier.xyz> wrote:\n> > On Sun, May 01, 2022 at 01:07:06AM -0700, Noah Misch wrote:\n> >> My annual audit for executables missing Windows icons turned up these:\n> >> \n> >> pginstall/bin/testclient.exe\n> >> pginstall/bin/uri-regress.exe\n> >> \n> >> I was going to add the icons, but I felt the testclient.exe name is too\n> >> generic-sounding to be installed. testclient originated in commit ebc8b7d. I\n> >> recommend ceasing to install both programs under MSVC. (The GNU make build\n> >> system does not install them.)\n> > \n> > But MSVC works differently. vcregress.pl does a TempInstall(), which\n> > is a simple Install(), so isn't it going to be an issue for the tests\n> > if these two tools are not installed anymore?\n\nResolving that would be part of any project to stop installing them.\n\n> >> If that's unwanted for some reason, could you\n> >> rename testclient to something like libpq_test?\n> > \n> > Yes, the renaming makes sense. I'd say to do more, and also rename\n> > uri-regress, removing the hyphen from the binary name and prefix both\n> > binaries with a \"pg_\".\n> \n> Renaming is probably the best option given how MSVC works. Using a pg_ prefix\n> makes them sound like actual useful tools though with (albeit small) risk for\n> confusion? Noah's suggestion of libpq_ is perhaps better: libpq_testclient.\n\nAgreed. libpq_testclient and libpq_uri_regress sound fine.\n\n\n",
"msg_date": "Mon, 2 May 2022 19:02:01 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: testclient.exe installed under MSVC"
},
{
"msg_contents": "> On 3 May 2022, at 04:02, Noah Misch <noah@leadboat.com> wrote:\n> On Mon, May 02, 2022 at 03:14:50PM +0200, Daniel Gustafsson wrote:\n\n>> Renaming is probably the best option given how MSVC works. Using a pg_ prefix\n>> makes them sound like actual useful tools though with (albeit small) risk for\n>> confusion? Noah's suggestion of libpq_ is perhaps better: libpq_testclient.\n> \n> Agreed. libpq_testclient and libpq_uri_regress sound fine.\n\nThe attached works in both Linux and (Cirrus CI) MSVC for me.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Tue, 3 May 2022 15:04:26 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: testclient.exe installed under MSVC"
},
{
"msg_contents": "On Tue, May 03, 2022 at 03:04:26PM +0200, Daniel Gustafsson wrote:\n> > On 3 May 2022, at 04:02, Noah Misch <noah@leadboat.com> wrote:\n> > On Mon, May 02, 2022 at 03:14:50PM +0200, Daniel Gustafsson wrote:\n> \n> >> Renaming is probably the best option given how MSVC works. Using a pg_ prefix\n> >> makes them sound like actual useful tools though with (albeit small) risk for\n> >> confusion? Noah's suggestion of libpq_ is perhaps better: libpq_testclient.\n> > \n> > Agreed. libpq_testclient and libpq_uri_regress sound fine.\n> \n> The attached works in both Linux and (Cirrus CI) MSVC for me.\n\nMichael Paquier recommended s/-/_/ for uri-regress, and I agree with that.\nWhat do you think?\n\n\n",
"msg_date": "Tue, 3 May 2022 06:50:38 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: testclient.exe installed under MSVC"
},
{
"msg_contents": "On 2022-May-03, Noah Misch wrote:\n\n> Michael Paquier recommended s/-/_/ for uri-regress, and I agree with that.\n> What do you think?\n\nlibpq_uri-regress is horrible, so +1 for that. I would personally\nrename more thoroughly (say pq_uri_test), but I doubt it's worth the\nbikeshedding effort.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"At least to kernel hackers, who really are human, despite occasional\nrumors to the contrary\" (LWN.net)\n\n\n",
"msg_date": "Tue, 3 May 2022 15:58:09 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: testclient.exe installed under MSVC"
},
{
"msg_contents": "> On 3 May 2022, at 15:58, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2022-May-03, Noah Misch wrote:\n> \n>> Michael Paquier recommended s/-/_/ for uri-regress, and I agree with that.\n>> What do you think?\n> \n> libpq_uri-regress is horrible, so +1 for that.\n\nAgreed, I'll do that before pushing.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 3 May 2022 16:50:29 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: testclient.exe installed under MSVC"
},
{
"msg_contents": "\nOn 2022-05-01 Su 09:23, Michael Paquier wrote:\n> On Sun, May 01, 2022 at 01:07:06AM -0700, Noah Misch wrote:\n>> My annual audit for executables missing Windows icons turned up these:\n>>\n>> pginstall/bin/testclient.exe\n>> pginstall/bin/uri-regress.exe\n>>\n>> I was going to add the icons, but I felt the testclient.exe name is too\n>> generic-sounding to be installed. testclient originated in commit ebc8b7d. I\n>> recommend ceasing to install both programs under MSVC. (The GNU make build\n>> system does not install them.)\n> But MSVC works differently. vcregress.pl does a TempInstall(), which\n> is a simple Install(), so isn't it going to be an issue for the tests\n> if these two tools are not installed anymore?\n>\n>> If that's unwanted for some reason, could you\n>> rename testclient to something like libpq_test?\n> Yes, the renaming makes sense. I'd say to do more, and also rename\n> uri-regress, removing the hyphen from the binary name and prefix both\n> binaries with a \"pg_\".\n\n\nI've complained before about binaries that are installed under MSVC\nwhere the equivalent are not installed under Unix or msys{2}.\n\nI think we should make the standard MSVC install look as much like the\nstandard Unix/msys install as possible. If we need a test mode that\ninstalls a few extra things then that can be managed fairly simply I\nthink. I'm prepared to help out with that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 3 May 2022 20:34:06 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: testclient.exe installed under MSVC"
},
{
"msg_contents": "> On 3 May 2022, at 16:50, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 3 May 2022, at 15:58, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> \n>> On 2022-May-03, Noah Misch wrote:\n>> \n>>> Michael Paquier recommended s/-/_/ for uri-regress, and I agree with that.\n>>> What do you think?\n>> \n>> libpq_uri-regress is horrible, so +1 for that.\n> \n> Agreed, I'll do that before pushing.\n\nDone that way.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 4 May 2022 14:18:12 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: testclient.exe installed under MSVC"
},
{
"msg_contents": "> On 4 May 2022, at 02:34, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> I think we should make the standard MSVC install look as much like the\n> standard Unix/msys install as possible.\n\n+1\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 4 May 2022 14:19:01 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: testclient.exe installed under MSVC"
}
] |
[
{
"msg_contents": "Hello,\n\na Psycopg 3 user has tested what boils down pretty much to a\n\"generate_series(100K)\" and has reported a 100x difference between\nfetching it normally and in single-row mode. I have repeated the test\nmyself and I have found a 50x difference (the OP doesn't specify which\nplatform is using, mine is Linux).\n\nOf course calling PQconsumeInput 100K times has some overhead compared\nto calling it only once. However, I would like to know if this level\nof overhead is expected, or if instead anyone smells some wasted\ncycles.\n\nAccording to some profiling, the largest part of the time is spent\ninside a libc function I don't know the symbol of, called by\npqReadData(). Details and pretty graphs are available at\nhttps://github.com/psycopg/psycopg/issues/286\n\nThe operations we perform, for every row, are PQconsumeInput,\nPQisBusy, PQgetResult. Every PQconsumeInput results in a recvfrom()\nsyscall, of which the first one returns the whole recordset, the\nfollowing ones return EAGAIN. There are two extra cycles: one to get\nthe TUPLES_OK result, one to get the end-of-stream NULL. It seems the\ndocumented usage pattern, but I'm not sure if I'm not misreading it,\nespecially in the light of this libpq grumble [1].\n\n[1] https://github.com/postgres/postgres/blob/master/src/interfaces/libpq/fe-misc.c#L681\n\nOur connection is in non-blocking mode and we see the need for waiting\n(using epoll here) only on the first call. The resulting strace of the\nentire query process (of two rows) is:\n\n21:36:53.659529 sendto(3, \"P\\0\\0\\0>\\0select 'test' a, t b from \"...,\n108, MSG_NOSIGNAL, NULL, 0) = 108\n21:36:53.660236 recvfrom(3, 0x1f6a870, 16384, 0, NULL, NULL) = -1\nEAGAIN (Resource temporarily unavailable)\n21:36:53.660589 epoll_create1(EPOLL_CLOEXEC) = 4\n21:36:53.660848 epoll_ctl(4, EPOLL_CTL_ADD, 3, {EPOLLIN|EPOLLONESHOT,\n{u32=3, u64=3}}) = 0\n21:36:53.661099 epoll_wait(4, [{EPOLLIN, {u32=3, u64=3}}], 1023, 100000) = 1\n21:36:53.661941 recvfrom(3,\n\"1\\0\\0\\0\\0042\\0\\0\\0\\4T\\0\\0\\0.\\0\\2a\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\31\\377\\377\\377\"...,\n16384, 0, NULL, NULL) = 117\n21:36:53.662506 close(4) = 0\n21:36:53.662830 recvfrom(3, 0x1f6a898, 16344, 0, NULL, NULL) = -1\nEAGAIN (Resource temporarily unavailable)\n21:36:53.663162 recvfrom(3, 0x1f6a884, 16364, 0, NULL, NULL) = -1\nEAGAIN (Resource temporarily unavailable)\n21:36:53.663359 recvfrom(3, 0x1f6a876, 16378, 0, NULL, NULL) = -1\nEAGAIN (Resource temporarily unavailable)\n\nThe test is on a localhost connection with sslmode disabled using libpq 14.2.\n\nIs this the correct usage? Any insight is welcome.\n\nThank you very much!\n\n-- Daniele\n\n\n",
"msg_date": "Sun, 1 May 2022 22:35:52 +0200",
"msg_from": "Daniele Varrazzo <daniele.varrazzo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Libpq single-row mode slowness"
},
{
"msg_contents": "Daniele Varrazzo <daniele.varrazzo@gmail.com> writes:\n> The operations we perform, for every row, are PQconsumeInput,\n> PQisBusy, PQgetResult. Every PQconsumeInput results in a recvfrom()\n> syscall, of which the first one returns the whole recordset, the\n> following ones return EAGAIN. There are two extra cycles: one to get\n> the TUPLES_OK result, one to get the end-of-stream NULL. It seems the\n> documented usage pattern, but I'm not sure if I'm not misreading it,\n> especially in the light of this libpq grumble [1].\n\nThe usual expectation is that you call PQconsumeInput to get rid of\na read-ready condition on the socket. If you don't have a poll() or\nselect() or the like in the loop, you might be wasting a lot of\npointless recvfrom calls. You definitely don't need to call\nPQconsumeInput if PQisBusy is already saying that a result is available,\nand in single-row mode it's likely that several results can be consumed\nper recvfrom call.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 May 2022 17:12:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Libpq single-row mode slowness"
},
{
"msg_contents": "On Sun, 1 May 2022 at 23:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> The usual expectation is that you call PQconsumeInput to get rid of\n> a read-ready condition on the socket. If you don't have a poll() or\n> select() or the like in the loop, you might be wasting a lot of\n> pointless recvfrom calls. You definitely don't need to call\n> PQconsumeInput if PQisBusy is already saying that a result is available,\n> and in single-row mode it's likely that several results can be consumed\n> per recvfrom call.\n\nThis makes sense and, with some refactoring of our fetch loop, the\noverhead of using single-row mode is now down to about 3x, likely\ncaused by the greater overhead in Python calls.\n\nPlease note that the insight you gave in your answer seems to\ncontradict the documentation. Some excerpts of\nhttps://www.postgresql.org/docs/current/libpq-async.html:\n\n\"\"\"\nPQconsumeInput: \"After calling PQconsumeInput , the application can\ncheck PQisBusy and/or PQnotifies to see if their state has changed\"\n\nPQisBusy: \"will not itself attempt to read data from the server;\ntherefore PQconsumeInput must be invoked first, or the busy state will\nnever end.\"\n\n...\nA typical application [will use select()]. When the main loop detects\ninput ready, it should call PQconsumeInput to read the input. It can\nthen call PQisBusy, followed by PQgetResult if PQisBusy returns false\n(0).\n\"\"\"\n\nAll these indications give the impression that there is a sort of\nmandatory order, requiring to call first PQconsumeInput, then\nPQisBusy. As a consequence, the core of our function to fetch a single\nresult was implemented as:\n\n```\ndef fetch(pgconn):\n while True:\n pgconn.consume_input()\n if not pgconn.is_busy():\n break\n yield Wait.R\n\n return pgconn.get_result()\n```\n\n(Where the `yield Wait.R` suspends this execution to call into\nselect() or whatever waiting policy the program is using.)\n\nYour remarks suggest that PQisBusy() can be called before\nPQconsumeInput(), and that the latter doesn't need to be called if not\nbusy. As such I have modified the loop to be:\n\n```\ndef fetch(pgconn):\n if pgconn.is_busy():\n yield Wait.R\n while True:\n pgconn.consume_input()\n if not pgconn.is_busy():\n break\n yield Wait.R\n\n return pgconn.get_result()\n```\n\nwhich seems to work well: tests don't show regressions and single-row\nmode doesn't waste recvfrom() anymore.\n\nIs this new fetching pattern the expected way to interact with the libpq?\n\nIf so, should we improve the documentation to suggest that there are\nreasons to call PQisBusy before PQconsumeInput? Especially in the\nsingle-row mode docs page, which doesn't make relevant mentions to the\nuse of these functions.\n\nThank you very much for your help, really appreciated.\n\n-- Daniele\n\n\n",
"msg_date": "Mon, 2 May 2022 01:51:41 +0200",
"msg_from": "Daniele Varrazzo <daniele.varrazzo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Libpq single-row mode slowness"
}
] |
[
{
"msg_contents": "Hi,\n\nAs visible on seawasp (and noticed here in passing, while hacking on\nthe opaque pointer changes for bleeding edge LLVM), Clang 15 now warns\nby default about our use of tree walkers functions with no function\nprototype, because the next revision of C (C23?) will apparently be\nharmonising with C++ in interpreting f() to mean f(void), not\nf(anything goes).\n\nnodeFuncs.c:2051:17: warning: passing arguments to a function without\na prototype is deprecated in all versions of C and is not supported in\nC2x [-Wdeprecated-non-prototype]\n return walker(((WithCheckOption *)\nnode)->qual, context);\n\nDiscussion trail:\n\nhttps://reviews.llvm.org/D123456\nhttps://discourse.llvm.org/t/rfc-enabling-wstrict-prototypes-by-default-in-c/60521\nhttp://www.open-std.org/jtc1/sc22/wg14/www/docs/n2841.htm\n\nNot sure where to see the official status of N2841 (other than waiting\nfor the next draft to pop out), but on random/unofficial social media\nI saw that it was accepted in February, and the Clang people\napparently think it's in and I also saw a rumour that bleeding edge\nGCC takes this view if you run with -std=c2x (not tested by me).\n\n\n",
"msg_date": "Mon, 2 May 2022 11:41:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> As visible on seawasp (and noticed here in passing, while hacking on\n> the opaque pointer changes for bleeding edge LLVM), Clang 15 now warns\n> by default about our use of tree walkers functions with no function\n> prototype, because the next revision of C (C23?) will apparently be\n> harmonising with C++ in interpreting f() to mean f(void), not\n> f(anything goes).\n\nUgh. I wonder if we can get away with declaring the walker arguments\nas something like \"bool (*walker) (Node *, void *)\" without having\nto change all the actual walkers to be exactly that signature.\nHaving to insert casts in the walkers would be a major pain-in-the-butt.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 May 2022 20:02:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "I wrote:\n> Ugh. I wonder if we can get away with declaring the walker arguments\n> as something like \"bool (*walker) (Node *, void *)\" without having\n> to change all the actual walkers to be exactly that signature.\n> Having to insert casts in the walkers would be a major pain-in-the-butt.\n\nNo joy on that: both gcc and clang want the walkers to be declared\nas taking exactly \"void *\".\n\nAttached is an incomplete POC patch that suppresses these warnings\nin nodeFuncs.c itself and in costsize.c, which I selected at random\nas a typical caller. I'll push forward with converting the other\ncall sites if this way seems good to people.\n\nIn nodeFuncs.c, we can hide the newly-required casts inside macros;\nindeed, the mutators barely need any changes because they already\nhad MUTATE() macros that contained casts. So on that side, it feels\nto me that this is actually a bit nicer than before.\n\nFor the callers, we can either do it as I did below:\n\n static bool\n-cost_qual_eval_walker(Node *node, cost_qual_eval_context *context)\n+cost_qual_eval_walker(Node *node, void *ctx)\n {\n+\tcost_qual_eval_context *context = (cost_qual_eval_context *) ctx;\n+\n \tif (node == NULL)\n \t\treturn false;\n\nor perhaps like this:\n\n static bool\n-cost_qual_eval_walker(Node *node, cost_qual_eval_context *context)\n+cost_qual_eval_walker(Node *node, void *context)\n {\n+\tcost_qual_eval_context *cqctx = (cost_qual_eval_context *) context;\n+\n \tif (node == NULL)\n \t\treturn false;\n\nbut the latter would require changing references further down in the\nfunction, so I felt it more invasive.\n\nIt's sad to note that this exercise in hoop-jumping actually leaves\nus with net LESS type safety, because the outside callers of\ncost_qual_eval_walker are no longer constrained to call it with\nthe appropriate kind of context struct. Thanks, C committee.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 16 Sep 2022 21:08:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "I wrote:\n> Attached is an incomplete POC patch that suppresses these warnings\n> in nodeFuncs.c itself and in costsize.c, which I selected at random\n> as a typical caller. I'll push forward with converting the other\n> call sites if this way seems good to people.\n\nHere's a fleshed-out patch that gets rid of all warnings of this sort\n(tested on clang version 15.0.0).\n\nWhile I remain happy enough with what has to be done in nodeFuncs.c,\nI'm really not happy at all with this point:\n\n> It's sad to note that this exercise in hoop-jumping actually leaves\n> us with net LESS type safety, because the outside callers of\n> cost_qual_eval_walker are no longer constrained to call it with\n> the appropriate kind of context struct. Thanks, C committee.\n\nThere are a lot of these walker/mutator functions and hence a whole\nlot of opportunity to pass the wrong thing, not only from the outer\nnon-recursive call points but during internal recursions in the\nwalkers/mutators themselves.\n\nI think we ought to seriously consider the alternative of changing\nnodeFuncs.c about like I have here, but not touching the walkers/mutators,\nand silencing the resulting complaints about function type casting by\ndoing the equivalent of\n\n- return expression_tree_walker(node, cost_qual_eval_walker,\n- (void *) context);\n+ return expression_tree_walker(node,\n+ (tree_walker_callback) cost_qual_eval_walker,\n+ (void *) context);\n\nWe could avoid touching all the call sites by turning\nexpression_tree_walker and friends into macro wrappers that incorporate\nthese casts. This is fairly annoying, in that it gives up the function\ntype safety the C committee wants to impose on us; but I really think\nthe data type safety that we're giving up in this version of the patch\nis a worse hazard.\n\nBTW, I was distressed to discover that someone decided they could\nuse ExecShutdownNode as a planstate_tree_walker() walker even though\nits argument list is not even the right length. I'm a bit flabbergasted\nthat we seem to have gotten away with that so far, because I'd have\nthought for sure that it'd break some platform's convention for which\nargument gets passed where. I think we need to fix that, independently\nof what we do about the larger scope of these problems. To avoid an\nAPI break, I propose making ExecShutdownNode just be a one-liner that\ncalls an internal ExecShutdownNode_walker() function. (I've not done\nit that way in the attached, though.)\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 18 Sep 2022 16:57:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 8:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think we ought to seriously consider the alternative of changing\n> nodeFuncs.c about like I have here, but not touching the walkers/mutators,\n> and silencing the resulting complaints about function type casting by\n> doing the equivalent of\n>\n> - return expression_tree_walker(node, cost_qual_eval_walker,\n> - (void *) context);\n> + return expression_tree_walker(node,\n> + (tree_walker_callback) cost_qual_eval_walker,\n> + (void *) context);\n>\n> We could avoid touching all the call sites by turning\n> expression_tree_walker and friends into macro wrappers that incorporate\n> these casts. This is fairly annoying, in that it gives up the function\n> type safety the C committee wants to impose on us; but I really think\n> the data type safety that we're giving up in this version of the patch\n> is a worse hazard.\n\nBut is it defined behaviour?\n\nhttps://stackoverflow.com/questions/559581/casting-a-function-pointer-to-another-type\n\n> BTW, I was distressed to discover that someone decided they could\n> use ExecShutdownNode as a planstate_tree_walker() walker even though\n> its argument list is not even the right length. I'm a bit flabbergasted\n> that we seem to have gotten away with that so far, because I'd have\n> thought for sure that it'd break some platform's convention for which\n> argument gets passed where. I think we need to fix that, independently\n> of what we do about the larger scope of these problems. To avoid an\n> API break, I propose making ExecShutdownNode just be a one-liner that\n> calls an internal ExecShutdownNode_walker() function. (I've not done\n> it that way in the attached, though.)\n\nHuh... wouldn't systems that pass arguments right-to-left on the stack\nreceive NULL for node? That'd include the SysV i386 convention used\non Linux, *BSD etc. But that can't be right or we'd know about it...\n\nBut certainly +1 for fixing that regardless.\n\n\n",
"msg_date": "Mon, 19 Sep 2022 10:16:24 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Sep 19, 2022 at 8:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... This is fairly annoying, in that it gives up the function\n>> type safety the C committee wants to impose on us; but I really think\n>> the data type safety that we're giving up in this version of the patch\n>> is a worse hazard.\n\n> But is it defined behaviour?\n> https://stackoverflow.com/questions/559581/casting-a-function-pointer-to-another-type\n\nWell, what we're talking about is substituting \"void *\" (which is\nrequired to be compatible with \"char *\") for a struct pointer type.\nStandards legalese aside, that could only be a problem if the platform\nABI handles \"char *\" differently from struct pointer types. The last\narchitecture I can remember dealing with where that might actually be\na thing was the PDP-10. Everybody has learned better since then, but\nthe C committee is apparently still intent on making the world safe\nfor crappy machine architectures.\n\nAlso, if you want to argue that \"void *\" is not compatible with struct\npointer types, then it's not real clear to me that we aren't full of\nother spec violations, because we sure do a lot of casting across that\n(and even more with this patch as it stands).\n\nI don't have the slightest hesitation about saying that if there's\nstill an architecture out there that's like that, we won't support it.\nI also note that our existing code in this area would break pretty\nthoroughly on such a machine, so this isn't making it worse.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 18 Sep 2022 23:39:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 3:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Mon, Sep 19, 2022 at 8:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> ... This is fairly annoying, in that it gives up the function\n> >> type safety the C committee wants to impose on us; but I really think\n> >> the data type safety that we're giving up in this version of the patch\n> >> is a worse hazard.\n>\n> > But is it defined behaviour?\n> > https://stackoverflow.com/questions/559581/casting-a-function-pointer-to-another-type\n>\n> Well, what we're talking about is substituting \"void *\" (which is\n> required to be compatible with \"char *\") for a struct pointer type.\n> Standards legalese aside, that could only be a problem if the platform\n> ABI handles \"char *\" differently from struct pointer types. The last\n> architecture I can remember dealing with where that might actually be\n> a thing was the PDP-10. Everybody has learned better since then, but\n> the C committee is apparently still intent on making the world safe\n> for crappy machine architectures.\n>\n> Also, if you want to argue that \"void *\" is not compatible with struct\n> pointer types, then it's not real clear to me that we aren't full of\n> other spec violations, because we sure do a lot of casting across that\n> (and even more with this patch as it stands).\n>\n> I don't have the slightest hesitation about saying that if there's\n> still an architecture out there that's like that, we won't support it.\n> I also note that our existing code in this area would break pretty\n> thoroughly on such a machine, so this isn't making it worse.\n\nYeah, I don't expect it to be a practical problem on any real system\n(that is, I don't expect any real calling convention to transfer a\nstruct T * argument in a different place than void *). I just wanted\nto mention that it's a new liberty. It's one thing to cast struct T *\nto void * and back before dereferencing, and another to cast a pointer\nto a function that takes struct T * to a pointer to a function that\ntakes void * and call it. I considered proposing that myself when\nfirst reporting this problem, but fear of language lawyers put me off.\n\n\n",
"msg_date": "Mon, 19 Sep 2022 16:32:59 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Sep 19, 2022 at 3:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I also note that our existing code in this area would break pretty\n>> thoroughly on such a machine, so this isn't making it worse.\n\n> Yeah, I don't expect it to be a practical problem on any real system\n> (that is, I don't expect any real calling convention to transfer a\n> struct T * argument in a different place than void *). I just wanted\n> to mention that it's a new liberty.\n\nNo, it's not, because the existing coding here is already assuming that.\nThe walker callbacks are generally declared as taking a \"struct *\"\nsecond parameter, but expression_tree_walker et al think they are\npassing a \"void *\" to them. Even if a platform ABI had some weird\nspecial rule about how to call functions that you don't know the\nargument list for, it wouldn't fix this because the walkers sure do know\nwhat their arguments are. The only reason this code works today is that\nin practice, \"void *\" *is* ABI-compatible with \"struct *\".\n\nI'm not excited about creating a demonstrable opportunity for bugs\nin order to make the code hypothetically more compatible with\nhardware designs that are thirty years obsolete. (Hypothetical\nin the sense that there's little reason to believe there would\nbe no other problems.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Sep 2022 00:53:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 4:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Mon, Sep 19, 2022 at 3:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I also note that our existing code in this area would break pretty\n> >> thoroughly on such a machine, so this isn't making it worse.\n>\n> > Yeah, I don't expect it to be a practical problem on any real system\n> > (that is, I don't expect any real calling convention to transfer a\n> > struct T * argument in a different place than void *). I just wanted\n> > to mention that it's a new liberty.\n>\n> No, it's not, because the existing coding here is already assuming that.\n> The walker callbacks are generally declared as taking a \"struct *\"\n> second parameter, but expression_tree_walker et al think they are\n> passing a \"void *\" to them. Even if a platform ABI had some weird\n> special rule about how to call functions that you don't know the\n> argument list for, it wouldn't fix this because the walkers sure do know\n> what their arguments are. The only reason this code works today is that\n> in practice, \"void *\" *is* ABI-compatible with \"struct *\".\n\nTrue.\n\n> I'm not excited about creating a demonstrable opportunity for bugs\n> in order to make the code hypothetically more compatible with\n> hardware designs that are thirty years obsolete. (Hypothetical\n> in the sense that there's little reason to believe there would\n> be no other problems.)\n\nFair enough.\n\n\n",
"msg_date": "Mon, 19 Sep 2022 19:30:09 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 10:16 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Sep 19, 2022 at 8:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > BTW, I was distressed to discover that someone decided they could\n> > use ExecShutdownNode as a planstate_tree_walker() walker even though\n> > its argument list is not even the right length. I'm a bit flabbergasted\n> > that we seem to have gotten away with that so far, because I'd have\n> > thought for sure that it'd break some platform's convention for which\n> > argument gets passed where. I think we need to fix that, independently\n> > of what we do about the larger scope of these problems. To avoid an\n> > API break, I propose making ExecShutdownNode just be a one-liner that\n> > calls an internal ExecShutdownNode_walker() function. (I've not done\n> > it that way in the attached, though.)\n>\n> Huh... wouldn't systems that pass arguments right-to-left on the stack\n> receive NULL for node? That'd include the SysV i386 convention used\n> on Linux, *BSD etc. But that can't be right or we'd know about it...\n\nI take that back after looking up some long forgotten details; it\nhappily ignores extra arguments.\n\n\n",
"msg_date": "Mon, 19 Sep 2022 19:40:11 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Sep 19, 2022 at 10:16 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Huh... wouldn't systems that pass arguments right-to-left on the stack\n>> receive NULL for node? That'd include the SysV i386 convention used\n>> on Linux, *BSD etc. But that can't be right or we'd know about it...\n\n> I take that back after looking up some long forgotten details; it\n> happily ignores extra arguments.\n\nYeah; the fact that no one has complained in several years seems to\nindicate that there's not a problem on supported platforms. Still,\nunlike the quibbles over whether char and struct pointers are the\nsame, it seems clear that this is the sort of inconsistency that\nC2x wants to forbid, presumably in the name of making the world\nsafe for more-efficient function calling code. So I think we'd\nbetter go fix ExecShutdownNode before somebody breaks it.\n\nWhichever way we jump on the tree-walker API changes, those won't\nbe back-patchable. I think the best we can do for the back branches\nis add a configure test to use -Wno-deprecated-non-prototype\nif available. But the ExecShutdownNode change could be back-patched,\nand I'm leaning to doing so even though that breakage is just\nhypothetical today.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Sep 2022 10:00:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "Here's a second-generation patch that fixes the warnings by inserting\ncasts into a layer of macro wrappers. I had supposed that this would\ncause us to lose all detection of wrongly-chosen walker functions,\nso I was very pleased to see this when applying it to yesterday's HEAD:\n\nexecProcnode.c:792:2: warning: cast from 'bool (*)(PlanState *)' (aka 'bool (*)(struct PlanState *)') to 'planstate_tree_walker_callback' (aka 'bool (*)(struct PlanState *, void *)') converts to incompatible function type [-Wcast-function-type]\n planstate_tree_walker(node, ExecShutdownNode, NULL);\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n../../../src/include/nodes/nodeFuncs.h:180:33: note: expanded from macro 'planstate_tree_walker'\n planstate_tree_walker_impl(ps, (planstate_tree_walker_callback) (w), c)\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nSo we've successfully suppressed the pedantic -Wdeprecated-non-prototype\nwarnings, and we have activated the actually-useful -Wcast-function-type\nwarnings, which seem to do exactly what we want in this context:\n\n'-Wcast-function-type'\n Warn when a function pointer is cast to an incompatible function\n pointer. In a cast involving function types with a variable\n argument list only the types of initial arguments that are provided\n are considered. Any parameter of pointer-type matches any other\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n pointer-type. Any benign differences in integral types are\n ^^^^^^^^^^^^\n ignored, like 'int' vs. 'long' on ILP32 targets. Likewise type\n qualifiers are ignored. The function type 'void (*) (void)' is\n special and matches everything, which can be used to suppress this\n warning. In a cast involving pointer to member types this warning\n warns whenever the type cast is changing the pointer to member\n type. 
This warning is enabled by '-Wextra'.\n\n(That verbiage is from the gcc manual; clang seems to act the same\nexcept that -Wcast-function-type is selected by -Wall, or perhaps is\neven on by default.)\n\nSo I'm pretty pleased with this formulation: no caller changes are\nneeded, and it does exactly what we want warning-wise.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 19 Sep 2022 14:10:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "On Sun, Sep 18, 2022 at 4:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> BTW, I was distressed to discover that someone decided they could\n> use ExecShutdownNode as a planstate_tree_walker() walker even though\n> its argument list is not even the right length. I'm a bit flabbergasted\n> that we seem to have gotten away with that so far, because I'd have\n> thought for sure that it'd break some platform's convention for which\n> argument gets passed where. I think we need to fix that, independently\n> of what we do about the larger scope of these problems. To avoid an\n> API break, I propose making ExecShutdownNode just be a one-liner that\n> calls an internal ExecShutdownNode_walker() function. (I've not done\n> it that way in the attached, though.)\n\nI think this was brain fade on my part ... or possibly on Amit\nKapila's part, but I believe it was probably me. I agree that it's\nimpressive that it actually seemed to work that way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Sep 2022 14:11:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "I wrote:\n> (That verbiage is from the gcc manual; clang seems to act the same\n> except that -Wcast-function-type is selected by -Wall, or perhaps is\n> even on by default.)\n\nNah, scratch that: the reason -Wcast-function-type is on is that\nwe explicitly enable it, and have done so since de8feb1f3 (v14).\nI did not happen to see this warning with gcc because the test runs\nI made with this patch already had c35ba141d, whereas I did my\nclang test on another machine that wasn't quite up to HEAD.\nSo we should have good warning coverage for bogus walker signatures\non both compilers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Sep 2022 12:15:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "As visible on seawasp and locally (16/main branch nightly packages),\nthey decided to start warning about these casts with a new strict\nvariant of the warning. Their discussion:\n\nhttps://reviews.llvm.org/D134831\n\nThere are also a few other cases unrelated to this thread's original\nproblem, for example casts involving pg_funcptr_t, HashCompareFunc. I\nguess our options would be to turn that warning off, or reconsider and\ntry shoving the cast of \"generic\" arguments pointers down into the\nfunctions?\n\n\n",
"msg_date": "Mon, 12 Dec 2022 15:45:51 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> As visible on seawasp and locally (16/main branch nightly packages),\n> they decided to start warning about these casts with a new strict\n> variant of the warning. Their discussion:\n\n> https://reviews.llvm.org/D134831\n\n> There are also a few other cases unrelated to this thread's original\n> problem, for example casts involving pg_funcptr_t, HashCompareFunc. I\n> guess our options would be to turn that warning off, or reconsider and\n> try shoving the cast of \"generic\" arguments pointers down into the\n> functions?\n\nI'm for \"turn the warning off\". Per previous discussion, adhering\nstrictly to that rule would make our code worse (less legible AND\nless safe), not better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Dec 2022 22:07:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "On Mon, Dec 12, 2022 at 4:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > As visible on seawasp and locally (16/main branch nightly packages),\n> > they decided to start warning about these casts with a new strict\n> > variant of the warning. Their discussion:\n>\n> > https://reviews.llvm.org/D134831\n>\n> > There are also a few other cases unrelated to this thread's original\n> > problem, for example casts involving pg_funcptr_t, HashCompareFunc. I\n> > guess our options would be to turn that warning off, or reconsider and\n> > try shoving the cast of \"generic\" arguments pointers down into the\n> > functions?\n>\n> I'm for \"turn the warning off\". Per previous discussion, adhering\n> strictly to that rule would make our code worse (less legible AND\n> less safe), not better.\n\nAlright, this seems to do the trick here.",
"msg_date": "Mon, 12 Dec 2022 16:43:41 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
},
{
"msg_contents": "On Mon, Dec 12, 2022 at 4:43 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Dec 12, 2022 at 4:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'm for \"turn the warning off\". Per previous discussion, adhering\n> > strictly to that rule would make our code worse (less legible AND\n> > less safe), not better.\n>\n> Alright, this seems to do the trick here.\n\nThat did fix that problem. But... seawasp also just recompiled its\ncompiler and picked up new opaque pointer API changes. So no green\ntoday. I have more work to do to fix that, which might take some time\nto get back to.\n\n\n",
"msg_date": "Tue, 13 Dec 2022 14:18:48 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tree-walker callbacks vs -Wdeprecated-non-prototype"
}
] |
[
{
"msg_contents": "Hi,\n\nIn production environments WAL receiver connection attempts to primary\nmay fail for many reasons (primary down, network is broken,\nauthentication tokens changes, primary_conn_info modifications, socket\nerrors and so on.). Although we emit the error message to server logs,\nisn't it useful to show the last connection error message via\npg_stat_wal_receiver or pg_stat_get_wal_receiver? This will be super\nhelpful in production environments to analyse what the WAL receiver\nissues as accessing and sifting through server logs can be quite\ncumbersome for the end users.\n\nThoughts?\n\nAttached patch can only display the last_conn_error only after the WAL\nreceiver is up, but it will be good to let pg_stat_wal_receiver emit\nlast_conn_error even before that. Imagine WAL receiver is continuously\nfailing on the standby, if we let pg_stat_wal_receiver report\nlast_conn_error, all other columns will show NULL. I can change this\nway, if others are okay with it.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Mon, 2 May 2022 13:27:16 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add last failed connection error message to pg_stat_wal_receiver"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHello\r\n\r\nThe patch can be applied to PG master branch without problem and it passed regression and tap tests. I manually tested this feature too and the last conn error is correctly shown in the pg_stat_get_wal_receiver output, which does exactly as described. I think this feature is nice to have to troubleshoot replication issues on the standby side.\r\n\r\nthank you\r\n\r\nCary Huang\r\n----------------\r\nHighgo Software Canada",
"msg_date": "Fri, 22 Jul 2022 20:58:08 +0000",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Add last failed connection error message to pg_stat_wal_receiver"
},
{
"msg_contents": "On Sat, Jul 23, 2022 at 2:29 AM Cary Huang <cary.huang@highgo.ca> wrote:\n>\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n>\n> Hello\n>\n> The patch can be applied to PG master branch without problem and it passed regression and tap tests. I manually tested this feature too and the last conn error is correctly shown in the pg_stat_get_wal_receiver output, which does exactly as described. I think this feature is nice to have to troubleshoot replication issues on the standby side.\n\nThanks a lot Cary for reviewing. It will be great if you can add\nyourself as a reviewer and set the status accordingly in the CF entry\nhere - https://commitfest.postgresql.org/38/3666/.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 25 Jul 2022 12:19:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last failed connection error message to pg_stat_wal_receiver"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 12:19:40PM +0530, Bharath Rupireddy wrote:\n> Thanks a lot Cary for reviewing. It will be great if you can add\n> yourself as a reviewer and set the status accordingly in the CF entry\n> here - https://commitfest.postgresql.org/38/3666/.\n\nHmm. This stands for the connection error, but there are other things\nthat could cause a failure down the road, like an incorrect system\nID or a TLI-related report, so that seems a bit limited to me?\n--\nMichael",
"msg_date": "Mon, 25 Jul 2022 18:10:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add last failed connection error message to pg_stat_wal_receiver"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 2:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jul 25, 2022 at 12:19:40PM +0530, Bharath Rupireddy wrote:\n> > Thanks a lot Cary for reviewing. It will be great if you can add\n> > yourself as a reviewer and set the status accordingly in the CF entry\n> > here - https://commitfest.postgresql.org/38/3666/.\n>\n> Hmm. This stands for the connection error, but there are other things\n> that could cause a failure down the road, like an incorrect system\n> ID or a TLI-related report, so that seems a bit limited to me?\n\nGood point. The walreceiver can exit for any reason. We can either 1)\nstore for all the error messages or 2) think of using sigsetjmp but\nthat only catches the ERROR kinds, leaving FATAL and PANIC messages.\nThe option (1) is simple but there are problems - we may miss storing\nfuture error messages, good commenting and reviewing may help here and\nall the error messages now need to be stored in string, which is\ncomplex. The option (2) seems reasonable but we will miss FATAL and\nPANIC messages (we have many ERRORs, 2 FATALs, 3 PANICs). Maybe a\ncombination of option (1) for FATALs and PANICs, and option (2) for\nERRORs helps.\n\nThoughts?\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Thu, 4 Aug 2022 15:27:11 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last failed connection error message to pg_stat_wal_receiver"
},
{
"msg_contents": "On Thu, Aug 04, 2022 at 03:27:11PM +0530, Bharath Rupireddy wrote:\n> Good point. The walreceiver can exit for any reason. We can either 1)\n> store for all the error messages or 2) think of using sigsetjmp but\n> that only catches the ERROR kinds, leaving FATAL and PANIC messages.\n> The option (1) is simple but there are problems - we may miss storing\n> future error messages, good commenting and reviewing may help here and\n> all the error messages now need to be stored in string, which is\n> complex. The option (2) seems reasonable but we will miss FATAL and\n> PANIC messages (we have many ERRORs, 2 FATALs, 3 PANICs). Maybe a\n> combination of option (1) for FATALs and PANICs, and option (2) for\n> ERRORs helps.\n> \n> Thoughts?\n\nPANIC is not something you'd care about as the system would go down as\nand shared memory would be reset (right?) even if restart_on_crash is\nenabled. Perhaps it would help here to use something like a macro to\ncatch and save the error, in a style similar to what's in hba.c for\nexample, which is the closest example I can think of, even if on ERROR\nwe don't really care about the error string anyway as there is nothing\nto report back to the SQL views used for the HBA/ident files.\n\nFATAL may prove to be tricky though, because I'd expect the error to\nbe saved in shared memory in this case. 
This is particularly critical\nas this takes the WAL receiver process down, actually.\n\nAnyway, outside the potential scope of the proposal, there are more\nthings that I find strange with the code:\n- Why isn't the string reset when the WAL receiver is starting up?\nThat surely is not OK to keep a past state not referring to what\nactually happens with a receiver currently running.\n- pg_stat_wal_receiver (system view) reports no rows if pid is NULL,\nwhich would be the state stored in shared memory after a connection.\nThis means that one would never be able to see last_conn_error except\nwhen calling directly the SQL function pg_stat_get_wal_receiver().\n\nOne could say that we should report a row for this view all the time,\nbut this creates a compatibility breakage: existing application\nassuming something like (one row <=> WAL receiver running) could\nbreak.\n--\nMichael",
"msg_date": "Thu, 18 Aug 2022 12:31:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add last failed connection error message to pg_stat_wal_receiver"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 9:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> PANIC is not something you'd care about as the system would go down as\n> and shared memory would be reset (right?) even if restart_on_crash is\n> enabled. Perhaps it would help here to use something like a macro to\n> catch and save the error, in a style similar to what's in hba.c for\n> example, which is the closest example I can think of, even if on ERROR\n> we don't really care about the error string anyway as there is nothing\n> to report back to the SQL views used for the HBA/ident files.\n>\n> FATAL may prove to be tricky though, because I'd expect the error to\n> be saved in shared memory in this case. This is particularly critical\n> as this takes the WAL receiver process down, actually.\n\nHm, we can use error callbacks or pg try/catch blocks to save the\nerror message into walreceiver shared memory.\n\n> Anyway, outside the potential scope of the proposal, there are more\n> things that I find strange with the code:\n> - Why isn't the string reset when the WAL receiver is starting up?\n> That surely is not OK to keep a past state not referring to what\n> actually happens with a receiver currently running.\n\nI agree that it's not a good way to show some past failure state when\nthings are fine currently. 
Would naming the column name as\nlast_connectivity_error or something better and describing it in the\ndocs clearly help here?\n\nOtherwise, we can have another simple function that just returns the\nlast connection failure of walreceiver and if required PID.\n\n> - pg_stat_wal_receiver (system view) reports no rows if pid is NULL,\n> which would be the state stored in shared memory after a connection.\n> This means that one would never be able to see last_conn_error except\n> when calling directly the SQL function pg_stat_get_wal_receiver().\n>\n> One could say that we should report a row for this view all the time,\n> but this creates a compatibility breakage: existing application\n> assuming something like (one row <=> WAL receiver running) could\n> break.\n\n-1.\n\nWe can think of having a separate infrastructure for reporting all\nbackend or process specific errors similar to pg_stat_activity and\npg_stat_get_activity, but that needs some shared memory and all of\nthat - IMO, it's an overkill.\n\nI'm fine to withdraw this thread, if none of the above thoughts is\nsensible enough to pursue further.\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 6 Oct 2022 11:36:11 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add last failed connection error message to pg_stat_wal_receiver"
}
] |
[
{
"msg_contents": "[for PG16]\n\nThere are many calls to construct_array() and deconstruct_array() for \nbuilt-in types, for example, when dealing with system catalog columns. \nThese all hardcode the type attributes necessary to pass to these functions.\n\nTo simplify this a bit, add construct_array_builtin(), \ndeconstruct_array_builtin() as wrappers that centralize this hardcoded \nknowledge. This simplifies many call sites and reduces the amount of \nhardcoded stuff that is spread around.\n\nI also considered having genbki.pl generate lookup tables for these \nhardcoded values, similar to schemapg.h, but that ultimately seemed \nexcessive.\n\nThoughts?",
"msg_date": "Mon, 2 May 2022 10:38:59 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Refactor construct_array() and deconstruct_array() for built-in types"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> There are many calls to construct_array() and deconstruct_array() for \n> built-in types, for example, when dealing with system catalog columns. \n> These all hardcode the type attributes necessary to pass to these functions.\n\n> To simplify this a bit, add construct_array_builtin(), \n> deconstruct_array_builtin() as wrappers that centralize this hardcoded \n> knowledge. This simplifies many call sites and reduces the amount of \n> hardcoded stuff that is spread around.\n\n> I also considered having genbki.pl generate lookup tables for these \n> hardcoded values, similar to schemapg.h, but that ultimately seemed \n> excessive.\n\n+1 --- the added overhead of the switch statements is probably a\nreasonable price to pay for the notational simplification and\nbug-proofing.\n\nOne minor coding gripe is that compilers that don't know that elog(ERROR)\ndoesn't return will certainly generate \"use of possibly-uninitialized\nvariable\" complaints. Suggest inserting \"return NULL;\" or similar into\nthe default: cases. I'd also use more specific error wording to help\npeople find where they need to add code when they make use of a new type;\nmaybe like \"type %u not supported by construct_array_builtin\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 May 2022 10:48:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactor construct_array() and deconstruct_array() for built-in\n types"
},
{
"msg_contents": "On 02.05.22 16:48, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> There are many calls to construct_array() and deconstruct_array() for\n>> built-in types, for example, when dealing with system catalog columns.\n>> These all hardcode the type attributes necessary to pass to these functions.\n> \n>> To simplify this a bit, add construct_array_builtin(),\n>> deconstruct_array_builtin() as wrappers that centralize this hardcoded\n>> knowledge. This simplifies many call sites and reduces the amount of\n>> hardcoded stuff that is spread around.\n> \n>> I also considered having genbki.pl generate lookup tables for these\n>> hardcoded values, similar to schemapg.h, but that ultimately seemed\n>> excessive.\n> \n> +1 --- the added overhead of the switch statements is probably a\n> reasonable price to pay for the notational simplification and\n> bug-proofing.\n> \n> One minor coding gripe is that compilers that don't know that elog(ERROR)\n> doesn't return will certainly generate \"use of possibly-uninitialized\n> variable\" complaints. Suggest inserting \"return NULL;\" or similar into\n> the default: cases. I'd also use more specific error wording to help\n> people find where they need to add code when they make use of a new type;\n> maybe like \"type %u not supported by construct_array_builtin\".\n\nI have pushed this with the improvements you had suggested.\n\n\n",
"msg_date": "Fri, 1 Jul 2022 11:41:37 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor construct_array() and deconstruct_array() for built-in\n types"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\n> On 02.05.22 16:48, Tom Lane wrote:\n>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>> There are many calls to construct_array() and deconstruct_array() for\n>>> built-in types, for example, when dealing with system catalog columns.\n>>> These all hardcode the type attributes necessary to pass to these functions.\n>> \n>>> To simplify this a bit, add construct_array_builtin(),\n>>> deconstruct_array_builtin() as wrappers that centralize this hardcoded\n>>> knowledge. This simplifies many call sites and reduces the amount of\n>>> hardcoded stuff that is spread around.\n>> \n>>> I also considered having genbki.pl generate lookup tables for these\n>>> hardcoded values, similar to schemapg.h, but that ultimately seemed\n>>> excessive.\n>> +1 --- the added overhead of the switch statements is probably a\n>> reasonable price to pay for the notational simplification and\n>> bug-proofing.\n>> One minor coding gripe is that compilers that don't know that\n>> elog(ERROR)\n>> doesn't return will certainly generate \"use of possibly-uninitialized\n>> variable\" complaints. Suggest inserting \"return NULL;\" or similar into\n>> the default: cases. I'd also use more specific error wording to help\n>> people find where they need to add code when they make use of a new type;\n>> maybe like \"type %u not supported by construct_array_builtin\".\n>\n> I have pushed this with the improvements you had suggested.\n\nI dind't pay much attention to this thread earlier, but I was struck by\nthe duplication of the switch statement determining the elemlen,\nelembyval, and elemalign values between the construct and deconstruct\nfunctions. How about a common function they can both call? Something\nlike:\n\nstatic void builtin_type_details(Oid elemtype,\n int *elemlen,\n bool *elembyval,\n char *elemalign);\n\n- ilmari\n\n\n",
"msg_date": "Fri, 01 Jul 2022 11:43:21 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Refactor construct_array() and deconstruct_array() for built-in\n types"
},
{
"msg_contents": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n\n> I dind't pay much attention to this thread earlier, but I was struck by\n> the duplication of the switch statement determining the elemlen,\n> elembyval, and elemalign values between the construct and deconstruct\n> functions. How about a common function they can both call? Something\n> like:\n>\n> static void builtin_type_details(Oid elemtype,\n> int *elemlen,\n> bool *elembyval,\n> char *elemalign);\n\nI just realised that this would require the error message to not include\nthe function name (which isn't really that critical, since it's a\ndeveloper-facing message), but an option would to make it return false\nfor unknown types, so each of the calling functions can emit their own\nerror message.\n\n> - ilmari\n\n\n",
"msg_date": "Fri, 01 Jul 2022 11:47:50 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Refactor construct_array() and deconstruct_array() for built-in\n types"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n>> I dind't pay much attention to this thread earlier, but I was struck by\n>> the duplication of the switch statement determining the elemlen,\n>> elembyval, and elemalign values between the construct and deconstruct\n>> functions. How about a common function they can both call?\n\nI was wondering about that too while reading the committed version.\nHowever, adding an additional function call would weaken the argument\nthat this adds just a tolerable amount of overhead, primarily because\nyou'd need to return a record or introduce pointers or the like.\n\n> I just realised that this would require the error message to not include\n> the function name (which isn't really that critical, since it's a\n> developer-facing message), but an option would to make it return false\n> for unknown types, so each of the calling functions can emit their own\n> error message.\n\nNah, because the point of that was just to direct people to where\nto fix it when they need to. So the message need only refer to\nthe common function, if we were to change it.\n\nPerhaps a good compromise could be to turn the duplicated code into\na macro that's instantiated in both places? But I don't actually\nsee anything much wrong with the code as Peter has it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 01 Jul 2022 09:37:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactor construct_array() and deconstruct_array() for built-in\n types"
},
{
"msg_contents": "On 01.07.22 15:37, Tom Lane wrote:\n> Perhaps a good compromise could be to turn the duplicated code into\n> a macro that's instantiated in both places? But I don't actually\n> see anything much wrong with the code as Peter has it.\n\nThere are opportunities to refine this further. For example, there is \nsimilar code in TupleDescInitBuiltinEntry(), and bootstrap.c also \ncontains hardcoded info on built-in types, and GetCCHashEqFuncs() is \nalso loosely related. As I mentioned earlier in the thread, one could \nhave genbki.pl generate support code for this.\n\n\n",
"msg_date": "Fri, 1 Jul 2022 17:00:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor construct_array() and deconstruct_array() for built-in\n types"
}
] |
[
{
"msg_contents": "Hi\n\nI found a query that is significantly slower with more memory\n\nplan 1 - fast https://explain.depesz.com/s/XM1f\n\nplan 2 - slow https://explain.depesz.com/s/2rBw\n\nStrange - the time of last row is +/- same, but execution time is 10x worse\n\nIt looks like slow environment cleaning",
"msg_date": "Mon, 2 May 2022 10:59:33 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Mon, 2 May 2022 at 11:00, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> Hi\n>\n> I found a query that is significantly slower with more memory\n\nWhich PostgreSQL version did you use? Did you enable assert checking?\nDo you have an example database setup to work with?\n\n> plan 2\n> QUERY PLAN\n> ----------------\n> Nested Loop Anti Join (cost=46.53..2914.58 rows=1 width=16) (actual time=18.306..23.065 rows=32 loops=1)\n> ...\n> Execution Time: 451.161 ms\n\nTruly strange; especially the 418ms difference between execution time\nand the root node's \"actual time\". I haven't really seen such\ndifferences happen, except when concurrent locks were held at the\ntable / index level.\n\n> plan 1 - fast https://explain.depesz.com/s/XM1f\n>\n> plan 2 - slow https://explain.depesz.com/s/2rBw\n>\n> Strange - the time of last row is +/- same, but execution time is 10x worse\n\nThe only difference between the two plans that I see is that plan 1\ndoesn't use memoization, whereas plan 2 does use 2 memoize plan nodes\n(one of 66 misses; one of 342 misses). The only \"expensive\" operation\nthat I see in memoize nodes is the check for memory size in\nassert-enabled builds; and that should have very low overhead\nconsidering that the size of the memoized data is only 8kB and 25kB\nrespectively.\n\n\n",
"msg_date": "Mon, 2 May 2022 15:27:49 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "po 2. 5. 2022 v 15:28 odesílatel Matthias van de Meent <\nboekewurm+postgres@gmail.com> napsal:\n\n> On Mon, 2 May 2022 at 11:00, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > Hi\n> >\n> > I found a query that is significantly slower with more memory\n>\n> Which PostgreSQL version did you use? Did you enable assert checking?\n> Do you have an example database setup to work with?\n>\n> > plan 2\n> > QUERY PLAN\n> > ----------------\n> > Nested Loop Anti Join (cost=46.53..2914.58 rows=1 width=16) (actual\n> time=18.306..23.065 rows=32 loops=1)\n> > ...\n> > Execution Time: 451.161 ms\n>\n> Truly strange; especially the 418ms difference between execution time\n> and the root node's \"actual time\". I haven't really seen such\n> differences happen, except when concurrent locks were held at the\n> table / index level.\n>\n> > plan 1 - fast https://explain.depesz.com/s/XM1f\n> >\n> > plan 2 - slow https://explain.depesz.com/s/2rBw\n> >\n> > Strange - the time of last row is +/- same, but execution time is 10x\n> worse\n>\n> The only difference between the two plans that I see is that plan 1\n> doesn't use memoization, whereas plan 2 does use 2 memoize plan nodes\n> (one of 66 misses; one of 342 misses). 
The only \"expensive\" operation\n> that I see in memoize nodes is the check for memory size in\n> assert-enabled builds; and that should have very low overhead\n> considering that the size of the memoized data is only 8kB and 25kB\n> respectively.\n>\n\nThis is PostgreSQL 14 used in production environment\n\n (2022-05-02 15:37:29) prd_aukro=# show debug_assertions ;\n┌──────────────────┐\n│ debug_assertions │\n├──────────────────┤\n│ off │\n└──────────────────┘\n(1 řádka)\n\nČas: 0,295 ms\n(2022-05-02 15:37:35) prd_aukro=# select version();\n┌────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ version\n │\n├────────────────────────────────────────────────────────────────────────────────────────────────────────┤\n│ PostgreSQL 14.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0\n20210514 (Red Hat 8.5.0-4), 64-bit │\n└────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n(1 řádka)\nČas: 0,629 ms\n\nthere is just shared buffers changed to 32GB and work_mem to 70MB.\nUnfortunately - it is in production environment with customer data, so I\ncannot to play too much\n\nThis is perf of slow\n\n 25,94% postmaster [kernel.kallsyms] [k] clear_page_erms\n 11,06% postmaster [kernel.kallsyms] [k] page_fault\n 5,51% postmaster [kernel.kallsyms] [k] prepare_exit_to_usermode\n 5,18% postmaster [kernel.kallsyms] [k] __list_del_entry_valid\n 5,15% postmaster libc-2.28.so [.] __memset_avx2_erms\n 3,99% postmaster [kernel.kallsyms] [k] unmap_page_range\n 3,07% postmaster postgres [.] hash_search_with_hash_value\n 2,73% postmaster [kernel.kallsyms] [k] cgroup_throttle_swaprate\n 2,49% postmaster postgres [.] heap_page_prune_opt\n 1,92% postmaster [kernel.kallsyms] [k] try_charge\n 1,85% postmaster [kernel.kallsyms] [k]\nswapgs_restore_regs_and_return_to_usermode\n 1,82% postmaster [kernel.kallsyms] [k] error_entry\n 1,73% postmaster postgres [.] 
_bt_checkkeys\n 1,48% postmaster [kernel.kallsyms] [k] free_pcppages_bulk\n 1,35% postmaster [kernel.kallsyms] [k] get_page_from_freelist\n 1,20% postmaster [kernel.kallsyms] [k] __pagevec_lru_add_fn\n 1,08% postmaster [kernel.kallsyms] [k]\npercpu_ref_put_many.constprop.84\n 1,08% postmaster postgres [.] 0x00000000003c1be6\n 1,06% postmaster [kernel.kallsyms] [k] get_mem_cgroup_from_mm.part.49\n 0,86% postmaster [kernel.kallsyms] [k] __handle_mm_fault\n 0,79% postmaster [kernel.kallsyms] [k] mem_cgroup_charge\n 0,70% postmaster [kernel.kallsyms] [k] release_pages\n 0,61% postmaster postgres [.] _bt_checkpage\n 0,61% postmaster [kernel.kallsyms] [k] free_pages_and_swap_cache\n 0,60% postmaster [kernel.kallsyms] [k] handle_mm_fault\n 0,57% postmaster postgres [.] tbm_iterate\n 0,56% postmaster [kernel.kallsyms] [k] __count_memcg_events.part.70\n 0,55% postmaster [kernel.kallsyms] [k] __mod_memcg_lruvec_state\n 0,52% postmaster postgres [.] 0x000000000015f6e5\n 0,50% postmaster [kernel.kallsyms] [k] prep_new_page\n 0,49% postmaster [kernel.kallsyms] [k] __do_page_fault\n 0,46% postmaster [kernel.kallsyms] [k] _raw_spin_lock\n 0,44% postmaster [kernel.kallsyms] [k] do_anonymous_page\n\nThis is fast\n\n 21,13% postmaster postgres [.] hash_search_with_hash_value\n 15,33% postmaster postgres [.] heap_page_prune_opt\n 10,22% postmaster libc-2.28.so [.] __memset_avx2_erms\n 10,00% postmaster postgres [.] _bt_checkkeys\n 6,23% postmaster postgres [.] 0x00000000003c1be6\n 4,94% postmaster postgres [.] _bt_checkpage\n 2,85% postmaster postgres [.] tbm_iterate\n 2,31% postmaster postgres [.] nocache_index_getattr\n 2,13% postmaster postgres [.] pg_qsort\n 1,58% postmaster postgres [.] heap_hot_search_buffer\n 1,58% postmaster postgres [.] FunctionCall2Coll\n 1,58% postmaster postgres [.] 0x000000000015f6e5\n 1,17% postmaster postgres [.] LWLockRelease\n 0,85% postmaster libc-2.28.so [.] __memcmp_avx2_movbe\n 0,64% postmaster postgres [.] 
0x00000000003e4233\n 0,54% postmaster postgres [.] hash_bytes\n 0,53% postmaster postgres [.] 0x0000000000306fbb\n 0,53% postmaster postgres [.] LWLockAcquire\n 0,42% postmaster postgres [.] 0x00000000003c1c6f\n 0,42% postmaster postgres [.] _bt_compare\n\n\n\n\nRegards\n\nPavel",
"msg_date": "Mon, 2 May 2022 16:02:18 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "po 2. 5. 2022 v 16:02 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> po 2. 5. 2022 v 15:28 odesílatel Matthias van de Meent <\n> boekewurm+postgres@gmail.com> napsal:\n>\n>> On Mon, 2 May 2022 at 11:00, Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>> >\n>> > Hi\n>> >\n>> > I found a query that is significantly slower with more memory\n>>\n>> Which PostgreSQL version did you use? Did you enable assert checking?\n>> Do you have an example database setup to work with?\n>>\n>> > plan 2\n>> > QUERY PLAN\n>> > ----------------\n>> > Nested Loop Anti Join (cost=46.53..2914.58 rows=1 width=16) (actual\n>> time=18.306..23.065 rows=32 loops=1)\n>> > ...\n>> > Execution Time: 451.161 ms\n>>\n>> Truly strange; especially the 418ms difference between execution time\n>> and the root node's \"actual time\". I haven't really seen such\n>> differences happen, except when concurrent locks were held at the\n>> table / index level.\n>>\n>> > plan 1 - fast https://explain.depesz.com/s/XM1f\n>> >\n>> > plan 2 - slow https://explain.depesz.com/s/2rBw\n>> >\n>> > Strange - the time of last row is +/- same, but execution time is 10x\n>> worse\n>>\n>> The only difference between the two plans that I see is that plan 1\n>> doesn't use memoization, whereas plan 2 does use 2 memoize plan nodes\n>> (one of 66 misses; one of 342 misses). 
The only \"expensive\" operation\n>> that I see in memoize nodes is the check for memory size in\n>> assert-enabled builds; and that should have very low overhead\n>> considering that the size of the memoized data is only 8kB and 25kB\n>> respectively.\n>>\n>\n> This is PostgreSQL 14 used in production environment\n>\n> (2022-05-02 15:37:29) prd_aukro=# show debug_assertions ;\n> ┌──────────────────┐\n> │ debug_assertions │\n> ├──────────────────┤\n> │ off │\n> └──────────────────┘\n> (1 řádka)\n>\n> Čas: 0,295 ms\n> (2022-05-02 15:37:35) prd_aukro=# select version();\n>\n> ┌────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n> │ version\n> │\n>\n> ├────────────────────────────────────────────────────────────────────────────────────────────────────────┤\n> │ PostgreSQL 14.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0\n> 20210514 (Red Hat 8.5.0-4), 64-bit │\n>\n> └────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n> (1 řádka)\n> Čas: 0,629 ms\n>\n> there is just shared buffers changed to 32GB and work_mem to 70MB.\n> Unfortunately - it is in production environment with customer data, so I\n> cannot to play too much\n>\n> This is perf of slow\n>\n> 25,94% postmaster [kernel.kallsyms] [k] clear_page_erms\n> 11,06% postmaster [kernel.kallsyms] [k] page_fault\n> 5,51% postmaster [kernel.kallsyms] [k] prepare_exit_to_usermode\n> 5,18% postmaster [kernel.kallsyms] [k] __list_del_entry_valid\n> 5,15% postmaster libc-2.28.so [.] __memset_avx2_erms\n> 3,99% postmaster [kernel.kallsyms] [k] unmap_page_range\n> 3,07% postmaster postgres [.] hash_search_with_hash_value\n> 2,73% postmaster [kernel.kallsyms] [k] cgroup_throttle_swaprate\n> 2,49% postmaster postgres [.] 
heap_page_prune_opt\n> 1,92% postmaster [kernel.kallsyms] [k] try_charge\n> 1,85% postmaster [kernel.kallsyms] [k]\n> swapgs_restore_regs_and_return_to_usermode\n> 1,82% postmaster [kernel.kallsyms] [k] error_entry\n> 1,73% postmaster postgres [.] _bt_checkkeys\n> 1,48% postmaster [kernel.kallsyms] [k] free_pcppages_bulk\n> 1,35% postmaster [kernel.kallsyms] [k] get_page_from_freelist\n> 1,20% postmaster [kernel.kallsyms] [k] __pagevec_lru_add_fn\n> 1,08% postmaster [kernel.kallsyms] [k]\n> percpu_ref_put_many.constprop.84\n> 1,08% postmaster postgres [.] 0x00000000003c1be6\n> 1,06% postmaster [kernel.kallsyms] [k] get_mem_cgroup_from_mm.part.49\n> 0,86% postmaster [kernel.kallsyms] [k] __handle_mm_fault\n> 0,79% postmaster [kernel.kallsyms] [k] mem_cgroup_charge\n> 0,70% postmaster [kernel.kallsyms] [k] release_pages\n> 0,61% postmaster postgres [.] _bt_checkpage\n> 0,61% postmaster [kernel.kallsyms] [k] free_pages_and_swap_cache\n> 0,60% postmaster [kernel.kallsyms] [k] handle_mm_fault\n> 0,57% postmaster postgres [.] tbm_iterate\n> 0,56% postmaster [kernel.kallsyms] [k] __count_memcg_events.part.70\n> 0,55% postmaster [kernel.kallsyms] [k] __mod_memcg_lruvec_state\n> 0,52% postmaster postgres [.] 0x000000000015f6e5\n> 0,50% postmaster [kernel.kallsyms] [k] prep_new_page\n> 0,49% postmaster [kernel.kallsyms] [k] __do_page_fault\n> 0,46% postmaster [kernel.kallsyms] [k] _raw_spin_lock\n> 0,44% postmaster [kernel.kallsyms] [k] do_anonymous_page\n>\n> This is fast\n>\n> 21,13% postmaster postgres [.] hash_search_with_hash_value\n> 15,33% postmaster postgres [.] heap_page_prune_opt\n> 10,22% postmaster libc-2.28.so [.] __memset_avx2_erms\n> 10,00% postmaster postgres [.] _bt_checkkeys\n> 6,23% postmaster postgres [.] 0x00000000003c1be6\n> 4,94% postmaster postgres [.] _bt_checkpage\n> 2,85% postmaster postgres [.] tbm_iterate\n> 2,31% postmaster postgres [.] nocache_index_getattr\n> 2,13% postmaster postgres [.] pg_qsort\n> 1,58% postmaster postgres [.] 
heap_hot_search_buffer\n> 1,58% postmaster postgres [.] FunctionCall2Coll\n> 1,58% postmaster postgres [.] 0x000000000015f6e5\n> 1,17% postmaster postgres [.] LWLockRelease\n> 0,85% postmaster libc-2.28.so [.] __memcmp_avx2_movbe\n> 0,64% postmaster postgres [.] 0x00000000003e4233\n> 0,54% postmaster postgres [.] hash_bytes\n> 0,53% postmaster postgres [.] 0x0000000000306fbb\n> 0,53% postmaster postgres [.] LWLockAcquire\n> 0,42% postmaster postgres [.] 0x00000000003c1c6f\n> 0,42% postmaster postgres [.] _bt_compare\n>\n>\nIt looks so memoization allocate lot of memory - maybe there are some\ntemporal memory leak\n\n Performance counter stats for process id '4004464':\n\n 84,26 msec task-clock # 0,012 CPUs utilized\n\n 3 context-switches # 0,036 K/sec\n\n 0 cpu-migrations # 0,000 K/sec\n\n 19 page-faults # 0,225 K/sec\n\n 0 cycles # 0,000 GHz\n\n 106 873 995 instructions\n\n 20 225 431 branches # 240,026 M/sec\n\n 348 834 branch-misses # 1,72% of all\nbranches\n\n 7,106142051 seconds time elapsed\n\n Performance counter stats for process id '4004464':\n\n 1 116,97 msec task-clock # 0,214 CPUs utilized\n\n 4 context-switches # 0,004 K/sec\n\n 0 cpu-migrations # 0,000 K/sec\n\n 99 349 page-faults # 0,089 M/sec\n\n 0 cycles # 0,000 GHz\n\n 478 842 411 instructions\n\n 89 495 015 branches # 80,123 M/sec\n\n 1 014 763 branch-misses # 1,13% of all\nbranches\n\n 5,221116331 seconds time elapsed\n\nRegards\n\nPavel\n\n\n>\n>\n>\n> Regards\n>\n> Pavel\n>\n>",
"msg_date": "Mon, 2 May 2022 16:08:31 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Mon, 2 May 2022 at 16:09, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n>\n>\n> po 2. 5. 2022 v 16:02 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n>> there is just shared buffers changed to 32GB and work_mem to 70MB. Unfortunately - it is in production environment with customer data, so I cannot to play too much\n>>\n>> This is perf of slow\n>>\n>> 25,94% postmaster [kernel.kallsyms] [k] clear_page_erms\n>> 11,06% postmaster [kernel.kallsyms] [k] page_fault\n>> 5,51% postmaster [kernel.kallsyms] [k] prepare_exit_to_usermode\n>> 5,18% postmaster [kernel.kallsyms] [k] __list_del_entry_valid\n>> 5,15% postmaster libc-2.28.so [.] __memset_avx2_erms\n>> 3,99% postmaster [kernel.kallsyms] [k] unmap_page_range\n>> 3,07% postmaster postgres [.] hash_search_with_hash_value\n>> 2,73% postmaster [kernel.kallsyms] [k] cgroup_throttle_swaprate\n>> 2,49% postmaster postgres [.] heap_page_prune_opt\n>> 1,92% postmaster [kernel.kallsyms] [k] try_charge\n>> 1,85% postmaster [kernel.kallsyms] [k] swapgs_restore_regs_and_return_to_usermode\n>> 1,82% postmaster [kernel.kallsyms] [k] error_entry\n>> 1,73% postmaster postgres [.] _bt_checkkeys\n>> 1,48% postmaster [kernel.kallsyms] [k] free_pcppages_bulk\n>> 1,35% postmaster [kernel.kallsyms] [k] get_page_from_freelist\n>> 1,20% postmaster [kernel.kallsyms] [k] __pagevec_lru_add_fn\n>> 1,08% postmaster [kernel.kallsyms] [k] percpu_ref_put_many.constprop.84\n>> 1,08% postmaster postgres [.] 0x00000000003c1be6\n>> 1,06% postmaster [kernel.kallsyms] [k] get_mem_cgroup_from_mm.part.49\n>> 0,86% postmaster [kernel.kallsyms] [k] __handle_mm_fault\n>> 0,79% postmaster [kernel.kallsyms] [k] mem_cgroup_charge\n>> 0,70% postmaster [kernel.kallsyms] [k] release_pages\n>> 0,61% postmaster postgres [.] _bt_checkpage\n>> 0,61% postmaster [kernel.kallsyms] [k] free_pages_and_swap_cache\n>> 0,60% postmaster [kernel.kallsyms] [k] handle_mm_fault\n>> 0,57% postmaster postgres [.] 
tbm_iterate\n>> 0,56% postmaster [kernel.kallsyms] [k] __count_memcg_events.part.70\n>> 0,55% postmaster [kernel.kallsyms] [k] __mod_memcg_lruvec_state\n>> 0,52% postmaster postgres [.] 0x000000000015f6e5\n>> 0,50% postmaster [kernel.kallsyms] [k] prep_new_page\n>> 0,49% postmaster [kernel.kallsyms] [k] __do_page_fault\n>> 0,46% postmaster [kernel.kallsyms] [k] _raw_spin_lock\n>> 0,44% postmaster [kernel.kallsyms] [k] do_anonymous_page\n>>\n>> This is fast\n>>\n>> 21,13% postmaster postgres [.] hash_search_with_hash_value\n>> 15,33% postmaster postgres [.] heap_page_prune_opt\n>> 10,22% postmaster libc-2.28.so [.] __memset_avx2_erms\n>> 10,00% postmaster postgres [.] _bt_checkkeys\n>> 6,23% postmaster postgres [.] 0x00000000003c1be6\n>> 4,94% postmaster postgres [.] _bt_checkpage\n>> 2,85% postmaster postgres [.] tbm_iterate\n>> 2,31% postmaster postgres [.] nocache_index_getattr\n>> 2,13% postmaster postgres [.] pg_qsort\n>> 1,58% postmaster postgres [.] heap_hot_search_buffer\n>> 1,58% postmaster postgres [.] FunctionCall2Coll\n>> 1,58% postmaster postgres [.] 0x000000000015f6e5\n>> 1,17% postmaster postgres [.] LWLockRelease\n>> 0,85% postmaster libc-2.28.so [.] __memcmp_avx2_movbe\n>> 0,64% postmaster postgres [.] 0x00000000003e4233\n>> 0,54% postmaster postgres [.] hash_bytes\n>> 0,53% postmaster postgres [.] 0x0000000000306fbb\n>> 0,53% postmaster postgres [.] LWLockAcquire\n>> 0,42% postmaster postgres [.] 0x00000000003c1c6f\n>> 0,42% postmaster postgres [.] 
_bt_compare\n>>\n>\n> It looks so memoization allocate lot of memory - maybe there are some temporal memory leak\n\nMemoization doesn't leak memory any more than hash tables do; so I\ndoubt that that is the issue.\n\n> Performance counter stats for process id '4004464':\n>\n> 84,26 msec task-clock # 0,012 CPUs utilized\n> 3 context-switches # 0,036 K/sec\n> 0 cpu-migrations # 0,000 K/sec\n> 19 page-faults # 0,225 K/sec\n> 0 cycles # 0,000 GHz\n> 106 873 995 instructions\n> 20 225 431 branches # 240,026 M/sec\n> 348 834 branch-misses # 1,72% of all branches\n>\n> 7,106142051 seconds time elapsed\n>\n\nAssuming the above was for the fast query\n\n> Performance counter stats for process id '4004464':\n>\n> 1 116,97 msec task-clock # 0,214 CPUs utilized\n> 4 context-switches # 0,004 K/sec\n> 0 cpu-migrations # 0,000 K/sec\n> 99 349 page-faults # 0,089 M/sec\n> 0 cycles # 0,000 GHz\n> 478 842 411 instructions\n> 89 495 015 branches # 80,123 M/sec\n> 1 014 763 branch-misses # 1,13% of all branches\n>\n> 5,221116331 seconds time elapsed\n\n... and this for the slow one:\n\nIt seems like this system is actively swapping memory; which has a\nnegative impact on your system. This seems to be indicated by the high\namount of page faults and the high amount of time spent in the kernel\n(as per the perf report one mail earlier). Maybe too much (work)memory\nwas assigned and the machine you're running on doesn't have that\namount of memory left?\n\nEither way, seeing that so much time is spent in the kernel I don't\nthink that PostgreSQL is the main/only source of the slow query here,\nso I don't think pgsql-hackers is the right place to continue with\nthis conversation.\n\nRegards,\n\nMatthias\n\n\nPS. Maybe next time start off in\nhttps://www.postgresql.org/list/pgsql-performance/ if you have\nperformance issues with unknown origin.\nThe wiki also has some nice tips to debug performance issues:\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\n",
"msg_date": "Mon, 2 May 2022 16:43:55 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "po 2. 5. 2022 v 16:44 odesílatel Matthias van de Meent <\nboekewurm+postgres@gmail.com> napsal:\n\n> On Mon, 2 May 2022 at 16:09, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> >\n> >\n> > po 2. 5. 2022 v 16:02 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n> >> there is just shared buffers changed to 32GB and work_mem to 70MB.\n> Unfortunately - it is in production environment with customer data, so I\n> cannot to play too much\n> >>\n> >> This is perf of slow\n> >>\n> >> 25,94% postmaster [kernel.kallsyms] [k] clear_page_erms\n> >> 11,06% postmaster [kernel.kallsyms] [k] page_fault\n> >> 5,51% postmaster [kernel.kallsyms] [k] prepare_exit_to_usermode\n> >> 5,18% postmaster [kernel.kallsyms] [k] __list_del_entry_valid\n> >> 5,15% postmaster libc-2.28.so [.] __memset_avx2_erms\n> >> 3,99% postmaster [kernel.kallsyms] [k] unmap_page_range\n> >> 3,07% postmaster postgres [.] hash_search_with_hash_value\n> >> 2,73% postmaster [kernel.kallsyms] [k] cgroup_throttle_swaprate\n> >> 2,49% postmaster postgres [.] heap_page_prune_opt\n> >> 1,92% postmaster [kernel.kallsyms] [k] try_charge\n> >> 1,85% postmaster [kernel.kallsyms] [k]\n> swapgs_restore_regs_and_return_to_usermode\n> >> 1,82% postmaster [kernel.kallsyms] [k] error_entry\n> >> 1,73% postmaster postgres [.] _bt_checkkeys\n> >> 1,48% postmaster [kernel.kallsyms] [k] free_pcppages_bulk\n> >> 1,35% postmaster [kernel.kallsyms] [k] get_page_from_freelist\n> >> 1,20% postmaster [kernel.kallsyms] [k] __pagevec_lru_add_fn\n> >> 1,08% postmaster [kernel.kallsyms] [k]\n> percpu_ref_put_many.constprop.84\n> >> 1,08% postmaster postgres [.] 0x00000000003c1be6\n> >> 1,06% postmaster [kernel.kallsyms] [k]\n> get_mem_cgroup_from_mm.part.49\n> >> 0,86% postmaster [kernel.kallsyms] [k] __handle_mm_fault\n> >> 0,79% postmaster [kernel.kallsyms] [k] mem_cgroup_charge\n> >> 0,70% postmaster [kernel.kallsyms] [k] release_pages\n> >> 0,61% postmaster postgres [.] 
_bt_checkpage\n> >> 0,61% postmaster [kernel.kallsyms] [k] free_pages_and_swap_cache\n> >> 0,60% postmaster [kernel.kallsyms] [k] handle_mm_fault\n> >> 0,57% postmaster postgres [.] tbm_iterate\n> >> 0,56% postmaster [kernel.kallsyms] [k]\n> __count_memcg_events.part.70\n> >> 0,55% postmaster [kernel.kallsyms] [k] __mod_memcg_lruvec_state\n> >> 0,52% postmaster postgres [.] 0x000000000015f6e5\n> >> 0,50% postmaster [kernel.kallsyms] [k] prep_new_page\n> >> 0,49% postmaster [kernel.kallsyms] [k] __do_page_fault\n> >> 0,46% postmaster [kernel.kallsyms] [k] _raw_spin_lock\n> >> 0,44% postmaster [kernel.kallsyms] [k] do_anonymous_page\n> >>\n> >> This is fast\n> >>\n> >> 21,13% postmaster postgres [.] hash_search_with_hash_value\n> >> 15,33% postmaster postgres [.] heap_page_prune_opt\n> >> 10,22% postmaster libc-2.28.so [.] __memset_avx2_erms\n> >> 10,00% postmaster postgres [.] _bt_checkkeys\n> >> 6,23% postmaster postgres [.] 0x00000000003c1be6\n> >> 4,94% postmaster postgres [.] _bt_checkpage\n> >> 2,85% postmaster postgres [.] tbm_iterate\n> >> 2,31% postmaster postgres [.] nocache_index_getattr\n> >> 2,13% postmaster postgres [.] pg_qsort\n> >> 1,58% postmaster postgres [.] heap_hot_search_buffer\n> >> 1,58% postmaster postgres [.] FunctionCall2Coll\n> >> 1,58% postmaster postgres [.] 0x000000000015f6e5\n> >> 1,17% postmaster postgres [.] LWLockRelease\n> >> 0,85% postmaster libc-2.28.so [.] __memcmp_avx2_movbe\n> >> 0,64% postmaster postgres [.] 0x00000000003e4233\n> >> 0,54% postmaster postgres [.] hash_bytes\n> >> 0,53% postmaster postgres [.] 0x0000000000306fbb\n> >> 0,53% postmaster postgres [.] LWLockAcquire\n> >> 0,42% postmaster postgres [.] 0x00000000003c1c6f\n> >> 0,42% postmaster postgres [.] 
_bt_compare\n> >>\n> >\n> > It looks so memoization allocate lot of memory - maybe there are some\n> temporal memory leak\n>\n> Memoization doesn't leak memory any more than hash tables do; so I\n> doubt that that is the issue.\n>\n> > Performance counter stats for process id '4004464':\n> >\n> > 84,26 msec task-clock # 0,012 CPUs\n> utilized\n> > 3 context-switches # 0,036 K/sec\n> > 0 cpu-migrations # 0,000 K/sec\n> > 19 page-faults # 0,225 K/sec\n> > 0 cycles # 0,000 GHz\n> > 106 873 995 instructions\n> > 20 225 431 branches # 240,026 M/sec\n> > 348 834 branch-misses # 1,72% of all\n> branches\n> >\n> > 7,106142051 seconds time elapsed\n> >\n>\n> Assuming the above was for the fast query\n>\n> > Performance counter stats for process id '4004464':\n> >\n> > 1 116,97 msec task-clock # 0,214 CPUs\n> utilized\n> > 4 context-switches # 0,004 K/sec\n> > 0 cpu-migrations # 0,000 K/sec\n> > 99 349 page-faults # 0,089 M/sec\n> > 0 cycles # 0,000 GHz\n> > 478 842 411 instructions\n> > 89 495 015 branches # 80,123 M/sec\n> > 1 014 763 branch-misses # 1,13% of all\n> branches\n> >\n> > 5,221116331 seconds time elapsed\n>\n> ... and this for the slow one:\n>\n> It seems like this system is actively swapping memory; which has a\n> negative impact on your system. This seems to be indicated by the high\n> amount of page faults and the high amount of time spent in the kernel\n> (as per the perf report one mail earlier). Maybe too much (work)memory\n> was assigned and the machine you're running on doesn't have that\n> amount of memory left?\n>\n\nThis computer has 354GB RAM, and there is 63GB RAM free (unused memory)\n\n\n\n> Either way, seeing that so much time is spent in the kernel I don't\n> think that PostgreSQL is the main/only source of the slow query here,\n> so I don't think pgsql-hackers is the right place to continue with\n> this conversation.\n>\n\nI can see this issue only when Memoize is enabled. So it looks like a\nPostgres issue. 
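A rough back-of-the-envelope on those perf numbers (assuming 4 KiB pages here, the actual page size on this machine is an assumption):

```python
# Back-of-the-envelope: memory implied by the slow run's page-fault count.
# Assumes 4 KiB pages; the machine's real page size is an assumption.
page_faults = 99_349           # from the slow run's perf stat above
page_size_bytes = 4096
touched_mib = page_faults * page_size_bytes / (1024 ** 2)
print(round(touched_mib, 1))   # roughly 388 MiB of freshly touched memory
```

That is in the same ballpark as a large hash table being allocated, zeroed and freed once per execution.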
400MB of work mem is not too much.\n\n>\n> Regards,\n>\n> Matthias\n>\n>\n> PS. Maybe next time start off in\n> https://www.postgresql.org/list/pgsql-performance/ if you have\n> performance issues with unknown origin.\n> The wiki also has some nice tips to debug performance issues:\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>",
"msg_date": "Mon, 2 May 2022 16:56:16 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Mon, 2 May 2022 at 21:00, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I found a query that is significantly slower with more memory\n\nCan you clarify what you mean here? More memory was installed on the\nmachine? or work_mem was increased? or?\n\n> plan 1 - fast https://explain.depesz.com/s/XM1f\n>\n> plan 2 - slow https://explain.depesz.com/s/2rBw\n\nIf it was work_mem you increased, it seems strange that the plan would\nswitch over to using a Nested Loop / Memoize plan. Only 91 rows are\nestimated on the outer side of the join. It's hard to imagine that\nwork_mem was so low that the Memoize costing code thought there would\never be cache evictions.\n\n> Strange - the time of last row is +/- same, but execution time is 10x worse\n>\n> It looks like slow environment cleaning\n\nCan you also show EXPLAIN for the Memoize plan without ANALYZE?\n\nDoes the slowness present every time that plan is executed?\n\nCan you show the EXPLAIN ANALYZE of the nested loop plan with\nenable_memoize = off? You may need to disable hash and merge join.\n\nDavid\n\n\n",
"msg_date": "Tue, 3 May 2022 09:48:24 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Mon, 2 May 2022 at 21:00, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> I found a query that is significantly slower with more memory\n\n> If it was work_mem you increased, it seems strange that the plan would\n> switch over to using a Nested Loop / Memoize plan.\n\nYeah, there's something unexplained there.\n\nI think that the most probable explanation for the symptoms is that\ncost_memoize_rescan is computing some insane value for est_entries,\ncausing ExecInitMemoize to allocate-and-zero a huge hash table,\nwhich ExecEndMemoize then frees again. Neither of those steps\ngets counted into any plan node's runtime, but EXPLAIN's total\nexecution time will include them. An insane value for est_entries\ncould perhaps go along with a cost misestimate that convinces the\nplanner to include the memoize even though it seems pointless.\n\nI spent some time studying cost_memoize_rescan, and the only\nconclusions I arrived at were that the variable names are poorly\nchosen and the comments are unhelpful. For instance, one would\nthink that est_entry_bytes is the expected size of one cache entry,\nbut it seems to actually be the total space that would be occupied\nif the whole input relation were loaded into the cache. And\nthe est_cache_entries computation seems nonsensical; if it does\nmake sense, the comment sure doesn't illuminate why. So I am\nquite prepared to buy into the idea that cost_memoize_rescan is\nproducing bogus answers, but it's hard to tell what it's coming out\nwith in this example. Too bad EXPLAIN doesn't print est_entries.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 May 2022 19:02:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Mon, May 2, 2022 at 4:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Mon, 2 May 2022 at 21:00, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >> I found a query that is significantly slower with more memory\n>\n> > If it was work_mem you increased, it seems strange that the plan would\n> > switch over to using a Nested Loop / Memoize plan.\n>\n> Yeah, there's something unexplained there.\n>\n> I spent some time studying cost_memoize_rescan, and the only\n> conclusions I arrived at were that the variable names are poorly\n> chosen and the comments are unhelpful. For instance, one would\n> think that est_entry_bytes is the expected size of one cache entry,\n> but it seems to actually be the total space that would be occupied\n> if the whole input relation were loaded into the cache.\n\n And\n> the est_cache_entries computation seems nonsensical; if it does\n> make sense, the comment sure doesn't illuminate why.\n\n\nMy take on this is that a cache entry is keyed by a parameterization and\nany given entry can have, at most, every tuple saved into it (hence the\ncomputation of tuples*per-tuple-size). So the maximum number of hash keys\nis the total available memory divided by input relation size. This upper\nbound is stored in est_cache_entries. If the number of unique\nparameterizations expected (at worst one-per-call) is less than this we use\nthat value and never evict. If it is more we use the est_cache_entries and\nplan to evict.\n\nWhat I'm expecting to find but don't see is that by definition each unique\nparameterization must positively match a unique subset of the input\nrelation tuples. 
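To make that upper bound concrete, a throwaway sketch (the numbers are invented; only the min() shape mirrors my reading of the code):

```python
# Sketch of the est_entries bound as I read cost_memoize_rescan;
# all concrete numbers here are invented for illustration.
def est_entries_sketch(hash_mem_bytes, est_entry_bytes, ndistinct):
    # upper bound: how many whole input-relation-sized entries fit in memory
    est_cache_entries = hash_mem_bytes // est_entry_bytes
    # if every distinct parameterization fits, we never evict
    return min(est_cache_entries, ndistinct)

# plenty of memory: bounded by ndistinct (91), no evictions expected
print(est_entries_sketch(70 * 1024 * 1024, 512 * 1024, 91))   # 91
# tight memory: bounded by what fits (70), evictions expected
print(est_entries_sketch(70 * 1024 * 1024, 1024 * 1024, 91))  # 70
```
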
If we remember only those tuples that matched then at no\npoint should the total memory for the hash table exceed the size of the\ninput relation.\n\nNow, I'm not completely confident the cache only holds matched tuples...but\nif so:\n\n From that the mpath->est_entries should be \"min(hash_mem_bytes /\nest_entry_bytes, 1.0) * ndistinct\"\ni.e., all groups or a fractional subset based upon available hash memory\n\nThen:\n\nndistinct = estimate_num_groups() || calls\nretention_ratio = min(hash_mem_bytes / est_entry_bytes, 1.0)\nest_entries = retention_ratio * ndistinct\nevict_ratio = 1.0 - retention_ratio\n\nhit_ratio = (est_entries / ndistinct) - (ndistinct / calls) || clamp to 0.0\nI don't understand the adjustment factor ndistinct/calls\n\nevictions total cost adjustment also assumes we are evicting all tuples as\nopposed to tuples/est_entries\n\nThis is a \"rescan\" so aside from cache management isn't the cost of\noriginally populating the cache already accounted for elsewhere?\n\nDavid J.",
"msg_date": "Mon, 2 May 2022 18:43:40 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Tue, 3 May 2022 at 11:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Mon, 2 May 2022 at 21:00, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> >> I found a query that is significantly slower with more memory\n>\n> > If it was work_mem you increased, it seems strange that the plan would\n> > switch over to using a Nested Loop / Memoize plan.\n>\n> Yeah, there's something unexplained there.\n>\n> I think that the most probable explanation for the symptoms is that\n> cost_memoize_rescan is computing some insane value for est_entries,\n> causing ExecInitMemoize to allocate-and-zero a huge hash table,\n> which ExecEndMemoize then frees again. Neither of those steps\n> gets counted into any plan node's runtime, but EXPLAIN's total\n> execution time will include them. An insane value for est_entries\n> could perhaps go along with a cost misestimate that convinces the\n> planner to include the memoize even though it seems pointless.\n\nThat seems pretty unlikely to me. est_entries is based on the minimum\nvalue of the expected number of total cache entries and the ndistinct\nvalue. ndistinct cannot be insane here as ndistinct is never going to\nbe higher than the number of calls, which is the row estimate from the\nouter side of the join. That's 91 in both cases here. As far as I\ncan see, that's just going to make a table of 128 buckets.\n\nSee estimate_num_groups_incremental() at:\n\n/*\n* We don't ever want to return an estimate of zero groups, as that tends\n* to lead to division-by-zero and other unpleasantness. The input_rows\n* estimate is usually already at least 1, but clamp it just in case it\n* isn't.\n*/\ninput_rows = clamp_row_est(input_rows);\n\n\n> I spent some time studying cost_memoize_rescan, and the only\n> conclusions I arrived at were that the variable names are poorly\n> chosen and the comments are unhelpful. 
For instance, one would\n> think that est_entry_bytes is the expected size of one cache entry,\n> but it seems to actually be the total space that would be occupied\n> if the whole input relation were loaded into the cache.\n\nI think you've misunderstood. It *is* the estimated size of a single\nentry. I think you might be going wrong in assuming \"tuples\" is the\nexpected tuples from all rescans of the inner side of the join. It's\nactually from a single scan. I can add a comment there to help make\nthat clear.\n\n> And\n> the est_cache_entries computation seems nonsensical; if it does\n> make sense, the comment sure doesn't illuminate why. So I am\n> quite prepared to buy into the idea that cost_memoize_rescan is\n> producing bogus answers, but it's hard to tell what it's coming out\n> with in this example. Too bad EXPLAIN doesn't print est_entries.\n\nI'm wishing I put the initial hash table size and the final hash table\nsize in EXPLAIN + EXPLAIN ANALYZE now. Perhaps it's not too late for\nv15 to do that so that it might help us figure things out in the\nfuture.\n\nI'm open to making improvements to the comments in that area. I do\nremember spending quite a bit of time trying to make things as clear\nas possible as it is fairly complex what's going on there.\n\nDavid\n\n\n",
"msg_date": "Tue, 3 May 2022 14:13:18 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Tue, 3 May 2022 at 13:43, David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> hit_ratio = (est_entries / ndistinct) - (ndistinct / calls) || clamp to 0.0\n> I don't understand the adjustment factor ndistinct/calls\n\nI've attached a spreadsheet showing you the impact of subtracting\n(ndistinct / calls). What this is correcting for is the fact that the\nfirst scan from each unique value is a cache miss. The more calls we\nhave, the more hits we'll get. If there was only 1 call per distinct\nvalue then there'd never be any hits. Without subtracting (ndistinct /\ncalls) and assuming there's space in the cache for each ndistinct\nvalue, we'd assume 100% cache hit ratio if calls == ndistinct. What\nwe should assume in that case is a 0% hit ratio as the first scan for\neach distinct parameter must always be a miss as we've never had a\nchance to cache any tuples for it yet.\n\n> This is a \"rescan\" so aside from cache management isn't the cost of originally populating the cache already accounted for elsewhere?\n\nThe cost of the first scan is calculated in create_memoize_path().\nSince the first scan will always be a cache miss, the code there just\nadds some cache management surcharges. Namely:\n\n/*\n* Add a small additional charge for caching the first entry. All the\n* harder calculations for rescans are performed in cost_memoize_rescan().\n*/\npathnode->path.startup_cost = subpath->startup_cost + cpu_tuple_cost;\npathnode->path.total_cost = subpath->total_cost + cpu_tuple_cost;\n\nDavid",
"msg_date": "Tue, 3 May 2022 14:30:35 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Mon, May 2, 2022 at 7:13 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 3 May 2022 at 11:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > David Rowley <dgrowleyml@gmail.com> writes:\n> > > On Mon, 2 May 2022 at 21:00, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > >> I found a query that is significantly slower with more memory\n> >\n> > > If it was work_mem you increased, it seems strange that the plan would\n> > > switch over to using a Nested Loop / Memoize plan.\n> >\n> > Yeah, there's something unexplained there.\n> >\n> > I think that the most probable explanation for the symptoms is that\n> > cost_memoize_rescan is computing some insane value for est_entries,\n> > causing ExecInitMemoize to allocate-and-zero a huge hash table,\n> > which ExecEndMemoize then frees again. Neither of those steps\n> > gets counted into any plan node's runtime, but EXPLAIN's total\n> > execution time will include them. An insane value for est_entries\n> > could perhaps go along with a cost misestimate that convinces the\n> > planner to include the memoize even though it seems pointless.\n>\n> That seems pretty unlikely to me. est_entries is based on the minimum\n> value of the expected number of total cache entries and the ndistinct\n> value. ndistinct cannot be insane here as ndistinct is never going to\n> be higher than the number of calls, which is the row estimate from the\n> outer side of the join. That's 91 in both cases here. As far as I\n> can see, that's just going to make a table of 128 buckets.\n>\n\nIf est_entries goes to zero due to hash_mem_bytes/est_entry_bytes < 1\n(hence floor takes it to zero) the executor will use a size value of 1024\ninstead in build_hash_table.\n\nThat seems unlikely but there is no data to support or refute it.\n\n\n> I'm open to making improvements to the comments in that area. 
I do\n> remember spending quite a bit of time trying to make things as clear\n> as possible as it is fairly complex what's going on there.\n>\n>\nA few more intermediate calculation variables, along with descriptions,\nwould help.\n\ne.g., min(est_cache_entries, ndistinct) is repeated twice after its initial\ndefinition.\n\nretention_ratio per my other reply\n\nThe (ndistinct/calls) part of hit_ratio being described specifically.\n\nDavid J.",
"msg_date": "Mon, 2 May 2022 19:31:16 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Mon, May 2, 2022 at 7:30 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 3 May 2022 at 13:43, David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> > hit_ratio = (est_entries / ndistinct) - (ndistinct / calls) || clamp to\n> 0.0\n> > I don't understand the adjustment factor ndistinct/calls\n>\n> I've attached a spreadsheet showing you the impact of subtracting\n> (ndistinct / calls). What this is correcting for is the fact that the\n> first scan from each unique value is a cache miss. The more calls we\n> have, the more hits we'll get. If there was only 1 call per distinct\n> value then there'd never be any hits. Without subtracting (ndistinct /\n> calls) and assuming there's space in the cache for each ndistinct\n> value, we'd assume 100% cache hit ratio if calls == ndistinct. What\n> we should assume in that case is a 0% hit ratio as the first scan for\n> each distinct parameter must always be a miss as we've never had a\n> chance to cache any tuples for it yet.\n>\n>\nThank you. I understand the theory and agree with it - but the math\ndoesn't seem to be working out.\n\nPlugging in:\nn = 2,000\ne = 500\nc = 10,000\n\nproper = 5%\nincorrect = 25%\n\nBut of the 10,000 calls we will receive, the first 2,000 will be\nmisses while 2,000 of the remaining 8,000 will be hits, due to sharing\n2,000 distinct groups among the available inventory of 500 (25% of 8,000 is\n2,000). 2,000 hits in 10,000 calls yields 20%.\n\nI believe the correct formula to be:\n\n((calls - ndistinct) / calls) * (est_entries / ndistinct) = hit_ratio\n.80 * .25 = .20\n\nFirst we recognize that our hit ratio can be at most c-n/c since n misses\nare guaranteed. We take that ratio and scale it by our cache efficiency\nsince of the remaining hits that fraction will turn into misses due to the\nrelevant cache not being in memory.\n\nDavid J.",
"msg_date": "Mon, 2 May 2022 20:21:59 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Tue, 3 May 2022 at 15:22, David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> Plugging in:\n> n = 2,000\n> e = 500\n> c = 10,000\n>\n> proper = 5%\n> incorrect = 25%\n>\n> But of the 10,000 calls we will receive, the first 2,000 will be misses while 2,000 of the remaining 8,000 will be hits, due to sharing 2,000 distinct groups among the available inventory of 500 (25% of 8,000 is 2,000). 2,000 hits in 10,000 calls yields 20%.\n>\n> I believe the correct formula to be:\n>\n> ((calls - ndistinct) / calls) * (est_entries / ndistinct) = hit_ratio\n> .80 * .25 = .20\n\nI think you're correct here. The formula should be that. However,\ntwo things; 1) this being incorrect is not the cause of the original\nproblem reported on this thread, and 2) There's just no way we could\nconsider fixing this in v15, let alone back patch it to v14.\n\nMaybe we should open a new thread about this and put an entry in the\nfirst CF for v16 under bugs and come back to it after we branch.\nThinking the cache hit ratio is lower than it actually is going to be\nwill reduce the chances of the planner switching to a Nested Loop /\nMemoize plan vs a Hash or Merge Join plan.\n\nI was already fairly concerned that Memoize could cause performance\nregressions when the ndistinct value or expected cache entry size is\nunderestimated or the outer side rows are overestimated. What I've\ngot to calculate the cache hit ratio does seem incorrect given what\nyou're showing, however it does add an element of pessimism and\nreduces the chances of a bad plan being picked when work_mem is too\nlow to cache all entries. 
Swapping it out for your formula seems like\nit would increase the chances of a Memoize plan being picked when the\nrow, ndistinct and cache entry size estimates are correct, however, it\ncould also increase the chance of a bad plan when being picked in\ncases where the estimates are incorrect.\n\nMy problem with changing this now would be that we already often\nperform Nested Loop joins when a Hash or Merge join would be a better\noption. I'd hate to take us in a direction where we make that problem\neven worse.\n\nDavid\n\n\n",
"msg_date": "Tue, 3 May 2022 16:04:38 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "po 2. 5. 2022 v 23:48 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\r\n\r\n> On Mon, 2 May 2022 at 21:00, Pavel Stehule <pavel.stehule@gmail.com>\r\n> wrote:\r\n> > I found a query that is significantly slower with more memory\r\n>\r\n> Can you clarify what you mean here? More memory was installed on the\r\n> machine? or work_mem was increased? or?\r\n>\r\n> > plan 1 - fast https://explain.depesz.com/s/XM1f\r\n> >\r\n> > plan 2 - slow https://explain.depesz.com/s/2rBw\r\n>\r\n> If it was work_mem you increased, it seems strange that the plan would\r\n> switch over to using a Nested Loop / Memoize plan. Only 91 rows are\r\n> estimated on the outer side of the join. It's hard to imagine that\r\n> work_mem was so low that the Memoize costing code thought there would\r\n> ever be cache evictions.\r\n>\r\n> > Strange - the time of last row is +/- same, but execution time is 10x\r\n> worse\r\n> >\r\n> > It looks like slow environment cleaning\r\n>\r\n> Can you also show EXPLAIN for the Memoize plan without ANALYZE?\r\n>\r\n\r\nyes - it is strange - it is just slow without execution\r\n\r\n ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│\r\n QUERY PLAN\r\n │\r\n├──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\r\n│ Nested Loop Anti Join (cost=59.62..3168.15 rows=1 width=16)\r\n\r\n │\r\n│ -> Nested Loop Anti Join (cost=59.34..3165.24 rows=1 width=16)\r\n\r\n │\r\n│ -> Nested Loop Semi Join (cost=58.48..3133.09 rows=1 width=16)\r\n\r\n │\r\n│ -> Bitmap Heap Scan on item itembo0_ (cost=57.34..2708.22\r\nrows=11 width=16)\r\n │\r\n│ Recheck Cond: ((ending_time IS NULL) OR ((status_id =\r\nANY ('{1,4,5}'::bigint[])) AND 
(CURRENT_TIMESTAMP < ending_time) AND\r\n(starting_time <= CURRENT_TIMESTAMP))) │\r\n│ Filter: ((to_expose_date IS NULL) AND (status_id =\r\nANY ('{1,4,5}'::bigint[])) AND (starting_time <= CURRENT_TIMESTAMP) AND\r\n((ending_time IS NULL) OR (CURRENT_TIMESTAMP < ending_time))) │\r\n│ -> BitmapOr (cost=57.34..57.34 rows=1751 width=0)\r\n\r\n │\r\n│ -> Bitmap Index Scan on stehule354\r\n (cost=0.00..2.08 rows=1 width=0)\r\n │\r\n│ Index Cond: (ending_time IS NULL)\r\n\r\n │\r\n│ -> Bitmap Index Scan on stehule1010\r\n (cost=0.00..55.26 rows=1751 width=0)\r\n │\r\n│ Index Cond: ((status_id = ANY\r\n('{1,4,5}'::bigint[])) AND (ending_time > CURRENT_TIMESTAMP) AND\r\n(starting_time <= CURRENT_TIMESTAMP))\r\n │\r\n│ -> Nested Loop (cost=1.14..37.71 rows=91 width=8)\r\n\r\n │\r\n│ -> Index Only Scan using uq_isi_itemid_itemimageid\r\non item_share_image itemsharei2__1 (cost=0.57..3.80 rows=91 width=16)\r\n │\r\n│ Index Cond: (item_id = itembo0_.id)\r\n\r\n │\r\n│ -> Memoize (cost=0.57..2.09 rows=1 width=8)\r\n\r\n │\r\n│ Cache Key: itemsharei2__1.item_image_id\r\n\r\n │\r\n│ Cache Mode: logical\r\n\r\n │\r\n│ -> Index Only Scan using pk_item_image on\r\nitem_image itemimageb3__1 (cost=0.56..2.08 rows=1 width=8)\r\n │\r\n│ Index Cond: (id =\r\nitemsharei2__1.item_image_id)\r\n\r\n │\r\n│ -> Nested Loop (cost=0.85..32.14 rows=1 width=8)\r\n\r\n │\r\n│ -> Index Only Scan using uq_isi_itemid_itemimageid on\r\nitem_share_image itemsharei2_ (cost=0.57..3.80 rows=91 width=16)\r\n │\r\n│ Index Cond: (item_id = itembo0_.id)\r\n\r\n │\r\n│ -> Memoize (cost=0.29..1.72 rows=1 width=8)\r\n\r\n │\r\n│ Cache Key: itemsharei2_.item_image_id\r\n\r\n │\r\n│ Cache Mode: logical\r\n\r\n │\r\n│ -> Index Only Scan using stehule3001 on item_image\r\nitemimageb3_ (cost=0.28..1.71 rows=1 width=8)\r\n │\r\n│ Index Cond: (id = itemsharei2_.item_image_id)\r\n\r\n │\r\n│ -> Index Only Scan using ixfk_ima_itemid on item_missing_attribute\r\nitemmissin1_ (cost=0.28..1.66 rows=1 width=8)\r\n │\r\n│ 
Index Cond: (item_id = itembo0_.id)\r\n\r\n │\r\n└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(29 řádek)\r\n\r\nČas: 420,392 ms\r\n\r\n\r\n\r\n> Does the slowness present every time that plan is executed?\r\n>\r\n\r\nlooks yes\r\n\r\n\r\n>\r\n> Can you show the EXPLAIN ANALYZE of the nested loop plan with\r\n> enable_memoize = off? You may ned to disable hash and merge join.\r\n>\r\n\r\n┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│\r\n QUERY PLAN\r\n │\r\n├─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\r\n│ Nested Loop Anti Join (cost=1093.22..4488.89 rows=1 width=16) (actual\r\ntime=5.723..60.470 rows=13 loops=1)\r\n │\r\n│ -> Nested Loop Anti Join (cost=1092.94..4485.97 rows=1 width=16)\r\n(actual time=5.165..60.368 rows=41 loops=1)\r\n │\r\n│ -> Gather (cost=1001.70..4391.26 rows=1 width=16) (actual\r\ntime=1.909..56.913 rows=41 loops=1)\r\n │\r\n│ Workers Planned: 2\r\n\r\n │\r\n│ Workers Launched: 2\r\n\r\n │\r\n│ -> Nested Loop Semi Join (cost=1.70..3391.16 rows=1\r\nwidth=16) (actual time=22.032..39.253 rows=14 loops=3)\r\n │\r\n│ -> Parallel Index Only Scan using stehule1010 on\r\nitem itembo0_ (cost=0.57..2422.83 rows=5 width=16) (actual\r\ntime=21.785..38.851 rows=14 loops=3) │\r\n│ Index Cond: ((status_id = ANY\r\n('{1,4,5}'::bigint[])) AND (starting_time <= CURRENT_TIMESTAMP))\r\n │\r\n│ Filter: ((to_expose_date IS NULL) AND\r\n((ending_time IS NULL) OR (CURRENT_TIMESTAMP < ending_time)))\r\n │\r\n│ Rows Removed by Filter: 1589\r\n\r\n │\r\n│ Heap Fetches: 21\r\n\r\n │\r\n│ -> Nested Loop 
(cost=1.13..192.76 rows=91 width=8)\r\n(actual time=0.029..0.029 rows=1 loops=41)\r\n │\r\n│ -> Index Only Scan using\r\nuq_isi_itemid_itemimageid on item_share_image itemsharei2__1\r\n (cost=0.57..3.80 rows=91 width=16) (actual time=0.015..0.015 rows=1\r\nloops=41) │\r\n│ Index Cond: (item_id = itembo0_.id)\r\n\r\n │\r\n│ Heap Fetches: 2\r\n\r\n │\r\n│ -> Index Only Scan using pk_item_image on\r\nitem_image itemimageb3__1 (cost=0.56..2.08 rows=1 width=8) (actual\r\ntime=0.013..0.013 rows=1 loops=41) │\r\n│ Index Cond: (id =\r\nitemsharei2__1.item_image_id)\r\n │\r\n│ Heap Fetches: 2\r\n\r\n │\r\n│ -> Hash Join (cost=91.24..94.71 rows=1 width=8) (actual\r\ntime=0.084..0.084 rows=0 loops=41)\r\n │\r\n│ Hash Cond: (itemsharei2_.item_image_id = itemimageb3_.id)\r\n\r\n │\r\n│ -> Index Only Scan using uq_isi_itemid_itemimageid on\r\nitem_share_image itemsharei2_ (cost=0.57..3.80 rows=91 width=16) (actual\r\ntime=0.003..0.004 rows=6 loops=41) │\r\n│ Index Cond: (item_id = itembo0_.id)\r\n\r\n │\r\n│ Heap Fetches: 2\r\n\r\n │\r\n│ -> Hash (cost=67.41..67.41 rows=1861 width=8) (actual\r\ntime=3.213..3.214 rows=3950 loops=1)\r\n │\r\n│ Buckets: 4096 (originally 2048) Batches: 1\r\n(originally 1) Memory Usage: 187kB\r\n │\r\n│ -> Index Only Scan using stehule3001 on item_image\r\nitemimageb3_ (cost=0.28..67.41 rows=1861 width=8) (actual\r\ntime=0.029..2.479 rows=3950 loops=1) │\r\n│ Heap Fetches: 2203\r\n\r\n │\r\n│ -> Index Only Scan using ixfk_ima_itemid on item_missing_attribute\r\nitemmissin1_ (cost=0.28..1.66 rows=1 width=8) (actual time=0.002..0.002\r\nrows=1 loops=41) │\r\n│ Index Cond: (item_id = itembo0_.id)\r\n\r\n │\r\n│ Heap Fetches: 0\r\n\r\n │\r\n│ Planning Time: 1.471 ms\r\n\r\n │\r\n│ Execution Time: 60.570 ms\r\n\r\n │\r\n└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(32 řádek)\r\n\r\nČas: 62,982 
ms\r\n\r\n\r\n>\r\n> David\r\n>\r\n",
"msg_date": "Tue, 3 May 2022 06:09:13 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "út 3. 5. 2022 v 6:09 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\r\nnapsal:\r\n\r\n>\r\n>\r\n> po 2. 5. 2022 v 23:48 odesílatel David Rowley <dgrowleyml@gmail.com>\r\n> napsal:\r\n>\r\n>> On Mon, 2 May 2022 at 21:00, Pavel Stehule <pavel.stehule@gmail.com>\r\n>> wrote:\r\n>> > I found a query that is significantly slower with more memory\r\n>>\r\n>> Can you clarify what you mean here? More memory was installed on the\r\n>> machine? or work_mem was increased? or?\r\n>>\r\n>> > plan 1 - fast https://explain.depesz.com/s/XM1f\r\n>> >\r\n>> > plan 2 - slow https://explain.depesz.com/s/2rBw\r\n>>\r\n>> If it was work_mem you increased, it seems strange that the plan would\r\n>> switch over to using a Nested Loop / Memoize plan. Only 91 rows are\r\n>> estimated on the outer side of the join. It's hard to imagine that\r\n>> work_mem was so low that the Memoize costing code thought there would\r\n>> ever be cache evictions.\r\n>>\r\n>> > Strange - the time of last row is +/- same, but execution time is 10x\r\n>> worse\r\n>> >\r\n>> > It looks like slow environment cleaning\r\n>>\r\n>> Can you also show EXPLAIN for the Memoize plan without ANALYZE?\r\n>>\r\n>\r\n> yes - it is strange - it is just slow without execution\r\n>\r\n>\r\n> ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> │\r\n> QUERY PLAN\r\n> │\r\n>\r\n> ├──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\r\n> │ Nested Loop Anti Join (cost=59.62..3168.15 rows=1 width=16)\r\n>\r\n> │\r\n> │ -> Nested Loop Anti Join (cost=59.34..3165.24 rows=1 width=16)\r\n>\r\n> │\r\n> │ -> Nested Loop Semi Join (cost=58.48..3133.09 rows=1 width=16)\r\n>\r\n> │\r\n> │ -> Bitmap Heap Scan on 
item itembo0_\r\n> (cost=57.34..2708.22 rows=11 width=16)\r\n>\r\n> │\r\n> │ Recheck Cond: ((ending_time IS NULL) OR ((status_id\r\n> = ANY ('{1,4,5}'::bigint[])) AND (CURRENT_TIMESTAMP < ending_time) AND\r\n> (starting_time <= CURRENT_TIMESTAMP))) │\r\n> │ Filter: ((to_expose_date IS NULL) AND (status_id =\r\n> ANY ('{1,4,5}'::bigint[])) AND (starting_time <= CURRENT_TIMESTAMP) AND\r\n> ((ending_time IS NULL) OR (CURRENT_TIMESTAMP < ending_time))) │\r\n> │ -> BitmapOr (cost=57.34..57.34 rows=1751 width=0)\r\n>\r\n> │\r\n> │ -> Bitmap Index Scan on stehule354\r\n> (cost=0.00..2.08 rows=1 width=0)\r\n> │\r\n> │ Index Cond: (ending_time IS NULL)\r\n>\r\n> │\r\n> │ -> Bitmap Index Scan on stehule1010\r\n> (cost=0.00..55.26 rows=1751 width=0)\r\n> │\r\n> │ Index Cond: ((status_id = ANY\r\n> ('{1,4,5}'::bigint[])) AND (ending_time > CURRENT_TIMESTAMP) AND\r\n> (starting_time <= CURRENT_TIMESTAMP))\r\n> │\r\n> │ -> Nested Loop (cost=1.14..37.71 rows=91 width=8)\r\n>\r\n> │\r\n> │ -> Index Only Scan using uq_isi_itemid_itemimageid\r\n> on item_share_image itemsharei2__1 (cost=0.57..3.80 rows=91 width=16)\r\n> │\r\n> │ Index Cond: (item_id = itembo0_.id)\r\n>\r\n> │\r\n> │ -> Memoize (cost=0.57..2.09 rows=1 width=8)\r\n>\r\n> │\r\n> │ Cache Key: itemsharei2__1.item_image_id\r\n>\r\n> │\r\n> │ Cache Mode: logical\r\n>\r\n> │\r\n> │ -> Index Only Scan using pk_item_image on\r\n> item_image itemimageb3__1 (cost=0.56..2.08 rows=1 width=8)\r\n> │\r\n> │ Index Cond: (id =\r\n> itemsharei2__1.item_image_id)\r\n>\r\n> │\r\n> │ -> Nested Loop (cost=0.85..32.14 rows=1 width=8)\r\n>\r\n> │\r\n> │ -> Index Only Scan using uq_isi_itemid_itemimageid on\r\n> item_share_image itemsharei2_ (cost=0.57..3.80 rows=91 width=16)\r\n> │\r\n> │ Index Cond: (item_id = itembo0_.id)\r\n>\r\n> │\r\n> │ -> Memoize (cost=0.29..1.72 rows=1 width=8)\r\n>\r\n> │\r\n> │ Cache Key: itemsharei2_.item_image_id\r\n>\r\n> │\r\n> │ Cache Mode: logical\r\n>\r\n> │\r\n> │ -> Index Only Scan using stehule3001 on 
item_image\r\n> itemimageb3_ (cost=0.28..1.71 rows=1 width=8)\r\n> │\r\n> │ Index Cond: (id = itemsharei2_.item_image_id)\r\n>\r\n> │\r\n> │ -> Index Only Scan using ixfk_ima_itemid on item_missing_attribute\r\n> itemmissin1_ (cost=0.28..1.66 rows=1 width=8)\r\n> │\r\n> │ Index Cond: (item_id = itembo0_.id)\r\n>\r\n> │\r\n>\r\n> └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> (29 řádek)\r\n>\r\n> Čas: 420,392 ms\r\n>\r\n\r\nthere is really something strange (see attached file). Looks so this issue\r\nis much more related to planning time than execution time\r\n\r\n\r\n\r\n>\r\n>\r\n>\r\n>> Does the slowness present every time that plan is executed?\r\n>>\r\n>\r\n> looks yes\r\n>\r\n>\r\n>>\r\n>> Can you show the EXPLAIN ANALYZE of the nested loop plan with\r\n>> enable_memoize = off? You may ned to disable hash and merge join.\r\n>>\r\n>\r\n>\r\n> ┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n> │\r\n> QUERY PLAN\r\n> │\r\n>\r\n> ├─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\r\n> │ Nested Loop Anti Join (cost=1093.22..4488.89 rows=1 width=16) (actual\r\n> time=5.723..60.470 rows=13 loops=1)\r\n> │\r\n> │ -> Nested Loop Anti Join (cost=1092.94..4485.97 rows=1 width=16)\r\n> (actual time=5.165..60.368 rows=41 loops=1)\r\n> │\r\n> │ -> Gather (cost=1001.70..4391.26 rows=1 width=16) (actual\r\n> time=1.909..56.913 rows=41 loops=1)\r\n> │\r\n> │ Workers Planned: 2\r\n>\r\n> │\r\n> │ Workers Launched: 2\r\n>\r\n> │\r\n> │ -> Nested Loop Semi Join (cost=1.70..3391.16 rows=1\r\n> width=16) (actual 
time=22.032..39.253 rows=14 loops=3)\r\n> │\r\n> │ -> Parallel Index Only Scan using stehule1010 on\r\n> item itembo0_ (cost=0.57..2422.83 rows=5 width=16) (actual\r\n> time=21.785..38.851 rows=14 loops=3) │\r\n> │ Index Cond: ((status_id = ANY\r\n> ('{1,4,5}'::bigint[])) AND (starting_time <= CURRENT_TIMESTAMP))\r\n> │\r\n> │ Filter: ((to_expose_date IS NULL) AND\r\n> ((ending_time IS NULL) OR (CURRENT_TIMESTAMP < ending_time)))\r\n> │\r\n> │ Rows Removed by Filter: 1589\r\n>\r\n> │\r\n> │ Heap Fetches: 21\r\n>\r\n> │\r\n> │ -> Nested Loop (cost=1.13..192.76 rows=91 width=8)\r\n> (actual time=0.029..0.029 rows=1 loops=41)\r\n> │\r\n> │ -> Index Only Scan using\r\n> uq_isi_itemid_itemimageid on item_share_image itemsharei2__1\r\n> (cost=0.57..3.80 rows=91 width=16) (actual time=0.015..0.015 rows=1\r\n> loops=41) │\r\n> │ Index Cond: (item_id = itembo0_.id)\r\n>\r\n> │\r\n> │ Heap Fetches: 2\r\n>\r\n> │\r\n> │ -> Index Only Scan using pk_item_image on\r\n> item_image itemimageb3__1 (cost=0.56..2.08 rows=1 width=8) (actual\r\n> time=0.013..0.013 rows=1 loops=41) │\r\n> │ Index Cond: (id =\r\n> itemsharei2__1.item_image_id)\r\n> │\r\n> │ Heap Fetches: 2\r\n>\r\n> │\r\n> │ -> Hash Join (cost=91.24..94.71 rows=1 width=8) (actual\r\n> time=0.084..0.084 rows=0 loops=41)\r\n> │\r\n> │ Hash Cond: (itemsharei2_.item_image_id = itemimageb3_.id)\r\n>\r\n> │\r\n> │ -> Index Only Scan using uq_isi_itemid_itemimageid on\r\n> item_share_image itemsharei2_ (cost=0.57..3.80 rows=91 width=16) (actual\r\n> time=0.003..0.004 rows=6 loops=41) │\r\n> │ Index Cond: (item_id = itembo0_.id)\r\n>\r\n> │\r\n> │ Heap Fetches: 2\r\n>\r\n> │\r\n> │ -> Hash (cost=67.41..67.41 rows=1861 width=8) (actual\r\n> time=3.213..3.214 rows=3950 loops=1)\r\n> │\r\n> │ Buckets: 4096 (originally 2048) Batches: 1\r\n> (originally 1) Memory Usage: 187kB\r\n> │\r\n> │ -> Index Only Scan using stehule3001 on item_image\r\n> itemimageb3_ (cost=0.28..67.41 rows=1861 width=8) (actual\r\n> time=0.029..2.479 rows=3950 
loops=1) │\r\n> │ Heap Fetches: 2203\r\n>\r\n> │\r\n> │ -> Index Only Scan using ixfk_ima_itemid on item_missing_attribute\r\n> itemmissin1_ (cost=0.28..1.66 rows=1 width=8) (actual time=0.002..0.002\r\n> rows=1 loops=41) │\r\n> │ Index Cond: (item_id = itembo0_.id)\r\n>\r\n> │\r\n> │ Heap Fetches: 0\r\n>\r\n> │\r\n> │ Planning Time: 1.471 ms\r\n>\r\n> │\r\n> │ Execution Time: 60.570 ms\r\n>\r\n> │\r\n>\r\n> └─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n> (32 řádek)\r\n>\r\n> Čas: 62,982 ms\r\n>\r\n>\r\n>>\r\n>> David\r\n>>\r\n>",
"msg_date": "Tue, 3 May 2022 06:16:54 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> there is really something strange (see attached file). Looks so this issue\n> is much more related to planning time than execution time\n\nYou sure there's not something taking an exclusive lock on one of these\ntables every so often?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 May 2022 00:57:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "út 3. 5. 2022 v 6:57 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > there is really something strange (see attached file). Looks so this\n> issue\n> > is much more related to planning time than execution time\n>\n> You sure there's not something taking an exclusive lock on one of these\n> tables every so often?\n>\n\nI am almost sure, I can see this issue only every time when I set a higher\nwork mem. I don't see this issue in other cases.\n\nRegards\n\nPavel\n\n\n\n>\n> regards, tom lane\n>",
"msg_date": "Tue, 3 May 2022 07:02:09 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Tue, 3 May 2022 at 17:02, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> út 3. 5. 2022 v 6:57 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> You sure there's not something taking an exclusive lock on one of these\n>> tables every so often?\n>\n> I am almost sure, I can see this issue only every time when I set a higher work mem. I don't see this issue in other cases.\n\nhmm, I don't think the query being blocked on a table lock would cause\nthis behaviour. As far as I know, all table locks should be obtained\nbefore the timer starts for the \"Execution Time\" timer in EXPLAIN\nANALYZE. However, locks are obtained on indexes at executor startup,\nso if there was some delay in obtaining a lock on the index it would\nshow up this way. I just don't know of anything that obtains a\nconflicting lock on an index without the same conflicting lock on the\ntable that the index belongs to.\n\nI do agree that the perf report does indicate that the extra time is\ntaken due to some large amount of memory being allocated. I just can't\nquite see how that would happen in Memoize given that\nestimate_num_groups() clamps the distinct estimate as the number of\ninput rows, which is 91 in both cases in your problem query.\n\nAre you able to run the Memoize query in psql with \\watch 0.1 for a\nfew seconds while you do:\n\nperf record --call-graph dwarf --pid <pid> sleep 2\n\nthen send along the perf report.\n\nI locally hacked build_hash_table() in nodeMemoize.c to make the\nhashtable 100 million elements and I see my perf report for a trivial\nMemoize query come up as:\n\n+ 99.98% 0.00% postgres postgres [.] _start\n+ 99.98% 0.00% postgres libc.so.6 [.]\n__libc_start_main_alias_2 (inlined)\n+ 99.98% 0.00% postgres libc.so.6 [.] __libc_start_call_main\n+ 99.98% 0.00% postgres postgres [.] main\n+ 99.98% 0.00% postgres postgres [.] PostmasterMain\n+ 99.98% 0.00% postgres postgres [.] ServerLoop\n+ 99.98% 0.00% postgres postgres [.] 
BackendStartup (inlined)\n+ 99.98% 0.00% postgres postgres [.] BackendRun (inlined)\n+ 99.98% 0.00% postgres postgres [.] PostgresMain\n+ 99.98% 0.00% postgres postgres [.] exec_simple_query\n+ 99.98% 0.00% postgres postgres [.] PortalRun\n+ 99.98% 0.00% postgres postgres [.] FillPortalStore\n+ 99.98% 0.00% postgres postgres [.] PortalRunUtility\n+ 99.98% 0.00% postgres postgres [.] standard_ProcessUtility\n+ 99.98% 0.00% postgres postgres [.] ExplainQuery\n+ 99.98% 0.00% postgres postgres [.] ExplainOneQuery\n+ 99.95% 0.00% postgres postgres [.] ExplainOnePlan\n+ 87.87% 0.00% postgres postgres [.] standard_ExecutorStart\n+ 87.87% 0.00% postgres postgres [.] InitPlan (inlined)\n+ 87.87% 0.00% postgres postgres [.] ExecInitNode\n+ 87.87% 0.00% postgres postgres [.] ExecInitNestLoop\n+ 87.87% 0.00% postgres postgres [.] ExecInitMemoize\n+ 87.87% 0.00% postgres postgres [.]\nbuild_hash_table (inlined) <----\n+ 87.87% 0.00% postgres postgres [.] memoize_create (inlined)\n+ 87.87% 0.00% postgres postgres [.]\nmemoize_allocate (inlined)\n+ 87.87% 0.00% postgres postgres [.]\nMemoryContextAllocExtended\n+ 87.87% 0.00% postgres postgres [.] memset (inlined)\n\nFailing that, are you able to pg_dump these tables and load them into\na PostgreSQL instance that you can play around with and patch?\nProvided you can actually recreate the problem on that instance.\n\nDavid\n\n\n",
"msg_date": "Wed, 4 May 2022 12:14:48 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "> I do agree that the perf report does indicate that the extra time is taken due to\r\n> some large amount of memory being allocated. I just can't quite see how that\r\n> would happen in Memoize given that\r\n> estimate_num_groups() clamps the distinct estimate as the number of input\r\n> rows, which is 91 in both cases in your problem query.\r\n> \r\n> Are you able to run the Memoize query in psql with \\watch 0.1 for a few seconds\r\n> while you do:\r\n> \r\n> perf record --call-graph dwarf --pid <pid> sleep 2\r\n> \r\n> then send along the perf report.\r\n> \r\n> I locally hacked build_hash_table() in nodeMemoize.c to make the hashtable 100\r\n> million elements and I see my perf report for a trivial Memoize query come up\r\n> as:\r\n> \r\n[..]\r\n> \r\n> Failing that, are you able to pg_dump these tables and load them into a\r\n> PostgreSQL instance that you can play around with and patch?\r\n> Provided you can actually recreate the problem on that instance.\r\n> \r\n\r\n+1 to what David says, we need a reproducer. In [1] Pavel wrote that he's having a lot of clear_page_erms(), so maybe this will be a little help: I recall having similar issue having a lot of minor page faults and high %sys when raising work_mem. For me it was different issue some time ago, but it was something like build_hash_table() being used by UNION recursive calls -> BuildTupleHashTable() -> .. malloc() -> mmap64(). When mmap() is issued with MAP_ANONYMOUS the kernel will zero out the memory (more memory -> potentially bigger CPU waste visible as minor page faults; erms stands for \"Enhanced REP MOVSB/STOSB\"; this is on kernel side). 
The culprit was planner allocating something that wouldn't be used later.\r\n\r\nAdditional three ways to figure that one (all are IMHO production safe):\r\na) already mentioned perf with --call-graph dwarf -p PID\r\nb) strace -p PID -e 'mmap' # verify if mmap() NULL is not having MAP_ANONYMOUS flag, size of mmap() request will somehow match work_mem sizing\r\nc) gdb -p PID and then breakpoint for mmap and verify each mmap() # check MAP_ANONYMOUS as above\r\n\r\n[1] - https://www.postgresql.org/message-id/CAFj8pRAo5CrF8mpPxMvnBYFSqu4HYDqRsQnLqGphckNHkHosFg%40mail.gmail.com\r\n\r\n-J.\r\n",
"msg_date": "Wed, 4 May 2022 14:08:25 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "st 4. 5. 2022 v 2:15 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n\n> On Tue, 3 May 2022 at 17:02, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > út 3. 5. 2022 v 6:57 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >> You sure there's not something taking an exclusive lock on one of these\n> >> tables every so often?\n> >\n> > I am almost sure, I can see this issue only every time when I set a\n> higher work mem. I don't see this issue in other cases.\n>\n> hmm, I don't think the query being blocked on a table lock would cause\n> this behaviour. As far as I know, all table locks should be obtained\n> before the timer starts for the \"Execution Time\" timer in EXPLAIN\n> ANALYZE. However, locks are obtained on indexes at executor startup,\n> so if there was some delay in obtaining a lock on the index it would\n> show up this way. I just don't know of anything that obtains a\n> conflicting lock on an index without the same conflicting lock on the\n> table that the index belongs to.\n>\n> I do agree that the perf report does indicate that the extra time is\n> taken due to some large amount of memory being allocated. I just can't\n> quite see how that would happen in Memoize given that\n> estimate_num_groups() clamps the distinct estimate as the number of\n> input rows, which is 91 in both cases in your problem query.\n>\n> Are you able to run the Memoize query in psql with \\watch 0.1 for a\n> few seconds while you do:\n>\n> perf record --call-graph dwarf --pid <pid> sleep 2\n>\n> then send along the perf report.\n>\n> I locally hacked build_hash_table() in nodeMemoize.c to make the\n> hashtable 100 million elements and I see my perf report for a trivial\n> Memoize query come up as:\n>\n> + 99.98% 0.00% postgres postgres [.] _start\n> + 99.98% 0.00% postgres libc.so.6 [.]\n> __libc_start_main_alias_2 (inlined)\n> + 99.98% 0.00% postgres libc.so.6 [.]\n> __libc_start_call_main\n> + 99.98% 0.00% postgres postgres [.] 
main\n> + 99.98% 0.00% postgres postgres [.] PostmasterMain\n> + 99.98% 0.00% postgres postgres [.] ServerLoop\n> + 99.98% 0.00% postgres postgres [.] BackendStartup\n> (inlined)\n> + 99.98% 0.00% postgres postgres [.] BackendRun (inlined)\n> + 99.98% 0.00% postgres postgres [.] PostgresMain\n> + 99.98% 0.00% postgres postgres [.] exec_simple_query\n> + 99.98% 0.00% postgres postgres [.] PortalRun\n> + 99.98% 0.00% postgres postgres [.] FillPortalStore\n> + 99.98% 0.00% postgres postgres [.] PortalRunUtility\n> + 99.98% 0.00% postgres postgres [.]\n> standard_ProcessUtility\n> + 99.98% 0.00% postgres postgres [.] ExplainQuery\n> + 99.98% 0.00% postgres postgres [.] ExplainOneQuery\n> + 99.95% 0.00% postgres postgres [.] ExplainOnePlan\n> + 87.87% 0.00% postgres postgres [.]\n> standard_ExecutorStart\n> + 87.87% 0.00% postgres postgres [.] InitPlan (inlined)\n> + 87.87% 0.00% postgres postgres [.] ExecInitNode\n> + 87.87% 0.00% postgres postgres [.] ExecInitNestLoop\n> + 87.87% 0.00% postgres postgres [.] ExecInitMemoize\n> + 87.87% 0.00% postgres postgres [.]\n> build_hash_table (inlined) <----\n> + 87.87% 0.00% postgres postgres [.] memoize_create\n> (inlined)\n> + 87.87% 0.00% postgres postgres [.]\n> memoize_allocate (inlined)\n> + 87.87% 0.00% postgres postgres [.]\n> MemoryContextAllocExtended\n> + 87.87% 0.00% postgres postgres [.] memset (inlined)\n>\n> Failing that, are you able to pg_dump these tables and load them into\n> a PostgreSQL instance that you can play around with and patch?\n> Provided you can actually recreate the problem on that instance.\n>\n\n+ 71,98% 14,36% postmaster [kernel.kallsyms] [k] page_fault\n ▒\n+ 70,19% 6,59% postmaster libc-2.28.so [.]\n__memset_avx2_erms ▒\n+ 68,20% 0,00% postmaster postgres [.] 
ExecInitNode\n ▒\n+ 68,20% 0,00% postmaster postgres [.]\nExecInitNestLoop ▒\n+ 68,13% 0,00% postmaster postgres [.]\nExecInitMemoize ▒\n+ 68,13% 0,00% postmaster postgres [.]\nMemoryContextAllocExtended ▒\n+ 63,20% 0,00% postmaster postgres [.]\n0x0000000000776b89 ▒\n+ 63,20% 0,00% postmaster postgres [.] PostgresMain\n ◆\n+ 63,03% 0,00% postmaster postgres [.]\n0x00000000007f48ca ▒\n+ 63,03% 0,00% postmaster postgres [.] PortalRun\n ▒\n+ 63,03% 0,00% postmaster postgres [.]\n0x00000000007f83ae ▒\n+ 63,03% 0,00% postmaster postgres [.]\n0x00000000007f7fee ▒\n+ 63,03% 0,00% postmaster pg_stat_statements.so [.]\n0x00007f5579b599c6 ▒\n+ 63,03% 0,00% postmaster postgres [.]\nstandard_ProcessUtility ▒\n+ 63,03% 0,00% postmaster postgres [.] ExplainQuery\n ▒\n+ 62,83% 0,00% postmaster postgres [.]\n0x000000000062e83c ▒\n+ 62,83% 0,00% postmaster postgres [.] ExplainOnePlan\n ▒\n+ 57,47% 0,14% postmaster [kernel.kallsyms] [k] do_page_fault\n ▒\n+ 57,23% 0,51% postmaster [kernel.kallsyms] [k]\n__do_page_fault ▒\n+ 55,61% 0,71% postmaster [kernel.kallsyms] [k]\nhandle_mm_fault ▒\n+ 55,19% 0,00% postmaster pg_stat_statements.so [.]\n0x00007f5579b5ad2c ▒\n+ 55,19% 0,00% postmaster postgres [.]\nstandard_ExecutorStart ▒\n+ 54,78% 0,87% postmaster [kernel.kallsyms] [k]\n__handle_mm_fault ▒\n+ 53,54% 0,37% postmaster [kernel.kallsyms] [k]\ndo_anonymous_page ▒\n+ 36,36% 0,21% postmaster [kernel.kallsyms] [k]\nalloc_pages_vma ▒\n+ 35,99% 0,31% postmaster [kernel.kallsyms] [k]\n__alloc_pages_nodemask ▒\n+ 35,40% 1,06% postmaster [kernel.kallsyms] [k]\nget_page_from_freelist ▒\n+ 27,71% 0,62% postmaster [kernel.kallsyms] [k] prep_new_page\n ▒\n+ 27,09% 26,99% postmaster [kernel.kallsyms] [k]\nclear_page_erms ▒\n+ 11,24% 2,29% postmaster [kernel.kallsyms] [k]\nswapgs_restore_regs_and_return_to_usermode ▒\n+ 8,95% 6,87% postmaster [kernel.kallsyms] [k]\nprepare_exit_to_usermode ▒\n+ 7,83% 1,01% postmaster [kernel.kallsyms] [k]\nmem_cgroup_charge\n\n\n\n>\n> David\n>\n\nst 4. 5. 
2022 v 2:15 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:",
"msg_date": "Wed, 4 May 2022 20:38:25 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "st 4. 5. 2022 v 16:08 odesílatel Jakub Wartak <Jakub.Wartak@tomtom.com>\nnapsal:\n\n>\n> Additional three ways to figure that one (all are IMHO production safe):\n> a) already mentioned perf with --call-graph dwarf -p PID\n> b) strace -p PID -e 'mmap' # verify if mmap() NULL is not having\n> MAP_ANONYMOUS flag, size of mmap() request will somehow match work_mem\n> sizing\n> c) gdb -p PID and then breakpoint for mmap and verify each mmap() # check\n> MAP_ANONYMOUS as above\n>\n>\nI have not debug symbols, so I have not more details now\n\nBreakpoint 1 at 0x7f557f0c16c0\n(gdb) c\nContinuing.\n\nBreakpoint 1, 0x00007f557f0c16c0 in mmap64 () from /lib64/libc.so.6\n(gdb) bt\n#0 0x00007f557f0c16c0 in mmap64 () from /lib64/libc.so.6\n#1 0x00007f557f04dd91 in sysmalloc () from /lib64/libc.so.6\n#2 0x00007f557f04eaa9 in _int_malloc () from /lib64/libc.so.6\n#3 0x00007f557f04fb1e in malloc () from /lib64/libc.so.6\n#4 0x0000000000932134 in AllocSetAlloc ()\n#5 0x00000000009376cf in MemoryContextAllocExtended ()\n#6 0x00000000006ad915 in ExecInitMemoize ()\n#7 0x000000000068dc02 in ExecInitNode ()\n#8 0x00000000006b37ff in ExecInitNestLoop ()\n#9 0x000000000068dc56 in ExecInitNode ()\n#10 0x00000000006b37ff in ExecInitNestLoop ()\n#11 0x000000000068dc56 in ExecInitNode ()\n#12 0x00000000006b37de in ExecInitNestLoop ()\n#13 0x000000000068dc56 in ExecInitNode ()\n#14 0x00000000006b37de in ExecInitNestLoop ()\n#15 0x000000000068dc56 in ExecInitNode ()\n#16 0x0000000000687e4d in standard_ExecutorStart ()\n#17 0x00007f5579b5ad2d in pgss_ExecutorStart () from\n/usr/pgsql-14/lib/pg_stat_statements.so\n#18 0x000000000062e643 in ExplainOnePlan ()\n#19 0x000000000062e83d in ExplainOneQuery ()\n#20 0x000000000062ee6f in ExplainQuery ()\n#21 0x00000000007f9b15 in standard_ProcessUtility ()\n#22 0x00007f5579b599c7 in pgss_ProcessUtility () from\n/usr/pgsql-14/lib/pg_stat_statements.so\n#23 0x00000000007f7fef in PortalRunUtility ()\n#24 0x00000000007f83af in 
FillPortalStore ()\n#25 0x00000000007f86dd in PortalRun ()\n#26 0x00000000007f48cb in exec_simple_query ()\n#27 0x00000000007f610e in PostgresMain ()\n#28 0x0000000000776b8a in ServerLoop ()\n#29 0x0000000000777a03 in PostmasterMain ()\n#30 0x00000000004fe413 in main ()\n(gdb) p\nThe history is empty.\n(gdb) c\nContinuing.\n\nBreakpoint 1, 0x00007f557f0c16c0 in mmap64 () from /lib64/libc.so.6\n(gdb) bt\n#0 0x00007f557f0c16c0 in mmap64 () from /lib64/libc.so.6\n#1 0x00007f557f04dd91 in sysmalloc () from /lib64/libc.so.6\n#2 0x00007f557f04eaa9 in _int_malloc () from /lib64/libc.so.6\n#3 0x00007f557f04fb1e in malloc () from /lib64/libc.so.6\n#4 0x0000000000932134 in AllocSetAlloc ()\n#5 0x00000000009376cf in MemoryContextAllocExtended ()\n#6 0x00000000006ad915 in ExecInitMemoize ()\n#7 0x000000000068dc02 in ExecInitNode ()\n#8 0x00000000006b37ff in ExecInitNestLoop ()\n#9 0x000000000068dc56 in ExecInitNode ()\n#10 0x00000000006b37ff in ExecInitNestLoop ()\n#11 0x000000000068dc56 in ExecInitNode ()\n#12 0x00000000006b37de in ExecInitNestLoop ()\n#13 0x000000000068dc56 in ExecInitNode ()\n#14 0x0000000000687e4d in standard_ExecutorStart ()\n#15 0x00007f5579b5ad2d in pgss_ExecutorStart () from\n/usr/pgsql-14/lib/pg_stat_statements.so\n#16 0x000000000062e643 in ExplainOnePlan ()\n#17 0x000000000062e83d in ExplainOneQuery ()\n#18 0x000000000062ee6f in ExplainQuery ()\n#19 0x00000000007f9b15 in standard_ProcessUtility ()\n#20 0x00007f5579b599c7 in pgss_ProcessUtility () from\n/usr/pgsql-14/lib/pg_stat_statements.so\n#21 0x00000000007f7fef in PortalRunUtility ()\n#22 0x00000000007f83af in FillPortalStore ()\n#23 0x00000000007f86dd in PortalRun ()\n#24 0x00000000007f48cb in exec_simple_query ()\n#25 0x00000000007f610e in PostgresMain ()\n#26 0x0000000000776b8a in ServerLoop ()\n#27 0x0000000000777a03 in PostmasterMain ()\n#28 0x00000000004fe413 in main ()\n(gdb) c\nContinuing.\n\nthere was 2 hits of mmap\n\nRegards\n\nPavel\n\n\n\n\n> [1] -\n> 
https://www.postgresql.org/message-id/CAFj8pRAo5CrF8mpPxMvnBYFSqu4HYDqRsQnLqGphckNHkHosFg%40mail.gmail.com\n>\n\n> -J.\n>\n",
"msg_date": "Wed, 4 May 2022 20:48:02 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "Hi Pavel,\n\n> I have not debug symbols, so I have not more details now\n> Breakpoint 1 at 0x7f557f0c16c0\n> (gdb) c\n> Continuing.\n\n> Breakpoint 1, 0x00007f557f0c16c0 in mmap64 () from /lib64/libc.so.6\n> (gdb) bt\n> #0 0x00007f557f0c16c0 in mmap64 () from /lib64/libc.so.6\n> #1 0x00007f557f04dd91 in sysmalloc () from /lib64/libc.so.6\n> #2 0x00007f557f04eaa9 in _int_malloc () from /lib64/libc.so.6\n> #3 0x00007f557f04fb1e in malloc () from /lib64/libc.so.6\n> #4 0x0000000000932134 in AllocSetAlloc ()\n> #5 0x00000000009376cf in MemoryContextAllocExtended ()\n> #6 0x00000000006ad915 in ExecInitMemoize ()\n\nWell the PGDG repo have the debuginfos (e.g. postgresql14-debuginfo) rpms / dpkgs(?) so I hope you are basically 1 command away of being able to debug it further what happens in ExecInitMemoize()\nThose packages seem to be safe as they modify only /usr/lib/debug so should not have any impact on production workload.\n\n-J.\n\n\n\n\n",
"msg_date": "Thu, 5 May 2022 06:51:35 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "čt 5. 5. 2022 v 8:51 odesílatel Jakub Wartak <Jakub.Wartak@tomtom.com>\nnapsal:\n\n> Hi Pavel,\n>\n> > I have not debug symbols, so I have not more details now\n> > Breakpoint 1 at 0x7f557f0c16c0\n> > (gdb) c\n> > Continuing.\n>\n> > Breakpoint 1, 0x00007f557f0c16c0 in mmap64 () from /lib64/libc.so.6\n> > (gdb) bt\n> > #0 0x00007f557f0c16c0 in mmap64 () from /lib64/libc.so.6\n> > #1 0x00007f557f04dd91 in sysmalloc () from /lib64/libc.so.6\n> > #2 0x00007f557f04eaa9 in _int_malloc () from /lib64/libc.so.6\n> > #3 0x00007f557f04fb1e in malloc () from /lib64/libc.so.6\n> > #4 0x0000000000932134 in AllocSetAlloc ()\n> > #5 0x00000000009376cf in MemoryContextAllocExtended ()\n> > #6 0x00000000006ad915 in ExecInitMemoize ()\n>\n> Well the PGDG repo have the debuginfos (e.g. postgresql14-debuginfo) rpms\n> / dpkgs(?) so I hope you are basically 1 command away of being able to\n> debug it further what happens in ExecInitMemoize()\n> Those packages seem to be safe as they modify only /usr/lib/debug so\n> should not have any impact on production workload.\n>\n\nI just have to wait for admin action - I have no root rights for the server.\n\n\n\n>\n> -J.\n>\n>\n>\n\nčt 5. 5. 2022 v 8:51 odesílatel Jakub Wartak <Jakub.Wartak@tomtom.com> napsal:Hi Pavel,\n\n> I have not debug symbols, so I have not more details now\n> Breakpoint 1 at 0x7f557f0c16c0\n> (gdb) c\n> Continuing.\n\n> Breakpoint 1, 0x00007f557f0c16c0 in mmap64 () from /lib64/libc.so.6\n> (gdb) bt\n> #0 0x00007f557f0c16c0 in mmap64 () from /lib64/libc.so.6\n> #1 0x00007f557f04dd91 in sysmalloc () from /lib64/libc.so.6\n> #2 0x00007f557f04eaa9 in _int_malloc () from /lib64/libc.so.6\n> #3 0x00007f557f04fb1e in malloc () from /lib64/libc.so.6\n> #4 0x0000000000932134 in AllocSetAlloc ()\n> #5 0x00000000009376cf in MemoryContextAllocExtended ()\n> #6 0x00000000006ad915 in ExecInitMemoize ()\n\nWell the PGDG repo have the debuginfos (e.g. postgresql14-debuginfo) rpms / dpkgs(?) 
so I hope you are basically 1 command away of being able to debug it further what happens in ExecInitMemoize()\nThose packages seem to be safe as they modify only /usr/lib/debug so should not have any impact on production workload.I just have to wait for admin action - I have no root rights for the server. \n\n-J.",
"msg_date": "Thu, 5 May 2022 09:26:03 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Thu, 5 May 2022 at 19:26, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> čt 5. 5. 2022 v 8:51 odesílatel Jakub Wartak <Jakub.Wartak@tomtom.com> napsal:\n>> > Breakpoint 1, 0x00007f557f0c16c0 in mmap64 () from /lib64/libc.so.6\n>> > (gdb) bt\n>> > #0 0x00007f557f0c16c0 in mmap64 () from /lib64/libc.so.6\n>> > #1 0x00007f557f04dd91 in sysmalloc () from /lib64/libc.so.6\n>> > #2 0x00007f557f04eaa9 in _int_malloc () from /lib64/libc.so.6\n>> > #3 0x00007f557f04fb1e in malloc () from /lib64/libc.so.6\n>> > #4 0x0000000000932134 in AllocSetAlloc ()\n>> > #5 0x00000000009376cf in MemoryContextAllocExtended ()\n>> > #6 0x00000000006ad915 in ExecInitMemoize ()\n>>\n>> Well the PGDG repo have the debuginfos (e.g. postgresql14-debuginfo) rpms / dpkgs(?) so I hope you are basically 1 command away of being able to debug it further what happens in ExecInitMemoize()\n>> Those packages seem to be safe as they modify only /usr/lib/debug so should not have any impact on production workload.\n>\n> I just have to wait for admin action - I have no root rights for the server.\n\nLooking at ExecInitMemoize() it's hard to see what could take such a\nlong time other than the build_hash_table(). Tom did mention this,\nbut I can't quite see how the size given to that function could be\nlarger than 91 in your case.\n\nIf you get the debug symbols installed, can you use gdb to\n\nbreak nodeMemoize.c:268\np size\n\nmaybe there's something I'm missing following the code and maybe there\nis some way that est_entries is not set to what I thought it was.\n\nIt would also be good to see the same perf report again after the\ndebug symbols are installed in order to resolve those unresolved\nfunction names.\n\nDavid\n\n\n",
"msg_date": "Fri, 6 May 2022 11:18:49 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Mon, May 2, 2022 at 10:02 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> út 3. 5. 2022 v 6:57 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> > there is really something strange (see attached file). Looks so this\n>> issue\n>> > is much more related to planning time than execution time\n>>\n>> You sure there's not something taking an exclusive lock on one of these\n>> tables every so often?\n>>\n>\n> I am almost sure, I can see this issue only every time when I set a higher\n> work mem. I don't see this issue in other cases.\n>\n>>\n>>\nWhat are the values of work_mem and hash_mem_multiplier for the two cases?\n\nDavid J.\n\nOn Mon, May 2, 2022 at 10:02 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:út 3. 5. 2022 v 6:57 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> there is really something strange (see attached file). Looks so this issue\n> is much more related to planning time than execution time\n\nYou sure there's not something taking an exclusive lock on one of these\ntables every so often?I am almost sure, I can see this issue only every time when I set a higher work mem. I don't see this issue in other cases.What are the values of work_mem and hash_mem_multiplier for the two cases?David J.",
"msg_date": "Thu, 5 May 2022 16:28:15 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "pá 6. 5. 2022 v 1:28 odesílatel David G. Johnston <\r\ndavid.g.johnston@gmail.com> napsal:\r\n\r\n> On Mon, May 2, 2022 at 10:02 PM Pavel Stehule <pavel.stehule@gmail.com>\r\n> wrote:\r\n>\r\n>>\r\n>>\r\n>> út 3. 5. 2022 v 6:57 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\r\n>>\r\n>>> Pavel Stehule <pavel.stehule@gmail.com> writes:\r\n>>> > there is really something strange (see attached file). Looks so this\r\n>>> issue\r\n>>> > is much more related to planning time than execution time\r\n>>>\r\n>>> You sure there's not something taking an exclusive lock on one of these\r\n>>> tables every so often?\r\n>>>\r\n>>\r\n>> I am almost sure, I can see this issue only every time when I set a\r\n>> higher work mem. I don't see this issue in other cases.\r\n>>\r\n>>>\r\n>>>\r\n> What are the values of work_mem and hash_mem_multiplier for the two cases?\r\n>\r\n\r\n (2022-05-06 07:35:21) prd_aukro=# show work_mem ;\r\n┌──────────┐\r\n│ work_mem │\r\n├──────────┤\r\n│ 400MB │\r\n└──────────┘\r\n(1 řádka)\r\n\r\nČas: 0,331 ms\r\n(2022-05-06 07:35:32) prd_aukro=# show hash_mem_multiplier ;\r\n┌─────────────────────┐\r\n│ hash_mem_multiplier │\r\n├─────────────────────┤\r\n│ 1 │\r\n└─────────────────────┘\r\n(1 řádka)\r\n\r\n\r\n> David J.\r\n>",
"msg_date": "Fri, 6 May 2022 07:35:50 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "pá 6. 5. 2022 v 1:19 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n\n> On Thu, 5 May 2022 at 19:26, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > čt 5. 5. 2022 v 8:51 odesílatel Jakub Wartak <Jakub.Wartak@tomtom.com>\n> napsal:\n> >> > Breakpoint 1, 0x00007f557f0c16c0 in mmap64 () from /lib64/libc.so.6\n> >> > (gdb) bt\n> >> > #0 0x00007f557f0c16c0 in mmap64 () from /lib64/libc.so.6\n> >> > #1 0x00007f557f04dd91 in sysmalloc () from /lib64/libc.so.6\n> >> > #2 0x00007f557f04eaa9 in _int_malloc () from /lib64/libc.so.6\n> >> > #3 0x00007f557f04fb1e in malloc () from /lib64/libc.so.6\n> >> > #4 0x0000000000932134 in AllocSetAlloc ()\n> >> > #5 0x00000000009376cf in MemoryContextAllocExtended ()\n> >> > #6 0x00000000006ad915 in ExecInitMemoize ()\n> >>\n> >> Well the PGDG repo have the debuginfos (e.g. postgresql14-debuginfo)\n> rpms / dpkgs(?) so I hope you are basically 1 command away of being able to\n> debug it further what happens in ExecInitMemoize()\n> >> Those packages seem to be safe as they modify only /usr/lib/debug so\n> should not have any impact on production workload.\n> >\n> > I just have to wait for admin action - I have no root rights for the\n> server.\n>\n> Looking at ExecInitMemoize() it's hard to see what could take such a\n> long time other than the build_hash_table(). 
Tom did mention this,\n> but I can't quite see how the size given to that function could be\n> larger than 91 in your case.\n>\n> If you get the debug symbols installed, can you use gdb to\n>\n> break nodeMemoize.c:268\n> p size\n>\n> maybe there's something I'm missing following the code and maybe there\n> is some way that est_entries is not set to what I thought it was.\n>\n> It would also be good to see the same perf report again after the\n> debug symbols are installed in order to resolve those unresolved\n> function names.\n>\n\nBreakpoint 1, build_hash_table (size=4369066, mstate=0xfc7f08) at\nnodeMemoize.c:268\n268 if (size == 0)\n(gdb) p size\n$1 = 4369066\n\nThis is work_mem size\n\n+ 99,92% 0,00% postmaster postgres [.] ServerLoop\n ▒\n+ 99,92% 0,00% postmaster postgres [.] PostgresMain\n ▒\n+ 99,92% 0,00% postmaster postgres [.]\nexec_simple_query\n▒\n+ 99,70% 0,00% postmaster postgres [.] PortalRun\n ▒\n+ 99,70% 0,00% postmaster postgres [.]\nFillPortalStore\n▒\n+ 99,70% 0,02% postmaster postgres [.]\nPortalRunUtility\n ▒\n+ 99,68% 0,00% postmaster pg_stat_statements.so [.]\n0x00007f5579b599c6\n ▒\n+ 99,68% 0,00% postmaster postgres [.]\nstandard_ProcessUtility\n▒\n+ 99,68% 0,00% postmaster postgres [.] ExplainQuery\n ◆\n+ 99,63% 0,00% postmaster postgres [.]\nExplainOneQuery\n▒\n+ 99,16% 0,00% postmaster postgres [.] ExplainOnePlan\n ▒\n+ 99,06% 0,00% postmaster pg_stat_statements.so [.]\n0x00007f5579b5ad2c\n ▒\n+ 99,06% 0,00% postmaster postgres [.]\nstandard_ExecutorStart\n ▒\n+ 99,06% 0,00% postmaster postgres [.] InitPlan\n(inlined) ▒\n+ 99,06% 0,00% postmaster postgres [.] ExecInitNode\n ▒\n+ 99,06% 0,00% postmaster postgres [.]\nExecInitNestLoop\n ▒\n+ 99,00% 0,02% postmaster postgres [.]\nExecInitMemoize\n▒\n+ 98,87% 26,80% postmaster libc-2.28.so [.]\n__memset_avx2_erms\n ▒\n+ 98,87% 0,00% postmaster postgres [.]\nbuild_hash_table (inlined)\n ▒\n+ 98,87% 0,00% postmaster postgres [.] 
memoize_create\n(inlined) ▒\n+ 98,87% 0,00% postmaster postgres [.]\nmemoize_allocate (inlined)\n ▒\n+ 98,87% 0,00% postmaster postgres [.]\nMemoryContextAllocExtended\n ▒\n+ 72,08% 72,08% postmaster [unknown] [k]\n0xffffffffbaa010e0\n 0,47% 0,00% postmaster postgres [.] pg_plan_query\n 0,47% 0,00% postmaster pg_stat_statements.so [.]\n0x00007f5579b59ba4\n 0,47% 0,00% postmaster postgres [.]\nstandard_planner\n 0,47% 0,00% postmaster postgres [.]\nsubquery_planner\n 0,47% 0,00% postmaster postgres [.]\ngrouping_planner\n 0,47% 0,00% postmaster postgres [.] query_planner\n\n\n>\n> David\n>",
"msg_date": "Fri, 6 May 2022 07:52:14 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> Breakpoint 1, build_hash_table (size=4369066, mstate=0xfc7f08) at\n> nodeMemoize.c:268\n> 268 if (size == 0)\n> (gdb) p size\n> $1 = 4369066\n\nUh-huh ....\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 May 2022 02:00:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Fri, 6 May 2022 at 17:52, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> Breakpoint 1, build_hash_table (size=4369066, mstate=0xfc7f08) at nodeMemoize.c:268\n> 268 if (size == 0)\n> (gdb) p size\n> $1 = 4369066\n\nThanks for the report. I think I now see the problem. Looking at\n[1], it seems that's a bushy plan. That's fine, but less common than a\nleft deep plan.\n\nI think the problem is down to some incorrect code in\nget_memoize_path() where we pass the wrong value of \"calls\" to\ncreate_memoize_path(). I think instead of outer_path->parent->rows it\nshould be outer_path->rows.\n\nIf you look closely at the plan, you'll see that the outer side of the\ninner-most Nested Loop is parameterized by some higher-level nested\nloop.\n\n-> Nested Loop (cost=1.14..79.20 rows=91 width=8) (actual\ntime=0.024..0.024 rows=1 loops=66)\n -> Index Only Scan using\nuq_isi_itemid_itemimageid on item_share_image itemsharei2__1\n(cost=0.57..3.85 rows=91 width=16) (actual time=0.010..0.010 rows=1\nloops=66)\n Index Cond: (item_id = itembo0_.id)\n Heap Fetches: 21\n -> Memoize (cost=0.57..2.07 rows=1 width=8)\n(actual time=0.013..0.013 rows=1 loops=66)\n\nso it's not passing 91 to create_memoize_path() as I thought. Since\nI can't see any WHERE clause items filtering rows from the\nitemsharei2__1 relation, the outer_path->parent->rows should\nbe whatever pg_class.reltuples says.\n\nAre you able to send the results of:\n\nexplain select item_id from item_share_image group by item_id; -- I'm\ninterested in the estimated number of groups in the plan's top node.\n\nselect reltuples from pg_class where oid = 'item_share_image'::regclass;\n\nI'm expecting the estimated number of rows in the top node of the\ngroup by plan to be about 4369066.\n\nDavid\n\n[1] https://explain.depesz.com/s/2rBw#source\n\n\n",
"msg_date": "Fri, 6 May 2022 20:04:55 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Fri, 6 May 2022 at 20:04, David Rowley <dgrowleyml@gmail.com> wrote:\n> Thanks for the report. I think I now see the problem. Looking at\n> [1], it seems that's a bushy plan. That's fine, but less common than a\n> left deep plan.\n\nOn second thoughts, it does not need to be a bushy plan for the outer\nside of the nested loop to be parameterized by some higher-level\nnested loop. There's an example of a plan like this in the regression\ntests.\n\nregression=# explain (analyze, costs off, summary off)\nregression-# select * from tenk1 t1 left join\nregression-# (tenk1 t2 join tenk1 t3 on t2.thousand = t3.unique2)\nregression-# on t1.hundred = t2.hundred and t1.ten = t3.ten\nregression-# where t1.unique1 = 1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (actual time=0.258..0.487 rows=20 loops=1)\n -> Index Scan using tenk1_unique1 on tenk1 t1 (actual\ntime=0.049..0.049 rows=1 loops=1)\n Index Cond: (unique1 = 1)\n -> Nested Loop (actual time=0.204..0.419 rows=20 loops=1)\n Join Filter: (t1.ten = t3.ten)\n Rows Removed by Join Filter: 80\n -> Bitmap Heap Scan on tenk1 t2 (actual time=0.064..0.194\nrows=100 loops=1)\n Recheck Cond: (t1.hundred = hundred)\n Heap Blocks: exact=86\n -> Bitmap Index Scan on tenk1_hundred (actual\ntime=0.036..0.036 rows=100 loops=1)\n Index Cond: (hundred = t1.hundred)\n -> Memoize (actual time=0.001..0.001 rows=1 loops=100)\n Cache Key: t2.thousand\n Cache Mode: logical\n Hits: 90 Misses: 10 Evictions: 0 Overflows: 0\nMemory Usage: 4kB\n -> Index Scan using tenk1_unique2 on tenk1 t3 (actual\ntime=0.009..0.009 rows=1 loops=10)\n Index Cond: (unique2 = t2.thousand)\n(17 rows)\n\ndebugging this I see that the Memoize plan won because it was passing\n10000 as the number of calls. It should have been passing 100. The\nMemoize node's number of loops agrees with that. Fixing the calls to\ncorrectly pass 100 gets rid of the Memoize node.\n\nI've attached a patch to fix. I'll look at it in more detail after the weekend.\n\nI'm very tempted to change the EXPLAIN output in at least master to\ndisplay the initial and final (maximum) hash table sizes. Wondering if\nanyone would object to that?\n\nDavid",
"msg_date": "Fri, 6 May 2022 21:27:57 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "pá 6. 5. 2022 v 10:05 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n\n> On Fri, 6 May 2022 at 17:52, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > Breakpoint 1, build_hash_table (size=4369066, mstate=0xfc7f08) at\n> nodeMemoize.c:268\n> > 268 if (size == 0)\n> > (gdb) p size\n> > $1 = 4369066\n>\n> Thanks for the report. I think I now see the problem. Looking at\n> [1], it seems that's a bushy plan. That's fine, but less common than a\n> left deep plan.\n>\n> I think the problem is down to some incorrect code in\n> get_memoize_path() where we pass the wrong value of \"calls\" to\n> create_memoize_path(). I think instead of outer_path->parent->rows it\n> instead should be outer_path->rows.\n>\n> If you look closely at the plan, you'll see that the outer side of the\n> inner-most Nested Loop is parameterized by some higher-level nested\n> loop.\n>\n> -> Nested Loop (cost=1.14..79.20 rows=91 width=8) (actual\n> time=0.024..0.024 rows=1 loops=66)\n> -> Index Only Scan using\n> uq_isi_itemid_itemimageid on item_share_image itemsharei2__1\n> (cost=0.57..3.85 rows=91 width=16) (actual time=0.010..0.010 rows=1\n> loops=66)\n> Index Cond: (item_id = itembo0_.id)\n> Heap Fetches: 21\n> -> Memoize (cost=0.57..2.07 rows=1 width=8)\n> (actual time=0.013..0.013 rows=1 loops=66)\n>\n> so instead of passing 91 to create_memoize_path() as I thought. 
Since\n> I can't see any WHERE clause items filtering rows from the\n> itemsharei2__1 relation, then the outer_path->parent->rows is should\n> be whatever pg_class.reltuples says.\n>\n> Are you able to send the results of:\n>\n> explain select item_id from item_share_image group by item_id; -- I'm\n> interested in the estimated number of groups in the plan's top node.\n>\n\n\n\n>\n> select reltuples from pg_class where oid = 'item_share_image'::regclass;\n>\n> I'm expecting the estimated number of rows in the top node of the\n> group by plan to be about 4369066.\n>\n\n(2022-05-06 12:30:23) prd_aukro=# explain select item_id from\nitem_share_image group by item_id;\n QUERY PLAN\n\n────────────────────────────────────────────────────────────────────────────\nFinalize HashAggregate (cost=1543418.63..1554179.08 rows=1076045 width=8)\n Group Key: item_id\n -> Gather (cost=1000.57..1532658.18 rows=4304180 width=8)\n Workers Planned: 4\n -> Group (cost=0.57..1101240.18 rows=1076045 width=8)\n Group Key: item_id\n -> Parallel Index Only Scan using ixfk_isi_itemid on\nitem_share_image (cost=0.57..1039823.86 rows=24566530 width=8)\n(7 řádek)\n\nČas: 1,808 ms\n(2022-05-06 12:30:26) prd_aukro=# select reltuples from pg_class where oid\n= 'item_share_image'::regclass;\n reltuples\n────────────\n9.826612e+07\n(1 řádka)\n\nČas: 0,887 ms\n\nRegards\n\nPavel\n\n\n> David\n>\n> [1] https://explain.depesz.com/s/2rBw#source\n>",
"msg_date": "Fri, 6 May 2022 12:31:23 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Tue, May 03, 2022 at 02:13:18PM +1200, David Rowley wrote:\n> I'm wishing I put the initial hash table size and the final hash table\n> size in EXPLAIN + EXPLAIN ANALYZE now. Perhaps it's not too late for\n> v15 to do that so that it might help us figure things out in the\n> future.\n\nOn Fri, May 06, 2022 at 09:27:57PM +1200, David Rowley wrote:\n> I'm very tempted to change the EXPLAIN output in at least master to\n> display the initial and final (maximum) hash table sizes. Wondering if\n> anyone would object to that?\n\nNo objection to add it to v15.\n\nI'll point out that \"Cache Mode\" was added to EXPLAIN between 11.1 and 11.2\nwithout controversy, so this could conceivably be backpatched to v14, too.\n\ncommit 6c32c0977783fae217b5eaa1d22d26c96e5b0085\nAuthor: David Rowley <drowley@postgresql.org>\nDate: Wed Nov 24 10:07:38 2021 +1300\n\n Allow Memoize to operate in binary comparison mode\n\n\n",
"msg_date": "Mon, 9 May 2022 21:22:48 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Tue, 10 May 2022 at 14:22, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Fri, May 06, 2022 at 09:27:57PM +1200, David Rowley wrote:\n> > I'm very tempted to change the EXPLAIN output in at least master to\n> > display the initial and final (maximum) hash table sizes. Wondering if\n> > anyone would object to that?\n>\n> No objection to add it to v15.\n>\n> I'll point out that \"Cache Mode\" was added to EXPLAIN between 11.1 and 11.2\n> without controversy, so this could conceivably be backpatched to v14, too.\n>\n> commit 6c32c0977783fae217b5eaa1d22d26c96e5b0085\n\nThis is seemingly a good point, but I don't really think it's a case\nof just keeping the EXPLAIN output stable in minor versions, it's more\nabout adding new fields to structs.\n\nI just went and wrote the patch and the fundamental difference seems\nto be that what I did in 6c32c0977 managed to only add a new field in\nthe empty padding between two fields. That resulted in no fields in\nthe struct being pushed up in their address offset. The idea here is\nnot to break any extension that's already been compiled that\nreferences some field that comes after that.\n\nIn the patch I've just written, I've had to add some fields which\ncauses sizeof(MemoizeState) to go up resulting in the offsets of some\nlater fields changing.\n\nOne thing I'll say about this patch is that I found it annoying that I\nhad to add code to cache_lookup() when we failed to find an entry.\nThat's probably not the end of the world as that's only for cache\nmisses. Ideally, I'd just be looking at the size of the hash table at\nthe end of execution, however, naturally, we must show the EXPLAIN\noutput before we shut down the executor.\n\nI just copied the Hash Join output. 
It looks like:\n\n# alter table tt alter column a set (n_distinct=4);\nALTER TABLE\n# analyze tt;\n# explain (analyze, costs off, timing off) select * from tt inner join\nt2 on tt.a=t2.a;\n QUERY PLAN\n---------------------------------------------------------------------------------\n Nested Loop (actual rows=1000000 loops=1)\n -> Seq Scan on tt (actual rows=1000000 loops=1)\n -> Memoize (actual rows=1 loops=1000000)\n Cache Key: tt.a\n Cache Mode: logical\n Hits: 999990 Misses: 10 Evictions: 0 Overflows: 0 Memory Usage: 2kB\n Hash Buckets: 16 (originally 4)\n -> Index Only Scan using t2_pkey on t2 (actual rows=1 loops=10)\n Index Cond: (a = tt.a)\n Heap Fetches: 0\n Planning Time: 0.483 ms\n Execution Time: 862.860 ms\n(12 rows)\n\nDoes anyone have any views about the attached patch going into v15?\n\nDavid",
"msg_date": "Wed, 11 May 2022 15:50:11 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "On Fri, 6 May 2022 at 21:27, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached a patch to fix. I'll look at it in more detail after the weekend.\n\nI've now pushed this fix to master and backpatched to 14.\n\nDavid\n\n\n",
"msg_date": "Mon, 16 May 2022 16:10:54 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
},
{
"msg_contents": "po 16. 5. 2022 v 6:11 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n\n> On Fri, 6 May 2022 at 21:27, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I've attached a patch to fix. I'll look at it in more detail after the\n> weekend.\n>\n> I've now pushed this fix to master and backpatched to 14.\n>\n\nThank you\n\nPavel\n\n\n>\n> David\n>",
"msg_date": "Mon, 16 May 2022 06:14:32 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: strange slow query - lost lot of time somewhere"
}
] |
[
{
"msg_contents": "Hi,\n\nIt looks like commit c6306db24 (Add 'basebackup_to_shell' contrib\nmodule.) missed to reserve basebackup_to_shell module's custom GUC\nprefix via MarkGUCPrefixReserved(\"basebackup_to_shell\");. This will\nremove any invalid placeholder GUCs set under that prefix.\n\nAttaching a tiny patch to fix it.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Mon, 2 May 2022 15:06:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add missing MarkGUCPrefixReserved() in basebackup_to_shell module"
},
{
"msg_contents": "On Mon, May 02, 2022 at 03:06:39PM +0530, Bharath Rupireddy wrote:\n> It looks like commit c6306db24 (Add 'basebackup_to_shell' contrib\n> module.) missed to reserve basebackup_to_shell module's custom GUC\n> prefix via MarkGUCPrefixReserved(\"basebackup_to_shell\");. This will\n> remove\n> \n> Attaching a tiny patch to fix it.\n\nYou are obviously right. Will fix.\n--\nMichael",
"msg_date": "Mon, 2 May 2022 19:50:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add missing MarkGUCPrefixReserved() in basebackup_to_shell module"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently the emit_log_hook gets called only for the log messages of\ntype <= log_min_messages, i.e., when edata->output_to_server is true [1],\nwhich means that I can't use an implementation of emit_log_hook to\njust intercept, say, all DEBUGX messages without interrupting the\nactual server logs flow and changing log_min_messages. The use case\nis this: in production environments, say an issue is occurring every\ntime or sporadically, to figure out what the issue is or do root cause\nanalysis, I might need some of the DEBUGX messages (not all of course)\nand I may not want to set log_min_messages to DEBUGX as it might\noverload the server logs (can lead to server out of disk crashes) or\nwrite to the postgres container console at a higher pace. If I had a\npostgres elog hook, say emit_unfiltered_log_hook [2], I can basically\nwrite an external module (with a bunch of GUCs, say log_level to route,\nplace to store the logs, even an option to filter logs based on text,\nsay logs containing the word 'replication', max disk space that these\nrouted logs would occupy etc.) implementing emit_unfiltered_log_hook\nto just route the interesting logs to a cheaper storage (for debugging\npurposes); after analysis I can disable the external module and blow\naway the routed logs.\n\nIn production environments such a hook and extension would be super\nuseful IMO. Many times, we would have better debugged issues had there\nbeen certain logs without disturbing the main flow of server logs. We\ncould've used the existing hook emit_log_hook but that breaks the\nexisting external modules implementing emit_log_hook; that's why a new\nhook emit_unfiltered_log_hook.\n\nThoughts?\n\n[1]\n if (edata->output_to_server && emit_log_hook)\n (*emit_log_hook) (edata);\n[2]\n if (emit_unfiltered_log_hook)\n (*emit_unfiltered_log_hook) (edata);\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 2 May 2022 17:11:34 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Unfiltered server logs routing via a new elog hook or existing\n emit_log_hook bypassing log_min_message check"
},
{
"msg_contents": "Hi,\n\nOn Mon, May 02, 2022 at 05:11:34PM +0530, Bharath Rupireddy wrote:\n>\n> Currently the emit_log_hook gets called only for the log messages of\n> type <= log_min_message i.e when edata->output_to_server is true [1],\n> which means that I can't use an implementation of emit_log_hook to\n> just intercept, say, all DEBUGX messages without interrupting the\n> actual server logs flow and changing the log_min_message.\n> [...]\n> If I had\n> postgres elog hook, say emit_unfiltered_log_hook [2], I can basically\n> write an external module (with a bunch of GUCs say log_level to route,\n> place to store the logs, even an option to filter logs based on text\n> say logs containing word 'replication', max disk space that these\n> routed logs would occupy etc.) implementing emit_unfiltered_log_hook\n> to just route the interested logs to a cheaper storage (for debugging\n> purposes), after analysis I can disable the external module and blow\n> away the routed logs.\n\nUnless I'm missing something you can already do all of that with the current\nhook, since as mentioned in the comment above the hook can disable the server's\nlogging:\n\n\t * Call hook before sending message to log. The hook function is allowed\n\t * to turn off edata->output_to_server, so we must recheck that afterward.\n\nSo you can configure your server with a very verbose log_min_message, and have\nthe same setting in your own extension to disable output_to_server after its\nown processing is done.\n\n\n",
"msg_date": "Mon, 2 May 2022 21:02:41 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unfiltered server logs routing via a new elog hook or existing\n emit_log_hook bypassing log_min_message check"
},
{
"msg_contents": "On Mon, May 2, 2022 at 6:32 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Mon, May 02, 2022 at 05:11:34PM +0530, Bharath Rupireddy wrote:\n> >\n> > Currently the emit_log_hook gets called only for the log messages of\n> > type <= log_min_message i.e when edata->output_to_server is true [1],\n> > which means that I can't use an implementation of emit_log_hook to\n> > just intercept, say, all DEBUGX messages without interrupting the\n> > actual server logs flow and changing the log_min_message.\n> > [...]\n> > If I had\n> > postgres elog hook, say emit_unfiltered_log_hook [2], I can basically\n> > write an external module (with a bunch of GUCs say log_level to route,\n> > place to store the logs, even an option to filter logs based on text\n> > say logs containing word 'replication', max disk space that these\n> > routed logs would occupy etc.) implementing emit_unfiltered_log_hook\n> > to just route the interested logs to a cheaper storage (for debugging\n> > purposes), after analysis I can disable the external module and blow\n> > away the routed logs.\n>\n> Unless I'm missing something you can already do all of that with the current\n> hook, since as mentioned in the comment above the hook can disable the server's\n> logging:\n>\n> * Call hook before sending message to log. The hook function is allowed\n> * to turn off edata->output_to_server, so we must recheck that afterward.\n>\n> So you can configure your server with a very verbose log_min_message, and have\n> the same setting in your own extension to disable output_to_server after its\n> own processing is done.\n\nNo. The emit_log_hook isn't called for all the log messages, but only\nwhen output_to_server = true which means, say my log_min_messages is\n'WARNING', the hook isn't called for the messages say elevel above it\n(NOTICE, INFO, DEBUGX).\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 2 May 2022 18:40:05 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Unfiltered server logs routing via a new elog hook or existing\n emit_log_hook bypassing log_min_message check"
},
{
"msg_contents": "On Mon, May 02, 2022 at 06:40:05PM +0530, Bharath Rupireddy wrote:\n> On Mon, May 2, 2022 at 6:32 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > Unless I'm missing something you can already do all of that with the current\n> > hook, since as mentioned in the comment above the hook can disable the server's\n> > logging:\n> >\n> > * Call hook before sending message to log. The hook function is allowed\n> > * to turn off edata->output_to_server, so we must recheck that afterward.\n> >\n> > So you can configure your server with a very verbose log_min_message, and have\n> > the same setting in your own extension to disable output_to_server after its\n> > own processing is done.\n> \n> No. The emit_log_hook isn't called for all the log messages, but only\n> when output_to_server = true which means, say my log_min_messages is\n> 'WARNING', the hook isn't called for the messages say elevel above it\n> (NOTICE, INFO, DEBUGX).\n\nI know. What I said you could do is configure log_min_message to DEBUGX, so\nyour extension sees everything you want it to see. And *in your extension* set\noutput_to_server to false if the level is not the *real level* you want to log.\n\n\n",
"msg_date": "Mon, 2 May 2022 21:14:19 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unfiltered server logs routing via a new elog hook or existing\n emit_log_hook bypassing log_min_message check"
},
{
"msg_contents": "On Mon, May 2, 2022 at 6:44 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, May 02, 2022 at 06:40:05PM +0530, Bharath Rupireddy wrote:\n> > On Mon, May 2, 2022 at 6:32 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > Unless I'm missing something you can already do all of that with the current\n> > > hook, since as mentioned in the comment above the hook can disable the server's\n> > > logging:\n> > >\n> > > * Call hook before sending message to log. The hook function is allowed\n> > > * to turn off edata->output_to_server, so we must recheck that afterward.\n> > >\n> > > So you can configure your server with a very verbose log_min_message, and have\n> > > the same setting in your own extension to disable output_to_server after its\n> > > own processing is done.\n> >\n> > No. The emit_log_hook isn't called for all the log messages, but only\n> > when output_to_server = true which means, say my log_min_messages is\n> > 'WARNING', the hook isn't called for the messages say elevel above it\n> > (NOTICE, INFO, DEBUGX).\n>\n> I know. What I said you could do is configure log_min_message to DEBUGX, so\n> your extension sees everything you want it to see. And *in your extension* set\n> output_to_server to false if the level is not the *real level* you want to log.\n\nI basically want to avoid normal users/developers setting any\nparameter (especially the superuser-only log_min_message GUC, all\nusers might not have superuser access in production environments) or\nmaking any changes to the running server except just LOADing the\nserver log routing/intercepting extension.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 2 May 2022 19:24:04 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Unfiltered server logs routing via a new elog hook or existing\n emit_log_hook bypassing log_min_message check"
},
{
"msg_contents": "On Mon, May 02, 2022 at 07:24:04PM +0530, Bharath Rupireddy wrote:\n> On Mon, May 2, 2022 at 6:44 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > I know. What I said you could do is configure log_min_message to DEBUGX, so\n> > your extension sees everything you want it to see. And *in your extension* set\n> > output_to_server to false if the level is not the *real level* you want to log.\n>\n> I basically want to avoid normal users/developers setting any\n> parameter (especially the superuser-only log_min_message GUC, all\n> users might not have superuser access in production environments) or\n> making any changes to the running server except just LOADing the\n> server log routing/intercepting extension.\n\nThe kind of scenario you mentioned didn't seem \"normal users\" oriented. Note\nthat LOAD is restricted to superuser anyway.\n\n\n",
"msg_date": "Mon, 2 May 2022 22:03:23 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unfiltered server logs routing via a new elog hook or existing\n emit_log_hook bypassing log_min_message check"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Mon, May 02, 2022 at 07:24:04PM +0530, Bharath Rupireddy wrote:\n>> I basically want to avoid normal users/developers setting any\n>> parameter (especially the superuser-only log_min_message GUC, all\n>> users might not have superuser access in production environments) or\n>> making any changes to the running server except just LOADing the\n>> server log routing/intercepting extension.\n\n> The kind of scenario you mentioned didn't seem \"normal users\" oriented. Note\n> that LOAD is restricted to superuser anyway.\n\nIt seems completely silly to be worrying that setting a GUC in a\nparticular way is too hard for somebody who's going to be installing\na loadable extension. In any case, if you wanted to force the issue\nyou could set log_min_messages in the extension's _PG_init function.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 May 2022 10:07:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unfiltered server logs routing via a new elog hook or existing\n emit_log_hook bypassing log_min_message check"
},
{
"msg_contents": "On Mon, May 2, 2022 at 7:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Mon, May 02, 2022 at 07:24:04PM +0530, Bharath Rupireddy wrote:\n> >> I basically want to avoid normal users/developers setting any\n> >> parameter (especially the superuser-only log_min_message GUC, all\n> >> users might not have superuser access in production environments) or\n> >> making any changes to the running server except just LOADing the\n> >> server log routing/intercepting extension.\n>\n> > The kind of scenario you mentioned didn't seem \"normal users\" oriented. Note\n> > that LOAD is restricted to superuser anyway.\n>\n> It seems completely silly to be worrying that setting a GUC in a\n> particular way is too hard for somebody who's going to be installing\n> a loadable extension. In any case, if you wanted to force the issue\n> you could set log_min_messages in the extension's _PG_init function.\n\nThanks Tom and Julien. I developed a simple external module called\npg_intercept_logs [1].\n\n[1] https://github.com/BRupireddy/pg_intercept_server_logs\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 13 May 2022 18:10:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Unfiltered server logs routing via a new elog hook or existing\n emit_log_hook bypassing log_min_message check"
}
] |
[
{
"msg_contents": "Greetings,\n\nI want to limit the query text that gets captured in pg_stat_statements. We have sql statements with thousands of values clauses (upwards of 10,000) that run at a 1 second interval. When just a handful are running plus 2 or 3 loads using the same technique (10,000 entry values clauses) querying the pg_stat_statements table gets bogged down (see below). With the pg_stat_statements.max is set to 1000 statements just querying the table stats table seems to impact the running statements! I have temporarily staved off the issue by reducing the max to 250 statements, and I have made recommendations to the development team to cut down the number of values clauses. However, it seems to me that the ability to truncate the captured query would be a useful feature.\n\nI've peeked at the source code and I don't see the track_activity_query_size used (pg_stat_activity.query) which would be one mechanism. I don't really know what would be the right way to do this or even if it is a good idea, i.e. if limiting that would have a larger impact to the statistics ecosystem...\n\nThoughts or suggestions?\nRegards,\npg\n\n\n\n# select length(query) from pg_stat_statements;\n\nlength\n\n---------\n\n 876153\n\n 879385\n\n 171\n\n 44\n\n 3796\n\n 873527\n\n <snip>\n\n 896454\n\n 864538\n\n1869286\n\n 938\n\n 869891\n\n <snip>\n\n 883526\n\n 877365\n\n(969 rows)\n\n\n\nTime: 9898.411 ms (00:09.898)\n\n\n\n# select count(*) from pg_stat_statements;\n\ncount\n\n-------\n\n 971\n\n(1 row)\n\nTime: 6457.985 ms (00:06.458)\n\n\n\nUsing showtext:=false shows the impact of the large columns:\n\n\n\n# select count(*) from pg_stat_statements(showtext:=false);\n\ncount\n\n-------\n\n 970\n\n(1 row)\n\nTime: 10.644 ms\n\n\n\n\n\nPhil Godfrin | Database Administration\nNOV\nNOV US | Engineering Data\n9720 Beechnut St | Houston, Texas 77036\nM 281.825.2311\nE Philippe.Godfrin@nov.com<mailto:Philippe.Godfrin@nov.com>",
"msg_date": "Mon, 2 May 2022 12:45:28 +0000",
"msg_from": "\"Godfrin, Philippe E\" <Philippe.Godfrin@nov.com>",
"msg_from_op": true,
"msg_subject": "limiting collected query text length in pg_stat_statements"
},
{
"msg_contents": "Hi,\n\nOn Mon, May 02, 2022 at 12:45:28PM +0000, Godfrin, Philippe E wrote:\n> Greetings,\n> \n> I want to limit the query text that gets captured in pg_stat_statements. We\n> have sql statements with thousands of values clauses (upwards of 10,000) that\n> run at a 1 second interval. When just a handful are running plus 2 or 3 loads\n> using the same technique (10,000 entry values clauses) querying the\n> pg_stat_statements table gets bogged down (see below). With the\n> pg_stat_statements.max is set to 1000 statements just querying the table\n> stats table seems to impact the running statements! I have temporarily staved\n> off the issue by reducing the max to 250 statements, and I have made\n> recommendations to the development team to cut down the number of values\n> clauses. However, it seems to me that the ability to truncate the captured\n> query would be a useful feature.\n\nThe store queries are normalized so the values themselves won't be stored, only\na \"?\" per value. And as long as all the queries have the same number of values\nthere should be a single entry stored for the same role and database, so all in\nall it should limit the size of the stored query texts.\n\nOn the other hand, with such a low pg_stat_statements.max, you may have a lot\nof entry evictions, which tends to bloat the external query file\n($PGDATA/pg_stat_tmp/pgss_query_texts.stat). Did you check how big it is and\nif yes how fast it grows? I've once seen the file being more than 1GB without\nany reason why, which was obviously slowing everything down. A simple call to\npg_stat_statements_reset() fixed the problem, at least as far as I know as I\nnever had access to the server and never had any news after that.\n\n> I've peeked at the source code and I don't see the track_activity_query_size\n> used (pg_stat_activity.query) which would be one mechanism. I don't really\n> know what would be the right way to do this or even if it is a good idea,\n> i.e. if limiting that would have a larger impact to the statistics\n> ecosystem...\n\npg_stat_statements used to truncate the query text to\ntrack_activity_query_size, but that limitation was removed when the query texts\nwere moved to the external query file. It's quite convenient to have the full\nnormalized query text available, especially with the application is using some\nORM, as they tend to make SQL even more verbose than it already is. Having a\nvery high number of values (and I'm assuming queries with different number of\nvalues) seems like a corner case, but truncating the query\ntext would only fix part of the problem. It will lead to a very high number of\ndifferent queryid, which is also problematic as frequent entry evictions also\ntends to have an insanely high overhead, and you can't and an infinite number\nof entries stored.\n\n> Thoughts or suggestions?\n\nYou didn't explain how you're using pg_stat_statements. Do you really need to\nquery pg_stat_statements with the query text each time? If you only need to\nget some performance metrics you could adapt your system to only retrieve the\nquery text for the wanted queryid(s) once you find some problematic pattern,\nand/or cache the query texts a table or some other place. But with a very low\npg_stat_statements.max (especially if you can have a varying number of values\nfrom 1 to 10k) it might be hard to do.\n\n\n",
"msg_date": "Mon, 2 May 2022 21:57:45 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: limiting collected query text length in pg_stat_statements"
}
] |
[
{
"msg_contents": "Hi,\nwe are developing an extension for multidimensional data. We have \ncreated a Gist index that is heavily inspired by a cube extension. Right \nnow we would like to add some item compression since data items in a \nnode share a significant portion of a tuple prefix. However, I have no \nidea how to handle information stored on a node level in Gists' \ncompression method. Is there any example? Unfortunately, the cube \nextension does not implement item compression.\nTo be more specific we would like to store in the node a common prefix \nfor all tuples in the node.\nThanks for any advice,\nRadim\n\n\n-- \nThis email has been checked for viruses by Avast antivirus software.\nhttps://www.avast.com/antivirus\n\n\n\n",
"msg_date": "Mon, 2 May 2022 14:48:27 +0200",
"msg_from": "Baca Radim <rad.baca@gmail.com>",
"msg_from_op": true,
"msg_subject": "Item compression in the Gist index"
},
{
"msg_contents": "On Mon, 2022-05-02 at 14:48 +0200, Baca Radim wrote:\n> we are developing an extension for multidimensional data. We have \n> created a Gist index that is heavily inspired by a cube extension. Right \n> now we would like to add some item compression since data items in a \n> node share a significant portion of a tuple prefix. However, I have no \n> idea how to handle information stored on a node level in Gists' \n> compression method. Is there any example? Unfortunately, the cube \n> extension does not implement item compression.\n> To be more specific we would like to store in the node a common prefix \n> for all tuples in the node.\n> Thanks for any advice,\n\nPerhaps the PostGIS source will inspire you. They are compressing an\nentry to its bounding box.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 02 May 2022 15:00:07 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Item compression in the Gist index"
},
{
"msg_contents": "On Mon, May 2, 2022 at 02:48:27PM +0200, Baca Radim wrote:\n> Hi,\n> we are developing an extension for multidimensional data. We have created a\n> Gist index that is heavily inspired by a cube extension. Right now we would\n> like to add some item compression since data items in a node share a\n> significant portion of a tuple prefix. However, I have no idea how to handle\n> information stored on a node level in Gists' compression method. Is there\n> any example? Unfortunately, the cube extension does not implement item\n> compression.\n> To be more specific we would like to store in the node a common prefix for\n> all tuples in the node.\n\nUh, SP-GiST does prefix compression.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 13 May 2022 15:29:53 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Item compression in the Gist index"
}
] |
[
{
"msg_contents": "Hi,\n\nI came across pg_toupper and pg_tolower functions, converting a single\ncharacter, are being used in loops to convert an entire\nnull-terminated string. The cost of calling these character-based\nconversion functions (even though small) can be avoided if we have two\nnew functions pg_strtoupper and pg_strtolower.\n\nAttaching a patch with these new two functions and their usage in most\nof the possible places in the code.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Mon, 2 May 2022 18:21:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add pg_strtoupper and pg_strtolower functions"
},
{
"msg_contents": "On Mon, May 2, 2022 at 6:21 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> I came across pg_toupper and pg_tolower functions, converting a single\n> character, are being used in loops to convert an entire\n> null-terminated string. The cost of calling these character-based\n> conversion functions (even though small) can be avoided if we have two\n> new functions pg_strtoupper and pg_strtolower.\n\nHave we measured the saving in cost? Let's say for a million character\nlong string?\n\n>\n> Attaching a patch with these new two functions and their usage in most\n> of the possible places in the code.\n\nConverting pg_toupper and pg_tolower to \"inline\" might save cost\nsimilarly and also avoid code duplication?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 2 May 2022 18:43:20 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_strtoupper and pg_strtolower functions"
},
{
"msg_contents": "On Mon, May 2, 2022 at 6:43 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Mon, May 2, 2022 at 6:21 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I came across pg_toupper and pg_tolower functions, converting a single\n> > character, are being used in loops to convert an entire\n> > null-terminated string. The cost of calling these character-based\n> > conversion functions (even though small) can be avoided if we have two\n> > new functions pg_strtoupper and pg_strtolower.\n>\n> Have we measured the saving in cost? Let's say for a million character\n> long string?\n\nI didn't spend time on figuring out the use-cases hitting all the code\nareas, even if I do so, the function call cost savings might not\nimpress most of the time and the argument of saving function call cost\nthen becomes pointless.\n\n> > Attaching a patch with these new two functions and their usage in most\n> > of the possible places in the code.\n>\n> Converting pg_toupper and pg_tolower to \"inline\" might save cost\n> similarly and also avoid code duplication?\n\nI think most of the modern compilers do inline small functions. But,\ninlining isn't always good as it increases the size of the code. With\nthe proposed helper functions, the code looks cleaner (at least IMO,\nothers may have different opinions though).\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 4 May 2022 16:33:45 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_strtoupper and pg_strtolower functions"
},
{
"msg_contents": "On 2022-May-02, Bharath Rupireddy wrote:\n\n> Hi,\n> \n> I came across pg_toupper and pg_tolower functions, converting a single\n> character, are being used in loops to convert an entire\n> null-terminated string. The cost of calling these character-based\n> conversion functions (even though small) can be avoided if we have two\n> new functions pg_strtoupper and pg_strtolower.\n\nCurrently, pg_toupper/pg_tolower are used in very limited situations.\nAre they really always safe enough to run in arbitrary situations,\nenough to create this new layer on top of them? Reading the comment on\npg_tolower, \"the whole thing is a bit bogus for multibyte charsets\", I\nworry that we might create security holes, either now or in future\ncallsites that use these new functions.\n\nConsider that in the Turkish locale you lowercase an I (single-byte\nASCII character) with a dotless-i (two bytes). So overwriting the input\nstring is not a great solution.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Nunca se desea ardientemente lo que solo se desea por razón\" (F. Alexandre)\n\n\n",
"msg_date": "Wed, 4 May 2022 15:13:31 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_strtoupper and pg_strtolower functions"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Currently, pg_toupper/pg_tolower are used in very limited situations.\n> Are they really always safe enough to run in arbitrary situations,\n> enough to create this new layer on top of them?\n\nThey are not, and we should absolutely not be encouraging additional uses\nof them. The existing multi-character str_toupper/str_tolower functions\nshould be used instead. (Perhaps those should be relocated to someplace\nmore prominent?)\n\n> Reading the comment on\n> pg_tolower, \"the whole thing is a bit bogus for multibyte charsets\", I\n> worry that we might create security holes, either now or in future\n> callsites that use these new functions.\n\nI doubt that they are security holes, but they do give unexpected\nanswers in some locales.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 May 2022 09:40:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_strtoupper and pg_strtolower functions"
}
] |
[
{
"msg_contents": "configure can report this:\n\nconfigure: WARNING:\n*** With OpenLDAP versions 2.4.24 through 2.4.31, inclusive, each backend\n*** process that loads libpq (via WAL receiver, dblink, or postgres_fdw) and\n*** also uses LDAP will crash on exit.\n\nThe source code also says\n\n# PostgreSQL sometimes loads libldap_r and plain libldap into the same\n# process. Check for OpenLDAP versions known not to tolerate doing so; assume\n# non-OpenLDAP implementations are safe. The dblink test suite exercises the\n# hazardous interaction directly.\n\nThe libldap installation that comes with the macOS operating system \nreports itself as version 2.4.28, so this warning comes up every time, \nbut the dblink test suite passes without problem.\n\nI looked into this a bit further. I checked by installing openldap \n2.4.31 from source on an centos 7 instance, and indeed the dblink test \nsuite crashes right away. So the test is still good. I think it's very \nlikely that Apple has patched around in their libldap code without \nchanging the version.\n\nI wonder whether we can do something to not make this warning appear \nwhen it's not necessary. The release of openldap 2.4.32 (the first good \none) is now ten years ago, so seeing faulty versions in the wild should \nbe very rare. I'm tempted to suggest that we can just remove the \nwarning and rely on the dblink test to catch any stragglers. We could \nalso disable the test on macOS only. Thoughts?\n\n\n\n",
"msg_date": "Mon, 2 May 2022 15:24:30 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "configure openldap crash warning"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> configure can report this:\n> configure: WARNING:\n> *** With OpenLDAP versions 2.4.24 through 2.4.31, inclusive, each backend\n> *** process that loads libpq (via WAL receiver, dblink, or postgres_fdw) and\n> *** also uses LDAP will crash on exit.\n\n> I wonder whether we can do something to not make this warning appear \n> when it's not necessary. The release of openldap 2.4.32 (the first good \n> one) is now ten years ago, so seeing faulty versions in the wild should \n> be very rare. I'm tempted to suggest that we can just remove the \n> warning and rely on the dblink test to catch any stragglers. We could \n> also disable the test on macOS only. Thoughts?\n\nI'm not that excited about getting rid of this warning, because to the\nextent that anyone notices it at all, it'll motivate them to get OpenLDAP\nfrom Homebrew or MacPorts, which seems like a good thing. This configure\nwarning is already far less in-your-face than the compile-time deprecation\nwarnings that get spewed at you when you use Apple's headers; their slapd\ndoesn't really work as far as we can tell; and whether they fixed this\nissue or not, it's a safe bet that there are a lot of other unfixed bugs\nin their ancient copy of OpenLDAP. The deprecation warnings seem like\nclear evidence that Apple intends to remove the bundled libldap at some\npoint, so I don't think we should put effort into encouraging people to\nuse it.\n\nThere's certainly a conversation to be had about whether this configure\nwarning is still worth the cycles it takes to run it; maybe it isn't.\nBut I don't think that we should be looking at it from the standpoint of\nsilencing the complaint on macOS.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 May 2022 10:03:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: configure openldap crash warning"
},
{
"msg_contents": "On 02.05.22 16:03, Tom Lane wrote:\n> I'm not that excited about getting rid of this warning, because to the\n> extent that anyone notices it at all, it'll motivate them to get OpenLDAP\n> from Homebrew or MacPorts, which seems like a good thing.\n\nI tried building with Homebrew-supplied openldap. What ends up \nhappening is that the postgres binary is indeed linked with openldap, \nbut libpq still is linked against the OS-supplied LDAP framework. \n(Checked with \"otool -L\" in each case.) Can someone else reproduce \nthis, too?\n\n\n\n",
"msg_date": "Wed, 4 May 2022 16:05:45 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: configure openldap crash warning"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 02.05.22 16:03, Tom Lane wrote:\n>> I'm not that excited about getting rid of this warning, because to the\n>> extent that anyone notices it at all, it'll motivate them to get OpenLDAP\n>> from Homebrew or MacPorts, which seems like a good thing.\n\n> I tried building with Homebrew-supplied openldap. What ends up \n> happening is that the postgres binary is indeed linked with openldap, \n> but libpq still is linked against the OS-supplied LDAP framework. \n> (Checked with \"otool -L\" in each case.) Can someone else reproduce \n> this, too?\n\nHmm, I just tried it with up-to-date MacPorts, and it was a *complete*\nfail: I got all the deprecation warnings (so the system include headers\nwere used), and both postgres and libpq.dylib still ended up linked\nagainst /System/Library/Frameworks/LDAP.framework/Versions/A/LDAP.\n\nBut then I went \"doh!\" and added\n --with-includes=/opt/local/include --with-libraries=/opt/local/lib\nto the configure call, and everything built the way I expected.\nI'm not sure offhand if the docs include a reminder to do that when\nusing stuff out of MacPorts, or the equivalent for Homebrew.\n\nWe still have a bit of work to do, because this setup isn't getting\nall the way through src/test/ldap/:\n\n2022-05-04 11:01:33.407 EDT [21312] [unknown] LOG: connection received: host=[local]\n2022-05-04 11:01:33.457 EDT [21312] [unknown] LOG: could not start LDAP TLS session: Operations error\n2022-05-04 11:01:33.457 EDT [21312] [unknown] DETAIL: LDAP diagnostics: TLS already started\n2022-05-04 11:01:33.457 EDT [21312] [unknown] FATAL: LDAP authentication failed for user \"test1\"\n2022-05-04 11:01:33.457 EDT [21312] [unknown] DETAIL: Connection matched pg_hba.conf line 1: \"local all all ldap ldapurl=\"ldaps://localhost:51335/dc=example,dc=net??sub?(uid=$username)\" ldaptls=1\"\n2022-05-04 11:01:33.459 EDT [21304] LOG: server process (PID 21312) was terminated by signal 11: Segmentation fault: 11\n\nMany of the test cases pass, but it looks like ldaps-related ones don't.\nThe stack trace isn't very helpful:\n\n(lldb) bt\n* thread #1, stop reason = ESR_EC_DABORT_EL0 (fault address: 0x0)\n * frame #0: 0x00000001b5bfc628 libsystem_pthread.dylib`pthread_rwlock_rdlock\n frame #1: 0x00000001054a74c4 libcrypto.3.dylib`CRYPTO_THREAD_read_lock + 12\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 May 2022 11:16:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: configure openldap crash warning"
},
{
"msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> I tried building with Homebrew-supplied openldap. What ends up \n>> happening is that the postgres binary is indeed linked with openldap, \n>> but libpq still is linked against the OS-supplied LDAP framework. \n>> (Checked with \"otool -L\" in each case.) Can someone else reproduce \n>> this, too?\n\n> [ it works with MacPorts ]\n\nOh, I have a theory about this: I bet your Homebrew installation\nhas a recent OpenLDAP version that only supplies libldap not libldap_r.\nIn that case, configure will still find libldap_r available and will\nbind libpq to it, and you get the observed result. The configure\ncheck is not sophisticated enough to realize that it's finding chunks\nof two different OpenLDAP installations.\n\nNot sure about a good fix. If we had a way to detect which library\nfile AC_CHECK_LIB finds, we could verify that libldap and libldap_r\ncome from the same directory ... but I don't think we have that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 May 2022 11:30:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: configure openldap crash warning"
},
{
"msg_contents": "I wrote:\n> Oh, I have a theory about this: I bet your Homebrew installation\n> has a recent OpenLDAP version that only supplies libldap not libldap_r.\n> In that case, configure will still find libldap_r available and will\n> bind libpq to it, and you get the observed result. The configure\n> check is not sophisticated enough to realize that it's finding chunks\n> of two different OpenLDAP installations.\n\nAfter thinking about this for awhile, it seems like the best solution\nis to make configure proceed like this:\n\n1. Find libldap.\n2. Detect whether it's OpenLDAP 2.5 or newer.\n3. If not, try to find libldap_r.\n\nThere are various ways we could perform step 2, but I think the most\nreliable is to try to link to some function that's present in 2.5\nbut not before. (In particular, this doesn't require any strong\nassumptions about whether the installation's header files match the\nlibrary.) After a quick dig in 2.4 and 2.5, it looks like\nldap_verify_credentials() would serve.\n\nBarring objections, I'll make a patch for that.\n\nBTW, I was a little distressed to read this in the 2.4 headers:\n\n ** If you fail to define LDAP_THREAD_SAFE when linking with\n ** -lldap_r or define LDAP_THREAD_SAFE when linking with -lldap,\n ** provided header definations and declarations may be incorrect.\n\nThat's not something we do or ever have done, AFAIK. Given the\nlack of complaints and the fact that 2.4 is more or less EOL,\nI don't feel a strong need to worry about it; but it might be\nsomething to keep in mind in case we get bug reports.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 May 2022 14:00:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: configure openldap crash warning"
},
{
"msg_contents": "I wrote:\n> We still have a bit of work to do, because this setup isn't getting\n> all the way through src/test/ldap/:\n> 2022-05-04 11:01:33.459 EDT [21304] LOG: server process (PID 21312) was terminated by signal 11: Segmentation fault: 11\n> Many of the test cases pass, but it looks like ldaps-related ones don't.\n\nSadly, this still happens with MacPorts' build of openldap 2.6.1.\nI was able to get a stack trace from the point of the segfault\nthis time:\n\n(lldb) bt\n* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)\n * frame #0: 0x000000018f120628 libsystem_pthread.dylib`pthread_rwlock_rdlock\n frame #1: 0x00000001019174c4 libcrypto.3.dylib`CRYPTO_THREAD_read_lock + 12\n frame #2: 0x00000001019099d0 libcrypto.3.dylib`ossl_lib_ctx_get_data + 56\n frame #3: 0x00000001019144d0 libcrypto.3.dylib`get_provider_store + 28\n frame #4: 0x000000010191641c libcrypto.3.dylib`ossl_provider_deregister_child_cb + 32\n frame #5: 0x0000000101909748 libcrypto.3.dylib`OSSL_LIB_CTX_free + 48\n frame #6: 0x000000010aa982d8 legacy.dylib`legacy_teardown + 24\n frame #7: 0x0000000101914840 libcrypto.3.dylib`ossl_provider_free + 76\n frame #8: 0x00000001018ec404 libcrypto.3.dylib`evp_cipher_free_int + 48\n frame #9: 0x0000000101472804 libssl.3.dylib`SSL_CTX_free + 420\n frame #10: 0x00000001013a4768 libldap.2.dylib`ldap_int_tls_destroy + 40\n frame #11: 0x000000018f00fdd0 libsystem_c.dylib`__cxa_finalize_ranges + 464\n frame #12: 0x000000018f00fb74 libsystem_c.dylib`exit + 44\n frame #13: 0x0000000100941cb8 postgres`proc_exit(code=<unavailable>) at ipc.c:152:2 [opt]\n frame #14: 0x000000010096d804 postgres`PostgresMain(dbname=<unavailable>, username=<unavailable>) at postgres.c:4756:5 [opt]\n frame #15: 0x00000001008d7730 postgres`BackendRun(port=<unavailable>) at postmaster.c:4489:2 [opt]\n frame #16: 0x00000001008d6ff4 postgres`ServerLoop [inlined] BackendStartup(port=<unavailable>) at postmaster.c:4217:3 [opt]\n frame 
#17: 0x00000001008d6fcc postgres`ServerLoop at postmaster.c:1791:7 [opt]\n frame #18: 0x00000001008d474c postgres`PostmasterMain(argc=<unavailable>, argv=<unavailable>) at postmaster.c:1463:11 [opt]\n frame #19: 0x000000010083a248 postgres`main(argc=<unavailable>, argv=<unavailable>) at main.c:202:3 [opt]\n frame #20: 0x000000010117908c dyld`start + 520\n\nSo (1) libldap relies on libssl to implement ldaps ... no surprise there,\nand (2) something's going wrong in the atexit callback that it seemingly\ninstalls to close down its SSL context. It's not clear from this whether\nthis is purely libldap's fault or if there is something we're doing that\nsends it off the rails. I could believe that the problem is essentially\na double shutdown of libssl, except that there doesn't seem to be any\nreason why PG itself would have touched libssl; this isn't an SSL-enabled\nbuild. (Adding --with-openssl doesn't make it better, either.)\n\nOn the whole I'm leaning to the position that this is openldap's fault\nnot ours.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 May 2022 15:17:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: configure openldap crash warning"
},
{
"msg_contents": "I wrote:\n> After thinking about this for awhile, it seems like the best solution\n> is to make configure proceed like this:\n\n> 1. Find libldap.\n> 2. Detect whether it's OpenLDAP 2.5 or newer.\n> 3. If not, try to find libldap_r.\n\nHere's a proposed patch for that. It seems to do the right thing\nwith openldap 2.4.x and 2.6.x, but I don't have a 2.5 installation\nat hand to try.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 06 May 2022 15:25:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: configure openldap crash warning"
},
{
"msg_contents": "On 06.05.22 21:25, Tom Lane wrote:\n> I wrote:\n>> After thinking about this for awhile, it seems like the best solution\n>> is to make configure proceed like this:\n> \n>> 1. Find libldap.\n>> 2. Detect whether it's OpenLDAP 2.5 or newer.\n>> 3. If not, try to find libldap_r.\n> \n> Here's a proposed patch for that. It seems to do the right thing\n> with openldap 2.4.x and 2.6.x, but I don't have a 2.5 installation\n> at hand to try.\n\nThis patch works for me. I think it's a good solution.\n\n\n",
"msg_date": "Mon, 9 May 2022 14:28:13 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: configure openldap crash warning"
},
{
"msg_contents": "On 04.05.22 17:16, Tom Lane wrote:\n> Hmm, I just tried it with up-to-date MacPorts, and it was a*complete*\n> fail: I got all the deprecation warnings (so the system include headers\n> were used), and both postgres and libpq.dylib still ended up linked\n> against /System/Library/Frameworks/LDAP.framework/Versions/A/LDAP.\n\nBtw., I was a bit puzzled about all this talk about deprecation \nwarnings, which I have not seen. I turns out that you only get those if \nyou use the OS compiler, not a third-party gcc installation.\n\nSo in terms of my original message, my installation is clearly niche. \nThe possibly false-positive configure warning is a drop in the bucket \ncompared to the deprecation warnings from the compiler. So it's \nprobably okay to leave this as is and encourage users to use openldap.\n\n\n\n",
"msg_date": "Mon, 9 May 2022 14:36:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: configure openldap crash warning"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Btw., I was a bit puzzled about all this talk about deprecation \n> warnings, which I have not seen. I turns out that you only get those if \n> you use the OS compiler, not a third-party gcc installation.\n\nAh-hah.\n\n> So in terms of my original message, my installation is clearly niche. \n> The possibly false-positive configure warning is a drop in the bucket \n> compared to the deprecation warnings from the compiler. So it's \n> probably okay to leave this as is and encourage users to use openldap.\n\nOK. I will push the configure change once the release freeze lifts.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 May 2022 09:47:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: configure openldap crash warning"
},
{
"msg_contents": "I wrote:\n> OK. I will push the configure change once the release freeze lifts.\n\nAnd done.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 May 2022 18:46:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: configure openldap crash warning"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-06 14:00:43 -0400, Tom Lane wrote:\n> I wrote:\n> > Oh, I have a theory about this: I bet your Homebrew installation\n> > has a recent OpenLDAP version that only supplies libldap not libldap_r.\n> > In that case, configure will still find libldap_r available and will\n> > bind libpq to it, and you get the observed result. The configure\n> > check is not sophisticated enough to realize that it's finding chunks\n> > of two different OpenLDAP installations.\n> \n> After thinking about this for awhile, it seems like the best solution\n> is to make configure proceed like this:\n> \n> 1. Find libldap.\n> 2. Detect whether it's OpenLDAP 2.5 or newer.\n> 3. If not, try to find libldap_r.\n> \n> There are various ways we could perform step 2, but I think the most\n> reliable is to try to link to some function that's present in 2.5\n> but not before. (In particular, this doesn't require any strong\n> assumptions about whether the installation's header files match the\n> library.) After a quick dig in 2.4 and 2.5, it looks like\n> ldap_verify_credentials() would serve.\n\nWhy do we continue to link the backend to ldap when we find ldap_r, given that\nwe know that it can cause problems for extension libraries using libpq? I did\na cursory search of the archives without finding an answer. ISTM that we'd be\nmore robust if we used your check from above to decide when to use ldap (so we\ndon't use an older ldap_r), and use ldap_r for both FE and BE otherwise.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Aug 2022 20:08:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: configure openldap crash warning"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Why do we continue to link the backend to ldap when we find ldap_r, given that\n> we know that it can cause problems for extension libraries using libpq?\n\nUh ... if we know that, it's news to me.\n\nI think we might've avoided ldap_r for fear of pulling libpthread into\nthe backend; per recent discussion, it's not clear that avoiding that\nis possible anyway. But you didn't make a case for changing this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Aug 2022 23:18:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: configure openldap crash warning"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-29 23:18:23 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Why do we continue to link the backend to ldap when we find ldap_r, given that\n> > we know that it can cause problems for extension libraries using libpq?\n>\n> Uh ... if we know that, it's news to me.\n\nIsn't that what the configure warning Peter mentioned upthread is about?\n\n# PGAC_LDAP_SAFE\n# --------------\n# PostgreSQL sometimes loads libldap_r and plain libldap into the same\n# process. Check for OpenLDAP versions known not to tolerate doing so; assume\n# non-OpenLDAP implementations are safe. The dblink test suite exercises the\n# hazardous interaction directly.\n\n\nThe patch applied as a result of this thread dealt with a different version of\nthe problem, with -lldap_r picking up a different library version than -lldap.\n\nLeaving that aside it also doesn't seem like a great idea to have two\ndifferent copies of the nearly same library loaded for efficiency reasons, not\nthat it'll make a large difference...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Aug 2022 20:48:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: configure openldap crash warning"
}
] |
[
{
"msg_contents": "Hi,\n\nRight now postgres can't prevent users setting certain configuration\nparameters or GUCs (like shared_buffers, temp_buffers, work_mem,\nmaintenance_work_mem, max_stack_depth, temp_file_limit,\nmax_worker_processes, other worker processes settings,\neffective_io_concurrency and so on) to unreasonable values, say\nshared_buffers to 80% of available memory. What happens is that the\nserver comes up initially and but soon it ends up crashing or some\nother PANICs or errors. IMO, we all have to agree with the fact that\nthe users setting these parameters aren't always familiar with the\nconsequences of unreasonable values and come to the vendors to bring\nback their server up after it crashed and went down. Mostly, these\nparameters, that worry the vendors, are some or the other way\nplatform/Virtual Machine configuration (vcores, RAM, OS, disk)\ndependent and vary offering to offering. Of course, each postgres\nvendor can implement their own solution in their control plane or\nsomewhere in the service stack before allowing users to set these\nvalues, but that involves looking at the parameters and their type\nwhich isn't good from maintainability and extensibility (if the server\nadds a new GUC or changes data type of a certain parameter)\nperspective and it might be difficult to do it right as well.\n\nIs there any hook or a way in postgres today, to address the above\nproblem? One way, I can think of to use is to have a\nProcessUtility_hook and see if the statement is T_AlterSystemStmt or\nT_VariableSetStmt or T_AlterDatabaseSetStmt or T_AlterRoleSetStmt type\nand check for the interested GUC params and allow or reject based on\nthe value (but we might have to do some extra stuff to know the GUC\ndata type and parse the value). 
And this solution doesn't cover\nextensions setting the server GUCs or custom GUCs.\n\nI propose to add a simple new hook in set_config_option (void\nset_config_option_hook(struct config_generic *record);) and the\nvendors can implement their own platform-dependent extensions to\naccept or reject certain parameters (based on platform/VM\nconfiguration) that are of interest to them.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 2 May 2022 20:08:31 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Configuration Parameter/GUC value validation hook"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> I propose to add a simple new hook in set_config_option (void\n> set_config_option_hook(struct config_generic *record);) and the\n> vendors can implement their own platform-dependent extensions to\n> accept or reject certain parameters (based on platform/VM\n> configuration) that are of interest to them.\n\nThis seems entirely useless. Vendors are unlikely to have any better\nidea than we do about what are \"reasonable\" values. Moreover, if they\ndid, modifying the source code directly would be an easier route to\nintroducing their code than making use of a hook (which'd require\nfinding a way to ensure that some extension is loaded).\n\nIn general, I think you need a much more concrete use-case than this\nbefore proposing a new hook. Otherwise we're going to have tons of\nhooks that we don't know whether they're actually useful or being\nused by anyone.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 May 2022 10:54:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Parameter/GUC value validation hook"
},
{
"msg_contents": "On Mon, May 2, 2022 at 10:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > I propose to add a simple new hook in set_config_option (void\n> > set_config_option_hook(struct config_generic *record);) and the\n> > vendors can implement their own platform-dependent extensions to\n> > accept or reject certain parameters (based on platform/VM\n> > configuration) that are of interest to them.\n>\n> This seems entirely useless. Vendors are unlikely to have any better\n> idea than we do about what are \"reasonable\" values. Moreover, if they\n> did, modifying the source code directly would be an easier route to\n> introducing their code than making use of a hook (which'd require\n> finding a way to ensure that some extension is loaded).\n\nI don't think we should be in the business of encouraging vendors to\nfork the source code, and I think it is quite likely that a vendor\nwill have better ideas than we do about what values they want to allow\ntheir users to set. It's far easier to know what is reasonable in a\nparticular context than it is to make a statement about reasonableness\nin general.\n\nI have some desire here to see us solve this problem not just for\nservice providers, but for users in general. You don't have to be a\nservice provider to want to disallow SET work_mem = '1TB' -- you just\nneed to be a DBA on a system where such a setting will cause bad\nthings to happen. But, if you are a DBA on some random system, you\nwon't likely find a hook to be a particularly useful way of\ncontrolling this sort of thing. I feel like Alice wants to do\nsomething like GRANT work_mem BETWEEN '1MB' AND '2GB' to bob, not that\nI'm proposing that particular syntax. I also don't have a clear idea\nfor what to do about GUCs where a range constraint isn't useful. One\ncould allow a list of permissible values, but that might not be\npowerful enough. 
One could maybe allow a PL validation function for a\nvalue, but that might be complicated to make work.\n\nIn the end, I don't think providing a hook here is particularly\nunreasonable. I would be a little sad if it ended up being the only\nthing we provided, but I'm also not a huge believer in the idea of\nforcing people to write the patch that I want written as a condition\nof writing the patch that they want written.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 May 2022 10:52:57 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Parameter/GUC value validation hook"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I have some desire here to see us solve this problem not just for\n> service providers, but for users in general. You don't have to be a\n> service provider to want to disallow SET work_mem = '1TB' -- you just\n> need to be a DBA on a system where such a setting will cause bad\n> things to happen. But, if you are a DBA on some random system, you\n> won't likely find a hook to be a particularly useful way of\n> controlling this sort of thing.\n\nYeah, I think this is a more realistic point. I too am not sure what\na good facility would look like. I guess an argument in favor of\nproviding a hook is that we could then leave it to extension authors\nto try to devise a facility that's useful to end users, rather than\nhaving to write an in-core feature.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 May 2022 11:45:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Parameter/GUC value validation hook"
},
{
"msg_contents": "On Tue, May 3, 2022 at 11:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I have some desire here to see us solve this problem not just for\n> > service providers, but for users in general. You don't have to be a\n> > service provider to want to disallow SET work_mem = '1TB' -- you just\n> > need to be a DBA on a system where such a setting will cause bad\n> > things to happen. But, if you are a DBA on some random system, you\n> > won't likely find a hook to be a particularly useful way of\n> > controlling this sort of thing.\n>\n> Yeah, I think this is a more realistic point. I too am not sure what\n> a good facility would look like. I guess an argument in favor of\n> providing a hook is that we could then leave it to extension authors\n> to try to devise a facility that's useful to end users, rather than\n> having to write an in-core feature.\n\nRIght. The counter-argument is that if we just do that, then what will\nlikely happen is that people who buy PostgreSQL services from\nMicrosoft, Amazon, EDB, Crunchy, etc. will end up with reasonable\noptions in this area, and people who download the source code from the\nInternet probably won't. As an open-source project, we might hope to\navoid a scenario where it doesn't work unless you buy something. On\nthe third hand, half a loaf is better than nothing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 May 2022 13:13:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Parameter/GUC value validation hook"
},
{
"msg_contents": "On Tue, May 3, 2022 at 10:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, May 3, 2022 at 11:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > I have some desire here to see us solve this problem not just for\n> > > service providers, but for users in general. You don't have to be a\n> > > service provider to want to disallow SET work_mem = '1TB' -- you just\n> > > need to be a DBA on a system where such a setting will cause bad\n> > > things to happen. But, if you are a DBA on some random system, you\n> > > won't likely find a hook to be a particularly useful way of\n> > > controlling this sort of thing.\n> >\n> > Yeah, I think this is a more realistic point. I too am not sure what\n> > a good facility would look like. I guess an argument in favor of\n> > providing a hook is that we could then leave it to extension authors\n> > to try to devise a facility that's useful to end users, rather than\n> > having to write an in-core feature.\n>\n> RIght. The counter-argument is that if we just do that, then what will\n> likely happen is that people who buy PostgreSQL services from\n> Microsoft, Amazon, EDB, Crunchy, etc. will end up with reasonable\n> options in this area, and people who download the source code from the\n> Internet probably won't. As an open-source project, we might hope to\n> avoid a scenario where it doesn't work unless you buy something. On\n> the third hand, half a loaf is better than nothing.\n\nThanks Tom and Robert for your responses.\n\nHow about we provide a sample extension (limiting some important\nparameters say shared_buffers, work_mem and so on to some\n\"reasonable/recommended\" limits) in the core along with the\nset_config_option_hook? This way, all the people using open source\npostgres out-of-the-box will benefit and whoever wants, can modify\nthat sample extension to suit their needs. 
The sample extension can\nalso serve as an example to implement set_config_option_hook.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 4 May 2022 16:42:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Configuration Parameter/GUC value validation hook"
},
{
"msg_contents": "On Wed, May 4, 2022 at 7:12 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks Tom and Robert for your responses.\n>\n> How about we provide a sample extension (limiting some important\n> parameters say shared_buffers, work_mem and so on to some\n> \"reasonable/recommended\" limits) in the core along with the\n> set_config_option_hook? This way, all the people using open source\n> postgres out-of-the-box will benefit and whoever wants, can modify\n> that sample extension to suit their needs. The sampe extension can\n> also serve as an example to implement set_config_option_hook.\n>\n> Thoughts?\n\nWell, it's better than just adding a hook and stopping, but I'm not\nreally sure that it's as good as what I'd like to have.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 6 May 2022 05:43:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Parameter/GUC value validation hook"
},
{
"msg_contents": "On Wed, May 4, 2022, at 8:12 AM, Bharath Rupireddy wrote:\n> How about we provide a sample extension (limiting some important\n> parameters say shared_buffers, work_mem and so on to some\n> \"reasonable/recommended\" limits) in the core along with the\n> set_config_option_hook? This way, all the people using open source\n> postgres out-of-the-box will benefit and whoever wants, can modify\n> that sample extension to suit their needs. The sampe extension can\n> also serve as an example to implement set_config_option_hook.\nI agree with Robert that providing a feature for it instead of a hook is the\nway to go. It is not just one or two vendors that will benefit from it but\nalmost or if not all vendors will use this feature. Hooks should be used for\nniche features; that's not the case here.\n\nThe commit a0ffa885e47 introduced the GRANT SET ON PARAMETER command. It could\nbe used for this purpose. Despite of accepting GRANT on PGC_USERSET GUCs, it\nhas no use. It doesn't mean that additional properties couldn't be added to the\ncurrent syntax. This additional use case should be enforced before or while\nexecuting set_config_option(). Is it ok to extend this SQL command?\n\nThe syntax could be:\n\nGRANT SET ON PARAMETER work_mem (MIN '1MB', MAX '512MB') TO bob;\n\nNULL keyword can be used to remove the MIN and MAX limit. The idea is to avoid\na verbose syntax (add an \"action\" to MIN/MAX -- ADD MIN 1, DROP MAX 234, SET\nMIN 456).\n\nThe other alternative is to ALTER USER SET and ALTER DATABASE SET. The current\nuser can set parameter for himself and he could adjust the limits. Besides that\nthe purpose of these SQL commands are to apply initial settings for a\ncombination of user/database. 
I'm afraid it is out of scope to check after the\nsession is established.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Fri, 06 May 2022 15:41:04 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Parameter/GUC value validation hook"
},
{
"msg_contents": "On Sat, May 7, 2022 at 12:11 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Wed, May 4, 2022, at 8:12 AM, Bharath Rupireddy wrote:\n>\n> How about we provide a sample extension (limiting some important\n> parameters say shared_buffers, work_mem and so on to some\n> \"reasonable/recommended\" limits) in the core along with the\n> set_config_option_hook? This way, all the people using open source\n> postgres out-of-the-box will benefit and whoever wants, can modify\n> that sample extension to suit their needs. The sampe extension can\n> also serve as an example to implement set_config_option_hook.\n>\n> I agree with Robert that providing a feature for it instead of a hook is the\n> way to go. It is not just one or two vendors that will benefit from it but\n> almost or if not all vendors will use this feature. Hooks should be used for\n> niche features; that's not the case here.\n>\n> The commit a0ffa885e47 introduced the GRANT SET ON PARAMETER command. It could\n> be used for this purpose. Despite of accepting GRANT on PGC_USERSET GUCs, it\n> has no use. It doesn't mean that additional properties couldn't be added to the\n> current syntax. This additional use case should be enforced before or while\n> executing set_config_option(). Is it ok to extend this SQL command?\n>\n> The syntax could be:\n>\n> GRANT SET ON PARAMETER work_mem (MIN '1MB', MAX '512MB') TO bob;\n>\n> NULL keyword can be used to remove the MIN and MAX limit. The idea is to avoid\n> a verbose syntax (add an \"action\" to MIN/MAX -- ADD MIN 1, DROP MAX 234, SET\n> MIN 456).\n>\n> The other alternative is to ALTER USER SET and ALTER DATABASE SET. The current\n> user can set parameter for himself and he could adjust the limits. Besides that\n> the purpose of these SQL commands are to apply initial settings for a\n> combination of user/database. I'm afraid it is out of scope to check after the\n> session is established.\n\nThanks for providing thoughts. 
I'm personally not in favour of adding\nany new syntax, as the new syntax would require some education and\nchanges to other layers. I see some downsides with new syntax:\n1) It will be a bit difficult to deal with the parameters that don't\nhave ranges (as pointed out by Robert upthread).\n2) It will be a bit difficult to enforce platform specific\nconfigurations at run time - (say the user has scaled-up the host\nsystem/VM, now has more vcores, RAM and now they will have more memory\nand number of workers to use for their setting).\n3) If someone wants to disallow users setting some core/extension\nconfiguration parameters which can make the server unmanageable (block\nsetting full_page_writes to off, zero_damaged_pages to on, fsync to\noff, log levels to debug5, huge_pages to on, all the command options\n(archive_command, restore_command .... etc.).\n\nIMO, the hook and a sample extension in the core helps greatly to\nachieve the above.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 9 May 2022 13:14:16 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Configuration Parameter/GUC value validation hook"
},
{
"msg_contents": "On Mon, May 9, 2022 at 3:44 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks for providing thoughts. I'm personally not in favour of adding\n> any new syntax, as the new syntax would require some education and\n> changes to other layers. I see some downsides with new syntax:\n> 1) It will be a bit difficult to deal with the parameters that don't\n> have ranges (as pointed out by Robert upthread).\n> 2) It will be a bit difficult to enforce platform specific\n> configurations at run time - (say the user has scaled-up the host\n> system/VM, now has more vcores, RAM and now they will have more memory\n> and number of workers to use for their setting).\n> 3) If someone wants to disallow users setting some core/extension\n> configuration parameters which can make the server unmanageable (block\n> setting full_page_writes to off, zero_damaged_pages to on, fsync to\n> off, log levels to debug5, huge_pages to on, all the command options\n> (archive_command, restore_command .... etc.).\n>\n> IMO, the hook and a sample extension in the core helps greatly to\n> achieve the above.\n\nI don't think that any of these are very fundamental objections. Every\nfeature requires education, many require changes to various layers,\nand the fact that some parameters don't have ranges is a topic to\nthink about how to handle, not a reason to give up on the idea. (2)\nmay mean that some users - large service providers, in particular -\nprefer the hook to the SQL syntax, but that's not a reason not to have\nSQL syntax. (3) basically seems like an argument that people my do\ndumb things with it, but that's true of every feature.\n\nI'm not sure that a hook and sample extension is unacceptable; it\nmight be fine. But I think it is not saying anything other than the\ntruth to say that this will benefit large service providers while\nleaving the corresponding problem unsolved for ordinary end users. 
And\nI remain of the opinion that that's not great.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 May 2022 09:17:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Parameter/GUC value validation hook"
}
] |
[
{
"msg_contents": "I just tested \"postgres -C\" on Postgres head, and got odd LOG output\nlines:\n\n\t$ postgres -C shared_memory_size\n\t143\n-->\t2022-05-02 13:08:06.445 EDT [1582048] LOG: database system is shut down\n\n\t$ postgres -C \"wal_segment_size\"\n\t16777216\n-->\t2022-05-02 13:13:30.499 EDT [1584650] LOG: database system is shut down\n\nAre those last lines expected?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Mon, 2 May 2022 13:15:00 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Odd LOG output from \"postgres -C\""
},
{
"msg_contents": "On Mon, May 02, 2022 at 01:15:00PM -0400, Bruce Momjian wrote:\n> I just tested \"postgres -C\" on Postgres head, and got odd LOG output\n> lines:\n> \n> \t$ postgres -C shared_memory_size\n> \t143\n> -->\t2022-05-02 13:08:06.445 EDT [1582048] LOG: database system is shut down\n> \n> \t$ postgres -C \"wal_segment_size\"\n> \t16777216\n> -->\t2022-05-02 13:13:30.499 EDT [1584650] LOG: database system is shut down\n> \n> Are those last lines expected?\n\nAn attempt to fix this is being tracked here:\n\n\thttps://commitfest.postgresql.org/38/3596\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 2 May 2022 10:21:30 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Odd LOG output from \"postgres -C\""
},
{
"msg_contents": "On Mon, May 2, 2022 at 10:21:30AM -0700, Nathan Bossart wrote:\n> On Mon, May 02, 2022 at 01:15:00PM -0400, Bruce Momjian wrote:\n> > I just tested \"postgres -C\" on Postgres head, and got odd LOG output\n> > lines:\n> > \n> > \t$ postgres -C shared_memory_size\n> > \t143\n> > -->\t2022-05-02 13:08:06.445 EDT [1582048] LOG: database system is shut down\n> > \n> > \t$ postgres -C \"wal_segment_size\"\n> > \t16777216\n> > -->\t2022-05-02 13:13:30.499 EDT [1584650] LOG: database system is shut down\n> > \n> > Are those last lines expected?\n> \n> An attempt to fix this is being tracked here:\n> \n> \thttps://commitfest.postgresql.org/38/3596\n\nOkay, good to know, thanks. Somehow I missed seeing this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Mon, 2 May 2022 13:25:53 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Odd LOG output from \"postgres -C\""
}
] |
[
{
"msg_contents": "Hi,\n\nI'm exploring the Dynamic Tracing [1] facility that postgres provides and\nplanning to set it up in a reliable way on postgres running on Ubuntu. It\nlooks like SystemTap is available on Ubuntu whereas DTrace isn't. I have no\nexperience in using any of these tools. I would like to hear from hackers\nwho have knowledge in this area or set it up previously.\n\nIn general, I have the following questions:\n\n1) Can the Dynamic Tracing reliably be used in production servers without\nimpacting the performance? In other words, will the Dynamic Tracing incur\nextra costs(CPU or IO) that may impact the postgres performance eventually?\n2) Does it fare better in terms of postgres observability? If yes, is it a\nbetter choice than the existing way that postgres emits server logs and\nprovides stats via views, pg_stat_statements etc.?\n\nMy motivation is to see if Dynamic Tracing is a better way to get the\nimportant information out of a running postgres server.\n\nPlease provide your thoughts here?\n\n[1] https://www.postgresql.org/docs/devel/dynamic-trace.html\n\nRegards,\nBharath Rupireddy.\n\nHi,\n\nI'm exploring the Dynamic Tracing [1] facility that postgres provides and planning to set it up in a reliable way on postgres running on Ubuntu. It looks like SystemTap is available on Ubuntu whereas DTrace isn't. I have no experience in using any of these tools. I would like to hear from hackers who have knowledge in this area or set it up previously.\n\nIn general, I have the following questions:\n\n1) Can the Dynamic Tracing reliably be used in production servers without impacting the performance? In other words, will the Dynamic Tracing incur extra costs(CPU or IO) that may impact the postgres performance eventually?\n2) Does it fare better in terms of postgres observability? 
If yes, is it a better choice than the existing way that postgres emits server logs and provides stats via views, pg_stat_statements etc.?\n\nMy motivation is to see if Dynamic Tracing is a better way to get the important information out of a running postgres server.\n\nPlease provide your thoughts here?\n\n[1] https://www.postgresql.org/docs/devel/dynamic-trace.html\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Mon, 2 May 2022 23:25:12 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Is Dynamic Tracing in Postgres running on Ubuntu a good choice?"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nThe release date for PostgreSQL 15 Beta 1 is scheduled for May 19, 2022.\r\n\r\nPlease ensure you have committed any work for Beta 1 by May 15, 2022 AoE[1].\r\n\r\nThank you for your efforts with resolving open items[2] as we work to \r\nstabilize PostgreSQL 15 for GA!\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1]https://en.wikipedia.org/wiki/Anywhere_on_Earth\r\n[2] \r\nhttps://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items#Important_Dates",
"msg_date": "Mon, 2 May 2022 20:56:56 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 15 Beta 1 release date"
}
] |
[
{
"msg_contents": "The reason that mylodon has been failing in v10 and v11 for awhile\nis that \"-Werror=c99-extensions\" breaks its test for <stdbool.h>:\n\nconfigure:12708: checking for stdbool.h that conforms to C99\nconfigure:12775: ccache clang-13 -c -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -g -O1 -ggdb -g3 -fno-omit-frame-pointer -Wall -Wextra -Wno-unused-parameter -Wno-sign-compare -Wno-missing-field-initializers -Wno-array-bounds -std=c89 -Wc99-extensions -Werror=c99-extensions -D_GNU_SOURCE -I/usr/include/libxml2 conftest.c >&5\nconftest.c:83:25: error: '_Bool' is a C99 extension [-Werror,-Wc99-extensions]\n struct s { _Bool s: 1; _Bool t; } s;\n ^\n\nwhich causes us to not use stdbool.h, which might be all right if you\nweren't also specifying --with-icu.\n\nWhat's not quite clear to me is what changed on mylodon to make it\nstart failing recently. Maybe you updated ICU to a version that\ninsists on importing <stdbool.h> in its headers?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 May 2022 23:18:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "mylodon's failures in the back branches"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-02 23:18:20 -0400, Tom Lane wrote:\n> The reason that mylodon has been failing in v10 and v11 for awhile\n> is that \"-Werror=c99-extensions\" breaks its test for <stdbool.h>:\n\nWas planning to send an email once I looked into it in a bit more detail...\n\n\n> configure:12708: checking for stdbool.h that conforms to C99\n> configure:12775: ccache clang-13 -c -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -g -O1 -ggdb -g3 -fno-omit-frame-pointer -Wall -Wextra -Wno-unused-parameter -Wno-sign-compare -Wno-missing-field-initializers -Wno-array-bounds -std=c89 -Wc99-extensions -Werror=c99-extensions -D_GNU_SOURCE -I/usr/include/libxml2 conftest.c >&5\n> conftest.c:83:25: error: '_Bool' is a C99 extension [-Werror,-Wc99-extensions]\n> struct s { _Bool s: 1; _Bool t; } s;\n> ^\n> \n> which causes us to not use stdbool.h, which might be all right if you\n> weren't also specifying --with-icu.\n\nHow did you conclude that ICU is the problem? I didn't immediately find\nanything in the buildfarm output indicating that's where the stdbool include\nis coming from. Don't get me wrong, it's a plausible guess, just curious.\n\n\n> What's not quite clear to me is what changed on mylodon to make it\n> start failing recently. Maybe you updated ICU to a version that\n> insists on importing <stdbool.h> in its headers?\n\nThe machine is updated automatically. Looking at the package manager's log, it\nindeed looks like ICU was updated around that time...\n\n2022-04-24 06:52:10 install libicu71:amd64 <none> 71.1-2\n\n\nSeems easiest to just change the configuration so that ICU isn't enabled for\n10, 11? It's pretty reasonable to rely on it these days...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 2 May 2022 21:11:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: mylodon's failures in the back branches"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-05-02 23:18:20 -0400, Tom Lane wrote:\n>> which causes us to not use stdbool.h, which might be all right if you\n>> weren't also specifying --with-icu.\n\n> How did you conclude that ICU is the problem? I didn't immediately find\n> anything in the buildfarm output indicating that's where the stdbool include\n> is coming from. Don't get me wrong, it's a plausible guess, just curious.\n\nA bit of a leap I agree, but IIRC we have discovered in the past that\nrecent ICU headers require C99 bool.\n\n> Seems easiest to just change the configuration so that ICU isn't enabled for\n> 10, 11? It's pretty reasonable to rely on it these days...\n\nYeah, that seemed like the most plausible answer to me too. The point\nof the animal is to check C90 compatibility, not ICU compatibility.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 May 2022 00:24:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: mylodon's failures in the back branches"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-03 00:24:09 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-05-02 23:18:20 -0400, Tom Lane wrote:\n> > Seems easiest to just change the configuration so that ICU isn't enabled for\n> > 10, 11? It's pretty reasonable to rely on it these days...\n> \n> Yeah, that seemed like the most plausible answer to me too. The point\n> of the animal is to check C90 compatibility, not ICU compatibility.\n\nDid that now - didn't suffice. Disabling libxml seems to do the trick though -\nmight still be ICU, just indirectly pulled in...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 2 May 2022 21:42:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: mylodon's failures in the back branches"
}
] |
[
{
"msg_contents": "Hi,\nOn Mon, May 02, 2022 at 12:45:28PM +0000, Godfrin, Philippe E wrote:\n>> Greetings,\n>>\n>> I want to limit the query text that gets captured in pg_stat_statements. We\n>> have sql statements with thousands of values clauses (upwards of 10,000) that\n>> run at a 1 second interval. When just a handful are running plus 2 or 3 loads\n>> using the same technique (10,000 entry values clauses) querying the\n>> pg_stat_statements table gets bogged down (see below). With the\n>> pg_stat_statements.max is set to 1000 statements just querying the table\n>> stats table seems to impact the running statements! I have temporarily staved\n>> off the issue by reducing the max to 250 statements, and I have made\n>> recommendations to the development team to cut down the number of values\n>> clauses. However, it seems to me that the ability to truncate the captured\n>> query would be a useful feature.\n\n>The store queries are normalized so the values themselves won't be stored, only\n>a \"?\" per value. And as long as all the queries have the same number of values\n>there should be a single entry stored for the same role and database, so all in\n>all it should limit the size of the stored query texts.\n>On the other hand, with such a low pg_stat_statements.max, you may have a lot\n>of entry evictions, which tends to bloat the external query file\n>($PGDATA/pg_stat_tmp/pgss_query_texts.stat). Did you check how big it is and\n>if yes how fast it grows? I've once seen the file being more than 1GB without\n>any reason why, which was obviously slowing everything down. A simple call to\n>pg_stat_statements_reset() fixed the problem, at least as far as I know as I\n>never had access to the server and never had any news after that.\n\nI wasn't exactly clear about the queries. 
The values clauses themselves are not long -\nWe are using repeated values clauses:\n\nINSERT INTO timeseries.dvc_104 (tag_id, event_ts, bool_val, float_val, int_val, string_val, last_updt_id)\nVALUES ($1,$2,$3,$4,$5,$6,$7),($8,$9,$10,$11,$12,$13,$14),($15,$16,$17,$18,$19,$20,$21),\n($22,$23,$24,$25,$26,$27,$28),($29,$30,$31,$32,$33,$34,$35),($36,$37,$38,$39,$40,$41,$42),\n($43,$44,$45,$46,$47,$48,$49),($50,$51,$52,$53,$54,$55,$56),($57,$58,$59,$60,$61,$62,$63),\n($64,$65,$66,$67,$68,$69,$70),($71,$72,$73,$74,$75,$76,$77),($78,$79,$80,$81,$82,$83,$84),\n($85,$86,$87,$88,$89,$90,$91),($92,$93,$94,$95,$96,$97,$98)\n\nThis one's not long, but some 'load statements' have 10,000 values clauses, others add up to 10,000 more\nin an ON CONFLICT clause. I've checked the external Query file and it's currently not large\nat all. But I will keep an eye on that. When I had The settings at 1000 statements\nthe file was indeed over 1GB. For the record, development is reducing those statement\nlengths.\n\n>> I've peeked at the source code and I don't see the track_activity_query_size\n>> used (pg_stat_activity.query) which would be one mechanism. I don't really\n>> know what would be the right way to do this or even if it is a good idea,\n>> i.e. if limiting that would have a larger impact to the statistics\n>> ecosystem...\n\n>pg_stat_statements used to truncate the query text to\n>track_activity_query_size, but that limitation was removed when the query texts\n>were moved to the external query file. It's quite convenient to have the full\n>normalized query text available, especially with the application is using some\n>ORM, as they tend to make SQL even more verbose than it already is. Having a\n>very high number of values (and I'm assuming queries with different number of\n>values) seems like a corner case, but truncating the query\n>text would only fix part of the problem. 
It will lead to a very high number of\n>different queryid, which is also problematic as frequent entry evictions also\n>tends to have an insanely high overhead, and you can't and an infinite number\n>of entries stored.\n\n>> Thoughts or suggestions?\n\n>You didn't explain how you're using pg_stat_statements. Do you really need to\n>query pg_stat_statements with the query text each time? If you only need to\n>get some performance metrics you could adapt your system to only retrieve the\n>query text for the wanted queryid(s) once you find some problematic pattern,\n>and/or cache the query texts a table or some other place. But with a very low\n>pg_stat_statements.max (especially if you can have a varying number of values\n>from 1 to 10k) it might be hard to do.\n\nThe first observation is how long a simple query took:\n\n# select count(*) from pg_stat_statements;\ncount\n-------\n 971\nTime: 6457.985 ms (00:06.458)\n\nMORE than six seconds for a mere 971 rows! Furthermore, when removing the long queries:\n# select count(*) from pg_stat_statements(showtext:=false);\ncount\n-------\n 970\nTime: 10.644 ms\n\nOnly 10ms...\n\nSecond, we have Datadog installed. Datadoq queries the pg_stat_statements table\nevery 10 seconds. The real pain point is querying the pg_stat_statements seems\nto have an impact on running queries, specifically inserts in my case.\n\nI believe this is an actual impact that needs a solution.\n\n\nMy apologies, for some reason these mails are not making it to me.\n\nPhil Godfrin | Database Administration\nNOV\nNOV US | Engineering Data\n9720 Beechnut St | Houston, Texas 77036\nM 281.825.2311\nE Philippe.Godfrin@nov.com<mailto:Philippe.Godfrin@nov.com>",
"msg_date": "Tue, 3 May 2022 13:30:32 +0000",
"msg_from": "\"Godfrin, Philippe E\" <Philippe.Godfrin@nov.com>",
"msg_from_op": true,
"msg_subject": "pg_stat_statements"
},
{
"msg_contents": "Hi,\n\nOn Tue, May 03, 2022 at 01:30:32PM +0000, Godfrin, Philippe E wrote:\n>\n> I wasn't exactly clear about the queries. The values clauses themselves are not long -\n> We are using repeated values clauses:\n>\n> INSERT INTO timeseries.dvc_104 (tag_id, event_ts, bool_val, float_val, int_val, string_val, last_updt_id)\n> VALUES ($1,$2,$3,$4,$5,$6,$7),($8,$9,$10,$11,$12,$13,$14),($15,$16,$17,$18,$19,$20,$21),\n> ($22,$23,$24,$25,$26,$27,$28),($29,$30,$31,$32,$33,$34,$35),($36,$37,$38,$39,$40,$41,$42),\n> ($43,$44,$45,$46,$47,$48,$49),($50,$51,$52,$53,$54,$55,$56),($57,$58,$59,$60,$61,$62,$63),\n> ($64,$65,$66,$67,$68,$69,$70),($71,$72,$73,$74,$75,$76,$77),($78,$79,$80,$81,$82,$83,$84),\n> ($85,$86,$87,$88,$89,$90,$91),($92,$93,$94,$95,$96,$97,$98)\n>\n> This one's not long, but some 'load statements' have 10,000 values clauses, others add up to 10,000 more\n> in an ON CONFLICT clause. I've checked the external Query file and it's currently not large\n> at all. But I will keep an eye on that. When I had The settings at 1000 statements\n> the file was indeed over 1GB. For the record, development is reducing those statement\n> lengths.\n> [...]\n> The first observation is how long a simple query took:\n>\n> # select count(*) from pg_stat_statements;\n> count\n> -------\n> 971\n> Time: 6457.985 ms (00:06.458)\n>\n> MORE than six seconds for a mere 971 rows! Furthermore, when removing the long queries:\n> # select count(*) from pg_stat_statements(showtext:=false);\n> count\n> -------\n> 970\n> Time: 10.644 ms\n>\n> Only 10ms...\n\nWell, 10ms is still quite slow.\n\nYou're not removing the long queries texts, you're removing all queries texts.\nI don't know if the overhead comes from processing at least some long\nstatements or is mostly due to having to retrieve the query file. Do you get\nthe same times if you run the query twice? 
Maybe you're short on RAM and have\nsomewhat slow disks, and the text file has to be read from disk rather than OS\ncache?\n\nAlso I don't know what you mean by \"not large at all\", so it's hard to compare\nor try to reproduce. FWIW on some instance I have around, I have a 140kB file\nand querying pg_stat_statements *with* the query text file only takes a few ms.\n\nYou could try to query that view with some unprivileged user. This way you\nwill still retrieve the query text file but will only emit \"<insufficient\nprivilege>\" rather than processing the query texts, this may narrow down the\nproblem. Or better, if you could run perf [1] to see where the overhead really\nis.\n\n> Second, we have Datadog installed. Datadoq queries the pg_stat_statements table\n> every 10 seconds. The real pain point is querying the pg_stat_statements seems\n> to have an impact on running queries, specifically inserts in my case.\n\nI think this is a side effect of having a very low pg_stat_statements.max, if\nof course you have more queries than the current value.\n\nIf the extra time is due to loading the query text file and if it's loaded\nafter acquiring the lightweight lock, then you will prevent evicting or\ncreating new entries for a long time, which means that the query execution for\nthose queries will be blocked until the query on pg_stat_statements ends.\n\nThere are unfortunately *a lot* of unknowns here, so I can't do anything apart\nfrom guessing.\n\n> I believe this is an actual impact that needs a solution.\n\nFirst, if you have an OLTP workload you have to make sure that\npg_stat_statements.max is high enough so that you don't have to evict entries,\nor at least not often. Then, I think that querying pg_stat_statements every\n10s is *really* aggressive, that's always going to have some noticeable\noverhead. For the rest, we need more information to understand where the\nslowdown is coming from.\n\n[1] https://wiki.postgresql.org/wiki/Profiling_with_perf\n\n\n",
"msg_date": "Thu, 5 May 2022 11:24:19 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements"
},
{
"msg_contents": "From: Julien Rouhaud <rjuju123@gmail.com>\nSent: Wednesday, May 4, 2022 10:24 PM\nTo: Godfrin, Philippe E <Philippe.Godfrin@nov.com>\nCc: pgsql-hackers@lists.postgresql.org\nSubject: [EXTERNAL] Re: pg_stat_statements\n\nHi,\n\nOn Tue, May 03, 2022 at 01:30:32PM +0000, Godfrin, Philippe E wrote:\n>\n> I wasn't exactly clear about the queries. The values clauses themselves are not long -\n> We are using repeated values clauses:\n>\n> INSERT INTO timeseries.dvc_104 (tag_id, event_ts, bool_val, float_val, int_val, string_val, last_updt_id)\n> VALUES ($1,$2,$3,$4,$5,$6,$7),($8,$9,$10,$11,$12,$13,$14),($15,$16,$17,$18,$19,$20,$21),\n> ($22,$23,$24,$25,$26,$27,$28),($29,$30,$31,$32,$33,$34,$35),($36,$37,$38,$39,$40,$41,$42),\n> ($43,$44,$45,$46,$47,$48,$49),($50,$51,$52,$53,$54,$55,$56),($57,$58,$59,$60,$61,$62,$63),\n> ($64,$65,$66,$67,$68,$69,$70),($71,$72,$73,$74,$75,$76,$77),($78,$79,$80,$81,$82,$83,$84),\n> ($85,$86,$87,$88,$89,$90,$91),($92,$93,$94,$95,$96,$97,$98)\n>\n> This one's not long, but some 'load statements' have 10,000 values clauses, others add up to 10,000 more\n> in an ON CONFLICT clause. I've checked the external Query file and it's currently not large\n> at all. But I will keep an eye on that. When I had The settings at 1000 statements\n> the file was indeed over 1GB. For the record, development is reducing those statement\n> lengths.\n> [...]\n> The first observation is how long a simple query took:\n>\n> # select count(*) from pg_stat_statements;\n> count\n> -------\n> 971\n> Time: 6457.985 ms (00:06.458)\n>\n> MORE than six seconds for a mere 971 rows! 
Furthermore, when removing the long queries:\n> # select count(*) from pg_stat_statements(showtext:=false);\n> count\n> -------\n> 970\n> Time: 10.644 ms\n>\n> Only 10ms...\n\nWell, 10ms is still quite slow.\n\nYou're not removing the long queries texts, you're removing all queries texts.\nI don't know if the overhead comes from processing at least some long\nstatements or is mostly due to having to retrieve the query file. Do you get\nthe same times if you run the query twice? Maybe you're short on RAM and have\nsomewhat slow disks, and the text file has to be read from disk rather than OS\ncache?\n\nAlso I don't know what you mean by \"not large at all\", so it's hard to compare\nor try to reproduce. FWIW on some instance I have around, I have a 140kB file\nand querying pg_stat_statements *with* the query text file only takes a few ms.\n\nYou could try to query that view with some unprivileged user. This way you\nwill still retrieve the query text file but will only emit \"<insufficient\nprivilege>\" rather than processing the query texts, this may narrow down the\nproblem. Or better, if you could run perf [1] to see where the overhead really\nis.\n\n> Second, we have Datadog installed. Datadoq queries the pg_stat_statements table\n> every 10 seconds. 
The real pain point is querying the pg_stat_statements seems\n> to have an impact on running queries, specifically inserts in my case.\n\nI think this is a side effect of having a very low pg_stat_statements.max, if\nof course you have more queries than the current value.\n\nIf the extra time is due to loading the query text file and if it's loaded\nafter acquiring the lightweight lock, then you will prevent evicting or\ncreating new entries for a long time, which means that the query execution for\nthose queries will be blocked until the query on pg_stat_statements ends.\n\nThere are unfortunately *a lot* of unknowns here, so I can't do anything apart\nfrom guessing.\n\n> I believe this is an actual impact that needs a solution.\n\nFirst, if you have an OLTP workload you have to make sure that\npg_stat_statements.max is high enough so that you don't have to evict entries,\nor at least not often. Then, I think that querying pg_stat_statements every\n10s is *really* aggressive, that's always going to have some noticeable\noverhead. For the rest, we need more information to understand where the\nslowdown is coming from.\n\n[1] https://wiki.postgresql.org/wiki/Profiling_with_perf<https://wiki.postgresql.org/wiki/Profiling_with_perf>\n\nHello Julien,\nThanks very much for looking closely at this. To answer your questions:\nI misspoke the query file at the time of the queries above was around 1GB.\n\nI don't believe I am short on RAM, although I will re-examine that aspect. I'm running 32GB\nwith a 22GB shared pool, which seems OK to me. The disk are SSD (AWS EBS) and\nthe disk volumes are the same as the data volumes. If a regular file on disk at 1GB\ntook 6 seconds to read, the rest of the system would be in serious degradation.\n\nThe impact on running queries was observed when the max was set at 1000. I don't\nquite understand what you keep saying about evictions and other things relative to the\npgss file. 
Can you refer me to some detailed documentation or a good article which\ndescribes the processes you're alluding to?\n\nInsofar as querying the stats table every 10 seconds, I think that's not aggressive enough as\nI want to have statement monitoring as close to realtime as possible.\n\nYou are indeed correct insofar as unknowns - the biggest one for me is I don't know\nenough about how the stats extension works - as I asked before, more detail on the\ninternals of the extension would be useful. Is my only choice in that regard to browse\nthe source code?\n\nShort of running the profile that should deal with the unknowns, any other ideas?\nThanks,\nphilippe",
"msg_date": "Thu, 5 May 2022 12:21:41 +0000",
"msg_from": "\"Godfrin, Philippe E\" <Philippe.Godfrin@nov.com>",
"msg_from_op": true,
"msg_subject": "RE: [EXTERNAL] Re: pg_stat_statements"
},
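The 1GB query-text file discussed in this message can be inspected from SQL. The file path below follows the pg_stat_statements source (its text file lives under pg_stat_tmp); treat the exact path as an assumption to verify on your version, and note that pg_stat_file() requires appropriate privileges:

```sql
-- Size and last-modified time of the external file where
-- pg_stat_statements keeps full query texts; very long statements
-- make this file (and every read of it) large.
SELECT size, modification
FROM pg_stat_file('pg_stat_tmp/pgss_query_texts.stat');
```

If this file is large, every read of pg_stat_statements with query texts has to load it, which matches the slow count(*) observed in the thread.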
{
"msg_contents": "On Thu, May 05, 2022 at 12:21:41PM +0000, Godfrin, Philippe E wrote:\n>\n> Thanks very much for looking closely at this. To answer your questions:\n> I misspoke the query file at the time of the queries above was around 1GB.\n\nAh, that's clearly big enough to lead to some slowdown.\n\n> I don't believe I am short on RAM, although I will re-examine that aspect. I'm running 32GB\n> with a 22GB shared pool, which seems OK to me. The disk are SSD (AWS EBS) and\n> the disk volumes are the same as the data volumes. If a regular file on disk at 1GB\n> took 6 seconds to read, the rest of the system would be in serious degradation.\n\nI don't know what you mean by shared pool, and you also didn't give any kind of\ninformation about your postgres usage, workload, number of connections or\nanything so it's impossible to know. Note that if your system is quite busy\nyou could definitely have some IO saturation, especially if that file is\ndiscarded from OS cache, so I wouldn't blindly rule that possibility out. I\nsuggested multiple ways to try to figure out if that's the problem though, so\nhaving such answer would be better than guessing if IO or the \"AWS EBS\" (which\nI also don't know anything about) is a problem or not.\n\n> The impact on running queries was observed when the max was set at 1000. I don't\n> quite understand what you keep saying about evictions and other things relative to the\n> pgss file. Can you refer me to some detailed documentation or a good article which\n> describes the processes you're alluding to?\n\nI don't think there's any thorough documentation or article explaining how\npg_stat_statements works internally. But you have a maximum number of\ndifferent (identified by userid, dbid, queryid) entries stored, so if your\nworkload leads to more entries than the max then pg_stat_statements will have\nto evict the least used ones to store the new one, and that process is costly\nand done using some exclusive lwlock. 
You didn't say which version of postgres\nyou're using, but one thing you can do to see if you probably have eviction is\nto check the number of rows in pg_stat_statements view. If the number changes\nvery often and is always close to pg_stat_statements.max then you probably have\nfrequent evictions.\n\n> Insofar as querying the stats table every 10 seconds, I think that's not aggressive enough as\n> I want to have statement monitoring as close to realtime as possible.\n\nWhat problem are you trying to solve? Why aren't you using pg_stat_activity if\nyou want realtime overview of what is happening?\n\n> You are indeed correct insofar as unknowns - the biggest one for me is I don't know\n> enough about how the stats extension works - as I asked before, more detail on the\n> internals of the extension would be useful. Is my only choice in that regard to browse\n> the source code?\n\nI think so.\n\n> Short of running the profile that should deal with the unknowns, any other ideas?\n\nDo you mean using perf, per the \"profiling with perf\" wiki article? Other than\nthat I suggested other ways to try to narrow down the problem, what was the\noutcome for those?\n\n\n",
"msg_date": "Fri, 6 May 2022 10:08:31 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: pg_stat_statements"
}
] |
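Julien's suggested eviction check can be sketched in SQL. This is illustrative only: `showtext := false` avoids reading the external query-text file, and the `pg_stat_statements_info` view with its `dealloc` counter exists only on PostgreSQL 14 and later:

```sql
-- How close is the hashtable to pg_stat_statements.max?
-- showtext := false skips loading the external query-text file.
SELECT count(*) AS entries,
       current_setting('pg_stat_statements.max') AS max_entries
FROM pg_stat_statements(showtext := false);

-- PostgreSQL 14+: dealloc counts how many times the least-used entries
-- had to be evicted; a steadily growing value means frequent, costly
-- evictions under an exclusive lock.
SELECT dealloc, stats_reset FROM pg_stat_statements_info;
```

If dealloc keeps climbing, raising pg_stat_statements.max (which requires a server restart) is the first remedy discussed in the thread.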
[
{
"msg_contents": "Hi\n\nI've copied some statements from the .pdf called:\n\"TECHNICAL REPORT ISO/IEC TR 19075-6 First edition 2017-03\nPart SQL Notation support 6: (JSON) for JavaScript Object\"\n(not available anymore although there should be a similar replacement file)\n\nIn that pdf I found the data and statement (called 'table 15' in the \n.pdf) as in the attached bash file. But the result is different: as \nimplemented by 15devel, the column rowseq is always 1. It seems to me \nthat that is wrong; it should count 1, 2, 3 as indeed the example-result \ncolumn in that pdf shows.\n\nWhat do you think?\n\nErik Rijkers",
"msg_date": "Tue, 3 May 2022 17:19:01 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "SQL/JSON: FOR ORDINALITY bug"
},
{
"msg_contents": "\nOn 2022-05-03 Tu 11:19, Erik Rijkers wrote:\n> Hi\n>\n> I've copied some statements from the .pdf called:\n> \"TECHNICAL REPORT ISO/IEC TR 19075-6 First edition 2017-03\n> Part SQL Notation support 6: (JSON) for JavaScript Object\"\n> (not available anymore although there should be a similar replacement\n> file)\n>\n> In that pdf I found the data and statement (called 'table 15' in the\n> .pdf) as in the attached bash file. But the result is different: as\n> implemented by 15devel, the column rowseq is always 1. It seems to me\n> that that is wrong; it should count 1, 2, 3 as indeed the\n> example-result column in that pdf shows.\n>\n> What do you think?\n>\n>\n\nPossibly. \n\n\nHere's what the standard says in section 7.11 in I think the relevant\nbit of mindbogglingly impenetrable prose:\n\n\nGeneral Rules\n1)\nIf a <table primary> simply contains a <JSON table primitive> JTP, then:\na) If the value of the <JSON context item> simply contained in the <JSON\nAPI common syntax> is the null value, then the result of <JSON table\nprimitive> is an empty table and no further General Rules of this\nSubclause are applied.\nb) Let JACS be the <JSON API common syntax> simply contained in JTP.\nc) Let JTEB be the <JSON table error behavior> simply contained in JTP.\nd) The General Rules of Subclause 10.14, “<JSON API common syntax>”, are\napplied with JACS as JSON API COMMON SYNTAX; let ROWST be the STATUS and\nlet ROWSEQ be the SQL/JSON SEQUENCE returned from the application of\nthose General Rules.\n460\nFoundation (SQL/Foundation)\ne) Case:\ni) If ROWST is an exception condition, then\nCase:\n1) If JTEB is ERROR, then the exception condition ROWST is raised.\n2) Otherwise, the result of JTP is an empty table.\nii) Otherwise, let NI be the number of SQL/JSON items in ROWSEQ, let Ij,\n1 (one) ≤ j ≤ NI, be those SQL/JSON items in order, let NCD be the\nnumber of <JSON table primitive column definition>s contained in JTP,\nand let JTCDi, 1 (one) ≤ i ≤ NCD, 
be those <JSON table primitive column\ndefinition>s.\nFor all j, 1 (one) ≤ j ≤ NI, and for all i, 1 (one) ≤ i ≤ NCD, the value\nof the i-th column of the j-th row in the result of JTP is determined as\nfollows:\nCase:\n1) If JTCDi is a <JSON table ordinality column definition>, then the\nvalue of the i-th column\nof the j-th row is j.\n\n\nMaybe some language lawyer can turn that into comprehensible English.\n\nThis should probably be an open item for release 15, but I don't really\nknow what the precise behaviour should be, so it's hard to modify it.\n\nIf we can't get it right maybe we should disable the \"WITH ORDINALITY\"\nclause, although that would be a pity.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 3 May 2022 20:27:02 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON: FOR ORDINALITY bug"
},
{
"msg_contents": "On Tue, May 3, 2022 at 5:27 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 2022-05-03 Tu 11:19, Erik Rijkers wrote:\n> > Hi\n> >\n> > I've copied some statements from the .pdf called:\n> > \"TECHNICAL REPORT ISO/IEC TR 19075-6 First edition 2017-03\n> > Part SQL Notation support 6: (JSON) for JavaScript Object\"\n> > (not available anymore although there should be a similar replacement\n> > file)\n> >\n> > In that pdf I found the data and statement (called 'table 15' in the\n> > .pdf) as in the attached bash file. But the result is different: as\n> > implemented by 15devel, the column rowseq is always 1. It seems to me\n> > that that is wrong; it should count 1, 2, 3 as indeed the\n> > example-result column in that pdf shows.\n> >\n> > What do you think?\n> >\n> >\n>\n> Possibly.\n>\n>\nI don't see how rowseq can be anything but 1. Each invocation of\njson_table is given a single jsonb record via the lateral reference to\nbookclub.jcol. It produces one result, having a rowseq 1. It does this\nfor all three outer lateral reference tuples and thus produces three output\nrows each with one match numbered rowseq 1.\n\nDavid J.\n\nOn Tue, May 3, 2022 at 5:27 PM Andrew Dunstan <andrew@dunslane.net> wrote:\nOn 2022-05-03 Tu 11:19, Erik Rijkers wrote:\n> Hi\n>\n> I've copied some statements from the .pdf called:\n> \"TECHNICAL REPORT ISO/IEC TR 19075-6 First edition 2017-03\n> Part SQL Notation support 6: (JSON) for JavaScript Object\"\n> (not available anymore although there should be a similar replacement\n> file)\n>\n> In that pdf I found the data and statement (called 'table 15' in the\n> .pdf) as in the attached bash file. But the result is different: as\n> implemented by 15devel, the column rowseq is always 1. It seems to me\n> that that is wrong; it should count 1, 2, 3 as indeed the\n> example-result column in that pdf shows.\n>\n> What do you think?\n>\n>\n\nPossibly. \nI don't see how rowseq can be anything but 1. 
Each invocation of json_table is given a single jsonb record via the lateral reference to bookclub.jcol. It produces one result, having a rowseq 1. It does this for all three outer lateral reference tuples and thus produces three output rows each with one match numbered rowseq 1.David J.",
"msg_date": "Tue, 3 May 2022 17:39:41 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON: FOR ORDINALITY bug"
},
{
"msg_contents": "\nOn 2022-05-03 Tu 20:39, David G. Johnston wrote:\n> On Tue, May 3, 2022 at 5:27 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2022-05-03 Tu 11:19, Erik Rijkers wrote:\n> > Hi\n> >\n> > I've copied some statements from the .pdf called:\n> > \"TECHNICAL REPORT ISO/IEC TR 19075-6 First edition 2017-03\n> > Part SQL Notation support 6: (JSON) for JavaScript Object\"\n> > (not available anymore although there should be a similar\n> replacement\n> > file)\n> >\n> > In that pdf I found the data and statement (called 'table 15' in the\n> > .pdf) as in the attached bash file. But the result is different: as\n> > implemented by 15devel, the column rowseq is always 1. It seems\n> to me\n> > that that is wrong; it should count 1, 2, 3 as indeed the\n> > example-result column in that pdf shows.\n> >\n> > What do you think?\n> >\n> >\n>\n> Possibly. \n>\n>\n> I don't see how rowseq can be anything but 1. Each invocation of\n> json_table is given a single jsonb record via the lateral reference to\n> bookclub.jcol. It produces one result, having a rowseq 1. It does\n> this for all three outer lateral reference tuples and thus produces\n> three output rows each with one match numbered rowseq 1.\n>\n\nI imagine we could overcome that by stashing the sequence counter\nsomewhere it would survive across calls. The question really is what is\nthe right thing to do? I'm also a bit worried about how correct is\nordinal numbering with nested paths, e.g. 
(from the regression tests):\n\n\nselect\n jt.*\nfrom\n jsonb_table_test jtt,\n json_table (\n jtt.js,'strict $[*]' as p\n columns (\n n for ordinality,\n a int path 'lax $.a' default -1 on empty,\n nested path 'strict $.b[*]' as pb columns ( b int path '$' ),\n nested path 'strict $.c[*]' as pc columns ( c int path '$' )\n )\n ) jt;\n n | a | b | c \n---+----+---+----\n 1 | 1 | | \n 2 | 2 | 1 | \n 2 | 2 | 2 | \n 2 | 2 | 3 | \n 2 | 2 | | 10\n 2 | 2 | | \n 2 | 2 | | 20\n 3 | 3 | 1 | \n 3 | 3 | 2 | \n 4 | -1 | 1 | \n 4 | -1 | 2 | \n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 4 May 2022 07:55:16 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON: FOR ORDINALITY bug"
},
{
"msg_contents": "Op 04-05-2022 om 13:55 schreef Andrew Dunstan:\n> \n> On 2022-05-03 Tu 20:39, David G. Johnston wrote:\n>> On Tue, May 3, 2022 at 5:27 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>>\n>> On 2022-05-03 Tu 11:19, Erik Rijkers wrote:\n>> > Hi\n>> >\n>> > I've copied some statements from the .pdf called:\n>> > \"TECHNICAL REPORT ISO/IEC TR 19075-6 First edition 2017-03\n>> > Part SQL Notation support 6: (JSON) for JavaScript Object\"\n>> > (not available anymore although there should be a similar\n>> replacement\n>> > file)\n>> >\n>> > In that pdf I found the data and statement (called 'table 15' in the\n>> > .pdf) as in the attached bash file. But the result is different: as\n>> > implemented by 15devel, the column rowseq is always 1. It seems\n>> to me\n>> > that that is wrong; it should count 1, 2, 3 as indeed the\n>> > example-result column in that pdf shows.\n>> >\n>> > What do you think?\n>> >\n>> >\n>>\n>> Possibly.\n>>\n>>\n>> I don't see how rowseq can be anything but 1. Each invocation of\n\n\nAfter some further experimentation, I now think you must be right, David.\n\nAlso, looking at the DB2 docs:\n https://www.ibm.com/docs/en/i/7.2?topic=data-using-json-table\n (see especially under 'Handling nested information')\n\nThere, I gathered some example data + statements where one is the case \nat hand. I also made them runnable under postgres (attached).\n\nI thought that was an instructive example, with those 'outer_ordinality' \nand 'inner_ordinality' columns.\n\nErik\n\n\n>> json_table is given a single jsonb record via the lateral reference to\n>> bookclub.jcol. It produces one result, having a rowseq 1. It does\n>> this for all three outer lateral reference tuples and thus produces\n>> three output rows each with one match numbered rowseq 1.\n>>\n> \n> I imagine we could overcome that by stashing the sequence counter\n> somewhere it would survive across calls. The question really is what is\n> the right thing to do? 
I'm also a bit worried about how correct is\n> ordinal numbering with nested paths, e.g. (from the regression tests):\n> \n> \n> select\n> jt.*\n> from\n> jsonb_table_test jtt,\n> json_table (\n> jtt.js,'strict $[*]' as p\n> columns (\n> n for ordinality,\n> a int path 'lax $.a' default -1 on empty,\n> nested path 'strict $.b[*]' as pb columns ( b int path '$' ),\n> nested path 'strict $.c[*]' as pc columns ( c int path '$' )\n> )\n> ) jt;\n> n | a | b | c\n> ---+----+---+----\n> 1 | 1 | |\n> 2 | 2 | 1 |\n> 2 | 2 | 2 |\n> 2 | 2 | 3 |\n> 2 | 2 | | 10\n> 2 | 2 | |\n> 2 | 2 | | 20\n> 3 | 3 | 1 |\n> 3 | 3 | 2 |\n> 4 | -1 | 1 |\n> 4 | -1 | 2 |\n> \n> \n> cheers\n> \n> \n> andrew\n> \n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>",
"msg_date": "Wed, 4 May 2022 16:39:45 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON: FOR ORDINALITY bug"
},
{
"msg_contents": "\nOn 2022-05-04 We 10:39, Erik Rijkers wrote:\n> Op 04-05-2022 om 13:55 schreef Andrew Dunstan:\n>>\n>> On 2022-05-03 Tu 20:39, David G. Johnston wrote:\n>>> On Tue, May 3, 2022 at 5:27 PM Andrew Dunstan <andrew@dunslane.net>\n>>> wrote:\n>>>\n>>>\n>>> On 2022-05-03 Tu 11:19, Erik Rijkers wrote:\n>>> > Hi\n>>> >\n>>> > I've copied some statements from the .pdf called:\n>>> > \"TECHNICAL REPORT ISO/IEC TR 19075-6 First edition 2017-03\n>>> > Part SQL Notation support 6: (JSON) for JavaScript Object\"\n>>> > (not available anymore although there should be a similar\n>>> replacement\n>>> > file)\n>>> >\n>>> > In that pdf I found the data and statement (called 'table 15'\n>>> in the\n>>> > .pdf) as in the attached bash file. But the result is\n>>> different: as\n>>> > implemented by 15devel, the column rowseq is always 1. It seems\n>>> to me\n>>> > that that is wrong; it should count 1, 2, 3 as indeed the\n>>> > example-result column in that pdf shows.\n>>> >\n>>> > What do you think?\n>>> >\n>>> >\n>>>\n>>> Possibly.\n>>>\n>>>\n>>> I don't see how rowseq can be anything but 1. Each invocation of\n>\n>\n> After some further experimentation, I now think you must be right, David.\n>\n> Also, looking at the DB2 docs:\n> https://www.ibm.com/docs/en/i/7.2?topic=data-using-json-table\n> (see especially under 'Handling nested information')\n>\n> There, I gathered some example data + statements where one is the case\n> at hand. I also made them runnable under postgres (attached).\n>\n> I thought that was an instructive example, with those\n> 'outer_ordinality' and 'inner_ordinality' columns.\n>\n>\n\nYeah, I just reviewed the latest version of that page (7.5) and the\nexample seems fairly plain that we are doing the right thing, or if not\nwe're in pretty good company, so I guess this is probably a false alarm.\nLooks like ordinality is for the number of the element produced by the\npath expression. 
So a path of 'lax $' should just produce ordinality of\n1 in each case, while a path of 'lax $[*]' will produce increasing\nordinality for each element of the root array.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 4 May 2022 15:12:28 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON: FOR ORDINALITY bug"
},
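Andrew's conclusion can be checked with two minimal queries (hypothetical inline data; JSON_TABLE as in the SQL/JSON patch discussed in this thread): the row-pattern path determines how many items feed FOR ORDINALITY.

```sql
-- 'lax $' yields the whole array as a single item: one row, seq = 1.
SELECT jt.seq
FROM (VALUES ('[10, 20, 30]'::jsonb)) v(js),
     JSON_TABLE(v.js, 'lax $' COLUMNS (seq FOR ORDINALITY)) jt;

-- 'lax $[*]' unnests the array: one row per element, seq = 1, 2, 3.
SELECT jt.seq, jt.val
FROM (VALUES ('[10, 20, 30]'::jsonb)) v(js),
     JSON_TABLE(v.js, 'lax $[*]'
       COLUMNS (seq FOR ORDINALITY, val int PATH '$')) jt;
```

With a lateral reference to an outer table instead of inline data, each invocation still numbers only its own items, which is why the thread's original example always showed rowseq 1.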
{
"msg_contents": "Op 04-05-2022 om 21:12 schreef Andrew Dunstan:\n> \n>>>>\n>>>> I don't see how rowseq can be anything but 1. Each invocation of\n>>\n>>\n>> After some further experimentation, I now think you must be right, David.\n>>\n>> Also, looking at the DB2 docs:\n>> https://www.ibm.com/docs/en/i/7.2?topic=data-using-json-table\n>> (see especially under 'Handling nested information')\n>>\n>> There, I gathered some example data + statements where one is the case\n>> at hand. I also made them runnable under postgres (attached).\n>>\n>> I thought that was an instructive example, with those\n>> 'outer_ordinality' and 'inner_ordinality' columns.\n>>\n>>\n> \n> Yeah, I just reviewed the latest version of that page (7.5) and the\n> example seems fairly plain that we are doing the right thing, or if not\n> we're in pretty good company, so I guess this is probably a false alarm.\n> Looks like ordinality is for the number of the element produced by the\n> path expression. So a path of 'lax $' should just produce ordinality of\n> 1 in each case, while a path of 'lax $[*]' will produce increasing\n> ordinality for each element of the root array.\n\nAgreed.\n\nYou've probably noticed then that on that same page under 'Sibling \nNesting' is a statement that gives a 13-row resultset on DB2 whereas in \n15devel that statement yields just 10 rows. I don't know which is correct.\n\n\nErik\n\n\n> \n> \n> cheers\n> \n> \n> andrew\n> \n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n> \n\n\n",
"msg_date": "Wed, 4 May 2022 22:09:51 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON: FOR ORDINALITY bug"
},
{
"msg_contents": "On Wed, May 4, 2022 at 1:09 PM Erik Rijkers <er@xs4all.nl> wrote:\n\n> Op 04-05-2022 om 21:12 schreef Andrew Dunstan:\n> >\n> >>>>\n> >>>> I don't see how rowseq can be anything but 1. Each invocation of\n> >>\n> >>\n> >> After some further experimentation, I now think you must be right,\n> David.\n> >>\n> >> Also, looking at the DB2 docs:\n> >> https://www.ibm.com/docs/en/i/7.2?topic=data-using-json-table\n> >> (see especially under 'Handling nested information')\n> >>\n> >> There, I gathered some example data + statements where one is the case\n> >> at hand. I also made them runnable under postgres (attached).\n> >>\n> >> I thought that was an instructive example, with those\n> >> 'outer_ordinality' and 'inner_ordinality' columns.\n> >>\n> >>\n> >\n> > Yeah, I just reviewed the latest version of that page (7.5) and the\n> > example seems fairly plain that we are doing the right thing, or if not\n> > we're in pretty good company, so I guess this is probably a false alarm.\n> > Looks like ordinality is for the number of the element produced by the\n> > path expression. So a path of 'lax $' should just produce ordinality of\n> > 1 in each case, while a path of 'lax $[*]' will produce increasing\n> > ordinality for each element of the root array.\n>\n> Agreed.\n>\n> You've probably noticed then that on that same page under 'Sibling\n> Nesting' is a statement that gives a 13-row resultset on DB2 whereas in\n> 15devel that statement yields just 10 rows. 
I don't know which is correct.\n>\n>\nThere should be 12 results (minimum would be 8 - 5 of which are used for\nreal matches, plus 4 new row producing matches).\n\nOur result seems internally inconsistent; conceptually there are two kinds\nof nulls here and we cannot collapse them.\n\nnull-val: we are outputting the record from the nested path but there is no\nactual value to output so we output null-val\nnull-union: we are not outputting the record for the nested path (we are\ndoing a different one) but we need to output something for this column so\nwe output null-union.\n\nSally, null-val, null-union\nSally, null-union, null-val\n\nWe only have one Sally but need both (11)\n\nWe are also missing:\n\nMary, null-union, null-val (12)\n\nThe fact that we agree on John means that we at least agree on UNION\nmeaning we output a pair of rows when there are two nested paths.\n\nI point to relative comparisons for fear of reading the specification\nhere...\n\nDavid J.",
"msg_date": "Wed, 4 May 2022 13:43:00 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON: FOR ORDINALITY bug"
},
{
"msg_contents": "On Wed, May 4, 2022 at 1:43 PM David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Wed, May 4, 2022 at 1:09 PM Erik Rijkers <er@xs4all.nl> wrote:\n>\n>> Op 04-05-2022 om 21:12 schreef Andrew Dunstan:\n>> >\n>> >>>>\n>> >>>> I don't see how rowseq can be anything but 1. Each invocation of\n>> >>\n>> >>\n>> >> After some further experimentation, I now think you must be right,\n>> David.\n>> >>\n>> >> Also, looking at the DB2 docs:\n>> >> https://www.ibm.com/docs/en/i/7.2?topic=data-using-json-table\n>> >> (see especially under 'Handling nested information')\n>>\n>> You've probably noticed then that on that same page under 'Sibling\n>> Nesting' is a statement that gives a 13-row resultset on DB2 whereas in\n>> 15devel that statement yields just 10 rows. I don't know which is\n>> correct.\n>>\n>>\n> There should be 12 results (minimum would be 8 - 5 of which are used for\n> real matches, plus 4 new row producing matches).\n>\n> Our result seems internally inconsistent; conceptually there are two kinds\n> of nulls here and we cannot collapse them.\n>\n\n\n> null-val: we are outputting the record from the nested path but there is\n> no actual value to output so we output null-val\n> null-union: we are not outputting the record for the nested path (we are\n> doing a different one) but we need to output something for this column so\n> we output null-union.\n>\n>\nThinking this over - I think the difference is we implemented a FULL OUTER\nJOIN to combine the siblings - including the behavior of that construct and\nthe absence of rows. DB2 took the word \"UNION\" for the plan modifier\nliterally and unioned (actually union all) the two subpaths together using\nthe null concepts above (though somehow ensuring that at least one row was\nproduced from each subpath...).\n\nThus we are indeed back to seeing whether the standard defines sibling\ncombining as union or join, or some other special construct. 
I'm now\nleaning toward what we've done as at least being the more sane option.\n\nEven if our outer join process is correct the existing wording is odd.\n\n\"Use FULL OUTER JOIN ON FALSE, so that both parent and child rows are\nincluded into the output, with NULL values inserted into both child and\nparent columns for all missing values.\"\n\nI don't think it helps to mention parent here. This aspect of plan doesn't\nconcern itself with the final output, only the output of the subplan which\nis then combined with the parent using a join. I would probably want to\nphrase the default more like:\n\n\"This is the default option for joining the combined child rows to the\nparent.\"\n\nDavid J.",
"msg_date": "Wed, 4 May 2022 14:52:38 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON: FOR ORDINALITY bug"
},
{
"msg_contents": "\nOn 2022-05-04 We 16:09, Erik Rijkers wrote:\n> Op 04-05-2022 om 21:12 schreef Andrew Dunstan:\n>>\n>>>>>\n>>>>> I don't see how rowseq can be anything but 1. Each invocation of\n>>>\n>>>\n>>> After some further experimentation, I now think you must be right,\n>>> David.\n>>>\n>>> Also, looking at the DB2 docs:\n>>> https://www.ibm.com/docs/en/i/7.2?topic=data-using-json-table\n>>> (see especially under 'Handling nested information')\n>>>\n>>> There, I gathered some example data + statements where one is the case\n>>> at hand. I also made them runnable under postgres (attached).\n>>>\n>>> I thought that was an instructive example, with those\n>>> 'outer_ordinality' and 'inner_ordinality' columns.\n>>>\n>>>\n>>\n>> Yeah, I just reviewed the latest version of that page (7.5) and the\n>> example seems fairly plain that we are doing the right thing, or if not\n>> we're in pretty good company, so I guess this is probably a false alarm.\n>> Looks like ordinality is for the number of the element produced by the\n>> path expression. So a path of 'lax $' should just produce ordinality of\n>> 1 in each case, while a path of 'lax $[*]' will produce increasing\n>> ordinality for each element of the root array.\n>\n> Agreed.\n>\n> You've probably noticed then that on that same page under 'Sibling\n> Nesting' is a statement that gives a 13-row resultset on DB2 whereas\n> in 15devel that statement yields just 10 rows. I don't know which is\n> correct.\n\n\nOracle also gives 10 rows for that query according to my testing, so I\nsuspect either DB2 and/or its docs are wrong.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 9 May 2022 16:37:33 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON: FOR ORDINALITY bug"
}
] |
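The FULL OUTER JOIN ON FALSE combination debated in the thread above can be modeled with a toy sketch (plain Python tuples stand in for subpath rows; this illustrates the join semantics under discussion, not PostgreSQL's actual JSON_TABLE implementation):

```python
def full_outer_join_on_false(a_rows, b_rows, a_width, b_width):
    """Toy model of FULL OUTER JOIN ... ON FALSE between two sibling
    row sets: the join condition never matches, so every row from each
    side is emitted exactly once, null-padded on the other side."""
    out = []
    for a in a_rows:
        out.append(a + (None,) * b_width)
    for b in b_rows:
        out.append((None,) * a_width + b)
    return out
```

Because nothing ever matches, the combined result always has len(a_rows) + len(b_rows) rows, which is one way the row counts can diverge from a UNION-style interpretation that guarantees at least one row per subpath.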
[
{
"msg_contents": "Hello,\r\n\r\n(I'm cleaning up some old git branches and found this. It was helpful\r\nwhen I was trying to debug failures between an NSS client and an\r\nOpenSSL server, and it seems general enough to help for more\r\ncomplicated OpenSSL-only setups as well.)\r\n\r\nCurrently, debugging client cert verification failures is mostly\r\nlimited to looking at the TLS alert code on the client side. For simple\r\ndeployments, usually it's enough to see \"sslv3 alert certificate\r\nrevoked\" and know exactly what needs to be fixed, but if you add any\r\nmore complexity (multiple CA layers, misconfigured CA certificates,\r\netc.), trying to debug what happened based on the TLS alert alone can\r\nbe an exercise in frustration.\r\n\r\nLuckily, the server has more information about exactly what failed in\r\nthe chain, and we already have the requisite callback implemented as a\r\nstub, so I've filled it out with error handling and added a COMMERROR\r\nlog so that a DBA can debug client failures more easily.\r\n\r\nIt ends up looking like\r\n\r\n LOG: connection received: host=localhost port=44120\r\n LOG: client certificate verification failed at depth 1: unable to get local issuer certificate\r\n DETAIL: failed certificate's subject: /CN=Test CA for PostgreSQL SSL regression test client certs\r\n LOG: could not accept SSL connection: certificate verify failed\r\n\r\nIt might be even nicer to make this available to the client, but I\r\nthink the server log is an appropriate place for this information -- an\r\nadmin might not want to advertise exactly why a client certificate has\r\nfailed verification (other than what's already available via the TLS\r\nalert, that is), and I think more complicated failures (with\r\nintermediate CAs, etc.) are going to need administrator intervention\r\nanyway. 
So having to check the logs doesn't seem like a big hurdle.\r\n\r\nOne question/concern -- the Subject that's printed to the logs could be\r\npretty big (OpenSSL limits the incoming certificate chain to 100K, by\r\ndefault), which introduces an avenue for intentional log spamming. Is\r\nthere an existing convention for limiting the length of log output used\r\nfor debugging? Maybe I should just hardcode a smaller limit and\r\ntruncate anything past that? Or we could just log the Common Name,\r\nwhich should be limited to 64 bytes...\r\n\r\nI'll add this to the July commitfest.\r\n\r\nThanks,\r\n--Jacob",
"msg_date": "Tue, 3 May 2022 17:04:31 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On 03.05.22 19:04, Jacob Champion wrote:\n> One question/concern -- the Subject that's printed to the logs could be\n> pretty big (OpenSSL limits the incoming certificate chain to 100K, by\n> default), which introduces an avenue for intentional log spamming. Is\n> there an existing convention for limiting the length of log output used\n> for debugging? Maybe I should just hardcode a smaller limit and\n> truncate anything past that? Or we could just log the Common Name,\n> which should be limited to 64 bytes...\n\nThe information in pg_stat_ssl is limited to NAMEDATALEN (see struct \nPgBackendSSLStatus).\n\nIt might make sense to align what your patch prints to identify \ncertificates with what is shown in that view.\n\n\n",
"msg_date": "Tue, 3 May 2022 21:06:12 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Tue, 2022-05-03 at 21:06 +0200, Peter Eisentraut wrote:\r\n> The information in pg_stat_ssl is limited to NAMEDATALEN (see struct\r\n> PgBackendSSLStatus).\r\n> \r\n> It might make sense to align what your patch prints to identify\r\n> certificates with what is shown in that view.\r\n\r\nSure, a max length should be easy enough to do. Is there a reason to\r\nlimit to NAMEDATALEN specifically? I was under the impression that we\r\nwould rather not have had that limitation in the stats framework, if we\r\ncould have avoided it. (In particular I think NAMEDATALEN will cut off\r\nthe longest possible Common Name by just five bytes.)\r\n\r\nThanks,\r\n--Jacob\r\n",
"msg_date": "Tue, 3 May 2022 23:05:30 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "\nOn 04.05.22 01:05, Jacob Champion wrote:\n> On Tue, 2022-05-03 at 21:06 +0200, Peter Eisentraut wrote:\n>> The information in pg_stat_ssl is limited to NAMEDATALEN (see struct\n>> PgBackendSSLStatus).\n>>\n>> It might make sense to align what your patch prints to identify\n>> certificates with what is shown in that view.\n> \n> Sure, a max length should be easy enough to do. Is there a reason to\n> limit to NAMEDATALEN specifically? I was under the impression that we\n> would rather not have had that limitation in the stats framework, if we\n> could have avoided it. (In particular I think NAMEDATALEN will cut off\n> the longest possible Common Name by just five bytes.)\n\nJust saying that cutting it off appears to be acceptable. A bit more \nthan 63 bytes should be okay for the log.\n\nIn terms of aligning what is printed, I meant that pg_stat_ssl uses the \nissuer plus serial number to identify the certificate unambiguously.\n\n\n",
"msg_date": "Wed, 4 May 2022 15:53:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Wed, 2022-05-04 at 15:53 +0200, Peter Eisentraut wrote:\r\n> Just saying that cutting it off appears to be acceptable. A bit more\r\n> than 63 bytes should be okay for the log.\r\n\r\nGotcha.\r\n\r\n> In terms of aligning what is printed, I meant that pg_stat_ssl uses the\r\n> issuer plus serial number to identify the certificate unambiguously.\r\n\r\nOh, that's a great idea. I'll do that too.\r\n\r\nThanks!\r\n--Jacob\r\n",
"msg_date": "Thu, 5 May 2022 15:12:20 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Thu, 2022-05-05 at 15:12 +0000, Jacob Champion wrote:\r\n> On Wed, 2022-05-04 at 15:53 +0200, Peter Eisentraut wrote:\r\n> > In terms of aligning what is printed, I meant that pg_stat_ssl uses the\r\n> > issuer plus serial number to identify the certificate unambiguously.\r\n> \r\n> Oh, that's a great idea. I'll do that too.\r\n\r\nv2 limits the maximum subject length and adds the serial number to the\r\nlogs.\r\n\r\nThanks!\r\n--Jacob",
"msg_date": "Thu, 12 May 2022 22:36:01 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On 13.05.22 00:36, Jacob Champion wrote:\n> On Thu, 2022-05-05 at 15:12 +0000, Jacob Champion wrote:\n>> On Wed, 2022-05-04 at 15:53 +0200, Peter Eisentraut wrote:\n>>> In terms of aligning what is printed, I meant that pg_stat_ssl uses the\n>>> issuer plus serial number to identify the certificate unambiguously.\n>>\n>> Oh, that's a great idea. I'll do that too.\n> \n> v2 limits the maximum subject length and adds the serial number to the\n> logs.\n\nI wrote that pg_stat_ssl uses the *issuer* plus serial number to \nidentify a certificate. What your patch shows is the subject and the \nserial number, which isn't the same thing. Let's get that sorted out \none way or the other.\n\nAnother point, your patch produces\n\n LOG: connection received: host=localhost port=44120\n LOG: client certificate verification failed at depth 1: ...\n DETAIL: failed certificate had subject ...\n LOG: could not accept SSL connection: certificate verify failed\n\nI guess what we really would like is\n\n LOG: connection received: host=localhost port=44120\n LOG: could not accept SSL connection: certificate verify failed\n DETAIL: client certificate verification failed at depth 1: ...\n failed certificate had subject ...\n\nBut I suppose that would be very cumbersome to produce with the callback \nstructure provided by OpenSSL?\n\nI'm not saying the proposed way is unacceptable, but maybe it's worth \nbeing explicit about this tradeoff.\n\n\n",
"msg_date": "Thu, 30 Jun 2022 10:43:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
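The reordering Peter asks about amounts to stashing the callback's details and attaching them to the final handshake error. A minimal language-neutral sketch of that flow (all names here are hypothetical stand-ins for OpenSSL's verify callback and the ex_data / static-variable stash discussed later in the thread):

```python
class ConnState:
    def __init__(self):
        self.cert_errdetail = None  # stash filled in by the verify callback

def verify_cb(state, ok, depth, reason, subject):
    # Called once per certificate in the chain; on failure, record the
    # details instead of logging them immediately.
    if not ok and state.cert_errdetail is None:
        state.cert_errdetail = (
            'client certificate verification failed at depth %d: %s; '
            'failed certificate had subject "%s"' % (depth, reason, subject))
    return ok

def report_handshake_failure(state):
    # Emitted once the handshake as a whole fails, carrying the stashed
    # detail with the top-level error line.
    msg = "could not accept SSL connection: certificate verify failed"
    if state.cert_errdetail:
        msg += "\nDETAIL: " + state.cert_errdetail
    return msg
```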
{
"msg_contents": "On 30 Jun 2022, at 10:43, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> I wrote that pg_stat_ssl uses the *issuer* plus serial number to identify a certificate. What your patch shows is the subject and the serial number, which isn't the same thing. Let's get that sorted out one way or the other.\n\nQuick observation on this one, the string format of an issuer and serial number is defined as a “Certificate Exact Assertion” in RFC 4523.\n\nI added this to httpd a while back:\n\nSSL_CLIENT_CERT_RFC4523_CEA\n\nIt would be good to interoperate.\n\nRegards,\nGraham\n—",
"msg_date": "Thu, 30 Jun 2022 10:53:58 +0100",
"msg_from": "Graham Leggett <minfrin@sharp.fm>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Thu, Jun 30, 2022 at 2:43 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 13.05.22 00:36, Jacob Champion wrote:\n> > v2 limits the maximum subject length and adds the serial number to the\n> > logs.\n>\n> I wrote that pg_stat_ssl uses the *issuer* plus serial number to\n> identify a certificate. What your patch shows is the subject and the\n> serial number, which isn't the same thing. Let's get that sorted out\n> one way or the other.\n\nSorry for the misunderstanding! v3 adds the Issuer to the logs as well.\n\nI wanted to clarify that this \"issuer\" has not actually been verified,\nbut all I could come up with was \"purported issuer\" which doesn't read\nwell to me. \"Claimed issuer\"? \"Alleged issuer\"? Thoughts?\n\n> Another point, your patch produces\n>\n> LOG: connection received: host=localhost port=44120\n> LOG: client certificate verification failed at depth 1: ...\n> DETAIL: failed certificate had subject ...\n> LOG: could not accept SSL connection: certificate verify failed\n>\n> I guess what we really would like is\n>\n> LOG: connection received: host=localhost port=44120\n> LOG: could not accept SSL connection: certificate verify failed\n> DETAIL: client certificate verification failed at depth 1: ...\n> failed certificate had subject ...\n>\n> But I suppose that would be very cumbersome to produce with the callback\n> structure provided by OpenSSL?\n\nI was about to say \"yes, very cumbersome\", but I actually think we\nmight be able to do that without bubbling the error up through\nmultiple callback layers, using SSL_set_ex_data() and friends. I'll\ntake a closer look.\n\nThanks!\n--Jacob",
"msg_date": "Fri, 1 Jul 2022 13:51:24 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Thu, Jun 30, 2022 at 2:54 AM Graham Leggett <minfrin@sharp.fm> wrote:\n>\n> I added this to httpd a while back:\n>\n> SSL_CLIENT_CERT_RFC4523_CEA\n>\n> It would be good to interoperate.\n\nWhat kind of interoperation did you have in mind? Are there existing\ntools that want to scrape this information for observability?\n\nI think the CEA syntax might not be a good fit for this particular\npatch: first, we haven't actually verified the certificate, so no one\nshould be using it to assert certificate equality (and I'm truncating\nthe Issuer anyway, to avoid letting someone flood the logs). Second,\nthis is designed to be human-readable rather than machine-readable.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Fri, 1 Jul 2022 13:59:42 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Fri, Jul 1, 2022 at 1:51 PM Jacob Champion <jchampion@timescale.com> wrote:\n> Sorry for the misunderstanding! v3 adds the Issuer to the logs as well.\n\nResending v3; I messed up the certificate diff with my gitconfig.\n\n--Jacob",
"msg_date": "Tue, 5 Jul 2022 09:34:02 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On 05.07.22 18:34, Jacob Champion wrote:\n> On Fri, Jul 1, 2022 at 1:51 PM Jacob Champion <jchampion@timescale.com> wrote:\n>> Sorry for the misunderstanding! v3 adds the Issuer to the logs as well.\n> \n> Resending v3; I messed up the certificate diff with my gitconfig.\n\nThis patch looks pretty good to me. Some minor details:\n\nI looked into how you decode the serial number. I have found some code \nelsewhere that passed the result of X509_get_serialNumber() directly to \nASN1_INTEGER_set(). But I guess a serial number of maximum length 20 \noctets wouldn't fit into a 32-bit long. (There is \nASN1_INTEGER_set_int64(), but that requires OpenSSL 1.1.0.) Does that \nmatch your understanding?\n\nFor the detail string, I think we could do something like:\n\nDETAIL: Failed certificate data (unverified): subject '%s', serial \nnumber %s, issuer '%s'\n\n\n",
"msg_date": "Thu, 7 Jul 2022 11:50:08 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Thu, Jul 7, 2022 at 2:50 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> I looked into how you decode the serial number. I have found some code\n> elsewhere that passed the result of X509_get_serialNumber() directly to\n> ASN1_INTEGER_set(). But I guess a serial number of maximum length 20\n> octets wouldn't fit into a 32-bit long. (There is\n> ASN1_INTEGER_set_int64(), but that requires OpenSSL 1.1.0.) Does that\n> match your understanding?\n\nYep. And the bit lengths of the serial numbers used in the test suite\nare in the low 60s already. Many people will just randomize their\nserial numbers, so I think BN_bn2dec() is the way to go.\n\n> For the detail string, I think we could do something like:\n>\n> DETAIL: Failed certificate data (unverified): subject '%s', serial\n> number %s, issuer '%s'\n\nDone that way in v4.\n\nI also added an optional 0002 that bubbles the error info up to the\nfinal ereport(ERROR), using errdetail() and errhint(). I can squash it\ninto 0001 if you like it, or drop it if you don't. (This approach\ncould be adapted to the client, too.)\n\nThanks!\n--Jacob",
"msg_date": "Fri, 8 Jul 2022 11:39:04 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
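The width concern behind choosing BN_bn2dec() can be shown with a short sketch: an X.509 serial number is an ASN.1 INTEGER of up to 20 octets (per RFC 5280), so it cannot be squeezed through a C `long`. Python integers are arbitrary precision, so the equivalent conversion is a one-liner:

```python
def serial_to_decimal(serial_bytes):
    # RFC 5280 allows serial numbers up to 20 octets, far wider than a
    # 32- or 64-bit C integer, so arbitrary-precision conversion (what
    # BN_bn2dec() provides in OpenSSL) is required.
    return str(int.from_bytes(serial_bytes, byteorder="big", signed=True))
```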
{
"msg_contents": "On 01 Jul 2022, at 22:59, Jacob Champion <jchampion@timescale.com> wrote:\n\n>> I added this to httpd a while back:\n>> \n>> SSL_CLIENT_CERT_RFC4523_CEA\n>> \n>> It would be good to interoperate.\n> \n> What kind of interoperation did you have in mind? Are there existing\n> tools that want to scrape this information for observability?\n\nThis is for human troubleshooting.\n\n> I think the CEA syntax might not be a good fit for this particular\n> patch: first, we haven't actually verified the certificate, so no one\n> should be using it to assert certificate equality (and I'm truncating\n> the Issuer anyway, to avoid letting someone flood the logs). Second,\n> this is designed to be human-readable rather than machine-readable.\n\nThis is what a CEA looks like:\n\n{ serialNumber 400410167207191393705333222102472642510002355884, issuer rdnSequence:”CN=Foo UK G1,O=Foo,C=UK\" }\n\nWhitespace and escaping is important above.\n\nWhen troubleshooting, you want a string like the above that you can cut and paste and search for in other systems and log files. The verification status of the cert isn’t an issue at this point, you have a system in front of you where it doesn’t work when it should, and you need to know exactly what’s connecting, not what you think you’re connecting to, and you need precise data.\n\nPlease don’t invent another format, or try and truncate the data. This is a huge headache when troubleshooting.\n\nRegards,\nGraham\n—\n\n\n\n",
"msg_date": "Sat, 9 Jul 2022 15:49:34 +0200",
"msg_from": "Graham Leggett <minfrin@sharp.fm>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
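For reference, the Certificate Exact Assertion string Graham shows can be assembled roughly like this (a sketch only: proper escaping of the RDN sequence follows RFC 4514 and is more involved than simple interpolation, and the function name is invented here):

```python
def cea_string(serial_decimal, issuer_rdn):
    # Approximate shape of an RFC 4523 CertificateExactAssertion in
    # string form, matching the example in the message above.
    return '{ serialNumber %s, issuer rdnSequence:"%s" }' % (
        serial_decimal, issuer_rdn)
```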
{
"msg_contents": "On 08.07.22 20:39, Jacob Champion wrote:\n> I also added an optional 0002 that bubbles the error info up to the\n> final ereport(ERROR), using errdetail() and errhint(). I can squash it\n> into 0001 if you like it, or drop it if you don't. (This approach\n> could be adapted to the client, too.)\n\nI squashed those two together. I also adjusted the error message a bit \nmore for project style. (We can put both lines into detail.)\n\nI had to read up on this \"ex_data\" API. Interesting. But I'm wondering \na bit about how the life cycle of these objects is managed. What \nhappens if the allocated error string is deallocated before its \ncontaining object? Or vice versa? How do we ensure we don't \naccidentally reuse the error string when the code runs again? (I guess \ncurrently it can't?) Maybe we should avoid this and just put the \nerrdetail itself into a static variable that we unset once the message \nis printed?",
"msg_date": "Mon, 11 Jul 2022 15:09:19 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Sat, Jul 9, 2022 at 6:49 AM Graham Leggett <minfrin@sharp.fm> wrote:\n> Please don’t invent another format, or try and truncate the data. This is a huge headache when troubleshooting.\n\nI hear you, and I agree that correlating these things across machines\nis something we should be making easier. I'm just not convinced that\nthe particular format you've proposed, with a new set of rules for\nquoting and escaping, needs to be part of this patch. (And I think\nthere are good reasons to truncate unverified cert data, so there'd\nhave to be clear benefits to offset the risk of opening it up.)\n\nSearching Google for \"issuer rdnSequence\" comes up with mostly false\npositives related to LDAP filtering and certificate dumps, and the\ntrue positives seem to be mail threads that you've participated in. Do\nmany LDAP servers log certificate failures in this format by default?\n(For that matter, does httpd?) The discussion at the time you added\nthis to httpd [1] seemed to be making the point that this was a niche\nformat, suited mostly for interaction with LDAP filters -- and Kaspar\nadditionally pointed out that it's not a canonical format, so all of\nour implementations would have to have an ad hoc agreement to choose\nexactly one encoding.\n\nIf you're using randomized serial numbers, you should be able to grep\nfor those by themselves and successfully match many different formats,\nno? To me, that seems good enough for a first patch, considering we\ndon't currently log any of this information.\n\n--Jacob\n\n[1] https://lists.apache.org/thread/1665qc4mod7ppp58qk3bqc2l3wtl3lkn",
"msg_date": "Tue, 12 Jul 2022 16:05:58 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 6:09 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> I squashed those two together. I also adjusted the error message a bit\n> more for project style. (We can put both lines into detail.)\n\nOh, okay. Log parsers don't have any issues with that?\n\n> I had to read up on this \"ex_data\" API. Interesting. But I'm wondering\n> a bit about how the life cycle of these objects is managed. What\n> happens if the allocated error string is deallocated before its\n> containing object? Or vice versa?\n\nYeah, I'm currently leaning heavily on the lack of any memory context\nswitches here. And I end up leaking out a pointer to the stale stack\nof be_tls_open_server(), which is gross -- it works since there are no\nother clients, but that could probably come back to bite us.\n\nThe ex_data API exposes optional callbacks for new/dup/free (I'm\ncurrently setting those to NULL), so we can run custom code whenever\nthe SSL* is destroyed. If you'd rather the data have the same lifetime\nof the SSL* object, we can switch to malloc/strdup/free (or even\nOPENSSL_strdup() in later versions). But since we don't have any use\nfor the ex_data outside of this function, maybe we should just clear\nit before we return, rather than carrying it around.\n\n> How do we ensure we don't\n> accidentally reuse the error string when the code runs again? (I guess\n> currently it can't?)\n\nThe ex_data is associated with the SSL*, not the global SSL_CTX*, so\nthat shouldn't be an issue. A new SSL* gets created at the start of\nbe_tls_open_server().\n\n> Maybe we should avoid this and just put the\n> errdetail itself into a static variable that we unset once the message\n> is printed?\n\nIf you're worried about the lifetime of the palloc'd data being too\nshort, does switching to a static variable help in that case?\n\n--Jacob\n\n\n",
"msg_date": "Tue, 12 Jul 2022 16:06:50 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On 13.07.22 01:06, Jacob Champion wrote:\n>> I had to read up on this \"ex_data\" API. Interesting. But I'm wondering\n>> a bit about how the life cycle of these objects is managed. What\n>> happens if the allocated error string is deallocated before its\n>> containing object? Or vice versa?\n> \n> Yeah, I'm currently leaning heavily on the lack of any memory context\n> switches here. And I end up leaking out a pointer to the stale stack\n> of be_tls_open_server(), which is gross -- it works since there are no\n> other clients, but that could probably come back to bite us.\n> \n> The ex_data API exposes optional callbacks for new/dup/free (I'm\n> currently setting those to NULL), so we can run custom code whenever\n> the SSL* is destroyed. If you'd rather the data have the same lifetime\n> of the SSL* object, we can switch to malloc/strdup/free (or even\n> OPENSSL_strdup() in later versions). But since we don't have any use\n> for the ex_data outside of this function, maybe we should just clear\n> it before we return, rather than carrying it around.\n\nConcretely, I was thinking like the attached top-up patch.\n\nThe other way can surely be made to work somehow, but this seems much \nsimpler and with fewer questions about the details.",
"msg_date": "Thu, 14 Jul 2022 22:12:33 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 1:12 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> Concretely, I was thinking like the attached top-up patch.\n>\n> The other way can surely be made to work somehow, but this seems much\n> simpler and with fewer questions about the details.\n\nAh, seeing it side-by-side helps. That's much easier, I agree.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Thu, 14 Jul 2022 14:09:02 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On 14.07.22 23:09, Jacob Champion wrote:\n> On Thu, Jul 14, 2022 at 1:12 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> Concretely, I was thinking like the attached top-up patch.\n>>\n>> The other way can surely be made to work somehow, but this seems much\n>> simpler and with fewer questions about the details.\n> \n> Ah, seeing it side-by-side helps. That's much easier, I agree.\n\nCommitted like that.\n\n\n\n",
"msg_date": "Fri, 15 Jul 2022 18:34:17 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On 7/15/22 09:34, Peter Eisentraut wrote:\n> Committed like that.\n\nThanks for all the reviews!\n\n--Jacob\n\n\n\n",
"msg_date": "Fri, 15 Jul 2022 09:46:40 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-15 09:46:40 -0700, Jacob Champion wrote:\n> On 7/15/22 09:34, Peter Eisentraut wrote:\n> > Committed like that.\n> \n> Thanks for all the reviews!\n\nThis might have been discussed somewhere, but I'm worried about emitting\nunescaped data from pre-auth clients. What guarantees that subject / issuer\nname only contain printable ascii-chars? Printing terminal control chars or\nsuch would not be great, nor would splitting a string at a multi-boundary.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Jul 2022 12:11:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On 7/15/22 12:11, Andres Freund wrote:\n> This might have been discussed somewhere, but I'm worried about emitting\n> unescaped data from pre-auth clients. What guarantees that subject / issuer\n> name only contain printable ascii-chars? Printing terminal control chars or\n> such would not be great, nor would splitting a string at a multi-boundary.\n\nHm. The last time I asked about that, Magnus pointed out that we reflect\nport->user_name as-is [1], so I kind of stopped worrying about it. Is\nthis more dangerous than that? (And do we want to fix it now,\nregardless?) What guarantees are we supposed to be making for log encoding?\n\nThanks,\n--Jacob\n\n[1]\nhttps://www.postgresql.org/message-id/CABUevExVHryTasKmtJW5RtU-dBesYj4bV7ggpeVMfiPCHCvLNA%40mail.gmail.com\n\n\n",
"msg_date": "Fri, 15 Jul 2022 13:20:59 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-15 13:20:59 -0700, Jacob Champion wrote:\n> On 7/15/22 12:11, Andres Freund wrote:\n> > This might have been discussed somewhere, but I'm worried about emitting\n> > unescaped data from pre-auth clients. What guarantees that subject / issuer\n> > name only contain printable ascii-chars? Printing terminal control chars or\n> > such would not be great, nor would splitting a string at a multi-boundary.\n> \n> Hm. The last time I asked about that, Magnus pointed out that we reflect\n> port->user_name as-is [1], so I kind of stopped worrying about it.\n\nI think we need to fix a number of these. But no, I don't think we should just\nadd more because we've not been careful in a bunch of other places.\n\n\n> Is this more dangerous than that?\n\nHard to say.\n\n\n> (And do we want to fix it now, regardless?)\n\nYes.\n\n\n> What guarantees are we supposed to be making for log encoding?\n\nI don't know, but I don't think not caring at all is a good\noption. Particularly for unauthenticated data I'd say that escaping everything\nbut printable ascii chars is a sensible approach.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Jul 2022 13:35:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On 7/15/22 13:35, Andres Freund wrote:\n>> (And do we want to fix it now, regardless?)\n> \n> Yes.\n\nCool. I can get on board with that.\n\n>> What guarantees are we supposed to be making for log encoding?\n> \n> I don't know, but I don't think not caring at all is a good\n> option. Particularly for unauthenticated data I'd say that escaping everything\n> but printable ascii chars is a sensible approach.\n\nIt'll also be painful for anyone whose infrastructure isn't in a Latin\ncharacter set... Maybe that's worth the tradeoff for a v1.\n\nIs there an acceptable approach that could centralize it, so we fix it\nonce and are done? E.g. a log_encoding GUC and either conversion or\nescaping in send_message_to_server_log()?\n\n--Jacob\n\n\n",
"msg_date": "Fri, 15 Jul 2022 14:01:53 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
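The mid-sequence truncation problem Jacob concedes here has a standard fix: back the cut point up past any dangling UTF-8 continuation bytes. A small sketch of that approach (not the patch's code, which simply truncates at a byte limit):

```python
def truncate_utf8(raw, limit):
    """Cut a UTF-8 byte string at 'limit' bytes without splitting a
    multi-byte sequence: continuation bytes match 0b10xxxxxx, so back
    up while the byte at the cut point is one of them."""
    if len(raw) <= limit:
        return raw
    cut = limit
    while cut > 0 and (raw[cut] & 0xC0) == 0x80:
        cut -= 1
    return raw[:cut]
```

The truncated result always decodes cleanly, so log consumers never see replacement glyphs from a half-emitted character.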
{
"msg_contents": "Hi,\n\nOn 2022-07-15 14:01:53 -0700, Jacob Champion wrote:\n> On 7/15/22 13:35, Andres Freund wrote:\n> >> (And do we want to fix it now, regardless?)\n> > \n> > Yes.\n> \n> Cool. I can get on board with that.\n> \n> >> What guarantees are we supposed to be making for log encoding?\n> > \n> > I don't know, but I don't think not caring at all is a good\n> > option. Particularly for unauthenticated data I'd say that escaping everything\n> > but printable ascii chars is a sensible approach.\n> \n> It'll also be painful for anyone whose infrastructure isn't in a Latin\n> character set... Maybe that's worth the tradeoff for a v1.\n\nI don't think it's a huge issue, or really avoidable, pre-authentication.\nDon't we require all server-side encodings to be supersets of ascii?\n\nWe already have pg_clean_ascii() and use it for application_name, fwiw.\n\n\n> Is there an acceptable approach that could centralize it, so we fix it\n> once and are done? E.g. a log_encoding GUC and either conversion or\n> escaping in send_message_to_server_log()?\n\nIntroducing escaping to ascii for all log messages seems like it'd be\nincredibly invasive, and would remove a lot of worthwhile information. Nor\ndoes it really address the whole scope - consider e.g. the truncation in this\npatch, that can't be done correctly by the time send_message_to_server_log()\nis reached - just chopping in the middle of a multi-byte string would have\nmade the string invalidly encoded. And we can't perform encoding conversion\nfrom client data until we've gone further into the authentication process, I\nthink.\n\nAlways escaping ANSI escape codes (or rather the non-printable ascii range) is\nmore convincing. Then we'd just need to make sure that client controlled data\nis properly encoded before handing it over to other parts of the system.\n\nI can see a point in a log_encoding GUC at some point, but it seems a bit\nseparate from the discussion here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Jul 2022 14:19:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> We already have pg_clean_ascii() and use it for application_name, fwiw.\n\nYeah, we should just use that. If anyone wants to upgrade the situation\nfor non-ASCII data later, fixing it for all of these cases at once would\nbe appropriate.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Jul 2022 17:23:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On 7/15/22 14:19, Andres Freund wrote:\n> On 2022-07-15 14:01:53 -0700, Jacob Champion wrote:\n>> On 7/15/22 13:35, Andres Freund wrote:\n>>> I don't know, but I don't think not caring at all is a good\n>>> option. Particularly for unauthenticated data I'd say that escaping everything\n>>> but printable ascii chars is a sensible approach.\n>>\n>> It'll also be painful for anyone whose infrastructure isn't in a Latin\n>> character set... Maybe that's worth the tradeoff for a v1.\n> \n> I don't think it's a huge issue, or really avoidable, pre-authentication.\n> Don't we require all server-side encodings to be supersets of ascii?\n\nWell, I was going to say that for this feature, where the goal is to\ndebug a failed certificate chain, having to manually unescape the logged\ncertificate names if your infrastructure already handled UTF-8 natively\nwould be a real pain.\n\nBut your later point about truncation makes that moot; I forgot that my\npatch can truncate in the middle of a UTF-8 sequence, so you're probably\ndealing with replacement glyphs anyway. I don't really have a leg to\nstand on there.\n\n> We already have pg_clean_ascii() and use it for application_name, fwiw.\n\nThat seems much worse than escaping for this particular patch; if your\ncert's Common Name is in (non-ASCII) UTF-8 then all you'll see is\n\"CN=?????????\" in the log lines that were supposed to be helping you\nroot-cause. Escaping would be much more helpful in this case.\n\n>> Is there an acceptable approach that could centralize it, so we fix it\n>> once and are done? E.g. a log_encoding GUC and either conversion or\n>> escaping in send_message_to_server_log()?\n> \n> Introducing escaping to ascii for all log messages seems like it'd be\n> incredibly invasive, and would remove a lot of worthwhile information. Nor\n> does it really address the whole scope - consider e.g. the truncation in this\n> patch, that can't be done correctly by the time send_message_to_server_log()\n> is reached - just chopping in the middle of a multi-byte string would have\n> made the string invalidly encoded. And we can't perform encoding conversion\n> from client data until we've gone further into the authentication process, I\n> think.\n> \n> Always escaping ANSI escape codes (or rather the non-printable ascii range) is\n> more convincing. Then we'd just need to make sure that client controlled data\n> is properly encoded before handing it over to other parts of the system.\n> \n> I can see a point in a log_encoding GUC at some point, but it seems a bit\n> separate from the discussion here.\n\nFair enough.\n\n--Jacob\n\n\n",
"msg_date": "Fri, 15 Jul 2022 14:51:38 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-15 14:51:38 -0700, Jacob Champion wrote:\n> > We already have pg_clean_ascii() and use it for application_name, fwiw.\n> \n> That seems much worse than escaping for this particular patch; if your\n> cert's Common Name is in (non-ASCII) UTF-8 then all you'll see is\n> \"CN=?????????\" in the log lines that were supposed to be helping you\n> root-cause. Escaping would be much more helpful in this case.\n\nI'm doubtful that's all that common. But either way, I suggest a separate\npatch to deal with that...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Jul 2022 16:45:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Fri, Jul 15, 2022 at 4:45 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-07-15 14:51:38 -0700, Jacob Champion wrote:\n> > That seems much worse than escaping for this particular patch; if your\n> > cert's Common Name is in (non-ASCII) UTF-8 then all you'll see is\n> > \"CN=?????????\" in the log lines that were supposed to be helping you\n> > root-cause. Escaping would be much more helpful in this case.\n>\n> I'm doubtful that's all that common.\n\nProbably not, but the more systems that support it without weird\nusability bugs, the more common it will hopefully become.\n\n> But either way, I suggest a separate patch to deal with that...\n\nProposed fix attached, which uses \\x-escaping for bytes outside of\nprintable ASCII.\n\nThanks,\n--Jacob",
"msg_date": "Tue, 19 Jul 2022 09:07:31 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-19 09:07:31 -0700, Jacob Champion wrote:\n> On Fri, Jul 15, 2022 at 4:45 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-07-15 14:51:38 -0700, Jacob Champion wrote:\n> > > That seems much worse than escaping for this particular patch; if your\n> > > cert's Common Name is in (non-ASCII) UTF-8 then all you'll see is\n> > > \"CN=?????????\" in the log lines that were supposed to be helping you\n> > > root-cause. Escaping would be much more helpful in this case.\n> >\n> > I'm doubtful that's all that common.\n> \n> Probably not, but the more systems that support it without weird\n> usability bugs, the more common it will hopefully become.\n> \n> > But either way, I suggest a separate patch to deal with that...\n> \n> Proposed fix attached, which uses \\x-escaping for bytes outside of\n> printable ASCII.\n\nI don't think this should be open coded in the ssl part of the code. IMO this\nshould replace the existing ascii escape function instead. I strongly oppose\nopen coding this functionality in prepare_cert_name().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 19 Jul 2022 09:14:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "[resending to list]\n\nOn 7/19/22 09:14, Andres Freund wrote:\n> IMO this should replace the existing ascii escape function instead.\nThat will affect the existing behavior of application_name and\ncluster_name; is that acceptable?\n\n--Jacob\n\n\n",
"msg_date": "Tue, 19 Jul 2022 09:30:18 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> On 7/19/22 09:14, Andres Freund wrote:\n>> IMO this should replace the existing ascii escape function instead.\n\n> That will affect the existing behavior of application_name and\n> cluster_name; is that acceptable?\n\nI think Andres' point is exactly that these should all act alike.\n\nHaving said that, I struggle to see why we are panicking about badly\nencoded log data from this source while blithely ignoring the problems\nposed by non-ASCII role names, database names, and tablespace names.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Jul 2022 12:39:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-19 12:39:43 -0400, Tom Lane wrote:\n> Having said that, I struggle to see why we are panicking about badly\n> encoded log data from this source while blithely ignoring the problems\n> posed by non-ASCII role names, database names, and tablespace names.\n\nI think we should fix these as well. I'm not as concerned about post-auth\nencoding issues (i.e. tablespace name) as about pre-auth data (role name,\ndatabase name) - obviously being allowed to log in already is a pretty good\nfilter...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 19 Jul 2022 10:09:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 10:09 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-07-19 12:39:43 -0400, Tom Lane wrote:\n> > Having said that, I struggle to see why we are panicking about badly\n> > encoded log data from this source while blithely ignoring the problems\n> > posed by non-ASCII role names, database names, and tablespace names.\n>\n> I think we should fix these as well. I'm not as concerned about post-auth\n> encoding issues (i.e. tablespace name) as about pre-auth data (role name,\n> database name) - obviously being allowed to log in already is a pretty good\n> filter...\n\nv2 adds escaping to pg_clean_ascii(). My original attempt used\nStringInfo allocation, but that didn't play well with guc_malloc(), so\nI switched to a two-pass API where the caller allocates. Let me know\nif I'm missing something obvious; this way is more verbose than I'd\nlike...\n\nThanks,\n--Jacob",
"msg_date": "Tue, 19 Jul 2022 15:08:38 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-19 15:08:38 -0700, Jacob Champion wrote:\n> v2 adds escaping to pg_clean_ascii(). My original attempt used\n> StringInfo allocation, but that didn't play well with guc_malloc(), so\n> I switched to a two-pass API where the caller allocates. Let me know\n> if I'm missing something obvious; this way is more verbose than I'd\n> like...\n\nHm, that's pretty awkward. Perhaps we can have a better API for\neverything but guc.c?\n\nOr alternatively, perhaps we can just make pg_clean_ascii() return NULL\nif allocation failed and then guc_strdup() the result in guc.c?\n\nIf we end up needing a two phase approach, why use the same function for\nboth phases? That seems quite awkward.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 19 Jul 2022 15:38:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 3:38 PM Andres Freund <andres@anarazel.de> wrote:\n> Or alternatively, perhaps we can just make pg_clean_ascii() return NULL\n> if allocation failed and then guc_strdup() the result in guc.c?\n\nThe guc_strdup() approach really reduces the amount of code, so that's\nwhat I did in v3. I'm not following why we need to return NULL on\nfailure, though -- both palloc() and guc_malloc() ERROR on failure, so\nis it okay to keep those semantics the same?\n\n> If we end up needing a two phase approach, why use the same function for\n> both phases? That seems quite awkward.\n\nMostly so the byte counting always agrees between the two phases, no\nmatter how the implementation evolves. But it's hopefully moot now.\n\n--Jacob",
"msg_date": "Wed, 20 Jul 2022 15:11:10 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> The guc_strdup() approach really reduces the amount of code, so that's\n> what I did in v3. I'm not following why we need to return NULL on\n> failure, though -- both palloc() and guc_malloc() ERROR on failure, so\n> is it okay to keep those semantics the same?\n\nguc_malloc's behavior varies depending on elevel. It's *not*\nequivalent to palloc.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Jul 2022 18:15:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 3:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> guc_malloc's behavior varies depending on elevel. It's *not*\n> equivalent to palloc.\n\nRight, sorry -- a better way for me to ask the question:\n\nI'm currently hardcoding an elevel of ERROR on the new guc_strdup()s,\nbecause that seems to be a common case for the check hooks. If that's\nokay, is there any reason not to use palloc() semantics for\npg_clean_ascii()? (And if it's not okay, why?)\n\n--Jacob\n\n\n",
"msg_date": "Wed, 20 Jul 2022 15:29:55 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> I'm currently hardcoding an elevel of ERROR on the new guc_strdup()s,\n> because that seems to be a common case for the check hooks.\n\nReally? That's almost certainly NOT okay. As an example, if you\nhave a problem with a new value loaded from postgresql.conf during\nSIGHUP processing, throwing ERROR will cause the postmaster to exit.\n\nI wouldn't be too surprised if there are isolated cases where people\ndidn't understand what they were doing and wrote that, but that\nneeds to be fixed not emulated.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Jul 2022 18:42:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 3:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Jacob Champion <jchampion@timescale.com> writes:\n> > I'm currently hardcoding an elevel of ERROR on the new guc_strdup()s,\n> > because that seems to be a common case for the check hooks.\n>\n> Really? That's almost certainly NOT okay. As an example, if you\n> have a problem with a new value loaded from postgresql.conf during\n> SIGHUP processing, throwing ERROR will cause the postmaster to exit.\n\nv4 attempts to fix this by letting the check hooks pass\nMCXT_ALLOC_NO_OOM to pg_clean_ascii(). (It's ignored in the frontend,\nwhich just mallocs.)\n\n> I wouldn't be too surprised if there are isolated cases where people\n> didn't understand what they were doing and wrote that, but that\n> needs to be fixed not emulated.\n\nI might be missing something, but in guc.c at least it appears to be\nthe rule and not the exception.\n\nThanks,\n--Jacob",
"msg_date": "Thu, 21 Jul 2022 16:29:35 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 4:29 PM Jacob Champion <jchampion@timescale.com> wrote:\n> v4 attempts to fix this by letting the check hooks pass\n> MCXT_ALLOC_NO_OOM to pg_clean_ascii(). (It's ignored in the frontend,\n> which just mallocs.)\n\nPing -- should I add an open item somewhere so this isn't lost?\n\n--Jacob\n\n\n",
"msg_date": "Thu, 28 Jul 2022 09:19:56 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 9:19 AM Jacob Champion <jchampion@timescale.com> wrote:\n> On Thu, Jul 21, 2022 at 4:29 PM Jacob Champion <jchampion@timescale.com> wrote:\n> > v4 attempts to fix this by letting the check hooks pass\n> > MCXT_ALLOC_NO_OOM to pg_clean_ascii(). (It's ignored in the frontend,\n> > which just mallocs.)\n>\n> Ping -- should I add an open item somewhere so this isn't lost?\n\nTrying again. Peter, is this approach acceptable? Should I try something else?\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Thu, 8 Sep 2022 15:32:35 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On 09.09.22 00:32, Jacob Champion wrote:\n> On Thu, Jul 28, 2022 at 9:19 AM Jacob Champion <jchampion@timescale.com> wrote:\n>> On Thu, Jul 21, 2022 at 4:29 PM Jacob Champion <jchampion@timescale.com> wrote:\n>>> v4 attempts to fix this by letting the check hooks pass\n>>> MCXT_ALLOC_NO_OOM to pg_clean_ascii(). (It's ignored in the frontend,\n>>> which just mallocs.)\n>>\n>> Ping -- should I add an open item somewhere so this isn't lost?\n> \n> Trying again. Peter, is this approach acceptable? Should I try something else?\n\nThis looks fine to me. Committed.\n\n\n",
"msg_date": "Tue, 13 Sep 2022 16:11:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Tue, Sep 13, 2022 at 7:11 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> This looks fine to me. Committed.\n\nThanks!\n\n--Jacob\n\n\n",
"msg_date": "Tue, 13 Sep 2022 09:52:04 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "Hi,\n\nOn Tue, Sep 13, 2022 at 11:11 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 09.09.22 00:32, Jacob Champion wrote:\n> > On Thu, Jul 28, 2022 at 9:19 AM Jacob Champion <jchampion@timescale.com> wrote:\n> >> On Thu, Jul 21, 2022 at 4:29 PM Jacob Champion <jchampion@timescale.com> wrote:\n> >>> v4 attempts to fix this by letting the check hooks pass\n> >>> MCXT_ALLOC_NO_OOM to pg_clean_ascii(). (It's ignored in the frontend,\n> >>> which just mallocs.)\n> >>\n> >> Ping -- should I add an open item somewhere so this isn't lost?\n> >\n> > Trying again. Peter, is this approach acceptable? Should I try something else?\n>\n> This looks fine to me. Committed.\n\nWhile looking at the recent changes for check_cluster_name() I found\nthis thread. Regarding the following change made by the commit\n45b1a67a0fc, there is possibly small memory leak:\n\n static bool\n check_cluster_name(char **newval, void **extra, GucSource source)\n {\n+ char *clean;\n+\n /* Only allow clean ASCII chars in the cluster name */\n- pg_clean_ascii(*newval);\n+ clean = pg_clean_ascii(*newval, MCXT_ALLOC_NO_OOM);\n+ if (!clean)\n+ return false;\n+\n+ clean = guc_strdup(WARNING, clean);\n+ if (!clean)\n+ return false;\n\n+ *newval = clean;\n return true;\n }\n\npg_clean_ascii() does palloc_extended() to allocate memory in\nPostmaster context for the new characters and the clean is then\nreplaced with the new memory allocated by guc_strdup(). No-one\nreferences the memory allocated by pg_clean_ascii() and it lasts for\npostmaster lifetime. Valgrind memcheck also shows:\n\n 1 bytes in 1 blocks are definitely lost in loss record 4 of 70\n at 0xCD2A16: palloc_extended (mcxt.c:1239)\n by 0xD09437: pg_clean_ascii (string.c:99)\n by 0x7A5CF3: check_cluster_name (variable.c:1061)\n by 0xCAF7CD: call_string_check_hook (guc.c:6365)\n by 0xCAA724: InitializeOneGUCOption (guc.c:1439)\n by 0xCAA0ED: InitializeGUCOptions (guc.c:1268)\n by 0x99B245: PostmasterMain (postmaster.c:691)\n by 0x858896: main (main.c:197)\n\nI think we can fix it by the attached patch but I'd like to discuss\nwhether it's worth fixing it.\n\nRegards,\n\n--\nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 27 Sep 2022 17:51:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 1:51 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I think we can fix it by the attached patch but I'd like to discuss\n> whether it's worth fixing it.\n\nWhoops. So every time it's changed, we leak a little postmaster memory?\n\nYour patch looks good to me and I see no reason not to fix it.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 27 Sep 2022 12:44:38 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Wed, Sep 28, 2022 at 4:44 AM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Tue, Sep 27, 2022 at 1:51 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I think we can fix it by the attached patch but I'd like to discuss\n> > whether it's worth fixing it.\n>\n> Whoops. So every time it's changed, we leak a little postmaster memory?\n\nNo. Since cluster_name is PGC_POSTMASTER, we leak a little postmaster\nmemory only once when starting up. application_name is PGC_USERSET but\nsince we normally allocate memory in PortalMemoryContext we eventually\ncan free it. Since check_cluster_name and check_application_name are\nsimilar, I changed both for consistency.\n\nRegards,\n\n-- \nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Sep 2022 10:13:59 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 6:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> No. Since cluster_name is PGC_POSTMATER, we leak a little postmaster\n> memory only once when starting up. application_name is PGC_USERSET but\n> since we normally allocate memory in PortalMemoryContext we eventually\n> can free it.\n\nOh, I see; thank you for the correction. And even if someone put an\napplication_name into their postgresql.conf, and then changed it a\nbunch of times, we'd free the leaked memory from the config_cxt that's\ncreated in ProcessConfigFile().\n\nIs there a reason we don't provide a similar temporary context during\nInitializeGUCOptions()? Naively it seems like that would suppress any\nfuture one-time leaks, and maybe cut down on some Valgrind noise. Then\nagain, maybe there's just not that much demand for pallocs during GUC\nhooks.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Wed, 28 Sep 2022 09:43:11 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Thu, Sep 29, 2022 at 1:43 AM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Tue, Sep 27, 2022 at 6:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > No. Since cluster_name is PGC_POSTMATER, we leak a little postmaster\n> > memory only once when starting up. application_name is PGC_USERSET but\n> > since we normally allocate memory in PortalMemoryContext we eventually\n> > can free it.\n>\n> Oh, I see; thank you for the correction. And even if someone put an\n> application_name into their postgresql.conf, and then changed it a\n> bunch of times, we'd free the leaked memory from the config_cxt that's\n> created in ProcessConfigFile().\n\nRight.\n\n>\n> Is there a reason we don't provide a similar temporary context during\n> InitializeGUCOptions()? Naively it seems like that would suppress any\n> future one-time leaks, and maybe cut down on some Valgrind noise. Then\n> again, maybe there's just not that much demand for pallocs during GUC\n> hooks.\n\nWhile this seems a future-proof idea, I wonder if it might be overkill\nsince we don't need to worry about accumulation of leaked memory in\nthis case. Given that only check_cluster_name is the case where we\nfound a small memory leak, I think it's adequate to fix it.\n\nFixing this issue suppresses the valgrind's complaint but since the\nboot value of cluster_name is \"\" the memory leak we can avoid is only\n1 byte.\n\nRegards,\n\n--\nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Sep 2022 13:52:47 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On 29.09.22 06:52, Masahiko Sawada wrote:\n> While this seems a future-proof idea, I wonder if it might be overkill\n> since we don't need to worry about accumulation of leaked memory in\n> this case. Given that only check_cluter_name is the case where we\n> found a small memory leak, I think it's adequate to fix it.\n> \n> Fixing this issue suppresses the valgrind's complaint but since the\n> boot value of cluster_name is \"\" the memory leak we can avoid is only\n> 1 byte.\n\nI have committed this. I think it's better to keep the code locally \nrobust and not to have to rely on complex analysis of how GUC memory \nmanagement works.\n\n\n",
"msg_date": "Sat, 1 Oct 2022 12:52:58 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
},
{
"msg_contents": "On Sat, Oct 1, 2022 at 7:53 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 29.09.22 06:52, Masahiko Sawada wrote:\n> > While this seems a future-proof idea, I wonder if it might be overkill\n> > since we don't need to worry about accumulation of leaked memory in\n> > this case. Given that only check_cluter_name is the case where we\n> > found a small memory leak, I think it's adequate to fix it.\n> >\n> > Fixing this issue suppresses the valgrind's complaint but since the\n> > boot value of cluster_name is \"\" the memory leak we can avoid is only\n> > 1 byte.\n>\n> I have committed this. I think it's better to keep the code locally\n> robust and not to have to rely on complex analysis of how GUC memory\n> management works.\n\nThanks! Agreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 3 Oct 2022 11:59:13 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Log details for client certificate failures"
}
]