[ { "msg_contents": "On Tue, 11 Feb 2020 at 09:58, Andres Freund <andres@anarazel.de> wrote:\n> Isn't that basically a problem of the past by now? Partially due to\n> changed laws (e.g. France, which used to be a problematic case), but\n> also because it's basically futile to have import restrictions on\n> cryptography by now. Just about every larger project contains\n> significant amounts of cryptographic code and it's entirely impractical\n> to operate anything interfacing with network without some form of\n> transport encryption. And just about all open source distribution\n> mechanism have stopped separating out crypto code a long time ago.\n\nAustralia passed some stunningly backwards crypto laws only quite recently.\nThe Defense Trade Control Act (DCTA) imposes restrictions not only on\nexporting crypto software, but even on teaching about cryptography without\na permit. While supposedly restricted to military items and software, it's\nrather broad and unclear how that is determined. It's one of those \"written\nbroadly, applied selectively, trust us to be nice\" laws, because they're\nNEVER abused, right? See\nhttps://www.defence.gov.au/ExportControls/Cryptography.asp .\n\nMore recently we passed another idiotic \"did you even bother to listen at\nall to the people who explained this to you\" law called the\nTelecommunications (Assistance and Access) Act. This allows the Government\nto order companies/organisations to permit \"lawful access\" to encrypted\ncommunication, including end-to-end encrypted communications. It doesn't\nlegislatively order the creation of backdoors, it just legislates that\ncompanies must be able to add them on demand, so ... um, it legislates\nbackdoors. The law was drafted quickly, with little consultation, and\nrammed through Parliament during Christmas with the usual \"but the\nTerrorists\" handwaving. 
(Nevermind that real world terrorist organisations\nare communicating mainly through videogames chats and other innocuous\nplaces, not relying on strong crypto.) The law is already being abused to\nattack journalists. It has some nasty provisions about what Australia may\norder employees of a company to do as well, but thankfully the politicians\nwho drafted those provisions did not appear to understand things like\nrevision control or code review, so their real world threat is minimal.\n\nMy point? In practice, much of what we do with crypto is subject to a\nvariety of legal questions in many legal jurisdictions. Not much is\noutright illegal in most places, but it's definitely complicated. I do not\nparticipate in anything I know to be illegal or reasonably suspect to be\nillegal - but with the kind of incredibly broad laws we have now on the\nbooks in so many places, talking about the Caesar cipher / rot13 could be a\nviolation of someone's crypto law somewhere if you get the wrong judge and\nthe wrong situation.\n\nThe main change has been that it got simpler in the US, so enough\ndevelopers stopped caring. The US's Dep't of Commerce export restrictions\nwere weakened and the set of countries they applied to were narrowed,\nallowing US companies and citizens the ability to participate in projects\ncontaining non-crippled crypto.\n\nThere are still plenty of places where any sort of non-backdoored crypto is\nentirely illegal, we just say \"that's your problem\" to people in those\nplaces.\n\nI wholly support this approach. Pretty much everything is illegal\nsomewhere. Patents are pain enough already.\n\n(Apologies for thread-breaking reply, this is not from my\nusually-subscribed account. I do not speak in any way for my employer on\nthis matter.)\n\n-- \nCraig Ringer", "msg_date": "Wed, 19 Feb 2020 09:35:41 +0800", "msg_from": "Craig Ringer <ringerc@ringerc.id.au>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" } ]
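(Editor's aside: the rot13 named above as the trivial end of "crypto" really is a one-liner. The helper below is purely illustrative and appears in neither thread; the `rot13` function name is our own.)

```shell
# rot13: rotate each letter 13 places; applying it twice restores the input.
# Illustrative only -- rot13 is an obfuscation, not encryption.
rot13() { tr 'A-Za-z' 'N-ZA-Mn-za-m'; }

echo 'Hello, world' | rot13   # prints: Uryyb, jbeyq
echo 'Uryyb, jbeyq' | rot13   # prints: Hello, world
```

Because the transform is its own inverse, piping any text through `rot13` twice yields the original text unchanged.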
[ { "msg_contents": "Hello.\n\nI saw a failure of vcregress check with the following message several\ntimes, on a machine under a heavy load and maybe with realtime virus\nscanning.\n\n> pg_regress: could not create directory \".../testtablespace\": Permission denied.\n\nI found that pg_regress repeats the sequence\nrmtree(tablespace)->make_directory(tablespace) twice under\ninitialize_environment. So it should be the DELETE_PENDING problem. It is\nbecause the code is in convert_sourcefiles_in, which is called\nsuccessively twice in convert_sourcefiles.\n\nBut in the first place it comes from [1] and the comment says:\n\n> * XXX it would be better if pg_regress.c had nothing at all to do with\n> * testtablespace, and this were handled by a .BAT file or similar on\n> * Windows. See pgsql-hackers discussion of 2008-01-18.\n\nIs there any reason not to do that in vcregress.pl? I think the\ncommands other than 'check' don't need this.\n\n[1] https://www.postgresql.org/message-id/11718.1200684807%40sss.pgh.pa.us\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 19 Feb 2020 14:25:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "pg_regress cleans up tablespace twice." }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> But in the first place it comes from [1] and the comment says:\n\n>> * XXX it would be better if pg_regress.c had nothing at all to do with\n>> * testtablespace, and this were handled by a .BAT file or similar on\n>> * Windows. See pgsql-hackers discussion of 2008-01-18.\n\n> Is there any reason not to do that in vcregress.pl? I think the\n> commands other than 'check' don't need this.\n\nI think the existing coding dates from before we had a Perl driver for\nthis, or else we had it but there were other less-functional ways to\nreplace \"make check\" on Windows. 
+1 for taking the code out of\npg_regress.c --- but I'm not in a position to say whether the other\npart of your patch is sufficient.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Feb 2020 16:06:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." }, { "msg_contents": "On Wed, Feb 19, 2020 at 04:06:33PM -0500, Tom Lane wrote:\n> I think the existing coding dates from before we had a Perl driver for\n> this, or else we had it but there were other less-functional ways to\n> replace \"make check\" on Windows. +1 for taking the code out of\n> pg_regress.c --- but I'm not in a position to say whether the other\n> part of your patch is sufficient.\n\nRemoving this code from pg_regress.c also makes sense to me. Now, the\npatch breaks \"vcregress installcheck\" as this is missing to patch\ninstallcheck_internal() for the tablespace path creation. I would\nalso recommend using a full path for the directory location to avoid\nany potential issues if this code is refactored or moved around, the\npatch now relying on the current path used.\n--\nMichael", "msg_date": "Thu, 20 Feb 2020 14:23:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." }, { "msg_contents": "At Thu, 20 Feb 2020 14:23:22 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Feb 19, 2020 at 04:06:33PM -0500, Tom Lane wrote:\n> > I think the existing coding dates from before we had a Perl driver for\n> > this, or else we had it but there were other less-functional ways to\n> > replace \"make check\" on Windows. +1 for taking the code out of\n> > pg_regress.c --- but I'm not in a position to say whether the other\n> > part of your patch is sufficient.\n> \n> Removing this code from pg_regress.c also makes sense to me. 
Now, the\n> patch breaks \"vcregress installcheck\" as this is missing to patch\n> installcheck_internal() for the tablespace path creation. I would\n> also recommend using a full path for the directory location to avoid\n> any potential issues if this code is refactored or moved around, the\n> patch now relying on the current path used.\n\nHmm. Right. I confused database directory and tablespace\ndirectory. Tablespace directory should be provided by the test script,\neven though the database directory is preexisting in installcheck. To\navoid useless future failure, I would do that for all\nsubcommands, as regress/GNUmakefile does.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 21 Feb 2020 17:05:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_regress cleans up tablespace twice." }, { "msg_contents": "At Fri, 21 Feb 2020 17:05:07 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 20 Feb 2020 14:23:22 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > Removing this code from pg_regress.c also makes sense to me. Now, the\n> > patch breaks \"vcregress installcheck\" as this is missing to patch\n> > installcheck_internal() for the tablespace path creation. I would\n> > also recommend using a full path for the directory location to avoid\n> > any potential issues if this code is refactored or moved around, the\n> > patch now relying on the current path used.\n> \n> Hmm. Right. I confused database directory and tablespace\n> directory. Tablespace directory should be provided by the test script,\n> even though the database directory is preexisting in installcheck. To\n> avoid useless future failure, I would do that for all\n> subcommands, as regress/GNUmakefile does.\n\nTablespace directory cleanup is not done for all testing\ntargets. 
Actually it is not done for the tools under bin/ except\npg_upgrade.\n\nOn the other hand, it was done by every pg_regress run for the Windows\nbuild. So I made vcregress.pl do the same, specifically to\nalways set up the tablespace before pg_regress is executed.\n\nThere is a place where --outputdir is specified for pg_regress,\npg_upgrade/test.sh. It is explained as follows.\n\n# Send installcheck outputs to a private directory. This avoids conflict when\n# check-world runs pg_upgrade check concurrently with src/test/regress check.\n# To retrieve interesting files after a run, use pattern tmp_check/*/*.diffs.\noutputdir=\"$temp_root/regress\"\nEXTRA_REGRESS_OPTS=\"$EXTRA_REGRESS_OPTS --outputdir=$outputdir\"\n\nWhere $temp_root is $(TOP)/src/bin/pg_upgrade/tmp_check/regress.\n\nThus the current regress/GNUmakefile does break this consideration and\nthe current vcregress (of the Windows build) does the right thing in\nlight of the comment. Don't we need to avoid cleaning up\n\"$(TOP)/src/test/regress/tablespace\" in that case? (the second patch\nattached)\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 11 May 2020 17:13:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_regress cleans up tablespace twice." }, { "msg_contents": "On Mon, May 11, 2020 at 05:13:54PM +0900, Kyotaro Horiguchi wrote:\n> Tablespace directory cleanup is not done for all testing\n> targets. 
Actually it is not done for the tools under bin/ except\n> pg_upgrade.\n\nLet's first take one problem at a time, as I can see that your patch\n0002 is modifying a portion of what you added in 0001, and so let's\ntry to remove this WIN32 stuff from pg_regress.c.\n\n+sub CleanupTablespaceDirectory\n+{\n+ my $tablespace = 'testtablespace';\n+\n+ rmtree($tablespace) if (-e $tablespace);\n+ mkdir($tablespace);\n+}\nThis check should use \"-d\" and not \"-e\" as it would be true for a file\nas well. Also, in pg_regress.c, we remove the existing tablespace\ntest directory in --outputdir, which is \".\" by default but it can be a\ncustom one. Shouldn't you do the same logic in this new routine? So\nwe should have an optional argument for the output directory that\ndefaults to `pwd` if not defined, no? This means passing down the\nargument only for upgradecheck() in vcregress.pl.\n\n sub isolationcheck\n {\n \tchdir \"../isolation\";\n+\tCleanupTablespaceDirectory();\n \tcopy(\"../../../$Config/isolationtester/isolationtester.exe\",\n \t\t\"../../../$Config/pg_isolation_regress\");\n \tmy @args = (\n[...]\n \tprint \"============================================================\\n\";\n \tprint \"Checking $module\\n\";\n+\tCleanupTablespaceDirectory();\n \tmy @args = (\n \t\t\"$topdir/$Config/pg_regress/pg_regress\",\n \t\t\"--bindir=${topdir}/${Config}/psql\",\nI would put that just before the system() calls for consistency with\nthe rest.\n--\nMichael", "msg_date": "Fri, 15 May 2020 11:58:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." 
}, { "msg_contents": "On Fri, May 15, 2020 at 11:58:55AM +0900, Michael Paquier wrote:\n> Let's first take one problem at a time, as I can see that your patch\n> 0002 is modifying a portion of what you added in 0001, and so let's\n> try to remove this WIN32 stuff from pg_regress.c.\n\n(Please note that this is not v13 material.)\n--\nMichael", "msg_date": "Fri, 15 May 2020 12:01:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." }, { "msg_contents": "Thank you for looking at this!\n\nAt Fri, 15 May 2020 11:58:55 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, May 11, 2020 at 05:13:54PM +0900, Kyotaro Horiguchi wrote:\n> > Tablespace directory cleanup is not done for all testing\n> > targets. Actually it is not done for the tools under bin/ except\n> > pg_upgrade.\n> \n> Let's first take one problem at a time, as I can see that your patch\n> 0002 is modifying a portion of what you added in 0001, and so let's\n> try to remove this WIN32 stuff from pg_regress.c.\n\nYes, 0001 and 0001+0002 are alternatives. They should be merged if we\nare going to fix the pg_upgrade test. I take this as meaning we go with 0001+0002.\n\n> +sub CleanupTablespaceDirectory\n> +{\n> + my $tablespace = 'testtablespace';\n> +\n> + rmtree($tablespace) if (-e $tablespace);\n> + mkdir($tablespace);\n> +}\n> This check should use \"-d\" and not \"-e\" as it would be true for a file\n> as well. Also, in pg_regress.c, we remove the existing tablespace\n\nThat was intentional so that a file with the name doesn't stop\ntesting. Actually pg_regress is checking only for a directory in other\nplaces and it's not that bad since no-one can create a file with that\nname while running the test. On the other hand, is there any reason for\nrefraining from removing it if it weren't a directory but a file?\n\nChanged to -d in the attached.\n\n> as well. 
Also, in pg_regress.c, we remove the existing tablespace\n> test directory in --outputdir, which is \".\" by default but it can be a\n> custom one. Shouldn't you do the same logic in this new routine? So\n> we should have an optional argument for the output directory that\n> defaults to `pwd` if not defined, no? This means passing down the\n> argument only for upgradecheck() in vcregress.pl.\n\nI thought of that but didn't in the patch. I refrained from doing\nthat because the output directory is dedicatedly created at the only\nplace (pg_upgrade test) where the --outputdir is specified. (I think I\ntend to do too-much.)\n\nIt is easy in perl scripts, but rather complex for makefiles. The\nattached is using a perl one-liner to extract outputdir from\nEXTRA_REGRESS_OPTS. I don't like that but I didn't come up with better\nalternatives. On the other hand ActivePerl (with default\ninstallation) doesn't seem to know Getopt::Long::GetOptions and\nfriends. In the attached vcregress.pl parses --outputdir not using\nGetOpt::Long...\n\n> sub isolationcheck\n> {\n> \tchdir \"../isolation\";\n> +\tCleanupTablespaceDirectory();\n> \tcopy(\"../../../$Config/isolationtester/isolationtester.exe\",\n> \t\t\"../../../$Config/pg_isolation_regress\");\n> \tmy @args = (\n> [...]\n> \tprint \"============================================================\\n\";\n> \tprint \"Checking $module\\n\";\n> +\tCleanupTablespaceDirectory();\n> \tmy @args = (\n> \t\t\"$topdir/$Config/pg_regress/pg_regress\",\n> \t\t\"--bindir=${topdir}/${Config}/psql\",\n> I would put that just before the system() calls for consistency with\n> the rest.\n\nRight. That's just a mistake. Fixed along with subdircheck.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 15 May 2020 17:25:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_regress cleans up tablespace twice." 
}, { "msg_contents": "On Fri, May 15, 2020 at 05:25:08PM +0900, Kyotaro Horiguchi wrote:\n> I thought of that but didn't in the patch. I refrained from doing\n> that because the output directory is dedicatedly created at the only\n> place (pg_upgrade test) where the --outputdir is specified. (I think I\n> tend to do too-much.)\n\nSo, I have reviewed the patch aimed at removing the cleanup of\ntesttablespace done with WIN32, and finished with the attached to\nclean up things. I simplified the logic, to not have to parse\nREGRESS_OPTS for --outputdir (no need for a regex, leaving\nEXTRA_REGRESS_OPTS alone), and reworked the code so that the tablespace\ncleanup happens only where we need to: check, installcheck and\nupgradecheck. No need for that with contribcheck, modulescheck,\nplcheck and ecpgcheck.\n\nNote that after I changed my patch, this converged with a portion of\npatch 0002 you have posted here:\nhttps://www.postgresql.org/message-id/20200511.171354.514381788845037011.horikyota.ntt@gmail.com\n\nNow about 0002, I tend to agree that we should try to do something\nabout the pg_upgrade test removing and then creating an extra\ntesttablespace path that is not necessary as pg_upgrade test uses its\nown --outputdir. I have not actually seen this stuff being a problem\nin practice as the main regression test suite runs first, largely\nbefore pg_upgrade test even with parallel runs so they have a low\nprobability of conflict. I'll try to think about a couple of options,\none of them I have in mind now being that we could finally switch the\nupgrade tests to TAP and let test.sh go to the void. This is an\nindependent problem, so let's tackle both issues separately.\n--\nMichael", "msg_date": "Wed, 17 Jun 2020 16:12:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." 
}, { "msg_contents": "Thanks for working on this.\n\nAt Wed, 17 Jun 2020 16:12:07 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, May 15, 2020 at 05:25:08PM +0900, Kyotaro Horiguchi wrote:\n> > I thought of that but didn't in the patch. I refrained from doing\n> > that because the output directory is dedicatedly created at the only\n> > place (pg_upgrade test) where the --outputdir is specified. (I think I\n> > tend to do too-much.)\n> \n> So, I have reviewed the patch aimed at removing the cleanup of\n> testtablespace done with WIN32, and finished with the attached to\n> clean up things. I simplified the logic, to not have to parse\n> REGRESS_OPTS for --outputdir (no need for a regex, leaving\n> EXTRA_REGRESS_OPTS alone), and reworked the code so that the tablespace\n> cleanup happens only where we need to: check, installcheck and\n> upgradecheck. No need for that with contribcheck, modulescheck,\n> plcheck and ecpgcheck.\n\nIt looks good to me for the Windows part. I agree that vcregress.pl\ndoesn't need to parse EXTRA_REGRESS_OPTS by allowing a bit tighter\nbond between the caller sites of pg_regress and pg_regress.\n\n> Note that after I changed my patch, this converged with a portion of\n> patch 0002 you have posted here:\n> https://www.postgresql.org/message-id/20200511.171354.514381788845037011.horikyota.ntt@gmail.com\n> \n> Now about 0002, I tend to agree that we should try to do something\n> about the pg_upgrade test removing and then creating an extra\n> testtablespace path that is not necessary as pg_upgrade test uses its\n> own --outputdir. I have not actually seen this stuff being a problem\n> in practice as the main regression test suite runs first, largely\n> before pg_upgrade test even with parallel runs so they have a low\n> probability of conflict. I'll try to think about a couple of options,\n\nAgreed on probability. 
\n\n> one of them I have in mind now being that we could finally switch the\n> upgrade tests to TAP and let test.sh go to the void. This is an\n> independent problem, so let's tackle both issues separately.\n\nChanging to TAP sounds nice as a goal.\n\nAs the next step we need to amend GNUmakefile not to clean up\nthe tablespace for the four test items. Then somehow treat tablespaces at\na non-default place?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 17 Jun 2020 17:02:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_regress cleans up tablespace twice." }, { "msg_contents": "On Wed, Jun 17, 2020 at 05:02:31PM +0900, Kyotaro Horiguchi wrote:\n> It looks good to me for the Windows part. I agree that vcregress.pl\n> doesn't need to parse EXTRA_REGRESS_OPTS by allowing a bit tighter\n> bond between the caller sites of pg_regress and pg_regress.\n\nThanks, applied this part to HEAD then after more testing.\n\n> Changing to TAP sounds nice as a goal.\n\nI submitted a patch for that, but we had no clear agreement about how\nto handle major upgrades, as this involves a somewhat large\nrefactoring of PostgresNode.pm so that you register a path to the\nbinaries used by a given node.\n\n> As the next step we need to amend GNUmakefile not to clean up\n> the tablespace for the four test items. Then somehow treat tablespaces at\n> a non-default place?\n\nAh, you mean to not reset testtablespace where that's not necessary in\nthe tests by reworking the rules? Yeah, perhaps we could do something\nlike that. Not sure yet how to shape that in terms of code but if you\nhave a clear idea, please feel free to submit it. 
I think that this\nmay be better if discussed on a different thread though.\n--\nMichael", "msg_date": "Thu, 18 Jun 2020 10:42:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." }, { "msg_contents": "On Thu, Jun 18, 2020 at 1:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Thanks, applied this part to HEAD then after more testing.\n\nHmm, somehow this (well I guess it's this commit based on timing and\nthe area it touches, not sure exactly why) made cfbot's Windows build\nfail, like this:\n\n--- C:/projects/postgresql/src/test/regress/expected/tablespace.out\n2020-06-19 21:26:24.661817000 +0000\n+++ C:/projects/postgresql/src/test/regress/results/tablespace.out\n2020-06-19 21:26:28.613257500 +0000\n@@ -2,83 +2,78 @@\nCREATE TABLESPACE regress_tblspacewith LOCATION\n'C:/projects/postgresql/src/test/regress/testtablespace' WITH\n(some_nonexistent_parameter = true); -- fail\nERROR: unrecognized parameter \"some_nonexistent_parameter\"\nCREATE TABLESPACE regress_tblspacewith LOCATION\n'C:/projects/postgresql/src/test/regress/testtablespace' WITH\n(random_page_cost = 3.0); -- ok\n+ERROR: could not set permissions on directory\n\"C:/projects/postgresql/src/test/regress/testtablespace\": Permission\ndenied\n\nAny ideas? Here's what it does:\n\nhttps://github.com/macdice/cfbot/tree/master/appveyor\n\n\n", "msg_date": "Sat, 20 Jun 2020 09:33:26 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." 
}, { "msg_contents": "On Sat, Jun 20, 2020 at 09:33:26AM +1200, Thomas Munro wrote:\n> Hmm, somehow this (well I guess it's this commit based on timing and\n> the area it touches, not sure exactly why) made cfbot's Windows build\n> fail, like this:\n> \n> --- C:/projects/postgresql/src/test/regress/expected/tablespace.out\n> 2020-06-19 21:26:24.661817000 +0000\n> +++ C:/projects/postgresql/src/test/regress/results/tablespace.out\n> 2020-06-19 21:26:28.613257500 +0000\n> @@ -2,83 +2,78 @@\n> CREATE TABLESPACE regress_tblspacewith LOCATION\n> 'C:/projects/postgresql/src/test/regress/testtablespace' WITH\n> (some_nonexistent_parameter = true); -- fail\n> ERROR: unrecognized parameter \"some_nonexistent_parameter\"\n> CREATE TABLESPACE regress_tblspacewith LOCATION\n> 'C:/projects/postgresql/src/test/regress/testtablespace' WITH\n> (random_page_cost = 3.0); -- ok\n> +ERROR: could not set permissions on directory\n> \"C:/projects/postgresql/src/test/regress/testtablespace\": Permission\n> denied\n> \n> Any ideas? Here's what it does:\n> \n> https://github.com/macdice/cfbot/tree/master/appveyor\n\nI am not sure, and I am not really familiar with this stuff. Your\ncode does a simple vcregress check, and that should take care of\nautomatically cleaning up the testtablespace path. The buildfarm uses\nthis code for MSVC builds and does not complain, nor do my own VMs\ncomplain. A difference in the processing after 2b2a070d is that the\ntablespace cleanup/creation does not happen while holding a restricted \ntoken [1] anymore because it got out of pg_regress.c. Are there any\nkind of restrictions applied to the user running appveyor on Windows?\n\n[1]: https://docs.microsoft.com/en-us/windows/win32/secauthz/restricted-tokens\n--\nMichael", "msg_date": "Sat, 20 Jun 2020 11:42:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." 
}, { "msg_contents": "On Sat, Jun 20, 2020 at 2:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > +ERROR: could not set permissions on directory\n> > \"C:/projects/postgresql/src/test/regress/testtablespace\": Permission\n> > denied\n> >\n> > Any ideas? Here's what it does:\n> >\n> > https://github.com/macdice/cfbot/tree/master/appveyor\n>\n> I am not sure, and I am not really familiar with this stuff. Your\n> code does a simple vcregress check, and that should take care of\n> automatically cleaning up the testtablespace path. The buildfarm uses\n> this code for MSVC builds and does not complain, nor do my own VMs\n> complain. A difference in the processing after 2b2a070d is that the\n> tablespace cleanup/creation does not happen while holding a restricted\n> token [1] anymore because it got out of pg_regress.c. Are there any\n> kind of restrictions applied to the user running appveyor on Windows?\n\nThanks for the clue. Appveyor runs your build script as a privileged\nuser (unlike, I assume, the build farm animals), and that has caused a\nproblem with this test in the past, though I don't know the details.\nI might go and teach it to skip that test until a fix can be found.\n\n\n", "msg_date": "Sat, 20 Jun 2020 15:01:36 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." }, { "msg_contents": "On Sat, Jun 20, 2020 at 03:01:36PM +1200, Thomas Munro wrote:\n> Thanks for the clue. Appveyor runs your build script as a privileged\n> user (unlike, I assume, the build farm animals), and that has caused a\n> problem with this test in the past, though I don't know the details.\n> I might go and teach it to skip that test until a fix can be found.\n\nThanks, I was not aware of that. Is it a fix that involves your code\nor something else? How long do you think it would take to address\nthat? 
Another strategy that we could do is also a revert of 2b2a070\nfor now to allow the cfbot to go through and then register this thread\nin the CF app to allow the bot to pick it up and test it, so as there\nis more room to get a fix. The next CF is in ten days, so it would be\nannoying to reduce the automatic test coverage the cfbot provides :/\n--\nMichael", "msg_date": "Sat, 20 Jun 2020 15:46:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." }, { "msg_contents": "On Sat, Jun 20, 2020 at 6:46 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Sat, Jun 20, 2020 at 03:01:36PM +1200, Thomas Munro wrote:\n> > Thanks for the clue. Appveyor runs your build script as a privileged\n> > user (unlike, I assume, the build farm animals), and that has caused a\n> > problem with this test in the past, though I don't know the details.\n> > I might go and teach it to skip that test until a fix can be found.\n>\n> Thanks, I was not aware of that. Is it a fix that involves your code\n> or something else? How long do you think it would take to address\n> that? Another strategy that we could do is also a revert of 2b2a070\n> for now to allow the cfbot to go through and then register this thread\n> in the CF app to allow the bot to pick it up and test it, so as there\n> is more room to get a fix. 
The next CF is in ten days, so it would be\n> annoying to reduce the automatic test coverage the cfbot provides :/\n\nI'm not sure what needs to change, but in the meantime I told it to\ncomment out the offending test from the schedule files:\n\n+before_test:\n+ - 'perl -p -i.bak -e \"s/^test: tablespace/#test: tablespace/\"\nsrc/test/regress/serial_schedule'\n+ - 'perl -p -i.bak -e \"s/^test: tablespace/#test: tablespace/\"\nsrc/test/regress/parallel_schedule'\n\nNow the results are slowly turning green again.\n\n\n", "msg_date": "Sun, 21 Jun 2020 12:08:37 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." }, { "msg_contents": "On Sun, Jun 21, 2020 at 12:08:37PM +1200, Thomas Munro wrote:\n> I'm not sure what needs to change, but in the meantime I told it to\n> comment out the offending test from the schedule files:\n> \n> +before_test:\n> + - 'perl -p -i.bak -e \"s/^test: tablespace/#test: tablespace/\"\n> src/test/regress/serial_schedule'\n> + - 'perl -p -i.bak -e \"s/^test: tablespace/#test: tablespace/\"\n> src/test/regress/parallel_schedule'\n> \n> Now the results are slowly turning green again.\n\nThanks, and sorry for the trouble. What actually happened back in\n2018? I can see c2ff3c68 in the git history of the cfbot code, but it\ndoes not give much details.\n--\nMichael", "msg_date": "Sun, 21 Jun 2020 17:42:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." 
}, { "msg_contents": "On Sun, Jun 21, 2020 at 8:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Sun, Jun 21, 2020 at 12:08:37PM +1200, Thomas Munro wrote:\n> > I'm not sure what needs to change, but in the meantime I told it to\n> > comment out the offending test from the schedule files:\n> >\n> > +before_test:\n> > + - 'perl -p -i.bak -e \"s/^test: tablespace/#test: tablespace/\"\n> > src/test/regress/serial_schedule'\n> > + - 'perl -p -i.bak -e \"s/^test: tablespace/#test: tablespace/\"\n> > src/test/regress/parallel_schedule'\n> >\n> > Now the results are slowly turning green again.\n>\n> Thanks, and sorry for the trouble. What actually happened back in\n> 2018? I can see c2ff3c68 in the git history of the cfbot code, but it\n> does not give much details.\n\ncommit ce5d3424d6411f7a7228fd4463242cb382af3e0c\nAuthor: Andrew Dunstan <andrew@dunslane.net>\nDate: Sat Oct 20 09:02:36 2018 -0400\n\n Lower privilege level of programs calling regression_main\n\n On Windows this mean that the regression tests can now safely and\n successfully run as Administrator, which is useful in situations like\n Appveyor. Elsewhere it's a no-op.\n\n Backpatch to 9.5 - this is harder in earlier branches and not worth the\n trouble.\n\n Discussion:\nhttps://postgr.es/m/650b0c29-9578-8571-b1d2-550d7f89f307@2ndQuadrant.com\n\n\n", "msg_date": "Sun, 21 Jun 2020 22:38:22 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." }, { "msg_contents": "On Sun, Jun 21, 2020 at 10:38:22PM +1200, Thomas Munro wrote:\n> On Sun, Jun 21, 2020 at 8:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Thanks, and sorry for the trouble. What actually happened back in\n>> 2018? 
I can see c2ff3c68 in the git history of the cfbot code, but it\n>> does not give much details.\n> \n> commit ce5d3424d6411f7a7228fd4463242cb382af3e0c\n> Author: Andrew Dunstan <andrew@dunslane.net>\n> Date: Sat Oct 20 09:02:36 2018 -0400\n> \n> Lower privilege level of programs calling regression_main\n> \n> On Windows this mean that the regression tests can now safely and\n> successfully run as Administrator, which is useful in situations like\n> Appveyor. Elsewhere it's a no-op.\n> \n> Backpatch to 9.5 - this is harder in earlier branches and not worth the\n> trouble.\n> \n> Discussion:\n> https://postgr.es/m/650b0c29-9578-8571-b1d2-550d7f89f307@2ndQuadrant.com\n\nThanks for the reference. This also means that as much as I'd like to\nkeep the recreation of testtablespace out of pg_regress for\nconsistency, 2b2a070 has also broken a case we have claimed to support\nsince ce5d342.\n\nA bit of digging around I have found this case from a guy of Yandex,\nvisibly running our regression test suite:\nhttps://help.appveyor.com/discussions/questions/1888-running-tests-with-reduced-privileges\n\nAnd the conclusion seems like it is not really possible to do that\nwithin appveyor, using a trick with openssh to manipulate privileges\nas wanted, as referenced here:\nhttps://github.com/yandex-qatools/postgresql-embedded\n\nAt the end of the day, it looks more simple to me to just revert\n2b2a070 if we just want to keep your stuff running without extra\nworkload from your side. Extra opinions are welcome.\n--\nMichael", "msg_date": "Tue, 23 Jun 2020 10:40:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." 
}, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Jun 18, 2020 at 1:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Thanks, applied this part to HEAD then after more testing.\n\n> Hmm, somehow this (well I guess it's this commit based on timing and\n> the area it touches, not sure exactly why) made cfbot's Windows build\n> fail, like this:\n\nShould now be possible to undo whatever hack you had to use ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Jul 2020 09:35:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." }, { "msg_contents": "On Fri, Jul 10, 2020 at 09:35:56AM -0400, Tom Lane wrote:\n> Should now be possible to undo whatever hack you had to use ...\n\nYes, I have also opened an issue on github:\nhttps://github.com/macdice/cfbot/issues/11/\n--\nMichael", "msg_date": "Sat, 11 Jul 2020 10:35:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." }, { "msg_contents": "On Sat, Jul 11, 2020 at 10:35:07AM +0900, Michael Paquier wrote:\n> Yes, I have also opened an issue on github:\n> https://github.com/macdice/cfbot/issues/11/\n\nAnd Thomas has just fixed it:\nhttps://github.com/macdice/cfbot/commit/e78438444a00bc8d83863645503b2f7c1a9da016\n--\nMichael", "msg_date": "Sat, 11 Jul 2020 15:05:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_regress cleans up tablespace twice." } ]
[ { "msg_contents": "Hi hackers,\n\nMy colleague Chris Travers discovered something that looks like a bug.\nLet's say we have a table with a constraint that is declared as NO INHERIT.\n\nCREATE TABLE test (\n x INT CHECK (x > 0) NO INHERIT\n);\n\\d test\n Table \"public.test\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n x | integer | | |\nCheck constraints:\n \"test_x_check1\" CHECK (x > 0) NO INHERIT\n\nNow when we want to make a copy of the table structure into a new table\nthe `NO INHERIT` option is ignored.\n\nCREATE TABLE test2 (LIKE test INCLUDING CONSTRAINTS);\n\\d test2\n Table \"public.test2\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n x | integer | | |\nCheck constraints:\n \"test_x_check1\" CHECK (x > 0)\n\nIs this a bug or expected behaviour? Just in case I've attached a patch\nthat fixes this.\n\nRegards,\nIldar", "msg_date": "Wed, 19 Feb 2020 14:59:40 +0100", "msg_from": "Ildar Musin <ildar@adjust.com>", "msg_from_op": true, "msg_subject": "Constraint's NO INHERIT option is ignored in CREATE TABLE LIKE\n statement" }, { "msg_contents": "Ildar Musin <ildar@adjust.com> writes:\n> My colleague Chris Travers discovered something that looks like a bug.\n> Let's say we have a table with a constraint that is declared as NO INHERIT.\n> ...\n> Now when we want to make a copy of the table structure into a new table\n> the `NO INHERIT` option is ignored.\n\nHm, I agree that's a bug, since the otherwise-pretty-detailed CREATE TABLE\nLIKE documentation makes no mention of such a difference between original\nand cloned constraint.\n\nHowever, I'd be disinclined to back-patch, since it's barely possible\nsomebody out there is depending on the existing behavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Feb 2020 18:02:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Constraint's NO 
INHERIT option is ignored in CREATE TABLE LIKE\n statement" }, { "msg_contents": "On Wed, Feb 19, 2020 at 4:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ildar Musin <ildar@adjust.com> writes:\n> > My colleague Chris Travers discovered something that looks like a bug.\n> > Let's say we have a table with a constraint that is declared as NO\n> INHERIT.\n> > ...\n> > Now when we want to make a copy of the table structure into a new table\n> > the `NO INHERIT` option is ignored.\n>\n> Hm, I agree that's a bug, since the otherwise-pretty-detailed CREATE TABLE\n> LIKE documentation makes no mention of such a difference between original\n> and cloned constraint.\n>\n> However, I'd be disinclined to back-patch, since it's barely possible\n> somebody out there is depending on the existing behavior.\n>\n\nNot sure I agree with the premise that it is not supposed to be copied; is\nthere some other object type the allows NO INHERIT that isn't copied when\nCREATE TABLE LIKE is used and check constraints are the odd ones out?\n\nInheritance is what NO INHERIT is about and CREATE TABLE LIKE pointedly\ndoesn't setup an inheritance structure. The documentation seems ok since\nsaying that NO INHERIT is ignored when inheritance is not being used seems\nself-evident. 
Sure, maybe some clarity here could be had, but its not like\nthis comes up with any regularity.\n\nDavid J.", "msg_date": "Wed, 19 Feb 2020 16:20:19 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Constraint's NO INHERIT option is ignored in CREATE TABLE LIKE\n statement" }, { "msg_contents": "On Thu, Feb 20, 2020 at 8:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Ildar Musin <ildar@adjust.com> writes:\n> > My colleague Chris Travers discovered something that looks like a bug.\n> > Let's say we have a table with a constraint that is declared as NO INHERIT.\n> > ...\n> > Now when we want to make a copy of the table structure into a new table\n> > the `NO INHERIT` option is ignored.\n>\n> Hm, I agree that's a bug, since the otherwise-pretty-detailed CREATE TABLE\n> LIKE documentation makes no mention of such a difference between original\n> and cloned constraint.\n\nBy the way, partitioned tables to not allow constraints that are\nmarked NO INHERIT. For example,\n\ncreate table b (a int check (a > 0) no inherit) partition by list (a);\nERROR: cannot add NO INHERIT constraint to partitioned table \"b\"\n\nWe must ensure that partitioned tables don't accidentally end up with\none via CREATE TABLE LIKE path. I tested Ildar's patch and things\nseem fine, but it might be better to add a test. Attached updated\npatch with that taken care of.\n\nThanks,\nAmit", "msg_date": "Thu, 20 Feb 2020 11:36:12 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Constraint's NO INHERIT option is ignored in CREATE TABLE LIKE\n statement" }, { "msg_contents": "On Thu, Feb 20, 2020 at 8:20 AM David G. 
Johnston\n<david.g.johnston@gmail.com> wrote:\n> On Wed, Feb 19, 2020 at 4:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Ildar Musin <ildar@adjust.com> writes:\n>> > My colleague Chris Travers discovered something that looks like a bug.\n>> > Let's say we have a table with a constraint that is declared as NO INHERIT.\n>> > ...\n>> > Now when we want to make a copy of the table structure into a new table\n>> > the `NO INHERIT` option is ignored.\n>>\n>> Hm, I agree that's a bug, since the otherwise-pretty-detailed CREATE TABLE\n>> LIKE documentation makes no mention of such a difference between original\n>> and cloned constraint.\n>>\n>> However, I'd be disinclined to back-patch, since it's barely possible\n>> somebody out there is depending on the existing behavior.\n>\n> Not sure I agree with the premise that it is not supposed to be copied; is there some other object type the allows NO INHERIT that isn't copied when CREATE TABLE LIKE is used and check constraints are the odd ones out?\n\nSyntax currently allows only CHECK constraints to be marked NO INHERIT.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 20 Feb 2020 11:53:09 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Constraint's NO INHERIT option is ignored in CREATE TABLE LIKE\n statement" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Thu, Feb 20, 2020 at 8:20 AM David G. 
Johnston\n> <david.g.johnston@gmail.com> wrote:\n>> Not sure I agree with the premise that it is not supposed to be copied; is there some other object type the allows NO INHERIT that isn't copied when CREATE TABLE LIKE is used and check constraints are the odd ones out?\n\n> Syntax currently allows only CHECK constraints to be marked NO INHERIT.\n\nHearing no further comments, pushed, with a bit of cosmetic polishing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Mar 2020 14:55:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Constraint's NO INHERIT option is ignored in CREATE TABLE LIKE\n statement" } ]
[ { "msg_contents": "Hello,\n\nMy name is Misha Patel and I’m reaching out on behalf of the HackIllinois\nOutreach team. HackIllinois is a 36-hour collegiate Open Source hackathon\nthat takes place annually at the University of Illinois Urbana-Champaign.\nThis year, it will be from February 28th-March 1st, 2020. Our mission is to\nintroduce college students to Open Source, while giving back to the\ncommunity. We strive to create a collaborative environment in which our\nattendees can learn from and work with developers to make their own\ncontributions. In past years, we’ve had developers from prominent projects\nsuch as npm, Rust, and Apache come to mentor students from our pool of 900+\nattendees.\n\nWe’d love it if you could pass along this message to the PostgreSQL\ncommunity or any individuals you believe would be interested. We will\nprovide meals throughout the event and can reimburse for travel and lodging\nup to a certain amount depending on where in the US people are coming from.\nMore information on mentorship can be found at hackillinois.org/mentor. You\ncan also visit opensource.hackillinois.org to see what kinds of projects\nwere represented at our event last year.\n\nBest,\nMisha Patel\nHackIllinois 2020 Outreach Director", "msg_date": "Wed, 19 Feb 2020 15:13:01 -0500", "msg_from": "Misha Patel <misha.patel@hackillinois.org>", "msg_from_op": true, "msg_subject": "Open Source Hackathon Mentorship Invitation" } ]
[ { "msg_contents": "Hello Postgres Hackers -\n\nWe are having a reoccurring issue on 2 of our replicas where replication\nstops due to this message:\n\"incorrect resource manager data checksum in record at ...\"\nThis has been occurring on average once every 1 to 2 weeks during large\ndata imports (100s of GBs being written)\non one of two replicas.\nFixing the issue has been relatively straight forward: shutdown replica,\nremove the bad wal file, restart replica and\nthe good wal file is retrieved from the master.\nWe are doing streaming replication using replication slots.\nHowever twice now, the master had already removed the WAL file so the file\nhad to retrieved from the wal archive.\n\nThe WAL log directories on the master and the replicas are on ZFS file\nsystems.\nAll servers are running RHEL 7.7 (Maipo)\nPostgreSQL 10.11\nZFS v0.7.13-1\n\nThe issue seems similar to\nhttps://www.postgresql.org/message-id/CANQ55Tsoa6%3Dvk2YkeVUN7qO-2YdqJf_AMVQxqsVTYJm0qqQQuw%40mail.gmail.com\n and to https://github.com/timescale/timescaledb/issues/1443\n\nOne quirk in our ZFS setup is ZFS is not handling our RAID array, so ZFS\nsees our array as a single device.\n\nRight before the issue started we did some upgrades and altered some\npostgres configs and ZFS settings.\nWe have been slowly rolling back changes but so far the the issue continues.\n\nSome interesting data points while debugging:\nWe had lowered the ZFS recordsize from 128K to 32K and for that week the\nissue started happening every other day.\nUsing xxd and diff we compared \"good\" and \"bad\" wal files and the\ndifferences were not random bad bytes.\n\nThe bad file either had a block of zeros that were not in the good file at\nthat position or other data. Occasionally the bad data has contained\nlegible strings not in the good file at that position. 
At least one of\nthose exact strings has existed elsewhere in the files.\nHowever I am not sure if that is the case for all of them.\n\nThis made me think that maybe there was an issue w/ wal file recycling and\nZFS under heavy load, so we tried lowering\nmin_wal_size in order to \"discourage\" wal file recycling but my\nunderstanding is a low value discourages recycling but it will still\nhappen (unless setting wal_recycle in psql 12).\n\nThere is a third replica where this bug has not (yet?) surfaced.\nThis leads me to guess the bad data does not originate on the master.\nThis replica is older than the other replicas, slower CPUs, less RAM, and\nthe WAL disk array is spinning disks.\nThe OS, version of Postgres, and version of ZFS are the same as the other\nreplicas.\nThis replica is not using a replication slot.\nThis replica does not serve users so load/contention is much lower than the\nothers.\nThe other replicas often have 100% utilization of the disk array that\nhouses the (non-wal) data.\n\nAny insight into the source of this bug or how to address it?\n\nSince the master has a good copy of the WAL file, can the replica\nre-request the file from the master? 
Or from the archive?\n\nWhen using replication slots, what circumstances would cause the master to\nnot save the WAL file?\n(I can't remember if it always had the next wal file or the one after that)\n\nThanks in advance,\nAlex Malek", "msg_date": "Wed, 19 Feb 2020 16:35:53 -0500", "msg_from": "Alex Malek <magicagent@gmail.com>", "msg_from_op": true, "msg_subject": "bad wal on replica / incorrect resource manager data checksum in\n record / zfs" }, { "msg_contents": "On Thu, Feb 20, 2020 at 3:06 AM Alex Malek <magicagent@gmail.com> wrote:\n>\n>\n> Hello Postgres Hackers -\n>\n> We are having a reoccurring issue on 2 of our replicas where replication stops due to this message:\n> \"incorrect resource manager data checksum in record at ...\"\n> This has been occurring on average once every 1 to 2 weeks during large data imports (100s of GBs being written)\n> on one of two replicas.\n> Fixing the issue has been relatively straight forward: shutdown replica, remove the bad wal file, restart replica and\n> the good wal file is retrieved from the master.\n> We are doing streaming replication using replication slots.\n> However twice now, the master had already removed the WAL file so the file had to retrieved from the wal archive.\n>\n> The WAL log directories on the master and the replicas are on ZFS file systems.\n> All servers are running RHEL 7.7 (Maipo)\n> PostgreSQL 10.11\n> ZFS v0.7.13-1\n>\n> The issue seems similar to https://www.postgresql.org/message-id/CANQ55Tsoa6%3Dvk2YkeVUN7qO-2YdqJf_AMVQxqsVTYJm0qqQQuw%40mail.gmail.com and to https://github.com/timescale/timescaledb/issues/1443\n>\n> One quirk in our ZFS setup is ZFS is not handling our RAID array, so ZFS sees our array as a single device.\n>\n> Right before the issue started we did some upgrades and altered some postgres configs and ZFS settings.\n> We have been slowly rolling back changes but so far the the issue continues.\n>\n> Some interesting data points while debugging:\n> We had lowered the ZFS recordsize from 
128K to 32K and for that week the issue started happening every other day.\n> Using xxd and diff we compared \"good\" and \"bad\" wal files and the differences were not random bad bytes.\n>\n> The bad file either had a block of zeros that were not in the good file at that position or other data. Occasionally the bad data has contained legible strings not in the good file at that position. At least one of those exact strings has existed elsewhere in the files.\n> However I am not sure if that is the case for all of them.\n>\n> This made me think that maybe there was an issue w/ wal file recycling and ZFS under heavy load, so we tried lowering\n> min_wal_size in order to \"discourage\" wal file recycling but my understanding is a low value discourages recycling but it will still\n> happen (unless setting wal_recycle in psql 12).\n>\n\nWe do print a message \"recycled write-ahead log file ..\" in DEBUG2\nmode. You either want to run the server with DEBUG2 or maybe change\nthe code to make it LOG and see if that is printed. If you do that,\nyou can verify if the corrupted WAL is the same as a recycled one.\n\n> There is a third replica where this bug has not (yet?) surfaced.\n> This leads me to guess the bad data does not originate on the master.\n> This replica is older than the other replicas, slower CPUs, less RAM, and the WAL disk array is spinning disks.\n> The OS, version of Postgres, and version of ZFS are the same as the other replicas.\n> This replica is not using a replication slot.\n> This replica does not serve users so load/contention is much lower than the others.\n> The other replicas often have 100% utilization of the disk array that houses the (non-wal) data.\n>\n> Any insight into the source of this bug or how to address it?\n>\n> Since the master has a good copy of the WAL file, can the replica re-request the file from the master? 
Or from the archive?\n>\n\nI think we do check in the archive if we get the error during\nstreaming, but archive might also have the same data due to which this\nproblem happens. Have you checked that the archive WAL file, is it\ndifferent from the bad WAL? See the relevant bits of code in\nWaitForWALToBecomeAvailable especially the code near below comment:\n\n\"Failure while streaming. Most likely, we got here because streaming\nreplication was terminated, or promotion was triggered. But we also\nget here if we find an invalid record in the WAL streamed from master,\nin which case something is seriously wrong. There's little chance that\nthe problem will just go away, but PANIC is not good for availability\neither, especially in hot standby mode. So, we treat that the same as\ndisconnection, and retry from archive/pg_wal again. The WAL in the\narchive should be identical to what was streamed, so it's unlikely\nthat it helps, but one can hope...\"\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Feb 2020 16:46:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: bad wal on replica / incorrect resource manager data checksum in\n record / zfs" }, { "msg_contents": "On Thu, Feb 20, 2020, 6:16 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Feb 20, 2020 at 3:06 AM Alex Malek <magicagent@gmail.com> wrote:\n> >\n> >\n> > Hello Postgres Hackers -\n> >\n> > We are having a reoccurring issue on 2 of our replicas where replication\n> stops due to this message:\n> > \"incorrect resource manager data checksum in record at ...\"\n> > This has been occurring on average once every 1 to 2 weeks during large\n> data imports (100s of GBs being written)\n> > on one of two replicas.\n> > Fixing the issue has been relatively straight forward: shutdown replica,\n> remove the bad wal file, restart replica and\n> > the good wal file is retrieved from the 
master.\n> > We are doing streaming replication using replication slots.\n> > However twice now, the master had already removed the WAL file so the\n> file had to retrieved from the wal archive.\n> >\n> > The WAL log directories on the master and the replicas are on ZFS file\n> systems.\n> > All servers are running RHEL 7.7 (Maipo)\n> > PostgreSQL 10.11\n> > ZFS v0.7.13-1\n> >\n> > The issue seems similar to\n> https://www.postgresql.org/message-id/CANQ55Tsoa6%3Dvk2YkeVUN7qO-2YdqJf_AMVQxqsVTYJm0qqQQuw%40mail.gmail.com\n> and to https://github.com/timescale/timescaledb/issues/1443\n> >\n> > One quirk in our ZFS setup is ZFS is not handling our RAID array, so ZFS\n> sees our array as a single device.\n> >\n> > Right before the issue started we did some upgrades and altered some\n> postgres configs and ZFS settings.\n> > We have been slowly rolling back changes but so far the the issue\n> continues.\n> >\n> > Some interesting data points while debugging:\n> > We had lowered the ZFS recordsize from 128K to 32K and for that week the\n> issue started happening every other day.\n> > Using xxd and diff we compared \"good\" and \"bad\" wal files and the\n> differences were not random bad bytes.\n> >\n> > The bad file either had a block of zeros that were not in the good file\n> at that position or other data. Occasionally the bad data has contained\n> legible strings not in the good file at that position. At least one of\n> those exact strings has existed elsewhere in the files.\n> > However I am not sure if that is the case for all of them.\n> >\n> > This made me think that maybe there was an issue w/ wal file recycling\n> and ZFS under heavy load, so we tried lowering\n> > min_wal_size in order to \"discourage\" wal file recycling but my\n> understanding is a low value discourages recycling but it will still\n> > happen (unless setting wal_recycle in psql 12).\n> >\n>\n> We do print a message \"recycled write-ahead log file ..\" in DEBUG2\n> mode. 
You either want to run the server with DEBUG2 or maybe change\n> the code to make it LOG and see if that is printed. If you do that,\n> you can verify if the corrupted WAL is the same as a recycled one.\n>\n\nAre you suggesting having the master, the replicas or all in debug mode?\nHow much extra logging would this generate?\nA replica typically consumes over 1 TB of WAL files before a bad wal file\nis encountered.\n\n\n\n> > There is a third replica where this bug has not (yet?) surfaced.\n> > This leads me to guess the bad data does not originate on the master.\n> > This replica is older than the other replicas, slower CPUs, less RAM,\n> and the WAL disk array is spinning disks.\n> > The OS, version of Postgres, and version of ZFS are the same as the\n> other replicas.\n> > This replica is not using a replication slot.\n> > This replica does not serve users so load/contention is much lower than\n> the others.\n> > The other replicas often have 100% utilization of the disk array that\n> houses the (non-wal) data.\n> >\n> > Any insight into the source of this bug or how to address it?\n> >\n> > Since the master has a good copy of the WAL file, can the replica\n> re-request the file from the master? Or from the archive?\n> >\n>\n> I think we do check in the archive if we get the error during\n> streaming, but archive might also have the same data due to which this\n> problem happens. Have you checked that the archive WAL file, is it\n> different from the bad WAL? See the\n\n\nTypically the master, the archive and the other replicas all have a good\ncopy of the WAL file.\n\nrelevant bits of code in\n> WaitForWALToBecomeAvailable especially the code near below comment:\n>\n> \"Failure while streaming. Most likely, we got here because streaming\n> replication was terminated, or promotion was triggered. But we also\n> get here if we find an invalid record in the WAL streamed from master,\n> in which case something is seriously wrong. 
There's little chance that\n> the problem will just go away, but PANIC is not good for availability\n> either, especially in hot standby mode. So, we treat that the same as\n> disconnection, and retry from archive/pg_wal again. The WAL in the\n> archive should be identical to what was streamed, so it's unlikely\n> that it helps, but one can hope...\"\n>\n>\nThank you for this comment!\nThis made me realize that on the replicas I had mentioned we had removed\nthe restore_command.\nThe replica we thought was not having the issue, was actually also\ngetting/producing bad WAL files but was eventually recovering by getting a\ngood WAL file from the archive b/c it had the restore_command defined.\n", "msg_date": "Thu, 20 Feb 2020 12:01:45 -0500", "msg_from": "Alex Malek <magicagent@gmail.com>", "msg_from_op": true, "msg_subject": "Fwd: bad wal on replica / incorrect resource manager data checksum in\n record / zfs" }, { "msg_contents": "On Thu, Feb 20, 2020 at 7:40 PM Alex Malek <amalek@gmail.com> wrote:\n>\n> On Thu, Feb 20, 2020, 6:16 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Thu, Feb 20, 2020 at 3:06 AM Alex Malek <magicagent@gmail.com> wrote:\n>> >\n>> > Some interesting data points while debugging:\n>> > We had lowered the ZFS recordsize from 128K to 32K and for that week the issue started happening every other day.\n>> > Using xxd and diff we compared \"good\" and \"bad\" wal files and the differences were not random bad bytes.\n>> >\n>> > The bad file either had a block of zeros that were not in the good file at that position or other data. Occasionally the bad data has contained\n>> > legible strings not in the good file at that position. At least one of those exact strings has existed elsewhere in the files.\n>> > However I am not sure if that is the case for all of them.\n>> >\n>> > This made me think that maybe there was an issue w/ wal file recycling and ZFS under heavy load, so we tried lowering\n>> > min_wal_size in order to \"discourage\" wal file recycling but my understanding is a low value discourages recycling but it will still\n>> > happen (unless setting wal_recycle in psql 12).\n>> >\n>>\n>> We do print a message \"recycled write-ahead log file ..\" in DEBUG2\n>> mode. 
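(A concrete sketch of the check Amit describes, once the recycle messages are visible in the server log at DEBUG2 or LOG level -- the log file name and the segment name below are made-up stand-ins:)

```shell
# Dummy log line standing in for a real postgres server log entry
# (the real message only appears at DEBUG2 unless the elevel is raised):
echo 'LOG:  recycled write-ahead log file "000000010003900200000057"' > server.log

# Pull out recycle events so the recycled segment names can be matched
# against the segment named in the checksum error:
grep 'recycled write-ahead log file' server.log
```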
You either want to run the server with DEBUG2 or maybe change\n>> the code to make it LOG and see if that is printed. If you do that,\n>> you can verify if the corrupted WAL is the same as a recycled one.\n>\n>\n> Are you suggesting having the master, the replicas or all in debug mode?\n>\n\nThe system(s) where you are expecting that wal recycling would have\ncreated some problem.\n\n> How much extra logging would this generate?\n>\n\nTo some extent, it depends on your workload. It will certainly\ngenerate much more than when you have not enabled the debug level.\nBut, what other option you have to identify the root cause or at least\nfind out whether your suspicion is right or not. As mentioned\nearlier, if you have the flexibility of changing code to find out the\nreason, then you can change the code (at the place I told yesterday)\nto make the level as LOG in which case you can set the\nlog_min_messages to LOG and it will generate much fewer logs on the\nserver.\n\n> A replica typically consumes over 1 TB of WAL files before a bad wal file is encountered.\n>\n>\n>> >\n>> > Any insight into the source of this bug or how to address it?\n>> >\n>> > Since the master has a good copy of the WAL file, can the replica re-request the file from the master? Or from the archive?\n>> >\n>>\n>> I think we do check in the archive if we get the error during\n>> streaming, but archive might also have the same data due to which this\n>> problem happens. Have you checked that the archive WAL file, is it\n>> different from the bad WAL? See the\n>\n>\n> Typically the master, the archive and the other replicas all have a good copy of the WAL file.\n>\n>> relevant bits of code in\n>> WaitForWALToBecomeAvailable especially the code near below comment:\n>>\n>> \"Failure while streaming. Most likely, we got here because streaming\n>> replication was terminated, or promotion was triggered. 
But we also\n>> get here if we find an invalid record in the WAL streamed from master,\n>> in which case something is seriously wrong. There's little chance that\n>> the problem will just go away, but PANIC is not good for availability\n>> either, especially in hot standby mode. So, we treat that the same as\n>> disconnection, and retry from archive/pg_wal again. The WAL in the\n>> archive should be identical to what was streamed, so it's unlikely\n>> that it helps, but one can hope...\"\n>>\n>\n> Thank you for this comment!\n> This made me realize that on the replicas I had mentioned we had removed the restore_command.\n> The replica we thought was not having the issue, was actually also getting/producing bad WAL files but was eventually recovering by getting a good WAL file from the archive b/c it had the restore_command defined.\n>\n\nGood to know that there is some way to recover from the situation.\nBut, I think it is better to find the root cause of what led to bad\nWAL files so that you can fix it if possible.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 Feb 2020 08:52:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: bad wal on replica / incorrect resource manager data checksum in\n record / zfs" }, { "msg_contents": "On Thu, Feb 20, 2020 at 12:01 PM Alex Malek <magicagent@gmail.com> wrote:\n\n> On Thu, Feb 20, 2020, 6:16 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n>> On Thu, Feb 20, 2020 at 3:06 AM Alex Malek <magicagent@gmail.com> wrote:\n>> >\n>> >\n>> > Hello Postgres Hackers -\n>> >\n>> > We are having a reoccurring issue on 2 of our replicas where\n>> replication stops due to this message:\n>> > \"incorrect resource manager data checksum in record at ...\"\n>> > This has been occurring on average once every 1 to 2 weeks during large\n>> data imports (100s of GBs being written)\n>> > on one of two replicas.\n>> > Fixing the 
issue has been relatively straight forward: shutdown\n>> replica, remove the bad wal file, restart replica and\n>> > the good wal file is retrieved from the master.\n>> > We are doing streaming replication using replication slots.\n>> > However twice now, the master had already removed the WAL file so the\n>> file had to retrieved from the wal archive.\n>> >\n>> > The WAL log directories on the master and the replicas are on ZFS file\n>> systems.\n>> > All servers are running RHEL 7.7 (Maipo)\n>> > PostgreSQL 10.11\n>> > ZFS v0.7.13-1\n>> >\n>> > The issue seems similar to\n>> https://www.postgresql.org/message-id/CANQ55Tsoa6%3Dvk2YkeVUN7qO-2YdqJf_AMVQxqsVTYJm0qqQQuw%40mail.gmail.com\n>> and to https://github.com/timescale/timescaledb/issues/1443\n>> >\n>> > One quirk in our ZFS setup is ZFS is not handling our RAID array, so\n>> ZFS sees our array as a single device.\n>> >\n>> > Right before the issue started we did some upgrades and altered some\n>> postgres configs and ZFS settings.\n>> > We have been slowly rolling back changes but so far the the issue\n>> continues.\n>> >\n>> > Some interesting data points while debugging:\n>> > We had lowered the ZFS recordsize from 128K to 32K and for that week\n>> the issue started happening every other day.\n>> > Using xxd and diff we compared \"good\" and \"bad\" wal files and the\n>> differences were not random bad bytes.\n>> >\n>> > The bad file either had a block of zeros that were not in the good file\n>> at that position or other data. Occasionally the bad data has contained\n>> legible strings not in the good file at that position. 
At least one of\n>> those exact strings has existed elsewhere in the files.\n>> > However I am not sure if that is the case for all of them.\n>> >\n>> > This made me think that maybe there was an issue w/ wal file recycling\n>> and ZFS under heavy load, so we tried lowering\n>> > min_wal_size in order to \"discourage\" wal file recycling but my\n>> understanding is a low value discourages recycling but it will still\n>> > happen (unless setting wal_recycle in psql 12).\n>> >\n>>\n>> We do print a message \"recycled write-ahead log file ..\" in DEBUG2\n>> mode. You either want to run the server with DEBUG2 or maybe change\n>> the code to make it LOG and see if that is printed. If you do that,\n>> you can verify if the corrupted WAL is the same as a recycled one.\n>>\n>\n> Are you suggesting having the master, the replicas or all in debug mode?\n> How much extra logging would this generate?\n> A replica typically consumes over 1 TB of WAL files before a bad wal file\n> is encountered.\n>\n>\n>\n>> > There is a third replica where this bug has not (yet?) surfaced.\n>> > This leads me to guess the bad data does not originate on the master.\n>> > This replica is older than the other replicas, slower CPUs, less RAM,\n>> and the WAL disk array is spinning disks.\n>> > The OS, version of Postgres, and version of ZFS are the same as the\n>> other replicas.\n>> > This replica is not using a replication slot.\n>> > This replica does not serve users so load/contention is much lower than\n>> the others.\n>> > The other replicas often have 100% utilization of the disk array that\n>> houses the (non-wal) data.\n>> >\n>> > Any insight into the source of this bug or how to address it?\n>> >\n>> > Since the master has a good copy of the WAL file, can the replica\n>> re-request the file from the master? 
Or from the archive?\n>> >\n>>\n>> I think we do check in the archive if we get the error during\n>> streaming, but archive might also have the same data due to which this\n>> problem happens. Have you checked that the archive WAL file, is it\n>> different from the bad WAL? See the\n>\n>\n> Typically the master, the archive and the other replicas all have a good\n> copy of the WAL file.\n>\n> relevant bits of code in\n>> WaitForWALToBecomeAvailable especially the code near below comment:\n>>\n>> \"Failure while streaming. Most likely, we got here because streaming\n>> replication was terminated, or promotion was triggered. But we also\n>> get here if we find an invalid record in the WAL streamed from master,\n>> in which case something is seriously wrong. There's little chance that\n>> the problem will just go away, but PANIC is not good for availability\n>> either, especially in hot standby mode. So, we treat that the same as\n>> disconnection, and retry from archive/pg_wal again. The WAL in the\n>> archive should be identical to what was streamed, so it's unlikely\n>> that it helps, but one can hope...\"\n>>\n>>\n> Thank you for this comment!\n> This made me realize that on the replicas I had mentioned we had removed\n> the restore_command.\n> The replica we thought was not having the issue, was actually also\n> getting/producing bad WAL files but was eventually recovering by getting a\n> good WAL file from the archive b/c it had the restore_command defined.\n>\n>\n\nSo ignoring what is causing the underlying issue, what would be involved in\nadding the ability of the replica to try to re-request the WAL file first\nfrom the master? 
It seems that would make replication more resilient and\naddress similar issues such as\nhttps://www.postgresql.org/message-id/CAPv0rXGZtFr2u5o3g70OMoH+WQYhmwq1aGsmL+PQHMjFf71Dkw@mail.gmail.com\nthat do not involve ZFS at all.\n\nThanks.\nAlex\n", "msg_date": "Wed, 26 Feb 2020 10:18:30 -0500", "msg_from": "Alex Malek <magicagent@gmail.com>", "msg_from_op": true, "msg_subject": "Re: bad wal on replica / incorrect resource manager data checksum in\n record / zfs" }, { "msg_contents": "On Wed, Feb 19, 2020 at 4:35 PM Alex Malek <magicagent@gmail.com> wrote:\n\n>\n> Hello Postgres Hackers -\n>\n> We are having a reoccurring issue on 2 of our replicas where replication\n> stops due to this message:\n> \"incorrect resource manager data checksum in record at ...\"\n> This has been occurring on average once every 1 to 2 weeks during large\n> data imports (100s of GBs being written)\n> on one of two replicas.\n> Fixing the issue has been relatively straight forward: shutdown replica,\n> remove the bad wal file, restart replica and\n> the good wal file is retrieved from the master.\n> We are doing streaming replication using replication slots.\n> However twice now, the master had already removed the WAL file so the file\n> had to retrieved from the wal archive.\n>\n> The WAL log directories on the master and the replicas are on ZFS file\n> systems.\n> All servers are running 
RHEL 7.7 (Maipo)\n> PostgreSQL 10.11\n> ZFS v0.7.13-1\n>\n> The issue seems similar to\n> https://www.postgresql.org/message-id/CANQ55Tsoa6%3Dvk2YkeVUN7qO-2YdqJf_AMVQxqsVTYJm0qqQQuw%40mail.gmail.com\n> and to https://github.com/timescale/timescaledb/issues/1443\n>\n> One quirk in our ZFS setup is ZFS is not handling our RAID array, so ZFS\n> sees our array as a single device.\n> ....\n> <snip>\n>\n\n\nAn update in case someone else encounters the same issue.\n\nAbout 5 weeks ago, on the master database server, we turned off ZFS\ncompression for the volume where the WAL log resides.\nThe error has not occurred on any replica since.\n\nBest,\nAlex\n", "msg_date": "Thu, 2 Apr 2020 13:44:57 -0400", "msg_from": "Alex Malek <magicagent@gmail.com>", "msg_from_op": true, "msg_subject": "Re: bad wal on replica / incorrect resource manager data checksum in\n record / zfs" }, { "msg_contents": "Hi,\n\nOn 2020-02-19 16:35:53 -0500, Alex Malek wrote:\n> We are having a reoccurring issue on 2 of our replicas where replication\n> stops due to this message:\n> \"incorrect resource manager data checksum in record at ...\"\n\nCould you show the *exact* log output please? Because this could\ntemporarily occur without signalling anything bad, if e.g. the\nreplication connection goes down.\n\n\n> Right before the issue started we did some upgrades and altered some\n> postgres configs and ZFS settings.\n> We have been slowly rolling back changes but so far the the issue continues.\n> \n> Some interesting data points while debugging:\n> We had lowered the ZFS recordsize from 128K to 32K and for that week the\n> issue started happening every other day.\n> Using xxd and diff we compared \"good\" and \"bad\" wal files and the\n> differences were not random bad bytes.\n> \n> The bad file either had a block of zeros that were not in the good file at\n> that position or other data.  Occasionally the bad data has contained\n> legible strings not in the good file at that position. 
At least one of\n> those exact strings has existed elsewhere in the files.\n> However I am not sure if that is the case for all of them.\n> \n> This made me think that maybe there was an issue w/ wal file recycling and\n> ZFS under heavy load, so we tried lowering\n> min_wal_size in order to \"discourage\" wal file recycling but my\n> understanding is a low value discourages recycling but it will still\n> happen (unless setting wal_recycle in psql 12).\n\nThis sounds a lot more like a broken filesystem than anything on the PG\nlevel.\n\n\n> When using replication slots, what circumstances would cause the master to\n> not save the WAL file?\n\nWhat do you mean by \"save the WAL file\"?\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Thu, 2 Apr 2020 11:10:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: bad wal on replica / incorrect resource manager data checksum in\n record / zfs" }, { "msg_contents": "On Thu, Apr 2, 2020 at 2:10 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-02-19 16:35:53 -0500, Alex Malek wrote:\n> > We are having a reoccurring issue on 2 of our replicas where replication\n> > stops due to this message:\n> > \"incorrect resource manager data checksum in record at ...\"\n>\n> Could you show the *exact* log output please? Because this could\n> temporarily occur without signalling anything bad, if e.g. the\n> replication connection goes down.\n>\n\nFeb 23 00:02:02 wrds-pgdata10-2-w postgres[68329]: [12491-1] 5e4aac44.10ae9\n(@) LOG: incorrect resource manager data checksum in record at\n39002/57AC0338\n\nWhen it occurred replication stopped. 
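(Aside, for anyone who wants to reproduce the xxd/diff comparison described earlier in the thread -- it was roughly the following; the files below are made-up stand-ins for the real WAL segments, the "bad" one from the standby's pg_wal/ and the "good" copy of the same segment from the master:)

```shell
# Dummy stand-ins for the real segment files (illustrative only):
printf 'EXPECTED WAL CONTENT HERE' > good_wal_segment
printf 'EXPECTED WAL\000\000\000\000HERE' > bad_wal_segment

# Hex-dump both and diff; with the real segments the mismatches showed up
# as blocks of zeros, or legible strings that belonged elsewhere in the file.
xxd good_wal_segment > good.hex
xxd bad_wal_segment > bad.hex
diff good.hex bad.hex || true   # diff's non-zero exit here just means "they differ"
```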
The only way to resume replication\nwas to stop server and remove bad WAL file.\n\n\n>\n>\n> > Right before the issue started we did some upgrades and altered some\n> > postgres configs and ZFS settings.\n> > We have been slowly rolling back changes but so far the the issue\n> continues.\n> >\n> > Some interesting data points while debugging:\n> > We had lowered the ZFS recordsize from 128K to 32K and for that week the\n> > issue started happening every other day.\n> > Using xxd and diff we compared \"good\" and \"bad\" wal files and the\n> > differences were not random bad bytes.\n> >\n> > The bad file either had a block of zeros that were not in the good file\n> at\n> > that position or other data. Occasionally the bad data has contained\n> > legible strings not in the good file at that position. At least one of\n> > those exact strings has existed elsewhere in the files.\n> > However I am not sure if that is the case for all of them.\n> >\n> > This made me think that maybe there was an issue w/ wal file recycling\n> and\n> > ZFS under heavy load, so we tried lowering\n> > min_wal_size in order to \"discourage\" wal file recycling but my\n> > understanding is a low value discourages recycling but it will still\n> > happen (unless setting wal_recycle in psql 12).\n>\n> This sounds a lot more like a broken filesystem than anythingon the PG\n> level.\n>\n\nProbably. 
> Could you show the *exact* log output please? Because this could\n> temporarily occur without signalling anything bad, if e.g. the\n> replication connection goes down.\n\nFeb 23 00:02:02 wrds-pgdata10-2-w postgres[68329]: [12491-1] 5e4aac44.10ae9 (@) LOG:  incorrect resource manager data checksum in record at 39002/57AC0338\n\nWhen it occurred, replication stopped. The only way to resume replication\nwas to stop the server and remove the bad WAL file.\n\n> This sounds a lot more like a broken filesystem than anything on the PG\n> level.\n\nProbably. In my recently updated comment, turning off ZFS compression on\nthe master seems to have fixed the issue.\nHowever I will note that the WAL file stored on the master was always fine\nupon inspection.\n\n
> > When using replication slots, what circumstances would cause the master\n> > to not save the WAL file?\n>\n> What do you mean by \"save the WAL file\"?\n\nTypically, when using replication slots, when replication stops the master\nwill save the next needed WAL file.\nHowever, once or twice when this error occurred the master recycled/removed\nthe WAL file needed.\nI suspect perhaps b/c the replica had started to read the WAL file, it sent\nsome signal to the master that the WAL\nfile was already consumed. I am guessing, not knowing exactly what is\nhappening, and w/ the caveat that this\nsituation was rare and not the norm. It is also possible this was caused by\na different error.\n\nThanks.\nAlex", "msg_date": "Mon, 6 Apr 2020 10:59:47 -0400", "msg_from": "Alex Malek <magicagent@gmail.com>", "msg_from_op": true, "msg_subject": "Re: bad wal on replica / incorrect resource manager data checksum in\n record / zfs" } ]
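For readers who want to reproduce the hex-dump comparison described in this thread, here is a minimal, self-contained sketch. The file names and contents are made-up stand-ins (not paths from the original report), and od is used in place of xxd since it is part of POSIX and produces an equivalent textual dump:

```shell
# Create two stand-in "WAL segment" files; the bad one has a block of
# zeros in the middle, like the corruption described in the thread.
printf 'AAAABBBBCCCC' > good.wal
printf 'AAAA\0\0\0\0CCCC' > bad.wal

# Hex-dump both files so a textual diff can show exactly which byte
# ranges differ (xxd good.wal / xxd bad.wal would work the same way).
od -A x -t x1z good.wal > good.hex
od -A x -t x1z bad.wal > bad.hex

# Lines prefixed with < or > in the diff mark the corrupted region.
diff good.hex bad.hex > wal.diff || true
grep '^[<>]' wal.diff
```

With real segments, pointing this at the same WAL file under pg_wal on the master and on the replica shows whether the divergence is a block of zeros, shifted data, or something else, which is the kind of observation that led to the recycling theory above.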
[ { "msg_contents": "Hi PostgreSQL Hackers,\nPlease forgive me if this is not the preferred place to suggest a new\nfeature. I found that a lot of items in the psql TODO list [1] were\nposted to this email list.\n\nI need to pass a connection string to psql inside Docker [2]. I can\npass it as a process argument, but this exposes the password to other\nprocesses on my machine:\n$ docker run --rm -i -t postgres:11 psql \"$(cat db_uri)\"\n\nThe alternative is to parse the URI, remove the password, and provide\nthe password via environment variable. It's ugly:\n\n$ PGPASSWORD=$(cat db_uri |grep -oE ':[a-zA-Z0-9]*@' |tr -d :@ ) \\\ndocker run --rm -i -t postgres:11 \\\npsql \"$(cat db_uri |sed 's/:[[:alnum:]]*@/@/')\"\n\nI would prefer to do this:\n$ PGURI=\"$(cat db_uri)\" docker run --rm -i -t -e PGURI postgres:11 psql\nHow about adding PGURI to the list of supported environment variables [3]?\n\nSincerely,\nMichael\n\n[1] https://wiki.postgresql.org/wiki/Todo#psql\n[2] https://hub.docker.com/_/postgres\n[3] https://www.postgresql.org/docs/devel/app-psql.html#APP-PSQL-ENVIRONMENT\n\n\n", "msg_date": "Wed, 19 Feb 2020 18:25:10 -0800", "msg_from": "Michael Leonhard <michael@leonhardllc.com>", "msg_from_op": true, "msg_subject": "Add PGURI env var for passing connection string to psql in Docker" }, { "msg_contents": "Michael Leonhard <michael@leonhardllc.com> writes:\n> I need to pass a connection string to psql inside Docker [2]. 
I can\n> pass it as a process argument, but this exposes the password to other\n> processes on my machine:\n> $ docker run --rm -i -t postgres:11 psql \"$(cat db_uri)\"\n\nYeah, if you include the password in the URI :-(\n\n> How about adding PGURI to the list of supported environment variables [3]?\n\nThat will not fix your security problem, because on a lot of OSes,\nenvironment variables are *also* visible to other processes.\n\nThere are other practical problems with such a proposal, mainly that\nit's not clear how such a variable ought to interact with existing\nconnection-control variables (eg, if you set both PGURI and PGHOST,\nwhich wins?)\n\nThe only safe way to deal with a password is to have some other\nout-of-band way to pass it. That's one reason for the popularity\nof ~/.pgpass files. Alternatively, look into non-password\nauthentication methods.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Feb 2020 15:20:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add PGURI env var for passing connection string to psql in Docker" }, { "msg_contents": "Hi Tom,\nThanks for your reply. A new PGURI env var would have the same\nsecurity risks as the existing PGPASSWORD env var, but no more. It\nwould be a usability improvement for folks using Docker. Docker\nprovides some special security benefits. I believe that we can\nimprove security for users by helping them to use Docker.\n\n~/.pgpass is useful for folks who manually connect to databases. I'm\nwriting deployment, backup, and restore automation tools. I would\nlike to keep these tools simple. Using pgpass requires extra steps:\n1. parse a perfectly good URI\n2. join it back together without the secret part\n3. write the secret part to a file in a special format\n4. protect the file from unauthorized access\n5. expose that file to the Docker container\n6. 
pass the secret-less URI to the process\nThe chances for screwing this up and leaking credentials are real.\nTherefore, I believe PGURI will be much safer in practice than\nPGPASSWORD.\n\nYour point about ambiguity if the user sets multiple overlapping env\nvars is good. I think it could be solved reasonably by having other\nvars override values in PGURI. A short sentence in the documentation\nwould eliminate confusion for users. Example changes to\napp-psql.html:\n>>>>>\n+ PGURI (other environment variables override values from this variable)\nPGDATABASE\nPGHOST\nPGPORT\nPGUSER\n+ PGPASSWORD\nDefault connection parameters (see Section 33.14).\n<<<<<\n\nWe could get the best of both worlds by adding both PGURI and\nPGURIFILE env vars. What do you think?\n\n-Michael\n\nOn Thu, Feb 20, 2020 at 12:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Michael Leonhard <michael@leonhardllc.com> writes:\n> > I need to pass a connection string to psql inside Docker [2]. I can\n> > pass it as a process argument, but this exposes the password to other\n> > processes on my machine:\n> > $ docker run --rm -i -t postgres:11 psql \"$(cat db_uri)\"\n>\n> Yeah, if you include the password in the URI :-(\n>\n> > How about adding PGURI to the list of supported environment variables [3]?\n>\n> That will not fix your security problem, because on a lot of OSes,\n> environment variables are *also* visible to other processes.\n>\n> There are other practical problems with such a proposal, mainly that\n> it's not clear how such a variable ought to interact with existing\n> connection-control variables (eg, if you set both PGURI and PGHOST,\n> which wins?)\n>\n> The only safe way to deal with a password is to have some other\n> out-of-band way to pass it. That's one reason for the popularity\n> of ~/.pgpass files. 
Alternatively, look into non-password\n> authentication methods.\n>\n> regards, tom lane\n\n\n", "msg_date": "Thu, 20 Feb 2020 14:09:31 -0800", "msg_from": "Michael Leonhard <michael@leonhardllc.com>", "msg_from_op": true, "msg_subject": "Re: Add PGURI env var for passing connection string to psql in Docker" }, { "msg_contents": "On Fri, 21 Feb 2020 at 08:03, Michael Leonhard <michael@leonhardllc.com> wrote:\n> 1. parse a perfectly good URI\n\nYou have a URI with embedded password, which to me is not a perfectly\ngood URI at all. I think the problem really lies with the input:\nseparate your secret credentials out to start with, don't munge them\ninto a URI.\n\n> ~/.pgpass is useful for folks who manually connect to databases. I'm\n> writing deployment, backup, and restore automation tools. I would\n> like to keep these tools simple. Using pgpass requires extra steps:\n\nThat's why we have pg_service.conf, though that only helps libpq applications.\n\nIt's a shame that Docker doesn't make it simpler to inject individual\nfiles into containers at \"docker run\" time. But wrapper dockerfiles\nare trivial. -v bind mounting is also an option but then you have the\nfile sitting around on the host, which is undesirable. You can unlink\nthe bind mounted dir though.\n\nFor Docker you have --env-file to avoid putting the environment on the\ncommand line of the container-host, which helps explain why you are\nwilling to use an env var for this. I wouldn't be too confident in\nassuming there's no way to peek at the environment of the\ncontainerised process(es) from outside the container. Much more likely\nthan being able to peek at a file, anyway.\n\nThen again, Docker relies on dropping capabilities and likes to run as\nroot-that-isn't-root-except-when-it's-root, which doesn't thrill me\nwhen it comes to security. 
At all.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n", "msg_date": "Fri, 21 Feb 2020 14:14:21 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add PGURI env var for passing connection string to psql in Docker" } ]
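As an aside for anyone landing on this thread: the password-splitting dance from the first message can be written without the fragile grep/sed pipeline, using only POSIX parameter expansion. This is a sketch, not a hardened parser: the URI below is a made-up example, and it assumes the password contains no characters special to URIs or to sed.

```shell
# Hypothetical connection URI (not a real credential)
db_uri='postgres://alice:s3cret@db.example.com:5432/appdb'

# userinfo is everything between "://" and the first "@"
userinfo=${db_uri#*://}      # alice:s3cret@db.example.com:5432/appdb
userinfo=${userinfo%%@*}     # alice:s3cret
password=${userinfo#*:}      # s3cret

# Strip the ":password" part from the URI itself
uri_no_password=$(printf '%s\n' "$db_uri" | sed "s/:${password}@/@/")

echo "$uri_no_password"
```

The extracted secret could then be handed to the container out of band (PGPASSWORD, an --env-file, or better a mounted pgpass file) while the visible process argument stays password-free.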
[ { "msg_contents": "Hello hackers,\n\nHere's a *highly* experimental patch set that tries to skip the LWLock\nprotocol in predicate.c and use HTM[1] instead. HTM is itself a sort\nof hardware-level implementation of SSI for shared memory. My\nthinking was that if your workload already suits the optimistic nature\nof SSI, perhaps it could make sense to go all-in and remove the rather\ncomplicated pessimistic locking it's built on top of. It falls back\nto an LWLock-based path at compile time if you don't build with\n--enable-htm, or at runtime if a startup test discovered that your CPU\ndoesn't have the Intel TSX instruction set (microarchitectures older\nthan Skylake, and some mobile and low power variants of current ones),\nor if a hardware transaction is aborted for various reasons.\n\nThe good news is that it seems to produce correct results in simple\ntests (well, some lock-held-by-me assertions can fail in an\n--enable-cassert build, that's trivial to fix). The bad news is that\nit doesn't perform very well yet, and I think the reason for that is\nthat there are some inherently serial parts of the current design that\ncause frequent conflicts. In particular, the\nFinishedSerializableTransactions list, protected by\nSerializableFinishedListLock, produces a stream of conflicts, and\nfalls back to the traditional behaviour which involves long lock wait\nqueues and thereby more HTM conflicts. I think we probably need a\nmore concurrent way to release SSI transactions, entirely independent\nof this HTM experiment. There's another point of serialisation at\nsnapshot acquisition time, which may be less avoidable; I don't know.\nFor much of the code that runs between snapshot acquisition and\ntransaction release, we really only care about touching memory\ndirectly related to the SQL objects we touch in our SQL transaction,\nand the other SQL transactions which have also touched them. 
The\nquestion is whether it's possible to get to a situation where\nnon-overlapping read/write sets at the SQL level don't cause conflicts\nat the memory level and everything goes faster, or whether the SSI\nalgorithm is somehow inherently unsuitable for running on top of, erm,\nSSI-like technology. It seems like a potentially interesting research\nproject.\n\nHere's my one paragraph introduction to HTM programming: Using the\nwrapper macros from my 0001 patch, you call pg_htm_begin(), and if\nthat returns true you're in a memory transaction and should eventually\ncall pg_htm_commit() or pg_htm_abort(), and if it returns false your\ntransaction has aborted and you need to fall back to some other\nstrategy. (Retrying is also an option, but the reason codes are\ncomplicated, and progress is not guaranteed, so introductions to the\ntopic often advise going straight to a fallback.) No other thread is\nallowed to see your changes to memory until you commit, and if you\nabort (explicitly, due to lack of cache for uncommitted changes, due\nto a serialisation conflict, or due to other internal details possibly\nknown only to Intel), all queued changes to memory are abandoned, and\ncontrol returns at pg_htm_begin(), a bit like the way setjmp() returns\nnon-locally when you call longjmp(). There are plenty of sources to\nread about this stuff in detail, but for a very gentle introduction I\nrecommend Maurice Herlihy's 2-part talk[2][3] (the inventor of this\nstuff at DEC in the early 90s), despite some strange claims he makes\nabout database hackers.\n\nIn theory this should work on POWER and future ARM systems too, with a\nbit more work, but I haven't looked into that. 
There are doubtless\nmany other applications for this type of technology within PostgreSQL.\nPerhaps some more fruitful.\n\n[1] https://en.wikipedia.org/wiki/Transactional_memory\n[2] https://www.youtube.com/watch?v=S3Fx-7avfs4\n[3] https://www.youtube.com/watch?v=94ieceVxSHs", "msg_date": "Thu, 20 Feb 2020 16:55:12 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Experimenting with transactional memory for SERIALIZABLE" }, { "msg_contents": "> On Thu, Feb 20, 2020 at 04:55:12PM +1300, Thomas Munro wrote:\n> Hello hackers,\n>\n> Here's a *highly* experimental patch set that tries to skip the LWLock\n> protocol in predicate.c and use HTM[1] instead.  HTM is itself a sort\n> of hardware-level implementation of SSI for shared memory.  My\n> thinking was that if your workload already suits the optimistic nature\n> of SSI, perhaps it could make sense to go all-in and remove the rather\n> complicated pessimistic locking it's built on top of.  It falls back\n> to an LWLock-based path at compile time if you don't build with\n> --enable-htm, or at runtime if a startup test discovered that your CPU\n> doesn't have the Intel TSX instruction set (microarchitectures older\n> than Skylake, and some mobile and low power variants of current ones),\n> or if a hardware transaction is aborted for various reasons.\n\nThanks, that sounds cool!\n\n> The good news is that it seems to produce correct results in simple\n> tests (well, some lock-held-by-me assertions can fail in an\n> --enable-cassert build, that's trivial to fix).  The bad news is that\n> it doesn't perform very well yet, and I think the reason for that is\n> that there are some inherently serial parts of the current design that\n> cause frequent conflicts.\n\nCan you share some numbers about how poorly it performs and how many\nhardware transactions were aborted with a fallback? 
I'm curious because\nfrom this paper [1] I've got the impression that the bigger (in terms of\nmemory) and longer a transaction is, the higher the chances of it getting\naborted. So I wonder whether that needs to be taken into account, or whether\nusing it for SSI as presented in the patch somehow implicitly minimizes those\nchances? Otherwise not only conflicting transactions will cause a\nfallback, but also those that e.g. span too much memory.\n\nAnother question that interests me is how much this is affected by the TAA\nvulnerability [2], and what the prospects of this approach are, given that\nmany suggest disabling TSX because of it (there are mitigations\nof course, but if I understand correctly e.g. for Linux it's similar to\nMDS, where a mitigation is done via flushing CPU buffers on entering the\nkernel space, but in between speculative access could still be\nperformed).\n\n[1]: https://db.in.tum.de/~leis/papers/HTM.pdf\n[2]: https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html\n\n\n", "msg_date": "Thu, 20 Feb 2020 11:39:45 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Experimenting with transactional memory for SERIALIZABLE" }, { "msg_contents": "On Thu, Feb 20, 2020 at 11:38 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> Can you share some numbers about how poorly it performs and how many\n> hardware transactions were aborted with a fallback? I'm curious because\n> from this paper [1] I've got the impression that the bigger (in terms of\n> memory) and longer a transaction is, the higher the chances of it getting\n> aborted. So I wonder whether that needs to be taken into account, or whether\n> using it for SSI as presented in the patch somehow implicitly minimizes those\n> chances? Otherwise not only conflicting transactions will cause a\n> fallback, but also those that e.g. 
span too much memory.\n\nGood questions, and I don't have good enough numbers to share right\nnow; to be clear, the stage this work is at is: \"wow, I think this new\nalien technology might actually be producing the right answers at\nleast some of the time, now maybe we could start to think about\nanalysing its behaviour some more\", and I wanted to share early and\nsee if anyone else was interested in the topic too :-)\n\nThanks for that paper, added to my reading list.  The HTM\ntransactions' size is not linked to the size of database transactions,\nwhich would certainly be too large.  It's just used for lower level\noperations that need to be atomic and serializable, replacing a bunch\nof LWLocks.  I see from skimming the final paragraph of that paper\nthat they're also not mapping database transactions directly to HTM.\nSo, the amount of memory you touch depends on the current size of\nvarious lists in SSI's internal book keeping, and I haven't done the\nwork to figure out at which point space runs out (_XABORT_CAPACITY) in\nany test workloads etc, or to consider whether some operations that I\ncovered with one HTM transaction could be safely broken up into\nmultiple transactions to minimise transaction size, though I am aware\nof at least one opportunity like that.\n\n> Another question that interests me is how much this is affected by the TAA\n> vulnerability [2], and what the prospects of this approach are, given that\n> many suggest disabling TSX because of it (there are mitigations\n> of course, but if I understand correctly e.g. for Linux it's similar to\n> MDS, where a mitigation is done via flushing CPU buffers on entering the\n> kernel space, but in between speculative access could still be\n> performed).\n\nYeah, the rollout of TSX has been a wild ride since the beginning.  I\ndidn't want to comment on that aspect because I just don't know enough\nabout it and at this point it's frankly pretty confusing. 
As far as\nI know from limited reading, as of late last year a few well known\nOSes are offering easy ways to disable TSX due to Zombieload v2 if you\nwould like to, but not doing so by default. I tested with the Debian\nintel-microcode package version 3.20191115.2~deb10u1 installed which I\nunderstand to the be latest and greatest, and made no relevant\nmodifications, and the instructions were available. I haven't read\nanywhere that TSX itself is ending. Other ISAs have comparable\ntechnology[1][2], and the concept has been worked on for over 20\nyears, so... I just don't know.\n\n[1] https://developer.arm.com/docs/101028/0008/transactional-memory-extension-tme-intrinsics\n[2] https://www.ibm.com/developerworks/aix/library/au-aix-ibm-xl-compiler-built-in-functions/index.html\n\n\n", "msg_date": "Fri, 21 Feb 2020 01:33:17 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Experimenting with transactional memory for SERIALIZABLE" } ]
[ { "msg_contents": "partition_bounds_copy() sets the hash_part and natts variables in each\niteration of a loop to copy the datums in the datums array, which\nwould not be efficient. Attached is a small patch for avoiding that.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Thu, 20 Feb 2020 20:36:10 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Minor improvement to partition_bounds_copy()" }, { "msg_contents": "Fujita-san,\n\nOn Thu, Feb 20, 2020 at 8:36 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> partition_bounds_copy() sets the hash_part and natts variables in each\n> iteration of a loop to copy the datums in the datums array, which\n> would not be efficient. Attached is a small patch for avoiding that.\n\nThat looks good to me.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 20 Feb 2020 21:38:26 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Minor improvement to partition_bounds_copy()" }, { "msg_contents": "On Thu, Feb 20, 2020 at 09:38:26PM +0900, Amit Langote wrote:\n> Fujita-san,\n>\n> On Thu, Feb 20, 2020 at 8:36 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > partition_bounds_copy() sets the hash_part and natts variables in each\n> > iteration of a loop to copy the datums in the datums array, which\n> > would not be efficient. 
Attached is a small patch for avoiding that.\n>\n> That looks good to me.\n\nLooks good to me too!\n\n\n", "msg_date": "Thu, 20 Feb 2020 14:52:29 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Minor improvement to partition_bounds_copy()" }, { "msg_contents": "On Thu, Feb 20, 2020 at 10:52 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Thu, Feb 20, 2020 at 09:38:26PM +0900, Amit Langote wrote:\n> > On Thu, Feb 20, 2020 at 8:36 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > partition_bounds_copy() sets the hash_part and natts variables in each\n> > > iteration of a loop to copy the datums in the datums array, which\n> > > would not be efficient. Attached is a small patch for avoiding that.\n> >\n> > That looks good to me.\n>\n> Looks good to me too!\n\nPushed. Thanks, Amit and Julien!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 21 Feb 2020 20:06:31 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Minor improvement to partition_bounds_copy()" } ]
[ { "msg_contents": "Over in the thread at [1] we agreed to remove assorted code that copes\nwith missing <stdint.h>, on the grounds that C99 requires that header\nso we should not have to cater anymore for platforms without it.\nThis logic could obviously be carried further. I scraped the buildfarm\nconfigure logs to see what other tests seem pointless (on the grounds that\nevery active animal reports the same result) and found a fair number.\n\nI think we can just remove these tests, and the corresponding\nsrc/port/ files where there is one:\n\nfseeko\nisinf\nmemmove\nrint\nsigned types\nutime\nutime.h\nwchar.h\n\nAll of the above are required by C99 and/or SUSv2, and the configure-using\nbuildfarm members are unanimous in reporting that they have them, and\nmsvc/Solution.pm expects Windows to have them. Removing src/port/isinf.c\nwill let us get rid of a few more configure tests too:\n\n # Look for a way to implement a substitute for isinf()\n AC_CHECK_FUNCS([fpclass fp_class fp_class_d class], [break])\n\nalthough that code path is never taken so it won't save much.\n\nI believe that we can also get rid of these tests:\n\nflexible array members\ncbrt\nintptr_t\nuintptr_t\n\nas these features are likewise required by C99. Solution.pm thinks that\nMSVC does not have the above, but I suspect its information is out of\ndate. We could soon find out from the buildfarm, of course.\n\nI also noted that these header checks are passing everywhere,\nwhich is unsurprising because they're required by C99 and/or POSIX:\n\nANSI C header files\ninttypes.h\nmemory.h\nstdlib.h\nstring.h\nstrings.h\nsys/stat.h\nsys/types.h\nunistd.h\n\nUnfortunately we're not actually asking for any of those to be probed\nfor --- it looks like Autoconf just up and does that of its own accord.\nSo we can't get rid of the tests and save configure cycles thereby.\nBut we can skip testing the HAVE_FOO_H symbols for them. 
We mostly\nwere already, but there's one or two exceptions.\n\nThere are a few other tests that are getting the same results in\nall buildfarm configure checks, but Solution.pm is injecting different\nresults for Windows, such as what to expand \"inline\" to. Conceivably\nwe could hard-code that based on the WIN32 #define and remove the\nconfigure probes, but I'm inclined to think it's not worth the\ntrouble and possible loss of flexibility.\n\nBarring objections I'll go make this happen.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/5d398bbb-262a-5fed-d839-d0e5cff3c0d7%402ndquadrant.com\n\n\n", "msg_date": "Thu, 20 Feb 2020 13:00:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Removing obsolete configure checks" }, { "msg_contents": "On 2020-02-20 19:00, Tom Lane wrote:\n> I think we can just remove these tests, and the corresponding\n> src/port/ files where there is one:\n> \n> fseeko\n> isinf\n> memmove\n> rint\n> signed types\n> utime\n> utime.h\n> wchar.h\n\nmakes sense\n\n> I believe that we can also get rid of these tests:\n> \n> flexible array members\n> cbrt\n> intptr_t\n> uintptr_t\n> \n> as these features are likewise required by C99. Solution.pm thinks that\n> MSVC does not have the above, but I suspect its information is out of\n> date. We could soon find out from the buildfarm, of course.\n\nThe flexible array members test on Solution.pm looks correct to me \n(define to empty if supported, else define to 1). cbrt is probably a \nmistake or outdated. The intptr_t/uintptr_t results are inconsistent: \nIt correctly defines intptr_t to empty, so that it will use the existing \ntypedef, but it does not define HAVE_INTPTR_T, but nothing uses that \nanyway. 
But these are gone now anyway.\n\n> I also noted that these header checks are passing everywhere,\n> which is unsurprising because they're required by C99 and/or POSIX:\n> \n> ANSI C header files\n> inttypes.h\n> memory.h\n> stdlib.h\n> string.h\n> strings.h\n> sys/stat.h\n> sys/types.h\n> unistd.h\n> \n> Unfortunately we're not actually asking for any of those to be probed\n> for --- it looks like Autoconf just up and does that of its own accord.\n> So we can't get rid of the tests and save configure cycles thereby.\n> But we can skip testing the HAVE_FOO_H symbols for them. We mostly\n> were already, but there's one or two exceptions.\n\nAutoconf git master seems to have modernized that a little bit. For \ninstance, HAVE_STDLIB_H and HAVE_STRING_H are always defined to 1, just \nfor backward compatibility. If we wanted to fiddle with this, I'd \nconsider importing the updated macro. Not sure if it's worth it though.\n\n> There are a few other tests that are getting the same results in\n> all buildfarm configure checks, but Solution.pm is injecting different\n> results for Windows, such as what to expand \"inline\" to.\n\nMSVC indeed does not appear to support plain inline.\n\n> Conceivably\n> we could hard-code that based on the WIN32 #define and remove the\n> configure probes, but I'm inclined to think it's not worth the\n> trouble and possible loss of flexibility.\n\nRight, better to leave it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 21 Feb 2020 20:40:13 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Removing obsolete configure checks" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-02-20 19:00, Tom Lane wrote:\n>> I believe that we can also get rid of these tests:\n>> flexible array members\n>> cbrt\n>> intptr_t\n>> uintptr_t\n>> as these 
features are likewise required by C99. Solution.pm thinks that\n>> MSVC does not have the above, but I suspect its information is out of\n>> date. We could soon find out from the buildfarm, of course.\n\n> The flexible array members test on Solution.pm looks correct to me \n> (define to empty if supported, else define to 1).\n\nYeah, I misread it the first time.\n\n> cbrt is probably a mistake or outdated.\n\nRight; at least, Microsoft's documentation claims to have it. We'll\nsoon find out.\n\n> The intptr_t/uintptr_t results are inconsistent: \n> It correctly defines intptr_t to empty, so that it will use the existing \n> typedef, but it does not define HAVE_INTPTR_T, but nothing uses that \n> anyway. But these are gone now anyway.\n\nI forgot that your pending patch would nuke those, or I wouldn't\nhave listed them.\n\n>> Unfortunately we're not actually asking for any of those to be probed\n>> for --- it looks like Autoconf just up and does that of its own accord.\n>> So we can't get rid of the tests and save configure cycles thereby.\n>> But we can skip testing the HAVE_FOO_H symbols for them. We mostly\n>> were already, but there's one or two exceptions.\n\n> Autoconf git master seems to have modernized that a little bit. For \n> instance, HAVE_STDLIB_H and HAVE_STRING_H are always defined to 1, just \n> for backward compatibility. If we wanted to fiddle with this, I'd \n> consider importing the updated macro. Not sure if it's worth it though.\n\nHmm. If I thought they'd actually put out a new release sometime soon,\nI'd be content to wait for that. Seems like they have forgotten the\nrule about \"great artists ship\", though. Maybe we need to just\nperiodically grab their git master? 
Keeping all committers in sync\nwould be a problem though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Feb 2020 14:46:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Removing obsolete configure checks" }, { "msg_contents": "On Fri, Feb 21, 2020 at 7:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> wchar.h\n>\n> All of the above are required by C99 and/or SUSv2, and the configure-using\n> buildfarm members are unanimous in reporting that they have them, and\n> msvc/Solution.pm expects Windows to have them.\n\nI think the same now applies to <wctype.h>, without gaur. So I\npropose the attached. I split it into two patches, because 0001 is\nbased on scraping build farm configure output, while 0002 is an\neducated guess and might finish up needing to be reverted if I'm\nwrong.", "msg_date": "Sat, 23 Jul 2022 15:47:02 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing obsolete configure checks" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Feb 21, 2020 at 7:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> All of the above are required by C99 and/or SUSv2, and the configure-using\n>> buildfarm members are unanimous in reporting that they have them, and\n>> msvc/Solution.pm expects Windows to have them.\n\n> I think the same now applies to <wctype.h>, without gaur. So I\n> propose the attached. I split it into two patches, because 0001 is\n> based on scraping build farm configure output, while 0002 is an\n> educated guess and might finish up needing to be reverted if I'm\n> wrong.\n\n+1. SUSv2 is perfectly clear that <wctype.h> is supposed to declare\nthese functions. 
I'm not surprised that gaur's 1996-ish system headers\nfailed to see into the future; but prairiedog is up to speed on this\npoint, and I should think all the surviving BF animals are too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 23 Jul 2022 00:05:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Removing obsolete configure checks" }, { "msg_contents": "On Sat, Jul 23, 2022 at 4:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Fri, Feb 21, 2020 at 7:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> All of the above are required by C99 and/or SUSv2, and the configure-using\n> >> buildfarm members are unanimous in reporting that they have them, and\n> >> msvc/Solution.pm expects Windows to have them.\n>\n> > I think the same now applies to <wctype.h>, without gaur. So I\n> > propose the attached. I split it into two patches, because 0001 is\n> > based on scraping build farm configure output, while 0002 is an\n> > educated guess and might finish up needing to be reverted if I'm\n> > wrong.\n>\n> +1. SUSv2 is perfectly clear that <wctype.h> is supposed to declare\n> these functions. I'm not surprised that gaur's 1996-ish system headers\n> failed to see into the future; but prairiedog is up to speed on this\n> point, and I should think all the surviving BF animals are too.\n\nThanks. After looking more closely I pushed it as one commit. (I\nsuspect that we have some redundant #includes around here but my\ncurrent mission is focused on redundant configure/portability gloop.)\n\n\n", "msg_date": "Sat, 23 Jul 2022 16:57:47 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing obsolete configure checks" } ]
[ { "msg_contents": "Allow running src/tools/msvc/mkvcbuild.pl under not Windows\n\nThis to allow verifying the MSVC build file generation without having\nto have Windows.\n\nTo do this, we avoid Windows-specific Perl modules and don't run the\n\"cl\" compiler or \"nmake\". The resulting build files won't actually be\ncompletely correct, but it's useful enough.\n\nReviewed-by: Michael Paquier <michael@paquier.xyz>\nReviewed-by: Julien Rouhaud <rjuju123@gmail.com>\nDiscussion: https://www.postgresql.org/message-id/flat/d73b2c7b-f081-8357-8422-7564d55f1aac%402ndquadrant.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/73c8596488fd5fd619991f56dae5d22f551b06d9\n\nModified Files\n--------------\nsrc/tools/msvc/Mkvcbuild.pm | 6 ++++--\nsrc/tools/msvc/Project.pm | 2 +-\nsrc/tools/msvc/Solution.pm | 17 ++++++++++++-----\nsrc/tools/msvc/VSObjectFactory.pm | 31 +++++++++++++++++++------------\n4 files changed, 36 insertions(+), 20 deletions(-)", "msg_date": "Fri, 21 Feb 2020 19:58:43 +0000", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "pgsql: Allow running src/tools/msvc/mkvcbuild.pl under not Windows" }, { "msg_contents": "On 2020-Feb-21, Peter Eisentraut wrote:\n\n> Allow running src/tools/msvc/mkvcbuild.pl under not Windows\n> \n> This to allow verifying the MSVC build file generation without having\n> to have Windows.\n\nI suggest that src/tools/msvc/README should indicate how to use this; I\ndon't think it's completely obvious.\n\nThanks,\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 24 Feb 2020 13:59:28 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Allow running src/tools/msvc/mkvcbuild.pl under not\n Windows" } ]
[ { "msg_contents": "So last year in 10.5, 9.6.10, 9.5.9, 9.4.19, and 9.3.24 we had a\nbinary ABI break that caused pglogical and other logical decoding\nplugins to break:\n\nhttps://github.com/2ndQuadrant/pglogical/issues/183#issuecomment-417558313\n\nThis wasn't discovered until after the release so the release notes\ndon't highlight the risk. Someone upgrading past this release now\ncould diligently read all the release notes for all the versions they\ncare about and would never see anything warning that there was an\nunintentional ABI break.\n\nI wonder if we shouldn't be adding a note to those release notes after\nthe fact (and subsequent \"However if you are upgrading from a version\nearlier than....\" notes in later releases). It would be quite a pain\nI'm sure but I don't see any other way to get the information to\nsomeone upgrading in the future. I suppose we could just add the note\nto the current release notes on the theory that we only support\ninstalling the current release.\n\n-- \ngreg\n\n\n", "msg_date": "Fri, 21 Feb 2020 16:14:57 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "Do we ever edit release notes after release to add warnings about\n incompatibilities?" }, { "msg_contents": "On Fri, Feb 21, 2020 at 10:15 PM Greg Stark <stark@mit.edu> wrote:\n>\n> So last year in 10.5, 9.6.10, 9.5.9, 9.4.19, and 9.3.24 we had a\n> binary ABI break that caused pglogical and other logical decoding\n> plugins to break:\n>\n> https://github.com/2ndQuadrant/pglogical/issues/183#issuecomment-417558313\n>\n> This wasn't discovered until after the release so the release notes\n> don't highlight the risk. 
Someone upgrading past this release now\n> could diligently read all the release notes for all the versions they\n> care about and would never see anything warning that there was an\n> unintentional ABI break.\n>\n> I wonder if we shouldn't be adding a note to those release notes after\n> the fact (and subsequent \"However if you are upgrading from a version\n> earlier than....\" notes in later releases). It would be quite a pain\n> I'm sure but I don't see any other way to get the information to\n> someone upgrading in the future. I suppose we could just add the note\n> to the current release notes on the theory that we only support\n> installing the current release.\n>\n\nI definitely think we should. People will be looking at the release\nnotes for many years to come... And people will be installing the old\nversions.\n\nI think the right thing to do is to add them in the same place as they\nwould've been added if we had noticed the thing at the right time. We\nshouldn't duplicate it across every note since then, but it should be\nback-patched into those.\n\n//Magnus\n\n\n", "msg_date": "Fri, 21 Feb 2020 22:40:04 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Do we ever edit release notes after release to add warnings about\n incompatibilities?" } ]
[ { "msg_contents": "Hi PostgreSQL Hackers,\nI've run into something confusing. The psql command accepts\nconnection strings of the form:\n\npostgresql://user1:pass1@localhost:5432/db1?sslmode=require\n\nBut passing this string to the java client library (with a \"jdbc:\"\nprefix) fails. See the exception and stack trace below. According to\nthe docs https://jdbc.postgresql.org/documentation/80/connect.html ,\nthe java client library accepts connection strings with this form:\n\npostgresql://localhost:5432/db1?user=user1&password=pass1&ssl=true\n\nHow about making the Java client library accept the same connection\nstrings as psql and other command-line tools? That would make\nPostgreSQL easier to use and increase its popularity.\n\nSincerely,\nMichael\n\nException in thread \"main\" org.postgresql.util.PSQLException: The\nconnection attempt failed.\nat org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:292)\nat org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)\nat org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:211)\nat org.postgresql.Driver.makeConnection(Driver.java:458)\nat org.postgresql.Driver.connect(Driver.java:260)\nat java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)\nat java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228)\nat org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:98)\nat org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:83)\nat com.leonhardllc.x.db.temp.TemporaryDatabase.createTempDatabase(TemporaryDatabase.java:39)\nat com.leonhardllc.x.db.generated.JOOQSourceGenerator.main(JOOQSourceGenerator.java:35)\nCaused by: java.net.UnknownHostException: user1:pass1@localhost\nat java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:220)\nat java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)\nat 
java.base/java.net.Socket.connect(Socket.java:591)\nat org.postgresql.core.PGStream.<init>(PGStream.java:75)\nat org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:91)\nat org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192)\n... 10 more\n\n\n", "msg_date": "Fri, 21 Feb 2020 17:08:47 -0800", "msg_from": "Michael Leonhard <michael@leonhardllc.com>", "msg_from_op": true, "msg_subject": "Make java client lib accept same connection strings as psql" }, { "msg_contents": "On Fri, Feb 21, 2020 at 6:21 PM Michael Leonhard <michael@leonhardllc.com>\nwrote:\n\n> How about making the Java client library accept the same connection\n> strings as psql and other command-line tools? That would make\n> PostgreSQL easier to use and increase its popularity.\n>\n\nThat falls outside the scope of this list/project. The separate pgJDBC\nproject team would need to decide to take that up.\n\nI also doubt both unsubstantiated claims you make - that it would make\ndevelopers' lives materially easier and influence popularity measurably.\n\nDavid J.\n\n", "msg_date": "Fri, 21 Feb 2020 20:04:42 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make java client lib accept same connection strings as psql" }, { "msg_contents": "On Sat, Feb 22, 2020 at 4:05 AM David G. 
Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Fri, Feb 21, 2020 at 6:21 PM Michael Leonhard <michael@leonhardllc.com>\n> wrote:\n>\n>> How about making the Java client library accept the same connection\n>> strings as psql and other command-line tools? [...]\n>>\n>\n> That falls outside the scope of this list/project. The separate pgJDBC\n> project team would need to decide to take that up.\n>\n\nMichael,\n\nWhile your proposed end goal sounds like a desirable thing to me, there are\ncertain obstacles to that, unfortunately.\n\nFirst, consider that the URL support appeared in libpq after, not before\nthe support in the JDBC driver.\nSecond, the common subset of allowed connection parameters between the two\nis only as big as \"host\", \"port\", \"database\", \"user\" and \"password\".\n\nAdditionally, libpq understands the \"ssl=true\" parameter for JDBC\ncompatibility, though I don't think that was a good idea in the end. For\none, in the JDBC world \"ssl=false\" is treated exactly the same as\n\"ssl=true\" or any other value, which is questionable design in the first\nplace. And even if you could use exactly the same URL in both libpq-based\nand JDBC clients, without running into syntax errors, the semantics of\n\"ssl=true\" is subtly different between the two: in the former case, the\nclient does not attempt to validate the certificate, nor the hostname, as\nopposed to the latter.\n\nAs to your actual example, the part of syntax that is treated differently\nin libpq is the \"userinfo\":\nhttps://tools.ietf.org/html/rfc3986#section-3.2.1\nThe JDBC driver could be extended to support this as well, as such a change\nis backwards-compatible. As David has pointed out, this question should be\nasked to the PgJDBC project.\n\nLastly, the RFC provides some good arguments as to why providing username\nand, especially, password in the connection URL is undesirable. 
While the\n\"user:password@host\" or \"?user=fred&password=secret\" syntax can be handy\nfor local testing, this is definitely not something that should be used in\nproduction. Both libpq and the JDBC driver provide ways to pass login\ncredentials in a more secure manner.\n\nKind regards,\n--\nAlex\n", "msg_date": "Mon, 24 Feb 2020 09:52:21 +0100", "msg_from": "Oleksandr Shulgin <oleksandr.shulgin@zalando.de>", "msg_from_op": false, "msg_subject": "Re: Make java client lib accept same connection strings as psql" } ]
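The incompatibility discussed in this thread comes down to the RFC 3986 "userinfo" component (`user:password@`). As a rough illustration — generic Python using only the standard library, not the actual libpq or pgJDBC parsing code — a standards-conformant URL splitter recovers every common parameter from the libpq-style string, including the userinfo part that tripped up the JDBC driver:

```python
from urllib.parse import urlsplit, parse_qs

def split_pg_url(url):
    """Break a libpq-style connection URL into its common parameters.

    The user1:pass1@localhost segment is the RFC 3986 "userinfo"
    component (section 3.2.1) that older pgJDBC URL parsing treated
    as part of the hostname, producing the UnknownHostException above.
    """
    parts = urlsplit(url)
    return {
        "user": parts.username,          # None if no userinfo present
        "password": parts.password,
        "host": parts.hostname,
        "port": parts.port,              # parsed to int
        "dbname": parts.path.lstrip("/"),
        "params": {k: v[0] for k, v in parse_qs(parts.query).items()},
    }
```

Under this reading, `postgresql://user1:pass1@localhost:5432/db1?sslmode=require` and `postgresql://localhost:5432/db1?user=user1&password=pass1` carry the same credentials; only their placement in the URL differs.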
[ { "msg_contents": "Hi,\n\nISO seems to allow spaces among the digits of a binary literal, so as\nto group them for readability, as in X'00ba b10c'. We seem not to.\n(The B'...' form appears to be a PostgreSQL extension, but I imagine\nif ISO had it, it would allow spaces too.)\n\nIs it worthwhile to allow that? Or to add a compatibility note somewhere\naround sql-syntax-bit-strings saying we don't allow it?\n\nFor comparison, byteain does allow grouping whitespace.\n\nIt seems that byteain allows arbitrary whitespace (tabs, newlines, etc.),\nwhereas ISO's X'...' allows exactly and only U+0020 space characters.\n\nWhitespace for byteain must occur between digit pairs; spaces in X'...'\nper ISO can be anywhere.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sun, 23 Feb 2020 12:24:00 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Ought binary literals to allow spaces?" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> ISO seems to allow spaces among the digits of a binary literal, so as\n> to group them for readability, as in X'00ba b10c'. We seem not to.\n\nHmm. SQL99 did not allow noise spaces in <binary string literal> and\nrelated productions, but it does look like they added that in SQL:2008.\n\n> (The B'...' form appears to be a PostgreSQL extension, but I imagine\n> if ISO had it, it would allow spaces too.)\n\nThe B'...' form was there in SQL99. The committee took it out in\nSQL:2003 or so, along with the BIT type, but we still have both.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 23 Feb 2020 17:00:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Ought binary literals to allow spaces?" } ]
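The SQL:2008 rule the thread settles on — U+0020 spaces allowed anywhere between the quotes of X'...', and only U+0020 — can be sketched as follows. This is an illustrative model of the rule, not PostgreSQL's actual scanner:

```python
import re

def normalize_hex_literal(lit):
    """Normalize an SQL X'...' literal per the SQL:2008 reading above:
    U+0020 spaces may appear anywhere between the quotes and are
    ignored; any other character (tabs, newlines, ...) is an error.
    Purely a sketch -- PostgreSQL's lexer does not currently allow
    these spaces at all.
    """
    m = re.fullmatch(r"[Xx]'([0-9A-Fa-f ]*)'", lit)
    if m is None:
        raise ValueError("not a valid X'...' literal")
    digits = m.group(1).replace(" ", "")  # strip only U+0020
    if len(digits) % 2 != 0:
        raise ValueError("odd number of hex digits")
    return bytes.fromhex(digits)
```

Note the contrast with byteain drawn above: byteain accepts arbitrary whitespace but only between digit pairs, while this rule accepts `X'0 0ba'` because the space may fall anywhere among the digits.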
[ { "msg_contents": "This links to a long thread, from which I've tried to quote some of the\nmost important mails, below.\nhttps://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items#Won.27t_Fix\n\nI wondered if there's an effort to pursue a resolution for v13 ?\n\nOn Fri, Apr 12, 2019 at 11:42:24AM -0400, Tom Lane wrote in <31027.1555083744@sss.pgh.pa.us>:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Wed, Apr 10, 2019 at 05:03:21PM +0900, Amit Langote wrote:\n> >> The problem lies in all branches that have partitioning, so it should be\n> >> listed under Older Bugs, right? You may have noticed that I posted\n> >> patches for all branches down to 10.\n> \n> > I have noticed. The message from Tom upthread outlined that an open\n> > item should be added, but this is not one. That's what I wanted to\n> > emphasize. Thanks for making it clearer.\n> \n> To clarify a bit: there's more than one problem here. Amit added an\n> open item about pre-existing leaks related to rd_partcheck. (I'm going\n> to review and hopefully push his fix for that today.) However, there's\n> a completely separate leak associated with mismanagement of rd_pdcxt,\n> as I showed in [1], and it doesn't seem like we have consensus about\n> what to do about that one. AFAIK that's a new bug in 12 (caused by\n> 898e5e329) and so it ought to be in the main open-items list.\n> \n> This thread has discussed a bunch of possible future changes like\n> trying to replace copying of relcache-provided data structures\n> with reference-counting, but I don't think any such thing could be\n> v12 material at this point. 
We do need to fix the newly added\n> leak though.\n> \n> \t\t\tregards, tom lane\n> \n> [1] https://www.postgresql.org/message-id/10797.1552679128%40sss.pgh.pa.us\n> \n> \n\nOn Fri, Mar 15, 2019 at 05:41:47PM -0400, Robert Haas wrote in <CA+Tgmoa5rT+ZR+Vv+q1XLwQtDMCqkNL6B4PjR4V6YAC9K_LBxw@mail.gmail.com>:\n> On Fri, Mar 15, 2019 at 3:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > More to the point, we turned *one* rebuild = false situation into\n> > a bunch of rebuild = true situations. I haven't studied it closely,\n> > but I think a CCA animal would probably see O(N^2) rebuild = true\n> > invocations in a query with N partitions, since each time\n> > expand_partitioned_rtentry opens another child table, we'll incur\n> > an AcceptInvalidationMessages call which leads to forced rebuilds\n> > of the previously-pinned rels. In non-CCA situations, almost always\n> > nothing happens with the previously-examined relcache entries.\n> \n> That's rather unfortunate. I start to think that clobbering the cache\n> \"always\" is too hard a line.\n> \n> > I agree that copying data isn't great. What I don't agree is that this\n> > solution is better. In particular, copying data out of the relcache\n> > rather than expecting the relcache to hold still over long periods\n> > is the way we've done things everywhere else, cf RelationGetIndexList,\n> > RelationGetStatExtList, RelationGetIndexExpressions,\n> > RelationGetIndexPredicate, RelationGetIndexAttrBitmap,\n> > RelationGetExclusionInfo, GetRelationPublicationActions. I don't care\n> > for a patch randomly deciding to do things differently on the basis of an\n> > unsupported-by-evidence argument that it might cost too much to copy the\n> > data. 
If we're going to make a push to reduce the amount of copying of\n> > that sort that we do, it should be a separately (and carefully) designed\n> > thing that applies to all the relcache substructures that have the issue,\n> > not one-off hacks that haven't been reviewed thoroughly.\n> \n> That's not an unreasonable argument. On the other hand, if you never\n> try new stuff, you lose opportunities to explore things that might\n> turn out to be better and worth adopting more widely.\n> \n> I am not very convinced that it makes sense to lump something like\n> RelationGetIndexAttrBitmap in with something like\n> RelationGetPartitionDesc. RelationGetIndexAttrBitmap is returning a\n> single Bitmapset, whereas the data structure RelationGetPartitionDesc\n> is vastly larger and more complex. You can't say \"well, if it's best\n> to copy 32 bytes of data out of the relcache every time we need it, it\n> must also be right to copy 10k or 100k of data out of the relcache\n> every time we need it.\"\n> \n> There is another difference as well: there's a good chance that\n> somebody is going to want to mutate a Bitmapset, whereas they had\n> BETTER NOT think that they can mutate the PartitionDesc. So returning\n> an uncopied Bitmapset is kinda risky in a way that returning an\n> uncopied PartitionDesc is not.\n> \n> If we want an at-least-somewhat unified solution here, I think we need\n> to bite the bullet and make the planner hold a reference to the\n> relcache throughout planning. (The executor already keeps it open, I\n> believe.) Otherwise, how is the relcache supposed to know when it can\n> throw stuff away and when it can't? 
The only alternative seems to be\n> to have each subsystem hold its own reference count, as I did with the\n> PartitionDirectory stuff, which is not something we'd want to scale\n> out.\n> \n> > I especially don't care for the much-less-than-half-baked kluge of\n> > chaining the old rd_pdcxt onto the new one and hoping that it will go away\n> > at a suitable time.\n> \n> It will go away at a suitable time, but maybe not at the soonest\n> suitable time. It wouldn't be hard to improve that, though. The\n> easiest thing to do, I think, would be to have an rd_oldpdcxt and\n> stuff the old context there. If there already is one there, parent\n> the new one under it. When RelationDecrementReferenceCount reduces\n> the reference count to zero, destroy anything found in rd_oldpdcxt.\n> With that change, things get destroyed at the earliest time at which\n> we know the old things aren't referenced, instead of the earliest time\n> at which they are not referenced + an invalidation arrives.\n> \n> > regression=# create table parent (a text, b int) partition by list (a);\n> > CREATE TABLE\n> > regression=# create table child (a text, b int);\n> > CREATE TABLE\n> > regression=# do $$\n> > regression$# begin\n> > regression$# for i in 1..10000000 loop\n> > regression$# alter table parent attach partition child for values in ('x');\n> > regression$# alter table parent detach partition child;\n> > regression$# end loop;\n> > regression$# end $$;\n> \n> Interesting example.\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n> \n\nOn Sun, Apr 14, 2019 at 03:29:26PM -0400, Tom Lane wrote in <27380.1555270166@sss.pgh.pa.us>:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Fri, Mar 15, 2019 at 3:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I agree that copying data isn't great. 
What I don't agree is that this\n> >> solution is better.\n> \n> > I am not very convinced that it makes sense to lump something like\n> > RelationGetIndexAttrBitmap in with something like\n> > RelationGetPartitionDesc. RelationGetIndexAttrBitmap is returning a\n> > single Bitmapset, whereas the data structure RelationGetPartitionDesc\n> > is vastly larger and more complex. You can't say \"well, if it's best\n> > to copy 32 bytes of data out of the relcache every time we need it, it\n> > must also be right to copy 10k or 100k of data out of the relcache\n> > every time we need it.\"\n> \n> I did not say that. What I did say is that they're both correct\n> solutions. Copying large data structures is clearly a potential\n> performance problem, but that doesn't mean we can take correctness\n> shortcuts.\n> \n> > If we want an at-least-somewhat unified solution here, I think we need\n> > to bite the bullet and make the planner hold a reference to the\n> > relcache throughout planning. (The executor already keeps it open, I\n> > believe.) Otherwise, how is the relcache supposed to know when it can\n> > throw stuff away and when it can't?\n> \n> The real problem with that is that we *still* won't know whether it's\n> okay to throw stuff away or not. The issue with these subsidiary\n> data structures is exactly that we're trying to reduce the lock strength\n> required for changing one of them, and as soon as you make that lock\n> strength less than AEL, you have the problem that that value may need\n> to change in relcache entries despite them being pinned. The code I'm\n> complaining about is trying to devise a shortcut solution to exactly\n> that situation ... 
and it's not a good shortcut.\n> \n> > The only alternative seems to be to have each subsystem hold its own\n> > reference count, as I did with the PartitionDirectory stuff, which is\n> > not something we'd want to scale out.\n> \n> Well, we clearly don't want to devise a separate solution for every\n> subsystem, but it doesn't seem out of the question to me that we could\n> build a \"reference counted cache sub-structure\" mechanism and apply it\n> to each of these relcache fields. Maybe it could be unified with the\n> existing code in the typcache that does a similar thing. Sure, this'd\n> be a fair amount of work, but we've done it before. Syscache entries\n> didn't use to have reference counts, for example.\n> \n> BTW, the problem I have with the PartitionDirectory stuff is exactly\n> that it *isn't* a reference-counted solution. If it were, we'd not\n> have this problem of not knowing when we could free rd_pdcxt.\n> \n> >> I especially don't care for the much-less-than-half-baked kluge of\n> >> chaining the old rd_pdcxt onto the new one and hoping that it will go away\n> >> at a suitable time.\n> \n> > It will go away at a suitable time, but maybe not at the soonest\n> > suitable time. It wouldn't be hard to improve that, though. The\n> > easiest thing to do, I think, would be to have an rd_oldpdcxt and\n> > stuff the old context there. If there already is one there, parent\n> > the new one under it. When RelationDecrementReferenceCount reduces\n> > the reference count to zero, destroy anything found in rd_oldpdcxt.\n> \n> Meh. While it seems likely that that would mask most practical problems,\n> it still seems like covering up a wart with a dirty bandage. 
In\n> particular, that fix doesn't fix anything unless relcache reference counts\n> go to zero pretty quickly --- which I'll just note is directly contrary\n> to your enthusiasm for holding relcache pins longer.\n> \n> I think that what we ought to do for v12 is have PartitionDirectory\n> copy the data, and then in v13 work on creating real reference-count\n> infrastructure that would allow eliminating the copy steps with full\n> safety. The $64 question is whether that really would cause unacceptable\n> performance problems. To look into that, I made the attached WIP patches.\n> (These are functionally complete, but I didn't bother for instance with\n> removing the hunk that 898e5e329 added to relcache.c, and the comments\n> need work, etc.) The first one just changes the PartitionDirectory\n> code to do that, and then the second one micro-optimizes\n> partition_bounds_copy() to make it somewhat less expensive, mostly by\n> collapsing lots of small palloc's into one big one.\n> \n> What I get for test cases like [1] is\n> \n> single-partition SELECT, hash partitioning:\n> \n> N tps, HEAD tps, patch\n> 2 11426.243754 11448.615193\n> 8 11254.833267 11374.278861\n> 32 11288.329114 11371.942425\n> 128 11222.329256 11185.845258\n> 512 11001.177137 10572.917288\n> 1024 10612.456470 9834.172965\n> 4096 8819.110195 7021.864625\n> 8192 7372.611355 5276.130161\n> \n> single-partition SELECT, range partitioning:\n> \n> N tps, HEAD tps, patch\n> 2 11037.855338 11153.595860\n> 8 11085.218022 11019.132341\n> 32 10994.348207 10935.719951\n> 128 10884.417324 10532.685237\n> 512 10635.583411 9578.108915\n> 1024 10407.286414 8689.585136\n> 4096 8361.463829 5139.084405\n> 8192 7075.880701 3442.542768\n> \n> Now certainly these numbers suggest that avoiding the copy could be worth\n> our trouble, but these results are still several orders of magnitude\n> better than where we were two weeks ago [2]. 
Plus, this is an extreme\n> case that's not really representative of real-world usage, since the test\n> tables have neither indexes nor any data. In practical situations the\n> baseline would be lower and the dropoff less bad. So I don't feel bad\n> about shipping v12 with these sorts of numbers and hoping for more\n> improvement later.\n> \n> \t\t\tregards, tom lane\n> \n> [1] https://www.postgresql.org/message-id/3529.1554051867%40sss.pgh.pa.us\n> \n> [2] https://www.postgresql.org/message-id/0F97FA9ABBDBE54F91744A9B37151A512BAC60%40g01jpexmbkw24\n> \n\n\nOn Wed, May 01, 2019 at 01:09:07PM -0400, Robert Haas wrote in <CA+Tgmob-cska+-WUC7T-G4zkSJp7yum6M_bzYd4YFzwQ51qiow@mail.gmail.com>:\n> On Wed, May 1, 2019 at 11:59 AM Andres Freund <andres@anarazel.de> wrote:\n> > The message I'm replying to is marked as an open item. Robert, what do\n> > you think needs to be done here before release? Could you summarize,\n> > so we then can see what others think?\n> \n> The original issue on this thread was that hyrax started running out\n> of memory when it hadn't been doing so previously. That happened\n> because, for complicated reasons, commit\n> 898e5e3290a72d288923260143930fb32036c00c resulted in the leak being\n> hit lots of times in CLOBBER_CACHE_ALWAYS builds instead of just once.\n> Commits 2455ab48844c90419714e27eafd235a85de23232 and\n> d3f48dfae42f9655425d1f58f396e495c7fb7812 fixed that problem.\n> \n> In the email at issue, Tom complains about two things. First, he\n> complains about the fact that I try to arrange things so that relcache\n> data remains valid for as long as required instead of just copying it.\n> Second, he complains about the fact repeated ATTACH and DETACH\n> PARTITION operations can produce a slow session-lifespan memory leak.\n> \n> I think it's reasonable to fix the second issue for v12. 
I am not\n> sure how important it is, because (1) the leak is small, (2) it seems\n> unlikely that anyone would execute enough ATTACH/DETACH PARTITION\n> operations in one backend for the leak to amount to anything\n> significant, and (3) if a relcache flush ever happens when you don't\n> have the relation open, all of the \"leaked\" memory will be un-leaked.\n> However, I believe that we could fix it as follows. First, invent\n> rd_pdoldcxt and put the first old context there; if that pointer is\n> already in use, then parent the new context under the old one.\n> Second, in RelationDecrementReferenceCount, if the refcount hits 0,\n> nuke rd_pdoldcxt and set the pointer back to NULL. With that change,\n> you would only keep the old PartitionDesc around until the ref count\n> hits 0, whereas at present, you keep the old PartitionDesc around\n> until an invalidation happens while the ref count is 0.\n> \n> I think the first issue is not v12 material. Tom proposed to fix it\n> by copying all the relevant data out of the relcache, but his own\n> performance results show a pretty significant hit, and AFAICS he\n> hasn't pointed to anything that's actually broken by the current\n> approach. What I think should be done is actually generalize the\n> approach I took in this patch, so that instead of the partition\n> directory holding a reference count, the planner or executor holds\n> one. Then not only could people who want information about the\n> PartitionDesc avoid copying stuff from the relcache, but maybe other\n> things as well. 
I think that would be significantly better than\n> continuing to double-down on the current copy-everything approach,\n> which really only works well in a world where a table can't have all\n> that much metadata, which is clearly not true for PostgreSQL any more.\n> I'm not sure that Tom is actually opposed to this approach -- although\n> I may have misunderstood his position -- but where we disagree, I\n> think, is that I see what I did in this commit as a stepping-stone\n> toward a better world, and he sees it as something that should be\n> killed with fire until that better world has fully arrived.\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n> \n> \n\nOn Tue, Jun 11, 2019 at 01:57:16PM -0400, Tom Lane wrote in <18286.1560275836@sss.pgh.pa.us>:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Thu, Jun 6, 2019 at 2:48 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> >> Attached is a patch that applies on top of Robert's pdoldcxt-v1.patch,\n> >> which seems to fix this issue for me.\n> \n> > Yeah, that looks right. I think my patch was full of fuzzy thinking\n> > and inadequate testing; thanks for checking it over and coming up with\n> > the right solution.\n> \n> > Anyone else want to look/comment?\n> \n> I think the existing code is horribly ugly and this is even worse.\n> It adds cycles to RelationDecrementReferenceCount which is a hotspot\n> that has no business dealing with this; the invariants are unclear;\n> and there's no strong reason to think there aren't still cases where\n> we accumulate lots of copies of old partition descriptors during a\n> sequence of operations. Basically you're just doubling down on a\n> wrong design.\n> \n> As I said upthread, my current inclination is to do nothing in this\n> area for v12 and then try to replace the whole thing with proper\n> reference counting in v13. 
I think the cases where we have a major\n> leak are corner-case-ish enough that we can leave it as-is for one\n> release.\n> \n> \t\t\tregards, tom lane\n> \n> \n\nOn Wed, Jun 12, 2019 at 03:11:56PM -0400, Tom Lane wrote in <3800.1560366716@sss.pgh.pa.us>:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Jun 11, 2019 at 1:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > I think the change is responsive to your previous complaint that the\n> > timing of stuff getting freed is not very well pinned down. With this\n> > change, it's much more tightly pinned down: it happens when the\n> > refcount goes to 0. That is definitely not perfect, but I think that\n> > it is a lot easier to come up with scenarios where the leak\n> > accumulates because no cache flush happens while the relfcount is 0\n> > than it is to come up with scenarios where the refcount never reaches\n> > 0. I agree that the latter type of scenario probably exists, but I\n> > don't think we've come up with one yet.\n> \n> I don't know why you think that's improbable, given that the changes\n> around PartitionDirectory-s cause relcache entries to be held open much\n> longer than before (something I've also objected to on this thread).\n> \n> >> As I said upthread, my current inclination is to do nothing in this\n> >> area for v12 and then try to replace the whole thing with proper\n> >> reference counting in v13. I think the cases where we have a major\n> >> leak are corner-case-ish enough that we can leave it as-is for one\n> >> release.\n> \n> > Is this something you're planning to work on yourself?\n> \n> Well, I'd rather farm it out to somebody else, but ...\n> \n> > Do you have a\n> > design in mind? Is the idea to reference-count the PartitionDesc?\n> \n> What we discussed upthread was refcounting each of the various\n> large sub-objects of relcache entries, not just the partdesc.\n> I think if we're going to go this way we should bite the bullet and fix\n> them all. 
I really want to burn down RememberToFreeTupleDescAtEOX() in\n> particular ... it seems likely to me that that's also a source of\n> unpleasant memory leaks.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n", "msg_date": "Sun, 23 Feb 2020 16:01:43 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "v12 \"won't fix\" item regarding memory leak in \"ATTACH PARTITION\n without AEL\"; (or, relcache ref counting)" }, { "msg_contents": "On Mon, Feb 24, 2020 at 7:01 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> This links to a long thread, from which I've tried to quote some of the\n> most important mails, below.\n> https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items#Won.27t_Fix\n>\n> I wondered if there's an effort to pursue a resolution for v13 ?\n\nI think commit 5b9312378 in master branch fixes this. Commit message\nmentions it like this:\n\n ...\n Also, fix things so that old copies of a relcache partition descriptor\n will be dropped when the cache entry's refcount goes to zero. In the\n previous coding it was possible for such copies to survive for the\n lifetime of the session, as I'd complained of in a previous discussion.\n (This management technique still isn't perfect, but it's better than\n before.)\n ...\n ...Although this is a pre-existing\n problem, no back-patch: the patch seems too invasive to be safe to\n back-patch, and the bug it fixes is a corner case that seems\n relatively unlikely to cause problems in the field.\n\n Discussion:\nhttps://postgr.es/m/CA+HiwqFUzjfj9HEsJtYWcr1SgQ_=iCAvQ=O2Sx6aQxoDu4OiHw@mail.gmail.com\n Discussion:\nhttps://postgr.es/m/CA+TgmoY3bRmGB6-DUnoVy5fJoreiBJ43rwMrQRCdPXuKt4Ykaw@mail.gmail.com\n\nThanks,\nAmit\n\n\n", "msg_date": "Mon, 24 Feb 2020 22:10:56 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: v12 \"won't fix\" item regarding memory leak in \"ATTACH PARTITION\n without AEL\"; (or, relcache ref counting)" } ]
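The refcounting design Tom sketches in the thread above — each large relcache sub-object carries its own reference count, and an old copy is freed as soon as the last pin on it goes away — can be illustrated with a tiny stand-alone sketch. This is not PostgreSQL's actual relcache code; the type name, function names, and the `live_descs` counter are invented for the illustration:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for a large relcache sub-object (e.g. a
 * PartitionDesc); not PostgreSQL's real data structure. */
typedef struct PartDesc
{
    int         nparts;
    int         refcount;
} PartDesc;

static int  live_descs = 0;     /* instrumentation for the sketch */

static PartDesc *
partdesc_create(int nparts)
{
    PartDesc   *pd = malloc(sizeof(PartDesc));

    pd->nparts = nparts;
    pd->refcount = 1;           /* creator holds the first reference */
    live_descs++;
    return pd;
}

/* A PartitionDirectory-style consumer would pin the descriptor... */
static PartDesc *
partdesc_retain(PartDesc *pd)
{
    pd->refcount++;
    return pd;
}

/* ...and an old copy is freed the moment its last pin is released,
 * rather than being remembered until some later flush or end of
 * transaction. */
static void
partdesc_release(PartDesc *pd)
{
    if (--pd->refcount == 0)
    {
        live_descs--;
        free(pd);
    }
}
```

The point of the design sits in `partdesc_release()`: lifetime is tied to the count reaching zero, which is what distinguishes the proposal from both the copy-everything approach and an end-of-transaction cleanup such as `RememberToFreeTupleDescAtEOX()`.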
[ { "msg_contents": "Hi,\n\nWhile working on a patch to reuse a common WaitEventSet for latch\nwaits, I noticed that be-secure-gssapi.c and be-secure-openssl.c don't\nuse FeBeWaitSet. They probably should, for consistency with\nbe-secure.c, because that surely holds the socket they want, no? The\nattached patch passes the \"ssl\" and \"kerberos\" tests and reaches that\ncode, confirmed by adding some log messages.\n\nI'm not actually sure what the rationale is for reporting \"terminating\nconnection due to unexpected postmaster exit\" here but not elsewhere.\nI copied the error from be-secure.c.", "msg_date": "Mon, 24 Feb 2020 16:49:45 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Shouldn't GSSAPI and SSL code use FeBeWaitSet?" }, { "msg_contents": "On Mon, Feb 24, 2020 at 4:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> While working on a patch to reuse a common WaitEventSet for latch\n> waits, I noticed that be-secure-gssapi.c and be-secure-openssl.c don't\n> use FeBeWaitSet. They probably should, for consistency with\n> be-secure.c, because that surely holds the socket they want, no?\n\nHmm. Perhaps it's like that because they're ignoring their latch\n(though they pass it in just because that interface requires it). So\nthen why not reset it and process read interrupts, like be-secure.c?\n\n\n", "msg_date": "Mon, 24 Feb 2020 16:55:35 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Shouldn't GSSAPI and SSL code use FeBeWaitSet?" }, { "msg_contents": "On Mon, Feb 24, 2020 at 4:55 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Feb 24, 2020 at 4:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > While working on a patch to reuse a common WaitEventSet for latch\n> > waits, I noticed that be-secure-gssapi.c and be-secure-openssl.c don't\n> > use FeBeWaitSet. 
They probably should, for consistency with\n> > be-secure.c, because that surely holds the socket they want, no?\n>\n> Hmm. Perhaps it's like that because they're ignoring their latch\n> (though they pass it in just because that interface requires it). So\n> then why not reset it and process read interrupts, like be-secure.c?\n\nPerhaps the theory is that it doesn't matter if they ignore eg\nSIGQUIT, because authentication_timeout will come along in (say) 60\nseconds and exit anyway? That makes me wonder whether it's OK that\nStartupPacketTimeoutHandler() does proc_exit() from a signal handler.\n\n\n", "msg_date": "Thu, 27 Feb 2020 12:31:31 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Shouldn't GSSAPI and SSL code use FeBeWaitSet?" } ]
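For context on what reusing `FeBeWaitSet` buys in the thread above: a wait event set keeps its file-descriptor bookkeeping alive across calls, instead of rebuilding it for every wait. A rough self-contained sketch of that reuse pattern, using plain `poll(2)` rather than PostgreSQL's actual WaitEventSet API (the names here are invented):

```c
#include <assert.h>
#include <poll.h>
#include <unistd.h>

/* Hypothetical miniature of a long-lived wait event set holding one
 * socket, loosely modelled on FeBeWaitSet; not the real API. */
typedef struct WaitSet
{
    struct pollfd pfd;          /* kept across waits instead of rebuilt */
} WaitSet;

static void
waitset_init(WaitSet *ws, int sock)
{
    ws->pfd.fd = sock;
    ws->pfd.events = 0;
}

/* Wait for the given events; returns 1 if they fired, 0 on timeout.
 * Only the event mask changes between calls - the set itself persists. */
static int
waitset_wait(WaitSet *ws, short events, int timeout_ms)
{
    ws->pfd.events = events;
    return poll(&ws->pfd, 1, timeout_ms) > 0;
}
```

In the real server the saving is larger than this sketch suggests, since a WaitEventSet can wrap kernel objects (such as an epoll instance) that are comparatively expensive to create on every wait.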
[ { "msg_contents": "Hi Team,\n\nThe PG 12.2 server is crashing on setting the jit_above_cost param. Below\nis the output.\n\npostgres=# select version();\n\n version\n\n\n----------------------------------------------------------------------------------------------------------------------------\n\n PostgreSQL 12.2 on x86_64-apple-darwin, compiled by Apple LLVM version 6.0\n(clang-600.0.54) (based on LLVM 3.5svn), 64-bit\n\n(1 row)\n\n\npostgres=# SET jit_above_cost=10;\n\nSET\n\npostgres=# SELECT count(*) FROM pg_class;\n\nserver closed the connection unexpectedly\n\nThis probably means the server terminated abnormally\n\nbefore or while processing the request.\n\nThe connection to the server was lost. Attempting reset: Failed.\n\n!>\n\n-- \nThanks and Regards,\nAditya Toshniwal\npgAdmin Hacker | Sr. Software Engineer | EnterpriseDB India | Pune\n\"Don't Complain about Heat, Plant a TREE\"", "msg_date": "Mon, 24 Feb 2020 12:16:08 +0530", "msg_from": "Aditya Toshniwal <aditya.toshniwal@enterprisedb.com>", "msg_from_op": true, "msg_subject": "PG v12.2 - Setting jit_above_cost is causing the server to crash" }, { "msg_contents": "++hackers.\n\nOn Mon, Feb 24, 2020 at 12:16 PM Aditya Toshniwal <\naditya.toshniwal@enterprisedb.com> wrote:\n\n> Hi Team,\n>\n> The PG 12.2 server is crashing on setting the jit_above_cost param. Below\n> is the output.\n>\n> postgres=# select version();\n>\n> version\n>\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------\n>\n> PostgreSQL 12.2 on x86_64-apple-darwin, compiled by Apple LLVM version\n> 6.0 (clang-600.0.54) (based on LLVM 3.5svn), 64-bit\n>\n> (1 row)\n>\n>\n> postgres=# SET jit_above_cost=10;\n>\n> SET\n>\n> postgres=# SELECT count(*) FROM pg_class;\n>\n> server closed the connection unexpectedly\n>\n> This probably means the server terminated abnormally\n>\n> before or while processing the request.\n>\n> The connection to the server was lost. Attempting reset: Failed.\n>\n> !>\n>\n> --\n> Thanks and Regards,\n> Aditya Toshniwal\n> pgAdmin Hacker | Sr. Software Engineer | EnterpriseDB India | Pune\n> \"Don't Complain about Heat, Plant a TREE\"\n>\n\n\n-- \nThanks and Regards,\nAditya Toshniwal\npgAdmin Hacker | Sr. Software Engineer | EnterpriseDB India | Pune\n\"Don't Complain about Heat, Plant a TREE\"", "msg_date": "Mon, 24 Feb 2020 12:29:06 +0530", "msg_from": "Aditya Toshniwal <aditya.toshniwal@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PG v12.2 - Setting jit_above_cost is causing the server to crash" }, { "msg_contents": "\nHi,\n\nOn 2020-02-24 12:16:08 +0530, Aditya Toshniwal wrote:\n> The PG 12.2 server is crashing on setting the jit_above_cost param. Below\n> is the output.\n\n> postgres=# SELECT count(*) FROM pg_class;\n> \n> server closed the connection unexpectedly\n> \n> This probably means the server terminated abnormally\n> \n> before or while processing the request.\n> \n> The connection to the server was lost. Attempting reset: Failed.\n\nThis isn't reproducible here. 
Are you sure that you're running on a\nclean installation?\n\nIf not, we'd at least need a backtrace to figure out what's going on.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 23 Feb 2020 23:16:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: PG v12.2 - Setting jit_above_cost is causing the server to crash" }, { "msg_contents": "Hi Andres,\n\nOn Mon, Feb 24, 2020 at 12:46 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> Hi,\n>\n> On 2020-02-24 12:16:08 +0530, Aditya Toshniwal wrote:\n> > The PG 12.2 server is crashing on setting the jit_above_cost param. Below\n> > is the output.\n>\n> > postgres=# SELECT count(*) FROM pg_class;\n> >\n> > server closed the connection unexpectedly\n> >\n> > This probably means the server terminated abnormally\n> >\n> > before or while processing the request.\n> >\n> > The connection to the server was lost. Attempting reset: Failed.\n>\n> This isn't reproducible here. Are you sure that you're running on a\n> clean installation?\n>\nYes I did a fresh installation using installer provided here -\nhttps://www.enterprisedb.com/downloads/postgresql\n\n>\n> If not, we'd at least need a backtrace to figure out what's going on.\n>\nPlease let me know how can I provide required info.\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n\n\n-- \nThanks and Regards,\nAditya Toshniwal\npgAdmin Hacker | Sr. Software Engineer | EnterpriseDB India | Pune\n\"Don't Complain about Heat, Plant a TREE\"", "msg_date": "Mon, 24 Feb 2020 13:05:19 +0530", "msg_from": "Aditya Toshniwal <aditya.toshniwal@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PG v12.2 - Setting jit_above_cost is causing the server to crash" }, { "msg_contents": ">PostgreSQL 12.2 ...\n>compiled by Apple LLVM version 6.0(clang-600.0.54)* (based on LLVM 3.5svn)*\n\nthis LLVM is >= 3.9 ?\n\naccording to the docs : \"Build with support for LLVM based JIT compilation\n.... *The minimum required version of LLVM is currently 3.9.*\"\nhttps://www.postgresql.org/docs/12/install-procedure.html\nRegards,\nImre\n\n\nAditya Toshniwal <aditya.toshniwal@enterprisedb.com> ezt írta (időpont:\n2020. febr. 24., H, 7:46):\n\n> Hi Team,\n>\n> The PG 12.2 server is crashing on setting the jit_above_cost param. Below\n> is the output.\n>\n> postgres=# select version();\n>\n> version\n>\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------\n>\n> PostgreSQL 12.2 on x86_64-apple-darwin, compiled by Apple LLVM version\n> 6.0 (clang-600.0.54) (based on LLVM 3.5svn), 64-bit\n>\n> (1 row)\n>\n>\n> postgres=# SET jit_above_cost=10;\n>\n> SET\n>\n> postgres=# SELECT count(*) FROM pg_class;\n>\n> server closed the connection unexpectedly\n>\n> This probably means the server terminated abnormally\n>\n> before or while processing the request.\n>\n> The connection to the server was lost. Attempting reset: Failed.\n>\n> !>\n>\n> --\n> Thanks and Regards,\n> Aditya Toshniwal\n> pgAdmin Hacker | Sr. Software Engineer | EnterpriseDB India | Pune\n> \"Don't Complain about Heat, Plant a TREE\"\n>\n", "msg_date": "Mon, 24 Feb 2020 08:44:33 +0100", "msg_from": "Imre Samu <pella.samu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG v12.2 - Setting jit_above_cost is causing the server to crash" }, { "msg_contents": "Aditya Toshniwal <aditya.toshniwal@enterprisedb.com> writes:\n> On Mon, Feb 24, 2020 at 12:46 PM Andres Freund <andres@anarazel.de> wrote:\n>> This isn't reproducible here. 
Are you sure that you're running on a\n>> clean installation?\n\n> Yes I did a fresh installation using installer provided here -\n> https://www.enterprisedb.com/downloads/postgresql\n\nThere is apparently something wrong with the JIT stuff in EDB's 12.2\nbuild for macOS. At least, that's the conclusion I came to after\noff-list discussion with the submitter of bug #16264, which has pretty\nmuch exactly this symptom (especially if you're seeing \"signal 9\"\nreports in the postmaster log). For him, either disabling JIT or\nreverting to 12.1 made it go away.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Feb 2020 09:50:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG v12.2 - Setting jit_above_cost is causing the server to crash" }, { "msg_contents": "On Mon, Feb 24, 2020 at 8:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Aditya Toshniwal <aditya.toshniwal@enterprisedb.com> writes:\n> > On Mon, Feb 24, 2020 at 12:46 PM Andres Freund <andres@anarazel.de>\n> wrote:\n> >> This isn't reproducible here. Are you sure that you're running on a\n> >> clean installation?\n>\n> > Yes I did a fresh installation using installer provided here -\n> > https://www.enterprisedb.com/downloads/postgresql\n>\n> There is apparently something wrong with the JIT stuff in EDB's 12.2\n> build for macOS. At least, that's the conclusion I came to after\n> off-list discussion with the submitter of bug #16264, which has pretty\n> much exactly this symptom (especially if you're seeing \"signal 9\"\n> reports in the postmaster log). For him, either disabling JIT or\n> reverting to 12.1 made it go away.\n>\nYes it seems like issue with EDB build. It works fine on macOS < Catalina.\n\n>\n> regards, tom lane\n>\n\n\n-- \nThanks and Regards,\nAditya Toshniwal\npgAdmin Hacker | Sr. 
Software Engineer | EnterpriseDB India | Pune\n\"Don't Complain about Heat, Plant a TREE\"\n\nOn Mon, Feb 24, 2020 at 8:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Aditya Toshniwal <aditya.toshniwal@enterprisedb.com> writes:\n> On Mon, Feb 24, 2020 at 12:46 PM Andres Freund <andres@anarazel.de> wrote:\n>> This isn't reproducible here. Are you sure that you're running on a\n>> clean installation?\n\n> Yes I did a fresh installation using installer provided here -\n> https://www.enterprisedb.com/downloads/postgresql\n\nThere is apparently something wrong with the JIT stuff in EDB's 12.2\nbuild for macOS.  At least, that's the conclusion I came to after\noff-list discussion with the submitter of bug #16264, which has pretty\nmuch exactly this symptom (especially if you're seeing \"signal 9\"\nreports in the postmaster log).  For him, either disabling JIT or\nreverting to 12.1 made it go away.Yes it seems like issue with EDB build. It works fine on macOS < Catalina. \n\n                        regards, tom lane\n-- Thanks and Regards,Aditya ToshniwalpgAdmin Hacker | Sr. Software Engineer | EnterpriseDB India | Pune\"Don't Complain about Heat, Plant a TREE\"", "msg_date": "Tue, 25 Feb 2020 10:53:35 +0530", "msg_from": "Aditya Toshniwal <aditya.toshniwal@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PG v12.2 - Setting jit_above_cost is causing the server to crash" }, { "msg_contents": "Hi\n\nOn Thu, Feb 27, 2020 at 12:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Aditya Toshniwal <aditya.toshniwal@enterprisedb.com> writes:\n> > On Mon, Feb 24, 2020 at 12:46 PM Andres Freund <andres@anarazel.de>\n> wrote:\n> >> This isn't reproducible here. Are you sure that you're running on a\n> >> clean installation?\n>\n> > Yes I did a fresh installation using installer provided here -\n> > https://www.enterprisedb.com/downloads/postgresql\n>\n> There is apparently something wrong with the JIT stuff in EDB's 12.2\n> build for macOS. 
At least, that's the conclusion I came to after\n> off-list discussion with the submitter of bug #16264, which has pretty\n> much exactly this symptom (especially if you're seeing \"signal 9\"\n> reports in the postmaster log). For him, either disabling JIT or\n> reverting to 12.1 made it go away.\n>\n\nWe've been looking into this;\n\nApple started a notarisation process some time ago, designed to mark their\napplications as conforming to various security requirements, but prior to\nCatalina it was essentially optional. When Catalina was released, they made\nnotarisation for distributed software a requirement, but had the process\nissue warnings for non-compliance. As-of the end of January, those warnings\nbecame hard errors, so now our packages must be notarised, and for that to\nhappen, must be hardened by linking with a special runtime and having\nsecurely time stamped signatures on every binary before being checked and\nnotarised as such by Apple. Without that, users would have to disable\nsecurity features on their systems before they could run our software.\n\nOur packages are being successfully notarised at the moment, because that's\nessentially done through a static analysis. We can (and have) added what\nApple call an entitlement in test builds which essentially puts a flag in\nthe notarisation for the product that declares that it will do JIT\noperations, however, it seems that this alone is not enough and that in\naddition to the entitlement, we also need to include the MAP_JIT flag in\nmmap() calls. 
See\nhttps://developer.apple.com/documentation/security/hardened_runtime and\nhttps://developer.apple.com/documentation/bundleresources/entitlements/com_apple_security_cs_allow-jit\n\nWe're working on trying to test a patch for that at the moment.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nHiOn Thu, Feb 27, 2020 at 12:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Aditya Toshniwal <aditya.toshniwal@enterprisedb.com> writes:\n> On Mon, Feb 24, 2020 at 12:46 PM Andres Freund <andres@anarazel.de> wrote:\n>> This isn't reproducible here. Are you sure that you're running on a\n>> clean installation?\n\n> Yes I did a fresh installation using installer provided here -\n> https://www.enterprisedb.com/downloads/postgresql\n\nThere is apparently something wrong with the JIT stuff in EDB's 12.2\nbuild for macOS.  At least, that's the conclusion I came to after\noff-list discussion with the submitter of bug #16264, which has pretty\nmuch exactly this symptom (especially if you're seeing \"signal 9\"\nreports in the postmaster log).  For him, either disabling JIT or\nreverting to 12.1 made it go away.We've been looking into this;Apple started a notarisation process some time ago, designed to mark their applications as conforming to various security requirements, but prior to Catalina it was essentially optional. When Catalina was released, they made notarisation for distributed software a requirement, but had the process issue warnings for non-compliance. As-of the end of January, those warnings became hard errors, so now our packages must be notarised, and for that to happen, must be hardened by linking with a special runtime and having securely time stamped signatures on every binary before being checked and notarised as such by Apple. 
Without that, users would have to disable security features on their systems before they could run our software.Our packages are being successfully notarised at the moment, because that's essentially done through a static analysis. We can (and have) added what Apple call an entitlement in test builds which essentially puts a flag in the notarisation for the product that declares that it will do JIT operations, however, it seems that this alone is not enough and that in addition to the entitlement, we also need to include the MAP_JIT flag in mmap() calls. See https://developer.apple.com/documentation/security/hardened_runtime and https://developer.apple.com/documentation/bundleresources/entitlements/com_apple_security_cs_allow-jitWe're working on trying to test a patch for that at the moment. -- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEnterpriseDB UK: http://www.enterprisedb.comThe Enterprise PostgreSQL Company", "msg_date": "Thu, 27 Feb 2020 12:53:26 +0000", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: PG v12.2 - Setting jit_above_cost is causing the server to crash" }, { "msg_contents": "Hi,\n\nOn Thu, Feb 27, 2020 at 6:23 PM Dave Page <dpage@pgadmin.org> wrote:\n\n> Hi\n>\n> On Thu, Feb 27, 2020 at 12:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Aditya Toshniwal <aditya.toshniwal@enterprisedb.com> writes:\n>> > On Mon, Feb 24, 2020 at 12:46 PM Andres Freund <andres@anarazel.de>\n>> wrote:\n>> >> This isn't reproducible here. Are you sure that you're running on a\n>> >> clean installation?\n>>\n>> > Yes I did a fresh installation using installer provided here -\n>> > https://www.enterprisedb.com/downloads/postgresql\n>>\n>> There is apparently something wrong with the JIT stuff in EDB's 12.2\n>> build for macOS. 
At least, that's the conclusion I came to after\n>> off-list discussion with the submitter of bug #16264, which has pretty\n>> much exactly this symptom (especially if you're seeing \"signal 9\"\n>> reports in the postmaster log). For him, either disabling JIT or\n>> reverting to 12.1 made it go away.\n>>\n>\n> We've been looking into this;\n>\n> Apple started a notarisation process some time ago, designed to mark their\n> applications as conforming to various security requirements, but prior to\n> Catalina it was essentially optional. When Catalina was released, they made\n> notarisation for distributed software a requirement, but had the process\n> issue warnings for non-compliance. As-of the end of January, those warnings\n> became hard errors, so now our packages must be notarised, and for that to\n> happen, must be hardened by linking with a special runtime and having\n> securely time stamped signatures on every binary before being checked and\n> notarised as such by Apple. Without that, users would have to disable\n> security features on their systems before they could run our software.\n>\n> Our packages are being successfully notarised at the moment, because\n> that's essentially done through a static analysis. We can (and have) added\n> what Apple call an entitlement in test builds which essentially puts a flag\n> in the notarisation for the product that declares that it will do JIT\n> operations, however, it seems that this alone is not enough and that in\n> addition to the entitlement, we also need to include the MAP_JIT flag in\n> mmap() calls. See\n> https://developer.apple.com/documentation/security/hardened_runtime and\n> https://developer.apple.com/documentation/bundleresources/entitlements/com_apple_security_cs_allow-jit\n>\n> We're working on trying to test a patch for that at the moment.\n>\n>\nWe have fixed the issue. To explain in brief, It was related to the\nhardened runtime. 
Hardening the runtime can produce such issues, and\ntherefore Apple provides the runtime exceptions. We were previously using\nan entitlement \"com.apple.security.cs.disable-library-validation\" for the\napp bundle and then tried adding\n\"com.apple.security.cs.allow-unsigned-executable-memory\" but still the\nquery would crash the server process when JIT is enabled. Later we applied\nthese entitlements to the PG binaries (postgres, pg_ctl and others) and the\nbundles (llvmjit.so and others) which actually resolved the problem.\n\nThe updates will be released in a couple of days.\n\n-- \n> Dave Page\n> Blog: http://pgsnake.blogspot.com\n> Twitter: @pgsnake\n>\n> EnterpriseDB UK: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n-- \nSandeep Thakkar\n\nHi,On Thu, Feb 27, 2020 at 6:23 PM Dave Page <dpage@pgadmin.org> wrote:HiOn Thu, Feb 27, 2020 at 12:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Aditya Toshniwal <aditya.toshniwal@enterprisedb.com> writes:\n> On Mon, Feb 24, 2020 at 12:46 PM Andres Freund <andres@anarazel.de> wrote:\n>> This isn't reproducible here. Are you sure that you're running on a\n>> clean installation?\n\n> Yes I did a fresh installation using installer provided here -\n> https://www.enterprisedb.com/downloads/postgresql\n\nThere is apparently something wrong with the JIT stuff in EDB's 12.2\nbuild for macOS.  At least, that's the conclusion I came to after\noff-list discussion with the submitter of bug #16264, which has pretty\nmuch exactly this symptom (especially if you're seeing \"signal 9\"\nreports in the postmaster log).  For him, either disabling JIT or\nreverting to 12.1 made it go away.We've been looking into this;Apple started a notarisation process some time ago, designed to mark their applications as conforming to various security requirements, but prior to Catalina it was essentially optional. 
When Catalina was released, they made notarisation for distributed software a requirement, but had the process issue warnings for non-compliance. As-of the end of January, those warnings became hard errors, so now our packages must be notarised, and for that to happen, must be hardened by linking with a special runtime and having securely time stamped signatures on every binary before being checked and notarised as such by Apple. Without that, users would have to disable security features on their systems before they could run our software.Our packages are being successfully notarised at the moment, because that's essentially done through a static analysis. We can (and have) added what Apple call an entitlement in test builds which essentially puts a flag in the notarisation for the product that declares that it will do JIT operations, however, it seems that this alone is not enough and that in addition to the entitlement, we also need to include the MAP_JIT flag in mmap() calls. See https://developer.apple.com/documentation/security/hardened_runtime and https://developer.apple.com/documentation/bundleresources/entitlements/com_apple_security_cs_allow-jitWe're working on trying to test a patch for that at the moment. We have fixed the issue. To explain in brief, It was related to the hardened runtime. Hardening the runtime can produce such issues, and therefore Apple provides the runtime exceptions. We were previously using an entitlement \"com.apple.security.cs.disable-library-validation\" for the app bundle and then tried adding \"com.apple.security.cs.allow-unsigned-executable-memory\" but still the query would crash the server process when JIT is enabled. 
Later we applied these entitlements to the PG binaries (postgres, pg_ctl and others) and the bundles (llvmjit.so and others) which actually resolved the problem.The updates will be released in a couple of days.-- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEnterpriseDB UK: http://www.enterprisedb.comThe Enterprise PostgreSQL Company\n-- Sandeep Thakkar", "msg_date": "Thu, 19 Mar 2020 16:46:47 +0530", "msg_from": "Sandeep Thakkar <sandeep.thakkar@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: PG v12.2 - Setting jit_above_cost is causing the server to crash" } ]
[ { "msg_contents": "This removes another relic from the old nmake-based Windows build. \nversion_stamp.pl put version number information into win32ver.rc. But \nwin32ver.rc already gets other version number information from the \npreprocessor at build time, so it would make more sense if all version \nnumber information would be handled in the same way and we don't have \ntwo places that do it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 24 Feb 2020 09:02:02 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Remove win32ver.rc from version_stamp.pl" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> This removes another relic from the old nmake-based Windows build. \n> version_stamp.pl put version number information into win32ver.rc. But \n> win32ver.rc already gets other version number information from the \n> preprocessor at build time, so it would make more sense if all version \n> number information would be handled in the same way and we don't have \n> two places that do it.\n\nThis has a minor conflict in Solution.pm according to the cfbot.\n\nIn general, while I'm on board with the idea, I wonder whether it\nwouldn't be smarter to keep on defining PG_MAJORVERSION as a string,\nand just add PG_MAJORVERSION_NUM alongside of it. This would\neliminate some hunks from the patch in places where it's more\nconvenient to have the version as a string, and it would avoid\nwhat could otherwise be a pretty painful cross-version incompatibility\nfor extensions. 
We already provide PG_VERSION in both forms, so\nI don't see any inconsistency in doing likewise for PG_MAJORVERSION.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 01 Mar 2020 17:51:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove win32ver.rc from version_stamp.pl" }, { "msg_contents": "On 2020-03-01 23:51, Tom Lane wrote:\n> In general, while I'm on board with the idea, I wonder whether it\n> wouldn't be smarter to keep on defining PG_MAJORVERSION as a string,\n> and just add PG_MAJORVERSION_NUM alongside of it. This would\n> eliminate some hunks from the patch in places where it's more\n> convenient to have the version as a string, and it would avoid\n> what could otherwise be a pretty painful cross-version incompatibility\n> for extensions. We already provide PG_VERSION in both forms, so\n> I don't see any inconsistency in doing likewise for PG_MAJORVERSION.\n\nAgreed. Here is another patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 9 Mar 2020 08:43:25 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove win32ver.rc from version_stamp.pl" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-03-01 23:51, Tom Lane wrote:\n>> In general, while I'm on board with the idea, I wonder whether it\n>> wouldn't be smarter to keep on defining PG_MAJORVERSION as a string,\n>> and just add PG_MAJORVERSION_NUM alongside of it.\n\n> Agreed. Here is another patch.\n\nThis version LGTM. (I can't actually test the Windows aspects\nof this, but I assume you did.)\n\nI'm wondering a little bit whether it'd be worth back-patching the\nadditions of the new #defines. That would cut about five years off\nthe time till they could be relied on by extensions. 
However,\nI'm not sure anyone is eager to rely on them, so it may not be\nworth the effort.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Mar 2020 10:55:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove win32ver.rc from version_stamp.pl" }, { "msg_contents": "On 2020-03-09 15:55, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2020-03-01 23:51, Tom Lane wrote:\n>>> In general, while I'm on board with the idea, I wonder whether it\n>>> wouldn't be smarter to keep on defining PG_MAJORVERSION as a string,\n>>> and just add PG_MAJORVERSION_NUM alongside of it.\n> \n>> Agreed. Here is another patch.\n> \n> This version LGTM. (I can't actually test the Windows aspects\n> of this, but I assume you did.)\n\ncommitted\n\n> I'm wondering a little bit whether it'd be worth back-patching the\n> additions of the new #defines. That would cut about five years off\n> the time till they could be relied on by extensions. However,\n> I'm not sure anyone is eager to rely on them, so it may not be\n> worth the effort.\n\nI doubt external code really needs these symbols. You can always use \nPG_VERSION_NUM.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 10 Mar 2020 11:54:58 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove win32ver.rc from version_stamp.pl" } ]
[ { "msg_contents": "This is a change to make the bitmap of updated columns available to a \ntrigger in TriggerData. This is the same idea as was recently done to \ngenerated columns [0]: Generic triggers such as tsvector_update_trigger \ncan use this information to skip work if the columns they are interested \nin haven't changed. With the generated columns change, perhaps this \nisn't so interesting anymore, but I suspect a lot of existing \ninstallations still use tsvector_update_trigger. In any case, since I \nhad already written the code, I figured I post it here. Perhaps there \nare other use cases.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/b05e781a-fa16-6b52-6738-761181204567@2ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 24 Feb 2020 10:58:03 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "allow trigger to get updated columns" }, { "msg_contents": "> On 24 Feb 2020, at 10:58, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> This is a change to make the bitmap of updated columns available to a trigger in TriggerData. This is the same idea as was recently done to generated columns [0]: Generic triggers such as tsvector_update_trigger can use this information to skip work if the columns they are interested in haven't changed. With the generated columns change, perhaps this isn't so interesting anymore, but I suspect a lot of existing installations still use tsvector_update_trigger. In any case, since I had already written the code, I figured I post it here. Perhaps there are other use cases.\n\nI wouldn't at all be surprised if there are usecases for this in the wild, and\ngiven the very minor impact I absolutely think it's worth doing. 
The patches\nboth apply, compile and pass tests without warnings.\n\nThe 0001 refactoring patch seems a clear win to me.\n\nIn the 0002 patch:\n\n+ For <literal>UPDATE</literal> triggers, a bitmap set indicating the\n+ columns that were updated by the triggering command. Generic trigger\n\nIs it worth pointing out that tg_updatedcols will be NULL rather than an empty\nBitmapset for non-UPDATE triggers? bitmapset.c treats NULL as an empty bitmap\nbut since a Bitmapset can be allocated but empty, maybe it's worth being\nexplicit to help developers?\n\nThere isn't really a test suite that exercises this IIUC, how about adding\nsomething like the attached diff to contrib/lo? It seemed like a lower impact\nchange than widening test_tsvector.\n\n+1 on the patchset, marking this entry as Ready For Committer.\n\ncheers ./daniel", "msg_date": "Thu, 5 Mar 2020 13:53:01 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: allow trigger to get updated columns" }, { "msg_contents": "On 2020-03-05 13:53, Daniel Gustafsson wrote:\n> The 0001 refactoring patch seems a clear win to me.\n> \n> In the 0002 patch:\n> \n> + For <literal>UPDATE</literal> triggers, a bitmap set indicating the\n> + columns that were updated by the triggering command. Generic trigger\n> \n> Is it worth pointing out that tg_updatedcols will be NULL rather than an empty\n> Bitmapset for non-UPDATE triggers? bitmapset.c treats NULL as an empty bitmap\n> but since a Bitmapset can be allocated but empty, maybe it's worth being\n> explicit to help developers?\n\ndone\n\n> There isn't really a test suite that exercises this IIUC, how about adding\n> something like the attached diff to contrib/lo? 
It seemed like a lower impact\n> change than widening test_tsvector.\n\ndone\n\n> +1 on the patchset, marking this entry as Ready For Committer.\n\nand done\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 9 Mar 2020 09:39:31 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: allow trigger to get updated columns" } ]
[ { "msg_contents": "I'm trying to figure out what's specific about RS_EPHEMERAL and RS_TEMPORARY\nslot kinds. The following comment (see definition of the\nReplicationSlotPersistency enumeration) tells when each kind is dropped\n\n * Slots marked as PERSISTENT are crash-safe and will not be dropped when\n * released. Slots marked as EPHEMERAL will be dropped when released or after\n * restarts. Slots marked TEMPORARY will be dropped at the end of a session\n * or on error.\n...\ntypedef enum ReplicationSlotPersistency\n\nHowever I don't see the actual difference: whenever ReplicationSlotCleanup()\nis called (on error or session end), ReplicationSlotRelease() has already been\ncalled too. And as for server restart, I see that RestoreSlotFromDisk()\ndiscards both EPHEMERAL and TEMPORARY. Do we really need both of them?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 25 Feb 2020 08:30:57 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "RS_EPHEMERAL vs RS_TEMPORARY" }, { "msg_contents": "On 2020-02-25 08:30, Antonin Houska wrote:\n> I'm trying to figure out what's specific about RS_EPHEMERAL and RS_TEMPORARY\n> slot kinds. The following comment (see definition of the\n> ReplicationSlotPersistency enumeration) tells when each kind is dropped\n\nThe general idea is that an \"ephemeral\" slot is a future persistent slot \nthat is not completely initialized yet. If there is a crash and you \nfind an ephemeral slot, you can clean it up. The name is perhaps a bit \nodd, you can think of it as an uninitialized one. A temporary slot is \none that behaves like a temporary table: It is removed at the end of a \nsession.\n\nPerhaps the implementation differences are not big or are none, but it's \nrelevant for reporting. For example, the pg_replication_slots view \nshows which slots are temporary. 
You wouldn't want to show an ephemeral \nslot as temporary.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 25 Feb 2020 09:01:23 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: RS_EPHEMERAL vs RS_TEMPORARY" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-02-25 08:30, Antonin Houska wrote:\n> > I'm trying to figure out what's specific about RS_EPHEMERAL and RS_TEMPORARY\n> > slot kinds. The following comment (see definition of the\n> > ReplicationSlotPersistency enumeration) tells when each kind is dropped\n> \n> The general idea is that an \"ephemeral\" slot is a future persistent slot that\n> is not completely initialized yet. If there is a crash and you find an\n> ephemeral slot, you can clean it up. The name is perhaps a bit odd, you can\n> think of it as an uninitialized one. A temporary slot is one that behaves\n> like a temporary table: It is removed at the end of a session.\n> \n> Perhaps the implementation differences are not big or are none, but it's\n> relevant for reporting. For example, the pg_replication_slots view shows\n> which slots are temporary. You wouldn't want to show an ephemeral slot as\n> temporary.\n\nok, so only comments seem to be the problem.\n\nAnyway, the reason I started to think about it was that I noticed an Assert()\nstatement I considered inaccurate. Does this patch make sense?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Tue, 25 Feb 2020 13:37:40 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: RS_EPHEMERAL vs RS_TEMPORARY" } ]
[ { "msg_contents": "Hi,\n\nI'm thinking to change the progress reporting views like\npg_stat_progress_vacuum so that they also report the time when\nthe target command was started and the time when the phase was\nlast changed. IMO this is very helpful to estimate the remaining\ntime required to complete the current phase. For example,\nif pg_stat_progress_vacuum reports that the current phase\n\"scanning heap\" started 1 hour before and the progress percentage\nis 50%, we can imagine the remaining time of this phase would be\napproximately 1 hour. Of course, this is not the exact estimation,\nbut would be helpful as a hint for operations. Thought?\n\n\tProgressCommandType st_progress_command;\n\tOid\t\t\tst_progress_command_target;\n\tint64\t\tst_progress_param[PGSTAT_NUM_PROGRESS_PARAM];\n\nWe cannot add those timestamp fields simply in the progress\nreporting views because the type of the fields in PgBackendStatus\nstruct is only int64 for now, as the above. So I'm thinking to add\nnew TimestampTz fields (maybe four fields are enough even for\nfuture usage?) into PgBackendStatus and make pg_stat_get_progress_info()\nreport those fields as timestamp. This change leads to increase\nin the size of PgBackendStatus, as demerit. But I like this approach\nbecause it's simple and intuitive.\n\nAnother idea is to store TimestampTz values in int64 fields\n(for example, always store TimestampTz values in the last two int64\nfields) and make pg_stat_get_progress_info() report not only int64\nbut also those TimestampTz fields. This approach doesn't increase\nthe struct size, but is a bit tricky. Also int64 fields that TimestampTz\nvalues will be stored into might be already used to store int64 values\nin some existing extensions. If we need to handle this case, further\ntricky way might need to be implemented. That sounds not good.\n\nTherefore, I'd like to implement the first idea that I described, to\nadd the timestamp fields in the progress reporting view. 
Thought?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Tue, 25 Feb 2020 17:13:24 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "progress reporting views and TimestampTz fields" }, { "msg_contents": "On 25/02/2020 09:13, Fujii Masao wrote:\n> Hi,\n> \n> I'm thinking to change the progress reporting views like\n> pg_stat_progress_vacuum so that they also report the time when\n> the target command was started and the time when the phase was\n> last changed. IMO this is very helpful to estimate the remaining\n> time required to complete the current phase. For example,\n> if pg_stat_progress_vacuum reports that the current phase\n> \"scanning heap\" started 1 hour before and the progress percentage\n> is 50%, we can imagine the remaining time of this phase would be\n> approximately 1 hour. Of course, this is not the exact estimation,\n> but would be helpful as a hint for operations. Thought?\n> \n>     ProgressCommandType st_progress_command;\n>     Oid            st_progress_command_target;\n>     int64        st_progress_param[PGSTAT_NUM_PROGRESS_PARAM];\n> \n> We cannnot add those timestamp fields simply in the progress\n> reporting views because the type of the fields in PgBackendStatus\n> struct is only int64 for now, as the above. So I'm thinking to add\n> new TimestampTz fields (maybe four fields are enough even for\n> future usager?) into PgBackendStatus and make pg_stat_get_progress_info()\n> report those fields as timestamp. This change leads to increase\n> in the size of PgBackendStatus, as demerit. But I like this approach\n> because it's simple and intuitive.\n> \n> Another idea is to store TimestampTz values in int64 fields\n> (for example, always store TimestampTz values in the last two int64\n> fields) and make pg_stat_get_progress_info() report not only int64\n> but also those TimestampTz fields. 
This approach doesn't increase\n> the struct size, but is a bit tricky. Also int64 fields that TimestampTz\n> values will be stored into might be already used to store int64 values\n> in some existing extensions. If we need to handle this case, further\n> tricky way might need to be implemented. That sounds not good.\n> \n> Therefore, I'd like to implement the first idea that I described, to\n> add the timestamp fields in the progress reporting view. Thought?\n\n+1 on the idea. No opinion on the implementation.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 25 Feb 2020 10:58:54 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: progress reporting views and TimestampTz fields" } ]
[ { "msg_contents": "I've noticed that two variables in RelationCopyStorage() are defined in a\nscope higher than necessary. Please see the patch.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Tue, 25 Feb 2020 09:35:52 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Define variables in the approprieate scope" }, { "msg_contents": "On Tue, Feb 25, 2020 at 09:35:52AM +0100, Antonin Houska wrote:\n> I've noticed that two variables in RelationCopyStorage() are defined in a\n> scope higher than necessary. Please see the patch.\n\nIt seems cleaner to me to allocate the variables once before the loop\nstarts, rather than for each loop iteration.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 18 Mar 2020 19:08:47 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Define variables in the approprieate scope" }, { "msg_contents": "On 2020-Mar-18, Bruce Momjian wrote:\n\n> On Tue, Feb 25, 2020 at 09:35:52AM +0100, Antonin Houska wrote:\n> > I've noticed that two variables in RelationCopyStorage() are defined in a\n> > scope higher than necessary. Please see the patch.\n> \n> It seems cleaner to me to allocate the variables once before the loop\n> starts, rather than for each loop iteration.\n\nIf we're talking about personal preference, my own is what Antonin\nshows. 
However, since disagreement has been expressed, I think we\nshould only change it if the generated code turns out better.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 23 Mar 2020 13:00:24 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Define variables in the approprieate scope" }, { "msg_contents": "On Mon, Mar 23, 2020 at 01:00:24PM -0300, Alvaro Herrera wrote:\n> On 2020-Mar-18, Bruce Momjian wrote:\n> \n> > On Tue, Feb 25, 2020 at 09:35:52AM +0100, Antonin Houska wrote:\n> > > I've noticed that two variables in RelationCopyStorage() are defined in a\n> > > scope higher than necessary. Please see the patch.\n> > \n> > It seems cleaner to me to allocate the variables once before the loop\n> > starts, rather than for each loop iteration.\n> \n> If we're talking about personal preference, my own is what Antonin\n> shows. 
However, since disagreement has been expressed, I think we\n>> should only change it if the generated code turns out better.\n> \n> I am fine with either usage, frankly. I was just pointing out what\n> might be the benefit of the current coding.\n\nPersonal opinion here. I tend to prefer putting variable declarations\ninto the inner portions because it makes it easier to reason about the\ncode, though I agree that this concept does not need to be applied all\nthe time.\n--\nMichael", "msg_date": "Tue, 24 Mar 2020 10:04:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Define variables in the approprieate scope" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Mar 23, 2020 at 08:50:55PM -0400, Bruce Momjian wrote:\n>> I am fine with either usage, frankly. I was just pointing out what\n>> might be the benefit of the current coding.\n\n> Personal opinion here. I tend to prefer putting variable declarations\n> into the inner portions because it makes it easier to reason about the\n> code, though I agree that this concept does not need to be applied all\n> the time.\n\nMy vote is to not make this sort of change until there's another\nreason to touch the code in question. All changes create hazards for\nback-patching, and I don't think this change is worth it on its own.\nBut if there are going to be diffs in the immediate vicinity anyway,\nthen sure.\n\n(I'm feeling a bit sensitized to this, perhaps, because of recent\nunpleasant experience with back-patching b4570d33a. That didn't touch\nvery much code, and the functions in question seemed like fairly stagnant\nbackwaters of the code base, so it should not have been painful to\nback-patch ... 
but it was, because of assorted often-cosmetic changes\nin said code.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Mar 2020 22:41:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Define variables in the approprieate scope" } ]
[ { "msg_contents": "Greetings,\n\nI am a Computer Science student at National and Kapodistrian University of\nAthens, and I would like to be a part of this year's GsoC program.\nDuring my academic courses, I developed an interest on databases (back-end)\nand the main language I used during my academic career is C.\n\nA project I recently developed was In-Memory SUM SQL Query Execution on\nRDBMS-like queries using SortMerge Join. As input data we used the data\nprovided from SIGMOD 2018 contest.\n(github link : https://github.com/kchasialis/Query-Compiler-Executor)\n\nTherefore, the project idea \"Read/Write transaction-level routing in\nOdyssea\" highly intrigued me and I would like to be a part of it.\nI believe that I have the necessary background to live up to your\nexpectations and I would like to learn more things about this project and\nways I can contribute to its development.\n\nThanks in advance, Kostas.\n\nGreetings, I am a Computer Science student at National and Kapodistrian University of Athens, and I would like to be a part of this year's GsoC program.During my academic courses, I developed an interest on databases (back-end) and the main language I used during my academic career is C.A project I recently developed was In-Memory SUM SQL Query Execution on RDBMS-like queries using SortMerge Join. 
As input data we used the data provided from SIGMOD 2018 contest.(github link : https://github.com/kchasialis/Query-Compiler-Executor)Therefore, the project idea \"Read/Write transaction-level routing in Odyssea\" highly intrigued me and I would like to be a part of it.I believe that I have the necessary background to live up to your expectations and I would like to learn more things about this project and ways I can contribute to its development.Thanks in advance, Kostas.", "msg_date": "Tue, 25 Feb 2020 17:56:12 +0200", "msg_from": "Kostas Chasialis <koschasialis@gmail.com>", "msg_from_op": true, "msg_subject": "[GsoC] Read/write transaction-level routing in Odyssey Project Idea" }, { "msg_contents": "Hello once again,\n\nI send this response to remind you of my initial email because it might have been lost among others. \n\nI also want to know, if I can familiarize myself with your project idea before even sending my proposal.\n\nThanks for your time, I understand you receive hundreds of emails.\n\n> On 25 Feb 2020, at 17:56, Kostas Chasialis <koschasialis@gmail.com> wrote:\n> \n> \n> Greetings, \n> \n> I am a Computer Science student at National and Kapodistrian University of Athens, and I would like to be a part of this year's GsoC program.\n> During my academic courses, I developed an interest on databases (back-end) and the main language I used during my academic career is C.\n> \n> A project I recently developed was In-Memory SUM SQL Query Execution on RDBMS-like queries using SortMerge Join. 
As input data we used the data provided from SIGMOD 2018 contest.\n> (github link : https://github.com/kchasialis/Query-Compiler-Executor)\n> \n> Therefore, the project idea \"Read/Write transaction-level routing in Odyssea\" highly intrigued me and I would like to be a part of it.\n> I believe that I have the necessary background to live up to your expectations and I would like to learn more things about this project and ways I can contribute to its development.\n> \n> Thanks in advance, Kostas.\n> \n\nHello once again,I send this response to remind you of my initial email because it might have been lost among others. I also want to know, if I can familiarize myself with your project idea before even sending my proposal.Thanks for your time, I understand you receive hundreds of emails.On 25 Feb 2020, at 17:56, Kostas Chasialis <koschasialis@gmail.com> wrote:Greetings, I am a Computer Science student at National and Kapodistrian University of Athens, and I would like to be a part of this year's GsoC program.During my academic courses, I developed an interest on databases (back-end) and the main language I used during my academic career is C.A project I recently developed was In-Memory SUM SQL Query Execution on RDBMS-like queries using SortMerge Join. 
As input data we used the data provided from SIGMOD 2018 contest.(github link : https://github.com/kchasialis/Query-Compiler-Executor)Therefore, the project idea \"Read/Write transaction-level routing in Odyssea\" highly intrigued me and I would like to be a part of it.I believe that I have the necessary background to live up to your expectations and I would like to learn more things about this project and ways I can contribute to its development.Thanks in advance, Kostas.", "msg_date": "Thu, 27 Feb 2020 21:23:24 +0200", "msg_from": "koschasialis@gmail.com", "msg_from_op": false, "msg_subject": "Re: [GsoC] Read/write transaction-level routing in Odyssey Project\n Idea" }, { "msg_contents": "Greetings,\n\n* koschasialis@gmail.com (koschasialis@gmail.com) wrote:\n> I send this response to remind you of my initial email because it might have been lost among others. \n\nAndrey is the one listed as a possible mentor for that project- I've\nadded him to the CC list. Hopefully he'll get back to you soon.\n\nThanks,\n\nStephen", "msg_date": "Thu, 27 Feb 2020 14:31:52 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: [GsoC] Read/write transaction-level routing in Odyssey Project\n Idea" } ]
[ { "msg_contents": "Greetings,\n\nI was trying to use postgresql database as a backend with Ejabberd XMPP\nserver for load test (Using TSUNG).\n\nNoticed, while using Mnesia the “simultaneous users and open TCP/UDP\nconnections” graph in Tsung report is showing consistency, but while using\nPostgres, we see drop in connections during 100 to 500 seconds of runtime,\nand then recovering and staying consistent.\n\nI have been trying to figure out what the issue could be without any\nsuccess. I am kind of a noob in this technology, and hoping for some help\nfrom the good people from the community to understand the problem and how\nto fix this. Below are some details..\n\n· Postgres server utilization is low ( Avg load 1, Highest Cpu\nutilization 26%, lowest freemem 9000)\n\n\n\nTsung graph:\n[image: image.png]\n Graph 1: Postgres 12 Backen\n[image: image.png]\n\n Graph 2: Mnesia backend\n\n\n· Ejabberd Server: Ubuntu 16.04, 16 GB ram, 4 core CPU.\n\n· Postgres on remote server: same config\n\n· Errors encountered during the same time: error_connect_etimedout\n(same outcome for other 2 tests)\n\n· *Tsung Load: *512 Bytes message size, user arrival rate 50/s,\n80k registered users.\n\n· Postgres server utilization is low ( Avg load 1, Highest Cpu\nutilization 26%, lowest freemem 9000)\n\n· Same tsung.xm and userlist used for the tests in Mnesia and\nPostgres.\n\n*Postgres Configuration used:*\nshared_buffers = 4GB\neffective_cache_size = 12GB\nmaintenance_work_mem = 1GB\ncheckpoint_completion_target = 0.9\nwal_buffers = 16MB\ndefault_statistics_target = 100\nrandom_page_cost = 4\neffective_io_concurrency = 2\nwork_mem = 256MB\nmin_wal_size = 1GB\nmax_wal_size = 2GB\nmax_worker_processes = 4\nmax_parallel_workers_per_gather = 2\nmax_parallel_workers = 4\nmax_parallel_maintenance_workers = 2\nmax_connections=50000\n\n\nKindly help understanding this behavior. 
Some advice on how to fix this\nwill be a big help.\n\n\n\nThanks,\n\nDipanjan", "msg_date": "Tue, 25 Feb 2020 21:58:38 +0530", "msg_from": "Dipanjan Ganguly <dipagnjan@gmail.com>", "msg_from_op": true, "msg_subject": "Connections dropping while using Postgres backend DB with Ejabberd" }, { "msg_contents": "Hi Dipanjan\n\nPlease do not post to all the postgresql mailing lists; let's keep this on one\nlist at a time, on the general list.\n\nAm I reading this correctly: 10,000 to 50,000 open connections?\nPostgresql really is not meant to serve that many open connections.\nDue to the design of Postgresql, each client connection can use up to the\nwork_mem of 256MB plus additional for parallel processes. Memory will be\nexhausted long before 50,000 connections is reached.\n\nI'm not surprised Postgresql and the server is showing issues long before\n10K connections is reached. The OS is probably throwing everything to the\nswap file, and connections are dropped or time out.\n\nYou should be using a connection pooler to service this kind of load so\nPostgresql does not exhaust resources just from the open connections.\nhttps://www.pgbouncer.org/\n\n\nOn Tue, Feb 25, 2020 at 11:29 AM Dipanjan Ganguly <dipagnjan@gmail.com>\nwrote:\n\n> Greetings,\n>\n> I was trying to use postgresql database as a backend with Ejabberd XMPP\n> server for load test (Using TSUNG).\n>\n> Noticed, while using Mnesia the “simultaneous users and open TCP/UDP\n> connections” graph in Tsung report is showing consistency, but while using\n> Postgres, we see drop in connections during 100 to 500 seconds of runtime,\n> and then recovering and staying consistent.\n>\n> I have been trying to figure out what the issue could be without any\n> success. I am kind of a noob in this technology, and hoping for some help\n> from the good people from the community to understand the problem and how\n> to fix this. 
Below are some details..\n>\n> · Postgres server utilization is low ( Avg load 1, Highest Cpu\n> utilization 26%, lowest freemem 9000)\n>\n>\n>\n> Tsung graph:\n> [image: image.png]\n> Graph 1: Postgres 12 Backen\n> [image: image.png]\n>\n> Graph 2: Mnesia backend\n>\n>\n> · Ejabberd Server: Ubuntu 16.04, 16 GB ram, 4 core CPU.\n>\n> · Postgres on remote server: same config\n>\n> · Errors encountered during the same time:\n> error_connect_etimedout (same outcome for other 2 tests)\n>\n> · *Tsung Load: *512 Bytes message size, user arrival rate 50/s,\n> 80k registered users.\n>\n> · Postgres server utilization is low ( Avg load 1, Highest Cpu\n> utilization 26%, lowest freemem 9000)\n>\n> · Same tsung.xm and userlist used for the tests in Mnesia and\n> Postgres.\n>\n> *Postgres Configuration used:*\n> shared_buffers = 4GB\n> effective_cache_size = 12GB\n> maintenance_work_mem = 1GB\n> checkpoint_completion_target = 0.9\n> wal_buffers = 16MB\n> default_statistics_target = 100\n> random_page_cost = 4\n> effective_io_concurrency = 2\n> work_mem = 256MB\n> min_wal_size = 1GB\n> max_wal_size = 2GB\n> max_worker_processes = 4\n> max_parallel_workers_per_gather = 2\n> max_parallel_workers = 4\n> max_parallel_maintenance_workers = 2\n> max_connections=50000\n>\n>\n> Kindly help understanding this behavior. Some advice on how to fix this\n> will be a big help .\n>\n>\n>\n> Thanks,\n>\n> Dipanjan\n>", "msg_date": "Tue, 25 Feb 2020 12:01:24 -0500", "msg_from": "Justin <zzzzz.graf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Connections dropping while using Postgres backend DB with\n Ejabberd" }, { "msg_contents": "work_mem can be used many times per connection given it is per sort, hash,\nor other operations and as mentioned that can be multiplied if the query is\nhandled with parallel workers. 
I am guessing the server has 16GB memory\ntotal given shared_buffers and effective_cache_size, and a more reasonable\nwork_mem setting might be on the order of 32-64MB.\n\nDepending on the type of work being done and how quickly the application\nreleases the db connection once it is done, max connections might be on the\norder of 4-20x the number of cores I would expect. If more simultaneous\nusers need to be serviced, a connection pooler like pgbouncer or pgpool\nwill allow those connections to be re-used quickly.\n\nThese numbers are generalizations based on my experience. Others with more\nexperience may have different configurations to recommend.", "msg_date": "Tue, 25 Feb 2020 10:20:34 -0700", "msg_from": "Michael Lewis <mlewis@entrata.com>", "msg_from_op": false, "msg_subject": "Re: Connections dropping while using Postgres backend DB with\n Ejabberd" },
{ "msg_contents": "Thanks Michael for the recommendation and clarification.\n\nWill try with 32 MB on my next run.\n\nBR,\nDipanjan\n\nOn Tue, Feb 25, 2020 at 10:51 PM Michael Lewis <mlewis@entrata.com> wrote:\n\n> work_mem can be used many times per connection given it is per sort, hash,\n> or other operations and as mentioned that can be multiplied if the query is\n> handled with parallel workers. I am guessing the server has 16GB memory\n> total given shared_buffers and effective_cache_size, and a more reasonable\n> work_mem setting might be on the order of 32-64MB.\n>\n> Depending on the type of work being done and how quickly the application\n> releases the db connection once it is done, max connections might be on the\n> order of 4-20x the number of cores I would expect. If more simultaneous\n> users need to be serviced, a connection pooler like pgbouncer or pgpool\n> will allow those connections to be re-used quickly.\n>\n> These numbers are generalizations based on my experience. Others with more\n> experience may have different configurations to recommend.\n>\n>>\n", "msg_date": "Wed, 26 Feb 2020 00:46:52 +0530", "msg_from": "Dipanjan Ganguly <dipagnjan@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Connections dropping while using Postgres backend DB with\n Ejabberd" },
{ "msg_contents": "Hi Dipanjan\n\nIf the connections are not being closed and are left open, you should see\n50,000 processes running on the server, because PostgreSQL creates/forks a\nnew process for each connection.\n\nJust having that many processes running will exhaust resources; I would\nconfirm that the processes are still running.\nYou can use the command\n\nps aux |wc -l\n\nto get a count of the number of processes.\nBeyond just opening the connection, are there any actions such as Select *\nfrom sometable being fired off to measure performance?\n\nAttempting to open and leave 50K connections open should exhaust the server\nresources long before reaching 50K.\n\nSomething is off here; I would be looking into how this test actually works,\nhow the connections are opened, and the commands it sends to PostgreSQL.\n\n\n\nOn Tue, Feb 25, 2020 at 2:12 PM Dipanjan Ganguly <dipagnjan@gmail.com>\nwrote:\n\n> Hi Justin,\n>\n> Thanks for your insight.\n>\n> I agree with you completely, but as mentioned in my previous email, the\n> fact that Postgres server resource utilization is less *\"( Avg load 1,\n> Highest Cpu utilization 26%, lowest 
freemem 9000)*\" and it recovers at a\n> certain point then consistently reaches close to 50 k , is what confusing\n> me..\n>\n> Legends from the Tsung report:\n> users\n> Number of simultaneous users (it's session has started, but not yet\n> finished).connectednumber of users with an opened TCP/UDP connection\n> (example: for HTTP, during a think time, the TCP connection can be closed\n> by the server, and it won't be reopened until the thinktime has expired)\n> I have also used pgcluu to monitor the events. Sharing the stats below..*Memory\n> information*\n>\n> - 15.29 GB Total memory\n> - 8.79 GB Free memory\n> - 31.70 MB Buffers\n> - 5.63 GB Cached\n> - 953.12 MB Total swap\n> - 953.12 MB Free swap\n> - 13.30 MB Page Tables\n> - 3.19 GB Shared memory\n>\n> Any thoughts ??!! 🤔🤔\n>\n> Thanks,\n> Dipanjan\n>\n>\n> On Tue, Feb 25, 2020 at 10:31 PM Justin <zzzzz.graf@gmail.com> wrote:\n>\n>> Hi Dipanjan\n>>\n>> Please do not post to all the postgresql mailing list lets keep this on\n>> one list at a time, Keep this on general list\n>>\n>> Am i reading this correctly 10,000 to 50,000 open connections.\n>> Postgresql really is not meant to serve that many open connections.\n>> Due to design of Postgresql each client connection can use up to the\n>> work_mem of 256MB plus additional for parallel processes. Memory will be\n>> exhausted long before 50,0000 connections is reached\n>>\n>> I'm not surprised Postgresql and the server is showing issues long before\n>> 10K connections is reached. 
The OS is probably throwing everything to the\n>> swap file and see connections dropped or time out.\n>>\n>> Should be using a connection pooler to service this kind of load so the\n>> Postgresql does not exhaust resources just from the open connections.\n>> https://www.pgbouncer.org/\n>>\n>>\n>> On Tue, Feb 25, 2020 at 11:29 AM Dipanjan Ganguly <dipagnjan@gmail.com>\n>> wrote:\n>>\n>>> Greetings,\n>>>\n>>> I was trying to use postgresql database as a backend with Ejabberd XMPP\n>>> server for load test (Using TSUNG).\n>>>\n>>> Noticed, while using Mnesia the “simultaneous users and open TCP/UDP\n>>> connections” graph in Tsung report is showing consistency, but while using\n>>> Postgres, we see drop in connections during 100 to 500 seconds of runtime,\n>>> and then recovering and staying consistent.\n>>>\n>>> I have been trying to figure out what the issue could be without any\n>>> success. I am kind of a noob in this technology, and hoping for some help\n>>> from the good people from the community to understand the problem and how\n>>> to fix this. 
Below are some details..\n>>>\n>>> · Postgres server utilization is low ( Avg load 1, Highest Cpu\n>>> utilization 26%, lowest freemem 9000)\n>>>\n>>>\n>>>\n>>> Tsung graph:\n>>> [image: image.png]\n>>> Graph 1: Postgres 12 Backen\n>>> [image: image.png]\n>>>\n>>> Graph 2: Mnesia backend\n>>>\n>>>\n>>> · Ejabberd Server: Ubuntu 16.04, 16 GB ram, 4 core CPU.\n>>>\n>>> · Postgres on remote server: same config\n>>>\n>>> · Errors encountered during the same time:\n>>> error_connect_etimedout (same outcome for other 2 tests)\n>>>\n>>> · *Tsung Load: *512 Bytes message size, user arrival rate\n>>> 50/s, 80k registered users.\n>>>\n>>> · Postgres server utilization is low ( Avg load 1, Highest Cpu\n>>> utilization 26%, lowest freemem 9000)\n>>>\n>>> · Same tsung.xm and userlist used for the tests in Mnesia and\n>>> Postgres.\n>>>\n>>> *Postgres Configuration used:*\n>>> shared_buffers = 4GB\n>>> effective_cache_size = 12GB\n>>> maintenance_work_mem = 1GB\n>>> checkpoint_completion_target = 0.9\n>>> wal_buffers = 16MB\n>>> default_statistics_target = 100\n>>> random_page_cost = 4\n>>> effective_io_concurrency = 2\n>>> work_mem = 256MB\n>>> min_wal_size = 1GB\n>>> max_wal_size = 2GB\n>>> max_worker_processes = 4\n>>> max_parallel_workers_per_gather = 2\n>>> max_parallel_workers = 4\n>>> max_parallel_maintenance_workers = 2\n>>> max_connections=50000\n>>>\n>>>\n>>> Kindly help understanding this behavior. Some advice on how to fix this\n>>> will be a big help .\n>>>\n>>>\n>>>\n>>> Thanks,\n>>>\n>>> Dipanjan\n>>>\n>>", "msg_date": "Tue, 25 Feb 2020 14:35:05 -0500", "msg_from": "Justin <zzzzz.graf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Connections dropping while using Postgres backend DB with\n Ejabberd" }, { "msg_contents": "Hi Justin,\nI have already checked running Postgres processes and strangely never\ncounted more than 20.\n\n I'll check as you recommend on how ejabberd to postgresql connectivity\nworks. May be the answer lies there. 
Will get back if I find something.\n\nThanks for giving some direction to my thoughts.\n\nGood talk. 👍👍\n\nBR,\nDipanjan\n\n\nOn Wed 26 Feb, 2020 1:05 am Justin, <zzzzz.graf@gmail.com> wrote:\n\n> Hi Dipanjan\n>\n> If the connections are not being closed and left open , you should see\n> 50,000 processes running on the server because postgresql creates/forks a\n> new process for each connection\n>\n> Just having that many processes running will exhaust resources, I would\n> confirm that the process are still running.\n> you can use the command\n>\n> ps aux |wc -l\n>\n> to get a count on the number of processes\n> Beyond just opening the connection are there any actions such as Select *\n> from sometable being fired off to measure performance?\n>\n> Attempting to open and leave 50K connections open should exhaust the\n> server resources long before reaching 50K\n>\n> Something is off here I would be looking into how this test actually\n> works, how the connections are opened, and commands it sends to Postgresql\n>\n>\n>\n> On Tue, Feb 25, 2020 at 2:12 PM Dipanjan Ganguly <dipagnjan@gmail.com>\n> wrote:\n>\n>> Hi Justin,\n>>\n>> Thanks for your insight.\n>>\n>> I agree with you completely, but as mentioned in my previous email, the\n>> fact that Postgres server resource utilization is less *\"( Avg load 1,\n>> Highest Cpu utilization 26%, lowest freemem 9000)*\" and it recovers at\n>> a certain point then consistently reaches close to 50 k , is what confusing\n>> me..\n>>\n>> Legends from the Tsung report:\n>> users\n>> Number of simultaneous users (it's session has started, but not yet\n>> finished).connectednumber of users with an opened TCP/UDP connection\n>> (example: for HTTP, during a think time, the TCP connection can be closed\n>> by the server, and it won't be reopened until the thinktime has expired)\n>> I have also used pgcluu to monitor the events. 
Sharing the stats below..*Memory\n>> information*\n>>\n>> - 15.29 GB Total memory\n>> - 8.79 GB Free memory\n>> - 31.70 MB Buffers\n>> - 5.63 GB Cached\n>> - 953.12 MB Total swap\n>> - 953.12 MB Free swap\n>> - 13.30 MB Page Tables\n>> - 3.19 GB Shared memory\n>>\n>> Any thoughts ??!! 🤔🤔\n>>\n>> Thanks,\n>> Dipanjan\n>>\n>>\n>> On Tue, Feb 25, 2020 at 10:31 PM Justin <zzzzz.graf@gmail.com> wrote:\n>>\n>>> Hi Dipanjan\n>>>\n>>> Please do not post to all the postgresql mailing list lets keep this on\n>>> one list at a time, Keep this on general list\n>>>\n>>> Am i reading this correctly 10,000 to 50,000 open connections.\n>>> Postgresql really is not meant to serve that many open connections.\n>>> Due to design of Postgresql each client connection can use up to the\n>>> work_mem of 256MB plus additional for parallel processes. Memory will be\n>>> exhausted long before 50,0000 connections is reached\n>>>\n>>> I'm not surprised Postgresql and the server is showing issues long\n>>> before 10K connections is reached. The OS is probably throwing everything\n>>> to the swap file and see connections dropped or time out.\n>>>\n>>> Should be using a connection pooler to service this kind of load so the\n>>> Postgresql does not exhaust resources just from the open connections.\n>>> https://www.pgbouncer.org/\n>>>\n>>>\n>>> On Tue, Feb 25, 2020 at 11:29 AM Dipanjan Ganguly <dipagnjan@gmail.com>\n>>> wrote:\n>>>\n>>>> Greetings,\n>>>>\n>>>> I was trying to use postgresql database as a backend with Ejabberd XMPP\n>>>> server for load test (Using TSUNG).\n>>>>\n>>>> Noticed, while using Mnesia the “simultaneous users and open TCP/UDP\n>>>> connections” graph in Tsung report is showing consistency, but while using\n>>>> Postgres, we see drop in connections during 100 to 500 seconds of runtime,\n>>>> and then recovering and staying consistent.\n>>>>\n>>>> I have been trying to figure out what the issue could be without any\n>>>> success. 
I am kind of a noob in this technology, and hoping for some help\n>>>> from the good people from the community to understand the problem and how\n>>>> to fix this. Below are some details..\n>>>>\n>>>> · Postgres server utilization is low ( Avg load 1, Highest Cpu\n>>>> utilization 26%, lowest freemem 9000)\n>>>>\n>>>>\n>>>>\n>>>> Tsung graph:\n>>>> [image: image.png]\n>>>> Graph 1: Postgres 12 Backen\n>>>> [image: image.png]\n>>>>\n>>>> Graph 2: Mnesia backend\n>>>>\n>>>>\n>>>> · Ejabberd Server: Ubuntu 16.04, 16 GB ram, 4 core CPU.\n>>>>\n>>>> · Postgres on remote server: same config\n>>>>\n>>>> · Errors encountered during the same time:\n>>>> error_connect_etimedout (same outcome for other 2 tests)\n>>>>\n>>>> · *Tsung Load: *512 Bytes message size, user arrival rate\n>>>> 50/s, 80k registered users.\n>>>>\n>>>> · Postgres server utilization is low ( Avg load 1, Highest Cpu\n>>>> utilization 26%, lowest freemem 9000)\n>>>>\n>>>> · Same tsung.xm and userlist used for the tests in Mnesia and\n>>>> Postgres.\n>>>>\n>>>> *Postgres Configuration used:*\n>>>> shared_buffers = 4GB\n>>>> effective_cache_size = 12GB\n>>>> maintenance_work_mem = 1GB\n>>>> checkpoint_completion_target = 0.9\n>>>> wal_buffers = 16MB\n>>>> default_statistics_target = 100\n>>>> random_page_cost = 4\n>>>> effective_io_concurrency = 2\n>>>> work_mem = 256MB\n>>>> min_wal_size = 1GB\n>>>> max_wal_size = 2GB\n>>>> max_worker_processes = 4\n>>>> max_parallel_workers_per_gather = 2\n>>>> max_parallel_workers = 4\n>>>> max_parallel_maintenance_workers = 2\n>>>> max_connections=50000\n>>>>\n>>>>\n>>>> Kindly help understanding this behavior. Some advice on how to fix\n>>>> this will be a big help .\n>>>>\n>>>>\n>>>>\n>>>> Thanks,\n>>>>\n>>>> Dipanjan\n>>>>\n>>>", "msg_date": "Wed, 26 Feb 2020 01:23:57 +0530", "msg_from": "Dipanjan Ganguly <dipagnjan@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Connections dropping while using Postgres backend DB with\n Ejabberd" } ]
[ { "msg_contents": "Hi,\n\nPostGIS 2.5 had raster and vector blended together in a single extension.\nIn PostGIS 3, they were split out into postgis and postgis_raster extensions.\nTo upgrade, there is now a postgis_extensions_upgrade() function, which\nunpackages the raster part out of the postgis extension, upgrades it, and\npackages the raster functions back into postgis_raster by utilizing FROM\nUNPACKAGED.\nRemoval of FROM UNPACKAGED breaks the PostGIS 2.5 -> 3.0 upgrade path, and\nwe haven't yet found a proper replacement since such removal wasn't\nsomething we were expecting.\n\nOn Tue, Feb 25, 2020 at 11:37 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Darafei \"Komяpa\" Praliaskouski (me@komzpa.net) wrote:\n> > can it be raised on pgsql-hackers as a thing impacting PostGIS upgrade path?\n>\n> Why is it impacting the PostGIS upgrade path? The FROM UNPACKAGED was\n> never intended to be used as an upgrade path..\n>\n> Thanks,\n>\n> Stephen\n> _______________________________________________\n> postgis-devel mailing list\n> postgis-devel@lists.osgeo.org\n> https://lists.osgeo.org/mailman/listinfo/postgis-devel\n\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n\n\n", "msg_date": "Tue, 25 Feb 2020 23:52:47 +0300", "msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>", "msg_from_op": true, "msg_subject": "Re: [postgis-devel] About EXTENSION from UNPACKAGED on PostgreSQL 13" }, { "msg_contents": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net> writes:\n> Removal of FROM UNPACKAGED breaks PostGIS 2.5 -> 3.0 upgrade path, and\n> we haven't yet found a proper replacement since such removal wasn't\n> something we were expecting.\n\nI'd agree with Stephen's comment:\n\n> On Tue, Feb 25, 2020 at 11:37 PM Stephen Frost <sfrost@snowman.net> wrote:\n>> Why is it impacting the PostGIS upgrade path? 
The FROM UNPACKAGED was\n>> never intended to be used as an upgrade path..\n\nThis seems like a serious abuse of the FROM option, not to mention being\nfundamentally unsafe --- the whole problem with FROM is that you can't\nbe entirely sure what the starting state is. So unless you can make a\npretty strong case as to why you need to do it like that, and that there's\nno other way to handle it in the many months before v13 ships, I'm not\ngoing to have a lot of sympathy.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Feb 2020 16:00:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [postgis-devel] About EXTENSION from UNPACKAGED on PostgreSQL 13" }, { "msg_contents": "Greetings,\n\n* Darafei \"Komяpa\" Praliaskouski (me@komzpa.net) wrote:\n> PostGIS 2.5 had raster and vector blended together in single extension.\n> In PostGIS 3, they were split out into postgis and postgis_raster extensions.\n\nFor my 2c, at least, I still don't really get why that split was done.\n\n> To upgrade, there is now postgis_extensions_upgrade() function, that\n> unpackages the raster part out of postgis extensions, upgrades it, and\n> packages raster functions back into postgis_raster by utilizing FROM\n> UNPACKAGED.\n> Removal of FROM UNPACKAGED breaks PostGIS 2.5 -> 3.0 upgrade path, and\n> we haven't yet found a proper replacement since such removal wasn't\n> something we were expecting.\n\nI agree that there probably isn't a very good path to allow an extension\nto be split up like that without having to drop some things. An\nalternative would have been to *not* split up postgis, but rather to\nhave a postgis_raster and a postgis_vector. 
Adding in support for other\nways to migrate a function from one extension to another would make\nsense too.\n\nThanks,\n\nStephen", "msg_date": "Wed, 26 Feb 2020 08:55:03 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: [postgis-devel] About EXTENSION from UNPACKAGED on PostgreSQL 13" }, { "msg_contents": "On Wed, Feb 26, 2020 at 08:55:03AM -0500, Stephen Frost wrote:\n> Greetings,\n> \n> * Darafei \"Komяpa\" Praliaskouski (me@komzpa.net) wrote:\n> > PostGIS 2.5 had raster and vector blended together in single extension.\n> > In PostGIS 3, they were split out into postgis and postgis_raster extensions.\n> \n> For my 2c, at least, I still don't really get why that split was done.\n\nIt's pretty easy to understand: to let user decide what he needs and\nwhat not.\n\n> > Removal of FROM UNPACKAGED breaks PostGIS 2.5 -> 3.0 upgrade path, and\n> > we haven't yet found a proper replacement since such removal wasn't\n> > something we were expecting.\n> \n> I agree that there probably isn't a very good path to allow an extension\n> to be split up like that without having to drop some things. An\n> alternative would have been to *not* split up postgis, but rather to\n> have a postgis_raster and a postgis_vector. Adding in support for other\n> ways to migrate a function from one extension to another would make\n> sense too.\n\nI think support for migrating an object between extensions DOES exist,\nit's just that you cannot use it from extension upgrade scripts.\n\nAnyway pgsql-hackers is not the right place for discussion.\nOn pgsql-hackers we only want to find a future-proof way to \"package\nexisting objects into an extension\". 
If the syntax\n`CREATE EXTENSION <extname> FROM UNPACKAGED` \nhas gone, would it be ok for just:\n`CREATE EXTENSION <extname>`\nto intercept unpackaged objects and package them ?\n\n--strk;\n\n\n", "msg_date": "Wed, 26 Feb 2020 15:13:52 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: [postgis-devel] About EXTENSION from UNPACKAGED on PostgreSQL 13" }, { "msg_contents": "> On 26 Feb 2020, at 15:13, Sandro Santilli <strk@kbt.io> wrote:\n\n> On pgsql-hackers we only want to find a future-proof way to \"package\n> existing objects into an extension\".\n\nWhat is the longterm goal of PostGIS, to use this as a stepping stone to reach\na point where no unpackaged extensions exist; or find a way to continue with\nthe current setup except with syntax that isn't going away?\n\n> If the syntax\n> `CREATE EXTENSION <extname> FROM UNPACKAGED` \n> has gone, would it be ok for just:\n> `CREATE EXTENSION <extname>`\n> to intercept unpackaged objects and package them ?\n\nOverloading the same syntax for creating packaged as well as unpackaged\nextensions sounds like the wrong path to go down.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 26 Feb 2020 15:35:46 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [postgis-devel] About EXTENSION from UNPACKAGED on PostgreSQL 13" }, { "msg_contents": "On Wed, Feb 26, 2020 at 03:35:46PM +0100, Daniel Gustafsson wrote:\n> > On 26 Feb 2020, at 15:13, Sandro Santilli <strk@kbt.io> wrote:\n> \n> > On pgsql-hackers we only want to find a future-proof way to \"package\n> > existing objects into an extension\".\n> \n> What is the longterm goal of PostGIS, to use this as a stepping stone to reach\n> a point where no unpackaged extensions exist; or find a way to continue with\n> the current setup except with syntax that isn't going away?\n\nNo unpackaged extension seems like a good goal in the long term.\n\n> Overloading the same syntax for creating 
packaged as well as unpackaged\n> extensions sounds like the wrong path to go down.\n\nSo what other options would we have to let people upgrade a running\npostgis or postgis_raster outside of the EXTENSION mechanism ?\n\n--strk;\n\n\n", "msg_date": "Wed, 26 Feb 2020 16:06:14 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: [postgis-devel] About EXTENSION from UNPACKAGED on PostgreSQL 13" }, { "msg_contents": "Greetings,\n\n* Sandro Santilli (strk@kbt.io) wrote:\n> On pgsql-hackers we only want to find a future-proof way to \"package\n> existing objects into an extension\". If the syntax\n> `CREATE EXTENSION <extname> FROM UNPACKAGED` \n> has gone, would it be ok for just:\n> `CREATE EXTENSION <extname>`\n> to intercept unpackaged objects and package them ?\n\nNo. The reason it was removed is because it's not going to be safe to\ndo when we have trusted extensions. Perhaps it would be possible to\nfigure out a way to make it safe, but the reason FROM UNPACKAGED was\ncreated and existed doesn't apply any more. That PostGIS has been using\nit for something else entirely is unfortunate, but the way to address\nwhat PostGIS needs is to talk about that, not talk about how this ugly\nhack used to work and doesn't any more.\n\nThanks,\n\nStephen", "msg_date": "Wed, 26 Feb 2020 10:37:41 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: [postgis-devel] About EXTENSION from UNPACKAGED on PostgreSQL 13" }, { "msg_contents": "On Wed, Feb 26, 2020 at 10:37:41AM -0500, Stephen Frost wrote:\n> Greetings,\n> \n> * Sandro Santilli (strk@kbt.io) wrote:\n> > On pgsql-hackers we only want to find a future-proof way to \"package\n> > existing objects into an extension\". If the syntax\n> > `CREATE EXTENSION <extname> FROM UNPACKAGED` \n> > has gone, would it be ok for just:\n> > `CREATE EXTENSION <extname>`\n> > to intercept unpackaged objects and package them ?\n> \n> No. 
The reason it was removed is because it's not going to be safe to\n> do when we have trusted extensions.\n\nThis part is not clear to me. You're _assuming_ that the unpackaged--xxx\nwill not make checks, so you _drop_ support for it ? Can't the normal\nextension script also be unsafe for some reason ? Or can't the\nunpackaged-xxx script be made safe by the publishers ? Or, as a last\nresort.. can't you just mark postgis as UNSAFE and still require\nsuperuser, which would give us the same experience as before ?\n\n\n> Perhaps it would be possible to\n> figure out a way to make it safe, but the reason FROM UNPACKAGED was\n> created and existed doesn't apply any more.\n\nWasn't the reason of existance the ability for people to switch from\nnon-extension to extension based installs ?\n\n> That PostGIS has been using\n> it for something else entirely is unfortunate, but the way to address\n> what PostGIS needs is to talk about that, not talk about how this ugly\n> hack used to work and doesn't any more.\n\nSeriously, what was FROM UNPACKAGED meant to be used for ?\n\n--strk;\n\n\n", "msg_date": "Wed, 26 Feb 2020 16:52:13 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: [postgis-devel] About EXTENSION from UNPACKAGED on PostgreSQL 13" }, { "msg_contents": "OK, well, what PostGIS needs is the ability for 'ALTER EXTENSION …. UPDATE foo’ to end up with two extensions in the end, ‘foo’ and ‘foo_new’. That’s what’s happening in the 2.x -> 3 upgrade process, as ‘postgis’ becomes ‘postgis’ and ‘postgis_raster’. 
\n\nPresumably 15 years out from the 1.x -> 2.x we can stop worrying about bundling unpackaged postgis into an extension, and just recommend a hard upgrade dump/restore to the hardy souls still running 1.x.\n\nP.\n\n> On Feb 26, 2020, at 7:37 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Greetings,\n> \n> * Sandro Santilli (strk@kbt.io) wrote:\n>> On pgsql-hackers we only want to find a future-proof way to \"package\n>> existing objects into an extension\". If the syntax\n>> `CREATE EXTENSION <extname> FROM UNPACKAGED` \n>> has gone, would it be ok for just:\n>> `CREATE EXTENSION <extname>`\n>> to intercept unpackaged objects and package them ?\n> \n> No. The reason it was removed is because it's not going to be safe to\n> do when we have trusted extensions. Perhaps it would be possible to\n> figure out a way to make it safe, but the reason FROM UNPACKAGED was\n> created and existed doesn't apply any more. That PostGIS has been using\n> it for something else entirely is unfortunate, but the way to address\n> what PostGIS needs is to talk about that, not talk about how this ugly\n> hack used to work and doesn't any more.\n> \n> Thanks,\n> \n> Stephen\n> _______________________________________________\n> postgis-devel mailing list\n> postgis-devel@lists.osgeo.org\n> https://lists.osgeo.org/mailman/listinfo/postgis-devel\n\n\n\n", "msg_date": "Wed, 26 Feb 2020 07:52:24 -0800", "msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>", "msg_from_op": false, "msg_subject": "Re: [postgis-devel] About EXTENSION from UNPACKAGED on PostgreSQL 13" }, { "msg_contents": "On 2/26/20 10:52 AM, Sandro Santilli wrote:\n\n> This part is not clear to me. You're _assuming_ that the unpackaged--xxx\n> will not make checks, so you _drop_ support for it ? Can't the normal\n> extension script also be unsafe for some reason ? Or can't the\n> unpackaged-xxx script be made safe by the publishers ? Or, as a last\n> resort.. 
can't you just mark postgis as UNSAFE and still require\n> superuser, which would give us the same experience as before ?\n\nI am wondering: does anything in the PG 13 change preclude writing\na postgis_raster--unpackaged.sql script that could be applied with\nCREATE EXTENSION postgis_raster VERSION unpackaged;\nand would do perhaps nothing at all, or merely confirm that the\nright unpackaged things are present and are the right things...\n\n... from which an ALTER EXTENSION postgis_raster UPDATE TO 3.0;\nwould naturally run the existing postgis_raster--unpackaged--3.0.sql\nand execute all of its existing ALTER EXTENSION ... ADD operations?\n\nHas the disadvantage of being goofy, but possibly the advantage of\nbeing implementable in the current state of affairs.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 26 Feb 2020 11:18:43 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: [postgis-devel] About EXTENSION from UNPACKAGED on PostgreSQL 13" }, { "msg_contents": "> Presumably 15 years out from the 1.x -> 2.x we can stop worrying about\n> bundling unpackaged postgis into an extension, and just recommend a hard\n> upgrade dump/restore to the hardy souls still running 1.x.\n> \n> P.\n> \n\nWe don't need to worry about 1.x cause 1.x can only do a hard upgrade to 2 or 3. 
We never supported soft upgrade from 1.x\nEasy solution there is just to install postgis extension and do pg_restore/postgis_restore of your data.\n\nSo it's really just the 2.1 -> 3 that are of concern.\nI think now is a fine time to encourage everyone to upgrade to 3 if they can so they don't need to suffer any crazy solutions we come up with :)\n\nTurn this into a convenient emergency.\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Wed, 26 Feb 2020 15:11:29 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [postgis-devel] About EXTENSION from UNPACKAGED on PostgreSQL 13" }, { "msg_contents": "On Wed, Feb 26, 2020 at 11:18:43AM -0500, Chapman Flack wrote:\n> On 2/26/20 10:52 AM, Sandro Santilli wrote:\n> \n> > This part is not clear to me. You're _assuming_ that the unpackaged--xxx\n> > will not make checks, so you _drop_ support for it ? Can't the normal\n> > extension script also be unsafe for some reason ? Or can't the\n> > unpackaged-xxx script be made safe by the publishers ? Or, as a last\n> > resort.. can't you just mark postgis as UNSAFE and still require\n> > superuser, which would give us the same experience as before ?\n> \n> I am wondering: does anything in the PG 13 change preclude writing\n> a postgis_raster--unpackaged.sql script that could be applied with\n> CREATE EXTENSION postgis_raster VERSION unpackaged;\n> and would do perhaps nothing at all, or merely confirm that the\n> right unpackaged things are present and are the right things...\n> \n> ... from which an ALTER EXTENSION postgis_raster UPDATE TO 3.0;\n> would naturally run the existing postgis_raster--unpackaged--3.0.sql\n> and execute all of its existing ALTER EXTENSION ... ADD operations?\n> \n> Has the disadvantage of being goofy, but possibly the advantage of\n> being implementable in the current state of affairs.\n\nThanks for this hint, yes, seems to be technically feasible, as well\nas doing packaging in the extension creation scripts. 
Only... this\nwould basically work around the intentionally removed syntax, which\nStephen Frost was against (still unclear to me why)...\n\n--strk;\n\n\n", "msg_date": "Thu, 27 Feb 2020 09:32:24 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: [postgis-devel] About EXTENSION from UNPACKAGED on PostgreSQL 13" }, { "msg_contents": "On Thu, Feb 27, 2020 at 09:32:24AM +0100, Sandro Santilli wrote:\n> On Wed, Feb 26, 2020 at 11:18:43AM -0500, Chapman Flack wrote:\n> > On 2/26/20 10:52 AM, Sandro Santilli wrote:\n> > \n> > > This part is not clear to me. You're _assuming_ that the unpackaged--xxx\n> > > will not make checks, so you _drop_ support for it ? Can't the normal\n> > > extension script also be unsafe for some reason ? Or can't the\n> > > unpackaged-xxx script be made safe by the publishers ? Or, as a last\n> > > resort.. can't you just mark postgis as UNSAFE and still require\n> > > superuser, which would give us the same experience as before ?\n> > \n> > I am wondering: does anything in the PG 13 change preclude writing\n> > a postgis_raster--unpackaged.sql script that could be applied with\n> > CREATE EXTENSION postgis_raster VERSION unpackaged;\n> > and would do perhaps nothing at all, or merely confirm that the\n> > right unpackaged things are present and are the right things...\n> > \n> > ... from which an ALTER EXTENSION postgis_raster UPDATE TO 3.0;\n> > would naturally run the existing postgis_raster--unpackaged--3.0.sql\n> > and execute all of its existing ALTER EXTENSION ... ADD operations?\n> > \n> > Has the disadvantage of being goofy, but possibly the advantage of\n> > being implementable in the current state of affairs.\n> \n> Thanks for this hint, yes, seems to be technically feasible, as well\n> as doing packaging in the extension creation scripts. Only... 
this\n> would basically work around the intentionally removed syntax, which\n> Stephen Frost was against (still unclear to me why)...\n\nNOTE: my suggestion was to directly have CREATE EXTENSION do the\npackaging, which would give the same level of security as the\nworkaround suggested here, but with less hops.\n\n--strk;\n\n\n", "msg_date": "Thu, 27 Feb 2020 09:58:21 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: [postgis-devel] About EXTENSION from UNPACKAGED on PostgreSQL 13" }, { "msg_contents": "Hi,\n\nOn 2020-02-26 16:52:13 +0100, Sandro Santilli wrote:\n> This part is not clear to me. You're _assuming_ that the unpackaged--xxx\n> will not make checks, so you _drop_ support for it ? Can't the normal\n> extension script also be unsafe for some reason ?\n\nYes. But it's at least plausible to make it safe. But in the case of an\nindeterminate start state there's basically no way to make it safe. If\nan attacker has entire control over the start state, you really can't\nwrite a non-trivial upgrade script that safely manipulates that state.\n\n\n> Or can't the unpackaged-xxx script be made safe by the publishers ?\n\nPretty much.\n\n\n> Or, as a last resort.. can't you just mark postgis as UNSAFE and still\n> require superuser, which would give us the same experience as before ?\n\nYes, we could potentially do that. But it's also a huge trap. And users\nwant to have the option of trusted extensions.\n\n\n> > Perhaps it would be possible to\n> > figure out a way to make it safe, but the reason FROM UNPACKAGED was\n> > created and existed doesn't apply any more.\n> \n> Wasn't the reason of existence the ability for people to switch from\n> non-extension to extension based installs ?\n\nYea. But that was many years ago. It is/was a transition\nfunctionality. 
And you're not using it as a way to transition, you're\nusing it to support a somewhat odd separate usecase that nobody ever\ntried to make supported in postgres.\n\n\n> > That PostGIS has been using\n> > it for something else entirely is unfortunate, but the way to address\n> > what PostGIS needs is to talk about that, not talk about how this ugly\n> > hack used to work and doesn't any more.\n> \n> Seriously, what was FROM UNPACKAGED meant to be used for ?\n\n?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 6 Mar 2020 09:29:34 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [postgis-devel] About EXTENSION from UNPACKAGED on PostgreSQL 13" } ]
[ { "msg_contents": "Hi Hackers,\n\nThe other day I was helping someone with pg_upgrade on IRC, and they got\na rather unhelpful error message:\n\n ERROR: could not open version file /path/to/new/cluster/PG_VERSION\n\nIt would have saved some minutes of debugging time if that had included\nthe reason why the open failed, so here's a patch to do so.\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl", "msg_date": "Tue, 25 Feb 2020 23:14:06 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "[PATCH] pg_upgrade: report the reason for failing to open the cluster\n version file" }, { "msg_contents": "> On 26 Feb 2020, at 00:14, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n\n> It would have saved some minutes of debugging time if that had included\n> the reason why the open failed, so here's a patch to do so.\n\n+1 on the attached patch. 
A quick skim across the similar error reportings in\npg_upgrade doesn't turn up any others which lack the more detailed information.\n\n> -\t\tpg_fatal(\"could not open version file: %s\\n\", ver_filename);\n> +\t\tpg_fatal(\"could not open version file \\\"%s\\\": %m\\n\", ver_filename);\n\nA few lines further down from this we report an error in case we are unable to\nparse the file in question:\n\n pg_fatal(\"could not parse PG_VERSION file from %s\\n\", cluster->pgdata);\n\nShould the pgdata argument be quoted there as well, like \\\"%s\\\", to make it\nconsistent for how we report filenames and directories in pg_upgrade?\n\ncheers ./daniel\n\n", "msg_date": "Wed, 26 Feb 2020 00:31:26 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_upgrade: report the reason for failing to open the\n cluster version file" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n\n> A few lines further down from this we report an error in case we are unable to\n> parse the file in question:\n>\n> pg_fatal(\"could not parse PG_VERSION file from %s\\n\", cluster->pgdata);\n>\n> Should the pgdata argument be quoted there as well, like \\\"%s\\\", to make it\n> consistent for how we report filenames and directories in pg_upgrade?\n\nGood point, I agree we should. 
Updated patch attached.\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen", "msg_date": "Tue, 25 Feb 2020 23:55:06 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_upgrade: report the reason for failing to open the\n cluster version file" }, { "msg_contents": "On Tue, Feb 25, 2020 at 11:55:06PM +0000, Dagfinn Ilmari Mannsåker wrote:\n> @@ -164,11 +164,11 @@ get_major_server_version(ClusterInfo *cluster)\n> \tsnprintf(ver_filename, sizeof(ver_filename), \"%s/PG_VERSION\",\n> \t\t\t cluster->pgdata);\n> \tif ((version_fd = fopen(ver_filename, \"r\")) == NULL)\n> -\t\tpg_fatal(\"could not open version file: %s\\n\", ver_filename);\n> +\t\tpg_fatal(\"could not open version file \\\"%s\\\": %m\\n\", ver_filename);\n\nHere I think that it would be better to just use \"could not open\nfile\" as we know that we are dealing with a version file already\nthanks to ver_filename.\n\n> \tif (fscanf(version_fd, \"%63s\", cluster->major_version_str) == 0 ||\n> \t\tsscanf(cluster->major_version_str, \"%d.%d\", &v1, &v2) < 1)\n> -\t\tpg_fatal(\"could not parse PG_VERSION file from %s\\n\", cluster->pgdata);\n> +\t\tpg_fatal(\"could not parse PG_VERSION file from \\\"%s\\\"\\n\", cluster->pgdata);\n> \n> \tfclose(version_fd);\n\nNo objection to this one.\n--\nMichael", "msg_date": "Wed, 26 Feb 2020 10:48:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_upgrade: report the reason for failing to open the\n cluster version file" }, { "msg_contents": "> On 26 Feb 2020, at 02:48, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Feb 25, 2020 at 11:55:06PM +0000, Dagfinn Ilmari Mannsåker wrote:\n>> @@ -164,11 +164,11 @@ get_major_server_version(ClusterInfo *cluster)\n>> \tsnprintf(ver_filename, sizeof(ver_filename), \"%s/PG_VERSION\",\n>> \t\t\t 
cluster->pgdata);\n>> \tif ((version_fd = fopen(ver_filename, \"r\")) == NULL)\n>> -\t\tpg_fatal(\"could not open version file: %s\\n\", ver_filename);\n>> +\t\tpg_fatal(\"could not open version file \\\"%s\\\": %m\\n\", ver_filename);\n> \n> Here I think that it would be better to just use \"could not open\n> file\" as we know that we are dealing with a version file already\n> thanks to ver_filename.\n\nIsn't that a removal of detail with very little benefit? Not everyone running\npg_upgrade will know internal filenames, and the ver_filename contains the\npgdata path as well which might provide additional clues in case this goes\nwrong.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 26 Feb 2020 09:56:05 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_upgrade: report the reason for failing to open the\n cluster version file" }, { "msg_contents": "On Wed, Feb 26, 2020 at 9:56 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 26 Feb 2020, at 02:48, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Tue, Feb 25, 2020 at 11:55:06PM +0000, Dagfinn Ilmari Mannsåker wrote:\n> >> @@ -164,11 +164,11 @@ get_major_server_version(ClusterInfo *cluster)\n> >> snprintf(ver_filename, sizeof(ver_filename), \"%s/PG_VERSION\",\n> >> cluster->pgdata);\n> >> if ((version_fd = fopen(ver_filename, \"r\")) == NULL)\n> >> - pg_fatal(\"could not open version file: %s\\n\", ver_filename);\n> >> + pg_fatal(\"could not open version file \\\"%s\\\": %m\\n\", ver_filename);\n> >\n> > Here I think that it would be better to just use \"could not open\n> > file\" as we know that we are dealing with a version file already\n> > thanks to ver_filename.\n>\n> Isn't that a removal of detail with very little benefit? 
Not everyone running\n> pg_upgrade will know internal filenames, and the ver_filename contains the\n> pgdata path as well which might provide additional clues in case this goes\n> wrong.\n\n+1, seems like that would be a regression in value.\n\nCommitted as per Dagfinn's v2.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 26 Feb 2020 10:06:38 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_upgrade: report the reason for failing to open the\n cluster version file" }, { "msg_contents": "On Wed, Feb 26, 2020 at 10:06:38AM +0100, Magnus Hagander wrote:\n> +1, seems like that would be a regression in value.\n\nHaving more generic messages is less work for translators, we have\nPG_VERSION in the file name, and that's more complicated to translate\nin both French and Japanese. No idea about other languages.\n\n> Committed as per Dagfinn's v2.\n\nAnyway, too late :)\n--\nMichael", "msg_date": "Wed, 26 Feb 2020 18:35:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_upgrade: report the reason for failing to open the\n cluster version file" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Feb 26, 2020 at 10:06:38AM +0100, Magnus Hagander wrote:\n>> +1, seems like that would be a regression in value.\n\n> Having more generic messages is less work for translators, we have\n> PG_VERSION in the file name, and that's more complicated to translate\n> in both French and Japanese. No idea about other languages.\n\nJust looking at the committed diff, it seems painfully obvious that these\ntwo messages were written by different people who weren't talking to each\nother. Why aren't they more alike? 
Given\n\n pg_fatal(\"could not open version file \\\"%s\\\": %m\\n\", ver_filename);\n\n(which seems fine to me), I think the second ought to be\n\n pg_fatal(\"could not parse version file \\\"%s\\\"\\n\", ver_filename);\n\nThe wording as it stands:\n\n pg_fatal(\"could not parse PG_VERSION file from \\\"%s\\\"\\n\", cluster->pgdata);\n\ncould be criticized on more grounds than just that it's pointlessly\ndifferent from the adjacent message: it doesn't follow the style guideline\nabout saying what each mentioned object is. You could fix that maybe with\ns/from/from directory/, but I think this construction is unfortunate and\noverly verbose already.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Feb 2020 10:55:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_upgrade: report the reason for failing to open the\n cluster version file" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n>> On Wed, Feb 26, 2020 at 10:06:38AM +0100, Magnus Hagander wrote:\n>>> +1, seems like that would be a regression in value.\n>\n>> Having more generic messages is less work for translators, we have\n>> PG_VERSION in the file name, and that's more complicated to translate\n>> in both French and Japanese. No idea about other languages.\n>\n> Just looking at the committed diff, it seems painfully obvious that these\n> two messages were written by different people who weren't talking to each\n> other. Why aren't they more alike? Given\n>\n> pg_fatal(\"could not open version file \\\"%s\\\": %m\\n\", ver_filename);\n>\n> (which seems fine to me), I think the second ought to be\n>\n> pg_fatal(\"could not parse version file \\\"%s\\\"\\n\", ver_filename);\n\nGood point. Patch attached.\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. 
- Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl", "msg_date": "Wed, 26 Feb 2020 18:32:00 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_upgrade: report the reason for failing to open the\n cluster version file" }, { "msg_contents": "On Wed, Feb 26, 2020 at 06:32:00PM +0000, Dagfinn Ilmari Mannsåker wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> > Michael Paquier <michael@paquier.xyz> writes:\n> >> On Wed, Feb 26, 2020 at 10:06:38AM +0100, Magnus Hagander wrote:\n> >>> +1, seems like that would be a regression in value.\n> >\n> >> Having more generic messages is less work for translators, we have\n> >> PG_VERSION in the file name, and that's more complicated to translate\n> >> in both French and Japanese. No idea about other languages.\n> >\n> > Just looking at the committed diff, it seems painfully obvious that these\n> > two messages were written by different people who weren't talking to each\n> > other. Why aren't they more alike? Given\n> >\n> > pg_fatal(\"could not open version file \\\"%s\\\": %m\\n\", ver_filename);\n> >\n> > (which seems fine to me), I think the second ought to be\n> >\n> > pg_fatal(\"could not parse version file \\\"%s\\\"\\n\", ver_filename);\n> \n> Good point. Patch attached.\n\nPatch applied, and other adjustments:\n\n\tThis patch fixes the error message in get_major_server_version()\n\tto be \"could not parse version file\", and uses the full file path\n\tname, rather than just the data directory path.\n\n\tAlso, commit 4109bb5de4 added the cause of the failure to the\n\t\"could not open\" error message, and improved quoting. 
This patch\n\tbackpatches the \"could not open\" cause to PG 12, where it was\n\tfirst widely used, and backpatches the quoting fix in that patch\n\tto all supported releases.\n\nBecause some of the branches are different, I am attaching the applied\nmulti-version patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +", "msg_date": "Thu, 19 Mar 2020 15:23:04 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_upgrade: report the reason for failing to open the\n cluster version file" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n\n> On Wed, Feb 26, 2020 at 06:32:00PM +0000, Dagfinn Ilmari Mannsåker wrote:\n>> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> \n>> > Michael Paquier <michael@paquier.xyz> writes:\n>> >> On Wed, Feb 26, 2020 at 10:06:38AM +0100, Magnus Hagander wrote:\n>> >>> +1, seems like that would be a regression in value.\n>> >\n>> >> Having more generic messages is less work for translators, we have\n>> >> PG_VERSION in the file name, and that's more complicated to translate\n>> >> in both French and Japanese. No idea about other languages.\n>> >\n>> > Just looking at the committed diff, it seems painfully obvious that these\n>> > two messages were written by different people who weren't talking to each\n>> > other. Why aren't they more alike? Given\n>> >\n>> > pg_fatal(\"could not open version file \\\"%s\\\": %m\\n\", ver_filename);\n>> >\n>> > (which seems fine to me), I think the second ought to be\n>> >\n>> > pg_fatal(\"could not parse version file \\\"%s\\\"\\n\", ver_filename);\n>> \n>> Good point. Patch attached.\n>\n> Patch applied, and other adjustments:\n\nThanks!\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. 
- Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl\n\n\n", "msg_date": "Thu, 19 Mar 2020 19:40:35 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_upgrade: report the reason for failing to open the\n cluster version file" } ]
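The fix this thread converged on — quote the file name and append the `%m` (strerror) reason — is easy to demonstrate outside of C. Below is a minimal Python sketch, purely for illustration: the function name mirrors pg_upgrade's `get_major_server_version()`, but this is not the patch's code, and the message texts merely echo the ones discussed above.

```python
import os

def get_major_server_version(pgdata):
    """Read <pgdata>/PG_VERSION, failing with a message in the style the
    thread settled on: the file name in double quotes plus the OS-level
    reason (what %m expands to in pg_fatal)."""
    ver_filename = os.path.join(pgdata, "PG_VERSION")
    try:
        with open(ver_filename) as f:
            version = f.read().strip()
    except OSError as e:
        # e.strerror is Python's equivalent of C's strerror(errno).
        raise SystemExit('could not open version file "%s": %s'
                         % (ver_filename, e.strerror))
    if not version:
        raise SystemExit('could not parse version file "%s"' % ver_filename)
    return version
```

Run against a missing directory this reports something like `could not open version file "...": No such file or directory` — the trailing clause being exactly the detail whose absence cost debugging time at the top of the thread.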
[ { "msg_contents": "Hi,\n\nWhen analyzing time-series data, it's useful to be able to bin\ntimestamps into equally spaced ranges. date_trunc() is only able to\nbin on a specified whole unit. In the attached patch for the March\ncommitfest, I propose a new function date_trunc_interval(), which can\ntruncate to arbitrary intervals, e.g.:\n\nselect date_trunc_interval('15 minutes', timestamp '2020-02-16\n20:48:40'); date_trunc_interval\n---------------------\n 2020-02-16 20:45:00\n(1 row)\n\nWith this addition, it might be possible to turn the existing\ndate_trunc() functions into wrappers. I haven't done that here because\nit didn't seem practical at this point. For one, the existing\nfunctions have special treatment for weeks, centuries, and millennia.\n\nNote: I've only written the implementation for the type timestamp\nwithout timezone. Adding timezone support would be pretty simple, but\nI wanted to get feedback on the basic idea first before making it\ncomplete. I've also written tests and very basic documentation.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 26 Feb 2020 10:50:19 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Wed, Feb 26, 2020 at 10:50:19AM +0800, John Naylor wrote:\n> Hi,\n> \n> When analyzing time-series data, it's useful to be able to bin\n> timestamps into equally spaced ranges. 
date_trunc() is only able to\n> bin on a specified whole unit.\n\nThanks for adding this very handy feature!\n\n> In the attached patch for the March\n> commitfest, I propose a new function date_trunc_interval(), which can\n> truncate to arbitrary intervals, e.g.:\n> \n> select date_trunc_interval('15 minutes', timestamp '2020-02-16\n> 20:48:40'); date_trunc_interval\n> ---------------------\n> 2020-02-16 20:45:00\n> (1 row)\n\nI believe the following should error out, but doesn't.\n\n# SELECT date_trunc_interval('1 year 1 ms', TIMESTAMP '2001-02-16 20:38:40');\n date_trunc_interval \n═════════════════════\n 2001-01-01 00:00:00\n(1 row)\n\n> With this addition, it might be possible to turn the existing\n> date_trunc() functions into wrappers. I haven't done that here because\n> it didn't seem practical at this point. For one, the existing\n> functions have special treatment for weeks, centuries, and millennia.\n\nI agree that turning it into a wrapper would be separate work.\n\n> Note: I've only written the implementation for the type timestamp\n> without timezone. Adding timezone support would be pretty simple,\n> but I wanted to get feedback on the basic idea first before making\n> it complete. 
I've also written tests and very basic documentation.\n\nPlease find attached an update that I believe fixes the bug I found in\na principled way.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate", "msg_date": "Wed, 26 Feb 2020 08:51:08 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Wed, Feb 26, 2020 at 3:51 PM David Fetter <david@fetter.org> wrote:\n>\n> I believe the following should error out, but doesn't.\n>\n> # SELECT date_trunc_interval('1 year 1 ms', TIMESTAMP '2001-02-16 20:38:40');\n> date_trunc_interval\n> ═════════════════════\n> 2001-01-01 00:00:00\n> (1 row)\n\nYou're quite right. I forgot to add error checking for\nsecond-and-below units. I've added your example to the tests. (I\nneglected to mention in my first email that because I chose to convert\nthe interval to the pg_tm struct (seemed easiest), it's not\nstraightforward how to allow multiple unit types, and I imagine the\nuse case is small, so I had it throw an error.)\n\n> Please find attached an update that I believe fixes the bug I found in\n> a principled way.\n\nThanks for that! I made a couple adjustments and incorporated your fix\ninto v3: While working on v1, I noticed the DTK_FOO macros already had\nan idiom for bitmasking (see utils/datetime.h), so I used that instead\nof a bespoke enum. Also, since the bitmask is checked once, I removed\nthe individual member checks, allowing me to remove all the gotos.\n\nThere's another small wrinkle: Since we store microseconds internally,\nit's neither convenient nor useful to try to error out for things like\n'2 ms 500 us', since that is just as well written as '2500 us', and\nstored exactly the same. 
I'm inclined to just skip the millisecond\ncheck and just use microseconds, but I haven't done that yet.\n\nAlso, I noticed this bug in v1:\n\nSELECT date_trunc_interval('17 days', TIMESTAMP '2001-02-16 20:38:40.123456');\n date_trunc_interval\n---------------------\n 2001-01-31 00:00:00\n(1 row)\n\nThis is another consequence of month and day being 1-based. Fixed,\nwith new tests.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 26 Feb 2020 18:38:57 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> [ v3-datetrunc_interval.patch ]\n\nA few thoughts:\n\n* In general, binning involves both an origin and a stride. When\nworking with plain numbers it's almost always OK to set the origin\nto zero, but it's less clear to me whether that's all right for\ntimestamps. Do we need another optional argument? Even if we\ndon't, \"zero\" for tm_year is 1900, which is going to give results\nthat surprise somebody.\n\n* I'm still not convinced that the code does the right thing for\n1-based months or days. Shouldn't you need to subtract 1, then\ndo the modulus, then add back 1?\n\n* Speaking of modulus, would it be clearer to express the\ncalculations like\n\ttimestamp -= timestamp % interval;\n(That's just a question, I'm not sure.)\n\n* Code doesn't look to have thought carefully about what to do with\nnegative intervals, or BC timestamps.\n\n* The comment \n\t * Justify all lower timestamp units and throw an error if any\n\t * of the lower interval units are non-zero.\ndoesn't seem to have a lot to do with what the code after it actually\ndoes. 
Also, you need explicit /* FALLTHRU */-type comments in that\nswitch, or pickier buildfarm members will complain.\n\n* Seems like you could jam all the unit-related error checking into\nthat switch's default: case, where it will cost nothing if the\ncall is valid:\n\n\tswitch (unit)\n\t{\n\t ...\n\t default:\n\t\tif (unit == 0)\n\t\t\t// complain about zero interval\n\t\telse\n\t\t\t// complain about interval with multiple components\n\t}\n\n* I'd use ERRCODE_INVALID_PARAMETER_VALUE for any case of disallowed\ncontents of the interval.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Feb 2020 10:36:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Wed, Feb 26, 2020 at 06:38:57PM +0800, John Naylor wrote:\n> On Wed, Feb 26, 2020 at 3:51 PM David Fetter <david@fetter.org> wrote:\n> >\n> > I believe the following should error out, but doesn't.\n> >\n> > # SELECT date_trunc_interval('1 year 1 ms', TIMESTAMP '2001-02-16 20:38:40');\n> > date_trunc_interval\n> > ═════════════════════\n> > 2001-01-01 00:00:00\n> > (1 row)\n> \n> You're quite right. I forgot to add error checking for\n> second-and-below units. I've added your example to the tests. (I\n> neglected to mention in my first email that because I chose to convert\n> the interval to the pg_tm struct (seemed easiest), it's not\n> straightforward how to allow multiple unit types, and I imagine the\n> use case is small, so I had it throw an error.)\n\nI suspect that this could be sanely expanded to span some sets of\nadjacent types in a future patch, e.g. year + month or hour + minute.\n\n> > Please find attached an update that I believe fixes the bug I found in\n> > a principled way.\n> \n> Thanks for that! I made a couple adjustments and incorporated your fix\n> into v3: While working on v1, I noticed the DTK_FOO macros already had\n> an idiom for bitmasking (see utils/datetime.h),\n\nOops! 
Sorry I missed that.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Wed, 26 Feb 2020 18:30:53 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Wed, Feb 26, 2020 at 11:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@2ndquadrant.com> writes:\n> > [ v3-datetrunc_interval.patch ]\n>\n> A few thoughts:\n>\n> * In general, binning involves both an origin and a stride. When\n> working with plain numbers it's almost always OK to set the origin\n> to zero, but it's less clear to me whether that's all right for\n> timestamps. Do we need another optional argument? Even if we\n> don't, \"zero\" for tm_year is 1900, which is going to give results\n> that surprise somebody.\n\nNot sure.\n\nA surprise I foresee in general might be: '1 week' is just '7 days',\nand not aligned on \"WOY\". Since the function is passed an interval and\nnot text, we can't raise a warning. But date_trunc() already covers\nthat, so probably not a big deal.\n\n> * I'm still not convinced that the code does the right thing for\n> 1-based months or days. Shouldn't you need to subtract 1, then\n> do the modulus, then add back 1?\n\nYes, brain fade on my part. Fixed in the attached v4.\n\n> * Speaking of modulus, would it be clearer to express the\n> calculations like\n> timestamp -= timestamp % interval;\n> (That's just a question, I'm not sure.)\n\nSeems nicer, so done that way.\n\n> * Code doesn't look to have thought carefully about what to do with\n> negative intervals, or BC timestamps.\n\nBy accident, negative intervals currently behave the same as positive\nones. We could make negative intervals round up rather than truncate\n(or vice versa). 
I don't know the best thing to do here.\n\nAs for BC, changed so it goes in the correct direction at least, and added test.\n\n> * The comment\n> * Justify all lower timestamp units and throw an error if any\n> * of the lower interval units are non-zero.\n> doesn't seem to have a lot to do with what the code after it actually\n> does. Also, you need explicit /* FALLTHRU */-type comments in that\n> switch, or pickier buildfarm members will complain.\n\nStale comment from an earlier version, fixed. Not sure if \"justify\" is\nthe right term, but \"zero\" is a bit misleading. Added fall thru's.\n\n> * Seems like you could jam all the unit-related error checking into\n> that switch's default: case, where it will cost nothing if the\n> call is valid:\n>\n> switch (unit)\n> {\n> ...\n> default:\n> if (unit == 0)\n> // complain about zero interval\n> else\n> // complain about interval with multiple components\n> }\n\nDone.\n\n> * I'd use ERRCODE_INVALID_PARAMETER_VALUE for any case of disallowed\n> contents of the interval.\n\nDone.\n\nAlso removed the millisecond case, since it's impossible, or at least\nnot worth it, to distinguish from the microsecond case.\n\nNote: I haven't done any additional docs changes in v4.\n\nTODO: with timezone\n\npossible TODO: origin parameter\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 28 Feb 2020 16:42:34 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Wed, Feb 26, 2020 at 11:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> * In general, binning involves both an origin and a stride. When\n> working with plain numbers it's almost always OK to set the origin\n> to zero, but it's less clear to me whether that's all right for\n> timestamps. Do we need another optional argument? 
Even if we\n> don't, \"zero\" for tm_year is 1900, which is going to give results\n> that surprise somebody.\n\nI tried the simplest way in the attached v5. Examples (third param is origin):\n\n-- same result as no origin:\nselect date_trunc_interval('5 min'::interval, TIMESTAMP '2020-02-01\n01:01:01', TIMESTAMP '2020-02-01');\n date_trunc_interval\n---------------------\n 2020-02-01 01:00:00\n(1 row)\n\n-- shift bins by 2.5 min:\nselect date_trunc_interval('5 min'::interval, TIMESTAMP '2020-02-1\n01:01:01', TIMESTAMP '2020-02-01 00:02:30');\n date_trunc_interval\n---------------------\n 2020-02-01 00:57:30\n(1 row)\n\n-- align weeks to start on Sunday\nselect date_trunc_interval('7 days'::interval, TIMESTAMP '2020-02-11\n01:01:01.0', TIMESTAMP '1900-01-02');\n date_trunc_interval\n---------------------\n 2020-02-09 00:00:00\n(1 row)\n\nI've put off adding documentation on the origin piece pending comments\nabout the approach.\n\nI haven't thought seriously about timezone yet, but hopefully it's\njust work and nothing to think too hard about.\n\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 13 Mar 2020 15:13:02 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Fri, 13 Mar 2020 at 03:13, John Naylor <john.naylor@2ndquadrant.com>\nwrote:\n\n> On Wed, Feb 26, 2020 at 11:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > * In general, binning involves both an origin and a stride. When\n> > working with plain numbers it's almost always OK to set the origin\n> > to zero, but it's less clear to me whether that's all right for\n> > timestamps. Do we need another optional argument? 
Even if we\n> > don't, \"zero\" for tm_year is 1900, which is going to give results\n> > that surprise somebody.\n>\n\n- align weeks to start on Sunday\n> select date_trunc_interval('7 days'::interval, TIMESTAMP '2020-02-11\n> 01:01:01.0', TIMESTAMP '1900-01-02');\n> date_trunc_interval\n> ---------------------\n> 2020-02-09 00:00:00\n> (1 row)\n>\n\nI'm confused by this. If my calendars are correct, both 1900-01-02\nand 2020-02-11 are Tuesdays. So if the date being adjusted and the origin\nare both Tuesday, shouldn't the day part be left alone when truncating to 7\ndays? Also, I'd like to confirm that the default starting point for 7 day\nperiods (weeks) is Monday, per ISO. I know it's very fashionable in North\nAmerica to split the weekend in half but it's not the international\nstandard.\n\nPerhaps the starting point for dates should be either 0001-01-01 (the\nproleptic beginning of the CE calendar) or 2001-01-01 (the beginning of the\ncurrent 400-year repeating cycle of leap years and weeks, and a Monday,\ngiving the appropriate ISO result for truncating to 7 day periods).\n", "msg_date": "Fri, 13 Mar 2020 07:48:33 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" },
{ "msg_contents": "On Fri, Mar 13, 2020 at 7:48 PM Isaac Morland <isaac.morland@gmail.com> wrote:\n>\n> On Fri, 13 Mar 2020 at 03:13, John Naylor <john.naylor@2ndquadrant.com> wrote:\n>\n>> - align weeks to start on Sunday\n>> select date_trunc_interval('7 days'::interval, TIMESTAMP '2020-02-11\n>> 01:01:01.0', TIMESTAMP '1900-01-02');\n>> date_trunc_interval\n>> ---------------------\n>> 2020-02-09 00:00:00\n>> (1 row)\n>\n>\n> I'm confused by this. If my calendars are correct, both 1900-01-02 and 2020-02-11 are Tuesdays. So if the date being adjusted and the origin are both Tuesday, shouldn't the day part be left alone when truncating to 7 days?\n\nThanks for taking a look! The non-intuitive behavior you found is\nbecause the patch shifts the timestamp before converting to the\ninternal pg_tm type. The pg_tm type stores day of the month, which is\nused for the calculation. It's not counting the days since the origin.\nThen the result is shifted back.\n\nTo get more logical behavior, perhaps the optional parameter is better\nas an offset instead of an origin. 
Alternatively (or additionally),\nthe function could do the math on int64 timestamps directly.\n\n> Also, I'd like to confirm that the default starting point for 7 day periods (weeks) is Monday, per ISO.\n\nThat's currently the behavior in the existing date_trunc function,\nwhen passed the string 'week'. Given that keyword, it calculates the\nweek of the year.\n\nWhen using the proposed function with arbitrary intervals, it uses day\nof the month, as found in the pg_tm struct. It doesn't treat 7 days\ndifferently than 5 or 10 without user input (origin or offset), since\nthere is nothing special about 7 day intervals as such internally. To\nshow the difference between date_trunc, and date_trunc_interval as\nimplemented in v5 with no origin:\n\nselect date_trunc('week', d), count(*) from generate_series(\n'2020-02-01'::timestamp, '2020-03-31', '1 day') d group by 1 order by\n1;\n date_trunc | count\n---------------------+-------\n 2020-01-27 00:00:00 | 2\n 2020-02-03 00:00:00 | 7\n 2020-02-10 00:00:00 | 7\n 2020-02-17 00:00:00 | 7\n 2020-02-24 00:00:00 | 7\n 2020-03-02 00:00:00 | 7\n 2020-03-09 00:00:00 | 7\n 2020-03-16 00:00:00 | 7\n 2020-03-23 00:00:00 | 7\n 2020-03-30 00:00:00 | 2\n(10 rows)\n\nselect date_trunc_interval('7 days'::interval, d), count(*) from\ngenerate_series( '2020-02-01'::timestamp, '2020-03-31', '1 day') d\ngroup by 1 order by 1;\n date_trunc_interval | count\n---------------------+-------\n 2020-02-01 00:00:00 | 7\n 2020-02-08 00:00:00 | 7\n 2020-02-15 00:00:00 | 7\n 2020-02-22 00:00:00 | 7\n 2020-02-29 00:00:00 | 1\n 2020-03-01 00:00:00 | 7\n 2020-03-08 00:00:00 | 7\n 2020-03-15 00:00:00 | 7\n 2020-03-22 00:00:00 | 7\n 2020-03-29 00:00:00 | 3\n(10 rows)\n\nResetting the day every month is counterintuitive if not broken, and\nas I mentioned it might make more sense to use the int64 timestamp\ndirectly, at least for intervals less than one month.
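For what it's worth, the int64 arithmetic amounts to flooring the distance from the origin down to a multiple of the stride. A rough Python sketch of that idea (all names here are hypothetical illustrations, not the patch's actual C code):

```python
# Rough sketch of binning a timestamp on an arbitrary stride using
# integer microsecond arithmetic, as suggested above. Hypothetical
# helper names; not PostgreSQL's actual implementation.
from datetime import datetime, timedelta

def _usecs(delta):
    # exact integer microseconds in a timedelta
    return (delta.days * 86400 + delta.seconds) * 1_000_000 + delta.microseconds

def bin_timestamp(stride, ts, origin):
    """Largest origin + n*stride (integer n) that is <= ts; assumes origin <= ts."""
    delta_us = _usecs(ts - origin)
    return ts - timedelta(microseconds=delta_us % _usecs(stride))

# 5-minute bins shifted by an origin of 00:02:30, as in the earlier example
print(bin_timestamp(timedelta(minutes=5),
                    datetime(2020, 2, 1, 1, 1, 1),
                    datetime(2020, 2, 1, 0, 2, 30)))
# -> 2020-02-01 00:57:30
```

With a 7-day stride and a Tuesday origin, the same arithmetic would leave the day part of a Tuesday input alone, which is the behavior asked for above.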
I'll go look\ninto doing that.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 15 Mar 2020 14:26:07 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "Hello,\n\nOn 3/13/2020 4:13 PM, John Naylor wrote:\n> I've put off adding documentation on the origin piece pending comments\n> about the approach.\n> \n> I haven't thought seriously about timezone yet, but hopefully it's\n> just work and nothing to think too hard about.\n\nThank you for the patch. I looked at it and tested a bit.\n\nThere is one interesting case which might be mentioned in the \ndocumentation or in the tests. The function has \ninteresting behaviour with real numbers:\n\n=# select date_trunc_interval('0.1 year'::interval, TIMESTAMP \n'2020-02-01 01:21:01');\n date_trunc_interval\n---------------------\n 2020-02-01 00:00:00\n\n=# select date_trunc_interval('1.1 year'::interval, TIMESTAMP \n'2020-02-01 01:21:01');\nERROR: only one interval unit allowed for truncation\n\nIt is because the second interval has two interval units:\n\n=# select '0.1 year'::interval;\n interval\n----------\n 1 mon\n\n=# select '1.1 year'::interval;\n interval\n--------------\n 1 year 1 mon\n\n-- \nArtur\n\n\n", "msg_date": "Thu, 19 Mar 2020 17:20:55 +0900", "msg_from": "Artur Zakirov <zaartur@gmail.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Sun, Mar 15, 2020 at 2:26 PM I wrote:\n>\n> To get more logical behavior, perhaps the optional parameter is better\n> as an offset instead of an origin. Alternatively (or additionally),\n> the function could do the math on int64 timestamps directly.\n\nFor v6, I changed the algorithm to use pg_tm for months and years, and\nint64 for all smaller units.
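The months/years half of that split boils down to counting whole calendar months from the origin and flooring to the stride — a rough Python sketch of the idea (hypothetical names, not the patch's pg_tm-based C code):

```python
# Rough sketch of the month/year truncation path: count whole calendar
# months from the origin and floor to a multiple of the stride.
# Hypothetical illustration only; assumes origin <= ts.
from datetime import datetime

def bin_months(stride_months, ts, origin):
    months = (ts.year - origin.year) * 12 + (ts.month - origin.month)
    months -= months % stride_months
    y, m = divmod(origin.month - 1 + months, 12)
    return datetime(origin.year + y, m + 1, 1)

# 12-month bins from a March 2011 origin: day/time in the origin are ignored
print(bin_months(12, datetime(2012, 3, 1, 1, 21, 1), datetime(2011, 3, 22)))
# -> 2012-03-01 00:00:00
```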
Despite the split, I think it's easier to\nread now, and certainly shorter. This has the advantage that now\nmixing units is allowed, as long as you don't mix months/years with\ndays or smaller, which often doesn't make sense and is not very\npractical. (not yet documented) One consequence of this is that when\noperating on months/years, and the origin contains smaller units, the\nsmaller units are ignored. Example:\n\nselect date_trunc_interval('12 months'::interval, timestamp\n'2012-03-01 01:21:01', timestamp '2011-03-22');\n date_trunc_interval\n---------------------\n 2012-03-01 00:00:00\n(1 row)\n\nEven though not quite a full year has passed, it ignores the days in\nthe origin time and detects a difference in 12 calendar months. That\nmight be fine, although we could also throw an error and say origins\nmust be in the form of 'YYYY-01-01 00:00:00' when truncating on months\nand/or years.\n\nI added a sketch of documentation for the origin parameter and more tests.\n\nOn Fri, Mar 13, 2020 at 7:48 PM Isaac Morland <isaac.morland@gmail.com> wrote:\n> I'm confused by this. If my calendars are correct, both 1900-01-02 and 2020-02-11 are Tuesdays. So if the date being adjusted and the origin are both Tuesday, shouldn't the day part be left alone when truncating to 7 days? 
Also, I'd like to confirm that the default starting point for 7 day periods (weeks) is Monday, per ISO.\n\nThis is fixed.\n\nselect date_trunc_interval('7 days'::interval, timestamp '2020-02-11\n01:01:01.0', TIMESTAMP '1900-01-02');\n date_trunc_interval\n---------------------\n 2020-02-11 00:00:00\n(1 row)\n\nselect date_trunc_interval('7 days'::interval, d), count(*) from\ngenerate_series( '2020-02-01'::timestamp, '2020-03-31', '1 day') d\ngroup by 1 order by 1;\n date_trunc_interval | count\n---------------------+-------\n 2020-01-27 00:00:00 | 2\n 2020-02-03 00:00:00 | 7\n 2020-02-10 00:00:00 | 7\n 2020-02-17 00:00:00 | 7\n 2020-02-24 00:00:00 | 7\n 2020-03-02 00:00:00 | 7\n 2020-03-09 00:00:00 | 7\n 2020-03-16 00:00:00 | 7\n 2020-03-23 00:00:00 | 7\n 2020-03-30 00:00:00 | 2\n(10 rows)\n\n> Perhaps the starting point for dates should be either 0001-01-01 (the proleptic beginning of the CE calendar) or 2001-01-01 (the beginning of the current 400-year repeating cycle of leap years and weeks, and a Monday, giving the appropriate ISO result for truncating to 7 day periods).\n\nI went ahead with 2001-01-01 for the time being.\n\nOn Thu, Mar 19, 2020 at 4:20 PM Artur Zakirov <zaartur@gmail.com> wrote:\n>\n> =# select date_trunc_interval('1.1 year'::interval, TIMESTAMP\n> '2020-02-01 01:21:01');\n> ERROR: only one interval unit allowed for truncation\n\nFor any lingering cases like this (I don't see any), maybe an error\nhint is in order.
The following works now, as expected for 1 year 1\nmonth:\n\nselect date_trunc_interval('1.1 year'::interval, timestamp '2002-05-01\n01:21:01');\n date_trunc_interval\n---------------------\n 2002-02-01 00:00:00\n\nI'm going to look into implementing timezone while awaiting comments on v6.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 24 Mar 2020 18:27:30 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "I wrote:\n\n> I'm going to look into implementing timezone while awaiting comments on v6.\n\nI attempted this in the attached v7. There are 4 new functions for\ntruncating timestamptz on an interval -- with and without origin, and\nwith and without time zone.\n\nParts of it are hackish, and need more work, but I think it's in\npassable enough shape to get feedback on. The origin parameter logic\nwas designed with timestamps-without-time-zone in mind, and\nretrofitting time zone on top of that was a bit messy. There might be\nbugs.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 30 Mar 2020 20:30:32 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On 3/30/2020 9:30 PM, John Naylor wrote:\n> I attempted this in the attached v7. There are 4 new functions for\n> truncating timestamptz on an interval -- with and without origin, and\n> with and without time zone.\n\nThank you for the new version of the patch.\n\nI'm not sure that I fully understand the 'origin' parameter.
Is it valid \nto have a value of 'origin' which is greater than a value of 'timestamp' \nparameter?\n\nI get some different results in such case:\n\n=# select date_trunc_interval('2 year', timestamp '2020-01-16 20:38:40', \ntimestamp '2022-01-17 00:00:00');\n date_trunc_interval\n---------------------\n 2020-01-01 00:00:00\n\n=# select date_trunc_interval('3 year', timestamp '2020-01-16 20:38:40', \ntimestamp '2022-01-17 00:00:00');\n date_trunc_interval\n---------------------\n 2022-01-01 00:00:00\n\nSo here I'm not sure which result is correct.\n\nIt seems that the patch is still in progress, but I have some nitpicking.\n\n> + <entry><literal><function>date_trunc_interval(<type>interval</type>, <type>timestamptz</type>, <type>text</type>)</function></literal></entry>\n> + <entry><type>timestamptz </type></entry>\n\nIt seems that 'timestamptz' in both argument and result descriptions \nshould be replaced by 'timestamp with time zone' (see other functions \ndescriptions). Though it is okay to use 'timestamptz' in SQL examples.\n\ntimestamp_trunc_interval_internal() and \ntimestamptz_trunc_interval_internal() have similar code. I think they \ncan be rewritten to avoid code duplication.\n\n-- \nArtur\n\n\n", "msg_date": "Tue, 31 Mar 2020 17:34:18 +0900", "msg_from": "Artur Zakirov <zaartur@gmail.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Tue, Mar 31, 2020 at 4:34 PM Artur Zakirov <zaartur@gmail.com> wrote:\n> Thank you for new version of the patch.\n\nThanks for taking a look! Attached is v8, which addresses your points,\nadds tests and fixes some bugs. There are still some WIP detritus in\nthe timezone code, so I'm not claiming it's ready, but it's much\ncloser. I'm fairly confident in the implementation of timestamp\nwithout time zone, however.\n\n> I'm not sure that I fully understand the 'origin' parameter. 
Is it valid\n> to have a value of 'origin' which is greater than a value of 'timestamp'\n> parameter?\n\nThat is the intention. The returned values should be\n\norigin +/- (n * interval)\n\nwhere n is an integer.\n\n> I get some different results in such case:\n>\n> =# select date_trunc_interval('2 year', timestamp '2020-01-16 20:38:40',\n> timestamp '2022-01-17 00:00:00');\n> date_trunc_interval\n> ---------------------\n> 2020-01-01 00:00:00\n\nThis was correct per how I coded it, but I have rethought where to\ndraw the bins for user-specified origins. I have decided that the\nabove is inconsistent with units smaller than a month. We shouldn't\n\"cross\" the bin until the input has reached Jan. 17, in this case. In\nv8, the answer to the above is\n\n date_trunc_interval\n---------------------\n 2018-01-17 00:00:00\n(1 row)\n\n(This could probably be better documented)\n\n> =# select date_trunc_interval('3 year', timestamp '2020-01-16 20:38:40',\n timestamp '2022-01-17 00:00:00');\n> date_trunc_interval\n> ---------------------\n> 2022-01-01 00:00:00\n>\n> So here I'm not sure which result is correct.\n\nThis one is just plain broken. The result should always be equal or\nearlier than the input. In v8 the result is now:\n\n date_trunc_interval\n---------------------\n 2019-01-17 00:00:00\n(1 row)\n\n> It seems that the patch is still in progress, but I have some nitpicking.\n>\n> > + <entry><literal><function>date_trunc_interval(<type>interval</type>, <type>timestamptz</type>, <type>text</type>)</function></literal></entry>\n> > + <entry><type>timestamptz </type></entry>\n>\n> It seems that 'timestamptz' in both argument and result descriptions\n> should be replaced by 'timestamp with time zone' (see other functions\n> descriptions). Though it is okay to use 'timestamptz' in SQL examples.\n\nAny and all nitpicks welcome! 
I have made these match the existing\ndate_trunc documentation more closely.\n\n> timestamp_trunc_interval_internal() and\n> timestamptz_trunc_interval_internal() have similar code. I think they\n> can be rewritten to avoid code duplication.\n\nI thought so too (and noticed the same about the existing date_trunc),\nbut it's more difficult than it looks.\n\nNote: I copied some tests from timestamp to timestamptz with a few\ntweaks. A few tz tests still don't pass. I'm not yet sure if the\nproblem is in the test, or my code. Some detailed review of the tests\nand their results would be helpful.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 2 Apr 2020 17:22:31 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "In v9, I've simplified the patch somewhat to make it easier for future\nwork to build on.\n\n- When truncating on month-or-greater intervals, require the origin to\nalign on month. This removes the need to handle weird corner cases\nthat have no straightforward behavior.\n- Remove hackish and possibly broken code to allow origin to be after\nthe input timestamp. The default origin is Jan 1, 1 AD, so only AD\ndates will behave correctly by default. 
This is not enforced for now,\nsince it may be desirable to find a way to get this to work in a nicer\nway.\n- Rebase docs over PG13 formatting changes.\n\n\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 30 Jun 2020 12:34:22 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On 2020-06-30 06:34, John Naylor wrote:\n> In v9, I've simplified the patch somewhat to make it easier for future\n> work to build on.\n> \n> - When truncating on month-or-greater intervals, require the origin to\n> align on month. This removes the need to handle weird corner cases\n> that have no straightforward behavior.\n> - Remove hackish and possibly broken code to allow origin to be after\n> the input timestamp. The default origin is Jan 1, 1 AD, so only AD\n> dates will behave correctly by default. This is not enforced for now,\n> since it may be desirable to find a way to get this to work in a nicer\n> way.\n> - Rebase docs over PG13 formatting changes.\n\nThis looks pretty solid now. Are there any more corner cases and other \nareas with unclear behavior that you are aware of?\n\nA couple of thoughts:\n\n- After reading the discussion a few times, I'm not so sure anymore \nwhether making this a cousin of date_trunc is the right way to go. As \nyou mentioned, there are some behaviors specific to date_trunc that \ndon't really make sense in date_trunc_interval, and maybe we'll have \nmore of those. Also, date_trunc_interval isn't exactly a handy name. \nMaybe something to think about. It's obviously fairly straightforward \nto change it.\n\n- There were various issues with the stride interval having months and \nyears. I'm not sure we even need that. 
It could be omitted unless you \nare confident that your implementation is now sufficient.\n\n- Also, negative intervals could be prohibited, but I suppose that \nmatters less.\n\n- I'm curious about the origin being set to 0001-01-01. This seems to \nwork correctly in that it sets the origin to a Monday, which is what we \nwanted, but according to Google that day was a Saturday. Something to \ndo with Julian vs. Gregorian calendar? Maybe we should choose a date \nthat is a bit more recent and easier to reason with.\n\n- Then again, I'm thinking that maybe we should make the origin \nmandatory. Otherwise, the default answers when having strides larger \nthan a day are entirely arbitrary, e.g.,\n\n=> select date_trunc_interval('10 year', '0196-05-20 BC'::timestamp);\n0190-01-01 00:00:00 BC\n\n=> select date_trunc_interval('10 year', '0196-05-20 AD'::timestamp);\n0191-01-01 00:00:00\n\nPerhaps the origin could be defaulted if the interval is less than a day \nor something like that.\n\n- I'm wondering whether we need the date_trunc_interval(interval, \ntimestamptz, timezone) variant. Isn't that the same as \ndate_trunc_interval(foo AT ZONE xyz, value)?\n\n\n", "msg_date": "Thu, 12 Nov 2020 14:56:31 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Thu, Nov 12, 2020 at 9:56 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n>\n> On 2020-06-30 06:34, John Naylor wrote:\n> > In v9, I've simplified the patch somewhat to make it easier for future\n> > work to build on.\n> >\n> > - When truncating on month-or-greater intervals, require the origin to\n> > align on month. This removes the need to handle weird corner cases\n> > that have no straightforward behavior.\n> > - Remove hackish and possibly broken code to allow origin to be after\n> > the input timestamp. 
The default origin is Jan 1, 1 AD, so only AD\n> > dates will behave correctly by default. This is not enforced for now,\n> > since it may be desirable to find a way to get this to work in a nicer\n> > way.\n> > - Rebase docs over PG13 formatting changes.\n>\n> This looks pretty solid now. Are there any more corner cases and other\n> areas with unclear behavior that you are aware of?\n\nHi Peter,\n\nThanks for taking a look!\n\nI believe there are no known corner cases aside from not throwing an error\nif origin > input, but I'll revisit that when we are more firm on what\nfeatures we want to support.\n\n> A couple of thoughts:\n>\n> - After reading the discussion a few times, I'm not so sure anymore\n> whether making this a cousin of date_trunc is the right way to go. As\n> you mentioned, there are some behaviors specific to date_trunc that\n> don't really make sense in date_trunc_interval, and maybe we'll have\n> more of those.\n\nAs far as the behaviors, I'm not sure exactly what you were\nthinking of, but here are two issues off the top of my head:\n\n- If the new functions are considered variants of date_trunc(), there is\nthe expectation that the options work the same way, in particular the\ntimezone parameter. You asked specifically about that below, so I'll\naddress that separately.\n- In the \"week\" case, the boundary position depends on the origin, since a\nweek-long interval is just 7 days.\n\n> Also, date_trunc_interval isn't exactly a handy name.\n> Maybe something to think about. It's obviously fairly straightforward\n> to change it.\n\nEffectively, it puts timestamps into bins, so maybe date_bin() or something\nlike that?\n\n> - There were various issues with the stride interval having months and\n> years. I'm not sure we even need that. It could be omitted unless you\n> are confident that your implementation is now sufficient.\n\nMonths and years were a bit tricky, so I'd be happy to leave that out if\nthere is not much demand for it.
date_trunc() already has quarters,\ndecades, centuries, and millenia.\n\n> - Also, negative intervals could be prohibited, but I suppose that\n> matters less.\n\nGood for the sake of completeness. I think they happen to work in v9 by\naccident, but it would be better not to expose that.\n\n> - I'm curious about the origin being set to 0001-01-01. This seems to\n> work correctly in that it sets the origin to a Monday, which is what we\n> wanted, but according to Google that day was a Saturday. Something to\n> do with Julian vs. Gregorian calendar?\n\nRight, working backwards from our calendar today, it's Monday, but at the\ntime it would theoretically be Saturday, barring leap year miscalculations.\n\n> Maybe we should choose a date\n> that is a bit more recent and easier to reason with.\n\n2001-01-01 would also be a Monday aligned with centuries and millenia, so\nthat would be my next suggestion. If we don't care to match with\ndate_trunc() on those larger units, we could also use 1900-01-01, so the\nvast majority of dates in databases are after the origin.\n\n> - Then again, I'm thinking that maybe we should make the origin\n> mandatory. Otherwise, the default answers when having strides larger\n> than a day are entirely arbitrary, e.g.,\n>\n> => select date_trunc_interval('10 year', '0196-05-20 BC'::timestamp);\n> 0190-01-01 00:00:00 BC\n>\n> => select date_trunc_interval('10 year', '0196-05-20 AD'::timestamp);\n> 0191-01-01 00:00:00\n\nRight. In the first case, the default origin is also after the input, and\ncrosses the AD/BC boundary. Tricky to get right.\n\n> Perhaps the origin could be defaulted if the interval is less than a day\n> or something like that.\n\nIf we didn't allow months and years to be units, it seems the default would\nalways make sense?\n\n> - I'm wondering whether we need the date_trunc_interval(interval,\n> timestamptz, timezone) variant. 
Isn't that the same as\n> date_trunc_interval(foo AT ZONE xyz, value)?\n\nI based this on 600b04d6b5ef6 for date_trunc(), whose message states:\n\ndate_trunc(field, timestamptz, zone_name)\n\nis the same as\n\ndate_trunc(field, timestamptz at time zone zone_name) at time zone zone_name\n\nso without the shorthand, you need to specify the timezone twice, once for\nthe calculation, and once for the output.\n\n--\nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 23 Nov 2020 13:44:57 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Mon, Nov 23, 2020 at 1:44 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n>\n> On Thu, Nov 12, 2020 at 9:56 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n> > - After reading the discussion a few times, I'm not so sure anymore\n> > whether making this a cousin of date_trunc is the right way to go. As\n> > you mentioned, there are some behaviors specific to date_trunc that\n> > don't really make sense in date_trunc_interval, and maybe we'll have\n> > more of those.\n\nFor v10, I simplified the behavior by decoupling the behavior from\ndate_trunc() and putting in some restrictions as discussed earlier. It's\nmuch simpler now. It could be argued that it goes too far in that\ndirection, but it's easy to reason about and we can put back some features\nas we see fit.\n\n> > Also, date_trunc_interval isn't exactly a handy name.\n> > Maybe something to think about. It's obviously fairly straightforward\n> > to change it.\n>\n> Effectively, it puts timestamps into bins, so maybe date_bin() or\nsomething like that?\n\nFor v10 I went with date_bin() so we can see how that looks.\n\n> > - There were various issues with the stride interval having months and\n> > years. I'm not sure we even need that.
It could be omitted unless you\n> > are confident that your implementation is now sufficient.\n>\n> Months and years were a bit tricky, so I'd be happy to leave that out if\nthere is not much demand for it. date_trunc() already has quarters,\ndecades, centuries, and millenia.\n\nI removed months and years for this version, but that can be reconsidered\nof course. The logic is really simple now.\n\n> > - Also, negative intervals could be prohibited, but I suppose that\n> > matters less.\n\nI didn't go this far, but probably should before long.\n\n> > - Then again, I'm thinking that maybe we should make the origin\n> > mandatory. Otherwise, the default answers when having strides larger\n> > than a day are entirely arbitrary, e.g.,\n\nI've tried this and like the resulting simplification.\n\n> > - I'm wondering whether we need the date_trunc_interval(interval,\n> > timestamptz, timezone) variant. Isn't that the same as\n> > date_trunc_interval(foo AT ZONE xyz, value)?\n>\n> I based this on 600b04d6b5ef6 for date_trunc(), whose message states:\n>\n> date_trunc(field, timestamptz, zone_name)\n>\n> is the same as\n>\n> date_trunc(field, timestamptz at time zone zone_name) at time zone\nzone_name\n>\n> so without the shorthand, you need to specify the timezone twice, once\nfor the calculation, and once for the output.\n\nIn light of making the origin mandatory, it no longer makes sense to have a\ntime zone parameter, since we can specify the time zone on the origin; and\nif desired on the output as well.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 18 Jan 2021 16:54:20 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On 1/18/21 3:54 PM, John Naylor wrote:\n> On Mon, Nov 23, 2020 at 1:44 PM John Naylor \n> <john.naylor@enterprisedb.com <mailto:john.naylor@enterprisedb.com>> wrote:\n> >\n> > On Thu, Nov 12, 2020 at 9:56 
AM Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com \n> <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n> > > - After reading the discussion a few times, I'm not so sure anymore\n> > > whether making this a cousin of date_trunc is the right way to go.  As\n> > > you mentioned, there are some behaviors specific to date_trunc that\n> > > don't really make sense in date_trunc_interval, and maybe we'll have\n> > > more of those.\n> \n> For v10, I simplified the behavior by decoupling the behavior from \n> date_trunc() and putting in some restrictions as discussed earlier. It's \n> much simpler now. It could be argued that it goes too far in that \n> direction, but it's easy to reason about and we can put back some \n> features as we see fit.\n\nPeter, thoughts on the new patch?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 19 Mar 2021 10:54:53 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On 18.01.21 21:54, John Naylor wrote:\n> On Mon, Nov 23, 2020 at 1:44 PM John Naylor \n> <john.naylor@enterprisedb.com <mailto:john.naylor@enterprisedb.com>> wrote:\n> >\n> > On Thu, Nov 12, 2020 at 9:56 AM Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com \n> <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n> > > - After reading the discussion a few times, I'm not so sure anymore\n> > > whether making this a cousin of date_trunc is the right way to go.  As\n> > > you mentioned, there are some behaviors specific to date_trunc that\n> > > don't really make sense in date_trunc_interval, and maybe we'll have\n> > > more of those.\n> \n> For v10, I simplified the behavior by decoupling the behavior from \n> date_trunc() and putting in some restrictions as discussed earlier. It's \n> much simpler now. 
It could be argued that it goes too far in that \n> direction, but it's easy to reason about and we can put back some \n> features as we see fit.\n\nCommitted.\n\nI noticed that some of the documentation disappeared between v9 and v10. \n So I put that back and updated it appropriately. I also added a few \nmore test cases to cover some things that have been discussed during the \ncourse of this thread.\n\nAs a potential follow-up, should we perhaps add named arguments? That \nmight make the invocations easier to read, depending on taste.\n\n\n", "msg_date": "Wed, 24 Mar 2021 16:38:10 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "> On 2021.03.24. 16:38 Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> \n> Committed.\n> \n\n'In cases full units' seems strange.\n\nNot a native speaker but maybe the attached changes are improvements?\n\n\nErik Rijkers", "msg_date": "Wed, 24 Mar 2021 18:25:26 +0100 (CET)", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Wed, Mar 24, 2021 at 11:38 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> Committed.\n>\n> I noticed that some of the documentation disappeared between v9 and v10.\n> So I put that back and updated it appropriately. I also added a few\n> more test cases to cover some things that have been discussed during the\n> course of this thread.\n\nThanks! I put off updating the documentation in case the latest approach\nwas not feature-rich enough.\n\n> As a potential follow-up, should we perhaps add named arguments? That\n> might make the invocations easier to read, depending on taste.\n\nI think it's quite possible some users will prefer that. 
All we need is to\nadd something like\n\nproargnames => '{bin_width,input,origin}'\n\nto the catalog, right?\n\nAlso, I noticed that I put in double semicolons in the new functions\nsomehow. I'll fix that as well.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 24 Mar 2021 13:58:09 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Wed, Mar 24, 2021 at 1:25 PM Erik Rijkers <er@xs4all.nl> wrote:\n>\n> 'In cases full units' seems strange.\n>\n> Not a native speaker but maybe the attached changes are improvements?\n\n- In cases full units (1 minute, 1 hour, etc.), it gives the same result\nas\n+ In case of full units (1 minute, 1 hour, etc.), it gives the same\nresult as\n the analogous <function>date_trunc</function> call, but the difference\nis\n that <function>date_bin</function> can truncate to an arbitrary\ninterval.\n </para>\n\nI would say \"In the case of\"\n\n <para>\n- The <parameter>stride</parameter> interval cannot contain units of\nmonth\n+ The <parameter>stride</parameter> interval cannot contain units of a\nmonth\n or larger.\n\nThe original seems fine to me here.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 24 Mar 2021 14:01:58 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On 24.03.21 18:25, Erik Rijkers wrote:\n>> On 2021.03.24. 16:38 Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n>>\n>> Committed.\n>>\n> \n> 'In cases full units' seems strange.\n\nfixed, thanks\n\n\n", "msg_date": "Wed, 24 Mar 2021 20:49:47 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On 24.03.21 18:58, John Naylor wrote:\n> > As a potential follow-up, should we perhaps add named arguments?  That\n> > might make the invocations easier to read, depending on taste.\n> \n> I think it's quite possible some users will prefer that. All we need is \n> to add something like\n> \n> proargnames => '{bin_width,input,origin}'\n> \n> to the catalog, right?\n\nright, plus some documentation adjustments perhaps\n\n> Also, I noticed that I put in double semicolons in the new functions \n> somehow. 
I'll fix that as well.\n\nI have fixed that.\n\n\n", "msg_date": "Wed, 24 Mar 2021 20:50:59 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Wed, Mar 24, 2021 at 08:50:59PM +0200, Peter Eisentraut wrote:\n> On 24.03.21 18:58, John Naylor wrote:\n> > > As a potential follow-up, should we perhaps add named arguments? That\n> > > might make the invocations easier to read, depending on taste.\n> > \n> > I think it's quite possible some users will prefer that. All we need is\n> > to add something like\n> > \n> > proargnames => '{bin_width,input,origin}'\n> > \n> > to the catalog, right?\n> \n> right, plus some documentation adjustments perhaps\n\n+1\n\nThe current docs seem to be missing a \"synopsis\", like\n\n+<synopsis>\n+date_trunc(<replaceable>stride</replaceable>, <replaceable>timestamp</replaceable>, <replaceable>origin</replaceable>)\n+</synopsis>\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 27 Mar 2021 12:06:11 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "Currently, when the origin is after the input, the result is the timestamp\nat the end of the bin, rather than the beginning as expected. 
The attached\nputs the result consistently at the beginning of the bin.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 30 Mar 2021 12:06:33 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Sat, Mar 27, 2021 at 1:06 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> The current docs seem to be missing a \"synopsis\", like\n>\n> +<synopsis>\n> +date_trunc(<replaceable>stride</replaceable>,\n<replaceable>timestamp</replaceable>, <replaceable>origin</replaceable>)\n> +</synopsis>\n\nThe attached\n- adds a synopsis\n- adds a bit more description to the parameters similar to those in\ndate_trunc\n- documents that negative intervals are treated the same as positive ones\n\nNote on the last point: This just falls out of the math, so was not\ndeliberate, but it seems fine to me. We could ban negative intervals, but\nthat would possibly just inconvenience some people unnecessarily. We could\nalso treat negative strides differently somehow, but I don't immediately\nsee a useful and/or intuitive change in behavior to come of that.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 30 Mar 2021 12:50:28 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "Hi all,\n\nit might be a bit late now, but do you know that TimescaleDB already has a\nsimilar feature, named time_bucket?\nhttps://docs.timescale.com/latest/api#time_bucket\nPerhaps that can help with some design decisions.\nI saw your feature on Depesz' \"Waiting for PostgreSQL 14\" and remembered\nreading about it just two days ago.\n\nBest regards\nSalek Talangi\n\nAm Do., 1. Apr. 
2021 um 13:31 Uhr schrieb John Naylor <\njohn.naylor@enterprisedb.com>:\n\n> On Sat, Mar 27, 2021 at 1:06 PM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> >\n> > The current docs seem to be missing a \"synopsis\", like\n> >\n> > +<synopsis>\n> > +date_trunc(<replaceable>stride</replaceable>,\n> <replaceable>timestamp</replaceable>, <replaceable>origin</replaceable>)\n> > +</synopsis>\n>\n> The attached\n> - adds a synopsis\n> - adds a bit more description to the parameters similar to those in\n> date_trunc\n> - documents that negative intervals are treated the same as positive ones\n>\n> Note on the last point: This just falls out of the math, so was not\n> deliberate, but it seems fine to me. We could ban negative intervals, but\n> that would possibly just inconvenience some people unnecessarily. We could\n> also treat negative strides differently somehow, but I don't immediately\n> see a useful and/or intuitive change in behavior to come of that.\n>\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Thu, 1 Apr 2021 14:11:25 +0100", "msg_from": "Salek Talangi <salek.talangi@googlemail.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Thu, Apr 1, 2021 at 9:11 AM Salek Talangi <salek.talangi@googlemail.com>\nwrote:\n>\n> Hi all,\n>\n> it might be a bit late now, but do you know that TimescaleDB already has\na similar feature, named time_bucket?\n> https://docs.timescale.com/latest/api#time_bucket\n> Perhaps that can help with some design decisions.\n\nYes, thanks I'm aware of it. 
It's a bit more feature-rich, and I wanted to\nhave something basic that users can have available without installing an\nextension.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 1 Apr 2021 12:08:01 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On 30.03.21 18:50, John Naylor wrote:\n> On Sat, Mar 27, 2021 at 1:06 PM Justin Pryzby <pryzby@telsasoft.com \n> <mailto:pryzby@telsasoft.com>> wrote:\n> >\n> > The current docs seem to be missing a \"synopsis\", like\n> >\n> > +<synopsis>\n> > +date_trunc(<replaceable>stride</replaceable>, \n> <replaceable>timestamp</replaceable>, <replaceable>origin</replaceable>)\n> > +</synopsis>\n> \n> The attached\n> - adds a synopsis\n> - adds a bit more description to the parameters similar to those in \n> date_trunc\n> - documents that negative intervals are treated the same as positive ones\n> \n> Note on the last point: This just falls out of the math, so was not \n> deliberate, but it seems fine to me. We could ban negative intervals, \n> but that would possibly just inconvenience some people unnecessarily. 
We \n> could also treat negative strides differently somehow, but I don't \n> immediately see a useful and/or intuitive change in behavior to come of \n> that.\n\ncommitted\n\n\n", "msg_date": "Fri, 9 Apr 2021 22:02:47 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On 30.03.21 18:06, John Naylor wrote:\n> Currently, when the origin is after the input, the result is the \n> timestamp at the end of the bin, rather than the beginning as expected. \n> The attached puts the result consistently at the beginning of the bin.\n\nIn the patch\n\n+ if (origin > timestamp && stride_usecs > 1)\n+ tm_delta -= stride_usecs;\n\nis the condition stride_usecs > 1 really necessary? My assessment is \nthat it's not, in which case it would be better to omit it.\n\n\n", "msg_date": "Sat, 10 Apr 2021 13:42:57 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Sat, Apr 10, 2021 at 7:43 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n>\n> On 30.03.21 18:06, John Naylor wrote:\n> > Currently, when the origin is after the input, the result is the\n> > timestamp at the end of the bin, rather than the beginning as expected.\n> > The attached puts the result consistently at the beginning of the bin.\n>\n> In the patch\n>\n> + if (origin > timestamp && stride_usecs > 1)\n> + tm_delta -= stride_usecs;\n>\n> is the condition stride_usecs > 1 really necessary? 
My assessment is\n> that it's not, in which case it would be better to omit it.\n\nWithout the condition, the case of 1 microsecond will fail to be a no-op.\nThis case has no practical use, but it still must work correctly, just as\ndate_trunc('microsecond', input) does.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Sat, Apr 10, 2021 at 7:43 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:>> On 30.03.21 18:06, John Naylor wrote:> > Currently, when the origin is after the input, the result is the> > timestamp at the end of the bin, rather than the beginning as expected.> > The attached puts the result consistently at the beginning of the bin.>> In the patch>> +   if (origin > timestamp && stride_usecs > 1)> +       tm_delta -= stride_usecs;>> is the condition stride_usecs > 1 really necessary?  My assessment is> that it's not, in which case it would be better to omit it.Without the condition, the case of 1 microsecond will fail to be a no-op. This case has no practical use, but it still must work correctly, just as date_trunc('microsecond', input) does.--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Sat, 10 Apr 2021 08:53:28 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On 10.04.21 14:53, John Naylor wrote:\n> \n> On Sat, Apr 10, 2021 at 7:43 AM Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com \n> <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n> >\n> > On 30.03.21 18:06, John Naylor wrote:\n> > > Currently, when the origin is after the input, the result is the\n> > > timestamp at the end of the bin, rather than the beginning as expected.\n> > > The attached puts the result consistently at the beginning of the bin.\n> >\n> > In the patch\n> >\n> > +   if (origin > timestamp && stride_usecs > 1)\n> > +       tm_delta -= stride_usecs;\n> >\n> > is the condition stride_usecs > 1 really 
necessary?  My assessment is\n> > that it's not, in which case it would be better to omit it.\n> \n> Without the condition, the case of 1 microsecond will fail to be a \n> no-op. This case has no practical use, but it still must work correctly, \n> just as date_trunc('microsecond', input) does.\n\nAh yes, the tests cover that. Committed.\n\n\n", "msg_date": "Sat, 10 Apr 2021 19:56:32 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Fri, Apr 09, 2021 at 10:02:47PM +0200, Peter Eisentraut wrote:\n> On 30.03.21 18:50, John Naylor wrote:\n> > On Sat, Mar 27, 2021 at 1:06 PM Justin Pryzby wrote:\n> > >\n> > > The current docs seem to be missing a \"synopsis\", like\n> > >\n> > > +<synopsis>\n> > > +date_trunc(<replaceable>stride</replaceable>, <replaceable>timestamp</replaceable>, <replaceable>origin</replaceable>)\n> > > +</synopsis>\n> > \n> > The attached\n> > - adds a synopsis\n> > - adds a bit more description to the parameters similar to those in\n> > date_trunc\n> > - documents that negative intervals are treated the same as positive ones\n> > \n> > Note on the last point: This just falls out of the math, so was not\n> > deliberate, but it seems fine to me. We could ban negative intervals,\n> > but that would possibly just inconvenience some people unnecessarily. We\n> > could also treat negative strides�differently somehow, but I don't\n> > immediately see a useful and/or intuitive change in behavior to come of\n> > that.\n> \n> committed\n\nIt looks like we all missed that I misspelled \"date_bin\" as\n\"date_trunc\"...sorry. 
I will include this with my next round of doc review, in\ncase you don't want to make a separate commit for it.\n\nhttps://www.postgresql.org/docs/devel/functions-datetime.html#FUNCTIONS-DATETIME-BIN\n\n From f4eab5c0f908d868540ab33aa12b82fd05f19f52 Mon Sep 17 00:00:00 2001\nFrom: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Thu, 22 Apr 2021 03:37:18 -0500\nSubject: [PATCH] date_bin: fixup for added documentation in 49fb4e\n\n---\n doc/src/sgml/func.sgml | 4 ++--\n 1 file changed, 2 insertions(+), 2 deletions(-)\n\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex 53f4c09c81..cc4e1b0a36 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -9946,13 +9946,13 @@ date_trunc(<replaceable>stride</replaceable>, <replaceable>timestamp</replaceabl\n \n <para>\n <synopsis>\n-date_trunc(<replaceable>stride</replaceable>, <replaceable>source</replaceable>, <replaceable>origin</replaceable>)\n+date_bin(<replaceable>stride</replaceable>, <replaceable>source</replaceable>, <replaceable>origin</replaceable>)\n </synopsis>\n <replaceable>source</replaceable> is a value expression of type\n <type>timestamp</type> or <type>timestamp with time zone</type>. (Values\n of type <type>date</type> are cast automatically to\n <type>timestamp</type>.) <replaceable>stride</replaceable> is a value\n- expression of type <type> interval</type>. The return value is likewise\n+ expression of type <type>interval</type>. 
The return value is likewise\n of type <type>timestamp</type> or <type>timestamp with time zone</type>,\n and it marks the beginning of the bin into which the\n <replaceable>source</replaceable> is placed.\n-- \n2.17.0\n\n\n\n", "msg_date": "Thu, 22 Apr 2021 04:16:04 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On 22.04.21 11:16, Justin Pryzby wrote:\n> It looks like we all missed that I misspelled \"date_bin\" as\n> \"date_trunc\"...sorry. I will include this with my next round of doc review, in\n> case you don't want to make a separate commit for it.\n\nfixed\n\n\n", "msg_date": "Fri, 23 Apr 2021 09:31:01 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "Is date_bin supposed to return the beginning of the bin? And does the sign\nof an interval define the \"direction\" of the bin?\nJudging by results of queries #1 and #2, sign of interval decides a\ndirection timestamp gets shifted to (in both cases ts < origin)\nbut when ts >origin (queries #3 and #4) interval sign doesn't matter,\nspecifically #4 doesn't return 6-th of January.\n\n1. SELECT date_bin('-2 days'::interval, timestamp '2001-01-01\n00:00:00', timestamp\n'2001-01-04 00:00:00'); -- 2001-01-02 00:00:00\n2. SELECT date_bin('2 days'::interval, timestamp '2001-01-01\n00:00:00', timestamp\n'2001-01-04 00:00:00'); -- 2000-12-31 00:00:00\n3. SELECT date_bin('2 days'::interval, timestamp '2001-01-04\n00:00:00', timestamp\n'2001-01-01 00:00:00'); -- 2001-01-03 00:00:00\n4. 
SELECT date_bin('-2 days'::interval, timestamp '2001-01-04\n00:00:00', timestamp\n'2001-01-01 00:00:00'); -- 2001-01-03 00:00:00\n\nOn Thu, Jul 22, 2021 at 6:21 PM John Naylor <john.naylor@2ndquadrant.com>\nwrote:\n\n> Hi,\n>\n> When analyzing time-series data, it's useful to be able to bin\n> timestamps into equally spaced ranges. date_trunc() is only able to\n> bin on a specified whole unit. In the attached patch for the March\n> commitfest, I propose a new function date_trunc_interval(), which can\n> truncate to arbitrary intervals, e.g.:\n>\n> select date_trunc_interval('15 minutes', timestamp '2020-02-16\n> 20:48:40'); date_trunc_interval\n> ---------------------\n> 2020-02-16 20:45:00\n> (1 row)\n>\n> With this addition, it might be possible to turn the existing\n> date_trunc() functions into wrappers. I haven't done that here because\n> it didn't seem practical at this point. For one, the existing\n> functions have special treatment for weeks, centuries, and millennia.\n>\n> Note: I've only written the implementation for the type timestamp\n> without timezone. Adding timezone support would be pretty simple, but\n> I wanted to get feedback on the basic idea first before making it\n> complete. I've also written tests and very basic documentation.\n>\n> --\n> John Naylor https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>", "msg_date": "Thu, 22 Jul 2021 18:24:35 +0200", "msg_from": "Bauyrzhan Sakhariyev <baurzhansahariev@gmail.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Thu, Jul 22, 2021 at 12:24 PM Bauyrzhan Sakhariyev <\nbaurzhansahariev@gmail.com> wrote:\n>\n> Is date_bin supposed to return the beginning of the bin?\n\nThanks for testing! And yes.\n\n> And does the sign of an interval define the \"direction\" of the bin?\n\nNo, the boundary is intentionally the earlier one:\n\n/*\n * Make sure the returned timestamp is at the start of the bin, even if\n * the origin is in the future.\n */\nif (origin > timestamp && stride_usecs > 1)\n tm_delta -= stride_usecs;\n\nI wonder if we should just disallow negative intervals here.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 22 Jul 2021 13:28:38 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "> No, the boundary is intentionally the earlier one:\n\nI found that commit in GitHub, thanks for pointing it out.\nWhen I test locally *origin_in_the_future *case I get different results for\npositive and negative intervals (see queries #1 and #2 from above, they\nhave same timestamp, origin and interval magnitude, difference is only in\ninterval sign) - can it be that the version I downloaded from\nhttps://www.enterprisedb.com/postgresql-early-experience doesn't include\ncommit with that improvement?\n\n> I wonder if we should just disallow negative intervals here.\n\nI cannot imagine somebody using negative as a constant argument but users\ncan pass another column as a first argument date or some function(ts) - not\nlikely but possible. 
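For what it's worth, the start-of-bin rule being discussed can be sketched in a few lines of Python (my own illustration of the arithmetic; date_bin_sketch is an invented name, not the actual C implementation):

```python
from datetime import datetime, timedelta

def date_bin_sketch(stride: timedelta, source: datetime, origin: datetime) -> datetime:
    # Hypothetical sketch of date_bin()-style binning: snap `source` to
    # the start of the stride-wide bin anchored at `origin`.
    if stride <= timedelta(0):
        raise ValueError("stride must be greater than zero")
    # Floor division keeps the result at the *start* of the bin even when
    # the source precedes the origin (the origin-in-the-future case).
    bins = (source - origin) // stride
    return origin + bins * stride

# Query #3 from above:
print(date_bin_sketch(timedelta(days=2), datetime(2001, 1, 4), datetime(2001, 1, 1)))
# prints 2001-01-03 00:00:00
```

Because Python's // on timedeltas floors toward minus infinity, the explicit origin-greater-than-timestamp adjustment from the quoted C snippet (which compensates for truncating integer division) is not needed in this sketch.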
A line in docs about the leftmost point of interval as\nstart of the bin could be helpful.\n\nNot related to negative interval - I created a PR for adding zero check for\nstride https://github.com/postgres/postgres/pull/67 and after getting it\nclosed I stopped right there - 1 line check doesn't worth going through the\npatching process I'm not familiar with.\n\n>In the case of full units (1 minute, 1 hour, etc.), it gives the same\nresult as the analogous date_trunc call,\nWas not obvious to me that we need to supply Monday origin to make\ndate_bin(1 week, ts) produce same result with date_trunc\n\nSorry for the verbose report and thanks for the nice function - I know\nit's not yet released, was just playing around with beta as I want to\nalign CrateDB\ndate_bin <https://github.com/crate/crate/issues/11310> with Postgresql\n\nOn Thu, Jul 22, 2021 at 7:28 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n>\n> On Thu, Jul 22, 2021 at 12:24 PM Bauyrzhan Sakhariyev <\n> baurzhansahariev@gmail.com> wrote:\n> >\n> > Is date_bin supposed to return the beginning of the bin?\n>\n> Thanks for testing! And yes.\n>\n> > And does the sign of an interval define the \"direction\" of the bin?\n>\n> No, the boundary is intentionally the earlier one:\n>\n> /*\n> * Make sure the returned timestamp is at the start of the bin, even if\n> * the origin is in the future.\n> */\n> if (origin > timestamp && stride_usecs > 1)\n> tm_delta -= stride_usecs;\n>\n> I wonder if we should just disallow negative intervals here.\n>\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Thu, 22 Jul 2021 22:49:17 +0200", "msg_from": "Bauyrzhan Sakhariyev <baurzhansahariev@gmail.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Thu, Jul 22, 2021 at 4:49 PM Bauyrzhan Sakhariyev <\nbaurzhansahariev@gmail.com> wrote:\n> Not related to negative interval - I created a PR for adding zero check\nfor stride https://github.com/postgres/postgres/pull/67 and after getting\nit closed I stopped right there - 1 line check doesn't worth going through\nthe patching process I'm not familiar with.\n\nThanks for the pull request! I added tests and reworded the error message\nslightly to match current style, and pushed.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 22 Jul 2021 17:40:12 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Thu, Jul 22, 2021 at 4:49 PM Bauyrzhan Sakhariyev <\nbaurzhansahariev@gmail.com> wrote:\n>\n> > No, the boundary is intentionally the earlier one:\n>\n> I found that commit in GitHub, thanks for pointing it out.\n> When I test locally origin_in_the_future case I get different results for\npositive and negative intervals (see queries #1 and #2 from above, they\nhave same timestamp, origin and interval magnitude, difference is only in\ninterval sign) - can it be that the version I downloaded from\nhttps://www.enterprisedb.com/postgresql-early-experience doesn't include\ncommit with that improvement?\n\nSorry, I wasn't clear. The intention is that the boundary is on the lower\nside, but query #1 doesn't follow that, so that's a bug in my view. I found\nwhile developing the feature that the sign of the stride didn't seem to\nmatter, but evidently I didn't try with the origin in the future.\n\n> > I wonder if we should just disallow negative intervals here.\n>\n> I cannot imagine somebody using negative as a constant argument but users\ncan pass another column as a first argument date or some function(ts) - not\nlikely but possible. 
A line in docs about the leftmost point of interval as\nstart of the bin could be helpful.\n\nIn recent years there have been at least two attempts to add an absolute\nvalue function for intervals, and both stalled over semantics, so I'd\nrather just side-step the issue, especially as we're in beta.\n\n> >In the case of full units (1 minute, 1 hour, etc.), it gives the same\nresult as the analogous date_trunc call,\n> Was not obvious to me that we need to supply Monday origin to make\ndate_bin(1 week, ts) produce same result with date_trunc\n\nThe docs for date_trunc() don't mention this explicitly, but it might be\nworth mentioning ISO weeks. There is a nearby mention for EXTRACT():\n\nhttps://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT\n\n\"The number of the ISO 8601 week-numbering week of the year. By definition,\nISO weeks start on Mondays and the first week of a year contains January 4\nof that year. In other words, the first Thursday of a year is in week 1 of\nthat year.\"\n\n> Sorry for the verbose report and thanks for the nice function - I know\nit's not yet released, was just playing around with beta as I want to align\nCrateDB date_bin with Postgresql\n\nThanks again for testing! This is good feedback.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Jul 22, 2021 at 4:49 PM Bauyrzhan Sakhariyev <baurzhansahariev@gmail.com> wrote:>> > No, the boundary is intentionally the earlier one:>> I found that commit in GitHub, thanks for pointing it out.> When I test locally origin_in_the_future case I get different results for positive and negative intervals (see queries #1 and #2 from above, they have same timestamp, origin and interval magnitude, difference is only in interval sign) - can it be that the version I downloaded from https://www.enterprisedb.com/postgresql-early-experience doesn't include commit with that improvement?Sorry, I wasn't clear. 
The intention is that the boundary is on the lower side, but query #1 doesn't follow that, so that's a bug in my view. I found while developing the feature that the sign of the stride didn't seem to matter, but evidently I didn't try with the origin in the future.> >  I wonder if we should just disallow negative intervals here.>> I cannot imagine somebody using negative as a constant argument but users can pass another column as a first argument date or some function(ts) - not likely but possible. A line in docs about the leftmost point of interval as start of the bin could be helpful.In recent years there have been at least two attempts to add an absolute value function for intervals, and both stalled over semantics, so I'd rather just side-step the issue, especially as we're in beta.> >In the case of full units (1 minute, 1 hour, etc.), it gives the same result as the analogous date_trunc call,> Was not obvious to me that we need to supply Monday origin to make date_bin(1 week, ts) produce same result with date_truncThe docs for date_trunc() don't mention this explicitly, but it might be worth mentioning ISO weeks. There is a nearby mention for EXTRACT():https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT\"The number of the ISO 8601 week-numbering week of the year. By definition, ISO weeks start on Mondays and the first week of a year contains January 4 of that year. In other words, the first Thursday of a year is in week 1 of that year.\"> Sorry for the verbose report and thanks for the nice function -  I know it's not yet released, was just playing around with beta as I want to align CrateDB date_bin with PostgresqlThanks again for testing! 
This is good feedback.--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Fri, 23 Jul 2021 08:05:36 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "I wrote:\n\n> On Thu, Jul 22, 2021 at 4:49 PM Bauyrzhan Sakhariyev <\nbaurzhansahariev@gmail.com> wrote:\n> >\n> > > No, the boundary is intentionally the earlier one:\n> >\n> > I found that commit in GitHub, thanks for pointing it out.\n> > When I test locally origin_in_the_future case I get different results\nfor positive and negative intervals (see queries #1 and #2 from above, they\nhave same timestamp, origin and interval magnitude, difference is only in\ninterval sign) - can it be that the version I downloaded from\nhttps://www.enterprisedb.com/postgresql-early-experience doesn't include\ncommit with that improvement?\n>\n> Sorry, I wasn't clear. The intention is that the boundary is on the lower\nside, but query #1 doesn't follow that, so that's a bug in my view. I found\nwhile developing the feature that the sign of the stride didn't seem to\nmatter, but evidently I didn't try with the origin in the future.\n>\n> > > I wonder if we should just disallow negative intervals here.\n> >\n> > I cannot imagine somebody using negative as a constant argument but\nusers can pass another column as a first argument date or some function(ts)\n- not likely but possible. A line in docs about the leftmost point of\ninterval as start of the bin could be helpful.\n>\n> In recent years there have been at least two attempts to add an absolute\nvalue function for intervals, and both stalled over semantics, so I'd\nrather just side-step the issue, especially as we're in beta.\n\nConcretely, I propose to push the attached on master and v14. 
Since we're\nin beta 2 and this thread might not get much attention, I've CC'd the RMT.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 27 Jul 2021 12:05:51 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> Concretely, I propose to push the attached on master and v14. Since we're\n> in beta 2 and this thread might not get much attention, I've CC'd the RMT.\n\n+1, we can figure out whether that has a use some other time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Jul 2021 12:17:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Tue, Jul 27, 2021 at 12:05:51PM -0400, John Naylor wrote:\n> Concretely, I propose to push the attached on master and v14. Since we're\n> in beta 2 and this thread might not get much attention, I've CC'd the RMT.\n\n(It looks like gmail has messed up a bit the format of your last\nmessage.)\n\nHmm. The docs say also the following thing, but your patch does not\nreflect that anymore:\n\"Negative intervals are allowed and are treated the same as positive\nintervals.\"\nSo you may want to update that, at least.\n--\nMichael", "msg_date": "Wed, 28 Jul 2021 13:14:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Wed, Jul 28, 2021 at 12:15 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n>\n> On Tue, Jul 27, 2021 at 12:05:51PM -0400, John Naylor wrote:\n> > Concretely, I propose to push the attached on master and v14. 
Since\nwe're\n> > in beta 2 and this thread might not get much attention, I've CC'd the\nRMT.\n>\n> (It looks like gmail has messed up a bit the format of your last\n> message.)\n\nHmm, it looks fine in the archives.\n\n> Hmm. The docs say also the following thing, but your patch does not\n> reflect that anymore:\n> \"Negative intervals are allowed and are treated the same as positive\n> intervals.\"\n\nI'd forgotten that was documented based on incomplete information, thanks\nfor looking! Pushed with that fixed.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jul 28, 2021 at 12:15 AM Michael Paquier <michael@paquier.xyz> wrote:>> On Tue, Jul 27, 2021 at 12:05:51PM -0400, John Naylor wrote:> > Concretely, I propose to push the attached on master and v14. Since we're> > in beta 2 and this thread might not get much attention, I've CC'd the RMT.>> (It looks like gmail has messed up a bit the format of your last> message.)Hmm, it looks fine in the archives.> Hmm.  The docs say also the following thing, but your patch does not> reflect that anymore:> \"Negative intervals are allowed and are treated the same as positive> intervals.\"I'd forgotten that was documented based on incomplete information, thanks for looking! Pushed with that fixed.--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Wed, 28 Jul 2021 12:14:36 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" } ]
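An aside for readers following the bin-boundary discussion in the thread above: the agreed semantics (the result is the lower boundary of the stride-wide bin containing the timestamp, counted from the origin, even when the origin lies in the future, and a non-positive stride is rejected) can be sketched as a small model. This is an illustration only: the function below is hand-written Python, not PostgreSQL's `date_bin()` implementation, and its names are invented for the example.

```python
from datetime import datetime, timedelta

def date_bin(stride: timedelta, ts: datetime, origin: datetime) -> datetime:
    """Snap ts to the start (lower boundary) of its stride-wide bin,
    counting bins from origin. Illustrative model, not PostgreSQL code."""
    stride_us = stride // timedelta(microseconds=1)
    if stride_us <= 0:
        # mirrors the committed zero-stride check and the proposed
        # rejection of negative strides discussed in the thread
        raise ValueError("stride must be greater than zero")
    delta_us = (ts - origin) // timedelta(microseconds=1)
    # Python's % floors toward negative infinity, so the remainder is
    # always in [0, stride): the result stays on the lower side of the
    # bin even when the origin is in the future (delta_us < 0).
    return origin + timedelta(microseconds=delta_us - delta_us % stride_us)
```

With a Monday origin and a one-week stride this reproduces the `date_trunc('week', ...)` alignment mentioned earlier, and with an origin after the timestamp it still returns the bin's earlier edge.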
[ { "msg_contents": "Hi all,\n\nThe next commit fets is going to begin in a couple of days, and there\nis a total of 207 patches registered as of today. We don't have any\nmanager yet, so is there any volunteer for taking the lead this time?\n\nThanks,\n--\nMichael", "msg_date": "Wed, 26 Feb 2020 16:41:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Commit fest manager for 2020-03" }, { "msg_contents": "Hi,\n\nOn Wed, Feb 26, 2020 at 04:41:12PM +0900, Michael Paquier wrote:\n> The next commit fets is going to begin in a couple of days, and there\n> is a total of 207 patches registered as of today. We don't have any\n> manager yet, so is there any volunteer for taking the lead this time?\n\nThis is the last one for v13 I think? So probably somebody quite\nexperienced should handle this one. Or (if there is one already for\nv13?) maybe even the Release Team themselves?\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB M�nchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 M�nchengladbach\nGesch�ftsf�hrung: Dr. Michael Meskes, J�rg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n", "msg_date": "Wed, 26 Feb 2020 09:15:26 +0100", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: Commit fest manager for 2020-03" }, { "msg_contents": "On 2/26/20 3:15 AM, Michael Banck wrote:\n> \n> On Wed, Feb 26, 2020 at 04:41:12PM +0900, Michael Paquier wrote:\n>> The next commit fets is going to begin in a couple of days, and there\n>> is a total of 207 patches registered as of today. We don't have any\n>> manager yet, so is there any volunteer for taking the lead this time?\n> \n> This is the last one for v13 I think? 
So probably somebody quite\n> experienced should handle this one. Or (if there is one already for\n> v13?) maybe even the Release Team themselves?\n\nI'm happy to be CFM for this commitfest.\n\nI'm not sure it would be a good use of resources to have the release \nteam performing CFM duties directly. They will have enough on their \nplate and in any case I don't think the team has been announced yet.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 26 Feb 2020 08:39:13 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Commit fest manager for 2020-03" }, { "msg_contents": "> On 26 Feb 2020, at 14:39, David Steele <david@pgmasters.net> wrote:\n> \n> On 2/26/20 3:15 AM, Michael Banck wrote:\n>> On Wed, Feb 26, 2020 at 04:41:12PM +0900, Michael Paquier wrote:\n>>> The next commit fets is going to begin in a couple of days, and there\n>>> is a total of 207 patches registered as of today. We don't have any\n>>> manager yet, so is there any volunteer for taking the lead this time?\n>> This is the last one for v13 I think? So probably somebody quite\n>> experienced should handle this one. Or (if there is one already for\n>> v13?) maybe even the Release Team themselves?\n> \n> I'm happy to be CFM for this commitfest.\n\nThanks! \n\n> I'm not sure it would be a good use of resources to have the release team performing CFM duties directly. 
They will have enough on their plate\n\nAbsolutely, combining CFM and RM duties will make one of them (or both) suffer\na lack of attention.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 26 Feb 2020 15:29:18 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Commit fest manager for 2020-03" }, { "msg_contents": "On Wed, Feb 26, 2020 at 03:29:18PM +0100, Daniel Gustafsson wrote:\n>> On 26 Feb 2020, at 14:39, David Steele <david@pgmasters.net> wrote:\n>>\n>> On 2/26/20 3:15 AM, Michael Banck wrote:\n>>> On Wed, Feb 26, 2020 at 04:41:12PM +0900, Michael Paquier wrote:\n>>>> The next commit fets is going to begin in a couple of days, and\n>>>> there is a total of 207 patches registered as of today. We don't\n>>>> have any manager yet, so is there any volunteer for taking the lead\n>>>> this time?\n>>> This is the last one for v13 I think? So probably somebody quite\n>>> experienced should handle this one. Or (if there is one already for\n>>> v13?) maybe even the Release Team themselves?\n>>\n>> I'm happy to be CFM for this commitfest.\n>\n>Thanks!\n>\n>> I'm not sure it would be a good use of resources to have the release\n>> team performing CFM duties directly. They will have enough on their\n>> plate\n>\n>Absolutely, combining CFM and RM duties will make one of them (or both)\n>suffer a lack of attention.\n>\n\nDid we actually decide who's going to be on RMT this year? I don't think\nanyone particular was mentioned / proposed at the FOSDEM dev meeting. 
It\nmight be a good idea to decide that before the last CF too.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 26 Feb 2020 16:29:13 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Commit fest manager for 2020-03" }, { "msg_contents": "> On 26 Feb 2020, at 16:29, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n\n> Did we actually decide who's going to be on RMT this year? I don't think\n> anyone particular was mentioned / proposed at the FOSDEM dev meeting. It\n> might be a good idea to decide that before the last CF too.\n\nWe didn't, we only discussed (based on feedback from previous RMTs) that having\nsome level of timezone overlap between the members makes for shorter roundtrips\nin communication.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 26 Feb 2020 16:33:21 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Commit fest manager for 2020-03" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> On 2/26/20 3:15 AM, Michael Banck wrote:\n>> This is the last one for v13 I think? So probably somebody quite\n>> experienced should handle this one. Or (if there is one already for\n>> v13?) maybe even the Release Team themselves?\n\n> I'm not sure it would be a good use of resources to have the release \n> team performing CFM duties directly. They will have enough on their \n> plate and in any case I don't think the team has been announced yet.\n\nThe Release Team hasn't been picked yet. In past years I think\nwe've chosen them at the PGCon dev meeting. 
So really, there's no\noverlap --- the CF will be done before we need the RT.\n\nHaving said that, if you want to do it, that's great.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Feb 2020 10:45:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commit fest manager for 2020-03" }, { "msg_contents": "On Wed, Feb 26, 2020 at 10:45:16AM -0500, Tom Lane wrote:\n>David Steele <david@pgmasters.net> writes:\n>> On 2/26/20 3:15 AM, Michael Banck wrote:\n>>> This is the last one for v13 I think? So probably somebody quite\n>>> experienced should handle this one. Or (if there is one already for\n>>> v13?) maybe even the Release Team themselves?\n>\n>> I'm not sure it would be a good use of resources to have the release\n>> team performing CFM duties directly. They will have enough on their\n>> plate and in any case I don't think the team has been announced yet.\n>\n>The Release Team hasn't been picked yet. In past years I think\n>we've chosen them at the PGCon dev meeting. So really, there's no\n>overlap --- the CF will be done before we need the RT.\n>\n\nNope, the RMT for PG12 was announced on 2019/03/30 [1], i.e. shortly\nbefore the end of the last CF (and before pgcon). I think there was some\ndiscussion about the members at/after the FOSDEM dev meeting. 
The\noverlap with CFM duties is still fairly minimal, and there's not much\nfor RMT to do before the end of the last CF anyway ...\n\n[1] https://www.postgresql.org/message-id/20190330094043.GA28827@paquier.xyz\n\nMaybe we shouldn't wait with assembling RMT until pgcon, though.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 26 Feb 2020 20:34:26 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Commit fest manager for 2020-03" }, { "msg_contents": "On Wed, Feb 26, 2020 at 03:29:18PM +0100, Daniel Gustafsson wrote:\n> On 26 Feb 2020, at 14:39, David Steele <david@pgmasters.net> wrote:\n>> I'm happy to be CFM for this commitfest.\n> \n> Thanks! \n\nThanks David!\n--\nMichael", "msg_date": "Thu, 27 Feb 2020 11:37:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Commit fest manager for 2020-03" }, { "msg_contents": "On Wed, Feb 26, 2020 at 08:34:26PM +0100, Tomas Vondra wrote:\n> Nope, the RMT for PG12 was announced on 2019/03/30 [1], i.e. shortly\n> before the end of the last CF (and before pgcon). I think there was some\n> discussion about the members at/after the FOSDEM dev meeting. The\n> overlap with CFM duties is still fairly minimal, and there's not much\n> for RMT to do before the end of the last CF anyway ...\n> \n> [1] https://www.postgresql.org/message-id/20190330094043.GA28827@paquier.xyz\n> \n> Maybe we shouldn't wait with assembling RMT until pgcon, though.\n\nWaiting until PGCon is a bad idea, because we need to decide the\nfeature freeze deadline after the last CF, and this decision is taken\nmainly by the RMT. 
I think that it is also good to begin categorizing\nopen items and handle them when reported.\n--\nMichael", "msg_date": "Thu, 27 Feb 2020 11:39:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Commit fest manager for 2020-03" }, { "msg_contents": "On 2020-Feb-27, Michael Paquier wrote:\n\n> On Wed, Feb 26, 2020 at 08:34:26PM +0100, Tomas Vondra wrote:\n> > Nope, the RMT for PG12 was announced on 2019/03/30 [1], i.e. shortly\n> > before the end of the last CF (and before pgcon). I think there was some\n> > discussion about the members at/after the FOSDEM dev meeting. The\n> > overlap with CFM duties is still fairly minimal, and there's not much\n> > for RMT to do before the end of the last CF anyway ...\n> > \n> > [1] https://www.postgresql.org/message-id/20190330094043.GA28827@paquier.xyz\n> > \n> > Maybe we shouldn't wait with assembling RMT until pgcon, though.\n> \n> Waiting until PGCon is a bad idea,\n\n+1. As I recall, the RMT assembles around the time the last CF is over.\nLast year it was announced on March 30th, which is the latest date it\nhas ever happened. The history can be seen at the bottom here:\nhttps://wiki.postgresql.org/wiki/RMT\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 27 Feb 2020 11:23:07 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Commit fest manager for 2020-03" } ]
[ { "msg_contents": "Hi,\n\nAttached is a patch for allowing auto_explain to log plans before\nqueries are executed.\n\nCurrently, auto_explain logs plans only after query executions,\nso if a query gets stuck its plan could not be logged. If we can\nknow plans of stuck queries, we may get some hints to resolve the\nstuck. This is useful when you are testing and debugging your\napplication whose queries get stuck in some situations.\n\nThis patch adds new option log_before_query to auto_explain.\nSetting auto_explain.log_before_query option logs all plans before\nqueries are executed regardless of auto_explain.log_min_duration\nunless this is set -1 to disable logging. If log_before_query is\nenabled, only duration time is logged after query execution as in\nthe case of when both log_statement and log_min_duration_statement\nare enabled.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Thu, 27 Feb 2020 02:35:18 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Allow auto_explain to log plans before queries are executed" }, { "msg_contents": "On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:\n> Hi,\n>\n> Attached is a patch for allowing auto_explain to log plans before\n> queries are executed.\n>\n> Currently, auto_explain logs plans only after query executions,\n> so if a query gets stuck its plan could not be logged. If we can\n> know plans of stuck queries, we may get some hints to resolve the\n> stuck. This is useful when you are testing and debugging your\n> application whose queries get stuck in some situations.\n\nIndeed that could be useful.\n\n> This patch adds new option log_before_query to auto_explain.\n\nMaybe \"log_before_execution\" would be better?\n\n> Setting auto_explain.log_before_query option logs all plans before\n> queries are executed regardless of auto_explain.log_min_duration\n> unless this is set -1 to disable logging. 
If log_before_query is\n> enabled, only duration time is logged after query execution as in\n> the case of when both log_statement and log_min_duration_statement\n> are enabled.\n\nI'm not sure about this behavior. The final explain plan is needed at least if\nlog_analyze, log_buffers or log_timing are enabled.\n\n\n", "msg_date": "Wed, 26 Feb 2020 18:51:21 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow auto_explain to log plans before queries are executed" }, { "msg_contents": "On Wed, 26 Feb 2020 18:51:21 +0100\nJulien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:\n> > Hi,\n> >\n> > Attached is a patch for allowing auto_explain to log plans before\n> > queries are executed.\n> >\n> > Currently, auto_explain logs plans only after query executions,\n> > so if a query gets stuck its plan could not be logged. If we can\n> > know plans of stuck queries, we may get some hints to resolve the\n> > stuck. This is useful when you are testing and debugging your\n> > application whose queries get stuck in some situations.\n> \n> Indeed that could be useful.\n> \n> > This patch adds new option log_before_query to auto_explain.\n> \n> Maybe \"log_before_execution\" would be better?\n\nThanks! This seems better also to me.\n\n> \n> > Setting auto_explain.log_before_query option logs all plans before\n> > queries are executed regardless of auto_explain.log_min_duration\n> > unless this is set -1 to disable logging. If log_before_query is\n> > enabled, only duration time is logged after query execution as in\n> > the case of when both log_statement and log_min_duration_statement\n> > are enabled.\n> \n> I'm not sure about this behavior. 
The final explain plan is needed at least if\n> log_analyze, log_buffers or log_timing are enabled.\n\nIn the current patch, log_before_query (will be log_before_execution)\nhas no effect if log_analyze is enabled in order to avoid to log the\nsame plans twice. Instead, is it better to log the plan always twice,\nbefore and after the execution, if log_before_query is enabled\nregardless of log_min_duration or log_analyze?\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 27 Feb 2020 10:18:16 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Allow auto_explain to log plans before queries are executed" }, { "msg_contents": "Hello.\n\nAt Thu, 27 Feb 2020 10:18:16 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> On Wed, 26 Feb 2020 18:51:21 +0100\n> Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> > On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:\n> > > Hi,\n> > >\n> > > Attached is a patch for allowing auto_explain to log plans before\n> > > queries are executed.\n> > >\n> > > Currently, auto_explain logs plans only after query executions,\n> > > so if a query gets stuck its plan could not be logged. If we can\n> > > know plans of stuck queries, we may get some hints to resolve the\n> > > stuck. This is useful when you are testing and debugging your\n> > > application whose queries get stuck in some situations.\n> > \n> > Indeed that could be useful.\n>\n> > > This patch adds new option log_before_query to auto_explain.\n> > \n> > Maybe \"log_before_execution\" would be better?\n> \n> Thanks! This seems better also to me.\n> \n> > \n> > > Setting auto_explain.log_before_query option logs all plans before\n> > > queries are executed regardless of auto_explain.log_min_duration\n> > > unless this is set -1 to disable logging. 
If log_before_query is\n> > > enabled, only duration time is logged after query execution as in\n> > > the case of when both log_statement and log_min_duration_statement\n> > > are enabled.\n> > \n> > I'm not sure about this behavior. The final explain plan is needed at least if\n> > log_analyze, log_buffers or log_timing are enabled.\n> \n> In the current patch, log_before_query (will be log_before_execution)\n> has no effect if log_analyze is enabled in order to avoid to log the\n> same plans twice. Instead, is it better to log the plan always twice,\n> before and after the execution, if  log_before_query is enabled\n> regardless of log_min_duration or log_analyze?\n\nHonestly, I don't think showing plans for all queries is useful\nbehavior.\n\nIf you allow the stuck query to be canceled, showing plan in\nPG_FINALLY() block in explain_ExecutorRun would work, which would look like\nthis.\n\nexplain_ExecutorRun()\n{\n  ...\n  PG_TRY();\n  {\n      ...\n      else\n         standard_ExecutorRun();\n      nesting_level--;\n  }\n  PG_CATCH();\n  {\n      nesting_level--;\n\n      if (auto_explain_log_failed_plan &&\n       <maybe the time elapsed from start exceeds min_duration>)\n      {\n          'show the plan'\n      }\n   }\n}\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 27 Feb 2020 14:14:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow auto_explain to log plans before queries are executed" }, { "msg_contents": "čt 27. 2. 
2020 v 6:16 odesílatel Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nnapsal:\n\n> Hello.\n>\n> At Thu, 27 Feb 2020 10:18:16 +0900, Yugo NAGATA <nagata@sraoss.co.jp>\n> wrote in\n> > On Wed, 26 Feb 2020 18:51:21 +0100\n> > Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > > On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:\n> > > > Hi,\n> > > >\n> > > > Attached is a patch for allowing auto_explain to log plans before\n> > > > queries are executed.\n> > > >\n> > > > Currently, auto_explain logs plans only after query executions,\n> > > > so if a query gets stuck its plan could not be logged. If we can\n> > > > know plans of stuck queries, we may get some hints to resolve the\n> > > > stuck. This is useful when you are testing and debugging your\n> > > > application whose queries get stuck in some situations.\n> > >\n> > > Indeed that could be useful.\n> >\n> > > > This patch adds new option log_before_query to auto_explain.\n> > >\n> > > Maybe \"log_before_execution\" would be better?\n> >\n> > Thanks! This seems better also to me.\n> >\n> > >\n> > > > Setting auto_explain.log_before_query option logs all plans before\n> > > > queries are executed regardless of auto_explain.log_min_duration\n> > > > unless this is set -1 to disable logging. If log_before_query is\n> > > > enabled, only duration time is logged after query execution as in\n> > > > the case of when both log_statement and log_min_duration_statement\n> > > > are enabled.\n> > >\n> > > I'm not sure about this behavior. The final explain plan is needed at\n> least if\n> > > log_analyze, log_buffers or log_timing are enabled.\n> >\n> > In the current patch, log_before_query (will be log_before_execution)\n> > has no effect if log_analyze is enabled in order to avoid to log the\n> > same plans twice. 
Instead, is it better to log the plan always twice,\n> > before and after the execution, if log_before_query is enabled\n> > regardless of log_min_duration or log_analyze?\n>\n> Honestly, I don't think showing plans for all queries is useful\n> behavior.\n>\n> If you allow the stuck query to be canceled, showing plan in\n> PG_FINALLY() block in explain_ExecutorRun would work, which look like\n> this.\n>\n> explain_ExecutorRun()\n> {\n> ...\n> PG_TRY();\n> {\n> ...\n> else\n> starndard_ExecutorRun();\n> nesting_level--;\n> }\n> PG_CATCH();\n> {\n> nesting_level--;\n>\n> if (auto_explain_log_failed_plan &&\n> <maybe the time elapsed from start exceeds min_duration>)\n> {\n> 'show the plan'\n> }\n> }\n> }\n>\n> regards.\n>\n\nIt can work - but still it is not good enough solution. We need \"query\ndebugger\" that allows to get some query execution metrics online.\n\nThere was a problem with memory management for passing plans between\nprocesses. Can we used temp files instead shared memory?\n\nRegards\n\nPavel\n\n\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n>\n>", "msg_date": "Thu, 27 Feb 2020 06:27:24 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow auto_explain to log plans before queries are executed" }, { "msg_contents": "On Thu, 27 Feb 2020 14:14:41 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> Hello.\n> \n> At Thu, 27 Feb 2020 10:18:16 +0900, Yugo NAGATA <nagata@sraoss.co.jp> wrote in \n> > On Wed, 26 Feb 2020 18:51:21 +0100\n> > Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > \n> > > On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:\n> > > > Hi,\n> > > >\n> > > > Attached is a patch for allowing auto_explain to log plans before\n> > > > queries are executed.\n> > > >\n> > > > Currently, auto_explain logs plans only after query executions,\n> > > > so if a query gets stuck its plan could not be logged. If we can\n> > > > know plans of stuck queries, we may get some hints to resolve the\n> > > > stuck. This is useful when you are testing and debugging your\n> > > > application whose queries get stuck in some situations.\n> > > \n> > > Indeed that could be useful.\n> >\n> > > > This patch adds new option log_before_query to auto_explain.\n> > > \n> > > Maybe \"log_before_execution\" would be better?\n> > \n> > Thanks! This seems better also to me.\n> > \n> > > \n> > > > Setting auto_explain.log_before_query option logs all plans before\n> > > > queries are executed regardless of auto_explain.log_min_duration\n> > > > unless this is set -1 to disable logging. 
If log_before_query is\n> > > > enabled, only duration time is logged after query execution as in\n> > > > the case of when both log_statement and log_min_duration_statement\n> > > > are enabled.\n> > > \n> > > I'm not sure about this behavior. The final explain plan is needed at least if\n> > > log_analyze, log_buffers or log_timing are enabled.\n> > \n> > In the current patch, log_before_query (will be log_before_execution)\n> > has no effect if log_analyze is enabled in order to avoid to log the\n> > same plans twice. Instead, is it better to log the plan always twice,\n> > before and after the execution, if log_before_query is enabled\n> > regardless of log_min_duration or log_analyze?\n> \n> Honestly, I don't think showing plans for all queries is useful\n> behavior.\n> \n> If you allow the stuck query to be canceled, showing plan in\n> PG_FINALLY() block in explain_ExecutorRun would work, which look like\n> this.\n> \n> explain_ExecutorRun()\n> {\n> ...\n> PG_TRY();\n> {\n> ...\n> else\n> starndard_ExecutorRun();\n> nesting_level--;\n> }\n> PG_CATCH();\n> {\n> nesting_level--;\n> \n> if (auto_explain_log_failed_plan &&\n> <maybe the time elapsed from start exceeds min_duration>)\n> {\n> 'show the plan'\n> }\n> }\n> }\n\nThat makes sense. The initial purpose is to log plans of stuck queries\nnot of all queries, so your suggestion, doing it only when the query\nfails, is reasonable. 
I'll consider it little more.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 27 Feb 2020 14:48:05 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Allow auto_explain to log plans before queries are executed" }, { "msg_contents": "At Thu, 27 Feb 2020 06:27:24 +0100, Pavel Stehule <pavel.stehule@gmail.com> wrote in \n> odesílatel Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> napsal:\n> > > In the current patch, log_before_query (will be log_before_execution)\n> > > has no effect if log_analyze is enabled in order to avoid to log the\n> > > same plans twice. Instead, is it better to log the plan always twice,\n> > > before and after the execution, if log_before_query is enabled\n> > > regardless of log_min_duration or log_analyze?\n> >\n> > Honestly, I don't think showing plans for all queries is useful\n> > behavior.\n> >\n> > If you allow the stuck query to be canceled, showing plan in\n> > PG_FINALLY() block in explain_ExecutorRun would work, which look like\n> > this.\n...\n> It can work - but still it is not good enough solution. We need \"query\n> debugger\" that allows to get some query execution metrics online.\n\nIf we need a live plan dump of a running query, We could do that using\nsome kind of inter-backend triggering. (I'm not sure if PG offers\ninter-backend signalling facility usable by extensions..)\n\n=# select auto_explain.log_plan_backend(12345);\n\npostgresql.log:\n LOG: requested plan dump: <blah, blah>..\n\n\n\n> There was a problem with memory management for passing plans between\n> processes. Can we used temp files instead shared memory?\n\n=# select auto_explain.dump_plan_backend(12345);\n pid | query | plan\n-------+-------------+-------------------\n 12345 | SELECT 1; | Result (cost=....) (actual..)\n(1 row)\n\nDoesn't DSA work? 
I think it would be easier to handle than files.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 27 Feb 2020 14:57:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow auto_explain to log plans before queries are executed" }, { "msg_contents": "On Thu, 27 Feb 2020 06:27:24 +0100\nPavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> čt 27. 2. 2020 v 6:16 odesílatel Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> napsal:\n> \n> > Hello.\n> >\n> > At Thu, 27 Feb 2020 10:18:16 +0900, Yugo NAGATA <nagata@sraoss.co.jp>\n> > wrote in\n> > > On Wed, 26 Feb 2020 18:51:21 +0100\n> > > Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > > On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:\n> > > > > Hi,\n> > > > >\n> > > > > Attached is a patch for allowing auto_explain to log plans before\n> > > > > queries are executed.\n> > > > >\n> > > > > Currently, auto_explain logs plans only after query executions,\n> > > > > so if a query gets stuck its plan could not be logged. If we can\n> > > > > know plans of stuck queries, we may get some hints to resolve the\n> > > > > stuck. This is useful when you are testing and debugging your\n> > > > > application whose queries get stuck in some situations.\n> > > >\n> > > > Indeed that could be useful.\n> > >\n> > > > > This patch adds new option log_before_query to auto_explain.\n> > > >\n> > > > Maybe \"log_before_execution\" would be better?\n> > >\n> > > Thanks! This seems better also to me.\n> > >\n> > > >\n> > > > > Setting auto_explain.log_before_query option logs all plans before\n> > > > > queries are executed regardless of auto_explain.log_min_duration\n> > > > > unless this is set -1 to disable logging. 
If log_before_query is\n> > > > > enabled, only duration time is logged after query execution as in\n> > > > > the case of when both log_statement and log_min_duration_statement\n> > > > > are enabled.\n> > > >\n> > > > I'm not sure about this behavior. The final explain plan is needed at\n> > least if\n> > > > log_analyze, log_buffers or log_timing are enabled.\n> > >\n> > > In the current patch, log_before_query (will be log_before_execution)\n> > > has no effect if log_analyze is enabled in order to avoid to log the\n> > > same plans twice. Instead, is it better to log the plan always twice,\n> > > before and after the execution, if log_before_query is enabled\n> > > regardless of log_min_duration or log_analyze?\n> >\n> > Honestly, I don't think showing plans for all queries is useful\n> > behavior.\n> >\n> > If you allow the stuck query to be canceled, showing plan in\n> > PG_FINALLY() block in explain_ExecutorRun would work, which look like\n> > this.\n> >\n> > explain_ExecutorRun()\n> > {\n> > ...\n> > PG_TRY();\n> > {\n> > ...\n> > else\n> > starndard_ExecutorRun();\n> > nesting_level--;\n> > }\n> > PG_CATCH();\n> > {\n> > nesting_level--;\n> >\n> > if (auto_explain_log_failed_plan &&\n> > <maybe the time elapsed from start exceeds min_duration>)\n> > {\n> > 'show the plan'\n> > }\n> > }\n> > }\n> >\n> > regards.\n> >\n> \n> It can work - but still it is not good enough solution. We need \"query\n> debugger\" that allows to get some query execution metrics online.\n> \n> There was a problem with memory management for passing plans between\n> processes. Can we used temp files instead shared memory?\n\n I think \"query debugger\" feature you proposed is out of scope of\nauto_explain module. 
I also think the feature to analyze running\nquery online is great, but we will need another discussion on a new\nmodule or eature for it.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 27 Feb 2020 15:00:38 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Allow auto_explain to log plans before queries are executed" }, { "msg_contents": "čt 27. 2. 2020 v 6:58 odesílatel Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nnapsal:\n\n> At Thu, 27 Feb 2020 06:27:24 +0100, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote in\n> > odesílatel Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > napsal:\n> > > > In the current patch, log_before_query (will be log_before_execution)\n> > > > has no effect if log_analyze is enabled in order to avoid to log the\n> > > > same plans twice. Instead, is it better to log the plan always\n> twice,\n> > > > before and after the execution, if log_before_query is enabled\n> > > > regardless of log_min_duration or log_analyze?\n> > >\n> > > Honestly, I don't think showing plans for all queries is useful\n> > > behavior.\n> > >\n> > > If you allow the stuck query to be canceled, showing plan in\n> > > PG_FINALLY() block in explain_ExecutorRun would work, which look like\n> > > this.\n> ...\n> > It can work - but still it is not good enough solution. We need \"query\n> > debugger\" that allows to get some query execution metrics online.\n>\n> If we need a live plan dump of a running query, We could do that using\n> some kind of inter-backend triggering. (I'm not sure if PG offers\n> inter-backend signalling facility usable by extensions..)\n>\n> =# select auto_explain.log_plan_backend(12345);\n>\n> postgresql.log:\n> LOG: requested plan dump: <blah, blah>..\n>\n>\n>\n> > There was a problem with memory management for passing plans between\n> > processes. 
Can we used temp files instead shared memory?\n>\n> =# select auto_explain.dump_plan_backend(12345);\n> pid | query | plan\n> -------+-------------+-------------------\n> 12345 | SELECT 1; | Result (cost=....) (actual..)\n> (1 row)\n>\n> Doesn't DSA work? I think it would be easier to handle than files.\n>\n\nI am not sure. There is hard questions when the allocated shared memory\nshould be deallocated.\n\nMaybe using third process can be the most nice, safe solution.\n\nThe execution plans can be pushed to some background worker memory, and\nthis process can works like stats_collector.\n\n\n\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>", "msg_date": "Thu, 27 Feb 2020 07:08:07 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow auto_explain to log plans before queries are executed" }, { "msg_contents": "čt 27. 2. 2020 v 7:01 odesílatel Yugo NAGATA <nagata@sraoss.co.jp> napsal:\n\n> On Thu, 27 Feb 2020 06:27:24 +0100\n> Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> > čt 27. 2. 
2020 v 6:16 odesílatel Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com>\n> > napsal:\n> >\n> > > Hello.\n> > >\n> > > At Thu, 27 Feb 2020 10:18:16 +0900, Yugo NAGATA <nagata@sraoss.co.jp>\n> > > wrote in\n> > > > On Wed, 26 Feb 2020 18:51:21 +0100\n> > > > Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > >\n> > > > > On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:\n> > > > > > Hi,\n> > > > > >\n> > > > > > Attached is a patch for allowing auto_explain to log plans before\n> > > > > > queries are executed.\n> > > > > >\n> > > > > > Currently, auto_explain logs plans only after query executions,\n> > > > > > so if a query gets stuck its plan could not be logged. If we can\n> > > > > > know plans of stuck queries, we may get some hints to resolve the\n> > > > > > stuck. This is useful when you are testing and debugging your\n> > > > > > application whose queries get stuck in some situations.\n> > > > >\n> > > > > Indeed that could be useful.\n> > > >\n> > > > > > This patch adds new option log_before_query to auto_explain.\n> > > > >\n> > > > > Maybe \"log_before_execution\" would be better?\n> > > >\n> > > > Thanks! This seems better also to me.\n> > > >\n> > > > >\n> > > > > > Setting auto_explain.log_before_query option logs all plans\n> before\n> > > > > > queries are executed regardless of auto_explain.log_min_duration\n> > > > > > unless this is set -1 to disable logging. If log_before_query is\n> > > > > > enabled, only duration time is logged after query execution as in\n> > > > > > the case of when both log_statement and\n> log_min_duration_statement\n> > > > > > are enabled.\n> > > > >\n> > > > > I'm not sure about this behavior. The final explain plan is\n> needed at\n> > > least if\n> > > > > log_analyze, log_buffers or log_timing are enabled.\n> > > >\n> > > > In the current patch, log_before_query (will be log_before_execution)\n> > > > has no effect if log_analyze is enabled in order to avoid to log the\n> > > > same plans twice. 
Instead, is it better to log the plan always\n> twice,\n> > > > before and after the execution, if log_before_query is enabled\n> > > > regardless of log_min_duration or log_analyze?\n> > >\n> > > Honestly, I don't think showing plans for all queries is useful\n> > > behavior.\n> > >\n> > > If you allow the stuck query to be canceled, showing plan in\n> > > PG_FINALLY() block in explain_ExecutorRun would work, which look like\n> > > this.\n> > >\n> > > explain_ExecutorRun()\n> > > {\n> > > ...\n> > > PG_TRY();\n> > > {\n> > > ...\n> > > else\n> > > starndard_ExecutorRun();\n> > > nesting_level--;\n> > > }\n> > > PG_CATCH();\n> > > {\n> > > nesting_level--;\n> > >\n> > > if (auto_explain_log_failed_plan &&\n> > > <maybe the time elapsed from start exceeds min_duration>)\n> > > {\n> > > 'show the plan'\n> > > }\n> > > }\n> > > }\n> > >\n> > > regards.\n> > >\n> >\n> > It can work - but still it is not good enough solution. We need \"query\n> > debugger\" that allows to get some query execution metrics online.\n> >\n> > There was a problem with memory management for passing plans between\n> > processes. Can we used temp files instead shared memory?\n>\n> I think \"query debugger\" feature you proposed is out of scope of\n> auto_explain module. I also think the feature to analyze running\n> query online is great, but we will need another discussion on a new\n> module or eature for it.\n>\n\nsure. My note was about using auto_explain like query_debugger. It has not\ntoo sense, and from this perspective, the original proposal to log plan\nbefore execution has more sense.\n\nyou can log every plan with higher cost than some constant.\n\n\n\n> Regards,\n> Yugo Nagata\n>\n> --\n> Yugo NAGATA <nagata@sraoss.co.jp>\n>\n>\n>", "msg_date": "Thu, 27 Feb 2020 07:11:26 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow auto_explain to log plans before queries are executed" }, { "msg_contents": "On Thu, Feb 27, 2020 at 7:12 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> čt 27. 2. 2020 v 7:01 odesílatel Yugo NAGATA <nagata@sraoss.co.jp> napsal:\n>>\n>> On Thu, 27 Feb 2020 06:27:24 +0100\n>> Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>>\n>> > čt 27. 2. 2020 v 6:16 odesílatel Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n>> > napsal:\n>> >\n>> > > Hello.\n>> > >\n>> > > At Thu, 27 Feb 2020 10:18:16 +0900, Yugo NAGATA <nagata@sraoss.co.jp>\n>> > > wrote in\n>> > > > On Wed, 26 Feb 2020 18:51:21 +0100\n>> > > > Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> > > >\n>> > > > > On Thu, Feb 27, 2020 at 02:35:18AM +0900, Yugo NAGATA wrote:\n>> > > > > > Hi,\n>> > > > > >\n>> > > > > > Attached is a patch for allowing auto_explain to log plans before\n>> > > > > > queries are executed.\n>> > > > > >\n>> > > > > > Currently, auto_explain logs plans only after query executions,\n>> > > > > > so if a query gets stuck its plan could not be logged. If we can\n>> > > > > > know plans of stuck queries, we may get some hints to resolve the\n>> > > > > > stuck. This is useful when you are testing and debugging your\n>> > > > > > application whose queries get stuck in some situations.\n>> > > > >\n>> > > > > Indeed that could be useful.\n>> > > >\n>> > > > > > This patch adds new option log_before_query to auto_explain.\n>> > > > >\n>> > > > > Maybe \"log_before_execution\" would be better?\n>> > > >\n>> > > > Thanks! 
This seems better also to me.\n>> > > >\n>> > > > >\n>> > > > > > Setting auto_explain.log_before_query option logs all plans before\n>> > > > > > queries are executed regardless of auto_explain.log_min_duration\n>> > > > > > unless this is set -1 to disable logging. If log_before_query is\n>> > > > > > enabled, only duration time is logged after query execution as in\n>> > > > > > the case of when both log_statement and log_min_duration_statement\n>> > > > > > are enabled.\n>> > > > >\n>> > > > > I'm not sure about this behavior. The final explain plan is needed at\n>> > > least if\n>> > > > > log_analyze, log_buffers or log_timing are enabled.\n>> > > >\n>> > > > In the current patch, log_before_query (will be log_before_execution)\n>> > > > has no effect if log_analyze is enabled in order to avoid to log the\n>> > > > same plans twice. Instead, is it better to log the plan always twice,\n>> > > > before and after the execution, if log_before_query is enabled\n>> > > > regardless of log_min_duration or log_analyze?\n>> > >\n>> > > Honestly, I don't think showing plans for all queries is useful\n>> > > behavior.\n>> > >\n>> > > If you allow the stuck query to be canceled, showing plan in\n>> > > PG_FINALLY() block in explain_ExecutorRun would work, which look like\n>> > > this.\n>> > >\n>> > > explain_ExecutorRun()\n>> > > {\n>> > > ...\n>> > > PG_TRY();\n>> > > {\n>> > > ...\n>> > > else\n>> > > starndard_ExecutorRun();\n>> > > nesting_level--;\n>> > > }\n>> > > PG_CATCH();\n>> > > {\n>> > > nesting_level--;\n>> > >\n>> > > if (auto_explain_log_failed_plan &&\n>> > > <maybe the time elapsed from start exceeds min_duration>)\n>> > > {\n>> > > 'show the plan'\n>> > > }\n>> > > }\n>> > > }\n>> > >\n>> > > regards.\n>> > >\n>> >\n>> > It can work - but still it is not good enough solution. 
We need \"query\n>> > debugger\" that allows to get some query execution metrics online.\n>> >\n>> > There was a problem with memory management for passing plans between\n>> > processes. Can we used temp files instead shared memory?\n>>\n>> I think \"query debugger\" feature you proposed is out of scope of\n>> auto_explain module. I also think the feature to analyze running\n>> query online is great, but we will need another discussion on a new\n>> module or eature for it.\n>\n>\n> sure. My note was about using auto_explain like query_debugger. It has not too sense, and from this perspective, the original proposal to log plan before execution has more sense.\n>\n> you can log every plan with higher cost than some constant.\n\nYes I thought about that too. If you're not in an OLAP environment\n(or with a specific user running few expensive queries), setup an\nauto_explain.log_before_execution_min_cost.\n\n\n", "msg_date": "Thu, 27 Feb 2020 07:31:58 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow auto_explain to log plans before queries are executed" }, { "msg_contents": "Hi,\n\nthat feature for dumping plans with auto explain is already available in\nhttps://github.com/legrandlegrand/pg_stat_sql_plans\n\nThis is an hybrid extension combining auto_explain and pg_stat_statements,\nadding a planid and tracking metrics even on error, ..., ...\n\nWith \npg_stat_sql_plans.track_planid = true\npg_stat_sql_plans.explain = true\n --> it writes explain plan in log file after planning and only one time\nper (queryid,planid)\n then no need of sampling\n\nand with\npg_stat_sql_plans.track = 'all'\n --> function pgssp_backend_queryid(pid) retrieves (nested) queryid of a\nstuck statement, \n and permit to retrieve its plan (by its queryid) in logs.\n\nRegards\nPAscal\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Thu, 27 Feb 2020 10:37:47 -0700 (MST)", 
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow auto_explain to log plans before queries are executed" }, { "msg_contents": "On Thu, Feb 27, 2020 at 7:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Feb 27, 2020 at 7:12 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> >\n> > čt 27. 2. 2020 v 7:01 odesílatel Yugo NAGATA <nagata@sraoss.co.jp> napsal:\n> >> I think \"query debugger\" feature you proposed is out of scope of\n> >> auto_explain module. I also think the feature to analyze running\n> >> query online is great, but we will need another discussion on a new\n> >> module or eature for it.\n> >\n> >\n> > sure. My note was about using auto_explain like query_debugger. It has not too sense, and from this perspective, the original proposal to log plan before execution has more sense.\n> >\n> > you can log every plan with higher cost than some constant.\n>\n> Yes I thought about that too. If you're not in an OLAP environment\n> (or with a specific user running few expensive queries), setup an\n> auto_explain.log_before_execution_min_cost.\n\nThere was some discussion but no clear consensus on what should really\nbe done. I'm marking the patch as waiting on author which seems more\naccurate. Feel free to switch it back if that's a wrong move.\n\n\n", "msg_date": "Thu, 5 Mar 2020 14:46:54 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow auto_explain to log plans before queries are executed" }, { "msg_contents": "Kyotaro Horiguchi-4 wrote\n> At Thu, 27 Feb 2020 06:27:24 +0100, Pavel Stehule &lt;\n\n> pavel.stehule@\n\n> &gt; wrote in \n>> odesílatel Kyotaro Horiguchi &lt;\n\n> horikyota.ntt@\n\n> &gt;\n>> napsal:\n> \n> If we need a live plan dump of a running query, We could do that using\n> some kind of inter-backend triggering. 
(I'm not sure if PG offers\n> inter-backend signalling facility usable by extensions..)\n> \n> =# select auto_explain.log_plan_backend(12345);\n> \n> postgresql.log:\n> LOG: requested plan dump: &lt;blah, blah&gt;..\n> \n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\nDid you know\nhttps://www.postgresql-archive.org/pg-show-plans-Seeing-all-execution-plans-at-once-td6129231.html\n?\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Fri, 27 Mar 2020 10:23:51 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow auto_explain to log plans before queries are executed" }, { "msg_contents": "On 3/5/20 8:46 AM, Julien Rouhaud wrote:\n> On Thu, Feb 27, 2020 at 7:31 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> On Thu, Feb 27, 2020 at 7:12 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>>>\n>>> čt 27. 2. 2020 v 7:01 odesílatel Yugo NAGATA <nagata@sraoss.co.jp> napsal:\n>>>> I think \"query debugger\" feature you proposed is out of scope of\n>>>> auto_explain module. I also think the feature to analyze running\n>>>> query online is great, but we will need another discussion on a new\n>>>> module or eature for it.\n>>>\n>>>\n>>> sure. My note was about using auto_explain like query_debugger. It has not too sense, and from this perspective, the original proposal to log plan before execution has more sense.\n>>>\n>>> you can log every plan with higher cost than some constant.\n>>\n>> Yes I thought about that too. If you're not in an OLAP environment\n>> (or with a specific user running few expensive queries), setup an\n>> auto_explain.log_before_execution_min_cost.\n> \n> There was some discussion but no clear consensus on what should really\n> be done. I'm marking the patch as waiting on author which seems more\n> accurate. 
Feel free to switch it back if that's a wrong move.\n\nThere does not seem to be any progress towards a consensus so I'm marking \nthis Returned with Feedback.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 8 Apr 2020 09:25:26 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Allow auto_explain to log plans before queries are executed" } ]
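The `auto_explain.log_before_execution_min_cost` setting floated in this thread does not exist in auto_explain; it was only an idea here. As a hedged sketch, the gating decision such a GUC implies could look like the standalone model below — the struct and names are invented for illustration and are not PostgreSQL code:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical gate for the proposed auto_explain.log_before_execution_min_cost
 * GUC: log the plan before execution only when the planner's estimated total
 * cost reaches the threshold.  A negative threshold disables the feature,
 * mirroring how auto_explain.log_min_duration uses -1 as "off".
 */
typedef struct PlanEstimate
{
    double      startup_cost;
    double      total_cost;
} PlanEstimate;

static bool
should_log_plan_before_execution(const PlanEstimate *plan, double min_cost)
{
    if (min_cost < 0)
        return false;           /* feature disabled */
    return plan->total_cost >= min_cost;
}
```

In a real implementation this check would run from auto_explain's executor-start hook, before the query begins executing, which is what distinguishes it from the existing after-the-fact `log_min_duration` behaviour.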
[ { "msg_contents": "Hello.\n\nWe found that targetted promotion can cause an assertion failure. The\nattached TAP test causes that.\n\n> TRAP: FailedAssertion(\"StandbyMode\", File: \"xlog.c\", Line: 12078)\n\nAfter recovery target is reached, StartupXLOG turns off standby mode\nthen refetches the last record. If the last record starts from the\nprevious WAL segment, the assertion failure is triggered.\n\nThe wrong point is that StartupXLOG does random access fetching while\nWaitForWALToBecomeAvailable is thinking it is still in streaming. I\nthink if it is called with random access mode,\nWaitForWALToBecomeAvailable should move to XLOG_FROM_ARCHIVE even\nthough it is thinking that it is still reading from stream.\n\nregards.\n\n-- Kyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 27 Feb 2020 12:48:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Crash by targetted recovery" }, { "msg_contents": "\n\nOn 2020/02/27 12:48, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> We found that targetted promotion can cause an assertion failure. The\n> attached TAP test causes that.\n> \n>> TRAP: FailedAssertion(\"StandbyMode\", File: \"xlog.c\", Line: 12078)\n> \n> After recovery target is reached, StartupXLOG turns off standby mode\n> then refetches the last record. If the last record starts from the\n> previous WAL segment, the assertion failure is triggered.\n\nGood catch!\n\n> The wrong point is that StartupXLOG does random access fetching while\n> WaitForWALToBecomeAvailable is thinking it is still in streaming. I\n> think if it is called with random access mode,\n> WaitForWALToBecomeAvailable should move to XLOG_FROM_ARCHIVE even\n> though it is thinking that it is still reading from stream.\n\nI failed to understand why random access while reading from\nstream is bad idea. 
Could you elaborate why?\n\nIsn't it sufficient to set currentSource to 0 when disabling\nStandbyMode?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 27 Feb 2020 14:40:55 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "At Thu, 27 Feb 2020 14:40:55 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/02/27 12:48, Kyotaro Horiguchi wrote:\n> > Hello.\n> > We found that targetted promotion can cause an assertion failure. The\n> > attached TAP test causes that.\n> > \n> >> TRAP: FailedAssertion(\"StandbyMode\", File: \"xlog.c\", Line: 12078)\n> > After recovery target is reached, StartupXLOG turns off standby mode\n> > then refetches the last record. If the last record starts from the\n> > previous WAL segment, the assertion failure is triggered.\n> \n> Good catch!\n> \n> > The wrong point is that StartupXLOG does random access fetching while\n> > WaitForWALToBecomeAvailable is thinking it is still in streaming. I\n> > think if it is called with random access mode,\n> > WaitForWALToBecomeAvailable should move to XLOG_FROM_ARCHIVE even\n> > though it is thinking that it is still reading from stream.\n> \n> I failed to understand why random access while reading from\n> stream is bad idea. Could you elaborate why?\n\nIt seems to me the word \"streaming\" suggests that WAL record should be\nread sequentially. Random access, which means reading from arbitrary\nlocation, breaks a stream. (But the patch doesn't try to stop wal\nsender if randAccess.)\n\n> Isn't it sufficient to set currentSource to 0 when disabling\n> StandbyMode?\n\nI thought that and it should work, but I hesitated to manipulate on\ncurrentSource in StartupXLOG. currentSource is basically a private\nstate of WaitForWALToBecomeAvailable. 
ReadRecord modifies it but I\nthink it's not good to modify it out of the the logic in\nWaitForWALToBecomeAvailable. Come to think of that I got to think the\nfollowing part in ReadRecord should use randAccess instead..\n\nxlog.c:4384\n> /*\n- * Before we retry, reset lastSourceFailed and currentSource\n- * so that we will check the archive next.\n+ * Streaming has broken, we retry from the same LSN.\n> */\n> lastSourceFailed = false;\n- currentSource = 0;\n+ private->randAccess = true;\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 27 Feb 2020 15:23:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "\n\nOn 2020/02/27 15:23, Kyotaro Horiguchi wrote:\n> At Thu, 27 Feb 2020 14:40:55 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2020/02/27 12:48, Kyotaro Horiguchi wrote:\n>>> Hello.\n>>> We found that targetted promotion can cause an assertion failure. The\n>>> attached TAP test causes that.\n>>>\n>>>> TRAP: FailedAssertion(\"StandbyMode\", File: \"xlog.c\", Line: 12078)\n>>> After recovery target is reached, StartupXLOG turns off standby mode\n>>> then refetches the last record. If the last record starts from the\n>>> previous WAL segment, the assertion failure is triggered.\n>>\n>> Good catch!\n>>\n>>> The wrong point is that StartupXLOG does random access fetching while\n>>> WaitForWALToBecomeAvailable is thinking it is still in streaming. I\n>>> think if it is called with random access mode,\n>>> WaitForWALToBecomeAvailable should move to XLOG_FROM_ARCHIVE even\n>>> though it is thinking that it is still reading from stream.\n>>\n>> I failed to understand why random access while reading from\n>> stream is bad idea. Could you elaborate why?\n> \n> It seems to me the word \"streaming\" suggests that WAL record should be\n> read sequentially. 
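The suggestion upthread — resetting `currentSource` once standby mode is disabled, so a stale streaming state cannot survive into the random-access refetch — can be modeled in isolation. The sketch below is a simplified standalone model of that guard, reusing the enum values from xlog.c; it is not the actual server code:

```c
#include <assert.h>
#include <stdbool.h>

/* WAL sources, as in xlog.c (XLOG_FROM_ANY is the literal 0 used there) */
typedef enum XLogSource
{
    XLOG_FROM_ANY = 0,          /* request to check all sources */
    XLOG_FROM_ARCHIVE,          /* restored using restore_command */
    XLOG_FROM_PG_WAL,           /* existing file in pg_wal */
    XLOG_FROM_STREAM            /* streamed from primary */
} XLogSource;

/*
 * Once standby mode has been turned off (e.g. after the recovery target is
 * reached), a stale XLOG_FROM_STREAM must not survive: the next random-access
 * fetch would otherwise reach the streaming branch and trip
 * Assert(StandbyMode).  Reset to XLOG_FROM_ANY so the source is re-selected.
 */
static XLogSource
normalize_source(XLogSource current, bool standby_mode)
{
    if (!standby_mode && current == XLOG_FROM_STREAM)
        return XLOG_FROM_ANY;
    return current;
}
```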
Random access, which means reading from arbitrary\n> location, breaks a stream. (But the patch doesn't try to stop wal\n> sender if randAccess.)\n> \n>> Isn't it sufficient to set currentSource to 0 when disabling\n>> StandbyMode?\n> \n> I thought that and it should work, but I hesitated to manipulate on\n> currentSource in StartupXLOG. currentSource is basically a private\n> state of WaitForWALToBecomeAvailable. ReadRecord modifies it but I\n> think it's not good to modify it out of the the logic in\n> WaitForWALToBecomeAvailable.\n\nIf so, what about adding the following at the top of\nWaitForWALToBecomeAvailable()?\n\n if (!StandbyMode && currentSource == XLOG_FROM_STREAM)\n currentSource = 0;\n\n> Come to think of that I got to think the\n> following part in ReadRecord should use randAccess instead..\n> \n> xlog.c:4384\n>> /*\n> - * Before we retry, reset lastSourceFailed and currentSource\n> - * so that we will check the archive next.\n> + * Streaming has broken, we retry from the same LSN.\n>> */\n>> lastSourceFailed = false;\n> - currentSource = 0;\n> + private->randAccess = true;\n\nSorry, I failed to understand why this change is necessary...\nAt least the comment that you added seems incorrect\nbecause WAL streaming should not have started yet when\nwe reach the above point.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 27 Feb 2020 16:23:44 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "Thank you for the comment.\nAt Thu, 27 Feb 2020 16:23:44 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> On 2020/02/27 15:23, Kyotaro Horiguchi wrote:\n> >> I failed to understand why random access while reading from\n> >> stream is bad idea. 
Could you elaborate why?\n> > It seems to me the word \"streaming\" suggests that WAL record should be\n> > read sequentially. Random access, which means reading from arbitrary\n> > location, breaks a stream. (But the patch doesn't try to stop wal\n> > sender if randAccess.)\n> > \n> >> Isn't it sufficient to set currentSource to 0 when disabling\n> >> StandbyMode?\n> > I thought that and it should work, but I hesitated to manipulate on\n> > currentSource in StartupXLOG. currentSource is basically a private\n> > state of WaitForWALToBecomeAvailable. ReadRecord modifies it but I\n> > think it's not good to modify it out of the the logic in\n> > WaitForWALToBecomeAvailable.\n> \n> If so, what about adding the following at the top of\n> WaitForWALToBecomeAvailable()?\n> \n> if (!StandbyMode && currentSource == XLOG_FROM_STREAM)\n> currentSource = 0;\n\nIt works virtually the same way. I'm happy to do that if you don't\nagree to using randAccess. But I'd rather do that in the 'if\n(!InArchiveRecovery)' section.\n\n> > Come to think of that I got to think the\n> > following part in ReadRecord should use randAccess instead..\n> > xlog.c:4384\n> >> /*\n> > - * Before we retry, reset lastSourceFailed and currentSource\n> > - * so that we will check the archive next.\n> > + * Streaming has broken, we retry from the same LSN.\n> >> */\n> >> lastSourceFailed = false;\n> > - currentSource = 0;\n> > + private->randAccess = true;\n> \n> Sorry, I failed to understand why this change is necessary...\n\nIt's not necessary, just for being tidy about the responsibility on\ncurrentSource.\n\n> At least the comment that you added seems incorrect\n> because WAL streaming should not have started yet when\n> we reach the above point.\n\nOops, right.\n\n- * Streaming has broken, we retry from the same LSN.\n+ * Restart recovery from the current LSN.\n\nFor clarity, I don't insist on the change at all. If it were\nnecessary, it's another topic, anyway. 
Please forget it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 27 Feb 2020 17:05:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "\n\nOn 2020/02/27 17:05, Kyotaro Horiguchi wrote:\n> Thank you for the comment.\n> At Thu, 27 Feb 2020 16:23:44 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> On 2020/02/27 15:23, Kyotaro Horiguchi wrote:\n>>>> I failed to understand why random access while reading from\n>>>> stream is bad idea. Could you elaborate why?\n>>> It seems to me the word \"streaming\" suggests that WAL record should be\n>>> read sequentially. Random access, which means reading from arbitrary\n>>> location, breaks a stream. (But the patch doesn't try to stop wal\n>>> sender if randAccess.)\n>>>\n>>>> Isn't it sufficient to set currentSource to 0 when disabling\n>>>> StandbyMode?\n>>> I thought that and it should work, but I hesitated to manipulate on\n>>> currentSource in StartupXLOG. currentSource is basically a private\n>>> state of WaitForWALToBecomeAvailable. ReadRecord modifies it but I\n>>> think it's not good to modify it out of the the logic in\n>>> WaitForWALToBecomeAvailable.\n>>\n>> If so, what about adding the following at the top of\n>> WaitForWALToBecomeAvailable()?\n>>\n>> if (!StandbyMode && currentSource == XLOG_FROM_STREAM)\n>> currentSource = 0;\n> \n> It works virtually the same way. I'm happy to do that if you don't\n> agree to using randAccess. But I'd rather do that in the 'if\n> (!InArchiveRecovery)' section.\n\nThe approach using randAccess seems unsafe. 
Please imagine\nthe case where currentSource is changed to XLOG_FROM_ARCHIVE\nbecause randAccess is true, while walreceiver is still running.\nFor example, this case can occur when the record at REDO\nstarting point is fetched with randAccess = true after walreceiver\nis invoked to fetch the last checkpoint record. The situation\n\"currentSource != XLOG_FROM_STREAM while walreceiver is\n running\" seems invalid. No?\n\nSo I think that the approach that I proposed is better.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 27 Feb 2020 20:04:41 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "At Thu, 27 Feb 2020 20:04:41 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/02/27 17:05, Kyotaro Horiguchi wrote:\n> > Thank you for the comment.\n> > At Thu, 27 Feb 2020 16:23:44 +0900, Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote in\n> >> On 2020/02/27 15:23, Kyotaro Horiguchi wrote:\n> >>>> I failed to understand why random access while reading from\n> >>>> stream is bad idea. Could you elaborate why?\n> >>> It seems to me the word \"streaming\" suggests that WAL record should be\n> >>> read sequentially. Random access, which means reading from arbitrary\n> >>> location, breaks a stream. (But the patch doesn't try to stop wal\n> >>> sender if randAccess.)\n> >>>\n> >>>> Isn't it sufficient to set currentSource to 0 when disabling\n> >>>> StandbyMode?\n> >>> I thought that and it should work, but I hesitated to manipulate on\n> >>> currentSource in StartupXLOG. currentSource is basically a private\n> >>> state of WaitForWALToBecomeAvailable. 
ReadRecord modifies it but I\n> >>> think it's not good to modify it out of the the logic in\n> >>> WaitForWALToBecomeAvailable.\n> >>\n> >> If so, what about adding the following at the top of\n> >> WaitForWALToBecomeAvailable()?\n> >>\n> >> if (!StandbyMode && currentSource == XLOG_FROM_STREAM)\n> >> currentSource = 0;\n> > It works virtually the same way. I'm happy to do that if you don't\n> > agree to using randAccess. But I'd rather do that in the 'if\n> > (!InArchiveRecovery)' section.\n> \n> The approach using randAccess seems unsafe. Please imagine\n> the case where currentSource is changed to XLOG_FROM_ARCHIVE\n> because randAccess is true, while walreceiver is still running.\n> For example, this case can occur when the record at REDO\n> starting point is fetched with randAccess = true after walreceiver\n> is invoked to fetch the last checkpoint record. The situation\n> \"currentSource != XLOG_FROM_STREAM while walreceiver is\n> running\" seems invalid. No?\n\nWhen I mentioned an possibility of changing ReadRecord so that it\nmodifies randAccess instead of currentSource, I thought that\nWaitForWALToBecomeAvailable should shutdown wal receiver as\nneeded.\n\nAt Thu, 27 Feb 2020 15:23:07 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \nme> location, breaks a stream. (But the patch doesn't try to stop wal\nme> sender if randAccess.)\n\nAnd random access during StandbyMode ususally (always?) lets RecPtr go\nback. I'm not sure WaitForWALToBecomeAvailable works correctly if we\ndon't have a file in pg_wal and the REDO point is far back by more\nthan a segment from the initial checkpoint record. 
(It seems to cause\nassertion failure, but I haven't checked that.)\n\nIf we go back to XLOG_FROM_ARCHIVE by random access, it correctly\nre-connects to the primary for the past segment.\n\n> So I think that the approach that I proposed is better.\n\nIt depends on how far we assume RecPtr go back.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 Feb 2020 12:13:18 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "\n\nOn 2020/02/28 12:13, Kyotaro Horiguchi wrote:\n> At Thu, 27 Feb 2020 20:04:41 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2020/02/27 17:05, Kyotaro Horiguchi wrote:\n>>> Thank you for the comment.\n>>> At Thu, 27 Feb 2020 16:23:44 +0900, Fujii Masao\n>>> <masao.fujii@oss.nttdata.com> wrote in\n>>>> On 2020/02/27 15:23, Kyotaro Horiguchi wrote:\n>>>>>> I failed to understand why random access while reading from\n>>>>>> stream is bad idea. Could you elaborate why?\n>>>>> It seems to me the word \"streaming\" suggests that WAL record should be\n>>>>> read sequentially. Random access, which means reading from arbitrary\n>>>>> location, breaks a stream. (But the patch doesn't try to stop wal\n>>>>> sender if randAccess.)\n>>>>>\n>>>>>> Isn't it sufficient to set currentSource to 0 when disabling\n>>>>>> StandbyMode?\n>>>>> I thought that and it should work, but I hesitated to manipulate on\n>>>>> currentSource in StartupXLOG. currentSource is basically a private\n>>>>> state of WaitForWALToBecomeAvailable. ReadRecord modifies it but I\n>>>>> think it's not good to modify it out of the the logic in\n>>>>> WaitForWALToBecomeAvailable.\n>>>>\n>>>> If so, what about adding the following at the top of\n>>>> WaitForWALToBecomeAvailable()?\n>>>>\n>>>> if (!StandbyMode && currentSource == XLOG_FROM_STREAM)\n>>>> currentSource = 0;\n>>> It works virtually the same way. 
I'm happy to do that if you don't\n>>> agree to using randAccess. But I'd rather do that in the 'if\n>>> (!InArchiveRecovery)' section.\n>>\n>> The approach using randAccess seems unsafe. Please imagine\n>> the case where currentSource is changed to XLOG_FROM_ARCHIVE\n>> because randAccess is true, while walreceiver is still running.\n>> For example, this case can occur when the record at REDO\n>> starting point is fetched with randAccess = true after walreceiver\n>> is invoked to fetch the last checkpoint record. The situation\n>> \"currentSource != XLOG_FROM_STREAM while walreceiver is\n>> running\" seems invalid. No?\n> \n> When I mentioned an possibility of changing ReadRecord so that it\n> modifies randAccess instead of currentSource, I thought that\n> WaitForWALToBecomeAvailable should shutdown wal receiver as\n> needed.\n> \n> At Thu, 27 Feb 2020 15:23:07 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> me> location, breaks a stream. (But the patch doesn't try to stop wal\n> me> sender if randAccess.)\n\nSorry, I failed to notice that.\n\n> And random access during StandbyMode ususally (always?) lets RecPtr go\n> back. I'm not sure WaitForWALToBecomeAvailable works correctly if we\n> don't have a file in pg_wal and the REDO point is far back by more\n> than a segment from the initial checkpoint record.\n\nIt works correctly. This is why WaitForWALToBecomeAvailable() uses\nfetching_ckpt argument.\n\n> If we go back to XLOG_FROM_ARCHIVE by random access, it correctly\n> re-connects to the primary for the past segment.\n\nBut this can lead to unnecessary restart of walreceiver. 
Since\nfetching_ckpt ensures that the WAL file containing the REDO\nstarting record exists in pg_wal, there is no need to reconnect\nto the primary when reading the REDO starting record.\n\nIs there other case where we need to go back to XLOG_FROM_ARCHIVE\nby random access?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Mon, 2 Mar 2020 20:54:04 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "At Mon, 2 Mar 2020 20:54:04 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > And random access during StandbyMode ususally (always?) lets RecPtr go\n> > back. I'm not sure WaitForWALToBecomeAvailable works correctly if we\n> > don't have a file in pg_wal and the REDO point is far back by more\n> > than a segment from the initial checkpoint record.\n> \n> It works correctly. This is why WaitForWALToBecomeAvailable() uses\n> fetching_ckpt argument.\n\nHmm. You're right. We start streaming from RedoStartLSN when\nfetching_ckpt. So that doesn't happen.\n\n> > If we go back to XLOG_FROM_ARCHIVE by random access, it correctly\n> > re-connects to the primary for the past segment.\n> \n> But this can lead to unnecessary restart of walreceiver. Since\n> fetching_ckpt ensures that the WAL file containing the REDO\n> starting record exists in pg_wal, there is no need to reconnect\n> to the primary when reading the REDO starting record.\n> \n> Is there other case where we need to go back to XLOG_FROM_ARCHIVE\n> by random access?\n\nI understand that the reconnection for REDO record is useless. 
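The role of `fetching_ckpt` described above reduces to one decision: where streaming starts. A minimal model of just that choice (the real logic in WaitForWALToBecomeAvailable is considerably more involved; this only captures the start-point selection):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

/*
 * When fetching the initial checkpoint record, streaming is requested from
 * RedoStartLSN rather than the record's own LSN, so that by the time redo
 * begins, the segment holding the REDO starting record is already present
 * in pg_wal and no reconnection to the primary is needed to read it.
 */
static XLogRecPtr
stream_start_point(XLogRecPtr rec_ptr, XLogRecPtr redo_start_lsn,
                   bool fetching_ckpt)
{
    return fetching_ckpt ? redo_start_lsn : rec_ptr;
}
```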
Ok I\ntake the !StandbyMode way.\n\nThe attached is the test script that is changed to count the added\ntest, and the slight revised main patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 05 Mar 2020 12:08:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "\n\nOn 2020/03/05 12:08, Kyotaro Horiguchi wrote:\n> At Mon, 2 Mar 2020 20:54:04 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>> And random access during StandbyMode ususally (always?) lets RecPtr go\n>>> back. I'm not sure WaitForWALToBecomeAvailable works correctly if we\n>>> don't have a file in pg_wal and the REDO point is far back by more\n>>> than a segment from the initial checkpoint record.\n>>\n>> It works correctly. This is why WaitForWALToBecomeAvailable() uses\n>> fetching_ckpt argument.\n> \n> Hmm. You're right. We start streaming from RedoStartLSN when\n> fetching_ckpt. So that doesn't happen.\n> \n>>> If we go back to XLOG_FROM_ARCHIVE by random access, it correctly\n>>> re-connects to the primary for the past segment.\n>>\n>> But this can lead to unnecessary restart of walreceiver. Since\n>> fetching_ckpt ensures that the WAL file containing the REDO\n>> starting record exists in pg_wal, there is no need to reconnect\n>> to the primary when reading the REDO starting record.\n>>\n>> Is there other case where we need to go back to XLOG_FROM_ARCHIVE\n>> by random access?\n> \n> I understand that the reconnection for REDO record is useless. 
Ok I\n> take the !StandbyMode way.\n> \n> The attached is the test script that is changed to count the added\n> test, and the slight revised main patch.\n\nThanks for the patch!\n\n+\t\t/* Wal receiver should not active when entring XLOG_FROM_ARCHIVE */\n+\t\tAssert(!WalRcvStreaming());\n\n+1 to add this assertion check.\n\nIsn't it better to always check this while trying to read WAL from\narchive or pg_wal? So, what about the following change?\n\n {\n case XLOG_FROM_ARCHIVE:\n case XLOG_FROM_PG_WAL:\n+ /*\n+ * WAL receiver should not be running while trying to\n+ * read WAL from archive or pg_wal.\n+ */\n+ Assert(!WalRcvStreaming());\n+\n /* Close any old file we might have open. */\n if (readFile >= 0)\n\n\n+\t\tlastSourceFailed = false; /* We haven't failed on the new source */\n\nIs this really necessary? Since ReadRecord() always reset\nlastSourceFailed to false, it seems not necessary.\n\n\n-\telse if (currentSource == 0)\n+\telse if (currentSource == 0 ||\n\nThough this is a *separate topic*, 0 should be XLOG_FROM_ANY?\nThere are some places where 0 is used as the value of currentSource.\nIMO they should be updated so that XLOG_FROM_ANY is used instead of 0.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 5 Mar 2020 19:51:11 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "At Thu, 5 Mar 2020 19:51:11 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/03/05 12:08, Kyotaro Horiguchi wrote:\n> > I understand that the reconnection for REDO record is useless. 
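The `Assert(!WalRcvStreaming())` hunk being discussed encodes an invariant of the source-selection state machine: while reading from the archive or from pg_wal, no walreceiver may be streaming. Stated as a standalone predicate (a model for illustration, not the server code):

```c
#include <assert.h>
#include <stdbool.h>

typedef enum XLogSource
{
    XLOG_FROM_ANY = 0,
    XLOG_FROM_ARCHIVE,
    XLOG_FROM_PG_WAL,
    XLOG_FROM_STREAM
} XLogSource;

/*
 * Invariant behind the assertion: a running walreceiver is only legal while
 * the current source is XLOG_FROM_STREAM.  Reading from the archive or
 * pg_wal with a receiver still streaming indicates a state-machine bug.
 */
static bool
source_state_valid(XLogSource source, bool walrcv_streaming)
{
    if (source == XLOG_FROM_ARCHIVE || source == XLOG_FROM_PG_WAL)
        return !walrcv_streaming;
    return true;
}
```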
Ok I\n> > take the !StandbyMode way.\n> > The attached is the test script that is changed to count the added\n> > test, and the slight revised main patch.\n> \n> Thanks for the patch!\n> \n> + /* Wal receiver should not active when entring XLOG_FROM_ARCHIVE */\n> +\t\tAssert(!WalRcvStreaming());\n> \n> +1 to add this assertion check.\n> \n> Isn't it better to always check this while trying to read WAL from\n> archive or pg_wal? So, what about the following change?\n> \n> {\n> case XLOG_FROM_ARCHIVE:\n> case XLOG_FROM_PG_WAL:\n> + /*\n> + * WAL receiver should not be running while trying to\n> + * read WAL from archive or pg_wal.\n> + */\n> + Assert(!WalRcvStreaming());\n> +\n> /* Close any old file we might have open. */\n> if (readFile >= 0)\n\n(It seems retroverting to the first patch when I started this...)\nThe second place covers wider cases so I reverted the first place.\n\n> + lastSourceFailed = false; /* We haven't failed on the new source */\n> \n> Is this really necessary? Since ReadRecord() always reset\n> lastSourceFailed to false, it seems not necessary.\n\nIt's just to make sure. Actually lastSourceFailed is always false\nwhen we get there. But when the source is switched, lastSourceFailed\nshould be changed to false as a matter of design. I'd like to do that\nunless that harms.\n\n> -\telse if (currentSource == 0)\n> +\telse if (currentSource == 0 ||\n> \n> Though this is a *separate topic*, 0 should be XLOG_FROM_ANY?\n> There are some places where 0 is used as the value of currentSource.\n> IMO they should be updated so that XLOG_FROM_ANY is used instead of 0.\n\nYeah, I've thought that many times but have neglected since it is not\ncritical and trivial as a separate patch. I'd take the chance to do\nthat now. Another minor glitch is \"int oldSource = currentSource;\" it\nis not debugger-friendly so I changed it to XLogSource. 
It is added\nas a new patch file before the main patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 06 Mar 2020 10:29:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "On 2020/03/06 10:29, Kyotaro Horiguchi wrote:\n> At Thu, 5 Mar 2020 19:51:11 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2020/03/05 12:08, Kyotaro Horiguchi wrote:\n>>> I understand that the reconnection for REDO record is useless. Ok I\n>>> take the !StandbyMode way.\n>>> The attached is the test script that is changed to count the added\n>>> test, and the slight revised main patch.\n>>\n>> Thanks for the patch!\n>>\n>> + /* Wal receiver should not active when entring XLOG_FROM_ARCHIVE */\n>> +\t\tAssert(!WalRcvStreaming());\n>>\n>> +1 to add this assertion check.\n>>\n>> Isn't it better to always check this while trying to read WAL from\n>> archive or pg_wal? So, what about the following change?\n>>\n>> {\n>> case XLOG_FROM_ARCHIVE:\n>> case XLOG_FROM_PG_WAL:\n>> + /*\n>> + * WAL receiver should not be running while trying to\n>> + * read WAL from archive or pg_wal.\n>> + */\n>> + Assert(!WalRcvStreaming());\n>> +\n>> /* Close any old file we might have open. */\n>> if (readFile >= 0)\n> \n> (It seems retroverting to the first patch when I started this...)\n> The second place covers wider cases so I reverted the first place.\n\nThanks for updating the patch that way.\nNot sure which patch you're mentioning, though.\n\nRegarding 0003 patch, I added a bit more detail comments into\nthe patch so that we can understand the code more easily.\nUpdated version of 0003 patch attached. Barring any objection,\nat first, I plan to commit this patch.\n\n>> + lastSourceFailed = false; /* We haven't failed on the new source */\n>>\n>> Is this really necessary? 
Since ReadRecord() always reset\n>> lastSourceFailed to false, it seems not necessary.\n> \n> It's just to make sure. Actually lastSourceFailed is always false\n> when we get there. But when the source is switched, lastSourceFailed\n> should be changed to false as a matter of design. I'd like to do that\n> unless that harms.\n\nOK.\n \n>> -\telse if (currentSource == 0)\n>> +\telse if (currentSource == 0 ||\n>>\n>> Though this is a *separate topic*, 0 should be XLOG_FROM_ANY?\n>> There are some places where 0 is used as the value of currentSource.\n>> IMO they should be updated so that XLOG_FROM_ANY is used instead of 0.\n> \n> Yeah, I've thought that many times but have neglected since it is not\n> critical and trivial as a separate patch. I'd take the chance to do\n> that now. Another minor glitch is \"int oldSource = currentSource;\" it\n> is not debugger-friendly so I changed it to XLogSource. It is added\n> as a new patch file before the main patch.\n\nThere seems to be more other places where XLogSource and\nXLOG_FROM_XXX are not used yet. 
For example, the initial values\nof readSource and XLogReceiptSource, the type of argument\n\"source\" in XLogFileReadAnyTLI() and XLogFileRead(), etc.\nThese also should be updated?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters", "msg_date": "Sat, 7 Mar 2020 01:46:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "At Sat, 7 Mar 2020 01:46:16 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > (It seems retroverting to the first patch when I started this...)\n> > The second place covers wider cases so I reverted the first place.\n> \n> Thanks for updating the patch that way.\n> Not sure which patch you're mentioning, though.\n\nThat meant 0003.\n\n> Regarding 0003 patch, I added a bit more detail comments into\n> the patch so that we can understand the code more easily.\n> Updated version of 0003 patch attached. Barring any objection,\n> at first, I plan to commit this patch.\n\nLooks good to me. Thanks for writing the detailed comments.\n\n> >> + lastSourceFailed = false; /* We haven't failed on the new source */\n> >>\n> >> Is this really necessary? Since ReadRecord() always reset\n> >> lastSourceFailed to false, it seems not necessary.\n> > It's just to make sure. Actually lastSourceFailed is always false\n> > when we get there. But when the source is switched, lastSourceFailed\n> > should be changed to false as a matter of design. 
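The design point argued here — clearing `lastSourceFailed` whenever the source changes, "as a matter of design" — can be captured by pairing the two updates in one operation, so the invariant is hard to miss. A simplified sketch, not the actual xlog.c code:

```c
#include <assert.h>
#include <stdbool.h>

typedef enum XLogSource
{
    XLOG_FROM_ANY = 0,
    XLOG_FROM_ARCHIVE,
    XLOG_FROM_PG_WAL,
    XLOG_FROM_STREAM
} XLogSource;

typedef struct FetchState
{
    XLogSource  current_source;
    bool        last_source_failed;
} FetchState;

/*
 * Switching sources and resetting the failure flag belong together:
 * whatever failed was the old source, and we haven't failed on the new
 * one yet.
 */
static void
switch_source(FetchState *state, XLogSource new_source)
{
    state->current_source = new_source;
    state->last_source_failed = false;
}
```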
I'd like to do that\n> > unless that harms.\n> \n> OK.\n\nThanks.\n\n> >> -\telse if (currentSource == 0)\n> >> +\telse if (currentSource == 0 ||\n> >>\n> >> Though this is a *separate topic*, 0 should be XLOG_FROM_ANY?\n> >> There are some places where 0 is used as the value of currentSource.\n> >> IMO they should be updated so that XLOG_FROM_ANY is used instead of 0.\n> > Yeah, I've thought that many times but have neglected since it is not\n> > critical and trivial as a separate patch. I'd take the chance to do\n> > that now. Another minor glitch is \"int oldSource = currentSource;\" it\n> > is not debugger-friendly so I changed it to XLogSource. It is added\n> > as a new patch file before the main patch.\n> \n> There seems to be more other places where XLogSource and\n> XLOG_FROM_XXX are not used yet. For example, the initial values\n> of readSource and XLogReceiptSource, the type of argument\n> \"source\" in XLogFileReadAnyTLI() and XLogFileRead(), etc.\n> These also should be updated?\n\nRight. I checked through the file and AFAICS that's all. 
The attachec\nv5-0001-Tidy...patch is the fix on top of the v4-0003 on the current\nmaster.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 09 Mar 2020 13:49:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "\n\nOn 2020/03/09 13:49, Kyotaro Horiguchi wrote:\n> At Sat, 7 Mar 2020 01:46:16 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>> (It seems retroverting to the first patch when I started this...)\n>>> The second place covers wider cases so I reverted the first place.\n>>\n>> Thanks for updating the patch that way.\n>> Not sure which patch you're mentioning, though.\n> \n> That meant 0003.\n> \n>> Regarding 0003 patch, I added a bit more detail comments into\n>> the patch so that we can understand the code more easily.\n>> Updated version of 0003 patch attached. Barring any objection,\n>> at first, I plan to commit this patch.\n> \n> Looks good to me. Thanks for writing the detailed comments.\n\nThanks for the review! Pushed.\n\nI will review other two patches later.\n\n>> There seems to be more other places where XLogSource and\n>> XLOG_FROM_XXX are not used yet. For example, the initial values\n>> of readSource and XLogReceiptSource, the type of argument\n>> \"source\" in XLogFileReadAnyTLI() and XLogFileRead(), etc.\n>> These also should be updated?\n> \n> Right. I checked through the file and AFAICS that's all. 
The attached\n> v5-0001-Tidy...patch is the fix on top of the v4-0003 on the current\n> master.\n\nThanks for the patch!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Mon, 9 Mar 2020 15:46:49 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "At Mon, 9 Mar 2020 15:46:49 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> On 2020/03/09 13:49, Kyotaro Horiguchi wrote:\n> > At Sat, 7 Mar 2020 01:46:16 +0900, Fujii Masao\n> >> Regarding 0003 patch, I added a bit more detailed comments into\n> >> the patch so that we can understand the code more easily.\n> >> Updated version of 0003 patch attached. Barring any objection,\n> >> at first, I plan to commit this patch.\n> > Looks good to me. Thanks for writing the detailed comments.\n> \n> Thanks for the review! Pushed.\n\nThanks for committing.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 09 Mar 2020 19:02:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "\n\nOn 2020/03/09 15:46, Fujii Masao wrote:\n> \n> \n> On 2020/03/09 13:49, Kyotaro Horiguchi wrote:\n>> At Sat, 7 Mar 2020 01:46:16 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>>> (It seems reverting to the first patch when I started this...)\n>>>> The second place covers wider cases so I reverted the first place.\n>>>\n>>> Thanks for updating the patch that way.\n>>> Not sure which patch you're mentioning, though.\n>>\n>> That meant 0003.\n>>\n>>> Regarding 0003 patch, I added a bit more detailed comments into\n>>> the patch so that we can understand the code more easily.\n>>> Updated version of 0003 patch attached. 
Barring any objection,\n>>> at first, I plan to commit this patch.\n>>\n>> Looks good to me. Thanks for writing the detailed comments.\n> \n> Thanks for the review! Pushed.\n> \n> I will review the other two patches later.\n\nPushed the v5-0001-Tidy-up-XLogSource-usage.patch!\n\nRegarding the remaining patch adding the regression test,\n\n+$result =\n+ $node_standby->safe_psql('postgres', \"SELECT pg_last_wal_replay_lsn()\");\n+my ($seg, $off) = split('/', $result);\n+my $target = sprintf(\"$seg/%08X\", (hex($off) / $segsize + 1) * $segsize);\n\nWhat happens if the \"off\" part reaches the upper limit and the \"seg\" part needs\nto be incremented? What happens if pg_last_wal_replay_lsn() advances\nvery much (e.g., because of autovacuum) beyond the segment boundary\nuntil the standby restarts? Of course, these situations very rarely happen,\nbut I'd like to avoid adding such an unstable test if possible.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Tue, 10 Mar 2020 10:50:52 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "At Tue, 10 Mar 2020 10:50:52 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Pushed the v5-0001-Tidy-up-XLogSource-usage.patch!\n\nThanks!\n\n> Regarding the remaining patch adding the regression test,\n\nI didn't seriously intend it to be in the tree.\n\n> +$result =\n> + $node_standby->safe_psql('postgres', \"SELECT\n> pg_last_wal_replay_lsn()\");\n> +my ($seg, $off) = split('/', $result);\n> +my $target = sprintf(\"$seg/%08X\", (hex($off) / $segsize + 1) *\n> $segsize);\n> \n> What happens if the \"off\" part reaches the upper limit and the \"seg\" part needs\n> to be incremented? What happens if pg_last_wal_replay_lsn() advances\n> very much (e.g., because of autovacuum) beyond the segment boundary\n> until the standby restarts? 
Of course, these situations very rarely\n> happen,\n> but I'd like to avoid adding such an unstable test if possible.\n\nIn the first place the \"seg\" is \"fileno\". Honestly I don't think the\ntest doesn't reach to fileno boundary but I did in the attached. Since\nperl complains that over-32bit integer arithmetic is incompatible, the\ncalculation takes a bit of an odd shape to avoid over-32bit arithmetic.\n\nFor the second point, which seems more likely to happen, I added the\nVACUUM/pg_switch_wal() sequence then wait for the standby to catch up, before\ndoing the test.\n\nDoes it make sense?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 10 Mar 2020 14:59:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Crash by targetted recovery" }, { "msg_contents": "(Hmm. My sight must be as short as 2 word length..)\n\nAt Tue, 10 Mar 2020 14:59:00 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> In the first place the \"seg\" is \"fileno\". Honestly I don't think the\n> test doesn't reach to fileno boundary but I did in the attached. Since\n\nOf course it is a mistake of \"Honestly I don't think the test reaches\nto fileno boundary\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 10 Mar 2020 15:17:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Crash by targetted recovery" } ]
[ { "msg_contents": "When certain parameters are changed on a physical replication primary, \nthis is communicated to standbys using the XLOG_PARAMETER_CHANGE WAL \nrecord.  The standby then checks whether its own settings are at least \nas big as the ones on the primary.  If not, the standby shuts down with \na fatal error.\n\nThe correspondence of settings between primary and standby is required \nbecause those settings influence certain shared memory sizings that are \nrequired for processing WAL records that the primary might send.  For \nexample, if the primary sends a prepared transaction, the standby must \nhave had max_prepared_transactions set appropriately or it won't be able \nto process those WAL records.\n\nHowever, fatally shutting down the standby immediately upon receipt of \nthe parameter change record might be a bit of an overreaction.  The \nresources related to those settings are not required immediately at that \npoint, and might never be required if the activity on the primary does \nnot exhaust all those resources.  An extreme example is raising \nmax_prepared_transactions on the primary but never actually using \nprepared transactions.\n\nWhere this becomes a serious problem is if you have many standbys and \nyou do a failover.  If the newly promoted standby happens to have a \nhigher setting for one of the relevant parameters, all the other \nstandbys that have followed it then shut down immediately and won't be \nable to continue until you change all their settings.\n\nIf we didn't do the hard shutdown and we just let the standby roll on \nwith recovery, nothing bad would happen and it would eventually produce an \nappropriate error when those resources are required (e.g., \"maximum \nnumber of prepared transactions reached\").\n\nSo I think there are better ways to handle this.  It might be reasonable \nto provide options.  The attached patch doesn't do that but it would be \npretty easy. 
What the attached patch does is:\n\nUpon receipt of XLOG_PARAMETER_CHANGE, we still check the settings but \nonly issue a warning and set a global flag if there is a problem.  Then \nwhen we actually hit the resource issue and the flag was set, we issue \nanother warning message with relevant information.  Additionally, at \nthat point we pause recovery instead of shutting down, so a hot standby \nremains usable.  (That could certainly be configurable.)\n\nBtw., I think the current setup is slightly buggy.  The MaxBackends \nvalue that is used to size shared memory is computed as MaxConnections + \nautovacuum_max_workers + 1 + max_worker_processes + max_wal_senders, but \nwe don't track autovacuum_max_workers in WAL.\n\n(This patch was developed together with Simon Riggs.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 27 Feb 2020 09:23:46 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Improve handling of parameter differences in physical replication" }, { "msg_contents": "Hello\n\nThank you for working on this!\n\n> Where this becomes a serious problem is if you have many standbys and you do a failover.\n\n+1\nSeveral times my team would have liked to pause recovery instead of panicking after changing settings on the primary. (same thing for create_tablespace_directories replay errors too...)\n\nHave we documented anywhere (outside the code) that the standby shuts down immediately upon receipt of the parameter change? 
doc/src/sgml/high-availability.sgml says only about \"refuse to start\".\n\nregards, Sergei\n\n\n", "msg_date": "Thu, 27 Feb 2020 12:48:14 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "\n\nOn 2020/02/27 17:23, Peter Eisentraut wrote:\n> When certain parameters are changed on a physical replication primary,   this is communicated to standbys using the XLOG_PARAMETER_CHANGE WAL record.  The standby then checks whether its own settings are at least as big as the ones on the primary.  If not, the standby shuts down with a fatal error.\n> \n> The correspondence of settings between primary and standby is required because those settings influence certain shared memory sizings that are required for processing WAL records that the primary might send.  For example, if the primary sends a prepared transaction, the standby must have had max_prepared_transaction set appropriately or it won't be able to process those WAL records.\n> \n> However, fatally shutting down the standby immediately upon receipt of the parameter change record might be a bit of an overreaction.  The resources related to those settings are not required immediately at that point, and might never be required if the activity on the primary does not exhaust all those resources.  An extreme example is raising max_prepared_transactions on the primary but never actually using prepared transactions.\n> \n> Where this becomes a serious problem is if you have many standbys and you do a failover.  
If the newly promoted standby happens to have a higher setting for one of the relevant parameters, all the other standbys that have followed it then shut down immediately and won't be able to continue until you change all their settings.\n> \n> If we didn't do the hard shutdown and we just let the standby roll on with recovery, nothing bad will happen and it will eventually produce an appropriate error when those resources are required (e.g., \"maximum number of prepared transactions reached\").\n> \n> So I think there are better ways to handle this.  It might be reasonable to provide options.  The attached patch doesn't do that but it would be pretty easy.  What the attached patch does is:\n> \n> Upon receipt of XLOG_PARAMETER_CHANGE, we still check the settings but only issue a warning and set a global flag if there is a problem.  Then when we actually hit the resource issue and the flag was set, we issue another warning message with relevant information.  Additionally, at that point we pause recovery instead of shutting down, so a hot standby remains usable.  (That could certainly be configurable.)\n\n+1\n> Btw., I think the current setup is slightly buggy.  The MaxBackends value that is used to size shared memory is computed as MaxConnections + autovacuum_max_workers + 1 + max_worker_processes + max_wal_senders, but we don't track autovacuum_max_workers in WAL.\n\nMaybe this is because autovacuum doesn't work during recovery?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 27 Feb 2020 19:13:54 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "On 2020-02-27 11:13, Fujii Masao wrote:\n>> Btw., I think the current setup is slightly buggy.  
The MaxBackends value that is used to size shared memory is computed as MaxConnections + autovacuum_max_workers + 1 + max_worker_processes + max_wal_senders, but we don't track autovacuum_max_workers in WAL.\n> Maybe this is because autovacuum doesn't work during recovery?\n\nAutovacuum on the primary can use locks or xids, and so it's possible \nthat the standby when processing WAL encounters more of those than it \nhas locally allocated shared memory to handle.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 27 Feb 2020 14:37:24 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "On Thu, Feb 27, 2020 at 02:37:24PM +0100, Peter Eisentraut wrote:\n> On 2020-02-27 11:13, Fujii Masao wrote:\n>>> Btw., I think the current setup is slightly buggy.  The\n> MaxBackends value that is used to size shared memory is computed as\n> MaxConnections + autovacuum_max_workers + 1 + max_worker_processes +\n> max_wal_senders, but we don't track autovacuum_max_workers in WAL. 
\n>> Maybe this is because autovacuum doesn't work during recovery?\n> \n> Autovacuum on the primary can use locks or xids, and so it's possible that\n> the standby when processing WAL encounters more of those than it has locally\n> allocated shared memory to handle.\n\nPutting aside your patch because that sounds like a separate issue..\nDoesn't this mean that autovacuum_max_workers should be added to the\ncontrol file, that we need to record in WAL any updates done to it and\nthat CheckRequiredParameterValues() is wrong?\n--\nMichael", "msg_date": "Fri, 28 Feb 2020 16:45:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "On 2020-02-28 08:45, Michael Paquier wrote:\n> On Thu, Feb 27, 2020 at 02:37:24PM +0100, Peter Eisentraut wrote:\n>> On 2020-02-27 11:13, Fujii Masao wrote:\n>>>> Btw., I think the current setup is slightly buggy. The\n>> MaxBackends value that is used to size shared memory is computed as\n>> MaxConnections + autovacuum_max_workers + 1 + max_worker_processes +\n>> max_wal_senders, but we don't track autovacuum_max_workers in WAL.\n>>> Maybe this is because autovacuum doesn't work during recovery?\n>>\n>> Autovacuum on the primary can use locks or xids, and so it's possible that\n>> the standby when processing WAL encounters more of those than it has locally\n>> allocated shared memory to handle.\n> \n> Putting aside your patch because that sounds like a separate issue..\n> Doesn't this mean that autovacuum_max_workers should be added to the\n> control file, that we need to record in WAL any updates done to it and\n> that CheckRequiredParameterValues() is wrong?\n\nThat would be a direct fix, yes.\n\nPerhaps it might be better to track the combined MaxBackends instead, \nhowever.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & 
Services\n\n\n", "msg_date": "Fri, 28 Feb 2020 08:49:08 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "On Fri, Feb 28, 2020 at 08:49:08AM +0100, Peter Eisentraut wrote:\n> Perhaps it might be better to track the combined MaxBackends instead,\n> however.\n\nNot sure about that. I think that we should keep them separated, as\nthat's more useful for debugging and more verbose for error reporting.\n\n(Worth noting that max_prepared_xacts is separate because of its dummy\nPGPROC entries created by PREPARE TRANSACTION, so it cannot be\nincluded in the set).\n--\nMichael", "msg_date": "Fri, 28 Feb 2020 17:06:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "On 2020-Feb-27, Peter Eisentraut wrote:\n\n> So this patch relaxes this a bit. Upon receipt of\n> XLOG_PARAMETER_CHANGE, we still check the settings but only issue a\n> warning and set a global flag if there is a problem. Then when we\n> actually hit the resource issue and the flag was set, we issue another\n> warning message with relevant information. Additionally, at that\n> point we pause recovery, so a hot standby remains usable.\n\nHmm, so what is the actual end-user behavior? As I read the code, we\nfirst send the WARNING, then pause recovery until the user resumes\nreplication; at that point we raise the original error. Presumably, at\nthat point the startup process terminates and is relaunched, and replay\ncontinues normally. Is that it?\n\nI think if the startup process terminates because of the original error,\nafter it is unpaused, postmaster will get that as a signal to do a\ncrash-recovery cycle, closing all existing connections. 
Is that right?\nIf so, it would be worth improving that (possibly by adding a\nsigsetjmp() block) to avoid the disruption.\n\nThanks,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 28 Feb 2020 12:33:31 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "On Sat, 29 Feb 2020 at 06:39, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Feb-27, Peter Eisentraut wrote:\n>\n> > So this patch relaxes this a bit. Upon receipt of\n> > XLOG_PARAMETER_CHANGE, we still check the settings but only issue a\n> > warning and set a global flag if there is a problem. Then when we\n> > actually hit the resource issue and the flag was set, we issue another\n> > warning message with relevant information. Additionally, at that\n> > point we pause recovery, so a hot standby remains usable.\n>\n> Hmm, so what is the actual end-user behavior? As I read the code, we\n> first send the WARNING, then pause recovery until the user resumes\n> replication; at that point we raise the original error.\n\nI think after recovery is paused users would do better to restart the\nserver rather than resume recovery. I agree with this idea but I'm\nslightly concerned that users might not realize that recovery is\npaused until they look at that line in the server log or at\npg_stat_replication because the standby server is still functional. 
So\nI think we can periodically send a WARNING to inform users that we're\nstill waiting for a parameter change and restart.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 9 Mar 2020 17:11:56 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "On 2020-02-28 16:33, Alvaro Herrera wrote:\n> Hmm, so what is the actual end-user behavior? As I read the code, we\n> first send the WARNING, then pause recovery until the user resumes\n> replication; at that point we raise the original error. Presumably, at\n> that point the startup process terminates and is relaunched, and replay\n> continues normally. Is that it?\n\nNo, at that point you get the original, current behavior that the server \ninstance shuts down with a fatal error.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 9 Mar 2020 10:42:21 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "On 2020-03-09 09:11, Masahiko Sawada wrote:\n> I think after recovery is paused users will be better to restart the\n> server rather than resume the recovery. I agree with this idea but I'm\n> slightly concerned that users might not realize that recovery is\n> paused until they look at that line in server log or at\n> pg_stat_replication because the standby server is still functional. 
So\n> I think we can periodically send WARNING to inform user that we're\n> still waiting for parameter change and restart.\n\nI think that would be annoying, unless you create a system for \nconfiguring those periodic warnings.\n\nI imagine in a case like having set max_prepared_transactions but never \nactually using prepared transactions, people will just ignore the \nwarning until they have their next restart, so it could be months of \nperiodic warnings.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 9 Mar 2020 10:45:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "On Mon, 9 Mar 2020 at 18:45, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-03-09 09:11, Masahiko Sawada wrote:\n> > I think after recovery is paused users will be better to restart the\n> > server rather than resume the recovery. I agree with this idea but I'm\n> > slightly concerned that users might not realize that recovery is\n> > paused until they look at that line in server log or at\n> > pg_stat_replication because the standby server is still functional. So\n> > I think we can periodically send WARNING to inform user that we're\n> > still waiting for parameter change and restart.\n>\n> I think that would be annoying, unless you create a system for\n> configuring those periodic warnings.\n>\n> I imagine in a case like having set max_prepared_transactions but never\n> actually using prepared transactions, people will just ignore the\n> warning until they have their next restart, so it could be months of\n> periodic warnings.\n\nWell I meant to periodically send warning messages while waiting for\nparameter change, that is after exhausting resources and stopping\nrecovery. 
In this situation user need to notice that as soon as\npossible.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 9 Mar 2020 21:13:38 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "At Mon, 9 Mar 2020 21:13:38 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> On Mon, 9 Mar 2020 at 18:45, Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> > On 2020-03-09 09:11, Masahiko Sawada wrote:\n> > > I think after recovery is paused users will be better to restart the\n> > > server rather than resume the recovery. I agree with this idea but I'm\n> > > slightly concerned that users might not realize that recovery is\n> > > paused until they look at that line in server log or at\n> > > pg_stat_replication because the standby server is still functional. So\n> > > I think we can periodically send WARNING to inform user that we're\n> > > still waiting for parameter change and restart.\n> >\n> > I think that would be annoying, unless you create a system for\n> > configuring those periodic warnings.\n> >\n> > I imagine in a case like having set max_prepared_transactions but never\n> > actually using prepared transactions, people will just ignore the\n> > warning until they have their next restart, so it could be months of\n> > periodic warnings.\n> \n> Well I meant to periodically send warning messages while waiting for\n> parameter change, that is after exhausting resources and stopping\n> recovery. In this situation user need to notice that as soon as\n> possible.\n\nIf we lose connection, standby continues to complain about lost\nconnection every 5 seconds. 
This is a situation of that kind.\n\nBy the way, when I reduced max_connections only on the master and then took\nexclusive locks until the standby complained of lock exhaustion, I saw a\nWARNING about max_locks_per_transaction instead of\nmax_connections.\n\n\nWARNING: insufficient setting for parameter max_connections\nDETAIL: max_connections = 2 is a lower setting than on the master server (where its value was 3).\nHINT: Change parameters and restart the server, or there may be resource exhaustion errors sooner or later.\nCONTEXT: WAL redo at 0/60000A0 for XLOG/PARAMETER_CHANGE: max_connections=3 max_worker_processes=8 max_wal_senders=2 max_prepared_xacts=0 max_locks_per_xact=10 wal_level=replica wal_log_hints=off track_commit_timestamp=off\nWARNING: recovery paused because of insufficient setting of parameter max_locks_per_transaction (currently 10)\nDETAIL: The value must be at least as high as on the primary server.\nHINT: Recovery cannot continue unless the parameter is changed and the server restarted.\nCONTEXT: WAL redo at 0/6004A80 for Standb\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 10 Mar 2020 17:57:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve handling of parameter differences in physical\n replication" }, { "msg_contents": "On 2020-03-10 09:57, Kyotaro Horiguchi wrote:\n>> Well I meant to periodically send warning messages while waiting for\n>> parameter change, that is after exhausting resources and stopping\n>> recovery. In this situation user need to notice that as soon as\n>> possible.\n> \n> If we lose connection, standby continues to complain about lost\n> connection every 5 seconds. This is a situation of that kind.\n\nMy argument is that it's not really the same. If a standby is \ndisconnected for more than a few minutes, it's really not going to be a \ngood standby anymore after a while. 
In this case, however, having \ncertain parameter discrepancies is really harmless and you can run with \nit for a long time. I'm not strictly opposed to a periodic warning, but \nit's unclear to me how we would find a good interval.\n\n> By the way, when I reduced max_connection only on master then take\n> exclusive locks until standby complains on lock exchaustion, I see a\n> WARNING that is saying max_locks_per_transaction instead of\n> max_connection.\n> \n> \n> WARNING: insufficient setting for parameter max_connections\n> DETAIL: max_connections = 2 is a lower setting than on the master server (where its value was 3).\n> HINT: Change parameters and restart the server, or there may be resource exhaustion errors sooner or later.\n> CONTEXT: WAL redo at 0/60000A0 for XLOG/PARAMETER_CHANGE: max_connections=3 max_worker_processes=8 max_wal_senders=2 max_prepared_xacts=0 max_locks_per_xact=10 wal_level=replica wal_log_hints=off track_commit_timestamp=off\n> WARNING: recovery paused because of insufficient setting of parameter max_locks_per_transaction (currently 10)\n> DETAIL: The value must be at least as high as on the primary server.\n> HINT: Recovery cannot continue unless the parameter is changed and the server restarted.\n> CONTEXT: WAL redo at 0/6004A80 for Standb\n\nThis is all a web of half-truths. The lock tables are sized based on \nmax_locks_per_xact * (MaxBackends + max_prepared_xacts). So if you run \nout of lock space, we currently recommend (in the single-server case), \nthat you raise max_locks_per_xact, but you could also raise \nmax_prepared_xacts or something else. 
So this is now the opposite case \nwhere the lock table on the master was bigger because of max_connections.\n\nWe could make the advice less specific and just say, in essence, you \nneed to make some parameter changes; see earlier for some hints.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 10 Mar 2020 14:47:47 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "At Tue, 10 Mar 2020 14:47:47 +0100, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in \n> On 2020-03-10 09:57, Kyotaro Horiguchi wrote:\n> >> Well I meant to periodically send warning messages while waiting for\n> >> parameter change, that is after exhausting resources and stopping\n> >> recovery. In this situation user need to notice that as soon as\n> >> possible.\n> > If we lose connection, standby continues to complain about lost\n> > connection every 5 seconds. This is a situation of that kind.\n> \n> My argument is that it's not really the same. If a standby is\n> disconnected for more than a few minutes, it's really not going to be\n> a good standby anymore after a while. In this case, however, having\n> certain parameter discrepancies is really harmless and you can run\n> with it for a long time. I'm not strictly opposed to a periodic\n> warning, but it's unclear to me how we would find a good interval.\n\nI meant the behavior after streaming is paused. That situation leads\nto loss of WAL or running out of WAL storage on the master. 
Actually\n5 seconds as an interval would be too frequent, but, maybe, we need at\nleast one message per WAL segment?\n\n> > By the way, when I reduced max_connections only on the master and then took\n> > exclusive locks until the standby complained of lock exhaustion, I saw a\n> > WARNING about max_locks_per_transaction instead of\n> > max_connections.\n...\n> > WARNING: recovery paused because of insufficient setting of parameter\n> > max_locks_per_transaction (currently 10)\n> > DETAIL: The value must be at least as high as on the primary server.\n> > HINT: Recovery cannot continue unless the parameter is changed and the\n> > server restarted.\n> > CONTEXT: WAL redo at 0/6004A80 for Standb\n> \n> This is all a web of half-truths. The lock tables are sized based on\n> max_locks_per_xact * (MaxBackends + max_prepared_xacts). So if you\n> run out of lock space, we currently recommend (in the single-server\n> case), that you raise max_locks_per_xact, but you could also raise\n> max_prepared_xacts or something else. So this is now the opposite\n> case where the lock table on the master was bigger because of\n> max_connections.\n\nYeah, I know. So, I'm not sure whether the checks on individual GUC\nvariables (other than wal_level) make sense. We might even not need\nthe WARNING on parameter change.\n\n> We could make the advice less specific and just say, in essence, you\n> need to make some parameter changes; see earlier for some hints.\n\nIn that sense the direction mentioned above seems sensible.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 11 Mar 2020 11:06:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve handling of parameter differences in physical\n replication" }, { "msg_contents": "Here is an updated patch that incorporates some of the suggestions. 
In \nparticular, some of the warning messages have been rephrased to be more \naccurate (but also less specific), the warning message at recovery pause \nrepeats every 1 minute, and the documentation has been updated.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 11 Mar 2020 20:34:34 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "On Thu, 12 Mar 2020 at 04:34, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> Here is an updated patch that incorporates some of the suggestions. In\n> particular, some of the warning messages have been rephrased to be more\n> accurate (but also less specific), the warning message at recovery pause\n> repeats every 1 minute, and the documentation has been updated.\n>\n\nThank you for updating the patch. 
I have one comment on the latest\nversion patch:\n\n+ do\n+ {\n+ TimestampTz now = GetCurrentTimestamp();\n+\n+ if (TimestampDifferenceExceeds(last_warning, now, 60000))\n+ {\n+ ereport(WARNING,\n+ (errmsg(\"recovery paused because of insufficient\nparameter settings\"),\n+ errdetail(\"See earlier in the log about which\nsettings are insufficient.\"),\n+ errhint(\"Recovery cannot continue unless the\nconfiguration is changed and the server restarted.\")));\n+ last_warning = now;\n+ }\n+\n+ pg_usleep(1000000L); /* 1000 ms */\n+ HandleStartupProcInterrupts();\n+ }\n\nI think we can set wait event WAIT_EVENT_RECOVERY_PAUSE here.\n\nThe others look good to me.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 26 Mar 2020 16:55:54 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "Hello\n\n> I think we can set wait event WAIT_EVENT_RECOVERY_PAUSE here.\n\n+1, since we added this in recoveryPausesHere.\n\nPS: do we need to add a prototype for the RecoveryRequiredIntParameter function in top of xlog.c?\n\nregards, Sergei\n\n\n", "msg_date": "Fri, 27 Mar 2020 22:15:50 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "On 2020-03-27 20:15, Sergei Kornilov wrote:\n>> I think we can set wait event WAIT_EVENT_RECOVERY_PAUSE here.\n> \n> +1, since we added this in recoveryPausesHere.\n\ncommitted with that addition\n\n> PS: do we need to add a prototype for the RecoveryRequiredIntParameter function in top of xlog.c?\n\nThere is no consistent style, I think, but I usually only add prototypes \nfor static functions if they are required because of the ordering in the \nfile.\n\n-- \nPeter 
Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 30 Mar 2020 10:09:48 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "Here is another stab at this subject.\n\nThis is a much simplified variant: When encountering a parameter change \nin the WAL that is higher than the standby's current setting, we log a \nwarning (instead of an error until now) and pause recovery. If you \nresume (unpause) recovery, the instance shuts down as before.\n\nThis allows you to keep your standbys running for a bit (depending on \nlag requirements) and schedule the required restart more deliberately.\n\nI had previously suggested making this new behavior configurable, but \nthere didn't seem to be much interest in that, so I have not included \nthat there.\n\nThe documentation changes are mostly carried over from previous patch \nversions (but adjusted for the actual behavior of the patch).\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 24 Jun 2020 10:00:00 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "Here is a minimally updated new patch version to resolve a merge conflict.\n\nOn 2020-06-24 10:00, Peter Eisentraut wrote:\n> Here is another stab at this subject.\n> \n> This is a much simplified variant: When encountering a parameter change\n> in the WAL that is higher than the standby's current setting, we log a\n> warning (instead of an error until now) and pause recovery. 
If you\n> resume (unpause) recovery, the instance shuts down as before.\n> \n> This allows you to keep your standbys running for a bit (depending on\n> lag requirements) and schedule the required restart more deliberately.\n> \n> I had previously suggested making this new behavior configurable, but\n> there didn't seem to be much interest in that, so I have not included\n> that there.\n> \n> The documentation changes are mostly carried over from previous patch\n> versions (but adjusted for the actual behavior of the patch).\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 15 Jul 2020 15:47:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "Hello\n\nThank you! I'm on vacation, so I was finally able to review the patch.\n\nSeems WAIT_EVENT_RECOVERY_PAUSE addition was lost during patch simplification.\n\n> \t\tereport(FATAL,\n>\t\t\t\t(errmsg(\"recovery aborted because of insufficient parameter settings\"),\n>\t\t\t\t errhint(\"You can restart the server after making the necessary configuration changes.\")));\n\nI think we should repeat here conflicted param_name and minValue. pg_wal_replay_resume can be called days after recovery being paused. The initial message can be difficult to find.\n\n> errmsg(\"recovery will be paused\")\n\nMay be use the same \"recovery has paused\" as in recoveryPausesHere? It doesn't seem to make any difference since we set pause right after that, but there will be a little less work translators.\n\nNot sure about \"If recovery is unpaused\". 
The word \"resumed\" seems to have been usually used in docs.\n\nregards, Sergei\n\n\n", "msg_date": "Thu, 19 Nov 2020 22:17:34 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "On 2020-11-19 20:17, Sergei Kornilov wrote:\n> Seems WAIT_EVENT_RECOVERY_PAUSE addition was lost during patch simplification.\n\nadded\n\n>> \t\tereport(FATAL,\n>> \t\t\t\t(errmsg(\"recovery aborted because of insufficient parameter settings\"),\n>> \t\t\t\t errhint(\"You can restart the server after making the necessary configuration changes.\")));\n> \n> I think we should repeat here conflicted param_name and minValue. pg_wal_replay_resume can be called days after recovery being paused. The initial message can be difficult to find.\n\ndone\n\n> \n>> errmsg(\"recovery will be paused\")\n> \n> May be use the same \"recovery has paused\" as in recoveryPausesHere? It doesn't seem to make any difference since we set pause right after that, but there will be a little less work translators.\n\ndone\n\n> Not sure about \"If recovery is unpaused\". The word \"resumed\" seems to have been usually used in docs.\n\nI think I like \"unpaused\" better here, because \"resumed\" would seem to \nimply that recovery can actually continue.\n\nOne thing that has not been added to my patch is the equivalent of \n496ee647ecd2917369ffcf1eaa0b2cdca07c8730, which allows promotion while \nrecovery is paused. 
I'm not sure that would be necessary, and it \ndoesn't look easy to add either.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/", "msg_date": "Fri, 20 Nov 2020 14:14:44 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "Hello\n\n> I think I like \"unpaused\" better here, because \"resumed\" would seem to\n> imply that recovery can actually continue.\n\nGood, I agree.\n\n> One thing that has not been added to my patch is the equivalent of\n> 496ee647ecd2917369ffcf1eaa0b2cdca07c8730, which allows promotion while\n> recovery is paused. I'm not sure that would be necessary, and it\n> doesn't look easy to add either.\n\nHmm... Good question. How about putting CheckForStandbyTrigger() in a wait loop, but reporting FATAL with an appropriate message, such as \"promotion is not possible because of insufficient parameter settings\"?\nAlso it suits me if we only document that we ignore promote here. I don't think this is an important case. 
And yes, it's not easy to allow promotion, since we have already updated control file.\n\nProbably we need pause only after we reached consistency?\n\n2020-11-20 18:10:23.617 MSK 19722 @ from [vxid: txid:0] [] LOG: entering standby mode\n2020-11-20 18:10:23.632 MSK 19722 @ from [vxid: txid:0] [] WARNING: hot standby is not possible because of insufficient parameter settings\n2020-11-20 18:10:23.632 MSK 19722 @ from [vxid: txid:0] [] DETAIL: max_connections = 100 is a lower setting than on the primary server, where its value was 150.\n2020-11-20 18:10:23.632 MSK 19722 @ from [vxid: txid:0] [] LOG: recovery has paused\n2020-11-20 18:10:23.632 MSK 19722 @ from [vxid: txid:0] [] DETAIL: If recovery is unpaused, the server will shut down.\n2020-11-20 18:10:23.632 MSK 19722 @ from [vxid: txid:0] [] HINT: You can then restart the server after making the necessary configuration changes.\n2020-11-20 18:13:09.767 MSK 19755 melkij@postgres from [local] [vxid: txid:0] [] FATAL: the database system is starting up\n\nregards, Sergei\n\n\n", "msg_date": "Fri, 20 Nov 2020 18:47:59 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "On 2020-11-20 16:47, Sergei Kornilov wrote:\n> Hmm... Good question. How about putting CheckForStandbyTrigger() in a wait loop, but reporting FATAL with an appropriate message, such as \"promotion is not possible because of insufficient parameter settings\"?\n> Also it suits me if we only document that we ignore promote here. I don't think this is an important case. And yes, it's not easy to allow promotion, since we have already updated control file.\n> \n> Probably we need pause only after we reached consistency?\n\nHere is an updated patch that implements both of these points. I have \nused hot standby active instead of reached consistency. 
I guess \narguments could be made either way, but the original use case really \ncared about hot standby.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/", "msg_date": "Mon, 30 Nov 2020 19:37:36 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nHello\r\nLooks good to me. I think the patch is ready for committer.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Fri, 15 Jan 2021 11:28:56 +0000", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: Improve handling of parameter differences in physical replication" }, { "msg_contents": "On 2021-01-15 12:28, Sergei Kornilov wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: tested, passed\n> \n> Hello\n> Looks good to me. I think the patch is ready for committer.\n> \n> The new status of this patch is: Ready for Committer\n\ncommitted\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Mon, 18 Jan 2021 09:33:20 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Improve handling of parameter differences in physical replication" } ]
[ { "msg_contents": "Hi\n\nThere is often a problem with taking the source of long SQL strings. The\npg_stat_activity field query is reduced to some short limit and is not too\npractical to increase this limit.\n\nI have an idea to use the stat_collector process for the solution of this problem.\nWhen the query string is reduced, then the full form of the query string can be\nsaved in the stat_collector process. Maybe with execution plan. Then any\nprocess can read this information from this process.\n\nWhat do you think about this idea? It can be a base for implementing\nEXPLAIN PID?\n\nNotes, comments?\n\nRegards\n\nPavel\n", "msg_date": "Thu, 27 Feb 2020 09:42:19 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Using stat collector for collecting long SQL" }, { "msg_contents": "Hi\n\n> There is often a problem with taking the source of long SQL strings. 
The\n> pg_stat_activity field query is reduced to some short limit and is not too\n> practical to increase this limit.\n\nI thought it was \"old story\", since track_activity_query_size can be\nincreased widely:\nhttps://www.postgresql.org/message-id/flat/7b5ecc5a9991045e2f13c84e3047541d%40postgrespro.ru\n\n[...]\n\n> It can be a base for implementing EXPLAIN PID?\n+1 for this concept, that would be very useful for diagnosing stuck\nqueries.\n\nUntil now the only proposal I saw regarding this was detailed in\nhttps://www.postgresql.org/message-id/1582756552256-0.post@n3.nabble.com\na prerequisite to be able to use extension postgrespro/pg_query_state\n\nRegards \nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Thu, 27 Feb 2020 06:12:58 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Using stat collector for collecting long SQL" }, { "msg_contents": "Hi,\n\nOn 2020-02-27 09:42:19 +0100, Pavel Stehule wrote:\n> There is often a problem with taking the source of long SQL strings. The\n> pg_stat_activity field query is reduced to some short limit and is not too\n> practical to increase this limit.\n> \n> I have an idea to use the stat_collector process for the solution of this problem.\n> When the query string is reduced, then the full form of the query string can be\n> saved in the stat_collector process. Maybe with execution plan. Then any\n> process can read this information from this process.\n\nHow? That sounds extremely expensive.\n\n\n> What do you think about this idea? It can be a base for implementing\n> EXPLAIN PID?\n\nI'm fairly strongly against adding any new dependencies on the stats\ncollector process. 
We're slowly working on getting rid of it (see the\nthread about making the stats system use dynamic shared memory).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 6 Mar 2020 09:32:55 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Using stat collector for collecting long SQL" } ]
[ { "msg_contents": "Hi, hackers!\n\nAttached patches implement several useful jsonpath syntax extensions.\nI already published them two years ago in the original SQL/JSON thread,\nbut then after creation of separate threads for SQL/JSON functions and\nJSON_TABLE I forgot about them.\n\nA brief description of the patches:\n\n1. Introduced new jsonpath modifier 'pg' which is used for enabling\nPostgreSQL-specific extensions. This feature was already proposed in the\ndiscussion of jsonpath's like_regex implementation.\n\n2. Added support for raw jbvObject and jbvArray JsonbValues inside jsonpath\nengine. Now, jsonpath can operate with JSON arrays and objects only in\njbvBinary form. But with introduction of array and object constructors in\npatches #4 and #5 raw in-memory jsonb containers can appear in jsonpath engine.\nIn some places we can iterate through jbvArrays, in others we need to encode\njbvArrays and jbvObjects into jbvBinay.\n\n3. SQL/JSON sequence construction syntax. A simple comma-separated list can be\nused to concatenate single values or sequences into a single resulting sequence.\n\n SELECT jsonb_path_query('[1, 2, 3]', 'pg $[*], 4, 2 + 3');\n jsonb_path_query\n ------------------\n 1\n 2\n 3\n 4\n 5\n\n SELECT jsonb_path_query('{ \"a\": [1, 2, 3], \"b\": [4, 5] }',\n 'pg ($.a[*], $.b[*]) ? (@ % 2 == 1)');\n jsonb_path_query\n ------------------\n 1\n 3\n 5\n\n\nPatches #4-#6 implement ECMAScript-like syntax constructors and accessors:\n\n4. Array construction syntax.\nThis can also be considered as enclosing a sequence constructor into brackets.\n \n SELECT jsonb_path_query('[1, 2, 3]', 'pg [$[*], 4, 2 + 3]');\n jsonb_path_query\n ------------------\n [1, 2, 3, 4, 5]\n\nHaving this feature, jsonb_path_query_array() becomes somewhat redundant.\n\n\n5. Object construction syntax. It is useful for constructing derived objects\nfrom the interesting parts of the original object. 
(But this is not sufficient\nto \"project\" each object in array, item method like '.map()' is needed here.)\n\n SELECT jsonb_path_query('{\"b\": 2}', 'pg { a : 1, b : $.b, \"x y\" : $.b + 3 }');\n jsonb_path_query\n -------------------------------\n { \"a\" : 1, \"b\": 3, \"x y\": 5 }\n\nFields with empty values are simply skipped regardless of lax/strict mode:\n\n SELECT jsonb_path_query('{\"a\": 1}', 'pg { b : $.b, a : $.a ? (@ > 1) }');\n jsonb_path_query\n ------------------\n {}\n\n\n6. Object subscription syntax. This gives us ability to specify what key to\nextract on runtime. The syntax is the same as ordinary array subscription\nsyntax.\n\n -- non-existent $.x is simply skipped in lax mode\n SELECT jsonb_path_query('{\"a\": \"b\", \"b\": \"c\"}', 'pg $[$.a, \"x\", \"a\"]');\n jsonb_path_query\n ------------------\n \"c\"\n \"b\"\n\n SELECT jsonb_path_query('{\"a\": \"b\", \"b\": \"c\"}', 'pg $[$fld]', '{\"fld\": \"b\"}');\n jsonb_path_query\n ------------------\n \"c\"\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 27 Feb 2020 18:57:46 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "jsonpath syntax extensions" }, { "msg_contents": "Hi Nikita,\n\nOn 2/27/20 10:57 AM, Nikita Glukhov wrote:\n> \n> Attached patches implement several useful jsonpath syntax extensions.\n> I already published them two years ago in the original SQL/JSON thread,\n> but then after creation of separate threads for SQL/JSON functions and\n> JSON_TABLE I forgot about them.\n\nAre these improvements targeted at PG13 or PG14? This seems to be a \npretty big change for the last CF of PG13. 
I know these have been \nsubmitted before but that was a few years ago so I think they count as new.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 4 Mar 2020 11:13:46 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: jsonpath syntax extensions" }, { "msg_contents": "On 04.03.2020 19:13, David Steele wrote:\n\n> Hi Nikita,\n>\n> On 2/27/20 10:57 AM, Nikita Glukhov wrote:\n>>\n>> Attached patches implement several useful jsonpath syntax extensions.\n>> I already published them two years ago in the original SQL/JSON thread,\n>> but then after creation of separate threads for SQL/JSON functions and\n>> JSON_TABLE I forgot about them.\n>\n> Are these improvements targeted at PG13 or PG14?  This seems to be a \n> pretty big change for the last CF of PG13.  I know these have been \n> submitted before but that was a few years ago so I think they count as \n> new.\n\nI believe that some of these improvements can get into PG13. 
There is no need\nto review all of them, we can choose only the simplest ones.\n\nMost of code changes in #3-#5 consist of straightforward boilerplate jsonpath\nI/O code, and only changes in jsonpath_exec.c are interesting.\n\nOnly the patch #1 is mandatory, patches #3-#6 depend on it.\n\nThe patch #2 is not necessary, if jbvArray and jbvObject values would be\nwrapped into jbvBinary by JsonbValueToJsonb() call in #4 and #5.\n\nPatch #4 is the simplest one (only 20 new lines of code in jsonpath_exec.c).\n\nPatch #6 is the most complex one, and it affects only jsonpath execution.\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\nOn 04.03.2020 19:13, David Steele wrote:\n\nHi\n Nikita,\n \n\n On 2/27/20 10:57 AM, Nikita Glukhov wrote:\n \n\n\n Attached patches implement several useful jsonpath syntax\n extensions.\n \n I already published them two years ago in the original SQL/JSON\n thread,\n \n but then after creation of separate threads for SQL/JSON\n functions and\n \n JSON_TABLE I forgot about them.\n \n\n\n Are these improvements targeted at PG13 or PG14?  This seems to be\n a pretty big change for the last CF of PG13.  I know these have\n been submitted before but that was a few years ago so I think they\n count as new.\n \n\n\nI believe that some of these improvements can get into PG13. 
There is no need\nto review all of them, we can choose only the simplest ones.\n\nMost of code changes in #3-#5 consist of straightforward boilerplate jsonpath\nI/O code, and only changes in jsonpath_exec.c are interesting.\n\nOnly the patch #1 is mandatory, patches #3-#6 depend on it.\n\nThe patch #2 is not necessary, if jbvArray and jbvObject values would be\nwrapped into jbvBinary by JsonbValueToJsonb() call in #4 and #5.\n\nPatch #4 is the simplest one (only 20 new lines of code in jsonpath_exec.c).\n\nPatch #6 is the most complex one, and it affects only jsonpath execution.\n\n\n-- \n Nikita Glukhov\n Postgres Professional: http://www.postgrespro.com\n The Russian Postgres Company", "msg_date": "Wed, 4 Mar 2020 23:18:52 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: jsonpath syntax extensions" }, { "msg_contents": "On 3/4/20 3:18 PM, Nikita Glukhov wrote:\n> On 04.03.2020 19:13, David Steele wrote:\n>> On 2/27/20 10:57 AM, Nikita Glukhov wrote:\n>>>\n>>> Attached patches implement several useful jsonpath syntax extensions.\n>>> I already published them two years ago in the original SQL/JSON thread,\n>>> but then after creation of separate threads for SQL/JSON functions and\n>>> JSON_TABLE I forgot about them.\n>>\n>> Are these improvements targeted at PG13 or PG14?  This seems to be a \n>> pretty big change for the last CF of PG13.  I know these have been \n>> submitted before but that was a few years ago so I think they count as \n>> new.\n> \n> I believe that some of these improvements can get into PG13. 
There is no need\n> to review all of them, we can choose only the simplest ones.\nAnother year has passed without any comment or review on this patch set.\n\nI'm not sure why the feature is not generating any interest, but you \nmight want to ask people who have been involved in JSON path before if \nthey are interested in reviewing.\n\nSince this is still essentially a new feature with no review before this \nCF I still don't think it is a good candidate for v14 but let's see if \nit gets some review.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 3 Mar 2021 09:44:09 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: jsonpath syntax extensions" }, { "msg_contents": "On 3/3/21 9:44 AM, David Steele wrote:\n> On 3/4/20 3:18 PM, Nikita Glukhov wrote:\n>> On 04.03.2020 19:13, David Steele wrote:\n>>> On 2/27/20 10:57 AM, Nikita Glukhov wrote:\n>>>>\n>>>> Attached patches implement several useful jsonpath syntax extensions.\n>>>> I already published them two years ago in the original SQL/JSON thread,\n>>>> but then after creation of separate threads for SQL/JSON functions and\n>>>> JSON_TABLE I forgot about them.\n>>>\n>>> Are these improvements targeted at PG13 or PG14?  This seems to be a \n>>> pretty big change for the last CF of PG13.  I know these have been \n>>> submitted before but that was a few years ago so I think they count \n>>> as new.\n>>\n>> I believe that some of these improvements can get into PG13.  
There is \n>> no need\n>> to review all of them, we can choose only the simplest ones.\n> Another year has passed without any comment or review on this patch set.\n> \n> I'm not sure why the feature is not generating any interest, but you \n> might want to ask people who have been involved in JSON path before if \n> they are interested in reviewing.\n> \n> Since this is still essentially a new feature with no review before this \n> CF I still don't think it is a good candidate for v14 but let's see if \n> it gets some review.\nTarget version updated to 15.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Mon, 15 Mar 2021 08:25:15 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: jsonpath syntax extensions" }, { "msg_contents": "This patch seems to be getting ignored. Like David I'm a bit puzzled\nbecause it doesn't seem like an especially obscure or difficult patch\nto review. Yet it's been multiple years without even a superficial\n\"does it meet the coding requirements\" review let alone a design\nreview.\n\nCan we get a volunteer to at least give it a quick once-over? I don't\nthink it's ideal to be doing this in the last CF but neither is it\nvery appetizing to just shift it to the next CF without a review after\ntwo years...\n\nOn Thu, 27 Feb 2020 at 10:58, Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n>\n> Hi, hackers!\n>\n> Attached patches implement several useful jsonpath syntax extensions.\n> I already published them two years ago in the original SQL/JSON thread,\n> but then after creation of separate threads for SQL/JSON functions and\n> JSON_TABLE I forgot about them.\n>\n> A brief description of the patches:\n>\n> 1. Introduced new jsonpath modifier 'pg' which is used for enabling\n> PostgreSQL-specific extensions. This feature was already proposed in the\n> discussion of jsonpath's like_regex implementation.\n>\n> 2. 
Added support for raw jbvObject and jbvArray JsonbValues inside jsonpath\n> engine. Now, jsonpath can operate with JSON arrays and objects only in\n> jbvBinary form. But with introduction of array and object constructors in\n> patches #4 and #5 raw in-memory jsonb containers can appear in jsonpath engine.\n> In some places we can iterate through jbvArrays, in others we need to encode\n> jbvArrays and jbvObjects into jbvBinay.\n>\n> 3. SQL/JSON sequence construction syntax. A simple comma-separated list can be\n> used to concatenate single values or sequences into a single resulting sequence.\n>\n> SELECT jsonb_path_query('[1, 2, 3]', 'pg $[*], 4, 2 + 3');\n> jsonb_path_query\n> ------------------\n> 1\n> 2\n> 3\n> 4\n> 5\n>\n> SELECT jsonb_path_query('{ \"a\": [1, 2, 3], \"b\": [4, 5] }',\n> 'pg ($.a[*], $.b[*]) ? (@ % 2 == 1)');\n> jsonb_path_query\n> ------------------\n> 1\n> 3\n> 5\n>\n>\n> Patches #4-#6 implement ECMAScript-like syntax constructors and accessors:\n>\n> 4. Array construction syntax.\n> This can also be considered as enclosing a sequence constructor into brackets.\n>\n> SELECT jsonb_path_query('[1, 2, 3]', 'pg [$[*], 4, 2 + 3]');\n> jsonb_path_query\n> ------------------\n> [1, 2, 3, 4, 5]\n>\n> Having this feature, jsonb_path_query_array() becomes somewhat redundant.\n>\n>\n> 5. Object construction syntax. It is useful for constructing derived objects\n> from the interesting parts of the original object. (But this is not sufficient\n> to \"project\" each object in array, item method like '.map()' is needed here.)\n>\n> SELECT jsonb_path_query('{\"b\": 2}', 'pg { a : 1, b : $.b, \"x y\" : $.b + 3 }');\n> jsonb_path_query\n> -------------------------------\n> { \"a\" : 1, \"b\": 3, \"x y\": 5 }\n>\n> Fields with empty values are simply skipped regardless of lax/strict mode:\n>\n> SELECT jsonb_path_query('{\"a\": 1}', 'pg { b : $.b, a : $.a ? (@ > 1) }');\n> jsonb_path_query\n> ------------------\n> {}\n>\n>\n> 6. Object subscription syntax. 
This gives us ability to specify what key to\n> extract on runtime. The syntax is the same as ordinary array subscription\n> syntax.\n>\n> -- non-existent $.x is simply skipped in lax mode\n> SELECT jsonb_path_query('{\"a\": \"b\", \"b\": \"c\"}', 'pg $[$.a, \"x\", \"a\"]');\n> jsonb_path_query\n> ------------------\n> \"c\"\n> \"b\"\n>\n> SELECT jsonb_path_query('{\"a\": \"b\", \"b\": \"c\"}', 'pg $[$fld]', '{\"fld\": \"b\"}');\n> jsonb_path_query\n> ------------------\n> \"c\"\n>\n> --\n> Nikita Glukhov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n\n\n\n-- \ngreg\n\n\n", "msg_date": "Mon, 21 Mar 2022 16:09:09 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: jsonpath syntax extensions" }, { "msg_contents": "Hm. Actually... These changes were split off from the JSON_TABLE\npatches? Are they still separate or have they been merged into those\nother patches since? I see the JSON_TABLE thread is getting more\ncomments do those reviews include these patches?\n\nOn Mon, 21 Mar 2022 at 16:09, Greg Stark <stark@mit.edu> wrote:\n>\n> This patch seems to be getting ignored. Like David I'm a bit puzzled\n> because it doesn't seem like an especially obscure or difficult patch\n> to review. Yet it's been multiple years without even a superficial\n> \"does it meet the coding requirements\" review let alone a design\n> review.\n>\n> Can we get a volunteer to at least give it a quick once-over? 
I don't\n> think it's ideal to be doing this in the last CF but neither is it\n> very appetizing to just shift it to the next CF without a review after\n> two years...\n>\n> On Thu, 27 Feb 2020 at 10:58, Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> >\n> > Hi, hackers!\n> >\n> > Attached patches implement several useful jsonpath syntax extensions.\n> > I already published them two years ago in the original SQL/JSON thread,\n> > but then after creation of separate threads for SQL/JSON functions and\n> > JSON_TABLE I forgot about them.\n> >\n> > A brief description of the patches:\n> >\n> > 1. Introduced new jsonpath modifier 'pg' which is used for enabling\n> > PostgreSQL-specific extensions. This feature was already proposed in the\n> > discussion of jsonpath's like_regex implementation.\n> >\n> > 2. Added support for raw jbvObject and jbvArray JsonbValues inside jsonpath\n> > engine. Now, jsonpath can operate with JSON arrays and objects only in\n> > jbvBinary form. But with introduction of array and object constructors in\n> > patches #4 and #5 raw in-memory jsonb containers can appear in jsonpath engine.\n> > In some places we can iterate through jbvArrays, in others we need to encode\n> > jbvArrays and jbvObjects into jbvBinay.\n> >\n> > 3. SQL/JSON sequence construction syntax. A simple comma-separated list can be\n> > used to concatenate single values or sequences into a single resulting sequence.\n> >\n> > SELECT jsonb_path_query('[1, 2, 3]', 'pg $[*], 4, 2 + 3');\n> > jsonb_path_query\n> > ------------------\n> > 1\n> > 2\n> > 3\n> > 4\n> > 5\n> >\n> > SELECT jsonb_path_query('{ \"a\": [1, 2, 3], \"b\": [4, 5] }',\n> > 'pg ($.a[*], $.b[*]) ? (@ % 2 == 1)');\n> > jsonb_path_query\n> > ------------------\n> > 1\n> > 3\n> > 5\n> >\n> >\n> > Patches #4-#6 implement ECMAScript-like syntax constructors and accessors:\n> >\n> > 4. 
Array construction syntax.\n> > This can also be considered as enclosing a sequence constructor into brackets.\n> >\n> > SELECT jsonb_path_query('[1, 2, 3]', 'pg [$[*], 4, 2 + 3]');\n> > jsonb_path_query\n> > ------------------\n> > [1, 2, 3, 4, 5]\n> >\n> > Having this feature, jsonb_path_query_array() becomes somewhat redundant.\n> >\n> >\n> > 5. Object construction syntax. It is useful for constructing derived objects\n> > from the interesting parts of the original object. (But this is not sufficient\n> > to \"project\" each object in array, item method like '.map()' is needed here.)\n> >\n> > SELECT jsonb_path_query('{\"b\": 2}', 'pg { a : 1, b : $.b, \"x y\" : $.b + 3 }');\n> > jsonb_path_query\n> > -------------------------------\n> > { \"a\" : 1, \"b\": 3, \"x y\": 5 }\n> >\n> > Fields with empty values are simply skipped regardless of lax/strict mode:\n> >\n> > SELECT jsonb_path_query('{\"a\": 1}', 'pg { b : $.b, a : $.a ? (@ > 1) }');\n> > jsonb_path_query\n> > ------------------\n> > {}\n> >\n> >\n> > 6. Object subscription syntax. This gives us ability to specify what key to\n> > extract on runtime. The syntax is the same as ordinary array subscription\n> > syntax.\n> >\n> > -- non-existent $.x is simply skipped in lax mode\n> > SELECT jsonb_path_query('{\"a\": \"b\", \"b\": \"c\"}', 'pg $[$.a, \"x\", \"a\"]');\n> > jsonb_path_query\n> > ------------------\n> > \"c\"\n> > \"b\"\n> >\n> > SELECT jsonb_path_query('{\"a\": \"b\", \"b\": \"c\"}', 'pg $[$fld]', '{\"fld\": \"b\"}');\n> > jsonb_path_query\n> > ------------------\n> > \"c\"\n> >\n> > --\n> > Nikita Glukhov\n> > Postgres Professional: http://www.postgrespro.com\n> > The Russian Postgres Company\n>\n>\n>\n> --\n> greg\n\n\n\n-- \ngreg\n\n\n", "msg_date": "Mon, 21 Mar 2022 16:13:46 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: jsonpath syntax extensions" }, { "msg_contents": "\nOp 21-03-2022 om 21:13 schreef Greg Stark:\n> Hm. Actually... 
These changes were split off from the JSON_TABLE\n> patches? Are they still separate or have they been merged into those\n> other patches since? I see the JSON_TABLE thread is getting more\n> comments do those reviews include these patches?\n> \n\nThey are separate.\n\nFWIW, I've done all my JSON_PATH testing both without and with these \nsyntax extensions (but I've done no code review.) I like these \nextensions but as you say -- there seems to be not much interest.\n\n\nErik\n\n> On Mon, 21 Mar 2022 at 16:09, Greg Stark <stark@mit.edu> wrote:\n>>\n>> This patch seems to be getting ignored. Like David I'm a bit puzzled\n>> because it doesn't seem like an especially obscure or difficult patch\n>> to review. Yet it's been multiple years without even a superficial\n>> \"does it meet the coding requirements\" review let alone a design\n>> review.\n>>\n>> Can we get a volunteer to at least give it a quick once-over? I don't\n>> think it's ideal to be doing this in the last CF but neither is it\n>> very appetizing to just shift it to the next CF without a review after\n>> two years...\n>>\n>> On Thu, 27 Feb 2020 at 10:58, Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n>>>\n>>> Hi, hackers!\n>>>\n>>> Attached patches implement several useful jsonpath syntax extensions.\n>>> I already published them two years ago in the original SQL/JSON thread,\n>>> but then after creation of separate threads for SQL/JSON functions and\n>>> JSON_TABLE I forgot about them.\n>>>\n>>> A brief description of the patches:\n>>>\n>>> 1. Introduced new jsonpath modifier 'pg' which is used for enabling\n>>> PostgreSQL-specific extensions. This feature was already proposed in the\n>>> discussion of jsonpath's like_regex implementation.\n>>>\n>>> 2. Added support for raw jbvObject and jbvArray JsonbValues inside jsonpath\n>>> engine. Now, jsonpath can operate with JSON arrays and objects only in\n>>> jbvBinary form. 
But with introduction of array and object constructors in\n>>> patches #4 and #5 raw in-memory jsonb containers can appear in jsonpath engine.\n>>> In some places we can iterate through jbvArrays, in others we need to encode\n>>> jbvArrays and jbvObjects into jbvBinay.\n>>>\n>>> 3. SQL/JSON sequence construction syntax. A simple comma-separated list can be\n>>> used to concatenate single values or sequences into a single resulting sequence.\n>>>\n>>> SELECT jsonb_path_query('[1, 2, 3]', 'pg $[*], 4, 2 + 3');\n>>> jsonb_path_query\n>>> ------------------\n>>> 1\n>>> 2\n>>> 3\n>>> 4\n>>> 5\n>>>\n>>> SELECT jsonb_path_query('{ \"a\": [1, 2, 3], \"b\": [4, 5] }',\n>>> 'pg ($.a[*], $.b[*]) ? (@ % 2 == 1)');\n>>> jsonb_path_query\n>>> ------------------\n>>> 1\n>>> 3\n>>> 5\n>>>\n>>>\n>>> Patches #4-#6 implement ECMAScript-like syntax constructors and accessors:\n>>>\n>>> 4. Array construction syntax.\n>>> This can also be considered as enclosing a sequence constructor into brackets.\n>>>\n>>> SELECT jsonb_path_query('[1, 2, 3]', 'pg [$[*], 4, 2 + 3]');\n>>> jsonb_path_query\n>>> ------------------\n>>> [1, 2, 3, 4, 5]\n>>>\n>>> Having this feature, jsonb_path_query_array() becomes somewhat redundant.\n>>>\n>>>\n>>> 5. Object construction syntax. It is useful for constructing derived objects\n>>> from the interesting parts of the original object. (But this is not sufficient\n>>> to \"project\" each object in array, item method like '.map()' is needed here.)\n>>>\n>>> SELECT jsonb_path_query('{\"b\": 2}', 'pg { a : 1, b : $.b, \"x y\" : $.b + 3 }');\n>>> jsonb_path_query\n>>> -------------------------------\n>>> { \"a\" : 1, \"b\": 3, \"x y\": 5 }\n>>>\n>>> Fields with empty values are simply skipped regardless of lax/strict mode:\n>>>\n>>> SELECT jsonb_path_query('{\"a\": 1}', 'pg { b : $.b, a : $.a ? (@ > 1) }');\n>>> jsonb_path_query\n>>> ------------------\n>>> {}\n>>>\n>>>\n>>> 6. Object subscription syntax. 
This gives us ability to specify what key to\n>>> extract on runtime. The syntax is the same as ordinary array subscription\n>>> syntax.\n>>>\n>>> -- non-existent $.x is simply skipped in lax mode\n>>> SELECT jsonb_path_query('{\"a\": \"b\", \"b\": \"c\"}', 'pg $[$.a, \"x\", \"a\"]');\n>>> jsonb_path_query\n>>> ------------------\n>>> \"c\"\n>>> \"b\"\n>>>\n>>> SELECT jsonb_path_query('{\"a\": \"b\", \"b\": \"c\"}', 'pg $[$fld]', '{\"fld\": \"b\"}');\n>>> jsonb_path_query\n>>> ------------------\n>>> \"c\"\n>>>\n>>> --\n>>> Nikita Glukhov\n>>> Postgres Professional: http://www.postgrespro.com\n>>> The Russian Postgres Company\n>>\n>>\n>>\n>> --\n>> greg\n> \n> \n> \n\n\n", "msg_date": "Mon, 21 Mar 2022 21:25:18 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: jsonpath syntax extensions" }, { "msg_contents": "Hi,\n\nOn 2022-03-21 21:09, Greg Stark wrote:\n> This patch seems to be getting ignored. Like David I'm a bit puzzled\n> because it doesn't seem like an especially obscure or difficult patch\n> to review. Yet it's been multiple years without even a superficial\n> \"does it meet the coding requirements\" review let alone a design\n> review.\n> \n> Can we get a volunteer to at least give it a quick once-over? I don't\n> think it's ideal to be doing this in the last CF but neither is it\n> very appetizing to just shift it to the next CF without a review after\n> two years...\n\nI have just one suggestion: probably the object subscription syntax, as \nin '$[\"keyA\",\"keyB\"]', should not require 'pg ' prefix, as it is a part \nof the original JSONPath (https://goessner.net/articles/JsonPath/) and \nis supported in multiple other implementations.\n\n>> 6. Object subscription syntax. This gives us ability to specify what \n>> key to\n>> extract on runtime. 
The syntax is the same as ordinary array \n>> subscription\n>> syntax.\n>> \n>> -- non-existent $.x is simply skipped in lax mode\n>> SELECT jsonb_path_query('{\"a\": \"b\", \"b\": \"c\"}', 'pg $[$.a, \n>> \"x\", \n>> \"a\"]');\n>> jsonb_path_query\n>> ------------------\n>> \"c\"\n>> \"b\"\n\nThe variable reference support ('pg $[$.a]') probably _is_ a \nPostgreSQL-specific extension, though.\n\n-- Ph.\n\n\n", "msg_date": "Mon, 28 Mar 2022 22:33:37 +0200", "msg_from": "Phil Krylov <phil@krylov.eu>", "msg_from_op": false, "msg_subject": "Re: jsonpath syntax extensions" }, { "msg_contents": "Well I still think this would be a good candidate to get reviewed.\n\nBut it currently needs a rebase and it's the last day of the CF so I\nguess it'll get moved forward again. I don't think \"returned with\nfeedback\" is helpful given there's been basically no feedback :(\n\n\n", "msg_date": "Thu, 31 Mar 2022 15:17:06 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: jsonpath syntax extensions" }, { "msg_contents": "Hi,\nOk, we'll rebase it onto actual master for the next iteration.\nThank you!\n\nOn Thu, Mar 31, 2022 at 10:17 PM Greg Stark <stark@mit.edu> wrote:\n\n> Well I still think this would be a good candidate to get reviewed.\n>\n> But it currently needs a rebase and it's the last day of the CF so I\n> guess it'll get moved forward again. I don't think \"returned with\n> feedback\" is helpful given there's been basically no feedback :(\n>\n>\n>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n", "msg_date": "Thu, 31 Mar 2022 22:21:11 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonpath syntax extensions" }, { "msg_contents": "As discussed in [1], we're taking this opportunity to return some\npatchsets that don't appear to be getting enough reviewer interest.\n\nThis is not a rejection, since we don't necessarily think there's\nanything unacceptable about the entry, but it differs from a standard\n\"Returned with Feedback\" in that there's probably not much actionable\nfeedback at all. Rather than code changes, what this patch needs is more\ncommunity interest. You might\n\n- ask people for help with your approach,\n- see if there are similar patches that your code could supplement,\n- get interested parties to agree to review your patch in a CF, or\n- possibly present the functionality in a way that's easier to review\n overall. [For this patchset in particular, it's been suggested to\n split the extensions up into smaller independent pieces.]\n\n(Doing these things is no guarantee that there will be interest, but\nit's hopefully better than endlessly rebasing a patchset that is not\nreceiving any feedback from the community.)\n\nOnce you think you've built up some community support and the patchset\nis ready for review, you (or any interested party) can resurrect the\npatch entry by visiting\n\n https://commitfest.postgresql.org/38/2482/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". 
(Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n[1] https://postgr.es/m/f6344bbb-9141-e8c8-e655-d9baf40c4478%40timescale.com\n\n\n", "msg_date": "Tue, 2 Aug 2022 14:14:08 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: jsonpath syntax extensions" }, { "msg_contents": "These syntax extensions would make the jsonpath syntax a super powerful query language capable of most nosql workloads people would have. Especially querying jsonpath with a variable key to look for is a sorely missed feature from the language. I would be open to reviewing the patches if need be, but if community support is all that's needed I believe a lot of users who could use this feature aren't using it because of the lack of documentation on all of postgres' amazing jsonpath features. The best doc I've found on all the functionality is https://github.com/obartunov/sqljsondoc/blob/master/jsonpath.md \r\n\r\nLet me know how i can help!\r\nAlex\n\nThe new status of this patch is: Needs review\n", "msg_date": "Mon, 13 Feb 2023 18:18:26 +0000", "msg_from": "Alexander Iansiti <aiansiti@outlook.com>", "msg_from_op": false, "msg_subject": "Re: jsonpath syntax extensions" } ]
[ { "msg_contents": "Enabling BEFORE triggers FOR EACH ROW in partitioned tables is very easy\n-- just remove the check against them. With that, they work fine.\n\nThe main problem is that the executor is not prepared to re-route the\ntuple if the user decides to change the partitioning columns (so you get\nthe error that the partitioning constraint is violated by the partition,\nwhich makes no sense if you're inserting in the top-level partitioned\ntable). There are several views about that:\n\n1. Just let it be. If the user does something silly, it's their problem\nif they get an ugly error message.\n\n2. If the partition keys are changed, raise an error. The trigger is\nallowed to change everything but those columns. Then there's no\nconflict, and it allows desirable use cases.\n\n3. Allow the partition keys to change, as long as the tuple ends up in\nthe same partition. This is the same as (1) except the error message is\nnicer.\n\nThe attached patch implements (2). The cases that are allowed by (3)\nare a strict superset of those allowed by (2), so if we decide to allow\nit in the future, it is possible without breaking anything that works\nafter implementing (2).\n\nThe implementation harnesses the newly added pg_trigger.tgparentid\ncolumn; if that is set to a non-zero value, then we search up the\npartitioning hierarchy for each partitioning key column, and verify the\nvalues are bitwise equal, up to the \"root\". Notes:\n\n* We must check all levels, not just the one immediately above, because\nthe routing might involve crawling down several levels, and any of those\nmight become invalidated if the trigger changes values.\n\n* The \"root\" is not necessarily the root partitioned table, but instead\nit's the table that was named in the command. 
Because of this, we don't\nneed to acquire locks on the tables, since the executor already has the\ntables open and locked (thus they cannot be modified by concurrent\ncommands).\n\n* I find it a little odd that the leaf partition doesn't have any intel\non what its partitioning columns are. I guess they haven't been needed\nthus far, and it seems inappropriate for this admittedly very small\nfeature to add such a burden on the rest of the system.\n\n* The new function I added, ReportTriggerPartkeyChange(), contains one\nserious bug (namely: it doesn't map attribute numbers properly if\npartitions are differently defined). Also, it has a performance issue,\nnamely that we do heap_getattr() potentially repeatedly -- maybe it'd be\nbetter to \"slotify\" the tuple prior to doing the checks. Another\npossible controversial point is that its location in commands/trigger.c\nisn't great. (Frankly, I don't understand why the executor trigger\nfiring functions are in commands/ at all.)\n\nThoughts?\n\n-- \nÁlvaro Herrera", "msg_date": "Thu, 27 Feb 2020 13:51:58 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "BEFORE ROW triggers for partitioned tables" }, { "msg_contents": "On 2020-02-27 17:51, Alvaro Herrera wrote:\n> Enabling BEFORE triggers FOR EACH ROW in partitioned tables is very easy\n> -- just remove the check against them. With that, they work fine.\n\nThis looks good to me in principle. It's a useful thing to support.\n\n> 1. Just let it be. If the user does something silly, it's their problem\n> if they get an ugly error message.\n> \n> 2. If the partition keys are changed, raise an error. The trigger is\n> allowed to change everything but those columns. Then there's no\n> conflict, and it allows desirable use cases.\n> \n> 3. Allow the partition keys to change, as long as the tuple ends up in\n> the same partition. 
This is the same as (1) except the error message is\n> nicer.\n> \n> The attached patch implements (2).\n\nThat seems alright to me.\n\n> * The new function I added, ReportTriggerPartkeyChange(), contains one\n> serious bug (namely: it doesn't map attribute numbers properly if\n> partitions are differently defined). Also, it has a performance issue,\n> namely that we do heap_getattr() potentially repeatedly -- maybe it'd be\n> better to \"slotify\" the tuple prior to doing the checks.\n\nThe attribute ordering issue obviously needs to be addressed, but the \nperformance issue is probably not so important. How many levels of \npartitioning are we expecting?\n\n> Another\n> possible controversial point is that its location in commands/trigger.c\n> isn't great. (Frankly, I don't understand why the executor trigger\n> firing functions are in commands/ at all.)\n\nyeah ...\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 10 Mar 2020 19:44:15 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BEFORE ROW triggers for partitioned tables" }, { "msg_contents": "On Thu, Feb 27, 2020 at 10:22 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n>\n> * The \"root\" is not necessarily the root partitioned table, but instead\n> it's the table that was named in the command. Because of this, we don't\n> need to acquire locks on the tables, since the executor already has the\n> tables open and locked (thus they cannot be modified by concurrent\n> commands).\n\nI believe this is because of the partition level constraints on the\ntable that was named in the command would catch any violation in the\npartition key change in the levels above that table.\n\nWill it be easier to subject the new tuple to the partition level\nconstraints themselves and report if those are violated. 
See\nRelationGetPartitionQual() for getting partition constraints. This\nfunction includes partition constraints from all the levels so in your\nfunction you don't have to walk up the partition tree. It includes\nconstraints from the level above the table that was named in the\ncommand, but that might be fine. We will catch the error earlier and\nmay be provide a better error message.\n\n>\n> * The new function I added, ReportTriggerPartkeyChange(), contains one\n> serious bug (namely: it doesn't map attribute numbers properly if\n> partitions are differently defined).\n\nIIUC the code in your patch, it seems you are just looking at\npartnatts. But partition key can contain expressions also which are\nstored in partexprs. So, I think the code won't catch change in the\npartition key values when it contains expression. Using\nRelationGetPartitionQual() will fix this problem and also problem of\nattribute remapping across the partition hierarchy.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 11 Mar 2020 20:53:50 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BEFORE ROW triggers for partitioned tables" }, { "msg_contents": "On Wed, Mar 11, 2020 at 8:53 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Thu, Feb 27, 2020 at 10:22 PM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> >\n> > * The \"root\" is not necessarily the root partitioned table, but instead\n> > it's the table that was named in the command. 
Because of this, we don't\n> > need to acquire locks on the tables, since the executor already has the\n> > tables open and locked (thus they cannot be modified by concurrent\n> > commands).\n>\n> I believe this is because of the partition level constraints on the\n> table that was named in the command would catch any violation in the\n> partition key change in the levels above that table.\n>\n> Will it be easier to subject the new tuple to the partition level\n> constraints themselves and report if those are violated. See\n> RelationGetPartitionQual() for getting partition constraints. This\n> function includes partition constraints from all the levels so in your\n> function you don't have to walk up the partition tree. It includes\n> constraints from the level above the table that was named in the\n> command, but that might be fine. We will catch the error earlier and\n> may be provide a better error message.\n\nI realized that this will implement the third option in your original\nproposal, not the second one. I suppose that's fine too?\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 12 Mar 2020 09:47:36 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BEFORE ROW triggers for partitioned tables" }, { "msg_contents": "On 2020-03-12 05:17, Ashutosh Bapat wrote:\n> On Wed, Mar 11, 2020 at 8:53 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n>> Will it be easier to subject the new tuple to the partition level\n>> constraints themselves and report if those are violated. See\n>> RelationGetPartitionQual() for getting partition constraints. This\n>> function includes partition constraints from all the levels so in your\n>> function you don't have to walk up the partition tree. It includes\n>> constraints from the level above the table that was named in the\n>> command, but that might be fine. 
We will catch the error earlier and\n>> may be provide a better error message.\n> \n> I realized that this will implement the third option in your original\n> proposal, not the second one. I suppose that's fine too?\n\nIt might be that that is actually easier to do. Instead of trying to \nfigure out which columns have changed, in the face of different column \nordering and general expressions, just check after a trigger whether the \ncolumn still fits into the partition.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 13 Mar 2020 08:28:20 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BEFORE ROW triggers for partitioned tables" }, { "msg_contents": "On 2020-Mar-11, Ashutosh Bapat wrote:\n\n> On Thu, Feb 27, 2020 at 10:22 PM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n\n> > * The new function I added, ReportTriggerPartkeyChange(), contains one\n> > serious bug (namely: it doesn't map attribute numbers properly if\n> > partitions are differently defined).\n> \n> IIUC the code in your patch, it seems you are just looking at\n> partnatts. But partition key can contain expressions also which are\n> stored in partexprs. So, I think the code won't catch change in the\n> partition key values when it contains expression. Using\n> RelationGetPartitionQual() will fix this problem and also problem of\n> attribute remapping across the partition hierarchy.\n\nOh, of course.\n\nIn fact, I don't need to deal with PartitionQual directly; I can just\nuse ExecPartitionCheck(), since in ExecBRInsertTriggers et al we already\nhave all we need. v2 attached.\n\nHere's some example output. 
With my previous patch, this was the error\nwe reported:\n\n insert into parted values (1, 1, 'uno uno v2'); -- fail\n ERROR: changing partitioning columns in a before trigger is not supported\n DETAIL: Column \"a\" was changed by trigger \"t\".\n\nNow, passing emitError=true to ExecPartitionCheck, I get this:\n\n insert into parted values (1, 1, 'uno uno v2'); -- fail\n ERROR: new row for relation \"parted_1_1\" violates partition constraint\n DETAIL: Failing row contains (2, 1, uno uno v2).\n\nNote the discrepancy in the table named in the INSERT vs. the one in the\nerror message. This is a low-quality error IMO. So I'd instead pass\nemitError=false, and produce my own error. It's useful to report the\ntrigger name and original partition name:\n\n insert into parted values (1, 1, 'uno uno v2'); -- fail\n ERROR: moving row to another partition during a BEFORE trigger is not supported\n DETAIL: Before trigger \"t\", row was to be in partition \"public.parted_1_1\"\n\nNote that in this implementation I no longer know which column is the\nproblematic one, but I suppose users have clue enough. Wording\nsuggestions welcome.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 13 Mar 2020 13:25:20 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: BEFORE ROW triggers for partitioned tables" }, { "msg_contents": "On Fri, 13 Mar 2020 at 21:55, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2020-Mar-11, Ashutosh Bapat wrote:\n>\n> > On Thu, Feb 27, 2020 at 10:22 PM Alvaro Herrera\n> > <alvherre@2ndquadrant.com> wrote:\n>\n> > > * The new function I added, ReportTriggerPartkeyChange(), contains one\n> > > serious bug (namely: it doesn't map attribute numbers properly if\n> > > partitions are differently defined).\n> >\n> > IIUC the code in your patch, it seems you are just looking at\n> > partnatts. 
But partition key can contain expressions also which are\n> > stored in partexprs. So, I think the code won't catch change in the\n> > partition key values when it contains expression. Using\n> > RelationGetPartitionQual() will fix this problem and also problem of\n> > attribute remapping across the partition hierarchy.\n>\n> Oh, of course.\n>\n> In fact, I don't need to deal with PartitionQual directly; I can just\n> use ExecPartitionCheck(), since in ExecBRInsertTriggers et al we already\n> have all we need. v2 attached.\n>\n\nThanks.\n\n\n> insert into parted values (1, 1, 'uno uno v2'); -- fail\n> ERROR: moving row to another partition during a BEFORE trigger is not\n> supported\n> DETAIL: Before trigger \"t\", row was to be in partition\n> \"public.parted_1_1\"\n>\n> Note that in this implementation I no longer know which column is the\n> problematic one, but I suppose users have clue enough. Wording\n> suggestions welcome.\n>\n\nWhen we have expression as a partition key, there may not be one particular\ncolumn which causes the row to move out of partition. So, this should be\nfine.\nA slight wording suggestion below.\n\n- * Complain if we find an unexpected trigger type.\n- */\n- if (!TRIGGER_FOR_AFTER(trigForm->tgtype))\n- elog(ERROR, \"unexpected trigger \\\"%s\\\" found\",\n- NameStr(trigForm->tgname));\n\n!AFTER means INSTEAD OF and BEFORE. Do you intend to allow INSTEAD OF\ntriggers\nas well?\n- */\n- if (stmt->timing != TRIGGER_TYPE_AFTER)\n\nSame comment as the above?\n\n+ /*\n+ * After a tuple in a partition goes through a trigger, the user\n+ * could have changed the partition key enough that the tuple\n+ * no longer fits the partition. 
Verify that.\n+ */\n+ if (trigger->tgisclone &&\n\nWhy do we want to restrict this check only for triggers which are cloned\nfrom\nthe ancestors?\n\n+ !ExecPartitionCheck(relinfo, slot, estate, false))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"moving row to another partition during a BEFORE trigger is not\nsupported\"),\n+ errdetail(\"Before trigger \\\"%s\\\", row was to be in partition \\\"%s.%s\\\"\",\n\nIn the error message you removed above, we are mentioning BEFORE FOR EACH\nROW\ntrigger. Should we continue to use the same terminology?\n\nI was wondering whether it would be good to check the partition constraint\nonly\nonce i.e. after all before row triggers have been executed. This would avoid\nthrowing an error in case multiple triggers together worked to keep the\ntuple\nin the same partition when individual trigger/s caused it to move out of\nthat\npartition. But then we would loose the opportunity to point out the before\nrow\ntrigger which actually caused the row to move out of the partition. Anyway,\nwanted to bring that for the discussion here.\n\n@@ -307,7 +307,7 @@ CreatePartitionDirectory(MemoryContext mcxt)\n *\n * The purpose of this function is to ensure that we get the same\n * PartitionDesc for each relation every time we look it up. In the\n- * face of current DDL, different PartitionDescs may be constructed with\n+ * face of concurrent DDL, different PartitionDescs may be constructed with\n\nThanks for catching it. Looks unrelated though.\n\n+-- Before triggers and partitions\n\nThe test looks good. 
Should we add a test for partitioned table with\npartition\nkey as expression?\n\nThe approach looks good to me.\n\n-- \nBest Wishes,\nAshutosh\n", "msg_date": "Tue, 17 Mar 2020 22:11:43 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: BEFORE ROW triggers for partitioned tables" }, { "msg_contents": "I was expecting that documentation somewhere covered the fact that BR\ntriggers are not supported on a partitioned table. And this patch\nwould remove/improve that portion of the code. But I don't see any\ndocumentation changes in this patch.\n\nOn Tue, Mar 17, 2020 at 10:11 PM Ashutosh Bapat\n<ashutosh.bapat@2ndquadrant.com> wrote:\n>\n>\n>\n> On Fri, 13 Mar 2020 at 21:55, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>>\n>> On 2020-Mar-11, Ashutosh Bapat wrote:\n>>\n>> > On Thu, Feb 27, 2020 at 10:22 PM Alvaro Herrera\n>> > <alvherre@2ndquadrant.com> wrote:\n>>\n>> > > * The new function I added, ReportTriggerPartkeyChange(), contains one\n>> > > serious bug (namely: it doesn't map attribute numbers properly if\n>> > > partitions are differently defined).\n>> >\n>> > IIUC the code in your patch, it seems you are just looking at\n>> > partnatts. But partition key can contain expressions also which are\n>> > stored in partexprs. So, I think the code won't catch change in the\n>> > partition key values when it contains expression. Using\n>> > RelationGetPartitionQual() will fix this problem and also problem of\n>> > attribute remapping across the partition hierarchy.\n>>\n>> Oh, of course.\n>>\n>> In fact, I don't need to deal with PartitionQual directly; I can just\n>> use ExecPartitionCheck(), since in ExecBRInsertTriggers et al we already\n>> have all we need. 
v2 attached.\n>\n>\n> Thanks.\n>\n>>\n>> insert into parted values (1, 1, 'uno uno v2'); -- fail\n>> ERROR: moving row to another partition during a BEFORE trigger is not supported\n>> DETAIL: Before trigger \"t\", row was to be in partition \"public.parted_1_1\"\n>>\n>> Note that in this implementation I no longer know which column is the\n>> problematic one, but I suppose users have clue enough. Wording\n>> suggestions welcome.\n>\n>\n> When we have expression as a partition key, there may not be one particular column which causes the row to move out of partition. So, this should be fine.\n> A slight wording suggestion below.\n>\n> - * Complain if we find an unexpected trigger type.\n> - */\n> - if (!TRIGGER_FOR_AFTER(trigForm->tgtype))\n> - elog(ERROR, \"unexpected trigger \\\"%s\\\" found\",\n> - NameStr(trigForm->tgname));\n>\n> !AFTER means INSTEAD OF and BEFORE. Do you intend to allow INSTEAD OF triggers\n> as well?\n> - */\n> - if (stmt->timing != TRIGGER_TYPE_AFTER)\n>\n> Same comment as the above?\n>\n> + /*\n> + * After a tuple in a partition goes through a trigger, the user\n> + * could have changed the partition key enough that the tuple\n> + * no longer fits the partition. Verify that.\n> + */\n> + if (trigger->tgisclone &&\n>\n> Why do we want to restrict this check only for triggers which are cloned from\n> the ancestors?\n>\n> + !ExecPartitionCheck(relinfo, slot, estate, false))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"moving row to another partition during a BEFORE trigger is not supported\"),\n> + errdetail(\"Before trigger \\\"%s\\\", row was to be in partition \\\"%s.%s\\\"\",\n>\n> In the error message you removed above, we are mentioning BEFORE FOR EACH ROW\n> trigger. Should we continue to use the same terminology?\n>\n> I was wondering whether it would be good to check the partition constraint only\n> once i.e. after all before row triggers have been executed. 
This would avoid\n> throwing an error in case multiple triggers together worked to keep the tuple\n> in the same partition when individual trigger/s caused it to move out of that\n> partition. But then we would loose the opportunity to point out the before row\n> trigger which actually caused the row to move out of the partition. Anyway,\n> wanted to bring that for the discussion here.\n>\n> @@ -307,7 +307,7 @@ CreatePartitionDirectory(MemoryContext mcxt)\n> *\n> * The purpose of this function is to ensure that we get the same\n> * PartitionDesc for each relation every time we look it up. In the\n> - * face of current DDL, different PartitionDescs may be constructed with\n> + * face of concurrent DDL, different PartitionDescs may be constructed with\n>\n> Thanks for catching it. Looks unrelated though.\n>\n> +-- Before triggers and partitions\n>\n> The test looks good. Should we add a test for partitioned table with partition\n> key as expression?\n>\n> The approach looks good to me.\n>\n> --\n> Best Wishes,\n> Ashutosh\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 18 Mar 2020 20:45:55 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BEFORE ROW triggers for partitioned tables" }, { "msg_contents": "On 2020-Mar-17, Ashutosh Bapat wrote:\n\n> On Fri, 13 Mar 2020 at 21:55, Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n\n> > Note that in this implementation I no longer know which column is the\n> > problematic one, but I suppose users have clue enough. Wording\n> > suggestions welcome.\n> \n> When we have expression as a partition key, there may not be one particular\n> column which causes the row to move out of partition. 
So, this should be\n> fine.\n\nTrue.\n\n> A slight wording suggestion below.\n> \n> - * Complain if we find an unexpected trigger type.\n> - */\n> - if (!TRIGGER_FOR_AFTER(trigForm->tgtype))\n> - elog(ERROR, \"unexpected trigger \\\"%s\\\" found\",\n> - NameStr(trigForm->tgname));\n> \n> !AFTER means INSTEAD OF and BEFORE. Do you intend to allow INSTEAD OF\n> triggers as well?\n\nHmm, yeah, this should check both types; I'll put it back. Note that\nthis is just a cross-check that the catalogs we're going to copy don't\ncontain bogus info; the real backstop for that at the user level is in\nthe other one you complain about:\n\n> - */\n> - if (stmt->timing != TRIGGER_TYPE_AFTER)\n> \n> Same comment as the above?\n\nNote that in this one we have a check for INSTEAD before we enter the\nFOR EACH ROW block, so this case is already covered -- AFAICS the code\nis correct.\n\n> + /*\n> + * After a tuple in a partition goes through a trigger, the user\n> + * could have changed the partition key enough that the tuple\n> + * no longer fits the partition. Verify that.\n> + */\n> + if (trigger->tgisclone &&\n> \n> Why do we want to restrict this check only for triggers which are\n> cloned from the ancestors?\n\nBecause it's not our business in the other case. When the trigger is\ndefined directly in the partition, it's the user's problem if something\ngoing amiss.\n\n> + !ExecPartitionCheck(relinfo, slot, estate, false))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"moving row to another partition during a BEFORE trigger is not\n> supported\"),\n> + errdetail(\"Before trigger \\\"%s\\\", row was to be in partition \\\"%s.%s\\\"\",\n> \n> In the error message you removed above, we are mentioning BEFORE FOR EACH\n> ROW trigger. 
Should we continue to use the same terminology?\n\nSounds good, I'll change that.\n\nI also changed the errdetail slightly:\n\terrdetail(\"Before executing trigger \\\"%s\\\", the row was to be in partition \\\"%s.%s\\\"\",\n\n> I was wondering whether it would be good to check the partition\n> constraint only once i.e. after all before row triggers have been\n> executed. This would avoid throwing an error in case multiple triggers\n> together worked to keep the tuple in the same partition when\n> individual trigger/s caused it to move out of that partition. But then\n> we would loose the opportunity to point out the before row trigger\n> which actually caused the row to move out of the partition. Anyway,\n> wanted to bring that for the discussion here.\n\nYeah, I too thought about a combination of triggers that move the tuple\nelsewhere and back. Frankly, I don't think we need to support that. It\nsounds devious and likely we'll miss some odd corner case -- anything\ninvolving the weird cross-partition UPDATE mechanism sounds easy to get\nwrong.\n\n> +-- Before triggers and partitions\n> \n> The test looks good. 
Should we add a test for partitioned table with\n> partition\n> key as expression?\n\nWill do.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 18 Mar 2020 18:02:13 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: BEFORE ROW triggers for partitioned tables" }, { "msg_contents": "Thanks for the reviews; I have pushed it now.\n\n(I made the doc edits you mentioned too.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 18 Mar 2020 19:01:22 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: BEFORE ROW triggers for partitioned tables" }, { "msg_contents": "Hi committer,\n\nMany of our customers expect to use BR triggers in partitioned tables.\nAfter I followed your discussion, I also checked your patch. \nHere are two questions confusing me:\n\n1. Your modification removes the check BR triggers against partitioned table,\n and a more friendly error message is added to the ExecInsert and ExecUpdate. \nYou are correct. ExecInsert does not reroute partitions. \nHowever, when ExecUpdate modifies partition keys, \nit is almost equivalent to ExecDelete and ExecInsert, \nand it is re-routed(ExecPrepareTupleRouting) once before ExecInsert. 
Therefore, \nwhy should an error be thrown in ExecUpdate?\nLet's look at a case : \n ```\n postgres=# create table parted (a int, b int, c text) partition by list (a);\n CREATE TABLE\n postgres=# create table parted_1 partition of parted for values in (1);\n CREATE TABLE\n postgres=# create table parted_2 partition of parted for values in (2);\n CREATE TABLE\n postgres=# create function parted_trigfunc() returns trigger language plpgsql as $$\n begin\n new.a = new.a + 1;\n return new;\n end;\n $$;\n CREATE FUNCTION\n postgres=# insert into parted values (1, 1, 'uno uno v1'); \n INSERT 0 1\n postgres=# create trigger t before update on parted\n for each row execute function parted_trigfunc();\n CREATE TRIGGER\n postgres=# update parted set c = c || 'v3'; \n ```\nIf there is no check in the ExecUpdate, \nthe above update SQL will be executed successfully.\nHowever, in your code, this will fail.\nSo, what is the reason for your consideration?\n\n2. In this patch, you only removed the restrictions BR trigger against \nthe partitioned table, but did not fundamentally solve the problem caused \nby modifying partition keys in the BR trigger. What are the difficulties in \nsolving this problem fundamentally? We plan to implement it. \nCan you give us some suggestions?\n\n\n------------------------------------------------------------------\n发件人:Alvaro Herrera <alvherre@2ndquadrant.com>\n发送时间:2021年1月18日(星期一) 20:36\n收件人:Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>\n抄 送:Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>; Pg Hackers <pgsql-hackers@lists.postgresql.org>\n主 题:Re: BEFORE ROW triggers for partitioned tables\n\nThanks for the reviews; I have pushed it now.\n\n(I made the doc edits you mentioned too.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\nHi commiter,Many of our customers expect to use BR triggers in partitioned tables.After I followed your discussion, I also checked your patch. 
Here are two questions confusing me:1. Your modification removes the check BR triggers against partitioned table, and a more friendly error message is added to the ExecInsert and ExecUpdate.  You are correct. ExecInsert does not reroute partitions. However, when ExecUpdate modifies partition keys, it is almost equivalent to ExecDelete and ExecInsert, and it is re-routed(ExecPrepareTupleRouting) once before ExecInsert. Therefore, why should an error be thrown in ExecUpdate?Let's look at a case :     ```        postgres=# create table parted (a int, b int, c text) partition by list (a);        CREATE TABLE        postgres=# create table parted_1 partition of parted for values in (1);        CREATE TABLE        postgres=# create table parted_2 partition of parted for values in (2);        CREATE TABLE                postgres=# create function parted_trigfunc() returns trigger language plpgsql as $$             begin           new.a = new.a + 1;           return new;              end;                     $$;        CREATE FUNCTION        postgres=# insert into parted values (1, 1, 'uno uno v1');         INSERT 0 1        postgres=# create trigger t before update on parted           for each row execute function parted_trigfunc();        CREATE TRIGGER        postgres=# update parted set c = c || 'v3';    ```If there is no check in the ExecUpdate, the above update SQL will be executed successfully.However, in your code, this will fail.So, what is the reason for your consideration?2. In this patch, you only removed the restrictions BR trigger against the partitioned table, but did not fundamentally solve the problem caused by modifying partition keys in the BR trigger. What are the difficulties in solving this problem fundamentally? We plan to implement it. 
Can you give us some suggestions?------------------------------------------------------------------发件人:Alvaro Herrera <alvherre@2ndquadrant.com>发送时间:2021年1月18日(星期一) 20:36收件人:Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>抄 送:Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>; Pg Hackers <pgsql-hackers@lists.postgresql.org>主 题:Re: BEFORE ROW triggers for partitioned tablesThanks for the reviews; I have pushed it now.(I made the doc edits you mentioned too.)-- Álvaro Herrera                https://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 18 Jan 2021 20:59:37 +0800", "msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaQkVGT1JFIFJPVyB0cmlnZ2VycyBmb3IgcGFydGl0aW9uZWQgdGFibGVz?=" }, { "msg_contents": "Hi commiter,\n\nMany of our customers expect to use BR triggers in partitioned tables.\nAfter I followed your discussion, I also checked your patch. \nHere are two questions confusing me:\n\n1. Your modification removes the check BR triggers against partitioned table,\n and a more friendly error message is added to the ExecInsert and ExecUpdate. \nYou are correct. ExecInsert does not reroute partitions. \nHowever, when ExecUpdate modifies partition keys, \nit is almost equivalent to ExecDelete and ExecInsert, \nand it is re-routed(ExecPrepareTupleRouting) once before ExecInsert. 
Therefore, \nwhy should an error be thrown in ExecUpdate?\nLet's look at a case : \n ```\n postgres=# create table parted (a int, b int, c text) partition by list (a);\n CREATE TABLE\n postgres=# create table parted_1 partition of parted for values in (1);\n CREATE TABLE\n postgres=# create table parted_2 partition of parted for values in (2);\n CREATE TABLE\n postgres=# create function parted_trigfunc() returns trigger language plpgsql as $$\n begin\n new.a = new.a + 1;\n return new;\n end;\n $$;\n CREATE FUNCTION\n postgres=# insert into parted values (1, 1, 'uno uno v1'); \n INSERT 0 1\n postgres=# create trigger t before update on parted\n for each row execute function parted_trigfunc();\n CREATE TRIGGER\n postgres=# update parted set c = c || 'v3'; \n ```\nIf there is no check in the ExecUpdate, \nthe above update SQL will be executed successfully.\nHowever, in your code, this will fail.\nSo, what is the reason for your consideration?\n\n2. In this patch, you only removed the restrictions BR trigger against \nthe partitioned table, but did not fundamentally solve the problem caused \nby modifying partition keys in the BR trigger. What are the difficulties in \nsolving this problem fundamentally? We plan to implement it. 
\nCan you give us some suggestions?\n\nWe look forward to your reply.\nThank you very much,\n Regards, Adger\n\n\n\n\n\n------------------------------------------------------------------\n发件人:Alvaro Herrera <alvherre@2ndquadrant.com>\n发送时间:2021年1月18日(星期一) 20:36\n收件人:Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>\n抄 送:Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>; Pg Hackers <pgsql-hackers@lists.postgresql.org>\n主 题:Re: BEFORE ROW triggers for partitioned tables\n\nThanks for the reviews; I have pushed it now.\n\n(I made the doc edits you mentioned too.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 18 Jan 2021 21:03:07 +0800", "msg_from": "\"=?UTF-8?B?5p2O5p2wKOaFjui/vSk=?=\" <adger.lj@alibaba-inc.com>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaQkVGT1JFIFJPVyB0cmlnZ2VycyBmb3IgcGFydGl0aW9uZWQgdGFibGVz?=" }
[ { "msg_contents": "Hi,\n\nI am trying to run a few benchmarks measuring the effects of patch to\nmake GetSnapshotData() faster in the face of larger numbers of\nestablished connections.\n\nBefore the patch connection establishment often is very slow due to\ncontention. The first few connections are fast, but after that it takes\nincreasingly long. The first few connections constantly hold\nProcArrayLock in shared mode, which then makes it hard for new\nconnections to acquire it exclusively (I'm addressing that to a\nsignificant degree in the patch FWIW).\n\nBut for a fair comparison of the runtime effects I'd like to only\ncompare the throughput for when connections are actually usable,\notherwise I end up benchmarking few vs many connections, which is not\nuseful. And because I'd like to run the numbers for a lot of different\nnumbers of connections etc, I can't just make each run several hour\nlongs to make the initial minutes not matter much.\n\nTherefore I'd like to make pgbench wait till it has established all\nconnections, before they run queries.\n\nDoes anybody else see this as being useful?\n\nIf so, should this be done unconditionally? A new option? Included in an\nexisting one somehow?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 27 Feb 2020 10:01:00 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "Hi,\n\nOn 2020-02-27 10:01:00 -0800, Andres Freund wrote:\n> If so, should this be done unconditionally? A new option? Included in an\n> existing one somehow?\n\nFWIW, leaving windows, error handling, and other annoyances aside, this\ncan be implemented fairly simply. 
See below.\n\nAs an example of the difference:\n\nBefore:\nandres@awork3:~/build/postgres/dev-optimize/vpath$ ./src/bin/pgbench/pgbench -M prepared -c 5000 -j 100 -T 100 -P1 -S\nstarting vacuum...end.\nprogress: 100.4 s, 515307.4 tps, lat 1.374 ms stddev 7.739\ntransaction type: <builtin: select only>\nscaling factor: 30\nquery mode: prepared\nnumber of clients: 5000\nnumber of threads: 100\nduration: 100 s\nnumber of transactions actually processed: 51728348\nlatency average = 1.374 ms\nlatency stddev = 7.739 ms\ntps = 513802.541226 (including connections establishing)\ntps = 521342.427158 (excluding connections establishing)\n\n\nNote that there's no progress report until the end. That's because the\nmain thread didn't get a connection until the other threads were done.\n\n\nAfter:\n\npgbench -M prepared -c 5000 -j 100 -T 100 -P1 -S\nstarting vacuum...end.\nprogress: 1.5 s, 9943.5 tps, lat 4.795 ms stddev 14.822\nprogress: 2.0 s, 380312.6 tps, lat 1.728 ms stddev 15.461\nprogress: 3.0 s, 478811.1 tps, lat 2.052 ms stddev 31.687\nprogress: 4.0 s, 470804.6 tps, lat 1.941 ms stddev 24.661\n\n\n\nI think this also shows that \"including/excluding connections\nestablishing\" as well as some of the other stats reported pretty\nbogus. In the 'before' case a substantial numer of the connections had\nnot yet been established until the end of the test run!\n\n\n\ndiff --git i/src/bin/pgbench/pgbench.c w/src/bin/pgbench/pgbench.c\nindex 1159757acb0..1a82c6a290e 100644\n--- i/src/bin/pgbench/pgbench.c\n+++ w/src/bin/pgbench/pgbench.c\n@@ -310,6 +310,8 @@ typedef struct RandomState\n /* Various random sequences are initialized from this one. 
*/\n static RandomState base_random_sequence;\n \n+pthread_barrier_t conn_barrier;\n+\n /*\n * Connection state machine states.\n */\n@@ -6110,6 +6112,8 @@ main(int argc, char **argv)\n \n /* start threads */\n #ifdef ENABLE_THREAD_SAFETY\n+ pthread_barrier_init(&conn_barrier, NULL, nthreads);\n+\n for (i = 0; i < nthreads; i++)\n {\n TState *thread = &threads[i];\n@@ -6265,6 +6269,8 @@ threadRun(void *arg)\n INSTR_TIME_SET_CURRENT(thread->conn_time);\n INSTR_TIME_SUBTRACT(thread->conn_time, thread->start_time);\n \n+ pthread_barrier_wait(&conn_barrier);\n+\n /* explicitly initialize the state machines */\n for (i = 0; i < nstate; i++)\n {\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 27 Feb 2020 10:51:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "At Thu, 27 Feb 2020 10:51:29 -0800, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2020-02-27 10:01:00 -0800, Andres Freund wrote:\n> > If so, should this be done unconditionally? A new option? Included in an\n> > existing one somehow?\n> \n> FWIW, leaving windows, error handling, and other annoyances aside, this\n> can be implemented fairly simply. 
See below.\n> \n> As an example of the difference:\n> \n> Before:\n> andres@awork3:~/build/postgres/dev-optimize/vpath$ ./src/bin/pgbench/pgbench -M prepared -c 5000 -j 100 -T 100 -P1 -S\n> starting vacuum...end.\n> progress: 100.4 s, 515307.4 tps, lat 1.374 ms stddev 7.739\n> transaction type: <builtin: select only>\n> scaling factor: 30\n> query mode: prepared\n> number of clients: 5000\n> number of threads: 100\n> duration: 100 s\n> number of transactions actually processed: 51728348\n> latency average = 1.374 ms\n> latency stddev = 7.739 ms\n> tps = 513802.541226 (including connections establishing)\n> tps = 521342.427158 (excluding connections establishing)\n> \n> \n> Note that there's no progress report until the end. That's because the\n> main thread didn't get a connection until the other threads were done.\n> \n> \n> After:\n> \n> pgbench -M prepared -c 5000 -j 100 -T 100 -P1 -S\n> starting vacuum...end.\n> progress: 1.5 s, 9943.5 tps, lat 4.795 ms stddev 14.822\n> progress: 2.0 s, 380312.6 tps, lat 1.728 ms stddev 15.461\n> progress: 3.0 s, 478811.1 tps, lat 2.052 ms stddev 31.687\n> progress: 4.0 s, 470804.6 tps, lat 1.941 ms stddev 24.661\n> \n> \n> \n> I think this also shows that \"including/excluding connections\n> establishing\" as well as some of the other stats reported pretty\n> bogus. In the 'before' case a substantial numer of the connections had\n> not yet been established until the end of the test run!\n\nI see it useful. In most cases we don't care connection time of\npgbench. Especially in the mentioned case the result is just bogus. I\nthink the reason for \"including/excluding connection establishing\" is\nnot that people wants to see how long connection took to establish but\nthat how long the substantial work took. 
If each client did run with\ncontinuously re-establishing new connections the connection time would\nbe useful, but actually all the connections are established at once at\nthe beginning.\n\nSo FWIW I prefer that the barrier is applied by default (that is, it\ncan be disabled) and the progress time starts at the time all clients\nhas been established.\n\n> starting vacuum...end.\n+ time to established 5000 connections: 1323ms\n! progress: 1.0 s, 330000.5 tps, lat 2.795 ms stddev 14.822\n! progress: 2.0 s, 380312.6 tps, lat 1.728 ms stddev 15.461\n! progress: 3.0 s, 478811.1 tps, lat 2.052 ms stddev 31.687\n! progress: 4.0 s, 470804.6 tps, lat 1.941 ms stddev 24.661\n> transaction type: <builtin: select only>\n> scaling factor: 30\n> query mode: prepared\n> number of clients: 5000\n> number of threads: 100\n> duration: 100 s\n> number of transactions actually processed: 51728348\n> latency average = 1.374 ms\n> latency stddev = 7.739 ms\n> tps = 513802.541226 (including connections establishing)\n> tps = 521342.427158 (excluding connections establishing)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 Feb 2020 15:00:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "Hello Andres,\n\n> Therefore I'd like to make pgbench wait till it has established all\n> connections, before they run queries.\n\n> Does anybody else see this as being useful?\n\nYes, I think that having this behavior available would make sense.\n\n> If so, should this be done unconditionally?\n\nDunno. I should think about it. I'd say probably.\n\nPgbench is more or less designed to run a long hopefully steady-state \nbenchmark, so that the initial connection setup is always negligeable. 
Not \ncomplying with this hypothesis quite often leads to weird results.\n\n> A new option?\n\nMaybe, if not unconditional.\n\nIf there is an unconditional barrier, the excluding/including connection \nstuff does not make a lot of sense when not under -C, if it did make any \nsense before…\n\n> Included in an existing one somehow?\n\nWhich one would you suggest?\n\nAdding a synchronization barrier should be simple enough, I thought about \nit in the past.\n\nHowever, I'd still be wary that it is no silver bullet: if you start a lot \nof threads compared to the number of available cores, pgbench would \nbasically overload the system, and you would experience a lot of waiting \ntime which reflects that the client code has not got enough cpu time. \nBasically you would be testing the OS process/thread management \nperformance.\n\nOn my 4-core laptop, with a do-nothing script (\\set i 0):\n\n sh> pgbench -T 10 -f nope.sql -P 1 -j 10 -c 10\n latency average = 0.000 ms\n latency stddev = 0.049 ms\n tps = 21048841.630291 (including connections establishing)\n tps = 21075139.938887 (excluding connections establishing)\n\n sh> pgbench -T 10 -f nope.sql -P 1 -j 100 -c 100\n latency average = 0.002 ms\n latency stddev = 0.470 ms\n tps = 23846777.241370 (including connections establishing)\n tps = 24132252.146257 (excluding connections establishing)\n\nThroughput is slightly better, latency average and variance explode \nbecause each thread is given stretches of cpu time to advance, then wait \nfor the next round of cpu time.\n\n-- \nFabien.", "msg_date": "Sat, 29 Feb 2020 15:29:19 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "\nHello Kyotaro-san,\n\n>> I think this also shows that \"including/excluding connections\n>> establishing\" as well as some of the other stats reported pretty\n>> bogus. 
In the 'before' case a substantial numer of the connections had\n>> not yet been established until the end of the test run!\n>\n> I see it useful. In most cases we don't care connection time of\n> pgbench. Especially in the mentioned case the result is just bogus. I\n> think the reason for \"including/excluding connection establishing\" is\n> not that people wants to see how long connection took to establish but\n> that how long the substantial work took. If each client did run with\n> continuously re-establishing new connections the connection time would\n> be useful, but actually all the connections are established at once at\n> the beginning.\n>\n> So FWIW I prefer that the barrier is applied by default\n\nYep.\n\n> (that is, it can be disabled)\n\nOn reflection, I'm not sure I see a use case for not running the barrier \nif it is available.\n\n> and the progress time starts at the time all clients has been \n> established.\n\nYep, the start time should be set after the connection barrier, and \npossibly before a start barrier to ensure that no transaction has started \nbefore the start time: although performance measures are expected to be \nslightly false because of how they are measured (measuring takes time), \nfrom a benchmarking perspective the displayed result should be <= the \nactual performance.\n\nNow, again, if long benchmarks are run, which for a db should more or less \nalways be the case, this should not matter much.\n\n>> starting vacuum...end.\n> + time to established 5000 connections: 1323ms\n\nYep, maybe showing the initial connection time separately.\n\n-- \nFabien.\n\n\n", "msg_date": "Sat, 29 Feb 2020 15:39:06 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" 
}, { "msg_contents": "Hi,\n\nOn 2020-02-29 15:29:19 +0100, Fabien COELHO wrote:\n> Pgbench is more or less designed to run a long hopefully steady-state\n> benchmark, so that the initial connection setup is always negligeable. Not\n> complying with this hypothesis quite often leads to weird results.\n\nI don't think this is a good starting point. Sure, a longer run will\nyield more precise results, and one needs more than just an\ninstantaneous measurement. But in a lot of cases we want to use pgbench\nto measure a lot of different variations, making it infeasible for each\nrun to be all that long.\n\nOf course whether that's feasible depends on the workload (e.g. readonly\nruns can be shorter than read/write runs).\n\nAlso note that in the case that made me look at this, you'd have to run\nthe test for *weeks* to drown out the performance difference that's\nsolely caused by difference in how long individual connects are\nestablished. Partially because the \"excluding connection establishing\"\nnumber is entirely broken, but also because fewer connections having\nbeen established changes the performance so much.\n\n\nI think we should also consider making pgbench actually use non-blocking\nconnection establishment. It seems pretty weird that that's the one\nlibpq operation where we don't? In particular for -C, with -c > -j,\nthat makes the results pretty meaningless.\n\n\n> Adding a synchronization barrier should be simple enough, I thought about it\n> in the past.\n> \n> However, I'd still be wary that it is no silver bullet: if you start a lot\n> of threads compared to the number of available cores, pgbench would\n> basically overload the system, and you would experience a lot of waiting\n> time which reflects that the client code has not got enough cpu time.\n> Basically you would be testing the OS process/thread management performance.\n\nSure, that's possible. 
But I don't see what that has to do with the\nbarrier?\n\nAlso, most scripts actually have client/server interaction...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 29 Feb 2020 09:37:45 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "Hello Andres,\n\n> FWIW, leaving windows, error handling, and other annoyances aside, this\n> can be implemented fairly simply. See below.\n\nAttached an attempt at improving things.\n\nI've put 2 barriers: one so that all threads are up, one when all \nconnections are setup and the bench is ready to go.\n\nI've done a blind attempt at implementing the barrier stuff on windows.\n\nI've changed the performance calculations depending on -C or not. Ramp-up \neffects are smoothed.\n\nI've merged all time-related stuff (time_t, instr_time, int64) to use a \nunique type (pg_time_usec_t) and set of functions/macros, which simplifies \nthe code somehow.\n\nI've tried to do some variable renaming to distinguish timestamps and \nintervals.\n\nThis is work in progress.\n\n-- \nFabien.", "msg_date": "Sun, 1 Mar 2020 22:16:06 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "Hi,\n\nOn 2020-03-01 22:16:06 +0100, Fabien COELHO wrote:\n>\n> Hello Andres,\n>\n> > FWIW, leaving windows, error handling, and other annoyances aside, this\n> > can be implemented fairly simply. See below.\n>\n> Attached an attempt at improving things.\n\nAwesome!\n\n\n> I've put 2 barriers: one so that all threads are up, one when all\n> connections are setup and the bench is ready to go.\n\nI'd done similarly locally.\n\nSlight aside: Have you ever looked at moving pgbench to non-blocking\nconnection establishment? 
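[Editor's note, for readers unfamiliar with the pattern being suggested: libpq exposes non-blocking connection establishment through PQconnectStart/PQconnectPoll, where the caller drives the connection state machine from its own select() loop instead of blocking in PQconnectdb. The sketch below shows the same loop shape with a raw POSIX socket rather than libpq, so it is self-contained; the helper name and parameters are illustrative, not pgbench code.]

```c
#include <arpa/inet.h>
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/*
 * Start a non-blocking connect and wait for its outcome with select(),
 * mirroring the PQconnectStart/PQconnectPoll driving loop.
 * Returns 0 on success, -1 on failure or timeout.
 */
static int
connect_nonblocking(const char *ip, int port, int timeout_sec)
{
	int			sock = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in addr;
	fd_set		wfds;
	struct timeval tv;
	int			err = 0;
	socklen_t	errlen = sizeof(err);

	if (sock < 0)
		return -1;
	fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

	addr.sin_family = AF_INET;
	addr.sin_port = htons((unsigned short) port);
	inet_pton(AF_INET, ip, &addr.sin_addr);

	if (connect(sock, (struct sockaddr *) &addr, sizeof(addr)) == 0)
	{
		close(sock);
		return 0;				/* connected immediately */
	}
	if (errno != EINPROGRESS)
	{
		close(sock);
		return -1;				/* immediate failure */
	}

	/* EINPROGRESS: the connect proceeds in the background; wait for it */
	FD_ZERO(&wfds);
	FD_SET(sock, &wfds);
	tv.tv_sec = timeout_sec;
	tv.tv_usec = 0;
	if (select(sock + 1, NULL, &wfds, NULL, &tv) <= 0 ||
		getsockopt(sock, SOL_SOCKET, SO_ERROR, &err, &errlen) < 0 ||
		err != 0)
	{
		close(sock);
		return -1;				/* timeout or deferred connect error */
	}
	close(sock);
	return 0;
}
```

With libpq the shape is the same: PQconnectStart replaces socket()/connect(), and each select() wakeup calls PQconnectPoll until it returns PGRES_POLLING_OK or PGRES_POLLING_FAILED; that is what lets one thread drive many concurrent connects, which matters for -C with -c > -j.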
It seems weird to use non-blocking everywhere\nbut connection establishment.\n\n\n> I've done a blind attempt at implementing the barrier stuff on windows.\n\nNeat.\n\n\n> I've changed the performance calculations depending on -C or not. Ramp-up\n> effects are smoothed.\n\n\n> I've merged all time-related stuff (time_t, instr_time, int64) to use a\n> unique type (pg_time_usec_t) and set of functions/macros, which simplifies\n> the code somehow.\n\nHm. I'm not convinced it's a good idea for pgbench to do its own thing\nhere.\n\n\n>\n> #ifdef WIN32\n> +#define PTHREAD_BARRIER_SERIAL_THREAD (-1)\n> +\n> /* Use native win32 threads on Windows */\n> typedef struct win32_pthread *pthread_t;\n> typedef int pthread_attr_t;\n> +typedef SYNCHRONIZATION_BARRIER pthread_barrier_t;\n>\n> static int\tpthread_create(pthread_t *thread, pthread_attr_t *attr, void *(*start_routine) (void *), void *arg);\n> static int\tpthread_join(pthread_t th, void **thread_return);\n> +\n> +static int\tpthread_barrier_init(pthread_barrier_t *barrier, void *unused, int nthreads);\n> +static int\tpthread_barrier_wait(pthread_barrier_t *barrier);\n> +static int\tpthread_barrier_destroy(pthread_barrier_t *barrier);\n\nHow about using 'struct unknown_type *unused' instead of \"unused\"?\nBecause the void *unused will accept everything...\n\n\n> +/* Thread synchronization barriers */\n> +static pthread_barrier_t\n> +\tstart_barrier,\t\t/* all threads are started */\n> +\tbench_barrier;\t\t/* benchmarking ready to start */\n> +\n\nWe don't really need two barriers here. The way that pthread barriers\nare defined is that they 'reset' after all the threads have arrived. 
You\ncan argue we still want that, but ...\n\n\n\n> @@ -5165,20 +5151,16 @@ printSimpleStats(const char *prefix, SimpleStats *ss)\n>\n> /* print out results */\n> static void\n> -printResults(StatsData *total, instr_time total_time,\n> -\t\t\t instr_time conn_total_time, int64 latency_late)\n> +printResults(StatsData *total,\n\nGiven that we're changing the output (for the better) of pgbench again,\nI wonder if we should add the pgbench version to the benchmark\noutput. Otherwise it seems easy to end up e.g. seeing a performance\ndifference between pg12 and pg14, where all that's actually happening is\na different output because each run used the respective pgbench version.\n\n\n\n> +\t\t\t pg_time_usec_t total_delay,\t\t/* benchmarking time */\n> +\t\t\t pg_time_usec_t conn_total_delay,\t/* is_connect */\n> +\t\t\t pg_time_usec_t conn_elapsed_delay,\t/* !is_connect */\n> +\t\t\t int64 latency_late)\n\nI'm not a fan of naming these 'delay'. To me that doesn't sounds like\nit's about the time the total benchmark has taken.\n\n\n> @@ -5239,8 +5220,16 @@ printResults(StatsData *total, instr_time total_time,\n> \t\t\t 0.001 * total->lag.sum / total->cnt, 0.001 * total->lag.max);\n> \t}\n>\n> -\tprintf(\"tps = %f (including connections establishing)\\n\", tps_include);\n> -\tprintf(\"tps = %f (excluding connections establishing)\\n\", tps_exclude);\n> +\tif (is_connect)\n> +\t{\n> +\t\tprintf(\"average connection time = %.3f ms\\n\", 0.001 * conn_total_delay / total->cnt);\n> +\t\tprintf(\"tps = %f (including reconnection times)\\n\", tps);\n> +\t}\n> +\telse\n> +\t{\n> +\t\tprintf(\"initial connection time = %.3f ms\\n\", 0.001 * conn_elapsed_delay);\n> +\t\tprintf(\"tps = %f (without initial connection establishing)\\n\", tps);\n> +\t}\n\nKeeping these separate makes sense to me, they're just so wildly\ndifferent.\n\n\n> +/*\n> + * Simpler convenient interface\n> + *\n> + * The instr_time type is expensive when dealing with time arithmetic.\n> + * Define a type to 
hold microseconds on top of this, suitable for\n> + * benchmarking performance measures, eg in \"pgbench\".\n> + */\n> +typedef int64 pg_time_usec_t;\n> +\n> +static inline pg_time_usec_t\n> +pg_time_get_usec(void)\n> +{\n> +\tinstr_time now;\n> +\n> +\tINSTR_TIME_SET_CURRENT(now);\n> +\treturn (pg_time_usec_t) INSTR_TIME_GET_MICROSEC(now);\n> +}\n\nFor me the function name sounds like you're getting the usec out of a\npg_time. Not that it's getting a new timestamp.\n\n\n> +#define PG_TIME_SET_CURRENT_LAZY(t)\t\t\\\n> +\tif ((t) == 0) \t\t\t\t\t\t\\\n> +\t\t(t) = pg_time_get_usec()\n> +\n> +#define PG_TIME_GET_DOUBLE(t) (0.000001 * (t))\n> #endif\t\t\t\t\t\t\t/* INSTR_TIME_H */\n\nI'd make it an inline function instead of this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 4 Mar 2020 09:40:03 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "Hello Andres,\n\n> Slight aside: Have you ever looked at moving pgbench to non-blocking\n> connection establishment? It seems weird to use non-blocking everywhere\n> but connection establishment.\n\nNope. If there is some interest, why not. The reason for not doing it is \nthat the typical use case is just to connect once at the beginning so that \nconnections do not matter anyway. Now with -C it makes sense.\n\n>> I've changed the performance calculations depending on -C or not. Ramp-up\n>> effects are smoothed.\n>\n>> I've merged all time-related stuff (time_t, instr_time, int64) to use a\n>> unique type (pg_time_usec_t) and set of functions/macros, which simplifies\n>> the code somehow.\n>\n> Hm. 
I'm not convinced it's a good idea for pgbench to do its own thing\n> here.\n\nHaving 3 time types (in fact, 4, double is used as well for some \ncalculations) in just one file to deal with time does not help much to \nunderstand the code, and there is quite a few line to translate from one \nto the other.\n\n>> +static int\tpthread_barrier_init(pthread_barrier_t *barrier, void *unused, int nthreads);\n>\n> How about using 'struct unknown_type *unused' instead of \"unused\"?\n> Because the void *unused will accept everything...\n\nNever encountered this pattern. It does not seem to be used anywhere in pg \nsources. I'd be afraid that some compilers would complain. I can try \nanyway.\n\n>> +/* Thread synchronization barriers */\n>> +static pthread_barrier_t\n>> +\tstart_barrier,\t\t/* all threads are started */\n>> +\tbench_barrier;\t\t/* benchmarking ready to start */\n>> +\n>\n> We don't really need two barriers here. The way that pthread barriers \n> are defined is that they 'reset' after all the threads have arrived. You \n> can argue we still want that, but ...\n\nYes, one barrier could be reused.\n\n>> /* print out results */\n>> static void\n>> -printResults(StatsData *total, instr_time total_time,\n>> -\t\t\t instr_time conn_total_time, int64 latency_late)\n>> +printResults(StatsData *total,\n>\n> Given that we're changing the output (for the better) of pgbench again,\n> I wonder if we should add the pgbench version to the benchmark\n> output.\n\nDunno. Maybe.\n\n> Otherwise it seems easy to end up e.g. seeing a performance\n> difference between pg12 and pg14, where all that's actually happening is\n> a different output because each run used the respective pgbench version.\n\nYep.\n\n>> +\t\t\t pg_time_usec_t total_delay,\t\t/* benchmarking time */\n>> +\t\t\t pg_time_usec_t conn_total_delay,\t/* is_connect */\n>> +\t\t\t pg_time_usec_t conn_elapsed_delay,\t/* !is_connect */\n>> +\t\t\t int64 latency_late)\n>\n> I'm not a fan of naming these 'delay'. 
To me that doesn't sounds like\n> it's about the time the total benchmark has taken.\n\nHmmm… I'd like to differentiate variable names which contain timestamp \nversus those which contain intervals, given that it is the same underlying \ntype. That said, I'm not very happy with \"delay\" either.\n\nWhat would you suggest?\n\n>> +pg_time_get_usec(void)\n>\n> For me the function name sounds like you're getting the usec out of a\n> pg_time. Not that it's getting a new timestamp.\n\nOk, I'll think of something else, possibly \"pg_now\"? \"pg_time_now\"?\n\n>> +#define PG_TIME_SET_CURRENT_LAZY(t)\t\t\\\n>> +\tif ((t) == 0) \t\t\t\t\t\t\\\n>> +\t\t(t) = pg_time_get_usec()\n>> +\n>> +#define PG_TIME_GET_DOUBLE(t) (0.000001 * (t))\n>> #endif\t\t\t\t\t\t\t/* INSTR_TIME_H */\n>\n> I'd make it an inline function instead of this.\n\nI did it that way because it was already done with defines on instr_time, \nbut I'm fine with inline.\n\nI'll try to look at it over the week-end.\n\n-- \nFabien.", "msg_date": "Thu, 5 Mar 2020 23:55:04 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "Hello Andres,\n\n>> I've changed the performance calculations depending on -C or not. Ramp-up\n>> effects are smoothed.\n>\n>> I've merged all time-related stuff (time_t, instr_time, int64) to use a\n>> unique type (pg_time_usec_t) and set of functions/macros, which simplifies\n>> the code somehow.\n>\n> Hm. I'm not convinced it's a good idea for pgbench to do its own thing\n> here.\n\nGiven the unjustifiable heterogeneity it induces and the simpler code \nafter the move, I think it is much better. Pgbench cloc is smaller after \nbarriers are added (4655 to 4650) thanks to that and a few other code \nsimplifications. 
Removing all INSTR_TIME_* costly macros is a relief in \nitself…\n\n>> +static int\tpthread_barrier_init(pthread_barrier_t *barrier, void *unused, int nthreads);\n>\n> How about using 'struct unknown_type *unused' instead of \"unused\"?\n\nHaven't done it because I found no other instances in pg, and anyway this \ncode is only used once locally and NULL is passed.\n\n>> +static pthread_barrier_t\n>> +\tstart_barrier,\t\t/* all threads are started */\n>> +\tbench_barrier;\t\t/* benchmarking ready to start */\n>\n> We don't really need two barriers here.\n\nIndeed. Down to one.\n\n>> /* print out results */\n>\n> Given that we're changing the output (for the better) of pgbench again,\n> I wonder if we should add the pgbench version to the benchmark\n> output.\n\nNot sure about it, but done anyway.\n\n>> +\t\t\t pg_time_usec_t total_delay,\t\t/* benchmarking time */\n>> +\t\t\t pg_time_usec_t conn_total_delay,\t/* is_connect */\n>> +\t\t\t pg_time_usec_t conn_elapsed_delay,\t/* !is_connect */\n>> +\t\t\t int64 latency_late)\n>\n> I'm not a fan of naming these 'delay'. To me that doesn't sounds like\n> it's about the time the total benchmark has taken.\n\nI have used '_duration', and tried to clarify some field and variable \nnames depending on what data they actually hold.\n\n>> +\t\tprintf(\"tps = %f (including reconnection times)\\n\", tps);\n>> +\t\tprintf(\"tps = %f (without initial connection establishing)\\n\", tps);\n>\n> Keeping these separate makes sense to me, they're just so wildly \n> different.\n\nYep. I've added a comment about that.\n\n>> +static inline pg_time_usec_t\n>> +pg_time_get_usec(void)\n>\n> For me the function name sounds like you're getting the usec out of a\n> pg_time. 
Not that it's getting a new timestamp.\n\nI've used \"pg_time_now()\".\n\n>> +#define PG_TIME_SET_CURRENT_LAZY(t)\t\t\\\n>> +\tif ((t) == 0) \t\t\t\t\t\t\\\n>> +\t\t(t) = pg_time_get_usec()\n>\n> I'd make it an inline function instead of this.\n\nDone \"pg_time_now_lazy(&now)\"\n\nI have also simplified the code around thread creation & join because it \nwas a mess: thread 0 was run in the middle of the stat collection loop…\n\nI have updated the doc with actual current output, but excluding the \nversion display which would have to be changed between releases.\n\n-- \nFabien.", "msg_date": "Sat, 7 Mar 2020 09:24:43 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "Hallo Andres,\n\n> Slight aside: Have you ever looked at moving pgbench to non-blocking \n> connection establishment? It seems weird to use non-blocking everywhere \n> but connection establishment.\n\nAttached an attempt at doing that, mostly done for fun. It seems to be a \nlittle slower on a local socket.\n\nWhat do you think?\n\nMaybe it would be worth having it with an option?\n\n-- \nFabien.", "msg_date": "Sat, 7 Mar 2020 09:30:31 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "Hello,\n\n>>> I've merged all time-related stuff (time_t, instr_time, int64) to use a\n>>> unique type (pg_time_usec_t) and set of functions/macros, which simplifies\n>>> the code somehow.\n>> \n>> Hm. 
I'm not convinced it's a good idea for pgbench to do its own thing\n>> here.\n\nI really think that the refactoring part is a good thing because cloc and \ncost is reduced (time arithmetic is an ugly pain with instr_time).\n\nI have split the patch.\n\n* First patch reworks time measurements in pgbench.\n\nIt creates a convenient pg_time_usec_t and use it everywhere, getting rid \nof \"instr_time_t\". The code is somehow simplified wrt what time are taken\nand what they mean.\n\nInstead of displaying 2 tps at the end, which is basically insane, it \nshows one tps for --connect, which includes reconnection times, and one \ntps for the usual one connection at startup which simply ignores the \ninitial connection time.\n\nThis (mostly) refactoring reduces the cloc.\n\n* Second patch adds a barrier before starting the bench\n\nIt applies on top of the previous one. The initial imbalance due to thread \ncreation times is smoothed.\n\nI may add a --start-on option afterwards so that several pgbench (running \non distinct hosts) can be synchronized, which would be implemented as a \ndelay inserted by thread 0 before the barrier.\n\nThe windows implementation is more or less blind, if someone can confirm \nthat it works, it would be nice.\n\n-- \nFabien.", "msg_date": "Sun, 17 May 2020 11:55:43 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "> On 17 May 2020, at 11:55, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> I have split the patch.\n> \n> * First patch reworks time measurements in pgbench.\n\n> * Second patch adds a barrier before starting the bench\n> \n> It applies on top of the previous one. The initial imbalance due to thread creation times is smoothed.\n\nThe usecs patch fails to apply to HEAD, can you please submit a rebased version\nof this patchset. 
Also, when doing so, can you please rename the patches such\nthat sort alphabetically in the order in which they are intended to be applied.\nThe CFBot patch tester will otherwise try to apply them out of order which\ncause errors.\n\ncheers ./daniel\n\n", "msg_date": "Fri, 3 Jul 2020 11:21:17 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": ">> * First patch reworks time measurements in pgbench.\n>> * Second patch adds a barrier before starting the bench\n>>\n>> It applies on top of the previous one. The initial imbalance due to \n>> thread creation times is smoothed.\n>\n> The usecs patch fails to apply to HEAD, can you please submit a rebased version\n> of this patchset. Also, when doing so, can you please rename the patches such\n> that sort alphabetically in the order in which they are intended to be applied.\n> The CFBot patch tester will otherwise try to apply them out of order which\n> cause errors.\n\nOk. Attached.\n\n-- \nFabien.", "msg_date": "Sat, 4 Jul 2020 08:34:25 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "Dear Fabien, Andres\n\nI think your idea is good, hence I put some comments as a reviewer.\nI focus on only the linux code because I'm not familiar with the Windows system. Sorry.\n\n[For patch A]\n\nPlease complete fixes for the documentation. 
At least the following sentence should be fixed:\n```\nThe last two lines report the number of transactions per second, figured with and without counting the time to start database sessions.\n```\n\n> -starting vacuum...end.\n\nI think any other options should be disabled in the example, therefore please leave this line.\n\n> + /* explicitly initialize the state machines */\n> + for (int i = 0; i < nstate; i++)\n> + {\n> + state[i].state = CSTATE_CHOOSE_SCRIPT;\n> + }\n\nI'm not sure but I think braces should be removed in our coding conventions.\n\nOther changes are being reviewed now.\n\n[For patch B]\n\n> + /* GO */\n> + pthread_barrier_wait(&barrier);\n\nThe current implementation is too simple. If nthreads >= 2 and connection fails in the one thread,\nthe other one will wait forever.\nSome special treatments are needed in the `done` code block and here.\n\n\n[others]\n\n> > (that is, it can be disabled)\n> \n> On reflection, I'm not sure I see a use case for not running the barrier \n> if it is available.\n\nIf the start point changes and there is no way to disable this feature,\nthe backward compatibility will be completely violated.\nIt means that tps cannot be compared to older versions under the same conditions,\nand It may hide performance-related issues.\nI think it's not good.\n\n\nBest regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n-----Original Message-----\nFrom: Fabien COELHO <coelho@cri.ensmp.fr> \nSent: Saturday, July 4, 2020 3:34 PM\nTo: Daniel Gustafsson <daniel@yesql.se>\nCc: Andres Freund <andres@anarazel.de>; pgsql-hackers@postgresql.org\nSubject: Re: pgbench: option delaying queries till connections establishment?\n\n\n>> * First patch reworks time measurements in pgbench.\n>> * Second patch adds a barrier before starting the bench\n>>\n>> It applies on top of the previous one. 
The initial imbalance due to \n>> thread creation times is smoothed.\n>\n> The usecs patch fails to apply to HEAD, can you please submit a rebased version\n> of this patchset. Also, when doing so, can you please rename the patches such\n> that sort alphabetically in the order in which they are intended to be applied.\n> The CFBot patch tester will otherwise try to apply them out of order which\n> cause errors.\n\nOk. Attached.\n\n-- \nFabien.\n\n\n", "msg_date": "Mon, 26 Oct 2020 08:31:32 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "Dear Fabien;\n\n> The current implementation is too simple. If nthreads >= 2 and connection fails in the one thread,\n> the other one will wait forever.\n\nI attached the very preliminary patch for solving the problem.\nEven if threads fail to connect, the others can go through the barrier.\nBut I think this implementation is not good...\n\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED", "msg_date": "Wed, 28 Oct 2020 09:28:35 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "Hello,\n\n> Please complete fixes for the documentation. At least the following sentence should be fixed:\n> ```\n> The last two lines report the number of transactions per second, figured with and without counting the time to start database sessions.\n> ```\n\nIndeed. 
I scanned the file but did not find other places that needed \nupdating.\n\n>> -starting vacuum...end.\n>\n> I think any other options should be disabled in the example, therefore please leave this line.\n\nYes.\n\n>> + for (int i = 0; i < nstate; i++)\n>> + {\n>> + state[i].state = CSTATE_CHOOSE_SCRIPT;\n>> + }\n>\n> I'm not sure but I think braces should be removed in our coding conventions.\n\nNot sure either. I'm not for having too many braces anyway, so I removed \nthem.\n\n>> + /* GO */\n>> + pthread_barrier_wait(&barrier);\n>\n> The current implementation is too simple. If nthreads >= 2 and connection fails in the one thread,\n> the other one will wait forever.\n> Some special treatments are needed in the `done` code block and here.\n\nIndeed. I took your next patch with an added explanation. I'm unclear \nwhether proceeding makes much sense though, that is some thread would run \nthe test and other would have aborted. Hmmm.\n\n>>> (that is, it can be disabled)\n>>\n>> On reflection, I'm not sure I see a use case for not running the barrier\n>> if it is available.\n>\n> If the start point changes and there is no way to disable this feature,\n> the backward compatibility will be completely violated.\n> It means that tps cannot be compared to older versions under the same conditions,\n> and It may hide performance-related issues.\n> I think it's not good.\n\nISTM that there is another patch in the queue which needs barriers to \ndelay some initialization so as to fix a corner case bug, in which case \nthe behavior would be mandatory. The current submission could add an \noption to skip the barrier synchronization, but I'm not enthusiastic to \nadd an option and remove it shortly later.\n\nMoreover, the \"compatibility\" is with nothing which does not make much \nsense. 
When testing with many threads and clients, the current \nimplementation makes threads start when they are ready, which means that \nyou can have threads issuing transactions while others are not yet \nconnected or not even started, so that the actually measured performance \nis quite awkward for a short bench. ISTM that allowing such a backward \ncompatible strange behavior does not serve pg users. If the user wants the \nold unreliable behavior, they can use an old pgbench, and obtain unreliable \nfigures.\n\nFor these two reasons, I'm inclined not to add an option to skip these \nbarriers, but this can be debated further.\n\nAttached 2 updated patches.\n\n-- \nFabien.", "msg_date": "Mon, 2 Nov 2020 19:59:11 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "RE: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "Dear Fabien, \n\n> Indeed. I scanned the file but did not find other places that needed \n> updating.\n\n> Yes.\n\n> Not sure either. I'm not for having too many braces anyway, so I removed \n> them.\n\nI checked your fixes and I think it's OK. \nFinally, please move the doc fixes to patch B in order to separate patches\ncompletely.\n\n> Indeed. I took your next patch with an added explanation. I'm unclear \n> whether proceeding makes much sense though, that is some thread would run \n> the test and other would have aborted. 
Hmmm.\n\nYour comment looks good, thanks.\nIn the previous version pgbench starts benchmarking even if some connections fail.\nAnd users can notice the connection failure by stderr output.\nHence the current specification may be enough.\nIf you agree, please remove the following lines:\n\n```\n+\t\t\t\t * It is unclear whether it is worth doing anything rather than\n+\t\t\t\t * coldly exiting with an error message.\n```\n\n> ISTM that there is another patch in the queue which needs barriers to \n> delay some initialization so as to fix a corner case bug, in which case \n> the behavior would be mandatory. The current submission could add an \n> option to skip the barrier synchronization, but I'm not enthousiastic to \n> add an option and remove it shortly later.\n\nCould you tell me which patch you mention? Basically I agree what you say,\nbut I want to check it.\n\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Fri, 6 Nov 2020 02:15:10 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "\nHello,\n\n>> Indeed. I took your next patch with an added explanation. I'm unclear\n>> whether proceeding makes much sense though, that is some thread would run\n>> the test and other would have aborted. Hmmm.\n>\n> Your comment looks good, thanks. In the previous version pgbench starts \n> benchmarking even if some connections fail. And users can notice the \n> connection failure by stderr output. 
Hence the current specification may \n> be enough.\n\nUsually I run many pgbench through scripts, so I'm probably not there to \ncheck a lone stderr failure at the beginning if performance figures are\nactually reported.\n\n> If you agree, please remove the following lines:\n>\n> ```\n> +\t\t\t\t * It is unclear whether it is worth doing anything rather than\n> +\t\t\t\t * coldly exiting with an error message.\n> ```\n\nI can remove the line, but I strongly believe that reporting performance \nfigures if some client connection failed thus the bench could not run as \nprescribed is a bad behavior. The good news is that it is probably quite \nunlikely. So I'd prefer to keep it and possibly submit a patch to change \nthe behavior.\n\n>> ISTM that there is another patch in the queue which needs barriers to\n>> delay some initialization so as to fix a corner case bug, in which case\n>> the behavior would be mandatory. The current submission could add an\n>> option to skip the barrier synchronization, but I'm not enthousiastic to\n>> add an option and remove it shortly later.\n>\n> Could you tell me which patch you mention? Basically I agree what you say,\n> but I want to check it.\n\nShould be this one: https://commitfest.postgresql.org/30/2624/,\n\n-- \nFabien.\n\n\n", "msg_date": "Sat, 7 Nov 2020 18:33:27 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "RE: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "Dear Fabien,\n\n> Usually I run many pgbench through scripts, so I'm probably not there to \n> check a lone stderr failure at the beginning if performance figures are\n> actually reported.\n\n> I can remove the line, but I strongly believe that reporting performance \n> figures if some client connection failed thus the bench could not run as \n> prescribed is a bad behavior. The good news is that it is probably quite \n> unlikely. 
So I'd prefer to keep it and possibly submit a patch to change \n> the behavior.\n\nI agree such a situation is very bad, and I understood you have a plan to \nsubmit patches for fix it. If so leaving lines as a TODO is OK.\n\n> Should be this one: https://commitfest.postgresql.org/30/2624/\n\nThis discussion is still on-going, but I can see that the starting time\nmay be delayed for looking up all pgbench-variables.\n(I think the status of this thread might be wrong. it should be\n'Needs review,' but now 'Waiting on Author.')\n\nThis patch is mostly good and can change a review status soon,\nhowever, I think it should wait that related patch.\nPlease discuss how to fix it with Tom, and this will commit soon.\n\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Wed, 11 Nov 2020 11:11:48 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "\nHello,\n\n>> I can remove the line, but I strongly believe that reporting performance\n>> figures if some client connection failed thus the bench could not run as\n>> prescribed is a bad behavior. The good news is that it is probably quite\n>> unlikely. So I'd prefer to keep it and possibly submit a patch to change\n>> the behavior.\n>\n> I agree such a situation is very bad, and I understood you have a plan to\n> submit patches for fix it. If so leaving lines as a TODO is OK.\n\nThanks.\n\n>> Should be this one: https://commitfest.postgresql.org/30/2624/\n>\n> This discussion is still on-going, but I can see that the starting time\n> may be delayed for looking up all pgbench-variables.\n\nYep, that's it.\n\n> (I think the status of this thread might be wrong. 
it should be\n> 'Needs review,' but now 'Waiting on Author.')\n\nI changed it to \"Needs review\".\n\n> This patch is mostly good and can change a review status soon,\n> however, I think it should wait that related patch.\n\nHmmm.\n\n> Please discuss how to fix it with Tom,\n\nI would not have the presumption to pressure Tom's agenda in any way!\n\n> and this will commit soon.\n\nand this will wait till its time comes. In the mean time, I think that you \nshould put the patch status as you see fit, independently of the other \npatch: it seems unlikely that they would be committed together, and I'll \nhave to merge the remaining one anyway.\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 11 Nov 2020 13:23:46 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "RE: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "Dear Fabien, \n\n> and this will wait till its time comes. In the mean time, I think that you \n> should put the patch status as you see fit, independently of the other \n> patch: it seems unlikely that they would be committed together, and I'll \n> have to merge the remaining one anyway.\n\nOK. 
I found the related thread[1], and I understood you will submit another patch\non the thread.\n\nPostgreSQL Patch Tester says all regression tests are passed, and\nI change the status to \"Ready for committer.\"\n\n[1]: https://commitfest.postgresql.org/31/2827/\n\nThank you for discussing with me.\n\nHayato Kuroda\nFUJITSU LIMITED\n\n-----Original Message-----\nFrom: Fabien COELHO <coelho@cri.ensmp.fr> \nSent: Wednesday, November 11, 2020 9:24 PM\nTo: Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com>\nCc: Andres Freund <andres@anarazel.de>; Daniel Gustafsson <daniel@yesql.se>; pgsql-hackers@postgresql.org\nSubject: RE: pgbench: option delaying queries till connections establishment?\n\n\nHello,\n\n>> I can remove the line, but I strongly believe that reporting performance\n>> figures if some client connection failed thus the bench could not run as\n>> prescribed is a bad behavior. The good news is that it is probably quite\n>> unlikely. So I'd prefer to keep it and possibly submit a patch to change\n>> the behavior.\n>\n> I agree such a situation is very bad, and I understood you have a plan to\n> submit patches for fix it. If so leaving lines as a TODO is OK.\n\nThanks.\n\n>> Should be this one: https://commitfest.postgresql.org/30/2624/\n>\n> This discussion is still on-going, but I can see that the starting time\n> may be delayed for looking up all pgbench-variables.\n\nYep, that's it.\n\n> (I think the status of this thread might be wrong. it should be\n> 'Needs review,' but now 'Waiting on Author.')\n\nI changed it to \"Needs review\".\n\n> This patch is mostly good and can change a review status soon,\n> however, I think it should wait that related patch.\n\nHmmm.\n\n> Please discuss how to fix it with Tom,\n\nI would not have the presumption to pressure Tom's agenda in any way!\n\n> and this will commit soon.\n\nand this will wait till its time comes. 
In the mean time, I think that you \nshould put the patch status as you see fit, independently of the other \npatch: it seems unlikely that they would be committed together, and I'll \nhave to merge the remaining one anyway.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 13 Nov 2020 05:44:56 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "Hello!\n\nOn 2020-11-13 08:44, kuroda.hayato@fujitsu.com wrote:\n> Dear Fabien,\n> \n>> and this will wait till its time comes. In the mean time, I think that \n>> you\n>> should put the patch status as you see fit, independently of the other\n>> patch: it seems unlikely that they would be committed together, and \n>> I'll\n>> have to merge the remaining one anyway.\n> \n> OK. I found the related thread[1], and I understood you will submit\n> another patch\n> on the thread.\n> \n> PostgreSQL Patch Tester says all regression tests are passed, and\n> I change the status to \"Ready for committer.\"\n> \n> [1]: https://commitfest.postgresql.org/31/2827/\n> \n> Thank you for discussing with me.\n> \n> Hayato Kuroda\n> FUJITSU LIMITED\n\n From the mentioned thread [2]:\n\n>>> While trying to test a patch that adds a synchronization barrier in \n>>> pgbench [1] on Windows,\n>> \n>> Thanks for trying that, I do not have a windows setup for testing, and\n>> the sync code I wrote for Windows is basically blind coding:-(\n> \n> FYI:\n> \n> 1) It looks like pgbench will no longer support Windows XP due to the\n> function DeleteSynchronizationBarrier. 
From\n> https://docs.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-deletesynchronizationbarrier\n> :\n> \n> Minimum supported client: Windows 8 [desktop apps only]\n> Minimum supported server: Windows Server 2012 [desktop apps only]\n> \n> On Windows Server 2008 R2 (MSVC 2013) the 6-th version of the patch\n> [1] has compiled without (new) warnings, but when running pgbench I\n> got the following error:\n> \n> The procedure entry point DeleteSynchronizationBarrier could not be\n> located in the dynamic link library KERNEL32.dll.\n\nIMO, it looks like either old Windows systems should not call new \nfunctions, or we should throw them a compilation error. (Update \nMIN_WINNT to 0x0602 = Windows 8 in src/include/port/win32.h?) In the \nsecond case it looks like the documentation should be updated too, see \ndoc/src/sgml/installation.sgml:\n\n<para>\n <productname>PostgreSQL</productname> can be expected to work on these \noperating\n systems: Linux (all recent distributions), Windows (XP and later),\n FreeBSD, OpenBSD, NetBSD, macOS, AIX, HP/UX, and Solaris.\n Other Unix-like systems may also work but are not currently\n being tested. In most cases, all CPU architectures supported by\n a given operating system will work. Look in\n <xref linkend=\"installation-platform-notes\"/> below to see if\n there is information\n specific to your operating system, particularly if using an older \nsystem.\n</para>\n\n<...>\n\n<para>\n The native Windows port requires a 32 or 64-bit version of Windows\n 2000 or later. Earlier operating systems do\n not have sufficient infrastructure (but Cygwin may be used on\n those). MinGW, the Unix-like build tools, and MSYS, a collection\n of Unix tools required to run shell scripts\n like <command>configure</command>, can be downloaded\n from <ulink url=\"http://www.mingw.org/\"></ulink>. 
Neither is\n required to run the resulting binaries; they are needed only for\n creating the binaries.\n</para>\n\n[2] \nhttps://www.postgresql.org/message-id/e5a09b790db21356376b6e73673aa07c%40postgrespro.ru\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sat, 14 Nov 2020 16:48:51 +0300", "msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "Hello Marina,\n\n>> 1) It looks like pgbench will no longer support Windows XP due to the\n>> function DeleteSynchronizationBarrier. From\n>> https://docs.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-deletesynchronizationbarrier\n>>\n>> Minimum supported client: Windows 8 [desktop apps only]\n>> Minimum supported server: Windows Server 2012 [desktop apps only]\n\nThanks for the test and precise analysis!\n\nSigh.\n\nI do not think that putting such version requirements is worth it just \nfor the sake of pgbench.\n\nIn the attached version, I just comment out the call and add an \nexplanation about why it is commented out. If pg overall version \nrequirements are changed on Windows, then it could be reinstated.\n\n-- \nFabien.", "msg_date": "Sat, 14 Nov 2020 16:53:17 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "Hi!\n\nOn Thu, Feb 27, 2020 at 9:01 PM Andres Freund <andres@anarazel.de> wrote:\n> I am trying to run a few benchmarks measuring the effects of patch to\n> make GetSnapshotData() faster in the face of larger numbers of\n> established connections.\n>\n> Before the patch connection establishment often is very slow due to\n> contention. The first few connections are fast, but after that it takes\n> increasingly long. 
The first few connections constantly hold\n> ProcArrayLock in shared mode, which then makes it hard for new\n> connections to acquire it exclusively (I'm addressing that to a\n> significant degree in the patch FWIW).\n\nHmm... Let's see the big picture. You've recently committed a\npatchset, which greatly improved the performance of GetSnapshotData().\nAnd you're making further improvements in this direction. But you're\ngetting trouble in measuring the effect, because Postgres is still\nstuck on ProcArrayLock. And in this thread you propose a workaround\nfor that implemented on the pgbench side. My very dumb idea is\nfollowing: should we finally give a chance to more fair lwlocks rather\nthan inventing workarounds?\n\nAs I remember, your major argument against more fair lwlocks was the\nidea that we should fix lwlocks use-cases rather than lwlock mechanism\nthemselves. But can we expect that we fix all the lwlocks use-case in\nany reasonable prospect? My guess is 'no'.\n\nLinks\n1. https://www.postgresql.org/message-id/CAPpHfdvJhO1qutziOp%3Ddy8TO8Xb4L38BxgKG4RPa1up1Lzh_UQ%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sat, 14 Nov 2020 20:07:38 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "Hello!\n\nOn 2020-11-14 20:07, Alexander Korotkov wrote:\n> Hmm... Let's see the big picture. You've recently committed a\n> patchset, which greatly improved the performance of GetSnapshotData().\n> And you're making further improvements in this direction. But you're\n> getting trouble in measuring the effect, because Postgres is still\n> stuck on ProcArrayLock. And in this thread you propose a workaround\n> for that implemented on the pgbench side. 
My very dumb idea is\n> following: should we finally give a chance to more fair lwlocks rather\n> than inventing workarounds?\n> \n> As I remember, your major argument against more fair lwlocks was the\n> idea that we should fix lwlocks use-cases rather than lwlock mechanism\n> themselves. But can we expect that we fix all the lwlocks use-case in\n> any reasonable prospect? My guess is 'no'.\n> \n> Links\n> 1.\n> https://www.postgresql.org/message-id/CAPpHfdvJhO1qutziOp%3Ddy8TO8Xb4L38BxgKG4RPa1up1Lzh_UQ%40mail.gmail.com\n\nSorry I'm not familiar with the internal architecture of snapshots, \nlocks etc. in postgres, but I wanted to ask - what exact kind of patch \nfor fair lwlocks do you want to offer to the community? I applied the \n6-th version of the patch for fair lwlocks from [1] to the old master \nbranch (see commit [2]), started many threads in pgbench (-M prepared -c \n1000 -j 500 -T 10 -P1 -S) and I did not receive stable first progress \nreports, which IIUC are one of the advantages of the discussed patch for \nthe pgbench (see [3])...\n\n[1] \nhttps://www.postgresql.org/message-id/CAPpHfduV3v3EG7K74-9htBZz_mpE993zGz-%3D2k5RNA3tqabUAA%40mail.gmail.com\n[2] \nhttps://github.com/postgres/postgres/commit/84d514887f9ca673ae688d00f8b544e70f1ab270\n[3] \nhttps://www.postgresql.org/message-id/20200227185129.hikscyenomnlrord%40alap3.anarazel.de\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 17 Nov 2020 00:09:34 +0300", "msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "Hi,\n\nOn 2020-11-14 20:07:38 +0300, Alexander Korotkov wrote:\n> Hmm... Let's see the big picture. You've recently committed a\n> patchset, which greatly improved the performance of GetSnapshotData().\n> And you're making further improvements in this direction. 
But you're\n> getting trouble in measuring the effect, because Postgres is still\n> stuck on ProcArrayLock.\n\nNo, the problem was that I couldn't measure the before/after behaviour\nreliably, because not all connections actually ever get established\n*before* the GetSnapshotData() scability patchset. Which made the\nnumbers pointless, because we'd often end up with e.g. 80 connections\ndoing work pre-patch, and 800 post-patch; which obviously measures very\ndifferent things.\n\nI think the issue really is that, independent of PG lock contention,\nit'll take a while to establish all connections, and that starting to\nbenchmark with only some connections established will create pretty\npointless numbers.\n\n\n> And in this thread you propose a workaround\n> for that implemented on the pgbench side. My very dumb idea is\n> following: should we finally give a chance to more fair lwlocks rather\n> than inventing workarounds?\n\nPerhaps - I just don't think it's related to this thread. And how you're\ngoing to address the overhead.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 16 Nov 2020 13:32:03 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "Hi,\n\nOn 2020-11-17 00:09:34 +0300, Marina Polyakova wrote:\n> Sorry I'm not familiar with the internal architecture of snapshots, locks\n> etc. in postgres, but I wanted to ask - what exact kind of patch for fair\n> lwlocks do you want to offer to the community? 
I applied the 6-th version of\n> the patch for fair lwlocks from [1] to the old master branch (see commit\n> [2]), started many threads in pgbench (-M prepared -c 1000 -j 500 -T 10 -P1\n> -S) and I did not receive stable first progress reports, which IIUC are one\n> of the advantages of the discussed patch for the pgbench (see [3])...\n\nThanks for running some benchmarks.\n\nI think it's quite unsurprising that you'd see skewed numbers initially,\neven with fairer locks. Just by virtue of pgbench starting threads and\neach thread immediately starting to perform work, you are bound to get\nodd pretty meaningless initial numbers. Even without contention, and\nwhen using fewer connections than there are CPUs. And especially when\nstarting a larger number of connections, because the main pgbench thread\nwill get fewer and fewer scheduler slices because of the threads running\nbenchmarks already.\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Mon, 16 Nov 2020 13:53:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "\n> I think the issue really is that, independent of PG lock contention,\n> it'll take a while to establish all connections, and that starting to\n> benchmark with only some connections established will create pretty\n> pointless numbers.\n\nYes. This is why I think that if we have some way to synchronize it should \nalways be used, i.e. not an option.\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 17 Nov 2020 06:58:35 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "On Sun, Nov 15, 2020 at 4:53 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> In the attached version, I just comment out the call and add an\n> explanation about why it is commented out. 
If pg overall version\n> requirements are changed on windows, then it could be reinstated.\n\nIt looks like macOS doesn't have pthread barriers (via cfbot 2021, now\nwith more operating systems):\n\npgbench.c:326:8: error: unknown type name 'pthread_barrier_t'\nstatic pthread_barrier_t barrier;\n^\npgbench.c:6128:2: error: implicit declaration of function\n'pthread_barrier_init' is invalid in C99\n[-Werror,-Wimplicit-function-declaration]\npthread_barrier_init(&barrier, NULL, nthreads);\n^\n\n\n", "msg_date": "Fri, 1 Jan 2021 08:10:55 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "\n> It looks like macOS doesn't have pthread barriers (via cfbot 2021, now\n> with more operating systems):\n\nIndeed:-(\n\nI'll look into that.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 1 Jan 2021 21:15:07 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "On Sat, Jan 2, 2021 at 9:15 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > It looks like macOS doesn't have pthread barriers (via cfbot 2021, now\n> > with more operating systems):\n>\n> Indeed:-(\n>\n> I'll look into that.\n\nJust for fun, the attached 0002 patch is a quick prototype of a\nreplacement for that stuff that seems to work OK on a Mac here. (I'm\nnot sure if the Windows part makes sense or works.)", "msg_date": "Sat, 2 Jan 2021 22:50:55 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" 
}, { "msg_contents": "\n>>> It looks like macOS doesn't have pthread barriers (via cfbot 2021, now\n>>> with more operating systems):\n>>\n>> Indeed:-(\n>>\n>> I'll look into that.\n>\n> Just for fun, the attached 0002 patch is a quick prototype of a\n> replacement for that stuff that seems to work OK on a Mac here. (I'm\n> not sure if the Windows part makes sense or works.)\n\nThanks! That will definitely help because I do not have a Mac. I'll do \nsome cleanup.\n\n-- \nFabien.\n\n\n", "msg_date": "Sat, 2 Jan 2021 21:49:01 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "On Sun, Jan 3, 2021 at 9:49 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > Just for fun, the attached 0002 patch is a quick prototype of a\n> > replacement for that stuff that seems to work OK on a Mac here. (I'm\n> > not sure if the Windows part makes sense or works.)\n>\n> Thanks! That will definitely help because I do not have a Mac. I'll do\n> some cleanup.\n\nI think the main things to clean up are:\n\n1. pthread_barrier_init() should check for errors from\npthread_cond_init() and pthread_mutex_init(), and return -1.\n2. pthread_barrier_destroy() should call pthread_cond_destroy() and\npthread_mutex_destroy().\n3 . Decide if it's sane for the Windows-based emulation to be in here\ntoo, or if it should stay in pgbench.c. Or alternatively, if we're\nemulating pthread stuff on Windows, why not also put the other pthread\nemulation stuff from pgbench.c into a \"ports\" file; that seems\npremature and overkill for your project. I dunno.\n4. 
cfbot shows that it's not building on Windows because\nHAVE_PTHREAD_BARRIER_WAIT is missing from Solution.pm.\n\nAs far as I know, it's OK and common practice to ignore the return\ncode from eg pthread_mutex_lock() and pthread_cond_wait(), with\nrationale along the lines that there'd have to be a programming error\nfor them to fail in simple cases.\n\nUnfortunately, cfbot can only tell us that it's building OK on a Mac,\nbut doesn't actually run the pgbench code to reach this stuff. It's\nnot running check-world on that platform yet for the following asinine\nreason:\n\nconnection to database failed: Unix-domain socket path\n\"/private/var/folders/3y/l0z1x3693dl_8n0qybp4dqwh0000gn/T/cirrus-ci-build/src/bin/pg_upgrade/.s.PGSQL.58080\"\nis too long (maximum 103 bytes)\n\n\n", "msg_date": "Sat, 9 Jan 2021 08:13:17 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "On Sat, Jan 9, 2021 at 8:13 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sun, Jan 3, 2021 at 9:49 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > > Just for fun, the attached 0002 patch is a quick prototype of a\n> > > replacement for that stuff that seems to work OK on a Mac here. (I'm\n> > > not sure if the Windows part makes sense or works.)\n> >\n> > Thanks! That will definitely help because I do not have a Mac. I'll do\n> > some cleanup.\n>\n> I think the main things to clean up are:\n\nI’m supposed to be on vacation this week, but someone left a shiny new\nArm Mac laptop near me, so here’s a cleaned up version.\n\n> 1. pthread_barrier_init() should check for errors from\n> pthread_cond_init() and pthread_mutex_init(), and return -1.\n\nDone.\n\n> 2. pthread_barrier_destroy() should call pthread_cond_destroy() and\n> pthread_mutex_destroy().\n\nDone.\n\n> 3 . 
Decide if it's sane for the Windows-based emulation to be in here\n> too, or if it should stay in pgbench.c. Or alternatively, if we're\n> emulating pthread stuff on Windows, why not also put the other pthread\n> emulation stuff from pgbench.c into a \"ports\" file; that seems\n> premature and overkill for your project. I dunno.\n\nI decided to solve only the macOS problem for now. So in this\nversion, the A and B patches are exactly as you had them in your v7,\nexcept that B includes “port/pg_pthread.h” instead of <pthread.h>.\n\nMaybe it’d make sense to move the Win32 pthread emulation stuff out of\npgbench.c into src/port too (the pre-existing stuff, and the new\nbarrier stuff you added), but that seems like a separate patch, one\nthat I’m not best placed to write, and it’s not clear to me that we’ll\nwant to be using pthread APIs as our main abstraction if/when thread\nusage increases in the PG source tree anyway. Other opinions welcome.\n\n> 4. cfbot shows that it's not building on Windows because\n> HAVE_PTHREAD_BARRIER_WAIT is missing from Solution.pm.\n\nFixed, I think.", "msg_date": "Mon, 18 Jan 2021 10:54:38 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "Hello Thomas,\n\n>> 3 . Decide if it's sane for the Windows-based emulation to be in here\n>> too, or if it should stay in pgbench.c. Or alternatively, if we're\n>> emulating pthread stuff on Windows, why not also put the other pthread\n>> emulation stuff from pgbench.c into a \"ports\" file; that seems\n>> premature and overkill for your project. I dunno.\n>\n> I decided to solve only the macOS problem for now. 
So in this\n> version, the A and B patches are exactly as you had them in your v7,\n> except that B includes “port/pg_pthread.h” instead of <pthread.h>.\n>\n> Maybe it’d make sense to move the Win32 pthread emulation stuff out of\n> pgbench.c into src/port too (the pre-existing stuff, and the new\n> barrier stuff you added), but that seems like a separate patch, one\n> that I’m not best placed to write, and it’s not clear to me that we’ll\n> want to be using pthread APIs as our main abstraction if/when thread\n> usage increases in the PG source tree anyway. Other opinions welcome.\n\nI think it would be much more consistent to move all the thread complement \nstuff there directly: Currently (v8) the windows implementation is in \npgbench and the MacOS implementation in port, which is quite messy.\n\nAttached is a patch set which does that. I cannot test it neither on \nWindows nor on MacOS. Path 1 & 2 are really independent.\n\n-- \nFabien.", "msg_date": "Sat, 30 Jan 2021 13:17:57 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "On Sun, Jan 31, 2021 at 1:18 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> I think it would be much more consistent to move all the thread complement\n> stuff there directly: Currently (v8) the windows implementation is in\n> pgbench and the MacOS implementation in port, which is quite messy.\n\nHmm. Well this is totally subjective, but here's how I see this after\nthinking about it a bit more: macOS does actually have POSIX threads,\nexcept for this tiny missing piece, so it's OK to write a toy\nimplementation that is reasonably conformant, and put it in there\nusing the usual AC_REPLACE_FUNCS machinery. It will go away when\nApple eventually adds a real one. 
Windows does not, and here we're\nwriting a very partial toy implementation that is far from conformant.\nI think that's OK for pgbench's purposes, for now, but I'd prefer to\nkeep it inside pgbench.c. I think at some point in the (hopefully not\ntoo distant) future, we'll start working on thread support for the\nbackend, and then I think we'll probably come up with our own\nabstraction over Windows and POSIX threads, rather than trying to use\nPOSIX API wrappers from Windows, so I don't really want this stuff in\nthe port library. Does this make some kind of sense?\n\n> Attached is a patch set which does that. I cannot test it neither on\n> Windows nor on MacOS. Path 1 & 2 are really independent.\n\nNo worries. For some reason I have a lot of computers; I'll try to\nget this (or rather a version with the Windows stuff moved back)\npassing on all of them soon, with the aim of making it committable.\n\n\n", "msg_date": "Wed, 3 Mar 2021 18:23:13 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "On Wed, Mar 3, 2021 at 6:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sun, Jan 31, 2021 at 1:18 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > I think it would be much more consistent to move all the thread complement\n> > stuff there directly: Currently (v8) the windows implementation is in\n> > pgbench and the MacOS implementation in port, which is quite messy.\n>\n> Hmm. Well this is totally subjective, but here's how I see this after\n> thinking about it a bit more: macOS does actually have POSIX threads,\n> except for this tiny missing piece, so it's OK to write a toy\n> implementation that is reasonably conformant, and put it in there\n> using the usual AC_REPLACE_FUNCS machinery. It will go away when\n> Apple eventually adds a real one. 
Windows does not, and here we're\n> writing a very partial toy implementation that is far from conformant.\n> I think that's OK for pgbench's purposes, for now, but I'd prefer to\n> keep it inside pgbench.c. I think at some point in the (hopefully not\n> too distant) future, we'll start working on thread support for the\n> backend, and then I think we'll probably come up with our own\n> abstraction over Windows and POSIX threads, rather than trying to use\n> POSIX API wrappers from Windows, so I don't really want this stuff in\n> the port library. Does this make some kind of sense?\n\nHere is an attempt to move things in that direction. It compiles\ntests OK on Unix including macOS with and without\n--disable-thread-safety, and it compiles on Windows (via CI) but I\ndon't yet know if it works there.\n\nv10-0001-Add-missing-pthread_barrier_t.patch\n\nSame as v8. Adds the missing pthread_barrier_t support for macOS\nonly. Based on the traditional configure symbol probe for now. It's\npossible that we'll later decide to use declarations to be more\nfuture-proof against Apple's API versioning strategy, but I don't have\none of those weird cross-version compiler setups to investigate that\n(see complaints from James Hilliard about the way we deal with\npwrite()).\n\nv10-0002-pgbench-Refactor-the-way-we-do-thread-portabilit.patch\n\nNew. Abandons the concept of doing a fake pthread API on Windows in\npgbench.c, in favour of a couple of tiny local macros that abstract\nover POSIX, Windows and threadless builds. This results in less code,\nand also fixes some minor problems I spotted in pre-existing code:\nit's not OK to use (pthread_t) 0 as a special value, or to compare\npthread_t values with ==, or to assume that pthread APIs set errno.\n\nv10-0003-pgbench-Improve-time-measurement-code.patch\n\nYour original A patch, rebased over the above. I haven't reviewed\nthis one. 
It lacks a commit message.\n\nv10-0004-pgbench-Synchronize-client-threads.patch\n\nAdds in the barriers.", "msg_date": "Thu, 4 Mar 2021 22:44:11 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "On Thu, Mar 4, 2021 at 10:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> v10-0002-pgbench-Refactor-the-way-we-do-thread-portabilit.patch\n\nHere's a better version of that part. I don't yet know if it actually\nworks on Windows...", "msg_date": "Fri, 5 Mar 2021 18:22:12 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "On Fri, Mar 5, 2021 at 6:22 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Mar 4, 2021 at 10:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > v10-0002-pgbench-Refactor-the-way-we-do-thread-portabilit.patch\n>\n> Here's a better version of that part. I don't yet know if it actually\n> works on Windows...\n\nDavid Rowley kindly tested this for me on Windows and told me how to\nfix one of the macros that had incorrect error checking on that OS.\nSo here's a new version. I'm planning to commit 0001 and 0002 soon,\nif there are no objections. 0003 needs some more review.", "msg_date": "Mon, 8 Mar 2021 15:18:42 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "On Mon, Mar 8, 2021 at 3:18 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> David Rowley kindly tested this for me on Windows and told me how to\n> fix one of the macros that had incorrect error checking on that OS.\n> So here's a new version. I'm planning to commit 0001 and 0002 soon,\n> if there are no objections. 
0003 needs some more review.\n\nI made a few mostly cosmetic changes, pgindented and pushed all these patches.\n\n\n", "msg_date": "Wed, 10 Mar 2021 17:54:41 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Mar 8, 2021 at 3:18 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> David Rowley kindly tested this for me on Windows and told me how to\n>> fix one of the macros that had incorrect error checking on that OS.\n>> So here's a new version. I'm planning to commit 0001 and 0002 soon,\n>> if there are no objections. 0003 needs some more review.\n\n> I made a few mostly cosmetic changes, pgindented and pushed all these patches.\n\nSo, gaur is not too happy with this:\n\nccache gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2 -I../../src/port -DFRONTEND -I../../src/include -D_USE_CTYPE_MACROS -D_XOPEN_SOURCE_EXTENDED -I/usr/local/libxml2-2.6.23/include/libxml2 -I/usr/local/ssl-1.0.1e/include -c -o strlcat.o strlcat.c\npthread_barrier_wait.c: In function 'pthread_barrier_init':\npthread_barrier_wait.c:24:2: error: incompatible type for argument 2 of 'pthread_cond_init'\n/usr/include/pthread.h:378:5: note: expected 'pthread_condattr_t' but argument is of type 'void *'\npthread_barrier_wait.c:26:2: error: incompatible type for argument 2 of 'pthread_mutex_init'\n/usr/include/pthread.h:354:5: note: expected 'pthread_mutexattr_t' but argument is of type 'void *'\nmake[2]: *** [pthread_barrier_wait.o] Error 1\n\nChecking the man pages, it seems that this ancient HPUX version\nis using some pre-POSIX API spec in which pthread_cond_init takes a\npthread_condattr_t rather than a pointer to 
pthread_condattr_t.\nSimilarly for pthread_mutex_init.\n\nWhile it's likely that we could work around that, it's my\nopinion that we shouldn't have to, because gaur is building with\n--disable-thread-safety. If that switch has any meaning at all,\nit should be that we don't try to use thread infrastructure.\nWas any thought given to being able to opt out of this patchset\nto support that configure option?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Mar 2021 21:46:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "On Sat, Mar 13, 2021 at 3:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Checking the man pages, it seems that this ancient HPUX version\n> is using some pre-POSIX API spec in which pthread_cond_init takes a\n> pthread_condattr_t rather than a pointer to pthread_condattr_t.\n> Similarly for pthread_mutex_init.\n\nWow.\n\n> While it's likely that we could work around that, it's my\n> opinion that we shouldn't have to, because gaur is building with\n> --disable-thread-safety. If that switch has any meaning at all,\n> it should be that we don't try to use thread infrastructure.\n> Was any thought given to being able to opt out of this patchset\n> to support that configure option?\n\nOops. The pgbench code was tested under --disable-thread-safety, but\nit didn't occur to me that the AC_REPLACE_FUNCS search for\npthread_barrier_wait should also be conditional on that; I will now go\nand try to figure out how to do that.\n\n\n", "msg_date": "Sat, 13 Mar 2021 16:00:29 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" 
}, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sat, Mar 13, 2021 at 3:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Was any thought given to being able to opt out of this patchset\n>> to support that configure option?\n\n> Oops. The pgbench code was tested under --disable-thread-safety, but\n> it didn't occur to me that the AC_REPLACE_FUNCS search for\n> pthread_barrier_wait should also be conditional on that; I will now go\n> and try to figure out how to do that.\n\nOK, cool. I don't think it's hard, just do\n\nif test \"$enable_thread_safety\" = yes; then\n AC_REPLACE_FUNCS(pthread_barrier_wait)\nfi\n\nProbably this check should be likewise conditional:\n\nAC_SEARCH_LIBS(pthread_barrier_wait, pthread)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Mar 2021 22:08:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "On Sat, Mar 13, 2021 at 4:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> OK, cool. I don't think it's hard, just do\n>\n> if test \"$enable_thread_safety\" = yes; then\n> AC_REPLACE_FUNCS(pthread_barrier_wait)\n> fi\n>\n> Probably this check should be likewise conditional:\n>\n> AC_SEARCH_LIBS(pthread_barrier_wait, pthread)\n\nThanks. This seems to work for me on a Mac. I confirmed with nm that\nwe don't define or reference any pthread_XXX symbols with\n--disable-thread-safety, and we do otherwise, and the pgbench tests\npass either way.", "msg_date": "Sat, 13 Mar 2021 16:37:11 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Thanks. This seems to work for me on a Mac. 
I confirmed with nm that\n> we don't define or reference any pthread_XXX symbols with\n> --disable-thread-safety, and we do otherwise, and the pgbench tests\n> pass either way.\n\nLooks reasonable by eyeball. If you'd push it, I can launch\na gaur run right away.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Mar 2021 22:58:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "On Sat, Mar 13, 2021 at 4:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Thanks. This seems to work for me on a Mac. I confirmed with nm that\n> > we don't define or reference any pthread_XXX symbols with\n> > --disable-thread-safety, and we do otherwise, and the pgbench tests\n> > pass either way.\n>\n> Looks reasonable by eyeball. If you'd push it, I can launch\n> a gaur run right away.\n\nDone.\n\n\n", "msg_date": "Sat, 13 Mar 2021 17:23:22 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sat, Mar 13, 2021 at 4:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Looks reasonable by eyeball. If you'd push it, I can launch\n>> a gaur run right away.\n\n> Done.\n\ngaur's gotten through \"make\" and \"make check\" cleanly. Unfortunately\nI expect it will fail at the pg_amcheck test before it reaches pgbench.\nBut for the moment it's reasonable to assume we're good here. Thanks!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Mar 2021 01:31:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" 
}, { "msg_contents": "Hello Thomas,\n\n>> David Rowley kindly tested this for me on Windows and told me how to\n>> fix one of the macros that had incorrect error checking on that OS.\n>> So here's a new version. I'm planning to commit 0001 and 0002 soon,\n>> if there are no objections. 0003 needs some more review.\n>\n> I made a few mostly cosmetic changes, pgindented and pushed all these patches.\n\nThanks a lot for pushing all that, and for fixing the issues raised by \nbuildfarm animals, some of them pretty unexpected and strange failures…\n\nI must say that I'm not a big fan of the macro-based all-in-capitals API \nfor threads because it exposes some platform-specific ugliness (e.g. \nTHREAD_FUNC_CC) and it does not look much like clean C code when used. I \nliked the previous partial pthread implementation better, even if it was \nnot the real thing, obviously.\n\nISTM that with the current approach threads are always used on Windows, \ni.e. pgbench does not comply with the \"ENABLE_THREAD_SAFETY\" configuration on \nthat platform. Not sure whether this is an issue that needs to be \naddressed, though.\n\n-- \nFabien.", "msg_date": "Sat, 13 Mar 2021 09:08:31 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" }, { "msg_contents": "On Sat, Mar 13, 2021 at 9:08 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> I must say that I'm not a big fan of the macro-based all-in-capitals API\n> for threads because it exposes some platform-specific ugliness (e.g.\n> THREAD_FUNC_CC) and it does not look much like clean C code when used. I\n> liked the previous partial pthread implementation better, even if it was\n> not the real thing, obviously.\n\nBut we were using macros already, to support --disable-thread-safety\nbuilds. I just changed them to upper case and dropped the 'p',\nbecause I didn't like to pretend to do POSIX threads, but do it so\nbadly. 
Perhaps we should drop --disable-thread-safety soon, and\nperhaps it is nearly time to create a good thread abstraction in clean\nC code, for use in the server and here? Then we won't need any ugly\nmacros.\n\n> ISTM that with the current approach threads are always used on Windows,\n> i.e. pgbench does not comply with the \"ENABLE_THREAD_SAFETY\" configuration on\n> that platform. Not sure whether this is an issue that needs to be\n> addressed, though.\n\nThe idea of that option, as I understand it, is that in ancient times\nthere were Unix systems with no threads (that's of course the reason\nPostgreSQL is the way it is). I don't think that was ever the case\nfor Windows NT, and we have no build option for that on Windows\nAFAICS.\n\n\n", "msg_date": "Sat, 13 Mar 2021 22:54:24 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" }, { "msg_contents": "Hello Thomas,\n\n>> I must say that I'm not a big fan of the macro-based all-in-capitals API\n>> for threads because it exposes some platform-specific ugliness (e.g.\n>> THREAD_FUNC_CC) and it does not look much like clean C code when used. I\n>> liked the previous partial pthread implementation better, even if it was\n>> not the real thing, obviously.\n>\n> But we were using macros already, to support --disable-thread-safety\n> builds.\n\nYep, but the look and feel was still C code.\n\n> I just changed them to upper case and dropped the 'p', because I didn't \n> like to pretend to do POSIX threads, but do it so badly.\n\nHmmm. From the source code point of view it was just like actually using \nPOSIX threads, even if the underlying machinery was not quite that on some \nsystems. I value looking at \"beautiful\" and \"standard\" code if possible, \neven if there is some cheating involved, compared to exposing macros. 
I \nmade some effort to remove the pretty ugly and inefficient INSTR_TIME \nmacros from pgbench, replacing them with straightforward arithmetic and inlined \nfunctions. Now some other macros just crept back in :-) Anyway, this is \njust \"les goûts et les couleurs\" (just a matter of taste), as we say here.\n\n> Perhaps we should drop --disable-thread-safety soon, and perhaps it is \n> nearly time to create a good thread abstraction in clean C code, for use \n> in the server and here? Then we won't need any ugly macros.\n\n+1.\n\n>> ISTM that with the current approach threads are always used on Windows,\n>> i.e. pgbench does not comply with the \"ENABLE_THREAD_SAFETY\" configuration on\n>> that platform. Not sure whether this is an issue that needs to be\n>> addressed, though.\n>\n> The idea of that option, as I understand it, is that in ancient times\n> there were Unix systems with no threads (that's of course the reason\n> PostgreSQL is the way it is). I don't think that was ever the case\n> for Windows NT, and we have no build option for that on Windows\n> AFAICS.\n\nOk, fine with me.\n\n-- \nFabien.", "msg_date": "Sat, 13 Mar 2021 12:09:37 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" 
The first few connections constantly hold\n> ProcArrayLock in shared mode, which then makes it hard for new\n> connections to acquire it exclusively (I'm addressing that to a\n> significant degree in the patch FWIW).\n>\n> But for a fair comparison of the runtime effects I'd like to only\n> compare the throughput for when connections are actually usable,\n> otherwise I end up benchmarking few vs many connections, which is not\n> useful. And because I'd like to run the numbers for a lot of different\n> numbers of connections etc, I can't just make each run several hour\n> longs to make the initial minutes not matter much.\n>\n> Therefore I'd like to make pgbench wait till it has established all\n> connections, before they run queries.\n>\n> Does anybody else see this as being useful?\n>\n> If so, should this be done unconditionally? A new option? Included in an\n> existing one somehow?\n>\n> Greetings,\n\nAndres Freund\n>\n\nI've recently run into something I am having difficulty understanding.\n\nI am running pgbench with the following\npgbench -h localhost -c 100 -j 100 -t 2 -S -s 1000 pgbench -U pgbench\n--protocol=simple\n\nWithout pgbouncer I get around 5k TPS\nwith pgbouncer I get around 15k TPS\n\nLooking at the code connection initiation time should not be part of the\ncalculation so I' puzzled why pgbouncer is making such a dramatic\ndifference ?\n\nDave\n\nDave Cramerwww.postgres.rocksOn Tue, 16 May 2023 at 07:27, Andres Freund <andres@anarazel.de> wrote:Hi,\n\nI am trying to run a few benchmarks measuring the effects of patch to\nmake GetSnapshotData() faster in the face of larger numbers of\nestablished connections.\n\nBefore the patch connection establishment often is very slow due to\ncontention. The first few connections are fast, but after that it takes\nincreasingly long. 
The first few connections constantly hold\n> ProcArrayLock in shared mode, which then makes it hard for new\n> connections to acquire it exclusively (I'm addressing that to a\n> significant degree in the patch FWIW).\n>\n> But for a fair comparison of the runtime effects I'd like to only\n> compare the throughput for when connections are actually usable,\n> otherwise I end up benchmarking few vs many connections, which is not\n> useful. And because I'd like to run the numbers for a lot of different\n> numbers of connections etc, I can't just make each run several hour\n> longs to make the initial minutes not matter much.\n>\n> Therefore I'd like to make pgbench wait till it has established all\n> connections, before they run queries.\n>\n> Does anybody else see this as being useful?\n>\n> If so, should this be done unconditionally? A new option? Included in an\n> existing one somehow?\n>\n> Greetings,\n>\n> Andres Freund\n\nI've recently run into something I am having difficulty understanding.\n\nI am running pgbench with the following\npgbench -h localhost -c 100 -j 100 -t 2 -S -s 1000 pgbench -U pgbench\n--protocol=simple\n\nWithout pgbouncer I get around 5k TPS\nwith pgbouncer I get around 15k TPS\n\nLooking at the code, connection initiation time should not be part of the\ncalculation, so I'm puzzled why pgbouncer is making such a dramatic\ndifference?\n\nDave", "msg_date": "Tue, 16 May 2023 08:54:43 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" 
}, { "msg_contents": ">\n> I've recently run into something I am having difficulty understanding.\n>\n> I am running pgbench with the following\n> pgbench -h localhost -c 100 -j 100 -t 2 -S -s 1000 pgbench -U pgbench\n> --protocol=simple\n>\n> Without pgbouncer I get around 5k TPS\n> with pgbouncer I get around 15k TPS\n>\n> Looking at the code, connection initiation time should not be part of the\n> calculation, so I'm puzzled why pgbouncer is making such a dramatic\n> difference?\n>\n> Dave\n>\n\nTurns out that for this specific test, pg is faster with a pooler.\n\nDave Cramer, [May 16, 2023 at 9:49:24 AM]:\n\nturns out having a connection pool helps. First run is without a pool,\nsecond with\n\n\npgbench=# select mean_exec_time, stddev_exec_time, calls, total_exec_time,\nmin_exec_time, max_exec_time from pg_stat_statements where\nqueryid=-531095336438083412;\n\n mean_exec_time | stddev_exec_time | calls | total_exec_time |\n min_exec_time\n | max_exec_time\n\n--------------------+--------------------+-------+-------------------+----------------------+---------------\n\n 0.4672699999999998 | 2.2758508661446535 | 200 | 93.45399999999997 |\n0.046616000000000005 | 17.434766\n\n(1 row)\n\n\npgbench=# select pg_stat_statements_reset();\n\n pg_stat_statements_reset\n\n--------------------------\n\n\n(1 row)\n\n\npgbench=# select mean_exec_time, stddev_exec_time, calls, total_exec_time,\nmin_exec_time, max_exec_time from pg_stat_statements where\nqueryid=-531095336438083412;\n\n mean_exec_time | stddev_exec_time | calls | total_exec_time |\nmin_exec_time | max_exec_time\n\n---------------------+----------------------+-------+--------------------+---------------+---------------\n\n 0.06640186499999999 | 0.021800404695481574 | 200 | 13.280373000000004 |\n 0.034006 | 0.226696\n(1 row)", "msg_date": "Tue, 16 May 2023 10:25:29 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections establishment?" 
}, { "msg_contents": "\nHello Dave,\n\n>> I am running pgbench with the following\n>> pgbench -h localhost -c 100 -j 100 -t 2 -S -s 1000 pgbench -U pgbench\n>> --protocol=simple\n>>\n>> Without pgbouncer I get around 5k TPS\n>> with pgbouncer I get around 15k TPS\n>>\n>> Looking at the code, connection initiation time should not be part of the\n>> calculation, so I'm puzzled why pgbouncer is making such a dramatic\n>> difference?\n>\n> Turns out that for this specific test, pg is faster with a pooler.\n\nThis does not tell \"why\".\n\nDoes the pooler prepare statements, whereas \"simple\" does not?\n\n-- \nFabien.\n\n\n", "msg_date": "Mon, 3 Jul 2023 09:24:17 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: option delaying queries till connections\n establishment?" } ]
[ { "msg_contents": "While looking at a recent complaint about bad planning, I was\nreminded that jsonb's @> and related operators use \"contsel\"\nas their selectivity estimator. This is really bad, because\n(a) contsel is only a stub, yielding a fixed default estimate,\nand (b) that default is 0.001, meaning we estimate these operators\nas five times more selective than equality, which is surely pretty\nsilly.\n\nThere's a good model for improving this in ltree's ltreeparentsel():\nfor any \"var OP constant\" query, we can try applying the operator\nto all of the column's MCV and histogram values, taking the latter\nas being a random sample of the non-MCV values. That code is\nactually 100% generic except for the question of exactly what\ndefault selectivity ought to be plugged in when we don't have stats.\n\nHence, the attached draft patch moves that logic into a generic\nfunction in selfuncs.c, and then invents \"matchsel\" and \"matchjoinsel\"\ngeneric estimators that have a default estimate of twice DEFAULT_EQ_SEL.\n(I'm not especially wedded to that number, but it seemed like a\nreasonable starting point.)\n\nThere were a couple of other operators that seemed to be inappropriately\nusing contsel, so I changed all of these to use matchsel:\n\n @>(tsquery,tsquery) | tsq_mcontains\n <@(tsquery,tsquery) | tsq_mcontained\n @@(text,text) | ts_match_tt\n @@(text,tsquery) | ts_match_tq\n -|-(anyrange,anyrange) | range_adjacent\n @>(jsonb,jsonb) | jsonb_contains\n ?(jsonb,text) | jsonb_exists\n ?|(jsonb,text[]) | jsonb_exists_any\n ?&(jsonb,text[]) | jsonb_exists_all\n <@(jsonb,jsonb) | jsonb_contained\n @?(jsonb,jsonpath) | jsonb_path_exists_opr\n @@(jsonb,jsonpath) | jsonb_path_match_opr\n\nNote: you might think that we should just shove this generic logic\ninto contsel itself, and maybe areasel and patternsel while at it.\nHowever, that would be pretty useless for these functions' intended\nusage with the geometric operators, because we collect neither MCV\nnor 
histogram stats for the geometric data types, making the extra\ncomplexity worthless. Pending somebody putting some effort into\nestimation for the geometric data types, I think we should just get\nout of the business of having non-geometric types relying on these\nestimators.\n\nThis patch is not complete, because I didn't look at changing\nthe contrib modules, and grep says at least some of them are using\ncontsel for non-geometric data types. But I thought I'd put it up\nfor discussion at this stage.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 27 Feb 2020 14:51:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Less-silly selectivity for JSONB matching operators" }, { "msg_contents": "I wrote:\n> This patch is not complete, because I didn't look at changing\n> the contrib modules, and grep says at least some of them are using\n> contsel for non-geometric data types. But I thought I'd put it up\n> for discussion at this stage.\n\nHearing nothing, I went ahead and hacked on the contrib code.\nThe attached 0002 updates hstore, ltree, and pg_trgm to get them\nout of using contsel/contjoinsel for anything. (0001 is the same\npatch I posted before.)\n\nIn ltree, I noted that < <= >= > were using contsel even though\nthose are part of a btree opclass, meaning they could perfectly\nwell use scalarltsel and friends. So now they do. Everything\nelse now uses matchsel/matchjoinsel, leaving ltreeparentsel as\nan unused backward-compatibility feature. 
I didn't think that\nthe default selectivity in ltreeparentsel was particularly sane,\nso having those operators use their own selectivity logic\ninstead of using matchsel like everything else seemed pointless\n(and certainly pairing a custom ltreeparentsel with contjoinsel\nisn't something to encourage).\n\nIn pg_trgm, the change of default selectivity estimate causes one\nplan to change, but I think that's fine; looking at the data hidden\nby COSTS OFF shows the new estimate is closer to reality anyway.\n(That test is meant to exercise some gist consistent-function logic,\nwhich it still does, so no worries there.)\n\nThe cube and seg extensions still make significant use of contsel and\nthe other geometric estimator stubs. Although we could in principle\nchange those operators to use matchsel, I'm hesitant to do so without\ncloser analysis. The sort orderings imposed by their default btree\nopclasses correlate strongly with cube/seg size, which is related to\noverlap/containment outcomes, so I'm not sure that the histogram\nentries would provide a plausibly random sample for this purpose.\nSo those modules are not touched here.\n\nThere are a few other random uses of geometric join estimators\npaired with non-geometric restriction estimators, including\nthese in the core code:\n\n @>(anyrange,anyelement) | range_contains_elem | rangesel | contjoinsel\n @>(anyrange,anyrange) | range_contains | rangesel | contjoinsel\n <@(anyelement,anyrange) | elem_contained_by_range | rangesel | contjoinsel\n <@(anyrange,anyrange) | range_contained_by | rangesel | contjoinsel\n &&(anyrange,anyrange) | range_overlaps | rangesel | areajoinsel\n\nplus the @@ and ~~ operators in intarray. 
While this is ugly,\nit's probably not worth changing until somebody creates non-stub\njoin selectivity code that will work for these cases.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 28 Feb 2020 17:09:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Less-silly selectivity for JSONB matching operators" }, { "msg_contents": "Hi Tom,\n\nThe patches look entirely reasonable to me.\nThe second one needs to be rebased.\n\nI like the idea of stubbing matchjoinsel for now,\nas well as being careful with operators that may correlate with sort \norderings.\n\nThe only little thing I can think of is hardcoding it as 2 * DEFAULT_EQ_SEL.\nWhile I don't have any arguments against the value itself I think it \nshould be configurable independently.\nSadly DEFAULT_MATCH_SEL name is already taken for text patterns.\nNot sure if it's a reason to rename all the stuff.\n\nBest, Alex\n\n\n", "msg_date": "Tue, 31 Mar 2020 16:55:13 +0100", "msg_from": "Alexey Bashtanov <bashtanov@imap.cc>", "msg_from_op": false, "msg_subject": "Re: Less-silly selectivity for JSONB matching operators" }, { "msg_contents": "Quickly tested like this:\n\ncreate table t(a jsonb);\ninsert into t select jsonb_object( array[(random() * 10)::int::text], \n'{\" \"}') from generate_series(1, 100000);\ninsert into t select jsonb_object( array[(random() * 10)::int::text], \narray[(random() * 1000)::int::text]) from generate_series(1, 100000);\nexplain analyze select * from t where a ? '1';\nanalyze t;\nexplain analyze select * from t where a ? 
'1';\n\nBest, Alex\n\n\n", "msg_date": "Tue, 31 Mar 2020 17:08:26 +0100", "msg_from": "Alexey Bashtanov <bashtanov@imap.cc>", "msg_from_op": false, "msg_subject": "Re: Less-silly selectivity for JSONB matching operators" }, { "msg_contents": "Alexey Bashtanov <bashtanov@imap.cc> writes:\n> The only little thing I can think of is hardcoding it as 2 * DEFAULT_EQ_SEL.\n> While I don't have any arguments against the value itself I think it \n> should be configurable independently.\n> Sadly DEFAULT_MATCH_SEL name is already taken for text patterns.\n> Not sure if it's a reason to rename all the stuff.\n\nYeah, I was going to invent a symbol till I noticed that DEFAULT_MATCH_SEL\nwas already taken :-(.\n\nThere are only about half a dozen uses of that in-core, so maybe we could\nget away with renaming that one, but on the whole I'd rather leave it\nalone in case some extension is using it. So that leaves us with needing\nto find a better name for this new one. Any ideas?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Mar 2020 12:20:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Less-silly selectivity for JSONB matching operators" }, { "msg_contents": "\n> So that leaves us with needing\n> to find a better name for this new one. Any ideas?\nI'm thinking of something wide like\nopersel, operjoinsel, DEFAULT_OPER_SEL\nor maybe even\ngenericsel, genericjoinsel, DEFAULT_GENERIC_SEL\n\nBest, Alex\n\n\n", "msg_date": "Tue, 31 Mar 2020 17:26:14 +0100", "msg_from": "Alexey Bashtanov <bashtanov@imap.cc>", "msg_from_op": false, "msg_subject": "Re: Less-silly selectivity for JSONB matching operators" }, { "msg_contents": "Alexey Bashtanov <bashtanov@imap.cc> writes:\n>> So that leaves us with needing\n>> to find a better name for this new one. 
Any ideas?\n\n> I'm thinking of something wide like\n> opersel, operjoinsel, DEFAULT_OPER_SEL\n> or maybe even\n> genericsel, genericjoinsel, DEFAULT_GENERIC_SEL\n\nSeems a little *too* generic :-(\n\nI was wondering about DEFAULT_MATCHING_SEL. The difference in purpose\nfrom DEFAULT_MATCH_SEL wouldn't be too obvious, but then it probably\nwouldn't be anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Mar 2020 12:29:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Less-silly selectivity for JSONB matching operators" }, { "msg_contents": "\n> I was wondering about DEFAULT_MATCHING_SEL. The difference in purpose\n> from DEFAULT_MATCH_SEL wouldn't be too obvious, but then it probably\n> wouldn't be anyway.\nFine with me, especially if both new functions are renamed accordingly.\n\nBest, Alex\n\n\n", "msg_date": "Tue, 31 Mar 2020 17:34:01 +0100", "msg_from": "Alexey Bashtanov <bashtanov@imap.cc>", "msg_from_op": false, "msg_subject": "Re: Less-silly selectivity for JSONB matching operators" }, { "msg_contents": "Alexey Bashtanov <bashtanov@imap.cc> writes:\n>> I was wondering about DEFAULT_MATCHING_SEL. The difference in purpose\n>> from DEFAULT_MATCH_SEL wouldn't be too obvious, but then it probably\n>> wouldn't be anyway.\n\n> Fine with me, especially if both new functions are renamed accordingly.\n\nYup, that would make sense, will do it like that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Mar 2020 12:44:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Less-silly selectivity for JSONB matching operators" }, { "msg_contents": "Renamed \"matchsel\" to \"matchingsel\" etc, added DEFAULT_MATCHING_SEL,\nrebased over commit 911e70207. Since that commit already created\nnew versions of the relevant contrib modules, I think we can just\nredefine what those versions contain, rather than making yet-newer\nversions. 
(Of course, that assumes we're going to include this in\nv13.)\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 31 Mar 2020 13:53:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Less-silly selectivity for JSONB matching operators" }, { "msg_contents": "On 31/03/2020 18:53, Tom Lane wrote:\n> Renamed \"matchsel\" to \"matchingsel\" etc, added DEFAULT_MATCHING_SEL,\n> rebased over commit 911e70207. Since that commit already created\n> new versions of the relevant contrib modules, I think we can just\n> redefine what those versions contain, rather than making yet-newer\n> versions. (Of course, that assumes we're going to include this in\n> v13.)\n\nLooks good to me.\n\nBest, Alex\n\n\n", "msg_date": "Wed, 1 Apr 2020 00:24:08 +0100", "msg_from": "Alexey Bashtanov <bashtanov@imap.cc>", "msg_from_op": false, "msg_subject": "Re: Less-silly selectivity for JSONB matching operators" }, { "msg_contents": "Alexey Bashtanov <bashtanov@imap.cc> writes:\n> On 31/03/2020 18:53, Tom Lane wrote:\n>> Renamed \"matchsel\" to \"matchingsel\" etc, added DEFAULT_MATCHING_SEL,\n>> rebased over commit 911e70207. Since that commit already created\n>> new versions of the relevant contrib modules, I think we can just\n>> redefine what those versions contain, rather than making yet-newer\n>> versions. (Of course, that assumes we're going to include this in\n>> v13.)\n\n> Looks good to me.\n\nPushed, thanks for reviewing!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Apr 2020 10:33:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Less-silly selectivity for JSONB matching operators" } ]
[ { "msg_contents": "\"ALTER TEXT SEARCH DICTIONARY foobar\" can be followed by an open\nparenthesis, but that is not offered in tab completion. That is useful,\nbecause otherwise I have to look up the docs to see if I need a SET or\nOPTION(S) or WITH or something before it, just to discover I don't.\n\nThe attached one-line patch adds \"(\".\n\nWe can't go beyond that, as available options for each dictionary are not\nknown in advance.\n\nCheers,\n\nJeff", "msg_date": "Thu, 27 Feb 2020 15:27:21 -0500", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": true, "msg_subject": "ALTER TEXT SEARCH DICTIONARY tab completion" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nIt looks good and does what it says on the tin.\r\n\r\nOne minor nitpick I feel I should add is that for completeness and\r\nbalance the equivalent `CREATE TEXT SEARCH DICTIONARY` should \r\nget the same treatment.\r\n\r\nMaybe something along the lines of:\r\n- else if (Matches(\"CREATE\", \"TEXT\", \"SEARCH\", \"CONFIGURATION\", MatchAny))\r\n+ else if (Matches(\"CREATE\", \"TEXT\", \"SEARCH\", \"DICTIONARY|CONFIGURATION\", MatchAny))", "msg_date": "Wed, 04 Mar 2020 15:02:58 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@pm.me>", "msg_from_op": false, "msg_subject": "Re: ALTER TEXT SEARCH DICTIONARY tab completion" }, { "msg_contents": "Georgios Kokolatos <gkokolatos@pm.me> writes:\n> One minor nitpick I feel I should add is that for completeness and\n> balance the equivalent `CREATE TEXT SEARCH DICTIONARY` should\n> get the same treatment.\n\n> Maybe something along the lines of:\n> - else if (Matches(\"CREATE\", \"TEXT\", \"SEARCH\", \"CONFIGURATION\", MatchAny))\n> + else if (Matches(\"CREATE\", \"TEXT\", \"SEARCH\", \"DICTIONARY|CONFIGURATION\", MatchAny))\n\nAgreed; actually all four CREATE TEXT SEARCH 
commands could do that.\nI pushed it as attached.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 07 Mar 2020 17:00:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ALTER TEXT SEARCH DICTIONARY tab completion" } ]
[ { "msg_contents": "I've seen a few requests on how to make FTS search on the absolute value of\nintegers. This question is usually driven by the fact that the text search\nparser interprets a separating hyphen (\"partnumber-987\") as a minus sign.\n\nThere is currently no good answer for this that doesn't involve C coding.\nI think this feature has a natural and trivial home in the dict_int\nextension, so attached is a patch that does that.\n\nThere are no changes to the extension creation scripts, so there is no need\nto bump the version. And I think the convention is that we don't bump the\nversion just to signify a change which implements a new feature when that\ndoesn't change the creation scripts.\n\nCheers,\n\nJeff", "msg_date": "Thu, 27 Feb 2020 15:49:35 -0500", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": true, "msg_subject": "Add absolute value to dict_int" }, { "msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> I've seen a few requests on how to make FTS search on the absolute value of\n> integers. This question is usually driven by the fact that the text search\n> parser interprets a separating hyphen (\"partnumber-987\") as a minus sign.\n\n> There is currently no good answer for this that doesn't involve C coding.\n> I think this feature has a natural and trivial home in the dict_int\n> extension, so attached is a patch that does that.\n\nSeems reasonable, so pushed with minor cleanup (I noticed it was\noff-by-one for the maxlen check, which was harmless unless you had\nrejectlong enabled as well). I debated whether I liked the \"absval\"\nparameter name, which seemed a bit too abbreviative; but it's more\nor less in line with the existing parameter names, so I left it alone.\n\n> There are no changes to the extension creation scripts, so there is no need\n> to bump the version. 
And I think the convention is that we don't bump the\n> version just to signify a change which implements a new feature when that\n> doesn't change the creation scripts.\n\nRight, there's no need to update the creation script.\n\n\nI noticed one odd thing which is not the fault of this patch, but\nseems to need cleaned up:\n\nregression=# ALTER TEXT SEARCH DICTIONARY intdict (absval = true);\nALTER TEXT SEARCH DICTIONARY\nregression=# select ts_lexize('intdict', '-123456');\n ts_lexize \n-----------\n {123456}\n(1 row)\n\nregression=# ALTER TEXT SEARCH DICTIONARY intdict (absval = 1);\nALTER TEXT SEARCH DICTIONARY\nregression=# select ts_lexize('intdict', '-123456');\nERROR: absval requires a Boolean value\n\nWhy did ALTER accept that, if it wasn't valid? It's not like\nthere's no error checking at all:\n\nregression=# ALTER TEXT SEARCH DICTIONARY intdict (absval = foo);\nERROR: absval requires a Boolean value\n\nUpon digging into that, the reason is that defGetBoolean accepts\na T_Integer Value with value 1, but it doesn't accept a T_String\nwith value \"1\". And apparently we're satisfied to smash dictionary\nparameters to strings before storing them.\n\nThere are at least three things we could do here:\n\n1. Insist that defGetBoolean and its siblings should accept the\nstring equivalent of any non-string value they accept. This\nwould change the behavior of a whole lot of utility commands,\nnot only the text search commands, and I'm not exactly convinced\nit's a good idea. Seems like it's losing error detection\ncapability.\n\n2. Improve the catalog representation of text search parameters\nso that the original Value node can be faithfully reconstructed.\nI'd be for this, except it seems like a lot of work for a rather\nminor benefit.\n\n3. Rearrange text search parameter validation so we smash parameters\nto strings *before* we validate them, ensuring we won't take any\nsettings that will be rejected later.\n\nI'm leaning to #3 as being the most practical fix. 
Thoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 08 Mar 2020 18:48:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add absolute value to dict_int" }, { "msg_contents": "I wrote:\n> There are at least three things we could do here:\n> 1. Insist that defGetBoolean and its siblings should accept the\n> string equivalent of any non-string value they accept. This\n> would change the behavior of a whole lot of utility commands,\n> not only the text search commands, and I'm not exactly convinced\n> it's a good idea. Seems like it's losing error detection\n> capability.\n> 2. Improve the catalog representation of text search parameters\n> so that the original Value node can be faithfully reconstructed.\n> I'd be for this, except it seems like a lot of work for a rather\n> minor benefit.\n> 3. Rearrange text search parameter validation so we smash parameters\n> to strings *before* we validate them, ensuring we won't take any\n> settings that will be rejected later.\n\nI still don't much like #1, but after looking closer, #2 is not as\nimpractical as I thought. The catalog representation doesn't need\nany redefinition really, because per the existing comments,\n\n * For the convenience of pg_dump, the output is formatted exactly as it\n * would need to appear in CREATE TEXT SEARCH DICTIONARY to reproduce the\n * same options.\n\nSo all we really need to do is upgrade [de]serialize_deflist to be smarter\nabout int and float nodes. This is still a bit invasive because somebody\ndecided to make deserialize_deflist serve two masters, and I don't feel\nlike working through whether the prsheadline code would cope nicely with\nnon-string output nodes. So the first patch attached adds a flag argument\nto deserialize_deflist to tell it whether to force all the values to\nstrings.\n\nAlternatively, we could do #3, as in the second patch below. 
This\nseems clearly Less Good, but it's a smaller/safer patch, and it's\nat least potentially back-patchable, whereas changing the signature\nof deserialize_deflist in stable branches would risk trouble.\n\n(I didn't bother with regression test additions yet, but some would\nbe appropriate for either patch.)\n\nGiven the lack of field complaints, I'm not that excited about\nback-patching anything for this. So my inclination is to go with #2\n(first patch) and fix it in HEAD only.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 09 Mar 2020 19:49:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add absolute value to dict_int" }, { "msg_contents": "I wrote:\n> So all we really need to do is upgrade [de]serialize_deflist to be smarter\n> about int and float nodes. This is still a bit invasive because somebody\n> decided to make deserialize_deflist serve two masters, and I don't feel\n> like working through whether the prsheadline code would cope nicely with\n> non-string output nodes. So the first patch attached adds a flag argument\n> to deserialize_deflist to tell it whether to force all the values to\n> strings.\n\nOn closer inspection, it doesn't seem that changing the behavior for\nprsheadline will make any difference. The only extant code that\nreads that result is prsd_headline which always uses defGetString,\nand probably any third-party text search parsers would too.\nSo I've pushed this without the extra flag argument.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Mar 2020 12:32:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add absolute value to dict_int" } ]
[ { "msg_contents": "Attached is a patch that exports a new function from logtape.c:\n\n extern LogicalTapeSet *LogicalTapeSetExtend(\n LogicalTapeSet *lts, int nAdditional);\n\nThe purpose is to allow adding new tapes dynamically for the upcoming\nHash Aggregation work[0]. HashAgg doesn't know in advance how many\ntapes it will need, though only a limited number are actually active at\na time.\n\nThis was proposed and originally written by Adam Lee[1] (extract only\nthe changes to logtape.c/h from his posted patch). Strangely, I'm\nseeing ~5% regression with his version when doing:\n\n -- t20m_1_int4 has 20 million random integers\n select * from t20m_1_int4 order by i offset 100000000;\n\nWhich seems to be due to using a pointer rather than a flexible array\nmember (I'm using gcc[2]). My version (attached) still uses a flexible\narray member, which doesn't see the regression; but I repalloc the\nwhole struct so the caller needs to save the new pointer to the tape\nset.\n\nThat doesn't entirely make sense to me, and I'm wondering if someone\nelse can repro that result and/or make a suggestion, because I don't\nhave a good explanation. 
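For concreteness, the flexible-array variant amounts to something like this (a standalone sketch using stub structs and malloc/realloc rather than the real logtape.c types and palloc/repalloc):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Stub types; the real LogicalTape/LogicalTapeSet carry much more state. */
typedef struct { long firstBlockNumber; } TapeStub;

typedef struct TapeSetStub
{
    int      nTapes;
    TapeStub tapes[];           /* flexible array member */
} TapeSetStub;

TapeSetStub *
tapeset_create(int ntapes)
{
    TapeSetStub *ts = malloc(offsetof(TapeSetStub, tapes) +
                             ntapes * sizeof(TapeStub));

    assert(ts != NULL);
    ts->nTapes = ntapes;
    memset(ts->tapes, 0, ntapes * sizeof(TapeStub));
    return ts;
}

/*
 * Because the tapes live inline in the struct, growing the array may move
 * the whole struct, so the caller must save the returned pointer -- the
 * same contract as LogicalTapeSetExtend(lts, nAdditional) above.
 */
TapeSetStub *
tapeset_extend(TapeSetStub *ts, int nAdditional)
{
    int newTapes = ts->nTapes + nAdditional;

    ts = realloc(ts, offsetof(TapeSetStub, tapes) +
                 newTapes * sizeof(TapeStub));
    assert(ts != NULL);
    memset(&ts->tapes[ts->nTapes], 0, nAdditional * sizeof(TapeStub));
    ts->nTapes = newTapes;
    return ts;
}
```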
I'm fine with my version of the patch, but it\nwould be nice to know why there's such a big difference using a pointer\nversus a flexible array member.\n\nRegards,\n\tJeff Davis\n\n[0] \nhttps://postgr.es/m/6e7c269b9a84ff75fefcad8ab2d4758f03581e98.camel%40j-davis.com\n[1] https://postgr.es/m/20200108071202.GA1511@mars.local\n[2] gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0", "msg_date": "Thu, 27 Feb 2020 13:01:08 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Add LogicalTapeSetExtend() to logtape.c" }, { "msg_contents": "On Thu, Feb 27, 2020 at 01:01:08PM -0800, Jeff Davis wrote:\n> Attached is a patch that exports a new function from logtape.c:\n> \n> extern LogicalTapeSet *LogicalTapeSetExtend(\n> LogicalTapeSet *lts, int nAdditional);\n> \n> The purpose is to allow adding new tapes dynamically for the upcoming\n> Hash Aggregation work[0]. HashAgg doesn't know in advance how many\n> tapes it will need, though only a limited number are actually active at\n> a time.\n> \n> This was proposed and originally written by Adam Lee[1] (extract only\n> the changes to logtape.c/h from his posted patch). Strangely, I'm\n> seeing ~5% regression with his version when doing:\n> \n> -- t20m_1_int4 has 20 million random integers\n> select * from t20m_1_int4 order by i offset 100000000;\n> \n> Which seems to be due to using a pointer rather than a flexible array\n> member (I'm using gcc[2]). My version (attached) still uses a flexible\n> array member, which doesn't see the regression; but I repalloc the\n> whole struct so the caller needs to save the new pointer to the tape\n> set.\n> \n> That doesn't entirely make sense to me, and I'm wondering if someone\n> else can repro that result and/or make a suggestion, because I don't\n> have a good explanation. 
I'm fine with my version of the patch, but it\n> would be nice to know why there's such a big difference using a pointer\n> versus a flexible array member.\n> \n> Regards,\n> \tJeff Davis\n\nI noticed another difference, I was using palloc0(), which could be one of the\nreason, but not sure.\n\nTested your hashagg-20200226.patch on my laptop(Apple clang version 11.0.0),\nthe average time is 25.9s:\n\n```\ncreate table t20m_1_int4(i int);\ncopy t20m_1_int4 from program 'shuf -i 1-4294967295 -n 20000000';\nanalyze t20m_1_int4;\n```\n```\nexplain analyze select * from t20m_1_int4 order by i offset 100000000;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3310741.20..3310741.20 rows=1 width=4) (actual time=25666.471..25666.471 rows=0 loops=1)\n -> Sort (cost=3260740.96..3310741.20 rows=20000096 width=4) (actual time=20663.065..24715.269 rows=20000000 loops=1)\n Sort Key: i\n Sort Method: external merge Disk: 274056kB\n -> Seq Scan on t20m_1_int4 (cost=0.00..288496.96 rows=20000096 width=4) (actual time=0.075..2749.385 rows=20000000 loops=1)\n Planning Time: 0.109 ms\n Execution Time: 25911.542 ms\n(7 rows)\n```\n\nBut if use the palloc0() or do the MemSet() like:\n\n```\nlts = (LogicalTapeSet *) palloc0(offsetof(LogicalTapeSet, tapes) +\n\t\t\t\t\t\t\t\tntapes * sizeof(LogicalTape));\n...\nMemSet(lts->tapes, 0, lts->nTapes * sizeof(LogicalTape)); <--- this line doesn't matter as I observed,\n which makes a little sense the compiler\n might know it's already zero.\n```\n\nThe average time goes up to 26.6s:\n\n```\nexplain analyze select * from t20m_1_int4 order by i offset 100000000;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3310741.20..3310741.20 rows=1 width=4) (actual time=26419.712..26419.712 rows=0 loops=1)\n -> Sort 
(cost=3260740.96..3310741.20 rows=20000096 width=4) (actual time=21430.044..25468.661 rows=20000000 loops=1)\n Sort Key: i\n Sort Method: external merge Disk: 274056kB\n -> Seq Scan on t20m_1_int4 (cost=0.00..288496.96 rows=20000096 width=4) (actual time=0.060..2839.452 rows=20000000 loops=1)\n Planning Time: 0.111 ms\n Execution Time: 26652.255 ms\n(7 rows)\n```\n\n-- \nAdam Lee\n\n\n", "msg_date": "Fri, 28 Feb 2020 14:16:41 +0800", "msg_from": "Adam Lee <ali@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Add LogicalTapeSetExtend() to logtape.c" }, { "msg_contents": "On Fri, 2020-02-28 at 14:16 +0800, Adam Lee wrote:\n> I noticed another difference, I was using palloc0(), which could be\n> one of the\n> reason, but not sure.\n\nI changed the palloc0()'s in your code to plain palloc(), and it didn't\nmake any perceptible difference. Still slower than the version I posted\nthat keeps the flexible array.\n\nDid you compare all 3? Master, with your patch, and with my patch? I'd\nlike to see if you're seeing the same thing that I am.\n\n> Tested your hashagg-20200226.patch on my laptop(Apple clang version\n> 11.0.0),\n> the average time is 25.9s:\n\nThat sounds high -- my runs are about half that time. Is that with a\ndebug build or an optimized one?\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n", "msg_date": "Fri, 28 Feb 2020 12:38:55 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Add LogicalTapeSetExtend() to logtape.c" }, { "msg_contents": "On Fri, Feb 28, 2020 at 12:38:55PM -0800, Jeff Davis wrote:\n> On Fri, 2020-02-28 at 14:16 +0800, Adam Lee wrote:\n> > I noticed another difference, I was using palloc0(), which could be\n> > one of the\n> > reason, but not sure.\n> \n> I changed the palloc0()'s in your code to plain palloc(), and it didn't\n> make any perceptible difference. Still slower than the version I posted\n> that keeps the flexible array.\n> \n> Did you compare all 3? 
Master, with your patch, and with my patch? I'd\n> like to see if you're seeing the same thing that I am.\n> \n> > Tested your hashagg-20200226.patch on my laptop(Apple clang version\n> > 11.0.0),\n> > the average time is 25.9s:\n> \n> That sounds high -- my runs are about half that time. Is that with a\n> debug build or an optimized one?\n> \n> Regards,\n> \tJeff Davis\n\nYes, I was running a debug version. I usually do 'CFLAGS=-O0 -g3'\n'--enable-cassert' '--enable-debug'.\n\nTest with a general build:\n\nMaster: 12729ms 12970ms 12999ms\nWith my patch(a pointer): 12965ms 13273ms 13116ms\nWith your patch(flexible array): 12906ms 12991ms 13043ms\n\nNot obvious I suppose, anyway, your patch looks good to me.\n\n-- \nAdam Lee\n\n\n", "msg_date": "Tue, 3 Mar 2020 09:49:35 +0800", "msg_from": "Adam Lee <ali@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Add LogicalTapeSetExtend() to logtape.c" }, { "msg_contents": "On Tue, 2020-03-03 at 09:49 +0800, Adam Lee wrote:\n> Master: 12729ms 12970ms 12999ms\n> With my patch(a pointer): 12965ms 13273ms 13116ms\n> With your patch(flexible array): 12906ms 12991ms 13043ms\n\nHmm.. looks like you didn't reproduce the difference I saw. What\ncompiler/version?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 03 Mar 2020 08:46:24 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Add LogicalTapeSetExtend() to logtape.c" }, { "msg_contents": "On Tue, Mar 03, 2020 at 08:46:24AM -0800, Jeff Davis wrote:\n> On Tue, 2020-03-03 at 09:49 +0800, Adam Lee wrote:\n> > Master: 12729ms 12970ms 12999ms\n> > With my patch(a pointer): 12965ms 13273ms 13116ms\n> > With your patch(flexible array): 12906ms 12991ms 13043ms\n> \n> Hmm.. looks like you didn't reproduce the difference I saw. 
What\n> compiler/version?\n\nIt was \"Apple clang version 11.0.0 (clang-1100.0.33.17)\".\n\nThen I changed to \"gcc-9 (Homebrew GCC 9.2.0_3) 9.2.0\" this morning.\n\nMaster(e537aed61d): 13342.844 ms 13195.982 ms 13271.023 ms\nWith my patch(a pointer): 13020.029 ms 13008.158 ms 13063.658 ms\nWith your patch(flexible array): 12870.117 ms 12814.725 ms 13119.255 ms\n\n-- \nAdam Lee\n\n\n", "msg_date": "Wed, 4 Mar 2020 11:57:19 +0800", "msg_from": "Adam Lee <ali@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Add LogicalTapeSetExtend() to logtape.c" }, { "msg_contents": "On Wed, 2020-03-04 at 11:57 +0800, Adam Lee wrote:\n> Master(e537aed61d): 13342.844 ms 13195.982 ms 13271.023 ms\n> With my patch(a pointer): 13020.029 ms 13008.158 ms 13063.658 ms\n> With your patch(flexible array): 12870.117 ms 12814.725 ms 13119.255\n> ms\n\nI tracked the problem down.\n\nWhen we change from a flexible array to a pointer and a separately-\nallocated chunk, then it causes some unrelated code in\nLogicalTapeWrite() to be optimized differently in my environment.\n\nSpecifically, the memcpy() in LogicalTapeWrite() gets turned into an\ninline implementation using the \"rep movsq\" instruction. For my\nparticular case, actually calling memcpy (rather than using the inlined\nversion) is a lot faster.\n\nIn my test case, LogicalTapeWrite() was being called with size of 4 or\n10, so perhaps those small values are just handled much more\nefficiently in the real memcpy.\n\nTo get it to use the real memcpy(), I had to '#undef _FORTIFY_SOURCE'\nat the top of the file, and pass -fno-builtin-memcpy. Then the\nregression went away.\n\nI don't care for the version I posted where it repalloc()s the entire\nstruct. That seems needlessly odd and could cause confusion; and it\nalso changes the API so that the caller needs to update its pointers.\n\nI'm inclined to use a version similar to Adam's, where it has a pointer\nand a separate palloc() chunk (attached). 
That causes a regression in\nmy setup, but it's not a terrible regression, and it's not really the\n\"fault\" of the change. Trying to twist code around to satisfy a\nparticular compiler/libc seems like a bad idea. It also might depend on\nthe exact query, and may even be faster for some.\n\nAny objections?\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 04 Mar 2020 12:06:07 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Add LogicalTapeSetExtend() to logtape.c" }, { "msg_contents": "On Wed, 2020-03-04 at 12:06 -0800, Jeff Davis wrote:\n> I tracked the problem down.\n\nBecause of the name _FORTIFY_SOURCE, I got somewhat concerned that this\nchange (logtape-20200303) was somehow interfering with a safety\nmechanism in the compiler.\n\nThere's a mechanism in GCC called object size tracking[1]. memcpy() is\nreplaced by __builtin___memcpy_chk(), which verifies that the amount\nbeing copied doesn't exceed the destination object size -- but of\ncourse this only works if GCC knows the destination object size. 
If it\ndoesn't know the object size, or if it can prove at compile time that\nit will never be exceeded, then it replaces the checked memcpy with a\ncall to normal memcpy (at least [1] seems to suggest that it will).\n\nBut I did some experiments and GCC is clearly not able to know the\ndestination object size either before or after the logtape-20200303\nchange:\n\n * palloc (and friends) lack the alloc_size function attribute[2],\nwhich is required for GCC to try to track it (aside: maybe we should\nadd it as an unrelated change?)\n * if I add the alloc_size attribute to palloc, it is able to do very\nbasic tracking of the object size; but not in more complex cases like\nthe buffer in logtape.c\n\nTherefore, from [1], I'd expect the call to checked memcpy to be\nreplaced by a call to normal memcpy() either before or after the\nchange.\n\nIt is replaced by normal memcpy() before the change, but not after.\nI conclude that this is arbitrary and not fundamentally related to\nobject size checking or _FORTIFY_SOURCE.\n\nI don't think I should hold up this change because of an arbitrary\ndecision by the compiler.\n\nRegards,\n\tJeff Davis\n\n[1] https://gcc.gnu.org/onlinedocs/gcc/Object-Size-Checking.html\n[2] \nhttps://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-alloc_005fsize-function-attribute\n\n\n\n\n\n", "msg_date": "Sat, 07 Mar 2020 13:00:55 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Add LogicalTapeSetExtend() to logtape.c" } ]
[ { "msg_contents": "Hi,\n\n From time to time I run into the limitation that ALTER TYPE does not\nallow changing storage strategy. I've written a bunch of data types over\nthe years - in some cases I simply forgot to specify storage in CREATE\nTYPE (so it defaulted to PLAIN) or I expected PLAIN to be sufficient and\nthen later wished I could enable TOAST.\n\nObviously, without ALTER TYPE supporting that it's rather tricky. You\neither need to do dump/restore, or tweak the pg_type catalog directly.\nSo here is an initial patch extending ALTER TYPE to support this.\n\nThe question is why this was not implemented before - my assumption is\nthis is not simply because no one wanted that. We track the storage in\npg_attribute too, and ALTER TABLE allows changing that ...\n\nMy understanding is that pg_type.typstorage essentially specifies two\nthings: (a) default storage strategy for the attributes with this type,\nand (b) whether the type implementation is prepared to handle TOAST-ed\nvalues or not. And pg_attribute.attstorage has to respect this - when\nthe type says PLAIN then the attribute can't simply use strategy that\nwould enable TOAST.\n\nLuckily, this is only a problem when switching typstorage to PLAIN (i.e.\nwhen disabling TOAST for the type). The attached v1 patch checks if\nthere are attributes with non-PLAIN storage for this type, and errors\nout if it finds one. But unfortunately that's not entirely correct,\nbecause ALTER TABLE only sets storage for new data. A table may be\ncreated with e.g. EXTENDED storage for an attribute, a bunch of rows may\nbe loaded and then the storage for the attribute may be changed to\nPLAIN. That would pass the check as it's currently in the patch, yet\nthere may be TOAST-ed values for the type with PLAIN storage :-(\n\nI'm not entirely sure what to do about this, but I think it's OK to just\nreject changes in this direction (from non-PLAIN to PLAIN storage). 
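In outline, the v1 check is equivalent to the following (a self-contained simplification over an in-memory stand-in for pg_attribute, not the patch's actual code -- and it shares the blind spot just described, since attstorage says nothing about values already on disk):

```c
#include <assert.h>
#include <stdbool.h>

/* In-memory stand-in for the pg_attribute columns the check looks at. */
typedef struct { unsigned atttypid; char attstorage; } AttStub;

/*
 * The v1 guard: refuse to switch a type's typstorage to PLAIN if any
 * existing column of that type currently uses a TOAST-capable strategy
 * ('p' = plain; other strategies allow out-of-line or compressed storage).
 */
bool
can_set_plain(const AttStub *atts, int natts, unsigned typid)
{
    for (int i = 0; i < natts; i++)
    {
        if (atts[i].atttypid == typid && atts[i].attstorage != 'p')
            return false;       /* the patch errors out here */
    }
    return true;
}
```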
I've\nnever needed it, and it seems pretty useless - it seems fine to just\ninstruct the user to do a dump/restore.\n\nIn the opposite direction - when switching from PLAIN to a TOAST-enabled\nstorage, or enabling/disabling compression, this is not an issue at all.\nIt's legal for type to specify e.g. EXTENDED but attribute to use PLAIN,\nfor example. So the attached v1 patch simply allows this direction.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 28 Feb 2020 01:44:40 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> My understanding is that pg_type.typstorage essentially specifies two\n> things: (a) default storage strategy for the attributes with this type,\n> and (b) whether the type implementation is prepared to handle TOAST-ed\n> values or not. And pg_attribute.attstorage has to respect this - when\n> the type says PLAIN then the attribute can't simply use strategy that\n> would enable TOAST.\n\nCheck.\n\n> Luckily, this is only a problem when switching typstorage to PLAIN (i.e.\n> when disabling TOAST for the type). The attached v1 patch checks if\n> there are attributes with non-PLAIN storage for this type, and errors\n> out if it finds one. But unfortunately that's not entirely correct,\n> because ALTER TABLE only sets storage for new data. A table may be\n> created with e.g. EXTENDED storage for an attribute, a bunch of rows may\n> be loaded and then the storage for the attribute may be changed to\n> PLAIN. 
That would pass the check as it's currently in the patch, yet\n> there may be TOAST-ed values for the type with PLAIN storage :-(\n\n> I'm not entirely sure what to do about this, but I think it's OK to just\n> reject changes in this direction (from non-PLAIN to PLAIN storage).\n\nYeah, I think you should just reject that: once toast-capable, always\ntoast-capable. Scanning pg_attribute is entirely insufficient because\nof race conditions --- and while we accept such races in some other\nplaces, this seems like a place where the risk is too high and the\nvalue too low.\n\nAnybody who really needs to go in that direction still has the alternative\nof manually frobbing the catalogs (and taking the responsibility for\nraces and un-toasting whatever's stored already).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Feb 2020 13:59:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "On Fri, Feb 28, 2020 at 01:59:49PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> My understanding is that pg_type.typstorage essentially specifies two\n>> things: (a) default storage strategy for the attributes with this type,\n>> and (b) whether the type implementation is prepared to handle TOAST-ed\n>> values or not. And pg_attribute.attstorage has to respect this - when\n>> the type says PLAIN then the attribute can't simply use strategy that\n>> would enable TOAST.\n>\n>Check.\n>\n>> Luckily, this is only a problem when switching typstorage to PLAIN (i.e.\n>> when disabling TOAST for the type). The attached v1 patch checks if\n>> there are attributes with non-PLAIN storage for this type, and errors\n>> out if it finds one. But unfortunately that's not entirely correct,\n>> because ALTER TABLE only sets storage for new data. A table may be\n>> created with e.g. 
EXTENDED storage for an attribute, a bunch of rows may\n>> be loaded and then the storage for the attribute may be changed to\n>> PLAIN. That would pass the check as it's currently in the patch, yet\n>> there may be TOAST-ed values for the type with PLAIN storage :-(\n>\n>> I'm not entirely sure what to do about this, but I think it's OK to just\n>> reject changes in this direction (from non-PLAIN to PLAIN storage).\n>\n>Yeah, I think you should just reject that: once toast-capable, always\n>toast-capable. Scanning pg_attribute is entirely insufficient because\n>of race conditions --- and while we accept such races in some other\n>places, this seems like a place where the risk is too high and the\n>value too low.\n>\n>Anybody who really needs to go in that direction still has the alternative\n>of manually frobbing the catalogs (and taking the responsibility for\n>races and un-toasting whatever's stored already).\n>\n\nYeah. Attached is v2 of the patch, simply rejecting such changes.\n\nI think we might check if there are any attributes with the given data\ntype, and allow the change if there are none. That would still allow the\nchange when the type is used only for things like function parameters\netc. But we'd also have to check for domains (recursively).\n\nOne thing I haven't mentioned in the original message is CASCADE. It\nseems useful to optionally change storage for all attributes with the\ngiven data type. But I'm not sure it's actually a good idea, and the\namount of code seems non-trivial (it'd have to copy quite a bit of code\nfrom ALTER TABLE). I'm also not sure what to do about domains and\nattributes using those. 
It's more time/code than what I'm willing to spend\nnow, so I'll leave this as a possible future improvement.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 29 Feb 2020 02:25:11 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I think we might check if there are any attributes with the given data\n> type, and allow the change if there are none. That would still allow the\n> change when the type is used only for things like function parameters\n> etc. But we'd also have to check for domains (recursively).\n\nStill has race conditions.\n\n> One thing I haven't mentioned in the original message is CASCADE. It\n> seems useful to optionally change storage for all attributes with the\n> given data type. But I'm not sure it's actually a good idea, and the\n> amount of code seems non-trivial (it'd have to copy quite a bit of code\n> from ALTER TABLE).\n\nYou'd need a moderately strong lock on each such table, which means\nthere'd be serious deadlock hazards. I'm dubious that it's worth\ntroubling with.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Feb 2020 20:35:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "On Fri, Feb 28, 2020 at 08:35:33PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> I think we might check if there are any attributes with the given data\n>> type, and allow the change if there are none. That would still allow the\n>> change when the type is used only for things like function parameters\n>> etc. 
But we'd also have to check for domains (recursively).\n>\n>Still has race conditions.\n>\n\nYeah, I have no problem believing that.\n\n>> One thing I haven't mentioned in the original message is CASCADE. It\n>> seems useful to optionally change storage for all attributes with the\n>> given data type. But I'm not sure it's actually a good idea, and the\n>> amount of code seems non-trivial (it'd have to copy quite a bit of code\n>> from ALTER TABLE).\n>\n>You'd need a moderately strong lock on each such table, which means\n>there'd be serious deadlock hazards. I'm dubious that it's worth\n>troubling with.\n>\n\nYeah, I don't plan to do this in v1 (and I have no immediate plan to\nwork on it after that). But I wonder how is the deadlock risk any\ndifferent compared e.g. to DROP TYPE ... CASCADE?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 29 Feb 2020 22:37:14 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Fri, Feb 28, 2020 at 08:35:33PM -0500, Tom Lane wrote:\n>> You'd need a moderately strong lock on each such table, which means\n>> there'd be serious deadlock hazards. I'm dubious that it's worth\n>> troubling with.\n\n> Yeah, I don't plan to do this in v1 (and I have no immediate plan to\n> work on it after that). But I wonder how is the deadlock risk any\n> different compared e.g. to DROP TYPE ... 
CASCADE?\n\nTrue, but dropping a type you're actively using seems pretty improbable;\nwhereas the whole point of the patch you're proposing is that people\n*would* want to use it in production.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 29 Feb 2020 17:13:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "I started to look at this patch with fresh eyes after reading the patch\nfor adding binary I/O for ltree,\n\nhttps://www.postgresql.org/message-id/flat/CANmj9Vxx50jOo1L7iSRxd142NyTz6Bdcgg7u9P3Z8o0=HGkYyQ@mail.gmail.com\n\nand realizing that the only reasonable way to tackle that problem is to\nprovide an ALTER TYPE command that can set the binary I/O functions for\nan existing type. (One might think that it'd be acceptable to UPDATE\nthe pg_type row directly; but that wouldn't take care of dependencies\nproperly, and it also wouldn't handle the domain issues I discuss below.)\nThere are other properties too that can be set in CREATE TYPE but we\nhave no convenient way to adjust later, though it'd be reasonable to\nwant to do so.\n\nI do not think that we want to invent bespoke syntax for each property.\nThe more such stuff we cram into ALTER TYPE, the bigger the risk of\nconflicting with some future SQL extension. Moreover, since CREATE TYPE\nfor base types already uses the \"( keyword = value, ... )\" syntax for\nthese properties, and we have a similar precedent in CREATE/ALTER\nOPERATOR, it seems to me that the syntax we want here is\n\n\tALTER TYPE typename SET ( keyword = value [, ... ] )\n\nAttached is a stab at doing it that way, and implementing setting of\nthe binary I/O functions for good measure. 
(It'd be reasonable to\nadd more stuff, like setting the other support functions, but this\nis enough for the immediate discussion.)\n\nThe main thing I'm not too happy about is what to do about domains.\nYour v2 patch supposed that it's okay to allow ALTER TYPE on domains,\nbut I'm not sure we want to go that way, and if we do there's certainly\na bunch more work that has to be done. Up to now the system has\nsupposed that domains inherit all these properties from their base\ntypes. I'm not certain exactly how far that assumption has propagated,\nbut there's at least one place that implicitly assumes it: pg_dump has\nno logic for adjusting a domain to have different storage or support\nfunctions than the base type had. So as v2 stands, a custom storage\noption on a domain would be lost in dump/reload.\n\nAnother issue that would become a big problem if we allow domains to\nhave custom I/O functions is that the wire protocol transmits the\nbase type's OID, not the domain's OID, for an output column that\nis of a domain type. A client that expected a particular output\nformat on the strength of what it was told the column type was\nwould be in for a big surprise.\n\nCertainly we could fix pg_dump if we had a mind to, but changing\nthe wire protocol for this would have unpleasant ramifications.\nAnd I'm worried about whether there are other places in the system\nthat are also making this sort of assumption.\n\nI'm also not very convinced that we *want* to allow domains to vary from\ntheir base types in this way. The primary use-case I can think of for\nALTER TYPE SET is in extension update scripts, and an extension would\nalmost surely wish for any domains over its type to automatically absorb\nwhatever changes of this sort it wants to make.\n\nSo I think there are two distinct paths we could take here:\n\n* Decide that it's okay to allow domains to vary from their base type\nin these properties. 
Teach pg_dump to cope with that, and stand ready\nto fix any other bugs we find, and have some story to tell the people\nwhose clients we break. Possibly add a CASCADE option to\nALTER TYPE SET, with the semantics of adjusting dependent domains\nto match. (This is slightly less scary than the CASCADE semantics\nyou had in mind, because it would only affect pg_type entries not\ntables.)\n\n* Decide that we'll continue to require domains to match their base\ntype in all these properties. That means refusing to allow ALTER\non a domain per se, and automatically cascading these changes to\ndependent domains.\n\nIn the v3 patch below, I've ripped out the ALTER DOMAIN syntax on\nthe assumption that we'd do the latter; but I've not written the\ncascade recursion logic, because that seemed like a lot of work\nto do in advance of having consensus on it being a good idea.\n\nI've also restricted the code to work just on base types, because\nit's far from apparent to me that it makes any sense to allow any\nof these operations on derived types such as composites or ranges.\nAgain, there's a fair amount of code that is not going to be\nprepared for such a type to have properties that it could not\nhave at creation, and I don't see a use-case that really justifies\nbreaking those expectations.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 02 Mar 2020 14:11:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "On Mon, Mar 02, 2020 at 02:11:10PM -0500, Tom Lane wrote:\n>I started to look at this patch with fresh eyes after reading the patch\n>for adding binary I/O for ltree,\n>\n>https://www.postgresql.org/message-id/flat/CANmj9Vxx50jOo1L7iSRxd142NyTz6Bdcgg7u9P3Z8o0=HGkYyQ@mail.gmail.com\n>\n>and realizing that the only reasonable way to tackle that problem is to\n>provide an ALTER TYPE command that can set the binary I/O functions for\n>an existing type. 
(One might think that it'd be acceptable to UPDATE\n>the pg_type row directly; but that wouldn't take care of dependencies\n>properly, and it also wouldn't handle the domain issues I discuss below.)\n>There are other properties too that can be set in CREATE TYPE but we\n>have no convenient way to adjust later, though it'd be reasonable to\n>want to do so.\n>\n>I do not think that we want to invent bespoke syntax for each property.\n>The more such stuff we cram into ALTER TYPE, the bigger the risk of\n>conflicting with some future SQL extension. Moreover, since CREATE TYPE\n>for base types already uses the \"( keyword = value, ... )\" syntax for\n>these properties, and we have a similar precedent in CREATE/ALTER\n>OPERATOR, it seems to me that the syntax we want here is\n>\n>\tALTER TYPE typename SET ( keyword = value [, ... ] )\n>\n\nAgreed, it seems reasonable to use the ALTER OPERATOR precedent.\n\n>Attached is a stab at doing it that way, and implementing setting of\n>the binary I/O functions for good measure. (It'd be reasonable to\n>add more stuff, like setting the other support functions, but this\n>is enough for the immediate discussion.)\n>\n>The main thing I'm not too happy about is what to do about domains.\n>Your v2 patch supposed that it's okay to allow ALTER TYPE on domains,\n>but I'm not sure we want to go that way, and if we do there's certainly\n>a bunch more work that has to be done. Up to now the system has\n>supposed that domains inherit all these properties from their base\n>types. I'm not certain exactly how far that assumption has propagated,\n>but there's at least one place that implicitly assumes it: pg_dump has\n>no logic for adjusting a domain to have different storage or support\n>functions than the base type had.
So as v2 stands, a custom storage\n>option on a domain would be lost in dump/reload.\n>\n>Another issue that would become a big problem if we allow domains to\n>have custom I/O functions is that the wire protocol transmits the\n>base type's OID, not the domain's OID, for an output column that\n>is of a domain type. A client that expected a particular output\n>format on the strength of what it was told the column type was\n>would be in for a big surprise.\n>\n>Certainly we could fix pg_dump if we had a mind to, but changing\n>the wire protocol for this would have unpleasant ramifications.\n>And I'm worried about whether there are other places in the system\n>that are also making this sort of assumption.\n>\n>I'm also not very convinced that we *want* to allow domains to vary from\n>their base types in this way. The primary use-case I can think of for\n>ALTER TYPE SET is in extension update scripts, and an extension would\n>almost surely wish for any domains over its type to automatically absorb\n>whatever changes of this sort it wants to make.\n>\n>So I think there are two distinct paths we could take here:\n>\n>* Decide that it's okay to allow domains to vary from their base type\n>in these properties. Teach pg_dump to cope with that, and stand ready\n>to fix any other bugs we find, and have some story to tell the people\n>whose clients we break. Possibly add a CASCADE option to\n>ALTER TYPE SET, with the semantics of adjusting dependent domains\n>to match. (This is slightly less scary than the CASCADE semantics\n>you had in mind, because it would only affect pg_type entries not\n>tables.)\n>\n>* Decide that we'll continue to require domains to match their base\n>type in all these properties. 
That means refusing to allow ALTER\n>on a domain per se, and automatically cascading these changes to\n>dependent domains.\n>\n>In the v3 patch below, I've ripped out the ALTER DOMAIN syntax on\n>the assumption that we'd do the latter; but I've not written the\n>cascade recursion logic, because that seemed like a lot of work\n>to do in advance of having consensus on it being a good idea.\n>\n\nI do agree we should do the latter, i.e. maintain the assumption that\ndomains have the same properties as their base type. I can't think of a\nuse case for allowing them to differ, it just didn't occur to me there\nis this implicit assumption when writing the patch.\n\n>I've also restricted the code to work just on base types, because\n>it's far from apparent to me that it makes any sense to allow any\n>of these operations on derived types such as composites or ranges.\n>Again, there's a fair amount of code that is not going to be\n>prepared for such a type to have properties that it could not\n>have at creation, and I don't see a use-case that really justifies\n>breaking those expectations.\n>\n\nYeah, that makes sense too, I think.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 2 Mar 2020 23:05:31 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Mon, Mar 02, 2020 at 02:11:10PM -0500, Tom Lane wrote:\n>> In the v3 patch below, I've ripped out the ALTER DOMAIN syntax on\n>> the assumption that we'd do the latter; but I've not written the\n>> cascade recursion logic, because that seemed like a lot of work\n>> to do in advance of having consensus on it being a good idea.\n\n> I do agree we should do the latter, i.e. 
maintain the assumption that\n> domains have the same properties as their base type. I can't think of a\n> use case for allowing them to differ, it just didn't occur to me there\n> is this implicit assumption when writing the patch.\n\nHere's a v4 that is rebased over HEAD + the OPAQUE-ectomy that I\nproposed at <4110.1583255415@sss.pgh.pa.us>, plus it adds recursion\nto domains, and I also added the ability to set typmod I/O and\nanalyze functions, which seems like functionality that somebody\ncould possibly wish to add to a type after-the-fact much like\nbinary I/O.\n\nI thought about allowing the basic I/O functions to be replaced as\nwell, but I couldn't really convince myself that there's a use-case\nfor that. In practice you'd probably always just change the\nbehavior of the existing I/O functions, not want to sub in new ones.\n\n(I kind of wonder, actually, whether there's a use-case for the\nNONE options here at all. When would you remove a support function?)\n\nOf the remaining CREATE TYPE options, \"category\" and \"preferred\"\ncould perhaps be changeable but I couldn't get excited about them.\nAll the others seem like there are gotchas --- for example,\nchanging a type's collatable property is much harder than it\nlooks because it'd affect stored views. So this seems like a\nreasonable stopping point.\n\nI think this is committable --- how about you?\n\nI've included the OPAQUE-ectomy patches below so that the cfbot\ncan test this, but they're the same as in the other thread.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 04 Mar 2020 13:39:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "I wrote:\n> I think this is committable --- how about you?\n\n... or not. 
I just noticed that the typcache tracks each type's\ntypstorage setting, and there's no provision for flushing/reloading\nthat.\n\nAs far as I can find, there is only one place where the cached\nvalue is used, and that's in rangetypes.c which needs to know\nwhether the range element type is toastable. (It doesn't actually\nneed to know the exact value of typstorage, only whether it is or\nisn't PLAIN.)\n\nWe have a number of possible fixes for that:\n\n1. Upgrade typcache.c to support flushing and rebuilding this data.\nThat seems fairly expensive; while we may be forced into that someday,\nI'm hesitant to do it for a fairly marginal feature like this one.\n\n2. Stop using the typcache for this particular purpose in rangetypes.c.\nThat seems rather undesirable from a performance standpoint, too.\n\n3. Drop the ability for ALTER TYPE to promote from PLAIN to not-PLAIN\ntypstorage, and adjust the typcache so that it only remembers boolean\ntoastability not the specific toasting strategy. Then the cache is\nstill immutable so no need for update logic.\n\nI'm kind of liking #3, ugly as it sounds, because I'm not sure how\nmuch of a use-case there is for the upgrade-from-PLAIN case.\nParticularly now that TOAST is so ingrained in the system, it seems\nrather unlikely that a production-grade data type wouldn't have\nbeen designed to be toastable from the beginning, if there could be\nany advantage to that. Either #1 or #2 seem like mighty high prices\nto pay for offering an option that might have no real-world uses.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Mar 2020 18:15:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "I wrote:\n> 3. Drop the ability for ALTER TYPE to promote from PLAIN to not-PLAIN\n> typstorage, and adjust the typcache so that it only remembers boolean\n> toastability not the specific toasting strategy. 
Then the cache is\n> still immutable so no need for update logic.\n>\n> I'm kind of liking #3, ugly as it sounds, because I'm not sure how\n> much of a use-case there is for the upgrade-from-PLAIN case.\n> Particularly now that TOAST is so ingrained in the system, it seems\n> rather unlikely that a production-grade data type wouldn't have\n> been designed to be toastable from the beginning, if there could be\n> any advantage to that. Either #1 or #2 seem like mighty high prices\n> to pay for offering an option that might have no real-world uses.\n\nHere's a v5 based on that approach. I also added some comments about\nthe potential race conditions involved in recursing to domains.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 04 Mar 2020 18:56:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "On Wed, Mar 4, 2020 at 4:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > I think this is committable --- how about you?\n>\n> ... or not. I just noticed that the typcache tracks each type's\n> typstorage setting, and there's no provision for flushing/reloading\n> that.\n>\n> As far as I can find, there is only one place where the cached\n> value is used, and that's in rangetypes.c which needs to know\n> whether the range element type is toastable. (It doesn't actually\n> need to know the exact value of typstorage, only whether it is or\n> isn't PLAIN.)\n>\n> [...]\n\n\n\n>\n> 3. Drop the ability for ALTER TYPE to promote from PLAIN to not-PLAIN\n> typstorage, and adjust the typcache so that it only remembers boolean\n> toastability not the specific toasting strategy. 
Then the cache is\n> still immutable so no need for update logic.\n>\n> I'm kind of liking #3, ugly as it sounds, because I'm not sure how\n> much of a use-case there is for the upgrade-from-PLAIN case.\n> Particularly now that TOAST is so ingrained in the system, it seems\n> rather unlikely that a production-grade data type wouldn't have\n> been designed to be toastable from the beginning, if there could be\n> any advantage to that. Either #1 or #2 seem like mighty high prices\n> to pay for offering an option that might have no real-world uses.\n>\n\nTomas' opening paragraph for this thread indicated this was motivated by\nthe plain-to-toast change but I'm not in a position to provide independent\ninsight.\n\nWithout that piece this is mainly about being able to specify a type's\npreference for when and how it can be toasted. That seems like sufficient\nmotivation, though that functionality seems basic enough that I'm wondering\nwhy it hasn't come up before now (this seems like a different topic of\nwonder than what Tomas mentioned in the OP).\n\nIs there also an issue with whether the type has implemented compression or\nnot - i.e., should the x->e and m->e paths be forbidden too? Or is it\nalways the case a non-plain type is compressible and the other non-plain\noptions just switch between preferences (so External just says \"while I can\nbe compressed, please don't\")?\nSeparately...\n\nCan you please include an edit to [1] indicating that \"e\" is the\nabbreviation for External and \"x\" is Extended (spelling out the other two\nas well). Might be worth a comment at [2] as well.\n\n[1] https://www.postgresql.org/docs/12/catalog-pg-type.html\n[2] https://www.postgresql.org/docs/12/storage-toast.html\n\nThanks!\n\nDavid J.", "msg_date": "Wed, 4 Mar 2020 17:00:58 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "On Wed, Mar 04, 2020 at 06:56:42PM -0500, Tom Lane wrote:\n>I wrote:\n>> 3. Drop the ability for ALTER TYPE to promote from PLAIN to not-PLAIN\n>> typstorage, and adjust the typcache so that it only remembers boolean\n>> toastability not the specific toasting strategy. Then the cache is\n>> still immutable so no need for update logic.\n>>\n>> I'm kind of liking #3, ugly as it sounds, because I'm not sure how\n>> much of a use-case there is for the upgrade-from-PLAIN case.\n>> Particularly now that TOAST is so ingrained in the system, it seems\n>> rather unlikely that a production-grade data type wouldn't have\n>> been designed to be toastable from the beginning, if there could be\n>> any advantage to that. Either #1 or #2 seem like mighty high prices\n>> to pay for offering an option that might have no real-world uses.\n>\n>Here's a v5 based on that approach. I also added some comments about\n>the potential race conditions involved in recursing to domains.\n>\n\nWell, I don't know what to say, really.
This very thread started with me\nexplaining how I've repeatedly needed a way to upgrade from PLAIN, so I\ndon't quite agree with your claim that there's no use case for that.\n\nGranted, the cases may be my fault - sometimes I have not expected the\ntype to need TOAST initially, and then later realizing I've been wrong.\nIn other cases I simply failed to realize PLAIN is the default value\neven for varlena types (yes, it's a silly mistake).\n\nFWIW I'm not suggesting you go and implement #1 or #2 for me, that'd be\nup to me I guess. But I disagree there's no use case for it, and #3\nmakes this feature useless for me.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Mar 2020 20:05:13 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> FWIW I'm not suggesting you go and implement #1 or #2 for me, that'd be\n> up to me I guess. But I disagree there's no use case for it, and #3\n> makes this feature useless for me.\n\nOK, then we need to do something else. Do you have ideas for other\nalternatives?\n\nIf not, we probably should bite the bullet and go for #1, since\nI have little doubt that we'll need that someday anyway.\nThe trick will be to keep down the cache invalidation overhead...\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 05 Mar 2020 14:52:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "On Thu, Mar 05, 2020 at 02:52:44PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> FWIW I'm not suggesting you go and implement #1 or #2 for me, that'd be\n>> up to me I guess.
But I disagree there's no use case for it, and #3\n>> makes this feature useless for me.\n>\n>OK, then we need to do something else. Do you have ideas for other\n>alternatives?\n>\n\nI don't have any other ideas, unfortunately. And I think if I had one,\nit'd probably be some sort of ugly hack anyway :-/\n\n>If not, we probably should bite the bullet and go for #1, since\n>I have little doubt that we'll need that someday anyway.\n>The trick will be to keep down the cache invalidation overhead...\n>\n\nYeah, I agree #1 seems like the cleanest/best option. Are you worried\nabout the overhead due to the extra complexity, or overhead due to\ncache getting invalidated for this particular reason?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Mar 2020 22:17:03 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "I wrote:\n> If not, we probably should bite the bullet and go for #1, since\n> I have little doubt that we'll need that someday anyway.\n> The trick will be to keep down the cache invalidation overhead...\n\nHere's a version that does it like that. I'm less worried about the\noverhead than I was before, because I realized that we already had\na syscache callback for pg_type there.
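(Aside, for readers unfamiliar with the mechanism: the "syscache callback for pg_type" referred to here is registered through the backend's inval API. A heavily simplified sketch of that pattern, along the lines of what typcache.c already does — not the patch's actual code — looks like this:)

```c
/*
 * Heavily simplified sketch of a pg_type syscache invalidation callback;
 * illustrative only, not the patch's code.
 */
static void
TypeCacheTypCallback(Datum arg, int cacheid, uint32 hashvalue)
{
    /* walk the backend-local type cache, flagging affected entries
     * so they are rebuilt on next lookup */
}

/* registered once per backend, e.g. at the first type-cache lookup: */
CacheRegisterSyscacheCallback(TYPEOID, TypeCacheTypCallback, (Datum) 0);
```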
And it was being pretty\nstupid about which entries it reset, too, so this version might\nactually net out as less overhead (in some workloads anyway).\n\nFor ease of review I just added the new TCFLAGS value out of\nsequence, but I'd be inclined to renumber the bits back into\nsequence before committing.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 05 Mar 2020 17:46:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> Yeah, I agree #1 seems like the cleanest/best option. Are you worried\n> about the overhead due to the extra complexity, or overhead due to\n> cache getting invalidated for this particular reason?\n\nThe overhead is basically a hash_seq_search traversal over the typcache\neach time we get a pg_type inval event, which there could be a lot of.\nOn the other hand we have a lot of inval overhead already, so this might\nnot amount to anything noticeable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Mar 2020 18:08:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "On Thu, Mar 05, 2020 at 05:46:44PM -0500, Tom Lane wrote:\n>I wrote:\n>> If not, we probably should bite the bullet and go for #1, since\n>> I have little doubt that we'll need that someday anyway.\n>> The trick will be to keep down the cache invalidation overhead...\n>\n>Here's a version that does it like that. I'm less worried about the\n>overhead than I was before, because I realized that we already had\n>a syscache callback for pg_type there. 
And it was being pretty\n>stupid about which entries it reset, too, so this version might\n>actually net out as less overhead (in some workloads anyway).\n>\n>For ease of review I just added the new TCFLAGS value out of\n>sequence, but I'd be inclined to renumber the bits back into\n>sequence before committing.\n>\n\nLGTM. If I had to nitpick, I'd say that the example in docs should be \n\n ALTER TYPE mytype SET (\n SEND = mytypesend,\n RECEIVE = mytyperecv\n );\n\ni.e. with uppercase SEND/RECEIVE, because that's how we spell it in\nother examples in CREATE TYPE etc.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 6 Mar 2020 14:42:18 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Thu, Mar 05, 2020 at 05:46:44PM -0500, Tom Lane wrote:\n>> For ease of review I just added the new TCFLAGS value out of\n>> sequence, but I'd be inclined to renumber the bits back into\n>> sequence before committing.\n\n> LGTM. If I had to nitpick, I'd say that the example in docs should be \n> ALTER TYPE mytype SET (\n> SEND = mytypesend,\n> RECEIVE = mytyperecv\n> );\n> i.e. with uppercase SEND/RECEIVE, because that's how we spell it in\n> other examples in CREATE TYPE etc.\n\nOK, pushed with those changes and some other docs-polishing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Mar 2020 12:20:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> Is there also an issue with whether the type has implemented compression or\n> not - i.e., should the x->e and m->e paths be forbidden too? 
Or is it\n> always the case a non-plain type is compressible and the other non-plain\n> options just switch between preferences (so External just says \"while I can\n> be compressed, please don't\")?\n\nYeah, the only relevant issue here is \"can it be toasted, or not?\". A\ndata type doesn't have direct control of which toasting options can be\napplied, nor does it need to, as long as the C functions apply the\ncorrect detoast macros.\n\n> Can you please include an edit to [1] indicating that \"e\" is the\n> abbreviation for External and \"x\" is Extended (spelling out the other two\n> as well). Might be worth a comment at [2] as well.\n> [1] https://www.postgresql.org/docs/12/catalog-pg-type.html\n> [2] https://www.postgresql.org/docs/12/storage-toast.html\n\nDone in [1]; I didn't see much point in changing [2].\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Mar 2020 12:23:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing ALTER TYPE to change storage strategy" } ]
[ { "msg_contents": "Hi All,\n\nCurrently we will not consider EXPR_SUBLINK when pulling up sublinks and\nthis would cause performance issues for some queries with the form of:\n'a > (SELECT agg(b) from ...)' as described in [1].\n\nSo here is a patch as an attempt to pull up EXPR SubLinks. The idea,\nwhich is based on Greenplum's implementation, is to perform the\nfollowing transformation.\n\nFor query:\n\nselect * from foo where foo.a >\n (select avg(bar.a) from bar where foo.b = bar.b);\n\nwe transform it to:\n\nselect * from foo inner join\n (select bar.b, avg(bar.a) as avg from bar group by bar.b) sub\non foo.b = sub.b and foo.a > sub.avg;\n\nTo do that, we recurse through the quals in sub-select and extract quals\nof form 'foo(outervar) = bar(innervar)' and then according to innervars\nwe make new SortGroupClause items and TargetEntry items for sub-select.\nAnd at last we pull up the sub-select into upper range table.\n\nAs a result, the plan would change as:\n\nFROM\n\n QUERY PLAN\n----------------------------------------\n Seq Scan on foo\n Filter: ((a)::numeric > (SubPlan 1))\n SubPlan 1\n -> Aggregate\n -> Seq Scan on bar\n Filter: (foo.b = b)\n(6 rows)\n\nTO\n\n QUERY PLAN\n--------------------------------------------------\n Hash Join\n Hash Cond: (foo.b = bar.b)\n Join Filter: ((foo.a)::numeric > (avg(bar.a)))\n -> Seq Scan on foo\n -> Hash\n -> HashAggregate\n Group Key: bar.b\n -> Seq Scan on bar\n(8 rows)\n\nThe patch works but still in draft stage. 
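As a quick sanity check that the two query shapes agree semantically (independent of any planner work), they can be compared on a toy dataset — SQLite via Python is used here purely as a convenient executor, and the tables and rows are invented:

```python
import sqlite3

# Toy schema mirroring the foo/bar example above (contents invented).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE foo (a REAL, b INTEGER)")
cur.execute("CREATE TABLE bar (a REAL, b INTEGER)")
cur.executemany("INSERT INTO foo VALUES (?, ?)", [(10, 1), (1, 1), (5, 2), (7, 3)])
cur.executemany("INSERT INTO bar VALUES (?, ?)", [(4, 1), (6, 1), (5, 2), (2, 2)])

# Original sublink form: correlated aggregate subquery in the WHERE clause.
sublink_form = """
    SELECT * FROM foo
    WHERE foo.a > (SELECT avg(bar.a) FROM bar WHERE foo.b = bar.b)
"""

# Pulled-up form: pre-aggregated subquery joined on the correlation qual.
joined_form = """
    SELECT foo.* FROM foo
    JOIN (SELECT b, avg(a) AS avg_a FROM bar GROUP BY b) sub
      ON foo.b = sub.b AND foo.a > sub.avg_a
"""

r1 = sorted(cur.execute(sublink_form).fetchall())
r2 = sorted(cur.execute(joined_form).fetchall())
assert r1 == r2  # both forms keep exactly the same rows
```

The equivalence holds for this predicate because a foo row with no matching bar group fails 'a > NULL' in the sublink form and simply finds no join partner in the pulled-up form.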
Post it here to see if it is\nthe right thing we want.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAKU4AWodctmbU%2BZj6U83y_RniQk0UeXBvKH1ZaJ%3DLR_iC90GOw%40mail.gmail.com\n\nThanks\nRichard", "msg_date": "Fri, 28 Feb 2020 14:35:23 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Trying to pull up EXPR SubLinks" }, { "msg_contents": "On Fri, Feb 28, 2020 at 2:35 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> Hi All,\n>\n> Currently we will not consider EXPR_SUBLINK when pulling up sublinks and\n> this would cause performance issues for some queries with the form of:\n> 'a > (SELECT agg(b) from ...)' as described in [1].\n>\n> So here is a patch as an attempt to pull up EXPR SubLinks. The idea,\n> which is based on Greenplum's implementation, is to perform the\n> following transformation.\n>\n> For query:\n>\n> select * from foo where foo.a >\n> (select avg(bar.a) from bar where foo.b = bar.b);\n>\n> we transform it to:\n>\n> select * from foo inner join\n> (select bar.b, avg(bar.a) as avg from bar group by bar.b) sub\n> on foo.b = sub.b and foo.a > sub.avg;\n>\n\nGlad to see this. I think the hard part is this transform is not *always*\ngood. for example foo.a only has 1 rows, but bar has a lot of rows, if so\nthe original would be the better one. does this patch consider this\nproblem?\n\n\n> Thanks\n> Richard\n>", "msg_date": "Fri, 28 Feb 2020 15:02:28 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Trying to pull up EXPR SubLinks" }, { "msg_contents": "On Fri, Feb 28, 2020 at 3:02 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Fri, Feb 28, 2020 at 2:35 PM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n>\n>> Hi All,\n>>\n>> Currently we will not consider EXPR_SUBLINK when pulling up sublinks and\n>> this would cause performance issues for some queries with the form of:\n>> 'a > (SELECT agg(b) from ...)' as described in [1].\n>>\n>> So here is a patch as an attempt to pull up EXPR SubLinks. The idea,\n>> which is based on Greenplum's implementation, is to perform the\n>> following transformation.\n>>\n>> For query:\n>>\n>> select * from foo where foo.a >\n>> (select avg(bar.a) from bar where foo.b = bar.b);\n>>\n>> we transform it to:\n>>\n>> select * from foo inner join\n>> (select bar.b, avg(bar.a) as avg from bar group by bar.b) sub\n>> on foo.b = sub.b and foo.a > sub.avg;\n>>\n>\n> Glad to see this. I think the hard part is this transform is not *always*\n> good. for example foo.a only has 1 rows, but bar has a lot of rows, if\n> so\n> the original would be the better one.\n>\n\nYes exactly. TBH I'm not sure how to achieve that. Currently in the\npatch this transformation happens in the stage of preprocessing the\njointree.
We do not have enough information at this time to tell which\nis better between the transformed one and untransformed one.\n\nIf we want to choose the better one by cost comparison, then we need to\nplan the query twice, one for the transformed query and one for the\nuntransformed query. But this seems infeasible in current optimizer's\narchitecture.\n\nAny ideas on this part?\n\nThanks\nRichard", "msg_date": "Fri, 28 Feb 2020 19:49:59 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Trying to pull up EXPR SubLinks" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Fri, Feb 28, 2020 at 3:02 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> Glad to see this. I think the hard part is this transform is not *always*\n>> good. for example foo.a only has 1 rows, but bar has a lot of rows, if\n>> so the original would be the better one.\n\n> Yes exactly. TBH I'm not sure how to achieve that.\n\nYeah, I was about to make the same objection when I saw Andy already had.\nWithout some moderately-reliable way of estimating whether the change\nis actually a win, I think we're better off leaving it out. The user\ncan always rewrite the query for themselves if the grouped implementation\nwould be better -- but if the planner just does it blindly, there's no\nrecourse when it's worse.\n\n> Any ideas on this part?\n\nI wonder whether it'd be possible to rewrite the query, but then\nconsider two implementations, one where the equality clause is\npushed down into the aggregating subquery as though it were LATERAL.\nYou'd want to be able to figure out that the presence of that clause\nmade it unnecessary to do the GROUP BY ... but having done so, a\nplan treating the aggregating subquery as LATERAL ought to be pretty\nnearly performance-equivalent to the current way. So this could be\nmechanized in the current planner structure by treating that as a\nparameterized path for the subquery, and comparing it to unparameterized\npaths that calculate the full grouped output.\n\nObviously it'd be a long slog from here to there, but it seems like\nmaybe that could be made to work.
There's a separate question about\nwhether it's really worth the trouble, seeing that the optimization\nis available today to people who rewrite their queries.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Feb 2020 10:35:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Trying to pull up EXPR SubLinks" }, { "msg_contents": "On Fri, Feb 28, 2020 at 11:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > On Fri, Feb 28, 2020 at 3:02 PM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n> >> Glad to see this. I think the hard part is this transform is not\n> *always*\n> >> good. for example foo.a only has 1 rows, but bar has a lot of rows, if\n> >> so the original would be the better one.\n>\n> > Yes exactly. TBH I'm not sure how to achieve that.\n>\n> Yeah, I was about to make the same objection when I saw Andy already had.\n> Without some moderately-reliable way of estimating whether the change\n> is actually a win, I think we're better off leaving it out. The user\n> can always rewrite the query for themselves if the grouped implementation\n> would be better -- but if the planner just does it blindly, there's no\n> recourse when it's worse.\n>\n\nYes, that makes sense.\n\n\n>\n> > Any ideas on this part?\n>\n> I wonder whether it'd be possible to rewrite the query, but then\n> consider two implementations, one where the equality clause is\n> pushed down into the aggregating subquery as though it were LATERAL.\n> You'd want to be able to figure out that the presence of that clause\n> made it unnecessary to do the GROUP BY ... but having done so, a\n> plan treating the aggregating subquery as LATERAL ought to be pretty\n> nearly performance-equivalent to the current way. 
So this could be
> mechanized in the current planner structure by treating that as a
> parameterized path for the subquery, and comparing it to unparameterized
> paths that calculate the full grouped output.
>

I suppose this would happen in/around function set_subquery_pathlist.
When we generate access paths for the subquery, we try to push down the
equality clause into subquery, remove the unnecessary GROUP BY, etc.
and then perform another run of subquery_planner to generate the
parameterized path, and add it to the RelOptInfo for the subquery. So
that we can do comparison to unparameterized paths.

Am I understanding it correctly?


>
> Obviously it'd be a long slog from here to there, but it seems like
> maybe that could be made to work. There's a separate question about
> whether it's really worth the trouble, seeing that the optimization
> is available today to people who rewrite their queries.
>

If I understand correctly as above, yes, this would take quite a lot of
effort. Not sure if it's still worth doing.

Thanks
Richard", "msg_date": "Mon, 2 Mar 2020 16:21:18 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Trying to pull up EXPR SubLinks" }, { "msg_contents": "Actually I have a different opinion to handle this issue, to execute the
a > (select avg(a) from tinner where x = touer.x); The drawback of current
path is because it may calculates the same touer.x value multi-times. 
So\nif we cache the values we have calculated before, we can avoid the cost.\nMaterial path may be the one we can reference but it assumes all the tuples\nin the tuplestore matches the input params, which is not the fact here.\n\nBut what if the input params doesn't change? If so we can use Material path\nto optimize this case. But since we don't know if the if the input params\nchanged\nor not during plan time, we just add the path (let's assume we can add it\nwith some\nrules or cost calculation). If the input params is not changed, we use the\ncached\nvalues, if the input params changed, we can ReScan the Material node. To\noptimize\nthe the cache invalidation frequent issue like (1, 2, 1, 2, 1, 2) case, we\nmay consider\na sort path to change the input values to (1, 1, 1, 2, 2, 2). But overall\nit is a big effort.\n\nAs a independent small optimization maybe if the input params doesn't\nchange, we\ncan use the tuples in the Material node again. Suppose it will not\ndemage our current\nframework if we can add the material path by either rules based or cost\nbased.\n\nSuppose we have the following data:\n\ndemo=# select * from j1 limit 10;\n i | im5 | im100 | im1000\n----+-----+-------+--------\n 1 | 1 | 1 | 1\n 2 | 2 | 2 | 2\n 3 | 3 | 3 | 3\n 4 | 4 | 4 | 4\n 5 | 0 | 5 | 5\n 6 | 1 | 6 | 6\n 7 | 2 | 7 | 7\n 8 | 3 | 8 | 8\n 9 | 4 | 9 | 9\n 10 | 0 | 10 | 10\n(10 rows)\n\ntotally we have j1 = 10,000,002 rows, the extra 2 rows because we have 3\nrows for i=1\ndemo=# select * from j1 where i = 1;\n i | im5 | im100 | im1000\n---+-----+-------+--------\n 1 | 1 | 1 | 1\n 1 | 1 | 1 | 1\n 1 | 1 | 1 | 1\n(3 rows)\n\nThen select * from j1 j1o where im5 = (select avg(im5) from j1 where im5 =\nj1o.im5) and i = 1;\nwill hit our above optimizations. 
The plan is\n\n QUERY PLAN\n-----------------------------------------------\n Index Scan using j1_idx1 on j1 j1o\n Index Cond: (i = 1)\n Filter: ((im5)::numeric < (SubPlan 1))\n SubPlan 1\n -> Materialize\n -> Aggregate\n -> Seq Scan on j1\n Filter: (im5 = j1o.im5)\n(8 rows)\n\nand the Aggregate is just executed once (execution time dropped from 8.x s\nto 2.6s).\n\n----\nThe attached is a very PoC patch, but it can represent my idea for\ncurrent discuss, Some notes about the implementation.\n\n1. We need to check if the input params is really not changed. Currently\nI just\ncomment it out for quick test.\n\n- planstate->chgParam = bms_add_member(planstate->chgParam,\nparamid);\n+ // planstate->chgParam =\nbms_add_member(planstate->chgParam, paramid);\n\nLooks we have a lot of places to add a params\nto chgParam without checking the actual value. The place I found this case\nis\nduring ExecNestLoop. So we may need a handy and efficient way to do the\ncheck for all the places. However it is not a must for current case\n\n2. I probably misunderstand the the usage of MaterialState->eflags.\nsince I don't\nknow why the eflag need to be checked ExecMaterial. and I have to remove\nit to\nlet my PoC work.\n\n- if (tuplestorestate == NULL && node->eflags != 0)\n+ if (tuplestorestate == NULL)\n\n\n3. I added the material path in a very hacked way, the if check just to\nmake\nsure it take effect on my test statement only. 
If you want to test this\npatch locally,\nyou need to change the oid for your case.\n\n+ if (linitial_node(RangeTblEntry, root->parse->rtable)->relid ==\n25634)\n+ best_path = (Path *) create_material_path(final_rel,\nbest_path);\n\nBut when we take this action to production case, how to cost this strategy\nis\nchallenge since it can neither reduce the total_cost nor result in a new\nPathKey.\nI will check other place to see how this kind can be added.\n\n\nBest Regards\nAndy Fan", "msg_date": "Fri, 24 Apr 2020 11:25:14 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Trying to pull up EXPR SubLinks" }, { "msg_contents": "On Fri, Apr 24, 2020 at 8:56 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> Actually I have a different opinion to handle this issue, to execute the\n> a > (select avg(a) from tinner where x = touer.x); The drawback of current\n> path is because it may calculates the same touer.x value multi-times. So\n> if we cache the values we have calculated before, we can avoid the cost.\n> Material path may be the one we can reference but it assumes all the tuples\n> in the tuplestore matches the input params, which is not the fact here.\n>\n> But what if the input params doesn't change? If so we can use Material path\n> to optimize this case. But since we don't know if the if the input params changed\n> or not during plan time, we just add the path (let's assume we can add it with some\n> rules or cost calculation). If the input params is not changed, we use the cached\n> values, if the input params changed, we can ReScan the Material node. To optimize\n> the the cache invalidation frequent issue like (1, 2, 1, 2, 1, 2) case, we may consider\n> a sort path to change the input values to (1, 1, 1, 2, 2, 2). But overall it is a big effort.\n>\n> As a independent small optimization maybe if the input params doesn't change, we\n> can use the tuples in the Material node again. 
Suppose it will not demage our current\n> framework if we can add the material path by either rules based or cost based.\n>\n> Suppose we have the following data:\n>\n> demo=# select * from j1 limit 10;\n> i | im5 | im100 | im1000\n> ----+-----+-------+--------\n> 1 | 1 | 1 | 1\n> 2 | 2 | 2 | 2\n> 3 | 3 | 3 | 3\n> 4 | 4 | 4 | 4\n> 5 | 0 | 5 | 5\n> 6 | 1 | 6 | 6\n> 7 | 2 | 7 | 7\n> 8 | 3 | 8 | 8\n> 9 | 4 | 9 | 9\n> 10 | 0 | 10 | 10\n> (10 rows)\n>\n> totally we have j1 = 10,000,002 rows, the extra 2 rows because we have 3 rows for i=1\n> demo=# select * from j1 where i = 1;\n> i | im5 | im100 | im1000\n> ---+-----+-------+--------\n> 1 | 1 | 1 | 1\n> 1 | 1 | 1 | 1\n> 1 | 1 | 1 | 1\n> (3 rows)\n>\n> Then select * from j1 j1o where im5 = (select avg(im5) from j1 where im5 = j1o.im5) and i = 1;\n> will hit our above optimizations. The plan is\n>\n> QUERY PLAN\n> -----------------------------------------------\n> Index Scan using j1_idx1 on j1 j1o\n> Index Cond: (i = 1)\n> Filter: ((im5)::numeric < (SubPlan 1))\n> SubPlan 1\n> -> Materialize\n> -> Aggregate\n> -> Seq Scan on j1\n> Filter: (im5 = j1o.im5)\n> (8 rows)\n>\n> and the Aggregate is just executed once (execution time dropped from 8.x s\n> to 2.6s).\n>\n> ----\n> The attached is a very PoC patch, but it can represent my idea for\n> current discuss, Some notes about the implementation.\n>\n> 1. We need to check if the input params is really not changed. Currently I just\n> comment it out for quick test.\n>\n> - planstate->chgParam = bms_add_member(planstate->chgParam, paramid);\n> + // planstate->chgParam = bms_add_member(planstate->chgParam, paramid);\n>\n> Looks we have a lot of places to add a params\n> to chgParam without checking the actual value. The place I found this case is\n> during ExecNestLoop. So we may need a handy and efficient way to do the\n> check for all the places. However it is not a must for current case\n>\n> 2. I probably misunderstand the the usage of MaterialState->eflags. 
since I don't\n> know why the eflag need to be checked ExecMaterial. and I have to remove it to\n> let my PoC work.\n>\n> - if (tuplestorestate == NULL && node->eflags != 0)\n> + if (tuplestorestate == NULL)\n>\n>\n> 3. I added the material path in a very hacked way, the if check just to make\n> sure it take effect on my test statement only. If you want to test this patch locally,\n> you need to change the oid for your case.\n>\n> + if (linitial_node(RangeTblEntry, root->parse->rtable)->relid == 25634)\n> + best_path = (Path *) create_material_path(final_rel, best_path);\n\nCan we just directly add the material path on top of the best path? I\nmean there are possibilities that we might not get any benefit of the\nmaterial because there is no duplicate from the outer node but we are\npaying the cost of materialization right? The correct idea would be\nthat we should select this based on the cost comparison. Basically,\nwe can consider how many duplicates we have from the outer table\nvariable no?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 Apr 2020 14:20:59 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Trying to pull up EXPR SubLinks" }, { "msg_contents": ">\n>\n> > 3. I added the material path in a very hacked way, the if check just\n> to make\n> > sure it take effect on my test statement only. If you want to test this\n> patch locally,\n> > you need to change the oid for your case.\n> >\n> > + if (linitial_node(RangeTblEntry, root->parse->rtable)->relid ==\n> 25634)\n> > + best_path = (Path *) create_material_path(final_rel,\n> best_path);\n>\nCan we just directly add the material path on top of the best path? I\n> mean there are possibilities that we might not get any benefit of the\n> material because there is no duplicate from the outer node but we are\n> paying the cost of materialization right? 
The correct idea would be
> that we should select this based on the cost comparison.  Basically,
> we can consider how many duplicates we have from the outer table
> variable no?
>

Thanks for interesting of it. Of course we can't add the material path on
best path,
that's why I say it is a very hacked way. and say "how to cost this
strategy is
challenge " (the part you striped when you reply the email). But we have
to
test a path first (it must be helpful on some case at least) and the
result is correct,
then we think about how to cost it. The purpose of my writing is about the
first step
and see what people think about it.

As for how to cost it, I'm agreed with your suggestion, but we may need
more
than that, like. (1, 2, 1) and (1, 1, 2) is same for your suggestion, but
they
are not different in this path. and we also may be think about if we can
get a lower cost if we add a new sort path.

Best Regards
Andy Fan", "msg_date": "Fri, 24 Apr 2020 17:11:16 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Trying to pull up EXPR SubLinks" }, { "msg_contents": "On Fri, Apr 24, 2020 at 2:42 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:
>>
>>
>> > 3.  I added the material path in a very hacked way, the if check  just to make
>> > sure it take effect on my test statement only.  If you want to test this patch locally,
>> > you need to change the oid for your case.
>> >
>> > +       if (linitial_node(RangeTblEntry, root->parse->rtable)->relid == 25634)
>> > +               best_path = (Path *) create_material_path(final_rel, best_path);
>>
>> Can we just directly add the material path on top of the best path?  I
>> mean there are possibilities that we might not get any benefit of the
>> material because there is no duplicate from the outer node but we are
>> paying the cost of materialization right?   The correct idea would be
>> that we should select this based on the cost comparison.  Basically,
>> we can consider how many duplicates we have from the outer table
>> variable no?
>
>
> Thanks for interesting of it. Of course we can't add the material path on best path,
> that's why I say it is a very hacked way. and say "how to cost this strategy is
> challenge "  (the part you striped when you reply the email).

Right, I see that now.  Thanks for pointing it out.

 But we have to
> test a path first (it must  be helpful on some case at least) and the result is correct,
> then we think about how to cost it. The purpose of my writing is about the first step
> and see what people think about it.

Ok

>
> As for how to cost it, I'm agreed with your suggestion, but we may need more
> than that, like.  (1, 2, 1) and (1, 1, 2) is same for your suggestion, but they
> are not  different in this path.

Valid point.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 24 Apr 2020 14:45:46 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Trying to pull up EXPR SubLinks" }, { "msg_contents": "On Fri, 24 Apr 2020 at 15:26, Andy Fan <zhihui.fan1213@gmail.com> wrote:
>
> Actually I have a different opinion to handle this issue,  to execute the
> a > (select avg(a) from tinner where x = touer.x);  The drawback of current
> path is because it may calculates the same touer.x value multi-times.  So
> if we cache the values we have calculated before, we can avoid the cost.
> Material path may be the one we can reference but it assumes all the tuples
> in the tuplestore matches the input params, which is not the fact here.
>
> But what if the input params doesn't change?  If so we can use Material path
> to optimize this case.  But since we don't know if the if the input params changed
> or not during plan time,  we just add the path (let's assume we can add it with some
> rules or cost calculation).  If the input params is not changed, we use the cached
> values,  if the input params changed,  we can ReScan the Material node. 
To optimize\n> the the cache invalidation frequent issue like (1, 2, 1, 2, 1, 2) case, we may consider\n> a sort path to change the input values to (1, 1, 1, 2, 2, 2). But overall it is a big effort.\n\nThis does not seem quite right to me. What you need is some sort of\nparameterized materialize. Materialize just reads its subnode and\nstores the entire thing input and reuses it any time that it\nrescanned.\n\nYou likely need something more like what is mentioned in [1]. There's\nalso a bunch of code from Heikki in the initial email in that thread.\nHeikki put it in nodeSubplan.c. I think it should be a node of its\nown.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKJS1f-kAk1cGVvzg9TXCLhPsxx_oFVOrTGSR5yTRXKWntTVFA@mail.gmail.com\n\n\n", "msg_date": "Fri, 24 Apr 2020 21:24:03 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Trying to pull up EXPR SubLinks" }, { "msg_contents": "On Fri, Apr 24, 2020 at 5:24 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 24 Apr 2020 at 15:26, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > Actually I have a different opinion to handle this issue, to execute the\n> > a > (select avg(a) from tinner where x = touer.x); The drawback of\n> current\n> > path is because it may calculates the same touer.x value multi-times. So\n> > if we cache the values we have calculated before, we can avoid the cost.\n> > Material path may be the one we can reference but it assumes all the\n> tuples\n> > in the tuplestore matches the input params, which is not the fact here.\n> >\n> > But what if the input params doesn't change? If so we can use Material\n> path\n> > to optimize this case. But since we don't know if the if the input\n> params changed\n> > or not during plan time, we just add the path (let's assume we can add\n> it with some\n> > rules or cost calculation). 
If the input params is not changed, we use
> the cached
> > values, if the input params changed, we can ReScan the Material node.
> To optimize
> > the the cache invalidation frequent issue like (1, 2, 1, 2, 1, 2) case,
> we may consider
> > a sort path to change the input values to (1, 1, 1, 2, 2, 2).  But
> overall it is a big effort.
>
> This does not seem quite right to me. What you need is some sort of
> parameterized materialize. Materialize just reads its subnode and
> stores the entire thing input and reuses it any time that it
> rescanned.
>
> You likely need something more like what is mentioned in [1]. There's
> also a bunch of code from Heikki in the initial email in that thread.
> Heikki put it in nodeSubplan.c. I think it should be a node of its
> own.
>
>
Glad to see your feedback, David:). Actually I thought about this idea
some
time ago, but since we have to implement a new path and handle
the cached data is too huge case, I gave it up later. When I am working
on some other stuff, I found Material path with some chgParam change may
get a no harmful improvement with less effort, based on we know how to
add the material path and we can always get a correct result.

I will check the link you provide when I get time, It's a nice feature and
it will be a
good place to continue working on that feature.

Best Regards
Andy Fan", "msg_date": "Fri, 24 Apr 2020 17:54:03 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Trying to pull up EXPR SubLinks" } ]
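To make the rewrite debated in the thread above concrete, here is a toy Python sketch of the transformation from `foo.a > (select avg(bar.a) from bar where foo.b = bar.b)` into a join against a grouped subquery. This is not PostgreSQL planner code; the sample rows and function names are invented for illustration. It only demonstrates the semantic point: a `foo` row with no matching `bar` rows is dropped either way (the scalar subquery yields NULL, so the comparison fails; the inner join produces no row), so the two forms return the same rows. It says nothing about which plan is cheaper, which is the open costing question in the thread.

```python
from collections import defaultdict

# foo and bar are lists of (a, b) pairs standing in for table rows.

def original_form(foo, bar):
    """Evaluate the correlated scalar subquery per foo row."""
    out = []
    for a, b in foo:
        matches = [ba for ba, bb in bar if bb == b]
        # No matches -> avg() is NULL -> comparison is not true -> row dropped.
        if matches and a > sum(matches) / len(matches):
            out.append((a, b))
    return out

def transformed_form(foo, bar):
    """Join foo against a pre-grouped (b, avg(a)) derived table."""
    groups = defaultdict(list)
    for ba, bb in bar:
        groups[bb].append(ba)
    sub = {bb: sum(vals) / len(vals) for bb, vals in groups.items()}
    # Inner join: foo rows whose b has no group are dropped, matching
    # the NULL-comparison behavior of the original form.
    return [(a, b) for a, b in foo if b in sub and a > sub[b]]

foo = [(10, 1), (1, 1), (5, 2), (7, 3)]
bar = [(4, 1), (6, 1), (5, 2), (9, 9)]
print(original_form(foo, bar))     # [(10, 1)]
print(transformed_form(foo, bar))  # [(10, 1)]
```

The original form recomputes the aggregate once per outer row, while the transformed form aggregates `bar` once up front; that trade-off is exactly why the thread concludes the rewrite is not *always* a win (a tiny `foo` against a huge `bar` favors the original).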
[ { "msg_contents": "Hello, this is a followup thread of [1].\n\n# I didn't noticed that the thread didn't cover -hackers..\n\nWhen recovery of any type ends, we see several kinds of error messages\nthat says \"WAL is broken\".\n\n> LOG: invalid record length at 0/7CB6BC8: wanted 24, got 0\n> LOG: redo is not required\n> LOG: database system is ready to accept connections\n\nThis patch reduces the scariness of such messages as the follows.\n\n> LOG: rached end of WAL at 0/1551048 on timeline 1 in pg_wal during crash recovery\n> DETAIL: invalid record length at 0/1551048: wanted 24, got 0\n> LOG: redo is not required\n> LOG: database system is ready to accept connections\n\n[1] https://www.postgresql.org/message-id/20200117.172655.1384889922565817808.horikyota.ntt%40gmail.com\n\nI'll register this to the coming CF.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 28 Feb 2020 16:01:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Make mesage at end-of-recovery less scary." }, { "msg_contents": "On Fri, Feb 28, 2020 at 04:01:00PM +0900, Kyotaro Horiguchi wrote:\n> Hello, this is a followup thread of [1].\n> \n> # I didn't noticed that the thread didn't cover -hackers..\n> \n> When recovery of any type ends, we see several kinds of error messages\n> that says \"WAL is broken\".\n\nHave you considered an error context here? Your patch leads to a bit\nof duplication with the message a bit down of what you are changing\nwhere the end of local pg_wal is reached.\n\n> +\t* reached the end of WAL. Otherwise something's really wrong and\n> +\t* we report just only the errormsg if any. If we don't receive\n\nThis sentence sounds strange to me. 
Or you meant \"Something is wrong,\nso use errormsg as report if it is set\"?\n\n> +\t\t\t * Note: errormsg is alreay translated.\n\nTypo here.\n\n> +\tif (StandbyMode)\n> +\t\tereport(actual_emode,\n> +\t\t\t(errmsg (\"rached end of WAL at %X/%X on timeline %u in %s during streaming replication\",\n\nStandbyMode happens also with only WAL archiving, depending on if\nprimary_conninfo is set or not.\n\n> +\t(errmsg (\"rached end of WAL at %X/%X on timeline %u in %s during crash recovery\",\n\nFWIW, you are introducing three times the same typo, in the same\nword, in three different messages.\n--\nMichael", "msg_date": "Fri, 28 Feb 2020 16:33:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Thank you for the comments.\n\nAt Fri, 28 Feb 2020 16:33:18 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Feb 28, 2020 at 04:01:00PM +0900, Kyotaro Horiguchi wrote:\n> > Hello, this is a followup thread of [1].\n> > \n> > # I didn't noticed that the thread didn't cover -hackers..\n> > \n> > When recovery of any type ends, we see several kinds of error messages\n> > that says \"WAL is broken\".\n> \n> Have you considered an error context here? Your patch leads to a bit\n> of duplication with the message a bit down of what you are changing\n> where the end of local pg_wal is reached.\n\nIt is a DEBUG message and it is for the time moving from crash\nrecovery to archive recovery. I could remove that but decided to leave\nit for tracability.\n\n> > +\t* reached the end of WAL. Otherwise something's really wrong and\n> > +\t* we report just only the errormsg if any. If we don't receive\n> \n> This sentence sounds strange to me. Or you meant \"Something is wrong,\n> so use errormsg as report if it is set\"?\n\nThe whole comment there follows.\n| recovery. If we get here during recovery, we can assume that we\n| reached the end of WAL. 
Otherwise something's really wrong and\n| we report just only the errormsg if any. If we don't receive\n| errormsg here, we already logged something. We don't emit\n| \"reached end of WAL\" in muted messages.\n\n\"Othhersise\" means \"other than the case of recovery\". \"Just only the\nerrmsg\" means \"show the message not as a part the message \"reached end\nof WAL\".\n\n> > +\t\t\t * Note: errormsg is alreay translated.\n> \n> Typo here.\n\nThanks. Will fix along with \"rached\".\n\n> > +\tif (StandbyMode)\n> > +\t\tereport(actual_emode,\n> > +\t\t\t(errmsg (\"rached end of WAL at %X/%X on timeline %u in %s during streaming replication\",\n> \n> StandbyMode happens also with only WAL archiving, depending on if\n> primary_conninfo is set or not.\n\nRight. I'll fix it. Maybe to \"during standby mode\".\n\n> > +\t(errmsg (\"rached end of WAL at %X/%X on timeline %u in %s during crash recovery\",\n> \n> FWIW, you are introducing three times the same typo, in the same\n> word, in three different messages.\n\nThey're copy-pasto. I refrained from constructing an error message\nfrom multiple nonindipendent parts. Are you suggesting to do so?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 Feb 2020 17:28:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Hello.\n\nI changed the condition from randAccess to fetching_ckpt considering\nthe discussion in another thread [1]. Then I moved the block that\nshows the new messages to more appropriate place.\n\nAt Fri, 28 Feb 2020 17:28:06 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > \n> > Have you considered an error context here? 
Your patch leads to a bit\n> > of duplication with the message a bit down of what you are changing\n> > where the end of local pg_wal is reached.\n> \n> It is a DEBUG message and it is for the time moving from crash\n> recovery to archive recovery. I could remove that but decided to leave\n> it for tracability.\n\nI modified the message so that it has the same look to the new\nmessages, but I left it being DEBUG1, since it is just a intermediate\nstate. We should finally see one of the new three messages.\n\nAfter the messages changed, another message from wal sender came to\nlook redundant.\n\n| [20866] LOG: replication terminated by primary server\n| [20866] DETAIL: End of WAL reached on timeline 1 at 0/30001C8.\n| [20866] FATAL: could not send end-of-streaming message to primary: no COPY in progress\n| [20851] LOG: reached end of WAL at 0/30001C8 on timeline 1 in archive during standby mode\n| [20851] DETAIL: invalid record length at 0/30001C8: wanted 24, got 0\n\nI changed the above to the below, which looks more adequate.\n\n| [24271] LOG: replication terminated by primary server on timeline 1 at 0/3000240.\n| [24271] FATAL: could not send end-of-streaming message to primary: no COPY in progress\n| [24267] LOG: reached end of WAL at 0/3000240 on timeline 1 in archive during standby mode\n| [24267] DETAIL: invalid record length at 0/3000240: wanted 24, got 0\n\n> > > +\t* reached the end of WAL. Otherwise something's really wrong and\n> > > +\t* we report just only the errormsg if any. If we don't receive\n> > \n> > This sentence sounds strange to me. Or you meant \"Something is wrong,\n> > so use errormsg as report if it is set\"?\n\nThe message no longer exists.\n\n> > > +\t(errmsg (\"rached end of WAL at %X/%X on timeline %u in %s during crash recovery\",\n> > \n> > FWIW, you are introducing three times the same typo, in the same\n> > word, in three different messages.\n> \n> They're copy-pasto. 
I refrained from constructing an error message\n> from multiple non-independent parts. Are you suggesting to do so?\n\nThe three-fold repetition of almost the same phrase is very unreadable. I\nrewrote it in a simpler shape.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 05 Mar 2020 16:06:50 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "On 2020-03-05 08:06, Kyotaro Horiguchi wrote:\n> | [20866] LOG: replication terminated by primary server\n> | [20866] DETAIL: End of WAL reached on timeline 1 at 0/30001C8.\n> | [20866] FATAL: could not send end-of-streaming message to primary: no COPY in progress\n> | [20851] LOG: reached end of WAL at 0/30001C8 on timeline 1 in archive during standby mode\n> | [20851] DETAIL: invalid record length at 0/30001C8: wanted 24, got 0\n> \n> I changed the above to the below, which looks more adequate.\n> \n> | [24271] LOG: replication terminated by primary server on timeline 1 at 0/3000240.\n> | [24271] FATAL: could not send end-of-streaming message to primary: no COPY in progress\n> | [24267] LOG: reached end of WAL at 0/3000240 on timeline 1 in archive during standby mode\n> | [24267] DETAIL: invalid record length at 0/3000240: wanted 24, got 0\n\nIs this the before and after? That doesn't seem like a substantial \nimprovement to me. You still get the \"scary\" message at the end.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 23 Mar 2020 10:37:16 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." 
}, { "msg_contents": "On Mon, Mar 23, 2020 at 2:37 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-03-05 08:06, Kyotaro Horiguchi wrote:\n> > | [20866] LOG: replication terminated by primary server\n> > | [20866] DETAIL: End of WAL reached on timeline 1 at 0/30001C8.\n> > | [20866] FATAL: could not send end-of-streaming message to primary: no\n> COPY in progress\n> > | [20851] LOG: reached end of WAL at 0/30001C8 on timeline 1 in archive\n> during standby mode\n> > | [20851] DETAIL: invalid record length at 0/30001C8: wanted 24, got 0\n> >\n> > I changed the above to the below, which looks more adequate.\n> >\n> > | [24271] LOG: replication terminated by primary server on timeline 1\n> at 0/3000240.\n> > | [24271] FATAL: could not send end-of-streaming message to primary:\n> no COPY in progress\n> > | [24267] LOG: reached end of WAL at 0/3000240 on timeline 1 in\n> archive during standby mode\n> > | [24267] DETAIL: invalid record length at 0/3000240: wanted 24, got 0\n>\n> Is this the before and after? That doesn't seem like a substantial\n> improvement to me. 
You still get the \"scary\" message at the end.\n>\n\n+1 I agree it still reads scary and doesn't seem improvement.\n\nPlus, I am hoping message will improve for pg_waldump as well?\nSince it reads confusing and every-time have to explain new developer it's\nexpected behavior which is annoying.\n\npg_waldump: fatal: error in WAL record at 0/1553F70: invalid record length\nat 0/1553FA8: wanted 24, got 0\n\nOn Mon, Mar 23, 2020 at 2:37 AM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2020-03-05 08:06, Kyotaro Horiguchi wrote:\n> | [20866] LOG:  replication terminated by primary server\n> | [20866] DETAIL:  End of WAL reached on timeline 1 at 0/30001C8.\n> | [20866] FATAL:  could not send end-of-streaming message to primary: no COPY in progress\n> | [20851] LOG:  reached end of WAL at 0/30001C8 on timeline 1 in archive during standby mode\n> | [20851] DETAIL:  invalid record length at 0/30001C8: wanted 24, got 0\n> \n> I changed the above to the below, which looks more adequate.\n> \n> | [24271]  LOG:  replication terminated by primary server on timeline 1 at 0/3000240.\n> | [24271]  FATAL:  could not send end-of-streaming message to primary: no COPY in progress\n> | [24267]  LOG:  reached end of WAL at 0/3000240 on timeline 1 in archive during standby mode\n> | [24267]  DETAIL:  invalid record length at 0/3000240: wanted 24, got 0\n\nIs this the before and after?  That doesn't seem like a substantial \nimprovement to me.  
You still get the \"scary\" message at the end.+1 I agree it still reads scary and doesn't seem improvement.Plus, I am hoping message will improve for pg_waldump as well?Since it reads confusing and every-time have to explain new developer it's expected behavior which is annoying.pg_waldump: fatal: error in WAL record at 0/1553F70: invalid record length at 0/1553FA8: wanted 24, got 0", "msg_date": "Mon, 23 Mar 2020 10:43:09 -0700", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Hi,\n\nOn 2020-03-23 10:37:16 +0100, Peter Eisentraut wrote:\n> On 2020-03-05 08:06, Kyotaro Horiguchi wrote:\n> > | [20866] LOG: replication terminated by primary server\n> > | [20866] DETAIL: End of WAL reached on timeline 1 at 0/30001C8.\n> > | [20866] FATAL: could not send end-of-streaming message to primary: no COPY in progress\n\nIMO it's a bug that we see this FATAL. I seem to recall that we didn't\nuse to get that?\n\n\n> > | [20851] LOG: reached end of WAL at 0/30001C8 on timeline 1 in archive during standby mode\n> > | [20851] DETAIL: invalid record length at 0/30001C8: wanted 24, got 0\n> > \n> > I changed the above to the below, which looks more adequate.\n> > \n> > | [24271] LOG: replication terminated by primary server on timeline 1 at 0/3000240.\n> > | [24271] FATAL: could not send end-of-streaming message to primary: no COPY in progress\n> > | [24267] LOG: reached end of WAL at 0/3000240 on timeline 1 in archive during standby mode\n> > | [24267] DETAIL: invalid record length at 0/3000240: wanted 24, got 0\n> \n> Is this the before and after? That doesn't seem like a substantial\n> improvement to me. You still get the \"scary\" message at the end.\n\nIt seems like a minor improvement - folding the DETAIL into the message\nmakes sense to me here. 
But it indeed doesn't really address the main\nissue.\n\nI think we don't want to elide the information about how the end of the\nWAL was detected - there are some issues where I found that quite\nhelpful. But we could reformulate it to be clearer that it's informative\noutput, not a bug. E.g. something roughly like\n\nLOG: reached end of WAL at 0/3000240 on timeline 1 in archive during standby mode\nDETAIL: End detected due to invalid record length at 0/3000240: expected 24, got 0\n(I first elided the position in the DETAIL, but it could differ from the\none in LOG)\n\nI don't find that very satisfying, but I can't come up with something\nthat provides the current information, while being less scary than my\nsuggestion?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Mar 2020 12:47:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Hi,\n\nOn 2020-03-23 10:43:09 -0700, Ashwin Agrawal wrote:\n> Plus, I am hoping message will improve for pg_waldump as well?\n> Since it reads confusing and every-time have to explain new developer it's\n> expected behavior which is annoying.\n> \n> pg_waldump: fatal: error in WAL record at 0/1553F70: invalid record length\n> at 0/1553FA8: wanted 24, got 0\n\nWhat would you like to see here? There's inherently a lot less\ninformation about the context in waldump. We can't know whether it's to\nbe expected that the WAL ends at that point, or whether there was\ncorruption.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Mar 2020 12:49:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." 
}, { "msg_contents": "At Mon, 23 Mar 2020 12:47:36 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2020-03-23 10:37:16 +0100, Peter Eisentraut wrote:\n> > On 2020-03-05 08:06, Kyotaro Horiguchi wrote:\n> > > | [20866] LOG: replication terminated by primary server\n> > > | [20866] DETAIL: End of WAL reached on timeline 1 at 0/30001C8.\n> > > | [20866] FATAL: could not send end-of-streaming message to primary: no COPY in progress\n> \n> IMO it's a bug that we see this FATAL. I seem to recall that we didn't\n> use to get that?\n\nI thought that it is a convention that A auxiliary process uses ERROR\n(which is turned into FATAL in ereport) to exit, which I didn't like\nso much, but it was out of scope of this patch.\n\nAs for the message bove, the FATAL is preceded by the \"LOG:\nreplication terminated by\" message, that means walreceiver tries to\nsend new data after disconnection just to fail, which is\nunreasonable. I think we should exit immediately after detecting\ndisconnection. The FATAL is gone by the attached.\n\n> > > | [24267] LOG: reached end of WAL at 0/3000240 on timeline 1 in archive during standby mode\n> > > | [24267] DETAIL: invalid record length at 0/3000240: wanted 24, got 0\n> > \n> > Is this the before and after? That doesn't seem like a substantial\n> > improvement to me. You still get the \"scary\" message at the end.\n> \n> It seems like a minor improvement - folding the DETAIL into the message\n> makes sense to me here. But it indeed doesn't really address the main\n> issue.\n> \n> I think we don't want to elide the information about how the end of the\n> WAL was detected - there are some issues where I found that quite\n> helpful. But we could reformulate it to be clearer that it's informative\n> output, not a bug. E.g. 
something roughly like\n> \n> LOG: reached end of WAL at 0/3000240 on timeline 1 in archive during standby mode\n> DETAIL: End detected due to invalid record length at 0/3000240: expected 24, got 0\n> (I first elided the position in the DETAIL, but it could differ from the\n> one in LOG)\n> \n> I don't find that very satisfying, but I can't come up with something\n> that provides the current information, while being less scary than my\n> suggestion?\n\nThe 0-length record is not an \"invalid\" state during recovery, so we\ncan add the message for the state as \"record length is 0 at %X/%X\". I\nthink if other states found there, it implies something wrong.\n\nLSN is redundantly shown but I'm not sure if it is better to remove it\nfrom either of the two lines.\n\n| LOG: reached end of WAL at 0/3000850 on timeline 1 in pg_wal during crash recovery\n| DETAIL: record length is 0 at 0/3000850\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 24 Mar 2020 10:52:50 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "On 2020-03-24 02:52, Kyotaro Horiguchi wrote:\n>> I don't find that very satisfying, but I can't come up with something\n>> that provides the current information, while being less scary than my\n>> suggestion?\n> The 0-length record is not an \"invalid\" state during recovery, so we\n> can add the message for the state as \"record length is 0 at %X/%X\". 
I\n> think if other states found there, it implies something wrong.\n> \n> LSN is redundantly shown but I'm not sure if it is better to remove it\n> from either of the two lines.\n> \n> | LOG: reached end of WAL at 0/3000850 on timeline 1 in pg_wal during crash recovery\n> | DETAIL: record length is 0 at 0/3000850\n\nI'm not up to date on all these details, but my high-level idea would be \nsome kind of hint associated with the existing error messages, like:\n\nHINT: This is to be expected if this is the end of the WAL. Otherwise, \nit could indicate corruption.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 25 Mar 2020 13:53:23 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "On Wed, Mar 25, 2020 at 8:53 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> HINT: This is to be expected if this is the end of the WAL. Otherwise,\n> it could indicate corruption.\n\nFirst, I agree that this general issue is a problem, because it's come\nup for me in quite a number of customer situations. Either people get\nscared when they shouldn't, because the message is innocuous, or they\ndon't get scared about other things that actually are scary, because\nif some scary-looking messages are actually innocuous, it can lead\npeople to believe that the same is true in other cases.\n\nSecond, I don't really like the particular formulation you have above,\nbecause the user still doesn't know whether or not to be scared. Can\nwe figure that out? I think if we're in crash recovery, I think that\nwe should not be scared, because we have no alternative to assuming\nthat we've reached the end of WAL, so all crash recoveries will end\nlike this. 
If we're in archive recovery, we should definitely be\nscared if we haven't yet reached the minimum recovery point, because\nmore WAL than that should certainly exist. After that, it depends on\nhow we got the WAL. If it's being streamed, the question is whether\nwe've reached the end of what got streamed. If it's being copied from\nthe archive, we ought to have the whole segment, but maybe not more.\nCan we get the right context to the point where the error is being\nreported to know whether we hit the error at the end of the WAL that\nwas streamed? If not, can we somehow rejigger things so that we only\nmake it sound scary if we keep getting stuck at the same point when we\nwoud've expected to make progress meanwhile?\n\nI'm just spitballing here, but it would be really good if there's a\nway to know definitely whether or not you should be scared. Corrupted\nWAL segments are definitely a thing that happens, but retries are a\nlot more common.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 26 Mar 2020 12:40:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "On Thu, Mar 26, 2020 at 12:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Mar 25, 2020 at 8:53 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > HINT: This is to be expected if this is the end of the WAL. Otherwise,\n> > it could indicate corruption.\n>\n> First, I agree that this general issue is a problem, because it's come\n> up for me in quite a number of customer situations. 
Either people get\n> scared when they shouldn't, because the message is innocuous, or they\n> don't get scared about other things that actually are scary, because\n> if some scary-looking messages are actually innocuous, it can lead\n> people to believe that the same is true in other cases.\n>\n> Second, I don't really like the particular formulation you have above,\n> because the user still doesn't know whether or not to be scared. Can\n> we figure that out? I think if we're in crash recovery, I think that\n> we should not be scared, because we have no alternative to assuming\n> that we've reached the end of WAL, so all crash recoveries will end\n> like this. If we're in archive recovery, we should definitely be\n> scared if we haven't yet reached the minimum recovery point, because\n> more WAL than that should certainly exist. After that, it depends on\n> how we got the WAL. If it's being streamed, the question is whether\n> we've reached the end of what got streamed. If it's being copied from\n> the archive, we ought to have the whole segment, but maybe not more.\n> Can we get the right context to the point where the error is being\n> reported to know whether we hit the error at the end of the WAL that\n> was streamed? If not, can we somehow rejigger things so that we only\n> make it sound scary if we keep getting stuck at the same point when we\n> woud've expected to make progress meanwhile?\n>\n> I'm just spitballing here, but it would be really good if there's a\n> way to know definitely whether or not you should be scared. Corrupted\n> WAL segments are definitely a thing that happens, but retries are a\n> lot more common.\n\nFirst, I agree that getting enough context to say precisely is by far the ideal.\n\nThat being said, as an end user who's found this surprising -- and\nmomentarily scary every time I initially scan it even though I *know\nintellectually it's not* -- I would find Peter's suggestion a\nsignificant improvement over what we have now. 
I'm fairly certainly my\nco-workers on our database team would also. Knowing that something is\nat least not always scary is good. Though I'll grant that this does\nhave the negative in reverse: if it actually is a scary\nsituation...this mutes your concern level. On the other hand,\nmonitoring would tell us if there's a real problem (namely replication\nlag), so I think the trade-off is clearly worth it.\n\nHow about this minor tweak:\nHINT: This is expected if this is the end of currently available WAL.\nOtherwise, it could indicate corruption.\n\nThanks,\nJames\n\n\n", "msg_date": "Fri, 27 Mar 2020 22:25:29 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Hi Kyotaro,\n\nOn 3/27/20 10:25 PM, James Coleman wrote:\n> On Thu, Mar 26, 2020 at 12:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>>\n>> I'm just spitballing here, but it would be really good if there's a\n>> way to know definitely whether or not you should be scared. Corrupted\n>> WAL segments are definitely a thing that happens, but retries are a\n>> lot more common.\n> \n> First, I agree that getting enough context to say precisely is by far the ideal.\n> \n> That being said, as an end user who's found this surprising -- and\n> momentarily scary every time I initially scan it even though I *know\n> intellectually it's not* -- I would find Peter's suggestion a\n> significant improvement over what we have now. I'm fairly certainly my\n> co-workers on our database team would also. Knowing that something is\n> at least not always scary is good. Though I'll grant that this does\n> have the negative in reverse: if it actually is a scary\n> situation...this mutes your concern level. 
On the other hand,\n> monitoring would tell us if there's a real problem (namely replication\n> lag), so I think the trade-off is clearly worth it.\n> \n> How about this minor tweak:\n> HINT: This is expected if this is the end of currently available WAL.\n> Otherwise, it could indicate corruption.\n\nAny thoughts on the suggestions for making the messaging clearer?\n\nAlso, the patch no longer applies: \nhttp://cfbot.cputube.org/patch_32_2490.log.\n\nMarking this Waiting on Author.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 3 Mar 2021 11:14:20 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Wed, 3 Mar 2021 11:14:20 -0500, David Steele <david@pgmasters.net> wrote in \n> Hi Kyotaro,\n> \n> On 3/27/20 10:25 PM, James Coleman wrote:\n> > On Thu, Mar 26, 2020 at 12:41 PM Robert Haas <robertmhaas@gmail.com>\n> > wrote:\n> >>\n> >> I'm just spitballing here, but it would be really good if there's a\n> >> way to know definitely whether or not you should be scared. Corrupted\n> >> WAL segments are definitely a thing that happens, but retries are a\n> >> lot more common.\n> > First, I agree that getting enough context to say precisely is by far\n> > the ideal.\n> > That being said, as an end user who's found this surprising -- and\n> > momentarily scary every time I initially scan it even though I *know\n> > intellectually it's not* -- I would find Peter's suggestion a\n> > significant improvement over what we have now. I'm fairly certainly my\n> > co-workers on our database team would also. Knowing that something is\n> > at least not always scary is good. Though I'll grant that this does\n> > have the negative in reverse: if it actually is a scary\n> > situation...this mutes your concern level. 
On the other hand,\n> > monitoring would tell us if there's a real problem (namely replication\n> > lag), so I think the trade-off is clearly worth it.\n> > How about this minor tweak:\n> > HINT: This is expected if this is the end of currently available WAL.\n> > Otherwise, it could indicate corruption.\n> \n> Any thoughts on the suggestions for making the messaging clearer?\n> \n> Also, the patch no longer applies:\n> http://cfbot.cputube.org/patch_32_2490.log.\n\nSorry for missing the last discussions. I agree to the point about\nreally-scary situation.\n\nValidXLogRecordHeader deliberately marks End-Of-WAL only in the case\nof zero-length record so that the callers can identify that case,\ninstead of inferring the EOW state without it. All other invalid data\nis treated as potentially danger situation. I think this is a\nreasonable classification. And the error level for the \"danger\" cases\nis changed to WARNING (from LOG).\n\n\nAs the result, the following messages are emitted with the attached.\n\n- found zero-length record during recovery (the DETAIL might not be needed.)\n\n> LOG: redo starts at 0/14000118\n> LOG: reached end of WAL at 0/14C5D070 on timeline 1 in pg_wal during crash recovery\n> DETAIL: record length is 0 at 0/14C5D070\n> LOG: redo done at 0/14C5CF48 system usage: ...\n\n- found another kind of invalid data\n\n> LOG: redo starts at 0/150000A0\n> WARNING: invalid record length at 0/1500CA60: wanted 24, got 54\n> LOG: redo done at 0/1500CA28 system usage: ...\n\n\nOn the way checking the patch, I found that it emits the following log\nlines in the case the redo loop meets an invalid record at the\nstarting:\n\n> LOG: invalid record length at 0/10000118: wanted 24, got 42\n> LOG: redo is not required\n\nwhich doesn't look proper. That case is identifiable using the\nEnd-of_WAL flag this patch adds. 
Thus we get the following error\nmessages.\n\n\n- found end-of-wal at the beginning of recovery\n\n> LOG: reached end of WAL at 0/130000A0 on timeline 1 in pg_wal during crash recovery\n> DETAIL: record length is 0 at 0/130000A0\n> LOG: redo is not required\n\n- found invalid data\n\n> WARNING: invalid record length at 0/120000A0: wanted 24, got 42\n> WARNING: redo is skipped\n> HINT: This suggests WAL file corruption. You might need to check the database.\n\nThe logic of ErrRecPtr in ReadRecord may wrong. I remember having\nsuch an discussion before...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 04 Mar 2021 15:50:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "On 3/4/21, 10:50 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\r\n> As the result, the following messages are emitted with the attached.\r\n\r\nI'd like to voice my support for this effort, and I intend to help\r\nreview the patch. It looks like the latest patch no longer applies,\r\nso I've marked the commitfest entry [0] as waiting-on-author.\r\n\r\nNathan\r\n\r\n[0] https://commitfest.postgresql.org/35/2490/\r\n\r\n", "msg_date": "Fri, 22 Oct 2021 17:54:40 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Fri, 22 Oct 2021 17:54:40 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> On 3/4/21, 10:50 PM, \"Kyotaro Horiguchi\" <horikyota.ntt@gmail.com> wrote:\n> > As the result, the following messages are emitted with the attached.\n> \n> I'd like to voice my support for this effort, and I intend to help\n> review the patch. 
It looks like the latest patch no longer applies,\n> so I've marked the commitfest entry [0] as waiting-on-author.\n> \n> Nathan\n> \n> [0] https://commitfest.postgresql.org/35/2490/\n\nSorry for being late to reply. I rebased this to the current master.\n\n- rebased\n\n- use LSN_FORMAT_ARGS instead of bare shift and mask.\n\n- v4 immediately exited walreceiver on disconnection. Maybe I wanted\n not to see a FATAL message on standby after primary dies. However\n that would be another issue and that change was plain wrong.. v5\n just removes the \"end-of-WAL\" part from the message, which duplicate\n to what startup emits.\n\n- add a new error message \"missing contrecord at %X/%X\". Maybe this\n should be regarded as a leftover of the contrecord patch. In the\n attached patch the \"%X/%X\" is the LSN of the current record. The log\n messages look like this (026_overwrite_contrecord).\n\nLOG: redo starts at 0/1486CB8\nWARNING: missing contrecord at 0/1FFC2E0\nLOG: consistent recovery state reached at 0/1FFC2E0\nLOG: started streaming WAL from primary at 0/2000000 on timeline 1\nLOG: successfully skipped missing contrecord at 0/1FFC2E0, overwritten at 2021-11-08 14:50:11.969952+09\nCONTEXT: WAL redo at 0/2000028 for XLOG/OVERWRITE_CONTRECORD: lsn 0/1FFC2E0; time 2021-11-08 14:50:11.969952+09\n\nWhile checking the behavior for the case of missing-contrecord, I\nnoticed that emode_for_corrupt_record() doesn't work as expected since\nreadSource is reset to XLOG_FROM_ANY after a read failure. 
We could\nremember the last failed source but pg_wal should have been visited if\npage read error happened so I changed the function so that it treats\nXLOG_FROM_ANY the same way with XLOG_FROM_PG_WAL.\n\n(Otherwise we see \"LOG: reached end-of-WAL at ..\" message after\n \"WARNING: missing contrecord at..\" message.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 08 Nov 2021 14:59:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "On Mon, Nov 08, 2021 at 02:59:46PM +0900, Kyotaro Horiguchi wrote:\n> While checking the behavior for the case of missing-contrecord, I\n> noticed that emode_for_corrupt_record() doesn't work as expected since\n> readSource is reset to XLOG_FROM_ANY after a read failure. We could\n> remember the last failed source but pg_wal should have been visited if\n> page read error happened so I changed the function so that it treats\n> XLOG_FROM_ANY the same way with XLOG_FROM_PG_WAL.\n\nFWIW, I am not much a fan of assuming that it is fine to use\nXLOG_FROM_ANY as a condition here. The comments on top of\nemode_for_corrupt_record() make it rather clear what the expectations\nare, and this is the default readSource.\n\n> (Otherwise we see \"LOG: reached end-of-WAL at ..\" message after\n> \"WARNING: missing contrecord at..\" message.)\n\n+ /* broken record found */\n+ ereport(WARNING,\n+ (errmsg(\"redo is skipped\"),\n+ errhint(\"This suggests WAL file corruption. 
You might need to check the database.\")));\nThis looks rather scary to me, FWIW, and this could easily be reached\nif one forgets about EndOfWAL while hacking on xlogreader.c.\nUnlikely so, still.\n\n+ report_invalid_record(state,\n+ \"missing contrecord at %X/%X\",\n+ LSN_FORMAT_ARGS(RecPtr));\nIsn't there a risk here to break applications checking after error\nmessages stored in the WAL reader after seeing a contrecord?\n\n+ if (record->xl_tot_len == 0)\n+ {\n+ /* This is strictly not an invalid state, so phrase it as so. */\n+ report_invalid_record(state,\n+ \"record length is 0 at %X/%X\",\n+ LSN_FORMAT_ARGS(RecPtr));\n+ state->EndOfWAL = true;\n+ return false;\n+ }\nThis assumes that a value of 0 for xl_tot_len is a synonym of the end\nof WAL, but cannot we have also a corrupted record in this case in the\nshape of xl_tot_len being 0? We validate the full record after\nreading the header, so it seems to me that we should not assume that\nthings are just ending as proposed in this patch.\n--\nMichael", "msg_date": "Tue, 9 Nov 2021 09:53:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Thank you for the comments!\n\nAt Tue, 9 Nov 2021 09:53:15 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Nov 08, 2021 at 02:59:46PM +0900, Kyotaro Horiguchi wrote:\n> > While checking the behavior for the case of missing-contrecord, I\n> > noticed that emode_for_corrupt_record() doesn't work as expected since\n> > readSource is reset to XLOG_FROM_ANY after a read failure. We could\n> > remember the last failed source but pg_wal should have been visited if\n> > page read error happened so I changed the function so that it treats\n> > XLOG_FROM_ANY the same way with XLOG_FROM_PG_WAL.\n> \n> FWIW, I am not much a fan of assuming that it is fine to use\n> XLOG_FROM_ANY as a condition here. 
The comments on top of\n> emode_for_corrupt_record() make it rather clear what the expectations\n> are, and this is the default readSource.\n\nThe readSource is expected by the function to be the failed source but\nit goes back to XLOG_FROM_ANY on page read failure. So the function\n*is* standing on the wrong assumption. I noticed that currentSource\nholds the last accessed source (but forgot about that). So it is\nexactly what we need here. No longer need to introduce the unclear\nassumption by using it.\n\n> > (Otherwise we see \"LOG: reached end-of-WAL at ..\" message after\n> > \"WARNING: missing contrecord at..\" message.)\n> \n> + /* broken record found */\n> + ereport(WARNING,\n> + (errmsg(\"redo is skipped\"),\n> + errhint(\"This suggests WAL file corruption. You might need to check the database.\")));\n> This looks rather scary to me, FWIW, and this could easily be reached\n\nYes, the message is intentionally scary, since we don't come here in\nthe case of clean WAL:)\n\n> if one forgets about EndOfWAL while hacking on xlogreader.c.\n> Unlikely so, still.\n\nI don't understand. Isn't it the case of almost every feature?\n\nThe patch compells hackers to maintain the condition for recovery\nbeing considered cleanly ended. If the last record doesn't meet the\ncondition, the WAL file should be considered having a\nproblem. However, I don't see the condition expanded to have another\nterm in future.\n\nEven if someone including myself broke that condition, we will at\nworst unwantedly see a scary message. And I believe almost all\nhackers can easily find it a bug from the DETAILED message shown along\naside. 
I'm not sure such bugs could be found in development phase,\nthough..\n\n> + report_invalid_record(state,\n> + \"missing contrecord at %X/%X\",\n> + LSN_FORMAT_ARGS(RecPtr));\n> Isn't there a risk here to break applications checking after error\n> messages stored in the WAL reader after seeing a contrecord?\n\nI'm not sure you are mentioning the case where no message is stored\npreviously, or the case where already a message is stored. The former\nis fine as the record is actually broken. But I was missing the latter\ncase. In this version I avoided to overwite the error message.\n\n> + if (record->xl_tot_len == 0)\n> + {\n> + /* This is strictly not an invalid state, so phrase it as so. */\n> + report_invalid_record(state,\n> + \"record length is 0 at %X/%X\",\n> + LSN_FORMAT_ARGS(RecPtr));\n> + state->EndOfWAL = true;\n> + return false;\n> + }\n> This assumes that a value of 0 for xl_tot_len is a synonym of the end\n> of WAL, but cannot we have also a corrupted record in this case in the\n> shape of xl_tot_len being 0? We validate the full record after\n> reading the header, so it seems to me that we should not assume that\n> things are just ending as proposed in this patch.\n\nYeah, it's the most serious concern to me. So I didn't hide the\ndetailed message in the \"end-of-wal reached message\".\n\n> LOG: reached end of WAL at 0/512F198 on timeline 1 in pg_wal during crash recovery\n> DETAIL: record length is 0 at 0/512F210\n\nI believe everyone regards zero record length as fine unless something\nwrong is seen afterwards. However, we can extend the check to the\nwhole record header. I think it is by far nearer to the perfect for\nalmost all cases. The attached emits the following message for the\ngood (true end-of-WAL) case.\n\n> LOG: reached end of WAL at 0/512F4A0 on timeline 1 in pg_wal during crash recovery\n> DETAIL: empty record header found at 0/512F518\n\nIf garbage bytes are found in the header area, the following log will\nbe left. 
I think we can have a better message here.\n\n> WARNING: garbage record header at 0/2095458\n> LOG: redo done at 0/2095430 system usage: CPU: user: 0.03 s, system: 0.01 s, elapsed: 0.04 s\n\n\nThis is the updated version.\n\n- emode_for_currupt_record() now uses currentSource instead of\n readSource.\n\n- If zero record length is faced, make sure the whole header is zeroed\n before deciding it as the end-of-WAL.\n\n- Do not overwrite existig message when missing contrecord is\n detected. The message added here is seen in the TAP test log\n 026_overwrite_contrecord_standby.log\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 09 Nov 2021 16:27:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Tue, 09 Nov 2021 16:27:51 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> This is the updated version.\n> \n> - emode_for_currupt_record() now uses currentSource instead of\n> readSource.\n> \n> - If zero record length is faced, make sure the whole header is zeroed\n> before deciding it as the end-of-WAL.\n> \n> - Do not overwrite existig message when missing contrecord is\n> detected. The message added here is seen in the TAP test log\n> 026_overwrite_contrecord_standby.log\n\nd2ddfa681db27a138acb63c8defa8cc6fa588922 removed global variables\nReadRecPtr and EndRecPtr. This is rebased version that reads the LSNs\ndirectly from xlogreader instead of the removed global variables.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 08 Dec 2021 16:01:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": ">\n> d2ddfa681db27a138acb63c8defa8cc6fa588922 removed global variables\n> ReadRecPtr and EndRecPtr. 
This is rebased version that reads the LSNs\n> directly from xlogreader instead of the removed global variables.\n>\n\nHi, hackers!\n\nI've checked the latest version of a patch. It applies cleanly, check-world\npasses and CI is also in the green state.\nProposed messages seem good to me, but probably it would be better to have\na test on conditions where \"reached end of WAL...\" emitted.\nThen, I believe it can be set as 'ready for committter'.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Mon, 24 Jan 2022 14:23:33 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Mon, 24 Jan 2022 14:23:33 +0400, Pavel Borisov <pashkin.elfe@gmail.com> wrote in \n> >\n> > d2ddfa681db27a138acb63c8defa8cc6fa588922 removed global variables\n> > ReadRecPtr and EndRecPtr. This is rebased version that reads the LSNs\n> > directly from xlogreader instead of the removed global variables.\n> >\n> \n> Hi, hackers!\n> \n> I've checked the latest version of a patch. 
It applies cleanly, check-world\n> passes and CI is also in the green state.\n> Proposed messages seem good to me, but probably it would be better to have\n> a test on conditions where \"reached end of WAL...\" emitted.\n> Then, I believe it can be set as 'ready for committter'.\n\nThanks for checking that, and the comment!\n\nI thought that we usually don't test log messages, but finally I found\nthat I needed that. It is because I found another mode of end-of-wal\nand a bug that emits a spurious message on passing...\n\nThis v8 is changed in...\n\n- Added tests to 011_crash_recovery.pl\n\n- Fixed a bug that server emits \"end-of-wal\" messages even if it have\n emitted an error message for the same LSN.\n\n- Changed XLogReaderValidatePageHeader() so that it recognizes an\n empty page as end-of-WAL.\n\n- Made pg_waldump conscious of end-of-wal.\n\nWhile doing the last item, I noticed that pg_waldump shows the wrong\nLSN as the error position. Concretely it emits the LSN of the last\nsound WAL record as the error position. I will post a bug-fix patch\nfor the issue after confirmation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 25 Jan 2022 17:34:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Tue, 25 Jan 2022 17:34:56 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> This v8 is changed in...\n> \n> - Added tests to 011_crash_recovery.pl\n> \n> - Fixed a bug that server emits \"end-of-wal\" messages even if it have\n> emitted an error message for the same LSN.\n> \n> - Changed XLogReaderValidatePageHeader() so that it recognizes an\n> empty page as end-of-WAL.\n> \n> - Made pg_waldump conscious of end-of-wal.\n> \n> While doing the last item, I noticed that pg_waldump shows the wrong\n> LSN as the error position. 
Concretely it emits the LSN of the last\n> sound WAL record as the error position. I will post a bug-fix patch\n> for the issue after confirmation.\n\nI noticed that I added a useless error message \"garbage record\nheader\", but it is a kind of invalid record length. So I removed the\nmessage. That change makes the logic for EOW in ValidXLogRecordHeader\nand XLogReaderValidatePageHeader share the same flow.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 27 Jan 2022 10:35:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": ">\n> > This v8 is changed in...\n> >\n> > - Added tests to 011_crash_recovery.pl\n> >\n> > - Fixed a bug that server emits \"end-of-wal\" messages even if it have\n> > emitted an error message for the same LSN.\n> >\n> > - Changed XLogReaderValidatePageHeader() so that it recognizes an\n> > empty page as end-of-WAL.\n> >\n> > - Made pg_waldump conscious of end-of-wal.\n> >\n> > While doing the last item, I noticed that pg_waldump shows the wrong\n> > LSN as the error position. Concretely it emits the LSN of the last\n> > sound WAL record as the error position. I will post a bug-fix patch\n> > for the issue after confirmation.\n>\n> I noticed that I added a useless error message \"garbage record\n> header\", but it is a kind of invalid record length. So I removed the\n> message. 
That change makes the logic for EOW in ValidXLogRecordHeader\n> and XLogReaderValidatePageHeader share the same flow.\n>\n\nHi, Kyotaro!\n\nI don't quite understand a meaning of a comment:\n /* it is completely zeroed, call it a day */\n\nPlease also run pgindent on your code.\n\nOtherwise the new patch seems ok.\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Mon, 31 Jan 2022 15:17:09 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." 
}, { "msg_contents": "Hi, Pavel.\n\nAt Mon, 31 Jan 2022 15:17:09 +0400, Pavel Borisov <pashkin.elfe@gmail.com> wrote in \n> I don't quite understand a meaning of a comment:\n> /* it is completely zeroed, call it a day */\n\nWhile rethinking about this comment, it came to my mind that\nXLogReaderValidatePageHeader does a whole-page check. There is no\nclear reason for not doing at least the same check here.\nValidXLogRecordHeader is changed to check all bytes in the rest of the\npage, instead of just the record header.\n\nWhile working on that, I noticed another end-of-WAL case, unexpected\npageaddr. I think we can assume it is safe when the pageaddr is smaller\nthan expected (or we have no choice but to assume\nso). XLogReaderValidatePageHeader is changed that way. But I'm not\nsure others regard it as a form of safe end-of-WAL.\n\n> Please also run pgindent on your code.\n\nHmm. I'm not sure we need to do that at this stage. pgindent makes\nchanges across the whole file, involving parts unrelated to this patch.\nAnyway, I did that and then removed the irrelevant edits.\n\npgindent makes a seemingly not-great suggestion.\n\n+\t\tchar\t *pe =\n+\t\t(char *) record + XLOG_BLCKSZ - (RecPtr & (XLOG_BLCKSZ - 1));\n\nI'm not sure this is intended, but I split the line into two lines to\ndefine and assign.\n\n> Otherwise the new patch seems ok.\n\nThanks!\n\nThis version 10 is changed in the following points.\n\n- Rewrote the comment in ValidXLogRecordHeader.\n- ValidXLogRecordHeader\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 01 Feb 2022 11:58:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." 
}, { "msg_contents": ">\n> This version 10 is changed in the following points.\n>\n> - Rewrote the comment in ValidXLogRecordHeader.\n> - ValidXLogRecordHeader\n>\nThanks!\n\nMaybe it can be written little bit shorter:\npe = (char *) record + XLOG_BLCKSZ - (RecPtr & (XLOG_BLCKSZ - 1));\nas\npe = p + XLOG_BLCKSZ - (RecPtr & (XLOG_BLCKSZ - 1));\n?\n\n\nThe problem that pgindent sometimes reflow formatting of unrelated blocks\nis indeed existing. But I think it's right to manually leave pgindent-ed\ncode only on what is related to the patch. The leftover is pgindent-ed in a\nscheduled manner sometimes, so don't need to bother.\n\nI'd like to set v10 as RfC.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Tue, 1 Feb 2022 12:38:01 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." 
}, { "msg_contents": "At Tue, 1 Feb 2022 12:38:01 +0400, Pavel Borisov <pashkin.elfe@gmail.com> wrote in \n> Maybe it can be written little bit shorter:\n> pe = (char *) record + XLOG_BLCKSZ - (RecPtr & (XLOG_BLCKSZ - 1));\n> as\n> pe = p + XLOG_BLCKSZ - (RecPtr & (XLOG_BLCKSZ - 1));\n> ?\n\nThat difference would be a matter of taste, but I found it looks\ncleaner that definition and assignment is separated for both p and pe.\nNow it is like the following.\n\n>\tchar\t *p;\n>\tchar\t *pe;\n>\n>\t/* scan from the beginning of the record to the end of block */\n>\tp = (char *) record;\n>\tpe = p + XLOG_BLCKSZ - (RecPtr & (XLOG_BLCKSZ - 1));\n\n\n> The problem that pgindent sometimes reflow formatting of unrelated blocks\n> is indeed existing. But I think it's right to manually leave pgindent-ed\n> code only on what is related to the patch. The leftover is pgindent-ed in a\n> scheduled manner sometimes, so don't need to bother.\n\nYeah, I meant that it is a bit annoying to unpginden-ting unrelated\nedits:p\n\n> I'd like to set v10 as RfC.\n\nThanks! The suggested change is done in the attached v11.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 02 Feb 2022 14:34:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": ">\n> Thanks! The suggested change is done in the attached v11.\n>\n\nThanks! v11 is a small refactoring of v10 that doesn't change behavior, so\nit is RfC as well.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nThanks!  The suggested change is done in the attached v11.Thanks! 
v11 is a small refactoring of v10 that doesn't change behavior, so it is RfC as well.-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Wed, 2 Feb 2022 11:24:06 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Hi,\n\nHere are some of my review comments on the v11 patch:\n\n- (errmsg_internal(\"reached end of WAL in\npg_wal, entering archive recovery\")));\n+ (errmsg_internal(\"reached end of WAL at %X/%X\non timeline %u in %s during crash recovery, entering archive\nrecovery\",\n+ LSN_FORMAT_ARGS(ErrRecPtr),\n+ replayTLI,\n+ xlogSourceNames[currentSource])));\n\nWhy crash recovery? Won't this message get printed even during PITR?\n\nI just did a PITR and could see these messages in the logfile.\n\n2022-02-08 18:00:44.367 IST [86185] LOG: starting point-in-time\nrecovery to WAL location (LSN) \"0/5227790\"\n2022-02-08 18:00:44.368 IST [86185] LOG: database system was not\nproperly shut down; automatic recovery in progress\n2022-02-08 18:00:44.369 IST [86185] LOG: redo starts at 0/14DC8D8\n2022-02-08 18:00:44.978 IST [86185] DEBUG1: reached end of WAL at\n0/3FFFFD0 on timeline 1 in pg_wal during crash recovery, entering\narchive recovery\n\n==\n\n+ /*\n+ * If we haven't emit an error message, we have safely reached the\n+ * end-of-WAL.\n+ */\n+ if (emode_for_corrupt_record(LOG, ErrRecPtr) == LOG)\n+ {\n+ char *fmt;\n+\n+ if (StandbyMode)\n+ fmt = gettext_noop(\"reached end of WAL at %X/%X on\ntimeline %u in %s during standby mode\");\n+ else if (InArchiveRecovery)\n+ fmt = gettext_noop(\"reached end of WAL at %X/%X on\ntimeline %u in %s during archive recovery\");\n+ else\n+ fmt = gettext_noop(\"reached end of WAL at %X/%X on\ntimeline %u in %s during crash recovery\");\n+\n+ ereport(LOG,\n+ (errmsg(fmt, LSN_FORMAT_ARGS(ErrRecPtr), replayTLI,\n+ xlogSourceNames[currentSource]),\n+ (errormsg ? 
errdetail_internal(\"%s\", errormsg) : 0)));\n+ }\n\nDoesn't it make sense to add an assert statement inside this if-block\nthat will check for xlogreader->EndOfWAL?\n\n==\n\n- * We only end up here without a message when XLogPageRead()\n- * failed - in that case we already logged something. In\n- * StandbyMode that only happens if we have been triggered, so we\n- * shouldn't loop anymore in that case.\n+ * If we get here for other than end-of-wal, emit the error\n+ * message right now. Otherwise the message if any is shown as a\n+ * part of the end-of-WAL message below.\n */\n\nFor consistency, I think we can replace \"end-of-wal\" with\n\"end-of-WAL\". Please note that everywhere else in the comments you\nhave used \"end-of-WAL\". So why not the same here?\n\n==\n\n ereport(LOG,\n- (errmsg(\"replication terminated by\nprimary server\"),\n- errdetail(\"End of WAL reached on\ntimeline %u at %X/%X.\",\n- startpointTLI,\n-\nLSN_FORMAT_ARGS(LogstreamResult.Write))));\n+ (errmsg(\"replication terminated by\nprimary server on timeline %u at %X/%X.\",\n+ startpointTLI,\n+\nLSN_FORMAT_ARGS(LogstreamResult.Write))));\n\nIs this change really required? 
I don't see any issue with the\nexisting error message.\n\n==\n\nLastly, are we also planning to backport this patch?\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Wed, Feb 2, 2022 at 11:05 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 1 Feb 2022 12:38:01 +0400, Pavel Borisov <pashkin.elfe@gmail.com> wrote in\n> > Maybe it can be written little bit shorter:\n> > pe = (char *) record + XLOG_BLCKSZ - (RecPtr & (XLOG_BLCKSZ - 1));\n> > as\n> > pe = p + XLOG_BLCKSZ - (RecPtr & (XLOG_BLCKSZ - 1));\n> > ?\n>\n> That difference would be a matter of taste, but I found it looks\n> cleaner that definition and assignment is separated for both p and pe.\n> Now it is like the following.\n>\n> > char *p;\n> > char *pe;\n> >\n> > /* scan from the beginning of the record to the end of block */\n> > p = (char *) record;\n> > pe = p + XLOG_BLCKSZ - (RecPtr & (XLOG_BLCKSZ - 1));\n>\n>\n> > The problem that pgindent sometimes reflow formatting of unrelated blocks\n> > is indeed existing. But I think it's right to manually leave pgindent-ed\n> > code only on what is related to the patch. The leftover is pgindent-ed in a\n> > scheduled manner sometimes, so don't need to bother.\n>\n> Yeah, I meant that it is a bit annoying to unpginden-ting unrelated\n> edits:p\n>\n> > I'd like to set v10 as RfC.\n>\n> Thanks! The suggested change is done in the attached v11.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\n", "msg_date": "Tue, 8 Feb 2022 18:35:34 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." 
}, { "msg_contents": "Hi, Ashutosh.\n\nAt Tue, 8 Feb 2022 18:35:34 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in \n> Here are some of my review comments on the v11 patch:\n\nThank you for taking a look on this.\n\n> - (errmsg_internal(\"reached end of WAL in\n> pg_wal, entering archive recovery\")));\n> + (errmsg_internal(\"reached end of WAL at %X/%X\n> on timeline %u in %s during crash recovery, entering archive\n> recovery\",\n> + LSN_FORMAT_ARGS(ErrRecPtr),\n> + replayTLI,\n> + xlogSourceNames[currentSource])));\n> \n> Why crash recovery? Won't this message get printed even during PITR?\n\nIt is in the if-block with the following condition.\n\n>\t * If archive recovery was requested, but we were still doing\n>\t * crash recovery, switch to archive recovery and retry using the\n>\t * offline archive. We have now replayed all the valid WAL in\n>\t * pg_wal, so we are presumably now consistent.\n...\n> if (!InArchiveRecovery && ArchiveRecoveryRequested)\n\nThis means archive-recovery is requested but not started yet. That is,\nwe've just finished crash recovery. 
The existing comment cited\ntogether is mentioning that.\n\nAt the end of PITR (or archive recovery), the other code works.\n\n> /*\n> * If we haven't emit an error message, we have safely reached the\n> * end-of-WAL.\n> */\n> if (emode_for_corrupt_record(LOG, ErrRecPtr) == LOG)\n> {\n> \tchar\t *fmt;\n> \n> \tif (StandbyMode)\n> \t\tfmt = gettext_noop(\"reached end of WAL at %X/%X on timeline %u in %s during standby mode\");\n> \telse if (InArchiveRecovery)\n> \t\tfmt = gettext_noop(\"reached end of WAL at %X/%X on timeline %u in %s during archive recovery\");\n> \telse\n> \t\tfmt = gettext_noop(\"reached end of WAL at %X/%X on timeline %u in %s during crash recovery\");\n\nThe last among the above messages is choosed when archive-recovery is\nnot requested at all.\n\n> I just did a PITR and could see these messages in the logfile.\n\nYeah, the log lines are describing that the server starting with crash\nrecovery to run PITR.\n\n> 2022-02-08 18:00:44.367 IST [86185] LOG: starting point-in-time\n> recovery to WAL location (LSN) \"0/5227790\"\n> 2022-02-08 18:00:44.368 IST [86185] LOG: database system was not\n> properly shut down; automatic recovery in progress\n\nWell. I guess that the \"automatic recovery\" is ambiguous. 
Does it\nmake sense if the second line were like the follows instead?\n\n+ 2022-02-08 18:00:44.368 IST [86185] LOG: database system was not properly shut down; crash recovery in progress\n\n> 2022-02-08 18:00:44.369 IST [86185] LOG: redo starts at 0/14DC8D8\n> 2022-02-08 18:00:44.978 IST [86185] DEBUG1: reached end of WAL at\n> 0/3FFFFD0 on timeline 1 in pg_wal during crash recovery, entering\n> archive recovery\n\n(I don't include this change in this patch since it would be another\nissue.)\n\n> ==\n> \n> + /*\n> + * If we haven't emit an error message, we have safely reached the\n> + * end-of-WAL.\n> + */\n> + if (emode_for_corrupt_record(LOG, ErrRecPtr) == LOG)\n> + {\n> + char *fmt;\n> +\n> + if (StandbyMode)\n> + fmt = gettext_noop(\"reached end of WAL at %X/%X on\n> timeline %u in %s during standby mode\");\n> + else if (InArchiveRecovery)\n> + fmt = gettext_noop(\"reached end of WAL at %X/%X on\n> timeline %u in %s during archive recovery\");\n> + else\n> + fmt = gettext_noop(\"reached end of WAL at %X/%X on\n> timeline %u in %s during crash recovery\");\n> +\n> + ereport(LOG,\n> + (errmsg(fmt, LSN_FORMAT_ARGS(ErrRecPtr), replayTLI,\n> + xlogSourceNames[currentSource]),\n> + (errormsg ? errdetail_internal(\"%s\", errormsg) : 0)));\n> + }\n> \n> Doesn't it make sense to add an assert statement inside this if-block\n> that will check for xlogreader->EndOfWAL?\n\nGood point. On second thought, the condition there is flat wrong.\nThe message is \"reached end of WAL\" so the condition should be\nEndOfWAL. On the other hand we didn't make sure that the error\nmessage for the stop is emitted anywhere. Thus I don't particularly\nwant to be strict on that point.\n\nI made the following change for this.\n\n-\t\t\tif (emode_for_corrupt_record(LOG, ErrRecPtr) == LOG)\n+\t\t\tif (xlogreader->EndOfWAL)\n\n\n\n> ==\n> \n> - * We only end up here without a message when XLogPageRead()\n> - * failed - in that case we already logged something. 
In\n> - * StandbyMode that only happens if we have been triggered, so we\n> - * shouldn't loop anymore in that case.\n> + * If we get here for other than end-of-wal, emit the error\n> + * message right now. Otherwise the message if any is shown as a\n> + * part of the end-of-WAL message below.\n> */\n> \n> For consistency, I think we can replace \"end-of-wal\" with\n> \"end-of-WAL\". Please note that everywhere else in the comments you\n> have used \"end-of-WAL\". So why not the same here?\n\nRight. Fixed.\n\n> ==\n> \n> ereport(LOG,\n> - (errmsg(\"replication terminated by\n> primary server\"),\n> - errdetail(\"End of WAL reached on\n> timeline %u at %X/%X.\",\n> - startpointTLI,\n> -\n> LSN_FORMAT_ARGS(LogstreamResult.Write))));\n> + (errmsg(\"replication terminated by\n> primary server on timeline %u at %X/%X.\",\n> + startpointTLI,\n> +\n> LSN_FORMAT_ARGS(LogstreamResult.Write))));\n> \n> Is this change really required? I don't see any issue with the\n> existing error message.\n\nWithout the change, we see two similar end-of-WAL messages from both\nwalreceiver and startup. (Please don't care about the slight\ndifference of LSNs..)\n\n[walreceiver] LOG: replication terminated by primary server\n[walreceiver] DETAIL: End of WAL reached on timeline 1 at 0/B0000D8.\n[startup] LOG: reached end of WAL at 0/B000060 on timeline 1 in archive during standby mode\n[startup] DETAIL: empty record found at 0/B0000D8\n\nBut what the walreceiver detected at the time is not End-of-WAL but an\nerror on the streaming connection. Since this patch makes startup\nprocess to detect End-of-WAL, we don't need the duplicate and\nin-a-sense false end-of-WAL message from walreceiver.\n\n# By the way, I deliberately choosed to report the LSN of last\n# successfully record in the \"reached end of WAL\" message. On second\n# thought about this choice, I came to think that it is better to report\n# the failure LSN. I changed it to report the failure LSN. 
In this\n# case we face an ambiguity according to how we failed to read the\n# record, but for now we have no choice but to blindly choose one of\n# them. I chose EndRecPtr since I think decode errors happen far more\n# rarely than read errors.\n\n[walreceiver] LOG: replication terminated by primary server at 0/B014228 on timeline 1.\n[startup] LOG: reached end of WAL at 0/B014228 on timeline 1 in archive during standby mode\n[startup] DETAIL: empty record found at 0/B014228\n\nThis is the reason for the change.\n\n\n> Lastly, are we also planning to backport this patch?\n\nThis is apparently a behavioral change, not a bug fix, which I think we\nregard as not appropriate for back-patching.\n\n\nAs a result, I made the following changes in version 11.\n\n1. Changed the condition for the \"end-of-WAL\" message from\n emode_for_corrupt_record to the EndOfWAL flag.\n\n2. Corrected the wording of end-of-wal to end-of-WAL.\n\n3. In the \"reached end of WAL\" message, report the LSN of the\n beginning of the failed record instead of the beginning of the\n last-succeeded record.\n\n4. In the changed message in walreceiver.c, I swapped LSN and timeline\n so that they are in the same order with other similar messages.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 09 Feb 2022 16:44:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." 
}, { "msg_contents": "On Wed, Feb 9, 2022 at 1:14 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Hi, Ashutosh.\n>\n> At Tue, 8 Feb 2022 18:35:34 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in\n> > Here are some of my review comments on the v11 patch:\n>\n> Thank you for taking a look on this.\n>\n> > - (errmsg_internal(\"reached end of WAL in\n> > pg_wal, entering archive recovery\")));\n> > + (errmsg_internal(\"reached end of WAL at %X/%X\n> > on timeline %u in %s during crash recovery, entering archive\n> > recovery\",\n> > + LSN_FORMAT_ARGS(ErrRecPtr),\n> > + replayTLI,\n> > + xlogSourceNames[currentSource])));\n> >\n> > Why crash recovery? Won't this message get printed even during PITR?\n>\n> It is in the if-block with the following condition.\n>\n> > * If archive recovery was requested, but we were still doing\n> > * crash recovery, switch to archive recovery and retry using the\n> > * offline archive. We have now replayed all the valid WAL in\n> > * pg_wal, so we are presumably now consistent.\n> ...\n> > if (!InArchiveRecovery && ArchiveRecoveryRequested)\n>\n> This means archive-recovery is requested but not started yet. That is,\n> we've just finished crash recovery. 
The existing comment cited\n> together is mentioning that.\n>\n> At the end of PITR (or archive recovery), the other code works.\n>\n\nThis is quite understandable, the point here is that the message that\nwe are emitting says, we have just finished reading the wal files in\nthe pg_wal directory during crash recovery and are now entering\narchive recovery when we are actually doing point-in-time recovery\nwhich seems a bit misleading.\n\n> > /*\n> > * If we haven't emit an error message, we have safely reached the\n> > * end-of-WAL.\n> > */\n> > if (emode_for_corrupt_record(LOG, ErrRecPtr) == LOG)\n> > {\n> > char *fmt;\n> >\n> > if (StandbyMode)\n> > fmt = gettext_noop(\"reached end of WAL at %X/%X on timeline %u in %s during standby mode\");\n> > else if (InArchiveRecovery)\n> > fmt = gettext_noop(\"reached end of WAL at %X/%X on timeline %u in %s during archive recovery\");\n> > else\n> > fmt = gettext_noop(\"reached end of WAL at %X/%X on timeline %u in %s during crash recovery\");\n>\n> The last among the above messages is choosed when archive-recovery is\n> not requested at all.\n>\n> > I just did a PITR and could see these messages in the logfile.\n>\n> Yeah, the log lines are describing that the server starting with crash\n> recovery to run PITR.\n>\n> > 2022-02-08 18:00:44.367 IST [86185] LOG: starting point-in-time\n> > recovery to WAL location (LSN) \"0/5227790\"\n> > 2022-02-08 18:00:44.368 IST [86185] LOG: database system was not\n> > properly shut down; automatic recovery in progress\n>\n> Well. I guess that the \"automatic recovery\" is ambiguous. 
Does it\n> make sense if the second line were like the follows instead?\n>\n> + 2022-02-08 18:00:44.368 IST [86185] LOG: database system was not properly shut down; crash recovery in progress\n>\n\nWell, according to me the current message looks fine.\n\n> > Lastly, are we also planning to backport this patch?\n>\n> This is apparent a behavioral change, not a bug fix, which I think we\n> regard as not appropriate for back-patching.\n>\n>\n> As the result, I made the following chages in the version 11.\n>\n> 1. Changed the condition for the \"end-of-WAL\" message from\n> emode_for_corrupt_record to the EndOfWAL flag.\n>\n> 2. Corrected the wording of end-of-wal to end-of-WAL.\n>\n> 3. In the \"reached end of WAL\" message, report the LSN of the\n> beginning of failed record instead of the beginning of the\n> last-succeeded record.\n>\n> 4. In the changed message in walreceiver.c, I swapped LSN and timeline\n> so that they are in the same order with other similar messages.\n>\n\nThanks for sharing this information.\n\n==\n\nHere is one more comment:\n\nOne more comment:\n\n+# identify REDO WAL file\n+my $cmd = \"pg_controldata -D \" . $node->data_dir();\n+my $chkptfile;\n+$cmd = ['pg_controldata', '-D', $node->data_dir()];\n+$stdout = '';\n+$stderr = '';\n+IPC::Run::run $cmd, '>', \\$stdout, '2>', \\$stderr;\n+ok($stdout =~ /^Latest checkpoint's REDO WAL file:[ \\t] *(.+)$/m,\n+ \"checkpoint file is identified\");\n+my $chkptfile = $1;\n\n$chkptfile is declared twice in the same scope. We can probably remove\nthe first one.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Wed, 9 Feb 2022 17:31:02 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." 
}, { "msg_contents": "At Wed, 9 Feb 2022 17:31:02 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in \n> On Wed, Feb 9, 2022 at 1:14 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > This means archive-recovery is requested but not started yet. That is,\n> > we've just finished crash recovery. The existing comment cited\n> > together is mentioning that.\n> >\n> > At the end of PITR (or archive recovery), the other code works.\n> >\n> \n> This is quite understandable, the point here is that the message that\n> we are emitting says, we have just finished reading the wal files in\n> the pg_wal directory during crash recovery and are now entering\n> archive recovery when we are actually doing point-in-time recovery\n> which seems a bit misleading.\n\nHere is the messages.\n\n> 2022-02-08 18:00:44.367 IST [86185] LOG: starting point-in-time\n> recovery to WAL location (LSN) \"0/5227790\"\n> 2022-02-08 18:00:44.368 IST [86185] LOG: database system was not\n> properly shut down; automatic recovery in progress\n> 2022-02-08 18:00:44.369 IST [86185] LOG: redo starts at 0/14DC8D8\n> 2022-02-08 18:00:44.978 IST [86185] DEBUG1: reached end of WAL at\n> 0/3FFFFD0 on timeline 1 in pg_wal during crash recovery, entering\n> archive recovery\n\nIn the first place the last DEBUG1 is not on my part, but one of the\nmessages added by this patch says the same thing. Is your point that\narchive recovery is different thing from PITR? In regard to the\ndifference, I think PITR is a form of archive recovery.\n\nThat being said, after some thoughts on this, I changed my mind that\nwe don't need to say what operation was being performed at the\nend-of-WAL. So in the attached the end-of-WAL message is not\naccompanied by the kind of recovery.\n\n> LOG: reached end of WAL at 0/3000000 on timeline 1\n\nI removed the archive-source part along with the operation mode.\nBecause it make the message untranslatable. 
It is now very simple but\nseems sufficient.\n\nWhile working on this, I noticed that we need to set EndOfWAL when\nWaitForWALToBecomeAvailable returned with failure. That means the\nfile does not exist at all, so it is a kind of end-of-WAL. In that\nsense, the following existing comment in ReadRecord is a bit wrong.\n\n>\t * We only end up here without a message when XLogPageRead()\n>\t * failed - in that case we already logged something. In\n>\t * StandbyMode that only happens if we have been triggered, so we\n>\t * shouldn't loop anymore in that case.\n\nActually, there's a case where we get there without having logged any\nmessage: when a segment file is not found and we're not in standby\nmode.\n\n> > Well. I guess that the \"automatic recovery\" is ambiguous. Does it\n> > make sense if the second line were like the follows instead?\n> >\n> > + 2022-02-08 18:00:44.368 IST [86185] LOG: database system was not properly shut down; crash recovery in progress\n> >\n> \n> Well, according to me the current message looks fine.\n\nGood to hear. (In the previous version I modified the message by accident..)\n\n> $chkptfile is declared twice in the same scope. We can probably remove\n> the first one.\n\nUgh.. Fixed. (I wonder why Perl doesn't complain about this..)\n\n\nIn this version 12 I made the following changes.\n\n- Rewrote (partially reverted) a comment in ReadRecord\n\n- Simplified the \"reached end of WAL\" message by removing recovery\n mode and WAL source in ReadRecord.\n\n- XLogPageRead sets EndOfWAL flag in the ENOENT case.\n\n- Removed redundant declaration of the same variable in TAP script.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 10 Feb 2022 15:17:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." 
}, { "msg_contents": "Hi,\n\nOn Thu, Feb 10, 2022 at 11:47 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 9 Feb 2022 17:31:02 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in\n> > On Wed, Feb 9, 2022 at 1:14 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > This means archive-recovery is requested but not started yet. That is,\n> > > we've just finished crash recovery. The existing comment cited\n> > > together is mentioning that.\n> > >\n> > > At the end of PITR (or archive recovery), the other code works.\n> > >\n> >\n> > This is quite understandable, the point here is that the message that\n> > we are emitting says, we have just finished reading the wal files in\n> > the pg_wal directory during crash recovery and are now entering\n> > archive recovery when we are actually doing point-in-time recovery\n> > which seems a bit misleading.\n>\n> Here is the messages.\n>\n> > 2022-02-08 18:00:44.367 IST [86185] LOG: starting point-in-time\n> > recovery to WAL location (LSN) \"0/5227790\"\n> > 2022-02-08 18:00:44.368 IST [86185] LOG: database system was not\n> > properly shut down; automatic recovery in progress\n> > 2022-02-08 18:00:44.369 IST [86185] LOG: redo starts at 0/14DC8D8\n> > 2022-02-08 18:00:44.978 IST [86185] DEBUG1: reached end of WAL at\n> > 0/3FFFFD0 on timeline 1 in pg_wal during crash recovery, entering\n> > archive recovery\n>\n> In the first place the last DEBUG1 is not on my part, but one of the\n> messages added by this patch says the same thing. Is your point that\n> archive recovery is different thing from PITR? In regard to the\n> difference, I think PITR is a form of archive recovery.\n>\n\nNo, I haven't tried to compare archive recovery to PITR or vice versa,\ninstead I was trying to compare crash recovery with PITR. The message\nyou're emitting says just before entering into the archive recovery is\n- \"reached end-of-WAL on ... in pg_wal *during crash recovery*,\nentering archive recovery\". 
This message is static and can be emitted\nnot only during crash recovery, but also during PITR. I think we can\nremove the \"during crash recovery\" part from this message, so \"reached\nthe end of WAL at %X/%X on timeline %u in %s, entering archive\nrecovery\". Also I don't think we need format specifier %s here, it can\nbe hard-coded with pg_wal as in this case we can only enter archive\nrecovery after reading wal from pg_wal, so current WAL source has to\nbe pg_wal, isn't it?\n\n> That being said, after some thoughts on this, I changed my mind that\n> we don't need to say what operation was being performed at the\n> end-of-WAL. So in the attached the end-of-WAL message is not\n> accompanied by the kind of recovery.\n>\n> > LOG: reached end of WAL at 0/3000000 on timeline 1\n>\n> I removed the archive-source part along with the operation mode.\n> Because it make the message untranslatable. It is now very simple but\n> seems enough.\n>\n> While working on this, I noticed that we need to set EndOfWAL when\n> WaitForWALToBecomeAvailable returned with failure. That means the\n> file does not exist at all so it is a kind of end-of-WAL. In that\n> sense the following existing comment in ReadRecord is a bit wrong.\n>\n> > * We only end up here without a message when XLogPageRead()\n> > * failed - in that case we already logged something. In\n> > * StandbyMode that only happens if we have been triggered, so we\n> > * shouldn't loop anymore in that case.\n>\n> Actually there's a case we get there without a message and without\n> logged something when a segment file is not found unless we're in\n> standby mode.\n>\n> > > Well. I guess that the \"automatic recovery\" is ambiguous. Does it\n> > > make sense if the second line were like the follows instead?\n> > >\n> > > + 2022-02-08 18:00:44.368 IST [86185] LOG: database system was not properly shut down; crash recovery in progress\n> > >\n> >\n> > Well, according to me the current message looks fine.\n>\n> Good to hear. 
(In the previos version I modified the message by accident..)\n>\n> > $chkptfile is declared twice in the same scope. We can probably remove\n> > the first one.\n>\n> Ugh.. Fixed. (I wonder why Perl doesn't complain on this..)\n>\n>\n> In this version 12 I made the following changes.\n>\n> - Rewrote (halfly reverted) a comment in ReadRecord\n>\n> - Simplified the \"reached end of WAL\" message by removing recovery\n> mode and WAL source in ReadRecord.\n>\n> - XLogPageRead sets EndOfWAL flag in the ENOENT case.\n>\n> - Removed redundant declaration of the same variable in TAP script.\n>\n\nThanks for the changes. Please note that I am not able to apply the\nlatest patch on HEAD. Could you please rebase it on HEAD and share the\nnew version. Thank you.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Mon, 14 Feb 2022 20:14:11 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Mon, 14 Feb 2022 20:14:11 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in \n> No, I haven't tried to compare archive recovery to PITR or vice versa,\n> instead I was trying to compare crash recovery with PITR. The message\n> you're emitting says just before entering into the archive recovery is\n> - \"reached end-of-WAL on ... in pg_wal *during crash recovery*,\n> entering archive recovery\". This message is static and can be emitted\n> not only during crash recovery, but also during PITR. I think we can\n\nNo. It is emitted *only* after crash recovery before starting archive\nrecovery. Another message this patch adds can be emitted after PITR\nor archive recovery.\n\n> not only during crash recovery, but also during PITR. 
I think we can\n> remove the \"during crash recovery\" part from this message, so \"reached\n> the end of WAL at %X/%X on timeline %u in %s, entering archive\n\nWhat makes you think it can be emitted after anything other than crash\nrecovery? (Please look at the code comment just above.)\n\n> recovery\". Also I don't think we need format specifier %s here, it can\n> be hard-coded with pg_wal as in this case we can only enter archive\n> recovery after reading wal from pg_wal, so current WAL source has to\n> be pg_wal, isn't it?\n\nYou're right that it can't be other than pg_wal. It was changed just\nin accordance with another message this patch adds, and it would be a\nmatter of taste. I replaced it with \"pg_wal\" in this version.\n\n> Thanks for the changes. Please note that I am not able to apply the\n> latest patch on HEAD. Could you please rebase it on HEAD and share the\n> new version. Thank you.\n\nA change to a TAP script hit this. The v13 attached is:\n\n- Rebased.\n\n- Replaced \"%s\" in the debug transition message from crash recovery to\n archive recovery.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 15 Feb 2022 11:22:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "On Tue, Feb 15, 2022 at 7:52 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 14 Feb 2022 20:14:11 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in\n> > No, I haven't tried to compare archive recovery to PITR or vice versa,\n> > instead I was trying to compare crash recovery with PITR. The message\n> > you're emitting says just before entering into the archive recovery is\n> > - \"reached end-of-WAL on ... in pg_wal *during crash recovery*,\n> > entering archive recovery\". This message is static and can be emitted\n> > not only during crash recovery, but also during PITR. I think we can\n>\n> No. 
It is emitted *only* after crash recovery before starting archive\n> recovery. Another message this patch adds can be emitted after PITR\n> or archive recovery.\n>\n> > not only during crash recovery, but also during PITR. I think we can\n> > remove the \"during crash recovery\" part from this message, so \"reached\n> > the end of WAL at %X/%X on timeline %u in %s, entering archive\n>\n> What makes you think it can be emitted after other than crash recovery?\n> (Please look at the code comment just above.)\n>\n\nYep, that's right. We won't be coming here in the case of PITR.\n\n> > recovery\". Also I don't think we need format specifier %s here, it can\n> > be hard-coded with pg_wal as in this case we can only enter archive\n> > recovery after reading wal from pg_wal, so current WAL source has to\n> > be pg_wal, isn't it?\n>\n> You're right that it can't be other than pg_wal. It was changed just\n> in accordance woth another message this patch adds and it would be a\n> matter of taste. I replaced to \"pg_wal\" in this version.\n>\n\nOK. I have verified the changes.\n\n> > Thanks for the changes. Please note that I am not able to apply the\n> > latest patch on HEAD. Could you please rebase it on HEAD and share the\n> > new version. Thank you.\n>\n> A change on TAP script hit this. The v13 attached is:\n>\n\nOK. The v13 patch looks good. I have marked it as ready to commit.\nThank you for working on all my review comments.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Tue, 15 Feb 2022 20:17:20 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Tue, 15 Feb 2022 20:17:20 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in \n> OK. The v13 patch looks good. I have marked it as ready to commit.\n> Thank you for working on all my review comments.\n\nThanks! 
But the recent xlog.c refactoring crashes into this patch.\nAnd I found a silly bug while rebasing.\n\nxlog.c:12463 / xlogrecovery.c:3168\n\t\tif (!WaitForWALToBecomeAvailable(targetPagePtr + reqLen,\n..\n{\n+\t\t\tAssert(!StandbyMode);\n...\n+\t\t\txlogreader->EndOfWAL = true;\n\nYeah, I forgot about promotion there.. So what I should have done is\nsetting EndOfWAL according to StandbyMode.\n\n+\t\t\tAssert(!StandbyMode || CheckForStandbyTrigger());\n...\n+\t\t\t/* promotion exit is not end-of-WAL */\n+\t\t\txlogreader->EndOfWAL = !StandbyMode;\n\nThe rebased v14 is attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 17 Feb 2022 16:50:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "On Thu, Feb 17, 2022 at 1:20 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 15 Feb 2022 20:17:20 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in\n> > OK. The v13 patch looks good. I have marked it as ready to commit.\n> > Thank you for working on all my review comments.\n>\n> Thaks! But the recent xlog.c refactoring crashes into this patch.\n> And I found a silly bug while rebasing.\n>\n\nThanks.! I'll take a look at the new changes.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Thu, 17 Feb 2022 17:45:50 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "On Thu, Feb 17, 2022 at 1:20 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 15 Feb 2022 20:17:20 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in\n> > OK. The v13 patch looks good. I have marked it as ready to commit.\n> > Thank you for working on all my review comments.\n>\n> Thaks! 
But the recent xlog.c refactoring crashes into this patch.\n> And I found a silly bug while rebasing.\n>\n> xlog.c:12463 / xlogrecovery.c:3168\n> if (!WaitForWALToBecomeAvailable(targetPagePtr + reqLen,\n> ..\n> {\n> + Assert(!StandbyMode);\n> ...\n> + xlogreader->EndOfWAL = true;\n>\n> Yeah, I forgot about promotion there..\n\nYes, we exit WaitForWALToBecomeAvailable() even in standby mode\nprovided the user has requested promotion. So checking for the\n!StandbyMode condition alone was not enough.\n\nSo what I should have done is\n> setting EndOfWAL according to StandbyMode.\n>\n> + Assert(!StandbyMode || CheckForStandbyTrigger());\n> ...\n> + /* promotion exit is not end-of-WAL */\n> + xlogreader->EndOfWAL = !StandbyMode;\n>\n\nThe changes look good, thanks!\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Sat, 19 Feb 2022 09:31:33 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Sat, 19 Feb 2022 09:31:33 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in \n> The changes looks good. thanks.!\n\nThanks!\n\nA recent core change altered WAL insertion speed during the TAP\ntest and revealed one forgotten case of EndOfWAL. When a record\nheader flows into the next page, XLogReadRecord does a separate check\nfrom ValidXLogRecordHeader by itself.\n\n>\t * If the whole record header is on this page, validate it immediately.\n>\t * Otherwise do just a basic sanity check on xl_tot_len, and validate the\n>\t * rest of the header after reading it from the next page. 
The xl_tot_len\n>\t * check is necessary here to ensure that we enter the \"Need to reassemble\n>\t * record\" code path below; otherwise we might fail to apply\n>\t * ValidXLogRecordHeader at all.\n>\t */\n>\tif (targetRecOff <= XLOG_BLCKSZ - SizeOfXLogRecord)\n>\t{\n...\n> }\n>\telse\n>\t{\n>\t\t/* XXX: more validation should be done here */\n>\t\tif (total_len < SizeOfXLogRecord)\n>\t\t{\n\nI could simply copy in a part of ValidXLogRecordHeader there, but that\nresults in rather a lot of duplicated code. I could have\nValidXLogRecordHeader handle the partial-header case, but that seems\ntoo complex to me.\n\nSo in this version I split the xl_tot_len part of\nValidXLogRecordHeader into ValidXLogRecordLength.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 02 Mar 2022 11:17:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "On Wed, Mar 2, 2022 at 7:47 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Sat, 19 Feb 2022 09:31:33 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in\n> > The changes looks good. thanks.!\n>\n> Thanks!\n>\n> Some recent core change changed WAL insertion speed during the TAP\n> test and revealed one forgotton case of EndOfWAL. When a record\n> header flows into the next page, XLogReadRecord does separate check\n> from ValidXLogRecordHeader by itself.\n>\n\nThe new changes made in the patch look good. 
Thanks to the recent\nchanges to speed up WAL insertion, which have helped us catch this\nbug.\n\nOne small comment:\n\n record = (XLogRecord *) (state->readBuf + RecPtr % XLOG_BLCKSZ);\n- total_len = record->xl_tot_len;\n\nDo you think we need to change the position of the comments written\nfor the above code, which say:\n\n /*\n * Read the record length.\n *\n...\n...\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Thu, 3 Mar 2022 15:39:44 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Thu, 3 Mar 2022 15:39:44 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in \n> The new changes made in the patch look good. Thanks to the recent\n> changes to speed WAL insertion that have helped us catch this bug.\n\nThanks for the quick check.\n\n> One small comment:\n> \n> record = (XLogRecord *) (state->readBuf + RecPtr % XLOG_BLCKSZ);\n> - total_len = record->xl_tot_len;\n> \n> Do you think we need to change the position of the comments written\n> for above code that says:\n\nYeah, I didn't do that since it is about header verification. But as\nyou pointed out, the result still doesn't look perfect.\n\nOn second thought, the two seem to repeat the same thing. Thus I\nmerged the two comments together. In this version 16 it looks like\nthis.\n\n>\t/*\n>\t * Validate the record header.\n>\t *\n>\t * Even though we use an XLogRecord pointer here, the whole record header\n>\t * might not fit on this page. If the whole record header is on this page,\n>\t * validate it immediately. Even otherwise xl_tot_len must be on this page\n>\t * (it is the first field of MAXALIGNed records), but we still cannot\n>\t * access any further fields until we've verified that we got the whole\n>\t * header, so do just a basic sanity check on record length, and validate\n>\t * the rest of the header after reading it from the next page. 
The length\n>\t * check is necessary here to ensure that we enter the \"Need to reassemble\n>\t * record\" code path below; otherwise we might fail to apply\n>\t * ValidXLogRecordHeader at all.\n>\t */\n>\trecord = (XLogRecord *) (state->readBuf + RecPtr % XLOG_BLCKSZ);\n>\n>\tif (targetRecOff <= XLOG_BLCKSZ - SizeOfXLogRecord)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 04 Mar 2022 09:43:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Hi,\n\nOn 2022-03-04 09:43:59 +0900, Kyotaro Horiguchi wrote:\n> On second thought the two seems repeating the same things. Thus I\n> merged the two comments together. In this verion 16 it looks like\n> this.\n\nPatch currently fails to apply, needs a rebase:\nhttp://cfbot.cputube.org/patch_37_2490.log\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Mar 2022 17:01:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Mon, 21 Mar 2022 17:01:19 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Patch currently fails to apply, needs a rebase:\n> http://cfbot.cputube.org/patch_37_2490.log\n\nThanks for notifying me of that.\n\nRebased to the current HEAD.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 22 Mar 2022 11:34:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." 
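The split discussed in the messages above, factoring the basic xl_tot_len sanity check out of ValidXLogRecordHeader so it can run even when only the first field of a header is on the current page, can be sketched in a standalone form. Everything below (DemoRecord, the constants, and the function names) is an assumption made for illustration; it is not the actual PostgreSQL code.

```c
#include <stdbool.h>
#include <stdint.h>
#include <assert.h>

#define DEMO_BLCKSZ 8192        /* page size, assumed for the example */

/* Simplified stand-in for XLogRecord: the total-length field comes
 * first, so it is always readable even when the rest of the header
 * continues on the next page (records are MAXALIGNed). */
typedef struct
{
    uint32_t xl_tot_len;        /* total record length, first field */
    uint32_t xl_info;           /* rest of the (simplified) header */
} DemoRecord;

#define SizeOfDemoRecord ((uint32_t) sizeof(DemoRecord))

/* Role of ValidXLogRecordLength in the patch: a basic sanity check on
 * the length alone, applicable even to a partial header. */
static bool
demo_valid_record_length(uint32_t tot_len)
{
    return tot_len >= SizeOfDemoRecord;
}

/* Whether the whole header fits on the page when the record starts at
 * offset recoff; if not, only the length check above can be done now,
 * and full header validation must wait until the next page is read. */
static bool
demo_header_fits_on_page(uint32_t recoff)
{
    return recoff <= DEMO_BLCKSZ - SizeOfDemoRecord;
}
```

With such a split, the "else" branch quoted in the comment above can reject a bogus length immediately while deferring the rest of the header validation to the reassembly path.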
}, { "msg_contents": "me> Rebased to the current HEAD.\n\nb64c3bd62e (removal of unused \"use Config\") conflicted on a TAP\nscript.\n\nRebased.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 29 Mar 2022 15:07:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "On Mon, Mar 28, 2022 at 11:07 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Rebased.\n\nUnfortunately this will need another rebase over latest.\n\n[CFM hat] Looking through the history here, this has been bumped to\nReady for Committer a few times and then bumped back to Needs Review\nafter a required rebase. What's the best way for us to provide support\nfor contributors who get stuck in this loop? Maybe we can be more\naggressive about automated notifications when a RfC patch goes red in\nthe cfbot?\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Wed, 6 Jul 2022 11:05:51 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "On Wed, Jul 06, 2022 at 11:05:51AM -0700, Jacob Champion wrote:\n> [CFM hat] Looking through the history here, this has been bumped to\n> Ready for Committer a few times and then bumped back to Needs Review\n> after a required rebase. What's the best way for us to provide support\n> for contributors who get stuck in this loop? Maybe we can be more\n> aggressive about automated notifications when a RfC patch goes red in\n> the cfbot?\n\nHaving a better integration between the CF bot and the CF app would be\ngreat, IMO. 
People tend to easily forget about what they send in my\nexperience, even if they manage a small pool of patches or a larger\none.\n--\nMichael", "msg_date": "Thu, 7 Jul 2022 09:04:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Wed, 6 Jul 2022 11:05:51 -0700, Jacob Champion <jchampion@timescale.com> wrote in \n> On Mon, Mar 28, 2022 at 11:07 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > Rebased.\n> \n> Unfortunately this will need another rebase over latest.\n\nThanks! Done. \n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 07 Jul 2022 17:32:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "@cfbot: rebased over adb466150, which did the same thing as one of the\nhunks in xlogreader.c.", "msg_date": "Fri, 16 Sep 2022 23:21:50 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Fri, 16 Sep 2022 23:21:50 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> @cfbot: rebased over adb466150, which did the same thing as one of the\n> hunks in xlogreader.c.\n\nOops. Thanks! And then this gets a further conflict (param names\nharmonization). So further rebased. And removed an extra blank line\nyou pointed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 26 Sep 2022 16:17:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." 
}, { "msg_contents": "rebased", "msg_date": "Thu, 27 Oct 2022 21:37:05 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Just rebased.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 18 Nov 2022 17:25:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Hi,\n\nOn 2022-11-18 17:25:37 +0900, Kyotaro Horiguchi wrote:\n> Just rebased.\n\nFails with address sanitizer:\nhttps://cirrus-ci.com/task/5632986241564672\n\nUnfortunately one of the failures is in pg_waldump and we don't seem to\ncapture its output in 011_crash_recovery. So we don't see the nice formattted\noutput...\n\n[11:07:18.868] #0 0x00007fcf43803ce1 in raise () from /lib/x86_64-linux-gnu/libc.so.6\n[11:07:18.912] \n[11:07:18.912] Thread 1 (Thread 0x7fcf43662780 (LWP 39124)):\n[11:07:18.912] #0 0x00007fcf43803ce1 in raise () from /lib/x86_64-linux-gnu/libc.so.6\n[11:07:18.912] No symbol table info available.\n[11:07:18.912] #1 0x00007fcf437ed537 in abort () from /lib/x86_64-linux-gnu/libc.so.6\n[11:07:18.912] No symbol table info available.\n[11:07:18.912] #2 0x00007fcf43b8511b in __sanitizer::Abort () at ../../../../src/libsanitizer/sanitizer_common/sanitizer_posix_libcdep.cpp:155\n[11:07:18.912] No locals.\n[11:07:18.912] #3 0x00007fcf43b8fce8 in __sanitizer::Die () at ../../../../src/libsanitizer/sanitizer_common/sanitizer_termination.cpp:58\n[11:07:18.912] No locals.\n[11:07:18.912] #4 0x00007fcf43b7244c in __asan::ScopedInErrorReport::~ScopedInErrorReport (this=0x7ffd4fde18e6, __in_chrg=<optimized out>) at ../../../../src/libsanitizer/asan/asan_report.cpp:186\n[11:07:18.912] buffer_copy = {<__sanitizer::InternalMmapVectorNoCtor<char>> = {data_ = 0x7fcf40350000 '=' <repeats 65 times>, \"\\n==39124==ERROR: 
AddressSanitizer: heap-buffer-overflow on address 0x625000002100 at pc 0x55c36c21e315 bp 0x7ffd4fde2550 sp 0x7ffd4fde2\"..., capacity_bytes_ = 65536, size_ = <optimized out>}, <No data fields>}\n...\n[11:07:18.912] #6 0x00007fcf43b72788 in __asan::__asan_report_load1 (addr=<optimized out>) at ../../../../src/libsanitizer/asan/asan_rtl.cpp:117\n[11:07:18.912] bp = 140725943412048\n[11:07:18.912] pc = <optimized out>\n[11:07:18.912] local_stack = 140528180793728\n[11:07:18.912] sp = 140725943412040\n[11:07:18.912] #7 0x000055c36c21e315 in ValidXLogRecordLength (state=state@entry=0x61a000000680, RecPtr=RecPtr@entry=33655480, record=record@entry=0x625000000bb8) at xlogreader.c:1126\n[11:07:18.912] p = <optimized out>\n[11:07:18.912] pe = 0x625000002100 \"\"\n[11:07:18.912] #8 0x000055c36c21e3b1 in ValidXLogRecordHeader (state=state@entry=0x61a000000680, RecPtr=RecPtr@entry=33655480, PrevRecPtr=33655104, record=record@entry=0x625000000bb8, randAccess=randAccess@entry=false) at xlogreader.c:1169\n[11:07:18.912] No locals.\n\nThe most important bit is \"AddressSanitizer: heap-buffer-overflow on address 0x6250000\\\n02100 at pc 0x55c36c21e315 bp 0x7ffd4fde2550 sp 0x7ffd4fde2\"\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 22 Nov 2022 09:20:27 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "On Fri, Nov 18, 2022 at 05:25:37PM +0900, Kyotaro Horiguchi wrote:\n> +\t\twhile (*p == 0 && p < pe)\n> +\t\t\tp++;\n\nThe bug reported by Andres/cfbot/ubsan is here.\n\nFixed in attached.\n\nI didn't try to patch the test case to output the failing stderr, but\nthat might be good.", "msg_date": "Tue, 22 Nov 2022 16:04:56 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." 
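The overflow in the trace above comes from the zero-scanning loop quoted in Justin's reply: it evaluates *p before checking p against the end pointer, so a fully zeroed tail dereferences one byte past the buffer. A minimal standalone sketch of the scan with the safe condition order (illustrative names, not the actual xlogreader code):

```c
#include <stdbool.h>
#include <assert.h>

/* Returns true if every byte in [p, pe) is zero.  The bounds check
 * must come first: thanks to short-circuit evaluation of &&, once
 * p == pe the byte *p is never read, so the loop cannot run past the
 * end of the buffer.  The bug AddressSanitizer caught was the
 * reversed order, "*p == 0 && p < pe". */
static bool
tail_is_all_zero(const unsigned char *p, const unsigned char *pe)
{
    while (p < pe && *p == 0)   /* bounds check first, then dereference */
        p++;
    return p == pe;
}
```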
}, { "msg_contents": "At Tue, 22 Nov 2022 16:04:56 -0600, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Fri, Nov 18, 2022 at 05:25:37PM +0900, Kyotaro Horiguchi wrote:\n> > +\t\twhile (*p == 0 && p < pe)\n> > +\t\t\tp++;\n> \n> The bug reported by Andres/cfbot/ubsan is here.\n>\n> Fixed in attached.\n\nUr..ou..\n\n-\t\twhile (*p == 0 && p < pe)\n+\t\twhile (p < pe && *p == 0)\n\nIt was an off-by-one error. Thanks!\n\n> I didn't try to patch the test case to output the failing stderr, but\n> that might be good.\n\nI have made use of Cluster::wait_for_log(), but find_in_log() is still\nthere, since it is used to check that a message that should not be\nlogged was in fact not logged.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 30 Nov 2022 11:56:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "So this patch is now failing because it applies new tests to\n011_crash_recovery.pl, which was removed recently. Can you please move\nthem elsewhere?\n\nI think the comment for ValidXLogRecordLength should explain what the\nreturn value is.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 3 Feb 2023 15:16:02 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Thanks!\n\nAt Fri, 3 Feb 2023 15:16:02 +0100, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> So this patch is now failing because it applies new tests to\n> 011_crash_recovery.pl, which was removed recently. Can you please move\n> them elsewhere?\n\nI didn't find an appropriate file to move them to. In the end I created a\nnew file with the name 034_recovery.pl. I added a test for standbys,\ntoo. 
(which is the first objective of this patch.)\n\n> I think the comment for ValidXLogRecordLength should explain what the\n> return value is.\n\nAgreed.\n\n\n/*\n * Validate record length of an XLOG record header.\n *\n * This is substantially a part of ValidXLogRecordHeader. But XLogReadRecord\n * needs this separate from the function in case of a partial record header.\n+ *\n+ * Returns true if the xl_tot_len header field has a seemingly valid value,\n+ * which means the caller can proceed reading to the following part of the\n+ * record.\n */\n static bool\n ValidXLogRecordLength(XLogReaderState *state, XLogRecPtr RecPtr,\n\nI added a similar description to ValidXLogRecordHeader.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 07 Feb 2023 16:07:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "It looks like this needs a rebase and at a quick glance it looks like\nmore than a trivial conflict. I'll mark it Waiting on Author. Please\nupdate it back when it's rebased\n\n\n\n\n--\nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Mon, 6 Mar 2023 14:58:15 -0500", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Mon, 6 Mar 2023 14:58:15 -0500, \"Gregory Stark (as CFM)\" <stark.cfm@gmail.com> wrote in \n> It looks like this needs a rebase and at a quick glance it looks like\n> more than a trivial conflict. I'll mark it Waiting on Author. Please\n> update it back when it's rebased\n\nThanks for checking it!\n\nI think 4ac30ba4f2 is that, which changes a few error\nmessages. 
Addition to rebasing, I rewrote some code comments of\nxlogreader.c and revised the additional test script.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 07 Mar 2023 15:35:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Hi,\n\n> Thanks for checking it!\n>\n> I think 4ac30ba4f2 is that, which changes a few error\n> messages. Addition to rebasing, I rewrote some code comments of\n> xlogreader.c and revised the additional test script.\n\nThanks for working on this, it bugged me for a while. I noticed that\ncfbot is not happy with the patch so I rebased it.\npostgresql:pg_waldump test suite didn't pass after the rebase. I fixed\nit too. Other than that the patch LGTM so I'm not changing its status\nfrom \"Ready for Committer\".\n\nIt looks like the patch was moved between the commitfests since\n2020... If there is anything that may help merging it into PG17 please\nlet me know.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 17 Jul 2023 15:20:30 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Mon, 17 Jul 2023 15:20:30 +0300, Aleksander Alekseev <aleksander@timescale.com> wrote in \n> Thanks for working on this, it bugged me for a while. I noticed that\n> cfbot is not happy with the patch so I rebased it.\n> postgresql:pg_waldump test suite didn't pass after the rebase. I fixed\n> it too. Other than that the patch LGTM so I'm not changing its status\n> from \"Ready for Committer\".\n\nThanks for the rebasing.\n\n> It looks like the patch was moved between the commitfests since\n> 2020... 
If there is anything that may help merging it into PG17 please\n> let me know.\n\nThis might be just too much, or there might be some doubt about it.\n\nThis change basically makes a zero-length record be considered as the\nnormal end of WAL.\n\nThe most controversial point in the design, I think, is the criteria for\nan error condition. The assumption is that the WAL is sound if all\nbytes following a complete record, up to the next page boundary, are\nzeroed out. This is slightly narrower than the original criteria,\nwhich merely checked that the next record is zero-length. Naturally, there\nmight be instances where that page has been blown out due to device\nfailure or some other reason. Despite this, I believe it is\npreferable to always issuing a warning (at the LOG level,\nthough) about a potential WAL corruption.\n\nI've adjusted the condition for muting repeated log messages at the\nsame LSN, changing it from ==LOG to <=WARNING. This is simply a\nconsequence of following the change of \"real\" warnings from LOG to\nWARNING. I believe this is acceptable even without considering the\naforementioned change, as any single retriable (<ERROR) error at an\nLSN should be sufficient to alert users about potential issues.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 20 Jul 2023 14:02:17 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Anyway, this requires rebasing, and done.\n\nThanks to John (Naylor) for pointing this out.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 22 Nov 2023 16:31:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." 
}, { "msg_contents": "On Wed, 22 Nov 2023 at 13:01, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\r\n>\r\n> Anyway, this requires rebsaing, and done.\r\n\r\nFew tests are failing at [1], kindly post an updated patch:\r\n/tmp/cirrus-ci-build/src/test/recovery --testgroup recovery --testname\r\n039_end_of_wal -- /usr/local/bin/perl -I\r\n/tmp/cirrus-ci-build/src/test/perl -I\r\n/tmp/cirrus-ci-build/src/test/recovery\r\n/tmp/cirrus-ci-build/src/test/recovery/t/039_end_of_wal.pl\r\n[23:53:10.370] ――――――――――――――――――――――――――――――――――――― ✀\r\n―――――――――――――――――――――――――――――――――――――\r\n[23:53:10.370] stderr:\r\n[23:53:10.370] # Failed test 'xl_tot_len zero'\r\n[23:53:10.370] # at\r\n/tmp/cirrus-ci-build/src/test/recovery/t/039_end_of_wal.pl line 267.\r\n[23:53:10.370] # Failed test 'xlp_magic zero'\r\n[23:53:10.370] # at\r\n/tmp/cirrus-ci-build/src/test/recovery/t/039_end_of_wal.pl line 340.\r\n[23:53:10.370] # Failed test 'xlp_magic zero (split record header)'\r\n[23:53:10.370] # at\r\n/tmp/cirrus-ci-build/src/test/recovery/t/039_end_of_wal.pl line 445.\r\n[23:53:10.370] # Looks like you failed 3 tests of 14.\r\n[23:53:10.370]\r\n[23:53:10.370] (test program exited with status code 3)\r\n[23:53:10.370] ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\r\n\r\n[1] - https://cirrus-ci.com/task/5859293157654528\r\n\r\nRegards,\r\nVignesh\r\n", "msg_date": "Fri, 5 Jan 2024 16:02:24 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Fri, 5 Jan 2024 16:02:24 +0530, vignesh C <vignesh21@gmail.com> wrote in \n> On Wed, 22 Nov 2023 at 13:01, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >\n> > Anyway, this requires rebsaing, and done.\n> \n> Few tests are failing at [1], kindly post an updated patch:\n\nThanks!\n\nThe errors occurred in a part of the tests for end-of-WAL detection\nadded in the master branch. 
These failures were primarily due to\nchanges in the message contents introduced by this patch. During the\nrevision, I discovered an issue with the handling of empty pages that\nappear in the middle of reading continuation records. In the previous\nversion, such empty pages were mistakenly identified as indicating a\nclean end-of-WAL (that is a LOG). However, they should actually be\nhandled as a WARNING, since the record currently being read is broken\nat the empty pages. The following changes have been made in this\nversion:\n\n1. Adjusting the test to align with the error message changes\n introduced by this patch.\n\n2. Adding tests for the newly added messages.\n\n3. Correcting the handling of empty pages encountered during the\n reading of continuation records. (XLogReaderValidatePageHeader)\n\n4. Revising code comments.\n\n5. Changing the term \"log segment\" to \"WAL\n segment\". (XLogReaderValidatePageHeader)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 11 Jan 2024 16:18:16 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Hi,\n\n> The errors occurred in a part of the tests for end-of-WAL detection\n> added in the master branch. These failures were primarily due to\n> changes in the message contents introduced by this patch. During the\n> revision, I discovered an issue with the handling of empty pages that\n> appear in the middle of reading continuation records. In the previous\n> version, such empty pages were mistakenly identified as indicating a\n> clean end-of-WAL (that is a LOG). However, they should actually be\n> handled as a WARNING, since the record currently being read is broken\n> at the empty pages. The following changes have been made in this\n> version:\n>\n> 1. Adjusting the test to align with the error message changes\n> introduced by this patch.\n>\n> 2. 
Adding tests for the newly added messages.\n>\n> 3. Correcting the handling of empty pages encountered during the\n> reading of continuation records. (XLogReaderValidatePageHeader)\n>\n> 4. Revising code comments.\n>\n> 5. Changing the term \"log segment\" to \"WAL\n> segment\". (XLogReaderValidatePageHeader)\n>\n> regards.\n\nThanks for the updated patch.\n\n```\n+ p = (char *) record;\n+ pe = p + XLOG_BLCKSZ - (RecPtr & (XLOG_BLCKSZ - 1));\n+\n+ while (p < pe && *p == 0)\n+ p++;\n+\n+ if (p == pe)\n```\n\nJust as a random thought: perhaps we should make this a separate\nfunction, as a part of src/port/. It seems to me that this code could\nbenefit from using vector instructions some day, similarly to\nmemcmp(), memset() etc. Surprisingly there doesn't seem to be a\nstandard C function for this. Alternatively one could argue that one\ncycle doesn't make much code to reuse and that the C compiler will\nplace SIMD instructions for us. However a counter-counter argument\nwould be that we could use a macro or even better an inline function\nand have the same effect except getting a slightly more readable code.\n\n```\n- * This is just a convenience subroutine to avoid duplicated code in\n+ * This is just a convenience subroutine to avoid duplicate code in\n```\n\nThis change doesn't seem to be related to the patch. Personally I\ndon't mind it though.\n\nAll in all I find v28 somewhat scary. It does much more than \"making\none message less scary\" as it was initially intended and what bugged\nme personally, and accordingly touches many more places including\nxlogreader.c, xlogrecovery.c, etc.\n\nParticularly I have mixed feeling about this:\n\n```\n+ /*\n+ * Consider it as end-of-WAL if all subsequent bytes of this page\n+ * are zero. 
We don't bother checking the subsequent pages since\n+ * they are not zeroed in the case of recycled segments.\n+ */\n```\n\nIf I understand correctly, if somehow several FS blocks end up being\nzeroed (due to OS bug, bit rot, restoring from a corrupted for\nwhatever reason backup, hardware failures, ...) there is non-zero\nchance that PG will interpret this as a normal situation. To my\nknowledge this is not what we typically do - typically PG would report\nan error and ask a human to figure out what happened. Of course the\npossibility of such a scenario is small, but I don't think that as\nDBMS developers we can ignore it.\n\nDoes anyone agree or maybe I'm making things up?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 12 Jan 2024 15:03:26 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Thank you for the comments.\n\nAt Fri, 12 Jan 2024 15:03:26 +0300, Aleksander Alekseev <aleksander@timescale.com> wrote in \n> ```\n> + p = (char *) record;\n> + pe = p + XLOG_BLCKSZ - (RecPtr & (XLOG_BLCKSZ - 1));\n> +\n> + while (p < pe && *p == 0)\n> + p++;\n> +\n> + if (p == pe)\n> ```\n> \n> Just as a random thought: perhaps we should make this a separate\n> function, as a part of src/port/. It seems to me that this code could\n> benefit from using vector instructions some day, similarly to\n> memcmp(), memset() etc. Surprisingly there doesn't seem to be a\n> standard C function for this. Alternatively one could argue that one\n> cycle doesn't make much code to reuse and that the C compiler will\n> place SIMD instructions for us. 
However a counter-counter argument\n> would be that we could use a macro or even better an inline function\n> and have the same effect except getting a slightly more readable code.\n\nCreating a function with a name like memcmp_byte() should be\nstraightforward, but implementing it with SIMD right away seems a bit\nchallenging. Similar operations are already being performed elsewhere\nin the code, probably within the stats collector, where memcmp is used\nwith a statically allocated area that's filled with zeros. If we can\nachieve a performance equivalent to memcmp with this new function,\nthen it definitely seems worth pursuing.\n\n> ```\n> - * This is just a convenience subroutine to avoid duplicated code in\n> + * This is just a convenience subroutine to avoid duplicate code in\n> ```\n> \n> This change doesn't seem to be related to the patch. Personally I\n> don't mind it though.\n\nAh, I'm sorry. That was something I mistakenly thought I had written\nat the last moment and made modifications to.\n\n> All in all I find v28 somewhat scary. It does much more than \"making\n> one message less scary\" as it was initially intended and what bugged\n> me personally, and accordingly touches many more places including\n> xlogreader.c, xlogrecovery.c, etc.\n> \n> Particularly I have mixed feeling about this:\n> \n> ```\n> + /*\n> + * Consider it as end-of-WAL if all subsequent bytes of this page\n> + * are zero. We don't bother checking the subsequent pages since\n> + * they are not zeroed in the case of recycled segments.\n> + */\n> ```\n> \n> If I understand correctly, if somehow several FS blocks end up being\n> zeroed (due to OS bug, bit rot, restoring from a corrupted for\n> whatever reason backup, hardware failures, ...) there is non-zero\n> chance that PG will interpret this as a normal situation. To my\n> knowledge this is not what we typically do - typically PG would report\n> an error and ask a human to figure out what happened. 
Of course the\n> possibility of such a scenario is small, but I don't think that as\n> DBMS developers we can ignore it.\n\nFor now, let me explain the basis for this patch. The fundamental\nissue is that these warnings that always appear are, in practice, not\na problem in almost all cases. Some of those who encounter them for\nthe first time may feel uneasy and reach out with inquiries. On the\nother hand, those familiar with these warnings tend to ignore them and\nonly pay attention to details when actual issues arise. Therefore, the\nintention of this patch is to label them as \"no issue\" unless a\nproblem is blatantly evident, in order to prevent unnecessary concern.\n\n> Does anyone agree or maybe I'm making things up?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 16 Jan 2024 11:57:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "Hi,\n\n> > If I understand correctly, if somehow several FS blocks end up being\n> > zeroed (due to OS bug, bit rot, restoring from a corrupted for\n> > whatever reason backup, hardware failures, ...) there is non-zero\n> > chance that PG will interpret this as a normal situation. To my\n> > knowledge this is not what we typically do - typically PG would report\n> > an error and ask a human to figure out what happened. Of course the\n> > possibility of such a scenario is small, but I don't think that as\n> > DBMS developers we can ignore it.\n>\n> For now, let me explain the basis for this patch. The fundamental\n> issue is that these warnings that always appear are, in practice, not\n> a problem in almost all cases. Some of those who encounter them for\n> the first time may feel uneasy and reach out with inquiries. 
On the\n> other hand, those familiar with these warnings tend to ignore them and\n> only pay attention to details when actual issues arise. Therefore, the\n> intention of this patch is to label them as \"no issue\" unless a\n> problem is blatantly evident, in order to prevent unnecessary concern.\n\nI agree and don't mind affecting the error message per se.\n\nHowever I see that the actual logic of how WAL is processed is being\nchanged. If we do this, at very least it requires thorough thinking. I\nstrongly suspect that the proposed code is wrong and/or not safe\nand/or less safe than it is now for the reasons named above.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 16 Jan 2024 14:46:02 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "On Tue, Jan 16, 2024 at 02:46:02PM +0300, Aleksander Alekseev wrote:\n>> For now, let me explain the basis for this patch. The fundamental\n>> issue is that these warnings that always appear are, in practice, not\n>> a problem in almost all cases. Some of those who encounter them for\n>> the first time may feel uneasy and reach out with inquiries. On the\n>> other hand, those familiar with these warnings tend to ignore them and\n>> only pay attention to details when actual issues arise. Therefore, the\n>> intention of this patch is to label them as \"no issue\" unless a\n>> problem is blatantly evident, in order to prevent unnecessary concern.\n> \n> I agree and don't mind affecting the error message per se.\n> \n> However I see that the actual logic of how WAL is processed is being\n> changed. If we do this, at very least it requires thorough thinking. 
I\n> strongly suspect that the proposed code is wrong and/or not safe\n> and/or less safe than it is now for the reasons named above.\n\nFWIW, that pretty much sums up my feeling regarding this patch,\nbecause an error, basically any error, would hurt back very badly.\nSure, the error messages we generate now when reaching the end of WAL\ncan sound scary, and they are (I suspect that's not really the case\nfor anybody who has history doing support with PostgreSQL because a\nbunch of these messages are old enough to vote, but I can understand\nthat anybody would freak out the first time they see that).\n\nHowever, per the recent issues we've had in this area, like\ncd7f19da3468 but I'm more thinking about 6b18b3fe2c2f and\nbae868caf222, I am of the opinion that the header validation, the\nempty page case in XLogReaderValidatePageHeader() and the record read\nchanges are risky enough that I am not convinced that the gains are\nworth the risks taken.\n\nThe error stack in the WAL reader is complicated enough that making it\nmore complicated as the patch proposes does not sound like not a good\ntradeoff to me to make the reports related to the end of WAL cleaner\nfor the end-user. I agree that we should do something, but the patch\ndoes not seem like a good step towards this goal. Perhaps somebody\nwould be more excited about this proposal than I am, of course.\n--\nMichael", "msg_date": "Wed, 17 Jan 2024 14:32:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere were CFbot test failures last time it was run [2]. 
Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/2490/\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/2490\n\nKind Regards,\nPeter Smith.\n\n\n", "msg_date": "Mon, 22 Jan 2024 16:09:28 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Mon, 22 Jan 2024 16:09:28 +1100, Peter Smith <smithpb2250@gmail.com> wrote in \n> 2024-01 Commitfest.\n> \n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> there were CFbot test failures last time it was run [2]. Please have a\n> look and post an updated version if necessary.\n> \n> ======\n> [1] https://commitfest.postgresql.org/46/2490/\n> [2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/2490\n\nThanks for noticing of that. Will repost a new version.\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 23 Jan 2024 13:01:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." }, { "msg_contents": "At Wed, 17 Jan 2024 14:32:00 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Jan 16, 2024 at 02:46:02PM +0300, Aleksander Alekseev wrote:\n> >> For now, let me explain the basis for this patch. The fundamental\n> >> issue is that these warnings that always appear are, in practice, not\n> >> a problem in almost all cases. Some of those who encounter them for\n> >> the first time may feel uneasy and reach out with inquiries. On the\n> >> other hand, those familiar with these warnings tend to ignore them and\n> >> only pay attention to details when actual issues arise. 
Therefore, the\n> >> intention of this patch is to label them as \"no issue\" unless a\n> >> problem is blatantly evident, in order to prevent unnecessary concern.\n> > \n> > I agree and don't mind affecting the error message per se.\n> > \n> > However I see that the actual logic of how WAL is processed is being\n> > changed. If we do this, at very least it requires thorough thinking. I\n> > strongly suspect that the proposed code is wrong and/or not safe\n> > and/or less safe than it is now for the reasons named above.\n> \n> FWIW, that pretty much sums up my feeling regarding this patch,\n> because an error, basically any error, would hurt back very badly.\n> Sure, the error messages we generate now when reaching the end of WAL\n> can sound scary, and they are (I suspect that's not really the case\n> for anybody who has history doing support with PostgreSQL because a\n> bunch of these messages are old enough to vote, but I can understand\n> that anybody would freak out the first time they see that).\n> \n> However, per the recent issues we've had in this area, like\n> cd7f19da3468 but I'm more thinking about 6b18b3fe2c2f and\n> bae868caf222, I am of the opinion that the header validation, the\n> empty page case in XLogReaderValidatePageHeader() and the record read\n> changes are risky enough that I am not convinced that the gains are\n> worth the risks taken.\n> \n> The error stack in the WAL reader is complicated enough that making it\n> more complicated as the patch proposes does not sound like not a good\n> tradeoff to me to make the reports related to the end of WAL cleaner\n> for the end-user. I agree that we should do something, but the patch\n> does not seem like a good step towards this goal. Perhaps somebody\n> would be more excited about this proposal than I am, of course.\n\nThank you both for the comments. The criticism seems valid. 
The\napproach to identifying the end-of-WAL state in this patch is quite\nheuristic, and its validity or safety can certainly be contested. On\nthe other hand, if we seek perfection in this area of judgment, we may\nneed to have the WAL format itself more robust. In any case, since the\nmajority of the feedback on this patch seems to be negative, I am\ngoing to withdraw it if no supportive opinions emerge during this\ncommit-fest.\n\nThe attached patch addresses the errors reported by CF-bot.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 23 Jan 2024 13:13:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make mesage at end-of-recovery less scary." } ]
[ { "msg_contents": "Hello, this is a follow-on of [1] and [2].\n\nCurrently the executor visits execution nodes one-by-one. Considering\nsharding, Append on multiple postgres_fdw nodes can work\nsimultaneously and that can largely shorten the response of the whole\nquery. For example, aggregations that can be pushed-down to remote\nwould be accelerated by the number of remote servers. Even other than\nsuch an extreme case, collecting tuples from multiple servers also can\nbe accelerated by tens of percent [2].\n\nI have suspended the work waiting for the asynchronous or push-up\nexecutor to come, but the mood seems to be inclining toward doing this\nbefore that comes [3].\n\nThe patchset consists of three parts.\n\n- v2-0001-Allow-wait-event-set-to-be-regsitered-to-resoure.patch\n The async feature uses WaitEvent, and it needs to be released on\n error. This patch makes it possible to register WaitEvent to\n resowner to handle that case.\n\n- v2-0002-infrastructure-for-asynchronous-execution.patch\n It provides an abstraction layer of asynchronous behavior\n (execAsync). Then adds ExecAppend, another version of ExecAppend,\n that handles \"async-capable\" subnodes asynchronously. Also it\n contains planner part that makes planner aware of \"async-capable\"\n and \"async-aware\" path nodes.\n\n- v2-0003-async-postgres_fdw.patch\n The \"async-capable\" postgres_fdw. 
It accelerates multiple\n postgres_fdw nodes in the single-connection case as well as\n postgres_fdw nodes on dedicated connections.\n\nregards.\n\n[1] https://www.postgresql.org/message-id/2020012917585385831113%40highgo.ca\n[2] https://www.postgresql.org/message-id/20180515.202945.69332784.horiguchi.kyotaro@lab.ntt.co.jp\n[3] https://www.postgresql.org/message-id/20191205181217.GA12895%40momjian.us\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 28 Feb 2020 17:06:50 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 2/28/20 3:06 AM, Kyotaro Horiguchi wrote:\n> Hello, this is a follow-on of [1] and [2].\n> \n> Currently the executor visits execution nodes one-by-one. Considering\n> sharding, Append on multiple postgres_fdw nodes can work\n> simultaneously and that can largely shorten the response of the whole\n> query. For example, aggregations that can be pushed-down to remote\n> would be accelerated by the number of remote servers. Even other than\n> such an extreme case, collecting tuples from multiple servers also can\n> be accelerated by tens of percent [2].\n> \n> I have suspended the work waiting for the asynchronous or push-up\n> executor to come, but the mood seems to be inclining toward doing this\n> before that comes [3].\n> \n> The patchset consists of three parts.\n\nAre these improvements targeted at PG13 or PG14? This seems to be a\npretty big change for the last CF of PG13.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 4 Mar 2020 09:56:55 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "At Wed, 4 Mar 2020 09:56:55 -0500, David Steele <david@pgmasters.net> wrote in \n> On 2/28/20 3:06 AM, Kyotaro Horiguchi wrote:\n> > Hello, this is a follow-on of [1] and [2].\n> > Currently the executor visits execution nodes one-by-one. Considering\n> > sharding, Append on multiple postgres_fdw nodes can work\n> > simultaneously and that can largely shorten the response of the whole\n> > query. For example, aggregations that can be pushed-down to remote\n> > would be accelerated by the number of remote servers. Even other than\n> > such an extreme case, collecting tuples from multiple servers also can\n> > be accelerated by tens of percent [2].\n> > I have suspended the work waiting for the asynchronous or push-up\n> > executor to come, but the mood seems to be inclining toward doing this\n> > before that comes [3].\n> > The patchset consists of three parts.\n> \n> Are these improvements targeted at PG13 or PG14? This seems to be a\n> pretty big change for the last CF of PG13.\n\nIt is targeted at PG14. As we have the target version in CF-app now,\nI marked it as targeting PG14.\n\nThank you for the suggestion.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 05 Mar 2020 09:14:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Fri, Feb 28, 2020 at 9:08 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> - v2-0001-Allow-wait-event-set-to-be-regsitered-to-resoure.patch\n> The async feature uses WaitEvent, and it needs to be released on\n> error. This patch makes it possible to register WaitEvent to\n> resowner to handle that case.\n\n+1\n\n> - v2-0002-infrastructure-for-asynchronous-execution.patch\n> It provides an abstraction layer of asynchronous behavior\n> (execAsync). 
Then adds ExecAppend, another version of ExecAppend,\n> that handles \"async-capable\" subnodes asynchronously. Also it\n> contains planner part that makes planner aware of \"async-capable\"\n> and \"async-aware\" path nodes.\n\n> This patch add an infrastructure for asynchronous execution. As a PoC\n> this makes only Append capable to handle asynchronously executable\n> subnodes.\n\nWhat other nodes do you think could be async aware? I suppose you\nwould teach joins to pass through the async support of their children,\nand then you could make partition-wise join work like that.\n\n+ /* choose appropriate version of Exec function */\n+ if (appendstate->as_nasyncplans == 0)\n+ appendstate->ps.ExecProcNode = ExecAppend;\n+ else\n+ appendstate->ps.ExecProcNode = ExecAppendAsync;\n\nCool. No extra cost for people not using the new feature.\n\n+ slot = ExecProcNode(subnode);\n+ if (subnode->asyncstate == AS_AVAILABLE)\n\nSo, now when you execute a node, you get a result AND you get some\ninformation that you access by reaching into the child node's\nPlanState. The ExecProcNode() interface is extremely limiting, but\nI'm not sure if this is the right way to extend it. Maybe\nExecAsyncProcNode() with a wide enough interface to do the job?\n\n+Bitmapset *\n+ExecAsyncEventWait(PlanState **nodes, Bitmapset *waitnodes, long timeout)\n+{\n+ static int *refind = NULL;\n+ static int refindsize = 0;\n...\n+ if (refindsize < n)\n...\n+ static ExecAsync_mcbarg mcb_arg =\n+ { &refind, &refindsize };\n+ static MemoryContextCallback mcb =\n+ { ExecAsyncMemoryContextCallback, (void *)&mcb_arg, NULL };\n...\n+ MemoryContextRegisterResetCallback(TopTransactionContext, &mcb);\n\nThis seems a bit strange. Why not just put the pointer in the plan\nstate? I suppose you want to avoid allocating a new buffer for every\nquery. 
Perhaps you could fix that by having a small fixed-size buffer\nin the PlanState to cover common cases and allocating a larger one in\na per-query memory context if that one is too small, or just not\nworrying about it and allocating every time since you know the desired\nsize.\n\n+ wes = CreateWaitEventSet(TopTransactionContext,\nTopTransactionResourceOwner, n);\n...\n+ FreeWaitEventSet(wes);\n\nBTW, just as an FYI, I am proposing[1] to add support for\nRemoveWaitEvent(), so that you could have a single WaitEventSet for\nthe lifetime of the executor node, and then add and remove sockets\nonly as needed. I'm hoping to commit that for PG13, if there are no\nobjections or better ideas soon, because it's useful for some other\nplaces where we currently create and destroy WaitEventSets frequently.\nOne complication when working with long-lived WaitEventSet objects is\nthat libpq (or some other thing used by some other hypothetical\nasync-capable FDW) is free to close and reopen its socket whenever it\nwants, so you need a way to know when it has done that. In that patch\nset I added pqSocketChangeCount() so that you can see when pgSocket()\nrefers to a new socket (even if the file descriptor number is the same\nby coincidence), but that imposes some book-keeping duties on the\ncaller, and now I'm wondering how that would look in your patch set.\nMy goal is to generate the minimum number of systems calls. I think\nit would be nice if a 1000-shard query only calls epoll_ctl() when a\nchild node needs to be added or removed from the set, not\nepoll_create(), 1000 * epoll_ctl(), epoll_wait(), close() for every\nwait. But I suppose there is an argument that it's more complication\nthan it's worth.\n\n[1] https://commitfest.postgresql.org/27/2452/\n\n\n", "msg_date": "Thu, 5 Mar 2020 16:17:24 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "Thank you for the comment.\n\nAt Thu, 5 Mar 2020 16:17:24 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Fri, Feb 28, 2020 at 9:08 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > - v2-0001-Allow-wait-event-set-to-be-regsitered-to-resoure.patch\n> > The async feature uses WaitEvent, and it needs to be released on\n> > error. This patch makes it possible to register WaitEvent to\n> > resowner to handle that case.\n> \n> +1\n> \n> > - v2-0002-infrastructure-for-asynchronous-execution.patch\n> > It provides an abstraction layer of asynchronous behavior\n> > (execAsync). Then adds ExecAppend, another version of ExecAppend,\n> > that handles \"async-capable\" subnodes asynchronously. Also it\n> > contains planner part that makes planner aware of \"async-capable\"\n> > and \"async-aware\" path nodes.\n> \n> > This patch add an infrastructure for asynchronous execution. As a PoC\n> > this makes only Append capable to handle asynchronously executable\n> > subnodes.\n> \n> What other nodes do you think could be async aware? I suppose you\n> would teach joins to pass through the async support of their children,\n> and then you could make partition-wise join work like that.\n\nAn Append node is fed from many immediate-child async-capable nodes,\nso the Append node can pick any child node that has fired.\n\nUnfortunately joins are not wide but deep. That means a set of\nasync-capable nodes has multiple async-aware (and async-capable at\nthe same time, for intermediate nodes) parent nodes. So if we want to\nbe async in that configuration, we need a \"push-up\" executor engine. In\nmy last trial, ignoring performance, I could turn almost all nodes into\npush-up style, but a few nodes, like WindowAgg or HashJoin, have a quite\nlow affinity with push-up style since the caller sites to child nodes\nare many and scattered. 
I worked around the low affinity by turning the\nnodes into state machines, but I don't think that is good.\n\n> + /* choose appropriate version of Exec function */\n> + if (appendstate->as_nasyncplans == 0)\n> + appendstate->ps.ExecProcNode = ExecAppend;\n> + else\n> + appendstate->ps.ExecProcNode = ExecAppendAsync;\n> \n> Cool. No extra cost for people not using the new feature.\n\nIt creates some duplicate code but I agree on the performance\nperspective.\n\n> + slot = ExecProcNode(subnode);\n> + if (subnode->asyncstate == AS_AVAILABLE)\n> \n> So, now when you execute a node, you get a result AND you get some\n> information that you access by reaching into the child node's\n> PlanState. The ExecProcNode() interface is extremely limiting, but\n> I'm not sure if this is the right way to extend it. Maybe\n> ExecAsyncProcNode() with a wide enough interface to do the job?\n\nSounds reasonable and seems easy to do.\n\n> +Bitmapset *\n> +ExecAsyncEventWait(PlanState **nodes, Bitmapset *waitnodes, long timeout)\n> +{\n> + static int *refind = NULL;\n> + static int refindsize = 0;\n> ...\n> + if (refindsize < n)\n> ...\n> + static ExecAsync_mcbarg mcb_arg =\n> + { &refind, &refindsize };\n> + static MemoryContextCallback mcb =\n> + { ExecAsyncMemoryContextCallback, (void *)&mcb_arg, NULL };\n> ...\n> + MemoryContextRegisterResetCallback(TopTransactionContext, &mcb);\n> \n> This seems a bit strange. Why not just put the pointer in the plan\n> state? I suppose you want to avoid allocating a new buffer for every\n> query. Perhaps you could fix that by having a small fixed-size buffer\n> in the PlanState to cover common cases and allocating a larger one in\n> a per-query memory context if that one is too small, or just not\n> worrying about it and allocating every time since you know the desired\n> size.\n\nThe most significant factor in the shape would be that ExecAsync is not a\nkind of ExecNode. 
So ExecAsyncEventWait doesn't have direct access to\nthe EState other than through one of the given multiple nodes. I am\nconsidering trying to use the given ExecNodes as an access path to the\nEState.\n\n\n> + wes = CreateWaitEventSet(TopTransactionContext,\n> TopTransactionResourceOwner, n);\n> ...\n> + FreeWaitEventSet(wes);\n> \n> BTW, just as an FYI, I am proposing[1] to add support for\n> RemoveWaitEvent(), so that you could have a single WaitEventSet for\n> the lifetime of the executor node, and then add and remove sockets\n> only as needed. I'm hoping to commit that for PG13, if there are no\n> objections or better ideas soon, because it's useful for some other\n> places where we currently create and destroy WaitEventSets frequently.\n\nYes! I have wanted that (but haven't done it myself..., and I didn't\nunderstand the details from the title \"Reducing WaitEventSet syscall\nchurn\" :p)\n\n> One complication when working with long-lived WaitEventSet objects is\n> that libpq (or some other thing used by some other hypothetical\n> async-capable FDW) is free to close and reopen its socket whenever it\n> wants, so you need a way to know when it has done that. In that patch\n> set I added pqSocketChangeCount() so that you can see when pgSocket()\n> refers to a new socket (even if the file descriptor number is the same\n> by coincidence), but that imposes some book-keeping duties on the\n> caller, and now I'm wondering how that would look in your patch set.\n\nAs for postgres_fdw, an unexpected disconnection immediately leads to a\nquery ERROR.\n\n> My goal is to generate the minimum number of system calls. I think\n> it would be nice if a 1000-shard query only calls epoll_ctl() when a\n> child node needs to be added or removed from the set, not\n> epoll_create(), 1000 * epoll_ctl(), epoll_wait(), close() for every\n> wait. 
But I suppose there is an argument that it's more complication\n> than it's worth.\n> \n> [1] https://commitfest.postgresql.org/27/2452/\n\nI'm not sure how it gives performance gain, but reducing syscalls\nitself is good. I'll look on it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 05 Mar 2020 17:44:24 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nI have tested the feature and it shows great performance in queries\r\nwhich have small amount result compared with base scan amount.", "msg_date": "Mon, 09 Mar 2020 03:03:14 +0000", "msg_from": "movead li <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nI occur a strange issue when a exec 'make installcheck-world', it is:\r\n\r\n##########################################################\r\n...\r\n============== running regression test queries ==============\r\ntest adminpack ... FAILED 60 ms\r\n\r\n======================\r\n 1 of 1 tests failed. \r\n======================\r\n\r\nThe differences that caused some tests to fail can be viewed in the\r\nfile \"/work/src/postgres_app_for/contrib/adminpack/regression.diffs\". 
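The syscall-churn point discussed above can be sketched outside the executor. The following is a minimal, hypothetical illustration (not PostgreSQL code; all names are invented, and it is Linux-only since it uses epoll directly): the descriptor set is registered once with epoll_ctl(), so every subsequent wait costs a single epoll_wait() instead of a create/register/wait/close cycle per wait.

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <unistd.h>

/* A long-lived "wait event set": created once, reused across waits. */
typedef struct WaitSet { int epfd; } WaitSet;

static WaitSet *waitset_create(void)
{
    WaitSet *ws = malloc(sizeof(WaitSet));
    ws->epfd = epoll_create1(0);        /* one syscall for the set's lifetime */
    return ws;
}

static void waitset_add_readable(WaitSet *ws, int fd)
{
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
    epoll_ctl(ws->epfd, EPOLL_CTL_ADD, fd, &ev);   /* only when a child appears */
}

static void waitset_remove(WaitSet *ws, int fd)
{
    epoll_ctl(ws->epfd, EPOLL_CTL_DEL, fd, NULL);  /* only when a child goes away */
}

/* Wait for one ready fd; returns that fd, or -1 on timeout. */
static int waitset_wait(WaitSet *ws, int timeout_ms)
{
    struct epoll_event ev;
    int n = epoll_wait(ws->epfd, &ev, 1, timeout_ms);  /* the only per-wait syscall */
    return n > 0 ? ev.data.fd : -1;
}

static void waitset_free(WaitSet *ws)
{
    close(ws->epfd);
    free(ws);
}

/* Demo: register a self-pipe once, make it readable, then wait. */
static int demo_ready_fd(void)
{
    int fds[2];
    if (pipe(fds) != 0)
        return -1;
    WaitSet *ws = waitset_create();
    waitset_add_readable(ws, fds[0]);
    (void) write(fds[1], "x", 1);
    int ready = waitset_wait(ws, 1000);
    waitset_remove(ws, fds[0]);
    waitset_free(ws);
    close(fds[0]);
    close(fds[1]);
    return ready;
}
```

With a 1000-child query, the difference is roughly 1000 epoll_ctl() calls once at startup versus 1002 syscalls on every wait.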
A copy of the test summary that you see\r\nabove is saved in the file \"/work/src/postgres_app_for/contrib/adminpack/regression.out\".\r\n...\r\n##########################################################\r\n\r\nAnd the content in 'contrib/adminpack/regression.out' is:\r\n##########################################################\r\nSELECT pg_file_write('/tmp/test_file0', 'test0', false);\r\n ERROR: absolute path not allowed\r\n SELECT pg_file_write(current_setting('data_directory') || '/test_file4', 'test4', false);\r\n- pg_file_write \r\n----------------\r\n- 5\r\n-(1 row)\r\n-\r\n+ERROR: reference to parent directory (\"..\") not allowed\r\n SELECT pg_file_write(current_setting('data_directory') || '/../test_file4', 'test4', false);\r\n ERROR: reference to parent directory (\"..\") not allowed\r\n RESET ROLE;\r\n@@ -149,7 +145,7 @@\r\n SELECT pg_file_unlink('test_file4');\r\n pg_file_unlink \r\n ----------------\r\n- t\r\n+ f\r\n (1 row)\r\n##########################################################\r\n\r\nHowever the issue does not occur when I do a 'make check-world'.\r\nAnd it doesn't occur when I test the 'make installcheck-world' without the patch.\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Tue, 10 Mar 2020 05:13:42 +0000", "msg_from": "movead li <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "Hello. Thank you for testing.\n\nAt Tue, 10 Mar 2020 05:13:42 +0000, movead li <movead.li@highgo.ca> wrote in \n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, failed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: not tested\n> \n> I occur a strange issue when a exec 'make installcheck-world', it is:\n\nI had't done that.. 
But it worked for me.\n\n> ##########################################################\n> ...\n> ============== running regression test queries ==============\n> test adminpack ... FAILED 60 ms\n> \n> ======================\n> 1 of 1 tests failed. \n> ======================\n> \n> The differences that caused some tests to fail can be viewed in the\n> file \"/work/src/postgres_app_for/contrib/adminpack/regression.diffs\". A copy of the test summary that you see\n> above is saved in the file \"/work/src/postgres_app_for/contrib/adminpack/regression.out\".\n> ...\n> ##########################################################\n> \n> And the content in 'contrib/adminpack/regression.out' is:\n\nI don't see that file. Maybe you meant regression.diffs?\n\n> ##########################################################\n> SELECT pg_file_write('/tmp/test_file0', 'test0', false);\n> ERROR: absolute path not allowed\n> SELECT pg_file_write(current_setting('data_directory') || '/test_file4', 'test4', false);\n> - pg_file_write \n> ----------------\n> - 5\n> -(1 row)\n> -\n> +ERROR: reference to parent directory (\"..\") not allowed\n\nIt seems to me that you are setting a path containing \"..\" to PGDATA.\n\n> However the issue does not occur when I do a 'make check-world'.\n> And it doesn't occur when I test the 'make installcheck-world' without the patch.\n\ncheck-world doesn't use a path containing \"..\" as PGDATA.\n\n> The new status of this patch is: Waiting on Author\n\nThanks for noticing that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 10 Mar 2020 16:50:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": ">It seems to me that you are setting a path containing \"..\" to PGDATA.\r\nThanks for pointing it out.\r\n\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Wed, 11 Mar 2020 09:36:29 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nI redid the make installcheck-world as Kyotaro Horiguchi pointed out, and the\r\nresult shows nothing wrong. I think the patch is good in both feature and performance;\r\nhere is the test result thread I made before:\r\nhttps://www.postgresql.org/message-id/CA%2B9bhCK7chd0qx%2Bmny%2BU9xaOs2FDNJ7RaxG4%3D9rpgT6oAKBgWA%40mail.gmail.com\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Wed, 11 Mar 2020 01:46:38 +0000", "msg_from": "movead li <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
And I think the patch is good in feature and performance\n> here is the test result thread I made before:\n> https://www.postgresql.org/message-id/CA%2B9bhCK7chd0qx%2Bmny%2BU9xaOs2FDNJ7RaxG4%3D9rpgT6oAKBgWA%40mail.gmail.com\n>\n> The new status of this patch is: Ready for Committer\n\nAs discussed upthread, this is a material for PG14, so I moved this to\nthe next commitfest, keeping the same status. I've not looked at the\npatch in any detail yet, so I'm not sure that that is the right status\nfor the patch, though. I'd like to work on this for PG14 if I have\ntime.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 30 Mar 2020 17:15:56 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 3/30/20 1:15 PM, Etsuro Fujita wrote:\n> Hi,\n> \n> On Wed, Mar 11, 2020 at 10:47 AM movead li <movead.li@highgo.ca> wrote:\n> \n>> I redo the make installcheck-world as Kyotaro Horiguchi point out and the\n>> result nothing wrong. And I think the patch is good in feature and performance\n>> here is the test result thread I made before:\n>> https://www.postgresql.org/message-id/CA%2B9bhCK7chd0qx%2Bmny%2BU9xaOs2FDNJ7RaxG4%3D9rpgT6oAKBgWA%40mail.gmail.com\n>>\n>> The new status of this patch is: Ready for Committer\n> \n> As discussed upthread, this is a material for PG14, so I moved this to\n> the next commitfest, keeping the same status. I've not looked at the\n> patch in any detail yet, so I'm not sure that that is the right status\n> for the patch, though. 
I'd like to work on this for PG14 if I have\n> time.\n\nHi,\nThis patch no longer applies cleanly.\nIn addition, code comments contain spelling errors.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 3 Jun 2020 15:00:06 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "Hello, Andrey.\n\nAt Wed, 3 Jun 2020 15:00:06 +0500, Andrey Lepikhov <a.lepikhov@postgrespro.ru> wrote in \n> This patch no longer applies cleanly.\n> In addition, code comments contain spelling errors.\n\nSure. Thanks for noticing them, and sorry for the many typos.\nAn additional item in WaitEventIPC conflicted with this.\n\n\nI found the following typos.\n\nconnection.c:\n s/Rerturns/Returns/\npostgres-fdw.c:\n s/Retrive/Retrieve/\n s/ForeginScanState/ForeignScanState/\n s/manipuration/manipulation/\n s/asyncstate/async state/\n s/alrady/already/\n\nnodeAppend.c: \n s/Rery/Retry/\n\ncreateplan.c:\n s/chidlren/children/\n\nresowner.c:\n s/identier/identifier/ X 2\n\nexecnodes.h:\n s/sutff/stuff/\n\nplannodes.h:\n s/asyncronous/asynchronous/\n \n \nRemoved a useless variable PgFdwScanState.result_ready.\nRemoved duplicate code from remove_async_node() by using move_to_next_waiter().\nDone some minor cleanups.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 04 Jun 2020 15:00:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On 6/4/20 11:00 AM, Kyotaro Horiguchi wrote:\n> Removed a useless variable PgFdwScanState.result_ready.\n> Removed duplicate code from remove_async_node() by using move_to_next_waiter().\n> Done some minor cleanups.\n> \nI am reviewing your code.\nA couple of variables are no longer needed (see changes.patch in attachment.\n\nSomething about the cost of an asynchronous plan:\n\nAt the simple query plan (see below) I see:\n1. Startup cost of local SeqScan is equal 0, ForeignScan - 100. But \nstartup cost of Append is 0.\n2. Total cost of an Append node is a sum of the subplans. Maybe in the \ncase of asynchronous append we need to use some reduce factor?\n\nexplain select * from parts;\n\nWith Async Append:\n=====================\n\n Append (cost=0.00..2510.30 rows=106780 width=8)\n Async subplans: 3\n -> Async Foreign Scan on part_1 parts_2 (cost=100.00..177.80 \nrows=2260 width=8)\n -> Async Foreign Scan on part_2 parts_3 (cost=100.00..177.80 \nrows=2260 width=8)\n -> Async Foreign Scan on part_3 parts_4 (cost=100.00..177.80 \nrows=2260 width=8)\n -> Seq Scan on part_0 parts_1 (cost=0.00..1443.00 rows=100000 width=8)\n\nWithout Async Append:\n=====================\n\n Append (cost=0.00..2510.30 rows=106780 width=8)\n -> Seq Scan on part_0 parts_1 (cost=0.00..1443.00 rows=100000 width=8)\n -> Foreign Scan on part_1 parts_2 (cost=100.00..177.80 rows=2260 \nwidth=8)\n -> Foreign Scan on part_2 parts_3 (cost=100.00..177.80 rows=2260 \nwidth=8)\n -> Foreign Scan on part_3 parts_4 (cost=100.00..177.80 rows=2260 \nwidth=8)\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 9 Jun 2020 14:20:42 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "Hello, Andrey.\n\nAt Tue, 9 Jun 2020 14:20:42 +0500, Andrey Lepikhov <a.lepikhov@postgrespro.ru> wrote in \n> On 6/4/20 11:00 AM, Kyotaro Horiguchi wrote:\n> > Removed a useless variable PgFdwScanState.result_ready.\n> > Removed duplicate code from remove_async_node() by using\n> > move_to_next_waiter().\n> > Done some minor cleanups.\n> > \n> I am reviewing your code.\n> A couple of variables are no longer needed (see changes.patch in\n> attachment.\n\nThanks! The recent changes made them useless. Fixed.\n\n> Something about the cost of an asynchronous plan:\n> \n> At the simple query plan (see below) I see:\n> 1. Startup cost of local SeqScan is equal 0, ForeignScan - 100. But\n> startup cost of Append is 0.\n\nThe result itself is right that the append doesn't wait for foreign\nscans for the first iteration then fetches a tuple from the local\ntable. But the estimation is made just by an accident. If you\ndefined a foreign table as the first partition, the cost of Append\nwould be 100, which is rather wrong.\n\n> 2. Total cost of an Append node is a sum of the subplans. Maybe in the\n> case of asynchronous append we need to use some reduce factor?\n\nYes. For the reason mentioned above, foreign subpaths don't affect\nthe startup cost of Append as far as any sync subpaths exist. If no\nsync subpaths exist, the Append's startup cost is the minimum startup\ncost among the async subpaths.\n\nI fixed cost_append so that it calculates the correct startup\ncost. 
Now the function estimates as follows.\n\nAppend (Foreign(100), Foreign(100), Local(0)) => 0;\nAppend (Local(0), Foreign(100), Foreign(100)) => 0;\nAppend (Foreign(100), Foreign(100)) => 100;\n\n\n> explain select * from parts;\n> \n> With Async Append:\n> =====================\n> \n> Append (cost=0.00..2510.30 rows=106780 width=8)\n> Async subplans: 3\n> -> Async Foreign Scan on part_1 parts_2 (cost=100.00..177.80 rows=2260\n> width=8)\n> -> Async Foreign Scan on part_2 parts_3 (cost=100.00..177.80 rows=2260\n> width=8)\n> -> Async Foreign Scan on part_3 parts_4 (cost=100.00..177.80 rows=2260\n> width=8)\n> -> Seq Scan on part_0 parts_1 (cost=0.00..1443.00 rows=100000 width=8)\n\nThe SeqScan seems to be the first partition for the parent. It is the\nfirst subnode at cost estimation. The result is right, but it comes\nfrom the wrong logic.\n\n> Without Async Append:\n> =====================\n> \n> Append (cost=0.00..2510.30 rows=106780 width=8)\n> -> Seq Scan on part_0 parts_1 (cost=0.00..1443.00 rows=100000 width=8)\n> -> Foreign Scan on part_1 parts_2 (cost=100.00..177.80 rows=2260 width=8)\n> -> Foreign Scan on part_2 parts_3 (cost=100.00..177.80 rows=2260 width=8)\n> -> Foreign Scan on part_3 parts_4 (cost=100.00..177.80 rows=2260 width=8)\n\nThe startup cost of the Append is the cost of the first subnode, that is, 0.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 10 Jun 2020 12:05:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
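To make the startup-cost rule discussed above concrete, here is a small, self-contained sketch. This is hypothetical illustration code, not the actual cost_append() change: it encodes one plausible reading of the rule stated in the mail (async subpaths do not affect Append's startup cost while any sync subpath exists; with only async subpaths, the minimum async startup cost is used).

```c
#include <assert.h>
#include <stddef.h>

/*
 * Startup cost of an Append under asynchronous execution, per the rule
 * sketched above: any synchronous subpath lets the Append return its
 * first tuple as cheaply as the cheapest sync subpath, without waiting
 * for the async ones; with only async subpaths, the first tuple arrives
 * when the cheapest async subpath becomes ready.
 */
static double
append_startup_cost(const double *sync_costs, size_t nsync,
                    const double *async_costs, size_t nasync)
{
    double result;
    size_t i;

    if (nsync > 0)
    {
        result = sync_costs[0];
        for (i = 1; i < nsync; i++)
            if (sync_costs[i] < result)
                result = sync_costs[i];
    }
    else
    {
        result = async_costs[0];
        for (i = 1; i < nasync; i++)
            if (async_costs[i] < result)
                result = async_costs[i];
    }
    return result;
}
```

This reproduces the three example estimates quoted above: two foreign (100) subpaths plus a local (0) one give 0 regardless of order, and two foreign subpaths alone give 100.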
Maybe in the\n>> case of asynchronous append we need to use some reduce factor?\n> \n> Yes. For the reason mentioned above, foreign subpaths don't affect\n> the startup cost of Append as far as any sync subpaths exist. If no\n> sync subpaths exist, the Append's startup cost is the minimum startup\n> cost among the async subpaths.\nI mean that you can possibly change computation of total cost of the \nAsync append node. It may affect the planner choice between ForeignScan \n(followed by the execution of the JOIN locally) and partitionwise join \nstrategies.\n\nHave you also considered the possibility of dynamic choice between \nsynchronous and async append (during optimization)? This may be useful \nfor a query with the LIMIT clause.\n\n-- \nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Thu, 11 Jun 2020 12:03:45 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "The patch has a problem with partitionwise aggregates.\n\nAsynchronous append do not allow the planner to use partial aggregates. \nExample you can see in attachment. I can't understand why: costs of \npartitionwise join are less.\nInitial script and explains of the query with and without the patch you \ncan see in attachment.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com", "msg_date": "Mon, 15 Jun 2020 08:51:23 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "Thanks for testing, but..\n\nAt Mon, 15 Jun 2020 08:51:23 +0500, \"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru> wrote in \n> The patch has a problem with partitionwise aggregates.\n> \n> Asynchronous append do not allow the planner to use partial\n> aggregates. Example you can see in attachment. 
I can't understand why:\n> costs of partitionwise join are less.\n> Initial script and explains of the query with and without the patch\n> you can see in attachment.\n\nI had more or less the same plan with the second one without the patch\n(that is, vanilla master/HEAD, but used merge joins instead).\n\nI'm not sure what prevented join pushdown, but the difference between\nthe two is whether the each partitionwise join is pushed down to\nremote or not, That is hardly seems related to the async execution\npatch.\n\nCould you tell me how did you get the first plan?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 15 Jun 2020 17:29:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 6/15/20 1:29 PM, Kyotaro Horiguchi wrote:\n> Thanks for testing, but..\n> \n> At Mon, 15 Jun 2020 08:51:23 +0500, \"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru> wrote in\n>> The patch has a problem with partitionwise aggregates.\n>>\n>> Asynchronous append do not allow the planner to use partial\n>> aggregates. Example you can see in attachment. I can't understand why:\n>> costs of partitionwise join are less.\n>> Initial script and explains of the query with and without the patch\n>> you can see in attachment.\n> \n> I had more or less the same plan with the second one without the patch\n> (that is, vanilla master/HEAD, but used merge joins instead).\n> \n> I'm not sure what prevented join pushdown, but the difference between\n> the two is whether the each partitionwise join is pushed down to\n> remote or not, That is hardly seems related to the async execution\n> patch.\n> \n> Could you tell me how did you get the first plan?\n\n1. Use clear current vanilla master.\n\n2. 
Start two instances with the script 'frgn2n.sh' from attachment.\nThere are I set GUCs:\nenable_partitionwise_join = true\nenable_partitionwise_aggregate = true\n\n3. Execute query:\nexplain analyze SELECT sum(parts.b)\n\tFROM parts, second\n\tWHERE parts.a = second.a AND second.b < 100;\n\nThat's all.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com", "msg_date": "Mon, 15 Jun 2020 14:59:18 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "Thanks.\n\nMy conclusion on this is the async patch is not the cause of the\nbehavior change mentioned here.\n\nAt Mon, 15 Jun 2020 14:59:18 +0500, \"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru> wrote in \n> > Could you tell me how did you get the first plan?\n> \n> 1. Use clear current vanilla master.\n> \n> 2. Start two instances with the script 'frgn2n.sh' from attachment.\n> There are I set GUCs:\n> enable_partitionwise_join = true\n> enable_partitionwise_aggregate = true\n> \n> 3. Execute query:\n> explain analyze SELECT sum(parts.b)\n> \tFROM parts, second\n> \tWHERE parts.a = second.a AND second.b < 100;\n> \n> That's all.\n\nWith mater/HEAD, I got the second (local join) plan for a while first\nthen got the first (remote join). 
The cause of the plan change was\nfound to be autovacuum on the remote node.\n\nBefore the vacuum the result of remote estimation was as follows.\n\nNode2 (remote)\n=# EXPLAIN SELECT r4.b FROM (public.part_1 r4 INNER JOIN public.second_1 r8 ON (((r4.a = r8.a)) AND ((r8.b < 100))));\n QUERY PLAN \n---------------------------------------------------------------------------\n Merge Join (cost=2269.20..3689.70 rows=94449 width=4)\n Merge Cond: (r8.a = r4.a)\n -> Sort (cost=74.23..76.11 rows=753 width=4)\n Sort Key: r8.a\n -> Seq Scan on second_1 r8 (cost=0.00..38.25 rows=753 width=4)\n Filter: (b < 100)\n -> Sort (cost=2194.97..2257.68 rows=25086 width=8)\n Sort Key: r4.a\n -> Seq Scan on part_1 r4 (cost=0.00..361.86 rows=25086 width=8)\n(9 rows)\n\nAfter running a vacuum it changes as follows.\n\n QUERY PLAN \n------------------------------------------------------------------------\n Hash Join (cost=5.90..776.31 rows=9741 width=4)\n Hash Cond: (r4.a = r8.a)\n -> Seq Scan on part_1 r4 (cost=0.00..360.78 rows=24978 width=8)\n -> Hash (cost=4.93..4.93 rows=78 width=4)\n -> Seq Scan on second_1 r8 (cost=0.00..4.93 rows=78 width=4)\n Filter: (b < 100)\n(6 rows)\n\nThat changes the plan on the local side the way you saw. I saw the\nexactly same behavior with the async execution patch.\n\nregards.\n\n\n\n\nFYI, the explain results for another plan changed as follows. It is\nestimated to return 25839 rows, which is far less than 94449. 
So local\njoin beated remote join.\n\n=# EXPLAIN SELECT a, b FROM public.part_1 ORDER BY a ASC NULLS LAST;\n QUERY PLAN \n------------------------------------------------------------------\n Sort (cost=2194.97..2257.68 rows=25086 width=8)\n Sort Key: a\n -> Seq Scan on part_1 (cost=0.00..361.86 rows=25086 width=8)\n(3 rows)\n=# EXPLAIN SELECT a FROM public.second_1 WHERE ((b < 100)) ORDER BY a ASC NULLS LAST;\n QUERY PLAN \n-----------------------------------------------------------------\n Sort (cost=74.23..76.11 rows=753 width=4)\n Sort Key: a\n -> Seq Scan on second_1 (cost=0.00..38.25 rows=753 width=4)\n Filter: (b < 100)\n(4 rows)\n\nAre changed to:\n\n=# EXPLAIN SELECT a, b FROM public.part_1 ORDER BY a ASC NULLS LAST;\n QUERY PLAN \n------------------------------------------------------------------\n Sort (cost=2185.22..2247.66 rows=24978 width=8)\n Sort Key: a\n -> Seq Scan on part_1 (cost=0.00..360.78 rows=24978 width=8)\n(3 rows)\n\nhoriguti=# EXPLAIN SELECT a FROM public.second_1 WHERE ((b < 100)) ORDER BY a ASC NULLS LAST;\n QUERY PLAN \n---------------------------------------------------------------\n Sort (cost=7.38..7.57 rows=78 width=4)\n Sort Key: a\n -> Seq Scan on second_1 (cost=0.00..4.93 rows=78 width=4)\n Filter: (b < 100)\n(4 rows)\n\nThey return 25056 rows, which is far more than 9741 rows. So remote\njoin won.\n\nOf course the number of returning rows is not the only factor of the\ncost change but is the most significant factor in this case.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 16 Jun 2020 17:30:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 6/16/20 1:30 PM, Kyotaro Horiguchi wrote:\n> They return 25056 rows, which is far more than 9741 rows. 
So remote\n> join won.\n> \n> Of course the number of returning rows is not the only factor of the\n> cost change but is the most significant factor in this case.\n> \nThanks for the attention.\nI see one slight flaw of this approach to asynchronous append:\nAsyncAppend works only for ForeignScan subplans. if we have \nPartialAggregate, Join or another more complicated subplan, we can't use \nasynchronous machinery.\nIt may lead to a situation than small difference in a filter constant \ncan cause a big difference in execution time.\nI imagine an Append node, that can switch current subplan from time to \ntime and all ForeignScan nodes of the overall plan are added to one \nqueue. The scan buffer can be larger than a cursor fetch size and each \nIterateForeignScan() call can induce asynchronous scan of another \nForeignScan node if buffer is not full.\nBut these are only thoughts, not an proposal. I have no questions to \nyour patch right now.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\n\n\n", "msg_date": "Wed, 17 Jun 2020 15:01:08 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "At Wed, 17 Jun 2020 15:01:08 +0500, \"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru> wrote in \n> On 6/16/20 1:30 PM, Kyotaro Horiguchi wrote:\n> > They return 25056 rows, which is far more than 9741 rows. So remote\n> > join won.\n> > Of course the number of returning rows is not the only factor of the\n> > cost change but is the most significant factor in this case.\n> > \n> Thanks for the attention.\n> I see one slight flaw of this approach to asynchronous append:\n> AsyncAppend works only for ForeignScan subplans. 
if we have\n> PartialAggregate, Join or another more complicated subplan, we can't\n> use asynchronous machinery.\n\nYes, the asynchronous append works only when it has at least one\nasync-capable immediate subnode. Currently there's only one\nasync-capable node, ForeignScan.\n\n> I imagine an Append node, that can switch current subplan from time to\n> time and all ForeignScan nodes of the overall plan are added to one\n> queue. The scan buffer can be larger than a cursor fetch size and each\n> IterateForeignScan() call can induce asynchronous scan of another\n> ForeignScan node if buffer is not full.\n> But these are only thoughts, not an proposal. I have no questions to\n> your patch right now.\n\nA major property of async-capable nodes is yieldability(?), that is,\nit ought to be able to give way for other nodes when it is not ready\nto return a tuple. That means such nodes are state machine rather than\nfunction. Fortunately ForeignScan is natively a kind of state machine\nin a sense so it is easily turned into async-capable node. Append is\nalso a state machine in the same sense but currently no other nodes\ncan use it as async-capable node.\n\nFor example, an Agg or Sort node generally needs two or more tuples\nfrom its subnode to generate a tuple to be returned to parent. Some\nworking memory is needed while generating a returning tuple. If the\nnode takes in a tuple from a subnode but not generated a result tuple,\nthe node must yield CPU time to other nodes. These nodes are not state\nmachines at all and it is somewhat hard to make it so. It gets quite\ncomplex in WindowAgg since it calls subnodes at arbitrary call level\nof component functions.\n\nFurther issue is leaf scan nodes, SeqScan, IndexScan, etc. 
also need\nto be asynchronous.\n\nFinally the executor will turn into push-up style from the current\nvolcano (pull-style).\n\nI tried all of that (perhaps except scan nodes) a couple of years ago\nbut the result was a kind of crap^^;\n\nAfter all, I returned to the current shape. It doesn't seem bad as\nThomas proposed the same thing.\n\n\n*1: async-aware is defined (here) as a node that can have\n async-capable subnodes.\n\n> It may lead to a situation than small difference in a filter constant\n> can cause a big difference in execution time.\n\nIt is what we usually see? We could get a big win for certain\ncondition without a loss even otherwise.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 19 Jun 2020 12:05:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "Hello.\n\nAs the result of a discussion with Fujita-san off-list, I'm going to\nhold off development until he decides whether mine or Thomas' is\nbetter.\n\nHowever, I fixed two misbehaviors and rebased.\n\nA. It runs ordered Append asynchronously, but that leads to a bogus\n result. I taught create_append_plan not to make subnodes async when\n pathkey is not NIL.\n\nB. It calculated the total cost of Append by summing up total costs of\n all subnodes including async subnodes. 
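The "state machine rather than function" property described above can be illustrated with a toy model. This is entirely hypothetical and far simpler than the real executor: each async-capable child either returns a value or reports "not ready" and yields, and an Append-like parent round-robins over the children instead of blocking on any one of them.

```c
#include <assert.h>
#include <stdbool.h>

/* Result states of one call into an async-capable child. */
typedef enum { ASYNC_NOT_READY, ASYNC_TUPLE, ASYNC_DONE } AsyncState;

/*
 * A toy async-capable node: it eventually "produces" ntuples values,
 * but is only ready on every other call, yielding in between (much as
 * a ForeignScan would while waiting on its socket).
 */
typedef struct ToyNode
{
    int ntuples;    /* tuples remaining */
    int tick;       /* internal state: ready only on even ticks */
} ToyNode;

static AsyncState
toy_exec(ToyNode *node, int *value_out)
{
    if (node->ntuples <= 0)
        return ASYNC_DONE;
    if (node->tick++ % 2 != 0)
        return ASYNC_NOT_READY;     /* yield: let a sibling run */
    node->ntuples--;
    *value_out = node->ntuples;
    return ASYNC_TUPLE;
}

/* Append-like parent: round-robin over children, skipping unready ones. */
static int
toy_append_count(ToyNode *children, int nchildren)
{
    int produced = 0;
    bool any_alive = true;

    while (any_alive)
    {
        any_alive = false;
        for (int i = 0; i < nchildren; i++)
        {
            int v;
            AsyncState st = toy_exec(&children[i], &v);

            if (st == ASYNC_TUPLE)
                produced++;
            if (st != ASYNC_DONE)
                any_alive = true;
        }
    }
    return produced;
}
```

The point of the toy is that toy_exec() carries its progress in the node's state, so an unready child costs the parent nothing but a skipped slot; a node implemented as an ordinary blocking function could not yield this way, which is why turning nodes like WindowAgg into this shape is hard.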
It is too pessimistic so I\n changed that to the following.\n\n Max(total cost of sync subnodes, maximum cost of async subnodes);\n\n However this is a bit too optimistic in that it ignores interference\n between async subnodes, it is more realistic in the cases where the\n subnode ForeignScans are connecting to different servers.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 02 Jul 2020 11:14:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "Horiguchi-san,\n\nOn Thu, Jul 2, 2020 at 11:14 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> As the result of a discussion with Fujita-san off-list, I'm going to\n> hold off development until he decides whether mine or Thomas' is\n> better.\n\nI'd like to join the party, but IIUC, we don't yet reach a consensus\non which one is the right way to go. So I think we need to discuss\nthat first.\n\n> However, I fixed two misbehaviors and rebased.\n\nThank you for the updated patch!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 2 Jul 2020 12:20:37 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Thu, Jul 2, 2020 at 3:20 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Thu, Jul 2, 2020 at 11:14 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > As the result of a discussion with Fujita-san off-list, I'm going to\n> > hold off development until he decides whether mine or Thomas' is\n> > better.\n>\n> I'd like to join the party, but IIUC, we don't yet reach a consensus\n> on which one is the right way to go. So I think we need to discuss\n> that first.\n\nEither way, we definitely need patch 0001. 
One comment:\n\n-CreateWaitEventSet(MemoryContext context, int nevents)\n+CreateWaitEventSet(MemoryContext context, ResourceOwner res, int nevents)\n\nI wonder if it's better to have it receive ResourceOwner like that, or\nto have it capture CurrentResourceOwner. I think the latter is more\ncommon in existing code.\n\n\n", "msg_date": "Fri, 14 Aug 2020 13:29:16 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Fri, Aug 14, 2020 at 10:29 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Jul 2, 2020 at 3:20 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > I'd like to join the party, but IIUC, we don't yet reach a consensus\n> > on which one is the right way to go. So I think we need to discuss\n> > that first.\n>\n> Either way, we definitely need patch 0001. One comment:\n>\n> -CreateWaitEventSet(MemoryContext context, int nevents)\n> +CreateWaitEventSet(MemoryContext context, ResourceOwner res, int nevents)\n>\n> I wonder if it's better to have it receive ResourceOwner like that, or\n> to have it capture CurrentResourceOwner. I think the latter is more\n> common in existing code.\n\nSorry for not having discussed anything, but actually, I’ve started\nreviewing your patch first. I’ll return to this after reviewing it\nsome more.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Sat, 15 Aug 2020 13:40:17 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Thu, Jul 02, 2020 at 11:14:48AM +0900, Kyotaro Horiguchi wrote:\n> As the result of a discussion with Fujita-san off-list, I'm going to\n> hold off development until he decides whether mine or Thomas' is\n> better.\n\nThe latest patch doesn't apply so I set as WoA.\nhttps://commitfest.postgresql.org/29/2491/\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 19 Aug 2020 23:25:36 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "At Wed, 19 Aug 2020 23:25:36 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in \n> On Thu, Jul 02, 2020 at 11:14:48AM +0900, Kyotaro Horiguchi wrote:\n> > As the result of a discussion with Fujita-san off-list, I'm going to\n> > hold off development until he decides whether mine or Thomas' is\n> > better.\n> \n> The latest patch doesn't apply so I set as WoA.\n> https://commitfest.postgresql.org/29/2491/\n\nThanks. This is the rebased version.\n\nAt Fri, 14 Aug 2020 13:29:16 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> Either way, we definitely need patch 0001. One comment:\n> \n> -CreateWaitEventSet(MemoryContext context, int nevents)\n> +CreateWaitEventSet(MemoryContext context, ResourceOwner res, int nevents)\n> \n> I wonder if it's better to have it receive ResourceOwner like that, or\n> to have it capture CurrentResourceOwner. I think the latter is more\n> common in existing code.\n\nThere's no existing WaitEventSets belonging to a resowner. So\nunconditionally capturing CurrentResourceOwner doesn't work well. I\ncould pass a bool instead but that makes things more complex.\n\nCome to think of \"complex\", ExecAsync stuff in this patch might be\ntoo-much for a short-term solution until executor overhaul, if it\ncomes shortly. (the patch of mine here as a whole is like that,\nthough..). 
The queueing stuff in postgres_fdw is, too.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 20 Aug 2020 16:36:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 20.08.2020 10:36, Kyotaro Horiguchi wrote:\n> At Wed, 19 Aug 2020 23:25:36 -0500, Justin Pryzby <pryzby@telsasoft.com> wrote in\n>> On Thu, Jul 02, 2020 at 11:14:48AM +0900, Kyotaro Horiguchi wrote:\n>>> As the result of a discussion with Fujita-san off-list, I'm going to\n>>> hold off development until he decides whether mine or Thomas' is\n>>> better.\n>> The latest patch doesn't apply so I set as WoA.\n>> https://commitfest.postgresql.org/29/2491/\n> Thanks. This is the rebased version.\n>\n> At Fri, 14 Aug 2020 13:29:16 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in\n>> Either way, we definitely need patch 0001.  One comment:\n>>\n>> -CreateWaitEventSet(MemoryContext context, int nevents)\n>> +CreateWaitEventSet(MemoryContext context, ResourceOwner res, int nevents)\n>>\n>> I wonder if it's better to have it receive ResourceOwner like that, or\n>> to have it capture CurrentResourceOwner.  I think the latter is more\n>> common in existing code.\n> There's no existing WaitEventSets belonging to a resowner. So\n> unconditionally capturing CurrentResourceOwner doesn't work well. I\n> could pass a bool instead but that makes things more complex.\n>\n> Come to think of \"complex\", ExecAsync stuff in this patch might be\n> too-much for a short-term solution until executor overhaul, if it\n> comes shortly. (the patch of mine here as a whole is like that,\n> though..). 
The queueing stuff in postgres_fdw is, too.\n>\n> regards.\n>\n\n\nHi,\nLooks like the current implementation of asynchronous append incorrectly \nhandles the LIMIT clause:\n\npsql:append.sql:10: ERROR:  another command is already in progress\nCONTEXT:  remote SQL command: CLOSE c1\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 22 Sep 2020 15:52:33 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 22.09.2020 15:52, Konstantin Knizhnik wrote:\n>\n>\n> On 20.08.2020 10:36, Kyotaro Horiguchi wrote:\n>> At Wed, 19 Aug 2020 23:25:36 -0500, Justin Pryzby \n>> <pryzby@telsasoft.com> wrote in\n>>> On Thu, Jul 02, 2020 at 11:14:48AM +0900, Kyotaro Horiguchi wrote:\n>>>> As the result of a discussion with Fujita-san off-list, I'm going to\n>>>> hold off development until he decides whether mine or Thomas' is\n>>>> better.\n>>> The latest patch doesn't apply so I set as WoA.\n>>> https://commitfest.postgresql.org/29/2491/\n>> Thanks. This is the rebased version.\n>>\n>> At Fri, 14 Aug 2020 13:29:16 +1200, Thomas Munro \n>> <thomas.munro@gmail.com> wrote in\n>>> Either way, we definitely need patch 0001.  One comment:\n>>>\n>>> -CreateWaitEventSet(MemoryContext context, int nevents)\n>>> +CreateWaitEventSet(MemoryContext context, ResourceOwner res, int \n>>> nevents)\n>>>\n>>> I wonder if it's better to have it receive ResourceOwner like that, or\n>>> to have it capture CurrentResourceOwner.  I think the latter is more\n>>> common in existing code.\n>> There's no existing WaitEventSets belonging to a resowner. So\n>> unconditionally capturing CurrentResourceOwner doesn't work well. 
I\n>> could pass a bool instead but that makes things more complex.\n>>\n>> Come to think of \"complex\", ExecAsync stuff in this patch might be\n>> too-much for a short-term solution until executor overhaul, if it\n>> comes shortly. (the patch of mine here as a whole is like that,\n>> though..). The queueing stuff in postgres_fdw is, too.\n>>\n>> regards.\n>>\n>\n>\n> Hi,\n> Looks like the current implementation of asynchronous append incorrectly \n> handles the LIMIT clause:\n>\n> psql:append.sql:10: ERROR:  another command is already in progress\n> CONTEXT:  remote SQL command: CLOSE c1\n>\n>\n>\nJust FYI: the following patch fixes the problem:\n\n--- a/contrib/postgres_fdw/postgres_fdw.c\n+++ b/contrib/postgres_fdw/postgres_fdw.c\n@@ -1667,6 +1667,11 @@ remove_async_node(ForeignScanState *node)\n\n         if (cur == node)\n         {\n+            PGconn *conn = curstate->s.conn;\n+\n+            while(PQisBusy(conn))\n+                PQclear(PQgetResult(conn));\n+\n             prev_state->waiter = curstate->waiter;\n\n             /* relink to the previous node if the last node was removed */\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 22 Sep 2020 16:40:11 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On 22.09.2020 16:40, Konstantin Knizhnik wrote:\n>\n>\n> On 22.09.2020 15:52, Konstantin Knizhnik wrote:\n>>\n>>\n>> On 20.08.2020 10:36, Kyotaro Horiguchi wrote:\n>>> At Wed, 19 Aug 2020 23:25:36 -0500, Justin Pryzby \n>>> <pryzby@telsasoft.com> wrote in\n>>>> On Thu, Jul 02, 2020 at 11:14:48AM +0900, Kyotaro Horiguchi wrote:\n>>>>> As the result of a discussion with Fujita-san off-list, I'm going to\n>>>>> hold off development until he decides whether mine or Thomas' is\n>>>>> better.\n>>>> The latest patch doesn't apply so I set as WoA.\n>>>> https://commitfest.postgresql.org/29/2491/\n>>> Thanks. This is the rebased version.\n>>>\n>>> At Fri, 14 Aug 2020 13:29:16 +1200, Thomas Munro \n>>> <thomas.munro@gmail.com> wrote in\n>>>> Either way, we definitely need patch 0001.  One comment:\n>>>>\n>>>> -CreateWaitEventSet(MemoryContext context, int nevents)\n>>>> +CreateWaitEventSet(MemoryContext context, ResourceOwner res, int \n>>>> nevents)\n>>>>\n>>>> I wonder if it's better to have it receive ResourceOwner like that, or\n>>>> to have it capture CurrentResourceOwner.  I think the latter is more\n>>>> common in existing code.\n>>> There's no existing WaitEventSets belonging to a resowner. So\n>>> unconditionally capturing CurrentResourceOwner doesn't work well. I\n>>> could pass a bool instead but that makes things more complex.\n>>>\n>>> Come to think of \"complex\", ExecAsync stuff in this patch might be\n>>> too-much for a short-term solution until executor overhaul, if it\n>>> comes shortly. (the patch of mine here as a whole is like that,\n>>> though..). 
The queueing stuff in postgres_fdw is, too.\n>>>\n>>> regards.\n>>>\n>>\n>>\n>> Hi,\n>> Looks like the current implementation of asynchronous append incorrectly \n>> handles the LIMIT clause:\n>>\n>> psql:append.sql:10: ERROR:  another command is already in progress\n>> CONTEXT:  remote SQL command: CLOSE c1\n>>\n>>\n>>\n> Just FYI: the following patch fixes the problem:\n>\n> --- a/contrib/postgres_fdw/postgres_fdw.c\n> +++ b/contrib/postgres_fdw/postgres_fdw.c\n> @@ -1667,6 +1667,11 @@ remove_async_node(ForeignScanState *node)\n>\n>          if (cur == node)\n>          {\n> +            PGconn *conn = curstate->s.conn;\n> +\n> +            while(PQisBusy(conn))\n> +                PQclear(PQgetResult(conn));\n> +\n>              prev_state->waiter = curstate->waiter;\n>\n>              /* relink to the previous node if the last node was \n> removed */\n>\n\nSorry, but it is not the only problem.\nIf you execute the query above and then in the same backend try to \ninsert more records, then the backend crashes:\n\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0  0x00007f5dfc59a231 in fetch_received_data (node=0x230c130) at \npostgres_fdw.c:3736\n3736            Assert(fsstate->s.commonstate->leader == node);\n(gdb) p sstate->s.commonstate\nNo symbol \"sstate\" in current context.\n(gdb) p fsstate->s.commonstate\nCannot access memory at address 0x7f7f7f7f7f7f7f87\n\nAlso, my patch doesn't solve the problem for a small number of records \n(100) in the table.\nI attach yet another patch which fixes both problems.\nPlease notice that I did not go deep inside the code of async append, so I \nam not sure that my patch is complete and correct.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 22 Sep 2020 17:59:45 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Tue, Sep 22, 2020 at 9:52 PM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> On 20.08.2020 10:36, Kyotaro Horiguchi wrote:\n> > Come to think of \"complex\", ExecAsync stuff in this patch might be\n> > too-much for a short-term solution until executor overhaul, if it\n> > comes shortly. (the patch of mine here as a whole is like that,\n> > though..). The queueing stuff in postgres_fdw is, too.\n\n> Looks like the current implementation of asynchronous append incorrectly\n> handles the LIMIT clause:\n>\n> psql:append.sql:10: ERROR: another command is already in progress\n> CONTEXT: remote SQL command: CLOSE c1\n\nThanks for the report (and patch)!\n\nThe same issue has already been noticed in [1]. I too think the cause\nof the issue would be in the 0003 patch (ie, “the queueing stuff” in\npostgres_fdw), but I’m not sure it is really a good idea to have that\nin postgres_fdw in the first place, because it would impact\nperformance negatively in some cases (see [1]).\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK16E1erFV9STg8yokoewY6E-zEJtLzHUJcQx%2B3dyivCT%3DA%40mail.gmail.com\n\n\n", "msg_date": "Wed, 23 Sep 2020 02:20:46 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "Your AsyncAppend doesn't switch to another source if the data in the current \nleader is available:\n\n/*\n * The request for the next node cannot be sent before the leader\n * responds. 
Finish the current leader if possible.\n */\nif (PQisBusy(leader_state->s.conn))\n{\n int rc = WaitLatchOrSocket(NULL, WL_SOCKET_READABLE | WL_TIMEOUT | \nWL_EXIT_ON_PM_DEATH, PQsocket(leader_state->s.conn), 0, \nWAIT_EVENT_ASYNC_WAIT);\n if (!(rc & WL_SOCKET_READABLE))\n available = false;\n}\n\n/* fetch the leader's data and enqueue it for the next request */\nif (available)\n{\n fetch_received_data(leader);\n add_async_waiter(leader);\n}\n\nI don't understand why this is needed. If we have fdw connections with \ndifferent latency, then we will read data from the fast connection \nfirst. I think this may be a source of skew and decrease the efficiency of \nasynchronous append.\n\nFor example, see the synthetic query below:\nCREATE TABLE l (a integer) PARTITION BY LIST (a);\nCREATE FOREIGN TABLE f1 PARTITION OF l FOR VALUES IN (1) SERVER lb \nOPTIONS (table_name 'l1');\nCREATE FOREIGN TABLE f2 PARTITION OF l FOR VALUES IN (2) SERVER lb \nOPTIONS (table_name 'l2');\n\nINSERT INTO l (a) SELECT 2 FROM generate_series(1,200) as gs;\nINSERT INTO l (a) SELECT 1 FROM generate_series(1,1000) as gs;\n\nEXPLAIN ANALYZE (SELECT * FROM f1) UNION ALL (SELECT * FROM f2) LIMIT 400;\n\nResult:\nLimit (cost=100.00..122.21 rows=400 width=4) (actual time=0.483..1.183 \nrows=400 loops=1)\n -> Append (cost=100.00..424.75 rows=5850 width=4) (actual \ntime=0.482..1.149 rows=400 loops=1)\n -> Foreign Scan on f1 (cost=100.00..197.75 rows=2925 \nwidth=4) (actual time=0.481..1.115 rows=400 loops=1)\n -> Foreign Scan on f2 (cost=100.00..197.75 rows=2925 \nwidth=4) (never executed)\n\nAs you can see, the executor scans one input and doesn't try to scan the other.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Fri, 25 Sep 2020 21:34:04 +0300", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Thu, Aug 20, 2020 at 4:36 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> This is rebased version.\n\nThanks for the rebased version!\n\n> Come to think of \"complex\", ExecAsync stuff in this patch might be\n> too-much for a short-term solution until executor overhaul, if it\n> comes shortly. (the patch of mine here as a whole is like that,\n> though..). The queueing stuff in postgres_fdw is, too.\n\nHere are some review comments on “ExecAsync stuff” (the 0002 patch):\n\n@@ -192,10 +196,20 @@ ExecInitAppend(Append *node, EState *estate, int eflags)\n\n i = -1;\n while ((i = bms_next_member(validsubplans, i)) >= 0)\n {\n Plan *initNode = (Plan *) list_nth(node->appendplans, i);\n+ int sub_eflags = eflags;\n+\n+ /* Let async-capable subplans run asynchronously */\n+ if (i < node->nasyncplans)\n+ {\n+ sub_eflags |= EXEC_FLAG_ASYNC;\n+ nasyncplans++;\n+ }\n\nThis would be more ambitious than Thomas’ patch: his patch only allows\nforeign scan nodes beneath an Append node to be executed\nasynchronously, but your patch allows any plan nodes beneath it (e.g.,\nlocal child joins between foreign tables). Right? I think that would\nbe great, but I’m not sure how we execute such plan nodes\nasynchronously as other parts of your patch seem to assume that only\nforeign scan nodes beneath an Append are considered as async-capable.\nMaybe I’m missing something, though. Could you elaborate on that?\n\nYour patch (and the original patch by Robert [1]) modified\nExecAppend() so that it can process local plan nodes while waiting for\nthe results from remote queries, which would be also a feature that’s\nnot supported by Thomas’ patch, but I’d like to know performance\nresults. Did you do performance testing on that? 
I couldn’t find\nthat from the archive.\n\n+bool\n+is_async_capable_path(Path *path)\n+{\n+ switch (nodeTag(path))\n+ {\n+ case T_ForeignPath:\n+ {\n+ FdwRoutine *fdwroutine = path->parent->fdwroutine;\n+\n+ Assert(fdwroutine != NULL);\n+ if (fdwroutine->IsForeignPathAsyncCapable != NULL &&\n+ fdwroutine->IsForeignPathAsyncCapable((ForeignPath *) path))\n+ return true;\n+ }\n\nDo we really need to introduce the FDW API\nIsForeignPathAsyncCapable()? I think we could determine whether a\nforeign path is async-capable, by checking whether the FDW has the\npostgresForeignAsyncConfigureWait() API.\n\nIn relation to the first comment, I noticed this change in the\npostgres_fdw regression tests:\n\nHEAD:\n\nEXPLAIN (VERBOSE, COSTS OFF)\nSELECT a, count(t1) FROM pagg_tab t1 GROUP BY a HAVING avg(b) < 22 ORDER BY 1;\n QUERY PLAN\n------------------------------------------------------------------------\n Sort\n Output: t1.a, (count(((t1.*)::pagg_tab)))\n Sort Key: t1.a\n -> Append\n -> HashAggregate\n Output: t1.a, count(((t1.*)::pagg_tab))\n Group Key: t1.a\n Filter: (avg(t1.b) < '22'::numeric)\n -> Foreign Scan on public.fpagg_tab_p1 t1\n Output: t1.a, t1.*, t1.b\n Remote SQL: SELECT a, b, c FROM public.pagg_tab_p1\n -> HashAggregate\n Output: t1_1.a, count(((t1_1.*)::pagg_tab))\n Group Key: t1_1.a\n Filter: (avg(t1_1.b) < '22'::numeric)\n -> Foreign Scan on public.fpagg_tab_p2 t1_1\n Output: t1_1.a, t1_1.*, t1_1.b\n Remote SQL: SELECT a, b, c FROM public.pagg_tab_p2\n -> HashAggregate\n Output: t1_2.a, count(((t1_2.*)::pagg_tab))\n Group Key: t1_2.a\n Filter: (avg(t1_2.b) < '22'::numeric)\n -> Foreign Scan on public.fpagg_tab_p3 t1_2\n Output: t1_2.a, t1_2.*, t1_2.b\n Remote SQL: SELECT a, b, c FROM public.pagg_tab_p3\n(25 rows)\n\nPatched:\n\nEXPLAIN (VERBOSE, COSTS OFF)\nSELECT a, count(t1) FROM pagg_tab t1 GROUP BY a HAVING avg(b) < 22 ORDER BY 1;\n QUERY PLAN\n------------------------------------------------------------------------\n Sort\n Output: t1.a, 
(count(((t1.*)::pagg_tab)))\n Sort Key: t1.a\n -> HashAggregate\n Output: t1.a, count(((t1.*)::pagg_tab))\n Group Key: t1.a\n Filter: (avg(t1.b) < '22'::numeric)\n -> Append\n Async subplans: 3\n -> Async Foreign Scan on public.fpagg_tab_p1 t1_1\n Output: t1_1.a, t1_1.*, t1_1.b\n Remote SQL: SELECT a, b, c FROM public.pagg_tab_p1\n -> Async Foreign Scan on public.fpagg_tab_p2 t1_2\n Output: t1_2.a, t1_2.*, t1_2.b\n Remote SQL: SELECT a, b, c FROM public.pagg_tab_p2\n -> Async Foreign Scan on public.fpagg_tab_p3 t1_3\n Output: t1_3.a, t1_3.*, t1_3.b\n Remote SQL: SELECT a, b, c FROM public.pagg_tab_p3\n(18 rows)\n\nSo, your patch can only handle foreign scan nodes beneath an Append\nfor now? Anyway, I think this would lead to the improved efficiency,\nconsidering performance results from Movead [2]. And I think planner\nchanges to make this happen would be a good thing in your patch.\n\nThat’s all I have for now. Sorry for the delay.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoaXQEt4tZ03FtQhnzeDEMzBck%2BLrni0UWHVVgOTnA6C1w%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/2020011417113872105895%40highgo.ca\n\n\n", "msg_date": "Sat, 26 Sep 2020 19:45:39 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "Thanks for reviewing.\n\nAt Sat, 26 Sep 2020 19:45:39 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \n> > Come to think of \"complex\", ExecAsync stuff in this patch might be\n> > too-much for a short-term solution until executor overhaul, if it\n> > comes shortly. (the patch of mine here as a whole is like that,\n> > though..). 
The queueing stuff in postgres_fdw is, too.\n> \n> Here are some review comments on “ExecAsync stuff” (the 0002 patch):\n> \n> @@ -192,10 +196,20 @@ ExecInitAppend(Append *node, EState *estate, int eflags)\n> \n> i = -1;\n> while ((i = bms_next_member(validsubplans, i)) >= 0)\n> {\n> Plan *initNode = (Plan *) list_nth(node->appendplans, i);\n> + int sub_eflags = eflags;\n> +\n> + /* Let async-capable subplans run asynchronously */\n> + if (i < node->nasyncplans)\n> + {\n> + sub_eflags |= EXEC_FLAG_ASYNC;\n> + nasyncplans++;\n> + }\n> \n> This would be more ambitious than Thomas’ patch: his patch only allows\n> foreign scan nodes beneath an Append node to be executed\n> asynchronously, but your patch allows any plan nodes beneath it (e.g.,\n> local child joins between foreign tables). Right? I think that would\n\nRight. It is intended to work in any place, but all upper nodes up to the\ncommon node must be \"async-aware and capable\" for the machinery to work. So it\ndoesn't work currently since Append is the only async-aware node.\n> be great, but I’m not sure how we execute such plan nodes\n> asynchronously as other parts of your patch seem to assume that only\n> foreign scan nodes beneath an Append are considered as async-capable.\n> Maybe I’m missing something, though. Could you elaborate on that?\n\nRight about this patch. As a trial at hand, in my faint memory, some\njoin methods and some aggregation can be async-aware but they are not\nincluded in this patch so as not to bloat it with more complex stuff.\n\n> Your patch (and the original patch by Robert [1]) modified\n> ExecAppend() so that it can process local plan nodes while waiting for\n> the results from remote queries, which would be also a feature that’s\n> not supported by Thomas’ patch, but I’d like to know performance\n> results. Did you do performance testing on that? 
I couldn’t find\n> that from the archive.\n\nAt least, even though theoretically, I think it's obvious that it's\nmore performant to do something than just sitting waiting for the next tuple\nto come from abroad. (It's not so obvious for a slow local\nvs. hyperspeed-remotes configuration, but...)\n\n> +bool\n> +is_async_capable_path(Path *path)\n> +{\n> + switch (nodeTag(path))\n> + {\n> + case T_ForeignPath:\n> + {\n> + FdwRoutine *fdwroutine = path->parent->fdwroutine;\n> +\n> + Assert(fdwroutine != NULL);\n> + if (fdwroutine->IsForeignPathAsyncCapable != NULL &&\n> + fdwroutine->IsForeignPathAsyncCapable((ForeignPath *) path))\n> + return true;\n> + }\n> \n> Do we really need to introduce the FDW API\n> IsForeignPathAsyncCapable()? I think we could determine whether a\n> foreign path is async-capable, by checking whether the FDW has the\n> postgresForeignAsyncConfigureWait() API.\n\nNote that the API routine takes a path, but it's just that a child\npath in a certain form theoretically can obstruct async behavior.\n\n> In relation to the first comment, I noticed this change in the\n> postgres_fdw regression tests:\n> \n> HEAD:\n> \n> EXPLAIN (VERBOSE, COSTS OFF)\n> SELECT a, count(t1) FROM pagg_tab t1 GROUP BY a HAVING avg(b) < 22 ORDER BY 1;\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Sort\n> Output: t1.a, (count(((t1.*)::pagg_tab)))\n> Sort Key: t1.a\n> -> Append\n> -> HashAggregate\n> Output: t1.a, count(((t1.*)::pagg_tab))\n> Group Key: t1.a\n> Filter: (avg(t1.b) < '22'::numeric)\n> -> Foreign Scan on public.fpagg_tab_p1 t1\n> Output: t1.a, t1.*, t1.b\n> Remote SQL: SELECT a, b, c FROM public.pagg_tab_p1\n> -> HashAggregate\n> Output: t1_1.a, count(((t1_1.*)::pagg_tab))\n> Group Key: t1_1.a\n> Filter: (avg(t1_1.b) < '22'::numeric)\n> -> Foreign Scan on public.fpagg_tab_p2 t1_1\n> Output: t1_1.a, t1_1.*, t1_1.b\n> Remote SQL: SELECT a, b, c FROM public.pagg_tab_p2\n> -> HashAggregate\n> Output: t1_2.a, 
count(((t1_2.*)::pagg_tab))\n> Group Key: t1_2.a\n> Filter: (avg(t1_2.b) < '22'::numeric)\n> -> Foreign Scan on public.fpagg_tab_p3 t1_2\n> Output: t1_2.a, t1_2.*, t1_2.b\n> Remote SQL: SELECT a, b, c FROM public.pagg_tab_p3\n> (25 rows)\n> \n> Patched:\n> \n> EXPLAIN (VERBOSE, COSTS OFF)\n> SELECT a, count(t1) FROM pagg_tab t1 GROUP BY a HAVING avg(b) < 22 ORDER BY 1;\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Sort\n> Output: t1.a, (count(((t1.*)::pagg_tab)))\n> Sort Key: t1.a\n> -> HashAggregate\n> Output: t1.a, count(((t1.*)::pagg_tab))\n> Group Key: t1.a\n> Filter: (avg(t1.b) < '22'::numeric)\n> -> Append\n> Async subplans: 3\n> -> Async Foreign Scan on public.fpagg_tab_p1 t1_1\n> Output: t1_1.a, t1_1.*, t1_1.b\n> Remote SQL: SELECT a, b, c FROM public.pagg_tab_p1\n> -> Async Foreign Scan on public.fpagg_tab_p2 t1_2\n> Output: t1_2.a, t1_2.*, t1_2.b\n> Remote SQL: SELECT a, b, c FROM public.pagg_tab_p2\n> -> Async Foreign Scan on public.fpagg_tab_p3 t1_3\n> Output: t1_3.a, t1_3.*, t1_3.b\n> Remote SQL: SELECT a, b, c FROM public.pagg_tab_p3\n> (18 rows)\n> \n> So, your patch can only handle foreign scan nodes beneath an Append\n\nYes, as I wrote above. Append-Foreign is the most promising and\nsuitable as an example. (and... Agg/WindowAgg are the hardest nodes\nto make async-aware.)\n\n> for now? Anyway, I think this would lead to the improved efficiency,\n> considering performance results from Movead [2]. And I think planner\n> changes to make this happen would be a good thing in your patch.\n\nThanks.\n\n> That’s all I have for now. Sorry for the delay.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 28 Sep 2020 10:35:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Mon, Sep 28, 2020 at 10:35 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Sat, 26 Sep 2020 19:45:39 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > Here are some review comments on “ExecAsync stuff” (the 0002 patch):\n> >\n> > @@ -192,10 +196,20 @@ ExecInitAppend(Append *node, EState *estate, int eflags)\n> >\n> > i = -1;\n> > while ((i = bms_next_member(validsubplans, i)) >= 0)\n> > {\n> > Plan *initNode = (Plan *) list_nth(node->appendplans, i);\n> > + int sub_eflags = eflags;\n> > +\n> > + /* Let async-capable subplans run asynchronously */\n> > + if (i < node->nasyncplans)\n> > + {\n> > + sub_eflags |= EXEC_FLAG_ASYNC;\n> > + nasyncplans++;\n> > + }\n> >\n> > This would be more ambitious than Thomas’ patch: his patch only allows\n> > foreign scan nodes beneath an Append node to be executed\n> > asynchronously, but your patch allows any plan nodes beneath it (e.g.,\n> > local child joins between foreign tables). Right? I think that would\n>\n> Right. It is intended to work any place,\n\n> > be great, but I’m not sure how we execute such plan nodes\n> > asynchronously as other parts of your patch seem to assume that only\n> > foreign scan nodes beneath an Append are considered as async-capable.\n> > Maybe I’m missing something, though. Could you elaborate on that?\n>\n> Right about this patch. As a trial at hand, in my faint memory, some\n> join methods and some aggregaioion can be async-aware but they are not\n> included in this patch not to bloat it with more complex stuff.\n\nYeah. I’m concerned about what was discussed in [1] as well. 
I think\nit would be better only to allow foreign scan nodes beneath an Append,\nas in Thomas’ patch (and the original patch by Robert), at least in\nthe first cut of this feature.\n\nBTW: I noticed that you changed the ExecProcNode() API so that an\nAppend calling FDWs can know whether they return tuples immediately or\nnot:\n\n+ while ((i = bms_first_member(needrequest)) >= 0)\n+ {\n+ TupleTableSlot *slot;\n+ PlanState *subnode = node->appendplans[i];\n+\n+ slot = ExecProcNode(subnode);\n+ if (subnode->asyncstate == AS_AVAILABLE)\n+ {\n+ if (!TupIsNull(slot))\n+ {\n+ node->as_asyncresult[node->as_nasyncresult++] = slot;\n+ node->as_needrequest = bms_add_member(node->as_needrequest, i);\n+ }\n+ }\n+ else\n+ node->as_pending_async = bms_add_member(node->as_pending_async, i);\n+ }\n\nIn the case of postgres_fdw:\n\n /*\n * postgresIterateForeignScan\n- * Retrieve next row from the result set, or clear tuple slot to indicate\n- * EOF.\n+ * Retrieve next row from the result set.\n+ *\n+ * For synchronous nodes, returns clear tuple slot means EOF.\n+ *\n+ * For asynchronous nodes, if clear tuple slot is returned, the caller\n+ * needs to check async state to tell if all tuples received\n+ * (AS_AVAILABLE) or waiting for the next data to come (AS_WAITING).\n */\n\nThat is, 1) in postgresIterateForeignScan() postgres_fdw sets the new\nPlanState’s flag asyncstate to AS_AVAILABLE/AS_WAITING depending on\nwhether it returns a tuple immediately or not, and then 2) the Append\nknows that from the new flag when the callback routine returns. I’m\nnot sure this is a good idea, because it seems likely that the\nExecProcNode() change would affect many other places in the executor,\nmaking maintenance and/or future development difficult. 
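For what it's worth, the flag-based protocol described above can be sketched as a toy simulation (plain Python with hypothetical names -- this is not executor code and not part of the patch, just an illustration of the "empty slot + asyncstate" convention):

```python
# Toy simulation of the protocol described above: exec_proc_node() returns a
# slot, and the caller inspects the node's asyncstate.  An empty slot with
# AS_AVAILABLE means EOF; an empty slot with AS_WAITING means "no tuple yet,
# poll this node again later".  All names here are invented for illustration.
AS_AVAILABLE = "AS_AVAILABLE"
AS_WAITING = "AS_WAITING"

class ToySubplan:
    """Fake async subplan: 'ready' tuples are returned at once; 'delayed'
    tuples become ready only on the poll after they are first requested,
    imitating network latency on a foreign scan."""

    def __init__(self, ready, delayed):
        self.ready = list(ready)
        self.delayed = list(delayed)
        self.asyncstate = AS_WAITING

    def exec_proc_node(self):
        if self.ready:
            self.asyncstate = AS_AVAILABLE
            return self.ready.pop(0)
        if self.delayed:
            # Tuple has not arrived yet: empty slot + AS_WAITING.
            self.asyncstate = AS_WAITING
            self.ready.append(self.delayed.pop(0))  # "arrives" by next poll
            return None
        self.asyncstate = AS_AVAILABLE  # empty slot + AS_AVAILABLE: EOF
        return None

def exec_append(subplans):
    """Drain subplans the way the quoted ExecAppend loop does: keep tuples
    from AS_AVAILABLE nodes, drop nodes at EOF, re-poll AS_WAITING nodes."""
    results = []
    pending = list(subplans)
    while pending:
        still_pending = []
        for node in pending:
            slot = node.exec_proc_node()
            if node.asyncstate == AS_AVAILABLE:
                if slot is not None:
                    results.append(slot)
                    still_pending.append(node)  # ask this node for more
                # else: EOF reached, this node is finished
            else:
                still_pending.append(node)      # AS_WAITING: poll later
        pending = still_pending
    return results

print(sorted(exec_append([ToySubplan([1, 2], [3]), ToySubplan([], [4])])))
# prints [1, 2, 3, 4]
```

The point of the sketch is that the EOF/"not yet" distinction lives entirely in the side-channel flag rather than in the return value, which is exactly why every caller of ExecProcNode() would have to learn about it.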
I think the\nFDW callback routines proposed in the original patch by Robert would\nprovide a cleaner way to do asynchronous execution of FDWs without\nchanging the ExecProcNode() API, IIUC:\n\n+On the other hand, nodes that wish to produce tuples asynchronously\n+generally need to implement three methods:\n+\n+1. When an asynchronous request is made, the node's ExecAsyncRequest callback\n+will be invoked; it should use ExecAsyncSetRequiredEvents to indicate the\n+number of file descriptor events for which it wishes to wait and whether it\n+wishes to receive a callback when the process latch is set. Alternatively,\n+it can instead use ExecAsyncRequestDone if a result is available immediately.\n+\n+2. When the event loop wishes to wait or poll for file descriptor events and\n+the process latch, the ExecAsyncConfigureWait callback is invoked to configure\n+the file descriptor wait events for which the node wishes to wait. This\n+callback isn't needed if the node only cares about the process latch.\n+\n+3. When file descriptors or the process latch become ready, the node's\n+ExecAsyncNotify callback is invoked.\n\nWhat is the reason for not doing like this in your patch?\n\nThanks for the explanation!\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoYrbgTBnLwnr1v%3Dpk%2BC%3DznWg7AgV9%3DM9ehrq6TDexPQNw%40mail.gmail.com\n\n\n", "msg_date": "Tue, 29 Sep 2020 04:45:25 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Mon, Sep 28, 2020 at 10:35 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Sat, 26 Sep 2020 19:45:39 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > Your patch (and the original patch by Robert [1]) modified\n> > ExecAppend() so that it can process local plan nodes while waiting for\n> > the results from remote queries, which would be also a feature that’s\n> > not supported by Thomas’ patch, but I’d like to know performance\n> > results.\n\n> At least, even though theoretically, I think it's obvious that it's\n> more performant to do something than just sitting waiting for the next tuple\n> to come from abroad.\n\nI did a simple test on my laptop:\n\ncreate table t1 (a int, b int, c text);\ncreate foreign table p1 (a int, b int, c text) server server1 options\n(table_name 't1');\ncreate table p2 (a int, b int, c text);\n\ninsert into p1 select 10 + i % 10, i, to_char(i, 'FM00000') from\ngenerate_series(0, 99999) i;\ninsert into p2 select 20 + i % 10, i, to_char(i, 'FM00000') from\ngenerate_series(0, 99999) i;\n\nanalyze p1;\nvacuum analyze p2;\n\ncreate table pt (a int, b int, c text) partition by range (a);\nalter table pt attach partition p1 for values from (10) to (20);\nalter table pt attach partition p2 for values from (20) to (30);\n\nset enable_partitionwise_aggregate to on;\n\nselect a, count(*) from pt group by a;\n\nHEAD: 47.734 ms\nWith your patch: 32.400 ms\n\nThis test is pretty simple, but I think this shows that the mentioned\nfeature would be useful for cases where it takes time to get the\nresults from remote queries.\n\nCool!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 30 Sep 2020 16:30:41 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "At Wed, 30 Sep 2020 16:30:41 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \r\n> On Mon, Sep 28, 2020 at 10:35 AM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> > At Sat, 26 Sep 2020 19:45:39 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\r\n> > > Your patch (and the original patch by Robert [1]) modified\r\n> > > ExecAppend() so that it can process local plan nodes while waiting for\r\n> > > the results from remote queries, which would be also a feature that’s\r\n> > > not supported by Thomas’ patch, but I’d like to know performance\r\n> > > results.\r\n> \r\n> > At least, even though theoretically, I think it's obvious that it's\r\n> > performant to do something than just sitting waiting for the next tuple\r\n> > to come from abroad.\r\n> \r\n> I did a simple test on my laptop:\r\n> \r\n> create table t1 (a int, b int, c text);\r\n> create foreign table p1 (a int, b int, c text) server server1 options\r\n> (table_name 't1');\r\n> create table p2 (a int, b int, c text);\r\n> \r\n> insert into p1 select 10 + i % 10, i, to_char(i, 'FM00000') from\r\n> generate_series(0, 99999) i;\r\n> insert into p2 select 20 + i % 10, i, to_char(i, 'FM00000') from\r\n> generate_series(0, 99999) i;\r\n> \r\n> analyze p1;\r\n> vacuum analyze p2;\r\n> \r\n> create table pt (a int, b int, c text) partition by range (a);\r\n> alter table pt attach partition p1 for values from (10) to (20);\r\n> alter table pt attach partition p2 for values from (20) to (30);\r\n> \r\n> set enable_partitionwise_aggregate to on;\r\n> \r\n> select a, count(*) from pt group by a;\r\n> \r\n> HEAD: 47.734 ms\r\n> With your patch: 32.400 ms\r\n> \r\n> This test is pretty simple, but I think this shows that the mentioned\r\n> feature would be useful for cases where it takes time to get the\r\n> results from remote queries.\r\n> \r\n> Cool!\r\n\r\nThanks. 
Since it starts all remote nodes before local ones, the\r\nstartup gain would be the shorter of the startup time of the fastest\r\nremote and the time required for all local nodes. Plus remote\r\ntransfer gets asynchronous fetch gain.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Thu, 01 Oct 2020 11:16:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Thu, Oct 01, 2020 at 11:16:53AM +0900, Kyotaro Horiguchi wrote:\n> Thanks. Since it starts all remote nodes before local ones, the\n> startup gain would be the shorter of the startup time of the fastest\n> remote and the time required for all local nodes. Plus remote\n> transfer gets asynchronous fetch gain.\n\nThe patch fails to apply per the CF bot. For now, I have moved it to\nnext CF, waiting on author.\n--\nMichael", "msg_date": "Thu, 1 Oct 2020 12:56:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "At Thu, 1 Oct 2020 12:56:02 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Oct 01, 2020 at 11:16:53AM +0900, Kyotaro Horiguchi wrote:\n> > Thanks. Since it starts all remote nodes before local ones, the\n> > startup gain would be the shorter of the startup time of the fastest\n> > remote and the time required for all local nodes. Plus remote\n> > transfer gets asynchronous fetch gain.\n> \n> The patch fails to apply per the CF bot. For now, I have moved it to\n> next CF, waiting on author.\n\nThanks! Rebased.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 01 Oct 2020 13:43:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Tue, Sep 29, 2020 at 4:45 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> BTW: I noticed that you changed the ExecProcNode() API so that an\n> Append calling FDWs can know whether they return tuples immediately or\n> not:\n\n> That is, 1) in postgresIterateForeignScan() postgres_fdw sets the new\n> PlanState’s flag asyncstate to AS_AVAILABLE/AS_WAITING depending on\n> whether it returns a tuple immediately or not, and then 2) the Append\n> knows that from the new flag when the callback routine returns. I’m\n> not sure this is a good idea, because it seems likely that the\n> ExecProcNode() change would affect many other places in the executor,\n> making maintenance and/or future development difficult. I think the\n> FDW callback routines proposed in the original patch by Robert would\n> provide a cleaner way to do asynchronous execution of FDWs without\n> changing the ExecProcNode() API, IIUC:\n>\n> +On the other hand, nodes that wish to produce tuples asynchronously\n> +generally need to implement three methods:\n> +\n> +1. When an asynchronous request is made, the node's ExecAsyncRequest callback\n> +will be invoked; it should use ExecAsyncSetRequiredEvents to indicate the\n> +number of file descriptor events for which it wishes to wait and whether it\n> +wishes to receive a callback when the process latch is set. Alternatively,\n> +it can instead use ExecAsyncRequestDone if a result is available immediately.\n> +\n> +2. When the event loop wishes to wait or poll for file descriptor events and\n> +the process latch, the ExecAsyncConfigureWait callback is invoked to configure\n> +the file descriptor wait events for which the node wishes to wait. This\n> +callback isn't needed if the node only cares about the process latch.\n> +\n> +3. 
When file descriptors or the process latch become ready, the node's\n> +ExecAsyncNotify callback is invoked.\n>\n> What is the reason for not doing it like this in your patch?\n\nI think we should avoid changing the ExecProcNode() API.\n\nThomas’ patch also provides a clean FDW API that doesn’t change the\nExecProcNode() API, but I think the FDW API provided in Robert’s patch\nwould be better designed, because I think it would support more\ndifferent types of asynchronous interaction between the core and FDWs.\nConsider this bit from Thomas’ patch, which produces a tuple when a\nfile descriptor becomes ready:\n\n+ if (event.events & WL_SOCKET_READABLE)\n+ {\n+ /* Linear search for the node that told us to wait for this fd. */\n+ for (i = 0; i < node->nasyncplans; ++i)\n+ {\n+ if (event.fd == node->asyncfds[i])\n+ {\n+ TupleTableSlot *result;\n+\n+ /*\n+ --> * We assume that because the fd is ready, it can produce\n+ --> * a tuple now, which is not perfect. An improvement\n+ --> * would be if it could say 'not yet, I'm still not\n+ --> * ready', so eg postgres_fdw could PQconsumeInput and\n+ --> * then say 'I need more input'.\n+ */\n+ result = ExecProcNode(node->asyncplans[i]);\n+ if (!TupIsNull(result))\n+ {\n+ /*\n+ * Remember this plan so that append_next_async will\n+ * keep trying this subplan first until it stops\n+ * feeding us buffered tuples.\n+ */\n+ node->lastreadyplan = i;\n+ /* We can stop waiting for this fd. */\n+ node->asyncfds[i] = 0;\n+ return result;\n+ }\n+ else\n+ {\n+ /*\n+ * This subplan has reached EOF. 
We'll go back and\n+ * wait for another one.\n+ */\n+ forget_async_subplan(node, i);\n+ break;\n+ }\n+ }\n+ }\n+ }\n\nAs commented above, his patch doesn’t allow an FDW to do another data\nfetch from the remote side before returning a tuple when the file\ndescriptor becomes available, but Robert’s patch would, using his FDW\nAPI ForeignAsyncNotify(), which is called when the file descriptor\nbecomes available, IIUC.\n\nI might be missing something, but I feel inclined to vote for Robert’s\npatch (more precisely, Robert’s patch as a base patch with (1) some\nplanner/executor changes from Horiguchi-san’s patch and (2)\npostgres_fdw changes from Thomas’ patch adjusted to match Robert’s FDW\nAPI).\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 2 Oct 2020 09:00:53 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "At Fri, 2 Oct 2020 09:00:53 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \r\n> On Tue, Sep 29, 2020 at 4:45 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\r\n> > BTW: I noticed that you changed the ExecProcNode() API so that an\r\n> > Append calling FDWs can know whether they return tuples immediately or\r\n> > not:\r\n> \r\n> > That is, 1) in postgresIterateForeignScan() postgres_fdw sets the new\r\n> > PlanState’s flag asyncstate to AS_AVAILABLE/AS_WAITING depending on\r\n> > whether it returns a tuple immediately or not, and then 2) the Append\r\n> > knows that from the new flag when the callback routine returns. I’m\r\n> > not sure this is a good idea, because it seems likely that the\r\n> > ExecProcNode() change would affect many other places in the executor,\r\n> > making maintenance and/or future development difficult. 
I think the\r\n> > FDW callback routines proposed in the original patch by Robert would\r\n> > provide a cleaner way to do asynchronous execution of FDWs without\r\n> > changing the ExecProcNode() API, IIUC:\r\n> >\r\n> > +On the other hand, nodes that wish to produce tuples asynchronously\r\n> > +generally need to implement three methods:\r\n> > +\r\n> > +1. When an asynchronous request is made, the node's ExecAsyncRequest callback\r\n> > +will be invoked; it should use ExecAsyncSetRequiredEvents to indicate the\r\n> > +number of file descriptor events for which it wishes to wait and whether it\r\n> > +wishes to receive a callback when the process latch is set. Alternatively,\r\n> > +it can instead use ExecAsyncRequestDone if a result is available immediately.\r\n> > +\r\n> > +2. When the event loop wishes to wait or poll for file descriptor events and\r\n> > +the process latch, the ExecAsyncConfigureWait callback is invoked to configure\r\n> > +the file descriptor wait events for which the node wishes to wait. This\r\n> > +callback isn't needed if the node only cares about the process latch.\r\n> > +\r\n> > +3. When file descriptors or the process latch become ready, the node's\r\n> > +ExecAsyncNotify callback is invoked.\r\n> >\r\n> > What is the reason for not doing it like this in your patch?\r\n> \r\n> I think we should avoid changing the ExecProcNode() API.\r\n> Thomas’ patch also provides a clean FDW API that doesn’t change the\r\n> ExecProcNode() API, but I think the FDW API provided in Robert’s patch\r\n\r\nCould you explain what the \"change\" you are mentioning is?\r\n\r\nI have made many changes to reduce performance impact on existing\r\npaths (before the current PlanState.ExecProcNode was introduced.) 
So a\r\nlarge part of my changes could actually be reverted.\r\n\r\n> would be better designed, because I think it would support more\r\n> different types of asynchronous interaction between the core and FDWs.\r\n> Consider this bit from Thomas’ patch, which produces a tuple when a\r\n> file descriptor becomes ready:\r\n> \r\n> + if (event.events & WL_SOCKET_READABLE)\r\n> + {\r\n> + /* Linear search for the node that told us to wait for this fd. */\r\n> + for (i = 0; i < node->nasyncplans; ++i)\r\n> + {\r\n> + if (event.fd == node->asyncfds[i])\r\n> + {\r\n> + TupleTableSlot *result;\r\n> +\r\n> + /*\r\n> + --> * We assume that because the fd is ready, it can produce\r\n> + --> * a tuple now, which is not perfect. An improvement\r\n> + --> * would be if it could say 'not yet, I'm still not\r\n> + --> * ready', so eg postgres_fdw could PQconsumeInput and\r\n> + --> * then say 'I need more input'.\r\n> + */\r\n> + result = ExecProcNode(node->asyncplans[i]);\r\n..\r\n> As commented above, his patch doesn’t allow an FDW to do another data\r\n> fetch from the remote side before returning a tuple when the file\r\n> descriptor becomes available, but Robert’s patch would, using his FDW\r\n> API ForeignAsyncNotify(), which is called when the file descriptor\r\n> becomes available, IIUC.\r\n> \r\n> I might be missing something, but I feel inclined to vote for Robert’s\r\n> patch (more precisely, Robert’s patch as a base patch with (1) some\r\n> planner/executor changes from Horiguchi-san’s patch and (2)\r\n> postgres_fdw changes from Thomas’ patch adjusted to match Robert’s FDW\r\n> API).\r\n\r\nI'm not sure what you have in mind from the description above. Could\r\nyou please elaborate?\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Fri, 02 Oct 2020 15:39:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Fri, Oct 2, 2020 at 3:39 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Fri, 2 Oct 2020 09:00:53 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > I think we should avoid changing the ExecProcNode() API.\n\n> Could you explain what the \"change\" you are mentioning is?\n\nIt’s the contract of the ExecProcNode() API: if the result is NULL or\nan empty slot, there is nothing more to do. You changed it to\nsomething like this: “even if the result is NULL or an empty slot,\nthere might be something more to do if AS_WAITING, so please wait in\nthat case”. That seems pretty invasive to me.\n\n> > I might be missing something, but I feel inclined to vote for Robert’s\n> > patch (more precisely, Robert’s patch as a base patch with (1) some\n> > planner/executor changes from Horiguchi-san’s patch and (2)\n> > postgres_fdw changes from Thomas’ patch adjusted to match Robert’s FDW\n> > API).\n>\n> I'm not sure what you have in mind from the description above. Could\n> you please elaborate?\n\nSorry, my explanation was not enough.\n\nYou made lots of changes to the original patch by Robert, but I don’t\nthink those changes are all good; 1) as for the core part, you changed\nhis patch so that FDWs can interact with the core at execution time,\nonly through the ForeignAsyncConfigureWait() API, but that resulted in\nan invasive change to the ExecProcNode() API as mentioned above, and\n2) as for the postgres_fdw part, you changed it so that postgres_fdw\ncan handle concurrent data fetches from multiple foreign scan nodes\nusing the same connection, but that would cause a performance issue\nthat I mentioned in [1].\n\nSo I think it would be better to use his patch rather as proposed\nexcept for the postgres_fdw part and Thomas’ patch as a base patch for\nthat part. As for your patch, I think we could use some part of it as\nimprovements. 
One thing is the planner/executor changes that lead to\nthe improved efficiency discussed in [2][3]. Another would be to have\na separate ExecAppend() function for this feature like your patch to\navoid a performance penalty in the case of a plain old Append that\ninvolves no FDWs with asynchronism optimization, if necessary. I also\nthink we could probably use the WaitEventSet-related changes in your\npatch (i.e., the 0001 patch).\n\nDoes that answer your question?\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK16E1erFV9STg8yokoewY6E-zEJtLzHUJcQx%2B3dyivCT%3DA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAPmGK16%2By8mEX9AT1LXVLksbTyDnYWZXm0uDxZ8bza153Wey9A%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/CAPmGK14AjvCd9QuoRQ-ATyExA_SiVmGFGstuqAKSzZ7JDJTBVg%40mail.gmail.com\n\n\n", "msg_date": "Sun, 4 Oct 2020 18:36:05 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "At Sun, 4 Oct 2020 18:36:05 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \n> On Fri, Oct 2, 2020 at 3:39 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Fri, 2 Oct 2020 09:00:53 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > > I think we should avoid changing the ExecProcNode() API.\n> \n> > Could you explain what the \"change\" you are mentioning is?\n\nThank you for the explanation.\n\n> It’s the contract of the ExecProcNode() API: if the result is NULL or\n> an empty slot, there is nothing more to do. You changed it to\n> something like this: “even if the result is NULL or an empty slot,\n> there might be something more to do if AS_WAITING, so please wait in\n> that case”. That seems pretty invasive to me.\n\nYeah, it's \"invasive\" as I intended. I thought that the async-aware\nand async-capable nodes should interact using a channel defined as a\npart of ExecProcNode API. 
It was aiming at an increased affinity to a\npush-up executor framework.\n\nSince the current direction is committing this feature as an\nintermediate or tentative implementation, it sounds reasonable to avoid\nsuch a change.\n\n> > > I might be missing something, but I feel inclined to vote for Robert’s\n> > > patch (more precisely, Robert’s patch as a base patch with (1) some\n> > > planner/executor changes from Horiguchi-san’s patch and (2)\n> > > postgres_fdw changes from Thomas’ patch adjusted to match Robert’s FDW\n> > > API).\n> >\n> > I'm not sure what you have in mind from the description above. Could\n> > you please elaborate?\n> \n> Sorry, my explanation was not enough.\n> \n> You made lots of changes to the original patch by Robert, but I don’t\n> think those changes are all good; 1) as for the core part, you changed\n> his patch so that FDWs can interact with the core at execution time,\n> only through the ForeignAsyncConfigureWait() API, but that resulted in\n> an invasive change to the ExecProcNode() API as mentioned above, and\n> 2) as for the postgres_fdw part, you changed it so that postgres_fdw\n> can handle concurrent data fetches from multiple foreign scan nodes\n> using the same connection, but that would cause a performance issue\n> that I mentioned in [1].\n\n(Putting aside the bug itself...)\n\nYeah, I noticed such a possibility of fetch cascading; however, I\nthink that the situation the feature is intended for is more\ncommon than the problem case.\n\nThat being said, I agree that it is a candidate to rip out when we are\nthinking about reducing the footprint of this patch.\n\n> So I think it would be better to use his patch rather as proposed\n> except for the postgres_fdw part and Thomas’ patch as a base patch for\n> that part. As for your patch, I think we could use some part of it as\n> improvements. One thing is the planner/executor changes that lead to\n> the improved efficiency discussed in [2][3]. 
Another would be to have\n> a separate ExecAppend() function for this feature like your patch to\n> avoid a performance penalty in the case of a plain old Append that\n> involves no FDWs with asynchronism optimization, if necessary. I also\n> think we could probably use the WaitEventSet-related changes in your\n> patch (i.e., the 0001 patch).\n> \n> Does that answer your question?\n\nYes, thanks. My comments on the direction are as above. Are\nyou going to continue working on this patch?\n\n\n> [1] https://www.postgresql.org/message-id/CAPmGK16E1erFV9STg8yokoewY6E-zEJtLzHUJcQx%2B3dyivCT%3DA%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/CAPmGK16%2By8mEX9AT1LXVLksbTyDnYWZXm0uDxZ8bza153Wey9A%40mail.gmail.com\n> [3] https://www.postgresql.org/message-id/CAPmGK14AjvCd9QuoRQ-ATyExA_SiVmGFGstuqAKSzZ7JDJTBVg%40mail.gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 05 Oct 2020 13:29:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Mon, Oct 5, 2020 at 1:30 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Sun, 4 Oct 2020 18:36:05 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > It’s the contract of the ExecProcNode() API: if the result is NULL or\n> > an empty slot, there is nothing more to do. You changed it to\n> > something like this: “even if the result is NULL or an empty slot,\n> > there might be something more to do if AS_WAITING, so please wait in\n> > that case”. That seems pretty invasive to me.\n>\n> Yeah, it's \"invasive\" as I intended. I thought that the async-aware\n> and async-capable nodes should interact using a channel defined as a\n> part of ExecProcNode API. 
It was aiming at an increased affinity to a\n> push-up executor framework.\n>\n> Since the current direction is committing this feature as an\n> intermediate or tentative implementation, it sounds reasonable to avoid\n> such a change.\n\nOK. (Actually, I'm wondering if we could probably extend this to the\ncase where an Append is indirectly on top of a foreign scan node\nwithout changing the ExecProcNode() API.)\n\n> > You made lots of changes to the original patch by Robert, but I don’t\n> > think those changes are all good; 1) as for the core part, you changed\n> > his patch so that FDWs can interact with the core at execution time,\n> > only through the ForeignAsyncConfigureWait() API, but that resulted in\n> > an invasive change to the ExecProcNode() API as mentioned above, and\n> > 2) as for the postgres_fdw part, you changed it so that postgres_fdw\n> > can handle concurrent data fetches from multiple foreign scan nodes\n> > using the same connection, but that would cause a performance issue\n> > that I mentioned in [1].\n\n> Yeah, I noticed such a possibility of fetch cascading; however, I\n> think that the situation the feature is intended for is more\n> common than the problem case.\n\nI think a cleaner solution to that would be to support multiple\nconnections to the remote server...\n\n> > So I think it would be better to use his patch rather as proposed\n> > except for the postgres_fdw part and Thomas’ patch as a base patch for\n> > that part. As for your patch, I think we could use some part of it as\n> > improvements. One thing is the planner/executor changes that lead to\n> > the improved efficiency discussed in [2][3]. Another would be to have\n> > a separate ExecAppend() function for this feature like your patch to\n> > avoid a performance penalty in the case of a plain old Append that\n> > involves no FDWs with asynchronism optimization, if necessary. 
I also\n> think we could probably use the WaitEventSet-related changes in your\n> patch (i.e., the 0001 patch).\n> >\n> > Does that answer your question?\n>\n> Yes, thanks. My comments on the direction are as above. Are\n> you going to continue working on this patch?\n\nYes, if there are no objections from you or Thomas or Robert or anyone\nelse, I'll update Robert's patch as such.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 5 Oct 2020 15:35:36 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 10/5/20 11:35 AM, Etsuro Fujita wrote:\nHi,\nI found a small problem: if we have a mix of async and sync subplans, \nwe catch an assertion failure on a busy connection. Just for example:\n\nPLAN\n====\nNested Loop (cost=100.00..174316.95 rows=975 width=8) (actual \ntime=5.191..9.262 rows=9 loops=1)\n Join Filter: (frgn.a = l.a)\n Rows Removed by Join Filter: 8991\n -> Append (cost=0.00..257.20 rows=11890 width=4) (actual \ntime=0.419..2.773 rows=1000 loops=1)\n Async subplans: 4 \n -> Async Foreign Scan on f_1 l_2 (cost=100.00..197.75 \nrows=2925 width=4) (actual time=0.381..0.585 rows=211 loops=1)\n -> Async Foreign Scan on f_2 l_3 (cost=100.00..197.75 \nrows=2925 width=4) (actual time=0.005..0.206 rows=195 loops=1)\n -> Async Foreign Scan on f_3 l_4 (cost=100.00..197.75 \nrows=2925 width=4) (actual time=0.003..0.282 rows=187 loops=1)\n -> Async Foreign Scan on f_4 l_5 (cost=100.00..197.75 \nrows=2925 width=4) (actual time=0.003..0.316 rows=217 loops=1)\n -> Seq Scan on l_0 l_1 (cost=0.00..2.90 rows=190 width=4) \n(actual time=0.017..0.057 rows=190 loops=1)\n -> Materialize (cost=100.00..170.94 rows=975 width=4) (actual \ntime=0.001..0.002 rows=9 loops=1000)\n -> Foreign Scan on frgn (cost=100.00..166.06 rows=975 \nwidth=4) (actual time=0.766..0.768 rows=9 loops=1)\n\n
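For reference, a minimal sketch of the kind of schema that can produce a plan of this shape (the object names l, l_0, f_1..f_4 and frgn, the hash partitioning, and the server name "loopback" are guesses inferred from the EXPLAIN output above; this is not the actual test1.sql attachment):

```sql
-- Hypothetical reconstruction, not the attached script: a partitioned
-- table with four foreign (async-capable) partitions and one local one,
-- joined against a plain foreign table so that the Append mixes async
-- and sync subplans.  The server name "loopback" is an assumption.
CREATE TABLE l (a int) PARTITION BY HASH (a);
CREATE TABLE l_0 PARTITION OF l
  FOR VALUES WITH (MODULUS 5, REMAINDER 0);
CREATE FOREIGN TABLE f_1 PARTITION OF l
  FOR VALUES WITH (MODULUS 5, REMAINDER 1) SERVER loopback;
CREATE FOREIGN TABLE f_2 PARTITION OF l
  FOR VALUES WITH (MODULUS 5, REMAINDER 2) SERVER loopback;
CREATE FOREIGN TABLE f_3 PARTITION OF l
  FOR VALUES WITH (MODULUS 5, REMAINDER 3) SERVER loopback;
CREATE FOREIGN TABLE f_4 PARTITION OF l
  FOR VALUES WITH (MODULUS 5, REMAINDER 4) SERVER loopback;
CREATE FOREIGN TABLE frgn (a int) SERVER loopback;

-- Disable the other join methods to force the nested loop shown above.
SET enable_hashjoin TO off;
SET enable_mergejoin TO off;
EXPLAIN ANALYZE SELECT * FROM l, frgn WHERE l.a = frgn.a;
```

The point is only that the outer Append then contains both Async Foreign Scan and plain Seq Scan children, which is the mixed case the assertion is about.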
Here I force the \nproblem reproduction with setting enable_hashjoin and enable_mergejoin \nto off.\n\n'asyncmix.patch' contains my solution to this problem.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Thu, 8 Oct 2020 14:39:47 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "Hi,\nI want to suggest one more improvement. Currently the\nis_async_capable_path() routine allow only ForeignPath nodes as async \ncapable path. But in some cases we can allow SubqueryScanPath as async \ncapable too.\n\nFor example:\nSELECT * FROM ((SELECT * FROM foreign_1)\nUNION ALL\n(SELECT a FROM foreign_2)) AS b;\n\nis async capable, but:\n\nSELECT * FROM ((SELECT * FROM foreign_1 LIMIT 10)\nUNION ALL\n(SELECT a FROM foreign_2 LIMIT 10)) AS b;\n\ndoesn't async capable.\n\nThe patch in attachment tries to improve this situation.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Thu, 8 Oct 2020 16:40:24 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Thu, Oct 8, 2020 at 6:39 PM Andrey V. Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> I found a small problem. If we have a mix of async and sync subplans\n> when we catch an assertion on a busy connection. 
Just for example:\n>\n> PLAN\n> ====\n> Nested Loop (cost=100.00..174316.95 rows=975 width=8) (actual\n> time=5.191..9.262 rows=9 loops=1)\n> Join Filter: (frgn.a = l.a)\n> Rows Removed by Join Filter: 8991\n> -> Append (cost=0.00..257.20 rows=11890 width=4) (actual\n> time=0.419..2.773 rows=1000 loops=1)\n> Async subplans: 4\n> -> Async Foreign Scan on f_1 l_2 (cost=100.00..197.75\n> rows=2925 width=4) (actual time=0.381..0.585 rows=211 loops=1)\n> -> Async Foreign Scan on f_2 l_3 (cost=100.00..197.75\n> rows=2925 width=4) (actual time=0.005..0.206 rows=195 loops=1)\n> -> Async Foreign Scan on f_3 l_4 (cost=100.00..197.75\n> rows=2925 width=4) (actual time=0.003..0.282 rows=187 loops=1)\n> -> Async Foreign Scan on f_4 l_5 (cost=100.00..197.75\n> rows=2925 width=4) (actual time=0.003..0.316 rows=217 loops=1)\n> -> Seq Scan on l_0 l_1 (cost=0.00..2.90 rows=190 width=4)\n> (actual time=0.017..0.057 rows=190 loops=1)\n> -> Materialize (cost=100.00..170.94 rows=975 width=4) (actual\n> time=0.001..0.002 rows=9 loops=1000)\n> -> Foreign Scan on frgn (cost=100.00..166.06 rows=975\n> width=4) (actual time=0.766..0.768 rows=9 loops=1)\n\nActually I also found a similar issue before [1]. But in the first\nplace I'm not sure the way of handling concurrent data fetches by\nmultiple ForeignScan nodes using the same connection in postgres_fdw\nimplemented in Horiguchi-san's patch would be really acceptable,\nbecause that would impact performance *negatively* in some cases as\nmentioned in [1]. So I feel inclined to just disable this feature in\nproblematic cases including the above one in the first cut. 
Even with\nsuch a limitation, I think it would be useful, because it would cover\ntypical use cases such as partitionwise joins and partitionwise\naggregates.\n\nThanks for the report!\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK16E1erFV9STg8yokoewY6E-zEJtLzHUJcQx%2B3dyivCT%3DA%40mail.gmail.com\n\n\n", "msg_date": "Thu, 12 Nov 2020 19:16:42 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Thu, Oct 8, 2020 at 8:40 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> I want to suggest one more improvement. Currently the\n> is_async_capable_path() routine allow only ForeignPath nodes as async\n> capable path. But in some cases we can allow SubqueryScanPath as async\n> capable too.\n\n> The patch in attachment tries to improve this situation.\n\nSeems like a good idea. Will look at the patch in detail.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 12 Nov 2020 19:20:36 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Mon, Oct 5, 2020 at 3:35 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Yes, if there are no objections from you or Thomas or Robert or anyone\n> else, I'll update Robert's patch as such.\n\nHere is a new version of the patch (as promised in the developer\nunconference in PostgresConf.CN & PGConf.Asia 2020):\n\n* In Robert's patch [1] (and Horiguchi-san's, which was created based\non Robert's), ExecAppend() was modified to retrieve tuples from\nasync-aware children *before* the tuples will be needed, but I don't\nthink that's really a good idea, because the query might complete\nbefore returning the tuples. So I modified that function so that a\ntuple is retrieved from an async-aware child *when* it is needed, like\nThomas' patch. 
I used FDW callback functions proposed by Robert, but\nintroduced another FDW callback function ForeignAsyncBegin() for each\nasync-aware child to start an asynchronous data fetch at the first\ncall to ExecAppend() after ExecInitAppend() or ExecReScanAppend().\n\n* For EvalPlanQual, I modified the patch so that async-aware children\nare treated as if they were synchronous when executing EvalPlanQual.\n\n* In Robert's patch, all async-aware children below Append nodes in\nthe query waiting for events to occur were managed by a single EState,\nbut I modified the patch so that such children are managed by each\nAppend node, like Horiguchi-san's patch and Thomas'.\n\n* In Robert's patch, the FDW callback function\nForeignAsyncConfigureWait() allowed multiple events to be configured,\nbut I limited that function to only allow a single event to be\nconfigured, just for simplicity.\n\n* I haven't yet added some planner/resowner changes from Horiguchi-san's patch.\n\n* I haven't yet done anything about the issue on postgres_fdw's\nhandling of concurrent data fetches by multiple ForeignScan nodes\n(below *different* Append nodes in the query) using the same\nconnection discussed in [2]. I modified the patch to just disable\napplying this feature to problematic test cases in the postgres_fdw\nregression tests, by a new GUC enable_async_append.\n\nComments welcome! The attached is still WIP and maybe I'm missing\nsomething, though.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoaXQEt4tZ03FtQhnzeDEMzBck%2BLrni0UWHVVgOTnA6C1w%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAPmGK16E1erFV9STg8yokoewY6E-zEJtLzHUJcQx%2B3dyivCT%3DA%40mail.gmail.com", "msg_date": "Tue, 17 Nov 2020 18:56:02 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "Thank you for the new version.\n\nAt Tue, 17 Nov 2020 18:56:02 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \n> On Mon, Oct 5, 2020 at 3:35 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > Yes, if there are no objections from you or Thomas or Robert or anyone\n> > else, I'll update Robert's patch as such.\n> \n> Here is a new version of the patch (as promised in the developer\n> unconference in PostgresConf.CN & PGConf.Asia 2020):\n> \n> * In Robert's patch [1] (and Horiguchi-san's, which was created based\n> on Robert's), ExecAppend() was modified to retrieve tuples from\n> async-aware children *before* the tuples will be needed, but I don't\n\nDoes \"retrieve\" mean the move of a tuple from the FDW to the executor\n(ExecAppend or ExecAsync) layer?\n\n> think that's really a good idea, because the query might complete\n> before returning the tuples. So I modified that function so that a\n\nI'm not sure how it matters. Anyway, the FDW holds up to tens of tuples\nbefore the executor actually makes requests for them. The reason for\nthe early fetching is to let the FDW send the next request as early as\npossible. (However, I didn't measure the effect of the\nnodeAppend-level prefetching.)\n\n> tuple is retrieved from an async-aware child *when* it is needed, like\n> Thomas' patch. I used FDW callback functions proposed by Robert, but\n> introduced another FDW callback function ForeignAsyncBegin() for each\n> async-aware child to start an asynchronous data fetch at the first\n> call to ExecAppend() after ExecInitAppend() or ExecReScanAppend().\n\nEven though the terminology is not officially determined, in the past\ndiscussions \"async-aware\" meant \"can handle async-capable subnodes\"\nand \"async-capable\" is used as \"can run asynchronously\". Likewise you
Likewise you\nseem to have changed the meaning of as_needrequest from \"subnodes that\nneeds to request for the next tuple\" to \"subnodes that already have\ngot query-send request and waiting for the result to come\". I would\nargue to use the words and variables (names) in such meanings. (Yeah,\nparallel_aware is being used in that meaning, I'm not sure what is the\nbetter wordings for the aware-capable relationship in that case.)\n\n> * For EvalPlanQual, I modified the patch so that async-aware children\n> are treated as if they were synchronous when executing EvalPlanQual.\n\nDoesn't async execution accelerate the epq-fetching? Or does\nasync-execution goes into trouble in the EPQ path?\n\n> * In Robert's patch, all async-aware children below Append nodes in\n> the query waiting for events to occur were managed by a single EState,\n> but I modified the patch so that such children are managed by each\n> Append node, like Horiguchi-san's patch and Thomas'.\n\nManaging in Estate give advantage for push-up style executor but\nmanaging in node_state is simpler.\n\n> * In Robert's patch, the FDW callback function\n> ForeignAsyncConfigureWait() allowed multiple events to be configured,\n> but I limited that function to only allow a single event to be\n> configured, just for simplicity.\n\nNo problem for me.\n\n> * I haven't yet added some planner/resowner changes from Horiguchi-san's patch.\n> \n> * I haven't yet done anything about the issue on postgres_fdw's\n> handling of concurrent data fetches by multiple ForeignScan nodes\n> (below *different* Append nodes in the query) using the same\n> connection discussed in [2]. I modified the patch to just disable\n> applying this feature to problematic test cases in the postgres_fdw\n> regression tests, by a new GUC enable_async_append.\n> \n> Comments welcome! 
The attached is still WIP and maybe I'm missing\n> something, though.\n> \n> Best regards,\n> Etsuro Fujita\n> \n> [1] https://www.postgresql.org/message-id/CA%2BTgmoaXQEt4tZ03FtQhnzeDEMzBck%2BLrni0UWHVVgOTnA6C1w%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/CAPmGK16E1erFV9STg8yokoewY6E-zEJtLzHUJcQx%2B3dyivCT%3DA%40mail.gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 20 Nov 2020 15:51:52 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "Hello.\n\nI looked through the nodeAppend.c and postgres_fdw.c part and those\nare I think the core of this patch.\n\n-\t\t * figure out which subplan we are currently processing\n+\t\t * try to get a tuple from async subplans\n+\t\t */\n+\t\tif (!bms_is_empty(node->as_needrequest) ||\n+\t\t\t(node->as_syncdone && !bms_is_empty(node->as_asyncpending)))\n+\t\t{\n+\t\t\tif (ExecAppendAsyncGetNext(node, &result))\n+\t\t\t\treturn result;\n\nThe function ExecAppendAsyncGetNext() is a function called only here,\nand contains only 31 lines. It doesn't seem to me that the separation\nmakes the code more readable.\n\n-\t\t/* choose new subplan; if none, we're done */\n-\t\tif (!node->choose_next_subplan(node))\n+\t\t/* wait or poll async events */\n+\t\tif (!bms_is_empty(node->as_asyncpending))\n+\t\t{\n+\t\t\tAssert(!node->as_syncdone);\n+\t\t\tAssert(bms_is_empty(node->as_needrequest));\n+\t\t\tExecAppendAsyncEventWait(node);\n\nYou moved the function to wait for events from execAsync to\nnodeAppend. The former is a generic module that can be used from any\nkind of executor nodes, but the latter is specialized for nodeAppend.\nIn other words, the abstraction level is lowered here. What is the\nreason for the change?\n\n\n+\t\t/* Perform the actual callback. 
*/\n+\t\tExecAsyncRequest(areq);\n+\t\tif (ExecAppendAsyncResponse(areq))\n+\t\t{\n+\t\t\tAssert(!TupIsNull(areq->result));\n+\t\t\t*result = areq->result;\n\nPutting aside the name of the functions, the first two function are\nused only this way at only two places. ExecAsyncRequest(areq) tells\nfdw to store the first tuple among the already received ones to areq,\nand ExecAppendAsyncResponse(areq) is checking the result is actually\nset. Finally the result is retrieved directory from areq->result.\nWhat is the reason that the two functions are separately exists?\n\n\n+\t\t\t/* Perform the actual callback. */\n+\t\t\tExecAsyncNotify(areq);\n\nMmm. The usage of the function (or its name) looks completely reverse\nto me. I think FDW should NOTIFY to exec nodes that the new tuple\ngets available but the reverse is nonsense. What the function is\nactually doing is to REQUEST fdw to fetch tuples that are expected to\nhave arrived, which is different from what the name suggests.\n\n\npostgres_fdw.c\n\n> postgresIterateForeignScan(ForeignScanState *node)\n> {\n> \tPgFdwScanState *fsstate = (PgFdwScanState *) node->fdw_state;\n> \tTupleTableSlot *slot = node->ss.ss_ScanTupleSlot;\n> \n> \t/*\n> \t * If this is the first call after Begin or ReScan, we need to create the\n> \t * cursor on the remote side.\n> \t */\n> \tif (!fsstate->cursor_exists)\n> \t\tcreate_cursor(node);\n\nWith the patch, cursors are also created in another place so at least\nthe comment is wrong. That being said, I think we should unify the\ncode except the differences between async and sync. For example, if\nthe fetch_more_data_begin() needs to be called only for async\nfetching, the cursor should be created before calling the function, in\nthe code path common with sync fetching.\n\n\n+\n+\t\t/* If this was the second part of an async request, we must fetch until NULL. */\n+\t\tif (fsstate->async_aware)\n+\t\t{\n+\t\t\t/* call once and raise error if not NULL as expected? 
*/\n+\t\t\twhile (PQgetResult(conn) != NULL)\n+\t\t\t\t;\n+\t\t\tfsstate->conn_state->async_query_sent = false;\n+\t\t}\n\nPQgetResult() receives the result of a query at once. This code means\nseveral queries (FETCHes) are queued in, and we discard the result\nexcept the last one. Actually the res is just PQclear'd just after so\nthis just discards *all* result of maybe more than one FETCHes. I\nthink something's wrong if we need this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 20 Nov 2020 20:16:42 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "At Fri, 20 Nov 2020 20:16:42 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \nme> +\t\t/* If this was the second part of an async request, we must fetch until NULL. */\nme> +\t\tif (fsstate->async_aware)\nme> +\t\t{\nme> +\t\t\t/* call once and raise error if not NULL as expected? */\nme> +\t\t\twhile (PQgetResult(conn) != NULL)\nme> +\t\t\t\t;\nme> +\t\t\tfsstate->conn_state->async_query_sent = false;\nme> +\t\t}\nme> \nme> PQgetResult() receives the result of a query at once. This code means\nme> several queries (FETCHes) are queued in, and we discard the result\nme> except the last one. Actually the res is just PQclear'd just after so\nme> this just discards *all* result of maybe more than one FETCHes. I\nme> think something's wrong if we need this.\n\nI was wrong, it is worse. That leaks the returned PGresult.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 20 Nov 2020 20:26:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "I test the patch and hit several issues as below:\r\n\r\nIssue one:\r\nGet an Assert error at 'Assert(bms_is_member(i, node->as_needrequest));' in\r\nExecAppendAsyncRequest() function when I use more than two foreign tables\r\non different foreign servers.\r\n\r\nI research the code and do such change then the Assert problem disappears.\r\n\r\n@@ -1004,6 +1004,7 @@ ExecAppendAsyncResponse(AsyncRequest *areq) bms_del_member(node->as_needrequest, areq->request_index); node->as_asyncpending = bms_add_member(node->as_asyncpending, areq->request_index); + node->as_lastasyncplan = INVALID_SUBPLAN_INDEX; return false; }\r\n\r\nIssue two:\r\nThen I test and find that if I have a sync subplan and an async subplan, it will run over\r\nthe sync subplan before the async turn; I do not know if that is intended.\r\n\r\nIssue three:\r\nAfter the code change mentioned in Issue one, I can not get a performance improvement.\r\nWhen I query a partitioned table and all sub-partitions, the time spent on the partitioned table\r\nis always the same as the sum over all sub-partitions.\r\n\r\nSorry if I have something wrong when testing the patch.\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca\r\n", "msg_date": "Thu, 26 Nov 2020 09:28:06 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Thu, Nov 26, 2020 at 9:28 AM movead.li@highgo.ca\n<movead.li@highgo.ca> wrote:\n>\n> I test the patch and hit several issues as below:\n>\n> Issue one:\n> Get an Assert error at 'Assert(bms_is_member(i, node->as_needrequest));' in\n> ExecAppendAsyncRequest() function when I use more than two foreign tables\n> on different foreign servers.\n>\n> I research the code and do such change then the Assert problem disappears.\n>\n> @@ -1004,6 +1004,7 @@ ExecAppendAsyncResponse(AsyncRequest *areq) bms_del_member(node->as_needrequest, areq->request_index); node->as_asyncpending = bms_add_member(node->as_asyncpending, areq->request_index); + node->as_lastasyncplan = INVALID_SUBPLAN_INDEX; return false; }\n>\n> Issue two:\n> Then I test and find that if I have a sync subplan and an async subplan, it will run over\n> the sync subplan before the async turn; I do not know if that is intended.\n\nI only just noticed this patch. It's very interesting to me given the\nongoing work happening on postgres_fdw batching and the way libpq\npipelining is looking like it's getting there. 
I'll study up on the\nexecutor and see if I can understand this well enough to hack together\na PoC to make it use libpq batching.\n\nHave you taken a look at how this patch may overlap with those?\n\nSee -hackers threads:\n\n* \"POC: postgres_fdw insert batching\" [1]\n* \"PATCH: Batch/pipelining support for libpq\" [2]\n\n[1] https://www.postgresql.org/message-id/OSBPR01MB2982039EA967F0304CC6A3ECFE0B0@OSBPR01MB2982.jpnprd01.prod.outlook.com\n[2] https://www.postgresql.org/message-id/20201026190936.GA18705@alvherre.pgsql\n\n\n", "msg_date": "Mon, 30 Nov 2020 10:45:34 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 11/17/20 2:56 PM, Etsuro Fujita wrote:\n> On Mon, Oct 5, 2020 at 3:35 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Comments welcome! The attached is still WIP and maybe I'm missing\n> something, though.\nI reviewed your patch and used it in my TPC-H benchmarks. It is still \nWIP. Will you improve this patch?\n\nI also want to say that, in my opinion, Horiguchi-san's version seems \npreferable: it is more structured, simple to understand, executor-native \nand allows to reduce FDW interface changes. This code really only needs \none procedure - IsForeignPathAsyncCapable.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Thu, 10 Dec 2020 11:38:04 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Fri, Nov 20, 2020 at 3:51 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Tue, 17 Nov 2020 18:56:02 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > * In Robert's patch [1] (and Horiguchi-san's, which was created based\n> > on Robert's), ExecAppend() was modified to retrieve tuples from\n> > async-aware children *before* the tuples will be needed, but I don't\n>\n> The \"retrieve\" means the move of a tuple from fdw to executor\n> (ExecAppend or ExecAsync) layer?\n\nYes, that's what I mean.\n\n> > think that's really a good idea, because the query might complete\n> > before returning the tuples. So I modified that function so that a\n>\n> I'm not sure how it matters. Anyway the fdw holds up to tens of tuples\n> before the executor actually make requests for them. The reason for\n> the early fetching is letting fdw send the next request as early as\n> possible. (However, I didn't measure the effect of the\n> nodeAppend-level prefetching.)\n\nI agree that that would lead to an improved efficiency in some cases,\nbut I still think that that would be useless in some other cases like\nSELECT * FROM sharded_table LIMIT 1. Also, I think the situation\nwould get worse if we support Append on top of joins or aggregates\nover ForeignScans, which would be more expensive to perform than these\nForeignScans.\n\nIf we do prefetching, I think it would be better that it’s the\nresponsibility of the FDW to do prefetching, and I think that that\ncould be done by letting the FDW to start another data fetch,\nindependently of the core, in the ForeignAsyncNotify callback routine,\nwhich I revived from Robert's original patch. I think that that would\nbe more efficient, because the FDW would no longer need to wait until\nall buffered tuples are returned to the core. 
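The FDW-side prefetching idea described here can be sketched in miniature. The following self-contained C model (illustrative names only — none of the actual postgres_fdw structures or libpq calls) shows a notify routine that buffers an arrived batch and immediately issues the next fetch, so returning buffered tuples overlaps with the next remote request:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of an FDW scan state; all names are illustrative and
 * do not match the real postgres_fdw structures. */
#define BATCH 4

typedef struct ScanState
{
    int     remote_rows;        /* rows still on the "remote" side */
    int     buf[BATCH];         /* rows fetched but not yet returned */
    int     nbuf, nextbuf;
    bool    request_in_flight;  /* an async FETCH has been "sent" */
    bool    eof;
} ScanState;

/* stand-in for sending the next asynchronous FETCH */
static void send_fetch(ScanState *ss) { ss->request_in_flight = true; }

/* Models a notify callback: the socket is ready, so consume the
 * arrived batch and, crucially, start the next fetch right away
 * instead of waiting until the buffered rows are drained. */
static void notify(ScanState *ss)
{
    int n = ss->remote_rows < BATCH ? ss->remote_rows : BATCH;

    for (int i = 0; i < n; i++)
        ss->buf[i] = i;
    ss->nbuf = n;
    ss->nextbuf = 0;
    ss->remote_rows -= n;
    ss->request_in_flight = false;
    if (ss->remote_rows > 0)
        send_fetch(ss);         /* prefetch overlaps with returning rows */
    else if (n < BATCH)
        ss->eof = true;         /* short batch: remote side is done */
}

/* Models iterating over the buffered rows; -1 means "no tuple yet". */
static int get_tuple(ScanState *ss)
{
    if (ss->nextbuf < ss->nbuf)
        return ss->buf[ss->nextbuf++];
    return -1;
}
```

In the real patch the "send" would be an asynchronous FETCH on the connection and the readiness signal would come from the wait-event set; both are stubbed here so only the control flow — next request in flight while the buffer is being drained — is visible.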
In the WIP patch, I\nonly allowed the callback routine to put the corresponding ForeignScan\nnode into a state where it’s either ready for a new request or needing\na callback for another data fetch, but I think we could probably relax\nthe restriction so that the ForeignScan node can be put into another\nstate where it’s ready for a new request while needing a callback for\nthe prefetch.\n\n> > tuple is retrieved from an async-aware child *when* it is needed, like\n> > Thomas' patch. I used FDW callback functions proposed by Robert, but\n> > introduced another FDW callback function ForeignAsyncBegin() for each\n> > async-aware child to start an asynchronous data fetch at the first\n> > call to ExecAppend() after ExecInitAppend() or ExecReScanAppend().\n>\n> Even though the terminology is not officially determined, in the past\n> discussions \"async-aware\" meant \"can handle async-capable subnodes\"\n> and \"async-capable\" is used as \"can run asynchronously\".\n\nThanks for the explanation!\n\n> Likewise you\n> seem to have changed the meaning of as_needrequest from \"subnodes that\n> needs to request for the next tuple\" to \"subnodes that already have\n> got query-send request and waiting for the result to come\".\n\nNo. I think I might slightly change the original definition of\nas_needrequest, though.\n\n> I would\n> argue to use the words and variables (names) in such meanings.\n\nI think the word \"aware\" has a broader meaning, so the naming as\nproposed would be OK IMO. But actually, I don't have any strong\nopinion about that, so I'll change it as explained.\n\n> > * For EvalPlanQual, I modified the patch so that async-aware children\n> > are treated as if they were synchronous when executing EvalPlanQual.\n>\n> Doesn't async execution accelerate the epq-fetching? 
Or does\n> async-execution goes into trouble in the EPQ path?\n\nThe reason why I disabled async execution when executing EPQ is to\navoid sending asynchronous queries to the remote sides, which would be\nuseless, because scan tuples for an EPQ recheck are obtained in a\ndedicated way.\n\n> > * In Robert's patch, all async-aware children below Append nodes in\n> > the query waiting for events to occur were managed by a single EState,\n> > but I modified the patch so that such children are managed by each\n> > Append node, like Horiguchi-san's patch and Thomas'.\n>\n> Managing in Estate give advantage for push-up style executor but\n> managing in node_state is simpler.\n\nWhat do you mean by \"push-up style executor\"?\n\nThanks for the review! Sorry for the delay.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Sat, 12 Dec 2020 18:25:57 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Fri, Nov 20, 2020 at 8:16 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> I looked through the nodeAppend.c and postgres_fdw.c part and those\n> are I think the core of this patch.\n\nThanks again for the review!\n\n> - * figure out which subplan we are currently processing\n> + * try to get a tuple from async subplans\n> + */\n> + if (!bms_is_empty(node->as_needrequest) ||\n> + (node->as_syncdone && !bms_is_empty(node->as_asyncpending)))\n> + {\n> + if (ExecAppendAsyncGetNext(node, &result))\n> + return result;\n>\n> The function ExecAppendAsyncGetNext() is a function called only here,\n> and contains only 31 lines. It doesn't seem to me that the separation\n> makes the code more readable.\n\nConsidering the original ExecAppend() is about 50 lines long, 31 lines\nof code would not be small. 
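The control flow being argued over — ExecAppend() delegating to a small routine that first polls async subplans for a ready tuple and only then falls back to the sync side — can be modeled in a few lines of self-contained C (arrays stand in for subplans; the names are illustrative, not the executor API):

```c
#include <assert.h>
#include <stdbool.h>

#define NPLANS 3

/* A "subplan" is just an array of ints here. */
typedef struct Plan
{
    const int  *rows;
    int         n, next;
    bool        is_async;
} Plan;

typedef struct AppendState
{
    Plan plans[NPLANS];
    int  sync_plan;             /* current sync subplan, -1 when done */
} AppendState;

/* try to return one tuple from any async subplan with data pending */
static bool async_get_next(AppendState *as, int *result)
{
    for (int i = 0; i < NPLANS; i++)
    {
        Plan *p = &as->plans[i];

        if (p->is_async && p->next < p->n)
        {
            *result = p->rows[p->next++];
            return true;
        }
    }
    return false;
}

/* models the reworked ExecAppend: async results first, then sync */
static bool exec_append(AppendState *as, int *result)
{
    for (;;)
    {
        if (async_get_next(as, result))
            return true;
        if (as->sync_plan >= 0)
        {
            Plan *p = &as->plans[as->sync_plan];

            if (p->next < p->n)
            {
                *result = p->rows[p->next++];
                return true;
            }
            as->sync_plan = -1; /* sync side exhausted */
            continue;           /* re-check the async subplans */
        }
        return false;           /* everything exhausted */
    }
}
```

The real node must additionally wait on file-descriptor events when the sync side is done but async results are still pending; that waiting step is omitted so the async-first/sync-fallback shape stands out.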
So I'd vote for separating it into\nanother function as proposed.\n\n> - /* choose new subplan; if none, we're done */\n> - if (!node->choose_next_subplan(node))\n> + /* wait or poll async events */\n> + if (!bms_is_empty(node->as_asyncpending))\n> + {\n> + Assert(!node->as_syncdone);\n> + Assert(bms_is_empty(node->as_needrequest));\n> + ExecAppendAsyncEventWait(node);\n>\n> You moved the function to wait for events from execAsync to\n> nodeAppend. The former is a generic module that can be used from any\n> kind of executor nodes, but the latter is specialized for nodeAppend.\n> In other words, the abstraction level is lowered here. What is the\n> reason for the change?\n\nThe reason is just because that function is only called from\nExecAppend(). I put some functions only called from nodeAppend.c in\nexecAsync.c, though.\n\n> + /* Perform the actual callback. */\n> + ExecAsyncRequest(areq);\n> + if (ExecAppendAsyncResponse(areq))\n> + {\n> + Assert(!TupIsNull(areq->result));\n> + *result = areq->result;\n>\n> Putting aside the name of the functions, the first two function are\n> used only this way at only two places. ExecAsyncRequest(areq) tells\n> fdw to store the first tuple among the already received ones to areq,\n> and ExecAppendAsyncResponse(areq) is checking the result is actually\n> set. Finally the result is retrieved directory from areq->result.\n> What is the reason that the two functions are separately exists?\n\nI think that when an async-aware node gets a tuple from an\nasync-capable node, they should use ExecAsyncRequest() /\nExecAyncHogeResponse() rather than ExecProcNode() [1]. I modified the\npatch so that ExecAppendAsyncResponse() is called from Append, but to\nsupport bubbling up the plan tree discussed in [2], I think it should\nbe called from ForeignScans (the sides of async-capable nodes). Am I\nright? Anyway, I’ll rename ExecAppendAyncResponse() to the one\nproposed in Robert’s original patch.\n\n> + /* Perform the actual callback. 
*/\n> + ExecAsyncNotify(areq);\n>\n> Mmm. The usage of the function (or its name) looks completely reverse\n> to me. I think FDW should NOTIFY to exec nodes that the new tuple\n> gets available but the reverse is nonsense. What the function is\n> actually doing is to REQUEST fdw to fetch tuples that are expected to\n> have arrived, which is different from what the name suggests.\n\nAs mentioned in a previous email, this is an FDW callback routine\nrevived from Robert’s patch. I think the naming is reasonable,\nbecause the callback routine notifies the FDW of readiness of a file\ndescriptor. And actually, the callback routine tells the core whether\nthe corresponding ForeignScan node is ready for a new request or not,\nby setting the callback_pending flag accordingly.\n\n> postgres_fdw.c\n>\n> > postgresIterateForeignScan(ForeignScanState *node)\n> > {\n> > PgFdwScanState *fsstate = (PgFdwScanState *) node->fdw_state;\n> > TupleTableSlot *slot = node->ss.ss_ScanTupleSlot;\n> >\n> > /*\n> > * If this is the first call after Begin or ReScan, we need to create the\n> > * cursor on the remote side.\n> > */\n> > if (!fsstate->cursor_exists)\n> > create_cursor(node);\n>\n> With the patch, cursors are also created in another place so at least\n> the comment is wrong.\n\nGood catch! Will fix.\n\n> That being said, I think we should unify the\n> code except the differences between async and sync. 
For example, if\n> the fetch_more_data_begin() needs to be called only for async\n> fetching, the cursor should be created before calling the function, in\n> the code path common with sync fetching.\n\nI think that that would make the code easier to understand, but I’m\nnot 100% sure we really need to do so.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoYrbgTBnLwnr1v%3Dpk%2BC%3DznWg7AgV9%3DM9ehrq6TDexPQNw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CA%2BTgmoZSWnhy%3DSB3ggQcB6EqKxzbNeNn%3DEfwARnCS5tyhhBNcw%40mail.gmail.com\n\n\n", "msg_date": "Sat, 12 Dec 2020 19:06:51 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "At Sat, 12 Dec 2020 18:25:57 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \r\n> On Fri, Nov 20, 2020 at 3:51 PM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> > At Tue, 17 Nov 2020 18:56:02 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\r\n> > > * In Robert's patch [1] (and Horiguchi-san's, which was created based\r\n> > > on Robert's), ExecAppend() was modified to retrieve tuples from\r\n> > > async-aware children *before* the tuples will be needed, but I don't\r\n> >\r\n> > The \"retrieve\" means the move of a tuple from fdw to executor\r\n> > (ExecAppend or ExecAsync) layer?\r\n> \r\n> Yes, that's what I mean.\r\n> \r\n> > > think that's really a good idea, because the query might complete\r\n> > > before returning the tuples. So I modified that function so that a\r\n> >\r\n> > I'm not sure how it matters. Anyway the fdw holds up to tens of tuples\r\n> > before the executor actually make requests for them. The reason for\r\n> > the early fetching is letting fdw send the next request as early as\r\n> > possible. 
(However, I didn't measure the effect of the\r\n> > nodeAppend-level prefetching.)\r\n> \r\n> I agree that that would lead to an improved efficiency in some cases,\r\n> but I still think that that would be useless in some other cases like\r\n> SELECT * FROM sharded_table LIMIT 1. Also, I think the situation\r\n> would get worse if we support Append on top of joins or aggregates\r\n> over ForeignScans, which would be more expensive to perform than these\r\n> ForeignScans.\r\n\r\nI'm not sure which gain we weigh on, but if doing \"LIMIT 1\" on Append\r\nfor many times is more common than fetching all or \"LIMIT <many\r\nmultiples of fetch_size>\", that discussion would be convincing... Is\r\nit really the case?\r\n\r\nSince core knows of async execution, I think if we disable async\r\nexection, it should be decided by planner, which knows how many tuples\r\nare expected to be returned. On the other hand the most apparent\r\ncriteria for whether to enable async or not would be fetch_size, which\r\nis fdw's secret. Thus we could rename ForeignPathAsyncCapable() to\r\nsomething like ForeignPathRunAsync(), true from which means \"the FDW\r\nis telling that it can run async and is thinking that the given number\r\nof tuples will be fetched at once.\".\r\n\r\n> If we do prefetching, I think it would be better that it’s the\r\n> responsibility of the FDW to do prefetching, and I think that that\r\n> could be done by letting the FDW to start another data fetch,\r\n> independently of the core, in the ForeignAsyncNotify callback routine,\r\n\r\nFDW does prefetching (if it means sending request to remote) in my\r\npatch, so I agree to that. It suspect that you were intended to say\r\nthe opposite. The core (ExecAppendAsyncGetNext()) controls\r\nprefetching in your patch.\r\n\r\n> which I revived from Robert's original patch. I think that that would\r\n> be more efficient, because the FDW would no longer need to wait until\r\n> all buffered tuples are returned to the core. 
In the WIP patch, I\r\n\r\nI don't understand. My patch sends a prefetch-query as soon as all the\r\ntuples of the last remote-request is stored into FDW storage. The\r\nreason for removing ExecAsyncNotify() was it is just redundant as far\r\nas concerning Append asynchrony. But I particulary oppose to revive\r\nthe function.\r\n\r\n> only allowed the callback routine to put the corresponding ForeignScan\r\n> node into a state where it’s either ready for a new request or needing\r\n> a callback for another data fetch, but I think we could probably relax\r\n> the restriction so that the ForeignScan node can be put into another\r\n> state where it’s ready for a new request while needing a callback for\r\n> the prefetch.\r\n\r\nI don't understand this, too. ExecAsyncNotify() doesn't touch any of\r\nthe bitmaps, as_needrequest, callback_pending nor as_asyncpending in\r\nyour patch. Am I looking into something wrong? I'm looking\r\nasync-wip-2020-11-17.patch.\r\n\r\n(By the way, it is one of those that make the code hard to read to me\r\nthat the \"callback\" means \"calling an API function\". I think none of\r\nthem (ExecAsyncBegin, ExecAsyncRequest, ExecAsyncNotify) are a\r\n\"callback\".)\r\n\r\n> > > tuple is retrieved from an async-aware child *when* it is needed, like\r\n> > > Thomas' patch. 
I used FDW callback functions proposed by Robert, but\r\n> > > introduced another FDW callback function ForeignAsyncBegin() for each\r\n> > > async-aware child to start an asynchronous data fetch at the first\r\n> > > call to ExecAppend() after ExecInitAppend() or ExecReScanAppend().\r\n> >\r\n> > Even though the terminology is not officially determined, in the past\r\n> > discussions \"async-aware\" meant \"can handle async-capable subnodes\"\r\n> > and \"async-capable\" is used as \"can run asynchronously\".\r\n> \r\n> Thanks for the explanation!\r\n> \r\n> > Likewise you\r\n> > seem to have changed the meaning of as_needrequest from \"subnodes that\r\n> > needs to request for the next tuple\" to \"subnodes that already have\r\n> > got query-send request and waiting for the result to come\".\r\n> \r\n> No. I think I might slightly change the original definition of\r\n> as_needrequest, though.\r\n\r\nMmm, sorry. I may have been perplexed by the comment below, which is\r\nalso added to ExecAsyncNotify().\r\n\r\nExecAppendAsyncRequest:\r\n>\t\tAssert(bms_is_member(i, node->as_needrequest));\r\n>\r\n>\t\t/* Perform the actual callback. */\r\n>\t\tExecAsyncRequest(areq);\r\n>\t\tif (ExecAppendAsyncResponse(areq))\r\n>\t\t{\r\n>\t\t\tAssert(!TupIsNull(areq->result));\r\n>\t\t\t*result = areq->result;\r\n>\t\t\treturn true;\r\n>\t\t}\r\n\r\n\r\n\r\n> > I would\r\n> > argue to use the words and variables (names) in such meanings.\r\n> \r\n> I think the word \"aware\" has a broader meaning, so the naming as\r\n> proposed would be OK IMO. But actually, I don't have any strong\r\n> opinion about that, so I'll change it as explained.\r\n\r\nThanks.\r\n\r\n> > > * For EvalPlanQual, I modified the patch so that async-aware children\r\n> > > are treated as if they were synchronous when executing EvalPlanQual.\r\n> >\r\n> > Doesn't async execution accelerate the epq-fetching? 
Or does\r\n> > async-execution goes into trouble in the EPQ path?\r\n> \r\n> The reason why I disabled async execution when executing EPQ is to\r\n> avoid sending asynchronous queries to the remote sides, which would be\r\n> useless, because scan tuples for an EPQ recheck are obtained in a\r\n> dedicated way.\r\n\r\nIf EPQ is performed onto Append, I think it should gain from\r\nasynchronous execution since it is going to fetch *a* tuple from\r\nseveral partitions or children. I believe EPQ doesn't contain Append\r\nin major cases, though. (Or I didn't come up with the steps for the\r\ncase to happen...)\r\n\r\n\r\n> > > * In Robert's patch, all async-aware children below Append nodes in\r\n> > > the query waiting for events to occur were managed by a single EState,\r\n> > > but I modified the patch so that such children are managed by each\r\n> > > Append node, like Horiguchi-san's patch and Thomas'.\r\n> >\r\n> > Managing in Estate give advantage for push-up style executor but\r\n> > managing in node_state is simpler.\r\n> \r\n> What do you mean by \"push-up style executor\"?\r\n\r\nThe reverse of the volcano-style executor, which enters from the\r\ntopmost node and down to the bottom. In the \"push-up stule executor\",\r\nthe bottom-most nodes fires by a certain trigger then every\r\nintermediate nodes throws up the result to the parent until reaching\r\nthe topmost node.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Mon, 14 Dec 2020 16:01:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "At Sat, 12 Dec 2020 19:06:51 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \r\n> On Fri, Nov 20, 2020 at 8:16 PM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> > I looked through the nodeAppend.c and postgres_fdw.c part and those\r\n> > are I think the core of this patch.\r\n> \r\n> Thanks again for the review!\r\n> \r\n> > - * figure out which subplan we are currently processing\r\n> > + * try to get a tuple from async subplans\r\n> > + */\r\n> > + if (!bms_is_empty(node->as_needrequest) ||\r\n> > + (node->as_syncdone && !bms_is_empty(node->as_asyncpending)))\r\n> > + {\r\n> > + if (ExecAppendAsyncGetNext(node, &result))\r\n> > + return result;\r\n> >\r\n> > The function ExecAppendAsyncGetNext() is a function called only here,\r\n> > and contains only 31 lines. It doesn't seem to me that the separation\r\n> > makes the code more readable.\r\n> \r\n> Considering the original ExecAppend() is about 50 lines long, 31 lines\r\n> of code would not be small. So I'd vote for separating it into\r\n> another function as proposed.\r\n\r\nOk, I no longer oppose to separating some part from ExecAppend().\r\n\r\n> > - /* choose new subplan; if none, we're done */\r\n> > - if (!node->choose_next_subplan(node))\r\n> > + /* wait or poll async events */\r\n> > + if (!bms_is_empty(node->as_asyncpending))\r\n> > + {\r\n> > + Assert(!node->as_syncdone);\r\n> > + Assert(bms_is_empty(node->as_needrequest));\r\n> > + ExecAppendAsyncEventWait(node);\r\n> >\r\n> > You moved the function to wait for events from execAsync to\r\n> > nodeAppend. The former is a generic module that can be used from any\r\n> > kind of executor nodes, but the latter is specialized for nodeAppend.\r\n> > In other words, the abstraction level is lowered here. What is the\r\n> > reason for the change?\r\n> \r\n> The reason is just because that function is only called from\r\n> ExecAppend(). 
I put some functions only called from nodeAppend.c in\r\n> execAsync.c, though.\r\n\r\n(I think) You told me that you preferred the genericity of the\r\noriginal interface, but you're doing the opposite. If you think we\r\ncan move such a generic feature to a part of the Append node, all other\r\nfeatures can be moved the same way. I guess there's a reason you want\r\nonly this specific feature, out of all of them, to be Append-specific,\r\nand I want to know it.\r\n\r\n> > + /* Perform the actual callback. */\r\n> > + ExecAsyncRequest(areq);\r\n> > + if (ExecAppendAsyncResponse(areq))\r\n> > + {\r\n> > + Assert(!TupIsNull(areq->result));\r\n> > + *result = areq->result;\r\n> >\r\n> > Putting aside the name of the functions, the first two function are\r\n> > used only this way at only two places. ExecAsyncRequest(areq) tells\r\n> > fdw to store the first tuple among the already received ones to areq,\r\n> > and ExecAppendAsyncResponse(areq) is checking the result is actually\r\n> > set. Finally the result is retrieved directory from areq->result.\r\n> > What is the reason that the two functions are separately exists?\r\n> \r\n> I think that when an async-aware node gets a tuple from an\r\n> async-capable node, they should use ExecAsyncRequest() /\r\n> ExecAyncHogeResponse() rather than ExecProcNode() [1]. I modified the\r\n> patch so that ExecAppendAsyncResponse() is called from Append, but to\r\n> support bubbling up the plan tree discussed in [2], I think it should\r\n> be called from ForeignScans (the sides of async-capable nodes). Am I\r\n> right? Anyway, I’ll rename ExecAppendAyncResponse() to the one\r\n> proposed in Robert’s original patch.\r\n\r\nEven though I understand the concept, to make it work we need to\r\nremember the parent *async* node somewhere. In my faint memory, the\r\nvery early patch did something like that.\r\n\r\nSo I think just providing ExecAsyncResponse() doesn't make it\r\ntrue. 
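A minimal sketch of what "remember the parent *async* node somewhere" amounts to (hypothetical struct and field names loosely modeled on Robert's original AsyncRequest proposal; this is not the actual executor code):

```c
#include <assert.h>

/* Hypothetical, simplified stand-in for an AsyncRequest-style pairing:
 * the request object keeps a link to its requestor (the parent *async*
 * node), which is exactly the state that must be remembered somewhere
 * for an upward "response" to work without the parent polling. */
typedef struct AsyncRequestSketch
{
    void *requestor;        /* async-aware parent node, e.g. an Append */
    void *requestee;        /* async-capable child, e.g. a ForeignScan */
    int   request_complete; /* set once a result is available */
    int   result;           /* stand-in for the returned tuple slot */
} AsyncRequestSketch;

/* Requestee side: record a result and flag completion, so the parent
 * reachable via areq->requestor can pick it up when it is notified. */
static void async_request_done(AsyncRequestSketch *areq, int tuple)
{
    areq->result = tuple;
    areq->request_complete = 1;
}
```

Without the `requestor` link, the child has no way to name who should consume the completed request, which is the gap being pointed out here.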
But if we make it true, it would be something like\r\npartially-reversed steps from what the current Exec*()s do for some of\r\nthe existing nodes, and further code would be required for some other\r\nnodes like WindowFunction. Bubbling up works only in very simple cases\r\nwhere a returned tuple is thrown up to the next parent as-is, or at\r\nleast where the node just converts a tuple into another shape. If an\r\nasync-receiver node wants to process multiple tuples from a child or\r\nfrom multiple children, it is no longer just bubbling up.\r\n\r\nThat being said, we could avoid passing (a-kind-of) side-channel\r\ninformation when ExecProcNode is called by providing\r\nExecAsyncResponse(). But I don't think the \"side-channel\" is a\r\nproblem, since it is just another state of the node.\r\n\r\n\r\nAnd... I think the reason I feel uneasy about the patch may be that it\r\nuses the interface names in a somewhat different context.\r\nOriginally, the framework resides in between executor nodes, not on a\r\nnode of either side. ExecAsyncNotify() notifies the requestee about an\r\nevent and ExecAsyncResponse() notifies the requestor about a new\r\ntuple. I see nothing strange in that usage. But this patch feels to\r\nme like it uses the same names in a different (and somewhat wrong)\r\ncontext.\r\n\r\n> > + /* Perform the actual callback. */\r\n> > + ExecAsyncNotify(areq);\r\n> >\r\n> > Mmm. The usage of the function (or its name) looks completely reverse\r\n> > to me. I think FDW should NOTIFY to exec nodes that the new tuple\r\n> > gets available but the reverse is nonsense. What the function is\r\n> > actually doing is to REQUEST fdw to fetch tuples that are expected to\r\n> > have arrived, which is different from what the name suggests.\r\n> \r\n> As mentioned in a previous email, this is an FDW callback routine\r\n> revived from Robert’s patch. I think the naming is reasonable,\r\n> because the callback routine notifies the FDW of readiness of a file\r\n> descriptor. 
And actually, the callback routine tells the core whether\r\n> the corresponding ForeignScan node is ready for a new request or not,\r\n> by setting the callback_pending flag accordingly.\r\n\r\nHmm. Agreed. The word \"callback\" is also used there [3]... I\r\nremember and it seems reasonable that the core calls AsyncNotify() on\r\nFDW and the FDW calls ExecForeignScan as a response to it and notify\r\nback to core of that using ExecAsyncRequestDone(). But the patch here\r\nfeels a little strange, or uneasy, to me.\r\n\r\n[3] https://www.postgresql.org/message-id/20161018.103051.30820907.horiguchi.kyotaro%40lab.ntt.co.jp\r\n\r\n> > postgres_fdw.c\r\n> >\r\n> > > postgresIterateForeignScan(ForeignScanState *node)\r\n> > > {\r\n> > > PgFdwScanState *fsstate = (PgFdwScanState *) node->fdw_state;\r\n> > > TupleTableSlot *slot = node->ss.ss_ScanTupleSlot;\r\n> > >\r\n> > > /*\r\n> > > * If this is the first call after Begin or ReScan, we need to create the\r\n> > > * cursor on the remote side.\r\n> > > */\r\n> > > if (!fsstate->cursor_exists)\r\n> > > create_cursor(node);\r\n> >\r\n> > With the patch, cursors are also created in another place so at least\r\n> > the comment is wrong.\r\n> \r\n> Good catch! Will fix.\r\n> \r\n> > That being said, I think we should unify the\r\n> > code except the differences between async and sync. 
For example, if\r\n> > the fetch_more_data_begin() needs to be called only for async\r\n> > fetching, the cursor should be created before calling the function, in\r\n> > the code path common with sync fetching.\r\n> \r\n> I think that that would make the code easier to understand, but I’m\r\n> not 100% sure we really need to do so.\r\n\r\nAnd I believe that we don't tolerate even the slightest performance\r\ndegradation.\r\n\r\n> Best regards,\r\n> Etsuro Fujita\r\n> \r\n> [1] https://www.postgresql.org/message-id/CA%2BTgmoYrbgTBnLwnr1v%3Dpk%2BC%3DznWg7AgV9%3DM9ehrq6TDexPQNw%40mail.gmail.com\r\n> [2] https://www.postgresql.org/message-id/CA%2BTgmoZSWnhy%3DSB3ggQcB6EqKxzbNeNn%3DEfwARnCS5tyhhBNcw%40mail.gmail.com\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Mon, 14 Dec 2020 17:56:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Mon, Dec 14, 2020 at 4:01 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Sat, 12 Dec 2020 18:25:57 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > On Fri, Nov 20, 2020 at 3:51 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > The reason for\n> > > the early fetching is letting fdw send the next request as early as\n> > > possible. (However, I didn't measure the effect of the\n> > > nodeAppend-level prefetching.)\n> >\n> > I agree that that would lead to an improved efficiency in some cases,\n> > but I still think that that would be useless in some other cases like\n> > SELECT * FROM sharded_table LIMIT 1. 
Also, I think the situation\n> > would get worse if we support Append on top of joins or aggregates\n> > over ForeignScans, which would be more expensive to perform than these\n> > ForeignScans.\n>\n> I'm not sure which gain we weigh on, but if doing \"LIMIT 1\" on Append\n> for many times is more common than fetching all or \"LIMIT <many\n> multiples of fetch_size>\", that discussion would be convincing... Is\n> it really the case?\n\nI don't have a clear answer for that... Performance in the case you\nmentioned would be improved by async execution without prefetching by\nAppend, so it seemed reasonable to me to remove that prefetching to\navoid unnecessary overheads in the case I mentioned. BUT: I started\nto think my proposal, which needs an additional FDW callback routine\n(ie, ForeignAsyncBegin()), might be a bad idea, because it would\nincrease the burden on FDW authors.\n\n> > If we do prefetching, I think it would be better that it’s the\n> > responsibility of the FDW to do prefetching, and I think that that\n> > could be done by letting the FDW to start another data fetch,\n> > independently of the core, in the ForeignAsyncNotify callback routine,\n>\n> FDW does prefetching (if it means sending request to remote) in my\n> patch, so I agree to that. It suspect that you were intended to say\n> the opposite. The core (ExecAppendAsyncGetNext()) controls\n> prefetching in your patch.\n\nNo. That function just tries to retrieve a tuple from any of the\nready subplans (ie, subplans marked as as_needrequest).\n\n> > which I revived from Robert's original patch. I think that that would\n> > be more efficient, because the FDW would no longer need to wait until\n> > all buffered tuples are returned to the core. In the WIP patch, I\n>\n> I don't understand. My patch sends a prefetch-query as soon as all the\n> tuples of the last remote-request is stored into FDW storage. 
The\n> reason for removing ExecAsyncNotify() was it is just redundant as far\n> as concerning Append asynchrony. But I particulary oppose to revive\n> the function.\n\nSorry, my explanation was not good, but what I'm saying here is about\nmy patch, not your patch. I think this FDW callback routine would be\nuseful; it allows an FDW to perform another asynchronous data fetch\nbefore delivering a tuple to the core as discussed in [1]. Also, it\nwould be useful when extending to the case where we have intermediate\nnodes between an Append and a ForeignScan such as joins or aggregates,\nwhich I'll explain below.\n\n> > only allowed the callback routine to put the corresponding ForeignScan\n> > node into a state where it’s either ready for a new request or needing\n> > a callback for another data fetch, but I think we could probably relax\n> > the restriction so that the ForeignScan node can be put into another\n> > state where it’s ready for a new request while needing a callback for\n> > the prefetch.\n>\n> I don't understand this, too. ExecAsyncNotify() doesn't touch any of\n> the bitmaps, as_needrequest, callback_pending nor as_asyncpending in\n> your patch. Am I looking into something wrong? I'm looking\n> async-wip-2020-11-17.patch.\n\nIn the WIP patch I post, these bitmaps are modified in the core side\nbased on the callback_pending and request_complete flags in\nAsyncRequests returned from FDWs (See ExecAppendAsyncEventWait()).\n\n> (By the way, it is one of those that make the code hard to read to me\n> that the \"callback\" means \"calling an API function\". 
I think none of\n> them (ExecAsyncBegin, ExecAsyncRequest, ExecAsyncNotify) are a\n> \"callback\".)\n\nI thought the word “callback” was OK, because these functions would\ncall the corresponding FDW callback routines, but I’ll revise the\nwording.\n\n> > The reason why I disabled async execution when executing EPQ is to\n> > avoid sending asynchronous queries to the remote sides, which would be\n> > useless, because scan tuples for an EPQ recheck are obtained in a\n> > dedicated way.\n>\n> If EPQ is performed onto Append, I think it should gain from\n> asynchronous execution since it is going to fetch *a* tuple from\n> several partitions or children. I believe EPQ doesn't contain Append\n> in major cases, though. (Or I didn't come up with the steps for the\n> case to happen...)\n\nSorry, I don’t understand this part. Could you elaborate a bit more on it?\n\n> > What do you mean by \"push-up style executor\"?\n>\n> The reverse of the volcano-style executor, which enters from the\n> topmost node and down to the bottom. In the \"push-up stule executor\",\n> the bottom-most nodes fires by a certain trigger then every\n> intermediate nodes throws up the result to the parent until reaching\n> the topmost node.\n\nThat is what I'm thinking to be able to support the case I mentioned\nabove. I think that that would allow us to find ready subplans\nefficiently from occurred wait events in ExecAppendAsyncEventWait().\nConsider a plan like this:\n\nAppend\n-> Nested Loop\n -> Foreign Scan on a\n -> Foreign Scan on b\n-> ...\n\nI assume here that Foreign Scan on a, Foreign Scan on b, and Nested\nLoop are all async-capable and that we have somewhere in the executor\nan AsyncRequest with requestor=\"Nested Loop\" and requestee=\"Foreign\nScan on a\", an AsyncRequest with requestor=\"Nested Loop\" and\nrequestee=\"Foreign Scan on b\", and an AsyncRequest with\nrequestor=\"Append\" and requestee=\"Nested Loop\". 
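The three AsyncRequests just enumerated, written out as plain data (purely illustrative — a toy lookup over hypothetical names from the example, not the real executor types):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative only: the requestor/requestee wiring of the example
 * above (Append -> Nested Loop -> the two Foreign Scans), as data.
 * These are not the real AsyncRequest structs. */
typedef struct ReqSketch
{
    const char *requestor;
    const char *requestee;
} ReqSketch;

static const ReqSketch wiring[] = {
    {"Nested Loop", "Foreign Scan on a"},
    {"Nested Loop", "Foreign Scan on b"},
    {"Append",      "Nested Loop"},
};

/* When a node produces a tuple, this answers "who gets the response?";
 * one lookup corresponds to one hop of bubbling up. */
static const char *requestor_of(const char *node)
{
    for (size_t i = 0; i < sizeof(wiring) / sizeof(wiring[0]); i++)
        if (strcmp(wiring[i].requestee, node) == 0)
            return wiring[i].requestor;
    return NULL;    /* the topmost node has no requestor */
}
```

A result produced at "Foreign Scan on a" is handed to its requestor "Nested Loop", and a joined tuple from "Nested Loop" to "Append" — one level at a time.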
In\nExecAppendAsyncEventWait(), if a file descriptor for foreign table a\nbecomes ready, we would call ForeignAsyncNotify() for a, and if it\nreturns a tuple back to the requestor node (ie, Nested Loop) (using\nExecAsyncResponse()), then *ForeignAsyncNotify() would be called for\nNested Loop*. Nested Loop would then call ExecAsyncRequest() for the\ninner requestee node (ie, Foreign Scan on b; I assume here that it is\na foreign scan parameterized by a). If Foreign Scan on b returns a\ntuple back to the requestor node (ie, Nested Loop) (using\nExecAsyncResponse()), then Nested Loop would match the tuples from the\nouter and inner sides. If they match, the join result would be\nreturned back to the requestor node (ie, Append) (using\nExecAsyncResponse()), marking the Nested Loop subplan as\nas_needrequest. Otherwise, Nested Loop would call ExecAsyncRequest()\nfor the inner requestee node for the next tuple, and so on. If\nExecAsyncRequest() can't return a tuple immediately, we would wait\nuntil a file descriptor for foreign table b becomes ready; we would\nstart from calling ForeignAsyncNotify() for b when the file descriptor\nbecomes ready. In this way we could find ready subplans efficiently\nfrom occurred wait events in ExecAppendAsyncEventWait() when extending\nto the case where subplans are joins or aggregates over Foreign Scans,\nI think. Maybe I’m missing something, though.\n\nThanks for the comments!\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK153oorYtTpW_-aZrjH-iecHbykX7qbxX_5630ZK8nqVHg%40mail.gmail.com\n\n\n", "msg_date": "Sat, 19 Dec 2020 17:55:22 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Mon, Dec 14, 2020 at 5:56 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Sat, 12 Dec 2020 19:06:51 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > On Fri, Nov 20, 2020 at 8:16 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n\n> > > + /* wait or poll async events */\n> > > + if (!bms_is_empty(node->as_asyncpending))\n> > > + {\n> > > + Assert(!node->as_syncdone);\n> > > + Assert(bms_is_empty(node->as_needrequest));\n> > > + ExecAppendAsyncEventWait(node);\n> > >\n> > > You moved the function to wait for events from execAsync to\n> > > nodeAppend. The former is a generic module that can be used from any\n> > > kind of executor nodes, but the latter is specialized for nodeAppend.\n> > > In other words, the abstraction level is lowered here. What is the\n> > > reason for the change?\n> >\n> > The reason is just because that function is only called from\n> > ExecAppend(). I put some functions only called from nodeAppend.c in\n> > execAsync.c, though.\n>\n> (I think) You told me that you preferred the genericity of the\n> original interface, but you're doing the opposite. If you think we\n> can move such a generic feature to a part of Append node, all other\n> features can be move the same way. I guess there's a reason you want\n> only the this specific feature out of all of them be Append-spcific\n> and I want to know that.\n\nThe reason is that I’m thinking to add a small feature for\nmultiplexing Append subplans, not a general feature for async\nexecution as discussed in [1], because this would be an interim\nsolution until the executor rewrite is done.\n\n> > I think that when an async-aware node gets a tuple from an\n> > async-capable node, they should use ExecAsyncRequest() /\n> > ExecAyncHogeResponse() rather than ExecProcNode() [1]. 
I modified the\n> > patch so that ExecAppendAsyncResponse() is called from Append, but to\n> > support bubbling up the plan tree discussed in [2], I think it should\n> > be called from ForeignScans (the sides of async-capable nodes). Am I\n> > right? Anyway, I’ll rename ExecAppendAyncResponse() to the one\n> > proposed in Robert’s original patch.\n>\n> Even though I understand the concept but to make work it we need to\n> remember the parent *async* node somewhere. In my faint memory the\n> very early patch did something like that.\n>\n> So I think just providing ExecAsyncResponse() doesn't make it\n> true. But if we make it true, it would be something like\n> partially-reversed steps from what the current Exec*()s do for some of\n> the existing nodes and further code is required for some other nodes\n> like WindowFunction. Bubbling up works only in very simple cases where\n> a returned tuple is thrown up to further parent as-is or at least when\n> the node convers a tuple into another shape. If an async-receiver node\n> wants to process multiple tuples from a child or from multiple\n> children, it is no longer be just a bubbling up.\n\nI explained the meaning of “bubbling up the plan tree” in a previous\nemail I sent a moment ago.\n\n> And.. I think the reason I feel uneasy for the patch may be that the\n> patch uses the interface names in somewhat different context.\n> Origianlly the fraemework resides in-between executor nodes, not on a\n> node of either side. ExecAsyncNotify() notifies the requestee about an\n> event and ExecAsyncResonse() notifies the requestor about a new\n> tuple. I don't feel strangeness in this usage. But this patch feels to\n> me using the same names in different (and somewhat wrong) context.\n\nSorry, this is a WIP patch. Will fix.\n\n> > > + /* Perform the actual callback. */\n> > > + ExecAsyncNotify(areq);\n> > >\n> > > Mmm. 
The usage of the function (or its name) looks completely reverse\n> > > to me.\n\n> > As mentioned in a previous email, this is an FDW callback routine\n> > revived from Robert’s patch. I think the naming is reasonable,\n> > because the callback routine notifies the FDW of readiness of a file\n> > descriptor. And actually, the callback routine tells the core whether\n> > the corresponding ForeignScan node is ready for a new request or not,\n> > by setting the callback_pending flag accordingly.\n>\n> Hmm. Agreed. The word \"callback\" is also used there [3]... I\n> remember and it seems reasonable that the core calls AsyncNotify() on\n> FDW and the FDW calls ExecForeignScan as a response to it and notify\n> back to core of that using ExecAsyncRequestDone(). But the patch here\n> feels a little strange, or uneasy, to me.\n\nI’m not sure what I should do to improve the patch. Could you\nelaborate a bit more on this part?\n\n> > > postgres_fdw.c\n> > >\n> > > > postgresIterateForeignScan(ForeignScanState *node)\n> > > > {\n> > > > PgFdwScanState *fsstate = (PgFdwScanState *) node->fdw_state;\n> > > > TupleTableSlot *slot = node->ss.ss_ScanTupleSlot;\n> > > >\n> > > > /*\n> > > > * If this is the first call after Begin or ReScan, we need to create the\n> > > > * cursor on the remote side.\n> > > > */\n> > > > if (!fsstate->cursor_exists)\n> > > > create_cursor(node);\n\n> > > That being said, I think we should unify the\n> > > code except the differences between async and sync. 
For example, if\n> > > the fetch_more_data_begin() needs to be called only for async\n> > > fetching, the cursor should be created before calling the function, in\n> > > the code path common with sync fetching.\n> >\n> > I think that that would make the code easier to understand, but I’m\n> > not 100% sure we really need to do so.\n>\n> And I believe that we don't tolerate even the slightest performance\n> degradation.\n\nIn the case of async execution, the cursor would have already been\ncreated before we get here as mentioned by you, so we would just skip\ncreate_cursor() in that case. I don’t think that that would degrade\nperformance noticeably. Am I wrong?\n\nThanks again!\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmobx8su_bYtAa3DgrqB%2BR7xZG6kHRj0ccMUUshKAQVftww%40mail.gmail.com\n\n\n", "msg_date": "Sat, 19 Dec 2020 18:20:52 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Thu, Nov 26, 2020 at 10:28 AM movead.li@highgo.ca\n<movead.li@highgo.ca> wrote:\n> I test the patch and occur several issues as blow:\n\nThank you for the review!\n\n> Issue one:\n> Get a Assert error at 'Assert(bms_is_member(i, node->as_needrequest));' in\n> ExecAppendAsyncRequest() function when I use more than two foreign table\n> on different foreign server.\n>\n> I research the code and do such change then the Assert problom disappear.\n\nCould you show a test case causing the assertion failure?\n\n> Issue two:\n> Then I test and find if I have sync subplan and async sunbplan, it will run over\n> the sync subplan then the async turn, I do not know if it is intent.\n\nDid you use a partitioned table with only two partitions where one is\nlocal and the other is remote? 
If so, that would be expected, because\nin that case, 1) the patch would first send an asynchronous query to\nthe remote, 2) it would then process the local partition until the\nend, 3) it would then wait/poll the async event, and 4) it would\nfinally process the remote partition when the event occurs.\n\nSorry for the delay.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Sun, 20 Dec 2020 17:15:38 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Thu, Dec 10, 2020 at 3:38 PM Andrey V. Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 11/17/20 2:56 PM, Etsuro Fujita wrote:\n> > On Mon, Oct 5, 2020 at 3:35 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > Comments welcome! The attached is still WIP and maybe I'm missing\n> > something, though.\n> I reviewed your patch and used it in my TPC-H benchmarks. It is still\n> WIP. Will you improve this patch?\n\nYeah, will do.\n\n> I also want to say that, in my opinion, Horiguchi-san's version seems\n> preferable: it is more structured, simple to understand, executor-native\n> and allows to reduce FDW interface changes.\n\nI’m not sure what you mean by “executor-native”, but I partly agree\nthat Horiguchi-san’s version would be easier to understand, because\nhis version was made so that a tuple is requested from an async\nsubplan using our Volcano Iterator model almost as-is. But my\nconcerns about his version would be: 1) it’s actually pretty invasive,\nbecause it changes the contract of the ExecProcNode() API [1], and 2)\nIIUC it wouldn’t allow us to find ready subplans from occurred wait\nevents when we extend to the case where subplans are joins or\naggregates over ForeignScans [2].\n\n> This code really only needs\n> one procedure - IsForeignPathAsyncCapable.\n\nThis isn’t correct: his version uses ForeignAsyncConfigureWait() as well.\n\nThank you for reviewing! 
Sorry for the delay.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK16YXCADSwsFLSxqTBBLbt3E_%3DiigKTtjS%3Ddqu%2B8K8DWCw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAPmGK16rA5ODyRrVK9iPsyW-td2RcRZXsdWoVhMmLLmUhprsTg%40mail.gmail.com\n\n\n", "msg_date": "Sun, 20 Dec 2020 17:25:57 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Sat, Dec 19, 2020 at 5:55 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Mon, Dec 14, 2020 at 4:01 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Sat, 12 Dec 2020 18:25:57 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > > On Fri, Nov 20, 2020 at 3:51 PM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > > The reason for\n> > > > the early fetching is letting fdw send the next request as early as\n> > > > possible. (However, I didn't measure the effect of the\n> > > > nodeAppend-level prefetching.)\n> > >\n> > > I agree that that would lead to an improved efficiency in some cases,\n> > > but I still think that that would be useless in some other cases like\n> > > SELECT * FROM sharded_table LIMIT 1. Also, I think the situation\n> > > would get worse if we support Append on top of joins or aggregates\n> > > over ForeignScans, which would be more expensive to perform than these\n> > > ForeignScans.\n> >\n> > I'm not sure which gain we weigh on, but if doing \"LIMIT 1\" on Append\n> > for many times is more common than fetching all or \"LIMIT <many\n> > multiples of fetch_size>\", that discussion would be convincing... Is\n> > it really the case?\n>\n> I don't have a clear answer for that... Performance in the case you\n> mentioned would be improved by async execution without prefetching by\n> Append, so it seemed reasonable to me to remove that prefetching to\n> avoid unnecessary overheads in the case I mentioned. 
BUT: I started\n> to think my proposal, which needs an additional FDW callback routine\n> (ie, ForeignAsyncBegin()), might be a bad idea, because it would\n> increase the burden on FDW authors.\n\nI dropped my proposal; I modified the patch so that ExecAppend()\nrequests tuples from all subplans needing a request *at once*, as\noriginally proposed by Robert and then you. Please find attached a\nnew version of the patch.\n\nOther changes:\n\n* I renamed ExecAppendAsyncResponse() to what was originally proposed\nby Robert, and modified the patch so that that function is called from\nthe requestee side, not the requestor side as in the previous version.\n\n* I renamed the variable async_aware as explained by you.\n\n* I tweaked comments a bit to address your comments.\n\n* I made code simpler, and added a bit more assertions.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Thu, 31 Dec 2020 19:15:48 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Thu, Dec 31, 2020 at 7:15 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> * I tweaked comments a bit to address your comments.\n\nI forgot to update some comments. :-( Attached is a new version of\nthe patch updating comments further. I did a bit of cleanup for the\npostgres_fdw part as well.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Fri, 1 Jan 2021 17:41:39 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Sun, Dec 20, 2020 at 5:15 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Thu, Nov 26, 2020 at 10:28 AM movead.li@highgo.ca\n> <movead.li@highgo.ca> wrote:\n> > Issue one:\n> > Get a Assert error at 'Assert(bms_is_member(i, node->as_needrequest));' in\n> > ExecAppendAsyncRequest() function when I use more than two foreign table\n> > on different foreign server.\n> >\n> > I research the code and do such change then the Assert problom disappear.\n>\n> Could you show a test case causing the assertion failure?\n\nI happened to reproduce the same failure in my environment.\n\nI think your change would be correct, but I changed the patch so that\nit doesn’t need as_lastasyncplan anymore [1]. The new version of the\npatch works well for my case. So, could you test your case with it?\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK17L0j6otssa53ZvjnCsjguJHZXaqPL2HU_LDoZ4ATZjEw%40mail.gmail.com\n\n\n", "msg_date": "Sat, 2 Jan 2021 17:15:59 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "At Sat, 19 Dec 2020 17:55:22 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \n> On Mon, Dec 14, 2020 at 4:01 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Sat, 12 Dec 2020 18:25:57 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > > On Fri, Nov 20, 2020 at 3:51 PM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > > The reason for\n> > > > the early fetching is letting fdw send the next request as early as\n> > > > possible. (However, I didn't measure the effect of the\n> > > > nodeAppend-level prefetching.)\n> > >\n> > > I agree that that would lead to an improved efficiency in some cases,\n> > > but I still think that that would be useless in some other cases like\n> > > SELECT * FROM sharded_table LIMIT 1. 
Also, I think the situation\n> > > would get worse if we support Append on top of joins or aggregates\n> > > over ForeignScans, which would be more expensive to perform than these\n> > > ForeignScans.\n> >\n> > I'm not sure which gain we weigh on, but if doing \"LIMIT 1\" on Append\n> > for many times is more common than fetching all or \"LIMIT <many\n> > multiples of fetch_size>\", that discussion would be convincing... Is\n> > it really the case?\n> \n> I don't have a clear answer for that... Performance in the case you\n> mentioned would be improved by async execution without prefetching by\n> Append, so it seemed reasonable to me to remove that prefetching to\n> avoid unnecessary overheads in the case I mentioned. BUT: I started\n> to think my proposal, which needs an additional FDW callback routine\n> (ie, ForeignAsyncBegin()), might be a bad idea, because it would\n> increase the burden on FDW authors.\n\nI agree on the point of developers' burden.\n\n> > > If we do prefetching, I think it would be better that it’s the\n> > > responsibility of the FDW to do prefetching, and I think that that\n> > > could be done by letting the FDW to start another data fetch,\n> > > independently of the core, in the ForeignAsyncNotify callback routine,\n> >\n> > FDW does prefetching (if it means sending request to remote) in my\n> > patch, so I agree to that. It suspect that you were intended to say\n> > the opposite. The core (ExecAppendAsyncGetNext()) controls\n> > prefetching in your patch.\n> \n> No. That function just tries to retrieve a tuple from any of the\n> ready subplans (ie, subplans marked as as_needrequest).\n\nMmm. I meant that the function explicitly calls\nExecAppendAsyncRequest(), which finally calls fetch_more_data_begin()\n(if needed). Conversely, if the function doesn't call\nExecAppendAsyncRequest(), the next request doesn't\nhappen. 
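The difference between the two strategies can be made concrete with a toy cost model (invented for illustration, not postgres_fdw code; it optimistically assumes an in-flight reply always arrives before the local buffer drains, the best case for eager prefetching):

```c
#include <assert.h>

/* Toy model: a scan drains batches of `fetch_size` rows from a local
 * buffer.  "eager" issues the next remote request as soon as a batch
 * has been buffered; otherwise the request is issued only when the
 * buffer is found empty.  Returns how many times the consumer must
 * block waiting on the network. */
static int blocking_waits(int total_rows, int fetch_size, int eager)
{
    int fetched = 0, buffered = 0, in_flight = 0, waits = 0;

    while (fetched < total_rows)
    {
        if (buffered == 0)
        {
            if (!in_flight)
                waits++;        /* nothing in flight: must block */
            in_flight = 0;      /* the reply (if any) is consumed */
            buffered = (total_rows - fetched < fetch_size)
                       ? total_rows - fetched : fetch_size;
            if (eager && fetched + buffered < total_rows)
                in_flight = 1;  /* fire the next request right away */
        }
        buffered--;             /* consume one row locally */
        fetched++;
    }
    return waits;
}
```

Under this model, a 100-row scan with fetch_size = 25 blocks four times without the early request and only once with it — which is the gain being argued for, at the cost of possibly useless remote work when few rows are actually consumed.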
That is, after the tuple buffer of FDW-side is exhausted, the\nnext request doesn't happen until executor requests for the next\ntuple. You seem to be saying that \"postgresForeignAsyncRequest() calls\nfetch_more_data_begin() following its own decision.\" but this doesn't\nseem to be \"prefetching\".\n\n> > > which I revived from Robert's original patch. I think that that would\n> > > be more efficient, because the FDW would no longer need to wait until\n> > > all buffered tuples are returned to the core. In the WIP patch, I\n> >\n> > I don't understand. My patch sends a prefetch-query as soon as all the\n> > tuples of the last remote-request is stored into FDW storage. The\n> > reason for removing ExecAsyncNotify() was it is just redundant as far\n> > as concerning Append asynchrony. But I particulary oppose to revive\n> > the function.\n> \n> Sorry, my explanation was not good, but what I'm saying here is about\n> my patch, not your patch. I think this FDW callback routine would be\n> useful; it allows an FDW to perform another asynchronous data fetch\n> before delivering a tuple to the core as discussed in [1]. Also, it\n> would be useful when extending to the case where we have intermediate\n> nodes between an Append and a ForeignScan such as joins or aggregates,\n> which I'll explain below.\n\nYeah. If a not-immediate parent of an async-capable node works as\nasync-aware, the notify API would have the power. So I don't object to\nthe API.\n\n> > > only allowed the callback routine to put the corresponding ForeignScan\n> > > node into a state where it’s either ready for a new request or needing\n> > > a callback for another data fetch, but I think we could probably relax\n> > > the restriction so that the ForeignScan node can be put into another\n> > > state where it’s ready for a new request while needing a callback for\n> > > the prefetch.\n> >\n> > I don't understand this, too. 
ExecAsyncNotify() doesn't touch any of\n> > the bitmaps, as_needrequest, callback_pending nor as_asyncpending in\n> > your patch. Am I looking into something wrong? I'm looking at\n> > async-wip-2020-11-17.patch.\n> \n> In the WIP patch I post, these bitmaps are modified in the core side\n> based on the callback_pending and request_complete flags in\n> AsyncRequests returned from FDWs (See ExecAppendAsyncEventWait()).\n\nSorry, I think I misread you here. I agree that the notify API is not\nso useful now, but it would be useful if we allowed notifying\ndescendants other than immediate children. However, I stumbled on the\nfact that some kinds of nodes don't return a result once each of the\nunderlying nodes has returned *a* tuple. Concretely, count(*) doesn't\nreturn until *all* tuples of the counted relation have been returned.\nI remember that the fact might be the reason why I removed the API.\nAfter all, the topmost async-aware node must ask every immediate child\nwhether it can return a tuple.\n\n> > (By the way, it is one of those that make the code hard to read to me\n> > that the \"callback\" means \"calling an API function\". I think none of\n> > them (ExecAsyncBegin, ExecAsyncRequest, ExecAsyncNotify) are a\n> > \"callback\".)\n> \n> I thought the word “callback” was OK, because these functions would\n> call the corresponding FDW callback routines, but I’ll revise the\n> wording.\n\nI'm not confident about the usage of \"callback\", though :p (Sorry.) I\nbelieve that a \"callback\" is a function that a caller tells a callee\nto call. In a broader sense, every FDW API is a function that an FDW\nextension tells the core to call (yeah, the direction is inverted).\nHowever, we don't call fread a callback of libc. 
They work based on slightly\ndifferent mechanism but substantially the same, I think.\n\n> > > The reason why I disabled async execution when executing EPQ is to\n> > > avoid sending asynchronous queries to the remote sides, which would be\n> > > useless, because scan tuples for an EPQ recheck are obtained in a\n> > > dedicated way.\n> >\n> > If EPQ is performed onto Append, I think it should gain from\n> > asynchronous execution since it is going to fetch *a* tuple from\n> > several partitions or children. I believe EPQ doesn't contain Append\n> > in major cases, though. (Or I didn't come up with the steps for the\n> > case to happen...)\n> \n> Sorry, I don’t understand this part. Could you elaborate a bit more on it?\n\nEPQ retrieves a specific tuple from a node. If we perform EPQ on an\nAppend, only one of the children should offer a result tuple. Since\nAppend has no idea of which of its children will offer a result, it\nhas no way other than asking all children until it receives a\nresult. If we do that, asynchronously sending a query to all nodes\nwould win.\n\n\n> > > What do you mean by \"push-up style executor\"?\n> >\n> > The reverse of the volcano-style executor, which enters from the\n> > topmost node and down to the bottom. In the \"push-up stule executor\",\n> > the bottom-most nodes fires by a certain trigger then every\n> > intermediate nodes throws up the result to the parent until reaching\n> > the topmost node.\n> \n> That is what I'm thinking to be able to support the case I mentioned\n> above. 
I think that that would allow us to find ready subplans\n> efficiently from occurred wait events in ExecAppendAsyncEventWait().\n> Consider a plan like this:\n> \n> Append\n> -> Nested Loop\n> -> Foreign Scan on a\n> -> Foreign Scan on b\n> -> ...\n> \n> I assume here that Foreign Scan on a, Foreign Scan on b, and Nested\n> Loop are all async-capable and that we have somewhere in the executor\n> an AsyncRequest with requestor=\"Nested Loop\" and requestee=\"Foreign\n> Scan on a\", an AsyncRequest with requestor=\"Nested Loop\" and\n> requestee=\"Foreign Scan on b\", and an AsyncRequest with\n> requestor=\"Append\" and requestee=\"Nested Loop\". In\n> ExecAppendAsyncEventWait(), if a file descriptor for foreign table a\n> becomes ready, we would call ForeignAsyncNotify() for a, and if it\n> returns a tuple back to the requestor node (ie, Nested Loop) (using\n> ExecAsyncResponse()), then *ForeignAsyncNotify() would be called for\n> Nested Loop*. Nested Loop would then call ExecAsyncRequest() for the\n> inner requestee node (ie, Foreign Scan on b; I assume here that it is\n> a foreign scan parameterized by a). If Foreign Scan on b returns a\n> tuple back to the requestor node (ie, Nested Loop) (using\n> ExecAsyncResponse()), then Nested Loop would match the tuples from the\n> outer and inner sides. If they match, the join result would be\n> returned back to the requestor node (ie, Append) (using\n> ExecAsyncResponse()), marking the Nested Loop subplan as\n> as_needrequest. Otherwise, Nested Loop would call ExecAsyncRequest()\n> for the inner requestee node for the next tuple, and so on. If\n> ExecAsyncRequest() can't return a tuple immediately, we would wait\n> until a file descriptor for foreign table b becomes ready; we would\n> start from calling ForeignAsyncNotify() for b when the file descriptor\n> becomes ready. 
In this way we could find ready subplans efficiently\n> from occurred wait events in ExecAppendAsyncEventWait() when extending\n> to the case where subplans are joins or aggregates over Foreign Scans,\n> I think. Maybe I’m missing something, though.\n\nMaybe so. As I mentioned above, consider the following case:\n\n Join-1\n   Join-2\n     ForeignScan-A\n     ForeignScan-B\n   ForeignScan-C\n\nwhere Join-1 is the leader of asynchronous fetching. Even if FS-A and\nFS-B have each returned a tuple, it is not certain that Join-2 can\nreturn a tuple. I'm not sure how to resolve that situation with the\ncurrent infrastructure as-is.\n\nSo I tried a structure in which, when a node gets a new tuple, it asks\nits parent whether it is satisfied. In that trial I needed to make\nevery execnode a state machine, and that was pretty messy.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 15 Jan 2021 16:54:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Fri, Jan 15, 2021 at 4:54 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Mmm. I meant that the function explicitly calls\n> ExecAppendAsyncRequest(), which finally calls fetch_more_data_begin()\n> (if needed). Conversely, if the function doesn't call\n> ExecAppendAsyncRequest(), the next request to the remote doesn't\n> happen. That is, after the tuple buffer of FDW-side is exhausted, the\n> next request doesn't happen until executor requests for the next\n> tuple. You seem to be saying that \"postgresForeignAsyncRequest() calls\n> fetch_more_data_begin() following its own decision.\" but this doesn't\n> seem to be \"prefetching\".\n\nLet me explain a bit more. 
Actually, the new version of the patch\nallows prefetching in the FDW side; for such prefetching in\npostgres_fdw, I think we could add a fetch_more_data_begin() call in\npostgresForeignAsyncNotify(). But I left that for future work,\nbecause we don’t know yet if that’s really useful. (Another reason\nwhy I left that is we have more important issues that should be\naddressed [1], and I think addressing those issues is a requirement\nfor us to commit this patch, but adding such prefetching isn’t, IMO.)\n\n> Sorry. I think I misread you here. I agree that, the notify API is not\n> so useful now but would be useful if we allow notify descendents other\n> than immediate children. However, I stumbled on the fact that some\n> kinds of node doesn't return a result when all the underlying nodes\n> returned *a* tuple. Concretely count(*) doesn't return after *all*\n> tuple of the counted relation has been returned. I remember that the\n> fact might be the reason why I removed the API. After all the topmost\n> async-aware node must ask every immediate child if it can return a\n> tuple.\n\nThe patch I posted, which revived Robert’s original patch using stuff\nfrom your patch and Thomas’, provides ExecAsyncRequest() as well as\nExecAsyncNotify(), which supports pull-based execution like\nExecProcNode() (while ExecAsyncNotify() supports push-based\nexecution.) In the aggregate case you mentioned, I think we could\niterate calling ExecAsyncRequest() for the underlying subplan to get\nall tuples from it, in a similar way to ExecProcNode() in the normal\ncase.\n\n> EPQ retrieves a specific tuple from a node. If we perform EPQ on an\n> Append, only one of the children should offer a result tuple. Since\n> Append has no idea of which of its children will offer a result, it\n> has no way other than asking all children until it receives a\n> result. If we do that, asynchronously sending a query to all nodes\n> would win.\n\nThanks for the explanation! 
But I’m still not sure why we need to\nsend an asynchronous query to each of the asynchronous nodes in an EPQ\nrecheck. Is it possible to explain a bit more about that?\n\nI wrote:\n> > That is what I'm thinking to be able to support the case I mentioned\n> > above. I think that that would allow us to find ready subplans\n> > efficiently from occurred wait events in ExecAppendAsyncEventWait().\n> > Consider a plan like this:\n> >\n> > Append\n> > -> Nested Loop\n> > -> Foreign Scan on a\n> > -> Foreign Scan on b\n> > -> ...\n> >\n> > I assume here that Foreign Scan on a, Foreign Scan on b, and Nested\n> > Loop are all async-capable and that we have somewhere in the executor\n> > an AsyncRequest with requestor=\"Nested Loop\" and requestee=\"Foreign\n> > Scan on a\", an AsyncRequest with requestor=\"Nested Loop\" and\n> > requestee=\"Foreign Scan on b\", and an AsyncRequest with\n> > requestor=\"Append\" and requestee=\"Nested Loop\". In\n> > ExecAppendAsyncEventWait(), if a file descriptor for foreign table a\n> > becomes ready, we would call ForeignAsyncNotify() for a, and if it\n> > returns a tuple back to the requestor node (ie, Nested Loop) (using\n> > ExecAsyncResponse()), then *ForeignAsyncNotify() would be called for\n> > Nested Loop*. Nested Loop would then call ExecAsyncRequest() for the\n> > inner requestee node (ie, Foreign Scan on b; I assume here that it is\n> > a foreign scan parameterized by a). If Foreign Scan on b returns a\n> > tuple back to the requestor node (ie, Nested Loop) (using\n> > ExecAsyncResponse()), then Nested Loop would match the tuples from the\n> > outer and inner sides. If they match, the join result would be\n> > returned back to the requestor node (ie, Append) (using\n> > ExecAsyncResponse()), marking the Nested Loop subplan as\n> > as_needrequest. Otherwise, Nested Loop would call ExecAsyncRequest()\n> > for the inner requestee node for the next tuple, and so on. 
If\n> > ExecAsyncRequest() can't return a tuple immediately, we would wait\n> > until a file descriptor for foreign table b becomes ready; we would\n> > start from calling ForeignAsyncNotify() for b when the file descriptor\n> > becomes ready. In this way we could find ready subplans efficiently\n> > from occurred wait events in ExecAppendAsyncEventWait() when extending\n> > to the case where subplans are joins or aggregates over Foreign Scans,\n> > I think. Maybe I’m missing something, though.\n\n> Maybe so. As I mentioned above, in the follwoing case..\n>\n> Join -1\n> Join -2\n> ForegnScan -A\n> ForegnScan -B\n> ForegnScan -C\n>\n> Where the Join-1 is the leader of asynchronous fetching. Even if both\n> of the FS-A,B have returned one tuple each, it's unsure that Join-2\n> returns a tuple. I'm not sure how to resolve the situation with the\n> current infrastructure as-is.\n\nMaybe my explanation was not good, so let me explain a bit more.\nAssume that Join-2 is a nested loop join as shown above. If the\ntuples from the outer/inner sides didn’t match, we could iterate\ncalling *ExecAsyncRequest()* for the inner side until a matched tuple\nfrom it is found. If the inner side wasn’t able to return a tuple\nimmediately, 1) it would return request_complete=false to Join-2 using\nExecAsyncResponse(), and 2) we could wait for a file descriptor for\nthe inner side to become ready (while processing other part of the\nAppend tree), and 3) when the file descriptor becomes ready, recursive\nExecAsyncNotify() calls would restart the Join-2 processing in a\npush-based manner as explained above.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK14xrGe%2BXks7%2BfVLBoUUbKwcDkT9km1oFXhdY%2BFFhbMjUg%40mail.gmail.com\n\n\n", "msg_date": "Mon, 18 Jan 2021 13:06:23 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Tue, Nov 17, 2020 at 6:56 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> * I haven't yet done anything about the issue on postgres_fdw's\n> handling of concurrent data fetches by multiple ForeignScan nodes\n> (below *different* Append nodes in the query) using the same\n> connection discussed in [2]. I modified the patch to just disable\n> applying this feature to problematic test cases in the postgres_fdw\n> regression tests, by a new GUC enable_async_append.\n\nA solution for the issue would be a scheduler designed to handle such\ndata fetches more efficiently, but I don’t think it’s easy to create\nsuch a scheduler. Rather than doing so, I'd like to propose to allow\nFDWs to disable async execution of them in problematic cases by\nthemselves during executor startup in the first cut. What I have in\nmind for that is:\n\n1) For an FDW that has async-capable ForeignScan(s), we allow the FDW\nto record, for each of the async-capable and non-async-capable\nForeignScan(s), the information on a connection to be used for the\nForeignScan into EState during BeginForeignScan().\n\n2) After doing ExecProcNode() to each SubPlan and the main query tree\nin InitPlan(), we give the FDW a chance to a) reconsider, for each of\nthe async-capable ForeignScan(s), whether the ForeignScan can be\nexecuted asynchronously as planned, based on the information stored\ninto EState in #1, and then b) disable async execution of the\nForeignScan if not.\n\n#1 and #2 would be done after initial partition pruning, so more\nasync-capable ForeignScans would be executed asynchronously, if other\nasync-capable ForeignScans conflicting with them are removed by that\npruning.\n\nThis wouldn’t prevent us from adding a feature like what was proposed\nby Horiguchi-san later.\n\nBTW: while considering this, I noticed some bugs with\nExecAppendAsyncBegin() in the previous patch. Attached is a new\nversion of the patch fixing them. 
I also tweaked some comments a\nlittle bit.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Mon, 1 Feb 2021 12:06:09 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Mon, Feb 1, 2021 at 12:06 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Tue, Nov 17, 2020 at 6:56 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > * I haven't yet done anything about the issue on postgres_fdw's\n> > handling of concurrent data fetches by multiple ForeignScan nodes\n> > (below *different* Append nodes in the query) using the same\n> > connection discussed in [2]. I modified the patch to just disable\n> > applying this feature to problematic test cases in the postgres_fdw\n> > regression tests, by a new GUC enable_async_append.\n>\n> A solution for the issue would be a scheduler designed to handle such\n> data fetches more efficiently, but I don’t think it’s easy to create\n> such a scheduler. Rather than doing so, I'd like to propose to allow\n> FDWs to disable async execution of them in problematic cases by\n> themselves during executor startup in the first cut. What I have in\n> mind for that is:\n>\n> 1) For an FDW that has async-capable ForeignScan(s), we allow the FDW\n> to record, for each of the async-capable and non-async-capable\n> ForeignScan(s), the information on a connection to be used for the\n> ForeignScan into EState during BeginForeignScan().\n>\n> 2) After doing ExecProcNode() to each SubPlan and the main query tree\n> in InitPlan(), we give the FDW a chance to a) reconsider, for each of\n> the async-capable ForeignScan(s), whether the ForeignScan can be\n> executed asynchronously as planned, based on the information stored\n> into EState in #1, and then b) disable async execution of the\n> ForeignScan if not.\n\ns/ExecProcNode()/ExecInitNode()/. Sorry for that. 
I’ll post an\nupdated patch for this in a few days.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 4 Feb 2021 19:21:16 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Thu, Feb 4, 2021 at 7:21 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Mon, Feb 1, 2021 at 12:06 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > Rather than doing so, I'd like to propose to allow\n> > FDWs to disable async execution of them in problematic cases by\n> > themselves during executor startup in the first cut. What I have in\n> > mind for that is:\n> >\n> > 1) For an FDW that has async-capable ForeignScan(s), we allow the FDW\n> > to record, for each of the async-capable and non-async-capable\n> > ForeignScan(s), the information on a connection to be used for the\n> > ForeignScan into EState during BeginForeignScan().\n> >\n> > 2) After doing ExecProcNode() to each SubPlan and the main query tree\n> > in InitPlan(), we give the FDW a chance to a) reconsider, for each of\n> > the async-capable ForeignScan(s), whether the ForeignScan can be\n> > executed asynchronously as planned, based on the information stored\n> > into EState in #1, and then b) disable async execution of the\n> > ForeignScan if not.\n>\n> s/ExecProcNode()/ExecInitNode()/. Sorry for that. I’ll post an\n> updated patch for this in a few days.\n\nI created a WIP patch for this. For #2, I added a new callback\nroutine ReconsiderAsyncForeignScan(). The routine for postgres_fdw\npostgresReconsiderAsyncForeignScan() is pretty simple: async execution\nof an async-capable ForeignScan is disabled if the connection used for\nit is used in other parts of the query plan tree except async subplans\njust below the parent Append. 
Here is a running example:\n\npostgres=# create table t1 (a int, b int, c text);\npostgres=# create table t2 (a int, b int, c text);\npostgres=# create foreign table p1 (a int, b int, c text) server\nserver1 options (table_name 't1');\npostgres=# create foreign table p2 (a int, b int, c text) server\nserver2 options (table_name 't2');\npostgres=# create table pt (a int, b int, c text) partition by range (a);\npostgres=# alter table pt attach partition p1 for values from (10) to (20);\npostgres=# alter table pt attach partition p2 for values from (20) to (30);\npostgres=# insert into p1 select 10 + i % 10, i, to_char(i, 'FM0000')\nfrom generate_series(0, 99) i;\npostgres=# insert into p2 select 20 + i % 10, i, to_char(i, 'FM0000')\nfrom generate_series(0, 99) i;\npostgres=# analyze pt;\npostgres=# create table loct (a int, b int);\npostgres=# create foreign table ft (a int, b int) server server1\noptions (table_name 'loct');\npostgres=# insert into ft select i, i from generate_series(0, 99) i;\npostgres=# analyze ft;\npostgres=# create view v as select * from ft;\n\npostgres=# explain verbose select * from pt, v where pt.b = v.b and v.b = 99;\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Nested Loop (cost=200.00..306.84 rows=2 width=21)\n Output: pt.a, pt.b, pt.c, ft.a, ft.b\n -> Foreign Scan on public.ft (cost=100.00..102.27 rows=1 width=8)\n Output: ft.a, ft.b\n Remote SQL: SELECT a, b FROM public.loct WHERE ((b = 99))\n -> Append (cost=100.00..204.55 rows=2 width=13)\n -> Foreign Scan on public.p1 pt_1 (cost=100.00..102.27\nrows=1 width=13)\n Output: pt_1.a, pt_1.b, pt_1.c\n Remote SQL: SELECT a, b, c FROM public.t1 WHERE ((b = 99))\n -> Async Foreign Scan on public.p2 pt_2\n(cost=100.00..102.27 rows=1 width=13)\n Output: pt_2.a, pt_2.b, pt_2.c\n Remote SQL: SELECT a, b, c FROM public.t2 WHERE ((b = 99))\n(12 rows)\n\nFor this query, while p2 is executed asynchronously, p1 isn’t as it\nuses the same 
connection with ft. BUT:\n\npostgres=# create role view_owner SUPERUSER;\npostgres=# create user mapping for view_owner server server1;\npostgres=# alter view v owner to view_owner;\n\npostgres=# explain verbose select * from pt, v where pt.b = v.b and v.b = 99;\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Nested Loop (cost=200.00..306.84 rows=2 width=21)\n Output: pt.a, pt.b, pt.c, ft.a, ft.b\n -> Foreign Scan on public.ft (cost=100.00..102.27 rows=1 width=8)\n Output: ft.a, ft.b\n Remote SQL: SELECT a, b FROM public.loct WHERE ((b = 99))\n -> Append (cost=100.00..204.55 rows=2 width=13)\n -> Async Foreign Scan on public.p1 pt_1\n(cost=100.00..102.27 rows=1 width=13)\n Output: pt_1.a, pt_1.b, pt_1.c\n Remote SQL: SELECT a, b, c FROM public.t1 WHERE ((b = 99))\n -> Async Foreign Scan on public.p2 pt_2\n(cost=100.00..102.27 rows=1 width=13)\n Output: pt_2.a, pt_2.b, pt_2.c\n Remote SQL: SELECT a, b, c FROM public.t2 WHERE ((b = 99))\n(12 rows)\n\nin this setup, p1 is executed asynchronously as ft doesn’t use the\nsame connection with p1.\n\nI added to postgresReconsiderAsyncForeignScan() this as well: even if\nthe connection isn’t used in the other parts, async execution of an\nasync-capable ForeignScan is disabled if the subplans of the Append\nare all async-capable, and they use the same connection, because in\nthat case the subplans won’t be parallelized at all, and the overhead\nof async execution may cause a performance degradation.\n\nAttached is an updated version of the patch. Sorry for the delay.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Wed, 10 Feb 2021 19:31:02 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Wed, Feb 10, 2021 at 7:31 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Attached is an updated version of the patch. 
Sorry for the delay.\n\nI noticed that I forgot to add new files. :-(. Please find attached\nan updated patch.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Wed, 10 Feb 2021 21:31:15 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "At Wed, 10 Feb 2021 21:31:15 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \n> On Wed, Feb 10, 2021 at 7:31 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > Attached is an updated version of the patch. Sorry for the delay.\n> \n> I noticed that I forgot to add new files. :-(. Please find attached\n> an updated patch.\n\nThanks for the new version.\n\nIt seems too specific to async Append, so I see it as a PoC of the\nmechanism.\n\nIt creates a hash table keyed by connection umid to record the planids\nrun on the connection, triggered by the core planner via a dedicated\nAPI function. It seems to me that ConnCacheEntry.state could hold that\ninformation, and the hash table is not needed at all.\n\n| postgresReconsiderAsyncForeignScan(ForeignScanState *node, AsyncContext *acxt)\n| {\n| ...\n| /*\n| \t * If the connection used for the ForeignScan node is used in other parts\n| \t * of the query plan tree except async subplans of the parent Append node,\n| \t * disable async execution of the ForeignScan node.\n| \t */\n| \tif (!bms_is_subset(fsplanids, asyncplanids))\n| \t\treturn false;\n\nThis would be a reasonable restriction.\n\n| \t/*\n| \t * If the subplans of the Append node are all async-capable, and use the\n| \t * same connection, then we won't execute them asynchronously.\n| \t */\n| \tif (requestor->as_nasyncplans == requestor->as_nplans &&\n| \t\t!bms_nonempty_difference(asyncplanids, fsplanids))\n| \t\treturn false;\n\nIs this the correct restriction? I understand that the currently\nintended restriction is that one connection accepts at most one\nFDW-scan node. 
This looks like something different...\n\n(Sorry, time's up for now.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 12 Feb 2021 17:30:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Fri, Feb 12, 2021 at 5:30 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> It seems too specific to async Append so I look it as a PoC of the\n> mechanism.\n\nAre you saying that the patch only reconsiders async ForeignScans?\n\n> It creates a hash table that keyed by connection umid to record\n> planids run on the connection, triggerd by core planner via a dedicate\n> API function. It seems to me that ConnCacheEntry.state can hold that\n> and the hash is not needed at all.\n\nI think a good thing about the hash table is that it can be used by\nother FDWs that support async execution in a similar way to\npostgres_fdw, so they don’t need to create their own hash tables. But\nI’d like to know about the idea of using ConnCacheEntry. Could you\nelaborate a bit more about that?\n\n> | postgresReconsiderAsyncForeignScan(ForeignScanState *node, AsyncContext *acxt)\n> | {\n> | ...\n> | /*\n> | * If the connection used for the ForeignScan node is used in other parts\n> | * of the query plan tree except async subplans of the parent Append node,\n> | * disable async execution of the ForeignScan node.\n> | */\n> | if (!bms_is_subset(fsplanids, asyncplanids))\n> | return false;\n>\n> This would be a reasonable restriction.\n\nCool!\n\n> | /*\n> | * If the subplans of the Append node are all async-capable, and use the\n> | * same connection, then we won't execute them asynchronously.\n> | */\n> | if (requestor->as_nasyncplans == requestor->as_nplans &&\n> | !bms_nonempty_difference(asyncplanids, fsplanids))\n> | return false;\n>\n> It is the correct restiction? 
I understand that the currently\n> intending restriction is one connection accepts at most one FDW-scan\n> node. This looks somethig different...\n\nPeople put multiple partitions in a remote PostgreSQL server in\nsharding, so the patch allows multiple postgres_fdw ForeignScans\nbeneath an Append that use the same connection to be executed\nasynchronously like this:\n\npostgres=# create table t1 (a int, b int, c text);\npostgres=# create table t2 (a int, b int, c text);\npostgres=# create table t3 (a int, b int, c text);\npostgres=# create foreign table p1 (a int, b int, c text) server\nserver1 options (table_name 't1');\npostgres=# create foreign table p2 (a int, b int, c text) server\nserver2 options (table_name 't2');\npostgres=# create foreign table p3 (a int, b int, c text) server\nserver2 options (table_name 't3');\npostgres=# create table pt (a int, b int, c text) partition by range (a);\npostgres=# alter table pt attach partition p1 for values from (10) to (20);\npostgres=# alter table pt attach partition p2 for values from (20) to (30);\npostgres=# alter table pt attach partition p3 for values from (30) to (40);\npostgres=# insert into p1 select 10 + i % 10, i, to_char(i, 'FM0000')\nfrom generate_series(0, 99) i;\npostgres=# insert into p2 select 20 + i % 10, i, to_char(i, 'FM0000')\nfrom generate_series(0, 99) i;\npostgres=# insert into p3 select 30 + i % 10, i, to_char(i, 'FM0000')\nfrom generate_series(0, 99) i;\npostgres=# analyze pt;\n\npostgres=# explain verbose select count(*) from pt;\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Aggregate (cost=314.25..314.26 rows=1 width=8)\n Output: count(*)\n -> Append (cost=100.00..313.50 rows=300 width=0)\n -> Async Foreign Scan on public.p1 pt_1\n(cost=100.00..104.00 rows=100 width=0)\n Remote SQL: SELECT NULL FROM public.t1\n -> Async Foreign Scan on public.p2 pt_2\n(cost=100.00..104.00 rows=100 width=0)\n Remote SQL: SELECT NULL FROM public.t2\n 
-> Async Foreign Scan on public.p3 pt_3\n(cost=100.00..104.00 rows=100 width=0)\n Remote SQL: SELECT NULL FROM public.t3\n(9 rows)\n\nFor this query, p2 and p3, which use the same connection, are scanned\nasynchronously!\n\nBut if all the subplans of an Append are async postgres_fdw\nForeignScans that use the same connection, they won’t be parallelized\nat all, and the overhead of async execution may cause a performance\ndegradation. So the patch disables async execution of them in that\ncase using the above code bit.\n\nThanks for the review!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Sun, 14 Feb 2021 20:06:57 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Wed, Feb 10, 2021 at 9:31 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Please find attached an updated patch.\n\nI noticed that this doesn’t work for cases where ForeignScans are\nexecuted inside functions, and I don’t have any simple solution for\nthat. So I’m getting back to what Horiguchi-san proposed for\npostgres_fdw to handle concurrent fetches from a remote server\nperformed by multiple ForeignScan nodes that use the same connection.\nAs discussed before, we would need to create a scheduler for\nperforming such fetches in a more optimized way to avoid a performance\ndegradation in some cases, but that wouldn’t be easy. Instead, how\nabout reducing concurrency as an alternative? In his proposal,\npostgres_fdw was modified to perform prefetching pretty aggressively,\nso I mean removing aggressive prefetching. I think we could add it to\npostgres_fdw later maybe as the server/table options. Sorry for the\nback and forth.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 18 Feb 2021 11:51:59 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "Sorry that I haven't been able to respond.\r\n\r\nAt Thu, 18 Feb 2021 11:51:59 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \r\n> On Wed, Feb 10, 2021 at 9:31 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\r\n> > Please find attached an updated patch.\r\n> \r\n> I noticed that this doesn’t work for cases where ForeignScans are\r\n> executed inside functions, and I don’t have any simple solution for\r\n\r\nAh, concurrent fetches in different plan trees? (For fairness, I\r\nhadn't noticed that case :p) The same can happen with an extension\r\nthat is called via hooks.\r\n\r\n> that. So I’m getting back to what Horiguchi-san proposed for\r\n> postgres_fdw to handle concurrent fetches from a remote server\r\n> performed by multiple ForeignScan nodes that use the same connection.\r\n> As discussed before, we would need to create a scheduler for\r\n> performing such fetches in a more optimized way to avoid a performance\r\n> degradation in some cases, but that wouldn’t be easy. Instead, how\r\n\r\nIf the \"degradation\" means degradation caused by repeated creation of\r\nremote cursors: in any case, every node on the same connection creates\r\nits own cursor, named \"c<n>\", and it is never re-created.\r\n\r\nIf the \"degradation\" means that my patch needs to wait for the\r\nprevious prefetching query to return tuples before sending a new query\r\n(vacate_connection()), it is just moving the wait from just before\r\nsending the new query to just before fetching the next round of the\r\nprevious node. 
The only case where the degradation becomes visible is where the\r\ntuples in the next round are not wanted by the upper nodes.\r\n\r\nunpatched\r\n\r\nnodeA <tuple exhausted>\r\n      <send prefetching FETCH A>\r\n      <return the last tuple of the last round>\r\nnodeB !!<wait for FETCH A to return>\r\n      <send FETCH B>\r\n      !!<wait for FETCH B to return>\r\n      <return tuple just fetched>\r\nnodeA <return already-fetched tuple>\r\n\r\npatched\r\n\r\nnodeA <tuple exhausted>\r\n      <return the last tuple of the last round>\r\nnodeB <send FETCH B>\r\n      !!<wait for FETCH B to return>\r\n      <return the first tuple of the round>\r\nnodeA <send FETCH A>\r\n      !!<wait for FETCH A to return>\r\n      <return the first tuple of the round>\r\n\r\nThat happens when the upper node stops just after the internal\r\ntuplestore is emptied, and the probability is one in fetch_tuples. (It\r\nis not stochastic, so if a query suffers the degradation, it always\r\nsuffers unless fetch_tuples is changed.) I'm still not sure that\r\ndegree of degradation becomes a show stopper.\r\n\r\n> degradation in some cases, but that wouldn’t be easy. Instead, how\r\n> about reducing concurrency as an alternative? In his proposal,\r\n> postgres_fdw was modified to perform prefetching pretty aggressively,\r\n> so I mean removing aggressive prefetching. I think we could add it to\r\n> postgres_fdw later maybe as the server/table options. Sorry for the\r\n> back and forth.\r\n\r\nThat was the natural extension from non-aggressive prefetching.\r\nHowever, maybe we can live without it, since if someone needs more\r\nspeed, it is enough to give every remote table a dedicated\r\nconnection.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Thu, 18 Feb 2021 15:15:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Thu, Feb 18, 2021 at 3:16 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Thu, 18 Feb 2021 11:51:59 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > I noticed that this doesn’t work for cases where ForeignScans are\n> > executed inside functions, and I don’t have any simple solution for\n>\n> Ah, concurrent fetches in different plan trees? (For fairness, I\n> hadn't noticed that case:p) The same can happen when an extension that\n> is called via hooks.\n\nYeah, consider a plan containing a FunctionScan that invokes a query\nlike e.g., “SELECT * FROM foreign_table” via SPI.\n\n> > So I’m getting back to what Horiguchi-san proposed for\n> > postgres_fdw to handle concurrent fetches from a remote server\n> > performed by multiple ForeignScan nodes that use the same connection.\n> > As discussed before, we would need to create a scheduler for\n> > performing such fetches in a more optimized way to avoid a performance\n> > degradation in some cases, but that wouldn’t be easy.\n>\n> If the \"degradation\" means degradation caused by repeated creation of\n> remote cursors, anyway every node on the same connection create its\n> own connection named as \"c<n>\" and never \"re\"created in any case.\n>\n> If the \"degradation\" means that my patch needs to wait for the\n> previous prefetching query to return tuples before sending a new query\n> (vacate_connection()), it is just moving the wait from just before\n> sending the new query to just before fetching the next round of the\n> previous node. The only case it becomes visible degradation is where\n> the tuples in the next round is not wanted by the upper nodes.\n\nThe latter. 
And yeah, typical cases where the performance degradation\noccurs would be queries with LIMIT, as discussed in [1].\n\nI’m not concerned about postgres_fdw modified to process an\nin-progress fetch by a ForeignScan before starting a new\nasynchronous/synchronous fetch by another ForeignScan using the same\nconnection. Actually, that seems pretty reasonable to me, so I’d like\nto use that part in your patch in the next version. My concern is\nthat postgresIterateForeignScan() was modified to start another\nasynchronous fetch from a remote table (if possible) right after doing\nfetch_received_data() for the remote table, because aggressive\nprefetching like that may increase the probability that ForeignScans\nusing the same connection conflict with each other, leading to a large\nperformance degradation. (Another issue with that would be that the\nfsstate->tuples array for the remote table may be enlarged\nindefinitely.)\n\nWhether the degradation is acceptable or not would depend on the user,\nand needless to say, the smaller degradation would be more acceptable.\nSo I’ll update the patch using your patch without the\npostgresIterateForeignScan() change.\n\n> > In his proposal,\n> > postgres_fdw was modified to perform prefetching pretty aggressively,\n> > so I mean removing aggressive prefetching. 
I think we could add it to\n> > postgres_fdw later maybe as the server/table options.\n\n> That was the natural extension from non-aggresive prefetching.\n\nI also suppose that that would improve the performance in some cases.\nLet’s leave that for future work.\n\n> However, maybe we can live without that since if some needs more\n> speed, it is enought to give every remote tables a dedicate\n> connection.\n\nYeah, I think so too.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK16E1erFV9STg8yokoewY6E-zEJtLzHUJcQx%2B3dyivCT%3DA%40mail.gmail.com\n\n\n", "msg_date": "Sat, 20 Feb 2021 15:35:45 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Sat, Feb 20, 2021 at 3:35 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> So I’ll update the patch using your patch without the\n> postgresIterateForeignScan() change.\n\nHere is an updated version of the patch. Based on your idea of\ncompleting an in-progress command (if any) before sending a new\ncommand to the remote, I created a function for that\nprocess_pending_request(), and added it where needed in\ncontrib/postgres_fdw. I also adjusted the patch, and fixed some bugs\nin the postgres_fdw part of the patch.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Mon, 1 Mar 2021 17:56:10 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
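[Editor's note] The rule adopted here — complete any in-progress command on a shared connection before sending a new one, wrapped as process_pending_request() in the updated patch — can be sketched as a toy model. This is a Python stand-in; the Connection class and method names are illustrative, not the actual postgres_fdw API:

```python
class Connection:
    """One remote connection, possibly shared by several ForeignScans."""
    def __init__(self):
        self.pending = None   # scan that has an async FETCH in flight
        self.log = []         # protocol trace, for illustration only

    def send_fetch(self, scan, is_async):
        # A connection can carry only one in-flight command, so any
        # pending asynchronous request must be completed first.
        if self.pending is not None:
            self.process_pending_request()
        self.log.append(f"FETCH {scan}")
        if is_async:
            self.pending = scan   # result will be consumed later
        else:
            self.log.append(f"RESULT {scan}")

    def process_pending_request(self):
        # Synchronously finish the in-flight FETCH (the real code reads
        # the rows into the owning ForeignScan's buffer).
        self.log.append(f"RESULT {self.pending}")
        self.pending = None

conn = Connection()
conn.send_fetch("A", is_async=True)   # A prefetches
conn.send_fetch("B", is_async=True)   # must drain A's request first
print(conn.log)  # ['FETCH A', 'RESULT A', 'FETCH B']
```

The point is that the connection never has two outstanding commands: a scan wanting the connection first drains the other scan's in-flight FETCH, which is exactly the wait that in the unpatched flow happened just before sending the new query.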
}, { "msg_contents": "On Mon, Mar 1, 2021 at 5:56 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Here is an updated version of the patch.\n\nAnother thing I'm concerned about in the postgres_fdw part is the case\nwhere all/many postgres_fdw ForeignScans of an Append use the same\nconnection, because in that case those ForeignScans are executed one\nby one, not in parallel, and hence the overhead of async execution\n(i.e., doing ExecAppendAsyncEventWait()) would merely cause a\nperformance degradation. Here is such an example:\n\npostgres=# create server loopback foreign data wrapper postgres_fdw\noptions (dbname 'postgres');\npostgres=# create user mapping for current_user server loopback;\npostgres=# create table pt (a int, b int, c text) partition by range (a);\npostgres=# create table loct1 (a int, b int, c text);\npostgres=# create table loct2 (a int, b int, c text);\npostgres=# create table loct3 (a int, b int, c text);\npostgres=# create foreign table p1 partition of pt for values from\n(10) to (20) server loopback options (table_name 'loct1');\npostgres=# create foreign table p2 partition of pt for values from\n(20) to (30) server loopback options (table_name 'loct2');\npostgres=# create foreign table p3 partition of pt for values from\n(30) to (40) server loopback options (table_name 'loct3');\npostgres=# insert into p1 select 10 + i % 10, i, to_char(i, 'FM00000')\nfrom generate_series(0, 99999) i;\npostgres=# insert into p2 select 20 + i % 10, i, to_char(i, 'FM00000')\nfrom generate_series(0, 99999) i;\npostgres=# insert into p3 select 30 + i % 10, i, to_char(i, 'FM00000')\nfrom generate_series(0, 99999) i;\npostgres=# analyze pt;\n\npostgres=# set enable_async_append to off;\npostgres=# select count(*) from pt;\n count\n--------\n 300000\n(1 row)\n\nTime: 366.905 ms\n\npostgres=# set enable_async_append to on;\npostgres=# select count(*) from pt;\n count\n--------\n 300000\n(1 row)\n\nTime: 385.431 ms\n\nPeople would use postgres_fdw to access old 
partitions archived in a\nsingle remote server. So the same degradation would be likely to\nhappen in such a use case. To avoid that, how about 1) adding the\ntable/server options to postgres_fdw that allow/disallow async\nexecution, and 2) setting them to false by default?\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 4 Mar 2021 13:00:13 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Thu, Mar 4, 2021 at 1:00 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> To avoid that, how about 1) adding the\n> table/server options to postgres_fdw that allow/disallow async\n> execution, and 2) setting them to false by default?\n\nThere seems to be no objections, so I went ahead and added the\ntable/server option ‘async_capable’ set false by default. Attached is\nan updated patch.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Mon, 8 Mar 2021 14:05:55 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Tue, Nov 17, 2020 at 6:56 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> * I haven't yet added some planner/resowner changes from Horiguchi-san's patch.\n\nThe patch in [1] allocates, populates, frees a wait event set every\ntime when doing ExecAppendAsyncEventWait(), so it wouldn’t leak wait\nevent sets. Actually, we don’t need the ResourceOwner change?\n\nI thought the change to cost_append() proposed in his patch would be a\ngood idea, but I noticed this:\n\n+ /*\n+ * It's not obvious how to determine the total cost of\n+ * async subnodes. 
Although it is not always true, we\n+ * assume it is the maximum cost among all async subnodes.\n+ */\n+ if (async_max_cost < subpath->total_cost)\n+ async_max_cost = subpath->total_cost;\n\nAs commented, the assumption isn’t always correct (a counter-example\nwould be the case where all async subnodes use the same connection as\nshown in [2]). Rather than modifying that function as proposed, I\nfeel inclined to leave that function as-is.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK14wcXKqGDpYRieA1ETgyj%2BEp5ntrGVD%3D29iESoQYUx9YQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAPmGK17Ap6AGTFrtn3%3D%3DPsVfHUkuiRPFXZqXSQ%3DXWQDtDbNNBQ%40mail.gmail.com\n\n\n", "msg_date": "Mon, 8 Mar 2021 14:30:40 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Mon, Mar 8, 2021 at 2:05 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> There seems to be no objections, so I went ahead and added the\n> table/server option ‘async_capable’ set false by default. Attached is\n> an updated patch.\n\nAttached is an updated version of the patch. Changes are:\n\n* I modified nodeAppend.c a bit further to make the code simpler\n(mostly, ExecAppendAsyncBegin() and related code).\n* I added a function ExecAsyncRequestPending() to execAsync.c for the\nconvenience of FDWs.\n* I fixed a bug in the definition of WAIT_EVENT_APPEND_READY in pgstat.h.\n* I fixed a bug in process_pending_request() in postgres_fdw.c.\n* I added comments to executor/README based on Robert’s original patch.\n* I added/adjusted/fixed some other comments and docs.\n* I think it would be better to keep the existing test cases in\npostgres_fdw.sql as-is for testing the existing features, so I\nmodified it as such, and added new test cases for testing this\nfeature.\n* I rebased the patch against HEAD.\n\nI haven’t yet added docs on FDW APIs. 
I think the patch would need a\nbit more comments. But other than that, I feel the patch is in good\nshape.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Fri, 19 Mar 2021 20:48:22 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "is_async_capable_path() should probably have a \"break\" for case T_ForeignPath.\n\nlittle typos:\naready\nsigle\ngivne\na event: an event\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 19 Mar 2021 07:57:00 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Fri, Mar 19, 2021 at 9:57 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> is_async_capable_path() should probably have a \"break\" for case T_ForeignPath.\n\nGood catch! Will fix.\n\n> little typos:\n> aready\n> sigle\n> givne\n> a event\n\nLots of typos. :-( Will fix.\n\nThank you for the review!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Sat, 20 Mar 2021 14:35:51 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Fri, Mar 19, 2021 at 8:48 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I haven’t yet added docs on FDW APIs. I think the patch would need a\n> bit more comments.\n\nHere is an updated patch. Changes are:\n\n* Added docs on FDW APIs.\n* Added/tweaked some more comments.\n* Fixed a bug and typos pointed out by Justin.\n* Added an assertion to ExecAppendAsyncBegin().\n* Added a bit more regression test cases.\n* Rebased the patch against HEAD.\n\nI think the patch would be committable.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Mon, 29 Mar 2021 18:50:49 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
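[Editor's note] Returning to the single-connection benchmark upthread (async at ~385 ms versus sync at ~366 ms): a back-of-the-envelope model shows why that happens. Fetches sharing one connection are serialized no matter what, so asynchrony saves nothing there while its event-wait bookkeeping still costs a little; with one connection per scan, the latencies overlap instead. The numbers below are invented purely for illustration:

```python
def total_time(num_scans, latency, connections, overhead_per_wait=0.0):
    # Fetches that share a connection are serialized; fetches on
    # distinct connections overlap, so the busiest connection dominates.
    scans_per_conn = -(-num_scans // connections)  # ceiling division
    return scans_per_conn * latency + overhead_per_wait * num_scans

sync_one_conn  = total_time(3, 100.0, connections=1)
async_one_conn = total_time(3, 100.0, connections=1, overhead_per_wait=5.0)
async_three    = total_time(3, 100.0, connections=3, overhead_per_wait=5.0)
print(sync_one_conn, async_one_conn, async_three)  # 300.0 315.0 115.0
```

This is the rationale for defaulting async_capable to false: async execution only pays off when the subplans actually use distinct connections.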
}, { "msg_contents": "On Mon, Mar 29, 2021 at 6:50 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I think the patch would be committable.\n\nHere is a new version of the patch.\n\n* Rebased the patch against HEAD.\n* Tweaked docs/comments a bit further.\n* Added the commit message. Does that make sense?\n\nI'm happy with the patch, so I'll commit it if there are no objections.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Tue, 30 Mar 2021 20:40:35 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "At Tue, 30 Mar 2021 20:40:35 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \n> On Mon, Mar 29, 2021 at 6:50 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > I think the patch would be committable.\n> \n> Here is a new version of the patch.\n> \n> * Rebased the patch against HEAD.\n> * Tweaked docs/comments a bit further.\n> * Added the commit message. Does that make sense?\n> \n> I'm happy with the patch, so I'll commit it if there are no objections.\n\nThanks for the patch.\n\nMay I ask some questions?\n\n+ <term><literal>async_capable</literal></term>\n+ <listitem>\n+ <para>\n+ This option controls whether <filename>postgres_fdw</filename> allows\n+ foreign tables to be scanned concurrently for asynchronous execution.\n+ It can be specified for a foreign table or a foreign server.\n\nIsn't it strange that an option named \"async_capable\" *allows* async?\n\n+\t\t * We'll prefer to consider this join async-capable if any table from\n+\t\t * either side of the join is considered async-capable.\n+\t\t */\n+\t\tfpinfo->async_capable = fpinfo_o->async_capable ||\n+\t\t\tfpinfo_i->async_capable;\n\nWe need to explain this behavior in the documentation.\n\nRegarding to the wording \"async capable\", if it literally represents\nthe capability to run asynchronously, when any one element of a\ncombined path doesn't have the capability, the 
whole path cannot be\nasync-capable. If it represents allowance for an element to run\nasynchronously, then the whole path is inhibited to run asynchronously\nunless all elements are allowed to do so. If it represents\nenforcement or suggestion to run asynchronously, enforcing asynchrony\nto an element would lead to running the whole path asynchronously\nsince all elements of postgres_fdw are capable to run asynchronously\nas the nature.\n\nIt looks somewhat inconsistent to be inhibitive for the default value\nof \"async_capable\", but agressive in merging?\n\nIf I'm wrong in the understanding, please feel free to go ahead.\n\nregrds.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 31 Mar 2021 10:11:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Wed, Mar 31, 2021 at 10:11 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> + <term><literal>async_capable</literal></term>\n> + <listitem>\n> + <para>\n> + This option controls whether <filename>postgres_fdw</filename> allows\n> + foreign tables to be scanned concurrently for asynchronous execution.\n> + It can be specified for a foreign table or a foreign server.\n>\n> Isn't it strange that an option named \"async_capable\" *allows* async?\n\nI think \"async_capable\" is a good name for that option. 
See the\noption \"updatable\" below in the postgres_fdw documentation.\n\n> + * We'll prefer to consider this join async-capable if any table from\n> + * either side of the join is considered async-capable.\n> + */\n> + fpinfo->async_capable = fpinfo_o->async_capable ||\n> + fpinfo_i->async_capable;\n>\n> We need to explain this behavior in the documentation.\n>\n> Regarding to the wording \"async capable\", if it literally represents\n> the capability to run asynchronously, when any one element of a\n> combined path doesn't have the capability, the whole path cannot be\n> async-capable. If it represents allowance for an element to run\n> asynchronously, then the whole path is inhibited to run asynchronously\n> unless all elements are allowed to do so. If it represents\n> enforcement or suggestion to run asynchronously, enforcing asynchrony\n> to an element would lead to running the whole path asynchronously\n> since all elements of postgres_fdw are capable to run asynchronously\n> as the nature.\n>\n> It looks somewhat inconsistent to be inhibitive for the default value\n> of \"async_capable\", but agressive in merging?\n\nIf the foreign table has async_capable=true, it actually means that\nthere are resources (CPU, IO, network, etc.) to scan the foreign table\nconcurrently. And if any table from either side of the join has such\nresources, then they could also be used for the join. So I don't\nthink this behavior is aggressive. I think it would be better to add\nmore comments, though.\n\nAnyway, these are all about naming and docs/comments, so I'll return\nto this after committing the patch.\n\nThanks for the review!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 31 Mar 2021 14:12:07 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
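[Editor's note] The merging rule under discussion — fpinfo->async_capable = fpinfo_o->async_capable || fpinfo_i->async_capable — amounts to an OR over the join's sides. A Python stand-in for the C logic (illustrative only):

```python
from functools import reduce

def join_async_capable(outer_async: bool, inner_async: bool) -> bool:
    # Prefer to consider the join async-capable if either side is:
    # the resources that let one side be scanned concurrently can
    # also be used when computing the join remotely.
    return outer_async or inner_async

# Applied pairwise to a multi-way join, the whole join is considered
# async-capable as soon as any one base relation is:
sides = [True, False, False]   # e.g. only ft1 has async_capable=true
print(reduce(join_async_capable, sides))  # True
```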
}, { "msg_contents": "On Tue, Mar 30, 2021 at 8:40 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I'm happy with the patch, so I'll commit it if there are no objections.\n\nPushed.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 31 Mar 2021 18:55:22 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> Pushed.\n\nThe buildfarm points out that this fails under valgrind.\nI easily reproduced it here:\n\n==00:00:03:42.115 3410499== Syscall param epoll_wait(events) points to unaddressable byte(s)\n==00:00:03:42.115 3410499== at 0x58E926B: epoll_wait (epoll_wait.c:30)\n==00:00:03:42.115 3410499== by 0x7FC903: WaitEventSetWaitBlock (latch.c:1452)\n==00:00:03:42.115 3410499== by 0x7FC903: WaitEventSetWait (latch.c:1398)\n==00:00:03:42.115 3410499== by 0x6BF46C: ExecAppendAsyncEventWait (nodeAppend.c:1025)\n==00:00:03:42.115 3410499== by 0x6BF667: ExecAppendAsyncGetNext (nodeAppend.c:915)\n==00:00:03:42.115 3410499== by 0x6BF667: ExecAppend (nodeAppend.c:337)\n==00:00:03:42.115 3410499== by 0x6D49E4: ExecProcNode (executor.h:257)\n==00:00:03:42.115 3410499== by 0x6D49E4: ExecModifyTable (nodeModifyTable.c:2222)\n==00:00:03:42.115 3410499== by 0x6A87F2: ExecProcNode (executor.h:257)\n==00:00:03:42.115 3410499== by 0x6A87F2: ExecutePlan (execMain.c:1531)\n==00:00:03:42.115 3410499== by 0x6A87F2: standard_ExecutorRun (execMain.c:350)\n==00:00:03:42.115 3410499== by 0x82597F: ProcessQuery (pquery.c:160)\n==00:00:03:42.115 3410499== by 0x825BE9: PortalRunMulti (pquery.c:1267)\n==00:00:03:42.115 3410499== by 0x826826: PortalRun (pquery.c:779)\n==00:00:03:42.115 3410499== by 0x82291E: exec_simple_query (postgres.c:1185)\n==00:00:03:42.115 3410499== by 0x823F3E: PostgresMain (postgres.c:4415)\n==00:00:03:42.115 3410499== by 0x79BAC1: BackendRun (postmaster.c:4483)\n==00:00:03:42.115 3410499== by 
0x79BAC1: BackendStartup (postmaster.c:4205)\n==00:00:03:42.115 3410499== by 0x79BAC1: ServerLoop (postmaster.c:1737)\n==00:00:03:42.115 3410499== Address 0x10d10628 is 7,960 bytes inside a recently re-allocated block of size 8,192 alloc'd\n==00:00:03:42.115 3410499== at 0x4C30F0B: malloc (vg_replace_malloc.c:307)\n==00:00:03:42.115 3410499== by 0x94F9EA: AllocSetAlloc (aset.c:919)\n==00:00:03:42.115 3410499== by 0x957BAF: MemoryContextAlloc (mcxt.c:809)\n==00:00:03:42.115 3410499== by 0x958CC0: MemoryContextStrdup (mcxt.c:1179)\n==00:00:03:42.115 3410499== by 0x516AE4: untransformRelOptions (reloptions.c:1336)\n==00:00:03:42.115 3410499== by 0x6E6ADF: GetForeignTable (foreign.c:273)\n==00:00:03:42.115 3410499== by 0xF3BD470: postgresBeginForeignScan (postgres_fdw.c:1479)\n==00:00:03:42.115 3410499== by 0x6C2E83: ExecInitForeignScan (nodeForeignscan.c:236)\n==00:00:03:42.115 3410499== by 0x6AF893: ExecInitNode (execProcnode.c:283)\n==00:00:03:42.115 3410499== by 0x6C0007: ExecInitAppend (nodeAppend.c:232)\n==00:00:03:42.115 3410499== by 0x6AFA37: ExecInitNode (execProcnode.c:180)\n==00:00:03:42.115 3410499== by 0x6D533A: ExecInitModifyTable (nodeModifyTable.c:2575)\n\n==00:00:03:44.907 3410499== Syscall param epoll_wait(events) points to unaddressable byte(s)\n==00:00:03:44.907 3410499== at 0x58E926B: epoll_wait (epoll_wait.c:30)\n==00:00:03:44.907 3410499== by 0x7FC903: WaitEventSetWaitBlock (latch.c:1452)\n==00:00:03:44.907 3410499== by 0x7FC903: WaitEventSetWait (latch.c:1398)\n==00:00:03:44.907 3410499== by 0x6BF46C: ExecAppendAsyncEventWait (nodeAppend.c:1025)\n==00:00:03:44.907 3410499== by 0x6BF718: ExecAppend (nodeAppend.c:370)\n==00:00:03:44.907 3410499== by 0x6D49E4: ExecProcNode (executor.h:257)\n==00:00:03:44.907 3410499== by 0x6D49E4: ExecModifyTable (nodeModifyTable.c:2222)\n==00:00:03:44.907 3410499== by 0x6A87F2: ExecProcNode (executor.h:257)\n==00:00:03:44.907 3410499== by 0x6A87F2: ExecutePlan (execMain.c:1531)\n==00:00:03:44.907 3410499== by 
0x6A87F2: standard_ExecutorRun (execMain.c:350)\n==00:00:03:44.907 3410499== by 0x82597F: ProcessQuery (pquery.c:160)\n==00:00:03:44.907 3410499== by 0x825BE9: PortalRunMulti (pquery.c:1267)\n==00:00:03:44.907 3410499== by 0x826826: PortalRun (pquery.c:779)\n==00:00:03:44.907 3410499== by 0x82291E: exec_simple_query (postgres.c:1185)\n==00:00:03:44.907 3410499== by 0x823F3E: PostgresMain (postgres.c:4415)\n==00:00:03:44.907 3410499== by 0x79BAC1: BackendRun (postmaster.c:4483)\n==00:00:03:44.907 3410499== by 0x79BAC1: BackendStartup (postmaster.c:4205)\n==00:00:03:44.907 3410499== by 0x79BAC1: ServerLoop (postmaster.c:1737)\n==00:00:03:44.907 3410499== Address 0x1093fdd8 is 2,904 bytes inside a recently re-allocated block of size 16,384 alloc'd\n==00:00:03:44.907 3410499== at 0x4C30F0B: malloc (vg_replace_malloc.c:307)\n==00:00:03:44.907 3410499== by 0x94F9EA: AllocSetAlloc (aset.c:919)\n==00:00:03:44.907 3410499== by 0x958233: palloc (mcxt.c:964)\n==00:00:03:44.907 3410499== by 0x69C400: ExprEvalPushStep (execExpr.c:2310)\n==00:00:03:44.907 3410499== by 0x69C541: ExecPushExprSlots (execExpr.c:2490)\n==00:00:03:44.907 3410499== by 0x69C580: ExecInitExprSlots (execExpr.c:2445)\n==00:00:03:44.907 3410499== by 0x69F0DD: ExecInitQual (execExpr.c:231)\n==00:00:03:44.907 3410499== by 0x6D80EF: ExecInitSeqScan (nodeSeqscan.c:172)\n==00:00:03:44.907 3410499== by 0x6AF9CE: ExecInitNode (execProcnode.c:208)\n==00:00:03:44.907 3410499== by 0x6C0007: ExecInitAppend (nodeAppend.c:232)\n==00:00:03:44.907 3410499== by 0x6AFA37: ExecInitNode (execProcnode.c:180)\n==00:00:03:44.907 3410499== by 0x6D533A: ExecInitModifyTable (nodeModifyTable.c:2575)\n==00:00:03:44.907 3410499== \n\nSorta looks like something is relying on a pointer into the relcache\nto be valid for longer than it can safely rely on that. 
The\nCLOBBER_CACHE_ALWAYS animals will probably be unhappy too, but\nthey are slower than valgrind.\n\n(Note that the test case appears to succeed, you have to notice that\nthe backend crashed after exiting.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 01 Apr 2021 11:09:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Fri, Apr 2, 2021 at 12:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The buildfarm points out that this fails under valgrind.\n> I easily reproduced it here:\n\n> Sorta looks like something is relying on a pointer into the relcache\n> to be valid for longer than it can safely rely on that. The\n> CLOBBER_CACHE_ALWAYS animals will probably be unhappy too, but\n> they are slower than valgrind.\n>\n> (Note that the test case appears to succeed, you have to notice that\n> the backend crashed after exiting.)\n\nWill look into this.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 2 Apr 2021 00:45:34 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Fri, Apr 2, 2021 at 12:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The buildfarm points out that this fails under valgrind.\n> I easily reproduced it here:\n>\n> ==00:00:03:42.115 3410499== Syscall param epoll_wait(events) points to unaddressable byte(s)\n> ==00:00:03:42.115 3410499== at 0x58E926B: epoll_wait (epoll_wait.c:30)\n> ==00:00:03:42.115 3410499== by 0x7FC903: WaitEventSetWaitBlock (latch.c:1452)\n> ==00:00:03:42.115 3410499== by 0x7FC903: WaitEventSetWait (latch.c:1398)\n> ==00:00:03:42.115 3410499== by 0x6BF46C: ExecAppendAsyncEventWait (nodeAppend.c:1025)\n> ==00:00:03:42.115 3410499== by 0x6BF667: ExecAppendAsyncGetNext (nodeAppend.c:915)\n> ==00:00:03:42.115 3410499== by 0x6BF667: ExecAppend (nodeAppend.c:337)\n> ==00:00:03:42.115 3410499== by 0x6D49E4: ExecProcNode (executor.h:257)\n> ==00:00:03:42.115 3410499== by 0x6D49E4: ExecModifyTable (nodeModifyTable.c:2222)\n> ==00:00:03:42.115 3410499== by 0x6A87F2: ExecProcNode (executor.h:257)\n> ==00:00:03:42.115 3410499== by 0x6A87F2: ExecutePlan (execMain.c:1531)\n> ==00:00:03:42.115 3410499== by 0x6A87F2: standard_ExecutorRun (execMain.c:350)\n> ==00:00:03:42.115 3410499== by 0x82597F: ProcessQuery (pquery.c:160)\n> ==00:00:03:42.115 3410499== by 0x825BE9: PortalRunMulti (pquery.c:1267)\n> ==00:00:03:42.115 3410499== by 0x826826: PortalRun (pquery.c:779)\n> ==00:00:03:42.115 3410499== by 0x82291E: exec_simple_query (postgres.c:1185)\n> ==00:00:03:42.115 3410499== by 0x823F3E: PostgresMain (postgres.c:4415)\n> ==00:00:03:42.115 3410499== by 0x79BAC1: BackendRun (postmaster.c:4483)\n> ==00:00:03:42.115 3410499== by 0x79BAC1: BackendStartup (postmaster.c:4205)\n> ==00:00:03:42.115 3410499== by 0x79BAC1: ServerLoop (postmaster.c:1737)\n> ==00:00:03:42.115 3410499== Address 0x10d10628 is 7,960 bytes inside a recently re-allocated block of size 8,192 alloc'd\n> ==00:00:03:42.115 3410499== at 0x4C30F0B: malloc (vg_replace_malloc.c:307)\n> ==00:00:03:42.115 3410499== by 0x94F9EA: 
AllocSetAlloc (aset.c:919)\n> ==00:00:03:42.115 3410499== by 0x957BAF: MemoryContextAlloc (mcxt.c:809)\n> ==00:00:03:42.115 3410499== by 0x958CC0: MemoryContextStrdup (mcxt.c:1179)\n> ==00:00:03:42.115 3410499== by 0x516AE4: untransformRelOptions (reloptions.c:1336)\n> ==00:00:03:42.115 3410499== by 0x6E6ADF: GetForeignTable (foreign.c:273)\n> ==00:00:03:42.115 3410499== by 0xF3BD470: postgresBeginForeignScan (postgres_fdw.c:1479)\n> ==00:00:03:42.115 3410499== by 0x6C2E83: ExecInitForeignScan (nodeForeignscan.c:236)\n> ==00:00:03:42.115 3410499== by 0x6AF893: ExecInitNode (execProcnode.c:283)\n> ==00:00:03:42.115 3410499== by 0x6C0007: ExecInitAppend (nodeAppend.c:232)\n> ==00:00:03:42.115 3410499== by 0x6AFA37: ExecInitNode (execProcnode.c:180)\n> ==00:00:03:42.115 3410499== by 0x6D533A: ExecInitModifyTable (nodeModifyTable.c:2575)\n\nThe reason for this would be that epoll_wait() is called with\nmaxevents exceeding the size of the input event array in the test\ncase. To fix, I adjusted the parameters to call the caller function\nWaitEventSetWait() with in ExecAppendAsyncEventWait(). Patch\nattached.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Mon, 5 Apr 2021 17:15:47 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "Thanks for the patch.\n\nAt Mon, 5 Apr 2021 17:15:47 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \n> On Fri, Apr 2, 2021 at 12:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > The buildfarm points out that this fails under valgrind.\n> > I easily reproduced it here:\n> >\n> > ==00:00:03:42.115 3410499== Syscall param epoll_wait(events) points to unaddressable byte(s)\n> > ==00:00:03:42.115 3410499== at 0x58E926B: epoll_wait (epoll_wait.c:30)\n> > ==00:00:03:42.115 3410499== by 0x7FC903: WaitEventSetWaitBlock (latch.c:1452)\n...\n> The reason for this would be that epoll_wait() is called with\n> maxevents exceeding the size of the input event array in the test\n> case. To fix, I adjusted the parameters to call the caller function\n\n# s/input/output/ event array? (occurrred_events)\n\n# I couldn't reproduce it, so sorry in advance if the following\n# discussion is totally bogus..\n\nI have nothing to say if it actually corrects the error, but the only\nrestriction of maxevents is that it must be positive, and in any case\nepoll_wait returns no more than set->nevents events. So I'm a bit\nwondering if that's the reason. 
In the first place I'm wondering if\nvalgrind is aware of that depth..\n\n==00:00:03:42.115 3410499== Syscall param epoll_wait(events) points to unaddressable byte(s)\n==00:00:03:42.115 3410499== at 0x58E926B: epoll_wait (epoll_wait.c:30)\n...\n==00:00:03:42.115 3410499== Address 0x10d10628 is 7,960 bytes inside a recently re-allocated block of size 8,192 alloc'd\n==00:00:03:42.115 3410499== at 0x4C30F0B: malloc (vg_replace_malloc.c:307)\n==00:00:03:42.115 3410499== by 0x94F9EA: AllocSetAlloc (aset.c:919)\n==00:00:03:42.115 3410499== by 0x957BAF: MemoryContextAlloc (mcxt.c:809)\n==00:00:03:42.115 3410499== by 0x958CC0: MemoryContextStrdup (mcxt.c:1179)\n==00:00:03:42.115 3410499== by 0x516AE4: untransformRelOptions (reloptions.c:1336)\n==00:00:03:42.115 3410499== by 0x6E6ADF: GetForeignTable (foreign.c:273)\n==00:00:03:42.115 3410499== by 0xF3BD470: postgresBeginForeignScan (postgres_fdw.c:1479)\n\nAs Tom said, this looks like set->epoll_ret_events at that time pointed\nto palloc'ed memory residing within a realloc'ed chunk.\n\nValgrind is saying that the variable (WaitEventSet*) set itself is a\nvalid pointer. On the other hand, set->epoll_ret_events points to a\nmemory chunk that valgrind may think has been freed. Since they\nare in one allocation block, the pointer alone would be broken if\nvalgrind is right in its complaint.\n\nI'm at a loss. How did you cause the error?\n\n> WaitEventSetWait() with in ExecAppendAsyncEventWait(). Patch\n> attached.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 06 Apr 2021 12:01:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Tue, Apr 6, 2021 at 12:01 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Mon, 5 Apr 2021 17:15:47 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > On Fri, Apr 2, 2021 at 12:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > The buildfarm points out that this fails under valgrind.\n> > > I easily reproduced it here:\n> > >\n> > > ==00:00:03:42.115 3410499== Syscall param epoll_wait(events) points to unaddressable byte(s)\n> > > ==00:00:03:42.115 3410499== at 0x58E926B: epoll_wait (epoll_wait.c:30)\n> > > ==00:00:03:42.115 3410499== by 0x7FC903: WaitEventSetWaitBlock (latch.c:1452)\n> ...\n> > The reason for this would be that epoll_wait() is called with\n> > maxevents exceeding the size of the input event array in the test\n> > case. To fix, I adjusted the parameters to call the caller function\n>\n> # s/input/output/ event array? (occurrred_events)\n\nSorry, my explanation was not enough. I think I was in a hurry. I\nmean by \"the input event array\" the epoll_event array given to\nepoll_wait() (i.e., the epoll_ret_events array).\n\n> # I couldn't reproduce it, so sorry in advance if the following\n> # discussion is totally bogus..\n\nI produced this failure by running the following simple query in async\nmode on a valgrind-enabled build:\n\nselect * from ft1 union all select * from ft2\n\nwhere ft1 and ft2 are postgres_fdw foreign tables. For this query, we\nwould call WaitEventSetWait() with nevents=16 in\nExecAppendAsyncEventWait() as EVENT_BUFFER_SIZE=16, and then\nepoll_wait() with maxevents=16 in WaitEventSetWaitBlock(); but\nmaxevents would exceed the input event array as the array size is\nthree. 
I think this inconsistency would cause the valgrind failure.\nI'm not 100% sure about that, but the patch fixing this inconsistency\nI posted fixed the failure in my environment.\n\n> I have nothing to say if it actually corrects the error, but the only\n> restriction of maxevents is that it must be positive, and in any case\n> epoll_wait returns no more than set->nevents events. So I'm a bit\n> wondering if that's the reason. In the first place I'm wondering if\n> valgrind is aware of that depth..\n\nYeah, the failure might actually be harmless, but anyway, we should\nmake the buildfarm green. Also, we should improve the code to avoid\nthe consistency mentioned above, so I'll apply the patch.\n\nThanks for the comments!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 6 Apr 2021 17:45:39 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Tue, Apr 6, 2021 at 5:45 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Also, we should improve the code to avoid\n> the consistency mentioned above,\n\nSorry, s/consistency/inconsistency/.\n\n> I'll apply the patch.\n\nDone. Let's see if this works.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 6 Apr 2021 19:25:16 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes."
}, { "msg_contents": "On Wed, Mar 31, 2021 at 2:12 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Wed, Mar 31, 2021 at 10:11 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > + * We'll prefer to consider this join async-capable if any table from\n> > + * either side of the join is considered async-capable.\n> > + */\n> > + fpinfo->async_capable = fpinfo_o->async_capable ||\n> > + fpinfo_i->async_capable;\n> >\n> > We need to explain this behavior in the documentation.\n\n> > It looks somewhat inconsistent to be inhibitive for the default value\n> > of \"async_capable\", but agressive in merging?\n>\n> If the foreign table has async_capable=true, it actually means that\n> there are resources (CPU, IO, network, etc.) to scan the foreign table\n> concurrently. And if any table from either side of the join has such\n> resources, then they could also be used for the join. So I don't\n> think this behavior is aggressive. I think it would be better to add\n> more comments, though.\n>\n> I'll return to this after committing the patch.\n\nI updated the above comment so that it explains the reason. Please\nfind attached a patch. I did some cleanup as well:\n\n* Simplified code in ExecAppendAsyncEventWait() a little bit to avoid\nduplicating the same nevents calculation, and updated comments there.\n\n* Added an assertion to ExecAppendAsyncRequest().\n\n* Updated comments for fetch_more_data_begin().\n\nBest regards,\nEtsuro Fujita", "msg_date": "Thu, 22 Apr 2021 12:30:41 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Thu, Apr 22, 2021 at 12:30 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n\n> > > + * We'll prefer to consider this join async-capable if any table from\n> > > + * either side of the join is considered async-capable.\n> > > + */\n> > > + fpinfo->async_capable = fpinfo_o->async_capable ||\n> > > + fpinfo_i->async_capable;\n\n> I updated the above comment so that it explains the reason. Please\n> find attached a patch. I did some cleanup as well:\n\nI have committed the patch.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 23 Apr 2021 12:12:58 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 4/23/21 8:12 AM, Etsuro Fujita wrote:\n> I have committed the patch.\nWhile studying the capabilities of AsyncAppend, I noticed an \ninconsistency with the cost model of the optimizer:\n\nasync_capable = off:\n--------------------\nAppend (cost=100.00..695.00 ...)\n -> Foreign Scan on f1 part_1 (cost=100.00..213.31 ...)\n -> Foreign Scan on f2 part_2 (cost=100.00..216.07 ...)\n -> Foreign Scan on f3 part_3 (cost=100.00..215.62 ...)\n\nasync_capable = on:\n-------------------\nAppend (cost=100.00..695.00 ...)\n -> Async Foreign Scan on f1 part_1 (cost=100.00..213.31 ...)\n -> Async Foreign Scan on f2 part_2 (cost=100.00..216.07 ...)\n -> Async Foreign Scan on f3 part_3 (cost=100.00..215.62 ...)\n\n\nHere I see two problems:\n1. Cost of an AsyncAppend is the same as cost of an Append. But \nexecution time of the AsyncAppend for three remote partitions has more \nthan halved.\n2. Cost of an AsyncAppend looks as a sum of the child ForeignScan costs.\n\nI haven't ideas why it may be a problem right now. 
But I can imagine \nthat it may be a problem in future if we have alternative paths: complex \npushdown in synchronous mode (a few rows to return) or simple \nasynchronous append with a large set of rows to return.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Mon, 26 Apr 2021 11:01:12 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 4/23/21 8:12 AM, Etsuro Fujita wrote:\n> I have committed the patch.\nSmall mistake i found. If no tuple was received from a foreign \npartition, explain shows that we never executed node. For example,\nif we have 0 tuples in f1 and 100 tuples in f2:\n\nQuery:\nEXPLAIN (ANALYZE, VERBOSE, TIMING OFF, COSTS OFF)\nSELECT * FROM (SELECT * FROM f1 UNION ALL SELECT * FROM f2) AS q1\nLIMIT 101;\n\nExplain:\n Limit (actual rows=100 loops=1)\n Output: f1.a\n -> Append (actual rows=100 loops=1)\n -> Async Foreign Scan on public.f1 (never executed)\n Output: f1.a\n Remote SQL: SELECT a FROM public.l1\n -> Async Foreign Scan on public.f2 (actual rows=100 loops=1)\n Output: f2.a\n Remote SQL: SELECT a FROM public.l2\n\nThe patch in the attachment fixes this.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Mon, 26 Apr 2021 15:35:53 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 4/23/21 8:12 AM, Etsuro Fujita wrote:\n> On Thu, Apr 22, 2021 at 12:30 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I have committed the patch.\n\nOne more question. Append choose async plans at the stage of the Append \nplan creation.\nLater, the planner performs some optimizations, such as eliminating \ntrivial Subquery nodes. 
So, AsyncAppend is impossible in some \nsituations, for example:\n\n(SELECT * FROM f1 WHERE a < 10)\n UNION ALL\n(SELECT * FROM f2 WHERE a < 10);\n\nBut works for the query:\n\nSELECT *\n FROM (SELECT * FROM f1 UNION ALL SELECT * FROM f2) AS q1\nWHERE a < 10;\n\nAs far as I understand, this is not a hard limit. We can choose async \nsubplans at the beginning of the execution stage.\nFor a demo, I prepared the patch (see in attachment).\nIt solves the problem and passes the regression tests.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Tue, 27 Apr 2021 11:57:30 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Mon, Apr 26, 2021 at 3:01 PM Andrey V. Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> While studying the capabilities of AsyncAppend, I noticed an\n> inconsistency with the cost model of the optimizer:\n\n> Here I see two problems:\n> 1. Cost of an AsyncAppend is the same as cost of an Append. But\n> execution time of the AsyncAppend for three remote partitions has more\n> than halved.\n> 2. Cost of an AsyncAppend looks as a sum of the child ForeignScan costs.\n\nYeah, we don’t adjust the cost for async Append; it’s the same as that\nfor sync Append. But I don’t see any issue as-is, either. (It’s not\nthat easy to adjust the cost to an appropriate value in the case of\npostgres_fdw, because in that case the cost would vary depending on\nwhich connections are used for scanning foreign tables [1].)\n\n> I haven't ideas why it may be a problem right now. 
But I can imagine\n> that it may be a problem in future if we have alternative paths: complex\n> pushdown in synchronous mode (a few rows to return) or simple\n> asynchronous append with a large set of rows to return.\n\nYeah, I think it’s better if we could consider async append paths and\nestimate the costs for them accurately at path-creation time, not\nplan-creation time, because that would make it possible to use async\nexecution in more cases, as you pointed out. But I left that for\nfuture work, because I wanted to make the first cut simple.\n\nThanks for the review!\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK15i-OyCesd369P8zyBErjN_T18zVYu27714bf_L%3DCOXew%40mail.gmail.com\n\n\n", "msg_date": "Tue, 27 Apr 2021 21:27:05 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Mon, Apr 26, 2021 at 7:35 PM Andrey V. Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> Small mistake i found. If no tuple was received from a foreign\n> partition, explain shows that we never executed node. For example,\n> if we have 0 tuples in f1 and 100 tuples in f2:\n>\n> Query:\n> EXPLAIN (ANALYZE, VERBOSE, TIMING OFF, COSTS OFF)\n> SELECT * FROM (SELECT * FROM f1 UNION ALL SELECT * FROM f2) AS q1\n> LIMIT 101;\n>\n> Explain:\n> Limit (actual rows=100 loops=1)\n> Output: f1.a\n> -> Append (actual rows=100 loops=1)\n> -> Async Foreign Scan on public.f1 (never executed)\n> Output: f1.a\n> Remote SQL: SELECT a FROM public.l1\n> -> Async Foreign Scan on public.f2 (actual rows=100 loops=1)\n> Output: f2.a\n> Remote SQL: SELECT a FROM public.l2\n>\n> The patch in the attachment fixes this.\n\nThanks for the report and patch! 
Will look into this.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 27 Apr 2021 21:31:08 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Thu, Mar 4, 2021 at 1:00 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Another thing I'm concerned about in the postgres_fdw part is the case\n> where all/many postgres_fdw ForeignScans of an Append use the same\n> connection, because in that case those ForeignScans are executed one\n> by one, not in parallel, and hence the overhead of async execution\n> (i.e., doing ExecAppendAsyncEventWait()) would merely cause a\n> performance degradation. Here is such an example:\n>\n> postgres=# create server loopback foreign data wrapper postgres_fdw\n> options (dbname 'postgres');\n> postgres=# create user mapping for current_user server loopback;\n> postgres=# create table pt (a int, b int, c text) partition by range (a);\n> postgres=# create table loct1 (a int, b int, c text);\n> postgres=# create table loct2 (a int, b int, c text);\n> postgres=# create table loct3 (a int, b int, c text);\n> postgres=# create foreign table p1 partition of pt for values from\n> (10) to (20) server loopback options (table_name 'loct1');\n> postgres=# create foreign table p2 partition of pt for values from\n> (20) to (30) server loopback options (table_name 'loct2');\n> postgres=# create foreign table p3 partition of pt for values from\n> (30) to (40) server loopback options (table_name 'loct3');\n> postgres=# insert into p1 select 10 + i % 10, i, to_char(i, 'FM00000')\n> from generate_series(0, 99999) i;\n> postgres=# insert into p2 select 20 + i % 10, i, to_char(i, 'FM00000')\n> from generate_series(0, 99999) i;\n> postgres=# insert into p3 select 30 + i % 10, i, to_char(i, 'FM00000')\n> from generate_series(0, 99999) i;\n> postgres=# analyze pt;\n>\n> postgres=# set enable_async_append to off;\n> postgres=# select 
count(*) from pt;\n> count\n> --------\n> 300000\n> (1 row)\n>\n> Time: 366.905 ms\n>\n> postgres=# set enable_async_append to on;\n> postgres=# select count(*) from pt;\n> count\n> --------\n> 300000\n> (1 row)\n>\n> Time: 385.431 ms\n\nI think the user should be careful about this. How about adding a\nnote about it to the “Asynchronous Execution Options” section in\npostgres-fdw.sgml, like the attached?\n\nBest regards,\nEtsuro Fujita", "msg_date": "Thu, 6 May 2021 15:25:25 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Tue, Apr 27, 2021 at 9:31 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Mon, Apr 26, 2021 at 7:35 PM Andrey V. Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n> > Small mistake i found. If no tuple was received from a foreign\n> > partition, explain shows that we never executed node.\n\n> > The patch in the attachment fixes this.\n>\n> Will look into this.\n\nThe patch fixes the issue, but I don’t think it’s the right way to go,\nbecause it requires an extra ExecProcNode() call, which wouldn’t be\nefficient. Also, the patch wouldn’t address another issue I noticed\nin EXPLAIN ANALYZE for async-capable nodes that the command wouldn’t\nmeasure the time spent in such nodes accurately. For the case of\nasync-capable node using postgres_fdw, it only measures the time spent\nin ExecProcNode() in ExecAsyncRequest()/ExecAsyncNotify(), missing the\ntime spent in other things such as creating a cursor in\nExecAsyncRequest(). :-(. To address both issues, I’d like to propose\nthe attached, in which I added instrumentation support to\nExecAsyncRequest()/ExecAsyncConfigureWait()/ExecAsyncNotify(). 
I\nthink this would not only address the reported issue more efficiently,\nbut allow to collect timing for async-capable nodes more accurately.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Thu, 6 May 2021 15:45:06 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Tue, Apr 27, 2021 at 3:57 PM Andrey V. Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> One more question. Append choose async plans at the stage of the Append\n> plan creation.\n> Later, the planner performs some optimizations, such as eliminating\n> trivial Subquery nodes. So, AsyncAppend is impossible in some\n> situations, for example:\n>\n> (SELECT * FROM f1 WHERE a < 10)\n> UNION ALL\n> (SELECT * FROM f2 WHERE a < 10);\n>\n> But works for the query:\n>\n> SELECT *\n> FROM (SELECT * FROM f1 UNION ALL SELECT * FROM f2) AS q1\n> WHERE a < 10;\n>\n> As far as I understand, this is not a hard limit.\n\nI think so, but IMO I think this would be an improvement rather than a bug fix.\n\n> We can choose async\n> subplans at the beginning of the execution stage.\n> For a demo, I prepared the patch (see in attachment).\n> It solves the problem and passes the regression tests.\n\nThanks for the patch! IIUC, another approach to this would be the\npatch you proposed before [1]. Right?\n\nI didn't have time to look at the patch in [1] for PG14. My apologies\nfor that. Actually, I was planning to return it when the development\nfor PG15 starts.\n\nSorry for the late reply.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/7fe10f95-ac6c-c81d-a9d3-227493eb9055%40postgrespro.ru\n\n\n", "msg_date": "Thu, 6 May 2021 18:11:01 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "Greetings,\n\n* Etsuro Fujita (etsuro.fujita@gmail.com) wrote:\n> On Thu, Mar 4, 2021 at 1:00 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > Another thing I'm concerned about in the postgres_fdw part is the case\n> > where all/many postgres_fdw ForeignScans of an Append use the same\n> > connection, because in that case those ForeignScans are executed one\n> > by one, not in parallel, and hence the overhead of async execution\n> > (i.e., doing ExecAppendAsyncEventWait()) would merely cause a\n> > performance degradation. Here is such an example:\n> >\n> > postgres=# create server loopback foreign data wrapper postgres_fdw\n> > options (dbname 'postgres');\n> > postgres=# create user mapping for current_user server loopback;\n> > postgres=# create table pt (a int, b int, c text) partition by range (a);\n> > postgres=# create table loct1 (a int, b int, c text);\n> > postgres=# create table loct2 (a int, b int, c text);\n> > postgres=# create table loct3 (a int, b int, c text);\n> > postgres=# create foreign table p1 partition of pt for values from\n> > (10) to (20) server loopback options (table_name 'loct1');\n> > postgres=# create foreign table p2 partition of pt for values from\n> > (20) to (30) server loopback options (table_name 'loct2');\n> > postgres=# create foreign table p3 partition of pt for values from\n> > (30) to (40) server loopback options (table_name 'loct3');\n> > postgres=# insert into p1 select 10 + i % 10, i, to_char(i, 'FM00000')\n> > from generate_series(0, 99999) i;\n> > postgres=# insert into p2 select 20 + i % 10, i, to_char(i, 'FM00000')\n> > from generate_series(0, 99999) i;\n> > postgres=# insert into p3 select 30 + i % 10, i, to_char(i, 'FM00000')\n> > from generate_series(0, 99999) i;\n> > postgres=# analyze pt;\n> >\n> > postgres=# set enable_async_append to off;\n> > postgres=# select count(*) from pt;\n> > count\n> > --------\n> > 300000\n> > (1 row)\n> >\n> > Time: 366.905 ms\n> >\n> > postgres=# set 
enable_async_append to on;\n> > postgres=# select count(*) from pt;\n> > count\n> > --------\n> > 300000\n> > (1 row)\n> >\n> > Time: 385.431 ms\n> \n> I think the user should be careful about this. How about adding a\n> note about it to the “Asynchronous Execution Options” section in\n> postgres-fdw.sgml, like the attached?\n\nI'd suggest the language point out that it's not actually possible to do\notherwise, since they all need to be part of the same transaction.\n\nWithout that, it looks like we're just missing a trick somewhere and\nsomeone might think that they could improve PG to open multiple\nconnections to the same remote server to execute them in parallel.\n\nMaybe:\n\nIn order to ensure that the data being returned from a foreign server\nis consistent, postgres_fdw will only open one connection for a given\nforeign server and will run all queries against that server sequentially\neven if there are multiple foreign tables involved. In such a case, it\nmay be more performant to disable this option to eliminate the overhead\nassociated with running queries asynchronously.\n\n... then again, it'd really be better if we could figure out a way to\njust do the right thing here. I haven't looked at this in depth but I\nwould think that the overhead of async would be well worth it just about\nany time there's more than one foreign server involved. Is it not\nreasonable to have a heuristic where we disable async in the cases where\nthere's only one foreign server, but have it enabled all the other time?\nWhile continuing to allow users to manage it explicitly if they want.\n\nThanks,\n\nStephen", "msg_date": "Thu, 6 May 2021 13:12:24 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 6/5/21 22:12, Stephen Frost wrote:\n> * Etsuro Fujita (etsuro.fujita@gmail.com) wrote:\n>> I think the user should be careful about this. 
How about adding a\n>> note about it to the “Asynchronous Execution Options” section in\n>> postgres-fdw.sgml, like the attached?\n+1\n> ... then again, it'd really be better if we could figure out a way to\n> just do the right thing here. I haven't looked at this in depth but I\n> would think that the overhead of async would be well worth it just about\n> any time there's more than one foreign server involved. Is it not\n> reasonable to have a heuristic where we disable async in the cases where\n> there's only one foreign server, but have it enabled all the other time?\n> While continuing to allow users to manage it explicitly if they want.\nBenchmarking SELECT from foreign partitions hosted on the same server, \nI see these results:\n\nWith async append:\n1 partition - 178 ms; 4 - 263; 8 - 450; 16 - 860; 32 - 1740.\n\nWithout:\n1 - 178 ms; 4 - 583; 8 - 1140; 16 - 2302; 32 - 4620.\n\nSo, these results show that we have a reason to use async append in the \ncase where there's only one foreign server.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Fri, 7 May 2021 14:59:52 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 6/5/21 11:45, Etsuro Fujita wrote:\n> On Tue, Apr 27, 2021 at 9:31 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> The patch fixes the issue, but I don’t think it’s the right way to go,\n> because it requires an extra ExecProcNode() call, which wouldn’t be\n> efficient. Also, the patch wouldn’t address another issue I noticed\n> in EXPLAIN ANALYZE for async-capable nodes that the command wouldn’t\n> measure the time spent in such nodes accurately. For the case of\n> async-capable node using postgres_fdw, it only measures the time spent\n> in ExecProcNode() in ExecAsyncRequest()/ExecAsyncNotify(), missing the\n> time spent in other things such as creating a cursor in\n> ExecAsyncRequest(). 
:-(. To address both issues, I’d like to propose\n> the attached, in which I added instrumentation support to\n> ExecAsyncRequest()/ExecAsyncConfigureWait()/ExecAsyncNotify(). I\n> think this would not only address the reported issue more efficiently,\n> but allow to collect timing for async-capable nodes more accurately.\n\nOk, I agree with the approach, but the next test case failed:\n\nEXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF)\nSELECT * FROM (\n\t(SELECT * FROM f1) UNION ALL (SELECT * FROM f2)\n) q1 LIMIT 100;\nERROR: InstrUpdateTupleCount called on node not yet executed\n\nInitialization script see in attachment.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Fri, 7 May 2021 15:32:47 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 6/5/21 14:11, Etsuro Fujita wrote:\n> On Tue, Apr 27, 2021 at 3:57 PM Andrey V. Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> One more question. Append choose async plans at the stage of the Append\n>> plan creation.\n>> Later, the planner performs some optimizations, such as eliminating\n>> trivial Subquery nodes. So, AsyncAppend is impossible in some\n>> situations, for example:\n>>\n>> (SELECT * FROM f1 WHERE a < 10)\n>> UNION ALL\n>> (SELECT * FROM f2 WHERE a < 10);\n>>\n>> But works for the query:\n>>\n>> SELECT *\n>> FROM (SELECT * FROM f1 UNION ALL SELECT * FROM f2) AS q1\n>> WHERE a < 10;\n>>\n>> As far as I understand, this is not a hard limit.\n> \n> I think so, but IMO I think this would be an improvement rather than a bug fix.\n> \n>> We can choose async\n>> subplans at the beginning of the execution stage.\n>> For a demo, I prepared the patch (see in attachment).\n>> It solves the problem and passes the regression tests.\n> \n> Thanks for the patch! IIUC, another approach to this would be the\n> patch you proposed before [1]. Right?\nYes. 
I think, new solution will be better.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Fri, 7 May 2021 15:35:50 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Fri, May 7, 2021 at 2:12 AM Stephen Frost <sfrost@snowman.net> wrote:\n> I'd suggest the language point out that it's not actually possible to do\n> otherwise, since they all need to be part of the same transaction.\n>\n> Without that, it looks like we're just missing a trick somewhere and\n> someone might think that they could improve PG to open multiple\n> connections to the same remote server to execute them in parallel.\n\nAgreed.\n\n> Maybe:\n>\n> In order to ensure that the data being returned from a foreign server\n> is consistent, postgres_fdw will only open one connection for a given\n> foreign server and will run all queries against that server sequentially\n> even if there are multiple foreign tables involved. In such a case, it\n> may be more performant to disable this option to eliminate the overhead\n> associated with running queries asynchronously.\n\nOk, I’ll merge this into the next version.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Sat, 8 May 2021 00:55:07 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Fri, May 7, 2021 at 7:35 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 6/5/21 14:11, Etsuro Fujita wrote:\n> > On Tue, Apr 27, 2021 at 3:57 PM Andrey V. Lepikhov\n> > <a.lepikhov@postgrespro.ru> wrote:\n> >> One more question. Append choose async plans at the stage of the Append\n> >> plan creation.\n> >> Later, the planner performs some optimizations, such as eliminating\n> >> trivial Subquery nodes. 
So, AsyncAppend is impossible in some\n> >> situations, for example:\n> >>\n> >> (SELECT * FROM f1 WHERE a < 10)\n> >> UNION ALL\n> >> (SELECT * FROM f2 WHERE a < 10);\n\n> >> We can choose async\n> >> subplans at the beginning of the execution stage.\n> >> For a demo, I prepared the patch (see in attachment).\n> >> It solves the problem and passes the regression tests.\n> >\n> > IIUC, another approach to this would be the\n> > patch you proposed before [1]. Right?\n> Yes. I think, new solution will be better.\n\nOk, will review.\n\nI think it would be better to start a new thread for this, and add the\npatch to the next CF so that it doesn’t get lost.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Sat, 8 May 2021 01:05:47 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Fri, May 7, 2021 at 7:32 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> Ok, I agree with the approach, but the next test case failed:\n>\n> EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF)\n> SELECT * FROM (\n> (SELECT * FROM f1) UNION ALL (SELECT * FROM f2)\n> ) q1 LIMIT 100;\n> ERROR: InstrUpdateTupleCount called on node not yet executed\n>\n> Initialization script see in attachment.\n\nReproduced. 
Here is the EXPLAIN output for the query:\n\nexplain verbose select * from ((select * from f1) union all (select *\nfrom f2)) q1 limit 100;\n QUERY PLAN\n--------------------------------------------------------------------------------------\n Limit (cost=100.00..104.70 rows=100 width=4)\n Output: f1.a\n -> Append (cost=100.00..724.22 rows=13292 width=4)\n -> Async Foreign Scan on public.f1 (cost=100.00..325.62\nrows=6554 width=4)\n Output: f1.a\n Remote SQL: SELECT a FROM public.l1\n -> Async Foreign Scan on public.f2 (cost=100.00..332.14\nrows=6738 width=4)\n Output: f2.a\n Remote SQL: SELECT a FROM public.l2\n(9 rows)\n\nWhen executing the query “select * from ((select * from f1) union all\n(select * from f2)) q1 limit 100” in async mode, the remote queries\nfor f1 and f2 would be sent to the remote at the same time in the\nfirst ExecAppend(). If the result for the remote query for f1 is\nreturned first, the local query would be processed using the result,\nand the remote query for f2 in progress would be processed during\nExecutorEnd() using process_pending_request() (and vice versa). But\nin the EXPLAIN ANALYZE case, InstrEndLoop() is called *before*\nExecutorEnd(), and it initializes the instr->running flag, so in that\ncase, when processing the in-progress remote query in\nprocess_pending_request(), we would call InstrUpdateTupleCount() with\nthe flag unset, causing this error.\n\nI think a simple fix for this would be just remove the check whether\nthe instr->running flag is set or not in InstrUpdateTupleCount().\nAttached is an updated patch, in which I also updated a comment in\nexecnodes.h and docs in fdwhandler.sgml to match the code in\nnodeAppend.c, and fixed typos in comments in nodeAppend.c.\n\nThanks for the review and script!\n\nBest regards,\nEtsuro Fujita", "msg_date": "Mon, 10 May 2021 12:03:08 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On 10/5/21 08:03, Etsuro Fujita wrote:\n> On Fri, May 7, 2021 at 7:32 PM Andrey Lepikhov\n> I think a simple fix for this would be just remove the check whether\n> the instr->running flag is set or not in InstrUpdateTupleCount().\n> Attached is an updated patch, in which I also updated a comment in\n> execnodes.h and docs in fdwhandler.sgml to match the code in\n> nodeAppend.c, and fixed typos in comments in nodeAppend.c.\nYour patch fixes the problem. But I found two more problems:\n\n1.\nEXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF)\nSELECT * FROM (\n\t(SELECT * FROM f1)\n\tUNION ALL\n\t(SELECT * FROM f2)\n\tUNION ALL\n\t(SELECT * FROM l3)\n) q1 LIMIT 6709;\n QUERY PLAN\n--------------------------------------------------------------\n Limit (actual rows=6709 loops=1)\n -> Append (actual rows=6709 loops=1)\n -> Async Foreign Scan on f1 (actual rows=1 loops=1)\n -> Async Foreign Scan on f2 (actual rows=1 loops=1)\n -> Seq Scan on l3 (actual rows=6708 loops=1)\n\nHere we scan 6710 tuples at low level but appended only 6709. Where did \nwe lose one tuple?\n\n2.\nSELECT * FROM (\n\t(SELECT * FROM f1)\n\tUNION ALL\n\t(SELECT * FROM f2)\n\tUNION ALL\n\t(SELECT * FROM f3 WHERE a > 0)\n) q1 LIMIT 3000;\n QUERY PLAN\n--------------------------------------------------------------\n Limit (actual rows=3000 loops=1)\n -> Append (actual rows=3000 loops=1)\n -> Async Foreign Scan on f1 (actual rows=0 loops=1)\n -> Async Foreign Scan on f2 (actual rows=0 loops=1)\n -> Foreign Scan on f3 (actual rows=3000 loops=1)\n\nHere we give preference to the synchronous scan. Why?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 11 May 2021 07:58:10 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes."
}, { "msg_contents": "On 7/5/21 21:05, Etsuro Fujita wrote:\n> I think it would be better to start a new thread for this, and add the\n> patch to the next CF so that it doesn’t get lost.\n\nThe current implementation of async append chooses asynchronous subplans at \nthe phase of Append plan creation. This is a safe approach, but we \nlose some optimizations, such as flattening trivial subqueries, and \ncan't execute some simple queries asynchronously. For example:\n\nEXPLAIN (ANALYZE, TIMING OFF, SUMMARY OFF, COSTS OFF)\n(SELECT * FROM f1 WHERE a < 10) UNION ALL\n(SELECT * FROM f2 WHERE a < 10);\n\nBut, as I could understand, we can choose these subplans later, at the \ninit append phase when all optimizations already passed.\nIn the attachment is an implementation of the proposed approach.\n\nThe initial script for the example is in the parent thread [1].\n\n\n[1] \nhttps://www.postgresql.org/message-id/a38bb206-8340-9528-5ef6-37de2d5cb1a3%40postgrespro.ru\n\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Tue, 11 May 2021 08:45:21 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Mon, May 10, 2021 at 8:45 PM Andrey Lepikhov <a.lepikhov@postgrespro.ru>\nwrote:\n\n> On 7/5/21 21:05, Etsuro Fujita wrote:\n> > I think it would be better to start a new thread for this, and add the\n> > patch to the next CF so that it doesn’t get lost.\n>\n> The current implementation of async append chooses asynchronous subplans at\n> the phase of Append plan creation. This is a safe approach, but we\n> lose some optimizations, such as flattening trivial subqueries, and\n> can't execute some simple queries asynchronously. 
For example:\n>\n> EXPLAIN (ANALYZE, TIMING OFF, SUMMARY OFF, COSTS OFF)\n> (SELECT * FROM f1 WHERE a < 10) UNION ALL\n> (SELECT * FROM f2 WHERE a < 10);\n>\n> But, as I could understand, we can choose these subplans later, at the\n> init append phase when all optimizations already passed.\n> In attachment - implementation of the proposed approach.\n>\n> Initial script for the example see in the parent thread [1].\n>\n>\n> [1]\n>\n> https://www.postgresql.org/message-id/a38bb206-8340-9528-5ef6-37de2d5cb1a3%40postgrespro.ru\n>\n>\n> --\n> regards,\n> Andrey Lepikhov\n> Postgres Professional\n>\nHi,\n\n+ /* Check to see if subplan can be executed asynchronously */\n+ if (subplan->async_capable)\n+ {\n+ subplan->async_capable = false;\n\nIt seems the if statement is not needed: you can directly assign false\nto subplan->async_capable.\n\nCheers\n\nOn Mon, May 10, 2021 at 8:45 PM Andrey Lepikhov <a.lepikhov@postgrespro.ru> wrote:On 7/5/21 21:05, Etsuro Fujita wrote:\n> I think it would be better to start a new thread for this, and add the\n> patch to the next CF so that it doesn’t get lost.\n\nCurrent implementation of async append choose asynchronous subplans at \nthe phase of an append plan creation. This is safe approach, but we \nloose some optimizations, such of flattening trivial subqueries and \ncan't execute some simple queries asynchronously. 
For example:\n\nEXPLAIN (ANALYZE, TIMING OFF, SUMMARY OFF, COSTS OFF)\n(SELECT * FROM f1 WHERE a < 10) UNION ALL\n(SELECT * FROM f2 WHERE a < 10);\n\nBut, as I could understand, we can choose these subplans later, at the \ninit append phase when all optimizations already passed.\nIn attachment - implementation of the proposed approach.\n\nInitial script for the example see in the parent thread [1].\n\n\n[1] \nhttps://www.postgresql.org/message-id/a38bb206-8340-9528-5ef6-37de2d5cb1a3%40postgrespro.ru\n\n\n-- \nregards,\nAndrey Lepikhov\nPostgres ProfessionalHi,+           /* Check to see if subplan can be executed asynchronously */+           if (subplan->async_capable)+           {+               subplan->async_capable = false;It seems the if statement is not needed: you can directly assign false to  subplan->async_capable.Cheers", "msg_date": "Mon, 10 May 2021 20:55:34 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On 11/5/21 08:55, Zhihong Yu wrote:\n> +           /* Check to see if subplan can be executed asynchronously */\n> +           if (subplan->async_capable)\n> +           {\n> +               subplan->async_capable = false;\n> \n> It seems the if statement is not needed: you can directly assign false \n> to  subplan->async_capable.Thank you, I agree with you.\nClose look into the postgres_fdw regression tests show at least one open \nproblem with this approach: we need to control situations when only one \npartition doesn't pruned and append isn't exist at all.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 11 May 2021 12:06:05 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Tue, May 11, 2021 at 11:58 
AM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> Your patch fixes the problem. But I found two more problems:\n>\n> EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF)\n> SELECT * FROM (\n> (SELECT * FROM f1)\n> UNION ALL\n> (SELECT * FROM f2)\n> UNION ALL\n> (SELECT * FROM l3)\n> ) q1 LIMIT 6709;\n> QUERY PLAN\n> --------------------------------------------------------------\n> Limit (actual rows=6709 loops=1)\n> -> Append (actual rows=6709 loops=1)\n> -> Async Foreign Scan on f1 (actual rows=1 loops=1)\n> -> Async Foreign Scan on f2 (actual rows=1 loops=1)\n> -> Seq Scan on l3 (actual rows=6708 loops=1)\n>\n> Here we scan 6710 tuples at low level but appended only 6709. Where did\n> we lose one tuple?\n\nThe extra tuple, which is from f1 or f2, would have been kept in the\nAppend node's as_asyncresults, not returned from the Append node to\nthe Limit node. The async Foreign Scan nodes would fetch tuples\nbefore the Append node ask the tuples, so the fetched tuples may or\nmay not be used.\n\n> 2.\n> SELECT * FROM (\n> (SELECT * FROM f1)\n> UNION ALL\n> (SELECT * FROM f2)\n> UNION ALL\n> (SELECT * FROM f3 WHERE a > 0)\n> ) q1 LIMIT 3000;\n> QUERY PLAN\n> --------------------------------------------------------------\n> Limit (actual rows=3000 loops=1)\n> -> Append (actual rows=3000 loops=1)\n> -> Async Foreign Scan on f1 (actual rows=0 loops=1)\n> -> Async Foreign Scan on f2 (actual rows=0 loops=1)\n> -> Foreign Scan on f3 (actual rows=3000 loops=1)\n>\n> Here we give preference to the synchronous scan. Why?\n\nThis would be expected behavior, and the reason is avoid performance\ndegradation; you might think it would be better to execute the async\nForeign Scan nodes more aggressively, but it would require\nwaiting/polling for file descriptor events many times, which is\nexpensive and might cause performance degradation. 
I think there is\nroom for improvement, though.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 11 May 2021 16:24:53 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 11/5/21 12:24, Etsuro Fujita wrote:\n> On Tue, May 11, 2021 at 11:58 AM Andrey Lepikhov\n> The extra tuple, which is from f1 or f2, would have been kept in the\n> Append node's as_asyncresults, not returned from the Append node to\n> the Limit node. The async Foreign Scan nodes would fetch tuples\n> before the Append node ask the tuples, so the fetched tuples may or\n> may not be used.\nOk.>> -> Append (actual rows=3000 loops=1)\n>> -> Async Foreign Scan on f1 (actual rows=0 loops=1)\n>> -> Async Foreign Scan on f2 (actual rows=0 loops=1)\n>> -> Foreign Scan on f3 (actual rows=3000 loops=1)\n>>\n>> Here we give preference to the synchronous scan. Why?\n> \n> This would be expected behavior, and the reason is avoid performance\n> degradation; you might think it would be better to execute the async\n> Foreign Scan nodes more aggressively, but it would require\n> waiting/polling for file descriptor events many times, which is\n> expensive and might cause performance degradation. I think there is\n> room for improvement, though.\nYes, I agree with you. Maybe you can add note in documentation on \nasync_capable, for example:\n\"... Synchronous and asynchronous scanning strategies can be mixed by \noptimizer in one scan plan of a partitioned table or an 'UNION ALL' \ncommand. For performance reasons, synchronous scans executes before the \nfirst of async scan. ...\"\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 11 May 2021 14:27:10 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Tue, May 11, 2021 at 6:27 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 11/5/21 12:24, Etsuro Fujita wrote:\n\n> >> -> Append (actual rows=3000 loops=1)\n> >> -> Async Foreign Scan on f1 (actual rows=0 loops=1)\n> >> -> Async Foreign Scan on f2 (actual rows=0 loops=1)\n> >> -> Foreign Scan on f3 (actual rows=3000 loops=1)\n> >>\n> >> Here we give preference to the synchronous scan. Why?\n> >\n> > This would be expected behavior, and the reason is avoid performance\n> > degradation; you might think it would be better to execute the async\n> > Foreign Scan nodes more aggressively, but it would require\n> > waiting/polling for file descriptor events many times, which is\n> > expensive and might cause performance degradation. I think there is\n> > room for improvement, though.\n> Yes, I agree with you. Maybe you can add note in documentation on\n> async_capable, for example:\n> \"... Synchronous and asynchronous scanning strategies can be mixed by\n> optimizer in one scan plan of a partitioned table or an 'UNION ALL'\n> command. For performance reasons, synchronous scans executes before the\n> first of async scan. ...\"\n\n+1 But I think this is an independent issue, so I think it would be\nbetter to address the issue separately.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 11 May 2021 18:55:05 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Tue, May 11, 2021 at 6:55 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Tue, May 11, 2021 at 6:27 PM Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n> > On 11/5/21 12:24, Etsuro Fujita wrote:\n>\n> > >> -> Append (actual rows=3000 loops=1)\n> > >> -> Async Foreign Scan on f1 (actual rows=0 loops=1)\n> > >> -> Async Foreign Scan on f2 (actual rows=0 loops=1)\n> > >> -> Foreign Scan on f3 (actual rows=3000 loops=1)\n> > >>\n> > >> Here we give preference to the synchronous scan. Why?\n> > >\n> > > This would be expected behavior, and the reason is avoid performance\n> > > degradation; you might think it would be better to execute the async\n> > > Foreign Scan nodes more aggressively, but it would require\n> > > waiting/polling for file descriptor events many times, which is\n> > > expensive and might cause performance degradation. I think there is\n> > > room for improvement, though.\n> > Yes, I agree with you. Maybe you can add note in documentation on\n> > async_capable, for example:\n> > \"... Synchronous and asynchronous scanning strategies can be mixed by\n> > optimizer in one scan plan of a partitioned table or an 'UNION ALL'\n> > command. For performance reasons, synchronous scans executes before the\n> > first of async scan. ...\"\n>\n> +1 But I think this is an independent issue, so I think it would be\n> better to address the issue separately.\n\nI have committed the patch for the original issue.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 12 May 2021 14:15:38 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "I'm resending this because I failed to reply to all.\n\nOn Sat, May 8, 2021 at 12:55 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Fri, May 7, 2021 at 2:12 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > In order to ensure that the data being returned from a foreign server\n> > is consistent, postgres_fdw will only open one connection for a given\n> > foreign server and will run all queries against that server sequentially\n> > even if there are multiple foreign tables involved. In such a case, it\n> > may be more performant to disable this option to eliminate the overhead\n> > associated with running queries asynchronously.\n>\n> Ok, I’ll merge this into the next version.\n\nStephen’s version would be much better than mine, so I updated the\npatch as proposed except the first sentence. If the foreign tables\nare subject to different user mappings, multiple connections will be\nopened, and queries will be performed in parallel. So I expanded the\nsentence a little bit, to avoid misunderstanding. Attached is a new\nversion.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Sun, 16 May 2021 23:39:14 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Sun, May 16, 2021 at 11:39 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Attached is a new version.\n\nI have committed the patch.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 17 May 2021 17:40:47 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Wed, Mar 31, 2021 at 6:55 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Tue, Mar 30, 2021 at 8:40 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > I'm happy with the patch, so I'll commit it if there are no objections.\n>\n> Pushed.\n\nI noticed that rescan of async Appends is broken when\ndo_exec_prune=false, leading to incorrect results on normal builds and\nthe following failure on assertion-enabled builds:\n\nTRAP: FailedAssertion(\"node->as_valid_asyncplans == NULL\", File:\n\"nodeAppend.c\", Line: 1126, PID: 76644)\n\nSee a test case for this added in the attached. The root cause would\nbe that we call classify_matching_subplans() to re-determine\nsync/async subplans when called from the first ExecAppend() after the\nfirst ReScan, even if do_exec_prune=false, which is incorrect because\nin that case it is assumed to re-use sync/async subplans determined\nduring the the first ExecAppend() after Init. The attached fixes this\nissue. (A previous patch also had this issue, so I fixed it, but I\nthink I broke this again when simplifying the patch :-(.) I did a bit\nof cleanup, and modified ExecReScanAppend() to initialize an async\nstate variable as_nasyncresults to zero, to be sure. I think the\nvariable would have been set to zero before we get to that function,\nso I don't think we really need to do so, though.\n\nI will add this to the open items list for v14.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Fri, 28 May 2021 16:30:29 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "At Fri, 28 May 2021 16:30:29 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \n> On Wed, Mar 31, 2021 at 6:55 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Tue, Mar 30, 2021 at 8:40 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > I'm happy with the patch, so I'll commit it if there are no objections.\n> >\n> > Pushed.\n> \n> I noticed that rescan of async Appends is broken when\n> do_exec_prune=false, leading to incorrect results on normal builds and\n> the following failure on assertion-enabled builds:\n> \n> TRAP: FailedAssertion(\"node->as_valid_asyncplans == NULL\", File:\n> \"nodeAppend.c\", Line: 1126, PID: 76644)\n> \n> See a test case for this added in the attached. The root cause would\n> be that we call classify_matching_subplans() to re-determine\n> sync/async subplans when called from the first ExecAppend() after the\n> first ReScan, even if do_exec_prune=false, which is incorrect because\n> in that case it is assumed to re-use sync/async subplans determined\n> during the the first ExecAppend() after Init. The attached fixes this\n> issue. (A previous patch also had this issue, so I fixed it, but I\n> think I broke this again when simplifying the patch :-(.) I did a bit\n> of cleanup, and modified ExecReScanAppend() to initialize an async\n> state variable as_nasyncresults to zero, to be sure. I think the\n> variable would have been set to zero before we get to that function,\n> so I don't think we really need to do so, though.\n> \n> I will add this to the open items list for v14.\n\nThe patch drops some \"= NULL\" (initial) initializations when\nnasyncplans == 0. 
AFAICS makeNode() fills the returned memory with\nzeroes but I'm not sure it is our convention to omit the\nintializations.\n\nOtherwise the patch seems to make the code around cleaner.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 May 2021 17:29:21 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "Horiguchi-san,\n\nOn Fri, May 28, 2021 at 5:29 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Fri, 28 May 2021 16:30:29 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > The root cause would\n> > be that we call classify_matching_subplans() to re-determine\n> > sync/async subplans when called from the first ExecAppend() after the\n> > first ReScan, even if do_exec_prune=false, which is incorrect because\n> > in that case it is assumed to re-use sync/async subplans determined\n> > during the the first ExecAppend() after Init.\n\nI noticed I wrote it wrong. If do_exec_prune=false, we would\ndetermine sync/async subplans during ExecInitAppend(), so the “re-use\nsync/async subplans determined during the the first ExecAppend() after\nInit\" part should be corrected as “re-use sync/async subplans\ndetermined during ExecInitAppend()”. Sorry for that.\n\n> The patch drops some \"= NULL\" (initial) initializations when\n> nasyncplans == 0. 
AFAICS makeNode() fills the returned memory with\n> zeroes but I'm not sure it is our convention to omit the\n> intializations.\n\nI’m not sure, but I think we omit it in some cases; for example, we\ndon’t set as_valid_subplans to NULL explicitly in ExecInitAppend(), if\ndo_exec_prune=true.\n\n> Otherwise the patch seems to make the code around cleaner.\n\nThanks for reviewing!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 28 May 2021 22:53:06 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Fri, May 28, 2021 at 10:53 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Fri, May 28, 2021 at 5:29 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > The patch drops some \"= NULL\" (initial) initializations when\n> > nasyncplans == 0. AFAICS makeNode() fills the returned memory with\n> > zeroes but I'm not sure it is our convention to omit the\n> > intializations.\n>\n> I’m not sure, but I think we omit it in some cases; for example, we\n> don’t set as_valid_subplans to NULL explicitly in ExecInitAppend(), if\n> do_exec_prune=true.\n\nOk, I think it would be a good thing to initialize the\npointers/variables to NULL/zero explicitly, so I updated the patch as\nsuch. Barring objections, I'll get the patch committed in a few days.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Tue, 1 Jun 2021 18:30:28 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." 
}, { "msg_contents": "On Tue, May 11, 2021 at 6:55 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Tue, May 11, 2021 at 6:27 PM Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n> > On 11/5/21 12:24, Etsuro Fujita wrote:\n>\n> > >> -> Append (actual rows=3000 loops=1)\n> > >> -> Async Foreign Scan on f1 (actual rows=0 loops=1)\n> > >> -> Async Foreign Scan on f2 (actual rows=0 loops=1)\n> > >> -> Foreign Scan on f3 (actual rows=3000 loops=1)\n> > >>\n> > >> Here we give preference to the synchronous scan. Why?\n> > >\n> > > This would be expected behavior, and the reason is avoid performance\n> > > degradation; you might think it would be better to execute the async\n> > > Foreign Scan nodes more aggressively, but it would require\n> > > waiting/polling for file descriptor events many times, which is\n> > > expensive and might cause performance degradation. I think there is\n> > > room for improvement, though.\n> > Yes, I agree with you. Maybe you can add note in documentation on\n> > async_capable, for example:\n> > \"... Synchronous and asynchronous scanning strategies can be mixed by\n> > optimizer in one scan plan of a partitioned table or an 'UNION ALL'\n> > command. For performance reasons, synchronous scans executes before the\n> > first of async scan. ...\"\n>\n> +1 But I think this is an independent issue, so I think it would be\n> better to address the issue separately.\n\nI think that since postgres-fdw.sgml would be for users rather than\ndevelopers, unlike fdwhandler.sgml, it would be better to explain this\nmore in a not-too-technical way. So how about something like this?\n\nAsynchronous execution is applied even when an Append node contains\nsubplan(s) executed synchronously as well as subplan(s) executed\nasynchronously. 
In that case, if the asynchronous subplans are ones\nexecuted using postgres_fdw, tuples from the asynchronous subplans are\nnot returned until after at least one synchronous subplan returns all\ntuples, as that subplan is executed while the asynchronous subplans\nare waiting for the results of queries sent to foreign servers. This\nbehavior might change in a future release.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 3 Jun 2021 18:49:59 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 3/6/21 14:49, Etsuro Fujita wrote:\n> On Tue, May 11, 2021 at 6:55 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>> On Tue, May 11, 2021 at 6:27 PM Andrey Lepikhov\n>> <a.lepikhov@postgrespro.ru> wrote:\n>>> On 11/5/21 12:24, Etsuro Fujita wrote:\n>>\n>>>>> -> Append (actual rows=3000 loops=1)\n>>>>> -> Async Foreign Scan on f1 (actual rows=0 loops=1)\n>>>>> -> Async Foreign Scan on f2 (actual rows=0 loops=1)\n>>>>> -> Foreign Scan on f3 (actual rows=3000 loops=1)\n>>>>>\n>>>>> Here we give preference to the synchronous scan. Why?\n>>>>\n>>>> This would be expected behavior, and the reason is avoid performance\n>>>> degradation; you might think it would be better to execute the async\n>>>> Foreign Scan nodes more aggressively, but it would require\n>>>> waiting/polling for file descriptor events many times, which is\n>>>> expensive and might cause performance degradation. I think there is\n>>>> room for improvement, though.\n>>> Yes, I agree with you. Maybe you can add note in documentation on\n>>> async_capable, for example:\n>>> \"... Synchronous and asynchronous scanning strategies can be mixed by\n>>> optimizer in one scan plan of a partitioned table or an 'UNION ALL'\n>>> command. For performance reasons, synchronous scans executes before the\n>>> first of async scan. 
...\"\n>>\n>> +1 But I think this is an independent issue, so I think it would be\n>> better to address the issue separately.\n> \n> I think that since postgres-fdw.sgml would be for users rather than\n> developers, unlike fdwhandler.sgml, it would be better to explain this\n> more in a not-too-technical way. So how about something like this?\n> \n> Asynchronous execution is applied even when an Append node contains\n> subplan(s) executed synchronously as well as subplan(s) executed\n> asynchronously. In that case, if the asynchronous subplans are ones\n> executed using postgres_fdw, tuples from the asynchronous subplans are\n> not returned until after at least one synchronous subplan returns all\n> tuples, as that subplan is executed while the asynchronous subplans\n> are waiting for the results of queries sent to foreign servers. This\n> behavior might change in a future release.\nGood, this text is clear for me.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Thu, 3 Jun 2021 20:33:56 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Tue, Jun 1, 2021 at 6:30 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Fri, May 28, 2021 at 10:53 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Fri, May 28, 2021 at 5:29 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > The patch drops some \"= NULL\" (initial) initializations when\n> > > nasyncplans == 0. 
AFAICS makeNode() fills the returned memory with\n> > > zeroes but I'm not sure it is our convention to omit the\n> > > intializations.\n> >\n> > I’m not sure, but I think we omit it in some cases; for example, we\n> > don’t set as_valid_subplans to NULL explicitly in ExecInitAppend(), if\n> > do_exec_prune=true.\n>\n> Ok, I think it would be a good thing to initialize the\n> pointers/variables to NULL/zero explicitly, so I updated the patch as\n> such. Barring objections, I'll get the patch committed in a few days.\n\nI'm replanning to push this early next week for some reason.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 4 Jun 2021 19:26:05 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Fri, Jun 4, 2021 at 7:26 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Tue, Jun 1, 2021 at 6:30 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Fri, May 28, 2021 at 10:53 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > On Fri, May 28, 2021 at 5:29 PM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > > The patch drops some \"= NULL\" (initial) initializations when\n> > > > nasyncplans == 0. AFAICS makeNode() fills the returned memory with\n> > > > zeroes but I'm not sure it is our convention to omit the\n> > > > intializations.\n> > >\n> > > I’m not sure, but I think we omit it in some cases; for example, we\n> > > don’t set as_valid_subplans to NULL explicitly in ExecInitAppend(), if\n> > > do_exec_prune=true.\n> >\n> > Ok, I think it would be a good thing to initialize the\n> > pointers/variables to NULL/zero explicitly, so I updated the patch as\n> > such. Barring objections, I'll get the patch committed in a few days.\n>\n> I'm replanning to push this early next week for some reason.\n\nPushed. 
I will close this in the open items list for v14.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 7 Jun 2021 12:57:25 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Fri, Jun 4, 2021 at 12:33 AM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> Good, this text is clear for me.\n\nCool! I created a patch for that, which I'm attaching. I'm planning\nto commit the patch.\n\nThanks for reviewing!\n\nBest regards,\nEtsuro Fujita", "msg_date": "Mon, 7 Jun 2021 18:36:39 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On Mon, Jun 7, 2021 at 6:36 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I created a patch for that, which I'm attaching. I'm planning\n> to commit the patch.\n\nDone.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 8 Jun 2021 13:57:28 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Asynchronous Append on postgres_fdw nodes." }, { "msg_contents": "On 11/5/21 06:55, Zhihong Yu wrote:\n> On Mon, May 10, 2021 at 8:45 PM Andrey Lepikhov \n> <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n> It seems the if statement is not needed: you can directly assign false \n> to  subplan->async_capable.\nI have completely rewritten this patch.\n\nMain idea:\n\nThe async_capable field of a plan node inform us that this node could \nwork in async mode. Each node sets this field based on its own logic.\nThe actual mode of a node is defined by the async_capable of PlanState \nstructure. 
It is made at the executor initialization stage.\nIn this patch, only an append node could define async behaviour for its \nsubplans.\nWith such approach the IsForeignPathAsyncCapable routine become \nunecessary, I think.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Wed, 30 Jun 2021 07:50:01 +0300", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Wed, Jun 30, 2021 at 1:50 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> I have completely rewritten this patch.\n>\n> Main idea:\n>\n> The async_capable field of a plan node inform us that this node could\n> work in async mode. Each node sets this field based on its own logic.\n> The actual mode of a node is defined by the async_capable of PlanState\n> structure. It is made at the executor initialization stage.\n> In this patch, only an append node could define async behaviour for its\n> subplans.\n\nI finally reviewed the patch. One thing I noticed about the patch is\nthat it would break ordered Appends. 
Here is such an example using\nthe patch:\n\ncreate table pt (a int) partition by range (a);\ncreate table loct1 (a int);\ncreate table loct2 (a int);\ncreate foreign table p1 partition of pt for values from (10) to (20)\nserver loopback1 options (table_name 'loct1');\ncreate foreign table p2 partition of pt for values from (20) to (30)\nserver loopback2 options (table_name 'loct2');\n\nexplain verbose select * from pt order by a;\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Append (cost=200.00..440.45 rows=5850 width=4)\n -> Async Foreign Scan on public.p1 pt_1 (cost=100.00..205.60\nrows=2925 width=4)\n Output: pt_1.a\n Remote SQL: SELECT a FROM public.loct1 ORDER BY a ASC NULLS LAST\n -> Async Foreign Scan on public.p2 pt_2 (cost=100.00..205.60\nrows=2925 width=4)\n Output: pt_2.a\n Remote SQL: SELECT a FROM public.loct2 ORDER BY a ASC NULLS LAST\n(7 rows)\n\nThis would not always provide tuples in the required order, as async\nexecution would provide them from the subplans rather randomly. I\nthink it would not only be too late but be not efficient to do the\nplanning work at execution time (consider executing generic plans!),\nso I think we should avoid doing so. (The cost of doing that work for\nsimple foreign scans is small, but if we support async execution for\nupper plan nodes such as NestLoop as discussed before, I think the\ncost for such plan nodes would not be small anymore.)\n\nTo just execute what was planned at execution time, I think we should\nreturn to the patch in [1]. The patch was created for Horiguchi-san’s\nasync-execution patch, so I modified it to work with HEAD, and added a\nsimplified version of your test cases. 
Please find attached a patch.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/7fe10f95-ac6c-c81d-a9d3-227493eb9055@postgrespro.ru", "msg_date": "Mon, 23 Aug 2021 18:18:43 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On 8/23/21 2:18 PM, Etsuro Fujita wrote:\n> To just execute what was planned at execution time, I think we should\n> return to the patch in [1]. The patch was created for Horiguchi-san’s\n> async-execution patch, so I modified it to work with HEAD, and added a\n> simplified version of your test cases. Please find attached a patch.\n> [1] https://www.postgresql.org/message-id/7fe10f95-ac6c-c81d-a9d3-227493eb9055@postgrespro.ru\nI agree, this way is more safe. I tried to search for another approach, \nbecause here isn't general solution: for each plan node we should \nimplement support of asynchronous behaviour.\nBut for practical use, for small set of nodes, it will work good. I \nhaven't any objections for this patch.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Mon, 30 Aug 2021 13:36:38 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Mon, Aug 30, 2021 at 5:36 PM Andrey V. Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 8/23/21 2:18 PM, Etsuro Fujita wrote:\n> > To just execute what was planned at execution time, I think we should\n> > return to the patch in [1]. The patch was created for Horiguchi-san’s\n> > async-execution patch, so I modified it to work with HEAD, and added a\n> > simplified version of your test cases. 
Please find attached a patch.\n\n> > [1] https://www.postgresql.org/message-id/7fe10f95-ac6c-c81d-a9d3-227493eb9055@postgrespro.ru\n\n> I agree, this way is safer. I tried to search for another approach,\n> because there isn't a general solution here: for each plan node we would\n> have to implement support for asynchronous behaviour.\n\nI think so too.\n\n> But for practical use, for a small set of nodes, it will work well. I\n> have no objections to this patch.\n\nOK\n\nTo allow async execution in a bit more cases, I modified the patch a\nbit further: a ProjectionPath put directly above an async-capable\nForeignPath would also be considered async-capable as ForeignScan can\nproject and no separate Result is needed in that case, so I modified\nmark_async_capable_plan() as such, and added test cases to the\npostgres_fdw regression test. Attached is an updated version of the\npatch.\n\nThanks for the review!\n\nBest regards,\nEtsuro Fujita", "msg_date": "Mon, 30 Aug 2021 18:52:12 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "Etsuro Fujita wrote on 2021-08-30 12:52:\n> On Mon, Aug 30, 2021 at 5:36 PM Andrey V. Lepikhov\n> \n> To allow async execution in a bit more cases, I modified the patch a\n> bit further: a ProjectionPath put directly above an async-capable\n> ForeignPath would also be considered async-capable as ForeignScan can\n> project and no separate Result is needed in that case, so I modified\n> mark_async_capable_plan() as such, and added test cases to the\n> postgres_fdw regression test. 
Attached is an updated version of the\n> patch.\n> \n\nHi.\n\nThe patch looks good to me and seems to work as expected.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Wed, 15 Sep 2021 09:40:46 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "Hi Alexander,\n\nOn Wed, Sep 15, 2021 at 3:40 PM Alexander Pyhalov\n<a.pyhalov@postgrespro.ru> wrote:\n> Etsuro Fujita wrote on 2021-08-30 12:52:\n> > To allow async execution in a bit more cases, I modified the patch a\n> > bit further: a ProjectionPath put directly above an async-capable\n> > ForeignPath would also be considered async-capable as ForeignScan can\n> > project and no separate Result is needed in that case, so I modified\n> > mark_async_capable_plan() as such, and added test cases to the\n> > postgres_fdw regression test. Attached is an updated version of the\n> > patch.\n\n> The patch looks good to me and seems to work as expected.\n\nThanks for reviewing! I’m planning to commit the patch.\n\nSorry for the long delay.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Sun, 13 Mar 2022 18:39:02 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Sun, Mar 13, 2022 at 6:39 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Wed, Sep 15, 2021 at 3:40 PM Alexander Pyhalov\n> <a.pyhalov@postgrespro.ru> wrote:\n> > The patch looks good to me and seems to work as expected.\n>\n> I’m planning to commit the patch.\n\nI polished the patch a bit:\n\n* Reordered a bit of code in create_append_plan() in logical order (no\nfunctional changes).\n* Added more comments.\n* Added/Tweaked regression test cases.\n\nAlso, I added the commit message. 
Attached is a new version of the\npatch. Barring objections, I’ll commit this.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Sun, 3 Apr 2022 19:29:11 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Sun, Apr 3, 2022 at 3:28 AM Etsuro Fujita <etsuro.fujita@gmail.com>\nwrote:\n\n> On Sun, Mar 13, 2022 at 6:39 PM Etsuro Fujita <etsuro.fujita@gmail.com>\n> wrote:\n> > On Wed, Sep 15, 2021 at 3:40 PM Alexander Pyhalov\n> > <a.pyhalov@postgrespro.ru> wrote:\n> > > The patch looks good to me and seems to work as expected.\n> >\n> > I’m planning to commit the patch.\n>\n> I polished the patch a bit:\n>\n> * Reordered a bit of code in create_append_plan() in logical order (no\n> functional changes).\n> * Added more comments.\n> * Added/Tweaked regression test cases.\n>\n> Also, I added the commit message. Attached is a new version of the\n> patch. 
Barring objections, I’ll commit this.\n>\n> Best regards,\n> Etsuro Fujita\n>\nHi,\n\n+ WRITE_ENUM_FIELD(status, SubqueryScanStatus);\n\nLooks like the new field can be named subquerystatus - this way its purpose\nis clearer.\n\n+ * mark_async_capable_plan\n+ * Check whether a given Path node is async-capable, and if so, mark\nthe\n+ * Plan node created from it as such.\n\nPlease add comment explaining what the return value means.\n\n+ if (!IsA(plan, Result) &&\n+ mark_async_capable_plan(plan,\n+ ((ProjectionPath *) path)->subpath))\n+ return true;\n\nby returning true, `plan->async_capable = true;` is skipped.\nIs that intentional ?\n\nCheers\n", "msg_date": "Sun, 3 Apr 2022 07:42:37 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "Hi Zhihong,\n\nOn Sun, Apr 3, 2022 at 11:38 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> + WRITE_ENUM_FIELD(status, SubqueryScanStatus);\n>\n> Looks like the new field can be named subquerystatus - this way its purpose is clearer.\n\nI agree that “status” is too general. 
“subquerystatus” might be good,\nbut I’d like to propose “scanstatus” instead, because I think this\nwould be consistent with the naming of the RowMarkType-enum member\n“markType” in PlanRowMark defined in the same file.\n\n> + * mark_async_capable_plan\n> + * Check whether a given Path node is async-capable, and if so, mark the\n> + * Plan node created from it as such.\n>\n> Please add comment explaining what the return value means.\n\nOk, how about something like this?\n\n“Check whether a given Path node is async-capable, and if so, mark the\nPlan node created from it as such and return true; otherwise, return\nfalse.”\n\n> + if (!IsA(plan, Result) &&\n> + mark_async_capable_plan(plan,\n> + ((ProjectionPath *) path)->subpath))\n> + return true;\n>\n> by returning true, `plan->async_capable = true;` is skipped.\n> Is that intentional ?\n\nThat is intentional; we don’t need to set the async_capable flag\nbecause in that case the flag would already have been set by the above\nmark_async_capable_plan(). Note that we pass “plan” to that function.\n\nThanks for reviewing!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 4 Apr 2022 13:06:40 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On 4/3/22 15:29, Etsuro Fujita wrote:\n> On Sun, Mar 13, 2022 at 6:39 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>> On Wed, Sep 15, 2021 at 3:40 PM Alexander Pyhalov\n>> <a.pyhalov@postgrespro.ru> wrote:\n>>> The patch looks good to me and seems to work as expected.\n>>\n>> I’m planning to commit the patch.\n> \n> I polished the patch a bit:\n> \n> * Reordered a bit of code in create_append_plan() in logical order (no\n> functional changes).\n> * Added more comments.\n> * Added/Tweaked regression test cases.\n> \n> Also, I added the commit message. Attached is a new version of the\n> patch. 
Barring objections, I’ll commit this.\n\nSorry for the late answer - I was on vacation.\nI looked through this patch - it looks much more stable.\nBut, as far as I remember, some problems were found with a previous\nversion on the TPC-H test. I want to play a bit with TPC-H and with\nparameterized plans.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Mon, 4 Apr 2022 14:30:20 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Mon, Apr 4, 2022 at 1:06 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Sun, Apr 3, 2022 at 11:38 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > + WRITE_ENUM_FIELD(status, SubqueryScanStatus);\n> >\n> > Looks like the new field can be named subquerystatus - this way its purpose is clearer.\n>\n> I agree that “status” is too general. “subquerystatus” might be good,\n> but I’d like to propose “scanstatus” instead, because I think this\n> would be consistent with the naming of the RowMarkType-enum member\n> “markType” in PlanRowMark defined in the same file.\n>\n> > + * mark_async_capable_plan\n> > + * Check whether a given Path node is async-capable, and if so, mark the\n> > + * Plan node created from it as such.\n> >\n> > Please add comment explaining what the return value means.\n>\n> Ok, how about something like this?\n>\n> “Check whether a given Path node is async-capable, and if so, mark the\n> Plan node created from it as such and return true; otherwise, return\n> false.”\n\nI have committed the patch after modifying it as such. 
(I think we\ncan improve these later, if necessary.)\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 6 Apr 2022 15:58:29 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Mon, Apr 4, 2022 at 6:30 PM Andrey V. Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 4/3/22 15:29, Etsuro Fujita wrote:\n> > Also, I added the commit message. Attached is a new version of the\n> > patch. Barring objections, I’ll commit this.\n\n> I looked through this patch - looks much more stable.\n> But, as far as I remember, on previous version some problems were found\n> out on the TPC-H test. I want to play a bit with the TPC-H and with\n> parameterized plans.\n\nI might be missing something, but I don't see any problems, so I have\ncommitted the patch after some modifications. If you find them,\nplease let me know.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 6 Apr 2022 16:05:30 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Wed, Apr 06, 2022 at 03:58:29PM +0900, Etsuro Fujita wrote:\n> I have committed the patch after modifying it as such. (I think we\n> can improve these later, if necessary.)\n\nThis patch seems to be causing the planner to crash.\nHere's a query reduced from sqlsmith.\n\n| explain SELECT 1 FROM information_schema.constraint_column_usage WHERE 1 <= pg_trigger_depth();\n\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x000055b4396a2edf in trivial_subqueryscan (plan=0x7f4219ed93b0) at ../../../../src/include/nodes/pg_list.h:151\n151 return l ? 
l->length : 0;\n(gdb) bt\n#0 0x000055b4396a2edf in trivial_subqueryscan (plan=0x7f4219ed93b0) at ../../../../src/include/nodes/pg_list.h:151\n#1 0x000055b43968af89 in mark_async_capable_plan (plan=plan@entry=0x7f4219ed93b0, path=path@entry=0x7f4219e89538) at createplan.c:1132\n#2 0x000055b439691924 in create_append_plan (root=root@entry=0x55b43affb2b0, best_path=best_path@entry=0x7f4219ed0cb8, flags=flags@entry=0) at createplan.c:1329\n#3 0x000055b43968fa21 in create_plan_recurse (root=root@entry=0x55b43affb2b0, best_path=best_path@entry=0x7f4219ed0cb8, flags=flags@entry=0) at createplan.c:421\n#4 0x000055b43968f974 in create_projection_plan (root=root@entry=0x55b43affb2b0, best_path=best_path@entry=0x7f4219ed0f60, flags=flags@entry=1) at createplan.c:2039\n#5 0x000055b43968fa6f in create_plan_recurse (root=root@entry=0x55b43affb2b0, best_path=0x7f4219ed0f60, flags=flags@entry=1) at createplan.c:433\n#6 0x000055b439690221 in create_plan (root=root@entry=0x55b43affb2b0, best_path=<optimized out>) at createplan.c:348\n#7 0x000055b4396a1451 in standard_planner (parse=0x55b43af05e28, query_string=<optimized out>, cursorOptions=2048, boundParams=0x0) at planner.c:413\n#8 0x000055b4396a19c1 in planner (parse=parse@entry=0x55b43af05e28, query_string=query_string@entry=0x55b43af04c40 \"SELECT 1 FROM information_schema.constraint_column_usage WHERE 1 > pg_trigger_depth();\", \n cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0) at planner.c:277\n#9 0x000055b439790c78 in pg_plan_query (querytree=querytree@entry=0x55b43af05e28, query_string=query_string@entry=0x55b43af04c40 \"SELECT 1 FROM information_schema.constraint_column_usage WHERE 1 > pg_trigger_depth();\", \n cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0) at postgres.c:883\n#10 0x000055b439790d54 in pg_plan_queries (querytrees=0x55b43afdd528, query_string=query_string@entry=0x55b43af04c40 \"SELECT 1 FROM information_schema.constraint_column_usage WHERE 1 > 
pg_trigger_depth();\", \n cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0) at postgres.c:975\n#11 0x000055b439791239 in exec_simple_query (query_string=query_string@entry=0x55b43af04c40 \"SELECT 1 FROM information_schema.constraint_column_usage WHERE 1 > pg_trigger_depth();\") at postgres.c:1169\n#12 0x000055b439793183 in PostgresMain (dbname=<optimized out>, username=<optimized out>) at postgres.c:4542\n#13 0x000055b4396e6af7 in BackendRun (port=port@entry=0x55b43af2ffe0) at postmaster.c:4489\n#14 0x000055b4396e9c03 in BackendStartup (port=port@entry=0x55b43af2ffe0) at postmaster.c:4217\n#15 0x000055b4396e9e4a in ServerLoop () at postmaster.c:1791\n#16 0x000055b4396eb401 in PostmasterMain (argc=7, argv=<optimized out>) at postmaster.c:1463\n#17 0x000055b43962b4df in main (argc=7, argv=0x55b43aeff0c0) at main.c:202\n\nActually, the original query failed like this:\n#2 0x000055b4398e9f90 in ExceptionalCondition (conditionName=conditionName@entry=0x55b439a61238 \"plan->scanstatus == SUBQUERY_SCAN_UNKNOWN\", errorType=errorType@entry=0x55b43994b00b \"FailedAssertion\", \n#3 0x000055b4396a2ecf in trivial_subqueryscan (plan=0x55b43b59cac8) at setrefs.c:1367\n\n\n", "msg_date": "Fri, 8 Apr 2022 07:43:38 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Fri, Apr 8, 2022 at 5:43 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Wed, Apr 06, 2022 at 03:58:29PM +0900, Etsuro Fujita wrote:\n> > I have committed the patch after modifying it as such. 
(I think we\n> > can improve these later, if necessary.)\n>\n> This patch seems to be causing the planner to crash.\n> Here's a query reduced from sqlsmith.\n>\n> | explain SELECT 1 FROM information_schema.constraint_column_usage WHERE 1\n> <= pg_trigger_depth();\n>\n> Program terminated with signal SIGSEGV, Segmentation fault.\n> #0 0x000055b4396a2edf in trivial_subqueryscan (plan=0x7f4219ed93b0) at\n> ../../../../src/include/nodes/pg_list.h:151\n> 151 return l ? l->length : 0;\n> (gdb) bt\n> #0 0x000055b4396a2edf in trivial_subqueryscan (plan=0x7f4219ed93b0) at\n> ../../../../src/include/nodes/pg_list.h:151\n> #1 0x000055b43968af89 in mark_async_capable_plan (plan=plan@entry=0x7f4219ed93b0,\n> path=path@entry=0x7f4219e89538) at createplan.c:1132\n> #2 0x000055b439691924 in create_append_plan (root=root@entry=0x55b43affb2b0,\n> best_path=best_path@entry=0x7f4219ed0cb8, flags=flags@entry=0) at\n> createplan.c:1329\n> #3 0x000055b43968fa21 in create_plan_recurse (root=root@entry=0x55b43affb2b0,\n> best_path=best_path@entry=0x7f4219ed0cb8, flags=flags@entry=0) at\n> createplan.c:421\n> #4 0x000055b43968f974 in create_projection_plan (root=root@entry=0x55b43affb2b0,\n> best_path=best_path@entry=0x7f4219ed0f60, flags=flags@entry=1) at\n> createplan.c:2039\n> #5 0x000055b43968fa6f in create_plan_recurse (root=root@entry=0x55b43affb2b0,\n> best_path=0x7f4219ed0f60, flags=flags@entry=1) at createplan.c:433\n> #6 0x000055b439690221 in create_plan (root=root@entry=0x55b43affb2b0,\n> best_path=<optimized out>) at createplan.c:348\n> #7 0x000055b4396a1451 in standard_planner (parse=0x55b43af05e28,\n> query_string=<optimized out>, cursorOptions=2048, boundParams=0x0) at\n> planner.c:413\n> #8 0x000055b4396a19c1 in planner (parse=parse@entry=0x55b43af05e28,\n> query_string=query_string@entry=0x55b43af04c40 \"SELECT 1 FROM\n> information_schema.constraint_column_usage WHERE 1 > pg_trigger_depth();\",\n> cursorOptions=cursorOptions@entry=2048, 
boundParams=boundParams@entry=0x0)\n> at planner.c:277\n> #9 0x000055b439790c78 in pg_plan_query (querytree=querytree@entry=0x55b43af05e28,\n> query_string=query_string@entry=0x55b43af04c40 \"SELECT 1 FROM\n> information_schema.constraint_column_usage WHERE 1 > pg_trigger_depth();\",\n> cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0)\n> at postgres.c:883\n> #10 0x000055b439790d54 in pg_plan_queries (querytrees=0x55b43afdd528,\n> query_string=query_string@entry=0x55b43af04c40 \"SELECT 1 FROM\n> information_schema.constraint_column_usage WHERE 1 > pg_trigger_depth();\",\n> cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0)\n> at postgres.c:975\n> #11 0x000055b439791239 in exec_simple_query\n> (query_string=query_string@entry=0x55b43af04c40 \"SELECT 1 FROM\n> information_schema.constraint_column_usage WHERE 1 > pg_trigger_depth();\")\n> at postgres.c:1169\n> #12 0x000055b439793183 in PostgresMain (dbname=<optimized out>,\n> username=<optimized out>) at postgres.c:4542\n> #13 0x000055b4396e6af7 in BackendRun (port=port@entry=0x55b43af2ffe0) at\n> postmaster.c:4489\n> #14 0x000055b4396e9c03 in BackendStartup (port=port@entry=0x55b43af2ffe0)\n> at postmaster.c:4217\n> #15 0x000055b4396e9e4a in ServerLoop () at postmaster.c:1791\n> #16 0x000055b4396eb401 in PostmasterMain (argc=7, argv=<optimized out>) at\n> postmaster.c:1463\n> #17 0x000055b43962b4df in main (argc=7, argv=0x55b43aeff0c0) at main.c:202\n>\n> Actually, the original query failed like this:\n> #2 0x000055b4398e9f90 in ExceptionalCondition\n> (conditionName=conditionName@entry=0x55b439a61238 \"plan->scanstatus ==\n> SUBQUERY_SCAN_UNKNOWN\", errorType=errorType@entry=0x55b43994b00b\n> \"FailedAssertion\",\n> #3 0x000055b4396a2ecf in trivial_subqueryscan (plan=0x55b43b59cac8) at\n> setrefs.c:1367\n>\n\nHi,\nI logged the value of plan->scanstatus before the assertion :\n\n2022-04-08 16:20:59.601 UTC [26325] LOG: scan status 0\n2022-04-08 16:20:59.601 UTC [26325] 
STATEMENT: explain SELECT 1 FROM\ninformation_schema.constraint_column_usage WHERE 1 <= pg_trigger_depth();\n2022-04-08 16:20:59.796 UTC [26296] LOG: server process (PID 26325) was\nterminated by signal 11: Segmentation fault\n\nIt seems its value was SUBQUERY_SCAN_UNKNOWN.\n\nStill trying to find out the cause for the crash.\n", "msg_date": "Fri, 8 Apr 2022 09:28:43 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "Hi,\n\nOn Fri, Apr 8, 2022 at 9:43 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> This patch seems to be causing the planner to crash.\n> Here's a query reduced from sqlsmith.\n>\n> | explain SELECT 1 FROM information_schema.constraint_column_usage WHERE 1 <= pg_trigger_depth();\n>\n> Program terminated with signal SIGSEGV, Segmentation fault.\n\nReproduced. 
Will look into this.\n\nThanks for the report!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Sat, 9 Apr 2022 01:58:53 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Sat, Apr 9, 2022 at 1:58 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Fri, Apr 8, 2022 at 9:43 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > This patch seems to be causing the planner to crash.\n> > Here's a query reduced from sqlsmith.\n> >\n> > | explain SELECT 1 FROM information_schema.constraint_column_usage WHERE 1 <= pg_trigger_depth();\n> >\n> > Program terminated with signal SIGSEGV, Segmentation fault.\n>\n> Reproduced. Will look into this.\n\nI think the cause of this is that mark_async_capable_plan() failed to\ntake into account that when the given path is a SubqueryScanPath or\nForeignPath, the given corresponding plan might include a gating\nResult node that evaluates pseudoconstant quals. My oversight. :-(\nAttached is a patch for fixing that. 
I think v14 has the same issue,\nso I think we need backpatching.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Sun, 10 Apr 2022 19:43:48 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "Hi,\n\nOn Sat, Apr 9, 2022 at 1:24 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Fri, Apr 8, 2022 at 5:43 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> This patch seems to be causing the planner to crash.\n>> Here's a query reduced from sqlsmith.\n>>\n>> | explain SELECT 1 FROM information_schema.constraint_column_usage WHERE 1 <= pg_trigger_depth();\n>>\n>> Program terminated with signal SIGSEGV, Segmentation fault.\n\n> I logged the value of plan->scanstatus before the assertion :\n>\n> 2022-04-08 16:20:59.601 UTC [26325] LOG: scan status 0\n> 2022-04-08 16:20:59.601 UTC [26325] STATEMENT: explain SELECT 1 FROM information_schema.constraint_column_usage WHERE 1 <= pg_trigger_depth();\n> 2022-04-08 16:20:59.796 UTC [26296] LOG: server process (PID 26325) was terminated by signal 11: Segmentation fault\n>\n> It seems its value was SUBQUERY_SCAN_UNKNOWN.\n>\n> Still trying to find out the cause for the crash.\n\nI think the cause is an oversight in mark_async_capable_plan(). 
See [1].\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK15NkuaVo0Fu_0TfoCpPPJaJi4OMLzEQtkE6Bt6YT52fPQ%40mail.gmail.com\n\n\n", "msg_date": "Sun, 10 Apr 2022 19:58:30 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Sun, Apr 10, 2022 at 3:42 AM Etsuro Fujita <etsuro.fujita@gmail.com>\nwrote:\n\n> On Sat, Apr 9, 2022 at 1:58 AM Etsuro Fujita <etsuro.fujita@gmail.com>\n> wrote:\n> > On Fri, Apr 8, 2022 at 9:43 PM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> > > This patch seems to be causing the planner to crash.\n> > > Here's a query reduced from sqlsmith.\n> > >\n> > > | explain SELECT 1 FROM information_schema.constraint_column_usage\n> WHERE 1 <= pg_trigger_depth();\n> > >\n> > > Program terminated with signal SIGSEGV, Segmentation fault.\n> >\n> > Reproduced. Will look into this.\n>\n> I think the cause of this is that mark_async_capable_plan() failed to\n> take into account that when the given path is a SubqueryScanPath or\n> ForeignPath, the given corresponding plan might include a gating\n> Result node that evaluates pseudoconstant quals. My oversight. :-(\n> Attached is a patch for fixing that. I think v14 has the same issue,\n> so I think we need backpatching.\n>\n> Best regards,\n> Etsuro Fujita\n>\nHi,\nLooking at the second hunk of the patch:\n FdwRoutine *fdwroutine = path->parent->fdwroutine;\n...\n+ if (IsA(plan, Result))\n+ return false;\n\nIt seems the check of whether plan is a Result node can be lifted ahead of\nthe switch statement (i.e. 
to the beginning of mark_async_capable_plan).\n\nThis way, we don't have to check for every case in the switch statement.\n\nCheers\n", "msg_date": "Sun, 10 Apr 2022 06:46:25 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Sun, Apr 10, 2022 at 07:43:48PM +0900, Etsuro Fujita wrote:\n> On Sat, Apr 9, 2022 at 1:58 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Fri, Apr 8, 2022 at 9:43 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > This patch seems to be causing the planner to crash.\n> > > Here's a query reduced from sqlsmith.\n> > >\n> > > | explain SELECT 1 FROM information_schema.constraint_column_usage WHERE 1 <= pg_trigger_depth();\n> > >\n> > > Program terminated with signal SIGSEGV, Segmentation fault.\n> >\n> > Reproduced.  Will look into this.\n> \n> I think the cause of this is that mark_async_capable_plan() failed to\n> take into account that when the given path is a SubqueryScanPath or\n> ForeignPath, the given corresponding plan might include a gating\n> Result node that evaluates pseudoconstant quals.  My oversight.  :-(\n> Attached is a patch for fixing that.  I think v14 has the same issue,\n> so I think we need backpatching.\n\nThanks - this seems to resolve the issue.\n\nOn Sun, Apr 10, 2022 at 06:46:25AM -0700, Zhihong Yu wrote:\n> Looking at the second hunk of the patch:\n>                 FdwRoutine *fdwroutine = path->parent->fdwroutine;\n> ...\n> +               if (IsA(plan, Result))\n> +                   return false;\n> \n> It seems the check of whether plan is a Result node can be lifted ahead of\n> the switch statement (i.e. 
to the beginning of mark_async_capable_plan).\n> \n> This way, we don't have to check for every case in the switch statement.\n\nI think you misread it - the other branch says: if (*not* IsA())\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 10 Apr 2022 21:41:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Sun, Apr 10, 2022 at 7:41 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Sun, Apr 10, 2022 at 07:43:48PM +0900, Etsuro Fujita wrote:\n> > On Sat, Apr 9, 2022 at 1:58 AM Etsuro Fujita <etsuro.fujita@gmail.com>\n> wrote:\n> > > On Fri, Apr 8, 2022 at 9:43 PM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> > > > This patch seems to be causing the planner to crash.\n> > > > Here's a query reduced from sqlsmith.\n> > > >\n> > > > | explain SELECT 1 FROM information_schema.constraint_column_usage\n> WHERE 1 <= pg_trigger_depth();\n> > > >\n> > > > Program terminated with signal SIGSEGV, Segmentation fault.\n> > >\n> > > Reproduced. Will look into this.\n> >\n> > I think the cause of this is that mark_async_capable_plan() failed to\n> > take into account that when the given path is a SubqueryScanPath or\n> > ForeignPath, the given corresponding plan might include a gating\n> > Result node that evaluates pseudoconstant quals. My oversight. :-(\n> > Attached is a patch for fixing that. I think v14 has the same issue,\n> > so I think we need backpatching.\n>\n> Thanks - this seems to resolve the issue.\n>\n> On Sun, Apr 10, 2022 at 06:46:25AM -0700, Zhihong Yu wrote:\n> > Looking at the second hunk of the patch:\n> > FdwRoutine *fdwroutine = path->parent->fdwroutine;\n> > ...\n> > + if (IsA(plan, Result))\n> > + return false;\n> >\n> > It seems the check of whether plan is a Result node can be lifted ahead\n> of\n> > the switch statement (i.e. 
to the beginning of mark_async_capable_plan).\n> >\n> > This way, we don't have to check for every case in the switch statement.\n>\n> I think you misread it - the other branch says: if (*not* IsA())\n>\n> No, I didn't misread:\n\n if (!IsA(plan, Result) &&\n mark_async_capable_plan(plan,\n ((ProjectionPath *) path)->subpath))\n return true;\n return false;\n\nIf the plan is Result node, false would be returned.\nSo the check can be lifted to the beginning of the func.\n\nCheers\n\nOn Sun, Apr 10, 2022 at 7:41 PM Justin Pryzby <pryzby@telsasoft.com> wrote:On Sun, Apr 10, 2022 at 07:43:48PM +0900, Etsuro Fujita wrote:\n> On Sat, Apr 9, 2022 at 1:58 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Fri, Apr 8, 2022 at 9:43 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > This patch seems to be causing the planner to crash.\n> > > Here's a query reduced from sqlsmith.\n> > >\n> > > | explain SELECT 1 FROM information_schema.constraint_column_usage WHERE 1 <= pg_trigger_depth();\n> > >\n> > > Program terminated with signal SIGSEGV, Segmentation fault.\n> >\n> > Reproduced.  Will look into this.\n> \n> I think the cause of this is that mark_async_capable_plan() failed to\n> take into account that when the given path is a SubqueryScanPath or\n> ForeignPath, the given corresponding plan might include a gating\n> Result node that evaluates pseudoconstant quals.  My oversight.  :-(\n> Attached is a patch for fixing that.  I think v14 has the same issue,\n> so I think we need backpatching.\n\nThanks - this seems to resolve the issue.\n\nOn Sun, Apr 10, 2022 at 06:46:25AM -0700, Zhihong Yu wrote:\n> Looking at the second hunk of the patch:\n>                 FdwRoutine *fdwroutine = path->parent->fdwroutine;\n> ...\n> +               if (IsA(plan, Result))\n> +                   return false;\n> \n> It seems the check of whether plan is a Result node can be lifted ahead of\n> the switch statement (i.e. 
to the beginning of mark_async_capable_plan).\n> \n> This way, we don't have to check for every case in the switch statement.\n\nI think you misread it - the other branch says: if (*not* IsA())No, I didn't misread:            if (!IsA(plan, Result) &&                mark_async_capable_plan(plan,                                        ((ProjectionPath *) path)->subpath))                return true;            return false;If the plan is Result node, false would be returned.So the check can be lifted to the beginning of the func.Cheers", "msg_date": "Sun, 10 Apr 2022 19:48:35 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Mon, Apr 11, 2022 at 11:44 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Sun, Apr 10, 2022 at 7:41 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> On Sun, Apr 10, 2022 at 06:46:25AM -0700, Zhihong Yu wrote:\n>> > Looking at the second hunk of the patch:\n>> > FdwRoutine *fdwroutine = path->parent->fdwroutine;\n>> > ...\n>> > + if (IsA(plan, Result))\n>> > + return false;\n>> >\n>> > It seems the check of whether plan is a Result node can be lifted ahead of\n>> > the switch statement (i.e. to the beginning of mark_async_capable_plan).\n>> >\n>> > This way, we don't have to check for every case in the switch statement.\n>>\n>> I think you misread it - the other branch says: if (*not* IsA())\n>>\n> No, I didn't misread:\n>\n> if (!IsA(plan, Result) &&\n> mark_async_capable_plan(plan,\n> ((ProjectionPath *) path)->subpath))\n> return true;\n> return false;\n>\n> If the plan is Result node, false would be returned.\n> So the check can be lifted to the beginning of the func.\n\nI think we might support more cases in the switch statement in the\nfuture. My concern about your proposal is that it might make it hard\nto add new cases to the statement. 
I agree that what I proposed has a\nbit of redundant code, but writing code inside each case independently\nwould make it easy to add them, making code consistent across branches\nand thus making back-patching easy.\n\nThanks for reviewing!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Sun, 17 Apr 2022 17:49:55 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Sun, Apr 17, 2022 at 1:48 AM Etsuro Fujita <etsuro.fujita@gmail.com>\nwrote:\n\n> On Mon, Apr 11, 2022 at 11:44 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > On Sun, Apr 10, 2022 at 7:41 PM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> >> On Sun, Apr 10, 2022 at 06:46:25AM -0700, Zhihong Yu wrote:\n> >> > Looking at the second hunk of the patch:\n> >> > FdwRoutine *fdwroutine = path->parent->fdwroutine;\n> >> > ...\n> >> > + if (IsA(plan, Result))\n> >> > + return false;\n> >> >\n> >> > It seems the check of whether plan is a Result node can be lifted\n> ahead of\n> >> > the switch statement (i.e. to the beginning of\n> mark_async_capable_plan).\n> >> >\n> >> > This way, we don't have to check for every case in the switch\n> statement.\n> >>\n> >> I think you misread it - the other branch says: if (*not* IsA())\n> >>\n> > No, I didn't misread:\n> >\n> > if (!IsA(plan, Result) &&\n> > mark_async_capable_plan(plan,\n> > ((ProjectionPath *)\n> path)->subpath))\n> > return true;\n> > return false;\n> >\n> > If the plan is Result node, false would be returned.\n> > So the check can be lifted to the beginning of the func.\n>\n> I think we might support more cases in the switch statement in the\n> future. My concern about your proposal is that it might make it hard\n> to add new cases to the statement. 
I agree that what I proposed has a\n> bit of redundant code, but writing code inside each case independently\n> would make it easy to add them, making code consistent across branches\n> and thus making back-patching easy.\n>\n> Thanks for reviewing!\n>\n> Best regards,\n> Etsuro Fujita\n>\nHi,\nWhen a new case arises where the plan is not a Result node, this func can\nbe rewritten.\nIf there is only one such new case, the check at the beginning of the func\ncan be tuned to exclude that case.\n\nI still think the check should be lifted to the beginning of the func\n(given the current cases).\n\nCheers\n\nOn Sun, Apr 17, 2022 at 1:48 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:On Mon, Apr 11, 2022 at 11:44 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Sun, Apr 10, 2022 at 7:41 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> On Sun, Apr 10, 2022 at 06:46:25AM -0700, Zhihong Yu wrote:\n>> > Looking at the second hunk of the patch:\n>> >                 FdwRoutine *fdwroutine = path->parent->fdwroutine;\n>> > ...\n>> > +               if (IsA(plan, Result))\n>> > +                   return false;\n>> >\n>> > It seems the check of whether plan is a Result node can be lifted ahead of\n>> > the switch statement (i.e. to the beginning of mark_async_capable_plan).\n>> >\n>> > This way, we don't have to check for every case in the switch statement.\n>>\n>> I think you misread it - the other branch says: if (*not* IsA())\n>>\n> No, I didn't misread:\n>\n>             if (!IsA(plan, Result) &&\n>                 mark_async_capable_plan(plan,\n>                                         ((ProjectionPath *) path)->subpath))\n>                 return true;\n>             return false;\n>\n> If the plan is Result node, false would be returned.\n> So the check can be lifted to the beginning of the func.\n\nI think we might support more cases in the switch statement in the\nfuture.  My concern about your proposal is that it might make it hard\nto add new cases to the statement. 
 I agree that what I proposed has a\nbit of redundant code, but writing code inside each case independently\nwould make it easy to add them, making code consistent across branches\nand thus making back-patching easy.\n\nThanks for reviewing!\n\nBest regards,\nEtsuro FujitaHi,When a new case arises where the plan is not a Result node, this func can be rewritten.If there is only one such new case, the check at the beginning of the func can be tuned to exclude that case.I still think the check should be lifted to the beginning of the func (given the current cases).Cheers", "msg_date": "Sun, 17 Apr 2022 03:34:55 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "Hi,\n\nOn Sun, Apr 17, 2022 at 7:30 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Sun, Apr 17, 2022 at 1:48 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>> I think we might support more cases in the switch statement in the\n>> future. My concern about your proposal is that it might make it hard\n>> to add new cases to the statement. I agree that what I proposed has a\n>> bit of redundant code, but writing code inside each case independently\n>> would make it easy to add them, making code consistent across branches\n>> and thus making back-patching easy.\n\n> When a new case arises where the plan is not a Result node, this func can be rewritten.\n> If there is only one such new case, the check at the beginning of the func can be tuned to exclude that case.\n\nSorry, I don't agree with you.\n\n> I still think the check should be lifted to the beginning of the func (given the current cases).\n\nThe given path isn't limited to SubqueryScanPath, ForeignPath and\nProjectionPath, so another concern is extra cycles needed when the\npath is other path type that is projection-capable (e.g., Path for\nsequential scan, IndexPath, NestPath, ...). 
Assume that the given\npath is a Path (that doesn't contain pseudoconstant quals). In that\ncase the given SeqScan plan node wouldn't contain a gating Result\nnode, so if we put the if test at the top of the function, we need to\nexecute not only the test but the switch statement for the given\npath/plan nodes. But if we put the if test inside each case block, we\nonly need to execute the switch statement, without executing the test.\nIn the latter case I think we can save cycles for normal cases.\n\nIn short: I don't think it's a great idea to put the if test at the\ntop of the function.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 19 Apr 2022 18:01:33 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Tue, Apr 19, 2022 at 2:01 AM Etsuro Fujita <etsuro.fujita@gmail.com>\nwrote:\n\n> Hi,\n>\n> On Sun, Apr 17, 2022 at 7:30 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > On Sun, Apr 17, 2022 at 1:48 AM Etsuro Fujita <etsuro.fujita@gmail.com>\n> wrote:\n> >> I think we might support more cases in the switch statement in the\n> >> future. My concern about your proposal is that it might make it hard\n> >> to add new cases to the statement. 
I agree that what I proposed has a\n> >> bit of redundant code, but writing code inside each case independently\n> >> would make it easy to add them, making code consistent across branches\n> >> and thus making back-patching easy.\n>\n> > When a new case arises where the plan is not a Result node, this func\n> can be rewritten.\n> > If there is only one such new case, the check at the beginning of the\n> func can be tuned to exclude that case.\n>\n> Sorry, I don't agree with you.\n>\n> > I still think the check should be lifted to the beginning of the func\n> (given the current cases).\n>\n> The given path isn't limited to SubqueryScanPath, ForeignPath and\n> ProjectionPath, so another concern is extra cycles needed when the\n> path is other path type that is projection-capable (e.g., Path for\n> sequential scan, IndexPath, NestPath, ...). Assume that the given\n> path is a Path (that doesn't contain pseudoconstant quals). In that\n> case the given SeqScan plan node wouldn't contain a gating Result\n> node, so if we put the if test at the top of the function, we need to\n> execute not only the test but the switch statement for the given\n> path/plan nodes. But if we put the if test inside each case block, we\n> only need to execute the switch statement, without executing the test.\n> In the latter case I think we can save cycles for normal cases.\n>\n> In short: I don't think it's a great idea to put the if test at the\n> top of the function.\n>\n> Best regards,\n> Etsuro Fujita\n>\nHi,\nIt is okay to keep the formation in your patch.\n\nCheers\n\nOn Tue, Apr 19, 2022 at 2:01 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:Hi,\n\nOn Sun, Apr 17, 2022 at 7:30 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Sun, Apr 17, 2022 at 1:48 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>> I think we might support more cases in the switch statement in the\n>> future.  My concern about your proposal is that it might make it hard\n>> to add new cases to the statement.  
I agree that what I proposed has a\n>> bit of redundant code, but writing code inside each case independently\n>> would make it easy to add them, making code consistent across branches\n>> and thus making back-patching easy.\n\n> When a new case arises where the plan is not a Result node, this func can be rewritten.\n> If there is only one such new case, the check at the beginning of the func can be tuned to exclude that case.\n\nSorry, I don't agree with you.\n\n> I still think the check should be lifted to the beginning of the func (given the current cases).\n\nThe given path isn't limited to SubqueryScanPath, ForeignPath and\nProjectionPath, so another concern is extra cycles needed when the\npath is other path type that is projection-capable (e.g., Path for\nsequential scan, IndexPath, NestPath, ...).  Assume that the given\npath is a Path (that doesn't contain pseudoconstant quals).  In that\ncase the given SeqScan plan node wouldn't contain a gating Result\nnode, so if we put the if test at the top of the function, we need to\nexecute not only the test but the switch statement for the given\npath/plan nodes.  
But if we put the if test inside each case block, we\nonly need to execute the switch statement, without executing the test.\nIn the latter case I think we can save cycles for normal cases.\n\nIn short: I don't think it's a great idea to put the if test at the\ntop of the function.\n\nBest regards,\nEtsuro FujitaHi,It is okay to keep the formation in your patch.Cheers", "msg_date": "Tue, 19 Apr 2022 10:08:41 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "Hi,\n\nOn Wed, Apr 20, 2022 at 2:04 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> It is okay to keep the formation in your patch.\n\nI modified mark_async_capable_plan() a bit further; 1) adjusted code\nin the ProjectionPath case, just for consistency with other cases, and\n2) tweaked/improved comments a bit. Attached is a new version of the\npatch (“prevent-async-2.patch”).\n\nAs mentioned before, v14 has the same issue, so I created a fix for\nv14, which I’m attaching as well (“prevent-async-2-v14.patch”). In\nthe fix I modified is_async_capable_path() the same way as\nmark_async_capable_plan() in HEAD, renaming it to\nis_async_capable_plan(), and updated some comments.\n\nBarring objections, I’ll push/back-patch these.\n\nThanks!\n\nBest regards,\nEtsuro Fujita", "msg_date": "Mon, 25 Apr 2022 13:29:16 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Mon, Apr 25, 2022 at 1:29 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I modified mark_async_capable_plan() a bit further; 1) adjusted code\n> in the ProjectionPath case, just for consistency with other cases, and\n> 2) tweaked/improved comments a bit. 
Attached is a new version of the\n> patch (“prevent-async-2.patch”).\n>\n> As mentioned before, v14 has the same issue, so I created a fix for\n> v14, which I’m attaching as well (“prevent-async-2-v14.patch”). In\n> the fix I modified is_async_capable_path() the same way as\n> mark_async_capable_plan() in HEAD, renaming it to\n> is_async_capable_plan(), and updated some comments.\n>\n> Barring objections, I’ll push/back-patch these.\n\nDone.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 28 Apr 2022 15:32:46 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Wed, Apr 6, 2022 at 3:58 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I have committed the patch after modifying it as such.\n\nThe patch calls trivial_subqueryscan() during create_append_plan() to\ndetermine the triviality of a SubqueryScan that is a child of an\nAppend node. Unlike when calling it from\nset_subqueryscan_references(), this is done before some\npost-processing such as set_plan_references() on the subquery. The\nreason why this is safe wouldn't be that obvious, so I added to\ntrivial_subqueryscan() comments explaining this. Attached is a patch\nfor that.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Thu, 2 Jun 2022 21:08:28 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Thu, Jun 2, 2022 at 5:08 AM Etsuro Fujita <etsuro.fujita@gmail.com>\nwrote:\n\n> On Wed, Apr 6, 2022 at 3:58 PM Etsuro Fujita <etsuro.fujita@gmail.com>\n> wrote:\n> > I have committed the patch after modifying it as such.\n>\n> The patch calls trivial_subqueryscan() during create_append_plan() to\n> determine the triviality of a SubqueryScan that is a child of an\n> Append node. 
Unlike when calling it from\n> set_subqueryscan_references(), this is done before some\n> post-processing such as set_plan_references() on the subquery. The\n> reason why this is safe wouldn't be that obvious, so I added to\n> trivial_subqueryscan() comments explaining this. Attached is a patch\n> for that.\n>\n> Best regards,\n> Etsuro Fujita\n>\nHi,\nSuggestion on formatting the comment:\n\n+ * node (or that for any plan node in the subplan tree), 2)\n+ * set_plan_references() modifies the tlist for every plan node in the\n\nIt would be more readable if `2)` is put at the beginning of the second\nline above.\n\n+ * preserves the length and order of the tlist, and 3)\nset_plan_references()\n+ * might delete the topmost plan node like an Append or MergeAppend from\nthe\n\nSimilarly you can move `3) set_plan_references()` to the beginning of the\nnext line.\n\nCheers\n\nOn Thu, Jun 2, 2022 at 5:08 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:On Wed, Apr 6, 2022 at 3:58 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I have committed the patch after modifying it as such.\n\nThe patch calls trivial_subqueryscan() during create_append_plan() to\ndetermine the triviality of a SubqueryScan that is a child of an\nAppend node.  Unlike when calling it from\nset_subqueryscan_references(), this is done before some\npost-processing such as set_plan_references() on the subquery.  The\nreason why this is safe wouldn't be that obvious, so I added to\ntrivial_subqueryscan() comments explaining this.  
Attached is a patch\nfor that.\n\nBest regards,\nEtsuro FujitaHi,Suggestion on formatting the comment:+ * node (or that for any plan node in the subplan tree), 2)+ * set_plan_references() modifies the tlist for every plan node in theIt would be more readable if `2)` is put at the beginning of the second line above.+ * preserves the length and order of the tlist, and 3) set_plan_references()+ * might delete the topmost plan node like an Append or MergeAppend from theSimilarly you can move `3) set_plan_references()` to the beginning of the next line.Cheers", "msg_date": "Thu, 2 Jun 2022 09:09:03 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Fri, Jun 3, 2022 at 1:03 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> Suggestion on formatting the comment:\n>\n> + * node (or that for any plan node in the subplan tree), 2)\n> + * set_plan_references() modifies the tlist for every plan node in the\n>\n> It would be more readable if `2)` is put at the beginning of the second line above.\n>\n> + * preserves the length and order of the tlist, and 3) set_plan_references()\n> + * might delete the topmost plan node like an Append or MergeAppend from the\n>\n> Similarly you can move `3) set_plan_references()` to the beginning of the next line.\n\nSeems like a good idea, so I updated the patch as you suggest. I did\nsome indentation as well, which I think improves readability a bit\nfurther. Attached is an updated version. 
If no objections, I’ll\ncommit the patch.\n\nThanks for reviewing!\n\nBest regards,\nEtsuro Fujita", "msg_date": "Wed, 8 Jun 2022 19:18:27 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" }, { "msg_contents": "On Wed, Jun 8, 2022 at 7:18 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Fri, Jun 3, 2022 at 1:03 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > Suggestion on formatting the comment:\n> >\n> > + * node (or that for any plan node in the subplan tree), 2)\n> > + * set_plan_references() modifies the tlist for every plan node in the\n> >\n> > It would be more readable if `2)` is put at the beginning of the second line above.\n> >\n> > + * preserves the length and order of the tlist, and 3) set_plan_references()\n> > + * might delete the topmost plan node like an Append or MergeAppend from the\n> >\n> > Similarly you can move `3) set_plan_references()` to the beginning of the next line.\n>\n> Seems like a good idea, so I updated the patch as you suggest. I did\n> some indentation as well, which I think improves readability a bit\n> further. Attached is an updated version. If no objections, I’ll\n> commit the patch.\n\nDone.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 9 Jun 2022 19:39:47 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Defer selection of asynchronous subplans until the executor\n initialization stage" } ]
[ { "msg_contents": "Attached is a WIP patch to improve the performance of numeric sqrt()\nand ln(), which also makes a couple of related improvements to\ndiv_var_fast(), all of which have knock-on benefits for other numeric\nfunctions. The actual impact varies greatly depending on the inputs,\nbut the overall effect is to reduce the run time of the numeric_big\nregression test by about 20%.\n\nAdditionally it improves the accuracy of sqrt() -- currently sqrt()\nsometimes rounds the last digit of the result the wrong way, for\nexample sqrt(100000000000000010000000000000000) returns\n10000000000000001, when the correct answer should be 10000000000000000\nto zero decimal places. With this patch, sqrt() guarantees to return\nthe result correctly rounded to the last digit for all inputs.\n\nThe main change is to sqrt_var(), which now uses a different algorithm\n[1] for better performance than the Newton-Raphson method. Actually\nI've re-cast the algorithm from [1] into an iterative form, rather\nthan doing it recursively, as it's presented in that paper. This\nimproves performance further, by avoiding overheads from function\ncalls and copying numeric variables around. Also, IMO, the iterative\nform of the algorithm is much more elegant, since it works by making a\nsingle pass over the input digits, consuming them one at a time from\nmost significant to least, producing a succession of increasingly more\naccurate approximations to the square root, until the desired\nprecision is reached.\n\nFor inputs with a handful of digits, this is typically 3-5 times\nfaster, and for inputs with more digits the performance improvement is\nlarger (e.g. sqrt(2e131071) is around 10 times faster). 
If the input\nis a perfect square, with a result having a lot of trailing zeros, the\nnew algorithm is much faster because it basically has nothing to do in\nlater iterations (e.g., sqrt(64e13070) is about 600 times faster).\n\nAnother change to sqrt_var() is that it now explicitly supports a\nnegative rscale, i.e., rounding before the decimal point. This is\nexploited by ln_var() in its argument reduction stage -- ln_var()\nreduces all inputs to the range (0.9, 1.1) by repeatedly taking the\nsquare root. For very large inputs this can have an enormous impact,\nfor example log(1e131071) currently takes about 6.5 seconds on my\nmachine, whereas with this patch I can run it 1000 times in a plpgsql\nloop in about 90ms, so it's around 70,000 times faster in that case. Of\ncourse, that's an extreme example, and for most inputs it's a much\nmore modest difference (e.g., ln(2) is about 1.5 times faster).\n\nIn passing, I also made a couple of optimisations to div_var_fast(),\ndiscovered while comparing its performance with div_var() for various\ninputs.\n\nIt's possible that there are further gains to be had in the sqrt()\nalgorithm on platforms that support 128-bit integers, but I haven't\nhad a chance to investigate that yet.\n\nRegards,\nDean\n\n[1] https://hal.inria.fr/inria-00072854/document", "msg_date": "Fri, 28 Feb 2020 08:15:09 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Some improvements to numeric sqrt() and ln()" }, { "msg_contents": "On Fri, 28 Feb 2020 at 08:15, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> It's possible that there are further gains to be had in the sqrt()\n> algorithm on platforms that support 128-bit integers, but I haven't\n> had a chance to investigate that yet.\n>\n\nRebased patch attached, now using 128-bit integers for part of\nsqrt_var() on platforms that support them. 
This turned out to be well\nworth it (1.5 to 2 times faster than the previous version if the\nresult has less than 30 or 40 digits).\n\nRegards,\nDean", "msg_date": "Sun, 1 Mar 2020 19:47:29 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Some improvements to numeric sqrt() and ln()" }, { "msg_contents": "Dear Dean,\n\nOn 2020-03-01 20:47, Dean Rasheed wrote:\n> On Fri, 28 Feb 2020 at 08:15, Dean Rasheed <dean.a.rasheed@gmail.com> \n> wrote:\n>> \n>> It's possible that there are further gains to be had in the sqrt()\n>> algorithm on platforms that support 128-bit integers, but I haven't\n>> had a chance to investigate that yet.\n>> \n> \n> Rebased patch attached, now using 128-bit integers for part of\n> sqrt_var() on platforms that support them. This turned out to be well\n> worth it (1.5 to 2 times faster than the previous version if the\n> result has less than 30 or 40 digits).\n\nThank you for these patches, these sound like really nice improvements.\nOne thing came to my mind while reading the patch:\n\n+\t *\t\tIf r < 0 Then\n+\t *\t\t\tLet r = r + 2*s - 1\n+\t *\t\t\tLet s = s - 1\n\n+\t\t\t/* s is too large by 1; let r = r + 2*s - 1 and s = s - 1 */\n+\t\t\tr_int64 += 2 * s_int64 - 1;\n+\t\t\ts_int64--;\n\nThis can be reformulated as:\n\n+\t *\t\tIf r < 0 Then\n+\t *\t\t\tLet r = r + s\n+\t *\t\t\tLet s = s - 1\n+\t *\t\t\tLet r = r + s\n\n+\t\t\t/* s is too large by 1; let r = r + 2*s - 1 and s = s - 1 */\n+\t\t\tr_int64 += s_int64;\n+\t\t\ts_int64--;\n+\t\t\tr_int64 += s_int64;\n\nwhich would remove one mul/shift and the temp. variable. 
Mind you, I have\nnot benchmarked this, so it might make little difference, but maybe it is\nworth trying it.\n\nBest regards,\n\nTels", "msg_date": "Tue, 03 Mar 2020 01:17:02 +0100", "msg_from": "Tels <nospam-pg-abuse@bloodgate.com>", "msg_from_op": false, "msg_subject": "Re: Some improvements to numeric sqrt() and ln()" }, { "msg_contents": "On Tue, 3 Mar 2020 at 00:17, Tels <nospam-pg-abuse@bloodgate.com> wrote:\n>\n> Thank you for these patches, these sound like really nice improvements.\n\nThanks for looking!\n\n> One thing came to my mind while reading the patch:\n>\n> + * If r < 0 Then\n> + * Let r = r + 2*s - 1\n> + * Let s = s - 1\n>\n> This can be reformulated as:\n>\n> + * If r < 0 Then\n> + * Let r = r + s\n> + * Let s = s - 1\n> + * Let r = r + s\n>\n> which would remove one mul/shift and the temp. variable.\n\nGood point, that's a neat little optimisation.\n\nI wasn't able to detect any difference in performance, because those\ncorrections are only triggered about 1 time in every 50 or so, but it\nlooks neater to me, especially in the numeric iterations, where it\nsaves a sub_var() by const_one as well as not using the temporary\nvariable.\n\nRegards,\nDean", "msg_date": "Tue, 3 Mar 2020 13:42:20 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Some improvements to numeric sqrt() and ln()" }, { "msg_contents": "Hi Dean,\n\nOn 2/28/20 3:15 AM, Dean Rasheed wrote:\n> Attached is a WIP patch to improve the performance of numeric sqrt()\n> and ln(), which also makes a couple of related improvements to\n> div_var_fast(), all of which have knock-on benefits for other numeric\n> functions. The actual impact varies greatly depending on the inputs,\n> but the overall effect is to reduce the run time of the numeric_big\n> regression test by about 20%.\n\nAre these improvements targeted at PG13 or PG14? 
This seems a pretty \nbig change for the last CF of PG13.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 4 Mar 2020 09:41:16 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Some improvements to numeric sqrt() and ln()" }, { "msg_contents": "On Wed, 4 Mar 2020 at 14:41, David Steele <david@pgmasters.net> wrote:\n>\n> Are these improvements targeted at PG13 or PG14? This seems a pretty\n> big change for the last CF of PG13.\n>\n\nWell of course that's not entirely up to me, but I was hoping to\ncommit it for PG13.\n\nIt's very well covered by a large number of regression tests in both\nnumeric.sql and numeric_big.sql, since nearly anything that calls\nln(), log() or pow() ends up going through sqrt_var(). Also, the\nchanges are local to functions in numeric.c, which makes them easy to\nrevert if something proves to be wrong.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 4 Mar 2020 16:37:05 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Some improvements to numeric sqrt() and ln()" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Wed, 4 Mar 2020 at 14:41, David Steele <david@pgmasters.net> wrote:\n>> Are these improvements targeted at PG13 or PG14? This seems a pretty\n>> big change for the last CF of PG13.\n\n> Well of course that's not entirely up to me, but I was hoping to\n> commit it for PG13.\n\n> It's very well covered by a large number of regression tests in both\n> numeric.sql and numeric_big.sql, since nearly anything that calls\n> ln(), log() or pow() ends up going through sqrt_var(). Also, the\n> changes are local to functions in numeric.c, which makes them easy to\n> revert if something proves to be wrong.\n\nFWIW, I agree that this is a reasonable thing to consider committing\nfor v13. 
It's not adding any new user-visible behavior, so there's\nno definitional issues to quibble over, which is usually what I worry\nabout regretting after an overly-hasty commit. And it's only touching\na few functions in one file, so even if the patch is a bit long, the\ncomplexity seems pretty well controlled.\n\nI've not read the patch in detail so this isn't meant as a review,\nbut from a process standpoint I see no reason not to go forward.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Mar 2020 14:43:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Some improvements to numeric sqrt() and ln()" }, { "msg_contents": "Tels <nospam-pg-abuse@bloodgate.com> writes:\n> This can be reformulated as:\n> +\t *\t\tIf r < 0 Then\n> +\t *\t\t\tLet r = r + s\n> +\t *\t\t\tLet s = s - 1\n> +\t *\t\t\tLet r = r + s\n\nHere's a v3 that\n\n* incorporates Tels' idea;\n\n* improves some of the comments (IMO anyway, though some are clear typos);\n\n* adds some XXX comments about things that could be further improved\nand/or need better explanations.\n\nI also ran it through pgindent, just cause I'm like that.\n\nWith resolutions of the XXX items, I think this'd be committable.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 22 Mar 2020 18:16:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Some improvements to numeric sqrt() and ln()" }, { "msg_contents": "On Sun, 22 Mar 2020 at 22:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> With resolutions of the XXX items, I think this'd be committable.\n>\n\nThanks for looking at this!\n\nHere is an updated patch with the following updates based on your comments:\n\n* Now uses integer arithmetic to compute res_weight and res_ndigits,\ninstead of floor() and ceil().\n\n* New comment giving a more detailed explanation of how blen is\nchosen, and why it must sometimes examine the first digit of the input\nand reduce blen by 1 (which can occur 
at any step, as shown in the\nexample given).\n\n* New comment giving a proof that the number of steps required is\nguaranteed to be less than 32.\n\n* New comment explaining why the initial integer square root using\nNewton's method is guaranteed to converge. I couldn't find a formal\nreference for this, but there's a Wikipedia article on it -\nhttps://en.wikipedia.org/wiki/Integer_square_root and I think it's a\nwell-known result in the field.\n\nRegards,\nDean", "msg_date": "Wed, 25 Mar 2020 08:57:31 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Some improvements to numeric sqrt() and ln()" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> Here is an updated patch with the following updates based on your comments:\n\nThis resolves all my concerns. I've marked it RFC in the CF app.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Mar 2020 12:45:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Some improvements to numeric sqrt() and ln()" } ]
[ { "msg_contents": "I came across the HAVE_WORKING_LINK define in pg_config_manual.h. \nAFAICT, hard links are supported on Windows and Cygwin in the OS \nversions that we support, and pg_upgrade already contains the required \nshim. It seems to me we could normalize and simplify that, as in the \nattached patches. (Perhaps rename durable_link_or_rename() then.) I \nsuccessfully tested on MSVC, MinGW, and Cygwin.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 28 Feb 2020 14:14:51 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "HAVE_WORKING_LINK still needed?" }, { "msg_contents": "On 2020-Feb-28, Peter Eisentraut wrote:\n\n> @@ -788,7 +788,6 @@ durable_link_or_rename(const char *oldfile, const char *newfile, int elevel)\n> \tif (fsync_fname_ext(oldfile, false, false, elevel) != 0)\n> \t\treturn -1;\n> \n> -#ifdef HAVE_WORKING_LINK\n> \tif (link(oldfile, newfile) < 0)\n> \t{\n> \t\tereport(elevel,\n> @@ -798,17 +797,6 @@ durable_link_or_rename(const char *oldfile, const char *newfile, int elevel)\n> \t\treturn -1;\n> \t}\n> \tunlink(oldfile);\n> -#else\n> -\t/* XXX: Add racy file existence check? */\n> -\tif (rename(oldfile, newfile) < 0)\n\nMaybe rename durable_link_or_rename to just durable_link?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 28 Feb 2020 12:03:23 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: HAVE_WORKING_LINK still needed?"
}, { "msg_contents": "On Fri, Feb 28, 2020 at 2:15 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> I came across the HAVE_WORKING_LINK define in pg_config_manual.h.\n> AFAICT, hard links are supported on Windows and Cygwin in the OS\n> versions that we support, and pg_upgrade already contains the required\n> shim. It seems to me we could normalize and simplify that, as in the\n> attached patches. (Perhaps rename durable_link_or_rename() then.) I\n> successfully tested on MSVC, MinGW, and Cygwin.\n>\n\nThe link referenced in the comments of win32_pghardlink() [1] is quite old,\nand is automatically redirected to the current documentation [2]. Maybe\nthis patch should use the new path.\n\n[1] http://msdn.microsoft.com/en-us/library/aa363860(VS.85).aspx\n[2]\nhttps://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-createhardlinka\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Fri, 28 Feb 2020 17:52:39 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: HAVE_WORKING_LINK still needed?" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I came across the HAVE_WORKING_LINK define in pg_config_manual.h. \n> AFAICT, hard links are supported on Windows and Cygwin in the OS \n> versions that we support, and pg_upgrade already contains the required \n> shim. It seems to me we could normalize and simplify that, as in the \n> attached patches. (Perhaps rename durable_link_or_rename() then.) I \n> successfully tested on MSVC, MinGW, and Cygwin.\n\nI don't have any way to test on Windows, but this patchset passes\neyeball review. +1 for getting rid of the special cases.\nAlso +1 for s/durable_link_or_rename/durable_link/.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Feb 2020 11:55:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HAVE_WORKING_LINK still needed?" }, { "msg_contents": "On 2020-Feb-28, Tom Lane wrote:\n\n> Also +1 for s/durable_link_or_rename/durable_link/.\n\nActually, it's not *that* either, because what the function does is link\nfollowed by unlink. So it's more a variation of durable_rename with\nslightly different semantics -- the difference is what happens if a file\nwith the target name already exists. Maybe call it durable_rename_no_overwrite.\n\nThere's a lot of commonality between the two.
Perhaps it's not entirely\nsilly to merge both as a single routine, with a flag to select either\nbehavior.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 28 Feb 2020 15:44:11 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: HAVE_WORKING_LINK still needed?" }, { "msg_contents": "On 2020-02-28 19:44, Alvaro Herrera wrote:\n> On 2020-Feb-28, Tom Lane wrote:\n> \n>> Also +1 for s/durable_link_or_rename/durable_link/.\n> \n> Actually, it's not *that* either, because what the function does is link\n> followed by unlink. So it's more a variation of durable_rename with\n> slightly different semantics -- the difference is what happens if a file\n> with the target name already exists. Maybe call it durable_rename_no_overwrite.\n\nI have committed the first two patches.\n\nHere is the third patch again, now renaming durable_link_or_rename() to \ndurable_rename_excl(). This seems to match existing Unix system call \nnaming best (see open() flag O_EXCL, and macOS has a renamex_np() flag \nRENAME_EXCL).\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 4 Mar 2020 17:37:23 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: HAVE_WORKING_LINK still needed?" }, { "msg_contents": "On 2020-03-04 17:37, Peter Eisentraut wrote:\n> Here is the third patch again, now renaming durable_link_or_rename() to\n> durable_rename_excl(). 
This seems to match existing Unix system call\n> naming best (see open() flag O_EXCL, and macOS has a renamex_np() flag\n> RENAME_EXCL).\n\ncommitted like that\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 11 Mar 2020 11:25:17 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: HAVE_WORKING_LINK still needed?" } ]
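To make the semantics settled on in the thread above concrete: durable_rename_excl() moves a file by hard-linking the old name to the new one and then unlinking the old name, which — unlike rename() — fails if the target already exists. The sketch below is a minimal POSIX illustration, not the fd.c code: the real function also fsyncs the old file first and the directory afterwards, reports errors via ereport(), and covers Windows through the CreateHardLink-based shim; the helper names here (rename_excl_sketch, touch) are invented for the example.

```c
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Minimal model of the link-then-unlink rename primitive.  link()
 * refuses to clobber an existing path, so this fails with errno set to
 * EEXIST when newfile already exists -- the behavior that distinguishes
 * durable_rename_excl() from durable_rename().
 */
static int
rename_excl_sketch(const char *oldfile, const char *newfile)
{
	if (link(oldfile, newfile) < 0)
		return -1;				/* errno tells why, e.g. EEXIST */

	/* old name is now redundant; removal failure is not fatal */
	(void) unlink(oldfile);
	return 0;
}

/* tiny helper for demonstrations: create an empty file */
static int
touch(const char *path)
{
	FILE	   *f = fopen(path, "w");

	if (f == NULL)
		return -1;
	fclose(f);
	return 0;
}
```

Calling rename_excl_sketch() twice with the same target shows the difference from rename(): the first call succeeds and removes the old name; the second fails with EEXIST instead of silently overwriting.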
[ { "msg_contents": "Hi,\n\nWhile self reviewing a patch I'm about to send I changed the assertion\nin index_getnext_tid from\n Assert(TransactionIdIsValid(RecentGlobalXmin))\nto instead test (via an indirection)\n Assert(TransactionIdIsValid(MyProc->xmin))\n\nWithout ->xmin being set, it's not safe to do scans. And\nchecking RecentGlobalXmin doesn't really do much.\n\nBut, uh. That doesn't get very far through the regression tests.\n\nTurns out that I am to blame for that. All the way back in 9.4. For\nlogical decoding I needed to make ScanPgRelation() use a specific type\nof snapshot during one corner case of logical decoding. For reasons lost\nto time, I didn't continue to pass NULL to systable_beginscan() in the\nplain case, but did an explicit GetCatalogSnapshot(RelationRelationId).\nNote the missing RegisterSnapshot()...\n\nThat's bad because:\n\na) If invalidation processing triggers a InvalidateCatalogSnapshot(),\n the contained SnapshotResetXmin() may find no other snapshot, and\n reset ->xmin. Which then may cause relevant row versions to be\n removed.\nb) If there's a subsequent GetCatalogSnapshot() during invalidation\n processing, that will GetSnapshotData() into the snapshot currently\n being used.\n\nThe fix itself is trivial, just pass NULL for the normal case, rather\nthan doing GetCatalogSnapshot().\n\n\nBut I think this points to some severe holes in relevant assertions /\ninfrastructure:\n\n1) Asserting that RecentGlobalXmin is set - like many routines do -\n isn't meaningful, because it stays set even if SnapshotResetXmin()\n releases the transaction's snapshot. These are fairly old assertions\n (d53a56687f3d). As far as I can tell these routines really should\n verify that a snapshot is set.\n\n2) I think we need to reset TransactionXmin, RecentXmin whenever\n SnapshotResetXmin() clears xmin. 
While we'll set them again the next\n time a snapshot is acquired, the fact that they stay set seems likely\n to hide bugs.\n\n We also could remove TransactionXmin and instead use the\n pgproc/pgxact's ->xmin. I don't really see the point of having it?\n\n3) Similarly, I think we ought to reset reset RecentGlobal[Data]Xmin at\n the end of the transaction or such.\n\n But I'm not clear what protects those values from being affected by\n wraparound in a longrunning transaction? Initially they are obviously\n protected against that due to the procarray - but once the oldest\n procarray entry releases its snapshot, the global xmin horizon can\n advance. That allows transactions that up to ~2 billion into the\n future of the current backend, whereas RecentGlobalXmin might be\n nearly ~2 billion transactions in the past relative to ->xmin.\n\n That might not have been a likely problem many years ago, but seems\n far from impossible today?\n\n\nI propose to commit a fix, but then also add an assertion for\nTransactionIdIsValid(MyPgXact->xmin) instead (or in addition) to the\nTransactionIdIsValid(RecentGlobalXmin) tests right now. And in master\nclear the various *Xmin variables whenever we reset xmin.\n\nI think in master we should also start to make RecentGlobalXmin etc\nFullTransactionIds. We can then convert the 32bit xids we compare with\nRecentGlobal* to 64bit xids (which is safe, because live xids have to be\nwithin [oldestXid, nextXid)). I have that as part of another patch\nanyway...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 28 Feb 2020 21:24:59 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Catalog invalidations vs catalog scans vs ScanPgRelation()" }, { "msg_contents": "Hi,\n\nOn 2020-02-28 21:24:59 -0800, Andres Freund wrote:\n> Turns out that I am to blame for that. All the way back in 9.4. 
For\n> logical decoding I needed to make ScanPgRelation() use a specific type\n> of snapshot during one corner case of logical decoding. For reasons lost\n> to time, I didn't continue to pass NULL to systable_beginscan() in the\n> plain case, but did an explicit GetCatalogSnapshot(RelationRelationId).\n> Note the missing RegisterSnapshot()...\n\nOh, I pierced through the veil: It's likely because the removal of\nSnapshotNow happened concurrently to the development of logical\ndecoding. Before using proper snapshot for catalog scans passing in\nSnapshotNow was precisely the right thing to do...\n\nI think that's somewhat unlikely to actively cause problems in practice,\nas ScanPgRelation() requires that we already have a lock on the\nrelation, we only look for a single row, and I don't think we rely on\nthe result's tid to be correct. I don't immediately see a case where\nthis would trigger in a problematic way.\n\n\nAfter fixing the ScanPgRelation case I found another occurance of the\nproblem:\nThe catalogsnapshot copied at (ignore the slight off line numbers please):\n\n#0 GetCatalogSnapshot (relid=1249) at /home/andres/src/postgresql/src/backend/utils/time/snapmgr.c:454\n#1 0x0000560d725b198d in systable_beginscan (heapRelation=0x7f13429f2a08, indexId=2659, indexOK=true, snapshot=0x0, nkeys=2, key=0x7fff26e04db0)\n at /home/andres/src/postgresql/src/backend/access/index/genam.c:378\n#2 0x0000560d72b4117f in SearchCatCacheMiss (cache=0x560d74590800, nkeys=2, hashValue=1029784422, hashIndex=102, v1=697088, v2=3, v3=0, v4=0)\n at /home/andres/src/postgresql/src/backend/utils/cache/catcache.c:1359\n#3 0x0000560d72b41045 in SearchCatCacheInternal (cache=0x560d74590800, nkeys=2, v1=697088, v2=3, v3=0, v4=0)\n at /home/andres/src/postgresql/src/backend/utils/cache/catcache.c:1299\n#4 0x0000560d72b40d09 in SearchCatCache (cache=0x560d74590800, v1=697088, v2=3, v3=0, v4=0)\n at /home/andres/src/postgresql/src/backend/utils/cache/catcache.c:1153\n#5 0x0000560d72b5b65f in 
SearchSysCache (cacheId=7, key1=697088, key2=3, key3=0, key4=0)\n at /home/andres/src/postgresql/src/backend/utils/cache/syscache.c:1112\n#6 0x0000560d72b5b9dd in SearchSysCacheCopy (cacheId=7, key1=697088, key2=3, key3=0, key4=0)\n at /home/andres/src/postgresql/src/backend/utils/cache/syscache.c:1187\n#7 0x0000560d72645501 in RemoveAttrDefaultById (attrdefId=697096) at /home/andres/src/postgresql/src/backend/catalog/heap.c:1821\n#8 0x0000560d7263fd6b in doDeletion (object=0x560d745d37a0, flags=29) at /home/andres/src/postgresql/src/backend/catalog/dependency.c:1397\n#9 0x0000560d7263fa17 in deleteOneObject (object=0x560d745d37a0, depRel=0x7fff26e052d0, flags=29)\n at /home/andres/src/postgresql/src/backend/catalog/dependency.c:1261\n#10 0x0000560d7263e4d6 in deleteObjectsInList (targetObjects=0x560d745d1ec0, depRel=0x7fff26e052d0, flags=29)\n at /home/andres/src/postgresql/src/backend/catalog/dependency.c:271\n#11 0x0000560d7263e58a in performDeletion (object=0x7fff26e05304, behavior=DROP_CASCADE, flags=29)\n at /home/andres/src/postgresql/src/backend/catalog/dependency.c:356\n#12 0x0000560d72655f3f in RemoveTempRelations (tempNamespaceId=686167) at /home/andres/src/postgresql/src/backend/catalog/namespace.c:4155\n#13 0x0000560d72655f72 in RemoveTempRelationsCallback (code=0, arg=0) at /home/andres/src/postgresql/src/backend/catalog/namespace.c:4174\n#14 0x0000560d729a58e2 in shmem_exit (code=0) at /home/andres/src/postgresql/src/backend/storage/ipc/ipc.c:239\n#15 0x0000560d729a57b5 in proc_exit_prepare (code=0) at /home/andres/src/postgresql/src/backend/storage/ipc/ipc.c:194\n\nis released at the end of systable_endscan. 
And then xmin is reset at:\n\n#0 SnapshotResetXmin () at /home/andres/src/postgresql/src/backend/utils/time/snapmgr.c:1038\n#1 0x0000560d72bb9bfc in InvalidateCatalogSnapshot () at /home/andres/src/postgresql/src/backend/utils/time/snapmgr.c:521\n#2 0x0000560d72b43d62 in LocalExecuteInvalidationMessage (msg=0x7fff26e04e70) at /home/andres/src/postgresql/src/backend/utils/cache/inval.c:562\n#3 0x0000560d729b277b in ReceiveSharedInvalidMessages (invalFunction=0x560d72b43d26 <LocalExecuteInvalidationMessage>, \n resetFunction=0x560d72b43f92 <InvalidateSystemCaches>) at /home/andres/src/postgresql/src/backend/storage/ipc/sinval.c:120\n#4 0x0000560d72b44070 in AcceptInvalidationMessages () at /home/andres/src/postgresql/src/backend/utils/cache/inval.c:683\n#5 0x0000560d729b8f4f in LockRelationOid (relid=2658, lockmode=3) at /home/andres/src/postgresql/src/backend/storage/lmgr/lmgr.c:136\n#6 0x0000560d725341e3 in relation_open (relationId=2658, lockmode=3) at /home/andres/src/postgresql/src/backend/access/common/relation.c:56\n#7 0x0000560d725b22c3 in index_open (relationId=2658, lockmode=3) at /home/andres/src/postgresql/src/backend/access/index/indexam.c:130\n#8 0x0000560d727b6991 in ExecOpenIndices (resultRelInfo=0x560d7468ffa0, speculative=false)\n at /home/andres/src/postgresql/src/backend/executor/execIndexing.c:199\n#9 0x0000560d7264fc7e in CatalogOpenIndexes (heapRel=0x7f13429f2a08) at /home/andres/src/postgresql/src/backend/catalog/indexing.c:51\n#10 0x0000560d72650010 in CatalogTupleUpdate (heapRel=0x7f13429f2a08, otid=0x560d746901d4, tup=0x560d746901d0)\n at /home/andres/src/postgresql/src/backend/catalog/indexing.c:228\n#11 0x0000560d72645583 in RemoveAttrDefaultById (attrdefId=697096) at /home/andres/src/postgresql/src/backend/catalog/heap.c:1830\n#12 0x0000560d7263fd6b in doDeletion (object=0x560d745d37a0, flags=29) at /home/andres/src/postgresql/src/backend/catalog/dependency.c:1397\n\nwhich then hits an assertion at:\n\n#2 0x0000560d725a4e86 in 
heap_page_prune_opt (relation=0x7f13429f2a08, buffer=3153) at /home/andres/src/postgresql/src/backend/access/heap/pruneheap.c:131\n#3 0x0000560d7259a5b3 in heapam_index_fetch_tuple (scan=0x560d746912c8, tid=0x7fff26e04c5a, snapshot=0x7fff26e04c60, slot=0x560d745d1ef8, \n call_again=0x7fff26e04ade, all_dead=0x7fff26e04c59) at /home/andres/src/postgresql/src/backend/access/heap/heapam_handler.c:137\n#4 0x0000560d725eeedc in table_index_fetch_tuple (scan=0x560d746912c8, tid=0x7fff26e04c5a, snapshot=0x7fff26e04c60, slot=0x560d745d1ef8, \n call_again=0x7fff26e04ade, all_dead=0x7fff26e04c59) at /home/andres/src/postgresql/src/include/access/tableam.h:1020\n#5 0x0000560d725ef478 in table_index_fetch_tuple_check (rel=0x7f13429f2a08, tid=0x7fff26e04c5a, snapshot=0x7fff26e04c60, all_dead=0x7fff26e04c59)\n at /home/andres/src/postgresql/src/backend/access/table/tableam.c:213\n#6 0x0000560d725b4ef7 in _bt_check_unique (rel=0x7f1342a29ce0, insertstate=0x7fff26e04d90, heapRel=0x7f13429f2a08, checkUnique=UNIQUE_CHECK_YES, \n is_unique=0x7fff26e04dc1, speculativeToken=0x7fff26e04d88) at /home/andres/src/postgresql/src/backend/access/nbtree/nbtinsert.c:452\n#7 0x0000560d725b48e0 in _bt_doinsert (rel=0x7f1342a29ce0, itup=0x560d746892b0, checkUnique=UNIQUE_CHECK_YES, heapRel=0x7f13429f2a08)\n at /home/andres/src/postgresql/src/backend/access/nbtree/nbtinsert.c:247\n#8 0x0000560d725c0167 in btinsert (rel=0x7f1342a29ce0, values=0x7fff26e04ee0, isnull=0x7fff26e04ec0, ht_ctid=0x560d746901d4, heapRel=0x7f13429f2a08, \n checkUnique=UNIQUE_CHECK_YES, indexInfo=0x560d7468f3d8) at /home/andres/src/postgresql/src/backend/access/nbtree/nbtree.c:207\n#9 0x0000560d725b2530 in index_insert (indexRelation=0x7f1342a29ce0, values=0x7fff26e04ee0, isnull=0x7fff26e04ec0, heap_t_ctid=0x560d746901d4, \n heapRelation=0x7f13429f2a08, checkUnique=UNIQUE_CHECK_YES, indexInfo=0x560d7468f3d8) at /home/andres/src/postgresql/src/backend/access/index/indexam.c:186\n#10 0x0000560d7264ff34 in CatalogIndexInsert 
(indstate=0x560d7468ffa0, heapTuple=0x560d746901d0)\n at /home/andres/src/postgresql/src/backend/catalog/indexing.c:157\n#11 0x0000560d7265003e in CatalogTupleUpdate (heapRel=0x7f13429f2a08, otid=0x560d746901d4, tup=0x560d746901d0)\n at /home/andres/src/postgresql/src/backend/catalog/indexing.c:232\n#12 0x0000560d72645583 in RemoveAttrDefaultById (attrdefId=697096) at /home/andres/src/postgresql/src/backend/catalog/heap.c:1830\n#13 0x0000560d7263fd6b in doDeletion (object=0x560d745d37a0, flags=29) at /home/andres/src/postgresql/src/backend/catalog/dependency.c:1397\n\nSo, um. What happens is that doDeletion() does a catalog scan, which\nsets a snapshot. The results of that catalog scan are then used to\nperform modifications. But at that point there's no guarantee that we\nstill hold *any* snapshot, as e.g. invalidations can trigger the catalog\nsnapshot being released.\n\nI don't see how that's safe. Without ->xmin preventing that,\nintermediate row versions that we did look up could just get vacuumed\naway, and replaced with a different row. That does seem like a serious\nissue?\n\nI think there's likely a lot of places that can hit this? What makes it\nsafe for InvalidateCatalogSnapshot()->SnapshotResetXmin() to release\n->xmin when there previously has been *any* catalog access? Because in\ncontrast to normal table modifications, there's currently nothing at all\nforcing us to hold a snapshot between catalog lookups an their\nmodifications?\n\nAm I missing something? Or is this a fairly significant hole in our\narrangements?\n\nThe easiest way to fix this would probably be to have inval.c call a\nversion of InvalidateCatalogSnapshot() that leaves the oldest catalog\nsnapshot around, but sets up things so that GetCatalogSnapshot() will\nreturn a freshly taken snapshot? 
ISTM that pretty much every\nInvalidateCatalogSnapshot() call within a transaction needs that behaviour?\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Fri, 28 Feb 2020 22:10:52 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Catalog invalidations vs catalog scans vs ScanPgRelation()" }, { "msg_contents": "Hi,\n\nOn 2020-02-28 22:10:52 -0800, Andres Freund wrote:\n> So, um. What happens is that doDeletion() does a catalog scan, which\n> sets a snapshot. The results of that catalog scan are then used to\n> perform modifications. But at that point there's no guarantee that we\n> still hold *any* snapshot, as e.g. invalidations can trigger the catalog\n> snapshot being released.\n> \n> I don't see how that's safe. Without ->xmin preventing that,\n> intermediate row versions that we did look up could just get vacuumed\n> away, and replaced with a different row. That does seem like a serious\n> issue?\n> \n> I think there's likely a lot of places that can hit this? What makes it\n> safe for InvalidateCatalogSnapshot()->SnapshotResetXmin() to release\n> ->xmin when there previously has been *any* catalog access? Because in\n> contrast to normal table modifications, there's currently nothing at all\n> forcing us to hold a snapshot between catalog lookups an their\n> modifications?\n> \n> Am I missing something? Or is this a fairly significant hole in our\n> arrangements?\n\nI still think that's true. In a first iteration I hacked around the\nproblem by explicitly registering a catalog snapshot in\nRemoveTempRelations(). That *sometimes* allows to get through the\nregression tests without the assertions triggering.\n\nBut I don't think that's good enough (even if we fixed the other\npotential crashes similarly). The only reason that avoids the asserts is\nbecause in nearly all other cases there's also a user snapshot that's\npushed. 
But that pushed snapshot can have an xmin that's newer than the\ncatalog snapshot, which means we're still in danger of tids from catalog\nscans being outdated.\n\nMy preliminary conclusion is that it's simply not safe to do\nSnapshotResetXmin() from within InvalidateCatalogSnapshot(),\nPopActiveSnapshot(), UnregisterSnapshotFromOwner() etc. Instead we need\nto defer the SnapshotResetXmin() call until at least\nCommitTransactionCommand()? Outside of that there ought (with exception\nof multi-transaction commands, but they have to be careful anyway) to be\nno \"in progress\" sequences of related catalog lookups/modifications.\n\nAlternatively we could ensure that all catalog lookup/mod sequences\nensure that the first catalog snapshot is registered. But that seems\nlike a gargantuan task?\n\n\n> The easiest way to fix this would probably be to have inval.c call a\n> version of InvalidateCatalogSnapshot() that leaves the oldest catalog\n> snapshot around, but sets up things so that GetCatalogSnapshot() will\n> return a freshly taken snapshot? ISTM that pretty much every\n> InvalidateCatalogSnapshot() call within a transaction needs that behaviour?\n\n\nA related question is in which cases is it actually safe to use a\nsnapshot that's not registered, nor pushed as the active snapshot.\nsnapmgr.c just provides:\n\n* Note that the return value may point at static storage that will be modified\n * by future calls and by CommandCounterIncrement(). Callers should call\n * RegisterSnapshot or PushActiveSnapshot on the returned snap if it is to be\n * used very long.\n\nbut doesn't clarify what 'very long' means. As far as I can tell,\nthere's very little that is actually safe. It's probably ok to do a single\nvisibility test, but anything that e.g. 
has a chance of accepting\ninvalidations is entirely unsafe?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 29 Feb 2020 12:17:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Catalog invalidations vs catalog scans vs ScanPgRelation()" }, { "msg_contents": "Hi,\n\nOn 2020-02-28 22:10:52 -0800, Andres Freund wrote:\n> On 2020-02-28 21:24:59 -0800, Andres Freund wrote:\n> > Turns out that I am to blame for that. All the way back in 9.4. For\n> > logical decoding I needed to make ScanPgRelation() use a specific type\n> > of snapshot during one corner case of logical decoding. For reasons lost\n> > to time, I didn't continue to pass NULL to systable_beginscan() in the\n> > plain case, but did an explicit GetCatalogSnapshot(RelationRelationId).\n> > Note the missing RegisterSnapshot()...\n> \n> Oh, I pierced through the veil: It's likely because the removal of\n> SnapshotNow happened concurrently to the development of logical\n> decoding. Before using proper snapshot for catalog scans passing in\n> SnapshotNow was precisely the right thing to do...\n> \n> I think that's somewhat unlikely to actively cause problems in practice,\n> as ScanPgRelation() requires that we already have a lock on the\n> relation, we only look for a single row, and I don't think we rely on\n> the result's tid to be correct. I don't immediately see a case where\n> this would trigger in a problematic way.\n\nPushed a fix for this.\n\n\n> So, um. What happens is that doDeletion() does a catalog scan, which\n> sets a snapshot. The results of that catalog scan are then used to\n> perform modifications. But at that point there's no guarantee that we\n> still hold *any* snapshot, as e.g. invalidations can trigger the catalog\n> snapshot being released.\n> \n> I don't see how that's safe. Without ->xmin preventing that,\n> intermediate row versions that we did look up could just get vacuumed\n> away, and replaced with a different row. 
That does seem like a serious\n> issue?\n> \n> I think there's likely a lot of places that can hit this? What makes it\n> safe for InvalidateCatalogSnapshot()->SnapshotResetXmin() to release\n> ->xmin when there previously has been *any* catalog access? Because in\n> contrast to normal table modifications, there's currently nothing at all\n> forcing us to hold a snapshot between catalog lookups an their\n> modifications?\n> \n> Am I missing something? Or is this a fairly significant hole in our\n> arrangements?\n> \n> The easiest way to fix this would probably be to have inval.c call a\n> version of InvalidateCatalogSnapshot() that leaves the oldest catalog\n> snapshot around, but sets up things so that GetCatalogSnapshot() will\n> return a freshly taken snapshot? ISTM that pretty much every\n> InvalidateCatalogSnapshot() call within a transaction needs that behaviour?\n\nI'd still like to get some input here.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 28 Mar 2020 12:30:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Catalog invalidations vs catalog scans vs ScanPgRelation()" }, { "msg_contents": "Hi,\n\nOn 2020-03-28 12:30:40 -0700, Andres Freund wrote:\n> On 2020-02-28 22:10:52 -0800, Andres Freund wrote:\n> > On 2020-02-28 21:24:59 -0800, Andres Freund wrote:\n> > So, um. What happens is that doDeletion() does a catalog scan, which\n> > sets a snapshot. The results of that catalog scan are then used to\n> > perform modifications. But at that point there's no guarantee that we\n> > still hold *any* snapshot, as e.g. invalidations can trigger the catalog\n> > snapshot being released.\n> > \n> > I don't see how that's safe. Without ->xmin preventing that,\n> > intermediate row versions that we did look up could just get vacuumed\n> > away, and replaced with a different row. That does seem like a serious\n> > issue?\n> > \n> > I think there's likely a lot of places that can hit this? 
What makes it\n> > safe for InvalidateCatalogSnapshot()->SnapshotResetXmin() to release\n> > ->xmin when there previously has been *any* catalog access? Because in\n> > contrast to normal table modifications, there's currently nothing at all\n> > forcing us to hold a snapshot between catalog lookups an their\n> > modifications?\n> > \n> > Am I missing something? Or is this a fairly significant hole in our\n> > arrangements?\n> > \n> > The easiest way to fix this would probably be to have inval.c call a\n> > version of InvalidateCatalogSnapshot() that leaves the oldest catalog\n> > snapshot around, but sets up things so that GetCatalogSnapshot() will\n> > return a freshly taken snapshot? ISTM that pretty much every\n> > InvalidateCatalogSnapshot() call within a transaction needs that behaviour?\n> \n> I'd still like to get some input here.\n\nAttached is a one patch that adds assertions to detect this, and one\nthat puts enough workarounds in place to make the tests pass. I don't\nlike this much, but I thought it'd be useful for others to understand\nthe problem.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sat, 28 Mar 2020 13:54:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Catalog invalidations vs catalog scans vs ScanPgRelation()" }, { "msg_contents": "Hi,\n\nRobert, Tom, it'd be great if you could look through this thread. I\nthink there's a problem here (and it has gotten worse after the\nintroduction of catalog snapshots). Both of you at least dabbled in\nrelated code.\n\n\nOn 2020-02-29 12:17:07 -0800, Andres Freund wrote:\n> On 2020-02-28 22:10:52 -0800, Andres Freund wrote:\n> > So, um. What happens is that doDeletion() does a catalog scan, which\n> > sets a snapshot. The results of that catalog scan are then used to\n> > perform modifications. But at that point there's no guarantee that we\n> > still hold *any* snapshot, as e.g. 
invalidations can trigger the catalog\n> > snapshot being released.\n> > \n> > I don't see how that's safe. Without ->xmin preventing that,\n> > intermediate row versions that we did look up could just get vacuumed\n> > away, and replaced with a different row. That does seem like a serious\n> > issue?\n> > \n> > I think there's likely a lot of places that can hit this? What makes it\n> > safe for InvalidateCatalogSnapshot()->SnapshotResetXmin() to release\n> > ->xmin when there previously has been *any* catalog access? Because in\n> > contrast to normal table modifications, there's currently nothing at all\n> > forcing us to hold a snapshot between catalog lookups and their\n> > modifications?\n> > \n> > Am I missing something? Or is this a fairly significant hole in our\n> > arrangements?\n> \n> I still think that's true. In a first iteration I hacked around the\n> problem by explicitly registering a catalog snapshot in\n> RemoveTempRelations(). That *sometimes* allows us to get through the\n> regression tests without the assertions triggering.\n\nThe attached two patches (they're not meant to be applied) reliably get\nthrough the regression tests. But I suspect I'd have to at least do a\nCLOBBER_CACHE_ALWAYS run to find all the actually vulnerable places.\n\n\n> But I don't think that's good enough (even if we fixed the other\n> potential crashes similarly). The only reason that avoids the asserts is\n> because in nearly all other cases there's also a user snapshot that's\n> pushed. But that pushed snapshot can have an xmin that's newer than the\n> catalog snapshot, which means we're still in danger of tids from catalog\n> scans being outdated.\n> \n> My preliminary conclusion is that it's simply not safe to do\n> SnapshotResetXmin() from within InvalidateCatalogSnapshot(),\n> PopActiveSnapshot(), UnregisterSnapshotFromOwner() etc. Instead we need\n> to defer the SnapshotResetXmin() call until at least\n> CommitTransactionCommand()? 
Outside of that there ought (with exception\n> of multi-transaction commands, but they have to be careful anyway) to be\n> no \"in progress\" sequences of related catalog lookups/modifications.\n> \n> Alternatively we could ensure that all catalog lookup/mod sequences\n> ensure that the first catalog snapshot is registered. But that seems\n> like a gargantuan task?\n\nI also just noticed comments of this style in a few places\n\t * Start a transaction so we can access pg_database, and get a snapshot.\n\t * We don't have a use for the snapshot itself, but we're interested in\n\t * the secondary effect that it sets RecentGlobalXmin. (This is critical\n\t * for anything that reads heap pages, because HOT may decide to prune\n\t * them even if the process doesn't attempt to modify any tuples.)\nfollowed by code like\n\n\tStartTransactionCommand();\n\t(void) GetTransactionSnapshot();\n\n\trel = table_open(DatabaseRelationId, AccessShareLock);\n\tscan = table_beginscan_catalog(rel, 0, NULL);\n\nwhich is not safe at all, unfortunately. The snapshot is not\npushed/active, therefore invalidations processed e.g. as part of the\ntable_open() could execute an InvalidateCatalogSnapshot(), which in turn\nwould remove the catalog snapshot from the pairing heap and\nSnapshotResetXmin(). And poof, the backend's xmin is gone.\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 7 Apr 2020 00:24:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Catalog invalidations vs catalog scans vs ScanPgRelation()" }, { "msg_contents": "[ belatedly responding ]\n\nOn Sat, Feb 29, 2020 at 3:17 PM Andres Freund <andres@anarazel.de> wrote:\n> My preliminary conclusion is that it's simply not safe to do\n> SnapshotResetXmin() from within InvalidateCatalogSnapshot(),\n> PopActiveSnapshot(), UnregisterSnapshotFromOwner() etc. Instead we need\n> to defer the SnapshotResetXmin() call until at least\n> CommitTransactionCommand()? 
Outside of that there ought (with exception\n> of multi-transaction commands, but they have to be careful anyway) to be\n> no \"in progress\" sequences of related catalog lookups/modifications.\n>\n> Alternatively we could ensure that all catalog lookup/mod sequences\n> ensure that the first catalog snapshot is registered. But that seems\n> like a gargantuan task?\n\nIf I understand correctly, the scenario you're concerned about is\nsomething like this:\n\n(1) Transaction #1 reads a catalog tuple and immediately releases its snapshot.\n(2) Transaction #2 performs a DELETE or UPDATE on that catalog tuple.\n(3) Transaction #3 completes a VACUUM on the table, so that the old\ntuple is pruned, thus marked dead, and then the TID is marked unused.\n(4) Transaction #4 performs an INSERT which reuses the same TID.\n(5) Transaction #1 now performs a DELETE or UPDATE using the previous\nTID and updates the unrelated tuple which reused the TID rather than\nthe intended tuple.\n\nIt seems to me that what is supposed to prevent this from happening is\nthat you aren't supposed to release your snapshot at the end of step\n#1. You're supposed to hold onto it until after step #5 is complete. I\nthink that there are fair number of places that are already careful\nabout that. I just picked a random source file that I knew Tom had\nwritten and found this bit in extension_config_remove:\n\n extScan = systable_beginscan(extRel, ExtensionOidIndexId, true,\n NULL, 1, key);\n\n extTup = systable_getnext(extScan);\n...a lot more stuff...\n CatalogTupleUpdate(extRel, &extTup->t_self, extTup);\n\n systable_endscan(extScan);\n\nQuite apart from this issue, there's a very good reason why it's like\nthat: extTup might be pointing right into a disk buffer, and if we did\nsystable_endscan() before the last access to it, our pointer could\nbecome invalid. 
A fair number of places are protected due to the scan\nbeing kept open like this, but it looks like most of the ones that use\nSearchSysCacheCopyX + CatalogTupleUpdate are problematic.\n\nI would be inclined to fix this problem by adjusting those places to\nkeep a snapshot open rather than by making some arbitrary rule about\nholding onto a catalog snapshot until the end of the command. That\nseems like a fairly magical coding rule that will happen to work in\nmost practical cases but isn't really a principled approach to the\nproblem. Besides being magical, it's also fragile: just deciding to\nuse a some other snapshot instead of the catalog snapshot causes your\ncode to be subtly broken in a way you're surely not going to expect.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 9 Apr 2020 16:56:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Catalog invalidations vs catalog scans vs ScanPgRelation()" }, { "msg_contents": "Hi,\n\nOn 2020-04-09 16:56:03 -0400, Robert Haas wrote:\n> [ belatedly responding ]\n> \n> On Sat, Feb 29, 2020 at 3:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > My preliminary conclusion is that it's simply not safe to do\n> > SnapshotResetXmin() from within InvalidateCatalogSnapshot(),\n> > PopActiveSnapshot(), UnregisterSnapshotFromOwner() etc. Instead we need\n> > to defer the SnapshotResetXmin() call until at least\n> > CommitTransactionCommand()? Outside of that there ought (with exception\n> > of multi-transaction commands, but they have to be careful anyway) to be\n> > no \"in progress\" sequences of related catalog lookups/modifications.\n> >\n> > Alternatively we could ensure that all catalog lookup/mod sequences\n> > ensure that the first catalog snapshot is registered. 
But that seems\n> > like a gargantuan task?\n> \n> If I understand correctly, the scenario you're concerned about is\n> something like this:\n> \n> (1) Transaction #1 reads a catalog tuple and immediately releases its snapshot.\n> (2) Transaction #2 performs a DELETE or UPDATE on that catalog tuple.\n> (3) Transaction #3 completes a VACUUM on the table, so that the old\n> tuple is pruned, thus marked dead, and then the TID is marked unused.\n> (4) Transaction #4 performs an INSERT which reuses the same TID.\n> (5) Transaction #1 now performs a DELETE or UPDATE using the previous\n> TID and updates the unrelated tuple which reused the TID rather than\n> the intended tuple.\n\nPretty much.\n\nI think it's enough for 3) and 4) to happen in quite that way. If 3) is\njust HOT pruned away, or 3) happens but 4) doesn't, we'd still be in\ntrouble:\nCurrently heap_update/delete has no non-assert check that the\npassed in TID is an existing tuple.\n\n\tlp = PageGetItemId(page, ItemPointerGetOffsetNumber(otid));\n\tAssert(ItemIdIsNormal(lp));\n..\n\toldtup.t_tableOid = RelationGetRelid(relation);\n\toldtup.t_data = (HeapTupleHeader) PageGetItem(page, lp);\n\toldtup.t_len = ItemIdGetLength(lp);\n\toldtup.t_self = *otid;\n...\n\tmodified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,\n\t\t\t\t\t\t\t\t\t\t\t\t &oldtup, newtup);\n\nso we'll treat the page header as a tuple. Not likely to end well.\n\n\n\n> It seems to me that what is supposed to prevent this from happening is\n> that you aren't supposed to release your snapshot at the end of step\n> #1. You're supposed to hold onto it until after step #5 is complete. I\n> think that there are fair number of places that are already careful\n> about that. 
I just picked a random source file that I knew Tom had\n> written and found this bit in extension_config_remove:\n> \n> extScan = systable_beginscan(extRel, ExtensionOidIndexId, true,\n> NULL, 1, key);\n> \n> extTup = systable_getnext(extScan);\n> ...a lot more stuff...\n> CatalogTupleUpdate(extRel, &extTup->t_self, extTup);\n> \n> systable_endscan(extScan);\n> \n> Quite apart from this issue, there's a very good reason why it's like\n> that: extTup might be pointing right into a disk buffer, and if we did\n> systable_endscan() before the last access to it, our pointer could\n> become invalid. A fair number of places are protected due to the scan\n> being kept open like this, but it looks like most of the ones that use\n> SearchSysCacheCopyX + CatalogTupleUpdate are problematic.\n\nIndeed. There's unfortunately quite a few of those. There's also a few\nplaces, most prominently probably performMultipleDeletions(), that\nexplicitly do searches, and then afterwards perform deletions - without\nholding a snapshot.\n\n\n> I would be inclined to fix this problem by adjusting those places to\n> keep a snapshot open rather than by making some arbitrary rule about\n> holding onto a catalog snapshot until the end of the command.\n\nThat's what my prototype patch did. It's doable, although we would need\nmore complete assertions than I had added to ensure we're not\nintroducing more broken places.\n\nWhile my patch did that, for correctness I don't think it can just be\nsomething like\n snap = RegisterSnapshot(GetLatestSnapshot());\nor\n PushActiveSnapshot(GetTransactionSnapshot());\n\nas neither will be the catalog snapshot, which could be older than\nGetLatestSnapshot()/GetTransactionSnapshot(). 
But IIRC we also can't\njust register the catalog snapshot, because some parts of the system\nwill use a \"normal\" snapshot instead (which could be older).\n\n\n> That seems like a fairly magical coding rule that will happen to work\n> in most practical cases but isn't really a principled approach to the\n> problem.\n\nI'm not sure it'd be that magical to only release resources at\nCommitTransactionCommand() time. We kinda do that for a few other things\nalready.\n\n\n> Besides being magical, it's also fragile: just deciding to\n> use a some other snapshot instead of the catalog snapshot causes your\n> code to be subtly broken in a way you're surely not going to expect.\n\nThat's actually kind of an argument the other way for me: Because there\ncan be multiple snapshots, and because it is hard to check that the same\nsnapshot is held across lookup & update, it seems more robust to not\nreset the xmin in the middle of a command.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 Apr 2020 15:32:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Catalog invalidations vs catalog scans vs ScanPgRelation()" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-04-09 16:56:03 -0400, Robert Haas wrote:\n>> That seems like a fairly magical coding rule that will happen to work\n>> in most practical cases but isn't really a principled approach to the\n>> problem.\n\n> I'm not sure it'd be that magical to only release resources at\n> CommitTransactionCommand() time. We kinda do that for a few other things\n> already.\n\nI'd be worried about consumption of resources during a long transaction.\nBut maybe we could release at CommandCounterIncrement?\n\nStill, I tend to agree with Robert that associating a snap with an\nopen catalog scan is the right way. I have vague memories that a long\ntime ago, all catalog modifications were done via the fetch-from-a-\nscan-and-update approach. 
Starting from a catcache tuple instead\nis a relative newbie.\n\nIf we're going to forbid using a catcache tuple as the starting point\nfor updates, one way to enforce it would be to have the catcache\nnot save the TID.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Apr 2020 18:52:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Catalog invalidations vs catalog scans vs ScanPgRelation()" }, { "msg_contents": "Hi,\n\nOn 2020-04-09 18:52:32 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-04-09 16:56:03 -0400, Robert Haas wrote:\n> >> That seems like a fairly magical coding rule that will happen to work\n> >> in most practical cases but isn't really a principled approach to the\n> >> problem.\n> \n> > I'm not sure it'd be that magical to only release resources at\n> > CommitTransactionCommand() time. We kinda do that for a few other things\n> > already.\n> \n> I'd be worried about consumption of resources during a long transaction.\n> But maybe we could release at CommandCounterIncrement?\n\nWhich resources are you thinking of? SnapshotResetXmin() shouldn't take\nany directly. Obviously it can cause bloat - but where would we use a\nsnapshot for only some part of a command, but need not have xmin\npreserved till the end of the command?\n\n\n> Still, I tend to agree with Robert that associating a snap with an\n> open catalog scan is the right way.\n\nI'm wondering if we should do both. I think releasing xmin in the middle\nof a command, but only when invalidations arrive in the middle of it, is\npretty likely to be involved in bugs in the future. But it also seems\ngood to ensure that snapshots are held across relevant operations.\n\nAny idea how to deal with different types of snapshots potentially being\nused within such a sequence?\n\n\n> I have vague memories that a long time ago, all catalog modifications\n> were done via the fetch-from-a- scan-and-update approach. 
Starting\n> from a catcache tuple instead is a relative newbie. If we're going to\n> forbid using a catcache tuple as the starting point for updates, one\n> way to enforce it would be to have the catcache not save the TID.\n\nI suspect that that'd be fairly painful code-churn wise.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 Apr 2020 18:20:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Catalog invalidations vs catalog scans vs ScanPgRelation()" } ]
[ { "msg_contents": "Hi\n\nI would like to enhance the \\g command with a variant \\gcsv\n\nThe proposed command behaves the same as \\g, only the result will always\nbe in csv format.\n\nIt can help with writing psql macros wrapping the \\g command.\n\nOptions, notes?\n\nRegards\n\nPavel", "msg_date": "Sat, 29 Feb 2020 06:43:54 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal \\gcsv" }, { "msg_contents": "On 29/02/2020 06:43, Pavel Stehule wrote:\n> Hi\n> \n> I would like to enhance the \\g command with a variant \\gcsv\n> \n> The proposed command behaves the same as \\g, only the result will always\n> be in csv format.\n> \n> It can help with writing psql macros wrapping the \\g command.\n> \n> Options, notes?\n\nBut then we would need \\ghtml and \\glatex etc. If we want a shortcut\nfor setting a one-off format, I would go for \\gf or something.\n\n \\gf csv\n \\gf html\n \\gf latex\n\n-1 on \\gcsv\n-- \nVik Fearing\n\n\n", "msg_date": "Sat, 29 Feb 2020 11:34:29 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "so 29. 2. 2020 v 11:34 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 29/02/2020 06:43, Pavel Stehule wrote:\n> > Hi\n> >\n> > I would like to enhance the \\g command with a variant \\gcsv\n> >\n> > The proposed command behaves the same as \\g, only the result will always\n> > be in csv format.\n> >\n> > It can help with writing psql macros wrapping the \\g command.\n> >\n> > Options, notes?\n>\n> But then we would need \\ghtml and \\glatex etc. 
If we want a shortcut\n> for setting a one-off format, I would go for \\gf or something.\n>\n> \\gf csv\n> \\gf html\n> \\gf latex\n>\n\nusability of html or latex format in psql is significantly lower than csv\nformat. There is only one generic format for data - csv.\n\nRegards\n\nPavel\n\n\n\n> -1 on \\gcsv\n> --\n> Vik Fearing\n>\n\nso 29. 2. 2020 v 11:34 odesílatel Vik Fearing <vik@postgresfriends.org> napsal:On 29/02/2020 06:43, Pavel Stehule wrote:\n> Hi\n> \n> I would to enhance \\g command about variant \\gcsv\n> \n> proposed command has same behave like \\g, only the result will be every\n> time in csv format.\n> \n> It can helps with writing psql macros wrapping \\g command.\n> \n> Options, notes?\n\nBut then we would need \\ghtml and \\glatex etc.  If we want a shortcut\nfor setting a one-off format, I would go for \\gf or something.\n\n    \\gf csv\n    \\gf html\n    \\gf latexusability of html or latex format in psql is significantly lower than csv format. There is only one generic format for data - csv. RegardsPavel\n\n-1 on \\gcsv\n-- \nVik Fearing", "msg_date": "Sat, 29 Feb 2020 11:59:22 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "On Sat, Feb 29, 2020 at 11:59:22AM +0100, Pavel Stehule wrote:\n> so 29. 2. 2020 v 11:34 odes�latel Vik Fearing <vik@postgresfriends.org>\n> napsal:\n> \n> > On 29/02/2020 06:43, Pavel Stehule wrote:\n> > > Hi\n> > >\n> > > I would to enhance \\g command about variant \\gcsv\n> > >\n> > > proposed command has same behave like \\g, only the result will be every\n> > > time in csv format.\n> > >\n> > > It can helps with writing psql macros wrapping \\g command.\n> > >\n> > > Options, notes?\n> >\n> > But then we would need \\ghtml and \\glatex etc. 
If we want a shortcut\n> > for setting a one-off format, I would go for \\gf or something.\n> >\n> > \\gf csv\n> > \\gf html\n> > \\gf latex\n> >\n> \n> usability of html or latex format in psql is significantly lower than csv\n> format. There is only one generic format for data - csv.\n\nNot exactly. There's a lot of uses for things along the lines of \n\n\\gf json\n\\gf yaml\n\nI'd rather add a new \\gf that takes arguments, as it seems more\nextensible. For example, there are uses for\n\n\\gf csv header\n\nif no header is the default, or \n\n\\gf csv noheader\n\nif header is the default.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Sat, 29 Feb 2020 18:06:51 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "so 29. 2. 2020 v 11:34 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 29/02/2020 06:43, Pavel Stehule wrote:\n> > Hi\n> >\n> > I would to enhance \\g command about variant \\gcsv\n> >\n> > proposed command has same behave like \\g, only the result will be every\n> > time in csv format.\n> >\n> > It can helps with writing psql macros wrapping \\g command.\n> >\n> > Options, notes?\n>\n> But then we would need \\ghtml and \\glatex etc. If we want a shortcut\n> for setting a one-off format, I would go for \\gf or something.\n>\n> \\gf csv\n> \\gf html\n> \\gf latex\n>\n\nok. I implemented \\gf. See a attached patch\n\nRegards\n\nPavel\n\n\n> -1 on \\gcsv\n> --\n> Vik Fearing\n>", "msg_date": "Sun, 1 Mar 2020 13:29:19 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "so 29. 2. 
2020 v 18:06 odesílatel David Fetter <david@fetter.org> napsal:\n\n> On Sat, Feb 29, 2020 at 11:59:22AM +0100, Pavel Stehule wrote:\n> > so 29. 2. 2020 v 11:34 odesílatel Vik Fearing <vik@postgresfriends.org>\n> > napsal:\n> >\n> > > On 29/02/2020 06:43, Pavel Stehule wrote:\n> > > > Hi\n> > > >\n> > > > I would to enhance \\g command about variant \\gcsv\n> > > >\n> > > > proposed command has same behave like \\g, only the result will be\n> every\n> > > > time in csv format.\n> > > >\n> > > > It can helps with writing psql macros wrapping \\g command.\n> > > >\n> > > > Options, notes?\n> > >\n> > > But then we would need \\ghtml and \\glatex etc. If we want a shortcut\n> > > for setting a one-off format, I would go for \\gf or something.\n> > >\n> > > \\gf csv\n> > > \\gf html\n> > > \\gf latex\n> > >\n> >\n> > usability of html or latex format in psql is significantly lower than csv\n> > format. There is only one generic format for data - csv.\n>\n> Not exactly. There's a lot of uses for things along the lines of\n>\n> \\gf json\n> \\gf yaml\n>\n> I'd rather add a new \\gf that takes arguments, as it seems more\n> extensible. For example, there are uses for\n>\n\nI implemented \\gf by Vik's proposal\n\n\n> \\gf csv header\n>\n> if no header is the default, or\n>\n> \\gf csv noheader\n>\n\nIt is little bit hard (although it looks simply).\n\nThe second option of this command can be file - and it reads all to end of\nline. 
So in this case a implementation of variadic parameters is difficult.\n\nMotivation for this patch is a possibility to write macros like\n\npostgres=# \\set gnuplot '\\\\g | gnuplot -p -e \"set datafile separator\n\\',\\'; set key autotitle columnhead; set terminal dumb enhanced; plot\n\\'-\\'with boxes\"'\n\npostgres=# \\pset format csv\n\npostgres=# select i, sin(i) from generate_series(0, 6.3, 0.05) g(i) :gnuplot\n\n\nwith \\gf csv I can do almost what I need.\n\n\\set gnuplot '\\\\gf csv | gnuplot -p -e \"set datafile separator \\',\\'; set\nkey autotitle columnhead; set terminal dumb enhanced; plot \\'-\\'with\nboxes\"'\n\n\n> if header is the default.\n>\n> Best,\n> David.\n> --\n> David Fetter <david(at)fetter(dot)org> http://fetter.org/\n> Phone: +1 415 235 3778\n>\n> Remember to vote!\n> Consider donating to Postgres: http://www.postgresql.org/about/donate\n>\n\nso 29. 2. 2020 v 18:06 odesílatel David Fetter <david@fetter.org> napsal:On Sat, Feb 29, 2020 at 11:59:22AM +0100, Pavel Stehule wrote:\n> so 29. 2. 2020 v 11:34 odesílatel Vik Fearing <vik@postgresfriends.org>\n> napsal:\n> \n> > On 29/02/2020 06:43, Pavel Stehule wrote:\n> > > Hi\n> > >\n> > > I would to enhance \\g command about variant \\gcsv\n> > >\n> > > proposed command has same behave like \\g, only the result will be every\n> > > time in csv format.\n> > >\n> > > It can helps with writing psql macros wrapping \\g command.\n> > >\n> > > Options, notes?\n> >\n> > But then we would need \\ghtml and \\glatex etc.  If we want a shortcut\n> > for setting a one-off format, I would go for \\gf or something.\n> >\n> >     \\gf csv\n> >     \\gf html\n> >     \\gf latex\n> >\n> \n> usability of html or latex format in psql is significantly lower than csv\n> format. There is only one generic format for data - csv.\n\nNot exactly.  There's a lot of uses for things along the lines of \n\n\\gf json\n\\gf yaml\n\nI'd rather add a new \\gf that takes arguments, as it seems more\nextensible. 
For example, there are uses forI implemented \\gf by Vik's proposal \n\n\\gf csv header\n\nif no header is the default, or \n\n\\gf csv noheaderIt is little bit hard (although it looks simply).The second option of this command can be file - and it reads all to end of line. So in this case a implementation of variadic parameters is difficult.Motivation for this patch is a possibility to write macros likepostgres=# \\set gnuplot '\\\\g | gnuplot -p -e \"set datafile separator \\',\\'; set key autotitle columnhead; set terminal dumb enhanced; plot \\'-\\'with boxes\"' \n\npostgres=# \\pset format csv\n\npostgres=# select i, sin(i) from generate_series(0, 6.3, 0.05) g(i) :gnuplot with \\gf csv I can do almost what I need. \\set gnuplot '\\\\gf csv | gnuplot -p -e \"set datafile separator \\',\\'; set key autotitle columnhead; set terminal dumb enhanced; plot \\'-\\'with boxes\"' \n\nif header is the default.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate", "msg_date": "Sun, 1 Mar 2020 13:34:23 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "On 01/03/2020 13:29, Pavel Stehule wrote:\n> so 29. 2. 2020 v 11:34 odesílatel Vik Fearing <vik@postgresfriends.org>\n> napsal:\n> \n>> On 29/02/2020 06:43, Pavel Stehule wrote:\n>>> Hi\n>>>\n>>> I would to enhance \\g command about variant \\gcsv\n>>>\n>>> proposed command has same behave like \\g, only the result will be every\n>>> time in csv format.\n>>>\n>>> It can helps with writing psql macros wrapping \\g command.\n>>>\n>>> Options, notes?\n>>\n>> But then we would need \\ghtml and \\glatex etc. If we want a shortcut\n>> for setting a one-off format, I would go for \\gf or something.\n>>\n>> \\gf csv\n>> \\gf html\n>> \\gf latex\n>>\n> \n> ok. I implemented \\gf. 
See a attached patch\n\nI snuck this into the commitfest that starts today while no one was\nlooking. https://commitfest.postgresql.org/27/2503/\n\nAnd I added myself as reviewer.\n-- \nVik Fearing\n\n\n", "msg_date": "Sun, 1 Mar 2020 15:49:32 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "Hi\n\nrebase\n\nRegards\n\nPavel", "msg_date": "Tue, 24 Mar 2020 11:02:47 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "On 3/24/20 3:02 AM, Pavel Stehule wrote:\n> Hi\n> \n> rebase\n\nThank you, Pavel.\n\nI have now had time to review it, and it looks good to me except for two\nissues.\n\nThe first is, even though I suggested gf, I think it should actually be\ngfmt. There may be something else in the future that starts with f and\nwe shouldn't close ourselves off to it.\n\nThe second is tab completion doesn't work for the second argument.\nAdding the following fixes that:\n\ndiff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\nindex ed6945a7f12..9d8cf442972 100644\n--- a/src/bin/psql/tab-complete.c\n+++ b/src/bin/psql/tab-complete.c\n@@ -3786,6 +3786,12 @@ psql_completion(const char *text, int start, int end)\n COMPLETE_WITH_CS(\"aligned\", \"asciidoc\", \"csv\", \"html\",\n\"latex\",\n \"latex-longtable\",\n\"troff-ms\", \"unaligned\",\n \"wrapped\");\n+ else if (TailMatchesCS(\"\\\\gf\", MatchAny))\n+ {\n+ completion_charp = \"\\\\\";\n+ completion_force_quote = false;\n+ matches = rl_completion_matches(text, complete_from_files);\n+ }\n\n else if (TailMatchesCS(\"\\\\h|\\\\help\"))\n COMPLETE_WITH_LIST(sql_commands);\n\n\nAfter some opinions on the first issue and fixing the second, I think\nthis is good to be committed.\n-- \nVik Fearing\n\n\n", "msg_date": "Thu, 26 Mar 2020 09:45:23 -0700", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, 
"msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "Hi\n\nčt 26. 3. 2020 v 17:45 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 3/24/20 3:02 AM, Pavel Stehule wrote:\n> > Hi\n> >\n> > rebase\n>\n> Thank you, Pavel.\n>\n> I have now had time to review it, and it looks good to me except for two\n> issues.\n>\n> The first is, even though I suggested gf, I think it should actually be\n> gfmt. There may be something else in the future that starts with f and\n> we shouldn't close ourselves off to it.\n>\n\nrenamed to \\gfmt\n\n\n> The second is tab completion doesn't work for the second argument.\n> Adding the following fixes that:\n>\n> diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\n> index ed6945a7f12..9d8cf442972 100644\n> --- a/src/bin/psql/tab-complete.c\n> +++ b/src/bin/psql/tab-complete.c\n> @@ -3786,6 +3786,12 @@ psql_completion(const char *text, int start, int\n> end)\n> COMPLETE_WITH_CS(\"aligned\", \"asciidoc\", \"csv\", \"html\",\n> \"latex\",\n> \"latex-longtable\",\n> \"troff-ms\", \"unaligned\",\n> \"wrapped\");\n> + else if (TailMatchesCS(\"\\\\gf\", MatchAny))\n> + {\n> + completion_charp = \"\\\\\";\n> + completion_force_quote = false;\n> + matches = rl_completion_matches(text, complete_from_files);\n> + }\n>\n> else if (TailMatchesCS(\"\\\\h|\\\\help\"))\n> COMPLETE_WITH_LIST(sql_commands);\n>\n>\nmerged\n\n\n> After some opinions on the first issue and fixing the second, I think\n> this is good to be committed.\n>\n\nThank you for review\n\nPavel\n\n-- \n> Vik Fearing\n>", "msg_date": "Thu, 26 Mar 2020 18:49:11 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "On 3/26/20 10:49 AM, Pavel Stehule wrote:\n> Hi\n> \n> čt 26. 3. 
2020 v 17:45 odesílatel Vik Fearing <vik@postgresfriends.org>\n> napsal:\n> \n>> After some opinions on the first issue and fixing the second, I think\n>> this is good to be committed.\n>>\n> \n> Thank you for review\n\nThis patch now looks good to me. Marking as Ready for Committer.\n-- \nVik Fearing\n\n\n", "msg_date": "Thu, 26 Mar 2020 10:55:39 -0700", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "čt 26. 3. 2020 v 18:55 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 3/26/20 10:49 AM, Pavel Stehule wrote:\n> > Hi\n> >\n> > čt 26. 3. 2020 v 17:45 odesílatel Vik Fearing <vik@postgresfriends.org>\n> > napsal:\n> >\n> >> After some opinions on the first issue and fixing the second, I think\n> >> this is good to be committed.\n> >>\n> >\n> > Thank you for review\n>\n> This patch now looks good to me. Marking as Ready for Committer.\n>\n\nThank you very much\n\nPavel\n\n-- \n> Vik Fearing\n>\n\nčt 26. 3. 2020 v 18:55 odesílatel Vik Fearing <vik@postgresfriends.org> napsal:On 3/26/20 10:49 AM, Pavel Stehule wrote:\n> Hi\n> \n> čt 26. 3. 2020 v 17:45 odesílatel Vik Fearing <vik@postgresfriends.org>\n> napsal:\n> \n>> After some opinions on the first issue and fixing the second, I think\n>> this is good to be committed.\n>>\n> \n> Thank you for review\n\nThis patch now looks good to me.  Marking as Ready for Committer.Thank you very muchPavel\n-- \nVik Fearing", "msg_date": "Thu, 26 Mar 2020 18:56:27 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "On 2020-03-26 18:49, Pavel Stehule wrote:\n> Hi\n> \n> [psql-gfmt.patch]\n\nThis seems useful and works well; I haven't found any errors. 
Well done.\n\nHowever, I have a suggestion that is perhaps slightly outside of this \npatch but functionally so close that maybe we can discuss it here.\n\nWhen you try to get a tab-separated output via this new \\gfmt in a \none-liner\nyou're still forced to use\n \\pset csv_fieldsep '\\t'\n\nWould it be possible to do one of the following to enable a more compact \none-liner syntax:\n\n1. add an option:\n \\gfmt tsv --> use a TAB instead of a comma in the csv\n\nor\n\n2. let the psql command-line option '--csv' honour the value given by \npsql -F/--field-separator (it does not do so now)\n\nor\n\n3. add an psql -commandline option:\n --csv-field-separator\n\nAny of these three (I'd prefer the first) would make producing a tsv in \nshell one-liners with psql easier/more compact.\n\n\nThanks,\n\n\nErik Rijkers\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Thu, 26 Mar 2020 19:41:45 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "čt 26. 3. 2020 v 19:41 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n\n> On 2020-03-26 18:49, Pavel Stehule wrote:\n> > Hi\n> >\n> > [psql-gfmt.patch]\n>\n> This seems useful and works well; I haven't found any errors. Well done.\n>\n> However, I have a suggestion that is perhaps slightly outside of this\n> patch but functionally so close that maybe we can discuss it here.\n>\n> When you try to get a tab-separated output via this new \\gfmt in a\n> one-liner\n> you're still forced to use\n> \\pset csv_fieldsep '\\t'\n>\n> Would it be possible to do one of the following to enable a more compact\n> one-liner syntax:\n>\n> 1. add an option:\n> \\gfmt tsv --> use a TAB instead of a comma in the csv\n>\n> or\n>\n> 2. let the psql command-line option '--csv' honour the value given by\n> psql -F/--field-separator (it does not do so now)\n>\n> or\n>\n> 3. 
add an psql -commandline option:\n> --csv-field-separator\n>\n> Any of these three (I'd prefer the first) would make producing a tsv in\n> shell one-liners with psql easier/more compact.\n>\n\nI understand to your proposal, but it's hard to do inside \\gfmt command\n\n1. a syntax of psql backslash commands doesn't support named parameters,\nand \\gfmt (like some others \\gx) statements has optional parameter already.\nThere was a long discussion (without success) about possible\nparametrizations of psql commands.\n\n2. if I understand to tsv format, then it is not CSV format with different\nseparator.\n\nthe most correct design is introduction new output format \"tsv\".This format\ncan produce 100% valid tsv.\n\nRegards\n\nPavel\n\n\n>\n> Thanks,\n>\n>\n> Erik Rijkers\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n\nčt 26. 3. 2020 v 19:41 odesílatel Erik Rijkers <er@xs4all.nl> napsal:On 2020-03-26 18:49, Pavel Stehule wrote:\n> Hi\n> \n> [psql-gfmt.patch]\n\nThis seems useful and works well; I haven't found any errors. Well done.\n\nHowever, I have a suggestion that is perhaps slightly outside of this \npatch but functionally so close that maybe we can discuss it here.\n\nWhen you try to get a tab-separated output via this new  \\gfmt  in a \none-liner\nyou're still forced to use\n    \\pset csv_fieldsep '\\t'\n\nWould it be possible to do one of the following to enable a more compact \none-liner syntax:\n\n1. add an option:\n     \\gfmt tsv   --> use a TAB instead of a comma in the csv\n\nor\n\n2. let the psql command-line option '--csv' honour the value given by  \npsql -F/--field-separator (it does not do so now)\n\nor\n\n3. add an psql -commandline option:\n     --csv-field-separator\n\nAny of these three (I'd prefer the first) would make producing a tsv in \nshell one-liners with psql easier/more compact.I understand to your proposal, but it's hard to do inside \\gfmt command1. 
a syntax of psql backslash commands doesn't support named parameters, and \\gfmt (like some others \\gx) statements has optional parameter already. There was a long discussion (without success) about possible parametrizations of psql commands.2. if I understand to tsv format, then it is not CSV format with different separator.the most correct design is introduction new output format \"tsv\".This format can produce 100% valid tsv.RegardsPavel\n\n\nThanks,\n\n\nErik Rijkers", "msg_date": "Fri, 27 Mar 2020 21:27:19 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "\tErik Rijkers wrote:\n\n> 2. let the psql command-line option '--csv' honour the value given by \n> psql -F/--field-separator (it does not do so now)\n>\n> or\n> \n> 3. add an psql -commandline option:\n> --csv-field-separator\n\nSetting the field separator on the command line is already supported\nthrough this kind of invocation:\n\npsql --csv -P csv_fieldsep=$'\\t'\n\nbash expands $'\\t' to a tab character. Other shells might need\ndifferent tricks.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", "msg_date": "Sat, 28 Mar 2020 15:06:05 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "so 28. 3. 2020 v 15:06 odesílatel Daniel Verite <daniel@manitou-mail.org>\nnapsal:\n\n> Erik Rijkers wrote:\n>\n> > 2. let the psql command-line option '--csv' honour the value given by\n> > psql -F/--field-separator (it does not do so now)\n> >\n> > or\n> >\n> > 3. add an psql -commandline option:\n> > --csv-field-separator\n>\n> Setting the field separator on the command line is already supported\n> through this kind of invocation:\n>\n> psql --csv -P csv_fieldsep=$'\\t'\n>\n> bash expands $'\\t' to a tab character. 
Other shells might need\n> different tricks.\n>\n\nWe have named parameters in shell, but not in psql\n\n\n\n>\n> Best regards,\n> --\n> Daniel Vérité\n> PostgreSQL-powered mailer: http://www.manitou-mail.org\n> Twitter: @DanielVerite\n>\n\nso 28. 3. 2020 v 15:06 odesílatel Daniel Verite <daniel@manitou-mail.org> napsal:        Erik Rijkers wrote:\n\n> 2. let the psql command-line option '--csv' honour the value given by  \n> psql -F/--field-separator (it does not do so now)\n>\n> or\n> \n> 3. add an psql -commandline option:\n>     --csv-field-separator\n\nSetting the field separator on the command line is already supported\nthrough this kind of invocation:\n\npsql --csv -P csv_fieldsep=$'\\t'\n\nbash expands $'\\t' to a tab character. Other shells might need\ndifferent tricks.We have named parameters in shell, but not in psql \n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite", "msg_date": "Sat, 28 Mar 2020 15:09:34 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "On 2020-03-28 15:06, Daniel Verite wrote:\n> Erik Rijkers wrote:\n> \n>> 2. let the psql command-line option '--csv' honour the value given by\n>> psql -F/--field-separator (it does not do so now)\n>> \n>> or\n>> \n>> 3. add an psql -commandline option:\n>> --csv-field-separator\n> \n> Setting the field separator on the command line is already supported\n> through this kind of invocation:\n> \n> psql --csv -P csv_fieldsep=$'\\t'\n> \n> bash expands $'\\t' to a tab character. Other shells might need\n> different tricks.\n\nAh yes, that works. I had not seen that psql -P option. 
Thanks!\n\n> \n> \n> Best regards,\n> --\n> Daniel Vérité\n> PostgreSQL-powered mailer: http://www.manitou-mail.org\n> Twitter: @DanielVerite\n\n\n", "msg_date": "Sat, 28 Mar 2020 15:39:02 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "I took a look at this proposal, and while I see the value of being\nable to do something like this, it seems pretty short-sighted and\nnon-orthogonal as it stands. We've already got \\gx, which is a wart,\nand now this patch wants to add \\gfmt which is a different wart of the\nsame ilk. What happens if you want to combine them? Plus we already\nhad David complaining upthread that he'd like to be able to select\nCSV-format suboptions; and now here comes Erik wondering about the\nsame thing.\n\nIt seems to me that this line of development is going to end in a whole\nflotilla of \\g-something commands that aren't composable and never quite\nsatisfy the next use-case to come down the pike, so we keep on needing\neven more of them.\n\nSo I think we really need a way to be able to specify multiple different\n\\pset subcommands that apply just for the duration of one \\g command.\nPavel dismissed that upthread as being too hard, but I think we'd better\ntry harder.\n\nPlan A:\n\nConsider some syntax along the lines of\n\n\\gpset (pset-option-name [pset-option-value]) ... filename\n\nor if you don't like parentheses, choose some other punctuation to wrap\nthe \\pset options in. I initially thought of square brackets, but I'm\nafraid they might be just too darn confusing to document --- how could\nyou make them distinct from metasyntax square brackets, especially in\nplain-ASCII docs? 
Also it'd have to be punctuation that's unlikely to\nstart a file name --- but parens are already reserved in most shells.\n\nPlan B:\n\nAnother idea is to break the operation into multiple backslash commands,\nwhere the initial ones set up state that doesn't do anything until the\noutput command comes along:\n\n\\tpset [ pset-option-name [ pset-option-value ] ]\n\n Sets a \"temporary\" pset option, which will have effect in the\n next \\gpset command; or with no parameters, shows the current set\n of temporary options\n\n\\gpset filename\n\n Execute SQL command and output to filename (or pipe), using the\n pset option set defined by preceding \\tpset commands, and reverting\n that option set to all-defaults afterward.\n\nProbably we could think of better terminology than \"temporary\"\nand a better command name than \"\\tpset\", but you get the gist.\n\nAdmittedly, \"\\tpset format csv \\gpset filename\" is a bit more\nverbose than the current proposal of \"\\gfmt csv filename\"\n... but we'd have solved the problem once and for all, even\nfor pset options we've not invented yet.\n\nPlan C:\n\nProbably there are other ways to get there; these are just the\nfirst ideas that came to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Mar 2020 19:53:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "On 4/1/20 1:53 AM, Tom Lane wrote:\n> Consider some syntax along the lines of\n> \n> \\gpset (pset-option-name [pset-option-value]) ... filename\n> \n> or if you don't like parentheses, choose some other punctuation to wrap\n> the \\pset options in. I initially thought of square brackets, but I'm\n> afraid they might be just too darn confusing to document --- how could\n> you make them distinct from metasyntax square brackets, especially in\n> plain-ASCII docs? 
Also it'd have to be punctuation that's unlikely to\n> start a file name --- but parens are already reserved in most shells.\n\n\nIf parens are going to be required, why don't we just add them to \\g?\n\nTABLE blah \\g (format csv) filename\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 1 Apr 2020 09:07:26 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 4/1/20 1:53 AM, Tom Lane wrote:\n>> Consider some syntax along the lines of\n>> \\gpset (pset-option-name [pset-option-value]) ... filename\n\n> If parens are going to be required, why don't we just add them to \\g?\n> TABLE blah \\g (format csv) filename\n\nYeah, if we're willing to assume that nobody uses filenames beginning\nwith '(', we could just extend \\g's syntax rather than adding a new\ncommand.\n\nAfter sleeping on it, though, I'm liking my Plan B idea better than\nPlan A. Plan B is very clearly implementable without needing surgery\non the backslash-command parser (I didn't look at the lexer to see\nwhat paren-handling would involve, but it might be painful). And it\ndoesn't put any new limits on what pset parameters can look like;\nPlan A would likely result in some problems if anybody wants to use\nparens in future pset options.\n\nI think that maybe the best terminology for Plan B would be to say\nthat there's an \"alternate\" formatting parameter set, which is\nmanipulated by \\apset and then used by \\ga.\n\nAnother thought, bearing in mind the dictum that the only good numbers\nin computer science are 0, 1, and N, is to introduce a concept of named\nformatting parameter sets, which you'd manipulate with say\n\t\\npset set-name [param-name [param-value]]\nand use with\n\t\\gn set-name filename-or-command\nA likely usage pattern for that would be to set up a few favorite\nformats in your ~/.psqlrc, and then they'd be available to just use\nimmediately in \\gn. 
(In this universe, \\gn should not destroy or\nreset the parameter set it uses.)\n\nThis is way beyond what anyone has asked for, so I'm not seriously\nproposing that we do it right now, but it might be something to keep\nin mind as a possible future direction. The main thing that it calls\ninto question is whether we really want \\ga to reset the alternate\nparameter values after use. Maybe it'd be better not to --- I can\nthink of about-equally-likely usage patterns where you would want\nthat or not. We could invent an explicit \"\\apset reset\" command\ninstead of auto-reset. I could see having a command to copy the\ncurrent primary formatting parameters to the alternate area, too.\n\nThere's an argument that this is all way too complicated, of course,\nand maybe it is. But considering that we've already had two requests\nfor things you can't do with \\gfmt as it stands, I think the patch\nis too simple as it is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Apr 2020 11:18:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "\tTom Lane wrote:\n\n> I could see having a command to copy the current primary formatting\n> parameters to the alternate area, too.\n\nWe could have a stack to store parameters before temporary\nchanges, for instance if you want to do one csv export and\ncome back to normal without assuming what \"normal\"\nvalues were.\n\n\\pset push format csv_fieldsep\n\\pset format csv\n\\pset csv_fielsep '\\t'\nsome command \\g somefile\n\\pset pop\n\nSo \\pset pop would reset the pushed parameters\nto their values when pushed, which also could be all\nparameters:\n\n\\pset push all\n\\pset param1 something\n\\pset param2 something-else\n...other commands...\n\\pset pop\n\nor\n\n\\pset push all\n\\i somescript.sql\n\\pset pop\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", 
"msg_date": "Wed, 01 Apr 2020 17:52:06 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "On Wed, 1 Apr 2020 at 11:52, Daniel Verite <daniel@manitou-mail.org> wrote:\n\n> Tom Lane wrote:\n>\n> > I could see having a command to copy the current primary formatting\n> > parameters to the alternate area, too.\n>\n> We could have a stack to store parameters before temporary\n> changes, for instance if you want to do one csv export and\n> come back to normal without assuming what \"normal\"\n> values were.\n>\n\nI think it might be a good idea to decide whether psql is to be a\nprogramming environment, or just a command shell.\n\nIf it is to be a programming environment, we should either adopt an\nexisting language or strike a committee of programming language experts to\ndesign a new one.\n\nIf it is to be a command shell, new features should be evaluated in part on\nwhether they move psql significantly closer to being a programming language\nand rejected if they do.\n\nOn Wed, 1 Apr 2020 at 11:52, Daniel Verite <daniel@manitou-mail.org> wrote:        Tom Lane wrote:\n\n>  I could see having a command to copy the current primary formatting\n> parameters to the alternate area, too.\n\nWe could have a stack to store parameters before temporary\nchanges, for instance if you want to do one csv export and\ncome back to normal without assuming what \"normal\"\nvalues were.I think it might be a good idea to decide whether psql is to be a programming environment, or just a command shell.If it is to be a programming environment, we should either adopt an existing language or strike a committee of programming language experts to design a new one.If it is to be a command shell, new features should be evaluated in part on whether they move psql significantly closer to being a programming language and rejected if they do.", "msg_date": "Wed, 1 Apr 2020 12:01:08 -0400", "msg_from": "Isaac 
Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "st 1. 4. 2020 v 17:52 odesílatel Daniel Verite <daniel@manitou-mail.org>\nnapsal:\n\n> Tom Lane wrote:\n>\n> > I could see having a command to copy the current primary formatting\n> > parameters to the alternate area, too.\n>\n> We could have a stack to store parameters before temporary\n> changes, for instance if you want to do one csv export and\n> come back to normal without assuming what \"normal\"\n> values were.\n>\n> \\pset push format csv_fieldsep\n> \\pset format csv\n> \\pset csv_fielsep '\\t'\n> some command \\g somefile\n> \\pset pop\n>\n> So \\pset pop would reset the pushed parameters\n> to their values when pushed, which also could be all\n> parameters:\n>\n> \\pset push all\n> \\pset param1 something\n> \\pset param2 something-else\n> ...other commands...\n> \\pset pop\n>\n> or\n>\n> \\pset push all\n> \\i somescript.sql\n> \\pset pop\n>\n>\nIt can work, but it is not user friendly - my proposal was motivated by\nusing some quick csv exports to gplot's pipe.\n\nRegards\n\nPavel\n\n>\n> Best regards,\n> --\n> Daniel Vérité\n> PostgreSQL-powered mailer: http://www.manitou-mail.org\n> Twitter: @DanielVerite\n>\n\nst 1. 4. 
2020 v 17:52 odesílatel Daniel Verite <daniel@manitou-mail.org> napsal:        Tom Lane wrote:\n\n>  I could see having a command to copy the current primary formatting\n> parameters to the alternate area, too.\n\nWe could have a stack to store parameters before temporary\nchanges, for instance if you want to do one csv export and\ncome back to normal without assuming what \"normal\"\nvalues were.\n\n\\pset push format csv_fieldsep\n\\pset format csv\n\\pset  csv_fielsep '\\t'\nsome command \\g somefile\n\\pset pop\n\nSo \\pset pop would reset the pushed parameters\nto their values when pushed, which also could be all\nparameters:\n\n\\pset push all\n\\pset param1 something\n\\pset param2 something-else\n...other commands...\n\\pset pop\n\nor\n\n\\pset push all\n\\i somescript.sql\n\\pset pop\nIt can work, but it is not user friendly - my proposal was motivated by using some quick csv exports to gplot's pipe.RegardsPavel\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite", "msg_date": "Wed, 1 Apr 2020 18:03:20 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> It can work, but it is not user friendly - my proposal was motivated by\n> using some quick csv exports to gplot's pipe.\n\nI kind of liked the stack idea, myself. It's simpler than what I was\nsuggesting and it covers probably 90% of the use-case.\n\nHowever, if we prefer something closer to Plan A ... I took a look at\nthe psql lexer, and the only difference between OT_FILEPIPE and OT_NORMAL\nparsing is if the argument starts with '|'. So we could make it work\nI think. I'd modify my first proposal so far as to make it\n\n\t\\g ( pset-option pset-value ... 
) filename-or-pipe\n\nThat is, require spaces around the parens, and require a value for each\npset-option (no fair using the shortcuts like \"\\pset expanded\"). Then\nit's easy to separate the option names and values from the paren markers.\nThe \\g parser would consume its first argument in OT_FILEPIPE mode, and\nthen if it sees '(' it would consume arguments in OT_NORMAL mode until\nit's found the ')'.\n\nThis way also narrows the backwards-compatibility problem from \"fails if\nyour filename starts with '('\" to \"fails if your filename is exactly '('\",\nwhich seems acceptably improbable to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Apr 2020 12:29:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "On 2020-Apr-01, Pavel Stehule wrote:\n\n> It can work, but it is not user friendly - my proposal was motivated by\n> using some quick csv exports to gplot's pipe.\n\nCan we fix that by adding some syntax to allow command aliases?\nSo you could add to your .psqlrc something like\n\n\\alias \\gcsv \\pset push all \\; \\cbuf; \\; \\pset pop\n\nwhere the \\cbuf is a hypothetical \"function\" that expands to the current\nquery buffer. This needs some refining I guess, but it'd allow you to\ncreate your own shortcuts for the most common features you want without\nexcessive typing effort.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 1 Apr 2020 15:09:31 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "On 4/1/20 6:29 PM, Tom Lane wrote:\n> I'd modify my first proposal so far as to make it\n> \n> \t\\g ( pset-option pset-value ... 
) filename-or-pipe\n> \n> That is, require spaces around the parens\n\nI think requiring spaces inside the parentheses is a severe POLA\nviolation and I vote strongly against it.\n-- \nVik Fearing\n\n\n", "msg_date": "Thu, 2 Apr 2020 02:29:56 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "\tAlvaro Herrera wrote:\n\n> Can we fix that by adding some syntax to allow command aliases?\n> So you could add to your .psqlrc something like\n> \n> \\alias \\gcsv \\pset push all \\; \\cbuf; \\; \\pset pop\n> \n> where the \\cbuf is a hypothetical \"function\" that expands to the current\n> query buffer. This needs some refining I guess, but it'd allow you to\n> create your own shortcuts for the most common features you want without\n> excessive typing effort.\n\nSince variables can contain metacommands, they can be abused\nas macros. For instance I think a declaration like this would work:\n\n\\set gcsv '\\\\pset push all \\\\pset format csv \\\\g \\\\pset pop'\n\nor with another pset with embedded single quotes:\n\n\\set gcsv '\\\\pset push all \\\\pset format csv \\\\pset csv_fieldsep ''\\\\t'' \\\\g\n\\\\pset pop'\n\nThis kind of usage is not mentioned explicitly in the doc, so it might be\nhard to discover, but without the push/pop feature that doesn't exist,\nwe can already do that:\n\ntest=> \\set gcsv '\\\\pset format csv \\\\pset csv_fieldsep ''\\\\t'' \\\\g'\n\ntest=> select 1,2 :gcsv | (echo \"START OF OUTPUT\"; cat)\nOutput format is csv.\nField separator for CSV is \"\t\".\nSTART OF OUTPUT\n?column?\t?column?\n1\t2\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", "msg_date": "Thu, 02 Apr 2020 11:23:49 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "st 1. 4. 
2020 v 18:29 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > It can work, but it is not user friendly - my proposal was motivated by\n> > using some quick csv exports to gplot's pipe.\n>\n> I kind of liked the stack idea, myself. It's simpler than what I was\n> suggesting and it covers probably 90% of the use-case.\n>\n\nThe stack idea probably needs much stronger psql handling error redesign to\nbe safe\n\npostgres=# \\set ON_ERROR_STOP 1\npostgres=# select 10/0 \\echo 'ahoj' \\g \\echo 'nazdar\nahoj\nERROR: division by zero\n\nThere is not guaranteed so the command for returning to stored state will\nbe executed.\n\n\n\n> However, if we prefer something closer to Plan A ... I took a look at\n> the psql lexer, and the only difference between OT_FILEPIPE and OT_NORMAL\n> parsing is if the argument starts with '|'. So we could make it work\n> I think. I'd modify my first proposal so far as to make it\n>\n> \\g ( pset-option pset-value ... ) filename-or-pipe\n>\n> That is, require spaces around the parens, and require a value for each\n> pset-option (no fair using the shortcuts like \"\\pset expanded\"). Then\n> it's easy to separate the option names and values from the paren markers.\n> The \\g parser would consume its first argument in OT_FILEPIPE mode, and\n> then if it sees '(' it would consume arguments in OT_NORMAL mode until\n> it's found the ')'.\n>\n\nTo have this syntax can be nice, but the requirement spaces around\nparenthesis is not too user friendly and natural.\n\nFollowing ideas are based on Tom's ideas\n\nWe can have a new commands for cloning print environments and switch to one\nshot environment. It can be based just on enhancing of \\pset command\n\n\\pset save anyidentifier .. serialize settings\n\\pset load anyidentifier .. load setting\n\\pset oneshot [anyidentifer] .. prepare and set copy of current print\nsetting for next execution command\n\\pset main\n\\pset reset .. 
reset to defaults\n\nso this can support some scenarios\n\n-- one shot csv\n\\pset oneshot -- copy current settings to one shot environment and use one\nshot environment\n\\pset format csv\n\\pset csv_delimiter ;\nselect 1; -- any output\n\n-- prepare named configuration\n\\pset oneshot\n\\pset format csv\n\\pset csv_delimiter ;\n\\pset save czech_csv -- serialize changes against \"main\" environment\n\\pset main\n\n\\pset load czech_csv\nselect 1;\n\nor\n\n\\pset oneshot czech_csv\nselect 1;\n\nSo we just need to enhance syntax only of \\pset command, and we have to\nsupport work with two print settings environments - \"main\" and \"oneshot\"\n\nWhat do you think about this proposal?\n\nRegards\n\nPavel\n\n\n\n\n\n\n> This way also narrows the backwards-compatibility problem from \"fails if\n> your filename starts with '('\" to \"fails if your filename is exactly '('\",\n> which seems acceptably improbable to me.\n>\n> regards, tom lane\n>\n\nst 1. 4. 2020 v 18:29 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> It can work, but it is not user friendly - my proposal was motivated by\n> using some quick csv exports to gplot's pipe.\n\nI kind of liked the stack idea, myself.  It's simpler than what I was\nsuggesting and it covers probably 90% of the use-case.The stack idea probably needs much stronger psql handling error redesign to be safepostgres=# \\set ON_ERROR_STOP 1postgres=# select 10/0 \\echo 'ahoj' \\g \\echo 'nazdarahojERROR:  division by zeroThere is not guaranteed so the command for returning to stored state will be executed. \n\nHowever, if we prefer something closer to Plan A ... I took a look at\nthe psql lexer, and the only difference between OT_FILEPIPE and OT_NORMAL\nparsing is if the argument starts with '|'.  So we could make it work\nI think.  I'd modify my first proposal so far as to make it\n\n        \\g ( pset-option pset-value ... 
) filename-or-pipe\n\nThat is, require spaces around the parens, and require a value for each\npset-option (no fair using the shortcuts like \"\\pset expanded\").  Then\nit's easy to separate the option names and values from the paren markers.\nThe \\g parser would consume its first argument in OT_FILEPIPE mode, and\nthen if it sees '(' it would consume arguments in OT_NORMAL mode until\nit's found the ')'.To have this syntax can be nice, but the requirement spaces around parenthesis is not too user friendly and natural. Following ideas are based on Tom's ideasWe can have a new commands for cloning print environments and switch to one shot environment. It can be based just on enhancing of \\pset command\\pset save anyidentifier .. serialize settings\\pset load anyidentifier .. load setting\\pset oneshot [anyidentifer] .. prepare and set copy of current print setting for next execution command\\pset main \\pset reset .. reset to defaultsso this can support some scenarios-- one shot csv\\pset oneshot  -- copy current settings to one shot environment and use one shot environment\\pset format csv\\pset csv_delimiter ;select 1; -- any output-- prepare named configuration\\pset oneshot\\pset format csv\\pset csv_delimiter ;\\pset save czech_csv -- serialize changes against \"main\" environment\\pset main\\pset load czech_csv select 1;or\\pset oneshot czech_csvselect 1;So we just need to enhance syntax only of \\pset command, and we have to support work with two print settings environments - \"main\" and \"oneshot\"What do you think about this proposal?RegardsPavel\n\nThis way also narrows the backwards-compatibility problem from \"fails if\nyour filename starts with '('\" to \"fails if your filename is exactly '('\",\nwhich seems acceptably improbable to me.\n\n                        regards, tom lane", "msg_date": "Fri, 3 Apr 2020 22:21:30 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" }, { 
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> We can have a new commands for cloning print environments and switch to one\n> shot environment. It can be based just on enhancing of \\pset command\n\n> \\pset save anyidentifier .. serialize settings\n> \\pset load anyidentifier .. load setting\n> \\pset oneshot [anyidentifer] .. prepare and set copy of current print\n> setting for next execution command\n> \\pset main\n> \\pset reset .. reset to defaults\n\nI feel like that's gotten pretty far away from the idea of a simple,\neasy-to-use way of adjusting the parameters for one \\g operation.\nThere'd be a whole lot of typing involved above and beyond the\nobviously-necessary part of specifying the new pset parameter values.\n\n(Also, it's not clear to me how that's any more robust than the\nstack idea. If you could lose \"\\pset pop\" to an error, you could\nlose \"\\pset reset\" too.)\n\nIf people are upset about the extra whitespace in the paren-style\nproposal, we could do without it. The only real problem would be\nif there's ever a pset parameter for which a trailing right paren\ncould be a sensible part of the value. Maybe that's not ever\ngoing to be an issue; or maybe we could provide a quoting mechanism\nfor weird pset values.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Apr 2020 18:24:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "so 4. 4. 2020 v 0:24 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > We can have a new commands for cloning print environments and switch to\n> one\n> > shot environment. It can be based just on enhancing of \\pset command\n>\n> > \\pset save anyidentifier .. serialize settings\n> > \\pset load anyidentifier .. load setting\n> > \\pset oneshot [anyidentifer] .. 
prepare and set copy of current print\n> > setting for next execution command\n> > \\pset main\n> > \\pset reset .. reset to defaults\n>\n> I feel like that's gotten pretty far away from the idea of a simple,\n> easy-to-use way of adjusting the parameters for one \\g operation.\n> There'd be a whole lot of typing involved above and beyond the\n> obviously-necessary part of specifying the new pset parameter values.\n>\n\nfor my original proposal is important only one command \\pset oneshot\n\nso one shot setting can be done by\n\n\\pset oneshot\n\\pset format csv\n\\pset csv_separator ;\nany command that print tuples\n\nthis is your plan B, but we we need just enhance only pset command, and all\nothers can be used without any modifications.\n\n\n> (Also, it's not clear to me how that's any more robust than the\n> stack idea. If you could lose \"\\pset pop\" to an error, you could\n> lose \"\\pset reset\" too.)\n>\n\nThe \\pset reset should not to do switch from one shot to usual settings\n(this is not necessary,because one shot settings is destroyed after\nexecution), but my idea is reset to initial psql settings\n\n>\n> If people are upset about the extra whitespace in the paren-style\n> proposal, we could do without it. The only real problem would be\n> if there's ever a pset parameter for which a trailing right paren\n> could be a sensible part of the value. Maybe that's not ever\n> going to be an issue; or maybe we could provide a quoting mechanism\n> for weird pset values.\n>\n\nParametrization in parenthesis is usual pattern (EXPLAIN, COPY, ..) in\nPostgres, and for me it most natural syntax.\n\n\n\n\n>\n> regards, tom lane\n>\n\nso 4. 4. 2020 v 0:24 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> We can have a new commands for cloning print environments and switch to one\n> shot environment. It can be based just on enhancing of \\pset command\n\n> \\pset save anyidentifier .. 
serialize settings\n> \\pset load anyidentifier .. load setting\n> \\pset oneshot [anyidentifer] .. prepare and set copy of current print\n> setting for next execution command\n> \\pset main\n> \\pset reset .. reset to defaults\n\nI feel like that's gotten pretty far away from the idea of a simple,\neasy-to-use way of adjusting the parameters for one \\g operation.\nThere'd be a whole lot of typing involved above and beyond the\nobviously-necessary part of specifying the new pset parameter values.for my original proposal is important only one command \\pset oneshotso one shot setting can be done by\\pset oneshot\\pset format csv\\pset csv_separator ;any command that print tuplesthis is your plan B, but we we need just enhance only pset command, and all others can be used without any modifications.\n\n(Also, it's not clear to me how that's any more robust than the\nstack idea.  If you could lose \"\\pset pop\" to an error, you could\nlose \"\\pset reset\" too.)The \\pset reset should not to do switch from one shot to usual settings (this is not necessary,because one shot settings is destroyed after execution), but my idea is reset to initial psql settings  \n\nIf people are upset about the extra whitespace in the paren-style\nproposal, we could do without it.  The only real problem would be\nif there's ever a pset parameter for which a trailing right paren\ncould be a sensible part of the value.  Maybe that's not ever\ngoing to be an issue; or maybe we could provide a quoting mechanism\nfor weird pset values.Parametrization in parenthesis is usual pattern (EXPLAIN, COPY, ..) in Postgres, and for me it most natural syntax.  
\n\n                        regards, tom lane", "msg_date": "Sat, 4 Apr 2020 06:31:17 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "Here's a WIP patch for the parenthesized-options route.\n\nI realized that if we make the options be single words in the form\nname=value, we can easily handle the shortcut forms with no value.\nSo that's what this does.\n\nWhat this does *not* do is offer any solution to the question of\nhow to put a right paren as the last character of a pset option\nvalue. I don't really see any easy way to handle that, but maybe\nwe can punt for now.\n\nAlso no docs or test cases, but I see no point in putting effort into\nthat in advance of consensus that this is what we want.\n\n0001 is some save/restore infrastructure that we'd need for pretty\nmuch all of the proposals on the table, and then 0002 improves the\ncommand itself.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 06 Apr 2020 20:28:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "út 7. 4. 2020 v 2:28 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Here's a WIP patch for the parenthesized-options route.\n>\n> I realized that if we make the options be single words in the form\n> name=value, we can easily handle the shortcut forms with no value.\n> So that's what this does.\n>\n> What this does *not* do is offer any solution to the question of\n> how to put a right paren as the last character of a pset option\n> value. 
I don't really see any easy way to handle that, but maybe\n> we can punt for now.\n>\n> Also no docs or test cases, but I see no point in putting effort into\n> that in advance of consensus that this is what we want.\n>\n> 0001 is some save/restore infrastructure that we'd need for pretty\n> much all of the proposals on the table, and then 0002 improves the\n> command itself.\n>\n\nlooks well\n\njust note to syntax\n\nyour patch supports syntax\n\n(option1=value option2=value)\n\nIt looks little bit inconsistent and unusual\n\nshould be better comma separated list?\n\n(option1=value, option2=value)\n\nRegards\n\nPavel\n\n>\n> regards, tom lane\n>\n>\n\nút 7. 4. 2020 v 2:28 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Here's a WIP patch for the parenthesized-options route.\n\nI realized that if we make the options be single words in the form\nname=value, we can easily handle the shortcut forms with no value.\nSo that's what this does.\n\nWhat this does *not* do is offer any solution to the question of\nhow to put a right paren as the last character of a pset option\nvalue.  
I don't really see any easy way to handle that, but maybe\nwe can punt for now.\n\nAlso no docs or test cases, but I see no point in putting effort into\nthat in advance of consensus that this is what we want.\n\n0001 is some save/restore infrastructure that we'd need for pretty\nmuch all of the proposals on the table, and then 0002 improves the\ncommand itself.looks welljust note to syntaxyour patch supports syntax (option1=value option2=value)It looks little bit inconsistent and unusual should be better comma separated list?(option1=value, option2=value)RegardsPavel\n\n                        regards, tom lane", "msg_date": "Tue, 7 Apr 2020 09:29:58 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "On Tue, 7 Apr 2020 at 03:30, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n\n> your patch supports syntax\n>\n> (option1=value option2=value)\n>\n> It looks little bit inconsistent and unusual\n>\n>>\nIt's the same as a connection string. Actually, maybe that's the key to\nallowing parentheses, etc. in option values if needed - allow the same\nsingle-quote quoting as in connection strings. Maybe even just call the\nsame code to do the parsing.\n\nOn Tue, 7 Apr 2020 at 03:30, Pavel Stehule <pavel.stehule@gmail.com> wrote: your patch supports syntax (option1=value option2=value)It looks little bit inconsistent and unusual It's the same as a connection string. Actually, maybe that's the key to allowing parentheses, etc. in option values if needed - allow the same single-quote quoting as in connection strings. Maybe even just call the same code to do the parsing.", "msg_date": "Tue, 7 Apr 2020 06:48:59 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "út 7. 4. 
2020 v 12:49 odesílatel Isaac Morland <isaac.morland@gmail.com>\nnapsal:\n\n> On Tue, 7 Apr 2020 at 03:30, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>\n>> your patch supports syntax\n>>\n>> (option1=value option2=value)\n>>\n>> It looks little bit inconsistent and unusual\n>>\n>>>\n> It's the same as a connection string. Actually, maybe that's the key to\n> allowing parentheses, etc. in option values if needed - allow the same\n> single-quote quoting as in connection strings. Maybe even just call the\n> same code to do the parsing.\n>\n\nI don't think so connection string syntax should be used there.\n\nút 7. 4. 2020 v 12:49 odesílatel Isaac Morland <isaac.morland@gmail.com> napsal:On Tue, 7 Apr 2020 at 03:30, Pavel Stehule <pavel.stehule@gmail.com> wrote: your patch supports syntax (option1=value option2=value)It looks little bit inconsistent and unusual It's the same as a connection string. Actually, maybe that's the key to allowing parentheses, etc. in option values if needed - allow the same single-quote quoting as in connection strings. Maybe even just call the same code to do the parsing.I don't think so connection string syntax should be used there.", "msg_date": "Tue, 7 Apr 2020 16:06:29 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> On Tue, 7 Apr 2020 at 03:30, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> your patch supports syntax\n>> (option1=value option2=value)\n>> It looks little bit inconsistent and unusual\n\n> It's the same as a connection string.\n\nYeah, I didn't exactly invent that out of nowhere. There are a couple\nof reasons not to add commas to the syntax:\n\n* It would make comma be another character that's hard to put into\npset values in this context. 
And unlike right paren, there's plenty\nof reason to think people would wish to do that, eg \\g (fieldsep=,) ...\n\n* If we use commas then people would figure the spaces are optional\nand would try to write things like \\g (expanded,null=NULL) ...\nThat moves the goalposts quite a bit in terms of the code having\nto pick apart strings, and it makes things a lot more ambiguous\nthan they were before --- notably, now '=' is *also* a character\nthat you can't readily write in a pset value.\n\n> Actually, maybe that's the key to\n> allowing parentheses, etc. in option values if needed - allow the same\n> single-quote quoting as in connection strings. Maybe even just call the\n> same code to do the parsing.\n\nI don't think there is a lot of wiggle room to let \\g have its own quoting\nrules. The psqlscanslash lexer has its own ideas about that, which we\ncannot bypass without losing features. An example is that people would\nexpect this to work:\n\t\\set myfmt '(expanded tuples_only)'\n\t\\g :myfmt somefile\nSo we can't just ask to snarf the input in OT_WHOLE_LINE mode and then\npick it apart locally in \\g. And having two levels of quoting rules\nwould be disastrous for usability.\n\nThe lexer does have the ability to report whether an argument was quoted,\nbut it doesn't seem to work quite the way we would want here; it actually\nreports whether any part of the argument was quoted. So if we tried to\nmake right paren recognition depend on that, this'd misbehave:\n\t\\g (fieldsep='|')\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Apr 2020 10:28:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "út 7. 4. 
2020 v 16:28 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Isaac Morland <isaac.morland@gmail.com> writes:\n> > On Tue, 7 Apr 2020 at 03:30, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >> your patch supports syntax\n> >> (option1=value option2=value)\n> >> It looks little bit inconsistent and unusual\n>\n> > It's the same as a connection string.\n>\n> Yeah, I didn't exactly invent that out of nowhere. There are a couple\n> of reasons not to add commas to the syntax:\n>\n> * It would make comma be another character that's hard to put into\n> pset values in this context. And unlike right paren, there's plenty\n> of reason to think people would wish to do that, eg \\g (fieldsep=,) ...\n>\n\nok, this is valid argument\n\n\n> * If we use commas then people would figure the spaces are optional\n> and would try to write things like \\g (expanded,null=NULL) ...\n> That moves the goalposts quite a bit in terms of the code having\n> to pick apart strings, and it makes things a lot more ambiguous\n> than they were before --- notably, now '=' is *also* a character\n> that you can't readily write in a pset value.\n>\n> > Actually, maybe that's the key to\n> > allowing parentheses, etc. in option values if needed - allow the same\n> > single-quote quoting as in connection strings. Maybe even just call the\n> > same code to do the parsing.\n>\n> I don't think there is a lot of wiggle room to let \\g have its own quoting\n> rules. The psqlscanslash lexer has its own ideas about that, which we\n> cannot bypass without losing features. An example is that people would\n> expect this to work:\n> \\set myfmt '(expanded tuples_only)'\n> \\g :myfmt somefile\n> So we can't just ask to snarf the input in OT_WHOLE_LINE mode and then\n> pick it apart locally in \\g. 
And having two levels of quoting rules\n> would be disastrous for usability.\n>\n> The lexer does have the ability to report whether an argument was quoted,\n> but it doesn't seem to work quite the way we would want here; it actually\n> reports whether any part of the argument was quoted. So if we tried to\n> make right paren recognition depend on that, this'd misbehave:\n> \\g (fieldsep='|')\n>\n\nok, I have not any objections.\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n\nút 7. 4. 2020 v 16:28 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Isaac Morland <isaac.morland@gmail.com> writes:\n> On Tue, 7 Apr 2020 at 03:30, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> your patch supports syntax\n>> (option1=value option2=value)\n>> It looks little bit inconsistent and unusual\n\n> It's the same as a connection string.\n\nYeah, I didn't exactly invent that out of nowhere.  There are a couple\nof reasons not to add commas to the syntax:\n\n* It would make comma be another character that's hard to put into\npset values in this context.  And unlike right paren, there's plenty\nof reason to think people would wish to do that, eg \\g (fieldsep=,) ...ok, this is valid argument \n\n* If we use commas then people would figure the spaces are optional\nand would try to write things like \\g (expanded,null=NULL) ...\nThat moves the goalposts quite a bit in terms of the code having\nto pick apart strings, and it makes things a lot more ambiguous\nthan they were before --- notably, now '=' is *also* a character\nthat you can't readily write in a pset value.\n\n> Actually, maybe that's the key to\n> allowing parentheses, etc. in option values if needed - allow the same\n> single-quote quoting as in connection strings. Maybe even just call the\n> same code to do the parsing.\n\nI don't think there is a lot of wiggle room to let \\g have its own quoting\nrules.  The psqlscanslash lexer has its own ideas about that, which we\ncannot bypass without losing features.  
An example is that people would\nexpect this to work:\n        \\set myfmt '(expanded tuples_only)'\n        \\g :myfmt somefile\nSo we can't just ask to snarf the input in OT_WHOLE_LINE mode and then\npick it apart locally in \\g.  And having two levels of quoting rules\nwould be disastrous for usability.\n\nThe lexer does have the ability to report whether an argument was quoted,\nbut it doesn't seem to work quite the way we would want here; it actually\nreports whether any part of the argument was quoted.  So if we tried to\nmake right paren recognition depend on that, this'd misbehave:\n        \\g (fieldsep='|')ok, I have not any objections.RegardsPavel\n\n                        regards, tom lane", "msg_date": "Tue, 7 Apr 2020 16:56:45 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "Here's a more fully fleshed-out patch, with documentation and some\ntest cases. (0001 patch is identical to last time.)\n\nConsidering this is the last day before v13 feature freeze, should\nI push this, or sit on it till v14? I feel reasonably good that we\nhave a nice feature definition here, but it's awfully late in the\ncycle to be designing features.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 07 Apr 2020 13:27:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "út 7. 4. 2020 v 19:27 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Here's a more fully fleshed-out patch, with documentation and some\n> test cases. (0001 patch is identical to last time.)\n>\n> Considering this is the last day before v13 feature freeze, should\n> I push this, or sit on it till v14? I feel reasonably good that we\n> have a nice feature definition here, but it's awfully late in the\n> cycle to be designing features.\n>\n\nI am for pushing to v13. 
This feature should not to break any, and there is\nlot of time to finish details.\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>\n>\n\nút 7. 4. 2020 v 19:27 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Here's a more fully fleshed-out patch, with documentation and some\ntest cases.  (0001 patch is identical to last time.)\n\nConsidering this is the last day before v13 feature freeze, should\nI push this, or sit on it till v14?  I feel reasonably good that we\nhave a nice feature definition here, but it's awfully late in the\ncycle to be designing features.I am for pushing to v13. This feature should not to break any, and there is lot of time to finish details. RegardsPavel \n\n                        regards, tom lane", "msg_date": "Tue, 7 Apr 2020 19:30:41 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> út 7. 4. 2020 v 19:27 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> Considering this is the last day before v13 feature freeze, should\n>> I push this, or sit on it till v14? I feel reasonably good that we\n>> have a nice feature definition here, but it's awfully late in the\n>> cycle to be designing features.\n\n> I am for pushing to v13. This feature should not to break any, and there is\n> lot of time to finish details.\n\nHearing no objections, pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Apr 2020 17:47:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal \\gcsv" }, { "msg_contents": "út 7. 4. 2020 v 23:47 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > út 7. 4. 2020 v 19:27 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >> Considering this is the last day before v13 feature freeze, should\n> >> I push this, or sit on it till v14? 
I feel reasonably good that we\n> >> have a nice feature definition here, but it's awfully late in the\n> >> cycle to be designing features.\n>\n> > I am for pushing to v13. This feature should not to break any, and there\n> is\n> > lot of time to finish details.\n>\n> Hearing no objections, pushed.\n>\n\nThank you\n\nPavel\n\n\n> regards, tom lane\n>\n\nút 7. 4. 2020 v 23:47 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> út 7. 4. 2020 v 19:27 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> Considering this is the last day before v13 feature freeze, should\n>> I push this, or sit on it till v14?  I feel reasonably good that we\n>> have a nice feature definition here, but it's awfully late in the\n>> cycle to be designing features.\n\n> I am for pushing to v13. This feature should not to break any, and there is\n> lot of time to finish details.\n\nHearing no objections, pushed.Thank youPavel\n\n                        regards, tom lane", "msg_date": "Wed, 8 Apr 2020 06:16:18 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal \\gcsv" } ]
[ { "msg_contents": "Hi,\n\nI looked again at one of the potential issues Ranier Vilela's static \nanalysis found and after looking more at it I still think this one is a \nreal bug. But my original patch was incorrect and introduced a use after \nfree bug.\n\nThe code for resetting the hash tables of the SubPlanState node in \nbuildSubPlanHash() looks like it can never run, and additionally it \nwould be broken if it would ever run. This issue was introduced in \ncommit 356687bd825e5ca7230d43c1bffe7a59ad2e77bd.\n\nAs far as I gather the following is true:\n\n1. It sets node->hashtable and node->hashnulls to NULL a few lines \nbefore checking if they are not NULL which means the code for resetting \nthem cannot ever be reached.\n\n2. But if we changed to code so that the ResetTupleHashTable() calls are \nreachable we would hit a typo. It resets node->hashtable twice and never \nresets node->hashnulls which would cause non-obvious bugs.\n\n3. Additionally since the memory context used by the hash tables is \nreset in buildSubPlanHash() if we start resetting hash tables we will \nget a use after free bug.\n\nI have attached a patch which makes sure the code for resetting the hash \ntables is actually run while also fixing the code for resetting them.\n\nSince the current behavior of the code in HEAD is not actually broken, \nit is just an optimization which is not used, this fix does not have to \nbe backpatched.\n\nAndreas", "msg_date": "Sat, 29 Feb 2020 09:58:46 +0100", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": true, "msg_subject": "Broken resetting of subplan hash tables" }, { "msg_contents": "Andreas Karlsson <andreas@proxel.se> writes:\n> The code for resetting the hash tables of the SubPlanState node in \n> buildSubPlanHash() looks like it can never run, and additionally it \n> would be broken if it would ever run. This issue was introduced in \n> commit 356687bd825e5ca7230d43c1bffe7a59ad2e77bd.\n\nRight. 
Justin Pryzby also noted this a couple weeks back, but we\ndidn't deal with it at that point because we were up against a\nrelease deadline.\n\n> As far as I gather the following is true:\n\n> 1. It sets node->hashtable and node->hashnulls to NULL a few lines \n> before checking if they are not NULL which means the code for resetting \n> them cannot ever be reached.\n\nYeah. Those lines should have been removed when the ResetTupleHashTable\nlogic was added.\n\n> 2. But if we changed to code so that the ResetTupleHashTable() calls are \n> reachable we would hit a typo. It resets node->hashtable twice and never \n> resets node->hashnulls which would cause non-obvious bugs.\n\nRight.\n\n> 3. Additionally since the memory context used by the hash tables is \n> reset in buildSubPlanHash() if we start resetting hash tables we will \n> get a use after free bug.\n\nNope, not right. The hash table metadata is now allocated in the\nes_query_cxt; what is in the hashtablecxt is just tuples, and that\ndoes need to be cleared, per the comment for ResetTupleHashTable.\nYour patch as given results in a nasty memory leak, which is easily\ndemonstrated with a small mod to the regression test case I added:\n\nselect sum(ss.tst::int) from\n generate_series(1,10000000) o cross join lateral (\n select i.ten in (select f1 from int4_tbl where f1 <= o) as tst,\n random() as r\n from onek i where i.unique1 = o%1000 ) ss;\n\n> Since the current behavior of the code in HEAD is not actually broken, \n> it is just an optimization which is not used, this fix does not have to \n> be backpatched.\n\nUnfortunately ... this test case also leaks memory like mad in\nHEAD, v12, and v11, because all of them are rebuilding the hash\ntable from scratch without clearing the old one. 
So this is\nindeed broken and a back-patch is necessary.\n\nI noted while looking at this that most of the calls of\nResetTupleHashTable are actually never reached in the regression\ntests (per code coverage report) so I made up some test cases\nthat do reach 'em, and included those in the commit.\n\nTBH, I think that this tuple table API is seriously misdesigned;\nit is confusing and very error-prone to have the callers need to\nreset the tuple context separately from calling ResetTupleHashTable.\nAlso, the callers all look like their resets are intended to destroy\nthe whole hashtable not just its contents (as indeed they were doing,\nbefore the faulty commit). But I didn't attempt to fix that today.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 29 Feb 2020 14:02:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Broken resetting of subplan hash tables" }, { "msg_contents": "Hi,\n\nOn 2020-02-29 14:02:59 -0500, Tom Lane wrote:\n> > 3. Additionally since the memory context used by the hash tables is \n> > reset in buildSubPlanHash() if we start resetting hash tables we will \n> > get a use after free bug.\n > \n> Nope, not right. The hash table metadata is now allocated in the\n> es_query_cxt; what is in the hashtablecxt is just tuples, and that\n> does need to be cleared, per the comment for ResetTupleHashTable.\n> Your patch as given results in a nasty memory leak, which is easily\n> demonstrated with a small mod to the regression test case I added:\n\n> select sum(ss.tst::int) from\n> generate_series(1,10000000) o cross join lateral (\n> select i.ten in (select f1 from int4_tbl where f1 <= o) as tst,\n> random() as r\n> from onek i where i.unique1 = o%1000 ) ss;\n> \n> > Since the current behavior of the code in HEAD is not actually broken, \n> > it is just an optimization which is not used, this fix does not have to \n> > be backpatched.\n> \n> Unfortunately ... 
this test case also leaks memory like mad in\n> HEAD, v12, and v11, because all of them are rebuilding the hash\n> table from scratch without clearing the old one. So this is\n> indeed broken and a back-patch is necessary.\n\nYea :(. Thanks for doing that.\n\n\n> I noted while looking at this that most of the calls of\n> ResetTupleHashTable are actually never reached in the regression\n> tests (per code coverage report) so I made up some test cases\n> that do reach 'em, and included those in the commit.\n\nCool.\n\n\n> TBH, I think that this tuple table API is seriously misdesigned;\n> it is confusing and very error-prone to have the callers need to\n> reset the tuple context separately from calling ResetTupleHashTable.\n\nDo you have an alternative proposal? Before committing the patch adding\nit that way, I'd waited for quite a while asking for input... In\nseveral cases (nodeAgg.c, nodeSetOp.c) there's memory from outside\nexecGrouping.c that's also allocated in the same context as the table\ncontents (via TupleHashTable->additional) - just resetting the context\npassed to BuildTupleHashTableExt() as part of ResetTupleHashTable()\nseems problematic too.\n\nWe could change it so more of the metadata for execGrouping.c is\ncomputed outside of BuildTupleHashTableExt(), and continue to destroy\nthe entire hashtable. But we'd still have to reallocate the hashtable,\nthe slot, etc. So having a reset interface seems like the right thing.\n\nI guess we could set it up so that BuildTupleHashTableExt() registers a\nmemory context reset callback on tablecxt, which'd reinitialize the\nhashtable. But that seems like it'd be at least as confusing?\n\n\n> Also, the callers all look like their resets are intended to destroy\n> the whole hashtable not just its contents (as indeed they were doing,\n> before the faulty commit). But I didn't attempt to fix that today.\n\nHm? nodeAgg.c, nodeSetOp.c, nodeRecursiveUnion.c don't at all look like\nthat to me? 
Why would they want to drop the hashtable metadata when\nresetting? What am I missing?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 29 Feb 2020 11:35:53 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Broken resetting of subplan hash tables" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-02-29 14:02:59 -0500, Tom Lane wrote:\n>> TBH, I think that this tuple table API is seriously misdesigned;\n>> it is confusing and very error-prone to have the callers need to\n>> reset the tuple context separately from calling ResetTupleHashTable.\n\n> Do you have an alternative proposal?\n\nI'd be inclined to let the tuple hashtable make its own tuple-storage\ncontext and reset that for itself. Is it really worth the complexity\nand bug hazards to share such a context with other uses?\n\n> We could change it so more of the metadata for execGrouping.c is\n> computed outside of BuildTupleHashTableExt(), and continue to destroy\n> the entire hashtable. But we'd still have to reallocate the hashtable,\n> the slot, etc. So having a reset interface seems like the right thing.\n\nAgreed, the reset interface is a good idea. I'm just not happy that\nin addition to resetting, you have to remember to reset some\nvaguely-related context (and heaven help you if you reset that context\nbut not the hashtable).\n\n>> Also, the callers all look like their resets are intended to destroy\n>> the whole hashtable not just its contents (as indeed they were doing,\n>> before the faulty commit). But I didn't attempt to fix that today.\n\n> Hm? nodeAgg.c, nodeSetOp.c, nodeRecursiveUnion.c don't at all look like\n> that to me? Why would they want to drop the hashtable metadata when\n> resetting? 
What am I missing?\n\nThey may not look like it to you; but Andreas misread that, and so did\nI at first --- not least because that *is* how it used to work, and\nthere are still comments suggesting that that's how it works, eg this\nin ExecInitRecursiveUnion:\n\n * If hashing, we need a per-tuple memory context for comparisons, and a\n * longer-lived context to store the hash table. The table can't just be\n * kept in the per-query context because we want to be able to throw it\n * away when rescanning.\n\n\"throw it away\" sure looks like it means the entire hashtable, not just\nits tuple contents.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 29 Feb 2020 16:44:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Broken resetting of subplan hash tables" }, { "msg_contents": "Em sáb., 29 de fev. de 2020 às 18:44, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> \"throw it away\" sure looks like it means the entire hashtable, not just\n> its tuple contents.\n>\nI don't know if I can comment clearly to help, but from my experience,\ndestroying and rebuilding the hashtable is a waste if possible, resetting\nit.\nBy analogy, I have code with arrays where, I reuse them, with only one\nline, instead of reconstructing them.\na->nelts = 0; / * reset array * /\nIf possible, doing the same for hashtables would be great.\n\nregards,\nRanier Vilela\n\nEm sáb., 29 de fev. de 2020 às 18:44, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\"throw it away\" sure looks like it means the entire hashtable, not just\nits tuple contents.I don't know if I can comment clearly to help, but from my experience, destroying and rebuilding the hashtable is a waste if possible, resetting it.By analogy, I have code with arrays where, I reuse them, with only one line, instead of reconstructing them.a->nelts = 0; / * reset array * /If possible, doing the same for hashtables would be great. 
regards,Ranier Vilela", "msg_date": "Sun, 1 Mar 2020 09:52:47 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Broken resetting of subplan hash tables" } ]
[ { "msg_contents": "Hi,\nI'm sending this report from DrMemory, which shows some leaks from the\ncurrent postgres.\nDrMemory is it is a reliable tool, but it is not perfect. (\nhttps://drmemory.org/)\n\nregards,\nRanier Vilela", "msg_date": "Sat, 29 Feb 2020 11:45:57 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[REPORT] Possible Memory Leak Postgres Windows" } ]
[ { "msg_contents": "Hi hackers,\n\nI want to continue development of Oliver Ford's respect/ignore nulls for\nlag,lead,first_value,last_value and nth_value\nand from first/last for nth_value patch, but I am not sure how to proceed\nwith it and any feedback will be very useful.\n\nI have dropped support of from first/last for nth_value(), but also I\nreimplemented it in a different way,\nby using negative number for the position argument, to be able to get the\nsame frame in exact reverse order.\nAfter that patch becomes much more simple and major concerns about\nprecedence hack has gone,\nbut maybe it can be additionally simplified.\n\nI have not renamed special bool type \"ignorenulls\", because it is probably\nnot acceptable way for calling extra version\nof window functions (because it makes things very easy, and it can reuse\nframes), but I removed the other special bool type \"fromlast\".\n\nSo, that is the major question, can someone give me an better idea or\nexample that I can use,\nfor something, that can be more acceptable as implementation and I will try\nto do it in a such way.\n\nAttached file is for PostgreSQL 13 (master git branch) and I will add it\nnow to a March commit fest, to be able to track changes.\nEverything works and patch is in very good shape, make check is passed and\nalso, I use it from some time for SQL analysis purposes\n(because ignore nulls is one of the most needed feature in OLAP/BI area and\nOracle, Amazon Redshift and Informix have it).\n\nAfter patch review and suggestions about what to do with special bool type\nand unreserved keywords, I will reimplement it, if needed.", "msg_date": "Sat, 29 Feb 2020 17:54:51 +0200", "msg_from": "Krasiyan Andreev <krasiyan@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] respect/ignore nulls for lag,lead,first_value,last_value and\n nth_value and from first/last for nth_value" } ]
[ { "msg_contents": "Hello,\n\nI'm writing telemetry data into a table partitioned by time. When there \nis no partition for a particular date, my app notices the constraint \nviolation, creates the partition, and retries the insert.\n\nI'm used to handling constraint violations by observing the constraint \nname in the error fields. However, this error had none. I set out to add \nthe name to the error field, but after a bit of reading my impression is \nthat partition constraints are more like a property of a table.\n\nI've attached a patch that adds the schema and table name fields to \nerrors for my use case:\n\n- Insert data into a partitioned table for which there is no partition.\n- Insert data directly into an incorrect partition.\n\nThanks,\nChris", "msg_date": "Sat, 29 Feb 2020 13:33:48 -0600", "msg_from": "Chris Bandy <bandy.chris@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Add schema and table names to partition error" }, { "msg_contents": "Hi Chris,\n\nOn Sun, Mar 1, 2020 at 4:34 AM Chris Bandy <bandy.chris@gmail.com> wrote:\n> Hello,\n>\n> I'm writing telemetry data into a table partitioned by time. When there\n> is no partition for a particular date, my app notices the constraint\n> violation, creates the partition, and retries the insert.\n>\n> I'm used to handling constraint violations by observing the constraint\n> name in the error fields. However, this error had none. I set out to add\n> the name to the error field, but after a bit of reading my impression is\n> that partition constraints are more like a property of a table.\n\nThis makes sense to me. Btree code which implements unique\nconstraints also does this; see _bt_check_unique() function in\nsrc/backend/access/nbtree/nbtinsert.c:\n\n ereport(ERROR,\n (errcode(ERRCODE_UNIQUE_VIOLATION),\n errmsg(\"duplicate key value violates\nunique constraint \\\"%s\\\"\",\n RelationGetRelationName(rel)),\n key_desc ? 
errdetail(\"Key %s already exists.\",\n key_desc) : 0,\n errtableconstraint(heapRel,\n\nRelationGetRelationName(rel))));\n\n> I've attached a patch that adds the schema and table name fields to\n> errors for my use case:\n\nInstead of using errtable(), use errtableconstraint() like the btree\ncode does, if only just for consistency.\n\n> - Insert data into a partitioned table for which there is no partition.\n> - Insert data directly into an incorrect partition.\n\nThere are couple more instances in src/backend/command/tablecmds.c\nwhere partition constraint is checked:\n\nIn ATRewriteTable():\n\n if (partqualstate && !ExecCheck(partqualstate, econtext))\n {\n if (tab->validate_default)\n ereport(ERROR,\n (errcode(ERRCODE_CHECK_VIOLATION),\n errmsg(\"updated partition constraint for\ndefault partition \\\"%s\\\" would be violated by some row\",\n RelationGetRelationName(oldrel))));\n else\n ereport(ERROR,\n (errcode(ERRCODE_CHECK_VIOLATION),\n errmsg(\"partition constraint of relation\n\\\"%s\\\" is violated by some row\",\n RelationGetRelationName(oldrel))));\n }\n\nMaybe, better fix these too for completeness.\n\nThanks,\nAmit\n\n\n", "msg_date": "Sun, 1 Mar 2020 13:40:08 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "On Sun, Mar 1, 2020 at 10:10 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi Chris,\n>\n> On Sun, Mar 1, 2020 at 4:34 AM Chris Bandy <bandy.chris@gmail.com> wrote:\n> > Hello,\n> >\n> > I'm writing telemetry data into a table partitioned by time. When there\n> > is no partition for a particular date, my app notices the constraint\n> > violation, creates the partition, and retries the insert.\n> >\n> > I'm used to handling constraint violations by observing the constraint\n> > name in the error fields. However, this error had none. 
I set out to add\n> > the name to the error field, but after a bit of reading my impression is\n> > that partition constraints are more like a property of a table.\n>\n> This makes sense to me.  Btree code which implements unique\n> constraints also does this; see _bt_check_unique() function in\n> src/backend/access/nbtree/nbtinsert.c:\n>\n>                 ereport(ERROR,\n>                         (errcode(ERRCODE_UNIQUE_VIOLATION),\n>                          errmsg(\"duplicate key value violates\n> unique constraint \\\"%s\\\"\",\n>                                 RelationGetRelationName(rel)),\n>                          key_desc ? errdetail(\"Key %s already exists.\",\n>                                               key_desc) : 0,\n>                          errtableconstraint(heapRel,\n>\n> RelationGetRelationName(rel))));\n>\n> > I've attached a patch that adds the schema and table name fields to\n> > errors for my use case:\n>\n> Instead of using errtable(), use errtableconstraint() like the btree\n> code does, if only just for consistency.\n>\n\n+1.  We use errtableconstraint at other places where we use error code\nERRCODE_CHECK_VIOLATION.\n\n> > - Insert data into a partitioned table for which there is no partition.\n> > - Insert data directly into an incorrect partition.\n>\n> There are couple more instances in src/backend/command/tablecmds.c\n> where partition constraint is checked:\n>\n> In ATRewriteTable():\n>\n>         if (partqualstate && !ExecCheck(partqualstate, econtext))\n>         {\n>             if (tab->validate_default)\n>                 ereport(ERROR,\n>                         (errcode(ERRCODE_CHECK_VIOLATION),\n>                          errmsg(\"updated partition constraint for\n> default partition \\\"%s\\\" would be violated by some row\",\n>                                 RelationGetRelationName(oldrel))));\n>             else\n>                 ereport(ERROR,\n>                         (errcode(ERRCODE_CHECK_VIOLATION),\n>                          errmsg(\"partition constraint of relation\n> \\\"%s\\\" is violated by some row\",\n>                                 RelationGetRelationName(oldrel))));\n>         }\n>\n> Maybe, better fix these too for completeness.\n>\n\nRight, if we want to make a change for this, then I think we can once\ncheck all the places where we use error code ERRCODE_CHECK_VIOLATION.\nAnother thing we might need to see is which of
these can be\nback-patched.  We should also try to write the tests for cases we are\nchanging even if we don't want to commit those.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 1 Mar 2020 16:44:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "Thank you both for looking at this!\n\nOn 3/1/20 5:14 AM, Amit Kapila wrote:\n> On Sun, Mar 1, 2020 at 10:10 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>>\n>> Hi Chris,\n>>\n>> On Sun, Mar 1, 2020 at 4:34 AM Chris Bandy <bandy.chris@gmail.com> wrote:\n>>> Hello,\n>>>\n>>> I'm writing telemetry data into a table partitioned by time. When there\n>>> is no partition for a particular date, my app notices the constraint\n>>> violation, creates the partition, and retries the insert.\n>>>\n>>> I'm used to handling constraint violations by observing the constraint\n>>> name in the error fields. However, this error had none. I set out to add\n>>> the name to the error field, but after a bit of reading my impression is\n>>> that partition constraints are more like a property of a table.\n>>\n>> This makes sense to me.  Btree code which implements unique\n>> constraints also does this; see _bt_check_unique() function in\n>> src/backend/access/nbtree/nbtinsert.c:\n>>\n>>                 ereport(ERROR,\n>>                         (errcode(ERRCODE_UNIQUE_VIOLATION),\n>>                          errmsg(\"duplicate key value violates\n>> unique constraint \\\"%s\\\"\",\n>>                                 RelationGetRelationName(rel)),\n>>                          key_desc ?
errdetail(\"Key %s already exists.\",\n>> key_desc) : 0,\n>> errtableconstraint(heapRel,\n>>\n>> RelationGetRelationName(rel))));\n>>\n>>> I've attached a patch that adds the schema and table name fields to\n>>> errors for my use case:\n>>\n>> Instead of using errtable(), use errtableconstraint() like the btree\n>> code does, if only just for consistency.\n\nThere are two relations in the example you give: the index, rel, and the \ntable, heapRel. It makes sense to me that two error fields be filled in \nwith those two names.\n\nWith partitions, there is no second name because there is no index nor \nconstraint object. My (very limited) understanding is that partition \n\"constraints\" are entirely contained within pg_class.relpartbound of the \npartition.\n\nAre you suggesting that the table name go into the constraint name field \nof the error?\n\n> +1. We use errtableconstraint at other places where we use error code\n> ERRCODE_CHECK_VIOLATION.\n\nYes, I see this function used when it is a CHECK constraint that is \nbeing violated. In every case the constraint name is passed as the \nsecond argument.\n\n>> There are couple more instances in src/backend/command/tablecmds.c\n>> where partition constraint is checked:\n>>\n>> Maybe, better fix these too for completeness.\n\nDone. As there is no named constraint here, I used errtable again.\n\n> Right, if we want to make a change for this, then I think we can once\n> check all the places where we use error code ERRCODE_CHECK_VIOLATION.\n\nI've looked at every instance of this. It is used for 1) check \nconstraints, 2) domain constraints, and 3) partition constraints.\n\n1. errtableconstraint is used with the name of the constraint.\n2. errdomainconstraint is used with the name of the constraint except in \none instance which deliberately uses errtablecol.\n3. 
With the attached patch, errtable is used except for one instance in \nsrc/backend/partitioning/partbounds.c described below.\n\nIn check_default_partition_contents of \nsrc/backend/partitioning/partbounds.c, the default partition is checked \nfor any rows that should belong in the partition being added _unless_ \nthe leaf being checked is a foreign table. There are two tables \nmentioned in this warning, and I couldn't decide which, if any, deserves \nto be in the error fields:\n\n            /*\n             * Only RELKIND_RELATION relations (i.e. leaf \npartitions) need to be\n             * scanned.\n             */\n            if (part_rel->rd_rel->relkind != RELKIND_RELATION)\n            {\n                if (part_rel->rd_rel->relkind == \nRELKIND_FOREIGN_TABLE)\n                    ereport(WARNING,\n                            \n(errcode(ERRCODE_CHECK_VIOLATION),\n                             errmsg(\"skipped \nscanning foreign table \\\"%s\\\" which is a partition of default partition \n\\\"%s\\\"\",\n                                    \nRelationGetRelationName(part_rel),\n                                    \nRelationGetRelationName(default_rel))));\n\n                if (RelationGetRelid(default_rel) != \nRelationGetRelid(part_rel))\n                    table_close(part_rel, NoLock);\n\n                continue;\n            }\n\n> Another thing we might need to see is which of these can be\n> back-patched.  We should also try to write the tests for cases we are\n> changing even if we don't want to commit those.\n\nI don't have any opinion on back-patching. Existing tests pass. I wasn't \nable to find another test that checks the constraint field of errors.
\nThere's a little bit in the tests for psql, but that is about the \n\\errverbose functionality rather than specific errors and their fields. \n\nHere's what I tested:\n\n# CREATE TABLE t1 (i int PRIMARY KEY); INSERT INTO t1 VALUES (1), (1);\n# \\errverbose\n...\nCONSTRAINT NAME:  t1_pkey\n\n# CREATE TABLE pt1 (x int, y int, PRIMARY KEY (x,y)) PARTITION BY \nRANGE (y);\n# INSERT INTO pt1 VALUES (10,10);\n# \\errverbose\n...\nSCHEMA NAME:  public\nTABLE NAME:  pt1\n\n# CREATE TABLE pt1_p1 PARTITION OF pt1 FOR VALUES FROM (1) TO (5);\n# INSERT INTO pt1 VALUES (10,10);\n# \\errverbose\n...\nSCHEMA NAME:  public\nTABLE NAME:  pt1\n\n# INSERT INTO pt1_p1 VALUES (10,10);\n# \\errverbose\n...\nSCHEMA NAME:  public\nTABLE NAME:  pt1_p1\n\n# CREATE TABLE pt1_dp PARTITION OF pt1 DEFAULT;\n# INSERT INTO pt1 VALUES (10,10);\n# CREATE TABLE pt1_p2 PARTITION OF pt1 FOR VALUES FROM (6) TO (20);\n# \\errverbose\n...\nSCHEMA NAME:  public\nTABLE NAME:  pt1_dp\n\n\nThanks,\nChris", "msg_date": "Sun, 1 Mar 2020 17:51:24 -0600", "msg_from": "Chris Bandy <bandy.chris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "Hi Chris,\n\nOn Mon, Mar 2, 2020 at 8:51 AM Chris Bandy <bandy.chris@gmail.com> wrote:\n> On 3/1/20 5:14 AM, Amit Kapila wrote:\n> > On Sun, Mar 1, 2020 at 10:10 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> >> This makes sense to me.  Btree code which implements unique\n> >> constraints also does this; see _bt_check_unique() function in\n> >> src/backend/access/nbtree/nbtinsert.c:\n> >>\n> >>                 ereport(ERROR,\n> >>                         (errcode(ERRCODE_UNIQUE_VIOLATION),\n> >>                          errmsg(\"duplicate key value violates\n> >> unique constraint \\\"%s\\\"\",\n> >>                                 RelationGetRelationName(rel)),\n> >>                          key_desc ?
errdetail(\"Key %s already exists.\",\n> >> key_desc) : 0,\n> >> errtableconstraint(heapRel,\n> >>\n> >> RelationGetRelationName(rel))));\n> >>\n> >>> I've attached a patch that adds the schema and table name fields to\n> >>> errors for my use case:\n> >>\n> >> Instead of using errtable(), use errtableconstraint() like the btree\n> >> code does, if only just for consistency.\n>\n> There are two relations in the example you give: the index, rel, and the\n> table, heapRel. It makes sense to me that two error fields be filled in\n> with those two names.\n\nThat's a good point. Index constraints are actually named after the\nindex and vice versa, so it's a totally valid usage of\nerrtableconstraint().\n\ncreate table foo (a int unique);\n\\d foo\n Table \"public.foo\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\nIndexes:\n \"foo_a_key\" UNIQUE CONSTRAINT, btree (a)\n\nselect conname from pg_constraint where conrelid = 'foo'::regclass;\n conname\n-----------\n foo_a_key\n(1 row)\n\ncreate table bar (a int, constraint a_uniq unique (a));\n\\d bar\n Table \"public.bar\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\nIndexes:\n \"a_uniq\" UNIQUE CONSTRAINT, btree (a)\n\nselect conname from pg_constraint where conrelid = 'bar'::regclass;\n conname\n---------\n a_uniq\n(1 row)\n\n> With partitions, there is no second name because there is no index nor\n> constraint object.\n\nIt's right to say that partition's case cannot really be equated with\nunique indexes.\n\n> My (very limited) understanding is that partition\n> \"constraints\" are entirely contained within pg_class.relpartbound of the\n> partition.\n\nThat is correct.\n\n> Are you suggesting that the table name go into the constraint name field\n> of the error?\n\nYes, that's what I was thinking, at least for \"partition constraint\nviolated\" errors, but given the 
above that would be a misleading use\nof ErrorData.constraint_name.\n\nMaybe it's not too late to invent a new error code like\nERRCODE_PARTITION_VIOLATION or such, then maybe we can use a\nhard-coded name, say, just the string \"partition constraint\".\n\n> >> There are couple more instances in src/backend/command/tablecmds.c\n> >> where partition constraint is checked:\n> >>\n> >> Maybe, better fix these too for completeness.\n>\n> Done. As there is no named constraint here, I used errtable again.\n>\n> > Right, if we want to make a change for this, then I think we can once\n> > check all the places where we use error code ERRCODE_CHECK_VIOLATION.\n>\n> I've looked at every instance of this. It is used for 1) check\n> constraints, 2) domain constraints, and 3) partition constraints.\n>\n> 1. errtableconstraint is used with the name of the constraint.\n> 2. errdomainconstraint is used with the name of the constraint except in\n> one instance which deliberately uses errtablecol.\n> 3. With the attached patch, errtable is used except for one instance in\n> src/backend/partitioning/partbounds.c described below.\n>\n> In check_default_partition_contents of\n> src/backend/partitioning/partbounds.c, the default partition is checked\n> for any rows that should belong in the partition being added _unless_\n> the leaf being checked is a foreign table. There are two tables\n> mentioned in this warning, and I couldn't decide which, if any, deserves\n> to be in the error fields:\n>\n>              /*\n>               * Only RELKIND_RELATION relations (i.e.
leaf\n> partitions) need to be\n>               * scanned.\n>               */\n>              if (part_rel->rd_rel->relkind != RELKIND_RELATION)\n>              {\n>                  if (part_rel->rd_rel->relkind ==\n> RELKIND_FOREIGN_TABLE)\n>                      ereport(WARNING,\n>\n> (errcode(ERRCODE_CHECK_VIOLATION),\n>                               errmsg(\"skipped\n> scanning foreign table \\\"%s\\\" which is a partition of default partition\n> \\\"%s\\\"\",\n>\n> RelationGetRelationName(part_rel),\n>\n> RelationGetRelationName(default_rel))));\n\nIt seems strange to see that errcode here or any errcode for that\nmatter, so we shouldn't really be concerned about this one.\n\n>\n>                  if (RelationGetRelid(default_rel) !=\n> RelationGetRelid(part_rel))\n>                      table_close(part_rel, NoLock);\n>\n>                  continue;\n>              }\n>\n> > Another thing we might need to see is which of these can be\n> > back-patched.  We should also try to write the tests for cases we are\n> > changing even if we don't want to commit those.\n>\n> I don't have any opinion on back-patching. Existing tests pass. I wasn't\n> able to find another test that checks the constraint field of errors.\n> There's a little bit in the tests for psql, but that is about the\n> \\errverbose functionality rather than specific errors and their fields.\n\nActually, it's not a bad idea to use \\errverbose to test this.\n\nThanks,\nAmit\n\n\n", "msg_date": "Mon, 2 Mar 2020 13:09:14 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "On Mon, Mar 2, 2020 at 9:39 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n>\n> > My (very limited) understanding is that partition\n> > \"constraints\" are entirely contained within pg_class.relpartbound of the\n> > partition.\n>\n> That is correct.\n>\n> > Are you suggesting that the table name go into the constraint name field\n> > of the error?\n>\n> Yes, that's what I was thinking, at least for \"partition constraint\n> violated\" errors, but given the above that would be a
misleading use\n> of ErrorData.constraint_name.\n>\n\nI think it is better to use errtable in such cases.\n\n> Maybe it's not too late to invent a new error code like\n> ERRCODE_PARTITION_VIOLATION or such, then maybe we can use a\n> hard-coded name, say, just the string \"partition constraint\".\n>\n> > >> There are couple more instances in src/backend/command/tablecmds.c\n> > >> where partition constraint is checked:\n> > >>\n> > >> Maybe, better fix these too for completeness.\n> >\n> > Done. As there is no named constraint here, I used errtable again.\n> >\n> > > Right, if we want to make a change for this, then I think we can once\n> > > check all the places where we use error code ERRCODE_CHECK_VIOLATION.\n> >\n> > I've looked at every instance of this. It is used for 1) check\n> > constraints, 2) domain constraints, and 3) partition constraints.\n> >\n> > 1. errtableconstraint is used with the name of the constraint.\n> > 2. errdomainconstraint is used with the name of the constraint except in\n> > one instance which deliberately uses errtablecol.\n> > 3. With the attached patch, errtable is used except for one instance in\n> > src/backend/partitioning/partbounds.c described below.\n> >\n> > In check_default_partition_contents of\n> > src/backend/partitioning/partbounds.c, the default partition is checked\n> > for any rows that should belong in the partition being added _unless_\n> > the leaf being checked is a foreign table. There are two tables\n> > mentioned in this warning, and I couldn't decide which, if any, deserves\n> > to be in the error fields:\n> >\n> >              /*\n> >               * Only RELKIND_RELATION relations (i.e.
leaf\n> > partitions) need to be\n> >               * scanned.\n> >               */\n> >              if (part_rel->rd_rel->relkind != RELKIND_RELATION)\n> >              {\n> >                  if (part_rel->rd_rel->relkind ==\n> > RELKIND_FOREIGN_TABLE)\n> >                      ereport(WARNING,\n> >\n> > (errcode(ERRCODE_CHECK_VIOLATION),\n> >                               errmsg(\"skipped\n> > scanning foreign table \\\"%s\\\" which is a partition of default partition\n> > \\\"%s\\\"\",\n> >\n> > RelationGetRelationName(part_rel),\n> >\n> > RelationGetRelationName(default_rel))));\n>\n> It seems strange to see that errcode here or any errcode for that\n> matter, so we shouldn't really be concerned about this one.\n>\n\nRight.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 2 Mar 2020 16:30:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "On 3/1/20 10:09 PM, Amit Langote wrote:\n> Hi Chris,\n> \n> On Mon, Mar 2, 2020 at 8:51 AM Chris Bandy <bandy.chris@gmail.com> wrote:\n>> On 3/1/20 5:14 AM, Amit Kapila wrote:\n>>> On Sun, Mar 1, 2020 at 10:10 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>>>>\n>>>> There are couple more instances in src/backend/command/tablecmds.c\n>>>> where partition constraint is checked:\n>>>>\n>>>> Maybe, better fix these too for completeness.\n>>>\n>>> Another thing we might need to see is which of these can be\n>>> back-patched.  We should also try to write the tests for cases we are\n>>> changing even if we don't want to commit those.\n>>\n>> I don't have any opinion on back-patching. Existing tests pass.
I wasn't\n>> able to find another test that checks the constraint field of errors.\n>> There's a little bit in the tests for psql, but that is about the\n>> \\errverbose functionality rather than specific errors and their fields.\n> \n> Actually, it's not a bad idea to use \\errverbose to test this.\n> \n\nI've added a second patch with tests that cover three of the five errors \ntouched by the first patch. Rather than \\errverbose, I simply \\set \nVERBOSITY verbose. I could not find a way to exclude the location field \nfrom the output, so those lines will likely be out of date soon--if \nnot already.\n\nI couldn't find a way to exercise the errors in tablecmds.c. Does anyone \nknow how to instigate a table rewrite that would violate partition \nconstraints? I tried:\n\nALTER TABLE pterr1 ALTER y TYPE bigint USING (y - 5);\nERROR: 42P16: cannot alter column \"y\" because it is part of the \npartition key of relation \"pterr1\"\nLOCATION: ATPrepAlterColumnType, tablecmds.c:10812\n\nThanks,\nChris", "msg_date": "Mon, 2 Mar 2020 22:35:19 -0600", "msg_from": "Chris Bandy <bandy.chris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "> +\\set VERBOSITY verbose\n> +-- no partitions\n> +CREATE TABLE pterr1 (x int, y int, PRIMARY KEY (x, y)) PARTITION BY RANGE (y);\n> +INSERT INTO pterr1 VALUES (10, 10);\n> +ERROR:  23514: no partition of relation \"pterr1\" found for row\n> +DETAIL:  Partition key of the failing row contains (y) = (10).\n> +SCHEMA NAME:  public\n> +TABLE NAME:  pterr1\n> +LOCATION:  ExecFindPartition, execPartition.c:349\n\nThis won't work well, because people would be forced to update the .out\nfile whenever the execPartition.c file changed to add or remove lines\nbefore the one with the error call.
Maybe if you want to verify the\nschema/table names, use a plpgsql function to extract them, using\nGET STACKED DIAGNOSTICS TABLE_NAME = ...\nin an exception block?\n\nI'm not sure that this *needs* to be tested, though.  Don't we already\nverify that errtable() works, elsewhere?  I don't suppose you mean to\ntest that every single ereport() call that includes errtable() contains\na TABLE NAME item.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 3 Mar 2020 13:08:46 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "On 3/3/20 10:08 AM, Alvaro Herrera wrote:\n>> +\\set VERBOSITY verbose\n>> +-- no partitions\n>> +CREATE TABLE pterr1 (x int, y int, PRIMARY KEY (x, y)) PARTITION BY RANGE (y);\n>> +INSERT INTO pterr1 VALUES (10, 10);\n>> +ERROR:  23514: no partition of relation \"pterr1\" found for row\n>> +DETAIL:  Partition key of the failing row contains (y) = (10).\n>> +SCHEMA NAME:  public\n>> +TABLE NAME:  pterr1\n>> +LOCATION:  ExecFindPartition, execPartition.c:349\n> \n> This won't work well, because people would be forced to update the .out\n> file whenever the execPartition.c file changed to add or remove lines\n> before the one with the error call.\n\nI agree. I expected that and should have made it more clear that I \ndidn't intend for those tests to be committed. Others in the thread \nsuggested I include some form of test, even if it didn't live past \nreview. That being said...\n\n> Maybe if you want to verify the\n> schema/table names, use a plpgsql function to extract them, using\n> GET STACKED DIAGNOSTICS TABLE_NAME = ...\n> in an exception block?\n\nThis is a great idea and the result looks much cleaner than I expected.
\nI have no reservations about committing the attached tests.\n\n> I'm not sure that this *needs* to be tested, though.  Don't we already\n> verify that errtable() works, elsewhere?\n\nI looked for tests that might target errtable() or errtableconstraint() \nbut found none. Perhaps someone who knows the tests better could answer \nthis question.\n\n> I don't suppose you mean to\n> test that every single ereport() call that includes errtable() contains\n> a TABLE NAME item.\n\nCorrect. I intend only to test the few calls I'm touching in this \nthread. It might be worthwhile for someone to perform a more thorough \nreview of existing errors, however. The documentation seems to say that \nevery error in SQLSTATE class 23 has one of these fields filled[1]. The \nerrors in these patches are in that class but lacked any fields.\n\n[1] https://www.postgresql.org/docs/current/errcodes-appendix.html\n\nThanks,\nChris", "msg_date": "Tue, 3 Mar 2020 23:18:51 -0600", "msg_from": "Chris Bandy <bandy.chris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "On Wed, Mar 4, 2020 at 10:48 AM Chris Bandy <bandy.chris@gmail.com> wrote:\n>\n> On 3/3/20 10:08 AM, Alvaro Herrera wrote:\n> >> +\\set VERBOSITY verbose\n> >> +-- no partitions\n> >> +CREATE TABLE pterr1 (x int, y int, PRIMARY KEY (x, y)) PARTITION BY RANGE (y);\n> >> +INSERT INTO pterr1 VALUES (10, 10);\n> >> +ERROR:  23514: no partition of relation \"pterr1\" found for row\n> >> +DETAIL:  Partition key of the failing row contains (y) = (10).\n> >> +SCHEMA NAME:  public\n> >> +TABLE NAME:  pterr1\n> >> +LOCATION:  ExecFindPartition, execPartition.c:349\n> >\n> > This won't work well, because people would be forced to update the .out\n> > file whenever the execPartition.c file changed to add or remove lines\n> > before the one with the error call.\n>\n> I agree.
I expected that and should have made it more clear that I\n> didn't intend for those tests to be committed. Others in the thread\n> suggested I include some form of test, even if it didn't live past\n> review.\n>\n\nRight, it is not for committing those tests, but rather once we try to\nhit the code we are changing in this patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Mar 2020 10:52:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "On 3/3/20 11:18 PM, Chris Bandy wrote:\n> On 3/3/20 10:08 AM, Alvaro Herrera wrote:\n>> I don't suppose you mean to\n>> test that every single ereport() call that includes errtable() contains\n>> a TABLE NAME item.\n> \n> Correct. I intend only to test the few calls I'm touching in this\n> thread. It might be worthwhile for someone to perform a more thorough\n> review of existing errors, however. The documentation seems to say that\n> every error in SQLSTATE class 23 has one of these fields filled[1]. The\n> errors in these patches are in that class but lacked any fields.\n> \n> [1] https://www.postgresql.org/docs/current/errcodes-appendix.html\n\nBy the power of grep I found another partition error that needed a\nfield. I'm pretty happy with the way the test turned out, so I've\nsquashed everything into a single patch.\n\nI've also convinced myself that the number of integrity errors in the\nentire codebase is manageable to test.
If others think it is worthwhile,\nI can spend some time over the next week to expand this test approach to\ncover _all_ SQLSTATE class 23 errors.\n\nIf so,\n\n* Should it be one regression test (file) that discusses the\nsignificance of class 23, or\n\n* Should it be a few test cases added to the existing test files related\nto each feature?\n\nThe former allows the helper function to be defined once, while the\nlatter would repeat it over many files.\n\nThanks,\nChris", "msg_date": "Wed, 4 Mar 2020 02:54:20 -0600", "msg_from": "Chris Bandy <bandy.chris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add object names to partition errors" }, { "msg_contents": "On 3/4/20 2:54 AM, Chris Bandy wrote:\n> I've also convinced myself that the number of integrity errors in the\n> entire codebase is manageable to test. If others think it is worthwhile,\n> I can spend some time over the next week to expand this test approach to\n> cover _all_ SQLSTATE class 23 errors.\n\nDone. Please find attached two patches that (1) test all but one report\nof integrity violations and (2) attach object names to the handful that\nlacked them.\n\nI decided to include error messages in the tests so that the next person\nto change the message would be mindful of the attached fields and vice\nversa. I thought these might be impacted by locale, but `make check\nLANG=de_DE.utf8` passes for me. Is that command the right way to verify\nthat?\n\nWith these patches, behavior matches the documentation which states:\n\"[object] names are supplied in separate fields of the error report\nmessage so that applications need not try to extract them from the\npossibly-localized human-readable text of the message.
As of PostgreSQL\n9.3, complete coverage for this feature exists only for errors in\nSQLSTATE class 23...\"\n\n\nThanks,\nChris", "msg_date": "Fri, 6 Mar 2020 23:37:37 -0600", "msg_from": "Chris Bandy <bandy.chris@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Add tests for integrity violation error fields" }, { "msg_contents": "On Tue, Mar 3, 2020 at 10:05 AM Chris Bandy <bandy.chris@gmail.com> wrote:\n>\n> On 3/1/20 10:09 PM, Amit Langote wrote:\n> > Hi Chris,\n> >\n> > On Mon, Mar 2, 2020 at 8:51 AM Chris Bandy <bandy.chris@gmail.com> wrote:\n> >> On 3/1/20 5:14 AM, Amit Kapila wrote:\n> >>> On Sun, Mar 1, 2020 at 10:10 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> >>>>\n> >>>> There are couple more instances in src/backend/command/tablecmds.c\n> >>>> where partition constraint is checked:\n> >>>>\n> >>>> Maybe, better fix these too for completeness.\n> >>>\n> >>> Another thing we might need to see is which of these can be\n> >>> back-patched.  We should also try to write the tests for cases we are\n> >>> changing even if we don't want to commit those.\n> >>\n> >> I don't have any opinion on back-patching. Existing tests pass. I wasn't\n> >> able to find another test that checks the constraint field of errors.\n> >> There's a little bit in the tests for psql, but that is about the\n> >> \\errverbose functionality rather than specific errors and their fields.\n> >\n> > Actually, it's not a bad idea to use \\errverbose to test this.\n> >\n>\n> I've added a second patch with tests that cover three of the five errors\n> touched by the first patch. Rather than \\errverbose, I simply \\set\n> VERBOSITY verbose. I could not find a way to exclude the location field\n> from the output, so those lines will likely be out of date soon--if\n> not already.\n>\n> I couldn't find a way to exercise the errors in tablecmds.c. Does anyone\n> know how to instigate a table rewrite that would violate partition\n> constraints?
I tried:\n>\n\nWhen I tried to apply your patch on HEAD with patch -p1 <\n<path_to_patch>, I am getting below errors\n\n(Stripping trailing CRs from patch; use --binary to disable.)\ncan't find file to patch at input line 17\nPerhaps you used the wrong -p or --strip option?\nThe text leading up to this was:\n..\n\nI have tried with git am as well, but it failed. I am not sure what\nis the reason. Can you please once check at your end? Also, see, if\nit applies till 9.5 as I think we should backpatch this.\n\nIIUC, this patch is mainly to get the table name, schema name in case\nof the error paths, so that your application can handle errors in case\npartition constraint violation, right?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 11 Mar 2020 16:59:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "Amit,\n\nOn 3/11/20 6:29 AM, Amit Kapila wrote:\n> On Tue, Mar 3, 2020 at 10:05 AM Chris Bandy <bandy.chris@gmail.com> wrote:\n>>\n>> On 3/1/20 10:09 PM, Amit Langote wrote:\n>>> Hi Chris,\n>>>\n>>> On Mon, Mar 2, 2020 at 8:51 AM Chris Bandy <bandy.chris@gmail.com> wrote:\n>>>> On 3/1/20 5:14 AM, Amit Kapila wrote:\n>>>>> On Sun, Mar 1, 2020 at 10:10 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>>>>>>\n>>>>>> There are couple more instances in src/backend/command/tablecmds.c\n>>>>>> where partition constraint is checked:\n>>>>>>\n>>>>>> Maybe, better fix these too for completeness.\n>>>>>\n>>>>> Another thing we might need to see is which of these can be\n>>>>> back-patched. We should also try to write the tests for cases we are\n>>>>> changing even if we don't want to commit those.\n>>>>\n>>>> I don't have any opinion on back-patching. Existing tests pass. 
I wasn't\n>>>> able to find another test that checks the constraint field of errors.\n>>>> There's a little bit in the tests for psql, but that is about the the\n>>>> \\errverbose functionality rather than specific errors and their fields.\n>>>\n>>> Actually, it's not a bad idea to use \\errverbose to test this.\n>>>\n>>\n>> I've added a second patch with tests that cover three of the five errors\n>> touched by the first patch. Rather than \\errverbose, I simply \\set\n>> VERBOSITY verbose. I could not find a way to exclude the location field\n>> from the output, so those lines will be likely be out of date soon--if\n>> not already.\n>>\n>> I couldn't find a way to exercise the errors in tablecmds.c. Does anyone\n>> know how to instigate a table rewrite that would violate partition\n>> constraints? I tried:\n>>\n> \n> When I tried to apply your patch on HEAD with patch -p1 <\n> <path_to_patch>, I am getting below errors\n> \n> (Stripping trailing CRs from patch; use --binary to disable.)\n> can't find file to patch at input line 17\n> Perhaps you used the wrong -p or --strip option?\n> The text leading up to this was:\n> ..\n> \n> I have tried with git am as well, but it failed. I am not sure what\n> is the reason. Can you please once check at your end?\n\nYes, sorry. This set (and v3 and v4) should work with -p0. Any following\npatches from me will use the normal -p1.\n\n> Also, see, if\n> it applies till 9.5 as I think we should backpatch this.\n> \n> IIUC, this patch is mainly to get the table name, schema name in case\n> of the error paths, so that your application can handle errors in case\n> partition constraint violation, right?\n\nYes, that is correct. Which also means it doesn't apply to 9.5 (no\npartitions!) 
Later in this thread I created a test that covers all\nintegrity violation errors.[1] *That* can be backpatched, if you'd like.\n\nFor an approach limited to partitions only, I recommend looking at v4\nrather than v2 or v3.[2]\n\n[1]: https://postgresql.org/message-id/0731def8-978e-0285-04ee-582762729b38%40gmail.com\n[2]: https://postgresql.org/message-id/7985cf2f-5082-22d9-1bb4-6b280150eeae%40gmail.com\n\nThanks,\nChris\n\n\n", "msg_date": "Wed, 11 Mar 2020 10:21:45 -0500", "msg_from": "Chris Bandy <bandy.chris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "On Wed, Mar 11, 2020 at 8:51 PM Chris Bandy <bandy.chris@gmail.com> wrote:\n>\n> On 3/11/20 6:29 AM, Amit Kapila wrote:\n> >\n> > I have tried with git am as well, but it failed. I am not sure what\n> > is the reason. Can you please once check at your end?\n>\n> Yes, sorry. This set (and v3 and v4) should work with -p0. Any following\n> patches from me will use the normal -p1.\n>\n\nOkay.\n\n> > Also, see, if\n> > it applies till 9.5 as I think we should backpatch this.\n> >\n> > IIUC, this patch is mainly to get the table name, schema name in case\n> > of the error paths, so that your application can handle errors in case\n> > partition constraint violation, right?\n>\n> Yes, that is correct. Which also means it doesn't apply to 9.5 (no\n> partitions!) Later in this thread I created a test that covers all\n> integrity violation errors.[1] *That* can be backpatched, if you'd like.\n>\n> For an approach limited to partitions only, I recommend looking at v4\n> rather than v2 or v3.[2]\n>\n\nIt is strange that I didn't receive your email which has a v4 version.\nI will look into it, but I don't think we need to add the tests for\nerror conditions. 
Those are good for testing, but I think if we start\nadding tests for all error conditions, then it might increase the\nnumber of tests that are not of very high value.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 12 Mar 2020 19:46:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "On Thu, Mar 12, 2020 at 7:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 11, 2020 at 8:51 PM Chris Bandy <bandy.chris@gmail.com> wrote:\n> >\n> > On 3/11/20 6:29 AM, Amit Kapila wrote:\n> > >\n> > > I have tried with git am as well, but it failed. I am not sure what\n> > > is the reason. Can you please once check at your end?\n> >\n> > Yes, sorry. This set (and v3 and v4) should work with -p0. Any following\n> > patches from me will use the normal -p1.\n> >\n>\n> Okay.\n>\n\nI again tried the latest patch v5 both with -p1 and -p0, but it gives\nan error while applying the patch. Can you send a patch that we can\napply with patch -p1 or git-am?\n\n[1] - https://www.postgresql.org/message-id/0731def8-978e-0285-04ee-582762729b38%40gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 18 Mar 2020 17:26:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "On 3/18/20 6:56 AM, Amit Kapila wrote:\n> On Thu, Mar 12, 2020 at 7:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Wed, Mar 11, 2020 at 8:51 PM Chris Bandy <bandy.chris@gmail.com> wrote:\n>>>\n>>> On 3/11/20 6:29 AM, Amit Kapila wrote:\n>>>>\n>>>> I have tried with git am as well, but it failed. I am not sure what\n>>>> is the reason. Can you please once check at your end?\n>>>\n>>> Yes, sorry. 
This set (and v3 and v4) should work with -p0. Any following\n>>> patches from me will use the normal -p1.\n>>>\n>>\n>> Okay.\n>>\n> \n> I again tried the latest patch v5 both with -p1 and -p0, but it gives\n> an error while applying the patch. Can you send a patch that we can\n> apply with patch -p1 or git-am?\n> \n> [1] - https://www.postgresql.org/message-id/0731def8-978e-0285-04ee-582762729b38%40gmail.com\n> \n\nSorry for these troubles. Attached are patches created using `git\nformat-patch -n -v6` on master at 487e9861d0.\n\nThanks,\nChris", "msg_date": "Wed, 18 Mar 2020 17:25:05 -0500", "msg_from": "Chris Bandy <bandy.chris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "On Thu, Mar 19, 2020 at 3:55 AM Chris Bandy <bandy.chris@gmail.com> wrote:\n>\n>\n> Sorry for these troubles. Attached are patches created using `git\n> format-patch -n -v6` on master at 487e9861d0.\n>\n\nNo problem. I have extracted your code changes as a separate patch\n(see attached) as I am not sure we want to add tests for these cases.\nThis doesn't apply in back-branches, but I think that is small work\nand we can do that if required. The real question is do we want to\nback-patch this? Basically, this improves the errors in certain cases\nby providing additional information that otherwise the user might need\nto extract from error messages. So, there doesn't seem to be pressing\nneed to back-patch this but OTOH, we have mentioned in docs that we\nsupport to display this information for all SQLSTATE class 23\n(integrity constraint violation) errors which is not true as we forgot\nto adhere to that in some parts of code.\n\nWhat do you think? 
Anybody else has an opinion on whether to\nback-patch this or not?\n\n[1] - https://www.postgresql.org/docs/devel/errcodes-appendix.html\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 19 Mar 2020 10:16:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "Thank you Chris, Amit.\n\nOn Thu, Mar 19, 2020 at 1:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Thu, Mar 19, 2020 at 3:55 AM Chris Bandy <bandy.chris@gmail.com> wrote:\n> >\n> >\n> > Sorry for these troubles. Attached are patches created using `git\n> > format-patch -n -v6` on master at 487e9861d0.\n> >\n>\n> No problem. I have extracted your code changes as a separate patch\n> (see attached) as I am not sure we want to add tests for these cases.\n> This doesn't apply in back-branches, but I think that is small work\n> and we can do that if required. The real question is do we want to\n> back-patch this? Basically, this improves the errors in certain cases\n> by providing additional information that otherwise the user might need\n> to extract from error messages. So, there doesn't seem to be pressing\n> need to back-patch this but OTOH, we have mentioned in docs that we\n> support to display this information for all SQLSTATE class 23\n> (integrity constraint violation) errors which is not true as we forgot\n> to adhere to that in some parts of code.\n>\n> What do you think? 
Anybody else has an opinion on whether to\n> back-patch this or not?\n\nAs nobody except Chris complained about this so far, maybe no?\n\n-- \nThank you,\nAmit\n\n\n", "msg_date": "Thu, 19 Mar 2020 19:04:47 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "On 3/18/20 11:46 PM, Amit Kapila wrote:\n> On Thu, Mar 19, 2020 at 3:55 AM Chris Bandy <bandy.chris@gmail.com> wrote:\n>>\n>>\n>> Sorry for these troubles. Attached are patches created using `git\n>> format-patch -n -v6` on master at 487e9861d0.\n>>\n> \n> No problem. I have extracted your code changes as a separate patch\n> (see attached) as I am not sure we want to add tests for these cases.\n\nPatch looks good.\n\nMy last pitch to keep the tests: These would be the first and only\nautomated tests that verify errtable, errtableconstraint, etc.\n\n> This doesn't apply in back-branches, but I think that is small work\n> and we can do that if required.\n\nIt looks like the only failing hunk on REL_12_STABLE is in tablecmds.c.\nThe ereport is near line 5090 there. The partition code has changed\nquite a bit compared the older branches. ;-)\n\nThanks,\nChris\n\n\n", "msg_date": "Thu, 19 Mar 2020 09:51:15 -0500", "msg_from": "Chris Bandy <bandy.chris@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "On Thu, Mar 19, 2020 at 8:21 PM Chris Bandy <bandy.chris@gmail.com> wrote:\n>\n> On 3/18/20 11:46 PM, Amit Kapila wrote:\n> > On Thu, Mar 19, 2020 at 3:55 AM Chris Bandy <bandy.chris@gmail.com> wrote:\n> >>\n> >>\n> >> Sorry for these troubles. Attached are patches created using `git\n> >> format-patch -n -v6` on master at 487e9861d0.\n> >>\n> >\n> > No problem. 
I have extracted your code changes as a separate patch\n> > (see attached) as I am not sure we want to add tests for these cases.\n>\n> Patch looks good.\n>\n> My last pitch to keep the tests: These would be the first and only\n> automated tests that verify errtable, errtableconstraint, etc.\n>\n\nI don't object to those tests. However, I don't feel adding just for\nthis patch is advisable. I suggest you start a new thread for these\ntests and let us see what others think about them.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 20 Mar 2020 12:20:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "On Thu, Mar 19, 2020 at 3:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Thank you Chris, Amit.\n>\n> On Thu, Mar 19, 2020 at 1:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Thu, Mar 19, 2020 at 3:55 AM Chris Bandy <bandy.chris@gmail.com> wrote:\n> > >\n> > >\n> > > Sorry for these troubles. Attached are patches created using `git\n> > > format-patch -n -v6` on master at 487e9861d0.\n> > >\n> >\n> > No problem. I have extracted your code changes as a separate patch\n> > (see attached) as I am not sure we want to add tests for these cases.\n> > This doesn't apply in back-branches, but I think that is small work\n> > and we can do that if required. The real question is do we want to\n> > back-patch this? Basically, this improves the errors in certain cases\n> > by providing additional information that otherwise the user might need\n> > to extract from error messages. 
So, there doesn't seem to be pressing\n> > need to back-patch this but OTOH, we have mentioned in docs that we\n> > support to display this information for all SQLSTATE class 23\n> > (integrity constraint violation) errors which is not true as we forgot\n> > to adhere to that in some parts of code.\n> >\n> > What do you think? Anybody else has an opinion on whether to\n> > back-patch this or not?\n>\n> As nobody except Chris complained about this so far, maybe no?\n>\n\nFair enough, unless I see any other opinions, I will push this on Monday.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 20 Mar 2020 12:22:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" }, { "msg_contents": "On Fri, Mar 20, 2020 at 12:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 19, 2020 at 3:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > >\n> > > What do you think? Anybody else has an opinion on whether to\n> > > back-patch this or not?\n> >\n> > As nobody except Chris complained about this so far, maybe no?\n> >\n>\n> Fair enough, unless I see any other opinions, I will push this on Monday.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Mar 2020 08:22:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add schema and table names to partition error" } ]
[ { "msg_contents": "Hi,\nWhile using PL/Perl I have found that it obtains boolean arguments from Postgres as ‘t’ and ‘f’, which is extremely inconvenient because ‘f’ is not false from the perl viewpoint.\nSo the problem is how to convert the SQL booleans into Perl style.\n \nThere are 3 ways to do this:\n* make plperl automatically convert bools into something acceptable for perl. This looks simple, but probably is not acceptable as it breaks compatibility.\n* try to make some trick like it is done with arrays, i.e. convert bools into special Perl objects which look like ‘t’ and ‘f’ when treated as text, but are true and false for boolean operations. I am not sure that it is possible and reliable.\n* make a transform which transforms bool, like it is done with jsonb. This does not break compatibility and is rather straightforward.\nSo I propose to take the third way and make such transform. This is very simple, a patch is attached.\nAlso this patch improves the plperl documentation page, which now has nothing said about the transforms.\n \nRegards,\nIvan Panchenko", "msg_date": "Sun, 01 Mar 2020 00:55:17 +0300", "msg_from": "Ivan Panchenko <wao@mail.ru>", "msg_from_op": true, "msg_subject": "bool_plperl transform" }, { "msg_contents": "Ivan Panchenko <wao@mail.ru> writes:\n> While using PL/Perl I have found that it obtains boolean arguments from Postgres as ‘t’ and ‘f’, which is extremely inconvenient because ‘f’ is not false from the perl viewpoint.\n> ...\n> * make a transform which transforms bool, like it is done with jsonb. 
This does not break compatibility and is rather straightforward.\n\nPlease register this patch in the commitfest app, so we don't lose track\nof it.\n\nhttps://commitfest.postgresql.org/27/\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 29 Feb 2020 17:15:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bool_plperl transform" }, { "msg_contents": "\nOn 2/29/20 4:55 PM, Ivan Panchenko wrote:\n> Hi,\n> While using PL/Perl I have found that it obtains boolean arguments\n> from Postgres as ‘t’ and ‘f’, which is extremely inconvenient because\n> ‘f’ is not false from the perl viewpoint.\n> So the problem is how to convert the SQL booleans into Perl style.\n>  \n> There are 3 ways to do this:\n>\n> 1. make plperl automatically convert bools into something acceptable\n> for perl. This looks simple, but probably is not acceptable as it\n> breaks compatibility.\n> 2. try to make some trick like it is done with arrays, i.e. convert\n> bools into special Perl objects which look like ‘t’ and ‘f’ when\n> treated as text, but are true and false for boolean operations. I\n> am not sure that it is possible and reliable.\n> 3. make a transform which transforms bool, like it is done with\n> jsonb. This does not break compatibility and is rather\n> straightforward.\n>\n> So I propose to take the third way and make such transform. 
This is\n> very simple, a patch is attached.\n> Also this patch improves the plperl documentation page, which now has\n> nothing said about the transforms.\n>  \n>\n\n\nPatch appears to be missing all the new files.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 29 Feb 2020 23:57:47 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: bool_plperl transform" }, { "msg_contents": "Sorry,\n \nPlease find the full patch attached.\n \nIvan\n  \n>Sunday, 1 March 2020, 7:57 +03:00 from Andrew Dunstan <andrew.dunstan@2ndquadrant.com>:\n> \n>\n>On 2/29/20 4:55 PM, Ivan Panchenko wrote:\n>> Hi,\n>> While using PL/Perl I have found that it obtains boolean arguments\n>> from Postgres as ‘t’ and ‘f’, which is extremely inconvenient because\n>> ‘f’ is not false from the perl viewpoint.\n>> So the problem is how to convert the SQL booleans into Perl style.\n>>  \n>> There are 3 ways to do this:\n>>\n>> 1. make plperl automatically convert bools into something acceptable\n>> for perl. This looks simple, but probably is not acceptable as it\n>> breaks compatibility.\n>> 2. try to make some trick like it is done with arrays, i.e. convert\n>> bools into special Perl objects which look like ‘t’ and ‘f’ when\n>> treated as text, but are true and false for boolean operations. I\n>> am not sure that it is possible and reliable.\n>> 3. make a transform which transforms bool, like it is done with\n>> jsonb. This does not break compatibility and is rather\n>> straightforward.\n>>\n>> So I propose to take the third way and make such transform. 
This is\n>> very simple, a patch is attached.\n>> Also this patch improves the plperl documentation page, which now has\n>> nothing said about the transforms.\n>>  \n>>\n>\n>Patch appears to be missing all the new files.\n>\n>\n>cheers\n>\n>\n>andrew\n>\n>\n>\n>--\n>Andrew Dunstan https://www.2ndQuadrant.com\n>PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>  \n \n \n--\nИван Панченко", "msg_date": "Sun, 01 Mar 2020 11:02:10 +0300", "msg_from": "Wao <wao@mail.ru>", "msg_from_op": false, "msg_subject": "Re[2]: bool_plperl transform" }, { "msg_contents": ">Sunday, 1 February 2020, 1:15 +03:00 from Tom Lane <tgl@sss.pgh.pa.us>:\n> \n>Ivan Panchenko < wao@mail.ru > writes:\n>> While using PL/Perl I have found that it obtains boolean arguments from Postgres as ‘t’ and ‘f’, which is extremely inconvenient because ‘f’ is not false from the perl viewpoint.\n>> ...\n>> * make a transform which transforms bool, like it is done with jsonb. This does not break compatibility and is rather straightforward.\n>Please register this patch in the commitfest app, so we don't lose track\n>of it.\n>\n>https://commitfest.postgresql.org/27/\nDone:\nhttps://commitfest.postgresql.org/27/2502/\n \nRegards,\nIvan\n \n>\n>regards, tom lane", "msg_date": "Sun, 01 Mar 2020 14:14:41 +0300", "msg_from": "Ivan Panchenko <wao@mail.ru>", "msg_from_op": true, "msg_subject": "Re[2]: bool_plperl transform" }, 
{ "msg_contents": "Wao <wao@mail.ru> writes:\n> Please find the full patch attached.\n\nThe cfbot shows this failing to build on Windows:\n\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.81889\n\nI believe that's a build without plperl, so what it's probably telling\nyou is that Mkvcbuild.pm needs to be taught to build this module\nconditionally, as it already does for hstore_plperl and jsonb_plperl.\n\nAlso, while the Linux build is passing, I can't find that it is actually\ncompiling or testing bool_plperl anywhere:\n\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/656909114\n\nThis is likely because you didn't add it to contrib/Makefile.\n\nIn general, I'd suggest grepping for references to hstore_plperl\nor jsonb_plperl, and making sure that bool_plperl gets added where\nappropriate.\n\nI rather imagine you need a .gitignore file, as well.\n\nYou're also going to have to provide some documentation, because\nI don't see any in the patch.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 01 Mar 2020 16:13:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re[2]: bool_plperl transform" }, { "msg_contents": "Wao <wao@mail.ru> writes:\n\n> +Datum\n> +bool_to_plperl(PG_FUNCTION_ARGS)\n> +{\n> +\tdTHX;\n> +\tbool in = PG_GETARG_BOOL(0);\n> +\tSV\t*sv = newSVnv(SvNV(in ? 
&PL_sv_yes : &PL_sv_no));\n> +\treturn PointerGetDatum(sv);\n> +}\n\nWhy is this only copying the floating point part of the built-in\nbooleans before returning them? I think this should just return\n&PL_sv_yes or &PL_sv_no directly, like boolean expressions in Perl do,\nand like what happens for NULL (&PL_sv_undef).\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n\n", "msg_date": "Sun, 01 Mar 2020 22:09:37 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker)", "msg_from_op": false, "msg_subject": "Re: bool_plperl transform" }, 
{ "msg_contents": ">Monday, 2 March 2020, 1:09 +03:00 from ilmari@ilmari.org:\n> \n>Wao < wao@mail.ru > writes:\n> \n>> +Datum\n>> +bool_to_plperl(PG_FUNCTION_ARGS)\n>> +{\n>> + dTHX;\n>> + bool in = PG_GETARG_BOOL(0);\n>> + SV *sv = newSVnv(SvNV(in ? &PL_sv_yes : &PL_sv_no));\n>> + return PointerGetDatum(sv);\n>> +}\n>Why is this only copying the floating point part of the built-in\n>booleans before returning them? I think this should just return\n>&PL_sv_yes or &PL_sv_no directly, like boolean expressions in Perl do,\n>and like what happens for NULL (&PL_sv_undef).\nThanks, I will fix this in the next version of the patch.\n \nRegards,\nIvan\n>\n>- ilmari\n>--\n>\"A disappointingly low fraction of the human race is,\n> at any given time, on fire.\" - Stig Sandbeck Mathisen\n>\n>  ", "msg_date": "Mon, 02 Mar 2020 02:30:46 +0300", "msg_from": "Ivan Panchenko <wao@mail.ru>", "msg_from_op": true, "msg_subject": "Re[2]: bool_plperl transform" }, { "msg_contents": "Thanks, Tom.\n \nI think now it should build, please find the fixed patch attached.\nI had no possibility to check it on Windows now, but the relevant changes in Mkvcbuild.pm are done, so I hope it should work.\nThe documentation changes are also included in the same patch.\n \nRegards,\nIvan\n  \n>Monday, 2 March 2020, 0:14 +03:00 from Tom Lane <tgl@sss.pgh.pa.us>:\n> \n>Wao < wao@mail.ru > writes:\n>> Please find the full patch attached.\n>The cfbot shows this failing to build on Windows:\n>\n>https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.81889\n>\n>I believe that's a build without plperl, so what it's probably telling\n>you is that Mkvcbuild.pm needs to be taught to build this module\n>conditionally, as it already does for hstore_plperl and jsonb_plperl.\n>\n>Also, while the Linux build is passing, I can't find that it is actually\n>compiling or testing bool_plperl anywhere:\n>\n>https://travis-ci.org/postgresql-cfbot/postgresql/builds/656909114\n>\n>This is likely because you didn't add it to contrib/Makefile.\n>\n>In general, I'd suggest grepping for references to hstore_plperl\n>or jsonb_plperl, and making sure that bool_plperl gets added where\n>appropriate.\n>\n>I rather imagine you need a .gitignore file, as well.\n>\n>You're also going to have to provide some documentation, because\n>I don't see any in the patch.\n>\n>regards, tom lane", "msg_date": "Mon, 02 Mar 
2020 03:01:40 +0300", "msg_from": "Ivan Panchenko <wao@mail.ru>", "msg_from_op": true, "msg_subject": "Re[4]: bool_plperl transform" }, { "msg_contents": "Ivan Panchenko <wao@mail.ru> writes:\n> [ bool_plperl_transform_v3.patch ]\n\nI reviewed this, fixed some minor problems (mostly cosmetic, but not\nentirely), and pushed it.\n\nThanks for the contribution!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Mar 2020 17:15:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re[4]: bool_plperl transform" }, 
{ "msg_contents": "Tom,\n  \n>Saturday, 7 March 2020, 1:15 +03:00 from Tom Lane <tgl@sss.pgh.pa.us>:\n> \n>Ivan Panchenko < wao@mail.ru > writes:\n>> [ bool_plperl_transform_v3.patch ]\n>I reviewed this, fixed some minor problems (mostly cosmetic, but not\n>entirely), and pushed it.\n\nThanks for the commit and for your work improving the patch.\n \nDo you think the jsonb transform is worth explicit mentioning at the PL/Perl documentation page, or not?\n \n>\n>Thanks for the contribution!\n>\n>regards, tom lane\n> \n\nRegards,\nIvan", "msg_date": "Sat, 07 Mar 2020 18:07:24 +0300", "msg_from": "Ivan Panchenko <wao@mail.ru>", "msg_from_op": true, "msg_subject": "Re[6]: bool_plperl transform" }, { "msg_contents": "Ivan Panchenko <wao@mail.ru> writes:\n> Do you think the jsonb transform is worth explicit mentioning at the PL/Perl documentation page, or not?\n\nRight now it's documented under the json data types, which seems\nsufficient to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 07 Mar 2020 10:34:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re[6]: bool_plperl transform" } ]
[ { "msg_contents": "Hello,\n\nI noticed the following scenario under the development of truncate\nsupport on FDW.\n\nIn case when 'ftable' maps a remote table that has inherited children,...\n\npostgres=# create table rtable_parent (id int, label text, x text);\nCREATE TABLE\npostgres=# create table rtable_child () inherits (rtable_parent);\nCREATE TABLE\npostgres=# insert into rtable_parent (select x, 'parent', md5(x::text)\nfrom generate_series(1,10) x);\nINSERT 0 10\npostgres=# insert into rtable_child (select x, 'child', md5(x::text)\nfrom generate_series(6,15) x);\nINSERT 0 10\npostgres=# create foreign table ftable (id int, label text, x text)\n server loopback options (table_name 'rtable_parent');\nCREATE FOREIGN TABLE\n\nThe 'ftable' shows the results from both of the parent and children.\npostgres=# select * from ftable;\n id | label | x\n----+--------+----------------------------------\n 1 | parent | c4ca4238a0b923820dcc509a6f75849b\n 2 | parent | c81e728d9d4c2f636f067f89cc14862c\n 3 | parent | eccbc87e4b5ce2fe28308fd9f2a7baf3\n 4 | parent | a87ff679a2f3e71d9181a67b7542122c\n 5 | parent | e4da3b7fbbce2345d7772b0674a318d5\n 6 | parent | 1679091c5a880faf6fb5e6087eb1b2dc\n 7 | parent | 8f14e45fceea167a5a36dedd4bea2543\n 8 | parent | c9f0f895fb98ab9159f51fd0297e236d\n 9 | parent | 45c48cce2e2d7fbdea1afc51c7c6ad26\n 10 | parent | d3d9446802a44259755d38e6d163e820\n 6 | child | 1679091c5a880faf6fb5e6087eb1b2dc\n 7 | child | 8f14e45fceea167a5a36dedd4bea2543\n 8 | child | c9f0f895fb98ab9159f51fd0297e236d\n 9 | child | 45c48cce2e2d7fbdea1afc51c7c6ad26\n 10 | child | d3d9446802a44259755d38e6d163e820\n 11 | child | 6512bd43d9caa6e02c990b0a82652dca\n 12 | child | c20ad4d76fe97759aa27a0c99bff6710\n 13 | child | c51ce410c124a10e0db5e4b97fc2af39\n 14 | child | aab3238922bcc25a6f606eb525ffdc56\n 15 | child | 9bf31c7ff062936a96d3c8bd1f8f2ff3\n(20 rows)\n\nWhen we try to update the foreign-table without DirectUpdate mode,\nremote query tries to update the rows specified by 
\"ctid\" system column.\nHowever, it was not a unique key in this case.\n\npostgres=# explain update ftable set x = 'updated' where id > 10 and\npg_backend_pid() > 0;\n QUERY PLAN\n-----------------------------------------------------------------------------\n Update on ftable (cost=100.00..133.80 rows=414 width=74)\n -> Result (cost=100.00..133.80 rows=414 width=74)\n One-Time Filter: (pg_backend_pid() > 0)\n -> Foreign Scan on ftable (cost=100.00..133.80 rows=414 width=42)\n(4 rows)\n\n[*] Note that pg_backend_pid() prevent direct update.\n\npostgres=# update ftable set x = 'updated' where id > 10 and\npg_backend_pid() > 0;\nUPDATE 5\npostgres=# select ctid,* from ftable;\n ctid | id | label | x\n--------+----+--------+----------------------------------\n (0,1) | 1 | parent | c4ca4238a0b923820dcc509a6f75849b\n (0,2) | 2 | parent | c81e728d9d4c2f636f067f89cc14862c\n (0,3) | 3 | parent | eccbc87e4b5ce2fe28308fd9f2a7baf3\n (0,4) | 4 | parent | a87ff679a2f3e71d9181a67b7542122c\n (0,5) | 5 | parent | e4da3b7fbbce2345d7772b0674a318d5\n (0,11) | 6 | parent | updated\n (0,12) | 7 | parent | updated\n (0,13) | 8 | parent | updated\n (0,14) | 9 | parent | updated\n (0,15) | 10 | parent | updated\n (0,1) | 6 | child | 1679091c5a880faf6fb5e6087eb1b2dc\n (0,2) | 7 | child | 8f14e45fceea167a5a36dedd4bea2543\n (0,3) | 8 | child | c9f0f895fb98ab9159f51fd0297e236d\n (0,4) | 9 | child | 45c48cce2e2d7fbdea1afc51c7c6ad26\n (0,5) | 10 | child | d3d9446802a44259755d38e6d163e820\n (0,11) | 11 | child | updated\n (0,12) | 12 | child | updated\n (0,13) | 13 | child | updated\n (0,14) | 14 | child | updated\n (0,15) | 15 | child | updated\n(20 rows)\n\nThe WHERE-clause (id > 10) should affect only child table.\nHowever, it updated the rows in the parent table with same ctid.\n\nHow about your thought?\nProbably, we need to fetch a pair of tableoid and ctid to identify\nthe remote table exactly, if not direct-update cases.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai 
Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Sun, 1 Mar 2020 11:59:58 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": true, "msg_subject": "[BUG?] postgres_fdw incorrectly updates remote table if it has\n inherited children." }, { "msg_contents": "Hi,\n\nOn Sun, Mar 1, 2020 at 12:00 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n>\n> Hello,\n>\n> I noticed the following scenario under the development of truncate\n> support on FDW.\n>\n> In case when 'ftable' maps a remote table that has inherited children,...\n>\n> postgres=# create table rtable_parent (id int, label text, x text);\n> CREATE TABLE\n> postgres=# create table rtable_child () inherits (rtable_parent);\n> CREATE TABLE\n> postgres=# insert into rtable_parent (select x, 'parent', md5(x::text)\n> from generate_series(1,10) x);\n> INSERT 0 10\n> postgres=# insert into rtable_child (select x, 'child', md5(x::text)\n> from generate_series(6,15) x);\n> INSERT 0 10\n> postgres=# create foreign table ftable (id int, label text, x text)\n> server loopback options (table_name 'rtable_parent');\n> CREATE FOREIGN TABLE\n>\n> The 'ftable' shows the results from both of the parent and children.\n> postgres=# select * from ftable;\n> id | label | x\n> ----+--------+----------------------------------\n> 1 | parent | c4ca4238a0b923820dcc509a6f75849b\n> 2 | parent | c81e728d9d4c2f636f067f89cc14862c\n> 3 | parent | eccbc87e4b5ce2fe28308fd9f2a7baf3\n> 4 | parent | a87ff679a2f3e71d9181a67b7542122c\n> 5 | parent | e4da3b7fbbce2345d7772b0674a318d5\n> 6 | parent | 1679091c5a880faf6fb5e6087eb1b2dc\n> 7 | parent | 8f14e45fceea167a5a36dedd4bea2543\n> 8 | parent | c9f0f895fb98ab9159f51fd0297e236d\n> 9 | parent | 45c48cce2e2d7fbdea1afc51c7c6ad26\n> 10 | parent | d3d9446802a44259755d38e6d163e820\n> 6 | child | 1679091c5a880faf6fb5e6087eb1b2dc\n> 7 | child | 8f14e45fceea167a5a36dedd4bea2543\n> 8 | child | c9f0f895fb98ab9159f51fd0297e236d\n> 9 | child | 45c48cce2e2d7fbdea1afc51c7c6ad26\n> 10 | child | 
d3d9446802a44259755d38e6d163e820\n> 11 | child | 6512bd43d9caa6e02c990b0a82652dca\n> 12 | child | c20ad4d76fe97759aa27a0c99bff6710\n> 13 | child | c51ce410c124a10e0db5e4b97fc2af39\n> 14 | child | aab3238922bcc25a6f606eb525ffdc56\n> 15 | child | 9bf31c7ff062936a96d3c8bd1f8f2ff3\n> (20 rows)\n>\n> When we try to update the foreign-table without DirectUpdate mode,\n> remote query tries to update the rows specified by \"ctid\" system column.\n> However, it was not a unique key in this case.\n>\n> postgres=# explain update ftable set x = 'updated' where id > 10 and\n> pg_backend_pid() > 0;\n> QUERY PLAN\n> -----------------------------------------------------------------------------\n> Update on ftable (cost=100.00..133.80 rows=414 width=74)\n> -> Result (cost=100.00..133.80 rows=414 width=74)\n> One-Time Filter: (pg_backend_pid() > 0)\n> -> Foreign Scan on ftable (cost=100.00..133.80 rows=414 width=42)\n> (4 rows)\n>\n> [*] Note that pg_backend_pid() prevent direct update.\n>\n> postgres=# update ftable set x = 'updated' where id > 10 and\n> pg_backend_pid() > 0;\n> UPDATE 5\n> postgres=# select ctid,* from ftable;\n> ctid | id | label | x\n> --------+----+--------+----------------------------------\n> (0,1) | 1 | parent | c4ca4238a0b923820dcc509a6f75849b\n> (0,2) | 2 | parent | c81e728d9d4c2f636f067f89cc14862c\n> (0,3) | 3 | parent | eccbc87e4b5ce2fe28308fd9f2a7baf3\n> (0,4) | 4 | parent | a87ff679a2f3e71d9181a67b7542122c\n> (0,5) | 5 | parent | e4da3b7fbbce2345d7772b0674a318d5\n> (0,11) | 6 | parent | updated\n> (0,12) | 7 | parent | updated\n> (0,13) | 8 | parent | updated\n> (0,14) | 9 | parent | updated\n> (0,15) | 10 | parent | updated\n> (0,1) | 6 | child | 1679091c5a880faf6fb5e6087eb1b2dc\n> (0,2) | 7 | child | 8f14e45fceea167a5a36dedd4bea2543\n> (0,3) | 8 | child | c9f0f895fb98ab9159f51fd0297e236d\n> (0,4) | 9 | child | 45c48cce2e2d7fbdea1afc51c7c6ad26\n> (0,5) | 10 | child | d3d9446802a44259755d38e6d163e820\n> (0,11) | 11 | child | updated\n> (0,12) | 12 | 
child | updated\n> (0,13) | 13 | child | updated\n> (0,14) | 14 | child | updated\n> (0,15) | 15 | child | updated\n> (20 rows)\n>\n> The WHERE-clause (id > 10) should affect only child table.\n> However, it updated the rows in the parent table with same ctid.\n>\n> How about your thought?\n> Probably, we need to fetch a pair of tableoid and ctid to identify\n> the remote table exactly, if not direct-update cases.\n\nThis was discussed on this thread:\n\nhttps://www.postgresql.org/message-id/CAFjFpRfcgwsHRmpvoOK-GUQi-n8MgAS%2BOxcQo%3DaBDn1COywmcg%40mail.gmail.com\n\nSolutions have been proposed too, but none finalized yet.\n\nThanks,\nAmit\n\n\n", "msg_date": "Sun, 1 Mar 2020 12:38:58 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG?] postgres_fdw incorrectly updates remote table if it has\n inherited children." }, { "msg_contents": "Hi Amit,\n\nThanks, I didn't check the thread.\n\nIt looks to me like the latest patch was submitted by Fujita-san in Oct 2018.\nThen Tom pointed out that this simple approach produces an inefficient remote\nquery plan, because the local side has no knowledge of the structure of the remote tables mapped\nby postgres_fdw. 
After that, the patch has been left for a year.\n\nIndeed, it is not an ideal query plan to execute for each updated row...\n\npostgres=# explain select * from rtable_parent where tableoid = 126397\nand ctid = '(0,11)'::tid;\n QUERY PLAN\n-------------------------------------------------------------------------\n Append (cost=0.00..5.18 rows=2 width=50)\n -> Seq Scan on rtable_parent (cost=0.00..1.15 rows=1 width=31)\n Filter: ((tableoid = '126397'::oid) AND (ctid = '(0,11)'::tid))\n -> Tid Scan on rtable_child (cost=0.00..4.02 rows=1 width=68)\n TID Cond: (ctid = '(0,11)'::tid)\n Filter: (tableoid = '126397'::oid)\n(6 rows)\n\nRather than refactoring postgres_fdw, is it possible to have a\nbuilt-in partition-pruning rule for when \"tableoid = <OID>\" is supplied?\nIf the partitioning mechanism had that feature, this would not be a\ncomplicated problem.\n\nBest regards,\n\nOn Sun, Mar 1, 2020 at 12:39 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi,\n>\n> On Sun, Mar 1, 2020 at 12:00 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> >\n> > Hello,\n> >\n> > I noticed the following scenario under the development of truncate\n> > support on FDW.\n> >\n> > In case when 'ftable' maps a remote table that has inherited children,...\n> >\n> > postgres=# create table rtable_parent (id int, label text, x text);\n> > CREATE TABLE\n> > postgres=# create table rtable_child () inherits (rtable_parent);\n> > CREATE TABLE\n> > postgres=# insert into rtable_parent (select x, 'parent', md5(x::text)\n> > from generate_series(1,10) x);\n> > INSERT 0 10\n> > postgres=# insert into rtable_child (select x, 'child', md5(x::text)\n> > from generate_series(6,15) x);\n> > INSERT 0 10\n> > postgres=# create foreign table ftable (id int, label text, x text)\n> > server loopback options (table_name 'rtable_parent');\n> > CREATE FOREIGN TABLE\n> >\n> > The 'ftable' shows the results from both of the parent and children.\n> > postgres=# select * from ftable;\n> > id | label | x\n> > 
----+--------+----------------------------------\n> > 1 | parent | c4ca4238a0b923820dcc509a6f75849b\n> > 2 | parent | c81e728d9d4c2f636f067f89cc14862c\n> > 3 | parent | eccbc87e4b5ce2fe28308fd9f2a7baf3\n> > 4 | parent | a87ff679a2f3e71d9181a67b7542122c\n> > 5 | parent | e4da3b7fbbce2345d7772b0674a318d5\n> > 6 | parent | 1679091c5a880faf6fb5e6087eb1b2dc\n> > 7 | parent | 8f14e45fceea167a5a36dedd4bea2543\n> > 8 | parent | c9f0f895fb98ab9159f51fd0297e236d\n> > 9 | parent | 45c48cce2e2d7fbdea1afc51c7c6ad26\n> > 10 | parent | d3d9446802a44259755d38e6d163e820\n> > 6 | child | 1679091c5a880faf6fb5e6087eb1b2dc\n> > 7 | child | 8f14e45fceea167a5a36dedd4bea2543\n> > 8 | child | c9f0f895fb98ab9159f51fd0297e236d\n> > 9 | child | 45c48cce2e2d7fbdea1afc51c7c6ad26\n> > 10 | child | d3d9446802a44259755d38e6d163e820\n> > 11 | child | 6512bd43d9caa6e02c990b0a82652dca\n> > 12 | child | c20ad4d76fe97759aa27a0c99bff6710\n> > 13 | child | c51ce410c124a10e0db5e4b97fc2af39\n> > 14 | child | aab3238922bcc25a6f606eb525ffdc56\n> > 15 | child | 9bf31c7ff062936a96d3c8bd1f8f2ff3\n> > (20 rows)\n> >\n> > When we try to update the foreign-table without DirectUpdate mode,\n> > remote query tries to update the rows specified by \"ctid\" system column.\n> > However, it was not a unique key in this case.\n> >\n> > postgres=# explain update ftable set x = 'updated' where id > 10 and\n> > pg_backend_pid() > 0;\n> > QUERY PLAN\n> > -----------------------------------------------------------------------------\n> > Update on ftable (cost=100.00..133.80 rows=414 width=74)\n> > -> Result (cost=100.00..133.80 rows=414 width=74)\n> > One-Time Filter: (pg_backend_pid() > 0)\n> > -> Foreign Scan on ftable (cost=100.00..133.80 rows=414 width=42)\n> > (4 rows)\n> >\n> > [*] Note that pg_backend_pid() prevent direct update.\n> >\n> > postgres=# update ftable set x = 'updated' where id > 10 and\n> > pg_backend_pid() > 0;\n> > UPDATE 5\n> > postgres=# select ctid,* from ftable;\n> > ctid | id | label | x\n> > 
--------+----+--------+----------------------------------\n> > (0,1) | 1 | parent | c4ca4238a0b923820dcc509a6f75849b\n> > (0,2) | 2 | parent | c81e728d9d4c2f636f067f89cc14862c\n> > (0,3) | 3 | parent | eccbc87e4b5ce2fe28308fd9f2a7baf3\n> > (0,4) | 4 | parent | a87ff679a2f3e71d9181a67b7542122c\n> > (0,5) | 5 | parent | e4da3b7fbbce2345d7772b0674a318d5\n> > (0,11) | 6 | parent | updated\n> > (0,12) | 7 | parent | updated\n> > (0,13) | 8 | parent | updated\n> > (0,14) | 9 | parent | updated\n> > (0,15) | 10 | parent | updated\n> > (0,1) | 6 | child | 1679091c5a880faf6fb5e6087eb1b2dc\n> > (0,2) | 7 | child | 8f14e45fceea167a5a36dedd4bea2543\n> > (0,3) | 8 | child | c9f0f895fb98ab9159f51fd0297e236d\n> > (0,4) | 9 | child | 45c48cce2e2d7fbdea1afc51c7c6ad26\n> > (0,5) | 10 | child | d3d9446802a44259755d38e6d163e820\n> > (0,11) | 11 | child | updated\n> > (0,12) | 12 | child | updated\n> > (0,13) | 13 | child | updated\n> > (0,14) | 14 | child | updated\n> > (0,15) | 15 | child | updated\n> > (20 rows)\n> >\n> > The WHERE-clause (id > 10) should affect only child table.\n> > However, it updated the rows in the parent table with same ctid.\n> >\n> > How about your thought?\n> > Probably, we need to fetch a pair of tableoid and ctid to identify\n> > the remote table exactly, if not direct-update cases.\n>\n> This was this discussed on this thread:\n>\n> https://www.postgresql.org/message-id/CAFjFpRfcgwsHRmpvoOK-GUQi-n8MgAS%2BOxcQo%3DaBDn1COywmcg%40mail.gmail.com\n>\n> Solutions have been proposed too, but none finalized yet.\n>\n> Thanks,\n> Amit\n\n\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Sun, 1 Mar 2020 13:46:59 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: [BUG?] postgres_fdw incorrectly updates remote table if it has\n inherited children." 
}, { "msg_contents": "Hi KaiGai-san,\n\nOn Sun, Mar 1, 2020 at 1:47 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> It looks to me the latest patch was submitted by Fujita-san, Oct-2018.\n> Then, Tom pointer out this simple approach has a problem of inefficient remote\n> query plan because of no intelligence on the structure of remote tables mapped\n> by postgres_fdw. After that, the patch has been left for a year.\n\nUnfortunately, I didn't have time to work on that (and won't in the\ndevelopment cycle for PG13.)\n\n> Indeed, it is not an ideal query plan to execute for each updated rows...\n>\n> postgres=# explain select * from rtable_parent where tableoid = 126397\n> and ctid = '(0,11)'::tid;\n> QUERY PLAN\n> -------------------------------------------------------------------------\n> Append (cost=0.00..5.18 rows=2 width=50)\n> -> Seq Scan on rtable_parent (cost=0.00..1.15 rows=1 width=31)\n> Filter: ((tableoid = '126397'::oid) AND (ctid = '(0,11)'::tid))\n> -> Tid Scan on rtable_child (cost=0.00..4.02 rows=1 width=68)\n> TID Cond: (ctid = '(0,11)'::tid)\n> Filter: (tableoid = '126397'::oid)\n> (6 rows)\n\nIIRC, I think one of Tom's concerns about the solution I proposed was\nthat it added the tableoid restriction clause to the remote\nUPDATE/DELETE query even if the remote table is not an inheritance\nset. To add the clause only if the remote table is an inheritance\nset, what I have in mind is to 1) introduce a new postgres_fdw table\noption to indicate whether the remote table is an inheritance set or\nnot, and 2) determine whether to add the clause or not, using the\noption.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 2 Mar 2020 16:49:15 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG?] postgres_fdw incorrectly updates remote table if it has\n inherited children." 
}, { "msg_contents": "Fujita-san,\n\n> Unfortunately, I didn't have time to work on that (and won't in the\n> development cycle for PG13.)\n>\n> > Indeed, it is not an ideal query plan to execute for each updated rows...\n> >\n> > postgres=# explain select * from rtable_parent where tableoid = 126397\n> > and ctid = '(0,11)'::tid;\n> > QUERY PLAN\n> > -------------------------------------------------------------------------\n> > Append (cost=0.00..5.18 rows=2 width=50)\n> > -> Seq Scan on rtable_parent (cost=0.00..1.15 rows=1 width=31)\n> > Filter: ((tableoid = '126397'::oid) AND (ctid = '(0,11)'::tid))\n> > -> Tid Scan on rtable_child (cost=0.00..4.02 rows=1 width=68)\n> > TID Cond: (ctid = '(0,11)'::tid)\n> > Filter: (tableoid = '126397'::oid)\n> > (6 rows)\n>\n> IIRC, I think one of Tom's concerns about the solution I proposed was\n> that it added the tableoid restriction clause to the remote\n> UPDATE/DELETE query even if the remote table is not an inheritance\n> set. To add the clause only if the remote table is an inheritance\n> set, what I have in mind is to 1) introduce a new postgres_fdw table\n> option to indicate whether the remote table is an inheritance set or\n> not, and 2) determine whether to add the clause or not, using the\n> option.\n>\nI don't think the new options in postgres_fdw is a good solution because\nremote table structure is flexible regardless of the local configuration in\nforeign-table options. People may add inherited child tables after the\ndeclaration of foreign-tables. 
That can create a configuration mismatch.\nEven if we always add the tableoid=OID restriction to the remote query,\nit is evaluated after the TidScan has fetched the row pointed to by ctid,\nso its additional cost is limited.\n\nAnd, one potential benefit is that the tableoid=OID restriction can be used to prune\nunrelated partition leaves/inherited children at the planner stage.\nThat is probably separate groundwork from postgres_fdw, though.\nOnce the planner supports a built-in rule for this kind of optimization,\nthe enhancement in postgres_fdw will be quite simple, I guess.\n\nWhat do you think?\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Mon, 2 Mar 2020 21:25:45 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: [BUG?] postgres_fdw incorrectly updates remote table if it has\n inherited children." } ]
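The row-identification problem debated in the thread above can be illustrated without a running server. The sketch below is a toy model in plain Python, not postgres_fdw code: the dictionary, helper names, and table names are invented for illustration, and the data mirrors the `select ctid,* from ftable` output in the thread. It shows why addressing remote rows by ctid alone clobbers unrelated rows once the remote table has inherited children, while the (tableoid, ctid) pair proposed in the thread is unambiguous.

```python
# Toy model of the inheritance example above: two heaps whose ctids overlap.
# Keys are (tableoid, ctid) pairs; ctid values repeat across the two tables,
# just as they do for rtable_parent and rtable_child in the thread.
rows = {}
for ctid, rid in enumerate(range(1, 11), start=1):
    rows[("rtable_parent", (0, ctid))] = {"id": rid, "x": "orig"}
for ctid, rid in enumerate(range(6, 16), start=1):
    rows[("rtable_child", (0, ctid))] = {"id": rid, "x": "orig"}

def update_by_ctid(target_ctids, new_x):
    # Buggy: identifies rows by ctid alone, like a remote
    # "UPDATE ... WHERE ctid = $1" against a table with inherited children.
    for (tableoid, ctid), row in rows.items():
        if ctid in target_ctids:
            row["x"] = new_x

def update_by_tableoid_ctid(target_keys, new_x):
    # Fixed: identifies rows by the (tableoid, ctid) pair.
    for key, row in rows.items():
        if key in target_keys:
            row["x"] = new_x

# Scan phase: the WHERE clause (id > 10) matches only child rows.
matched = [key for key, row in rows.items() if row["id"] > 10]

update_by_ctid({ctid for _, ctid in matched}, "updated")
# ctid-only addressing also clobbered parent rows sharing those ctids.
collateral = [k for k, r in rows.items()
              if r["x"] == "updated" and k[0] == "rtable_parent"]
```

With the pair-based helper instead, only the five child rows change, which is the behavior the thread argues the non-direct-update path should have.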
[ { "msg_contents": "Hi,\n\nI think postgres' issues with scaling to larger numbers of connections\nis a serious problem in the field. While poolers can address some of\nthat, given the issues around prepared statements, transaction state,\netc, I don't think that's sufficient in many cases. It also adds\nlatency.\n\nNor do I think the argument that one shouldn't have more than a few\ndozen connection holds particularly much water. As clients have think\ntime, and database results have to be sent/received (most clients don't\nuse pipelining), and as many applications have many application servers\nwith individual connection pools, it's very common to need more\nconnections than postgres can easily deal with.\n\n\nThe largest reason for that is GetSnapshotData(). It scales poorly to\nlarger connection counts. Part of that is obviously it's O(connections)\nnature, but I always thought it had to be more. I've seen production\nworkloads spending > 98% of the cpu time n GetSnapshotData().\n\n\nAfter a lot of analysis and experimentation I figured out that the\nprimary reason for this is PGXACT->xmin. Even the simplest transaction\nmodifies MyPgXact->xmin several times during its lifetime (IIRC twice\n(snapshot & release) for exec_bind_message(), same for\nexec_exec_message(), then again as part of EOXact processing). Which\nmeans that a backend doing GetSnapshotData() on a system with a number\nof other connections active, is very likely to hit PGXACT cachelines\nthat are owned by another cpu / set of cpus / socket. The larger the\nsystem is, the worse the consequences of this are.\n\nThis problem is most prominent (and harder to fix) for xmin, but also\nexists for the other fields in PGXACT. 
We rarely have xid, nxids,\noverflow, or vacuumFlags set, yet we write to them constantly, leading to\ncross-node traffic.\n\nThe second biggest problem is that the indirection through pgprocnos\nthat GetSnapshotData() has to follow to get each backend's\nxmin is very unfriendly for a pipelined CPU (i.e. all that postgres runs\non). There's basically a stall at the end of every loop iteration -\nwhich is exacerbated by there being so many cache misses.\n\n\nIt's fairly easy to avoid unnecessarily dirtying cachelines for all the\nPGXACT fields except xmin, because that one actually needs to be visible to\nother backends.\n\n\nWhile it sounds almost trivial in hindsight, it took me a long while to\ngrasp a solution to a big part of this problem: We don't actually need\nto look at PGXACT->xmin to compute a snapshot. The only reason that\nGetSnapshotData() does so is that it also computes\nRecentGlobal[Data]Xmin.\n\nBut we don't actually need them all that frequently. They're primarily\nused as horizons for heap_page_prune_opt() etc. For one, while\npruning is really important, it doesn't happen *all* the time. But more\nimportantly, a RecentGlobalXmin from an earlier transaction is actually\nsufficient for most pruning requests, especially when there is a larger\npercentage of reading than updating transactions (very common).\n\nBy having GetSnapshotData() compute an accurate upper bound after which\nwe are certain not to be able to prune (basically the transaction's\nxmin, slot horizons, etc), and a conservative lower bound below which\nwe are definitely able to prune, we can allow some pruning actions to\nhappen. If a pruning request (or something similar) encounters an xid\nbetween those, an accurate lower bound can be computed.\n\nThat allows us to avoid looking at PGXACT->xmin.\n\n\nTo address the second big problem (the indirection), we can instead pack\nthe contents of PGXACT tightly, just like we do for pgprocnos. 
In the\nattached series, I introduced separate arrays for xids, vacuumFlags,\nnsubxids.\n\nThe reason for splitting them is that they change at different rates\nand have different sizes. In a read-mostly workload, most backends are not\ngoing to have an xid, therefore making the xids array almost\nconstant. As long as all xids are unassigned, GetSnapshotData() doesn't\nneed to look at anything else, which makes it sensible to check the\nxid first.\n\n\nHere are some numbers for the submitted patch series. I had to cull some\nfurther improvements to make it more manageable, but I think the numbers\nstill are quite convincing.\n\nThe workload is a pgbench readonly, with pgbench -M prepared -c $conns\n-j $conns -S -n for each client count. This is on a machine with two\nIntel(R) Xeon(R) Platinum 8168 CPUs, but virtualized.\n\nconns tps master\t\ttps pgxact-split\n\n1 26842.492845 26524.194821\n10 246923.158682 249224.782661\n50 695956.539704 709833.746374\n100 1054727.043139 1903616.306028\n200 964795.282957 1949200.338012\n300 906029.377539 1927881.231478\n400 845696.690912 1911065.369776\n500 812295.222497 1926237.255856\n600 888030.104213 1903047.236273\n700 866896.532490 1886537.202142\n800 863407.341506 1883768.592610\n900 871386.608563 1874638.012128\n1000 887668.277133 1876402.391502\n1500 860051.361395 1815103.564241\n2000 890900.098657 1775435.271018\n3000 874184.980039 1653953.817997\n4000 845023.080703 1582582.316043\n5000 817100.195728 1512260.802371\n\nI think these are pretty nice results.\n\n\nNote that the patchset currently does not implement snapshot_too_old, but\nthe rest of the regression tests do pass.\n\n\nOne further cool consequence of the fact that GetSnapshotData()'s\nresults can be made to only depend on the set of xids in progress is\nthat caching the results of GetSnapshotData() is almost trivial at that\npoint: We only need to recompute snapshots when a toplevel transaction\ncommits/aborts.\n\nSo we can avoid rebuilding snapshots when no commit has 
happened since it\nwas last built. Which amounts to assigning a current 'commit sequence\nnumber' to the snapshot, and checking that against the current number\nat the time of the next GetSnapshotData() call. Well, turns out there's\nthis \"LSN\" thing we assign to commits (there are some small issues with\nthat though). I've experimented with that, and it considerably further\nimproves the numbers above. Both with a higher peak throughput, but more\nimportantly it almost entirely removes the throughput regression from\n2000 connections onwards.\n\nI'm still working on cleaning that part of the patch up, I'll post it in\na bit.\n\n\nThe series currently consists of:\n\n0001-0005: Fixes and assert improvements that are independent of the patch, but\n are hit by the new code (see also separate thread).\n\n0006: Move delayChkpt from PGXACT to PGPROC; it's rarely checked & frequently modified\n\n0007: WIP: Introduce abstraction layer for \"is tuple invisible\" tests.\n\n This is the most crucial piece. Instead of code directly using\n RecentOldestXmin, there's a new set of functions for testing\n whether an xid is visible (InvisibleToEveryoneTestXid() et al).\n\n Those functions use new horizon boundaries computed as part of\n GetSnapshotData(), and recompute an accurate boundary when the\n tested xid falls in between.\n\n There's a bit more infrastructure needed - we need to limit how\n often an accurate snapshot is computed. Probably to once per\n snapshot? Or once per transaction?\n\n\n To avoid issues with the lower boundary getting too old and\n presenting a wraparound danger, I made all the xids be\n FullTransactionIds. That imo is a good thing?\n\n\n This patch currently breaks old_snapshot_threshold, as I've not\n yet integrated it with the new functions. 
I think we can make the\n old snapshot stuff a lot more precise with this - instead of\n always triggering conflicts when a RecentGlobalXmin is too old, we\n can do so only in the cases we actually remove a row. I ran out of\n energy threading that through heap_page_prune and\n HeapTupleSatisfiesVacuum.\n\n\n0008: Move PGXACT->xmin back to PGPROC.\n\n Now that GetSnapshotData() doesn't access xmin anymore, we can\n make it a normal field in PGPROC again.\n\n\n0009: Improve GetSnapshotData() performance by avoiding indirection for xid access.\n0010: Improve GetSnapshotData() performance by avoiding indirection for vacuumFlags\n0011: Improve GetSnapshotData() performance by avoiding indirection for nsubxids access\n\n These successively move the remaining PGXACT fields into separate\n arrays in ProcGlobal, and adjust GetSnapshotData() to take\n advantage. Those arrays are dense in the sense that they only\n contain data for PGPROCs that are in use (i.e. when disconnecting,\n the array is moved around).\n\n I think xid and vacuumFlags are pretty reasonable. But they need\n cleanup, obviously:\n - The biggest cleanup would be to add a few helper functions for\n accessing the values, rather than open coding that.\n - Perhaps we should call the ProcGlobal ones 'cached', and name\n the PGPROC ones as the one true source of truth?\n\n For subxid I thought it'd be nice to have nxids and overflow be\n only one number. But that probably was the wrong call? Now\n TransactionIdInProgress() cannot look at the subxids that did\n fit in PGPROC.subxid. I'm not sure that's important, given the\n likelihood of misses? But I'd probably still have the subxid\n array hold {uint8 nsubxids; bool overflowed} pairs instead.\n\n\n To keep the arrays dense they copy the logic for pgprocnos. Which\n means that ProcArrayAdd/Remove move things around. 
Unfortunately\n that requires holding both ProcArrayLock and XidGenLock currently\n (to avoid GetNewTransactionId() having to hold ProcArrayLock). But\n that doesn't seem too bad?\n\n\n0012: Remove now unused PGXACT.\n\n There's no reason to have it anymore.\n\nThe patchseries is also available at\nhttps://github.com/anarazel/postgres/tree/pgxact-split\n\nGreetings,\n\nAndres Freund", "msg_date": "Sun, 1 Mar 2020 00:36:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-03-01 00:36:01 -0800, Andres Freund wrote:\n> conns tps master\t\ttps pgxact-split\n> \n> 1 26842.492845 26524.194821\n> 10 246923.158682 249224.782661\n> 50 695956.539704 709833.746374\n> 100 1054727.043139 1903616.306028\n> 200 964795.282957 1949200.338012\n> 300 906029.377539 1927881.231478\n> 400 845696.690912 1911065.369776\n> 500 812295.222497 1926237.255856\n> 600 888030.104213 1903047.236273\n> 700 866896.532490 1886537.202142\n> 800 863407.341506 1883768.592610\n> 900 871386.608563 1874638.012128\n> 1000 887668.277133 1876402.391502\n> 1500 860051.361395 1815103.564241\n> 2000 890900.098657 1775435.271018\n> 3000 874184.980039 1653953.817997\n> 4000 845023.080703 1582582.316043\n> 5000 817100.195728 1512260.802371\n> \n> I think these are pretty nice results.\n\nAttached as a graph as well.", "msg_date": "Sun, 1 Mar 2020 00:46:38 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Sun, Mar 1, 2020 at 2:17 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-03-01 00:36:01 -0800, Andres Freund wrote:\n> > conns tps master tps pgxact-split\n> >\n> > 1 26842.492845 26524.194821\n> > 10 246923.158682 249224.782661\n> > 50 695956.539704 709833.746374\n> > 100 1054727.043139 1903616.306028\n> > 200 964795.282957 
1949200.338012\n> > 300 906029.377539 1927881.231478\n> > 400 845696.690912 1911065.369776\n> > 500 812295.222497 1926237.255856\n> > 600 888030.104213 1903047.236273\n> > 700 866896.532490 1886537.202142\n> > 800 863407.341506 1883768.592610\n> > 900 871386.608563 1874638.012128\n> > 1000 887668.277133 1876402.391502\n> > 1500 860051.361395 1815103.564241\n> > 2000 890900.098657 1775435.271018\n> > 3000 874184.980039 1653953.817997\n> > 4000 845023.080703 1582582.316043\n> > 5000 817100.195728 1512260.802371\n> >\n> > I think these are pretty nice results.\n>\n\nNice improvement. +1 for improving the scalability for higher connection count.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 1 Mar 2020 16:25:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-03-01 00:36:01 -0800, Andres Freund wrote:\n> Here are some numbers for the submitted patch series. I'd to cull some\n> further improvements to make it more manageable, but I think the numbers\n> still are quite convincing.\n> \n> The workload is a pgbench readonly, with pgbench -M prepared -c $conns\n> -j $conns -S -n for each client count. 
This is on a machine with 2\n> Intel(R) Xeon(R) Platinum 8168, but virtualized.\n> \n> conns tps master\t\ttps pgxact-split\n> \n> 1 26842.492845 26524.194821\n> 10 246923.158682 249224.782661\n> 50 695956.539704 709833.746374\n> 100 1054727.043139 1903616.306028\n> 200 964795.282957 1949200.338012\n> 300 906029.377539 1927881.231478\n> 400 845696.690912 1911065.369776\n> 500 812295.222497 1926237.255856\n> 600 888030.104213 1903047.236273\n> 700 866896.532490 1886537.202142\n> 800 863407.341506 1883768.592610\n> 900 871386.608563 1874638.012128\n> 1000 887668.277133 1876402.391502\n> 1500 860051.361395 1815103.564241\n> 2000 890900.098657 1775435.271018\n> 3000 874184.980039 1653953.817997\n> 4000 845023.080703 1582582.316043\n> 5000 817100.195728 1512260.802371\n> \n> I think these are pretty nice results.\n\n> One further cool recognition of the fact that GetSnapshotData()'s\n> results can be made to only depend on the set of xids in progress, is\n> that caching the results of GetSnapshotData() is almost trivial at that\n> point: We only need to recompute snapshots when a toplevel transaction\n> commits/aborts.\n> \n> So we can avoid rebuilding snapshots when no commt has happened since it\n> was last built. Which amounts to assigning a current 'commit sequence\n> number' to the snapshot, and checking that against the current number\n> at the time of the next GetSnapshotData() call. Well, turns out there's\n> this \"LSN\" thing we assign to commits (there are some small issues with\n> that though). I've experimented with that, and it considerably further\n> improves the numbers above. 
Both with a higher peak throughput, but more\n> importantly it almost entirely removes the throughput regression from\n> 2000 connections onwards.\n> \n> I'm still working on cleaning that part of the patch up, I'll post it in\n> a bit.\n\nI triggered a longer run on the same hardware, that also includes\nnumbers for the caching patch.\n\nnclients\tmaster\tpgxact-split\tpgxact-split-cache\n1\t29742.805074\t29086.874404\t28120.709885\n2\t58653.005921\t56610.432919\t57343.937924\n3\t116580.383993\t115102.94057\t117512.656103\n4\t150821.023662\t154130.354635\t152053.714824\n5\t186679.754357\t189585.156519\t191095.841847\n6\t219013.756252\t223053.409306\t224480.026711\n7\t256861.673892\t256709.57311\t262427.179555\n8\t291495.547691\t294311.524297\t296245.219028\n9\t332835.641015\t333223.666809\t335460.280487\n10\t367883.74842\t373562.206447\t375682.894433\n15\t561008.204553\t578601.577916\t587542.061911\n20\t748000.911053\t794048.140682\t810964.700467\n25\t904581.660543\t1037279.089703\t1043615.577083\n30\t999231.007768\t1251113.123461\t1288276.726489\n35\t1001274.289847\t1438640.653822\t1438508.432425\n40\t991672.445199\t1518100.079695\t1573310.171868\n45\t994427.395069\t1575758.31948\t1649264.339117\n50\t1017561.371878\t1654776.716703\t1715762.303282\n60\t993943.210188\t1720318.989894\t1789698.632656\n70\t971379.995255\t1729836.303817\t1819477.25356\n80\t966276.137538\t1744019.347399\t1842248.57152\n90\t901175.211649\t1768907.069263\t1847823.970726\n100\t803175.74326\t1784636.397822\t1865795.782943\n125\t664438.039582\t1806275.514545\t1870983.64688\n150\t623562.201749\t1796229.009658\t1876529.428419\n175\t680683.150597\t1809321.487338\t1910694.40987\n200\t668413.988251\t1833457.942035\t1878391.674828\n225\t682786.299485\t1816577.462613\t1884587.77743\n250\t727308.562076\t1825796.324814\t1864692.025853\n275\t676295.999761\t1843098.107926\t1908698.584573\n300\t698831.398432\t1832068.168744\t1892735.290045\n400\t661534.639489\t1859641.983234\t1898606.247281\n500\t645149.
788352\t1851124.475202\t1888589.134422\n600\t740636.323211\t1875152.669115\t1880653.747185\n700\t858645.363292\t1833527.505826\t1874627.969414\n800\t858287.957814\t1841914.668668\t1892106.319085\n900\t882204.933544\t1850998.221969\t1868260.041595\n1000\t910988.551206\t1836336.091652\t1862945.18557\n1500\t917727.92827\t1808822.338465\t1864150.00307\n2000\t982137.053108\t1813070.209217\t1877104.342864\n3000\t1013514.639108\t1753026.733843\t1870416.924248\n4000\t1025476.80688\t1600598.543635\t1859908.314496\n5000\t1019889.160511\t1534501.389169\t1870132.571895\n7500\t968558.864242\t1352137.828569\t1853825.376742\n10000\t887558.112017\t1198321.352461\t1867384.381886\n15000\t687766.593628\t950788.434914\t1710509.977169\n\nThe odd dip for master between 90 and 700 connections looks like it's\nnot directly related to GetSnapshotData(). It looks like it's related to\nthe Linux scheduler and virtualization. When a pgbench thread and\npostgres backend need to swap who gets executed, and both are on\ndifferent CPUs, the wakeup is more expensive when the target CPU is idle\nor isn't going to reschedule soon. In the expensive path an\ninter-processor interrupt (IPI) gets triggered, which requires exiting\nthe VM (really expensive on Azure, apparently). I can\ntrigger similar behaviour for the other runs by renicing, albeit on a\nslightly smaller scale.\n\nI'll try to find a larger system that's not virtualized :/.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 2 Mar 2020 15:24:21 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 3/1/20 3:36 AM, Andres Freund wrote:\n> \n> I think these are pretty nice results.\n\nIndeed they are.\n\nIs the target version PG13 or PG14? 
It seems like a pretty big patch to \ngo in the last commitfest for PG13.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 4 Mar 2020 09:04:08 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nNice performance gains.\n\nOn Sun, 1 Mar 2020 at 21:36, Andres Freund <andres@anarazel.de> wrote:\n> The series currently consists out of:\n>\n> 0001-0005: Fixes and assert improvements that are independent of the patch, but\n> are hit by the new code (see also separate thread).\n>\n> 0006: Move delayChkpt from PGXACT to PGPROC it's rarely checked & frequently modified\n>\n> 0007: WIP: Introduce abstraction layer for \"is tuple invisible\" tests.\n>\n> This is the most crucial piece. Instead of code directly using\n> RecentOldestXmin, there's a new set of functions for testing\n> whether an xid is visible (InvisibleToEveryoneTestXid() et al).\n>\n> Those function use new horizon boundaries computed as part of\n> GetSnapshotData(), and recompute an accurate boundary when the\n> tested xid falls inbetween.\n>\n> There's a bit more infrastructure needed - we need to limit how\n> often an accurate snapshot is computed. Probably to once per\n> snapshot? Or once per transaction?\n>\n>\n> To avoid issues with the lower boundary getting too old and\n> presenting a wraparound danger, I made all the xids be\n> FullTransactionIds. That imo is a good thing?\n>\n>\n> This patch currently breaks old_snapshot_threshold, as I've not\n> yet integrated it with the new functions. I think we can make the\n> old snapshot stuff a lot more precise with this - instead of\n> always triggering conflicts when a RecentGlobalXmin is too old, we\n> can do so only in the cases we actually remove a row. 
I ran out of\n> energy threading that through the heap_page_prune and\n> HeapTupleSatisfiesVacuum.\n>\n>\n> 0008: Move PGXACT->xmin back to PGPROC.\n>\n> Now that GetSnapshotData() doesn't access xmin anymore, we can\n> make it a normal field in PGPROC again.\n>\n>\n> 0009: Improve GetSnapshotData() performance by avoiding indirection for xid access.\n\nI've only looked at 0001-0009 so far. I'm not quite the expert in this\narea, so the review feels a bit superficial. Here's what I noted down\nduring my pass.\n\n0001\n\n1. cant't -> can't\n\n* snapshot cant't change in the midst of a relcache build, so there's no\n\n0002\n\n2. I don't quite understand your change in\nUpdateSubscriptionRelState(). snap seems unused. Drilling down into\nSearchSysCacheCopy2, in SearchCatCacheMiss() the systable_beginscan()\npasses a NULL snapshot.\n\nthe whole patch does this. I guess I don't understand why 0002 does this.\n\n0004\n\n3. This comment seems to have the line order swapped in bt_check_every_level\n\n/*\n* RecentGlobalXmin/B-Tree page deletion.\n* This assertion matches the one in index_getnext_tid(). See note on\n*/\nAssert(SnapshotSet());\n\n0006\n\n4. Did you consider the location of 'delayChkpt' in PGPROC. Couldn't\nyou slot it in somewhere it would fit in existing padding?\n\n0007\n\n5. GinPageIsRecyclable() has no comments at all. I know that\nginvacuum.c is not exactly the modal citizen for function header\ncomments, but likely this patch is no good reason to continue the\ntrend.\n\n6. The comment rearrangement in bt_check_every_level should be in the\n0004 patch.\n\n7. struct InvisibleToEveryoneState could do with some comments\nexplaining the fields.\n\n8. The header comment in GetOldestXminInt needs to be updated. It\ntalks about \"if rel = NULL and there are no transactions\", but there's\nno parameter by that name now. Maybe the whole comment should be moved\ndown to the external implementation of the function\n\n9. 
I get the idea you don't intend to keep the debug message in\nInvisibleToEveryoneTestFullXid(), but if you do, then shouldn't it be\nusing UINT64_FORMAT?\n\n10. teh -> the\n\n* which is based on teh value computed when getting the current snapshot.\n\n11. InvisibleToEveryoneCheckXid and InvisibleToEveryoneCheckFullXid\nseem to have their extern modifiers in the .c file.\n\n0009\n\n12. iare -> are\n\n* These iare separate from the main PGPROC array so that the most heavily\n\n13. is -> are\n\n* accessed data is stored contiguously in memory in as few cache lines as\n\n14. It doesn't seem to quite make sense to talk about \"this proc\" in:\n\n/*\n* TransactionId of top-level transaction currently being executed by this\n* proc, if running and XID is assigned; else InvalidTransactionId.\n*\n* Each PGPROC has a copy of its value in PGPROC.xidCopy.\n*/\nTransactionId *xids;\n\nmaybe \"this\" can be replaced with \"each\"\n\nI will try to continue with the remaining patches soon. However, it\nwould be good to get a more complete patchset. I feel there are quite\na few XXX comments remaining for things you need to think about later,\nand ... 
it's getting late.\n\n\n", "msg_date": "Tue, 17 Mar 2020 23:59:14 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Sun, Mar 1, 2020 at 9:36 PM Andres Freund <andres@anarazel.de> wrote:\n> conns tps master tps pgxact-split\n>\n> 1 26842.492845 26524.194821\n> 10 246923.158682 249224.782661\n> 50 695956.539704 709833.746374\n> 100 1054727.043139 1903616.306028\n> 200 964795.282957 1949200.338012\n> 300 906029.377539 1927881.231478\n> 400 845696.690912 1911065.369776\n> 500 812295.222497 1926237.255856\n> 600 888030.104213 1903047.236273\n> 700 866896.532490 1886537.202142\n> 800 863407.341506 1883768.592610\n> 900 871386.608563 1874638.012128\n> 1000 887668.277133 1876402.391502\n> 1500 860051.361395 1815103.564241\n> 2000 890900.098657 1775435.271018\n> 3000 874184.980039 1653953.817997\n> 4000 845023.080703 1582582.316043\n> 5000 817100.195728 1512260.802371\n\nThis will clearly be really big news for lots of PostgreSQL users.\n\n> One further cool recognition of the fact that GetSnapshotData()'s\n> results can be made to only depend on the set of xids in progress, is\n> that caching the results of GetSnapshotData() is almost trivial at that\n> point: We only need to recompute snapshots when a toplevel transaction\n> commits/aborts.\n>\n> So we can avoid rebuilding snapshots when no commt has happened since it\n> was last built. Which amounts to assigning a current 'commit sequence\n> number' to the snapshot, and checking that against the current number\n> at the time of the next GetSnapshotData() call. Well, turns out there's\n> this \"LSN\" thing we assign to commits (there are some small issues with\n> that though). I've experimented with that, and it considerably further\n> improves the numbers above. 
Both with a higher peak throughput, but more\n> importantly it almost entirely removes the throughput regression from\n> 2000 connections onwards.\n>\n> I'm still working on cleaning that part of the patch up, I'll post it in\n> a bit.\n\nI looked at that part on your public pgxact-split branch. In that\nversion you used \"CSN\" rather than something based on LSNs, which I\nassume avoids complications relating to WAL locking or something like\nthat. We should probably be careful to avoid confusion with the\npre-existing use of the term \"commit sequence number\" (CommitSeqNo,\nCSN) that appears in predicate.c. This also calls to mind the\n2013-2016 work by Ants Aasma and others[1] on CSN-based snapshots,\nwhich is obviously a much more radical change, but really means what\nit says (commits). The CSN in your patch set is used purely as a\nlevel-change for snapshot cache invalidation IIUC, and it advances\nalso for aborts -- so maybe it should be called something like\ncompleted_xact_count, using existing terminology from procarray.c.\n\n+ if (snapshot->csn != 0 && MyProc->xidCopy == InvalidTransactionId &&\n+ UINT64_ACCESS_ONCE(ShmemVariableCache->csn) == snapshot->csn)\n\nWhy is it OK to read ShmemVariableCache->csn without at least a read\nbarrier? I suppose this allows a cached snapshot to be used very soon\nafter a transaction commits and should be visible to you, but ...\nhmmmrkwjherkjhg... I guess it must be really hard to observe any\nanomaly. Let's see... maybe it's possible on a relaxed memory system\nlike POWER or ARM, if you use a shm flag to say \"hey I just committed\na transaction\", and the other guy sees the flag but can't yet see the\nnew CSN, so an SPI query can't see the transaction?\n\nAnother theoretical problem is the non-atomic read of a uint64 on some\n32 bit platforms.\n\n> 0007: WIP: Introduce abstraction layer for \"is tuple invisible\" tests.\n>\n> This is the most crucial piece. 
Instead of code directly using\n> RecentOldestXmin, there's a new set of functions for testing\n> whether an xid is visible (InvisibleToEveryoneTestXid() et al).\n>\n> Those function use new horizon boundaries computed as part of\n> GetSnapshotData(), and recompute an accurate boundary when the\n> tested xid falls inbetween.\n>\n> There's a bit more infrastructure needed - we need to limit how\n> often an accurate snapshot is computed. Probably to once per\n> snapshot? Or once per transaction?\n>\n>\n> To avoid issues with the lower boundary getting too old and\n> presenting a wraparound danger, I made all the xids be\n> FullTransactionIds. That imo is a good thing?\n\n+1, as long as we don't just move the wraparound danger to the places\nwhere we convert xids to fxids!\n\n+/*\n+ * Be very careful about when to use this function. It can only safely be used\n+ * when there is a guarantee that, at the time of the call, xid is within 2\n+ * billion xids of rel. That e.g. can be guaranteed if the the caller assures\n+ * a snapshot is held by the backend, and xid is from a table (where\n+ * vacuum/freezing ensures the xid has to be within that range).\n+ */\n+static inline FullTransactionId\n+FullXidViaRelative(FullTransactionId rel, TransactionId xid)\n+{\n+ uint32 rel_epoch = EpochFromFullTransactionId(rel);\n+ TransactionId rel_xid = XidFromFullTransactionId(rel);\n+ uint32 epoch;\n+\n+ /*\n+ * TODO: A function to easily write an assertion ensuring that xid is\n+ * between [oldestXid, nextFullXid) woudl be useful here, and in plenty\n+ * other places.\n+ */\n+\n+ if (xid > rel_xid)\n+ epoch = rel_epoch - 1;\n+ else\n+ epoch = rel_epoch;\n+\n+ return FullTransactionIdFromEpochAndXid(epoch, xid);\n+}\n\nI hate it, but I don't have a better concrete suggestion right now.\nWhatever I come up with amounts to the same thing on some level,\nthough I feel like it might be better to use an infrequently updated\noldestFxid as the lower bound in a conversion. 
An upper bound would\nalso seem better, though requires much trickier interlocking. What\nyou have means \"it's near here!\"... isn't that too prone to bugs that\nare hidden because of the ambient fuzziness? A lower bound seems like\nit could move extremely infrequently and therefore it'd be OK for it\nto be protected by both proc array and xid gen locks (ie it'd be\nrecomputed when nextFxid needs to move too far ahead of it, so every\n~2 billion xacts). I haven't looked at this long enough to have a\nstrong opinion, though.\n\nOn a more constructive note:\n\nGetOldestXminInt() does:\n\n LWLockAcquire(ProcArrayLock, LW_SHARED);\n\n+ nextfxid = ShmemVariableCache->nextFullXid;\n+\n...\n LWLockRelease(ProcArrayLock);\n...\n+ return FullXidViaRelative(nextfxid, result);\n\nBut nextFullXid is protected by XidGenLock; maybe that's OK from a\ndata freshness point of view (I'm not sure), but from an atomicity\npoint of view, you can't do that can you?\n\n> This patch currently breaks old_snapshot_threshold, as I've not\n> yet integrated it with the new functions. I think we can make the\n> old snapshot stuff a lot more precise with this - instead of\n> always triggering conflicts when a RecentGlobalXmin is too old, we\n> can do so only in the cases we actually remove a row. I ran out of\n> energy threading that through the heap_page_prune and\n> HeapTupleSatisfiesVacuum.\n\nCCing Kevin as an FYI.\n\n> 0008: Move PGXACT->xmin back to PGPROC.\n>\n> Now that GetSnapshotData() doesn't access xmin anymore, we can\n> make it a normal field in PGPROC again.\n>\n>\n> 0009: Improve GetSnapshotData() performance by avoiding indirection for xid access.\n> 0010: Improve GetSnapshotData() performance by avoiding indirection for vacuumFlags\n> 0011: Improve GetSnapshotData() performance by avoiding indirection for nsubxids access\n>\n> These successively move the remaining PGXACT fields into separate\n> arrays in ProcGlobal, and adjust GetSnapshotData() to take\n> advantage. 
Those arrays are dense in the sense that they only\n> contain data for PGPROCs that are in use (i.e. when disconnecting,\n> the array is moved around)..\n>\n> I think xid, and vacuumFlags are pretty reasonable. But need\n> cleanup, obviously:\n> - The biggest cleanup would be to add a few helper functions for\n> accessing the values, rather than open coding that.\n> - Perhaps we should call the ProcGlobal ones 'cached', and name\n> the PGPROC ones as the one true source of truth?\n>\n> For subxid I thought it'd be nice to have nxids and overflow be\n> only one number. But that probably was the wrong call? Now\n> TransactionIdInProgress() cannot look at at the subxids that did\n> fit in PGPROC.subxid. I'm not sure that's important, given the\n> likelihood of misses? But I'd probably still have the subxid\n> array be one of {uint8 nsubxids; bool overflowed} instead.\n>\n>\n> To keep the arrays dense they copy the logic for pgprocnos. Which\n> means that ProcArrayAdd/Remove move things around. Unfortunately\n> that requires holding both ProcArrayLock and XidGenLock currently\n> (to avoid GetNewTransactionId() having to hold ProcArrayLock). But\n> that doesn't seem too bad?\n\nIn the places where you now acquire both, I guess you also need to\nrelease both in the error path?\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BCSw_tEpJ%3Dmd1zgxPkjH6CWDnTDft4gBi%3D%2BP9SnoC%2BWy3pKdA%40mail.gmail.com\n\n\n", "msg_date": "Fri, 20 Mar 2020 18:23:03 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nThanks for looking!\n\nOn 2020-03-20 18:23:03 +1300, Thomas Munro wrote:\n> On Sun, Mar 1, 2020 at 9:36 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'm still working on cleaning that part of the patch up, I'll post it in\n> > a bit.\n> \n> I looked at that part on your public pgxact-split branch. 
In that\n> version you used "CSN" rather than something based on LSNs, which I\n> assume avoids complications relating to WAL locking or something like\n> that.\n\nRight, I first tried to use LSNs, but after further tinkering found that\nit's too hard to address the difference between visibility order and LSN\norder. I don't think there's an easy way to address the difference.\n\n\n> We should probably be careful to avoid confusion with the\n> pre-existing use of the term "commit sequence number" (CommitSeqNo,\n> CSN) that appears in predicate.c.\n\nI looked at that after you mentioned it on IM. But I find it hard to\ngrok what it's precisely defined as. There are basically no comments\nexplaining what it's really supposed to do, and I find the relevant code\nfar from easy to grok :(.\n\n\n> This also calls to mind the 2013-2016 work by Ants Aasma and others[1]\n> on CSN-based snapshots, which is obviously a much more radical change,\n> but really means what it says (commits).\n\nWell, I think you could actually build some form of more dense snapshots\non top of "my" CSN, with a bit of effort (and a lot of handwaving). I don't\nthink they're that different concepts.\n\n\n> The CSN in your patch set is used purely as a level-change for\n> snapshot cache invalidation IIUC, and it advances also for aborts --\n> so maybe it should be called something like completed_xact_count,\n> using existing terminology from procarray.c.\n\nI expect it to be used outside of snapshots too, in the future, FWIW.\n\ncompleted_xact_count sounds good to me.\n\n\n> + if (snapshot->csn != 0 && MyProc->xidCopy == InvalidTransactionId &&\n> + UINT64_ACCESS_ONCE(ShmemVariableCache->csn) == snapshot->csn)\n> \n> Why is it OK to read ShmemVariableCache->csn without at least a read\n> barrier? I suppose this allows a cached snapshot to be used very soon\n> after a transaction commits and should be visible to you, but ...\n> hmmmrkwjherkjhg... 
I guess it must be really hard to observe any\n> anomaly. Let's see... maybe it's possible on a relaxed memory system\n> like POWER or ARM, if you use a shm flag to say \"hey I just committed\n> a transaction\", and the other guy sees the flag but can't yet see the\n> new CSN, so an SPI query can't see the transaction?\n\nYea, it does need more thought / comments. I can't really see an actual\ncorrectness violation though. As far as I can tell you'd never be able\nto get an \"older\" ShmemVariableCache->csn than one since *after* the\nlast lock acquired/released by the current backend - which then also\nmeans a different \"ordering\" would have been possible allowing the\ncurrent backend to take the snapshot earlier.\n\n\n> Another theoretical problem is the non-atomic read of a uint64 on some\n> 32 bit platforms.\n\nYea, it probably should be a pg_atomic_uint64 to address that. I don't\nthink it really would cause problems, because I think it'd always end up\ncausing an unnecessary snapshot build. But there's no need to go there.\n\n\n> > 0007: WIP: Introduce abstraction layer for \"is tuple invisible\" tests.\n> >\n> > This is the most crucial piece. Instead of code directly using\n> > RecentOldestXmin, there's a new set of functions for testing\n> > whether an xid is visible (InvisibleToEveryoneTestXid() et al).\n> >\n> > Those function use new horizon boundaries computed as part of\n> > GetSnapshotData(), and recompute an accurate boundary when the\n> > tested xid falls inbetween.\n> >\n> > There's a bit more infrastructure needed - we need to limit how\n> > often an accurate snapshot is computed. Probably to once per\n> > snapshot? Or once per transaction?\n> >\n> >\n> > To avoid issues with the lower boundary getting too old and\n> > presenting a wraparound danger, I made all the xids be\n> > FullTransactionIds. 
That imo is a good thing?\n> \n> +1, as long as we don't just move the wraparound danger to the places\n> where we convert xids to fxids!\n> \n> +/*\n> + * Be very careful about when to use this function. It can only safely be used\n> + * when there is a guarantee that, at the time of the call, xid is within 2\n> + * billion xids of rel. That e.g. can be guaranteed if the the caller assures\n> + * a snapshot is held by the backend, and xid is from a table (where\n> + * vacuum/freezing ensures the xid has to be within that range).\n> + */\n> +static inline FullTransactionId\n> +FullXidViaRelative(FullTransactionId rel, TransactionId xid)\n> +{\n> + uint32 rel_epoch = EpochFromFullTransactionId(rel);\n> + TransactionId rel_xid = XidFromFullTransactionId(rel);\n> + uint32 epoch;\n> +\n> + /*\n> + * TODO: A function to easily write an assertion ensuring that xid is\n> + * between [oldestXid, nextFullXid) woudl be useful here, and in plenty\n> + * other places.\n> + */\n> +\n> + if (xid > rel_xid)\n> + epoch = rel_epoch - 1;\n> + else\n> + epoch = rel_epoch;\n> +\n> + return FullTransactionIdFromEpochAndXid(epoch, xid);\n> +}\n> \n> I hate it, but I don't have a better concrete suggestion right now.\n> Whatever I come up with amounts to the same thing on some level,\n> though I feel like it might be better to used an infrequently updated\n> oldestFxid as the lower bound in a conversion.\n\nI am not sure it's as clearly correct to use oldestFxid in as many\ncases. 
Normally PGPROC->xmin (PGXACT->xmin currently) should prevent the\n"system wide" xid horizon from moving too far relative to that, but I think\nthere are more plausible problems with the "oldest" xid horizon moving\nconcurrently with a backend inspecting values.\n\nIt shouldn't be a problem here since the values are taken under a lock\npreventing both from being moved I think, and since we're only comparing\nthose two values without taking anything else into account, the "global"\nhorizon changing concurrently wouldn't matter.\n\nBut it seems easier to understand the correctness when comparing to\nnextXid?\n\nWhat's the benefit of looking at an "infrequently updated" value\ninstead? I guess you can argue that it'd be more likely to be in cache,\nbut since all of this lives in a single cacheline...\n\n\n> An upper bound would also seem better, though requires much trickier\n> interlocking. What you have means "it's near here!"... isn't that too\n> prone to bugs that are hidden because of the ambient fuzziness?\n\nI can't follow the last sentence. Could you expand?\n\n\n> On a more constructive note:\n> \n> GetOldestXminInt() does:\n> \n> LWLockAcquire(ProcArrayLock, LW_SHARED);\n> \n> + nextfxid = ShmemVariableCache->nextFullXid;\n> +\n> ...\n> LWLockRelease(ProcArrayLock);\n> ...\n> + return FullXidViaRelative(nextfxid, result);\n> \n> But nextFullXid is protected by XidGenLock; maybe that's OK from a\n> data freshness point of view (I'm not sure), but from an atomicity\n> point of view, you can't do that can you?\n\nHm. Yea, I think it's not safe against torn 64bit reads, you're right.\n\n\n> > This patch currently breaks old_snapshot_threshold, as I've not\n> > yet integrated it with the new functions. I think we can make the\n> > old snapshot stuff a lot more precise with this - instead of\n> > always triggering conflicts when a RecentGlobalXmin is too old, we\n> > can do so only in the cases we actually remove a row. 
I ran out of\n> > energy threading that through the heap_page_prune and\n> > HeapTupleSatisfiesVacuum.\n> \n> CCing Kevin as an FYI.\n\nIf anybody has an opinion on this sketch I'd be interested. I've started\nto implement it, so ...\n\n\n> > 0008: Move PGXACT->xmin back to PGPROC.\n> >\n> > Now that GetSnapshotData() doesn't access xmin anymore, we can\n> > make it a normal field in PGPROC again.\n> >\n> >\n> > 0009: Improve GetSnapshotData() performance by avoiding indirection for xid access.\n> > 0010: Improve GetSnapshotData() performance by avoiding indirection for vacuumFlags\n> > 0011: Improve GetSnapshotData() performance by avoiding indirection for nsubxids access\n> >\n> > These successively move the remaining PGXACT fields into separate\n> > arrays in ProcGlobal, and adjust GetSnapshotData() to take\n> > advantage. Those arrays are dense in the sense that they only\n> > contain data for PGPROCs that are in use (i.e. when disconnecting,\n> > the array is moved around)..\n> >\n> > I think xid, and vacuumFlags are pretty reasonable. But need\n> > cleanup, obviously:\n> > - The biggest cleanup would be to add a few helper functions for\n> > accessing the values, rather than open coding that.\n> > - Perhaps we should call the ProcGlobal ones 'cached', and name\n> > the PGPROC ones as the one true source of truth?\n> >\n> > For subxid I thought it'd be nice to have nxids and overflow be\n> > only one number. But that probably was the wrong call? Now\n> > TransactionIdInProgress() cannot look at at the subxids that did\n> > fit in PGPROC.subxid. I'm not sure that's important, given the\n> > likelihood of misses? But I'd probably still have the subxid\n> > array be one of {uint8 nsubxids; bool overflowed} instead.\n> >\n> >\n> > To keep the arrays dense they copy the logic for pgprocnos. Which\n> > means that ProcArrayAdd/Remove move things around. 
Unfortunately\n> > that requires holding both ProcArrayLock and XidGenLock currently\n> > (to avoid GetNewTransactionId() having to hold ProcArrayLock). But\n> > that doesn't seem too bad?\n> \n> In the places where you now acquire both, I guess you also need to\n> release both in the error path?\n\nHm. I guess you mean:\n\n\tif (arrayP->numProcs >= arrayP->maxProcs)\n\t{\n\t\t/*\n\t\t * Oops, no room. (This really shouldn't happen, since there is a\n\t\t * fixed supply of PGPROC structs too, and so we should have failed\n\t\t * earlier.)\n\t\t */\n\t\tLWLockRelease(ProcArrayLock);\n\t\tereport(FATAL,\n\t\t\t\t(errcode(ERRCODE_TOO_MANY_CONNECTIONS),\n\t\t\t\t errmsg(\"sorry, too many clients already\")));\n\t}\n\nI think we should just remove the LWLockRelease? At this point we\nalready have set up ProcKill(), which would release all lwlocks after\nthe error was thrown?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 19 Mar 2020 23:00:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-03-17 23:59:14 +1300, David Rowley wrote:\n> Nice performance gains.\n\nThanks.\n\n\n> On Sun, 1 Mar 2020 at 21:36, Andres Freund <andres@anarazel.de> wrote:\n> 2. I don't quite understand your change in\n> UpdateSubscriptionRelState(). snap seems unused. Drilling down into\n> SearchSysCacheCopy2, in SearchCatCacheMiss() the systable_beginscan()\n> passes a NULL snapshot.\n> \n> the whole patch does this. I guess I don't understand why 0002 does this.\n\nSee the thread at https://postgr.es/m/20200229052459.wzhqnbhrriezg4v2%40alap3.anarazel.de\n\nBasically, the way catalog snapshots are handled right now, it's not\ncorrect to much without a snapshot held. Any concurrent invalidation can\ncause the catalog snapshot to be released, which can reset the backend's\nxmin. 
Which in turn can allow for pruning etc to remove required data.\n\nThis is part of this series only because I felt I needed to add stronger\nasserts to be confident in what's happening. And they started to trigger\nall over :( - and weren't related to the patchset :(.\n\n\n> 4. Did you consider the location of 'delayChkpt' in PGPROC. Couldn't\n> you slot it in somewhere it would fit in existing padding?\n> \n> 0007\n\nHm, maybe. I'm not sure what the best thing to do here is - there's some\narguments to be made that we should keep the fields moved from PGXACT\ntogether on their own cacheline. Compared to some of the other stuff in\nPGPROC they're still accessed from other backends fairly frequently.\n\n\n> 5. GinPageIsRecyclable() has no comments at all. I know that\n> ginvacuum.c is not exactly the modal citizen for function header\n> comments, but likely this patch is no good reason to continue the\n> trend.\n\nWell, I basically just moved the code from the macro of the same\nname... I'll add something.\n\n\n> 9. I get the idea you don't intend to keep the debug message in\n> InvisibleToEveryoneTestFullXid(), but if you do, then shouldn't it be\n> using UINT64_FORMAT?\n\nYea, I don't intend to keep them - they're way too verbose, even for\nDEBUG*. Note that there's some advantage in the long long cast approach\n- it's easier to deal with for translations IIRC.\n\n> 13. is -> are\n> \n> * accessed data is stored contiguously in memory in as few cache lines as\n\nOh? 'data are stored' sounds wrong to me, somehow.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 28 Mar 2020 17:49:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Sun, Mar 29, 2020 at 1:49 PM Andres Freund <andres@anarazel.de> wrote:\n> > 13. is -> are\n> >\n> > * accessed data is stored contiguously in memory in as few cache lines as\n>\n> Oh? 
'data are stored' sounds wrong to me, somehow.\n\nIn computer contexts it seems pretty well established that we treat\n\"data\" as an uncountable noun (like \"air\"), so I think \"is\" is right\nhere. In maths or science contexts it's usually treated as a plural\nfollowing Latin, which admittedly sounds cleverer, but it also has a\nslightly different meaning, not bits and bytes but something more like\nsamples or (wince) datums.\n\n\n", "msg_date": "Sun, 29 Mar 2020 14:15:09 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Sun, Mar 1, 2020 at 12:36 AM Andres Freund <andres@anarazel.de> wrote:\n> The workload is a pgbench readonly, with pgbench -M prepared -c $conns\n> -j $conns -S -n for each client count. This is on a machine with 2\n> Intel(R) Xeon(R) Platinum 8168, but virtualized.\n>\n> conns tps master tps pgxact-split\n>\n> 1 26842.492845 26524.194821\n> 10 246923.158682 249224.782661\n> 50 695956.539704 709833.746374\n> 100 1054727.043139 1903616.306028\n> 200 964795.282957 1949200.338012\n> 300 906029.377539 1927881.231478\n> 400 845696.690912 1911065.369776\n> 500 812295.222497 1926237.255856\n> 600 888030.104213 1903047.236273\n> 700 866896.532490 1886537.202142\n> 800 863407.341506 1883768.592610\n> 900 871386.608563 1874638.012128\n> 1000 887668.277133 1876402.391502\n> 1500 860051.361395 1815103.564241\n> 2000 890900.098657 1775435.271018\n> 3000 874184.980039 1653953.817997\n> 4000 845023.080703 1582582.316043\n> 5000 817100.195728 1512260.802371\n>\n> I think these are pretty nice results.\n\nThis scalability improvement is clearly very significant. There is\nlittle question that this is a strategically important enhancement for\nthe Postgres project in general. 
I hope that you will ultimately be\nable to commit the patchset before feature freeze.\n\nI have heard quite a few complaints about the scalability of snapshot\nacquisition in Postgres. Generally from very large users that are not\nwell represented on the mailing lists, for a variety of reasons. The\nGetSnapshotData() bottleneck is a *huge* problem for us. (As problems\nfor Postgres users go, I would probably rank it second behind issues\nwith VACUUM.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 28 Mar 2020 18:39:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Sat, Mar 28, 2020 at 06:39:32PM -0700, Peter Geoghegan wrote:\n> On Sun, Mar 1, 2020 at 12:36 AM Andres Freund <andres@anarazel.de> wrote:\n> > The workload is a pgbench readonly, with pgbench -M prepared -c $conns\n> > -j $conns -S -n for each client count. This is on a machine with 2\n> > Intel(R) Xeon(R) Platinum 8168, but virtualized.\n> >\n> > conns tps master tps pgxact-split\n> >\n> > 1 26842.492845 26524.194821\n> > 10 246923.158682 249224.782661\n> > 50 695956.539704 709833.746374\n> > 100 1054727.043139 1903616.306028\n> > 200 964795.282957 1949200.338012\n> > 300 906029.377539 1927881.231478\n> > 400 845696.690912 1911065.369776\n> > 500 812295.222497 1926237.255856\n> > 600 888030.104213 1903047.236273\n> > 700 866896.532490 1886537.202142\n> > 800 863407.341506 1883768.592610\n> > 900 871386.608563 1874638.012128\n> > 1000 887668.277133 1876402.391502\n> > 1500 860051.361395 1815103.564241\n> > 2000 890900.098657 1775435.271018\n> > 3000 874184.980039 1653953.817997\n> > 4000 845023.080703 1582582.316043\n> > 5000 817100.195728 1512260.802371\n> >\n> > I think these are pretty nice results.\n> \n> This scalability improvement is clearly very significant. 
There is\n> little question that this is a strategically important enhancement for\n> the Postgres project in general. I hope that you will ultimately be\n> able to commit the patchset before feature freeze.\n\n+1\n\n> I have heard quite a few complaints about the scalability of snapshot\n> acquisition in Postgres. Generally from very large users that are not\n> well represented on the mailing lists, for a variety of reasons. The\n> GetSnapshotData() bottleneck is a *huge* problem for us. (As problems\n> for Postgres users go, I would probably rank it second behind issues\n> with VACUUM.)\n\n+1\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Sat, 28 Mar 2020 21:44:02 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-03-28 18:39:32 -0700, Peter Geoghegan wrote:\n> I have heard quite a few complaints about the scalability of snapshot\n> acquisition in Postgres. Generally from very large users that are not\n> well represented on the mailing lists, for a variety of reasons. The\n> GetSnapshotData() bottleneck is a *huge* problem for us. (As problems\n> for Postgres users go, I would probably rank it second behind issues\n> with VACUUM.)\n\nYea, I see it similarly. For busy databases, my experience is that\nvacuum is the big problem for write heavy workloads (or the write\nportion), and snapshot scalability the big problem for read heavy oltp\nworkloads.\n\n\n> This scalability improvement is clearly very significant. There is\n> little question that this is a strategically important enhancement for\n> the Postgres project in general. 
I hope that you will ultimately be\n> able to commit the patchset before feature freeze.\n\nI've done a fair bit of cleanup, but I'm still fighting with how to\nimplement old_snapshot_threshold in a good way. It's not hard to get it\nback to kind of working, but it requires some changes that go in the\nwrong direction.\n\nThe problem basically is that the current old_snapshot_threshold\nimplementation just reduces OldestXmin to whatever is indicated by\nold_snapshot_threshold, even if not necessary for pruning to do the\nspecific cleanup that's about to be done. If OldestXmin < threshold,\nit'll set shared state that fails all older accesses. But that doesn't\nreally work well with the approach in the patch of using a lower/upper\nboundary for potentially valid xmin horizons.\n\nI think the right approach would be to split\nTransactionIdLimitedForOldSnapshots() into separate parts. One that\ndetermines the most aggressive horizon that old_snapshot_threshold\nallows, and a separate part that increases the threshold after which\naccesses need to error out\n(i.e. SetOldSnapshotThresholdTimestamp()). Then we can only call\nSetOldSnapshotThresholdTimestamp() for exactly the xids that are\nremoved, not for the most aggressive interpretation.\n\nUnfortunately I think that basically requires changing\nHeapTupleSatisfiesVacuum's signature, to take a more complex parameter\nthan OldestXmin (to take InvisibleToEveryoneState *), which quickly\nincreases the size of the patch.\n\n\nI'm currently doing that and seeing how the result makes me feel about\nthe patch.\n\nAlternatively we also can just be less efficient and call\nGetOldestXmin() more aggressively when old_snapshot_threshold is\nset. 
That'd be easier to implement - but seems like an ugly gotcha.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 28 Mar 2020 20:35:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Sun, Mar 29, 2020 at 4:40 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sun, Mar 1, 2020 at 12:36 AM Andres Freund <andres@anarazel.de> wrote:\n> > The workload is a pgbench readonly, with pgbench -M prepared -c $conns\n> > -j $conns -S -n for each client count. This is on a machine with 2\n> > Intel(R) Xeon(R) Platinum 8168, but virtualized.\n> >\n> > conns tps master tps pgxact-split\n> >\n> > 1 26842.492845 26524.194821\n> > 10 246923.158682 249224.782661\n> > 50 695956.539704 709833.746374\n> > 100 1054727.043139 1903616.306028\n> > 200 964795.282957 1949200.338012\n> > 300 906029.377539 1927881.231478\n> > 400 845696.690912 1911065.369776\n> > 500 812295.222497 1926237.255856\n> > 600 888030.104213 1903047.236273\n> > 700 866896.532490 1886537.202142\n> > 800 863407.341506 1883768.592610\n> > 900 871386.608563 1874638.012128\n> > 1000 887668.277133 1876402.391502\n> > 1500 860051.361395 1815103.564241\n> > 2000 890900.098657 1775435.271018\n> > 3000 874184.980039 1653953.817997\n> > 4000 845023.080703 1582582.316043\n> > 5000 817100.195728 1512260.802371\n> >\n> > I think these are pretty nice results.\n>\n> This scalability improvement is clearly very significant. There is\n> little question that this is a strategically important enhancement for\n> the Postgres project in general. 
I hope that you will ultimately be\n> able to commit the patchset before feature freeze.\n\n+1, these are really very cool results.\n\nAlthough this patchset is expected to be clearly a big win on majority\nof workloads, I think we still need to investigate different workloads\non different hardware to ensure there is no regression.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sun, 29 Mar 2020 21:24:32 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi, \n\nOn March 29, 2020 11:24:32 AM PDT, Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:\n> clearly a big win on majority\n>of workloads, I think we still need to investigate different workloads\n>on different hardware to ensure there is no regression.\n\nDefinitely. Which workloads are you thinking of? I can think of those affected facets: snapshot speed, commit speed with writes, connection establishment, prepared transaction speed. All in the small and large connection count cases.\n\nI did measurements on all of those but prepared xacts, fwiw. That definitely needs to be measured, due to the locking changes around procarrayadd/remove.\n\nI don't think regressions besides perhaps 2pc are likely - there's nothing really getting more expensive but procarray add/remove.\n\n\nAndres\n\nRegards,\n\nAndres\n\n\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Sun, 29 Mar 2020 11:50:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Sun, Mar 29, 2020 at 11:50:10AM -0700, Andres Freund wrote:\n>Hi,\n>\n>On March 29, 2020 11:24:32 AM PDT, Alexander Korotkov\n><a.korotkov@postgrespro.ru> wrote:\n>> clearly a big win on majority\n>>of workloads, I think we still need to investigate different workloads\n>>on different hardware to ensure there is no regression.\n>\n>Definitely. Which workloads are you thinking of? I can think of those\n>affected facets: snapshot speed, commit speed with writes, connection\n>establishment, prepared transaction speed. All in the small and large\n>connection count cases.\n>\n>I did measurements on all of those but prepared xacts, fwiw. That\n>definitely needs to be measured, due to the locking changes around\n>procarrayadd/remove.\n>\n>I don't think regressions besides perhaps 2pc are likely - there's\n>nothing really getting more expensive but procarray add/remove.\n>\n\nIf I get some instructions on what tests to do, I can run a bunch of tests\non my machines (not the largest boxes, but at least something). 
I don't\nhave the bandwidth to come up with tests on my own, at the moment.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 30 Mar 2020 00:52:43 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Sun, Mar 29, 2020 at 9:50 PM Andres Freund <andres@anarazel.de> wrote:\n> On March 29, 2020 11:24:32 AM PDT, Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:\n> > clearly a big win on majority\n> >of workloads, I think we still need to investigate different workloads\n> >on different hardware to ensure there is no regression.\n>\n> Definitely. Which workloads are you thinking of? I can think of those affected facets: snapshot speed, commit speed with writes, connection establishment, prepared transaction speed. All in the small and large connection count cases.\n\nFollowing pgbench scripts comes first to my mind:\n1) SELECT txid_current(); (artificial but good for checking corner case)\n2) Single insert statement (as example of very short transaction)\n3) Plain pgbench read-write (you already did it for sure)\n4) pgbench read-write script with increased amount of SELECTs. 
Repeat\nselect from pgbench_accounts say 10 times with different aids.\n5) 10% pgbench read-write, 90% of pgbench read-only\n\n> I did measurements on all of those but prepared xacts, fwiw\n\nGreat, it would be nice to see the results in the thread.\n\n> That definitely needs to be measured, due to the locking changes around procarrayadd/remove.\n>\n> I don't think regressions besides perhaps 2pc are likely - there's nothing really getting more expensive but procarray add/remove.\n\nI agree that ProcArrayAdd()/Remove() should be the first subject of\ninvestigation, but other cases should be checked as well IMHO.\nRegarding 2pc, the following scenarios come to my mind:\n1) pgbench read-write modified so that every transaction is prepared\nfirst, then commit prepared.\n2) 10% of 2pc pgbench read-write, 90% normal pgbench read-write\n3) 10% of 2pc pgbench read-write, 90% normal pgbench read-only\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 30 Mar 2020 17:04:00 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nI'm still fighting with snapshot_too_old. The feature is just badly\nundertested, underdocumented, and there's lots of other oddities. 
I've\nnow spent about as much time on that feature than on the whole rest of\nthe patchset.\n\nAs an example for under-documented, here's a definitely non-trivial\nblock of code without a single comment explaining what it's doing.\n\n\t\t\t\tif (oldSnapshotControl->count_used > 0 &&\n\t\t\t\t\tts >= oldSnapshotControl->head_timestamp)\n\t\t\t\t{\n\t\t\t\t\tint\t\t\toffset;\n\n\t\t\t\t\toffset = ((ts - oldSnapshotControl->head_timestamp)\n\t\t\t\t\t\t\t / USECS_PER_MINUTE);\n\t\t\t\t\tif (offset > oldSnapshotControl->count_used - 1)\n\t\t\t\t\t\toffset = oldSnapshotControl->count_used - 1;\n\t\t\t\t\toffset = (oldSnapshotControl->head_offset + offset)\n\t\t\t\t\t\t% OLD_SNAPSHOT_TIME_MAP_ENTRIES;\n\t\t\t\t\txlimit = oldSnapshotControl->xid_by_minute[offset];\n\n\t\t\t\t\tif (NormalTransactionIdFollows(xlimit, recentXmin))\n\t\t\t\t\t\tSetOldSnapshotThresholdTimestamp(ts, xlimit);\n\t\t\t\t}\n\n\t\t\t\tLWLockRelease(OldSnapshotTimeMapLock);\n\nAlso, SetOldSnapshotThresholdTimestamp() acquires a separate spinlock -\nnot great to call that with the lwlock held.\n\n\nThen there's this comment:\n\n\t\t/*\n\t\t * Failsafe protection against vacuuming work of active transaction.\n\t\t *\n\t\t * This is not an assertion because we avoid the spinlock for\n\t\t * performance, leaving open the possibility that xlimit could advance\n\t\t * and be more current; but it seems prudent to apply this limit. It\n\t\t * might make pruning a tiny bit less aggressive than it could be, but\n\t\t * protects against data loss bugs.\n\t\t */\n\t\tif (TransactionIdIsNormal(latest_xmin)\n\t\t\t&& TransactionIdPrecedes(latest_xmin, xlimit))\n\t\t\txlimit = latest_xmin;\n\n\t\tif (NormalTransactionIdFollows(xlimit, recentXmin))\n\t\t\treturn xlimit;\n\nSo this is not using lock, so the values aren't accurate, but it avoids\ndata loss bugs? 
I also don't know which spinlock is avoided on the path\nhere as mentioned - the acquisition is unconditional.\n\nBut more importantly - if this is about avoiding data loss bugs, how on\nearth is it ok that we don't go through these checks in the\nold_snapshot_threshold == 0 path?\n\n\t\t/*\n\t\t * Zero threshold always overrides to latest xmin, if valid. Without\n\t\t * some heuristic it will find its own snapshot too old on, for\n\t\t * example, a simple UPDATE -- which would make it useless for most\n\t\t * testing, but there is no principled way to ensure that it doesn't\n\t\t * fail in this way. Use a five-second delay to try to get useful\n\t\t * testing behavior, but this may need adjustment.\n\t\t */\n\t\tif (old_snapshot_threshold == 0)\n\t\t{\n\t\t\tif (TransactionIdPrecedes(latest_xmin, MyProc->xmin)\n\t\t\t\t&& TransactionIdFollows(latest_xmin, xlimit))\n\t\t\t\txlimit = latest_xmin;\n\n\t\t\tts -= 5 * USECS_PER_SEC;\n\t\t\tSetOldSnapshotThresholdTimestamp(ts, xlimit);\n\n\t\t\treturn xlimit;\n\t\t}\n\n\nThis feature looks like it was put together by applying force until\nsomething gave, and then stopping just there.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 31 Mar 2020 13:04:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-03-31 13:04:38 -0700, Andres Freund wrote:\n> I'm still fighting with snapshot_too_old. The feature is just badly\n> undertested, underdocumented, and there's lots of other oddities. I've\n> now spent about as much time on that feature than on the whole rest of\n> the patchset.\n\nTo expand on this being under-tested: The whole time mapping\ninfrastructure is not tested, because all of that is bypassed when\nold_snapshot_threshold = 0. And old_snapshot_threshold = 0 basically\nonly exists for testing. 
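For concreteness, here is a stand-alone, commented sketch of what the uncommented xid_by_minute lookup quoted upthread appears to compute: the time map seems to be a ring buffer with one xid entry per minute, with head_timestamp being the time of the bucket at head_offset. All names, types, and the entry count below are assumptions made for illustration, not the server's actual definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed stand-ins for OLD_SNAPSHOT_TIME_MAP_ENTRIES and USECS_PER_MINUTE. */
#define SKETCH_TIME_MAP_ENTRIES 10
#define SKETCH_USECS_PER_MINUTE ((int64_t) 60 * 1000 * 1000)

typedef struct SketchTimeMap
{
	int64_t		head_timestamp;	/* timestamp of the bucket at head_offset */
	int			head_offset;	/* ring position of the oldest bucket */
	int			count_used;		/* number of populated buckets */
	uint32_t	xid_by_minute[SKETCH_TIME_MAP_ENTRIES];
} SketchTimeMap;

/*
 * Return the xid recorded for the minute containing 'ts', clamping to the
 * newest populated bucket when 'ts' lies past the end of the map.
 */
static uint32_t
sketch_xid_for_timestamp(const SketchTimeMap *map, int64_t ts)
{
	int			offset;

	assert(map->count_used > 0 && ts >= map->head_timestamp);

	/* whole minutes elapsed since the oldest bucket ... */
	offset = (int) ((ts - map->head_timestamp) / SKETCH_USECS_PER_MINUTE);
	/* ... clamped to the newest bucket actually populated ... */
	if (offset > map->count_used - 1)
		offset = map->count_used - 1;
	/* ... and turned into a ring position relative to the head. */
	offset = (map->head_offset + offset) % SKETCH_TIME_MAP_ENTRIES;
	return map->xid_by_minute[offset];
}
```

If that reading is right, the missing comment boils down to: map the timestamp to a minute bucket, clamp to the newest bucket the map actually holds, and index the ring relative to its head.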
The largest part of the complexity of this\nfeature are TransactionIdLimitedForOldSnapshots() and\nMaintainOldSnapshotTimeMapping() - and none of the complexity is tested\ndue to the tests running with old_snapshot_threshold = 0.\n\nSo we have test only infrastructure that doesn't allow to actually test\nthe feature.\n\n\nAnd the tests that we do have don't have a single comment explaining\nwhat the expected results are. Except for the newer\nsto_using_hash_index.spec, they just run all permutations. I don't know\nhow those tests actually help, since it's not clear why any of the\nresults are the way they are. And which just are the results of\nbugs. Ore not affected by s_t_o.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 31 Mar 2020 14:55:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Sun, 1 Mar 2020 at 21:47, Andres Freund <andres@anarazel.de> wrote:\n> On 2020-03-01 00:36:01 -0800, Andres Freund wrote:\n> > conns tps master tps pgxact-split\n> >\n> > 1 26842.492845 26524.194821\n> > 10 246923.158682 249224.782661\n> > 50 695956.539704 709833.746374\n> > 100 1054727.043139 1903616.306028\n> > 200 964795.282957 1949200.338012\n> > 300 906029.377539 1927881.231478\n> > 400 845696.690912 1911065.369776\n> > 500 812295.222497 1926237.255856\n> > 600 888030.104213 1903047.236273\n> > 700 866896.532490 1886537.202142\n> > 800 863407.341506 1883768.592610\n> > 900 871386.608563 1874638.012128\n> > 1000 887668.277133 1876402.391502\n> > 1500 860051.361395 1815103.564241\n> > 2000 890900.098657 1775435.271018\n> > 3000 874184.980039 1653953.817997\n> > 4000 845023.080703 1582582.316043\n> > 5000 817100.195728 1512260.802371\n> >\n> > I think these are pretty nice results.\n\nFWIW, I took this for a spin on an AMD 3990x:\n\n# setup\npgbench -i postgres\n\n#benchmark\n#!/bin/bash\n\nfor i in 1 10 50 100 200 300 400 500 600 700 800 
900 1000 1500 2000\n3000 4000 5000;\ndo\necho Testing with $i connections >> bench.log\npgbench2 -M prepared -c $i -j $i -S -n -T 60 postgres >> bench.log\ndone\n\npgbench2 is your patched version pgbench. I got some pretty strange\nresults with the unpatched version. Up to about 50 million tps for\nexcluding connection establishing, which seems pretty farfetched\n\nconnections Unpatched Patched\n1 49062.24413 49834.64983\n10 428673.1027 453290.5985\n50 1552413.084 1849233.821\n100 2039675.027 2261437.1\n200 3139648.845 3632008.991\n300 3091248.316 3597748.942\n400 3056453.5 3567888.293\n500 3019571.47 3574009.053\n600 2991052.393 3537518.903\n700 2952484.763 3553252.603\n800 2910976.875 3539404.865\n900 2873929.989 3514353.776\n1000 2846859.499 3490006.026\n1500 2540003.038 3370093.934\n2000 2361799.107 3197556.738\n3000 2056973.778 2949740.692\n4000 1751418.117 2627174.81\n5000 1464786.461 2334586.042\n\n> Attached as a graph as well.\n\nLikewise.\n\nDavid", "msg_date": "Mon, 6 Apr 2020 00:05:12 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nThese benchmarks are on my workstation. The larger VM I used in the last\nround wasn't currently available.\n\nHW:\n2 x Intel(R) Xeon(R) Gold 5215 (each 10 cores / 20 threads)\n192GB Ram.\ndata directory is on a Samsung SSD 970 PRO 1TB\n\nA bunch of terminals, emacs, mutt are open while the benchmark is\nrunning. No browser.\n\nUnless mentioned otherwise, relevant configuration options are:\nmax_connections=1200\nshared_buffers=8GB\nmax_prepared_transactions=1000\nsynchronous_commit=local\nhuge_pages=on\nfsync=off # to make it more likely to see scalability bottlenecks\n\n\nIndependent of the effects of this patch (i.e. including master) I had a\nfairly hard time getting reproducible number for *low* client cases. 
I\nfound the numbers to be more reproducible if I pinned server/pgbench\nonto the same core :(. I chose to do that for the -c1 cases, to\nbenchmark the optimal behaviour, as that seemed to have the biggest\npotential for regressions.\n\nAll numbers are best of three. Tests start in freshly created cluster\neach.\n\n\nOn 2020-03-30 17:04:00 +0300, Alexander Korotkov wrote:\n> Following pgbench scripts comes first to my mind:\n> 1) SELECT txid_current(); (artificial but good for checking corner case)\n\n-M prepared -T 180\n(did a few longer runs, but doesn't seem to matter much)\n\nclients tps master tps pgxact\n1 46118 46027\n16 377357 440233\n40 373304 410142\n198 103912 105579\n\nbtw, there's some pretty horrible cacheline bouncing in txid_current()\nbecause backends first ReadNextFullTransactionId() (acquires XidGenLock\nin shared mode, reads ShmemVariableCache->nextFullXid), then separately\ncauses GetNewTransactionId() (acquires XidGenLock exclusively, reads &\nwrites nextFullXid).\n\nWith for fsync=off (and also for synchronous_commit=off) the numbers\nare, at lower client counts, severly depressed and variable due to\nwalwriter going completely nuts (using more CPU than the backend doing\nthe queries). Because WAL writes are so fast on my storage, individual\nXLogBackgroundFlush() calls are very quick. This leads to a *lot* of\nkill()s from the backend, from within XLogSetAsyncXactLSN(). There got\nto be a bug here. 
But unrelated.\n\n> 2) Single insert statement (as example of very short transaction)\n\nCREATE TABLE testinsert(c1 int not null, c2 int not null, c3 int not null, c4 int not null);\nINSERT INTO testinsert VALUES(1, 2, 3, 4);\n\n-M prepared -T 360\n\nfsync on:\nclients tps master tps pgxact\n1 653 658\n16 5687 5668\n40 14212 14229\n198 60483 62420\n\nfsync off:\nclients tps master tps pgxact\n1 59356 59891\n16 290626\t 299991\n40 348210 355669\n198 289182 291529\n\nclients tps master tps pgxact\n1024 47586 52135\n\n-M simple\nfsync off:\nclients tps master tps pgxact\n40 289077 326699\n198 286011 299928\n\n\n\n\n> 3) Plain pgbench read-write (you already did it for sure)\n\n-s 100 -M prepared -T 700\n\nautovacuum=off, fsync on:\nclients tps master tps pgxact\n1 474 479\n16 4356 4476\n40 8591 9309\n198 20045 20261\n1024 17986 18545\n\nautovacuum=off, fsync off:\nclients tps master tps pgxact\n1 7828 7719\n16 49069 50482\n40 68241 73081\n198 73464 77801\n1024 25621 28410\n\nI chose autovacuum off because otherwise the results vary much more\nwidely, and autovacuum isn't really needed for the workload.\n\n\n\n> 4) pgbench read-write script with increased amount of SELECTs. 
Repeat\n> select from pgbench_accounts say 10 times with different aids.\n\nI did intersperse all server-side statements in the script with two\nselects of other pgbench_account rows each.\n\n-s 100 -M prepared -T 700\nautovacuum=off, fsync on:\nclients tps master tps pgxact\n1 365 367\n198 20065 21391\n\n-s 1000 -M prepared -T 700\nautovacuum=off, fsync on:\nclients tps master tps pgxact\n16 2757 2880\n40 4734 4996\n198 16950 19998\n1024 22423 24935\n\n\n> 5) 10% pgbench read-write, 90% of pgbench read-only\n\n-s 100 -M prepared -T 100 -bselect-only@9 -btpcb-like@1\n\nautovacuum=off, fsync on:\nclients tps master tps pgxact\n16 37289 38656\n40 81284 81260\n198 189002 189357\n1024 143986 164762\n\n\n> > That definitely needs to be measured, due to the locking changes around procarrayaddd/remove.\n> >\n> > I don't think regressions besides perhaps 2pc are likely - there's nothing really getting more expensive but procarray add/remove.\n>\n> I agree that ProcArrayAdd()/Remove() should be first subject of\n> investigation, but other cases should be checked as well IMHO.\n\nI'm not sure I really see the point. 
If simple prepared tx doesn't show\nup as a negative difference, a more complex one won't either, since the\nProcArrayAdd()/Remove() related bottlenecks will play smaller and\nsmaller role.\n\n\n> Regarding 2pc I can following scenarios come to my mind:\n> 1) pgbench read-write modified so that every transaction is prepared\n> first, then commit prepared.\n\nThe numbers here are -M simple, because I wanted to use\nPREPARE TRANSACTION 'ptx_:client_id';\nCOMMIT PREPARED 'ptx_:client_id';\n\n-s 100 -M prepared -T 700 -f ~/tmp/pgbench-write-2pc.sql\nautovacuum=off, fsync on:\nclients tps master tps pgxact\n1 251 249\n16 2134 2174\n40 3984 4089\n198 6677 7522\n1024 3641 3617\n\n\n> 2) 10% of 2pc pgbench read-write, 90% normal pgbench read-write\n\n-s 100 -M prepared -T 100 -f ~/tmp/pgbench-write-2pc.sql@1 -btpcb-like@9\n\nclients tps master tps pgxact\n198 18625 18906\n\n> 3) 10% of 2pc pgbench read-write, 90% normal pgbench read-only\n\n-s 100 -M prepared -T 100 -f ~/tmp/pgbench-write-2pc.sql@1 -bselect-only@9\n\nclients tps master tps pgxact\n198 84817 84350\n\n\nI also benchmarked connection overhead, by using pgbench with -C\nexecuting SELECT 1.\n\n-T 10\nclients tps master tps pgxact\n1 572 587\n16 2109 2140\n40 2127 2136\n198 2097 2129\n1024 2101 2118\n\n\n\nThese numbers seem pretty decent to me. The regressions seem mostly\nwithin noise. The one possible exception to that is plain pgbench\nread/write with fsync=off and only a single session. I'll run more\nbenchmarks around that tomorrow (but now it's 6am :().\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 Apr 2020 06:39:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-04-06 06:39:59 -0700, Andres Freund wrote:\n> These benchmarks are on my workstation. 
The larger VM I used in the last\n> round wasn't currently available.\n\nOne way to reproduce the problem at smaller connection counts / smaller\nmachines is to take more snapshots. Doesn't fully reproduce the problem,\nbecause resetting ->xmin without xact overhead is part of the problem,\nbut it's helpful.\n\nI use a volatile function that loops over a trivial statement. There's\nprobably an easier / more extreme way to reproduce the problem. But it's\ngood enough.\n\n-- setup\nCREATE OR REPLACE FUNCTION snapme(p_ret int, p_loop int) RETURNS int VOLATILE LANGUAGE plpgsql AS $$BEGIN FOR x in 1..p_loop LOOP EXECUTE 'SELECT 1';END LOOP; RETURN p_ret; END;$$;\n-- statement executed in parallel\nSELECT snapme(17, 10000);\n\nbefore (all above 1.5%):\n+ 37.82% postgres postgres [.] GetSnapshotData\n+ 6.26% postgres postgres [.] AllocSetAlloc\n+ 3.77% postgres postgres [.] base_yyparse\n+ 3.04% postgres postgres [.] core_yylex\n+ 1.94% postgres postgres [.] grouping_planner\n+ 1.83% postgres libc-2.30.so [.] __strncpy_avx2\n+ 1.80% postgres postgres [.] palloc\n+ 1.73% postgres libc-2.30.so [.] __memset_avx2_unaligned_erms\n\nafter:\n+ 5.75% postgres postgres [.] base_yyparse\n+ 4.37% postgres postgres [.] palloc\n+ 4.29% postgres postgres [.] AllocSetAlloc\n+ 3.75% postgres postgres [.] expression_tree_walker.part.0\n+ 3.14% postgres postgres [.] core_yylex\n+ 2.51% postgres postgres [.] subquery_planner\n+ 2.48% postgres postgres [.] CheckExprStillValid\n+ 2.45% postgres postgres [.] check_stack_depth\n+ 2.42% postgres plpgsql.so [.] exec_stmt\n+ 1.92% postgres libc-2.30.so [.] __memset_avx2_unaligned_erms\n+ 1.91% postgres postgres [.] query_tree_walker\n+ 1.88% postgres libc-2.30.so [.] __GI_____strtoll_l_internal\n+ 1.86% postgres postgres [.] _SPI_execute_plan\n+ 1.85% postgres postgres [.] assign_query_collations_walker\n+ 1.84% postgres postgres [.] remove_useless_results_recurse\n+ 1.83% postgres postgres [.] grouping_planner\n+ 1.50% postgres postgres [.] 
set_plan_refs\n\n\nIf I change the workload to be\nBEGIN;\nSELECT txid_current();\nSELECT snapme(17, 1000);\nCOMMIT;\n\n\nthe difference reduces (because GetSnapshotData() only needs to look at\nprocs with xids, and xids are assigned for much longer), but still is\nsignificant:\n\nbefore (all above 1.5%):\n+ 35.89% postgres postgres [.] GetSnapshotData\n+ 7.94% postgres postgres [.] AllocSetAlloc\n+ 4.42% postgres postgres [.] base_yyparse\n+ 3.62% postgres libc-2.30.so [.] __memset_avx2_unaligned_erms\n+ 2.87% postgres postgres [.] LWLockAcquire\n+ 2.76% postgres postgres [.] core_yylex\n+ 2.30% postgres postgres [.] expression_tree_walker.part.0\n+ 1.81% postgres postgres [.] MemoryContextAllocZeroAligned\n+ 1.80% postgres postgres [.] transformStmt\n+ 1.66% postgres postgres [.] grouping_planner\n+ 1.64% postgres postgres [.] subquery_planner\n\nafter:\n+ 24.59% postgres postgres [.] GetSnapshotData\n+ 4.89% postgres postgres [.] base_yyparse\n+ 4.59% postgres postgres [.] AllocSetAlloc\n+ 3.00% postgres postgres [.] LWLockAcquire\n+ 2.76% postgres postgres [.] palloc\n+ 2.27% postgres postgres [.] MemoryContextAllocZeroAligned\n+ 2.26% postgres postgres [.] check_stack_depth\n+ 1.77% postgres postgres [.] 
core_yylex\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 Apr 2020 13:53:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-04-06 06:39:59 -0700, Andres Freund wrote:\n> > 3) Plain pgbench read-write (you already did it for sure)\n> \n> -s 100 -M prepared -T 700\n> \n> autovacuum=off, fsync on:\n> clients tps master tps pgxact\n> 1 474 479\n> 16 4356 4476\n> 40 8591 9309\n> 198 20045 20261\n> 1024 17986 18545\n> \n> autovacuum=off, fsync off:\n> clients tps master tps pgxact\n> 1 7828 7719\n> 16 49069 50482\n> 40 68241 73081\n> 198 73464 77801\n> 1024 25621 28410\n> \n> I chose autovacuum off because otherwise the results vary much more\n> widely, and autovacuum isn't really needed for the workload.\n\n> These numbers seem pretty decent to me. The regressions seem mostly\n> within noise. The one possible exception to that is plain pgbench\n> read/write with fsync=off and only a single session. I'll run more\n> benchmarks around that tomorrow (but now it's 6am :().\n\nThe \"one possible exception\" turned out to be a \"real\" regression, but\none that was dead easy to fix: It was an DEBUG1 elog I had left in. The\noverhead seems to solely have been the increased code size + overhead of\nerrstart(). After that there's no difference in the single client case\nanymore (I'd not expect a benefit).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 Apr 2020 16:52:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nSEE BELOW: What, and what not, to do for v13.\n\n\nAttached is a substantially polished version of my patches. 
Note that\nthe first three patches, as well as the last, are not intended to be\ncommitted at this time / in this form - they're there to make testing\neasier.\n\nThere is a lot of polish, but also a few substantial changes:\n\n- To be compatible with old_snapshot_threshold I've revised the way\n heap_page_prune_opt() deals with old_snapshot_threshold. Now\n old_snapshot_threshold is only applied when we otherwise would have\n been unable to prune (both at the time of the pd_prune_xid check, and\n on individual tuples). This makes old_snapshot_threshold considerably\n cheaper and cause less conflicts.\n\n This required adding a version of HeapTupleSatisfiesVacuum that\n returns the horizon, rather than doing the horizon test itself; that\n way we can first test a tuple's horizon against the normal approximate\n threshold (making it an accurate threshold if needed) and only if that\n fails fall back to old_snapshot_threshold.\n\n The main reason here was not to improve old_snapshot_threshold, but to\n avoid a regression when its being used. Because we need a horizon to\n pass to old_snapshot_threshold, we'd have to fall back to computing an\n accurate horizon too often.\n\n\n- Previous versions of the patch had a TODO about computing horizons not\n just for one of shared / catalog / data tables, but all of them at\n once. To avoid potentially needing to determine xmin horizons multiple\n times within one transaction. 
For that I've renamed GetOldestXmin()\n to ComputeTransactionHorizons() and added wrapper functions instead of\n the different flag combinations we previously had for GetOldestXmin().\n\n This allows us to get rid of the PROCARRAY_* flags, and PROC_RESERVED.\n\n\n- To address Thomas' review comment about not accessing nextFullXid\n without xidGenLock, I made latestCompletedXid a FullTransactionId (a\n fxid is needed to be able to infer 64bit xids for the horizons -\n otherwise there is some danger they could wrap).\n\n\n- Improving the comment around the snapshot caching, I decided that the\n justification for correctness around not taking ProcArrayLock is too\n complicated (in particular around setting MyProc->xmin). While\n avoiding ProcArrayLock alltogether is a substantial gain, the caching\n itself helps a lot already. Seems best to leave that for a later step.\n\n This means that the numbers for the very high connection counts aren't\n quite as good.\n\n\n- Plenty of small changes to address issues I found while\n benchmarking. The only one of real note is that I had released\n XidGenLock after ProcArrayLock in ProcArrayAdd/Remove. For 2pc that\n causes noticable unnecessary contention, because we'll wait for\n XidGenLock while holding ProcArrayLock...\n\n\nI think this is pretty close to being committable.\n\n\nBut: This patch came in very late for v13, and it took me much longer to\npolish it up than I had hoped (partially distraction due to various bugs\nI found (in particular snapshot_too_old), partially covid19, partially\n\"hell if I know\"). The patchset touches core parts of the system. 
While\nboth Thomas and David have done some review, they haven't for the latest\nversion (mea culpa).\n\nIn many other instances I would say that the above suggests slipping to\nv14, given the timing.\n\nThe main reason I am considering pushing is that I think this patchset\naddresses one of the most common critiques of postgres, as well as very\ncommon, hard to fix, real-world production issues. GetSnapshotData() has\nbeen a major bottleneck for about as long as I have been using postgres,\nand this addresses that to a significant degree.\n\nA second reason I am considering it is that, in my opinion, the changes\nare not all that complicated and not even that large. At least not for a\nchange to a problem that we've long tried to improve.\n\n\nObviously we all have a tendency to think our own work is important, and\nthat we deserve a bit more leeway than others. So take the above with a\ngrain of salt.\n\n\nComments?\n\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 7 Apr 2020 05:15:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 2020-04-07 05:15:03 -0700, Andres Freund wrote:\n> Attached is a substantially polished version of my patches. Note that\n> the first three patches, as well as the last, are not intended to be\n> committed at this time / in this form - they're there to make testing\n> easier.\n\nI didn't actually attach that last not-to-be-committed patch... It's\njust the pgbench patch that I had attached before (and started a thread\nabout).
Here it is again.", "msg_date": "Tue, 7 Apr 2020 05:18:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 4/7/20 8:15 AM, Andres Freund wrote:\n\n> I think this is pretty close to being committable.\n> \n> \n> But: This patch came in very late for v13, and it took me much longer to\n> polish it up than I had hoped (partially distraction due to various bugs\n> I found (in particular snapshot_too_old), partially covid19, partially\n> \"hell if I know\"). The patchset touches core parts of the system. While\n> both Thomas and David have done some review, they haven't for the latest\n> version (mea culpa).\n> \n> In many other instances I would say that the above suggests slipping to\n> v14, given the timing.\n> \n> The main reason I am considering pushing is that I think this patcheset\n> addresses one of the most common critiques of postgres, as well as very\n> common, hard to fix, real-world production issues. GetSnapshotData() has\n> been a major bottleneck for about as long as I have been using postgres,\n> and this addresses that to a significant degree.\n> \n> A second reason I am considering it is that, in my opinion, the changes\n> are not all that complicated and not even that large. At least not for a\n> change to a problem that we've long tried to improve.\n\nEven as recently as earlier this week there was a blog post making the\nrounds about the pain points running PostgreSQL with many simultaneous\nconnections. 
Anything to help with that would go a long way, and looking\nat the benchmarks you ran (at least with a quick, nonthorough glance)\nthis could and should be very positively impactful to a *lot* of\nPostgreSQL users.\n\nI can't comment on the \"close to committable\" aspect (at least not with\nan informed, confident opinion) but if it is indeed close to committable\nand you can put in the work to finish polishing (read: \"bug fixing\" :-) and\nwe have a plan both for testing and, if need be, reverting, I would be\nokay with including it, for whatever my vote is worth. Is the timing /\nsituation ideal? No, but the way you describe it, it sounds like there\nis enough that can be done to ensure it's ready for Beta 1.\n\nFrom an RMT standpoint, perhaps this is one of the \"Recheck at Mid-Beta\"\nitems, as well.\n\nThanks,\n\nJonathan", "msg_date": "Tue, 7 Apr 2020 10:27:11 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Comments:\n\nIn 0002, the comments in SnapshotSet() are virtually incomprehensible.\nThere's no commit message so the reasons for the changes are unclear.\nBut mostly looks unproblematic.\n\n0003 looks like a fairly unrelated bug fix that deserves to be\ndiscussed on the thread related to the original patch. Probably should\nbe an open item.\n\n0004 looks fine.\n\nRegarding 0005:\n\nThere's sort of a mix of terminology here: are we pruning tuples or\nremoving tuples or testing whether things are invisible? It would be\nbetter to be more consistent.\n\n+ * State for testing whether tuple versions may be removed. To improve\n+ * GetSnapshotData() performance we don't compute an accurate value whenever\n+ * acquiring a snapshot. Instead we compute boundaries above/below which we\n+ * know that row versions are [not] needed anymore.
If at test time values\n+ * falls in between the two, the boundaries can be recomputed (unless that\n+ * just happened).\n\nI don't like the wording here much. Maybe: State for testing whether\nan XID is invisible to all current snapshots. If an XID precedes\nmaybe_needed_bound, it's definitely not visible to any current\nsnapshot. If it equals or follows definitely_needed_bound, that XID\nisn't necessarily invisible to all snapshots. If it falls in between,\nwe're not sure. If, when testing a particular tuple, we see an XID\nsomewhere in the middle, we can try recomputing the boundaries to get\na more accurate answer (unless we've just done that). This is cheaper\nthan maintaining an accurate value all the time.\n\nThere's also the problem that this sorta contradicts the comment for\ndefinitely_needed_bound. There it says intermediate values needed to\nbe tested against the ProcArray, whereas here it says we need to\nrecompute the bounds. That's kinda confusing.\n\nComputedHorizons seems like a fairly generic name. I think there's\nsome relationship between InvisibleToEveryoneState and\nComputedHorizons that should be brought out more clearly by the naming\nand the comments.\n\n+ /*\n+ * The value of ShmemVariableCache->latestCompletedFullXid when\n+ * ComputeTransactionHorizons() held ProcArrayLock.\n+ */\n+ FullTransactionId latest_completed;\n+\n+ /*\n+ * The same for procArray->replication_slot_xmin and.\n+ * procArray->replication_slot_catalog_xmin.\n+ */\n+ TransactionId slot_xmin;\n+ TransactionId slot_catalog_xmin;\n\nDepartment of randomly inconsistent names. In general I think it's\nquite hard to grasp the relationship between the different fields in\nComputedHorizons.\n\n+static inline bool OldSnapshotThresholdActive(void)\n+{\n+ return old_snapshot_threshold >= 0;\n+}\n\nFormatting.\n\n+\n+bool\n+GinPageIsRecyclable(Page page)\n\nNeeds a comment. 
Or more than one.\n\n- /*\n- * If a transaction wrote a commit record in the gap between taking and\n- * logging the snapshot then latestCompletedXid may already be higher than\n- * the value from the snapshot, so check before we use the incoming value.\n- */\n- if (TransactionIdPrecedes(ShmemVariableCache->latestCompletedXid,\n- running->latestCompletedXid))\n- ShmemVariableCache->latestCompletedXid = running->latestCompletedXid;\n-\n- Assert(TransactionIdIsNormal(ShmemVariableCache->latestCompletedXid));\n-\n- LWLockRelease(ProcArrayLock);\n\nThis code got relocated so that the lock is released later, but you\ndidn't add any comments explaining why. Somebody will move it back and\nthen you'll yell at them for doing it wrong. :-)\n\n+ /*\n+ * Must have called GetOldestVisibleTransactionId() if using SnapshotAny.\n+ * Shouldn't have for an MVCC snapshot. (It's especially worth checking\n+ * this for parallel builds, since ambuild routines that support parallel\n+ * builds must work these details out for themselves.)\n+ */\n+ Assert(snapshot == SnapshotAny || IsMVCCSnapshot(snapshot));\n+ Assert(snapshot == SnapshotAny ?
TransactionIdIsValid(OldestXmin) :\n+ !TransactionIdIsValid(OldestXmin));\n+ Assert(snapshot == SnapshotAny || !anyvisible);\n\nThis looks like a gratuitous code relocation.\n\n+HeapTupleSatisfiesVacuumHorizon(HeapTuple htup, Buffer buffer,\nTransactionId *dead_after)\n\nI don't much like the name dead_after, but I don't have a better\nsuggestion, either.\n\n- * Deleter committed, but perhaps it was recent enough that some open\n- * transactions could still see the tuple.\n+ * Deleter committed, allow caller to check if it was recent enough that\n+ * some open transactions could still see the tuple.\n\nI think you could drop this change.\n\n+ /*\n+ * State related to determining whether a dead tuple is still needed.\n+ */\n+ InvisibleToEveryoneState *vistest;\n+ TimestampTz limited_oldest_ts;\n+ TransactionId limited_oldest_xmin;\n+ /* have we made removal decision based on old_snapshot_threshold */\n+ bool limited_oldest_committed;\n\nWould benefit from more comments.\n\n+ * accuring to prstate->vistest, but that can be removed based on\n\nTypo.\n\nGenerally, heap_prune_satisfies_vacuum looks pretty good. The\nlimited_oldest_committed naming is confusing, but the comments make it\na lot clearer.\n\n+ * If oldest btpo.xact in the deleted pages is invisible, then at\n\nI'd say \"invisible to everyone\" here for clarity.\n\n-latestCompletedXid variable. This allows GetSnapshotData to use\n-latestCompletedXid + 1 as xmax for its snapshot: there can be no\n+latestCompletedFullXid variable. 
This allows GetSnapshotData to use\n+latestCompletedFullXid + 1 as xmax for its snapshot: there can be no\n\nIs this fixing a preexisting README defect?\n\nIt might be useful if this README expanded on the new machinery a bit\ninstead of just updating the wording to account for it, but I'm not\nsure exactly what that would look like or whether it would be too\nduplicative of other things.\n\n+void AssertTransactionIdMayBeOnDisk(TransactionId xid)\n\nFormatting.\n\n+ * Assert that xid is one that we could actually see on disk.\n\nI don't know what this means. The whole purpose of this routine is\nvery unclear to me.\n\n * the secondary effect that it sets RecentGlobalXmin. (This is critical\n * for anything that reads heap pages, because HOT may decide to prune\n * them even if the process doesn't attempt to modify any tuples.)\n+ *\n+ * FIXME: This comment is inaccurate / the code buggy. A snapshot that is\n+ * not pushed/active does not reliably prevent HOT pruning (->xmin could\n+ * e.g. be cleared when cache invalidations are processed).\n\nSomething needs to be done here... and in the other similar case.\n\nIs this kind of review helpful?\n\n...Robert\n\n\n", "msg_date": "Tue, 7 Apr 2020 12:41:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nThanks for the review!\n\n\nOn 2020-04-07 12:41:07 -0400, Robert Haas wrote:\n> In 0002, the comments in SnapshotSet() are virtually incomprehensible.\n> There's no commit message so the reasons for the changes are unclear.\n> But mostly looks unproblematic.\n\nI was planning to drop that patch pre-commit, at least for now. I think\nthere's a few live bugs here, but they're all older. 
I did send a few emails\nabout the class of problem; unfortunately it was a fairly one-sided\nconversation so far ;)\n\nhttps://www.postgresql.org/message-id/20200407072418.ccvnyjbrktyi3rzc%40alap3.anarazel.de\n\n\n> 0003 looks like a fairly unrelated bug fix that deserves to be\n> discussed on the thread related to the original patch. Probably should\n> be an open item.\n\nThere was some discussion in a separate thread:\nhttps://www.postgresql.org/message-id/20200406025651.fpzdb5yyb7qyhqko%40alap3.anarazel.de\nThe only reason for including it in this patch stack is that I can't\nreally exercise the patchset without the fix (it's a bit sad that this\nissue has gone unnoticed for months before I found it as part of the\ndevelopment of this patch).\n\nThink I'll push a minimal version now, and add an open item.\n\n\n> \n> Regarding 0005:\n> \n> There's sort of a mix of terminology here: are we pruning tuples or\n> removing tuples or testing whether things are invisible? It would be\n> better to be more consistent.\n> \n> + * State for testing whether tuple versions may be removed. To improve\n> + * GetSnapshotData() performance we don't compute an accurate value whenever\n> + * acquiring a snapshot. Instead we compute boundaries above/below which we\n> + * know that row versions are [not] needed anymore. If at test time values\n> + * falls in between the two, the boundaries can be recomputed (unless that\n> + * just happened).\n> \n> I don't like the wording here much. Maybe: State for testing whether\n> an XID is invisible to all current snapshots. If an XID precedes\n> maybe_needed_bound, it's definitely not visible to any current\n> snapshot. If it equals or follows definitely_needed_bound, that XID\n> isn't necessarily invisible to all snapshots. If it falls in between,\n> we're not sure.
If, when testing a particular tuple, we see an XID\n> somewhere in the middle, we can try recomputing the boundaries to get\n> a more accurate answer (unless we've just done that). This is cheaper\n> than maintaining an accurate value all the time.\n\nI'll incorporate that, thanks.\n\n\n> There's also the problem that this sorta contradicts the comment for\n> definitely_needed_bound. There it says intermediate values needed to\n> be tested against the ProcArray, whereas here it says we need to\n> recompute the bounds. That's kinda confusing.\n\nFor me those are the same. Computing an accurate bound is visiting the\nprocarray. But I'll rephrase.\n\n\n> ComputedHorizons seems like a fairly generic name. I think there's\n> some relationship between InvisibleToEveryoneState and\n> ComputedHorizons that should be brought out more clearly by the naming\n> and the comments.\n\nI don't like the naming of ComputedHorizons, ComputeTransactionHorizons\nmuch... But I find it hard to come up with something that's meaningfully\nbetter.\n\nI'll add a comment.\n\n\n> + /*\n> + * The value of ShmemVariableCache->latestCompletedFullXid when\n> + * ComputeTransactionHorizons() held ProcArrayLock.\n> + */\n> + FullTransactionId latest_completed;\n> +\n> + /*\n> + * The same for procArray->replication_slot_xmin and.\n> + * procArray->replication_slot_catalog_xmin.\n> + */\n> + TransactionId slot_xmin;\n> + TransactionId slot_catalog_xmin;\n> \n> Department of randomly inconsistent names. In general I think it's\n> quite hard to grasp the relationship between the different fields in\n> ComputedHorizons.\n\nWhat's the inconsistency? The dropped replication_ vs dropped FullXid\npostfix?\n\n\n\n> +\n> +bool\n> +GinPageIsRecyclable(Page page)\n> \n> Needs a comment. Or more than one.\n\nWell, I started to write one a couple times. But it's really just moving\nthe pre-existing code from the macro into a function and there weren't\nany comments around *any* of it before.
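(To make the shape of that change concrete for anyone following along: it is just hoisting a macro body into a function. The following is an illustrative stand-in with made-up names and accessors, not the actual gin code:)

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the real page representation and accessors. */
typedef struct
{
	bool		is_new;
	bool		is_deleted;
	uint32_t	delete_xid;
} FakePage;

static uint32_t oldest_needed_xid = 100;	/* stand-in for the horizon */

/*
 * Before, a test like this would live in a macro along the lines of
 *   #define FakePageIsRecyclable(p)  ((p)->is_new || ((p)->is_deleted && (p)->delete_xid < oldest_needed_xid))
 * After, the same test is a function whose branches can each carry a comment.
 */
static bool
FakePageIsRecyclable(const FakePage *page)
{
	/* An uninitialized page can always be reused. */
	if (page->is_new)
		return true;

	/* A deleted page can be reused once nobody can still need its contents. */
	if (page->is_deleted && page->delete_xid < oldest_needed_xid)
		return true;

	return false;
}
```

The point of the function form is exactly what the review asks for: there is now a natural place to hang per-branch comments.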
All my comment attempts\nbasically just were restating the code in so many words, or would have\nrequired more work than I saw justified in the context of just moving\ncode.\n\n\n> - /*\n> - * If a transaction wrote a commit record in the gap between taking and\n> - * logging the snapshot then latestCompletedXid may already be higher than\n> - * the value from the snapshot, so check before we use the incoming value.\n> - */\n> - if (TransactionIdPrecedes(ShmemVariableCache->latestCompletedXid,\n> - running->latestCompletedXid))\n> - ShmemVariableCache->latestCompletedXid = running->latestCompletedXid;\n> -\n> - Assert(TransactionIdIsNormal(ShmemVariableCache->latestCompletedXid));\n> -\n> - LWLockRelease(ProcArrayLock);\n> \n> This code got relocated so that the lock is released later, but you\n> didn't add any comments explaining why. Somebody will move it back and\n> then you'll yell at them for doing it wrong. :-)\n\nI just moved it because the code now references ->nextFullXid, which was\npreviously maintained after latestCompletedXid.\n\n\n> + /*\n> + * Must have called GetOldestVisibleTransactionId() if using SnapshotAny.\n> + * Shouldn't have for an MVCC snapshot. (It's especially worth checking\n> + * this for parallel builds, since ambuild routines that support parallel\n> + * builds must work these details out for themselves.)\n> + */\n> + Assert(snapshot == SnapshotAny || IsMVCCSnapshot(snapshot));\n> + Assert(snapshot == SnapshotAny ? TransactionIdIsValid(OldestXmin) :\n> + !TransactionIdIsValid(OldestXmin));\n> + Assert(snapshot == SnapshotAny || !anyvisible);\n> \n> This looks like a gratuitous code relocation.\n\nI found it hard to understand the comments because the Asserts were done\nfurther away from where the relevant decisions were made.
And I\nthink I have history to back me up: It looks to me that that is\nbecause ab0dfc961b6a821f23d9c40c723d11380ce195a6 just put the progress\nrelated code between the if (!scan) and the Asserts.\n\n\n> +HeapTupleSatisfiesVacuumHorizon(HeapTuple htup, Buffer buffer,\n> TransactionId *dead_after)\n> \n> I don't much like the name dead_after, but I don't have a better\n> suggestion, either.\n> \n> - * Deleter committed, but perhaps it was recent enough that some open\n> - * transactions could still see the tuple.\n> + * Deleter committed, allow caller to check if it was recent enough that\n> + * some open transactions could still see the tuple.\n> \n> I think you could drop this change.\n\nOk. Wasn't quite sure what to do with that comment.\n\n\n> Generally, heap_prune_satisfies_vacuum looks pretty good. The\n> limited_oldest_committed naming is confusing, but the comments make it\n> a lot clearer.\n\nI didn't like _committed much either. But couldn't come up with\nsomething short. _relied_upon?\n\n\n> + * If oldest btpo.xact in the deleted pages is invisible, then at\n> \n> I'd say \"invisible to everyone\" here for clarity.\n> \n> -latestCompletedXid variable. This allows GetSnapshotData to use\n> -latestCompletedXid + 1 as xmax for its snapshot: there can be no\n> +latestCompletedFullXid variable. This allows GetSnapshotData to use\n> +latestCompletedFullXid + 1 as xmax for its snapshot: there can be no\n> \n> Is this fixing a preexisting README defect?\n\nIt's just adjusting for the changed name of latestCompletedXid to\nlatestCompletedFullXid, as part of widening it to 64bits.
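(As an aside for readers: the reason a 64-bit latestCompletedFullXid is useful is that any 32-bit xid found on disk can be mapped to a full 64-bit xid relative to it, after which horizon comparisons need no wraparound-aware logic at all. A standalone sketch of the idea, with simplified names rather than the patch's actual code:)

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/*
 * Map a 32-bit xid to a 64-bit xid, relative to "rel", a recent 64-bit
 * xid such as the widened latestCompletedFullXid.  Only valid when "xid"
 * is within 2^31 transactions of "rel", which holds for any xid that can
 * legitimately appear on disk while wraparound protection is intact.
 */
static uint64_t
full_xid_relative_to(uint64_t rel, TransactionId xid)
{
	/* signed 32-bit distance covers xids on either side of rel */
	int32_t		diff = (int32_t) (xid - (TransactionId) rel);

	return rel + diff;
}
```

Once horizons are stored this way, "is this xid older than the horizon" is a plain integer comparison, with no modulo-2^32 reasoning needed.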
I'm not\nreally a fan of adding that to the variable name, but surrounding code\nalready did it (cf VariableCache->nextFullXid), so I thought I'd follow\nsuit.\n\n\n> It might be useful if this README expanded on the new machinery a bit\n> instead of just updating the wording to account for it, but I'm not\n> sure exactly what that would look like or whether it would be too\n> duplicative of other things.\n\n\n\n> +void AssertTransactionIdMayBeOnDisk(TransactionId xid)\n> \n> Formatting.\n> \n> + * Assert that xid is one that we could actually see on disk.\n> \n> I don't know what this means. The whole purpose of this routine is\n> very unclear to me.\n\nIt's intended to be a double check against\n\n\n> * the secondary effect that it sets RecentGlobalXmin. (This is critical\n> * for anything that reads heap pages, because HOT may decide to prune\n> * them even if the process doesn't attempt to modify any tuples.)\n> + *\n> + * FIXME: This comment is inaccurate / the code buggy. A snapshot that is\n> + * not pushed/active does not reliably prevent HOT pruning (->xmin could\n> + * e.g. be cleared when cache invalidations are processed).\n> \n> Something needs to be done here... and in the other similar case.\n\nIndeed. I wrote a separate email about it yesterday:\nhttps://www.postgresql.org/message-id/20200407072418.ccvnyjbrktyi3rzc%40alap3.anarazel.de\n\n\n\n> Is this kind of review helpful?\n\nYes!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Apr 2020 10:51:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "More review, since it sounds like you like it:\n\n0006 - Boring. But I'd probably make this move both xmin and xid back,\nwith related comment changes; see also next comment.\n\n0007 -\n\n+ TransactionId xidCopy; /* this backend's xid, a copy of this proc's\n+ ProcGlobal->xids[] entry. 
*/\n\nCan we please NOT put Copy into the name like that? Pretty please?\n\n+ int pgxactoff; /* offset into various ProcGlobal-> arrays\n+ * NB: can change any time unless locks held!\n+ */\n\nI'm going to add the helpful comment \"NB: can change any time unless\nlocks held!\" to every data structure in the backend that is in shared\nmemory and not immutable. No need, of course, to mention WHICH\nlocks...\n\nOn a related note, PROC_HDR really, really, really needs comments\nexplaining the locking regimen for the new xids field.\n\n+ ProcGlobal->xids[pgxactoff] = InvalidTransactionId;\n\nApparently this array is not dense in the sense that it excludes\nunused slots, but comments elsewhere don't seem to entirely agree.\nMaybe the comments discussing how it is \"dense\" need to be a little\nmore precise about this.\n\n+ for (int i = 0; i < nxids; i++)\n\nI miss my C89. Yeah, it's just me.\n\n- if (!suboverflowed)\n+ if (suboverflowed)\n+ continue;\n+\n\nDo we really need to do this kind of diddling in this patch? I mean\nyes to the idea, but no to things that are going to make it harder to\nunderstand what happened if this blows up.\n\n+ uint32 TotalProcs = MaxBackends + NUM_AUXILIARY_PROCS + max_prepared_xacts;\n\n /* ProcGlobal */\n size = add_size(size, sizeof(PROC_HDR));\n- /* MyProcs, including autovacuum workers and launcher */\n- size = add_size(size, mul_size(MaxBackends, sizeof(PGPROC)));\n- /* AuxiliaryProcs */\n- size = add_size(size, mul_size(NUM_AUXILIARY_PROCS, sizeof(PGPROC)));\n- /* Prepared xacts */\n- size = add_size(size, mul_size(max_prepared_xacts, sizeof(PGPROC)));\n- /* ProcStructLock */\n+ size = add_size(size, mul_size(TotalProcs, sizeof(PGPROC)));\n\nThis seems like a bad idea. 
If we establish a precedent that it's OK\nto have sizing routines that don't use add_size() and mul_size(),\npeople are going to cargo cult that into places where there is more\nrisk of overflow than there is here.\n\nYou've got a bunch of different places that talk about the new PGXACT\narray and they are somewhat redundant yet without saying exactly the\nsame thing every time either. I think that needs cleanup.\n\nOne thing I didn't see is any clear discussion of what happens if the\ntwo copies of the XID information don't agree with each other. That\nshould be added someplace, either in an appropriate code comment or in\na README or something. I *think* both are protected by the same locks,\nbut there's also some unlocked access to those structure members, so\nit's not entirely a slam dunk.\n\n...Robert\n\n\n", "msg_date": "Tue, 7 Apr 2020 14:28:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-04-07 10:51:12 -0700, Andres Freund wrote:\n> > +void AssertTransactionIdMayBeOnDisk(TransactionId xid)\n> > \n> > Formatting.\n> > \n> > + * Assert that xid is one that we could actually see on disk.\n> > \n> > I don't know what this means. The whole purpose of this routine is\n> > very unclear to me.\n> \n> It's intended to be a double check against\n\nforgetting things...? Err:\n\nIt is intended to make it easier to detect cases where the passed\nTransactionId is not safe against wraparound. If there is protection\nagainst wraparound, then the xid\n\na) may never be older than ShmemVariableCache->oldestXid (since\n otherwise the rel/datfrozenxid could not have advanced past the xid,\n and because oldestXid is what what prevents ->nextFullXid from\n advancing far enough to cause a wraparound)\n\nb) cannot be >= ShmemVariableCache->nextFullXid. 
If it is, it cannot\n recently have come from GetNewTransactionId(), and thus there is no\n anti-wraparound protection either.\n\nAs full wraparounds are painful to exercise in testing,\nAssertTransactionIdMayBeOnDisk() is intended to make it easier to detect\npotential hazards.\n\nThe reason for the *OnDisk naming is that [oldestXid, nextFullXid) is\nthe appropriate check for values actually stored in tables. There could,\nand probably should, be a narrower assertion ensuring that a xid is\nprotected against being pruned away (i.e. a PGPROC's xmin covering it).\n\nThe reason for being concerned enough in the new code to add the new\nassertion helper (as well as a major motivating reason for making the\nhorizons 64 bit xids) is that it's much harder to ensure that \"global\nxmin\" style horizons don't wrap around. By definition they include other\nbackend's ->xmin, and those can be released without a lock at any\ntime. As a lot of wraparound issues are triggered by very longrunning\ntransactions, it is not even unlikely to hit such problems: At some\npoint somebody is going to kill that old backend and ->oldestXid will\nadvance very quickly.\n\nThere is a lot of code that is pretty unsafe around wraparounds... They\nare getting easier and easier to hit on a regular schedule in production\n(plenty of databases that hit wraparounds multiple times a week). And I\ndon't think we as PG developers often don't quite take that into\naccount.\n\n\nDoes that make some sense? Do you have a better suggestion for a name?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Apr 2020 11:28:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Tue, Apr 7, 2020 at 11:28 AM Andres Freund <andres@anarazel.de> wrote:\n> There is a lot of code that is pretty unsafe around wraparounds... 
They\n> are getting easier and easier to hit on a regular schedule in production\n> (plenty of databases that hit wraparounds multiple times a week). And I\n> don't think we as PG developers often don't quite take that into\n> account.\n\nIt would be nice if there was high level documentation on wraparound\nhazards. Maybe even a dedicated README file.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 7 Apr 2020 11:32:42 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Tue, Apr 7, 2020 at 2:28 PM Andres Freund <andres@anarazel.de> wrote:\n> Does that make some sense? Do you have a better suggestion for a name?\n\nI think it makes sense. I have two basic problems with the name. The\nfirst is that \"on disk\" doesn't seem to be a very clear way of\ndescribing what you're actually checking here, and it definitely\ndoesn't refer to an existing concept which sophisticated hackers can\nbe expected to understand. The second is that \"may\" is ambiguous in\nEnglish: it can either mean that something is permissible (\"Johnny,\nyou may go to the bathroom\") or that we do not have certain knowledge\nof it (\"Johnny may be in the bathroom\"). When it is followed by \"be\",\nit usually has the latter sense, although there are exceptions (e.g.\n\"She may be discharged from the hospital today if she wishes, but we\nrecommend that she stay for another day\"). Consequently, I found that\nuse of \"may be\" in this context wicked confusing. What came to mind\nwas:\n\nbool\nRobertMayBeAGiraffe(void)\n{\n return true; // i mean, i haven't seen him since last week, so who knows?\n}\n\nSo I suggest a name with \"Is\" or no verb, rather than one with\n\"MayBe.\" And I suggest something else instead of \"OnDisk,\" e.g.\nAssertTransactionIdIsInUsableRange() or\nTransactionIdIsInAllowableRange() or\nAssertTransactionIdWraparoundProtected(). 
I kind of like that last\none, but YMMV.\n\nI wish to clarify that in sending these review emails I am taking no\nposition on whether or not it is prudent to commit any or all of them.\nI do not think we can rule out the possibility that they will Break\nThings, but neither do I wish to be seen as That Guy Who Stands In The\nWay of Important Improvements. Time does not permit me a detailed\nreview anyway. So, these comments are provided in the hope that they\nmay be useful but without endorsement or acrimony. If other people\nwant to endorse or, uh, acrimoniate, based on my comments, that is up\nto them.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 7 Apr 2020 14:51:52 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Tue, Apr 7, 2020 at 1:51 PM Andres Freund <andres@anarazel.de> wrote:\n> > ComputedHorizons seems like a fairly generic name. I think there's\n> > some relationship between InvisibleToEveryoneState and\n> > ComputedHorizons that should be brought out more clearly by the naming\n> > and the comments.\n>\n> I don't like the naming of ComputedHorizons, ComputeTransactionHorizons\n> much... But I find it hard to come up with something that's meaningfully\n> better.\n\nIt would help to stick XID in there, like ComputedXIDHorizons. What I\nfind really baffling is that you seem to have two structures in the\nsame file that have essentially the same purpose, but the second one\n(ComputedHorizons) has a lot more stuff in it. I can't understand why.\n\n> What's the inconsistency? The dropped replication_ vs dropped FullXid\n> postfix?\n\nYeah, just having the member names be randomly different between the\nstructs. Really harms greppability.\n\n> > Generally, heap_prune_satisfies_vacuum looks pretty good. 
The\n> > limited_oldest_committed naming is confusing, but the comments make it\n> > a lot clearer.\n>\n> I didn't like _committed much either. But couldn't come up with\n> something short. _relied_upon?\n\noldSnapshotLimitUsed or old_snapshot_limit_used, like currentCommandIdUsed?\n\n> It's just adjusting for the changed name of latestCompletedXid to\n> latestCompletedFullXid, as part of widening it to 64bits. I'm not\n> really a fan of adding that to the variable name, but surrounding code\n> already did it (cf VariableCache->nextFullXid), so I thought I'd follow\n> suit.\n\nOops, that was me misreading the diff. Sorry for the noise.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 7 Apr 2020 15:03:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-04-07 14:28:09 -0400, Robert Haas wrote:\n> More review, since it sounds like you like it:\n>\n> 0006 - Boring. But I'd probably make this move both xmin and xid back,\n> with related comment changes; see also next comment.\n>\n> 0007 -\n>\n> + TransactionId xidCopy; /* this backend's xid, a copy of this proc's\n> + ProcGlobal->xids[] entry. */\n>\n> Can we please NOT put Copy into the name like that? Pretty please?\n\nDo you have a suggested naming scheme? Something indicating that it's\nnot the only place that needs to be updated?\n\n\n> + int pgxactoff; /* offset into various ProcGlobal-> arrays\n> + * NB: can change any time unless locks held!\n> + */\n>\n> I'm going to add the helpful comment \"NB: can change any time unless\n> locks held!\" to every data structure in the backend that is in shared\n> memory and not immutable. 
No need, of course, to mention WHICH\n> locks...\n\nI think it's more on-point here, because we need to hold either of the\nlocks*, even for changes to a backend's own status that one reasonably\ncould expect would be safe to at least inspect. E.g. looking at\nProcGlobal->xids[MyProc->pgxactoff]\ndoesn't look suspicious, but could very well return another backend's\nxid, if neither ProcArrayLock nor XidGenLock is held (because a\nProcArrayRemove() could have changed pgxactoff if a previous entry was\nremoved).\n\n*see comment at PROC_HDR:\n\n *\n * Adding/Removing an entry into the procarray requires holding *both*\n * ProcArrayLock and XidGenLock in exclusive mode (in that order). Both are\n * needed because the dense arrays (see below) are accessed from\n * GetNewTransactionId() and GetSnapshotData(), and we don't want to add\n * further contention by both using one lock. Adding/Removing a procarray\n * entry is much less frequent.\n */\ntypedef struct PROC_HDR\n{\n\t/* Array of PGPROC structures (not including dummies for prepared txns) */\n\tPGPROC\t *allProcs;\n\n\n\t/*\n\t * Arrays with per-backend information that is hotly accessed, indexed by\n\t * PGPROC->pgxactoff. These are in separate arrays for three reasons:\n\t * First, to allow for as tight loops accessing the data as\n\t * possible. Second, to prevent updates of frequently changing data from\n\t * invalidating cachelines shared with less frequently changing\n\t * data.
Third to condense frequently accessed data into as few cachelines\n\t * as possible.\n\t *\n\t * When entering a PGPROC for 2PC transactions with ProcArrayAdd(), those\n\t * copies are used to provide the contents of the dense data, and will be\n\t * transferred by ProcArrayAdd() while it already holds ProcArrayLock.\n\t */\n\nthere's also\n\n * The various *Copy fields are copies of the data in ProcGlobal arrays that\n * can be accessed without holding ProcArrayLock / XidGenLock (see PROC_HDR\n * comments).\n\n\nI had a more explicit warning/explanation about the dangers of accessing\nthe arrays without locks, but apparently went too far when reducing\nduplicated comments.\n\n\n> On a related note, PROC_HDR really, really, really needs comments\n> explaining the locking regimen for the new xids field.\n\n\nI'll expand the above, in particular highlighting the danger of\npgxactoff changing.\n\n\n> + ProcGlobal->xids[pgxactoff] = InvalidTransactionId;\n>\n> Apparently this array is not dense in the sense that it excludes\n> unused slots, but comments elsewhere don't seem to entirely agree.\n\nWhat do you mean with \"unused slots\"? Backends that committed?\n\nDense is intended to describe that the arrays only contain currently\n\"live\" entries. I.e. that the first procArray->numProcs entries in each\narray have the data for all procs (including prepared xacts) that are\n\"connected\". This is extending the concept that already existed for\nprocArray->pgprocnos.\n\nWhereas the PGPROC/PGXACT arrays have \"unused\" entries interspersed.\n\nThis is what previously led to the slow loop in GetSnapshotData(),\nwhere we had to iterate over PGXACTs over an indirection in\nprocArray->pgprocnos. I.e. 
to only look at in-use PGXACTs we had to go\nthrough allProcs[arrayP->pgprocnos[i]], which is, uh, suboptimal for\na performance critical routine holding a central lock.\n\nI'll try to expand the comments around dense, but let me know if you\nhave a better descriptor.\n\n\n> Maybe the comments discussing how it is \"dense\" need to be a little\n> more precise about this.\n>\n> + for (int i = 0; i < nxids; i++)\n>\n> I miss my C89. Yeah, it's just me.\n\nOh, dear god. I hate declaring variables like 'i' on function scope. The\nbug that haunted me the longest in the development of this patch was in\nXidCacheRemoveRunningXids, where there are both i and j, and a macro\nXidCacheRemove(i), but the macro gets passed j as i.\n\n\n> - if (!suboverflowed)\n> + if (suboverflowed)\n> + continue;\n> +\n>\n> Do we really need to do this kind of diddling in this patch? I mean\n> yes to the idea, but no to things that are going to make it harder to\n> understand what happened if this blows up.\n\nI can try to reduce those differences. Given the rest of the changes it\ndidn't seem likely to matter. I found it hard to keep the branches\nnesting in my head when seeing:\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\n\n> + uint32 TotalProcs = MaxBackends + NUM_AUXILIARY_PROCS + max_prepared_xacts;\n>\n> /* ProcGlobal */\n> size = add_size(size, sizeof(PROC_HDR));\n> - /* MyProcs, including autovacuum workers and launcher */\n> - size = add_size(size, mul_size(MaxBackends, sizeof(PGPROC)));\n> - /* AuxiliaryProcs */\n> - size = add_size(size, mul_size(NUM_AUXILIARY_PROCS, sizeof(PGPROC)));\n> - /* Prepared xacts */\n> - size = add_size(size, mul_size(max_prepared_xacts, sizeof(PGPROC)));\n> - /* ProcStructLock */\n> + size = add_size(size, mul_size(TotalProcs, sizeof(PGPROC)));\n>\n> This seems like a bad idea. 
If we establish a precedent that it's OK\n> to have sizing routines that don't use add_size() and mul_size(),\n> people are going to cargo cult that into places where there is more\n> risk of overflow than there is here.\n\nHm. I'm not sure I see the problem. Are you concerned that TotalProcs\nwould overflow due to too big MaxBackends or max_prepared_xacts? The\nmultiplication itself is still protected by add_size(). It didn't seem\ncorrect to use add_size for the TotalProcs addition, since that's not\nreally a size. And since the limit for procs is much lower than\nUINT32_MAX...\n\nIt seems worse to add a separate add_size calculation for each type of\nproc entry, for each of the individual arrays. We'd end up with\n\n\tsize = add_size(size, sizeof(PROC_HDR));\n\tsize = add_size(size, mul_size(MaxBackends, sizeof(PGPROC)));\n\tsize = add_size(size, mul_size(NUM_AUXILIARY_PROCS, sizeof(PGPROC)));\n\tsize = add_size(size, mul_size(max_prepared_xacts, sizeof(PGPROC)));\n\tsize = add_size(size, sizeof(slock_t));\n\n\tsize = add_size(size, mul_size(MaxBackends, sizeof(*ProcGlobal->xids)));\n\tsize = add_size(size, mul_size(NUM_AUXILIARY_PROCS, sizeof(*ProcGlobal->xids)));\n\tsize = add_size(size, mul_size(max_prepared_xacts, sizeof(*ProcGlobal->xids)));\n\tsize = add_size(size, mul_size(MaxBackends, sizeof(*ProcGlobal->subxidStates)));\n\tsize = add_size(size, mul_size(NUM_AUXILIARY_PROCS, sizeof(*ProcGlobal->subxidStates)));\n\tsize = add_size(size, mul_size(max_prepared_xacts, sizeof(*ProcGlobal->subxidStates)));\n\tsize = add_size(size, mul_size(MaxBackends, sizeof(*ProcGlobal->vacuumFlags)));\n\tsize = add_size(size, mul_size(NUM_AUXILIARY_PROCS, sizeof(*ProcGlobal->vacuumFlags)));\n\tsize = add_size(size, mul_size(max_prepared_xacts, sizeof(*ProcGlobal->vacuumFlags)));\n\ninstead of\n\n\tsize = add_size(size, sizeof(PROC_HDR));\n\tsize = add_size(size, mul_size(TotalProcs, 
sizeof(PGPROC)));\n\tsize = add_size(size, sizeof(slock_t));\n\n\tsize = add_size(size, mul_size(TotalProcs, sizeof(*ProcGlobal->xids)));\n\tsize = add_size(size, mul_size(TotalProcs, sizeof(*ProcGlobal->subxidStates)));\n\tsize = add_size(size, mul_size(TotalProcs, sizeof(*ProcGlobal->vacuumFlags)));\n\nwhich seems clearly worse.\n\n\n> You've got a bunch of different places that talk about the new PGXACT\n> array and they are somewhat redundant yet without saying exactly the\n> same thing every time either. I think that needs cleanup.\n\nCould you point out a few of those comments? I'm not entirely sure which\nyou're talking about.\n\n\n> One thing I didn't see is any clear discussion of what happens if the\n> two copies of the XID information don't agree with each other.\n\nIt should never happen. There are asserts that try to ensure that. For the\nxid-less case:\n\nProcArrayEndTransaction(PGPROC *proc, TransactionId latestXid)\n...\n\t\tAssert(!TransactionIdIsValid(proc->xidCopy));\n\t\tAssert(proc->subxidStatusCopy.count == 0);\nand for the case of having an xid:\n\nProcArrayEndTransactionInternal(PGPROC *proc, TransactionId latestXid)\n...\n\tAssert(ProcGlobal->xids[pgxactoff] == proc->xidCopy);\n...\n\tAssert(ProcGlobal->subxidStates[pgxactoff].count == proc->subxidStatusCopy.count &&\n\t\t ProcGlobal->subxidStates[pgxactoff].overflowed == proc->subxidStatusCopy.overflowed);\n\n\n> That should be added someplace, either in an appropriate code comment\n> or in a README or something. I *think* both are protected by the same\n> locks, but there's also some unlocked access to those structure\n> members, so it's not entirely a slam dunk.\n\nHm. I considered who is allowed to modify those, and when, to really be\ncovered by the existing comments in transam/README. 
In particular in the\n\"Interlocking Transaction Begin, Transaction End, and Snapshots\"\nsection.\n\nDo you think that a comment explaining that the *Copy version has to be\nkept up2date at all times (except when not yet added with ProcArrayAdd)\nwould ameliorate that concern? \n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Apr 2020 12:24:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "0008 -\n\nHere again, I greatly dislike putting Copy in the name. It makes\nlittle sense to pretend that either is the original and the other is\nthe copy. You just have the same data in two places. If one of them is\nmore authoritative, the place to explain that is in the comments, not\nby elongating the structure member name and supposing anyone will be\nable to make something of that.\n\n+ *\n+ * XXX: That's why this is using vacuumFlagsCopy.\n\nI am not sure there's any problem with the code that needs fixing\nhere, so I might think about getting rid of this XXX. But this gets\nback to my complaint about the locking regime being unclear. What I\nthink you need to do here is rephrase the previous paragraph so that\nit explains the reason for using this copy a bit better. Like \"We read\nthe copy of vacuumFlags from PGPROC rather than visiting the copy\nattached to ProcGlobal because we can do that without taking a lock.\nSee fuller explanation in <place>.\" Or whatever.\n\n0009, 0010 -\n\nI think you've got this whole series of things divided up too finely.\nLike, 0005 feels like the meat of it, and that has a bunch of things\nin it that could plausibly be separated out as separate commits. 0007\nalso seems to do more than one kind of thing (see my comment regarding\nmoving some of that into 0006). 
But whacking everything around like a\ncrazy man in 0005 and a little more in 0007 and then doing the\nfollowing cleanup in these little tiny steps seems pretty lame.\nSeparating 0009 from 0010 is maybe the clearest example of that, but\nIMHO it's pretty unclear why both of these shouldn't be merged with\n0008.\n\nTo be clear, I exaggerate for effect. 0005 is not whacking everything\naround like a crazy man. But it is a non-minimal patch, whereas I\nconsider 0009 and 0010 to be sub-minimal.\n\nMy comments on the Copy naming apply here as well. I am also starting\nto wonder why exactly we need two copies of all this stuff. Perhaps\nI've just failed to absorb the idea for having read the patch too\nbriefly, but I think that we need to make sure that it's super-clear\nwhy we're doing that. If we just needed it for one field because\n$REASONS, that would be one thing, but if we need it for all of them\nthen there must be some underlying principle here that needs a good\nexplanation in an easy-to-find and centrally located place.\n\n0011 -\n\n+ * Number of top-level transactions that completed in some form since the\n+ * start of the server. This currently is solely used to check whether\n+ * GetSnapshotData() needs to recompute the contents of the snapshot, or\n+ * not. There are likely other users of this. Always above 1.\n\nDoes it only count XID-bearing transactions? If so, best mention that.\n\n+ * transactions completed since the last GetSnapshotData()..\n\nToo many periods.\n\n+ /* Same with CSN */\n+ ShmemVariableCache->xactCompletionCount++;\n\nIf I didn't know that CSN stood for commit sequence number from\nreading years of mailing list traffic, I'd be lost here. So I think\nthis comment shouldn't use that term.\n\n+GetSnapshotDataFillTooOld(Snapshot snapshot)\n\nUh... no clue what's going on here. 
Granted the code had no comments\nin the old place either, so I guess it's not worse, but even the name\nof the new function is pretty incomprehensible.\n\n+ * Helper function for GetSnapshotData() that check if the bulk of the\n\nchecks\n\n+ * the fields that need to change and returns true. false is returned\n+ * otherwise.\n\nOtherwise, it returns false.\n\n+ * It is safe to re-enter the snapshot's xmin. This can't cause xmin to go\n\nI know what it means to re-enter a building, but I don't know what it\nmeans to re-enter the snapshot's xmin.\n\nThis whole comment seems a bit murky.\n\n...Robert\n\n\n", "msg_date": "Tue, 7 Apr 2020 15:26:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-04-07 14:51:52 -0400, Robert Haas wrote:\n> On Tue, Apr 7, 2020 at 2:28 PM Andres Freund <andres@anarazel.de> wrote:\n> > Does that make some sense? Do you have a better suggestion for a name?\n> \n> I think it makes sense. I have two basic problems with the name. The\n> first is that \"on disk\" doesn't seem to be a very clear way of\n> describing what you're actually checking here, and it definitely\n> doesn't refer to an existing concept which sophisticated hackers can\n> be expected to understand. The second is that \"may\" is ambiguous in\n> English: it can either mean that something is permissible (\"Johnny,\n> you may go to the bathroom\") or that we do not have certain knowledge\n> of it (\"Johnny may be in the bathroom\"). When it is followed by \"be\",\n> it usually has the latter sense, although there are exceptions (e.g.\n> \"She may be discharged from the hospital today if she wishes, but we\n> recommend that she stay for another day\"). Consequently, I found that\n> use of \"may be\" in this context wicked confusing.\n\nWell, it *is* only a vague test :). 
It shouldn't ever have a false\npositive, but there's plenty of chance for false negatives (if wrapped\naround far enough).\n\n\n> So I suggest a name with \"Is\" or no verb, rather than one with\n> \"MayBe.\" And I suggest something else instead of \"OnDisk,\" e.g.\n> AssertTransactionIdIsInUsableRange() or\n> TransactionIdIsInAllowableRange() or\n> AssertTransactionIdWraparoundProtected(). I kind of like that last\n> one, but YMMV.\n\nMakes sense - but they all seem to express a bit more certainty than I\nthink the test actually provides.\n\nI explicitly did not want (and added a comment to that effect) to have\nsomething like TransactionIdIsInAllowableRange(), because there never\ncan be a safe use of its return value, as far as I can tell.\n\nThe \"OnDisk\" was intended to clarify that the range it verifies is\nwhether it'd be ok for the xid to have been found in a relation.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Apr 2020 12:30:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-04-07 15:03:46 -0400, Robert Haas wrote:\n> On Tue, Apr 7, 2020 at 1:51 PM Andres Freund <andres@anarazel.de> wrote:\n> > > ComputedHorizons seems like a fairly generic name. I think there's\n> > > some relationship between InvisibleToEveryoneState and\n> > > ComputedHorizons that should be brought out more clearly by the naming\n> > > and the comments.\n> >\n> > I don't like the naming of ComputedHorizons, ComputeTransactionHorizons\n> > much... But I find it hard to come up with something that's meaningfully\n> > better.\n> \n> It would help to stick XID in there, like ComputedXIDHorizons. What I\n> find really baffling is that you seem to have two structures in the\n> same file that have essentially the same purpose, but the second one\n> (ComputedHorizons) has a lot more stuff in it. 
I can't understand why.\n\nComputedHorizons are the various \"accurate\" horizons computed by\nComputeTransactionHorizons(). That's used to determine a horizon for\nvacuuming (via GetOldestVisibleTransactionId()) and other similar use\ncases.\n\nThe various InvisibleToEveryoneState variables contain the boundary\nbased horizons, and are updated / initially filled by\nGetSnapshotData(). When a tested value falls between the boundaries,\nwe update the approximate boundaries using\nComputeTransactionHorizons(). That briefly makes the boundaries in\nthe InvisibleToEveryoneState accurate - but future GetSnapshotData()\ncalls will increase the definitely_needed_bound (if transactions\ncommitted since).\n\nThe ComputedHorizons fields could instead just be pointer based\narguments to ComputeTransactionHorizons(), but that seems clearly\nworse.\n\nI'll change ComputedHorizons's comment to say that it's the result of\nComputeTransactionHorizons(), not the \"state\".\n\n\n> > What's the inconsistency? The dropped replication_ vs dropped FullXid\n> > postfix?\n> \n> Yeah, just having the member names be randomly different between the\n> structs. Really harms greppability.\n\nThe long names make it hard to keep line lengths in control, in\nparticular when also involving the long macro names for TransactionId /\nFullTransactionId comparators...\n\n\n> > > Generally, heap_prune_satisfies_vacuum looks pretty good. The\n> > > limited_oldest_committed naming is confusing, but the comments make it\n> > > a lot clearer.\n> >\n> > I didn't like _committed much either. But couldn't come up with\n> > something short. 
_relied_upon?\n> \n> oldSnapshotLimitUsed or old_snapshot_limit_used, like currentCommandIdUsed?\n\nWill go for old_snapshot_limit_used, and rename the other variables to\nold_snapshot_limit_ts, old_snapshot_limit_xmin.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Apr 2020 12:43:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Tue, Apr 7, 2020 at 3:31 PM Andres Freund <andres@anarazel.de> wrote:\n> Well, it *is* only a vague test :). It shouldn't ever have a false\n> positive, but there's plenty chance for false negatives (if wrapped\n> around far enough).\n\nSure, but I think you get my point. Asserting that something \"might\nbe\" true isn't much of an assertion. Saying that it's in the correct\nrange is not to say there can't be a problem - but we're saying that\nit IS in the expect range, not that it may or may not be.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 7 Apr 2020 16:08:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Tue, Apr 7, 2020 at 3:24 PM Andres Freund <andres@anarazel.de> wrote:\n> > 0007 -\n> >\n> > + TransactionId xidCopy; /* this backend's xid, a copy of this proc's\n> > + ProcGlobal->xids[] entry. */\n> >\n> > Can we please NOT put Copy into the name like that? Pretty please?\n>\n> Do you have a suggested naming scheme? Something indicating that it's\n> not the only place that needs to be updated?\n\nI don't think trying to indicate that in the structure member names is\na useful idea. 
I think you should give them the same names, maybe with\nan \"s\" to pluralize the copy hanging off of ProcGlobal, and put a\ncomment that says something like:\n\nWe keep two copies of each of the following three fields. One copy is\nhere in the PGPROC, and the other is in a more densely-packed array\nhanging off of PGXACT. Both copies of the value must always be updated\nat the same time and under the same locks, so that it is always the\ncase that MyProc->xid == ProcGlobal->xids[MyProc->pgprocno] and\nsimilarly for vacuumFlags and WHATEVER. Note, however, that the arrays\nattached to ProcGlobal only contain entries for PGPROC structures that\nare currently part of the ProcArray (i.e. there is currently a backend\nfor that PGPROC). We use those arrays when STUFF and the copies in the\nindividual PGPROC when THINGS.\n\n> I think it's more on-point here, because we need to hold either of the\n> locks* even, for changes to a backend's own status that one reasonably\n> could expect would be safe to at least inspect.\n\nIt's just too brief and obscure to be useful.\n\n> > + ProcGlobal->xids[pgxactoff] = InvalidTransactionId;\n> >\n> > Apparently this array is not dense in the sense that it excludes\n> > unused slots, but comments elsewhere don't seem to entirely agree.\n>\n> What do you mean with \"unused slots\"? Backends that committed?\n\nBackends that have no XID. You mean, I guess, that it is \"dense\" in\nthe sense that only live backends are in there, not \"dense\" in the\nsense that only active write transactions are in there. 
It would be\nnice to nail that down better; the wording I suggested above might\nhelp.\n\n> > + uint32 TotalProcs = MaxBackends + NUM_AUXILIARY_PROCS + max_prepared_xacts;\n> >\n> > /* ProcGlobal */\n> > size = add_size(size, sizeof(PROC_HDR));\n> > - /* MyProcs, including autovacuum workers and launcher */\n> > - size = add_size(size, mul_size(MaxBackends, sizeof(PGPROC)));\n> > - /* AuxiliaryProcs */\n> > - size = add_size(size, mul_size(NUM_AUXILIARY_PROCS, sizeof(PGPROC)));\n> > - /* Prepared xacts */\n> > - size = add_size(size, mul_size(max_prepared_xacts, sizeof(PGPROC)));\n> > - /* ProcStructLock */\n> > + size = add_size(size, mul_size(TotalProcs, sizeof(PGPROC)));\n> >\n> > This seems like a bad idea. If we establish a precedent that it's OK\n> > to have sizing routines that don't use add_size() and mul_size(),\n> > people are going to cargo cult that into places where there is more\n> > risk of overflow than there is here.\n>\n> Hm. I'm not sure I see the problem. Are you concerned that TotalProcs\n> would overflow due to too big MaxBackends or max_prepared_xacts? The\n> multiplication itself is still protected by add_size(). It didn't seem\n> correct to use add_size for the TotalProcs addition, since that's not\n> really a size. And since the limit for procs is much lower than\n> UINT32_MAX...\n\nI'm concerned that there are 0 uses of add_size in any shared-memory\nsizing function, and I think it's best to keep it that way. If you\ninitialize TotalProcs = add_size(MaxBackends,\nadd_size(NUM_AUXILIARY_PROCS, max_prepared_xacts)) then I'm happy. I\nthink it's a desperately bad idea to imagine that we can dispense with\noverflow checks here and be safe. 
It's just too easy for that to\nbecome false due to future code changes, or get copied to other places\nwhere it's unsafe now.\n\n> > You've got a bunch of different places that talk about the new PGXACT\n> > array and they are somewhat redundant yet without saying exactly the\n> > same thing every time either. I think that needs cleanup.\n>\n> Could you point out a few of those comments, I'm not entirely sure which\n> you're talking about?\n\n+ /*\n+ * Also allocate a separate arrays for data that is frequently (e.g. by\n+ * GetSnapshotData()) accessed from outside a backend. There is one entry\n+ * in each for every *live* PGPROC entry, and they are densely packed so\n+ * that the first procArray->numProc entries are all valid. The entries\n+ * for a PGPROC in those arrays are at PGPROC->pgxactoff.\n+ *\n+ * Note that they may not be accessed without ProcArrayLock held! Upon\n+ * ProcArrayRemove() later entries will be moved.\n+ *\n+ * These are separate from the main PGPROC array so that the most heavily\n+ * accessed data is stored contiguously in memory in as few cache lines as\n+ * possible. This provides significant performance benefits, especially on\n+ * a multiprocessor system.\n+ */\n\n+ * Arrays with per-backend information that is hotly accessed, indexed by\n+ * PGPROC->pgxactoff. These are in separate arrays for three reasons:\n+ * First, to allow for as tight loops accessing the data as\n+ * possible. Second, to prevent updates of frequently changing data from\n+ * invalidating cachelines shared with less frequently changing\n+ * data. Third to condense frequently accessed data into as few cachelines\n+ * as possible.\n\n+ *\n+ * The various *Copy fields are copies of the data in ProcGlobal arrays that\n+ * can be accessed without holding ProcArrayLock / XidGenLock (see PROC_HDR\n+ * comments).\n\n+ * Adding/Removing an entry into the procarray requires holding *both*\n+ * ProcArrayLock and XidGenLock in exclusive mode (in that order). 
Both are\n+ * needed because the dense arrays (see below) are accessed from\n+ * GetNewTransactionId() and GetSnapshotData(), and we don't want to add\n+ * further contention by both using one lock. Adding/Removing a procarray\n+ * entry is much less frequent.\n\nI'm not saying these are all entirely redundant with each other;\nthat's not so. But I don't think it gives a terribly clear grasp of\nthe overall picture either, even taking all of them together.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 7 Apr 2020 16:13:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-04-07 15:26:36 -0400, Robert Haas wrote:\n> 0008 -\n> \n> Here again, I greatly dislike putting Copy in the name. It makes\n> little sense to pretend that either is the original and the other is\n> the copy. You just have the same data in two places. If one of them is\n> more authoritative, the place to explain that is in the comments, not\n> by elongating the structure member name and supposing anyone will be\n> able to make something of that.\n\nOk.\n\n\n\n> 0009, 0010 -\n> \n> I think you've got this whole series of things divided up too finely.\n> Like, 0005 feels like the meat of it, and that has a bunch of things\n> in it that could plausible be separated out as separate commits. 0007\n> also seems to do more than one kind of thing (see my comment regarding\n> moving some of that into 0006). But whacking everything around like a\n> crazy man in 0005 and a little more in 0007 and then doing the\n> following cleanup in these little tiny steps seems pretty lame.\n> Separating 0009 from 0010 is maybe the clearest example of that, but\n> IMHO it's pretty unclear why both of these shouldn't be merged with\n> 0008.\n\nI found it a *lot* easier to review / evolve them this way. I e.g. 
had\nan earlier version in which the subxid part of the change worked\nsubstantially differently (it tried to elide the overflowed bool, by\ndefining -1 as the indicator for overflows), and it'd have been way harder\nto change that if I didn't have a patch with *just* the subxid changes.\n\nI'd not push them separated by time, but I do think it'd make sense to\npush them as separate commits. I think it's easier to review them in\ncase of a bug in a separate area.\n\n\n> My comments on the Copy naming apply here as well. I am also starting\n> to wonder why exactly we need two copies of all this stuff. Perhaps\n> I've just failed to absorb the idea for having read the patch too\n> briefly, but I think that we need to make sure that it's super-clear\n> why we're doing that. If we just needed it for one field because\n> $REASONS, that would be one thing, but if we need it for all of them\n> then there must be some underlying principle here that needs a good\n> explanation in an easy-to-find and centrally located place.\n\nThe main reason is that we want to be able to cheaply check the current\nstate of the variables (mostly when checking a backend's own state). We\ncan't access the \"dense\" ones without holding a lock, but we e.g. don't\nwant to make ProcArrayEndTransactionInternal() take a lock just to check\nif vacuumFlags is set.\n\nIt turns out to also be good for performance to have the copy for\nanother reason: The \"dense\" arrays share cachelines with other\nbackends. That's worth it because it allows to make GetSnapshotData(),\nby far the most frequent operation, touch fewer cache lines. But it also\nmeans that it's more likely that a backend's \"dense\" array entry isn't\nin a local cpu cache (it'll be pulled out of there when modified in\nanother backend). In many cases we don't need the shared entry at commit\netc time though, we just need to check if it is set - and most of the\ntime it won't be. 
The local entry allows to do that cheaply.\n\nBasically it makes sense to access the PGPROC variable when checking a\nsingle backend's data, especially when we have to look at the PGPROC for\nother reasons already. It makes sense to look at the \"dense\" arrays if\nwe need to look at many / most entries, because we then benefit from the\nreduced indirection and better cross-process cacheability.\n\n\n> 0011 -\n> \n> + * Number of top-level transactions that completed in some form since the\n> + * start of the server. This currently is solely used to check whether\n> + * GetSnapshotData() needs to recompute the contents of the snapshot, or\n> + * not. There are likely other users of this. Always above 1.\n> \n> Does it only count XID-bearing transactions? If so, best mention that.\n\nOh, good point.\n\n\n> +GetSnapshotDataFillTooOld(Snapshot snapshot)\n> \n> Uh... no clue what's going on here. Granted the code had no comments\n> in the old place either, so I guess it's not worse, but even the name\n> of the new function is pretty incomprehensible.\n\nIt fills the old_snapshot_threshold fields of a Snapshot.\n\n\n> + * It is safe to re-enter the snapshot's xmin. This can't cause xmin to go\n> \n> I know what it means to re-enter a building, but I don't know what it\n> means to re-enter the snapshot's xmin.\n\nRe-entering it into the procarray, thereby preventing rows that the\nsnapshot could see from being removed.\n\n> This whole comment seems a bit murky.\n\nHow about:\n\t/*\n\t * If the current xactCompletionCount is still the same as it was at the\n\t * time the snapshot was built, we can be sure that rebuilding the\n\t * contents of the snapshot the hard way would result in the same snapshot\n\t * contents:\n\t *\n\t * As explained in transam/README, the set of xids considered running by\n\t * GetSnapshotData() cannot change while ProcArrayLock is held. 
Snapshot\n\t * contents only depend on transactions with xids and xactCompletionCount\n\t * is incremented whenever a transaction with an xid finishes (while\n\t * holding ProcArrayLock exclusively). Thus the xactCompletionCount check\n\t * ensures we would detect if the snapshot would have changed.\n\t *\n\t * As the snapshot contents are the same as they were before, it is safe\n\t * to re-enter the snapshot's xmin into the PGPROC array. None of the rows\n\t * visible under the snapshot could already have been removed (that'd\n\t * require the set of running transactions to change) and it fulfills the\n\t * requirement that concurrent GetSnapshotData() calls yield the same\n\t * xmin.\n\t */\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Apr 2020 13:27:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-04-07 16:13:07 -0400, Robert Haas wrote:\n> On Tue, Apr 7, 2020 at 3:24 PM Andres Freund <andres@anarazel.de> wrote:\n> > > + ProcGlobal->xids[pgxactoff] = InvalidTransactionId;\n> > >\n> > > Apparently this array is not dense in the sense that it excludes\n> > > unused slots, but comments elsewhere don't seem to entirely agree.\n> >\n> > What do you mean with \"unused slots\"? Backends that committed?\n> \n> Backends that have no XID. You mean, I guess, that it is \"dense\" in\n> the sense that only live backends are in there, not \"dense\" in the\n> sense that only active write transactions are in there.\n\nCorrect.\n\nI tried the \"only active write transaction\" approach, btw, and had a\nhard time making it scale well (due to the much more frequent moving of\nentries at commit/abort time). 
If we were to go to a 'only active\ntransactions' array at some point we'd imo still need pretty much all\nthe other changes made here - so I'm not worried about it for now.\n\n\n> > > + uint32 TotalProcs = MaxBackends + NUM_AUXILIARY_PROCS + max_prepared_xacts;\n> > >\n> > > /* ProcGlobal */\n> > > size = add_size(size, sizeof(PROC_HDR));\n> > > - /* MyProcs, including autovacuum workers and launcher */\n> > > - size = add_size(size, mul_size(MaxBackends, sizeof(PGPROC)));\n> > > - /* AuxiliaryProcs */\n> > > - size = add_size(size, mul_size(NUM_AUXILIARY_PROCS, sizeof(PGPROC)));\n> > > - /* Prepared xacts */\n> > > - size = add_size(size, mul_size(max_prepared_xacts, sizeof(PGPROC)));\n> > > - /* ProcStructLock */\n> > > + size = add_size(size, mul_size(TotalProcs, sizeof(PGPROC)));\n> > >\n> > > This seems like a bad idea. If we establish a precedent that it's OK\n> > > to have sizing routines that don't use add_size() and mul_size(),\n> > > people are going to cargo cult that into places where there is more\n> > > risk of overflow than there is here.\n> >\n> > Hm. I'm not sure I see the problem. Are you concerned that TotalProcs\n> > would overflow due to too big MaxBackends or max_prepared_xacts? The\n> > multiplication itself is still protected by add_size(). It didn't seem\n> > correct to use add_size for the TotalProcs addition, since that's not\n> > really a size. 
And since the limit for procs is much lower than\n> > UINT32_MAX...\n> \n> I'm concerned that there are 0 uses of add_size in any shared-memory\n> sizing function, and I think it's best to keep it that way.\n\nI can't make sense of that sentence?\n\n\nWe already have code like this, and have for a long time:\n\t/* Size of the ProcArray structure itself */\n#define PROCARRAY_MAXPROCS\t(MaxBackends + max_prepared_xacts)\n\nadding NUM_AUXILIARY_PROCS doesn't really change that, does it?\n\n\n> If you initialize TotalProcs = add_size(MaxBackends,\n> add_size(NUM_AUXILIARY_PROCS, max_prepared_xacts)) then I'm happy.\n\nWill do.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Apr 2020 13:42:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi\n\nOn 2020-04-07 05:15:03 -0700, Andres Freund wrote:\n> SEE BELOW: What, and what not, to do for v13.\n>\n> [ description of changes ]\n> \n> I think this is pretty close to being committable.\n> \n> But: This patch came in very late for v13, and it took me much longer to\n> polish it up than I had hoped (partially distraction due to various bugs\n> I found (in particular snapshot_too_old), partially covid19, partially\n> \"hell if I know\"). The patchset touches core parts of the system. While\n> both Thomas and David have done some review, they haven't for the latest\n> version (mea culpa).\n> \n> In many other instances I would say that the above suggests slipping to\n> v14, given the timing.\n> \n> The main reason I am considering pushing is that I think this patchset\n> addresses one of the most common critiques of postgres, as well as very\n> common, hard to fix, real-world production issues. 
GetSnapshotData() has\n> been a major bottleneck for about as long as I have been using postgres,\n> and this addresses that to a significant degree.\n> \n> A second reason I am considering it is that, in my opinion, the changes\n> are not all that complicated and not even that large. At least not for a\n> change to a problem that we've long tried to improve.\n> \n> \n> Obviously we all have a tendency to think our own work is important, and\n> that we deserve a bit more leeway than others. So take the above with a\n> grain of salt.\n\nI tried hard, but came up short. It's 5 AM, and I am still finding\ncomments that aren't quite right. For a while I thought I'd be pushing a\nfew hours ... And even if it were ready now: This is too large a patch\nto push this tired (but damn, I'd love to).\n\nUnfortunately addressing Robert's comments made me realize I didn't like\nsome of my own naming. In particular I started to dislike\nInvisibleToEveryone, and some of the procarray.c variables around\n\"visible\". After trying about half a dozen schemes I think I found\nsomething that makes some sense, although I am still not perfectly\nhappy.\n\nI think the attached set of patches addresses most of Robert's review\ncomments, minus a few minor quibbles where I thought he was wrong\n(fundamentally wrong of course). There are no *Copy fields in PGPROC\nanymore, there's a lot more comments above PROC_HDR (not duplicated\nelsewhere). I've reduced the interspersed changes to GetSnapshotData()\nso those can be done separately.\n\nThere's also somewhat meaningful commit messages now. 
But\n snapshot scalability: Move in-progress xids to ProcGlobal->xids[].\nneeds to be expanded to mention the changed locking requirements.\n\n\nRealistically it still 2-3 hours of proof-reading.\n\n\nThis makes me sad :(", "msg_date": "Wed, 8 Apr 2020 05:43:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Wed, Apr 8, 2020 at 3:43 PM Andres Freund <andres@anarazel.de> wrote:\n> Realistically it still 2-3 hours of proof-reading.\n>\n> This makes me sad :(\n\nCan we ask RMT to extend feature freeze for this particular patchset?\nI think it's reasonable assuming extreme importance of this patchset.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 8 Apr 2020 15:59:50 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Tue, Apr 7, 2020 at 4:27 PM Andres Freund <andres@anarazel.de> wrote:\n> The main reason is that we want to be able to cheaply check the current\n> state of the variables (mostly when checking a backend's own state). We\n> can't access the \"dense\" ones without holding a lock, but we e.g. don't\n> want to make ProcArrayEndTransactionInternal() take a lock just to check\n> if vacuumFlags is set.\n>\n> It turns out to also be good for performance to have the copy for\n> another reason: The \"dense\" arrays share cachelines with other\n> backends. That's worth it because it allows to make GetSnapshotData(),\n> by far the most frequent operation, touch fewer cache lines. But it also\n> means that it's more likely that a backend's \"dense\" array entry isn't\n> in a local cpu cache (it'll be pulled out of there when modified in\n> another backend). 
In many cases we don't need the shared entry at commit\n> etc time though, we just need to check if it is set - and most of the\n> time it won't be. The local entry allows to do that cheaply.\n>\n> Basically it makes sense to access the PGPROC variable when checking a\n> single backend's data, especially when we have to look at the PGPROC for\n> other reasons already. It makes sense to look at the \"dense\" arrays if\n> we need to look at many / most entries, because we then benefit from the\n> reduced indirection and better cross-process cacheability.\n\nThat's a good explanation. I think it should be in the comments or a\nREADME somewhere.\n\n> How about:\n> /*\n> * If the current xactCompletionCount is still the same as it was at the\n> * time the snapshot was built, we can be sure that rebuilding the\n> * contents of the snapshot the hard way would result in the same snapshot\n> * contents:\n> *\n> * As explained in transam/README, the set of xids considered running by\n> * GetSnapshotData() cannot change while ProcArrayLock is held. Snapshot\n> * contents only depend on transactions with xids and xactCompletionCount\n> * is incremented whenever a transaction with an xid finishes (while\n> * holding ProcArrayLock exclusively). Thus the xactCompletionCount check\n> * ensures we would detect if the snapshot would have changed.\n> *\n> * As the snapshot contents are the same as before, it is safe\n> * to re-enter the snapshot's xmin into the PGPROC array. 
None of the rows\n> * visible under the snapshot could already have been removed (that'd\n> * require the set of running transactions to change) and it fulfills the\n> * requirement that concurrent GetSnapshotData() calls yield the same\n> * xmin.\n> */\n\nThat's nice and clear.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Apr 2020 09:24:13 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 4/8/20 8:59 AM, Alexander Korotkov wrote:\n> On Wed, Apr 8, 2020 at 3:43 PM Andres Freund <andres@anarazel.de> wrote:\n>> Realistically it still 2-3 hours of proof-reading.\n>>\n>> This makes me sad :(\n> \n> Can we ask RMT to extend feature freeze for this particular patchset?\n> I think it's reasonable assuming extreme importance of this patchset.\n\nOne of the features of RMT responsibilities[1] is to be \"hands off\" as\nmuch as possible, so perhaps a reverse ask: how would people feel about\nthis patch going into PG13, knowing that the commit would come after the\nfeature freeze date?\n\nMy 2¢, with RMT hat off:\n\nAs mentioned earlier[2], we know that connection scalability is a major\npain point with PostgreSQL and any effort that can help alleviate that\nis a huge win, even in incremental gains. Andres et al experimentation\nshow that this is more than incremental gains, and will certainly make a\nhuge difference in people's PostgreSQL experience. It is one of those\nfeatures where you can \"plug in and win\" -- you get a performance\nbenefit just by upgrading. That is not insignificant.\n\nHowever, I also want to ensure that we are fair: in the past there have\nalso been other patches that have been \"oh-so-close\" to commit before\nfeature freeze but have not made it in (an example escapes me at the\nmoment). 
Therefore, we really need to have consensus among ourselves\nthat the right decision is to allow this to go in after feature freeze.\n\nDid this come in (very) late into the development cycle? Yes, and I\nthink normally that's enough to give cause for pause. But I could also\nargue that Andres is fixing a \"bug\" with PostgreSQL (probably several\nbugs ;) with PostgreSQL -- and perhaps the fixes can't be backpatched\nper se, but they do improve the overall stability and usability of\nPostgreSQL and it'd be a shame if we have to wait on them.\n\nLastly, with the ongoing world events, perhaps time that could have been\ndedicated to this and other patches likely affected their completion. I\nknow most things in my life take way longer than they used to (e.g.\ntaking out the trash/recycles has gone from a 15s to 240s routine). The\nsame could be said about other patches as well, but this one has a far\ngreater impact (a double-edged sword, of course) given it's a feature\nthat everyone uses in PostgreSQL ;)\n\nSo with my RMT hat off, I say +1 to allowing this post feature freeze,\nthough within a reasonable window.\n\nMy 2¢, with RMT hat on:\n\nI believe in[2] I outlined a way a path for including the patch even at\nthis stage in the game. If it is indeed committed, I think we\nimmediately put it on the \"Recheck a mid-Beta\" list. I know it's not as\ntrivial to change as something like \"Determine if jit=\"on\" by default\"\n(not picking on Andres, I just remember that example from RMT 11), but\nit at least provides a notable reminder that we need to ensure we test\nthis thoroughly, and point people to really hammer it during beta.\n\nSo with my RMT hat on, I say +0 but with a ;)\n\nThanks,\n\nJonathan\n\n[1] https://wiki.postgresql.org/wiki/Release_Management_Team#History\n[2]\nhttps://www.postgresql.org/message-id/6be8c321-68ea-a865-d8d0-50a3af616463%40postgresql.org", "msg_date": "Wed, 8 Apr 2020 09:26:42 -0400", "msg_from": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Wed, Apr 8, 2020 at 9:27 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> One of the features of RMT responsibilities[1] is to be \"hands off\" as\n> much as possible, so perhaps a reverse ask: how would people feel about\n> this patch going into PG13, knowing that the commit would come after the\n> feature freeze date?\n\nLetting something be committed after feature freeze, or at any other\ntime, is just a risk vs. reward trade-off. Every patch carries some\nchance of breaking stuff or making things worse. And every patch has a\nchance of making something better that people care about.\n\nOn general principle, I would categorize this as a moderate-risk\npatch. It doesn't change SQL syntax like, e.g. MERGE, nor does it\ntouch the on-disk format, like, e.g. INSERT .. ON CONFLICT UPDATE. The\nchanges are relatively localized, unlike, e.g. parallel query. Those\nare all things that reduce risk. On the other hand, it's a brand new\npatch which has not been thoroughly reviewed by anyone. Moreover,\nshakedown time will be minimal because we're so late in the release\ncycle. If it has subtle synchronization problems or if it regresses\nperformance badly in some cases, we might not find out about any of\nthat until after release. While in theory we could revert it any time,\nsince no SQL syntax or on-disk format is affected, in practice it will\nbe difficult to do that if it's making life better for some people and\nworse for others.\n\nI don't know what the right thing to do is. I agree with everyone who\nsays this is a very important problem, and I have the highest respect\nfor Andres's technical ability. 
On the other hand, I have been around\nhere long enough to know that deciding whether to allow late commits\non the basis of how much we like the feature is a bad plan, because it\ntakes into account only the upside of a commit, and ignores the\npossible downside risk. Typically, the commit is late because the\nfeature was rushed to completion at the last minute, which can have an\neffect on quality. I can say, having read through the patches\nyesterday, that they don't suck, but I can't say that they're fully\ncorrect. That's not to say that we shouldn't decide to take them, but\nit is a concern to be taken seriously. We have made mistakes before in\nwhat we shipped that had serious implications for many users and for\nthe project; we should all be wary of making more such mistakes. I am\nnot trying to say that solving problems and making stuff better is NOT\nimportant, just that every coin has two sides.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Apr 2020 09:44:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-04-08 09:24:13 -0400, Robert Haas wrote:\n> On Tue, Apr 7, 2020 at 4:27 PM Andres Freund <andres@anarazel.de> wrote:\n> > The main reason is that we want to be able to cheaply check the current\n> > state of the variables (mostly when checking a backend's own state). We\n> > can't access the \"dense\" ones without holding a lock, but we e.g. don't\n> > want to make ProcArrayEndTransactionInternal() take a lock just to check\n> > if vacuumFlags is set.\n> >\n> > It turns out to also be good for performance to have the copy for\n> > another reason: The \"dense\" arrays share cachelines with other\n> > backends. That's worth it because it allows to make GetSnapshotData(),\n> > by far the most frequent operation, touch fewer cache lines. 
But it also\n> > means that it's more likely that a backend's \"dense\" array entry isn't\n> > in a local cpu cache (it'll be pulled out of there when modified in\n> > another backend). In many cases we don't need the shared entry at commit\n> > etc time though, we just need to check if it is set - and most of the\n> > time it won't be. The local entry allows to do that cheaply.\n> >\n> > Basically it makes sense to access the PGPROC variable when checking a\n> > single backend's data, especially when we have to look at the PGPROC for\n> > other reasons already. It makes sense to look at the \"dense\" arrays if\n> > we need to look at many / most entries, because we then benefit from the\n> > reduced indirection and better cross-process cacheability.\n> \n> That's a good explanation. I think it should be in the comments or a\n> README somewhere.\n\nI had a briefer version in the PROC_HDR comment. I've just expanded it\nto:\n *\n * The denser separate arrays are beneficial for three main reasons: First, to\n * allow for as tight loops accessing the data as possible. Second, to prevent\n * updates of frequently changing data (e.g. xmin) from invalidating\n * cachelines also containing less frequently changing data (e.g. xid,\n * vacuumFlags). Third, to condense frequently accessed data into as few\n * cachelines as possible.\n *\n * There are two main reasons to have the data mirrored between these dense\n * arrays and PGPROC. First, as explained above, a PGPROC's array entries can\n * only be accessed with either ProcArrayLock or XidGenLock held, whereas the\n * PGPROC entries do not require that (obviously there may still be locking\n * requirements around the individual field, separate from the concerns\n * here). That is particularly important for a backend to efficiently check\n * its own values, which it often can safely do without locking. Second, the\n * PGPROC fields allow us to avoid unnecessary accesses and modifications to the\n * dense arrays. 
A backend's own PGPROC is more likely to be in a local cache,\n * whereas the\n * cachelines for the dense array will be modified by other\n * backends (often removing it from the cache for other cores/sockets). At\n * commit/abort time a check of the PGPROC value can avoid accessing/dirtying\n * the corresponding array value.\n *\n * Basically it makes sense to access the PGPROC variable when checking a\n * single backend's data, especially when already looking at the PGPROC for\n * other reasons. It makes sense to look at the \"dense\" arrays if we\n * need to look at many / most entries, because we then benefit from the\n * reduced indirection and better cross-process cache-ability.\n *\n * When entering a PGPROC for 2PC transactions with ProcArrayAdd(), the data\n * in the dense arrays is initialized from the PGPROC while it already holds\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Apr 2020 14:37:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Wed, Apr 8, 2020 at 09:44:16AM -0400, Robert Haas wrote:\n> I don't know what the right thing to do is. I agree with everyone who\n> says this is a very important problem, and I have the highest respect\n> for Andres's technical ability. On the other hand, I have been around\n> here long enough to know that deciding whether to allow late commits\n> on the basis of how much we like the feature is a bad plan, because it\n> takes into account only the upside of a commit, and ignores the\n> possible downside risk. Typically, the commit is late because the\n> feature was rushed to completion at the last minute, which can have an\n> effect on quality. I can say, having read through the patches\n> yesterday, that they don't suck, but I can't say that they're fully\n> correct. That's not to say that we shouldn't decide to take them, but\n> it is a concern to be taken seriously. 
We have made mistakes before in\n> what we shipped that had serious implications for many users and for\n> the project; we should all be wary of making more such mistakes. I am\n> not trying to say that solving problems and making stuff better is NOT\n> important, just that every coin has two sides.\n\nIf we don't commit this, where does this leave us with the\nold_snapshot_threshold feature? We remove it in back branches and have\nno working version in PG 13? That seems kind of bad.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 8 Apr 2020 18:06:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-04-08 09:26:42 -0400, Jonathan S. Katz wrote:\n> On 4/8/20 8:59 AM, Alexander Korotkov wrote:\n> > On Wed, Apr 8, 2020 at 3:43 PM Andres Freund <andres@anarazel.de> wrote:\n> >> Realistically it still 2-3 hours of proof-reading.\n> >>\n> >> This makes me sad :(\n> >\n> > Can we ask RMT to extend feature freeze for this particular patchset?\n> > I think it's reasonable assuming extreme importance of this patchset.\n\n> One of the features of RMT responsibilities[1] is to be \"hands off\" as\n> much as possible, so perhaps a reverse ask: how would people feel about\n> this patch going into PG13, knowing that the commit would come after the\n> feature freeze date?\n\nI'm obviously biased, so I don't think there's much point in responding\ndirectly to that question. But I thought it could be helpful if I\ndescribed my thoughts on where the patchset is:\n\nWhat made me not commit it \"earlier\" yesterday was not that I had/have\nany substantial concerns about the technical details of the patch. 
But\nthat there were a few too many comments that didn't yet sound quite\nright, that the commit messages didn't yet explain the architecture\n/ benefits well enough, and that I noticed that a few variable names\nwere too easy to be misunderstood by others.\n\nBy 5 AM I had addressed most of that, except that some technical details\nweren't yet mentioned in the commit messages ([1], they are documented\nin the code). I also produce enough typos / odd grammar when fully\nawake, so even though I did proof read my changes, I thought that I need\nto do that again while awake.\n\nThere have been no substantial code changes since yesterday. The\nvariable renaming prompted by Robert (which I agree is an improvement),\nas well as reducing the diff size by deferring some readability\nimprovements (probably also a good idea) did however produce quite a few\nconflicts in subsequent patches that I needed to resolve. Another awake\nread-through to confirm that I resolved them correctly seemed the\nresponsible thing to do before a commit.\n\n\n> Lastly, with the ongoing world events, perhaps time that could have been\n> dedicated to this and other patches likely affected their completion. I\n> know most things in my life take way longer than they used to (e.g.\n> taking out the trash/recycles has gone from a 15s to 240s routine). The\n> same could be said about other patches as well, but this one has a far\n> greater impact (a double-edged sword, of course) given it's a feature\n> that everyone uses in PostgreSQL ;)\n\nI'm obviously not alone in that, so I agree that it's not an argument\npro/con anything.\n\nBut this definitely is the case for me. Leaving aside the general dread,\nnot having a quiet home-office, nor good exercise, is definitely not\nhelping.\n\nI'm really bummed that I didn't have the cycles to help the shared\nmemory stats patch ready as well. It's clearly not yet there (but\nimproved a lot during the CF). 
But it's been around for so long, and\nthere's so many improvements blocked by the current stats\ninfrastructure...\n\n\n[1] the \"mirroring\" of values between dense arrays and PGPROC, the\nchanged locking regimen for ProcArrayAdd/Remove, the widening of\nlastCompletedXid to be a 64bit xid\n[2] https://www.postgresql.org/message-id/20200407121503.zltbpqmdesurflnm%40alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Apr 2020 15:17:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-04-08 09:44:16 -0400, Robert Haas wrote:\n> Moreover, shakedown time will be minimal because we're so late in the\n> release cycle\n\nMy impression increasingly is that there's very little actual shakedown\nbefore beta :(. As e.g. evidenced by the fact that 2PC did basically not\nwork for several months until I did new benchmarks for this patch.\n\nI don't know what to do about that, but...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Apr 2020 15:20:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-04-08 18:06:23 -0400, Bruce Momjian wrote:\n> If we don't commit this, where does this leave us with the\n> old_snapshot_threshold feature? We remove it in back branches and have\n> no working version in PG 13? That seems kind of bad.\n\nI don't think this patch changes the situation for\nold_snapshot_threshold in a meaningful way.\n\nSure, this patch makes old_snapshot_threshold scale better, and triggers\nfewer unnecessary query cancellations. 
But there still are wrong query\nresults, the tests still don't test anything meaningful, and the\ndetermination of which query is cancelled is still wrong.\n\n- Andres\n\n\n", "msg_date": "Wed, 8 Apr 2020 15:25:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Wed, Apr 8, 2020 at 03:25:34PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2020-04-08 18:06:23 -0400, Bruce Momjian wrote:\n> > If we don't commit this, where does this leave us with the\n> > old_snapshot_threshold feature? We remove it in back branches and have\n> > no working version in PG 13? That seems kind of bad.\n> \n> I don't think this patch changes the situation for\n> old_snapshot_threshold in a meaningful way.\n> \n> Sure, this patch makes old_snapshot_threshold scale better, and triggers\n> fewer unnecessary query cancellations. But there still are wrong query\n> results, the tests still don't test anything meaningful, and the\n> determination of which query is cancelled is still wrong.\n\nOh, OK, so it still needs to be disabled. I was hoping we could paint\nthis as a fix.\n\nBased on Robert's analysis of the risk (no SQL syntax, no storage\nchanges), I think, if you are willing to keep working at this until the\nfinal release, it is reasonable to apply it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 8 Apr 2020 20:31:56 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Wed, Apr 08, 2020 at 03:17:41PM -0700, Andres Freund wrote:\n> On 2020-04-08 09:26:42 -0400, Jonathan S. 
Katz wrote:\n>> Lastly, with the ongoing world events, perhaps time that could have been\n>> dedicated to this and other patches likely affected their completion. I\n>> know most things in my life take way longer than they used to (e.g.\n>> taking out the trash/recycles has gone from a 15s to 240s routine). The\n>> same could be said about other patches as well, but this one has a far\n>> greater impact (a double-edged sword, of course) given it's a feature\n>> that everyone uses in PostgreSQL ;)\n> \n> I'm obviously not alone in that, so I agree that it's not an argument\n> pro/con anything.\n> \n> But this definitely is the case for me. Leaving aside the general dread,\n> not having a quiet home-office, nor good exercise, is definitely not\n> helping.\n\nAnother factor to be careful of is that by committing a new feature in\na release cycle, you actually need to think about the extra amount of\nresources you may need to address comments and issues about it in time\nduring the beta/stability period, and that more care is likely needed\nif you commit something at the end of the cycle. On top of that,\ncurrently, that's a bit hard to plan one or two weeks ahead if help is\nneeded to stabilize something you worked on. 
I am pretty sure that\nwe'll be able to sort things out with a collective effort though.\n--\nMichael", "msg_date": "Thu, 9 Apr 2020 10:22:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hello, hackers.\nAndres, nice work!\n\nSorry for the off-topic.\n\nSome of my work [1] related to the support of index hint bits on\nstandby is highly interfering with this patch.\nIs it safe to consider it committed and start rebasing on top of the patches?\n\nThanks,\nMichail.\n\n[1]: https://www.postgresql.org/message-id/CANtu0ojmkN_6P7CQWsZ%3DuEgeFnSmpCiqCxyYaHnhYpTZHj7Ubw%40mail.gmail.com\n\n\n", "msg_date": "Sun, 7 Jun 2020 11:24:50 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "This patch no longer applies to HEAD, please submit a rebased version. I've\nmarked it Waiting on Author in the meantime.\n\ncheers ./daniel\n\n\n", "msg_date": "Wed, 1 Jul 2020 14:42:59 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-07-01 14:42:59 +0200, Daniel Gustafsson wrote:\n> This patch no longer applies to HEAD, please submit a rebased version. I've\n> marked it Waiting on Author in the meantime.\n\nThanks!\n\nHere's a rebased version. There's a good bit of commit message\npolishing and some code and comment cleanup compared to the last\nversion. Oh, and obviously the conflicts are resolved.\n\nIt could make sense to split the conversion of\nVariableCacheData->latestCompletedXid to FullTransactionId out from 0001\ninto its own commit. Not sure...\n\nI've played with splitting 0003, to have the \"infrastructure\" pieces\nseparate, but I think it's not worth it. 
Without a user the changes look\nweird and it's hard to have the comment make sense.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 15 Jul 2020 18:25:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 2020-Jul-15, Andres Freund wrote:\n\n> It could make sense to split the conversion of\n> VariableCacheData->latestCompletedXid to FullTransactionId out from 0001\n> into its own commit. Not sure...\n\n+1, the commit is large enough and that change can be had in advance.\n\nNote you forward-declare struct GlobalVisState twice in heapam.h.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 15 Jul 2020 21:33:06 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-07-15 21:33:06 -0400, Alvaro Herrera wrote:\n> On 2020-Jul-15, Andres Freund wrote:\n> \n> > It could make sense to split the conversion of\n> > VariableCacheData->latestCompletedXid to FullTransactionId out from 0001\n> > into its own commit. Not sure...\n> \n> +1, the commit is large enough and that change can be had in advance.\n\nI've done that in the attached.\n\nI wonder if somebody has an opinion on renaming latestCompletedXid to\nlatestCompletedFullXid. That's the pattern we already had (cf\nnextFullXid), but it also leads to pretty long lines and quite a few\ncomment etc changes.\n\nI'm somewhat inclined to remove the \"Full\" out of the variable, and to\nalso do that for nextFullXid. I feel like including it in the variable\nname is basically a poor copy of the (also not great) C type system. 
If\nwe hadn't made FullTransactionId a struct I'd see it differently (and\nthus incompatible with TransactionId), but we have ...\n\n\n> Note you forward-declare struct GlobalVisState twice in heapam.h.\n\nOh, fixed, thanks.\n\n\nI've also fixed a correctness bug that Thomas's cfbot found (and he\npersonally pointed out). There were occasional make check runs with\nvacuum erroring out. That turned out to be because it was possible for\nthe horizon used to make decisions in heap_page_prune() and\nlazy_scan_heap() to differ a bit. I've started a thread about my\nconcerns around the fragility of that logic [1]. The code around that\ncan use a bit more polish, I think. I mainly wanted to post a new\nversion so that the patch separated out above can be looked at.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20200723181018.neey2jd3u7rfrfrn%40alap3.anarazel.de", "msg_date": "Thu, 23 Jul 2020 18:11:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Fri, Jul 24, 2020 at 1:11 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-07-15 21:33:06 -0400, Alvaro Herrera wrote:\n> > On 2020-Jul-15, Andres Freund wrote:\n> > > It could make sense to split the conversion of\n> > > VariableCacheData->latestCompletedXid to FullTransactionId out from 0001\n> > > into is own commit. Not sure...\n> >\n> > +1, the commit is large enough and that change can be had in advance.\n>\n> I've done that in the attached.\n\n+ * pair with the memory barrier below. We do however accept xid to be <=\n+ * to next_xid, instead of just <, as xid could be from the procarray,\n+ * before we see the updated nextFullXid value.\n\nTricky. Right, that makes sense. 
I like the range assertion.\n\n+static inline FullTransactionId\n+FullXidViaRelative(FullTransactionId rel, TransactionId xid)\n\nI'm struggling to find a better word for this than \"relative\".\n\n+ return FullTransactionIdFromU64(U64FromFullTransactionId(rel)\n+ + (int32) (xid - rel_xid));\n\nI like your branch-free code for this.\n\n> I wonder if somebody has an opinion on renaming latestCompletedXid to\n> latestCompletedFullXid. That's the pattern we already had (cf\n> nextFullXid), but it also leads to pretty long lines and quite a few\n> comment etc changes.\n>\n> I'm somewhat inclined to remove the \"Full\" out of the variable, and to\n> also do that for nextFullXid. I feel like including it in the variable\n> name is basically a poor copy of the (also not great) C type system. If\n> we hadn't made FullTransactionId a struct I'd see it differently (and\n> thus incompatible with TransactionId), but we have ...\n\nYeah, I'm OK with dropping the \"Full\". I've found it rather clumsy too.\n\n\n", "msg_date": "Wed, 29 Jul 2020 18:15:30 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Wed, Jul 29, 2020 at 6:15 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> +static inline FullTransactionId\n> +FullXidViaRelative(FullTransactionId rel, TransactionId xid)\n>\n> I'm struggling to find a better word for this than \"relative\".\n\nThe best I've got is \"anchor\" xid. 
It is an xid that is known to\nlimit nextFullXid's range while the receiving function runs.\n\n\n", "msg_date": "Wed, 29 Jul 2020 19:20:04 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "> On 24 Jul 2020, at 03:11, Andres Freund <andres@anarazel.de> wrote:\n\n> I've done that in the attached.\n\nAs this is actively being reviewed but time is running short, I'm moving this\nto the next CF.\n\ncheers ./daniel\n\n\n", "msg_date": "Fri, 31 Jul 2020 21:43:50 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-07-29 19:20:04 +1200, Thomas Munro wrote:\n> On Wed, Jul 29, 2020 at 6:15 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > +static inline FullTransactionId\n> > +FullXidViaRelative(FullTransactionId rel, TransactionId xid)\n> >\n> > I'm struggling to find a better word for this than \"relative\".\n> \n> The best I've got is \"anchor\" xid. It is an xid that is known to\n> limit nextFullXid's range while the receiving function runs.\n\nThinking about it, I think that relative is a good descriptor. It's just\nthat 'via' is weird. 
How about: FullXidRelativeTo?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 11 Aug 2020 17:19:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Wed, Aug 12, 2020 at 12:19 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-07-29 19:20:04 +1200, Thomas Munro wrote:\n> > On Wed, Jul 29, 2020 at 6:15 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > +static inline FullTransactionId\n> > > +FullXidViaRelative(FullTransactionId rel, TransactionId xid)\n> > >\n> > > I'm struggling to find a better word for this than \"relative\".\n> >\n> > The best I've got is \"anchor\" xid. It is an xid that is known to\n> > limit nextFullXid's range while the receiving function runs.\n>\n> Thinking about it, I think that relative is a good descriptor. It's just\n> that 'via' is weird. How about: FullXidRelativeTo?\n\nWFM.\n\n\n", "msg_date": "Wed, 12 Aug 2020 12:24:52 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-08-12 12:24:52 +1200, Thomas Munro wrote:\n> On Wed, Aug 12, 2020 at 12:19 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2020-07-29 19:20:04 +1200, Thomas Munro wrote:\n> > > On Wed, Jul 29, 2020 at 6:15 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > > +static inline FullTransactionId\n> > > > +FullXidViaRelative(FullTransactionId rel, TransactionId xid)\n> > > >\n> > > > I'm struggling to find a better word for this than \"relative\".\n> > >\n> > > The best I've got is \"anchor\" xid. It is an xid that is known to\n> > > limit nextFullXid's range while the receiving function runs.\n> >\n> > Thinking about it, I think that relative is a good descriptor. It's just\n> > that 'via' is weird. 
How about: FullXidRelativeTo?\n> \n> WFM.\n\nCool, pushed.\n\nAttached are the rebased remainder of the series. Unless somebody\nprotests, I plan to push 0001 after a bit more comment polishing and\nwait a buildfarm cycle, then push 0002-0005 and wait again, and finally\npush 0006.\n\nThere's further optimizations, particularly after 0002 and after 0006,\nbut that seems better done later.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 12 Aug 2020 10:38:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "We have two essentially identical buildfarm failures since these patches\nwent in:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=damselfly&dt=2020-08-15%2011%3A27%3A32\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=peripatus&dt=2020-08-15%2003%3A09%3A14\n\nThey're both in the same place in the freeze-the-dead isolation test:\n\nTRAP: FailedAssertion(\"!TransactionIdPrecedes(members[i].xid, cutoff_xid)\", File: \"heapam.c\", Line: 6051)\n0x9613eb <ExceptionalCondition+0x5b> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n0x52d586 <heap_prepare_freeze_tuple+0x926> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n0x53bc7e <heap_vacuum_rel+0x100e> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n0x6949bb <vacuum_rel+0x25b> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n0x694532 <vacuum+0x602> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n0x693d1c <ExecVacuum+0x37c> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n0x8324b3\n...\n2020-08-14 22:16:41.783 CDT [78410:4] LOG: server process (PID 80395) was terminated by signal 6: Abort trap\n2020-08-14 22:16:41.783 CDT [78410:5] DETAIL: Failed process was running: VACUUM FREEZE tab_freeze;\n\nperipatus has successes since this failure, so it's not fully reproducible\non that machine. 
I'm suspicious of a timing problem in computing vacuum's\ncutoff_xid.\n\n(I'm also wondering why the failing check is an Assert rather than a real\ntest-and-elog. Assert doesn't seem like an appropriate way to check for\nplausible data corruption cases.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 15 Aug 2020 11:10:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-08-15 11:10:51 -0400, Tom Lane wrote:\n> We have two essentially identical buildfarm failures since these patches\n> went in:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=damselfly&dt=2020-08-15%2011%3A27%3A32\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=peripatus&dt=2020-08-15%2003%3A09%3A14\n>\n> They're both in the same place in the freeze-the-dead isolation test:\n\n> TRAP: FailedAssertion(\"!TransactionIdPrecedes(members[i].xid, cutoff_xid)\", File: \"heapam.c\", Line: 6051)\n> 0x9613eb <ExceptionalCondition+0x5b> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n> 0x52d586 <heap_prepare_freeze_tuple+0x926> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n> 0x53bc7e <heap_vacuum_rel+0x100e> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n> 0x6949bb <vacuum_rel+0x25b> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n> 0x694532 <vacuum+0x602> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n> 0x693d1c <ExecVacuum+0x37c> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n> 0x8324b3\n> ...\n> 2020-08-14 22:16:41.783 CDT [78410:4] LOG: server process (PID 80395) was terminated by signal 6: Abort trap\n> 2020-08-14 22:16:41.783 CDT [78410:5] DETAIL: Failed process was running: VACUUM FREEZE tab_freeze;\n>\n> peripatus has successes since this failure, so it's not fully reproducible\n> on that machine. 
I'm suspicious of a timing problem in computing vacuum's\n> cutoff_xid.\n\nHm, maybe it's something around what I observed in\nhttps://www.postgresql.org/message-id/20200723181018.neey2jd3u7rfrfrn%40alap3.anarazel.de\n\nI.e. that somehow we end up with hot pruning and freezing coming to a\ndifferent determination, and trying to freeze a hot tuple.\n\nI'll try to add a few additional asserts here, and burn some cpu tests\ntrying to trigger the issue.\n\nI gotta escape the heat in the house for a few hours though (no AC\nhere), so I'll not look at the results till later this afternoon, unless\nit triggers soon.\n\n\n> (I'm also wondering why the failing check is an Assert rather than a real\n> test-and-elog. Assert doesn't seem like an appropriate way to check for\n> plausible data corruption cases.)\n\nRobert, and to a lesser degree you, gave me quite a bit of grief over\nconverting nearby asserts to elogs. I agree it'd be better if it were\nan assert, but ...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 15 Aug 2020 09:42:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-08-15 09:42:00 -0700, Andres Freund wrote:\n> On 2020-08-15 11:10:51 -0400, Tom Lane wrote:\n> > We have two essentially identical buildfarm failures since these patches\n> > went in:\n> >\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=damselfly&dt=2020-08-15%2011%3A27%3A32\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=peripatus&dt=2020-08-15%2003%3A09%3A14\n> >\n> > They're both in the same place in the freeze-the-dead isolation test:\n> \n> > TRAP: FailedAssertion(\"!TransactionIdPrecedes(members[i].xid, cutoff_xid)\", File: \"heapam.c\", Line: 6051)\n> > 0x9613eb <ExceptionalCondition+0x5b> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n> > 0x52d586 <heap_prepare_freeze_tuple+0x926> at 
/home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n> > 0x53bc7e <heap_vacuum_rel+0x100e> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n> > 0x6949bb <vacuum_rel+0x25b> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n> > 0x694532 <vacuum+0x602> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n> > 0x693d1c <ExecVacuum+0x37c> at /home/pgbuildfarm/buildroot/HEAD/inst/bin/postgres\n> > 0x8324b3\n> > ...\n> > 2020-08-14 22:16:41.783 CDT [78410:4] LOG: server process (PID 80395) was terminated by signal 6: Abort trap\n> > 2020-08-14 22:16:41.783 CDT [78410:5] DETAIL: Failed process was running: VACUUM FREEZE tab_freeze;\n> >\n> > peripatus has successes since this failure, so it's not fully reproducible\n> > on that machine. I'm suspicious of a timing problem in computing vacuum's\n> > cutoff_xid.\n> \n> Hm, maybe it's something around what I observed in\n> https://www.postgresql.org/message-id/20200723181018.neey2jd3u7rfrfrn%40alap3.anarazel.de\n> \n> I.e. that somehow we end up with hot pruning and freezing coming to a\n> different determination, and trying to freeze a hot tuple.\n> \n> I'll try to add a few additional asserts here, and burn some cpu tests\n> trying to trigger the issue.\n> \n> I gotta escape the heat in the house for a few hours though (no AC\n> here), so I'll not look at the results till later this afternoon, unless\n> it triggers soon.\n\n690 successful runs later, it didn't trigger for me :(. Seems pretty\nclear that there's another variable than pure chance, otherwise it seems\nlike that number of runs should have hit the issue, given the number of\nbf hits vs bf runs.\n\nMy current plan would is to push a bit of additional instrumentation to\nhelp narrow down the issue. 
We can afterwards decide what of that we'd\nlike to keep longer term, and what not.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 16 Aug 2020 11:16:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> 690 successful runs later, it didn't trigger for me :(. Seems pretty\n> clear that there's another variable than pure chance, otherwise it seems\n> like that number of runs should have hit the issue, given the number of\n> bf hits vs bf runs.\n\nIt seems entirely likely that there's a timing component in this, for\ninstance autovacuum coming along at just the right time. It's not too\nsurprising that some machines would be more prone to show that than\nothers. (Note peripatus is FreeBSD, which we've already learned has\nsignificantly different kernel scheduler behavior than Linux.)\n\n> My current plan would is to push a bit of additional instrumentation to\n> help narrow down the issue.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Aug 2020 14:30:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 2020-08-16 14:30:24 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > 690 successful runs later, it didn't trigger for me :(. Seems pretty\n> > clear that there's another variable than pure chance, otherwise it seems\n> > like that number of runs should have hit the issue, given the number of\n> > bf hits vs bf runs.\n> \n> It seems entirely likely that there's a timing component in this, for\n> instance autovacuum coming along at just the right time. It's not too\n> surprising that some machines would be more prone to show that than\n> others. 
(Note peripatus is FreeBSD, which we've already learned has\n> significantly different kernel scheduler behavior than Linux.)\n\nYea. Interestingly there was a reproduction on linux since the initial\nreports you'd dug up:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=butterflyfish&dt=2020-08-15%2019%3A54%3A53\n\nbut that's likely a virtualized environment, so I guess the host\nscheduler behaviour could play a similar role.\n\nI'll run a few iterations with rr's chaos mode too, which tries to\nrandomize scheduling decisions...\n\n\nI noticed that it's quite hard to actually hit the hot tuple path I\nmentioned earlier on my machine. Would probably be good to have a tests\nhitting it more reliably. But I'm not immediately seeing how we could\nforce the necessarily serialization.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 16 Aug 2020 12:00:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "I wrote:\n> It seems entirely likely that there's a timing component in this, for\n> instance autovacuum coming along at just the right time.\n\nD'oh. The attached seems to make it 100% reproducible.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 16 Aug 2020 16:17:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-08-16 16:17:23 -0400, Tom Lane wrote:\n> I wrote:\n> > It seems entirely likely that there's a timing component in this, for\n> > instance autovacuum coming along at just the right time.\n> \n> D'oh. The attached seems to make it 100% reproducible.\n\nGreat! It interestingly didn't work as the first item on the schedule,\nwhere I had duplicated it it to out of impatience. 
I guess there might\nbe some need of concurrent autovacuum activity or something like that.\n\nI now luckily have a rr trace of the problem, so I hope I can narrow it\ndown to the original problem fairly quickly.\n\nThanks,\n\nAndres\n\n\n", "msg_date": "Sun, 16 Aug 2020 13:31:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-08-16 13:31:53 -0700, Andres Freund wrote:\n> I now luckily have a rr trace of the problem, so I hope I can narrow it\n> down to the original problem fairly quickly.\n\nGna, I think I see the problem. In at least one place I wrongly\naccessed the 'dense' array of in-progress xids using the 'pgprocno',\ninstead of directly using the [0...procArray->numProcs) index.\n\nWorking on a fix, together with some improved asserts.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 16 Aug 2020 13:52:58 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-08-16 13:52:58 -0700, Andres Freund wrote:\n> On 2020-08-16 13:31:53 -0700, Andres Freund wrote:\n> > I now luckily have a rr trace of the problem, so I hope I can narrow it\n> > down to the original problem fairly quickly.\n> \n> Gna, I think I see the problem. 
In at least one place I wrongly\n> accessed the 'dense' array of in-progress xids using the 'pgprocno',\n> instead of directly using the [0...procArray->numProcs) index.\n> \n> Working on a fix, together with some improved asserts.\n\ndiff --git i/src/backend/storage/ipc/procarray.c w/src/backend/storage/ipc/procarray.c\nindex 8262abd42e6..96e4a878576 100644\n--- i/src/backend/storage/ipc/procarray.c\n+++ w/src/backend/storage/ipc/procarray.c\n@@ -1663,7 +1663,7 @@ ComputeXidHorizons(ComputeXidHorizonsResult *h)\n TransactionId xmin;\n \n /* Fetch xid just once - see GetNewTransactionId */\n- xid = UINT32_ACCESS_ONCE(other_xids[pgprocno]);\n+ xid = UINT32_ACCESS_ONCE(other_xids[index]);\n xmin = UINT32_ACCESS_ONCE(proc->xmin);\n \n /*\n\nindeed fixes the issue based on a number of iterations of your modified\ntest, and fixes a clear bug.\n\nWRT better asserts: We could make ProcArrayRemove() and InitProcGlobal()\ninitialize currently unused procArray->pgprocnos,\nprocGlobal->{xids,subxidStates,vacuumFlags} to invalid values and/or\ndeclare them as uninitialized using the valgrind helpers.\n\nFor the first, one issue is that there's no obviously good candidate for\nan uninitialized xid. We could use something like FrozenTransactionId,\nwhich may never be in the procarray. But it's not exactly pretty.\n\nOpinions?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 16 Aug 2020 14:11:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-08-16 14:11:46 -0700, Andres Freund wrote:\n> On 2020-08-16 13:52:58 -0700, Andres Freund wrote:\n> > On 2020-08-16 13:31:53 -0700, Andres Freund wrote:\n> > Gna, I think I see the problem. 
In at least one place I wrongly\n> > accessed the 'dense' array of in-progress xids using the 'pgprocno',\n> > instead of directly using the [0...procArray->numProcs) index.\n> > \n> > Working on a fix, together with some improved asserts.\n> \n> diff --git i/src/backend/storage/ipc/procarray.c w/src/backend/storage/ipc/procarray.c\n> index 8262abd42e6..96e4a878576 100644\n> --- i/src/backend/storage/ipc/procarray.c\n> +++ w/src/backend/storage/ipc/procarray.c\n> @@ -1663,7 +1663,7 @@ ComputeXidHorizons(ComputeXidHorizonsResult *h)\n> TransactionId xmin;\n> \n> /* Fetch xid just once - see GetNewTransactionId */\n> - xid = UINT32_ACCESS_ONCE(other_xids[pgprocno]);\n> + xid = UINT32_ACCESS_ONCE(other_xids[index]);\n> xmin = UINT32_ACCESS_ONCE(proc->xmin);\n> \n> /*\n> \n> indeed fixes the issue based on a number of iterations of your modified\n> test, and fixes a clear bug.\n\nPushed that much.\n\n\n> WRT better asserts: We could make ProcArrayRemove() and InitProcGlobal()\n> initialize currently unused procArray->pgprocnos,\n> procGlobal->{xids,subxidStates,vacuumFlags} to invalid values and/or\n> declare them as uninitialized using the valgrind helpers.\n> \n> For the first, one issue is that there's no obviously good candidate for\n> an uninitialized xid. We could use something like FrozenTransactionId,\n> which may never be in the procarray. But it's not exactly pretty.\n> \n> Opinions?\n\nSo we get some builfarm results while thinking about this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 16 Aug 2020 14:26:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Sun, Aug 16, 2020 at 2:11 PM Andres Freund <andres@anarazel.de> wrote:\n> For the first, one issue is that there's no obviously good candidate for\n> an uninitialized xid. We could use something like FrozenTransactionId,\n> which may never be in the procarray. 
But it's not exactly pretty.\n\nMaybe it would make sense to mark the fields as inaccessible or\nundefined to Valgrind. That has advantages and disadvantages that are\nobvious.\n\nIf that isn't enough, it might not hurt to do this on top of whatever\nbecomes the primary solution. An undefined value has the advantage of\n\"spreading\" when the value gets copied around.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 16 Aug 2020 14:28:02 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> For the first, one issue is that there's no obviously good candidate for\n> an uninitialized xid. We could use something like FrozenTransactionId,\n> which may never be in the procarray. But it's not exactly pretty.\n\nHuh? What's wrong with using InvalidTransactionId?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Aug 2020 17:28:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-08-16 17:28:46 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > For the first, one issue is that there's no obviously good candidate for\n> > an uninitialized xid. We could use something like FrozenTransactionId,\n> > which may never be in the procarray. But it's not exactly pretty.\n> \n> Huh? 
What's wrong with using InvalidTransactionId?\n\nIt's a normal value for a backend when it doesn't have an xid assigned.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sun, 16 Aug 2020 14:30:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 2020-Aug-16, Peter Geoghegan wrote:\n\n> On Sun, Aug 16, 2020 at 2:11 PM Andres Freund <andres@anarazel.de> wrote:\n> > For the first, one issue is that there's no obviously good candidate for\n> > an uninitialized xid. We could use something like FrozenTransactionId,\n> > which may never be in the procarray. But it's not exactly pretty.\n> \n> Maybe it would make sense to mark the fields as inaccessible or\n> undefined to Valgrind. That has advantages and disadvantages that are\n> obvious.\n\n... and perhaps making Valgrind complain about it is sufficient.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 17 Aug 2020 16:09:40 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Sun, Aug 16, 2020 at 02:26:57PM -0700, Andres Freund wrote:\n> So we get some builfarm results while thinking about this.\n\nAndres, there is an entry in the CF for this thread:\nhttps://commitfest.postgresql.org/29/2500/\n\nA lot of work has been committed with 623a9ba, 73487a6, 5788e25, etc.\nNow that PGXACT is done, how much work is remaining here?\n--\nMichael", "msg_date": "Thu, 3 Sep 2020 17:18:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 03.09.2020 11:18, Michael Paquier wrote:\n> On Sun, Aug 16, 2020 at 02:26:57PM -0700, Andres Freund wrote:\n>> So we 
get some builfarm results while thinking about this.\n> Andres, there is an entry in the CF for this thread:\n> https://commitfest.postgresql.org/29/2500/\n>\n> A lot of work has been committed with 623a9ba, 73487a6, 5788e25, etc.\n> Now that PGXACT is done, how much work is remaining here?\n> --\n> Michael\n\nAndres,\nFirst of all a lot of thanks for this work.\nImproving Postgres connection scalability is very important.\n\nReported results looks very impressive.\nBut I tried to reproduce them and didn't observed similar behavior.\nSo I am wondering what can be the difference and what I am doing wrong.\n\nI have tried two different systems.\nFirst one is IBM Power2 server with 384 cores and 8Tb of RAM.\nI run the same read-only pgbench test as you. I do not think that size of the database is matter, so I used scale 100 -\nit seems to be enough to avoid frequent buffer conflicts.\nThen I run the same scripts as you:\n\n for ((n=100; n < 1000; n+=100)); do echo $n; pgbench -M prepared -c $n -T 100 -j $n -M prepared -S -n postgres ; done\n for ((n=1000; n <= 5000; n+=1000)); do echo $n; pgbench -M prepared -c $n -T 100 -j $n -M prepared -S -n postgres ; done\n\n\nI have compared current master with version of Postgres prior to your commits with scalability improvements: a9a4a7ad56\n\nFor all number of connections older version shows slightly better results, for example for 500 clients: 475k TPS vs. 
450k TPS for current master.\n\nThis is quite exotic server and I do not have currently access to it.\nSo I have repeated experiments at Intel server.\nIt has 160 cores Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz and 256Gb of RAM.\n\nThe same database, the same script, results are the following:\n\nClients \told/inc \told/exl \tnew/inc \tnew/exl\n1000 \t1105750 \t1163292 \t1206105 \t1212701\n2000 \t1050933 \t1124688 \t1149706 \t1164942\n3000 \t1063667 \t1195158 \t1118087 \t1144216\n4000 \t1040065 \t1290432 \t1107348 \t1163906\n5000 \t943813 \t1258643 \t1103790 \t1160251\n\nI have separately show results including/excluding connection connections establishing,\nbecause in new version there are almost no differences between them,\nbut for old version gap between them is noticeable.\n\nConfiguration file has the following differences with default postgres config:\n\nmax_connections = 10000\t\t\t# (change requires restart)\nshared_buffers = 8GB\t\t\t# min 128kB\n\n\nThis results contradict with yours and makes me ask the following questions:\n\n1. Why in your case performance is almost two times larger (2 millions vs 1)?\nThe hardware in my case seems to be at least not worser than yours...\nMay be there are some other improvements in the version you have tested which are not yet committed to master?\n\n2. You wrote: This is on a machine with 2\nIntel(R) Xeon(R) Platinum 8168, but virtualized (2 sockets of 18 cores/36 threads)\n\nAccording to Intel specification Intel® Xeon® Platinum 8168 Processor has 24 cores:\nhttps://ark.intel.com/content/www/us/en/ark/products/120504/intel-xeon-platinum-8168-processor-33m-cache-2-70-ghz.html\n\nAnd at your graph we can see almost linear increase of speed up to 40 connections.\n\nBut most suspicious word for me is \"virtualized\". 
What is the actual hardware and how it is virtualized?\n\nDo you have any idea why in my case master version (with your commits) behaves almost the same as non-patched version?\nBelow is yet another table showing scalability from 10 to 100 connections and combining your results (first two columns) and my results (last two columns):\n\n\nClients \told master \tpgxact-split-cache \tcurrent master\n\trevision 9a4a7ad56\n10 \t367883 \t375682 \t358984\n\t347067\n20 \t748000 \t810964 \t668631\n\t630304\n30 \t999231 \t1288276 \t920255\n\t848244\n40 \t991672 \t1573310 \t1100745\n\t970717\n50\n\t1017561 \t1715762 \t1193928\n\t1008755\n60\n\t993943 \t1789698 \t1255629\n\t917788\n70\n\t971379 \t1819477 \t1277634\n\t873022\n80\n\t966276 \t1842248 \t1266523\n\t830197\n90\n\t901175 \t1847823 \t1255260\n\t736550\n100\n\t803175 \t1865795 \t1241143\n\t736756\n\n\nMay be it is because of more complex architecture of my server?\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 4 Sep 2020 18:24:12 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-09-03 17:18:29 +0900, Michael Paquier wrote:\n> On Sun, Aug 16, 2020 at 02:26:57PM -0700, Andres Freund wrote:\n> > So we get some builfarm results while thinking about this.\n> \n> Andres, there is an 
entry in the CF for this thread:
> https://commitfest.postgresql.org/29/2500/
> 
> A lot of work has been committed with 623a9ba, 73487a6, 5788e25, etc.
> Now that PGXACT is done, how much work is remaining here?

I think it's best to close the entry. There are substantial further wins
possible, in particular not acquiring ProcArrayLock in GetSnapshotData()
when the cache is valid improves performance substantially. But it's
non-trivial enough that it's probably best dealt with in a separate
patch / CF entry.

Closed.

Greetings,

Andres Freund


", "msg_date": "Fri, 4 Sep 2020 10:35:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,

On 2020-09-04 18:24:12 +0300, Konstantin Knizhnik wrote:
> The reported results look very impressive.
> But I tried to reproduce them and didn't observe similar behavior.
> So I am wondering what the difference can be and what I am doing wrong.

That is odd - I did reproduce it on quite a few systems by now.


> The configuration file has the following differences from the default postgres config:
> 
> max_connections = 10000			# (change requires restart)
> shared_buffers = 8GB			# min 128kB

I also used huge_pages=on / configured them on the OS level. Otherwise
TLB misses will be a significant factor.

Does it change if you initialize the test database using
PGOPTIONS='-c vacuum_freeze_min_age=0' pgbench -i -s 100
or run a manual VACUUM FREEZE; after initialization?


> I have tried two different systems.
> First one is an IBM Power2 server with 384 cores and 8Tb of RAM.
> I ran the same read-only pgbench test as you. 
> I do not think that the size of the database matters, so I used scale 100 -
> it seems to be enough to avoid frequent buffer conflicts.
> Then I ran the same scripts as you:
>
> for ((n=100; n < 1000; n+=100)); do echo $n; pgbench -M prepared -c $n -T 100 -j $n -M prepared -S -n postgres ; done
> for ((n=1000; n <= 5000; n+=1000)); do echo $n; pgbench -M prepared -c $n -T 100 -j $n -M prepared -S -n postgres ; done
>
>
> I have compared current master with the version of Postgres prior to your scalability commits: a9a4a7ad56

Hm, it'd probably be good to compare commits closer to the changes, to
avoid other changes showing up.

Hm - did you verify if all the connections were actually established?
Particularly without the patch applied? With an unmodified pgbench, I
sometimes saw better numbers, but only because only half the connections
were able to be established, due to ProcArrayLock contention.

See https://www.postgresql.org/message-id/20200227180100.zyvjwzcpiokfsqm2%40alap3.anarazel.de

There also is the issue that pgbench numbers for inclusive/exclusive are
just about meaningless right now:
https://www.postgresql.org/message-id/20200227202636.qaf7o6qcajsudoor%40alap3.anarazel.de
(reminds me, I need to get that fixed)


One more thing worth investigating is whether your results change
significantly when you start the server using
numactl --interleave=all <start_server_cmdline>.
Especially on larger systems the results otherwise can vary a lot from
run to run, because the placement of shared buffers matters a lot.


> So I have repeated the experiments on an Intel server.
> It has 160 cores Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz and 256Gb of RAM.
>
> The same database, the same script; the results are the following:
>
> Clients    old/inc    old/exl    new/inc    new/exl
> 1000       1105750    1163292    1206105    1212701
> 2000       1050933    1124688    1149706    1164942
> 3000       1063667    1195158    1118087    1144216
> 4000       1040065    1290432    1107348    1163906
> 5000        943813    1258643    1103790    1160251

> I have separately shown results including/excluding connection establishment,
> because in the new version there is almost no difference between them,
> but for the old version the gap between them is noticeable.
>
> The configuration file has the following differences from the default postgres config:
>
> max_connections = 10000			# (change requires restart)
> shared_buffers = 8GB			# min 128kB

>
> These results contradict yours and make me ask the following questions:

> 1. Why is performance in your case almost two times higher (2 million TPS vs. 1)?
> The hardware in my case seems to be at least no worse than yours...
> Maybe there are some other improvements in the version you have tested which are not yet committed to master?

No, no uncommitted changes, except for the pgbench stuff mentioned
above. However I found that the kernel version matters a fair bit; it's
pretty easy to run into kernel scalability issues in a workload that is
this heavily scheduler dependent.

Did you connect via tcp or unix socket? Was pgbench running on the same
machine? It was locally via unix socket for me (but it's also observable
via two machines, just with lower overall throughput).

Did you run a profile to see where the bottleneck is?


There's a separate benchmark that I found to be quite revealing that's
far less dependent on scheduler behaviour. Run two pgbench instances:

1) With a very simple script '\sleep 1s' or such, and many connections
   (e.g. 100,1000,5000). That's to simulate connections that are
   currently idle.
2) With a normal pgbench read only script, and low client counts.

Before the changes 2) shows a very sharp decline in performance when the
count in 1) increases. 
Afterwards it's pretty much linear.

I think this benchmark actually is much more real-world oriented - due
to latency and client side overheads it's very normal to have a large
fraction of connections idle in read mostly OLTP workloads.

Here's the result on my workstation (2x Xeon Gold 5215 CPUs), testing
1f42d35a1d6144a23602b2c0bc7f97f3046cf890 against
07f32fcd23ac81898ed47f88beb569c631a2f223 which are the commits pre/post
connection scalability changes.

I used fairly short pgbench runs (15s), and the numbers are the best of
three runs. I also had emacs and mutt open - some noise to be
expected. But I also gotta work ;)

| Idle Connections | Active Connections | TPS pre | TPS post |
|-----------------:|-------------------:|--------:|---------:|
| 0 | 1 | 33599 | 33406 |
| 100 | 1 | 31088 | 33279 |
| 1000 | 1 | 29377 | 33434 |
| 2500 | 1 | 27050 | 33149 |
| 5000 | 1 | 21895 | 33903 |
| 10000 | 1 | 16034 | 33140 |
| 0 | 48 | 1042005 | 1125104 |
| 100 | 48 | 986731 | 1103584 |
| 1000 | 48 | 854230 | 1119043 |
| 2500 | 48 | 716624 | 1119353 |
| 5000 | 48 | 553657 | 1119476 |
| 10000 | 48 | 369845 | 1115740 |


And a second version of this, where the idle connections are just less
busy, using the following script:
\sleep 100ms
SELECT 1;

| Mostly Idle Connections | Active Connections | TPS pre | TPS post |
|------------------------:|-------------------:|--------:|---------------:|
| 0 | 1 | 33837 | 34095.891429 |
| 100 | 1 | 30622 | 31166.767491 |
| 1000 | 1 | 25523 | 28829.313249 |
| 2500 | 1 | 19260 | 24978.878822 |
| 5000 | 1 | 11171 | 24208.146408 |
| 10000 | 1 | 6702 | 29577.517084 |
| 0 | 48 | 1022721 | 1133153.772338 |
| 100 | 48 | 980705 | 1034235.255883 |
| 1000 | 48 | 824668 | 1115965.638395 |
| 2500 | 48 | 698510 | 1073280.930789 |
| 5000 | 48 | 478535 | 1041931.158287 |
| 10000 | 48 | 276042 | 953567.038634 |

It's probably worth calling out that in the second test run here the
run-to-run variability is 
huge. Presumably because it is very scheduler-dependent how much CPU
time the "active" backends and the "active" pgbench get at higher
"mostly idle" connection counts.


> 2. You wrote: This is on a machine with 2
> Intel(R) Xeon(R) Platinum 8168, but virtualized (2 sockets of 18 cores/36 threads)
>
> According to the Intel specification, the Intel(R) Xeon(R) Platinum 8168 Processor has 24 cores:
> https://ark.intel.com/content/www/us/en/ark/products/120504/intel-xeon-platinum-8168-processor-33m-cache-2-70-ghz.html
>
> And on your graph we can see an almost linear speed increase up to 40 connections.
>
> But the most suspicious word for me is "virtualized". What is the actual hardware and how is it virtualized?

That was on an azure Fs72v2. I think that's hyperv virtualized, with all
the "lost" cores dedicated to the hypervisor. But I did reproduce the
speedups on my unvirtualized workstation (2x Xeon Gold 5215 CPUs) -
the ceiling is lower, obviously.


> Maybe it is because of the more complex architecture of my server?

Think we'll need profiles to know...

Greetings,

Andres Freund


", "msg_date": "Fri, 4 Sep 2020 11:53:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 2020-09-04 11:53:04 -0700, Andres Freund wrote:
> There's a separate benchmark that I found to be quite revealing that's
> far less dependent on scheduler behaviour. Run two pgbench instances:
> 
> 1) With a very simple script '\sleep 1s' or such, and many connections
>    (e.g. 100,1000,5000). That's to simulate connections that are
>    currently idle.
> 2) With a normal pgbench read only script, and low client counts.
> 
> Before the changes 2) shows a very sharp decline in performance when the
> count in 1) increases. 
Afterwards its pretty much linear.\n> \n> I think this benchmark actually is much more real world oriented - due\n> to latency and client side overheads it's very normal to have a large\n> fraction of connections idle in read mostly OLTP workloads.\n> \n> Here's the result on my workstation (2x Xeon Gold 5215 CPUs), testing\n> 1f42d35a1d6144a23602b2c0bc7f97f3046cf890 against\n> 07f32fcd23ac81898ed47f88beb569c631a2f223 which are the commits pre/post\n> connection scalability changes.\n> \n> I used fairly short pgbench runs (15s), and the numbers are the best of\n> three runs. I also had emacs and mutt open - some noise to be\n> expected. But I also gotta work ;)\n> \n> | Idle Connections | Active Connections | TPS pre | TPS post |\n> |-----------------:|-------------------:|--------:|---------:|\n> | 0 | 1 | 33599 | 33406 |\n> | 100 | 1 | 31088 | 33279 |\n> | 1000 | 1 | 29377 | 33434 |\n> | 2500 | 1 | 27050 | 33149 |\n> | 5000 | 1 | 21895 | 33903 |\n> | 10000 | 1 | 16034 | 33140 |\n> | 0 | 48 | 1042005 | 1125104 |\n> | 100 | 48 | 986731 | 1103584 |\n> | 1000 | 48 | 854230 | 1119043 |\n> | 2500 | 48 | 716624 | 1119353 |\n> | 5000 | 48 | 553657 | 1119476 |\n> | 10000 | 48 | 369845 | 1115740 |\n\nAttached in graph form.\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 4 Sep 2020 12:11:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Fri, Sep 04, 2020 at 10:35:19AM -0700, Andres Freund wrote:\n> I think it's best to close the entry. There's substantial further wins\n> possible, in particular not acquiring ProcArrayLock in GetSnapshotData()\n> when the cache is valid improves performance substantially. 
But it's
> non-trivial enough that it's probably best dealt with in a separate
> patch / CF entry.

Cool, thanks for updating the CF entry.
--
Michael", "msg_date": "Sat, 5 Sep 2020 10:31:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "

On 04.09.2020 21:53, Andres Freund wrote:
>
> I also used huge_pages=on / configured them on the OS level. Otherwise
> TLB misses will be a significant factor.

As far as I understand, there should not be any TLB misses because the
size of the shared buffers (8Mb) is several orders of magnitude smaller
than the available physical memory.
>
> Does it change if you initialize the test database using
> PGOPTIONS='-c vacuum_freeze_min_age=0' pgbench -i -s 100
> or run a manual VACUUM FREEZE; after initialization?
I tried it, but didn't see any improvement.

>
> Hm, it'd probably be good to compare commits closer to the changes, to
> avoid other changes showing up.
>
> Hm - did you verify if all the connections were actually established?
> Particularly without the patch applied? With an unmodified pgbench, I
> sometimes saw better numbers, but only because only half the connections
> were able to be established, due to ProcArrayLock contention.
Yes, that really happens quite often on the IBM Power2 server (a
specific of its atomics implementation).
I even had to patch pgbench, adding a one-second delay after a
connection has been established, to make it possible for all clients to
connect.
But on the Intel server I didn't see unconnected clients. And in any
case, it happens only for a large number of connections (> 1000).
But the best performance was achieved at about 100 connections, and
still I cannot reach the ~2M TPS performance as in your case.

> Did you connect via tcp or unix socket? Was pgbench running on the same
> machine? 
It was locally via unix socket for me (but it's also observable
> via two machines, just with lower overall throughput).

Pgbench was launched on the same machine and connected through unix sockets.

> Did you run a profile to see where the bottleneck is?
Sorry, I do not have root privileges on this server and so cannot use perf.
>
> There's a separate benchmark that I found to be quite revealing that's
> far less dependent on scheduler behaviour. Run two pgbench instances:
>
> 1) With a very simple script '\sleep 1s' or such, and many connections
>    (e.g. 100,1000,5000). That's to simulate connections that are
>    currently idle.
> 2) With a normal pgbench read only script, and low client counts.
>
> Before the changes 2) shows a very sharp decline in performance when the
> count in 1) increases. Afterwards it's pretty much linear.
>
> I think this benchmark actually is much more real-world oriented - due
> to latency and client side overheads it's very normal to have a large
> fraction of connections idle in read mostly OLTP workloads.
>
> Here's the result on my workstation (2x Xeon Gold 5215 CPUs), testing
> 1f42d35a1d6144a23602b2c0bc7f97f3046cf890 against
> 07f32fcd23ac81898ed47f88beb569c631a2f223 which are the commits pre/post
> connection scalability changes.
>
> I used fairly short pgbench runs (15s), and the numbers are the best of
> three runs. I also had emacs and mutt open - some noise to be
> expected. 
But I also gotta work ;)
>
> | Idle Connections | Active Connections | TPS pre | TPS post |
> |-----------------:|-------------------:|--------:|---------:|
> | 0 | 1 | 33599 | 33406 |
> | 100 | 1 | 31088 | 33279 |
> | 1000 | 1 | 29377 | 33434 |
> | 2500 | 1 | 27050 | 33149 |
> | 5000 | 1 | 21895 | 33903 |
> | 10000 | 1 | 16034 | 33140 |
> | 0 | 48 | 1042005 | 1125104 |
> | 100 | 48 | 986731 | 1103584 |
> | 1000 | 48 | 854230 | 1119043 |
> | 2500 | 48 | 716624 | 1119353 |
> | 5000 | 48 | 553657 | 1119476 |
> | 10000 | 48 | 369845 | 1115740 |

Yes, there is also a noticeable difference in my case:

| Idle Connections | Active Connections | TPS pre | TPS post |
|-----------------:|-------------------:|--------:|---------:|
| 5000 | 48 | 758914 | 1184085 |

> Think we'll need profiles to know...

I will try to obtain sudo permissions and do profiling.


", "msg_date": "Sat, 5 Sep 2020 16:58:31 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "

On 04.09.2020 21:53, Andres Freund wrote:
>
>> May be it is because of more complex architecture of my server?
> Think we'll need profiles to know...

This is "perf top" of pgbench -c 100 -j 100 -M prepared -S

  12.16%  postgres                           [.] PinBuffer
  11.92%  postgres                           [.] LWLockAttemptLock
   6.46%  postgres                           [.] UnpinBuffer.constprop.11
   6.03%  postgres                           [.] LWLockRelease
   3.14%  postgres                           [.] BufferGetBlockNumber
   3.04%  postgres                           [.] ReadBuffer_common
   2.13%  [kernel]                           [k] _raw_spin_lock_irqsave
   2.11%  [kernel]                           [k] switch_mm_irqs_off
   1.95%  postgres                           [.] 
_bt_compare


Looks like most of the time is spent in buffer locks.
And which pgbench database scale factor have you used?


", "msg_date": "Sun, 6 Sep 2020 14:05:35 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,

On 2020-09-05 16:58:31 +0300, Konstantin Knizhnik wrote:
> On 04.09.2020 21:53, Andres Freund wrote:
> > 
> > I also used huge_pages=on / configured them on the OS level. Otherwise
> > TLB misses will be a significant factor.
> 
> As far as I understand, there should not be any TLB misses because the size of
> the shared buffers (8Mb) is several orders of magnitude smaller than the
> available physical memory.

I assume you didn't mean 8MB but 8GB? If so, that's way large enough to
be bigger than the TLB, particularly across processes (IIRC there's no
optimization to keep shared mappings de-duplicated between processes
from the view of the TLB).


> Yes, there is also a noticeable difference in my case
> 
> | Idle Connections | Active Connections | TPS pre | TPS post |
> |-----------------:|-------------------:|--------:|---------:|
> | 5000 | 48 | 758914 | 1184085 |

Sounds like you're somehow hitting another bottleneck around 1.2M TPS

Greetings,

Andres Freund


", "msg_date": "Sun, 6 Sep 2020 11:52:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,

On 2020-09-06 14:05:35 +0300, Konstantin Knizhnik wrote:
> On 04.09.2020 21:53, Andres Freund wrote:
> > 
> > > May be it is because of more complex architecture of my server?
> > Think we'll need profiles to know...
> 
> This is "perf top" of pgbench -c 100 -j 100 -M prepared -S
> 
>   12.16%  postgres                [.] PinBuffer
>   11.92%  postgres                [.] 
LWLockAttemptLock
>    6.46%  postgres                [.] UnpinBuffer.constprop.11
>    6.03%  postgres                [.] LWLockRelease
>    3.14%  postgres                [.] BufferGetBlockNumber
>    3.04%  postgres                [.] ReadBuffer_common
>    2.13%  [kernel]                [k] _raw_spin_lock_irqsave
>    2.11%  [kernel]                [k] switch_mm_irqs_off
>    1.95%  postgres                [.] _bt_compare
> 
> 
> Looks like most of the time is spent in buffer locks.

Hm, that is interesting / odd. If you record a profile with call graphs
(e.g. --call-graph dwarf), where are all the LWLockAttemptLock calls
coming from?


I assume the machine you're talking about is an 8-socket machine?

What if you:
a) start postgres and pgbench with numactl --interleave=all
b) start postgres with numactl --interleave=0,1 --cpunodebind=0,1 --membind=0,1
   in case you have 4 sockets, or 0,1,2,3 in case you have 8 sockets?


> And which pgbench database scale factor have you used?

200

Another thing you could try is to run 2-4 pgbench instances in different
databases.

Greetings,

Andres Freund


", "msg_date": "Sun, 6 Sep 2020 11:56:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 06.09.2020 21:56, Andres Freund wrote:
>
> Hm, that is interesting / odd. If you record a profile with call graphs
> (e.g. 
--call-graph dwarf), where are all the LWLockAttemptLock calls
coming from?
>
Attached.

> I assume the machine you're talking about is an 8-socket machine?
>
> What if you:
> a) start postgres and pgbench with numactl --interleave=all
> b) start postgres with numactl --interleave=0,1 --cpunodebind=0,1 --membind=0,1
>    in case you have 4 sockets, or 0,1,2,3 in case you have 8 sockets?
>

TPS for -c 100:

--interleave=all                          1168910
--interleave=0,1                          1232557
--interleave=0,1,2,3                      1254271
--cpunodebind=0,1,2,3 --membind=0,1,2,3   1237237
--cpunodebind=0,1 --membind=0,1           1420211
--cpunodebind=0 --membind=0               1101203


>> And which pgbench database scale factor have you used?
> 200
>
> Another thing you could try is to run 2-4 pgbench instances in different
> databases.
I tried to reinitialize the database with scale 200 but there was no
significant improvement in performance.", "msg_date": "Mon, 7 Sep 2020 17:20:53 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "

On 06.09.2020 21:52, Andres Freund wrote:
> Hi,
>
> On 2020-09-05 16:58:31 +0300, Konstantin Knizhnik wrote:
>> On 04.09.2020 21:53, Andres Freund wrote:
>>> I also used huge_pages=on / configured them on the OS level. Otherwise
>>> TLB misses will be a significant factor.
>> As far as I understand, there should not be any TLB misses because the size of
>> the shared buffers (8Mb) is several orders of magnitude smaller than the
>> available physical memory.
> I assume you didn't mean 8MB but 8GB? 
If so, that's way large enough to
> be bigger than the TLB, particularly across processes (IIRC there's no
> optimization to keep shared mappings de-duplicated between processes
> from the view of the TLB).
>
>
Sorry, certainly 8Gb.
I tried huge pages, but they had almost no effect.
", "msg_date": "Mon, 7 Sep 2020 19:35:50 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,


On Mon, Sep 7, 2020, at 07:20, Konstantin Knizhnik wrote:
> >> And which pgbench database scale factor have you used?
> > 200
> >
> > Another thing you could try is to run 2-4 pgbench instances in different
> > databases.
> I tried to reinitialize the database with scale 200 but there was no 
> significant improvement in performance.

If you're replying to the last bit I am quoting, I was talking about having four databases with separate pgbench tables etc. To see how much of it is procarray contention, and how much it is contention of common buffers etc.


> Attachments:
> * pgbench.svg

What numactl was used for this one?
", "msg_date": "Mon, 07 Sep 2020 13:45:48 -0700", "msg_from": "\"Andres Freund\" <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 2020/09/03 17:18, Michael Paquier wrote:
> On Sun, Aug 16, 2020 at 02:26:57PM -0700, Andres Freund wrote:
>> So we get some buildfarm results while thinking about this.
> 
> Andres, there is an entry in the CF for this thread:
> https://commitfest.postgresql.org/29/2500/
> 
> A lot of work has been committed with 623a9ba, 73487a6, 5788e25, etc.

I haven't seen it mentioned here, so apologies if I've overlooked
something, but as of 623a9ba queries on standbys seem somewhat
broken.

Specifically, I maintain some code which does something like this:

- connects to a standby
- 
checks a particular row does not exist on the standby\n- connects to the primary\n- writes a row in the primary\n- polls the standby (using the same connection as above)\n to verify the row arrives on the standby\n\nAs of recent HEAD it never sees the row arrive on the standby, even\nthough it is verifiably there.\n\nI've traced this back to 623a9ba, which relies on \"xactCompletionCount\"\nbeing incremented to determine whether the snapshot can be reused,\nbut that never happens on a standby, meaning this test in\nGetSnapshotDataReuse():\n\n if (curXactCompletionCount != snapshot->snapXactCompletionCount)\n return false;\n\nwill never return false, and the snapshot's xmin/xmax never get advanced.\nWhich means the session on the standby is not able to see rows on the\nstandby added after the session was started.\n\nIt's simple enough to prevent that being an issue by just never calling\nGetSnapshotDataReuse() if the snapshot was taken during recovery\n(though obviously that means any performance benefits won't be available\non standbys).\n\nI wonder if it's possible to increment \"xactCompletionCount\"\nduring replay along these lines:\n\n *** a/src/backend/access/transam/xact.c\n --- b/src/backend/access/transam/xact.c\n *************** xact_redo_commit(xl_xact_parsed_commit *\n *** 5915,5920 ****\n --- 5915,5924 ----\n */\n if (XactCompletionApplyFeedback(parsed->xinfo))\n XLogRequestWalReceiverReply();\n +\n + LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n + ShmemVariableCache->xactCompletionCount++;\n + LWLockRelease(ProcArrayLock);\n }\n\nwhich seems to work (though quite possibly I've overlooked something I don't\nknow that I don't know about and it will all break horribly somewhere,\netc. 
etc.).\n\n\nRegards\n\nIan Barwick\n\n\n-- \nIan Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Tue, 8 Sep 2020 13:03:01 +0900", "msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-09-08 13:03:01 +0900, Ian Barwick wrote:\n> On 2020/09/03 17:18, Michael Paquier wrote:\n> > On Sun, Aug 16, 2020 at 02:26:57PM -0700, Andres Freund wrote:\n> > > So we get some builfarm results while thinking about this.\n> > \n> > Andres, there is an entry in the CF for this thread:\n> > https://commitfest.postgresql.org/29/2500/\n> > \n> > A lot of work has been committed with 623a9ba, 73487a6, 5788e25, etc.\n> \n> I haven't seen it mentioned here, so apologies if I've overlooked\n> something, but as of 623a9ba queries on standbys seem somewhat\n> broken.\n> \n> Specifically, I maintain some code which does something like this:\n> \n> - connects to a standby\n> - checks a particular row does not exist on the standby\n> - connects to the primary\n> - writes a row in the primary\n> - polls the standby (using the same connection as above)\n> to verify the row arrives on the standby\n> \n> As of recent HEAD it never sees the row arrive on the standby, even\n> though it is verifiably there.\n\nUgh, that's not good.\n\n\n> I've traced this back to 623a9ba, which relies on \"xactCompletionCount\"\n> being incremented to determine whether the snapshot can be reused,\n> but that never happens on a standby, meaning this test in\n> GetSnapshotDataReuse():\n> \n> if (curXactCompletionCount != snapshot->snapXactCompletionCount)\n> return false;\n> \n> will never return false, and the snapshot's xmin/xmax never get advanced.\n> Which means the session on the standby is not able to see rows on the\n> standby added after the session was started.\n> \n> It's simple enough to prevent that being an 
issue by just never calling\n> GetSnapshotDataReuse() if the snapshot was taken during recovery\n> (though obviously that means any performance benefits won't be available\n> on standbys).\n\nYea, that doesn't sound great. Nor is the additional branch welcome.\n\n\n> I wonder if it's possible to increment \"xactCompletionCount\"\n> during replay along these lines:\n> \n> *** a/src/backend/access/transam/xact.c\n> --- b/src/backend/access/transam/xact.c\n> *************** xact_redo_commit(xl_xact_parsed_commit *\n> *** 5915,5920 ****\n> --- 5915,5924 ----\n> */\n> if (XactCompletionApplyFeedback(parsed->xinfo))\n> XLogRequestWalReceiverReply();\n> +\n> + LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> + ShmemVariableCache->xactCompletionCount++;\n> + LWLockRelease(ProcArrayLock);\n> }\n> \n> which seems to work (though quite possibly I've overlooked something I don't\n> know that I don't know about and it will all break horribly somewhere,\n> etc. etc.).\n\nWe'd also need the same in a few more places. Probably worth looking at\nthe list where we increment it on the primary (particularly we need to\nalso increment it for aborts, and 2pc commit/aborts).\n\nAt first I was very confused as to why none of the existing tests have\nfound this significant issue. But after thinking about it for a minute\nthat's because they all use psql, and largely separate psql invocations\nfor each query :(. 
Which means that there's no cached snapshot around...\n\nDo you want to try to write a patch?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 7 Sep 2020 21:11:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 2020/09/08 13:11, Andres Freund wrote:\n> Hi,\n> \n> On 2020-09-08 13:03:01 +0900, Ian Barwick wrote:\n(...)\n>> I wonder if it's possible to increment \"xactCompletionCount\"\n>> during replay along these lines:\n>>\n>> *** a/src/backend/access/transam/xact.c\n>> --- b/src/backend/access/transam/xact.c\n>> *************** xact_redo_commit(xl_xact_parsed_commit *\n>> *** 5915,5920 ****\n>> --- 5915,5924 ----\n>> */\n>> if (XactCompletionApplyFeedback(parsed->xinfo))\n>> XLogRequestWalReceiverReply();\n>> +\n>> + LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n>> + ShmemVariableCache->xactCompletionCount++;\n>> + LWLockRelease(ProcArrayLock);\n>> }\n>>\n>> which seems to work (though quite possibly I've overlooked something I don't\n>> know that I don't know about and it will all break horribly somewhere,\n>> etc. etc.).\n> \n> We'd also need the same in a few more places. Probably worth looking at\n> the list where we increment it on the primary (particularly we need to\n> also increment it for aborts, and 2pc commit/aborts).\n\nYup.\n\n> At first I was very confused as to why none of the existing tests have\n> found this significant issue. But after thinking about it for a minute\n> that's because they all use psql, and largely separate psql invocations\n> for each query :(. 
Which means that there's no cached snapshot around...\n> \n> Do you want to try to write a patch?\n\nSure, I'll give it a go as I have some time right now.\n\n\nRegards\n\nIan Barwick\n\n-- \nIan Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Tue, 8 Sep 2020 13:23:01 +0900", "msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Tue, Sep 8, 2020 at 4:11 PM Andres Freund <andres@anarazel.de> wrote:\n> At first I was very confused as to why none of the existing tests have\n> found this significant issue. But after thinking about it for a minute\n> that's because they all use psql, and largely separate psql invocations\n> for each query :(. Which means that there's no cached snapshot around...\n\nI prototyped a TAP test patch that could maybe do the sort of thing\nyou need, in patch 0006 over at [1]. Later versions of that patch set\ndropped it, because I figured out how to use the isolation tester\ninstead, but I guess you can't do that for a standby test (at least\nnot until someone teaches the isolation tester to support multi-node\nschedules, something that would be extremely useful...). Example:\n\n+# start an interactive session that we can use to interleave statements\n+my $session = PsqlSession->new($node, \"postgres\");\n+$session->send(\"\\\\set PROMPT1 ''\\n\", 2);\n+$session->send(\"\\\\set PROMPT2 ''\\n\", 1);\n...\n+# our snapshot is not too old yet, so we can still use it\n+@lines = $session->send(\"select * from t order by i limit 1;\\n\", 2);\n+shift @lines;\n+$result = shift @lines;\n+is($result, \"1\");\n...\n+# our snapshot is too old! 
the thing it wants to see has been removed\n+@lines = $session->send(\"select * from t order by i limit 1;\\n\", 2);\n+shift @lines;\n+$result = shift @lines;\n+is($result, \"ERROR:  snapshot too old\");\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2BFkUuDv-bcBns%3DZ_O-V9QGW0nWZNHOkEPxHZWjegRXvw%40mail.gmail.com\n\n\n", "msg_date": "Tue, 8 Sep 2020 16:44:17 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "\n\nOn 07.09.2020 23:45, Andres Freund wrote:\n> Hi,\n>\n>\n> On Mon, Sep 7, 2020, at 07:20, Konstantin Knizhnik wrote:\n>>>> And which pgbench database scale factor you have used?\n>>> 200\n>>>\n>>> Another thing you could try is to run 2-4 pgench instances in different\n>>> databases.\n>> I tried to reinitialize database with scale 200 but there was no\n>> significant improvement in performance.\n> If you're replying to the last bit I am quoting, I was talking about having four databases with separate pbench tables etc. To see how much of it is procarray contention, and how much it is contention of common buffers etc.\n>\nSorry, I was testing the hypothesis that the difference in performance in my \ncase and yours can be explained by the size of the table, which can \ninfluence shared buffer contention.\nThat is why I used the same scale as you, but there is no difference \ncompared with scale 100.\n\nAnd Postgres performance in this test is definitely limited by lock \ncontention (most likely shared buffers locks, rather than procarray locks).\nIf I create two instances of Postgres, both with a pgbench -s 200 database, \nand run two pgbenches with 100 connections each, then\neach instance shows the same ~1 million TPS (1186483) as when launched \nstandalone. And the total TPS is 2.3 million.\n\n>> Attachments:\n>> * pgbench.svg\n> What numactl was used for this one?\n>\nNone.
I have not used numactl in this case.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Tue, 8 Sep 2020 12:27:35 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-09-08 16:44:17 +1200, Thomas Munro wrote:\n> On Tue, Sep 8, 2020 at 4:11 PM Andres Freund <andres@anarazel.de> wrote:\n> > At first I was very confused as to why none of the existing tests have\n> > found this significant issue. But after thinking about it for a minute\n> > that's because they all use psql, and largely separate psql invocations\n> > for each query :(. Which means that there's no cached snapshot around...\n> \n> I prototyped a TAP test patch that could maybe do the sort of thing\n> you need, in patch 0006 over at [1]. Later versions of that patch set\n> dropped it, because I figured out how to use the isolation tester\n> instead, but I guess you can't do that for a standby test (at least\n> not until someone teaches the isolation tester to support multi-node\n> schedules, something that would be extremely useful...).\n\nUnfortunately a proper multi-node isolationtester test is basically\nequivalent to building a global lock graph. I think, at least? Including\na need to be able to correlate connections with their locks between the\nnodes.\n\nBut for something like the bug at hand it'd probably be sufficient to just\n\"hack\" something with dblink.
In session 1) insert a row on the primary\nusing dblink, return the LSN, wait for the LSN to have replicated and\nfinally in session 2) check for row visibility.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 8 Sep 2020 10:53:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-06-07 11:24:50 +0300, Michail Nikolaev wrote:\n> Hello, hackers.\n> Andres, nice work!\n> \n> Sorry for the off-top.\n> \n> Some of my work [1] related to the support of index hint bits on\n> standby is highly interfering with this patch.\n> Is it safe to consider it committed and start rebasing on top of the patches?\n\nSorry, I missed this email. Since they're now committed, yes, it is safe\n;)\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 8 Sep 2020 12:19:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 2020/09/09 2:53, Andres Freund wrote:\n> Hi,\n> \n> On 2020-09-08 16:44:17 +1200, Thomas Munro wrote:\n>> On Tue, Sep 8, 2020 at 4:11 PM Andres Freund <andres@anarazel.de> wrote:\n>>> At first I was very confused as to why none of the existing tests have\n>>> found this significant issue. But after thinking about it for a minute\n>>> that's because they all use psql, and largely separate psql invocations\n>>> for each query :(. Which means that there's no cached snapshot around...\n>>\n>> I prototyped a TAP test patch that could maybe do the sort of thing\n>> you need, in patch 0006 over at [1]. 
Later versions of that patch set\n>> dropped it, because I figured out how to use the isolation tester\n>> instead, but I guess you can't do that for a standby test (at least\n>> not until someone teaches the isolation tester to support multi-node\n>> schedules, something that would be extremely useful...).\n> \n> Unfortunately proper multi-node isolationtester test basically is\n> equivalent to building a global lock graph. I think, at least? Including\n> a need to be able to correlate connections with their locks between the\n> nodes.\n> \n> But for something like the bug at hand it'd probably sufficient to just\n> \"hack\" something with dblink. In session 1) insert a row on the primary\n> using dblink, return the LSN, wait for the LSN to have replicated and\n> finally in session 2) check for row visibility.\n\nThe attached seems to do the trick.\n\n\nRegards\n\n\nIan Barwick\n\n\n-- \nIan Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Wed, 9 Sep 2020 15:28:07 +0900", "msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 2020/09/08 13:23, Ian Barwick wrote:\n> On 2020/09/08 13:11, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2020-09-08 13:03:01 +0900, Ian Barwick wrote:\n> (...)\n>>> I wonder if it's possible to increment \"xactCompletionCount\"\n>>> during replay along these lines:\n>>>\n>>>      *** a/src/backend/access/transam/xact.c\n>>>      --- b/src/backend/access/transam/xact.c\n>>>      *************** xact_redo_commit(xl_xact_parsed_commit *\n>>>      *** 5915,5920 ****\n>>>      --- 5915,5924 ----\n>>>               */\n>>>              if (XactCompletionApplyFeedback(parsed->xinfo))\n>>>                      XLogRequestWalReceiverReply();\n>>>      +\n>>>      +       LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n>>>      +       
ShmemVariableCache->xactCompletionCount++;\n>>>      +       LWLockRelease(ProcArrayLock);\n>>>        }\n>>>\n>>> which seems to work (though quite possibly I've overlooked something I don't\n>>> know that I don't know about and it will all break horribly somewhere,\n>>> etc. etc.).\n>>\n>> We'd also need the same in a few more places. Probably worth looking at\n>> the list where we increment it on the primary (particularly we need to\n>> also increment it for aborts, and 2pc commit/aborts).\n> \n> Yup.\n> \n>> At first I was very confused as to why none of the existing tests have\n>> found this significant issue. But after thinking about it for a minute\n>> that's because they all use psql, and largely separate psql invocations\n>> for each query :(. Which means that there's no cached snapshot around...\n>>\n>> Do you want to try to write a patch?\n> \n> Sure, I'll give it a go as I have some time right now.\n\n\nAttached, though bear in mind I'm not very familiar with parts of this,\nparticularly 2PC stuff, so consider it educated guesswork.\n\n\nRegards\n\n\nIan Barwick\n\n-- \nIan Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Wed, 9 Sep 2020 17:02:58 +0900", "msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-09-09 17:02:58 +0900, Ian Barwick wrote:\n> Attached, though bear in mind I'm not very familiar with parts of this,\n> particularly 2PC stuff, so consider it educated guesswork.\n\nThanks for this, and the test case!\n\nYour commit fixes the issues, but not quite correctly. Multixacts\nshouldn't matter, so we don't need to do anything there. And for the\nincreases, I think they should be inside the already existing\nProcArrayLock acquisition, as in the attached.\n\n\nI've also included a quite heavily revised version of your test. 
Instead\nof using dblink I went for having a long-running psql that I feed over\nstdin. The main reason for not liking the previous version is that it\nseems fragile, with the sleep and everything. I expanded it to cover\n2PC as well.\n\nThe test probably needs a bit of cleanup, wrapping some of the\nredundancy around the pump_until calls.\n\nI think the approach of having a long running psql session is really\nuseful, and probably would speed up some tests. Does anybody have a good\nidea for how to best, and without undue effort, to integrate this into\nPostgresNode.pm? I don't really have a great idea, so I think I'd leave\nit with a local helper in the new test?\n\nRegards,\n\nAndres", "msg_date": "Mon, 14 Sep 2020 16:17:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think the approach of having a long running psql session is really\n> useful, and probably would speed up some tests. Does anybody have a good\n> idea for how to best, and without undue effort, to integrate this into\n> PostgresNode.pm? I don't really have a great idea, so I think I'd leave\n> it with a local helper in the new test?\n\nYou could use the interactive_psql infrastructure that already exists\nfor psql/t/010_tab_completion.pl.
That does rely on IO::Pty, but\nI think I'd prefer to accept that dependency for such tests over rolling\nour own IPC::Run, which is more or less what you've done here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Sep 2020 20:14:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-09-14 20:14:48 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I think the approach of having a long running psql session is really\n> > useful, and probably would speed up some tests. Does anybody have a good\n> > idea for how to best, and without undue effort, to integrate this into\n> > PostgresNode.pm? I don't really have a great idea, so I think I'd leave\n> > it with a local helper in the new test?\n> \n> You could use the interactive_psql infrastructure that already exists\n> for psql/t/010_tab_completion.pl. That does rely on IO::Pty, but\n> I think I'd prefer to accept that dependency for such tests over rolling\n> our own IPC::Run, which is more or less what you've done here.\n\nMy test uses IPC::Run - although I'm indirectly 'use'ing, which I guess\nisn't pretty. Just as 013_crash_restart.pl already did (even before\npsql/t/010_tab_completion.pl). I am mostly wondering whether we could\navoid copying the utility functions into multiple test files...\n\nDoes IO::Pty work on windows? 
Given that currently the test doesn't use\na pty and that there's no benefit I can see in requiring one, I'm a bit\nhesitant to go there?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Sep 2020 17:42:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Mon, Sep 14, 2020 at 05:42:51PM -0700, Andres Freund wrote:\n> My test uses IPC::Run - although I'm indirectly 'use'ing, which I guess\n> isn't pretty. Just as 013_crash_restart.pl already did (even before\n> psql/t/010_tab_completion.pl). I am mostly wondering whether we could\n> avoid copying the utility functions into multiple test files...\n> \n> Does IO::Pty work on windows? Given that currently the test doesn't use\n> a pty and that there's no benefit I can see in requiring one, I'm a bit\n> hesitant to go there?\n\nPer https://metacpan.org/pod/IO::Tty:\n\"Windows is now supported, but ONLY under the Cygwin environment, see\nhttp://sources.redhat.com/cygwin/.\"\n\nSo I would suggest to make stuff a soft dependency (as Tom is\nhinting?), and not worry about Windows specifically. It is not like\nwhat we are dealing with here is specific to Windows anyway, so you\nwould have already sufficient coverage. I would not mind if any\nrefactoring is done later, once we know that the proposed test is\nstable in the buildfarm as we would get a better image of what part of\nthe facility overlaps across multiple tests.\n--\nMichael", "msg_date": "Tue, 15 Sep 2020 11:56:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-09-15 11:56:24 +0900, Michael Paquier wrote:\n> On Mon, Sep 14, 2020 at 05:42:51PM -0700, Andres Freund wrote:\n> > My test uses IPC::Run - although I'm indirectly 'use'ing, which I guess\n> > isn't pretty. 
Just as 013_crash_restart.pl already did (even before\n> > psql/t/010_tab_completion.pl). I am mostly wondering whether we could\n> > avoid copying the utility functions into multiple test files...\n> > \n> > Does IO::Pty work on windows? Given that currently the test doesn't use\n> > a pty and that there's no benefit I can see in requiring one, I'm a bit\n> > hesitant to go there?\n> \n> Per https://metacpan.org/pod/IO::Tty:\n> \"Windows is now supported, but ONLY under the Cygwin environment, see\n> http://sources.redhat.com/cygwin/.\"\n> \n> So I would suggest to make stuff a soft dependency (as Tom is\n> hinting?), and not worry about Windows specifically. It is not like\n> what we are dealing with here is specific to Windows anyway, so you\n> would have already sufficient coverage. I would not mind if any\n> refactoring is done later, once we know that the proposed test is\n> stable in the buildfarm as we would get a better image of what part of\n> the facility overlaps across multiple tests.\n\nI'm confused - the test as posted should work on windows, and we already\ndo this in an existing test (src/test/recovery/t/013_crash_restart.pl). What's\nthe point in adding a platforms specific dependency here?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Sep 2020 20:03:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-09-14 16:17:18 -0700, Andres Freund wrote:\n> I've also included a quite heavily revised version of your test. Instead\n> of using dblink I went for having a long-running psql that I feed over\n> stdin. The main reason for not liking the previous version is that it\n> seems fragile, with the sleep and everything. 
I expanded it to cover\n> 2PC is as well.\n> \n> The test probably needs a bit of cleanup, wrapping some of the\n> redundancy around the pump_until calls.\n> \n> I think the approach of having a long running psql session is really\n> useful, and probably would speed up some tests. Does anybody have a good\n> idea for how to best, and without undue effort, to integrate this into\n> PostgresNode.pm? I don't really have a great idea, so I think I'd leave\n> it with a local helper in the new test?\n\nAttached is an updated version of the test (better utility function,\nstricter regexes, bailing out instead of failing just the current when\npsql times out). I'm leaving it in this test for now, but it's fairly\neasy to use this way, in my opinion, so it may be worth moving to\nPostgresNode at some point.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 30 Sep 2020 15:43:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On Tue, 15 Sep 2020 at 07:17, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-09-09 17:02:58 +0900, Ian Barwick wrote:\n> > Attached, though bear in mind I'm not very familiar with parts of this,\n> > particularly 2PC stuff, so consider it educated guesswork.\n>\n> Thanks for this, and the test case!\n>\n> Your commit fixes the issues, but not quite correctly. Multixacts\n> shouldn't matter, so we don't need to do anything there. And for the\n> increases, I think they should be inside the already existing\n> ProcArrayLock acquisition, as in the attached.\n>\n>\n> I've also included a quite heavily revised version of your test. Instead\n> of using dblink I went for having a long-running psql that I feed over\n> stdin. The main reason for not liking the previous version is that it\n> seems fragile, with the sleep and everything. 
I expanded it to cover\n> 2PC is as well.\n>\n> The test probably needs a bit of cleanup, wrapping some of the\n> redundancy around the pump_until calls.\n>\n> I think the approach of having a long running psql session is really\n> useful, and probably would speed up some tests. Does anybody have a good\n> idea for how to best, and without undue effort, to integrate this into\n> PostgresNode.pm? I don't really have a great idea, so I think I'd leave\n> it with a local helper in the new test?\n\n2ndQ has some infra for that and various other TAP enhancements that\nI'd like to try to upstream. I'll ask what I can share and how.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n", "msg_date": "Thu, 1 Oct 2020 17:37:34 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi Ian, Andrew, All,\n\nOn 2020-09-30 15:43:17 -0700, Andres Freund wrote:\n> Attached is an updated version of the test (better utility function,\n> stricter regexes, bailing out instead of failing just the current when\n> psql times out). I'm leaving it in this test for now, but it's fairly\n> easy to use this way, in my opinion, so it may be worth moving to\n> PostgresNode at some point.\n\nI pushed this yesterday. Ian, thanks again for finding this and helping\nwith fixing & testing.\n\n\nUnfortunately currently some buildfarm animals don't like the test for\nreasons I don't quite understand. Looks like it's all windows + msys\nanimals that run the tap tests. On jacana and fairywren the new test\nfails, but with a somewhat confusing problem:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2020-10-01%2015%3A32%3A34\nBail out! 
aborting wait: program timed out\n# stream contents: >>data\n# (0 rows)\n# <<\n# pattern searched for: (?m-xis:^\\\\(0 rows\\\\)$)\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2020-10-01%2014%3A12%3A13\nBail out! aborting wait: program timed out\nstream contents: >>data\n(0 rows)\n<<\npattern searched for: (?^m:^\\\\(0 rows\\\\)$)\n\nI don't know what the -xis indicates on jacana, and why it's not present\non fairywren. Nor do I know why the pattern doesn't match in the first\nplace, sure looks like it should?\n\nAndrew, do you have an insight into how mingw's regex match differs\nfrom native windows and proper unixoid systems? I guess it's somewhere\naround line endings or such?\n\nJacana successfully deals with 013_crash_restart.pl, which does use the\nsame mechanism as the new 021_row_visibility.pl - I think the only real\ndifference is that I used ^ and $ in the regexes in the latter...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 1 Oct 2020 11:26:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "\nOn 10/1/20 2:26 PM, Andres Freund wrote:\n> Hi Ian, Andrew, All,\n>\n> On 2020-09-30 15:43:17 -0700, Andres Freund wrote:\n>> Attached is an updated version of the test (better utility function,\n>> stricter regexes, bailing out instead of failing just the current when\n>> psql times out). I'm leaving it in this test for now, but it's fairly\n>> easy to use this way, in my opinion, so it may be worth moving to\n>> PostgresNode at some point.\n> I pushed this yesterday. Ian, thanks again for finding this and helping\n> with fixing & testing.\n>\n>\n> Unfortunately currently some buildfarm animals don't like the test for\n> reasons I don't quite understand. Looks like it's all windows + msys\n> animals that run the tap tests.
On jacana and fairywren the new test\n> fails, but with a somewhat confusing problem:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2020-10-01%2015%3A32%3A34\n> Bail out! aborting wait: program timed out\n> # stream contents: >>data\n> # (0 rows)\n> # <<\n> # pattern searched for: (?m-xis:^\\\\(0 rows\\\\)$)\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2020-10-01%2014%3A12%3A13\n> Bail out! aborting wait: program timed out\n> stream contents: >>data\n> (0 rows)\n> <<\n> pattern searched for: (?^m:^\\\\(0 rows\\\\)$)\n>\n> I don't know with the -xis indicates on jacana, and why it's not present\n> on fairywren. Nor do I know why the pattern doesn't match in the first\n> place, sure looks like it should?\n>\n> Andrew, do you have an insight into how mingw's regex match differs\n> from native windows and proper unixoid systems? I guess it's somewhere\n> around line endings or such?\n>\n> Jacana successfully deals with 013_crash_restart.pl, which does use the\n> same mechanis as the new 021_row_visibility.pl - I think the only real\n> difference is that I used ^ and $ in the regexes in the latter...\n\n\nMy strong suspicion is that we're getting unwanted CRs. Note the\npresence of numerous instances of this in PostgresNode.pm:\n\n $stdout =~ s/\\r\\n/\\n/g if $Config{osname} eq 'msys';\n\nSo you probably want something along those lines at the top of the loop\nin send_query_and_wait:\n\n $$psql{stdout} =~ s/\\r\\n/\\n/g if $Config{osname} eq 'msys';\n\npossibly also for stderr, just to make it more futureproof, and at the\ntop of the file:\n\n use Config;\n\n\nDo you want me to test that first?\n\n\nThe difference between the canonical way perl states the regex is due to\nperl version differences. 
It shouldn't matter.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n\n", "msg_date": "Thu, 1 Oct 2020 16:00:20 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-10-01 16:00:20 -0400, Andrew Dunstan wrote:\n> My strong suspicion is that we're getting unwanted CRs. Note the\n> presence of numerous instances of this in PostgresNode.pm:\n\n\n> $stdout =~ s/\\r\\n/\\n/g if $Config{osname} eq 'msys';\n> \n> So you probably want something along those lines at the top of the loop\n> in send_query_and_wait:\n> \n> $$psql{stdout} =~ s/\\r\\n/\\n/g if $Config{osname} eq 'msys';\n\nYikes, that's ugly :(.\n\n\nI assume it's not, as the comments says\n\t# Note: on Windows, IPC::Run seems to convert \\r\\n to \\n in program output\n\t# if we're using native Perl, but not if we're using MSys Perl. So do it\n\t# by hand in the latter case, here and elsewhere.\nthat IPC::Run converts things, but that native windows perl uses\nhttps://perldoc.perl.org/perlrun#PERLIO\na PERLIO that includes :crlf, whereas msys probably doesn't?\n\nAny chance you could run something like\nperl -mPerlIO -e 'print(PerlIO::get_layers(STDIN), \"\\n\");'\non both native and msys perl?\n\n\n> possibly also for stderr, just to make it more futureproof, and at the\n> top of the file:\n> \n> use Config;\n> \n> \n\n> Do you want me to test that first?\n\nThat'd be awesome.\n\n\n> The difference between the canonical way perl states the regex is due to\n> perl version differences. 
It shouldn't matter.\n\nThanks!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 1 Oct 2020 13:22:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "\nOn 10/1/20 4:22 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2020-10-01 16:00:20 -0400, Andrew Dunstan wrote:\n>> My strong suspicion is that we're getting unwanted CRs. Note the\n>> presence of numerous instances of this in PostgresNode.pm:\n>\n>> $stdout =~ s/\\r\\n/\\n/g if $Config{osname} eq 'msys';\n>>\n>> So you probably want something along those lines at the top of the loop\n>> in send_query_and_wait:\n>>\n>> $$psql{stdout} =~ s/\\r\\n/\\n/g if $Config{osname} eq 'msys';\n> Yikes, that's ugly :(.\n>\n>\n> I assume it's not, as the comments says\n> \t# Note: on Windows, IPC::Run seems to convert \\r\\n to \\n in program output\n> \t# if we're using native Perl, but not if we're using MSys Perl. So do it\n> \t# by hand in the latter case, here and elsewhere.\n> that IPC::Run converts things, but that native windows perl uses\n> https://perldoc.perl.org/perlrun#PERLIO\n> a PERLIO that includes :crlf, whereas msys probably doesn't?\n>\n> Any chance you could run something like\n> perl -mPerlIO -e 'print(PerlIO::get_layers(STDIN), \"\\n\");'\n> on both native and msys perl?\n>\n>\n\nsys (jacana): stdio\n\nnative: unixcrlf\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 1 Oct 2020 16:44:03 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-10-01 16:44:03 -0400, Andrew Dunstan wrote:\n> > I assume it's not, as the comments says\n> > \t# Note: on Windows, IPC::Run seems to convert \\r\\n to \\n in program output\n> 
> \t# if we're using native Perl, but not if we're using MSys Perl. So do it\n> > \t# by hand in the latter case, here and elsewhere.\n> > that IPC::Run converts things, but that native windows perl uses\n> > https://perldoc.perl.org/perlrun#PERLIO\n> > a PERLIO that includes :crlf, whereas msys probably doesn't?\n> >\n> > Any chance you could run something like\n> > perl -mPerlIO -e 'print(PerlIO::get_layers(STDIN), \"\\n\");'\n> > on both native and msys perl?\n> \n> sys (jacana): stdio\n> \n> native: unixcrlf\n\nInteresting. That suggests we could get around needing the if msys\nbranches in several places by setting PERLIO to unixcrlf somewhere\ncentrally when using msys.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 1 Oct 2020 13:59:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "\nOn 10/1/20 4:22 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2020-10-01 16:00:20 -0400, Andrew Dunstan wrote:\n>> My strong suspicion is that we're getting unwanted CRs. Note the\n>> presence of numerous instances of this in PostgresNode.pm:\n>\n>> $stdout =~ s/\\r\\n/\\n/g if $Config{osname} eq 'msys';\n>>\n>> So you probably want something along those lines at the top of the loop\n>> in send_query_and_wait:\n>>\n>> $$psql{stdout} =~ s/\\r\\n/\\n/g if $Config{osname} eq 'msys';\n> Yikes, that's ugly :(.\n>\n>\n> I assume it's not, as the comments says\n> \t# Note: on Windows, IPC::Run seems to convert \\r\\n to \\n in program output\n> \t# if we're using native Perl, but not if we're using MSys Perl.
So do it\n> \t# by hand in the latter case, here and elsewhere.\n> that IPC::Run converts things, but that native windows perl uses\n> https://perldoc.perl.org/perlrun#PERLIO\n> a PERLIO that includes :crlf, whereas msys probably doesn't?\n>\n> Any chance you could run something like\n> perl -mPerlIO -e 'print(PerlIO::get_layers(STDIN), \"\\n\");'\n> on both native and msys perl?\n>\n>\n>> possibly also for stderr, just to make it more futureproof, and at the\n>> top of the file:\n>>\n>> use Config;\n>>\n>>\n>> Do you want me to test that first?\n> That'd be awesome.\n>\n>\n>\n\nThe change I suggested makes jacana happy.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 1 Oct 2020 19:21:14 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "On 2020/10/02 3:26, Andres Freund wrote:\n> Hi Ian, Andrew, All,\n> \n> On 2020-09-30 15:43:17 -0700, Andres Freund wrote:\n>> Attached is an updated version of the test (better utility function,\n>> stricter regexes, bailing out instead of failing just the current when\n>> psql times out). I'm leaving it in this test for now, but it's fairly\n>> easy to use this way, in my opinion, so it may be worth moving to\n>> PostgresNode at some point.\n> \n> I pushed this yesterday. Ian, thanks again for finding this and helping\n> with fixing & testing.\n\nThanks! 
Apologies for not getting back to your earlier responses,\nhave been distracted by Various Other Things.\n\nThe tests I run which originally triggered the issue now run just fine.\n\n\nRegards\n\nIan Barwick\n\n\n-- \nIan Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Fri, 2 Oct 2020 11:14:19 +0900", "msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nOn 2020-10-01 19:21:14 -0400, Andrew Dunstan wrote:\n> On 10/1/20 4:22 PM, Andres Freund wrote:\n> > \t# Note: on Windows, IPC::Run seems to convert \\r\\n to \\n in program output\n> > \t# if we're using native Perl, but not if we're using MSys Perl. So do it\n> > \t# by hand in the latter case, here and elsewhere.\n> > that IPC::Run converts things, but that native windows perl uses\n> > https://perldoc.perl.org/perlrun#PERLIO\n> > a PERLIO that includes :crlf, whereas msys probably doesn't?\n> >\n> > Any chance you could run something like\n> > perl -mPerlIO -e 'print(PerlIO::get_layers(STDIN), \"\\n\");'\n> > on both native and msys perl?\n> >\n> >\n> >> possibly also for stderr, just to make it more futureproof, and at the\n> >> top of the file:\n> >>\n> >> use Config;\n> >>\n> >>\n> >> Do you want me to test that first?\n> > That'd be awesome.\n\n> The change I suggested makes jacana happy.\n\nThanks, pushed. Hopefully that fixes the mingw animals.\n\nI wonder if we instead should do something like\n\n# Have mingw perl treat CRLF the same way as perl on native windows does\nifeq ($(build_os),mingw32)\n PROVE=\"PERLIO=unixcrlf $(PROVE)\"\nendif\n\nin Makefile.global.in? 
Then we could remove these regexes from all the\nvarious places?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 5 Oct 2020 19:33:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "\nOn 10/5/20 10:33 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2020-10-01 19:21:14 -0400, Andrew Dunstan wrote:\n>> On 10/1/20 4:22 PM, Andres Freund wrote:\n>>> \t# Note: on Windows, IPC::Run seems to convert \\r\\n to \\n in program output\n>>> \t# if we're using native Perl, but not if we're using MSys Perl. So do it\n>>> \t# by hand in the latter case, here and elsewhere.\n>>> that IPC::Run converts things, but that native windows perl uses\n>>> https://perldoc.perl.org/perlrun#PERLIO\n>>> a PERLIO that includes :crlf, whereas msys probably doesn't?\n>>>\n>>> Any chance you could run something like\n>>> perl -mPerlIO -e 'print(PerlIO::get_layers(STDIN), \"\\n\");'\n>>> on both native and msys perl?\n>>>\n>>>\n>>>> possibly also for stderr, just to make it more futureproof, and at the\n>>>> top of the file:\n>>>>\n>>>> use Config;\n>>>>\n>>>>\n>>>> Do you want me to test that first?\n>>> That'd be awesome.\n>> The change I suggested makes jacana happy.\n> Thanks, pushed. Hopefully that fixes the mingw animals.\n>\n\nI don't think we're out of the woods yet. This test is also having bad\neffects on bowerbird, which is an MSVC animal.
It's hanging completely :-(\n\n\nDigging some more.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 9 Oct 2020 16:38:14 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "Hi,\n\nGreatly appreciate if you could please reply to the following questions as\ntime allows.\n\nI have seen previous discussion/patches on a built-in connection pooler. How\ndoes this scalability improvement, particularly idle connection improvements\netc, affect that built-in pooler need, if any?\n\n\nSame general question about an external connection pooler in general in\nProduction? Still required to route to different servers but no longer\nneeded for the pooling part. as an example.\n\n\nMany Thanks!\n\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Sat, 27 Feb 2021 10:40:58 -0700 (MST)", "msg_from": "AJG <ayden@gera.co.nz>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "\n----- Mensagem original -----\n> De: \"AJG\" <ayden@gera.co.nz>\n> Para: \"Pg Hackers\" <pgsql-hackers@postgresql.org>\n> Enviadas: Sábado, 27 de fevereiro de 2021 14:40:58\n> Assunto: Re: Improving connection scalability: GetSnapshotData()\n\n> Hi,\n\n> Greatly appreciate if you could please reply to the following questions as\n> time allows.\n\n> I have seen previous discussion/patches on a built-in connection pooler. How\n> does this scalability improvement, particularly idle connection improvements\n> etc, affect that built-in pooler need, if any?\n\n> Same general question about an external connection pooler in general in\n> Production? Still required to route to different servers but no longer\n> needed for the pooling part. 
as an example.\n\n> Many Thanks!\n\n> --\n> Sent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\nAs I understand it, the improvements made to GetSnapshotData() mean having higher connection count does not incur as much of a penalty to performance as before. \nI am not sure it solves the connection establishment side of things, but I may be wrong.\n\nLuis R. Weck \n\n\n", "msg_date": "Mon, 1 Mar 2021 09:49:47 -0300 (BRT)", "msg_from": "luis.roberto@siscobra.com.br", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" }, { "msg_contents": "\n\nOn 27.02.2021 20:40, AJG wrote:\n> Hi,\n>\n> Greatly appreciate if you could please reply to the following questions as\n> time allows.\n>\n> I have seen previous discussion/patches on a built-in connection pooler. How\n> does this scalability improvement, particularly idle connection improvements\n> etc, affect that built-in pooler need, if any?\n>\n>\n> Same general question about an external connection pooler in general in\n> Production? Still required to route to different servers but no longer\n> needed for the pooling part.
as an example.\n>\n>\n> Many Thanks!\n>\n\nA connection pooler is still needed.\nThe patch for GetSnapshotData() mostly improves scalability of read-only \nqueries and reduces contention for the procarray lock.\nBut a read-write workload causes contention for many other resources: \nrelation extension lock, buffer locks, tuple locks and so on.\n\nIf you run pgbench on a NUMA machine with hundreds of cores, then you will \nstill observe significant degradation of performance with an increasing \nnumber of connections.\nAnd this degradation will be dramatic if you use some non-uniform \ndistribution of keys, for example a Zipfian distribution.\n\n\n\n\n>\n>\n>\n> --\n> Sent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n>\n>\n\n\n\n", "msg_date": "Mon, 1 Mar 2021 18:46:51 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Improving connection scalability: GetSnapshotData()" } ]
[ { "msg_contents": "Hi\n\nI miss a reglanguage type from our set of reg* types.\n\nIt reduces the mental overhead of queries over the pg_proc table.\n\nWith this type I can easily filter only plpgsql functions\n\nselect *\n from pg_proc\nwhere prolang = 'plpgsql'::reglanguage\n and pronamespace <> 'pg_catalog'::regnamespace;\n\nRegards\n\nPavel", "msg_date": "Sun, 1 Mar 2020 11:07:56 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal - reglanguage type" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I miss a reglanguage type from our set of reg* types.\n\nI'm skeptical about this. I don't think we want to wind up with a reg*\ntype for every system catalog, so there needs to be some rule about which\nones it's worth the trouble for. The original idea was to provide a reg*\ntype if the lookup rule would be anything more complicated than \"select\noid from <catalog> where name = 'foo'\". We went beyond that with\nregnamespace and regrole, but I think there was a sufficient argument of\nusefulness for those two. I don't see that reglanguage has enough of\na use-case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 01 Mar 2020 13:31:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal - reglanguage type" }, { "msg_contents": "ne 1. 3. 2020 v 19:31 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I miss a reglanguage type from our set of reg* types.\n>\n> I'm skeptical about this. I don't think we want to wind up with a reg*\n> type for every system catalog, so there needs to be some rule about which\n> ones it's worth the trouble for. The original idea was to provide a reg*\n> type if the lookup rule would be anything more complicated than \"select\n> oid from <catalog> where name = 'foo'\".
We went beyond that with\n> regnamespace and regrole, but I think there was a sufficient argument of\n> usefulness for those two. I don't see that reglanguage has enough of\n> a use-case.\n>\n\nThe use-case is probably only one - filtering pg_proc. Probably the most\ncommon filter is\n\nprolang = (SELECT oid\n FROM pg_language\n WHERE lanname = 'plpgsql')\n\nIt's a little bit inconvenient; for namespaces we can do pronamespace <>\n'pg_catalog'::regnamespace, but there is nothing for languages.\n\nThis feature is interesting for people who write code in plpgsql, or who\nmigrate from PL/SQL (and for people who use plpgsql_check).\n\nAll mass checks (mass usage of plpgsql_check) have to use a filter on prolang.\n\nRegards\n\nPavel\n\n\n\n\n>\n> regards, tom lane\n>\n
", "msg_date": "Sun, 1 Mar 2020 19:38:59 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - reglanguage type" } ]
[ { "msg_contents": "> The cfbot thinks it doesn't even apply anymore --- conflict with the dedup\n> patch, no doubt?\n\nMinor conflict with that patch indeed. Attached is rebased version. I'm running some tests now - will post the results here when finished.\n\n-Floris", "msg_date": "Sun, 1 Mar 2020 20:15:25 +0000", "msg_from": "Floris Van Nee <florisvannee@Optiver.com>", "msg_from_op": true, "msg_subject": "RE: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()" }, { "msg_contents": "On Sun, Mar 1, 2020 at 12:15 PM Floris Van Nee <florisvannee@optiver.com> wrote:\n> Minor conflict with that patch indeed. Attached is rebased version. I'm running some tests now - will post the results here when finished.\n\nThanks.\n\nWe're going to have to go back to my original approach to inlining. At\nleast, it seemed to be necessary to do that to get any benefit from\nthe patch on my comparatively modest workstation (using a similar\npgbench SELECT benchmark to the one that you ran). Tom also had a\nconcern about the portability of inlining without using \"static\ninline\" -- that is another reason to avoid the approach to inlining\ntaken by v3 + v4.\n\nIt's possible (though not very likely) that performance has been\nimpacted by the deduplication patch (commit 0d861bbb), since it\nupdated the definition of BTreeTupleGetNAtts() itself.\n\nAttached is v5, which inlines in a targeted fashion, pretty much in\nthe same way as the earliest version. This is the same as v4 in every\nother way. Perhaps you can test this.\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 2 Mar 2020 17:29:11 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()" }, { "msg_contents": "> Attached is v5, which inlines in a targeted fashion, pretty much in the same\r\n> way as the earliest version. 
This is the same as v4 in every other way.\r\n> Perhaps you can test this.\r\n> \r\n\r\nThank you for the new patch. With the new one I am indeed able to reproduce a performance increase. It is very difficult to reliably reproduce it when running on a large number of cores though, due to the NUMA architecture.\r\nFor tests with a small number of cores, I pin all of them to the same node. With that, I see a significant performance increase for v5 compared to master. However, if I pin pgbench to a different node than the node that Postgres is pinned to, this leads to a 20% performance degradation compared to having all of them on the same node, as well as the stddev increasing by a factor of 2 (regardless of patch). With that, it becomes very difficult to see any kind of performance increase due to the patch. For a large number of pgbench workers, I cannot specifically pin the pgbench worker on the same node as the Postgres backend connection it's handling. Leaving it to the OS gives very unreliable results - when I run the 30 workers / 30 connections test, I sometimes see periods of up to 30 minutes where it's doing it 'correctly', but it could also randomly run at the -20% performance for a long time. So far my best bet at explaining this is the NUMA performance hit. I'd like to be able to specifically schedule some Postgres backends to run on one node, while other Postgres backends run on a different node, but this isn't straightforward.\r\n\r\nFor now, I see no issues with the patch though. 
However, in real life situations there may be other, more important, optimizations for people that use big multi-node machines.\r\n\r\nThoughts?\r\n\r\n-Floris\r\n\r\n", "msg_date": "Sun, 8 Mar 2020 11:23:08 +0000", "msg_from": "Floris Van Nee <florisvannee@Optiver.com>", "msg_from_op": true, "msg_subject": "RE: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()" }, { "msg_contents": "\n\n> On Mar 2, 2020, at 5:29 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Sun, Mar 1, 2020 at 12:15 PM Floris Van Nee <florisvannee@optiver.com> wrote:\n>> Minor conflict with that patch indeed. Attached is rebased version. I'm running some tests now - will post the results here when finished.\n> \n> Thanks.\n> \n> We're going to have to go back to my original approach to inlining. At\n> least, it seemed to be necessary to do that to get any benefit from\n> the patch on my comparatively modest workstation (using a similar\n> pgbench SELECT benchmark to the one that you ran). Tom also had a\n> concern about the portability of inlining without using \"static\n> inline\" -- that is another reason to avoid the approach to inlining\n> taken by v3 + v4.\n> \n> It's possible (though not very likely) that performance has been\n> impacted by the deduplication patch (commit 0d861bbb), since it\n> updated the definition of BTreeTupleGetNAtts() itself.\n> \n> Attached is v5, which inlines in a targeted fashion, pretty much in\n> the same way as the earliest version. This is the same as v4 in every\n> other way. Perhaps you can test this.\n\nHi Peter, just a quick code review:\n\nThe v5 patch files apply and pass the regression tests. I get no errors. The performance impact is within the noise. The TPS with the patch are higher sometimes and lower other times, looking across the 27 subtests of the \"select-only\" benchmark. Which subtest is slower or faster changes per run, so that doesn't seem to be relevant. 
I ran the \"select-only\" six times (three with the patch, three without). The change you made to the loop is clearly visible in the nbtsearch.s output, and the change to inline _bt_compare is even more visible, so there doesn't seem to be any defeating of your change due to the compiler ignoring the \"inline\" or such. I compiled using gcc -O2\n\n% gcc --version\nConfigured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/c++/4.2.1\nApple clang version 11.0.0 (clang-1100.0.33.17)\nTarget: x86_64-apple-darwin19.4.0\nThread model: posix\nInstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin\n\n2.4 GHz 8-Core Intel Core i9\n32 GB 2667 MHz DDR4\n\nReading this thread, I think the lack of a performance impact on laptop hardware was expected, but perhaps confirmation that it does not make things worse is useful?\n\nSince this patch doesn't seem to do any harm, I would mark it as \"ready for committer\", except that there doesn't yet seem to be enough evidence that it is a net win.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 28 May 2020 12:35:17 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()" }, { "msg_contents": "Status update for a commitfest entry.\r\n\r\nThis thread was inactive for a while. The latest review suggests that it is Ready For Committer.\r\nI also took a quick look at the patch and agree that it looks sensible. 
Maybe add a comment before the _bt_compare_inl() to explain the need for this code change.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Mon, 02 Nov 2020 17:45:37 +0000", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: RE: Delaying/avoiding BTreeTupleGetNAtts() call within\n _bt_compare()" }, { "msg_contents": "On Mon, Nov 2, 2020 at 9:46 AM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n> This thread was inactive for a while. The latest review suggests that it is Ready For Committer.\n> I also took a quick look at the patch and agree that it looks sensible. Maybe add a comment before the _bt_compare_inl() to explain the need for this code change.\n\nActually I am probably going to withdraw this patch soon. The idea is\na plausible way of improving things. But at the same time I cannot\nreally demonstrate any benefit on hardware that I have access to.\n\nThis patch was something I worked on based on a private complaint from\nAndres (who is CC'd here now) during an informal conversation at pgDay\nSF. If Andres is now in a position to test the patch and can show a\nbenefit on certain hardware, I may well pick it back up. But as things\nstand the evidence in support of the patch is pretty weak. I'm not\ngoing to commit a patch like this unless and until it's crystal clear\nwhat the benefits are.\n\nif Andres cannot spend any time on this in the foreseeable future then\nI'll withdraw the patch. 
I intend to formally withdraw the patch on\nNovember 9th, provided no new information comes to light.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 2 Nov 2020 13:04:40 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: RE: Delaying/avoiding BTreeTupleGetNAtts() call within\n _bt_compare()" }, { "msg_contents": "On Thu, May 28, 2020 at 12:35 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Reading this thread, I think the lack of a performance impact on laptop hardware was expected, but perhaps confirmation that it does not make things worse is useful?\n>\n> Since this patch doesn't seem to do any harm, I would mark it as \"ready for committer\", except that there doesn't yet seem to be enough evidence that it is a net win.\n\nThank you for testing my patch. Sorry for the delay in getting back to this.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 2 Nov 2020 13:05:04 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Delaying/avoiding BTreeTupleGetNAtts() call within _bt_compare()" }, { "msg_contents": "On Mon, Nov 2, 2020 at 1:04 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> if Andres cannot spend any time on this in the foreseeable future then\n> I'll withdraw the patch. I intend to formally withdraw the patch on\n> November 9th, provided no new information comes to light.\n\nI have now formally withdrawn the patch in the CF app.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 9 Nov 2020 10:47:39 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: RE: Delaying/avoiding BTreeTupleGetNAtts() call within\n _bt_compare()" } ]
[ { "msg_contents": "Hackers,\n\nThe last Commitfest for v13 is now in progress!\n\nCurrent stats for the Commitfest are:\n\nNeeds review: 192\nWaiting on Author: 19\nReady for Committer: 4\nTotal: 215\n\nNote that this time I'll be ignoring work done prior to the actual CF \nwhen reporting progress. Arbitrary, perhaps, but I'm most interested in \ntracking the ongoing progress during the month.\n\nThe number of patches waiting on author seems lower (as a percentage) \nthan usual which I take to be a good sign.
I'll be assessing the WoA \npatches over the next day or two, so if your patch is in this state get \na new version in soon.\n\nPlease, if you have submitted patches in this CF make sure that you are \nalso reviewing patches of a similar number and complexity. The CF \ncannot move forward without patch review.\n\nHappy Hacking!\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Sun, 1 Mar 2020 16:10:24 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Commitfest 2020-03 Now in Progress" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> The last Commitfest for v13 is now in progress!\n> The number of patches waiting on author seems lower (as a percentage) \n> than usual which I take to be a good sign. I'll be assessing the WoA \n> patches over the next day or two, so if your patch is in this state get \n> a new version in soon.\n\nAnother pointer is to check the state of your patch in the cfbot:\n\nhttp://commitfest.cputube.org\n\nIf it isn't passing, please send in a new version that fixes whatever\nthe problem is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 01 Mar 2020 16:16:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2020-03 Now in Progress" }, { "msg_contents": "On 3/1/20 4:10 PM, David Steele wrote:\n> The last Commitfest for v13 is now in progress!\n> \n> Current stats for the Commitfest are:\n> \n> Needs review: 192\n> Waiting on Author: 19\n> Ready for Committer: 4\n> Total: 215\n\nHalfway through, here's where we stand.
Note that a CF entry was added \n> after the stats above were recorded so the total has increased.\n> \n> Needs review: 151 (-41)\n> Waiting on Author: 20 (+1)\n> Ready for Committer: 9 (+5)\n> Committed: 25\n> Moved to next CF: 1\n> Withdrawn: 4\n> Returned with Feedback: 6\n> Total: 216\n\nAfter regulation time here's where we stand:\n\nNeeds review: 111 (-40)\nWaiting on Author: 26 (+6)\nReady for Committer: 11 (+2)\nCommitted: 51 (+11)\nMoved to next CF: 2 (+1)\nReturned with Feedback: 10 (+4)\nRejected: 1\nWithdrawn: 5 (+1)\nTotal: 217\n\nWe have one more patch so it appears that one in a completed state \n(committed, etc.) at the beginning of the CF has been moved to an \nuncompleted state. Or perhaps my math is just bad.\n\nThe RMT has determined that the CF will be extended for one week so I'll \nhold off on moving and marking patches until April 8.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 1 Apr 2020 10:09:44 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Commitfest 2020-03 Now in Progress" }, { "msg_contents": "On 4/1/20 10:09 AM, David Steele wrote:\n> On 3/17/20 8:10 AM, David Steele wrote:\n>> On 3/1/20 4:10 PM, David Steele wrote:\n>>> The last Commitfest for v13 is now in progress!\n>>>\n>>> Current stats for the Commitfest are:\n>>>\n>>> Needs review: 192\n>>> Waiting on Author: 19\n>>> Ready for Committer: 4\n>>> Total: 215\n>>\n>> Halfway through, here's where we stand.  
Note that a CF entry was \n>> added after the stats above were recorded so the total has increased.\n>>\n>> Needs review: 151 (-41)\n>> Waiting on Author: 20 (+1)\n>> Ready for Committer: 9 (+5)\n>> Committed: 25\n>> Moved to next CF: 1\n>> Withdrawn: 4\n>> Returned with Feedback: 6\n>> Total: 216\n> \n> After regulation time here's where we stand:\n> \n> Needs review: 111 (-40)\n> Waiting on Author: 26 (+6)\n> Ready for Committer: 11 (+2)\n> Committed: 51 (+11)\n> Moved to next CF: 2 (+1)\n> Returned with Feedback: 10 (+4)\n> Rejected: 1\n> Withdrawn: 5 (+1)\n> Total: 217\n> \n> We have one more patch so it appears that one in a completed state \n> (committed, etc.) at the beginning of the CF has been moved to an \n> uncompleted state. Or perhaps my math is just bad.\n> \n> The RMT has determined that the CF will be extended for one week so I'll \n> hold off on moving and marking patches until April 8.\n\nThe 2020-03 Commitfest is officially closed.\n\nFinal stats are (for entire CF, not just from March 1 this time):\n\nCommitted: 90.\nMoved to next CF: 115.\nWithdrawn: 8.\nRejected: 1.\nReturned with Feedback: 23.\nTotal: 237.\n\nGood job everyone!\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 8 Apr 2020 12:36:37 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Commitfest 2020-03 Now in Progress" }, { "msg_contents": "David Steele <david@pgmasters.net> writes:\n> The 2020-03 Commitfest is officially closed.\n\n> Final stats are (for entire CF, not just from March 1 this time):\n\n> Committed: 90.\n> Moved to next CF: 115.\n> Withdrawn: 8.\n> Rejected: 1.\n> Returned with Feedback: 23.\n> Total: 237.\n\n> Good job everyone!\n\nThanks for running it! 
I know it's a tedious responsibility.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Apr 2020 12:39:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2020-03 Now in Progress" }, { "msg_contents": "On Wed, Apr 08, 2020 at 12:39:53PM -0400, Tom Lane wrote:\n> David Steele <david@pgmasters.net> writes:\n>> The 2020-03 Commitfest is officially closed.\n>> Good job everyone!\n> \n> Thanks for running it! I know it's a tedious responsibility.\n\nNice, David. Thanks a lot!\n--\nMichael", "msg_date": "Thu, 9 Apr 2020 10:25:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2020-03 Now in Progress" }, { "msg_contents": "On Wed, Apr 08, 2020 at 12:36:37PM -0400, David Steele wrote:\n> On 4/1/20 10:09 AM, David Steele wrote:\n> > On 3/17/20 8:10 AM, David Steele wrote:\n> > > On 3/1/20 4:10 PM, David Steele wrote:\n> > > > The last Commitfest for v13 is now in progress!\n> > > > \n> > > > Current stats for the Commitfest are:\n> > > > \n> > > > Needs review: 192\n> > > > Waiting on Author: 19\n> > > > Ready for Committer: 4\n> > > > Total: 215\n> > > \n> > > Halfway through, here's where we stand. Note that a CF entry was\n> > > added after the stats above were recorded so the total has\n> > > increased.\n> > > \n> > > Needs review: 151 (-41)\n> > > Waiting on Author: 20 (+1)\n> > > Ready for Committer: 9 (+5)\n> > > Committed: 25\n> > > Moved to next CF: 1\n> > > Withdrawn: 4\n> > > Returned with Feedback: 6\n> > > Total: 216\n> > \n> > After regulation time here's where we stand:\n> > \n> > Needs review: 111 (-40)\n> > Waiting on Author: 26 (+6)\n> > Ready for Committer: 11 (+2)\n> > Committed: 51 (+11)\n> > Moved to next CF: 2 (+1)\n> > Returned with Feedback: 10 (+4)\n> > Rejected: 1\n> > Withdrawn: 5 (+1)\n> > Total: 217\n> > \n> > We have one more patch so it appears that one in a completed state\n> > (committed, etc.)
at the beginning of the CF has been moved to an\n> > uncompleted state. Or perhaps my math is just bad.\n> > \n> > The RMT has determined that the CF will be extended for one week so I'll\n> > hold off on moving and marking patches until April 8.\n> \n> The 2020-03 Commitfest is officially closed.\n> \n> Final stats are (for entire CF, not just from March 1 this time):\n> \n> Committed: 90.\n> Moved to next CF: 115.\n> Withdrawn: 8.\n> Rejected: 1.\n> Returned with Feedback: 23.\n> Total: 237.\n> \n> Good job everyone!\n\nThanks so much for your hard work managing this one!\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Thu, 9 Apr 2020 03:45:50 +0200", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2020-03 Now in Progress" }, { "msg_contents": "On Wed, Apr 8, 2020 at 6:45 PM David Fetter <david@fetter.org> wrote:\n> Thanks so much for your hard work managing this one!\n\n+1\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 8 Apr 2020 19:09:58 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2020-03 Now in Progress" }, { "msg_contents": "On 2020-Apr-08, David Steele wrote:\n\n> The 2020-03 Commitfest is officially closed.\n> \n> Final stats are (for entire CF, not just from March 1 this time):\n\nThanks for managing!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Apr 2020 22:12:16 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2020-03 Now in Progress" }, { "msg_contents": "Le jeu. 9 avr.
2020 à 04:12, Alvaro Herrera <alvherre@2ndquadrant.com> a\nécrit :\n\n> On 2020-Apr-08, David Steele wrote:\n>\n> > The 2020-03 Commitfest is officially closed.\n> >\n> > Final stats are (for entire CF, not just from March 1 this time):\n>\n> Thanks for managing!\n>\n\nThanks a lot for the hard work!\n", "msg_date": "Thu, 9 Apr 2020 08:17:53 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2020-03 Now in Progress" } ]
[ { "msg_contents": "Hello,\n\nI was reading through some old threads[1][2][3] while trying to figure\nout how to add a new GUC to control I/O prefetching for new kinds of\nthings[4][5], and enjoyed Simon Riggs' reference to Jules Verne in the\ncontext of RAID spindles.\n\nOn 2 Sep 2015 14:54, \"Andres Freund\" <andres(at)anarazel(dot)de> wrote:\n> > On 2015-09-02 18:06:54 +0200, Tomas Vondra wrote:\n> > Maybe the best thing we can do is just completely abandon the \"number of\n> > spindles\" idea, and just say \"number of I/O requests to prefetch\". Possibly\n> > with an explanation of how to estimate it (devices * queue length).\n>\n> I think that'd be a lot better.\n\n+many, though I doubt I could describe how to estimate it myself,\nconsidering cloud storage, SANs, multi-lane NVMe etc. You basically\nhave to experiment, and like most of our resource consumption limits,\nit's a per-backend limit anyway, so it's pretty complicated, but I\ndon't see how the harmonic series helps anyone.\n\nShould we rename it? Here are my first suggestions:\n\nrandom_page_prefetch_degree\nmaintenance_random_page_prefetch_degree\n\nRationale for this naming pattern:\n* \"random_page\" from \"random_page_cost\"\n* leaves room for a different setting for sequential prefetching\n* \"degree\" conveys the idea without using loaded words like \"queue\"\nthat might imply we know something about the I/O subsystem or that\nit's system-wide like kernel and device queues\n* \"maintenance_\" prefix is like other GUCs that establish (presumably\nlarger) limits for processes working on behalf of many user sessions\n\nWhatever we call it, I don't think it makes sense to try to model the\ndetails of any particular storage system. 
Let's use a simple counter\nof I/Os initiated but not yet known to have completed (for now: it has\ndefinitely completed when the associated pread() complete; perhaps\nsomething involving real async I/O completion notification in later\nreleases).\n\n[1] https://www.postgresql.org/message-id/flat/CAHyXU0yaUG9R_E5%3D1gdXhD-MpWR%3DGr%3D4%3DEHFD_fRid2%2BSCQrqA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/Pine.GSO.4.64.0809220317320.20434%40westnet.com\n[3] https://www.postgresql.org/message-id/flat/FDDBA24E-FF4D-4654-BA75-692B3BA71B97%40enterprisedb.com\n[4] https://www.postgresql.org/message-id/flat/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com\n[5] https://www.postgresql.org/message-id/CA%2BTgmoZP-CTmEPZdmqEOb%2B6t_Tts2nuF7eoqxxvXEHaUoBDmsw%40mail.gmail.com\n\n\n", "msg_date": "Mon, 2 Mar 2020 18:28:41 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "effective_io_concurrency's steampunk spindle maths" }, { "msg_contents": "Hi,\n\nOn 2020-03-02 18:28:41 +1300, Thomas Munro wrote:\n> I was reading through some old threads[1][2][3] while trying to figure\n> out how to add a new GUC to control I/O prefetching for new kinds of\n> things[4][5], and enjoyed Simon Riggs' reference to Jules Verne in the\n> context of RAID spindles.\n>\n> On 2 Sep 2015 14:54, \"Andres Freund\" <andres(at)anarazel(dot)de> wrote:\n> > > On 2015-09-02 18:06:54 +0200, Tomas Vondra wrote:\n> > > Maybe the best thing we can do is just completely abandon the \"number of\n> > > spindles\" idea, and just say \"number of I/O requests to prefetch\". Possibly\n> > > with an explanation of how to estimate it (devices * queue length).\n> >\n> > I think that'd be a lot better.\n>\n> +many, though I doubt I could describe how to estimate it myself,\n> considering cloud storage, SANs, multi-lane NVMe etc. 
You basically\n> have to experiment, and like most of our resource consumption limits,\n> it's a per-backend limit anyway, so it's pretty complicated, but I\n> don't see how the harmonic series helps anyone.\n>\n> Should we rename it? Here are my first suggestions:\n\nWhy rename? It's not like anybody knew how to infer a useful value for\neffective_io_concurrency, given the math computing the actually\neffective prefetch distance... I feel like we'll just unnecessarily\ncause people difficulty by doing so.\n\n\n> random_page_prefetch_degree\n> maintenance_random_page_prefetch_degree\n\nI don't like these names.\n\n\n> Rationale for this naming pattern:\n> * \"random_page\" from \"random_page_cost\"\n\nI don't think we want to corner us into only ever using these for random\nio.\n\n\n> * leaves room for a different setting for sequential prefetching\n\nI think if we want to split those at some point, we ought to split it if\nwe have a good reason, not before. It's not at all clear to me why you'd\nwant a substantially different queue depth for both.\n\n\n> * \"degree\" conveys the idea without using loaded words like \"queue\"\n> that might imply we know something about the I/O subsystem or that\n> it's system-wide like kernel and device queues\n\nWhy is that good? Queue depth is a pretty well established term. You can\nsearch for benchmarks of devices with it, you can correlate with OS\nconfig, etc.\n\n\n> * \"maintenance_\" prefix is like other GUCs that establish (presumably\n> larger) limits for processes working on behalf of many user sessions\n\nThat part makes sense to me.\n\n\n> Whatever we call it, I don't think it makes sense to try to model the\n> details of any particular storage system. 
Let's use a simple counter\n> of I/Os initiated but not yet known to have completed (for now: it has\n> definitely completed when the associated pread() complete; perhaps\n> something involving real async I/O completion notification in later\n> releases).\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 6 Mar 2020 10:05:13 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: effective_io_concurrency's steampunk spindle maths" }, { "msg_contents": "On Fri, Mar 06, 2020 at 10:05:13AM -0800, Andres Freund wrote:\n>Hi,\n>\n>On 2020-03-02 18:28:41 +1300, Thomas Munro wrote:\n>> I was reading through some old threads[1][2][3] while trying to figure\n>> out how to add a new GUC to control I/O prefetching for new kinds of\n>> things[4][5], and enjoyed Simon Riggs' reference to Jules Verne in the\n>> context of RAID spindles.\n>>\n>> On 2 Sep 2015 14:54, \"Andres Freund\" <andres(at)anarazel(dot)de> wrote:\n>> > > On 2015-09-02 18:06:54 +0200, Tomas Vondra wrote:\n>> > > Maybe the best thing we can do is just completely abandon the \"number of\n>> > > spindles\" idea, and just say \"number of I/O requests to prefetch\". Possibly\n>> > > with an explanation of how to estimate it (devices * queue length).\n>> >\n>> > I think that'd be a lot better.\n>>\n>> +many, though I doubt I could describe how to estimate it myself,\n>> considering cloud storage, SANs, multi-lane NVMe etc. You basically\n>> have to experiment, and like most of our resource consumption limits,\n>> it's a per-backend limit anyway, so it's pretty complicated, but I\n>> don't see how the harmonic series helps anyone.\n>>\n>> Should we rename it? Here are my first suggestions:\n>\n>Why rename? It's not like anybody knew how to infer a useful value for\n>effective_io_concurrency, given the math computing the actually\n>effective prefetch distance... 
I feel like we'll just unnecessarily\n>cause people difficulty by doing so.\n>\n\nI think the main issue with keeping the current GUC name is that if you\nhad a value that worked, we'll silently interpret it differently. Which\nis a bit annoying :-(\n\nSo I think we should either rename e_i_c or keep it as is, and then also\nhave a new GUC. And then translate the values between those (but that\nmight be overkill).\n\n>\n>> random_page_prefetch_degree\n>> maintenance_random_page_prefetch_degree\n>\n>I don't like these names.\n>\n\nWhat about these names?\n\n * effective_io_prefetch_distance\n * effective_io_prefetch_queue\n * effective_io_queue_depth\n\n>\n>> Rationale for this naming pattern:\n>> * \"random_page\" from \"random_page_cost\"\n>\n>I don't think we want to corner us into only ever using these for random\n>io.\n>\n\n+1\n\n>\n>> * leaves room for a different setting for sequential prefetching\n>\n>I think if we want to split those at some point, we ought to split it if\n>we have a good reason, not before. It's not at all clear to me why you'd\n>want a substantially different queue depth for both.\n>\n\n+1\n\n>\n>> * \"degree\" conveys the idea without using loaded words like \"queue\"\n>> that might imply we know something about the I/O subsystem or that\n>> it's system-wide like kernel and device queues\n>\n>Why is that good? Queue depth is a pretty well established term. You can\n>search for benchmarks of devices with it, you can correlate with OS\n>config, etc.\n>\n\nI mostly agree. With \"queue depth\" people have a fairly good idea what\nthey're setting, while with \"degree\" that's pretty unlikely I think.\n\n>\n>> * \"maintenance_\" prefix is like other GUCs that establish (presumably\n>> larger) limits for processes working on behalf of many user sessions\n>\n>That part makes sense to me.\n>\n>\n>> Whatever we call it, I don't think it makes sense to try to model the\n>> details of any particular storage system. 
Let's use a simple counter\n>> of I/Os initiated but not yet known to have completed (for now: it has\n>> definitely completed when the associated pread() complete; perhaps\n>> something involving real async I/O completion notification in later\n>> releases).\n>\n>+1\n>\n\nAgreed.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 6 Mar 2020 20:35:46 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: effective_io_concurrency's steampunk spindle maths" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> So I think we should either rename e_i_c or keep it as is, and then also\n> have a new GUC. And then translate the values between those (but that\n> might be overkill).\n\nPlease DON'T try to have two interrelated GUCs for this. We learned\nour lesson about that years ago.\n\nI think dropping the existing GUC is a perfectly sane thing to do,\nif the new definition wouldn't be compatible. In practice few\npeople will notice, because few will have set it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Mar 2020 15:07:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: effective_io_concurrency's steampunk spindle maths" }, { "msg_contents": "Hi,\n\nOn Mon, Mar 02, 2020 at 06:28:41PM +1300, Thomas Munro wrote:\n> Should we rename it? Here are my first suggestions:\n> \n> maintenance_random_page_prefetch_degree\n> \n> Rationale for this naming pattern:\n[...]\n> * \"maintenance_\" prefix is like other GUCs that establish (presumably\n> larger) limits for processes working on behalf of many user sessions\n\nI'm a bit skeptical about this - at least in V12 there's only two GUCs\nwith 'maintenance' in the name: maintenance_work_mem and\nmax_parallel_maintenance_workers. 
Both are used for utility commands and\ndo not apply to regular user queries while (AFAICT) your proposal is not\nlimited to utility commands. So I think if you name it\n'maintenance'-something, people will assume it only involves VACUUM or\nso.\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n", "msg_date": "Fri, 6 Mar 2020 21:52:02 +0100", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: effective_io_concurrency's steampunk spindle maths" }, { "msg_contents": "On Sat, Mar 7, 2020 at 9:52 AM Michael Banck <michael.banck@credativ.de> wrote:\n> On Mon, Mar 02, 2020 at 06:28:41PM +1300, Thomas Munro wrote:\n> > * \"maintenance_\" prefix is like other GUCs that establish (presumably\n> > larger) limits for processes working on behalf of many user sessions\n>\n> I'm a bit skeptical about this - at least in V12 there's only two GUCs\n> with 'maintenance' in the name: maintenance_work_mem and\n> max_parallel_maintenance_workers. Both are used for utility commands and\n> do not apply to regular user queries while (AFAICT) your proposal is not\n> limited to utility commands. So I think if you name it\n> 'maintenance'-something, people will assume it only involves VACUUM or\n> so.\n\nNo, the proposal is not for the \"maintenance\" GUC to affect user\nqueries. The idea is that the \"maintenance\" GUC would be used for WAL\nprefetching during recovery[1], index prefetch during VACUUM[2] and\nprobably some other proposed things that are in development relating\nto background \"undo\" processing.
What these things have in common, as\nAndres first articulated on thread [2] is that they all deal with a\nworkload that is correlated with the activities of multiple user\nbackends running concurrently. That's the basic idea of the WAL\nprefetching patch: even though all backends suffer from I/O stalls due\nto cache misses, usually that's happening concurrently in many\nbackends. A streaming replica that is trying to follow along\nreplaying the write-workload of the primary has to suffer all those\nstalls sequentially, so I'm trying to recreate some I/O parallelism by\nlooking ahead in the WAL. The theory with the two GUCs is that a user\nbackend should be able to use some I/O parallelism, but a\n\"maintenance\" job like the WAL prefetcher should be allowed to use a\nlot more. That's why the existing VACUUM code mentioned in thread [2]\nalready does \"+ 10\".\n\nMaybe \"maintenance\" isn't the best word for this, but that's the idea.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CA%2BTgmoZP-CTmEPZdmqEOb%2B6t_Tts2nuF7eoqxxvXEHaUoBDmsw%40mail.gmail.com\n\n\n", "msg_date": "Sat, 7 Mar 2020 10:06:12 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency's steampunk spindle maths" }, { "msg_contents": "On Sat, Mar 7, 2020 at 8:00 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-03-02 18:28:41 +1300, Thomas Munro wrote:\n> > * leaves room for a different setting for sequential prefetching\n>\n> I think if we want to split those at some point, we ought to split it if\n> we have a good reason, not before. It's not at all clear to me why you'd\n> want a substantially different queue depth for both.\n\nAlright, I retract that part. It's known that at least on some\nsystems you might want to suppress that (due to some kind of bad\ninteraction with kernel readahead heuristics). 
But that isn't really\nan argument for having a different queue size, it's an argument for\nhaving a separate on/off switch.\n\n> > * \"degree\" conveys the idea without using loaded words like \"queue\"\n> > that might imply we know something about the I/O subsystem or that\n> > it's system-wide like kernel and device queues\n>\n> Why is that good? Queue depth is a pretty well established term. You can\n> search for benchmarks of devices with it, you can correlate with OS\n> config, etc.\n\nQueue depth is the standard term for an I/O queue that is shared by\nall users. What we're talking about here is undeniably also a queue\nwith a depth, but it's a limit on the amount of concurrent I/O that\n*each operator in a query* will try to initiate (for example: each\nbitmap heap scan in the query, in future perhaps btree scans and other\nthings), so I was thinking that we might want a different name.\n\nThe more I think about this the more I appreciate the current vague GUC name!\n\n\n", "msg_date": "Sat, 7 Mar 2020 10:26:25 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency's steampunk spindle maths" }, { "msg_contents": "On Sat, Mar 7, 2020 at 8:35 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> I think the main issue with keeping the current GUC name is that if you\n> had a value that worked, we'll silently interpret it differently. Which\n> is a bit annoying :-(\n\nYeah. Perhaps we should just give the formula for translating v12\nsettings to v13 settings in the release notes. If we don't rename the\nGUC, you won't be forced to contemplate this when you upgrade, so the\namount of prefetching we do will go down a bit given the same value.\nThat is indeed what led me to start thinking about what a good new\nname would be. 
Now that I've been talked out of the \"random_page\"\npart, your names look like sensible candidates, but I wonder if there\nis some way to capture that it's \"per operation\"...\n\n\n", "msg_date": "Sat, 7 Mar 2020 10:33:03 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency's steampunk spindle maths" }, { "msg_contents": "\n\n> On Mar 7, 2020, at 00:33, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> That is indeed what led me to start thinking about what a good new\n> name would be. \n\nMySQL has a term io_capacity.\nhttps://dev.mysql.com/doc/refman/8.0/en/innodb-configuring-io-capacity.html \n> The innodb_io_capacity variable defines the overall I/O capacity available to InnoDB. It should be set to approximately the number of I/O operations that the system can perform per second (IOPS). When innodb_io_capacity is set, InnoDB estimates the I/O bandwidth available for background tasks based on the set value.\n> \n\nPerhaps we can have maintenance_io_capacity as well.\n\n", "msg_date": "Sat, 7 Mar 2020 13:54:40 +0300", "msg_from": "Evgeniy Shishkin <itparanoia@gmail.com>", "msg_from_op": false, "msg_subject": "Re: effective_io_concurrency's steampunk spindle maths" }, { "msg_contents": "On Sat, Mar 7, 2020 at 11:54 PM Evgeniy Shishkin <itparanoia@gmail.com> wrote:\n> > On Mar 7, 2020, at 00:33, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > That is indeed what led me to start thinking about what a good new\n> > name would be.\n>\n> MySQL has a term io_capacity.\n> https://dev.mysql.com/doc/refman/8.0/en/innodb-configuring-io-capacity.html\n> > The innodb_io_capacity variable defines the overall I/O capacity available to InnoDB. It should be set to approximately the number of I/O operations that the system can perform per second (IOPS). 
When innodb_io_capacity is set, InnoDB estimates the I/O bandwidth available for background tasks based on the set value.\n> >\n>\n> Perhaps we can have maintenance_io_capacity as well.\n\nThat sounds like total I/O capacity for your system that will be\nshared out for various tasks, which would definitely be nice to have,\nbut here we're talking about a simpler per-operation settings. What\nwe have is a bit like work_mem (a memory limit used for each\nindividual hash, sort, tuplestore, ...), compared to a hypothetical\nwhole-system memory budget (which would definitely also be nice to\nhave).\n\n\n", "msg_date": "Tue, 10 Mar 2020 11:28:23 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency's steampunk spindle maths" }, { "msg_contents": "On Sat, Mar 7, 2020 at 9:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > So I think we should either rename e_i_c or keep it as is, and then also\n> > have a new GUC. And then translate the values between those (but that\n> > might be overkill).\n>\n> Please DON'T try to have two interrelated GUCs for this. We learned\n> our lesson about that years ago.\n\nAck.\n\n> I think dropping the existing GUC is a perfectly sane thing to do,\n> if the new definition wouldn't be compatible. 
In practice few\n> people will notice, because few will have set it.\n\nThat's what I thought too, but if Andres is right that \"it's not like\nanybody knew how to infer a useful value\", I'm wondering it's enough\nif we just provide an explanation of the change in the release notes.\nThe default doesn't change (1 goes to 1), so most people will\nexperience no change, but it you had it set to (say) 42 after careful\nexperimentation, you might like to consider updating it to the result\nof:\n\n select round(sum(42 / n::float)) as new_setting from\ngenerate_series(1, 42) s(n)\n\nHere's a patch set to remove the spindle stuff, add a maintenance\nvariant, and use the maintenance one in\nheap_compute_xid_horizon_for_tuples().", "msg_date": "Tue, 10 Mar 2020 12:20:31 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency's steampunk spindle maths" }, { "msg_contents": "On Tue, Mar 10, 2020 at 12:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here's a patch set to remove the spindle stuff, add a maintenance\n> variant, and use the maintenance one in\n> heap_compute_xid_horizon_for_tuples().\n\nPushed.\n\n\n", "msg_date": "Mon, 16 Mar 2020 17:26:48 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency's steampunk spindle maths" }, { "msg_contents": "On Sun, Mar 15, 2020 at 9:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Mar 10, 2020 at 12:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Here's a patch set to remove the spindle stuff, add a maintenance\n> > variant, and use the maintenance one in\n> > heap_compute_xid_horizon_for_tuples().\n>\n> Pushed.\n\nShouldn't you close out the \"Should we rename\neffective_io_concurrency?\" Postgres 13 open item now?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 12 May 2020 11:57:52 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, 
"msg_subject": "Re: effective_io_concurrency's steampunk spindle maths" }, { "msg_contents": "On Wed, May 13, 2020 at 6:58 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Shouldn't you close out the \"Should we rename\n> effective_io_concurrency?\" Postgres 13 open item now?\n\nYeah, that doesn't really seem worth the churn. I'll move it to the\nresolved list in a day or two if no one shows up to argue for a\nrename.\n\n\n", "msg_date": "Thu, 14 May 2020 23:34:53 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency's steampunk spindle maths" } ]
[ { "msg_contents": "Starting with Python 3.9, the Python headers contain inline functions \nthat fall afoul of our -Wdeclaration-after-statement coding style. In \norder to silence those warnings, I've added some GCC-specific \ncontortions to disable that warning for Python.h only. Clang doesn't \nappear to warn about this at all; maybe it recognizes that this is an \nexternal header file. We could also write a configure check for this if \nwe want to be more flexible.\n\n(Attempts to convince upstream to change the coding style were \nunsuccessful (https://bugs.python.org/issue39615).)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 2 Mar 2020 14:22:01 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Silence compiler warnings with Python 3.9" }, { "msg_contents": "On 2020-03-02 14:22, Peter Eisentraut wrote:\n> Starting with Python 3.9, the Python headers contain inline functions\n> that fall afoul of our -Wdeclaration-after-statement coding style. In\n> order to silence those warnings, I've added some GCC-specific\n> contortions to disable that warning for Python.h only. Clang doesn't\n> appear to warn about this at all; maybe it recognizes that this is an\n> external header file. 
We could also write a configure check for this if\n> we want to be more flexible.\n> \n> (Attempts to convince upstream to change the coding style were\n> unsuccessful (https://bugs.python.org/issue39615).)\n\nMy fix in cpython was accepted after all and the issue is no longer \npresent in the latest alpha (3.9.0a5), so this can be considered closed.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 24 Mar 2020 11:34:55 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Silence compiler warnings with Python 3.9" } ]
[ { "msg_contents": "Hi\n\n\n\nI found a document bug about client authentication using TLS certificate. When clientcert authentication is enabled in pg_hba.conf, libpq does not verify that the common name in certificate matches database username like it is described in the documentation before allowing client connection.\n\nInstead, when sslmode is set to “verify-full”, libpq will verify if the server host name matches the common name in client certificate. When sslmode is set to “verify-ca”, libpq will verify that the client is trustworthy by checking the certificate trust chain up to the root certificate and it does not verify server hostname and certificate common name match in this case.\n\n\n\nThe attached patch corrects the clientcert authentication description in the documentation\n\n\n\ncheers\n\n\n\n\n\n\n\n\n\n\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca", "msg_date": "Mon, 02 Mar 2020 11:06:57 -0800", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": true, "msg_subject": "[PATCH] Documentation bug related to client authentication using\n TLS certificate" }, { "msg_contents": "Hi, Cary.\n\nOn 3/2/20 1:06 PM, Cary Huang wrote:\n> Hi\n> \n> I found a document bug about client authentication using TLS \n> certificate. When clientcert authentication is enabled in pg_hba.conf, \n> libpq does not verify that the *common name*in certificate \n> matches*database username*like it is described in the documentation \n> before allowing client connection.\n> \n> Instead, when sslmode is set to “verify-full”, libpq will verify if the \n> *server host name*matches the *common name *in client certificate.\n\nThis sounds incorrect. 
My understanding is that the *server* host name \nis always matched with the *server* common name.\n\n When\n> sslmode is set to “verify-ca”, libpq will verify that the client is \n> trustworthy by checking the certificate trust chain up to the root \n> certificate and it does not verify *server hostname*and \n> certificate*common name *match in this case.\n\nSimilarly, libpq will verify the *server* is trustworthy by checking the \n*server* certificate up to the root. It does not verify that the host \nname matches the common name in the *server* certificate.\n\nIn all cases, libpq is responsible for verifying the *server* is who it \nclaims to be.\n\n-- Chris\n\n\n", "msg_date": "Mon, 2 Mar 2020 21:23:37 -0600", "msg_from": "Chris Bandy <bandy.chris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Documentation bug related to client authentication using\n TLS certificate" }, { "msg_contents": "Hi Chris\n\n\n\nThank you for your feedback. You are right, libpq verify if the server is trustworthy by checking server certificate and check hostname matches with server common name when sslmode is verify-full, and it is already explained in another documentation page https://www.postgresql.org/docs/current/libpq-ssl.html\n\n\n\nHaving done another investigation, I found that the original documentation (https://www.postgresql.org/docs/current/auth-cert.html) is actually right. The server is indeed also checking the client certificate cn matches the database user name if the authentication method is set to \"cert\" \n\n\nPlease disregard this patch.\n\n\n\nthanks!\n\nCary\n\n\n---- On Mon, 02 Mar 2020 19:23:37 -0800 Chris Bandy <bandy.chris@gmail.com> wrote ----\n\n\nHi, Cary. \n \nOn 3/2/20 1:06 PM, Cary Huang wrote: \n> Hi \n> \n> I found a document bug about client authentication using TLS \n> certificate. 
When clientcert authentication is enabled in pg_hba.conf, \n> libpq does not verify that the *common name*in certificate \n> matches*database username*like it is described in the documentation \n> before allowing client connection. \n> \n> Instead, when sslmode is set to “verify-full”, libpq will verify if the \n> *server host name*matches the *common name *in client certificate. \n \nThis sounds incorrect. My understanding is that the *server* host name \nis always matched with the *server* common name. \n \n When \n> sslmode is set to “verify-ca”, libpq will verify that the client is \n> trustworthy by checking the certificate trust chain up to the root \n> certificate and it does not verify *server hostname*and \n> certificate*common name *match in this case. \n \nSimilarly, libpq will verify the *server* is trustworthy by checking the \n*server* certificate up to the root. It does not verify that the host \nname matches the common name in the *server* certificate. \n \nIn all cases, libpq is responsible for verifying the *server* is who it \nclaims to be. \n \n-- Chris", "msg_date": "Tue, 03 Mar 2020 11:36:05 -0800", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Documentation bug related to client authentication\n using TLS certificate" } ]
[ { "msg_contents": "While looking at Tomas' ALTER TYPE patch, I got annoyed by the fact\nthat all of the backend writes constants of type alignment and type\nstorage values as literal characters, such as 'i' and 'x'. This is\nnot our style for most other \"poor man's enum\" catalog columns, and\nit makes it really hard to grep for relevant code. Hence, attached\nis a proposed patch to invent #define names for those values.\n\nAs is our custom for other similar catalog columns, I only used the\nmacros in C code. There are some references in SQL code too,\nparticularly in the regression tests, but the difficulty of replacing\nsymbolic references in SQL code seems more than it's worth to fix.\n\nOne thing that I'm not totally happy about, as this stands, is that\nwe have to #include \"catalog/pg_type.h\" in various places we did\nnot need to before (although only a fraction of the files I touched\nneed that). Part of the issue is that I used the TYPALIGN_XXX\nmacros in tupmacs.h, but did not #include pg_type.h there because\nI was concerned about macro inclusion bloat. Plausible alternatives\nto the way I did it here include\n\n* just bite the bullet and #include pg_type.h in tupmacs.h;\n\n* keep using the hard-coded values in tupmacs.h (with a comment\nas to why);\n\n* put the TYPALIGN_XXX #defines somewhere else (not clear where,\nbut there might be a case for postgres.h, since so much of the\nbackend has some interest in alignment).\n\nThoughts? 
Anybody want to say that this is more code churn\nthan it's worth?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 02 Mar 2020 17:52:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Symbolic names for the values of typalign and typstorage" }, { "msg_contents": "On 2020-Mar-02, Tom Lane wrote:\n\n> While looking at Tomas' ALTER TYPE patch, I got annoyed by the fact\n> that all of the backend writes constants of type alignment and type\n> storage values as literal characters, such as 'i' and 'x'. This is\n> not our style for most other \"poor man's enum\" catalog columns, and\n> it makes it really hard to grep for relevant code. Hence, attached\n> is a proposed patch to invent #define names for those values.\n\nMakes sense.\n\n> As is our custom for other similar catalog columns, I only used the\n> macros in C code. There are some references in SQL code too,\n> particularly in the regression tests, but the difficulty of replacing\n> symbolic references in SQL code seems more than it's worth to fix.\n\nAgreed.\n\n> One thing that I'm not totally happy about, as this stands, is that\n> we have to #include \"catalog/pg_type.h\" in various places we did\n> not need to before (although only a fraction of the files I touched\n> need that). Part of the issue is that I used the TYPALIGN_XXX\n> macros in tupmacs.h, but did not #include pg_type.h there because\n> I was concerned about macro inclusion bloat. Plausible alternatives\n> to the way I did it here include\n> \n> * just bite the bullet and #include pg_type.h in tupmacs.h;\n\nI like this one the most -- better than the alternative in the patch --\nbecause it's the most honest IMO, except that there seems to be\naltogether too much cruft in pg_type.h that should be elsewhere\n(particularly nodes/nodes.h, which includes a large number of other\nheaders).\n\nIf we think that pg_type.h is the header to handle access to the pg_type\ncatalog, then I would think that the function declarations at the bottom\nshould be in some \"internal\" header file; then we can get rid of most\nthe #includes in pg_type.h.\n\n\n> Thoughts? Anybody want to say that this is more code churn\n> than it's worth?\n\nIt seems worthy cleanup to me.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 2 Mar 2020 22:31:07 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Symbolic names for the values of typalign and typstorage" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Mar-02, Tom Lane wrote:\n>> One thing that I'm not totally happy about, as this stands, is that\n>> we have to #include \"catalog/pg_type.h\" in various places we did\n>> not need to before (although only a fraction of the files I touched\n>> need that).\n\n> If we think that pg_type.h is the header to handle access to the pg_type\n> catalog, then I would think that the function declarations at the bottom\n> should be in some \"internal\" header file; then we can get rid of most\n> the #includes in pg_type.h.\n\nWell, aside from indirect inclusions, pg_type.h also brings in a bunch\nof type OID macros, which I feel we don't want to broadcast everywhere.\n\nOne argument in favor of sticking these new macros somewhere \"more\ncentral\" is that they apply to both pg_type and pg_attribute
Plausible alternatives\n> to the way I did it here include\n> \n> * just bite the bullet and #include pg_type.h in tupmacs.h;\n\nI like this one the most -- better than the alternative in the patch --\nbecause it's the most honest IMO, except that there seems to be\naltogether too much cruft in pg_type.h that should be elsewhere\n(particularly nodes/nodes.h, which includes a large number of other\nheaders).\n\nIf we think that pg_type.h is the header to handle access to the pg_type\ncatalog, then I would think that the function declarations at the bottom\nshould be in some \"internal\" header file; then we can get rid of most\nthe #includes in pg_type.h.\n\n\n> Thoughts? Anybody want to say that this is more code churn\n> than it's worth?\n\nIt seems worthy cleanup to me.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 2 Mar 2020 22:31:07 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Symbolic names for the values of typalign and typstorage" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Mar-02, Tom Lane wrote:\n>> One thing that I'm not totally happy about, as this stands, is that\n>> we have to #include \"catalog/pg_type.h\" in various places we did\n>> not need to before (although only a fraction of the files I touched\n>> need that).\n\n> If we think that pg_type.h is the header to handle access to the pg_type\n> catalog, then I would think that the function declarations at the bottom\n> should be in some \"internal\" header file; then we can get rid of most\n> the #includes in pg_type.h.\n\nWell, aside from indirect inclusions, pg_type.h also brings in a bunch\nof type OID macros, which I feel we don't want to broadcast everywhere.\n\nOne argument in favor of sticking these new macros somewhere \"more\ncentral\" is that they apply to both pg_type and pg_attribute 
(that\nis, attalign and attstorage also use them). That's not a strong\nargument, maybe, but it's something.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 02 Mar 2020 22:22:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Symbolic names for the values of typalign and typstorage" }, { "msg_contents": "I wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> On 2020-Mar-02, Tom Lane wrote:\n>>> One thing that I'm not totally happy about, as this stands, is that\n>>> we have to #include \"catalog/pg_type.h\" in various places we did\n>>> not need to before (although only a fraction of the files I touched\n>>> need that).\n\n>> If we think that pg_type.h is the header to handle access to the pg_type\n>> catalog, then I would think that the function declarations at the bottom\n>> should be in some \"internal\" header file; then we can get rid of most\n>> the #includes in pg_type.h.\n\n> Well, aside from indirect inclusions, pg_type.h also brings in a bunch\n> of type OID macros, which I feel we don't want to broadcast everywhere.\n\nI realized that a possible compromise position is to have tupmacs.h\ninclude pg_type_d.h, not the whole pg_type.h header, thus dodging the\nindirect inclusions. That still brings in the type-OID macros, but\nit's a lot less header scope creep than I was first fearing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 03 Mar 2020 10:11:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Symbolic names for the values of typalign and typstorage" }, { "msg_contents": "On 2020-Mar-03, Tom Lane wrote:\n\n> I realized that a possible compromise position is to have tupmacs.h\n> include pg_type_d.h, not the whole pg_type.h header, thus dodging the\n> indirect inclusions. 
That still brings in the type-OID macros, but\n> it's a lot less header scope creep than I was first fearing.\n\nWFM.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 3 Mar 2020 15:29:26 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Symbolic names for the values of typalign and typstorage" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Mar-03, Tom Lane wrote:\n>> I realized that a possible compromise position is to have tupmacs.h\n>> include pg_type_d.h, not the whole pg_type.h header, thus dodging the\n>> indirect inclusions. 
That still brings in the type-OID macros, but\n>>> it's a lot less header scope creep than I was first fearing.\n\n>> WFM.\n\n> OK, I'll look harder at doing it that way.\n\nYeah, that works out very nicely: there's now only one place besides\ntupmacs.h that needs a new #include.\n\nI did a little more polishing, and consider the attached committable,\nunless anyone has objections.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 03 Mar 2020 16:45:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Symbolic names for the values of typalign and typstorage" }, { "msg_contents": "On Tue, Mar 03, 2020 at 04:45:51PM -0500, Tom Lane wrote:\n> Yeah, that works out very nicely: there's now only one place besides\n> tupmacs.h that needs a new #include.\n> \n> I did a little more polishing, and consider the attached committable,\n> unless anyone has objections.\n\nNice. I have looked at the patch and it seems to me that no spots\nhave been missed.\n--\nMichael", "msg_date": "Wed, 4 Mar 2020 14:25:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Symbolic names for the values of typalign and typstorage" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Mar 03, 2020 at 04:45:51PM -0500, Tom Lane wrote:\n>> Yeah, that works out very nicely: there's now only one place besides\n>> tupmacs.h that needs a new #include.\n>> I did a little more polishing, and consider the attached committable,\n>> unless anyone has objections.\n\n> Nice. 
I have looked at the patch and it seems to me that no spots\n> have been missed.\n\nPushed, thanks for reviewing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Mar 2020 10:35:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Symbolic names for the values of typalign and typstorage" }, { "msg_contents": "Hi,\n\nOn 2020-03-02 17:52:17 -0500, Tom Lane wrote:\n> While looking at Tomas' ALTER TYPE patch, I got annoyed by the fact\n> that all of the backend writes constants of type alignment and type\n> storage values as literal characters, such as 'i' and 'x'. This is\n> not our style for most other \"poor man's enum\" catalog columns, and\n> it makes it really hard to grep for relevant code. Hence, attached\n> is a proposed patch to invent #define names for those values.\n\nIndependent of the patch, why aren't we using proper enums for some of\nthese? There's plenty code that tries to handle all variants for various\nsuch \"poor man's enum\"s - the current code doesn't allow the\ncompiler to help defend against forgotten values. And I think there's\nplenty cases where we *did* forget updating places for new values,\ne.g. around the partitioned table reltype.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 6 Mar 2020 10:18:26 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Symbolic names for the values of typalign and typstorage" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-02 17:52:17 -0500, Tom Lane wrote:\n>> While looking at Tomas' ALTER TYPE patch, I got annoyed by the fact\n>> that all of the backend writes constants of type alignment and type\n>> storage values as literal characters, such as 'i' and 'x'. This is\n>> not our style for most other \"poor man's enum\" catalog columns, and\n>> it makes it really hard to grep for relevant code. 
Hence, attached\n>> is a proposed patch to invent #define names for those values.\n\n> Independent of the patch, why aren't we using proper enums for some of\n> these?\n\nI did think about that, but since the underlying storage needs to be\na \"char\", I'm not sure that using an enum for the values would really\nbe all that helpful. We might get warnings from pickier compilers,\nand we wouldn't necessarily get the warnings we actually want.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Mar 2020 14:10:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Symbolic names for the values of typalign and typstorage" } ]
[ { "msg_contents": "Hi there,\n\nI am potentially interested in the performance farm project listed here:\nhttps://wiki.postgresql.org/wiki/GSoC_2020#Develop_Performance_Farm_Benchmarks_and_Website_.282020.29\n\nI've applied to the pgperffarm mailing list as well, but am waiting for\nmoderator approval so I thought this list would be the best to ask about\nthe performance farm code.\n\nHere are the questions based on the\nhttps://git.postgresql.org/gitweb/?p=pgperffarm.git;a=summary repo:\n\n   - Why is a front end framework used instead of django templates?\n   - Any reason why the server hasn't been containerized?\n   - Django 1.11 will no longer be supported in April 2020, is it time to\n   move to 2.2 LTS? (\n   https://www.djangoproject.com/download/#supported-versions)\n   - What have been the issues with authentication integration to\n   postgresql.org?\n   - Should the client be turned into a package for package managers (e.g.\n   pypi, DPKG, brew, etc.)?\n   - The project description mentions refactoring to Python 3, but it seems\n   like that was completed last GSoC?\n   - Should the performance visualizations be added again?\n\n\nI've also looked at past mailing lists for this project, but am interested\nin hearing current insights from the community:\n\n   -\n   https://www.postgresql-archive.org/GSoC-2019-report-amp-feedback-td6100606.html\n   -\n   https://www.postgresql-archive.org/GSoC-2019-Proposal-Develop-Performance-Farm-Database-and-Website-td6079058.html\n   -\n   https://www.postgresql-archive.org/GSoC-Summery-of-pg-performance-farm-td6034578.html\n   -\n   https://www.postgresql-archive.org/GSOC-18-Performance-Farm-Project-Initialization-Project-td6010380.html\n   -\n   https://www.postgresql-archive.org/GSOC-18-Performance-Farm-Project-td6008120.html\n   - https://www.postgresql-archive.org/performance-test-farm-td4388584.html\n\n\nThanks,\nKalvin\n
", "msg_date": "Tue, 3 Mar 2020 03:07:13 -0700", "msg_from": "Kalvin Eng <kalvin.eng@ualberta.ca>", "msg_from_op": true, "msg_subject": "[GSoC 2020] Questions About Performance Farm Benchmarks and Website" }, { "msg_contents": "Hi Kalvin,\n\nOn Tue, Mar 03, 2020 at 03:07:13AM -0700, Kalvin Eng wrote:\n> Hi there,\n> \n> I am potentially interested in the 
performance farm project listed here:\n> https://wiki.postgresql.org/wiki/GSoC_2020#Develop_Performance_Farm_Benchmarks_and_Website_.282020.29\n> \n> I've applied to the pgperffarm mailing list as well, but am waiting for\n> moderator approval so I thought this list would be the best to ask about\n> the performance farm code.\n> \n> Here are the questions based on the\n> https://git.postgresql.org/gitweb/?p=pgperffarm.git;a=summary repo:\n> \n> - Why is a front end framework used instead of django templates?\n\nI don't have a good answer for this, primarily because my knowledge on\nthe difference is weak...\n\n> - Any reason why the server hasn't been containerized?\n\nSimply because no effort has been put into it yet. Are you thinking for\nease of demoing or evaluating?\n\n> - Django 1.11 will no longer be supported in April 2020, is it time to\n> move to 2.2 LTS? (\n> https://www.djangoproject.com/download/#supported-versions)\n\nWe want to match the same version the community infrastructure uses, so\nyes, if that's the version they will be on.\n\n> - What have been the issues with authentication integration to\n> postgresql.org?\n\nThere is a custom authentication module that doesn't work outside of the\ncommunity infrastructure, and this project has been developed outside of\nthe community infrastructure. 
We haven't come up with a way to bridge\nthat gap yet.\n\n> - Should the client be turned into a package for package managers (e.g.\n> pypi, DPKG, brew, etc.)?\n\nI think that would be a plus.\n\n> - The project description mentions refactoring to Python 3, but it seems\n> like that was completed last GSoC?\n\nYeah, I think that's been squared away...\n\n> - Should the performance visualizations be added again?\n\nYes, that would be good to have.\n\n> I've also looked at past mailing lists for this project, but am interested\n> in hearing current insights from the community:\n> \n> -\n> https://www.postgresql-archive.org/GSoC-2019-report-amp-feedback-td6100606.html\n> -\n> https://www.postgresql-archive.org/GSoC-2019-Proposal-Develop-Performance-Farm-Database-and-Website-td6079058.html\n> -\n> https://www.postgresql-archive.org/GSoC-Summery-of-pg-performance-farm-td6034578.html\n> -\n> https://www.postgresql-archive.org/GSOC-18-Performance-Farm-Project-Initialization-Project-td6010380.html\n> -\n> https://www.postgresql-archive.org/GSOC-18-Performance-Farm-Project-td6008120.html\n> - https://www.postgresql-archive.org/performance-test-farm-td4388584.html\n\nRegards,\nMark\n\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n", "msg_date": "Tue, 3 Mar 2020 08:54:29 -0800", "msg_from": "Mark Wong <mark@2ndQuadrant.com>", "msg_from_op": false, "msg_subject": "Re: [GSoC 2020] Questions About Performance Farm Benchmarks and\n Website" } ]
[ { "msg_contents": "While looking at the proposed ALTER TYPE patch, I got annoyed\nabout the amount of cruft that exists in typecmds.c to deal with\nancient, non-type-safe ways of declaring type I/O functions.\nThe CREATE TYPE reference pages explains this well enough:\n\n Before PostgreSQL version 8.2, the shell-type creation syntax CREATE\n TYPE name did not exist. The way to create a new base type was to\n create its input function first. In this approach, PostgreSQL will\n first see the name of the new data type as the return type of the\n input function. The shell type is implicitly created in this\n situation, and then it can be referenced in the definitions of the\n remaining I/O functions. This approach still works, but is deprecated\n and might be disallowed in some future release. Also, to avoid\n accidentally cluttering the catalogs with shell types as a result of\n simple typos in function definitions, a shell type will only be made\n this way when the input function is written in C.\n\n In PostgreSQL versions before 7.3, it was customary to avoid creating\n a shell type at all, by replacing the functions' forward references to\n the type name with the placeholder pseudo-type opaque. The cstring\n arguments and results also had to be declared as opaque before 7.3. To\n support loading of old dump files, CREATE TYPE will accept I/O\n functions declared using opaque, but it will issue a notice and change\n the function declarations to use the correct types.\n\nIt might be too soon to drop the automatic-shell-type hack, but I think\na strong case can be made for dropping the automatic conversion of I/O\nfunctions declared with OPAQUE. 7.3 was released in 2002, so any code\nfollowing the old way is now old enough to vote. 
Does anyone really think\nthat a C function written against 7.2 or earlier would work in a modern\nserver without bigger changes than that?\n\nThe other remaining uses of OPAQUE are for old-style declarations of\ntrigger functions and language handler functions. Again it seems very\nunlikely that anyone still has code following the old style, or that\nthis'd be their biggest portability issue if they did.\n\nIn short, I propose ripping out OPAQUE entirely.\n\nI wouldn't lobby too hard against removing the auto-shell-type hack\neither, but it's not actually type-unsafe and it doesn't require\nvery much code to support, so the case for removing it seems a lot\nweaker than that for getting rid of OPAQUE.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 03 Mar 2020 12:10:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Is it time to retire type \"opaque\"?" }, { "msg_contents": "I wrote:\n> In short, I propose ripping out OPAQUE entirely.\n\nLike so...\n\nI separated out the changes in CREATE TYPE because that's a bit\nmore complicated than the rest. The behavior around shell types\ngets somewhat simpler, and I moved the I/O function result type\nchecks into the lookup functions to make them all consistent.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 03 Mar 2020 18:39:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Is it time to retire type \"opaque\"?" }, { "msg_contents": "Hi,\n\nOn 2020-03-03 12:10:15 -0500, Tom Lane wrote:\n> In short, I propose ripping out OPAQUE entirely.\n\n+1\n\n\n> I wouldn't lobby too hard against removing the auto-shell-type hack\n> either, but it's not actually type-unsafe and it doesn't require\n> very much code to support, so the case for removing it seems a lot\n> weaker than that for getting rid of OPAQUE.\n\nI'm mildly in favor of ripping those out too. 
I can't really\nimagine there's a lot of users left, and they shouldn't be hard to\nmigrate. I don't think it'll get meaningfully fewer / easier if we just\nwait another two years - seems likely that auto shell type using code\nisn't touched much.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 6 Mar 2020 10:12:15 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is it time to retire type \"opaque\"?" } ]
[ { "msg_contents": "I have a few questions about setting acl on SQL level.\n\nIs it safe to do something like\n UPDATE pg_class SET relacl = $1 WHERE oid = $2;\n?\n\nI don't think it is because ExecGrant_* call updateAclDependencies after\nthey do the update and my own update would not do that. But is it safe\nto do my update if I'm not touching anything in pg_global?\n\nIf it is not safe, is there any point in keeping around makeaclitem()?\nI see no use for it except for manually setting an acl column like\nabove, and it gives people a false sense of security (or at least it did\nfor me).\n\nAnd finally, would there be any interest in a function like\naclset(\"char\", oid, aclitem[]) and does this properly?\n\nMy use case is I have a simple view and a simple function that both\nprovide a wrapper over a table, and I want to have an event trigger that\nupdates their acls when the user does a GRANT/REVOKE on the base table.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 3 Mar 2020 18:48:10 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Setting ACL" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> I have a few questions about setting acl on SQL level.\n> Is it safe to do something like\n> UPDATE pg_class SET relacl = $1 WHERE oid = $2;\n> ?\n\n> I don't think it is because ExecGrant_* call updateAclDependencies after\n> they do the update and my own update would not do that. But is it safe\n> to do my update if I'm not touching anything in pg_global?\n\nWell, it'll work, but the system won't know about the role references\nin this ACL item, so for instance dropping the role wouldn't make the\nACL go away. 
Which might cause you dump/reload issues later.\n\n> And finally, would there be any interest in a function like\n> aclset(\"char\", oid, aclitem[]) and does this properly?\n\nNot really, when GRANT is already there ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 03 Mar 2020 13:02:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Setting ACL" }, { "msg_contents": "On 03/03/2020 19:02, Tom Lane wrote:\n> Vik Fearing <vik@postgresfriends.org> writes:\n>> I have a few questions about setting acl on SQL level.\n>> Is it safe to do something like\n>> UPDATE pg_class SET relacl = $1 WHERE oid = $2;\n>> ?\n> \n>> I don't think it is because ExecGrant_* call updateAclDependencies after\n>> they do the update and my own update would not do that. But is it safe\n>> to do my update if I'm not touching anything in pg_global?\n> \n> Well, it'll work, but the system won't know about the role references\n> in this ACL item, so for instance dropping the role wouldn't make the> ACL go away. Which might cause you dump/reload issues later.\n\nOk, so not safe. Should we remove makeaclitem() then?\n\n>> And finally, would there be any interest in a function like\n>> aclset(\"char\", oid, aclitem[]) and does this properly?\n> \n> Not really, when GRANT is already there ...\n\nSo I have to manually do a diff of the two acls and generate\nGRANT/REVOKE statements? That's not encouraging. :(\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 3 Mar 2020 19:13:04 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Setting ACL" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> Ok, so not safe. 
Should we remove makeaclitem() then?\n\nWell, I wouldn't recommend poking values into an ACL with it,\nbut it seems like it has potential use in queries too, say\n\nselect * from pg_class\nwhere makeaclitem('joe'::regrole, 'bob'::regrole, 'select', false) = any(relacl);\n\nHowever, that certainly leaves a lot to be desired because\nin practical cases you wouldn't only be interested in\nexact matches. I suppose the has_foo_privilege series of\nfunctions would cover some of that territory though.\n\n> So I have to manually do a diff of the two acls and generate\n> GRANT/REVOKE statements? That's not encouraging. :(\n\nThe case of just blindly copying one object's ACL to another\nobject seems kind of limited. I could see providing some more\ngeneral facility for that sort of operation, but I'm not quite\nsure what it should look like.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 03 Mar 2020 13:25:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Setting ACL" }, { "msg_contents": "Greetings,\n\n* Vik Fearing (vik@postgresfriends.org) wrote:\n> So I have to manually do a diff of the two acls and generate\n> GRANT/REVOKE statements? That's not encouraging. :(\n\nNot sure if it's helpful to you, but pg_dump has code that generates SQL\nto do more-or-less exactly this.\n\nThanks,\n\nStephen", "msg_date": "Wed, 4 Mar 2020 15:20:10 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Setting ACL" } ]
[ { "msg_contents": "Anybody know how to add 14 to the \"Target version\" dropdown in the CF app?\n\nI haven't needed it yet but I'd like it to be there when I do.\n\nThanks!\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 3 Mar 2020 13:10:26 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "PG14 target version?" }, { "msg_contents": "On Tue, Mar 03, 2020 at 01:10:26PM -0500, David Steele wrote:\n> Anybody know how to add 14 to the \"Target version\" dropdown in the CF app?\n\nThe only person knowing that stuff is I think Magnus. I don't have an\naccess to that.\n\n> I haven't needed it yet but I'd like it to be there when I do.\n\nThat would be nice to have now, patches are going to be moved to the\nnext CF sooner than later.\n--\nMichael", "msg_date": "Wed, 4 Mar 2020 14:28:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PG14 target version?" }, { "msg_contents": "> On 4 Mar 2020, at 06:28, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Mar 03, 2020 at 01:10:26PM -0500, David Steele wrote:\n>> Anybody know how to add 14 to the \"Target version\" dropdown in the CF app?\n> \n> The only person knowing that stuff is I think Magnus. I don't have an\n> access to that.\n\nMagnus, or someone else on the infra team since it requires an update to the\ndatabase. Looping in -www for visibility.\n\ncheers ./daniel\n\n\n", "msg_date": "Wed, 4 Mar 2020 10:14:37 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: PG14 target version?" 
}, { "msg_contents": "On Wed, Mar 4, 2020 at 9:14 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 4 Mar 2020, at 06:28, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Tue, Mar 03, 2020 at 01:10:26PM -0500, David Steele wrote:\n> >> Anybody know how to add 14 to the \"Target version\" dropdown in the CF\n> app?\n> >\n> > The only person knowing that stuff is I think Magnus. I don't have an\n> > access to that.\n>\n> Magnus, or someone else on the infra team since it requires an update to\n> the\n> database. Looping in -www for visibility.\n>\n\nHmm, I just tried to login to the admin site to do this and got a 500\nerror. Unfortunately I'm now off to take a number of meetings, so can't\nlook more closely myself.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Wed, Mar 4, 2020 at 9:14 AM Daniel Gustafsson <daniel@yesql.se> wrote:> On 4 Mar 2020, at 06:28, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Mar 03, 2020 at 01:10:26PM -0500, David Steele wrote:\n>> Anybody know how to add 14 to the \"Target version\" dropdown in the CF app?\n> \n> The only person knowing that stuff is I think Magnus.  I don't have an\n> access to that.\n\nMagnus, or someone else on the infra team since it requires an update to the\ndatabase. Looping in -www for visibility.Hmm, I just tried to login to the admin site to do this and got a 500 error. Unfortunately I'm now off to take a number of meetings, so can't look more closely myself. -- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEnterpriseDB UK: http://www.enterprisedb.comThe Enterprise PostgreSQL Company", "msg_date": "Wed, 4 Mar 2020 09:29:33 +0000", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: PG14 target version?" 
}, { "msg_contents": "On 2020-Mar-03, David Steele wrote:\n\n> Anybody know how to add 14 to the \"Target version\" dropdown in the CF app?\n> \n> I haven't needed it yet but I'd like it to be there when I do.\n\nDone.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 4 Mar 2020 12:43:47 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PG14 target version?" }, { "msg_contents": "On 3/4/20 10:43 AM, Alvaro Herrera wrote:\n> On 2020-Mar-03, David Steele wrote:\n> \n>> Anybody know how to add 14 to the \"Target version\" dropdown in the CF app?\n>>\n>> I haven't needed it yet but I'd like it to be there when I do.\n> \n> Done.\n\nI see it, thanks!\n\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 4 Mar 2020 10:48:23 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: PG14 target version?" } ]
[ { "msg_contents": "On 3/3/20, 12:24 PM, \"Alvaro Herrera\" <alvherre@2ndquadrant.com> wrote:\r\n> IMO: is_dir should be there (and subdirs should be listed), but\r\n> parent_dir should not appear. Also, the \"path\" should show the complete\r\n> pathname, including containing dirs, starting from whatever the \"root\"\r\n> is for the operation.\r\n>\r\n> So for the example in the initial email, it would look like\r\n>\r\n> path isdir\r\n> pgsql_tmp11025.0.sharedfileset/ t\r\n> pgsql_tmp11025.0.sharedfileset/0.0 f\r\n> pgsql_tmp11025.0.sharedfileset/1.0 f\r\n>\r\n> plus additional columns, same as pg_ls_waldir et al.\r\n>\r\n> I'd rather not have the code assume that there's a single level of\r\n> subdirs, or assuming that an entry in the subdir cannot itself be a dir;\r\n> that might end up hiding files for no good reason.\r\n\r\n+1\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 3 Mar 2020 20:35:48 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH v1] pg_ls_tmpdir to show directories" } ]
[ { "msg_contents": "Hello. (added Tom in Cc:)\n\nIf I build the past versions from 9.4 to 9.6 with GCC8, I find it\nreally annoying to see the build screen filled with a massive number of\nwarnings of format-truncation, stringop-truncation and\nformat-overflow.\n\nJust applying the commit 416e3e318c as-is silences the first two.\n\nThe last one is silenced by applying 5d923eb29b.\n\nThe commit message is saying that it is back-patched back at least to\n9.4, but it seems that the versions from 9.4 to 9.6 haven't got the\npatches.\n\nTom, would you back-patch the two commits to 9.4 through 9.6?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 04 Mar 2020 08:12:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Back-patching -Wno-format-truncation." }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> If I build the past versions from 9.4 to 9.6 with GCC8, I find it\n> really annoying to see the build screen filled with a massive number of\n> warnings of format-truncation, stringop-truncation and\n> format-overflow.\n\n> Just applying the commit 416e3e318c as-is silences the first two.\n\n> The last one is silenced by applying 5d923eb29b.\n\n> The commit message is saying that it is back-patched back at least to\n> 9.4, but it seems that the versions from 9.4 to 9.6 haven't got the\n> patches.\n\n> Tom, would you back-patch the two commits to 9.4 through 9.6?\n\nUh ... 
it sure looks to me like they were back-patched as advertised.\nDo you not have these back-branch commits?\n\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nBranch: master Release: REL_11_BR [e71658523] 2018-06-16 15:34:07 -0400\nBranch: REL_10_STABLE Release: REL_10_5 [416e3e318] 2018-06-16 15:34:07 -0400\nBranch: REL9_6_STABLE Release: REL9_6_10 [119290be6] 2018-06-16 15:34:07 -0400\nBranch: REL9_5_STABLE Release: REL9_5_14 [14b69a532] 2018-06-16 15:34:07 -0400\nBranch: REL9_4_STABLE Release: REL9_4_19 [817d605e4] 2018-06-16 15:34:07 -0400\nBranch: REL9_3_STABLE Release: REL9_3_24 [ec5547e56] 2018-06-16 15:34:07 -0400\n\n Use -Wno-format-truncation and -Wno-stringop-truncation, if available.\n\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nBranch: master Release: REL_11_BR [5d923eb29] 2018-06-16 14:45:47 -0400\nBranch: REL_10_STABLE Release: REL_10_5 [189332615] 2018-06-16 14:45:47 -0400\nBranch: REL9_6_STABLE Release: REL9_6_10 [8870e2978] 2018-06-16 14:45:47 -0400\nBranch: REL9_5_STABLE Release: REL9_5_14 [f3be5d3e7] 2018-06-16 14:45:47 -0400\nBranch: REL9_4_STABLE Release: REL9_4_19 [fd079dd09] 2018-06-16 14:45:47 -0400\nBranch: REL9_3_STABLE Release: REL9_3_24 [3243cbc08] 2018-06-16 14:45:47 -0400\n\n Use snprintf not sprintf in pg_waldump's timestamptz_to_str.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 03 Mar 2020 18:44:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Back-patching -Wno-format-truncation." }, { "msg_contents": "At Tue, 03 Mar 2020 18:44:16 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Uh ... 
it sure looks to me like they were back-patched as advertised.\n> Do you not have these back-branch commits?\n> \n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Branch: master Release: REL_11_BR [e71658523] 2018-06-16 15:34:07 -0400\n> Branch: REL_10_STABLE Release: REL_10_5 [416e3e318] 2018-06-16 15:34:07 -0400\n> Branch: REL9_6_STABLE Release: REL9_6_10 [119290be6] 2018-06-16 15:34:07 -0400\n> Branch: REL9_5_STABLE Release: REL9_5_14 [14b69a532] 2018-06-16 15:34:07 -0400\n> Branch: REL9_4_STABLE Release: REL9_4_19 [817d605e4] 2018-06-16 15:34:07 -0400\n> Branch: REL9_3_STABLE Release: REL9_3_24 [ec5547e56] 2018-06-16 15:34:07 -0400\n> \n> Use -Wno-format-truncation and -Wno-stringop-truncation, if available.\n> \n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Branch: master Release: REL_11_BR [5d923eb29] 2018-06-16 14:45:47 -0400\n> Branch: REL_10_STABLE Release: REL_10_5 [189332615] 2018-06-16 14:45:47 -0400\n> Branch: REL9_6_STABLE Release: REL9_6_10 [8870e2978] 2018-06-16 14:45:47 -0400\n> Branch: REL9_5_STABLE Release: REL9_5_14 [f3be5d3e7] 2018-06-16 14:45:47 -0400\n> Branch: REL9_4_STABLE Release: REL9_4_19 [fd079dd09] 2018-06-16 14:45:47 -0400\n> Branch: REL9_3_STABLE Release: REL9_3_24 [3243cbc08] 2018-06-16 14:45:47 -0400\n> \n> Use snprintf not sprintf in pg_waldump's timestamptz_to_str.\n\nMmm... I should have created my working trees from stale tracking\nbranches. I confirmed that they are surely there. Sorry for the bogus\nreport and thanks for the reply.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 04 Mar 2020 09:12:18 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Back-patching -Wno-format-truncation." } ]
[ { "msg_contents": "Hackers,\n\nThe current code in checksum_impl.h does not play nice with -Wconversion \non gcc:\n\nwarning: conversion to 'uint16 {aka short unsigned int}' from 'uint32 \n{aka unsigned int}' may alter its value [-Wconversion]\n return (checksum % 65535) + 1;\n ~~~~~~~~~~~~~~~~~~~^~~\n\nIt seems like an explicit cast to uint16 would be better?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net", "msg_date": "Tue, 3 Mar 2020 18:37:36 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Cast to uint16 in pg_checksum_page()" }, { "msg_contents": "On Tue, Mar 03, 2020 at 06:37:36PM -0500, David Steele wrote:\n> Hackers,\n> \n> The current code in checksum_impl.h does not play nice with -Wconversion on\n> gcc:\n> \n> warning: conversion to 'uint16 {aka short unsigned int}' from 'uint32 {aka\n> unsigned int}' may alter its value [-Wconversion]\n> return (checksum % 65535) + 1;\n> ~~~~~~~~~~~~~~~~~~~^~~\n> \n> It seems like an explicit cast to uint16 would be better?\n\nAttempting to compile the backend code with -Wconversion leads to many\nwarnings, still there has been at least one fix in the past to ease\nthe use of the headers in this case, with b5b3229 (this made the code\nmore readable). Should we really care about this case?\n--\nMichael", "msg_date": "Wed, 4 Mar 2020 13:41:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Cast to uint16 in pg_checksum_page()" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Mar 03, 2020 at 06:37:36PM -0500, David Steele wrote:\n>> It seems like an explicit cast to uint16 would be better?\n\n> Attempting to compile the backend code with -Wconversion leads to many\n> warnings, still there has been at least one fix in the past to ease\n> the use of the headers in this case, with b5b3229 (this made the code\n> more readable). Should we really care about this case?\n\nPer the commit message for b5b3229, it might be worth getting rid of\nsuch messages for code that's exposed in header files, even if removing\nall of those warnings would be too much work. Perhaps David's use-case\nis an extension that's using that header?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Mar 2020 01:05:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cast to uint16 in pg_checksum_page()" }, { "msg_contents": "On 3/4/20 1:05 AM, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> On Tue, Mar 03, 2020 at 06:37:36PM -0500, David Steele wrote:\n>>> It seems like an explicit cast to uint16 would be better?\n> \n>> Attempting to compile the backend code with -Wconversion leads to many\n>> warnings, still there has been at least one fix in the past to ease\n>> the use of the headers in this case, with b5b3229 (this made the code\n>> more readable). Should we really care about this case?\n> \n> Per the commit message for b5b3229, it might be worth getting rid of\n> such messages for code that's exposed in header files, even if removing\n> all of those warnings would be too much work. Perhaps David's use-case\n> is an extension that's using that header?\n\nYes, this is being included in an external project. Previously we have \nused a highly marked-up version but we are now trying to pull in the \nheader more or less verbatim.\n\nSince this header is specifically designated as something external \nprojects may want to use I think it makes sense to fix the warning.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 4 Mar 2020 07:02:43 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": true, "msg_subject": "Re: Cast to uint16 in pg_checksum_page()" }, { "msg_contents": "On Wed, Mar 04, 2020 at 07:02:43AM -0500, David Steele wrote:\n> Yes, this is being included in an external project. Previously we have used\n> a highly marked-up version but we are now trying to pull in the header more\n> or less verbatim.\n> \n> Since this header is specifically designated as something external projects\n> may want to use I think it makes sense to fix the warning.\n\nThis sounds like a sensible argument, similar to the ones raised on\nthe other thread, so no objections from me to improve things here. I\ncan look at that tomorrow, except if somebody else beats me to it.\n--\nMichael", "msg_date": "Wed, 4 Mar 2020 21:52:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Cast to uint16 in pg_checksum_page()" }, { "msg_contents": "On Wed, Mar 04, 2020 at 09:52:08PM +0900, Michael Paquier wrote:\n> This sounds like a sensible argument, similar to the ones raised on\n> the other thread, so no objections from me to improve things here. I\n> can look at that tomorrow, except if somebody else beats me to it.\n\nAnd done.\n--\nMichael", "msg_date": "Thu, 5 Mar 2020 14:13:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Cast to uint16 in pg_checksum_page()" } ]
[ { "msg_contents": "Hi all,\n\nAll the tools mentioned in $subject have been switched recently to use\nthe central logging infrastructure, which means that they have gained\ncoloring output. However we (mostly I) forgot to update the docs.\n\nAttached is a patch to fix this issue. Please let me know if there\nare comments and/or objections.\n\nThanks,\n--\nMichael", "msg_date": "Wed, 4 Mar 2020 16:54:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "PG_COLOR not mentioned in docs of vacuumlo, oid2name and pgbench" }, { "msg_contents": "> On 4 Mar 2020, at 08:54, Michael Paquier <michael@paquier.xyz> wrote:\n\n> All the tools mentioned in $subject have been switched recently to use\n> the central logging infrastructure, which means that they have gained\n> coloring output. However we (mostly I) forgot to update the docs.\n\n+1 on updating the docs with PG_COLOR for these.\n\n> Attached is a patch to fix this issue. Please let me know if there\n> are comments and/or objections.\n\n+ color in diagnostics messages. Possible values are\n+ <literal>always</literal>, <literal>auto</literal>,\n+ <literal>never</literal>.\n\nNot being a native english speaker, I might have it backwards, but I find lists\nof values in a sentence like this to be easier to read when the final value is\nseparated by a conjunction like:\n\n\t<item 1>, <item 2>, .. , <item n-1> and <item n>\n\ncheers ./daniel\n\n", "msg_date": "Wed, 4 Mar 2020 10:12:23 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: PG_COLOR not mentioned in docs of vacuumlo, oid2name and pgbench" }, { "msg_contents": "On Wed, Mar 4, 2020 at 8:54 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> Attached is a patch to fix this issue. Please let me know if there\n> are comments and/or objections.\n>\n\nI think there are a couple tools missing: pg_archivecleanup, pg_ctl,\npg_test_fsync and pg_upgrade. pg_regress also, but there is nothing to do\nin the documentation with it.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Wed, Mar 4, 2020 at 8:54 AM Michael Paquier <michael@paquier.xyz> wrote:\nAttached is a patch to fix this issue.  Please let me know if there\nare comments and/or objections.I think there are a couple tools missing: pg_archivecleanup, pg_ctl, pg_test_fsync and pg_upgrade. pg_regress also, but there is nothing to do in the documentation with it.Regards,Juan José Santamaría Flecha", "msg_date": "Wed, 4 Mar 2020 10:22:26 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG_COLOR not mentioned in docs of vacuumlo, oid2name and pgbench" }, { "msg_contents": "Bonjour Michaël,\n\n> All the tools mentioned in $subject have been switched recently to use\n> the central logging infrastructure, which means that they have gained\n> coloring output. However we (mostly I) forgot to update the docs.\n>\n> Attached is a patch to fix this issue. Please let me know if there\n> are comments and/or objections.\n\nNo objection. I did not know there was such a thing…\n\nMaybe a more detailed explanation about PG_COLOR could be stored \nsomewhere, and all affected tools could link to it? Or not.\n\nFor \"pgbench\", you could also add the standard sentence that it uses libpq \nenvironment variables, as it is also missing?\n\n-- \nFabien.", "msg_date": "Wed, 4 Mar 2020 11:31:27 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: PG_COLOR not mentioned in docs of vacuumlo, oid2name and\n pgbench" }, { "msg_contents": "On Wed, Mar 04, 2020 at 10:12:23AM +0100, Daniel Gustafsson wrote:\n> + color in diagnostics messages. Possible values are\n> + <literal>always</literal>, <literal>auto</literal>,\n> + <literal>never</literal>.\n> \n> Not being a native english speaker, I might have it backwards, but I find lists\n> of values in a sentence like this to be easier to read when the final value is\n> separated by a conjunction like:\n> \n> \t<item 1>, <item 2>, .. , <item n-1> and <item n>\n\nPoint received. Your suggestion is more natural to me as well. Now,\nall the existing docs don't follow that style so I chose consistency.\n--\nMichael", "msg_date": "Wed, 4 Mar 2020 21:55:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: PG_COLOR not mentioned in docs of vacuumlo, oid2name and pgbench" }, { "msg_contents": "On Wed, Mar 04, 2020 at 10:22:26AM +0100, Juan José Santamaría Flecha wrote:\n> I think there are a couple tools missing: pg_archivecleanup, pg_ctl,\n> pg_test_fsync and pg_upgrade. pg_regress also, but there is nothing to do\n> in the documentation with it.\n\nIndeed, true for pg_archivecleanup and pg_test_fsync, but not for\npg_ctl and pg_upgrade. The funny part about pg_ctl is that the\ninitialization is done for nothing, because nothing is actually logged\nwith the APIs of logging.c. pg_upgrade uses its own logging APIs,\nwhich have nothing to do with logging.c.\n--\nMichael", "msg_date": "Wed, 4 Mar 2020 22:01:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: PG_COLOR not mentioned in docs of vacuumlo, oid2name and pgbench" }, { "msg_contents": "On Wed, Mar 04, 2020 at 11:31:27AM +0100, Fabien COELHO wrote:\n> No objection. I did not know there was such a thing…\n> \n> Maybe a more detailed explanation about PG_COLOR could be stored somewhere,\n> and all affected tools could link to it? Or not.\n\nOne argument against that position is that each tool may just handle a\nsubset of the full set available, and that some of the subsets may\npartially intersect. Fun.\n\n> For \"pgbench\", you could also add the standard sentence that it uses libpq\n> environment variables, as it is also missing?\n\nYeah, that's true. Let's fix this part while on it.\n--\nMichael", "msg_date": "Wed, 4 Mar 2020 22:05:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: PG_COLOR not mentioned in docs of vacuumlo, oid2name and pgbench" }, { "msg_contents": "On Wed, Mar 04, 2020 at 10:05:30PM +0900, Michael Paquier wrote:\n> On Wed, Mar 04, 2020 at 11:31:27AM +0100, Fabien COELHO wrote:\n>> For \"pgbench\", you could also add the standard sentence that it uses libpq\n>> environment variables, as it is also missing?\n> \n> Yeah, that's true. Let's fix this part while on it.\n\nSo, combining the feedback from Fabien, Juan and Daniel I am finishing\nwith the attached. Any thoughts?\n--\nMichael", "msg_date": "Thu, 5 Mar 2020 16:26:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: PG_COLOR not mentioned in docs of vacuumlo, oid2name and pgbench" }, { "msg_contents": "> On 5 Mar 2020, at 08:26, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Mar 04, 2020 at 10:05:30PM +0900, Michael Paquier wrote:\n>> On Wed, Mar 04, 2020 at 11:31:27AM +0100, Fabien COELHO wrote:\n>>> For \"pgbench\", you could also add the standard sentence that it uses libpq\n>>> environment variables, as it is also missing?\n>> \n>> Yeah, that's true. Let's fix this part while on it.\n> \n> So, combining the feedback from Fabien, Juan and Daniel I am finishing\n> with the attached. Any thoughts?\n\nLGTM\n\ncheers ./daniel\n\n", "msg_date": "Thu, 5 Mar 2020 09:40:07 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: PG_COLOR not mentioned in docs of vacuumlo, oid2name and pgbench" }, { "msg_contents": "On Thu, Mar 5, 2020 at 9:40 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 5 Mar 2020, at 08:26, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > So, combining the feedback from Fabien, Juan and Daniel I am finishing\n> > with the attached. Any thoughts?\n>\n> LGTM\n\n\n+1\n\nRegards\n\nOn Thu, Mar 5, 2020 at 9:40 AM Daniel Gustafsson <daniel@yesql.se> wrote:> On 5 Mar 2020, at 08:26, Michael Paquier <michael@paquier.xyz> wrote:> \n> So, combining the feedback from Fabien, Juan and Daniel I am finishing\n> with the attached.  Any thoughts?\n\nLGTM+1Regards", "msg_date": "Thu, 5 Mar 2020 10:09:31 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG_COLOR not mentioned in docs of vacuumlo, oid2name and pgbench" }, { "msg_contents": "On Thu, Mar 05, 2020 at 10:09:31AM +0100, Juan José Santamaría Flecha wrote:\n> On Thu, Mar 5, 2020 at 9:40 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> LGTM\n> \n> +1\n\nThanks to both of you for the reviews. Please note that I will\nmention the business with pg_ctl and logging in a new thread and\nremove the diff of pg_ctl.c from the previous patch, and that the doc\nchanges could be backpatched down to 12 for the relevant parts. The\ndocumentation for PG_COLORS is still missing, but that's not new and I\nthink that we had better handle that case separately by creating a new\nsection in the docs. For now, let's wait a couple of days and see if\nothers have more thoughts to share about the doc patch of this thread.\n--\nMichael", "msg_date": "Sat, 7 Mar 2020 10:09:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: PG_COLOR not mentioned in docs of vacuumlo, oid2name and pgbench" }, { "msg_contents": "On Sat, Mar 07, 2020 at 10:09:23AM +0900, Michael Paquier wrote:\n> Thanks to both of you for the reviews. Please note that I will\n> mention the business with pg_ctl and logging in a new thread and\n> remove the diff of pg_ctl.c from the previous patch, and that the doc\n> changes could be backpatched down to 12 for the relevant parts. The\n> documentation for PG_COLORS is still missing, but that's not new and I\n> think that we had better handle that case separately by creating a new\n> section in the docs. For now, let's wait a couple of days and see if\n> others have more thoughts to share about the doc patch of this thread.\n\nHearing nothing, done. The part about pgbench with PGHOST, PGUSER and\nPGPORT could go further down, but it has been like that for years so I\ndid not bother.\n--\nMichael", "msg_date": "Mon, 9 Mar 2020 11:12:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: PG_COLOR not mentioned in docs of vacuumlo, oid2name and pgbench" } ]
[ { "msg_contents": "Hi,\n\nCurrently if pg_wal_replay_pause() is called after the standby\npromotion is triggerred before the promotion has successfully\nfinished, WAL replay is paused. That is, the replay pause is\npreferred than the promotion. Is this desiderable behavior?\n\nISTM that most users including me want the recovery to complete\nas soon as possible and the server to become the master when\nthey requeste the promotion. So I'm thinking to change\nthe recovery so that it ignore the pause request after the promotion\nis triggerred. Thought?\n\nI want to start this discussion because this is related to the patch\n(propoesd at the thread [1]) that I'm reviewing. It does that partially,\ni.e., prefers the promotion only when the pause is requested by\nrecovery_target_action=pause. But I think that it's reasonable and\nmore consistent to do that whether whichever the pause is requested\nby pg_wal_replay_pause() or recovery_target_action.\n\nBTW, regarding \"replay pause vs. delayed standby\", any wait by\nrecovery_min_apply_delay doesn't happen after the promotion\nis triggerred. IMO \"pause\" should be treated as the similar.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/19168211580382043@myt5-b646bde4b8f3.qloud-c.yandex.net\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Wed, 4 Mar 2020 20:41:36 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "replay pause vs. standby promotion" }, { "msg_contents": "Hello\n\n> I want to start this discussion because this is related to the patch\n> (propoesd at the thread [1]) that I'm reviewing. It does that partially,\n> i.e., prefers the promotion only when the pause is requested by\n> recovery_target_action=pause. But I think that it's reasonable and\n> more consistent to do that whether whichever the pause is requested\n> by pg_wal_replay_pause() or recovery_target_action.\n\n+1.\nI'm just not sure if this is safe for replay logic, so I did not touch this behavior in my proposal. (hmm, I wanted to mention this, but apparently forgot)\n\nregards, Sergei\n\n\n", "msg_date": "Wed, 04 Mar 2020 15:00:54 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: replay pause vs. standby promotion" }, { "msg_contents": "On Wed, 04 Mar 2020 15:00:54 +0300\nSergei Kornilov <sk@zsrv.org> wrote:\n\n> Hello\n> \n> > I want to start this discussion because this is related to the patch\n> > (propoesd at the thread [1]) that I'm reviewing. It does that partially,\n> > i.e., prefers the promotion only when the pause is requested by\n> > recovery_target_action=pause. But I think that it's reasonable and\n> > more consistent to do that whether whichever the pause is requested\n> > by pg_wal_replay_pause() or recovery_target_action. \n> \n> +1.\n\n+1\n\nAnd pg_wal_replay_pause () should probably raise an error explaining the\nstandby ignores the pause because of ongoing promotion.\n\n\n", "msg_date": "Wed, 4 Mar 2020 15:40:19 +0100", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: replay pause vs. standby promotion" }, { "msg_contents": "On 2020/03/04 23:40, Jehan-Guillaume de Rorthais wrote:\n> On Wed, 04 Mar 2020 15:00:54 +0300\n> Sergei Kornilov <sk@zsrv.org> wrote:\n> \n>> Hello\n>>\n>>> I want to start this discussion because this is related to the patch\n>>> (propoesd at the thread [1]) that I'm reviewing. It does that partially,\n>>> i.e., prefers the promotion only when the pause is requested by\n>>> recovery_target_action=pause. But I think that it's reasonable and\n>>> more consistent to do that whether whichever the pause is requested\n>>> by pg_wal_replay_pause() or recovery_target_action.\n>>\n>> +1.\n> \n> +1\n> \n> And pg_wal_replay_pause () should probably raise an error explaining the\n> standby ignores the pause because of ongoing promotion.\n\nOK, so patch attached.\n\nThis patch causes, if a promotion is triggered while recovery is paused,\nthe paused state to end and a promotion to continue. OTOH, this patch\nmakes pg_wal_replay_pause() and _resume() throw an error if it's executed\nwhile a promotion is ongoing.\n\nRegarding recovery_target_action, if the recovery target is reached\nwhile a promotion is ongoing, \"pause\" setting will act the same as \"promote\",\ni.e., recovery will finish and the server will start to accept connections.\n\nTo implement the above, I added new shared varible indicating whether\na promotion is triggered or not. Only startup process can update this shared\nvariable. Other processes like read-only backends can check whether\npromotion is ongoing, via this variable.\n\nI added new function PromoteIsTriggered() that returns true if a promotion\nis triggered. Since the name of this function and the existing function\nIsPromoteTriggered() are confusingly similar, I changed the name of\nIsPromoteTriggered() to IsPromoteSignaled, as more appropriate name.\n\nI'd like to apply the change of log message that Sergei proposed at [1]\nafter commiting this patch if it's ok.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/19168211580382043@myt5-b646bde4b8f3.qloud-c.yandex.net\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters", "msg_date": "Fri, 6 Mar 2020 22:18:26 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: replay pause vs. standby promotion" }, { "msg_contents": "On Fri, Mar 6, 2020 at 10:18 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n> OK, so patch attached.\n>\n> This patch causes, if a promotion is triggered while recovery is paused,\n> the paused state to end and a promotion to continue. OTOH, this patch\n> makes pg_wal_replay_pause() and _resume() throw an error if it's executed\n> while a promotion is ongoing.\n\nRegarding recovery_target_action, if the recovery target is reached\n> while a promotion is ongoing, \"pause\" setting will act the same as\n> \"promote\",\n> i.e., recovery will finish and the server will start to accept connections.\n>\n> To implement the above, I added new shared varible indicating whether\n> a promotion is triggered or not. Only startup process can update this\n> shared\n> variable. Other processes like read-only backends can check whether\n> promotion is ongoing, via this variable.\n>\n> I added new function PromoteIsTriggered() that returns true if a promotion\n> is triggered. Since the name of this function and the existing function\n> IsPromoteTriggered() are confusingly similar, I changed the name of\n> IsPromoteTriggered() to IsPromoteSignaled, as more appropriate name.\n>\n\nI've confirmed the patch works as you described above.\nAnd I also poked around it a little bit but found no problems.\n\nRegards,\n\n--\nAtsushi Torikoshi\n\nOn Fri, Mar 6, 2020 at 10:18 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\nOK, so patch attached.\n\nThis patch causes, if a promotion is triggered while recovery is paused,\nthe paused state to end and a promotion to continue. OTOH, this patch\nmakes pg_wal_replay_pause() and _resume() throw an error if it's executed\nwhile a promotion is ongoing.  \nRegarding recovery_target_action, if the recovery target is reached\nwhile a promotion is ongoing, \"pause\" setting will act the same as \"promote\",\ni.e., recovery will finish and the server will start to accept connections.\n\nTo implement the above, I added new shared varible indicating whether\na promotion is triggered or not. Only startup process can update this shared\nvariable. Other processes like read-only backends can check whether\npromotion is ongoing, via this variable.\n\nI added new function PromoteIsTriggered() that returns true if a promotion\nis triggered. Since the name of this function and the existing function\nIsPromoteTriggered() are confusingly similar, I changed the name of\nIsPromoteTriggered() to IsPromoteSignaled, as more appropriate name.I've confirmed the patch works as you described above.And I also poked around it a little bit but found no problems. Regards,--Atsushi Torikoshi", "msg_date": "Fri, 20 Mar 2020 15:22:53 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: replay pause vs. standby promotion" }, { "msg_contents": "\n\nOn 2020/03/20 15:22, Atsushi Torikoshi wrote:\n> \n> On Fri, Mar 6, 2020 at 10:18 PM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> \n> \n> OK, so patch attached.\n> \n> This patch causes, if a promotion is triggered while recovery is paused,\n> the paused state to end and a promotion to continue. OTOH, this patch\n> makes pg_wal_replay_pause() and _resume() throw an error if it's executed\n> while a promotion is ongoing.  \n> \n> Regarding recovery_target_action, if the recovery target is reached\n> while a promotion is ongoing, \"pause\" setting will act the same as \"promote\",\n> i.e., recovery will finish and the server will start to accept connections.\n> \n> To implement the above, I added new shared varible indicating whether\n> a promotion is triggered or not. Only startup process can update this shared\n> variable. Other processes like read-only backends can check whether\n> promotion is ongoing, via this variable.\n> \n> I added new function PromoteIsTriggered() that returns true if a promotion\n> is triggered. Since the name of this function and the existing function\n> IsPromoteTriggered() are confusingly similar, I changed the name of\n> IsPromoteTriggered() to IsPromoteSignaled, as more appropriate name.\n> \n> \n> I've confirmed the patch works as you described above.\n> And I also poked around it a little bit but found no problems.\n\nThanks for the review!\nBarrying any objection, I will commit the patch.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Mon, 23 Mar 2020 15:51:45 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: replay pause vs. standby promotion" }, { "msg_contents": "Hello\n\n(I am trying to find an opportunity to review this patch...)\n\nConsider test case with streaming replication:\n\non primary: create table foo (i int);\non standby:\n\npostgres=# select pg_wal_replay_pause();\n pg_wal_replay_pause \n---------------------\n \n(1 row)\n\npostgres=# select pg_is_wal_replay_paused();\n pg_is_wal_replay_paused \n-------------------------\n t\n(1 row)\n\npostgres=# table foo;\n i \n---\n(0 rows)\n\nExecute \"insert into foo values (1);\" on primary\n\npostgres=# select pg_promote ();\n pg_promote \n------------\n t\n(1 row)\n\npostgres=# table foo;\n i \n---\n 1\n\nAnd we did replay one additional change during promote. I think this is wrong behavior. Possible can be fixed by\n\n+ if (PromoteIsTriggered()) break;\n /* Setup error traceback support for ereport() */\n errcallback.callback = rm_redo_error_callback;\n\nregards, Sergei\n\n\n", "msg_date": "Mon, 23 Mar 2020 16:46:52 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: replay pause vs. standby promotion" }, { "msg_contents": "\n\nOn 2020/03/23 22:46, Sergei Kornilov wrote:\n> Hello\n> \n> (I am trying to find an opportunity to review this patch...)\n\nThanks for the review! It's really helpful!\n\n\n> Consider test case with streaming replication:\n> \n> on primary: create table foo (i int);\n> on standby:\n> \n> postgres=# select pg_wal_replay_pause();\n> pg_wal_replay_pause\n> ---------------------\n> \n> (1 row)\n> \n> postgres=# select pg_is_wal_replay_paused();\n> pg_is_wal_replay_paused\n> -------------------------\n> t\n> (1 row)\n> \n> postgres=# table foo;\n> i\n> ---\n> (0 rows)\n> \n> Execute \"insert into foo values (1);\" on primary\n> \n> postgres=# select pg_promote ();\n> pg_promote\n> ------------\n> t\n> (1 row)\n> \n> postgres=# table foo;\n> i\n> ---\n> 1\n> \n> And we did replay one additional change during promote. I think this is wrong behavior. Possible can be fixed by\n> \n> + if (PromoteIsTriggered()) break;\n> /* Setup error traceback support for ereport() */\n> errcallback.callback = rm_redo_error_callback;\n\nYou meant that the promotion request should cause the recovery\nto finish immediately even if there are still outstanding WAL records,\nand cause the standby to become the master? I don't think that\nit's the expected (also existing) behavior of the promotion. That is,\nthe promotion request should cause the recovery to replay as much\nWAL records as possible, to the end, in order to avoid data loss. No?\n\nIf we would like to have the promotion method to finish recovery\nimmediately, IMO we should implement something like\n\"pg_ctl promote -m fast\". That is, we need to add new method into\nthe promotion.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Mon, 23 Mar 2020 23:36:35 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: replay pause vs. standby promotion" }, { "msg_contents": "On Mon, Mar 23, 2020 at 10:36 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> If we would like to have the promotion method to finish recovery\n> immediately, IMO we should implement something like\n> \"pg_ctl promote -m fast\". That is, we need to add new method into\n> the promotion.\n\nI think 'immediate' would be a better choice. One reason is that we've\nused the term 'fast promotion' in the past for a different feature.\nAnother is that 'immediate' might sound slightly scary to people who\nare familiar with what 'pg_ctl stop -mimmediate' does. And you want\npeople doing this to be just a little bit scared: not too scared, but\na little scared.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 23 Mar 2020 10:55:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: replay pause vs. standby promotion" }, { "msg_contents": "Hello\n\n> You meant that the promotion request should cause the recovery\n> to finish immediately even if there are still outstanding WAL records,\n> and cause the standby to become the master?\n\nOh, I get your point. But yes, I expect that in case of promotion request during a pause, the user (me too) will want to have exactly the current state, not latest available in WALs.\n\nReal usercase from my experience:\nThe user wants to update a third-party application. In case of problems, he wants to return to the old version of the application and the unchanged replica. Thus, it sets a pause on standby and performs an update. If all is ok - he will resume replay. In case of some problems he plans to promote standby.\nBut oops, standby will ignore promote signals during pause and we need get currect LSN from standby and restart it with recovery_target_lsn = ? and recovery_target_action = promote to achieve this state.\n\nregards, Sergei\n\n\n", "msg_date": "Mon, 23 Mar 2020 18:17:16 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: replay pause vs. standby promotion" }, { "msg_contents": "\n\nOn 2020/03/23 23:55, Robert Haas wrote:\n> On Mon, Mar 23, 2020 at 10:36 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> If we would like to have the promotion method to finish recovery\n>> immediately, IMO we should implement something like\n>> \"pg_ctl promote -m fast\". That is, we need to add new method into\n>> the promotion.\n> \n> I think 'immediate' would be a better choice. One reason is that we've\n> used the term 'fast promotion' in the past for a different feature.\n> Another is that 'immediate' might sound slightly scary to people who\n> are familiar with what 'pg_ctl stop -mimmediate' does. And you want\n> people doing this to be just a little bit scared: not too scared, but\n> a little scared.\n\n+1\n\nWhen I proposed the feature five years before, I used \"immediate\"\nas the option value.\nhttps://postgr.es/m/CAHGQGwHtvyDqKZaYWYA9zyyLEcAKiF5P0KpcpuNE_tsrGTFtQw@mail.gmail.com\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Tue, 24 Mar 2020 00:57:15 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: replay pause vs. standby promotion" }, { "msg_contents": "\n\nOn 2020/03/24 0:17, Sergei Kornilov wrote:\n> Hello\n> \n>> You meant that the promotion request should cause the recovery\n>> to finish immediately even if there are still outstanding WAL records,\n>> and cause the standby to become the master?\n> \n> Oh, I get your point. But yes, I expect that in case of promotion request during a pause, the user (me too) will want to have exactly the current state, not latest available in WALs.\n\nBasically I'd like the promotion to make the standby replay all the WAL\neven if it's requested during pause state. OTOH I understand there\nare use cases where immediate promotion is useful, as you explained.\nSo, +1 to add something like \"pg_ctl promote -m immediate\".\n\nBut I'm afraid that now it's too late to add such feature into v13.\nProbably it's an item for v14....\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Tue, 24 Mar 2020 00:57:56 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: replay pause vs. standby promotion" }, { "msg_contents": "\n\nOn 2020/03/24 0:57, Fujii Masao wrote:\n> \n> \n> On 2020/03/24 0:17, Sergei Kornilov wrote:\n>> Hello\n>>\n>>> You meant that the promotion request should cause the recovery\n>>> to finish immediately even if there are still outstanding WAL records,\n>>> and cause the standby to become the master?\n>>\n>> Oh, I get your point. But yes, I expect that in case of promotion request during a pause, the user (me too) will want to have exactly the current state, not latest available in WALs.\n> \n> Basically I'd like the promotion to make the standby replay all the WAL\n> even if it's requested during pause state. OTOH I understand there\n> are use cases where immediate promotion is useful, as you explained.\n> So, +1 to add something like \"pg_ctl promote -m immediate\".\n> \n> But I'm afraid that now it's too late to add such feature into v13.\n> Probably it's an item for v14....\n\nI pushed the latest version of the patch. If you have further opinion\nabout immediate promotion, let's keep discussing that!\n\nAlso we need to go back to the original patch posted at [1].\n\n[1]\nhttps://www.postgresql.org/message-id/flat/19168211580382043@myt5-b646bde4b8f3.qloud-c.yandex.net\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Tue, 24 Mar 2020 12:54:25 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: replay pause vs. standby promotion" }, { "msg_contents": "Hello\n\n> I pushed the latest version of the patch. If you have further opinion\n> about immediate promotion, let's keep discussing that!\n\nThank you!\n\nHonestly, I forgot that the promotion is documented in high-availability.sgml as:\n\n> Before failover, any WAL immediately available in the archive or in pg_wal will be\n> restored, but no attempt is made to connect to the master.\n\nI mistakenly thought that promote should be \"immediately\"...\n\n> If a promotion is triggered while recovery is paused, the paused state ends and a promotion continues.\n\nCould we add a few words in func.sgml to clarify the behavior? Especially for users from my example above. Something like:\n\n> If a promotion is triggered while recovery is paused, the paused state ends, replay of any WAL immediately available in the archive or in pg_wal will be continued and then a promotion will be completed.\n\nregards, Sergei\n\n\n", "msg_date": "Tue, 24 Mar 2020 18:17:58 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: replay pause vs. 
standby promotion" }, { "msg_contents": "\n\nOn 2020/03/25 0:17, Sergei Kornilov wrote:\n> Hello\n> \n>> I pushed the latest version of the patch. If you have further opinion\n>> about immediate promotion, let's keep discussing that!\n> \n> Thank you!\n> \n> Honestly, I forgot that the promotion is documented in high-availability.sgml as:\n> \n>> Before failover, any WAL immediately available in the archive or in pg_wal will be\n>> restored, but no attempt is made to connect to the master.\n> \n> I mistakenly thought that promote should be \"immediately\"...\n> \n>> If a promotion is triggered while recovery is paused, the paused state ends and a promotion continues.\n> \n> Could we add a few words in func.sgml to clarify the behavior? Especially for users from my example above. Something like:\n> \n>> If a promotion is triggered while recovery is paused, the paused state ends, replay of any WAL immediately available in the archive or in pg_wal will be continued and then a promotion will be completed.\n\nThis description is true if pause is requested by pg_wal_replay_pause(),\nbut not if recovery target is reached and pause is requested by\nrecovery_target_action=pause. In the latter case, even if there are WAL data\navailable in pg_wal or archive, they are not replayed, i.e., the promotion\ncompletes immediately. Probably we should document those two cases\nexplicitly to avoid the confusion about a promotion and recovery pause?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Wed, 25 Mar 2020 14:17:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: replay pause vs. standby promotion" }, { "msg_contents": "Hi\n\n>>  Could we add a few words in func.sgml to clarify the behavior? Especially for users from my example above. 
Something like:\n>>\n>>>  If a promotion is triggered while recovery is paused, the paused state ends, replay of any WAL immediately available in the archive or in pg_wal will be continued and then a promotion will be completed.\n>\n> This description is true if pause is requested by pg_wal_replay_pause(),\n> but not if recovery target is reached and pause is requested by\n> recovery_target_action=pause. In the latter case, even if there are WAL data\n> available in pg_wal or archive, they are not replayed, i.e., the promotion\n> completes immediately. Probably we should document those two cases\n> explicitly to avoid the confusion about a promotion and recovery pause?\n\nThis is the description for pg_wal_replay_pause, but actually we suggest to call pg_wal_replay_resume in recovery_target_action=pause... So, I agree, we need to document both cases.\n\nPS: I think we have inconsistent behavior here... Read WAL during promotion from local pg_wal AND call restore_command, but ignore walreceiver also seems strange for my DBA hat...\n\nregards, Sergei\n\n\n", "msg_date": "Wed, 25 Mar 2020 13:42:56 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: replay pause vs. standby promotion" }, { "msg_contents": "\n\nOn 2020/03/25 19:42, Sergei Kornilov wrote:\n> Hi\n> \n>>>  Could we add a few words in func.sgml to clarify the behavior? Especially for users from my example above. Something like:\n>>>\n>>>>  If a promotion is triggered while recovery is paused, the paused state ends, replay of any WAL immediately available in the archive or in pg_wal will be continued and then a promotion will be completed.\n>>\n>> This description is true if pause is requested by pg_wal_replay_pause(),\n>> but not if recovery target is reached and pause is requested by\n>> recovery_target_action=pause. In the latter case, even if there are WAL data\n>> available in pg_wal or archive, they are not replayed, i.e., the promotion\n>> completes immediately. 
Probably we should document those two cases\n>> explicitly to avoid the confusion about a promotion and recovery pause?\n> \n> This is the description for pg_wal_replay_pause, but actually we suggest to call pg_wal_replay_resume in recovery_target_action=pause... So, I agree, we need to document both cases.\n> \n> PS: I think we have inconsistent behavior here... Read WAL during promotion from local pg_wal AND call restore_command, but ignore walreceiver also seems strange for my DBA hat...\n\nIf we don't ignore walreceiver and do try to connect to the master,\na promotion and recovery might never end since new WAL data can\nbe streamed. You think this behavior is more consistent?\n\nIMO it's valid to replay all the WAL data available to avoid data loss\nbefore a promotion completes.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 26 Mar 2020 22:20:37 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: replay pause vs. standby promotion" }, { "msg_contents": "Hello\n\n> If we don't ignore walreceiver and do try to connect to the master,\n> a promotion and recovery might never end since new WAL data can\n> be streamed. You think this behavior is more consistent?\n\nWe have no simple point to stop replay.\nWell, except for \"immediately\" - just one easy stop. But I agree that this is not the best option. Simple and clear, but not best one for data when we want to replay as much as possible from archive.\n\n> IMO it's valid to replay all the WAL data available to avoid data loss\n> before a promotion completes.\n\nBut in case of still working primary (with archive_command) we choose quite random time to promote. A random time when the primary did not save the new wal segment.\nOr even when a temporary error of restore_command occurs? We mention just cp command in docs. I know users use cp (e.g. 
from NFS) without further error handling.\n\nregards, Sergei\n\n\n", "msg_date": "Thu, 26 Mar 2020 17:37:30 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: replay pause vs. standby promotion" } ]
[ { "msg_contents": "I noticed while going over the multirange types patch that it adds a\npointless typiofunc cached OID to a struct used for I/O functions'\nfn_extra. It seems to go completely unused, so I checked range types\n(which this was cribbed from) and indeed, it is completely unused there\neither. My guess is that it was in turn cribbed from array's\nArrayMetaState, which is considerably more sophisticated; I suspect\nnobody noticed that caching it was pointless.\n\nHere's a patch to remove it from rangetypes.c. It doesn't really waste\nmuch memory anyway, but removing it lessens the cognitive load by one or\ntwo bits.\n\n-- \nÁlvaro Herrera", "msg_date": "Wed, 4 Mar 2020 18:57:11 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "useless RangeIOData->typiofunc" }, { "msg_contents": "On 3/4/20 1:57 PM, Alvaro Herrera wrote:\n> I noticed while going over the multirange types patch that it adds a\n> pointless typiofunc cached OID to a struct used for I/O functions'\n> fn_extra. It seems to go completely unused, so I checked range types\n> (which this was cribbed from) and indeed, it is completely unused there\n> either. My guess is that it was in turn cribbed from array's\n> ArrayMetaState, which is considerably more sophisticated; I suspect\n> nobody noticed that caching it was pointless.\n\nI didn't believe it at first but I think you're right. :-)\n\n> Here's a patch to remove it from rangetypes.c. 
It doesn't really waste\n> much memory anyway, but removing it lessens the cognitive load by one or\n> two bits.\n\nLooks good to me, and it seems okay to make the same edits to \nmultirangetypes.c\n\nYours,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com\n\n\n", "msg_date": "Wed, 4 Mar 2020 14:34:02 -0800", "msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>", "msg_from_op": false, "msg_subject": "Re: useless RangeIOData->typiofunc" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I noticed while going over the multirange types patch that it adds a\n> pointless typiofunc cached OID to a struct used for I/O functions'\n> fn_extra. It seems to go completely unused, so I checked range types\n> (which this was cribbed from) and indeed, it is completely unused there\n> either. My guess is that it was in turn cribbed from array's\n> ArrayMetaState, which is considerably more sophisticated; I suspect\n> nobody noticed that caching it was pointless.\n\n> Here's a patch to remove it from rangetypes.c. 
It doesn't really waste\n> much memory anyway, but removing it lessens the cognitive load by one or\n> two bits.\n\nHm, I'm not sure that really lessens the cognitive load any, but\nif you do commit this please fix the dangling reference you left\nin the nearby comment:\n\n {\n TypeCacheEntry *typcache; /* range type's typcache entry */\n- Oid typiofunc; /* element type's I/O function */\n Oid typioparam; /* element type's I/O parameter */\n FmgrInfo proc; /* lookup result for typiofunc */\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n } RangeIOData;\n\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Mar 2020 17:34:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: useless RangeIOData->typiofunc" }, { "msg_contents": "On 2020-Mar-04, Tom Lane wrote:\n\n> Hm, I'm not sure that really lessens the cognitive load any, but\n> if you do commit this please fix the dangling reference you left\n> in the nearby comment:\n> \n> {\n> TypeCacheEntry *typcache; /* range type's typcache entry */\n> - Oid typiofunc; /* element type's I/O function */\n> Oid typioparam; /* element type's I/O parameter */\n> FmgrInfo proc; /* lookup result for typiofunc */\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> } RangeIOData;\n\nThanks -- ISTM it makes more sense to put the FmgrInfo before the\ntypioparam too:\n\ntypedef struct RangeIOData\n{\n TypeCacheEntry *typcache; /* range type's typcache entry */\n FmgrInfo proc; /* element type's I/O function */\n Oid typioparam; /* element type's I/O parameter */\n} RangeIOData;\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 5 Mar 2020 11:18:59 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: useless RangeIOData->typiofunc" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Thanks -- ISTM it makes more sense to put the FmgrInfo before the\n> typioparam 
too:\n\ntypedef struct RangeIOData\n{\n TypeCacheEntry *typcache; /* range type's typcache entry */\n FmgrInfo proc; /* element type's I/O function */\n Oid typioparam; /* element type's I/O parameter */\n} RangeIOData;\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 5 Mar 2020 11:18:59 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: useless RangeIOData->typiofunc" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Thanks -- ISTM it makes more sense to put the FmgrInfo before the\n> typioparam too:\n\n> typedef struct RangeIOData\n> {\n> TypeCacheEntry *typcache; /* range type's typcache entry */\n> FmgrInfo proc; /* element type's I/O function */\n> Oid typioparam; /* element type's I/O parameter */\n> } RangeIOData;\n\nYeah, WFM. Maybe even rename the FmgrInfo to \"typioproc\"\nor the like?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Mar 2020 09:26:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: useless RangeIOData->typiofunc" }, { "msg_contents": "On 2020-Mar-05, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Thanks -- ISTM it makes more sense to put the FmgrInfo before the\n> > typioparam too:\n> \n> > typedef struct RangeIOData\n> > {\n> > TypeCacheEntry *typcache; /* range type's typcache entry */\n> > FmgrInfo proc; /* element type's I/O function */\n> > Oid typioparam; /* element type's I/O parameter */\n> > } RangeIOData;\n> \n> Yeah, WFM. Maybe even rename the FmgrInfo to \"typioproc\"\n> or the like?\n\nGood idea, thanks! Pushed with that change.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Mar 2020 12:02:40 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: useless RangeIOData->typiofunc" } ]
[ { "msg_contents": "If I run the regression tests so that the \"tenk1\" table is available,\r\nand then create an index on tenk1.twothousand, I notice that simple\r\n\"where twothousand = ?\" queries have query plans that look like the\r\nfollowing sample plan:\r\n\r\npg@regression:5432 [17755]=# explain (analyze, buffers, costs off)\r\nselect * from tenk1 where twothousand = 42;\r\n┌─────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n├─────────────────────────────────────────────────────────────────────────────────────────────┤\r\n│ Bitmap Heap Scan on tenk1 (actual time=0.023..0.032 rows=5 loops=1)\r\n │\r\n│ Recheck Cond: (twothousand = 42)\r\n │\r\n│ Heap Blocks: exact=5\r\n │\r\n│ Buffers: shared hit=7\r\n │\r\n│ -> Bitmap Index Scan on tenk1_twothousand_idx1 (actual\r\ntime=0.015..0.015 rows=5 loops=1) │\r\n│ Index Cond: (twothousand = 42)\r\n │\r\n│ Buffers: shared hit=2\r\n │\r\n│ Planning Time: 0.146 ms\r\n │\r\n│ Execution Time: 0.065 ms\r\n │\r\n└─────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(9 rows)\r\n\r\nSeems reasonable. There is a bitmap index scan, more or less due to\r\nthe uncorrelated table access that would be required by an index scan\r\non tenk1_twothousand_idx1. We return 5 rows, and must access one heap\r\nblock for each of those 5 rows. 
I wonder why we don't get the\r\nfollowing alternative plan instead, which is slightly faster even with\r\nthe weak correlation:\r\n\r\npg@regression:5432 [17755]=# explain (analyze, buffers, costs off)\r\nselect * from tenk1 where twothousand = 42;\r\n┌────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n├────────────────────────────────────────────────────────────────────────────────────────────┤\r\n│ Index Scan using tenk1_twothousand_idx1 on tenk1 (actual\r\ntime=0.020..0.030 rows=5 loops=1) │\r\n│ Index Cond: (twothousand = 42)\r\n │\r\n│ Buffers: shared hit=7\r\n │\r\n│ Planning Time: 0.134 ms\r\n │\r\n│ Execution Time: 0.058 ms\r\n │\r\n└────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(5 rows)\r\n\r\nBoth plans are very similar, really. The number of heap accesses and\r\nB-Tree index page accesses is exactly the same in each case. But the\r\nindex scan plan has one non-obvious advantage, that might matter a lot\r\nin the real world: it can apply the kill_prior_tuple optimization. (It\r\nis never possible to use the kill_prior_tuple optimization during a\r\nbitmap index scan.)\r\n\r\nIt makes sense that the planner determines that a bitmap index scan is\r\nfaster -- or it used to make sense. Before commit dd299df8, which made\r\nheap TID a tiebreaker nbtree index column, we might find ourselves\r\naccessing the same heap page multiple times, pinning it a second or a\r\nthird time within the executor (it depended on very unstable\r\nimplementation details in the nbtree code). These days we should\r\n*reliably* access the same number of heap pages (and index pages) with\r\neither plan. (There are a couple of caveats that I'm glossing over for\r\nnow, like pg_upgrade'd indexes.)\r\n\r\nIs it worth considering the influence of the tiebreaker heap TID\r\ncolumn work in the planner, so that we get to use the kill_prior_tuple\r\noptimization more often? 
I'm not planning to work on it myself, but it\r\nseems worth considering.\r\n\r\nFWIW, the planner requires lots of convincing before it will use the\r\nindex scan plan right now. I find that I need to set random_page_cost\r\nto 1.6 before the planner chooses the latter plan (a CLUSTER that uses\r\nthe twothousand index works too, of course). If I come up with a\r\nsimilar example that returns 10 rows (i.e. that indexes the \"thousand\"\r\nrow instead), random_page_cost needs to be reduced to 1.1 to produce\r\nan equivalent query plan crossover.\r\n\r\n-- \r\nPeter Geoghegan\r\n", "msg_date": "Wed, 4 Mar 2020 17:13:33 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "kill_prior_tuple and index scan costing" }, { "msg_contents": "Hi,\n\nreply largely based on a quick IM conversation between Peter and me.\n\nOn 2020-03-04 17:13:33 -0800, Peter Geoghegan wrote:\n> Both plans are very similar, really. The number of heap accesses and\n> B-Tree index page accesses is exactly the same in each case.\n\nNote that bitmap heap scans, currently, have the huge advantage of being\nable to efficiently prefetch heap data. That can be a *huge* performance\nboon (I've seen several orders of magnitude on SSDs).\n\nThere are also some benefits of bitmap heap scans in other ways. For heap\nthe \"single tid\" path index->heap lookup locks the page once for each\ntid, whereas bitmap heap scans only do that once - adding more lock\ncycles obviously can have a noticeable performance impact. Not having\ninterspersed I/O between index and heap can be beneficial too.\n\n\nI thought we had optimized the non-lossy bitmap path for heap\n(i.e. heapam_scan_bitmap_next_block()) to perform visibility checks more\nefficiently than single tid fetches\n(i.e. heapam_index_fetch_tuple()). 
But both use\nheap_hot_search_buffer(), even though the number of page locks differs.\n\nI'm a bit surprised that neither heap_hot_search_buffer() nor the \"lossy\npath\" in heapam_scan_bitmap_next_block() takes advantage of the page's\nall-visible flag? I don't really see a good reason for that. The HTSV calls\ndo show up noticeably in profiles, in my experience.\n\n\nWhile your recent btree work ensures that we get the heap tids for an\nequality lookup in heap order (right?), I don't think we currently have\nthe planner infrastructure to know that that's the case (since other\nindex types don't guarantee that) / take it into account for planning?\n\n\n> But the index scan plan has one non-obvious advantage, that might\n> matter a lot in the real world: it can apply the kill_prior_tuple\n> optimization. (It is never possible to use the kill_prior_tuple\n> optimization during a bitmap index scan.)\n\nIndeed. I've seen this cause very significant issues a couple\ntimes. Basically whenever the handful of very common queries that\ntouched most of the data switched to bitmap heap scans, the indexes\nwould explode in size. Due to the index sizes involved there was no way\nnormal vacuum could clean up dead tuples quickly enough to prevent\nbloat, but with kill_prior_tuple that wasn't a problem.\n\nI have wondered whether we could \"just\" add some support for\nkill_prior_tuple to the bitmap index scan infrastructure. Obviously\nthat'd require some way of calling \"back\" into the index code once\n(several?) tuples on a page are found to be dead during a bitmap heap\nscan. Which would require keeping track of additional metadata for each\ntuple in the tid bitmap, which is obviously not free and would have to\nbe conditional.\n\nI don't really like the kill_prior_tuple interface much. 
But I don't\nimmediately see how to do better, without increasing the overhead.\n\n\n> It makes sense that the planner determines that a bitmap index scan is\n> faster -- or it used to make sense. Before commit dd299df8, which made\n> heap TID a tiebreaker nbtree index column, we might find ourselves\n> accessing the same heap page multiple times, pinning it a second or a\n> third time within the executor (it depended on very unstable\n> implementation details in the nbtree code). These days we should\n> *reliably* access the same number of heap pages (and index pages) with\n> either plan. (There are a couple of caveats that I'm glossing over for\n> now, like pg_upgrade'd indexes.)\n\nLeaving the locking difference pointed out above aside, there still is a\nsignificant difference in how many times we indirectly call into the\nindex AM, and how much setup work has to be done though?\n\nThere's at least one index_getnext_tid() call for each result tuple,\nwhich each time indirectly has to call btgettuple(). And each\nbtgettuple() has to do checks (do array keys need to be advanced, has\n_bt_first() been called). 
Whereas btgetbitmap() can amortize across\nall tuples.\n\nAnd that's without considering the fact that, to me, btgetbitmap() could\nbe significantly optimized by adding multiple tuples to the bitmap at\nonce, rather than doing so one-by-one.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 21 Mar 2020 19:33:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: kill_prior_tuple and index scan costing" }, { "msg_contents": "On Sat, Mar 21, 2020 at 07:33:02PM -0700, Andres Freund wrote:\n> While your recent btree work ensures that we get the heap tids for an\n> equality lookup in heap order (right?),\n\nI think when I tested the TID tiebreaker patch, it didn't help for our case,\nwhich is for inequality: (timestamptz >= start AND timestamptz < end).\n\nThat seems to explain why, although I don't understand why it wouldn't also\napply to inequality comparison ?\n\n|template1=# CREATE TABLE t(i int,j int); CREATE INDEX ON t(i); INSERT INTO t SELECT (0.0001*a+9*(random()-0.5))::int FROM generate_series(1,99999999) a; VACUUM ANALYZE t;\n|template1=# explain (analyze,buffers) SELECT * FROM t WHERE i BETWEEN 2000 AND 3000;\n| Index Scan using t_i_idx on t (cost=0.44..277164.86 rows=10026349 width=8) (actual time=0.199..6839.564 rows=10010076 loops=1)\n| Index Cond: ((i >= 2000) AND (i <= 3000))\n| Buffers: shared hit=394701 read=52699\n\nvs.\n\n|template1=# SET enable_seqscan=off; SET enable_indexscan=off; explain (analyze,buffers) SELECT * FROM t WHERE i BETWEEN 2000 AND 3000;\n| Bitmap Heap Scan on t (cost=135038.52..1977571.10 rows=10026349 width=8) (actual time=743.649..3760.643 rows=10010076 loops=1)\n| Recheck Cond: ((i >= 2000) AND (i <= 3000))\n| Heap Blocks: exact=44685\n| Buffers: shared read=52700\n| -> Bitmap Index Scan on t_i_idx (cost=0.00..132531.93 rows=10026349 width=0) (actual time=726.474..726.475 rows=10010076 loops=1)\n| Index Cond: ((i >= 2000) AND (i <= 3000))\n| Buffers: shared read=8015\n\nI'm 
not concerned with the \"actual\" time or hit vs cached, but the total buffer\npages. Indexscan accessed 450k buffers vs 52k for bitmapscan.\n\n> I don't think we currently have\n> the planner infrastructure to know that that's the case (since other\n> index types don't guarantee that) / take it into account for planning?\n\nRight, since correlation is a property of the table column and not of the\nindex. See also:\nhttps://www.postgresql.org/message-id/14438.1512499811@sss.pgh.pa.us\n\nYears ago I had a patch to make correlation a property of indexes.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 21 Mar 2020 23:53:05 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: kill_prior_tuple and index scan costing" }, { "msg_contents": "Hi,\n\nOn 2020-03-21 23:53:05 -0500, Justin Pryzby wrote:\n> On Sat, Mar 21, 2020 at 07:33:02PM -0700, Andres Freund wrote:\n> > While your recent btree work ensures that we get the heap tids for an\n> > equality lookup in heap order (right?),\n> \n> I think when I tested the TID tiebreaker patch, it didn't help for our case,\n> which is for inequality: (timestamptz >= start AND timestamptz < end).\n> \n> That seems to explain why, although I don't understand why it wouldn't also\n> apply to inequality comparison ?\n\nBecause tids are only ordered for a single lookup key. So if you scan\nacross multiple values you could have key:page visited in the order of\n1:1 1:2 1:99 2:1 2:7 99:1 or such, i.e. the heap pages would not be\nmonotonically increasing. You can't however have 1:17 1:1, because for a\nspecific key value, the tid is used as an additional comparison value.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 21 Mar 2020 22:03:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: kill_prior_tuple and index scan costing" } ]
[ { "msg_contents": "While looking at the tg_updatedcols patch I happened to notice that we still\nsupport pre-7.3 constraint triggers by converting them on the fly. AFAICT this\nrequires a pre-7.3 dump to hit.\n\nThis was added in late 2007 in a2899ebdc28080eab0f4bb0b8a5f30aa7bb31a89 due to\na report from the field, but I doubt this codepath is exercised much today (if\nat all).\n\nHaving code which is untested and not exercised by developers (or users, if my\nassumption holds), yet being reachable by SQL, runs the risk of introducing\nsubtle bugs. Is there a use case for keeping it, or can/should it be removed in\n14? That would still leave a lot of supported versions to upgrade to in case\nthere are users who need this. Unless there are immediate -1's, I'll park this\nin a CF for v14.\n\ncheers ./daniel", "msg_date": "Thu, 5 Mar 2020 14:38:30 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Retiring support for pre-7.3 FK constraint triggers" }, { "msg_contents": "On 2020-Mar-05, Daniel Gustafsson wrote:\n\n> While looking at the tg_updatedcols patch I happened to notice that we still\n> support pre-7.3 constraint triggers by converting them on the fly. AFAICT this\n> requires a pre-7.3 dump to hit.\n> \n> This was added in late 2007 in a2899ebdc28080eab0f4bb0b8a5f30aa7bb31a89 due to\n> a report from the field, but I doubt this codepath is exercised much today (if\n> at all).\n\npg_dump's support for server versions prior to 8.0 was removed by commit\n64f3524e2c8d (Oct 2016) so it seems fair to remove this too. If people\nneed to upgrade from anything older than 7.3, they can do an intermediate jump.\n\n> Having code which is untested and not exercised by developers (or users, if my\n> assumption holds), yet being reachable by SQL, runs the risk of introducing\n> subtle bugs. Is there a use case for keeping it, or can/should it be removed in\n> 14? 
That would still leave a lot of supported versions to upgrade to in case\n> there are users who need this. Unless there are immediate -1's, I'll park this\n> in a CF for v14.\n\nI know it's late in the cycle for patches in commitfest, but why not\nconsider this for pg13 nonetheless? It seems simple enough. Also, per\nhttps://coverage.postgresql.org/src/backend/commands/trigger.c.gcov.html\nthis is the only large chunk of uncovered code in commands/trigger.c.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Mar 2020 11:30:41 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Retiring support for pre-7.3 FK constraint triggers" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Mar-05, Daniel Gustafsson wrote:\n>> While looking at the tg_updatedcols patch I happened to notice that we still\n>> support pre-7.3 constraint triggers by converting them on the fly. AFAICT this\n>> requires a pre-7.3 dump to hit.\n\n> I know it's late in the cycle for patches in commitfest, but why not\n> consider this for pg13 nonetheless? It seems simple enough. Also, per\n> https://coverage.postgresql.org/src/backend/commands/trigger.c.gcov.html\n> this is the only large chunk of uncovered code in commands/trigger.c.\n\n+1 --- I think this fits in well with my nearby proposal to remove\nOPAQUE, which is also only relevant for pre-7.3 dumps. 
Let's just\nnuke that stuff.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Mar 2020 09:42:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Retiring support for pre-7.3 FK constraint triggers" }, { "msg_contents": "On 3/5/20 9:42 AM, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> On 2020-Mar-05, Daniel Gustafsson wrote:\n>>> While looking at the tg_updatedcols patch I happened to notice that we still\n>>> support pre-7.3 constraint triggers by converting them on the fly. AFAICT this\n>>> requires a pre-7.3 dump to hit.\n> \n>> I know it's late in the cycle for patches in commitfest, but why not\n>> consider this for pg13 nonetheless? It seems simple enough. Also, per\n>> https://coverage.postgresql.org/src/backend/commands/trigger.c.gcov.html\n>> this is the only large chunk of uncovered code in commands/trigger.c.\n> \n> +1 --- I think this fits in well with my nearby proposal to remove\n> OPAQUE, which is also only relevant for pre-7.3 dumps. Let's just\n> nuke that stuff.\n\n+1. CF entry added:\n\nhttps://commitfest.postgresql.org/27/2506\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Thu, 5 Mar 2020 09:56:40 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Retiring support for pre-7.3 FK constraint triggers" }, { "msg_contents": "> On 5 Mar 2020, at 15:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>>> On 2020-Mar-05, Daniel Gustafsson wrote:\n>>> While looking at the tg_updatedcols patch I happened to notice that we still\n>>> support pre-7.3 constraint triggers by converting them on the fly. AFAICT this\n>>> requires a pre-7.3 dump to hit.\n> \n>> I know it's late in the cycle for patches in commitfest, but why not\n>> consider this for pg13 nonetheless? It seems simple enough. 
Also, per\n>> https://coverage.postgresql.org/src/backend/commands/trigger.c.gcov.html\n>> this is the only large chunk of uncovered code in commands/trigger.c.\n> \n> +1 --- I think this fits in well with my nearby proposal to remove\n> OPAQUE, which is also only relevant for pre-7.3 dumps. Let's just\n> nuke that stuff.\n\nSounds good. I was opting for 14 to not violate the no new patches in an ongoing CF policy, but if there is consensus from committers then +1 from me.\n\ncheers ./daniel\n\n", "msg_date": "Thu, 5 Mar 2020 16:12:14 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Retiring support for pre-7.3 FK constraint triggers" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 5 Mar 2020, at 15:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> +1 --- I think this fits in well with my nearby proposal to remove\n>> OPAQUE, which is also only relevant for pre-7.3 dumps. Let's just\n>> nuke that stuff.\n\n> Sounds good. I was opting for 14 to not violate the no new patches in an ongoing CF policy, but if there is consensus from committers then +1 from me.\n\nAs long as we're thinking of zapping code that is long past its sell-by\ndate, I propose getting rid of this stanza in indexcmds.c, which\nbasically causes CREATE INDEX to ignore certain opclass specifications:\n\n /*\n * Release 7.0 removed network_ops, timespan_ops, and datetime_ops, so we\n * ignore those opclass names so the default *_ops is used. This can be\n * removed in some later release. bjm 2000/02/07\n *\n * Release 7.1 removes lztext_ops, so suppress that too for a while. tgl\n * 2000/07/30\n *\n * Release 7.2 renames timestamp_ops to timestamptz_ops, so suppress that\n * too for awhile. I'm starting to think we need a better approach. tgl\n * 2000/10/01\n *\n * Release 8.0 removes bigbox_ops (which was dead code for a long while\n * anyway).
tgl 2003/11/11\n */\n if (list_length(opclass) == 1)\n {\n char *claname = strVal(linitial(opclass));\n\n if (strcmp(claname, \"network_ops\") == 0 ||\n strcmp(claname, \"timespan_ops\") == 0 ||\n strcmp(claname, \"datetime_ops\") == 0 ||\n strcmp(claname, \"lztext_ops\") == 0 ||\n strcmp(claname, \"timestamp_ops\") == 0 ||\n strcmp(claname, \"bigbox_ops\") == 0)\n opclass = NIL;\n }\n\n\nAt some point, the risk that this causes problems for developers of\nnew opclasses must outweigh the value of silently upgrading old dumps.\nI think if we're zapping other pre-7.3-compatibility hacks for that\npurpose, this one could go too.\n\nElsewhere in indexcmds.c, there's this:\n\n /*\n * Hack to provide more-or-less-transparent updating of old RTREE\n * indexes to GiST: if RTREE is requested and not found, use GIST.\n */\n if (strcmp(accessMethodName, \"rtree\") == 0)\n {\n ereport(NOTICE,\n (errmsg(\"substituting access method \\\"gist\\\" for obsolete method \\\"rtree\\\"\")));\n accessMethodName = \"gist\";\n tuple = SearchSysCache1(AMNAME, PointerGetDatum(accessMethodName));\n }\n\nwhich dates to 8.2 (2a8d3d83e of 2005-11-07).
This is less bad than the\nother thing, since it won't affect the behavior of any command that\nwouldn't otherwise just fail; but maybe its time has passed as well?\nAlthough Alvaro's point comparing these behaviors to pg_dump's support\ncutoff of 8.0 suggests that maybe we should leave this one for now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Mar 2020 10:33:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Retiring support for pre-7.3 FK constraint triggers" }, { "msg_contents": "On 05/03/2020 16:33, Tom Lane wrote:\n> Elsewhere in indexcmds.c, there's this:\n> \n> /*\n> * Hack to provide more-or-less-transparent updating of old RTREE\n> * indexes to GiST: if RTREE is requested and not found, use GIST.\n> */\n> if (strcmp(accessMethodName, \"rtree\") == 0)\n> {\n> ereport(NOTICE,\n> (errmsg(\"substituting access method \\\"gist\\\" for obsolete method \\\"rtree\\\"\")));\n> accessMethodName = \"gist\";\n> tuple = SearchSysCache1(AMNAME, PointerGetDatum(accessMethodName));\n> }\n\nAww, this one is in my list of gotcha trivia questions.\n\nThat's not a reason not to remove it, of course.\n-- \nVik Fearing\n\n\n", "msg_date": "Thu, 5 Mar 2020 18:19:59 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Retiring support for pre-7.3 FK constraint triggers" }, { "msg_contents": "On 2020-Mar-05, Tom Lane wrote:\n\n> As long as we're thinking of zapping code that is long past its sell-by\n> date, I propose getting rid of this stanza in indexcmds.c, which\n> basically causes CREATE INDEX to ignore certain opclass specifications:\n\nI agree, this should be fine to remove.\n\n> Elsewhere in indexcmds.c, there's this:\n> \n> /*\n> * Hack to provide more-or-less-transparent updating of old RTREE\n> * indexes to GiST: if RTREE is requested and not found, use GIST.\n> */\n> if (strcmp(accessMethodName, \"rtree\") == 0)\n> {\n> ereport(NOTICE,\n> 
(errmsg(\"substituting access method \\\"gist\\\" for obsolete method \\\"rtree\\\"\")));\n> accessMethodName = \"gist\";\n> tuple = SearchSysCache1(AMNAME, PointerGetDatum(accessMethodName));\n> }\n> \n> which dates to 8.2 (2a8d3d83e of 2005-11-07). This is less bad than the\n> other thing, since it won't affect the behavior of any command that\n> wouldn't otherwise just fail; but maybe its time has passed as well?\n> Although Alvaro's point comparing these behaviors to pg_dump's support\n> cutoff of 8.0 suggests that maybe we should leave this one for now.\n\nYeah, dunno, 'rtree' is even immortalized in tests; commit f2e403803fe6\nas recently as March 2019 was seen modifying that.\n\n(Another curious factoid is that SQLite supports something that vaguely\nlooks rtreeish https://sqlite.org/rtree.html -- However, because it\ndoesn't use the same syntax Postgres uses, it's not a point against\nremoving our hack.)\n\nI guess we can wait a couple years more on that one, since it's not\ndamaging anything anyway.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Mar 2020 15:36:56 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Retiring support for pre-7.3 FK constraint triggers" }, { "msg_contents": "> On 5 Mar 2020, at 19:36, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Mar-05, Tom Lane wrote:\n> \n>> As long as we're thinking of zapping code that is long past its sell-by\n>> date, I propose getting rid of this stanza in indexcmds.c, which\n>> basically causes CREATE INDEX to ignore certain opclass specifications:\n> \n> I agree, this should be fine to remove.\n\nThe attached patchset removes this stanza as well.\n\nWhen poking around here I realized that defGetStringList was also left unused.\nIt was added with the logical decoding code but the single callsite has since\nbeen removed.
As it's published in a header we might not want to remove it,\nbut I figured I'd bring it up as we're talking about removing code.\n\ncheers ./daniel", "msg_date": "Thu, 5 Mar 2020 21:37:20 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Retiring support for pre-7.3 FK constraint triggers" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Mar-05, Tom Lane wrote:\n>> As long as we're thinking of zapping code that is long past its sell-by\n>> date, I propose getting rid of this stanza in indexcmds.c, which\n>> basically causes CREATE INDEX to ignore certain opclass specifications:\n\n> I agree, this should be fine to remove.\n\nDone.\n\n>> which dates to 8.2 (2a8d3d83e of 2005-11-07). This is less bad than the\n>> other thing, since it won't affect the behavior of any command that\n>> wouldn't otherwise just fail; but maybe its time has passed as well?\n>> Although Alvaro's point comparing these behaviors to pg_dump's support\n>> cutoff of 8.0 suggests that maybe we should leave this one for now.\n\n> Yeah, dunno, 'rtree' is even immortalized in tests; commit f2e403803fe6\n> as recently as March 2019 was seen modifying that.\n\nHah, I didn't realize we actually had code coverage for that!\n\n> I guess we can wait a couple years more on that one, since it's not\n> damaging anything anyway.\n\nAgreed, I left it be.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Mar 2020 15:50:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Retiring support for pre-7.3 FK constraint triggers" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Having code which is untested and not exercised by developers (or users, if my\n> assumption holds), yet being reachable by SQL, runs the risk of introducing\n> subtle bugs. Is there a usecase for keeping it, or can/should it be removed in\n> 14?
That would still leave a lot of supported versions to upgrade to in case\n> there are users to need this.\n\nPushed. Looking at the original commit, I noticed one now-obsolete\ncomment that should also be removed, so I did that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Mar 2020 15:52:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Retiring support for pre-7.3 FK constraint triggers" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> When poking around here I realized that defGetStringList was also left unused.\n> It was added with the logical decoding code but the single callsite has since\n> been removed. As it's published in a header we might not want to remove it,\n> but I figured I'd bring it up as we're talking about removing code.\n\nHm. Kind of inclined to leave it, since somebody will probably need it\nagain someday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Mar 2020 15:57:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Retiring support for pre-7.3 FK constraint triggers" }, { "msg_contents": "> On 5 Mar 2020, at 21:52, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> Having code which is untested and not exercised by developers (or users, if my\n>> assumption holds), yet being reachable by SQL, runs the risk of introducing\n>> subtle bugs. Is there a usecase for keeping it, or can/should it be removed in\n>> 14? That would still leave a lot of supported versions to upgrade to in case\n>> there are users to need this.\n> \n> Pushed.
Looking at the original commit, I noticed one now-obsolete\n> comment that should also be removed, so I did that.\n\nThanks, I was looking around but totally missed that comment.\n\ncheers ./daniel\n\n", "msg_date": "Fri, 6 Mar 2020 00:13:41 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Retiring support for pre-7.3 FK constraint triggers" } ]
[ { "msg_contents": "Hi,\n\nWe discussed the $SUBJECT six years ago at the following thread.\nhttps://postgr.es/m/CAHGQGwGYkF+CvpOMdxaO=+aNAzc1Oo9O4LqWo50MxpvFj+0VOw@mail.gmail.com\n\nSeems our consensus at that discussion was to leave a fallback\npromotion for a release or two for debugging purpose or as\nan emergency method because fast promotion might have\nsome issues, and then to remove it later. Now, more than six years\nhave already passed since that discussion. Is there still\nany reason to keep a fallback promotion?
If nothing, I'd like to\n> drop it from v13.\n\nSeems reasonable, but it would be better if people proposed these\nkinds of changes closer to the beginning of the release cycle rather\nthan in the crush at the end.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 5 Mar 2020 09:40:54 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should we remove a fallback promotion? take 2" }, { "msg_contents": "On Thu, Mar 05, 2020 at 09:40:54AM -0500, Robert Haas wrote:\n> Seems reasonable, but it would be better if people proposed these\n> kinds of changes closer to the beginning of the release cycle rather\n> than in the crush at the end.\n\n+1, to both points.\n--\nMichael", "msg_date": "Fri, 6 Mar 2020 10:40:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Should we remove a fallback promotion? take 2" }, { "msg_contents": "\n\nOn 2020/03/06 10:40, Michael Paquier wrote:\n> On Thu, Mar 05, 2020 at 09:40:54AM -0500, Robert Haas wrote:\n>> Seems reasonable, but it would be better if people proposed these\n>> kinds of changes closer to the beginning of the release cycle rather\n>> than in the crush at the end.\n> \n> +1, to both points.\n\nOk, I'm fine to do that in v14.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Fri, 6 Mar 2020 22:22:20 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Should we remove a fallback promotion? 
take 2" }, { "msg_contents": "On 2020-Mar-06, Michael Paquier wrote:\n\n> On Thu, Mar 05, 2020 at 09:40:54AM -0500, Robert Haas wrote:\n> > Seems reasonable, but it would be better if people proposed these\n> > kinds of changes closer to the beginning of the release cycle rather\n> > than in the crush at the end.\n> \n> +1, to both points.\n\nWhy? Are you saying that there's some actual risk of breaking\nsomething? We're not even near beta or feature freeze yet.\n\nI'm not seeing the reason for the \"please propose this sooner in the\ncycle\" argument. It has already been proposed sooner -- seven years\nsooner. We're not waiting for users to complain anymore; clearly nobody\ncared.\n\nI think dragging things forever serves no purpose.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 6 Mar 2020 16:33:18 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Should we remove a fallback promotion? take 2" }, { "msg_contents": "Hi,\n\nOn 2020-03-06 16:33:18 -0300, Alvaro Herrera wrote:\n> On 2020-Mar-06, Michael Paquier wrote:\n> > On Thu, Mar 05, 2020 at 09:40:54AM -0500, Robert Haas wrote:\n> > > Seems reasonable, but it would be better if people proposed these\n> > > kinds of changes closer to the beginning of the release cycle rather\n> > > than in the crush at the end.\n> > \n> > +1, to both points.\n> \n> Why? Are you saying that there's some actual risk of breaking\n> something? We're not even near beta or feature freeze yet.\n> \n> I'm not seeing the reason for the \"please propose this sooner in the\n> cycle\" argument. It has already been proposed sooner -- seven years\n> sooner. We're not waiting for users to complain anymore; clearly nobody\n> cared.\n\nYea. 
There are changes that are so invasive that it's useful to go very\nearly, but in this case I'm not seeing it?\n\n+1 for removing non-fast promotions.\n\nFWIW, I find \"fallback promotion\" a confusing description.\n\n\nBtw, I'd really like to make the crash recovery environment more like\nthe replication environment. I.e. have checkpointer, bgwriter running,\nand have an 'end-of-recovery' record instead of a checkpoint at the end.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Mar 2020 14:56:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Should we remove a fallback promotion? take 2" }, { "msg_contents": "On 2020/03/10 6:56, Andres Freund wrote:\n> Hi,\n> \n> On 2020-03-06 16:33:18 -0300, Alvaro Herrera wrote:\n>> On 2020-Mar-06, Michael Paquier wrote:\n>>> On Thu, Mar 05, 2020 at 09:40:54AM -0500, Robert Haas wrote:\n>>>> Seems reasonable, but it would be better if people proposed these\n>>>> kinds of changes closer to the beginning of the release cycle rather\n>>>> than in the crush at the end.\n>>>\n>>> +1, to both points.\n>>\n>> Why? Are you saying that there's some actual risk of breaking\n>> something? We're not even near beta or feature freeze yet.\n>>\n>> I'm not seeing the reason for the \"please propose this sooner in the\n>> cycle\" argument. It has already been proposed sooner -- seven years\n>> sooner. We're not waiting for users to complain anymore; clearly nobody\n>> cared.\n> \n> Yea. There are changes that are so invasive that it's useful to go very\n> early, but in this case I'm not seeing it?\n> \n> +1 for removing non-fast promotions.\n\nPatch attached. 
I will add this into the first CF for v14.\n\n> FWIW, I find \"fallback promotion\" a confusing description.\n\nYeah, so I changed the subject.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 20 Apr 2020 15:26:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Remove non-fast promotion Re: Should we remove a fallback promotion?\n take 2" }, { "msg_contents": "On Mon, Apr 20, 2020 at 03:26:16PM +0900, Fujii Masao wrote:\n> Patch attached. I will add this into the first CF for v14.\n\nThanks!\n\n> - if (IsPromoteSignaled())\n> + /*\n> + * In 9.1 and 9.2 the postmaster unlinked the promote file inside the\n> + * signal handler. It now leaves the file in place and lets the\n> + * Startup process do the unlink.\n> + */\n> + if (IsPromoteSignaled() && stat(PROMOTE_SIGNAL_FILE, &stat_buf) == 0)\n> {\n> - /*\n> - * In 9.1 and 9.2 the postmaster unlinked the promote file inside the\n> - * signal handler. It now leaves the file in place and lets the\n> - * Startup process do the unlink. This allows Startup to know whether\n> - * it should create a full checkpoint before starting up (fallback\n> - * mode). Fast promotion takes precedence.\n> - */\n> - if (stat(PROMOTE_SIGNAL_FILE, &stat_buf) == 0)\n> - {\n> - unlink(PROMOTE_SIGNAL_FILE);\n> - unlink(FALLBACK_PROMOTE_SIGNAL_FILE);\n> - fast_promote = true;\n> - }\n> - else if (stat(FALLBACK_PROMOTE_SIGNAL_FILE, &stat_buf) == 0)\n> - {\n> - unlink(FALLBACK_PROMOTE_SIGNAL_FILE);\n> - fast_promote = false;\n> - }\n> -\n> ereport(LOG, (errmsg(\"received promote request\")));\n> -\n> + unlink(PROMOTE_SIGNAL_FILE);\n\nOn HEAD, this code means that it is possible to end recovery just by\nsending SIGUSR2 to the startup process. 
With your patch, this code\nnow means that in order to finish recovery you need to send SIGUSR2 to\nthe startup process *and* to create the promote signal file. Is that\nreally what you want?\n--\nMichael", "msg_date": "Tue, 21 Apr 2020 10:59:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "\n\nOn 2020/04/21 10:59, Michael Paquier wrote:\n> On Mon, Apr 20, 2020 at 03:26:16PM +0900, Fujii Masao wrote:\n>> Patch attached. I will add this into the first CF for v14.\n> \n> Thanks!\n> \n>> - if (IsPromoteSignaled())\n>> + /*\n>> + * In 9.1 and 9.2 the postmaster unlinked the promote file inside the\n>> + * signal handler. It now leaves the file in place and lets the\n>> + * Startup process do the unlink.\n>> + */\n>> + if (IsPromoteSignaled() && stat(PROMOTE_SIGNAL_FILE, &stat_buf) == 0)\n>> {\n>> - /*\n>> - * In 9.1 and 9.2 the postmaster unlinked the promote file inside the\n>> - * signal handler. It now leaves the file in place and lets the\n>> - * Startup process do the unlink. This allows Startup to know whether\n>> - * it should create a full checkpoint before starting up (fallback\n>> - * mode). 
Fast promotion takes precedence.\n>> - */\n>> - if (stat(PROMOTE_SIGNAL_FILE, &stat_buf) == 0)\n>> - {\n>> - unlink(PROMOTE_SIGNAL_FILE);\n>> - unlink(FALLBACK_PROMOTE_SIGNAL_FILE);\n>> - fast_promote = true;\n>> - }\n>> - else if (stat(FALLBACK_PROMOTE_SIGNAL_FILE, &stat_buf) == 0)\n>> - {\n>> - unlink(FALLBACK_PROMOTE_SIGNAL_FILE);\n>> - fast_promote = false;\n>> - }\n>> -\n>> ereport(LOG, (errmsg(\"received promote request\")));\n>> -\n>> + unlink(PROMOTE_SIGNAL_FILE);\n\nThanks for reviewing the patch!\n\n> On HEAD, this code means that it is possible to end recovery just by\n> sending SIGUSR2 to the startup process.\n\nYes, in this case, non-fast promotion is triggered.\n\n> With your patch, this code\n> now means that in order to finish recovery you need to send SIGUSR2 to\n> the startup process *and* to create the promote signal file.\n\nYes, but isn't this the same as the way to trigger fast promotion in HEAD?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 21 Apr 2020 14:27:20 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? 
take 2" }, { "msg_contents": "On Tue, Apr 21, 2020 at 02:27:20PM +0900, Fujii Masao wrote:\n> On 2020/04/21 10:59, Michael Paquier wrote:\n>> With your patch, this code\n>> now means that in order to finish recovery you need to send SIGUSR2 to\n>> the startup process *and* to create the promote signal file.\n> \n> Yes, but isn't this the same as the way to trigger fast promotion in HEAD?\n\nYep, but my point is that some users who have been relying only on\nSIGUSR2 sent to the startup process for a promotion may be surprised\nto see that doing the same operation does not trigger a promotion\nanymore.\n--\nMichael", "msg_date": "Tue, 21 Apr 2020 14:54:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "\n\nOn 2020/04/21 14:54, Michael Paquier wrote:\n> On Tue, Apr 21, 2020 at 02:27:20PM +0900, Fujii Masao wrote:\n>> On 2020/04/21 10:59, Michael Paquier wrote:\n>>> With your patch, this code\n>>> now means that in order to finish recovery you need to send SIGUSR2 to\n>>> the startup process *and* to create the promote signal file.\n>>\n>> Yes, but isn't this the same as the way to trigger fast promotion in HEAD?\n> \n> Yep, but my point is that some users who have been relying only on\n> SIGUSR2 sent to the startup process for a promotion may be surprised\n> to see that doing the same operation does not trigger a promotion\n> anymore.\n\nYeah, but that's not documented. So I don't think that we need to keep\nthe backward-compatibility for that.\n\nAlso in that case, non-fast promotion is triggered. Since my patch\ntries to remove non-fast promotion, it's intentional to prevent them\nfrom doing that. 
But you think that we should not drop that because\nthere are still some users for that?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 21 Apr 2020 15:29:54 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "On Tue, Apr 21, 2020 at 03:29:54PM +0900, Fujii Masao wrote:\n> Yeah, but that's not documented. So I don't think that we need to keep\n> the backward-compatibility for that.\n> \n> Also in that case, non-fast promotion is triggered. Since my patch\n> tries to remove non-fast promotion, it's intentional to prevent them\n> from doing that. But you think that we should not drop that because\n> there are still some users for that?\n\nIt would be good to ask around to folks maintaining HA solutions about\nthat change at least, as there could be a point in still letting\npromotion to happen in this case, but switch silently to the fast\npath.\n--\nMichael", "msg_date": "Tue, 21 Apr 2020 15:36:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "\n\nOn 2020/04/21 15:36, Michael Paquier wrote:\n> On Tue, Apr 21, 2020 at 03:29:54PM +0900, Fujii Masao wrote:\n>> Yeah, but that's not documented. So I don't think that we need to keep\n>> the backward-compatibility for that.\n>>\n>> Also in that case, non-fast promotion is triggered. Since my patch\n>> tries to remove non-fast promotion, it's intentional to prevent them\n>> from doing that. 
But you think that we should not drop that because\n>> there are still some users for that?\n> \n> It would be good to ask around to folks maintaining HA solutions about\n> that change at least, as there could be a point in still letting\n> promotion to happen in this case, but switch silently to the fast\n> path.\n\n*If* there are some HA solutions doing that, IMO that they should be changed\nso that the documented official way to trigger promotion (i.e., pg_ctl promote,\npg_promote or trigger_file) is used instead.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 21 Apr 2020 15:48:02 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "At Tue, 21 Apr 2020 15:48:02 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/04/21 15:36, Michael Paquier wrote:\n> > On Tue, Apr 21, 2020 at 03:29:54PM +0900, Fujii Masao wrote:\n> >> Yeah, but that's not documented. So I don't think that we need to keep\n> >> the backward-compatibility for that.\n> >>\n> >> Also in that case, non-fast promotion is triggered. Since my patch\n> >> tries to remove non-fast promotion, it's intentional to prevent them\n> >> from doing that. 
But you think that we should not drop that because\n> >> there are still some users for that?\n> > It would be good to ask around to folks maintaining HA solutions about\n> > that change at least, as there could be a point in still letting\n> > promotion to happen in this case, but switch silently to the fast\n> > path.\n> \n> *If* there are some HA solutions doing that, IMO that they should be\n> *changed\n> so that the documented official way to trigger promotion (i.e., pg_ctl\n> promote,\n> pg_promote or trigger_file) is used instead.\n\nThe difference between fast and non-fast promotions is far trivial\nthan the difference between promotion happens or not. I think\neveryone cares about the new version actually promotes by the steps\nthey have been doing, but few of them even notices the difference\nbetween the fast and non-fast. If those who are using non-fast\npromotion for a certain reason should notice the change of promotion\nbehavior in release notes.\n\nThis is similar to the change of the default waiting behvaior of\npg_ctl at PG10.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 21 Apr 2020 16:58:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "At Mon, 20 Apr 2020 15:26:16 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Patch attached. I will add this into the first CF for v14.\n\n-\t\t\tif (!fast_promoted)\n+\t\t\tif (!promoted)\n \t\t\t\tRequestCheckpoint(CHECKPOINT_END_OF_RECOVERY |\n \t\t\t\t\t\t\t\t CHECKPOINT_IMMEDIATE |\n \t\t\t\t\t\t\t\t CHECKPOINT_WAIT);\n\nIf we don't find the checkpoint record just before, we don't insert\nEnd-Of-Recovery record then run an immediate chekpoint. 
I think if we\nnuke the non-fast promotion, shouldn't we insert the EOR record even\nin that case?\n\nOr, as Andres suggested upthread, do we always insert it?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 21 Apr 2020 17:15:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "\n\nOn 2020/04/21 17:15, Kyotaro Horiguchi wrote:\n> At Mon, 20 Apr 2020 15:26:16 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> Patch attached. I will add this into the first CF for v14.\n> \n> -\t\t\tif (!fast_promoted)\n> +\t\t\tif (!promoted)\n> \t\t\t\tRequestCheckpoint(CHECKPOINT_END_OF_RECOVERY |\n> \t\t\t\t\t\t\t\t CHECKPOINT_IMMEDIATE |\n> \t\t\t\t\t\t\t\t CHECKPOINT_WAIT);\n> \n> If we don't find the checkpoint record just before, we don't insert\n> End-Of-Recovery record then run an immediate chekpoint. I think if we\n> nuke the non-fast promotion, shouldn't we insert the EOR record even\n> in that case?\n\nI'm not sure if that's safe. What if the server crashes before the checkpoint\ncompletes in that case? Since the last checkpoint record is not available,\nthe subsequent crash recovery will fail. This would lead to that the server\nwill never start up. Right? Currently ISTM that end-of-recovery-checkpoint\nis executed to avoid such trouble in that case.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 21 Apr 2020 22:08:56 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? 
take 2" }, { "msg_contents": "Hello,\n\nOn Tue, 21 Apr 2020 15:36:22 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Apr 21, 2020 at 03:29:54PM +0900, Fujii Masao wrote:\n> > Yeah, but that's not documented. So I don't think that we need to keep\n> > the backward-compatibility for that.\n> > \n> > Also in that case, non-fast promotion is triggered. Since my patch\n> > tries to remove non-fast promotion, it's intentional to prevent them\n> > from doing that. But you think that we should not drop that because\n> > there are still some users for that? \n> \n> It would be good to ask around to folks maintaining HA solutions about\n> that change at least, as there could be a point in still letting\n> promotion to happen in this case, but switch silently to the fast\n> path.\n\nFWIW, PAF relies on pg_ctl promote. No need for non-fast promotion.\n\nRegards,\n\n\n", "msg_date": "Tue, 21 Apr 2020 23:19:33 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "On 2020-Apr-21, Jehan-Guillaume de Rorthais wrote:\n\n> On Tue, 21 Apr 2020 15:36:22 +0900\n> Michael Paquier <michael@paquier.xyz> wrote:\n\n> > > Also in that case, non-fast promotion is triggered. Since my patch\n> > > tries to remove non-fast promotion, it's intentional to prevent them\n> > > from doing that. But you think that we should not drop that because\n> > > there are still some users for that? \n> > \n> > It would be good to ask around to folks maintaining HA solutions about\n> > that change at least, as there could be a point in still letting\n> > promotion to happen in this case, but switch silently to the fast\n> > path.\n> \n> FWIW, PAF relies on pg_ctl promote. No need for non-fast promotion.\n\nAFAICT repmgr uses 'pg_ctl promote', and has since version 3.0 (released\nin mid 2015). 
It was only 3.3.2 (mid 2017) that supported Postgres 10,\nso it seems fairly safe to assume that the removal won't be a problem.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 21 Apr 2020 17:53:54 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "On 2020-Apr-20, Fujii Masao wrote:\n\n> +\t/*\n> +\t * In 9.1 and 9.2 the postmaster unlinked the promote file inside the\n> +\t * signal handler. It now leaves the file in place and lets the\n> +\t * Startup process do the unlink.\n> +\t */\n> +\tif (IsPromoteSignaled() && stat(PROMOTE_SIGNAL_FILE, &stat_buf) == 0)\n> \t{\n> -\t\t/*\n> -\t\t * In 9.1 and 9.2 the postmaster unlinked the promote file inside the\n> -\t\t * signal handler. It now leaves the file in place and lets the\n> -\t\t * Startup process do the unlink. This allows Startup to know whether\n> -\t\t * it should create a full checkpoint before starting up (fallback\n> -\t\t * mode). Fast promotion takes precedence.\n> -\t\t */\n\nIt seems pointless to leave a very old comment that documents what the\ncode no longer does. I thikn it would be better to reword it indicating\nwhat the code does do, ie. something like \"Leave the signal file in\nplace; it will be removed by the startup process when ...\"\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 21 Apr 2020 17:57:26 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? 
take 2" }, { "msg_contents": "At Tue, 21 Apr 2020 22:08:56 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/04/21 17:15, Kyotaro Horiguchi wrote:\n> > At Mon, 20 Apr 2020 15:26:16 +0900, Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote in\n> >> Patch attached. I will add this into the first CF for v14.\n> > -\t\t\tif (!fast_promoted)\n> > +\t\t\tif (!promoted)\n> > \t\t\t\tRequestCheckpoint(CHECKPOINT_END_OF_RECOVERY |\n> > \t\t\t\t\t\t\t\t CHECKPOINT_IMMEDIATE |\n> > \t\t\t\t\t\t\t\t CHECKPOINT_WAIT);\n> > If we don't find the checkpoint record just before, we don't insert\n> > End-Of-Recovery record then run an immediate chekpoint. I think if we\n> > nuke the non-fast promotion, shouldn't we insert the EOR record even\n> > in that case?\n> \n> I'm not sure if that's safe. What if the server crashes before the\n> checkpoint\n> completes in that case? Since the last checkpoint record is not\n> available,\n> the subsequent crash recovery will fail. This would lead to that the\n> server\n> will never start up. Right? Currently ISTM that\n\nYes, that's right.\n\n> end-of-recovery-checkpoint\n> is executed to avoid such trouble in that case.\n\nI meant that we always have EOR at the end of recovery. So in the\nmissing latest checkpoint (and crash recovery) case, we insert EOR\nafter the immediate checkpoint. That also means we no longer set\nCHECKPOINT_END_OF_RECOVERY to the checkpoint, too.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 22 Apr 2020 09:13:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? 
take 2" }, { "msg_contents": "On 2020/04/22 6:53, Alvaro Herrera wrote:\n> On 2020-Apr-21, Jehan-Guillaume de Rorthais wrote:\n> \n>> On Tue, 21 Apr 2020 15:36:22 +0900\n>> Michael Paquier <michael@paquier.xyz> wrote:\n> \n>>>> Also in that case, non-fast promotion is triggered. Since my patch\n>>>> tries to remove non-fast promotion, it's intentional to prevent them\n>>>> from doing that. But you think that we should not drop that because\n>>>> there are still some users for that?\n>>>\n>>> It would be good to ask around to folks maintaining HA solutions about\n>>> that change at least, as there could be a point in still letting\n>>> promotion to happen in this case, but switch silently to the fast\n>>> path.\n>>\n>> FWIW, PAF relies on pg_ctl promote. No need for non-fast promotion.\n> \n> AFAICT repmgr uses 'pg_ctl promote', and has since version 3.0 (released\n> in mid 2015). It was only 3.3.2 (mid 2017) that supported Postgres 10,\n> so it seems fairly safe to assume that the removal won't be a problem.\n\nCorrect, repmgr uses \"pg_ctl promote\" or pg_promote() (if available), and\nwon't be affected by this change.\n\n\nRegards\n\nIan Barwick\n\n\n-- \nIan Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Wed, 22 Apr 2020 10:28:07 +0900", "msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "At Wed, 22 Apr 2020 10:28:07 +0900, Ian Barwick <ian.barwick@2ndquadrant.com> wrote in \n> On 2020/04/22 6:53, Alvaro Herrera wrote:\n> > On 2020-Apr-21, Jehan-Guillaume de Rorthais wrote:\n> > \n> >> On Tue, 21 Apr 2020 15:36:22 +0900\n> >> Michael Paquier <michael@paquier.xyz> wrote:\n> > \n> >>>> Also in that case, non-fast promotion is triggered. 
Since my patch\n> >>>> tries to remove non-fast promotion, it's intentional to prevent them\n> >>>> from doing that. But you think that we should not drop that because\n> >>>> there are still some users for that?\n> >>>\n> >>> It would be good to ask around to folks maintaining HA solutions about\n> >>> that change at least, as there could be a point in still letting\n> >>> promotion to happen in this case, but switch silently to the fast\n> >>> path.\n> >>\n> >> FWIW, PAF relies on pg_ctl promote. No need for non-fast promotion.\n> > AFAICT repmgr uses 'pg_ctl promote', and has since version 3.0\n> > (released\n> > in mid 2015). It was only 3.3.2 (mid 2017) that supported Postgres\n> > 10,\n> > so it seems fairly safe to assume that the removal won't be a problem.\n> \n> Correct, repmgr uses \"pg_ctl promote\" or pg_promote() (if available),\n> and\n> won't be affected by this change.\n\nFor the record, the pgsql resource agent uses \"pg_ctl promote\" and\nworking with fast-promote. Auxiliary tools for it is assuming\nfast-promote.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 22 Apr 2020 10:53:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "On 2020/04/22 6:57, Alvaro Herrera wrote:\n> On 2020-Apr-20, Fujii Masao wrote:\n> \n>> +\t/*\n>> +\t * In 9.1 and 9.2 the postmaster unlinked the promote file inside the\n>> +\t * signal handler. It now leaves the file in place and lets the\n>> +\t * Startup process do the unlink.\n>> +\t */\n>> +\tif (IsPromoteSignaled() && stat(PROMOTE_SIGNAL_FILE, &stat_buf) == 0)\n>> \t{\n>> -\t\t/*\n>> -\t\t * In 9.1 and 9.2 the postmaster unlinked the promote file inside the\n>> -\t\t * signal handler. It now leaves the file in place and lets the\n>> -\t\t * Startup process do the unlink. 
This allows Startup to know whether\n>> -\t\t * it should create a full checkpoint before starting up (fallback\n>> -\t\t * mode). Fast promotion takes precedence.\n>> -\t\t */\n> \n> It seems pointless to leave a very old comment that documents what the\n> code no longer does. I thikn it would be better to reword it indicating\n> what the code does do, ie. something like \"Leave the signal file in\n> place; it will be removed by the startup process when ...\"\n\nAgreed. And, while reading the related code, I thought that it's more proper\nto place this comment in CheckPromoteSignal() rather than\nCheckForStandbyTrigger(). Because CheckPromoteSignal() actually does\nwhat the comment says, i.e., leaves the promote signal file in place and\nlets the startup process do the unlink.\n\nAttached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 22 Apr 2020 11:50:44 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "\n\nOn 2020/04/22 10:53, Kyotaro Horiguchi wrote:\n> At Wed, 22 Apr 2020 10:28:07 +0900, Ian Barwick <ian.barwick@2ndquadrant.com> wrote in\n>> On 2020/04/22 6:53, Alvaro Herrera wrote:\n>>> On 2020-Apr-21, Jehan-Guillaume de Rorthais wrote:\n>>>\n>>>> On Tue, 21 Apr 2020 15:36:22 +0900\n>>>> Michael Paquier <michael@paquier.xyz> wrote:\n>>>\n>>>>>> Also in that case, non-fast promotion is triggered. Since my patch\n>>>>>> tries to remove non-fast promotion, it's intentional to prevent them\n>>>>>> from doing that. 
But you think that we should not drop that because\n>>>>>> there are still some users for that?\n>>>>>\n>>>>> It would be good to ask around to folks maintaining HA solutions about\n>>>>> that change at least, as there could be a point in still letting\n>>>>> promotion to happen in this case, but switch silently to the fast\n>>>>> path.\n>>>>\n>>>> FWIW, PAF relies on pg_ctl promote. No need for non-fast promotion.\n>>> AFAICT repmgr uses 'pg_ctl promote', and has since version 3.0\n>>> (released\n>>> in mid 2015). It was only 3.3.2 (mid 2017) that supported Postgres\n>>> 10,\n>>> so it seems fairly safe to assume that the removal won't be a problem.\n>>\n>> Correct, repmgr uses \"pg_ctl promote\" or pg_promote() (if available),\n>> and\n>> won't be affected by this change.\n> \n> For the record, the pgsql resource agent uses \"pg_ctl promote\" and\n> working with fast-promote. Auxiliary tools for it is assuming\n> fast-promote.\n\nThanks all for checking whether the change affects each HA solution!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 22 Apr 2020 11:51:15 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "\n\nOn 2020/04/22 9:13, Kyotaro Horiguchi wrote:\n> At Tue, 21 Apr 2020 22:08:56 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2020/04/21 17:15, Kyotaro Horiguchi wrote:\n>>> At Mon, 20 Apr 2020 15:26:16 +0900, Fujii Masao\n>>> <masao.fujii@oss.nttdata.com> wrote in\n>>>> Patch attached. 
I will add this into the first CF for v14.\n>>> -\t\t\tif (!fast_promoted)\n>>> +\t\t\tif (!promoted)\n>>> \t\t\t\tRequestCheckpoint(CHECKPOINT_END_OF_RECOVERY |\n>>> \t\t\t\t\t\t\t\t CHECKPOINT_IMMEDIATE |\n>>> \t\t\t\t\t\t\t\t CHECKPOINT_WAIT);\n>>> If we don't find the checkpoint record just before, we don't insert\n>>> End-Of-Recovery record then run an immediate chekpoint. I think if we\n>>> nuke the non-fast promotion, shouldn't we insert the EOR record even\n>>> in that case?\n>>\n>> I'm not sure if that's safe. What if the server crashes before the\n>> checkpoint\n>> completes in that case? Since the last checkpoint record is not\n>> available,\n>> the subsequent crash recovery will fail. This would lead to that the\n>> server\n>> will never start up. Right? Currently ISTM that\n> \n> Yes, that's right.\n> \n>> end-of-recovery-checkpoint\n>> is executed to avoid such trouble in that case.\n> \n> I meant that we always have EOR at the end of recovery. So in the\n> missing latest checkpoint (and crash recovery) case, we insert EOR\n> after the immediate checkpoint. That also means we no longer set\n> CHECKPOINT_END_OF_RECOVERY to the checkpoint, too.\n\nCould you tell me what the benefit by this change is? Even with this change,\nthe server still needs to wait for the checkpoint to complete before\nbecoming the master and starting the service, unlike fast promotion. No?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 22 Apr 2020 11:51:42 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "At Wed, 22 Apr 2020 11:51:42 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > I meant that we always have EOR at the end of recovery. 
So in the\n> > missing latest checkpoint (and crash recovery) case, we insert EOR\n> > after the immediate checkpoint. That also means we no longer set\n> > CHECKPOINT_END_OF_RECOVERY to the checkpoint, too.\n> \n> Could you tell me what the benefit by this change is? Even with this\n> change,\n> the server still needs to wait for the checkpoint to complete before\n> becoming the master and starting the service, unlike fast\n> promotion. No?\n\nThere's no benefit of performance. It's just for simplicity by\nsignalling end-of-recovery in a unified way.\n\nLong ago, we had only non-fast promotion, which is marked by\nCHECKPOINT_END_OF_RECOVERY. When we introduced fast-promotion, it is\nmarked by the END_OF_RECOVERY record since checkpoint record is not\ninserted at the promotion time. However, we internally fall back to\nnon-fast promotion when we need to make a checkpoint immediately.\nIf we remove non-fast checkpoint, we don't need two means to signal\nend-of-recovery.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 22 Apr 2020 12:09:50 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "On Wed, 22 Apr 2020 11:51:15 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> On 2020/04/22 10:53, Kyotaro Horiguchi wrote:\n> [...] \n> [...] \n> [...] \n> [...] \n> [...] \n> [...] \n> [...] \n> [...] \n> [...] \n> [...] \n> [...] \n> \n> Thanks all for checking whether the change affects each HA solution!\n\nUnless I'm wrong, we don't have feedback from Patroni team.\n\nI did some quick grep and it seems to rely on \"pg_ctl promote\" as well.\nMoreover, the latest commit 80fbe9005 force a checkpoint right after the\npromote. 
So I suppose they don't use non-fast promote.\n\nI CC'ed Alexander Kukushkin to this discussion, so at least he is aware of\nthis topic.\n\nRegards,\n\n\n", "msg_date": "Wed, 22 Apr 2020 20:56:41 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "\n\nOn 2020/04/23 3:56, Jehan-Guillaume de Rorthais wrote:\n> On Wed, 22 Apr 2020 11:51:15 +0900\n> Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n>> On 2020/04/22 10:53, Kyotaro Horiguchi wrote:\n>> [...]\n>> [...]\n>> [...]\n>> [...]\n>> [...]\n>> [...]\n>> [...]\n>> [...]\n>> [...]\n>> [...]\n>> [...]\n>>\n>> Thanks all for checking whether the change affects each HA solution!\n> \n> Unless I'm wrong, we don't have feedback from Patroni team.\n> \n> I did some quick grep and it seems to rely on \"pg_ctl promote\" as well.\n> Moreover, the latest commit 80fbe9005 force a checkpoint right after the\n> promote. So I suppose they don't use non-fast promote.\n\nThanks for checking that!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 23 Apr 2020 11:34:45 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Remove non-fast promotion Re: Should we remove a fallback\n promotion? take 2" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nI've applied the v2 patch on the master branch. There some hunks, but the patch got applied. So, I ran make installcheck-world and everything looks fine to me with this patch. 
Though, I do have a few suggestions in general:\r\n\r\n(1) I see two functions being used (a) CheckPromoteSignal and (b) IsPromoteSignaled in the code. Should these be combined into a single function and perhaps check for \"promote_signaled\" and the \"PROMOTE_SIGNAL_FILE\". Not sure if doing this will break \"sigusr1_handler\" in postmaster.c though.\r\n\r\n(2) CheckPromoteSignal is checking for \"PROMOTE_SIGNAL_FILE\" file. So, perhaps, rather than calling stat on \"PROMOTE_SIGNAL_FILE\" in if statements, I would suggest to use CheckPromoteSignal function instead as it does nothing but stat on \"PROMOTE_SIGNAL_FILE\" (after applying your patch).\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Tue, 02 Jun 2020 18:38:18 +0000", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should we remove a fallback promotion? take 2" }, { "msg_contents": "On 2020/06/03 3:38, Hamid Akhtar wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: not tested\n> Spec compliant: not tested\n> Documentation: not tested\n> \n> I've applied the v2 patch on the master branch. There some hunks, but the patch got applied. So, I ran make installcheck-world and everything looks fine to me with this patch. Though, I do have a few suggestions in general:\n\nThanks for the test and review!\n\n> (1) I see two functions being used (a) CheckPromoteSignal and (b) IsPromoteSignaled in the code. Should these be combined into a single function and perhaps check for \"promote_signaled\" and the \"PROMOTE_SIGNAL_FILE\". Not sure if doing this will break \"sigusr1_handler\" in postmaster.c though.\n\nI don't think we can do that simply. CheckPromoteSignal() can be called by\nboth postmaster and the startup process. 
OTOH, IsPromoteSignaled()\naccesses the flag that can be set only in the startup process' signal handler,\ni.e., it's intended to be called only by the startup process.\n\n> (2) CheckPromoteSignal is checking for \"PROMOTE_SIGNAL_FILE\" file. So, perhaps, rather than calling stat on \"PROMOTE_SIGNAL_FILE\" in if statements, I would suggest to use CheckPromoteSignal function instead as it does nothing but stat on \"PROMOTE_SIGNAL_FILE\" (after applying your patch).\n\nYes, that's good idea. Attached is the updated version of the patch.\nI replaced that stat() with CheckPromoteSignal(). Also I replaced\nunlink(PROMOTE_SIGNAL_FILE) with RemovePromoteSignalFiles().\n\n> The new status of this patch is: Waiting on Author\n\nI will change the status back to Needs Review.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 3 Jun 2020 09:43:17 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Should we remove a fallback promotion? take 2" }, { "msg_contents": "At Wed, 3 Jun 2020 09:43:17 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> I will change the status back to Needs Review.\n\n record = ReadCheckpointRecord(xlogreader, checkPointLoc, 1, false);\n if (record != NULL)\n {\n- fast_promoted = true;\n+ promoted = true;\n\nEven if we missed the last checkpoint record, we don't give up\npromotion and continue fall-back promotion but the variable \"promoted\"\nstays false. That is confusiong.\n\nHow about changing it to fallback_promotion, or some names with more\nbehavior-specific name like immediate_checkpoint_needed?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 03 Jun 2020 12:06:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should we remove a fallback promotion? 
take 2" }, { "msg_contents": "\n\nOn 2020/06/03 12:06, Kyotaro Horiguchi wrote:\n> At Wed, 3 Jun 2020 09:43:17 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> I will change the status back to Needs Review.\n\nThanks for the review!\n\n> record = ReadCheckpointRecord(xlogreader, checkPointLoc, 1, false);\n> if (record != NULL)\n> {\n> - fast_promoted = true;\n> + promoted = true;\n> \n> Even if we missed the last checkpoint record, we don't give up\n> promotion and continue fall-back promotion but the variable \"promoted\"\n> stays false. That is confusiong.\n> \n> How about changing it to fallback_promotion, or some names with more\n> behavior-specific name like immediate_checkpoint_needed?\n\n\nI like doEndOfRecoveryCkpt or something, but I have no strong opinion\nabout that flag naming. So I'm ok with immediate_checkpoint_needed\nif others also like that, too.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 3 Jun 2020 19:20:47 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Should we remove a fallback promotion? take 2" }, { "msg_contents": "Applying the patch to the current master branch throws 9 hunks. AFAICT, the\npatch is good otherwise.\n\nOn Wed, Jun 3, 2020 at 3:20 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2020/06/03 12:06, Kyotaro Horiguchi wrote:\n> > At Wed, 3 Jun 2020 09:43:17 +0900, Fujii Masao <\n> masao.fujii@oss.nttdata.com> wrote in\n> >> I will change the status back to Needs Review.\n>\n> Thanks for the review!\n>\n> > record = ReadCheckpointRecord(xlogreader, checkPointLoc, 1,\n> false);\n> > if (record != NULL)\n> > {\n> > - fast_promoted = true;\n> > + promoted = true;\n> >\n> > Even if we missed the last checkpoint record, we don't give up\n> > promotion and continue fall-back promotion but the variable \"promoted\"\n> > stays false. 
That is confusiong.\n> >\n> > How about changing it to fallback_promotion, or some names with more\n> > behavior-specific name like immediate_checkpoint_needed?\n>\n>\n> I like doEndOfRecoveryCkpt or something, but I have no strong opinion\n> about that flag naming. So I'm ok with immediate_checkpoint_needed\n> if others also like that, too.\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus\n\nApplying the patch to the current master branch throws 9 hunks. AFAICT, the patch is good otherwise.On Wed, Jun 3, 2020 at 3:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\nOn 2020/06/03 12:06, Kyotaro Horiguchi wrote:\n> At Wed, 3 Jun 2020 09:43:17 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> I will change the status back to Needs Review.\n\nThanks for the review!\n\n>           record = ReadCheckpointRecord(xlogreader, checkPointLoc, 1, false);\n>           if (record != NULL)\n>           {\n> -          fast_promoted = true;\n> +          promoted = true;\n> \n> Even if we missed the last checkpoint record, we don't give up\n> promotion and continue fall-back promotion but the variable \"promoted\"\n> stays false. That is confusiong.\n> \n> How about changing it to fallback_promotion, or some names with more\n> behavior-specific name like immediate_checkpoint_needed?\n\n\nI like doEndOfRecoveryCkpt or something, but I have no strong opinion\nabout that flag naming. 
So I'm ok with immediate_checkpoint_needed\nif others also like that, too.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n-- Highgo Software (Canada/China/Pakistan)URL : www.highgo.caADDR: 10318 WHALLEY BLVD, Surrey, BCCELL:+923335449950  EMAIL: mailto:hamid.akhtar@highgo.caSKYPE: engineeredvirus", "msg_date": "Mon, 27 Jul 2020 13:53:28 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should we remove a fallback promotion? take 2" }, { "msg_contents": "\n\nOn 2020/07/27 17:53, Hamid Akhtar wrote:\n> Applying the patch to the current master branch throws 9 hunks. AFAICT, the patch is good otherwise.\n\nSo you think that the patch can be marked as Ready for Committer?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 28 Jul 2020 01:31:00 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Should we remove a fallback promotion? take 2" }, { "msg_contents": "There have been no real objections on this patch and it seems to work. So,\nIMHO, the only thing that needs to be done is perhaps update the patch so\nthat it applies clearly on the master branch.\n\nOn Mon, Jul 27, 2020 at 9:31 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2020/07/27 17:53, Hamid Akhtar wrote:\n> > Applying the patch to the current master branch throws 9 hunks. 
AFAICT,\n> the patch is good otherwise.\n>\n> So you think that the patch can be marked as Ready for Committer?\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nCELL:+923335449950 EMAIL: mailto:hamid.akhtar@highgo.ca\nSKYPE: engineeredvirus\n\nThere have been no real objections on this patch and it seems to work. So, IMHO, the only thing that needs to be done is perhaps update the patch so that it applies clearly on the master branch.On Mon, Jul 27, 2020 at 9:31 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\nOn 2020/07/27 17:53, Hamid Akhtar wrote:\n> Applying the patch to the current master branch throws 9 hunks. AFAICT, the patch is good otherwise.\n\nSo you think that the patch can be marked as Ready for Committer?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n-- Highgo Software (Canada/China/Pakistan)URL : www.highgo.caADDR: 10318 WHALLEY BLVD, Surrey, BCCELL:+923335449950  EMAIL: mailto:hamid.akhtar@highgo.caSKYPE: engineeredvirus", "msg_date": "Tue, 28 Jul 2020 16:35:07 +0500", "msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should we remove a fallback promotion? take 2" }, { "msg_contents": "\n\nOn 2020/07/28 20:35, Hamid Akhtar wrote:\n> There have been no real objections on this patch and it seems to work.\n\nThanks! So I pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 29 Jul 2020 21:26:48 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Should we remove a fallback promotion? take 2" } ]
[ { "msg_contents": "What's the reason to use pg_atomic...read_...() and pg_atomic...write_...()\nfunctions in localbuf.c?\n\nIt looks like there was an intention not to use them\n\nhttps://www.postgresql.org/message-id/CAPpHfdtfr3Aj7xJonXaKR8iY2p8uXOQ%2Be4BMpMDAM_5R4OcaDA%40mail.gmail.com\n\nbut the following discussion does not explain the decision to use them.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Thu, 05 Mar 2020 18:21:55 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Atomics in localbuf.c" }, { "msg_contents": "Hi\n\nOn March 5, 2020 9:21:55 AM PST, Antonin Houska <ah@cybertec.at> wrote:\n>What's the reason to use pg_atomic...read_...() and\n>pg_atomic...write_...()\n>functions in localbuf.c?\n>\n>It looks like there was an intention not to use them\n>\n>https://www.postgresql.org/message-id/CAPpHfdtfr3Aj7xJonXaKR8iY2p8uXOQ%2Be4BMpMDAM_5R4OcaDA%40mail.gmail.com\n>\n>but the following discussion does not explain the decision to use them.\n\nRead/write don't trigger locked/atomic operations. They just guarantee that you're not gonna read/write a torn value. Or a cached one. Since local/shared buffers share the buffer header definition, we still have to use proper functions to access the atomic variables.\n\nRegards,\n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Thu, 05 Mar 2020 10:02:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Atomics in localbuf.c" }, { "msg_contents": "Andres Freund <andres@anarazel.de> wrote:\n\n> On March 5, 2020 9:21:55 AM PST, Antonin Houska <ah@cybertec.at> wrote:\n> >What's the reason to use pg_atomic...read_...() and\n> >pg_atomic...write_...()\n> >functions in localbuf.c?\n> >\n> >It looks like there was an intention not to use them\n> >\n> >https://www.postgresql.org/message-id/CAPpHfdtfr3Aj7xJonXaKR8iY2p8uXOQ%2Be4BMpMDAM_5R4OcaDA%40mail.gmail.com\n> >\n> >but the following discussion does not explain the decision to use them.\n> \n> Read/write don't trigger locked/atomic operations. They just guarantee that\n> you're not gonna read/write a torn value. Or a cached one. Since\n> local/shared buffers share the buffer header definition, we still have to\n> use proper functions to access the atomic variables.\n\nSure, the atomic operations are necessary for shared buffers, but I still\ndon't understand why they are needed for *local* buffers - local buffers their\nheaders (BufferDesc) in process local memory, so there should be no concerns\nabout concurrent access.\n\nAnother thing that makes me confused is this comment in InitLocalBuffers():\n\n\t/*\n\t * Intentionally do not initialize the buffer's atomic variable\n\t * (besides zeroing the underlying memory above). That way we get\n\t * errors on platforms without atomics, if somebody (re-)introduces\n\t * atomic operations for local buffers.\n\t */\n\nThat sounds like there was an intention not to use the atomic functions in\nlocalbuf.c, but eventually they ended up there. 
Do I still miss something?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Thu, 05 Mar 2020 21:42:06 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Atomics in localbuf.c" }, { "msg_contents": "Hi\n\nOn March 5, 2020 12:42:06 PM PST, Antonin Houska <ah@cybertec.at> wrote:\n>Andres Freund <andres@anarazel.de> wrote:\n>\n>> On March 5, 2020 9:21:55 AM PST, Antonin Houska <ah@cybertec.at>\n>wrote:\n>> >What's the reason to use pg_atomic...read_...() and\n>> >pg_atomic...write_...()\n>> >functions in localbuf.c?\n>> >\n>> >It looks like there was an intention not to use them\n>> >\n>>\n>>https://www.postgresql.org/message-id/CAPpHfdtfr3Aj7xJonXaKR8iY2p8uXOQ%2Be4BMpMDAM_5R4OcaDA%40mail.gmail.com\n>> >\n>> >but the following discussion does not explain the decision to use\n>them.\n>> \n>> Read/write don't trigger locked/atomic operations. They just\n>guarantee that\n>> you're not gonna read/write a torn value. Or a cached one. Since\n>> local/shared buffers share the buffer header definition, we still\n>have to\n>> use proper functions to access the atomic variables.\n>\n>Sure, the atomic operations are necessary for shared buffers, but I\n>still\n>don't understand why they are needed for *local* buffers - local\n>buffers their\n>headers (BufferDesc) in process local memory, so there should be no\n>concerns\n>about concurrent access.\n>\n>Another thing that makes me confused is this comment in\n>InitLocalBuffers():\n>\n>\t/*\n>\t * Intentionally do not initialize the buffer's atomic variable\n>\t * (besides zeroing the underlying memory above). That way we get\n>\t * errors on platforms without atomics, if somebody (re-)introduces\n>\t * atomic operations for local buffers.\n>\t */\n>\n>That sounds like there was an intention not to use the atomic functions\n>in\n>localbuf.c, but eventually they ended up there. 
Do I still miss\n>something?\n\nAgain, the read/write functions do not imply atomic instructions.\n\nAnts\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 05 Mar 2020 12:59:55 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Atomics in localbuf.c" }, { "msg_contents": "Andres Freund <andres@anarazel.de> wrote:\n\n> On March 5, 2020 12:42:06 PM PST, Antonin Houska <ah@cybertec.at> wrote:\n> >Andres Freund <andres@anarazel.de> wrote:\n> >\n> >> On March 5, 2020 9:21:55 AM PST, Antonin Houska <ah@cybertec.at>\n> >wrote:\n> >> >What's the reason to use pg_atomic...read_...() and\n> >> >pg_atomic...write_...()\n> >> >functions in localbuf.c?\n> >> >\n> >> >It looks like there was an intention not to use them\n> >> >\n> >>\n> >>https://www.postgresql.org/message-id/CAPpHfdtfr3Aj7xJonXaKR8iY2p8uXOQ%2Be4BMpMDAM_5R4OcaDA%40mail.gmail.com\n> >> >\n> >> >but the following discussion does not explain the decision to use\n> >them.\n> >> \n> >> Read/write don't trigger locked/atomic operations. They just\n> >guarantee that\n> >> you're not gonna read/write a torn value. Or a cached one. Since\n> >> local/shared buffers share the buffer header definition, we still\n> >have to\n> >> use proper functions to access the atomic variables.\n> >\n> >Sure, the atomic operations are necessary for shared buffers, but I\n> >still\n> >don't understand why they are needed for *local* buffers - local\n> >buffers their\n> >headers (BufferDesc) in process local memory, so there should be no\n> >concerns\n> >about concurrent access.\n> >\n> >Another thing that makes me confused is this comment in\n> >InitLocalBuffers():\n> >\n> >\t/*\n> >\t * Intentionally do not initialize the buffer's atomic variable\n> >\t * (besides zeroing the underlying memory above). 
That way we get\n> >\t * errors on platforms without atomics, if somebody (re-)introduces\n> >\t * atomic operations for local buffers.\n> >\t */\n> >\n> >That sounds like there was an intention not to use the atomic functions\n> >in\n> >localbuf.c, but eventually they ended up there. Do I still miss\n> >something?\n> \n> Again, the read/write functions do not imply atomic instructions.\n\nok. What I missed is that BufferDesc.state is declared as pg_atomic_uint32\nrather than plain int, so the pg_atomic_...() functions should be used\nregardless the buffer is shared or local. Sorry for the noise.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Fri, 06 Mar 2020 08:05:41 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Atomics in localbuf.c" }, { "msg_contents": "On Fri, Mar 6, 2020 at 2:04 AM Antonin Houska <ah@cybertec.at> wrote:\n> ok. What I missed is that BufferDesc.state is declared as pg_atomic_uint32\n> rather than plain int, so the pg_atomic_...() functions should be used\n> regardless the buffer is shared or local. Sorry for the noise.\n\nRight. I thought, though, that your question was why we did it that\nway instead of just declaring them as uint32. I'm not sure it's very\nimportant, but I think that question hasn't really been answered.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 6 Mar 2020 11:26:41 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Atomics in localbuf.c" }, { "msg_contents": "Hi,\n\nOn 2020-03-06 11:26:41 -0500, Robert Haas wrote:\n> On Fri, Mar 6, 2020 at 2:04 AM Antonin Houska <ah@cybertec.at> wrote:\n> > ok. What I missed is that BufferDesc.state is declared as pg_atomic_uint32\n> > rather than plain int, so the pg_atomic_...() functions should be used\n> > regardless the buffer is shared or local. 
Sorry for the noise.\n>\n> Right. I thought, though, that your question was why we did it that\n> way instead of just declaring them as uint32. I'm not sure it's very\n> important, but I think that question hasn't really been answered.\n\nI tried, at least:\n\n> Since local/shared buffers share the buffer header definition, we still have to use proper functions to access\n> the atomic variables.\n\nThere's only one struct BufferDesc. We could separate them out /\nintroduce a union or such. But that'd add some complexity / potential\nfor mistakes too.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 6 Mar 2020 11:06:41 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Atomics in localbuf.c" }, { "msg_contents": "On Fri, Mar 6, 2020 at 2:06 PM Andres Freund <andres@anarazel.de> wrote:\n> > Since local/shared buffers share the buffer header definition, we still have to use proper functions to access\n> > the atomic variables.\n>\n> There's only one struct BufferDesc. We could separate them out /\n> introduce a union or such. But that'd add some complexity / potential\n> for mistakes too.\n\nOK, got it. Thanks.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 9 Mar 2020 11:59:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Atomics in localbuf.c" } ]
[ { "msg_contents": "Hi Hackers,\n\nI found one interesting behavior when \"--with-gssapi\" is enabled,\n\ngiven a very \"common\" configuration in pg_hba.conf like below,\n\n     host            postgres    david   192.168.0.114/32    trust\n\nthe query message is always encrypted when using a very \"common\" way to \nconnect to the PG server,\n\n     $ psql -h pgserver -d postgres -U david\n\nunless I specify \"gssencmode=disable\" with the -d option,\n\n     $ psql -h pgserver -U david  -d \"dbname=postgres gssencmode=disable\"\n\nBased on the above behaviors, I did a further exercise on the kerberos \nregression test and found the test coverage is not enough. It should be \nenhanced to cover the above behavior when a user specifies a \"host\" \nfollowed by \"trust\" access in pg_hba.conf.\n\nThe attachment is a patch to cover the behaviors mentioned above for the \nkerberos regression test.\n\nAny thoughts?\n\n\nThanks,\n\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca", "msg_date": "Thu, 5 Mar 2020 12:53:22 -0800", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": true, "msg_subject": "kerberos regression test enhancement" } ]
[ { "msg_contents": "Dear community,\n\nI am really curious what was the original intention of using the\nPqSendBuffer and is it possible to remove it now.\n\nCurrently all messages are copied from StringInfo to this buffer and sent,\nwhich from my point of view is a redundant operation.\nIt is possible to directly send messages from StringInfo to the client. For\nexample: allocate more bytes from the beginning and fill it out before sending\nit to the client.\n\nMaybe there was already a discussion about it, or if I am missing something\nplease feel free to correct me.\n\nThank you in advance!", "msg_date": "Thu, 5 Mar 2020 13:02:02 -0800", "msg_from": "Aleksei Ivanov <iv.alekseii@gmail.com>", "msg_from_op": true, "msg_subject": "Proposal: PqSendBuffer removal" }, { "msg_contents": "Aleksei Ivanov <iv.alekseii@gmail.com> writes:\n> I am really curious what was the original intention of using the\n> PqSendBuffer and is it possible to remove it now.\n\n> Currently all messages are copied from StringInfo to this buffer and sent,\n> which from my point of view is redundant operation.\n\nThat would mean doing a separate send() kernel call for every few bytes,\nno? 
I think the point of that buffer is to be sure we accumulate a\nreasonable number of bytes to pass to the kernel for each send().\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Mar 2020 16:10:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: PqSendBuffer removal" }, { "msg_contents": "Thank you for your reply!\n\nYes, you are right there will be a separate call to send the data, but is\ncopying data each time more costly operation than just one syscall?\n\nBesides, if we already have a ready message packet to be sent why should we\nwait?\n\nWaiting for your reply,\nBest regards!\n\n\n\nOn Thu, Mar 5, 2020 at 13:10 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Aleksei Ivanov <iv.alekseii@gmail.com> writes:\n> > I am really curious what was the original intention of using the\n> > PqSendBuffer and is it possible to remove it now.\n>\n> > Currently all messages are copied from StringInfo to this buffer and\n> sent,\n> > which from my point of view is redundant operation.\n>\n> That would mean doing a separate send() kernel call for every few bytes,\n> no? 
I think the point of that buffer is to be sure we accumulate a\n> reasonable number of bytes to pass to the kernel for each send().\n>\n> regards, tom lane\n>\n", "msg_date": "Thu, 5 Mar 2020 13:23:21 -0800", "msg_from": "Aleksei Ivanov <iv.alekseii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: PqSendBuffer removal" }, { "msg_contents": "Aleksei Ivanov <iv.alekseii@gmail.com> writes:\n> Yes, you are right there will be a separate call to send the data, but is\n> copying data each time more costly operation than just one syscall?\n\nWhat do you mean \"just one syscall\"? The entire point here is that it'd\ntake more syscalls to send the same amount of data.\n\nIt does strike me that with the v3 protocol, we do sometimes have cases\nwhere internal_putbytes is reached with a fairly large \"len\". 
If we've\nflushed out what's in PqSendBuffer to start with, and there's more than\na bufferload remaining in the source data, we could send the source\ndata directly to output without copying it to the buffer first.\nThat could actually result in *fewer* kernel calls not more, if \"len\"\nis large enough. But I strongly doubt that a code path that nets\nout to more kernel calls will win.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Mar 2020 18:04:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: PqSendBuffer removal" }, { "msg_contents": "*> What do you mean \"just one syscall\"? The entire point here is that\nit'd take more syscalls to send the same amount of data.*\n\nI mean that if messages are large enough (more than 2K) we will need 4\nsyscalls without copying it to the internal buffer, but currently we will copy\n8K of messages and send it using 1 call. I think that under some threshold\nof packet length it is redundant to copy it to the internal buffer and the data\ncan be sent directly.\n\n\n\n\n\n\n\n\n*> It does strike me that with the v3 protocol, we do sometimes have\ncases where internal_putbytes is reached with a fairly large \"len\". 
If\nwe've flushed out what's in PqSendBuffer to start with, and there's more\nthan a bufferload remaining in the source data, we could send the source data\ndirectly to output without copying it to the buffer first. That could\nactually result in *fewer* kernel calls not more, if \"len\" is large enough.\nBut I strongly doubt that a code path that nets out to more kernel calls\nwill win.*\n\nYes, internal_putbytes can be updated to send data directly if the length\nis more than the internal buffer size.\n\nOn Thu, Mar 5, 2020 at 15:04 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Aleksei Ivanov <iv.alekseii@gmail.com> writes:\n> > Yes, you are right there will be a separate call to send the data, but is\n> > copying data each time more costly operation than just one syscall?\n>\n> What do you mean \"just one syscall\"? The entire point here is that it'd\n> take more syscalls to send the same amount of data.\n>\n> It does strike me that with the v3 protocol, we do sometimes have cases\n> where internal_putbytes is reached with a fairly large \"len\". If we've\n> flushed out what's in PqSendBuffer to start with, and there's more than\n> a bufferload remaining in the source data, we could send the source\n> data directly to output without copying it to the buffer first.\n> That could actually result in *fewer* kernel calls not more, if \"len\"\n> is large enough. But I strongly doubt that a code path that nets\n> out to more kernel calls will win.\n>\n> regards, tom lane\n>\n", "msg_date": "Thu, 5 Mar 2020 15:27:05 -0800", "msg_from": "Aleksei Ivanov <iv.alekseii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: PqSendBuffer removal" }, { "msg_contents": "Hi, \n\nOn March 5, 2020 1:23:21 PM PST, Aleksei Ivanov <iv.alekseii@gmail.com> wrote:\n>Thank you for your reply!\n>\n>Yes, you are right there will be a separate call to send the data, but\n>is\n>copying data each time more costly operation than just one syscall?\n\nYes, it's very likely to be more expensive to execute a syscall in a lot of cases. They've gotten a lot more expensive with all the security issues. \n\n>Besides, if we already have a ready message packet to be sent why\n>should we\n>wait?\n\nIn a lot of cases we'll send a number of small messages after each other. We don't want to send those out separately, that'd just increase overhead.\n\n\nBut in some paths/workloads the copy is quite noticeable. I've mused before whether we could extend StringInfo to handle cases like this. E.g. by having StringInfo have two lengths. One that is the offset to the start of the allocated memory (0 for plain StringInfos), and one for the length of the string being built.\n\nThen we could get a StringInfo pointing directly to the current insertion point in the send buffer. To support growing it, enlargeStringInfo would first subtract the offset to the start of the allocation, and then reallocate that. \n\nI can imagine that being useful in a number of places. And because there only would be additional overhead when actually growing the StringInfo, I don't think the cost would be measurable.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Thu, 05 Mar 2020 16:39:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposal: PqSendBuffer removal" }, { "msg_contents": "On Fri, 6 Mar 2020 at 07:27, Aleksei Ivanov <iv.alekseii@gmail.com> wrote:\n>\n> > What do you mean \"just one syscall\"? The entire point here is that it'd take more syscalls to send the same amount of data.\n>\n> I mean that it messages are large enough more than 2K we will need 4 syscalls without copy it to the internal buffer, but currently we will copy 8K of messages and send it using 1 call. I think that under some threshold of packet length it is redundant to copy it to internal buffer and the data can be sent directly.\n\nI think what you're suggesting is more complex than you may expect.\nPostgreSQL is single threaded and relies pretty heavily on the ability\nto buffer internally. It also expects its network I/O to always\nsucceed. Just switching to directly doing nonblocking I/O is not very\nfeasible. Changing the network I/O paths may expose a lot more\nopportunities for send vs receive deadlocks.\n\nIt also complicates the protocol's handling of message boundaries,\nsince failures and interruptions can occur at more points.\n\nHave you measured anything that suggests that our admittedly\ninefficient multiple handling of send buffers is\nperformance-significant compared to the vast amount of memory\nallocation and copying we do all over the place elsewhere? Do you have\na concrete reason to want to remove this?\n\nIf I had to change this model I'd probably be looking at an\niovector-style approach, like we use with shm_mq. Assemble an array of\nbuffer descriptors pointing to short, usually statically allocated\nbuffers and populate one with each pqformat step. Then when the\nmessage is assembled, use writev(2) or similar to dispatch it. Maybe\ndo some automatic early flushing if the buffer space overflows. 
But\nthat might need a protocol extension so we had a way to recover after\ninterrupted sending of a partial message...\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n", "msg_date": "Fri, 6 Mar 2020 13:32:48 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: PqSendBuffer removal" }, { "msg_contents": "*>Then we could get a StringInfo pointing directly to the current insertion\npoint in the send buffer. To support growing it, enlargeStringInfo would\nfirst subtract the offset to the start of the allocation, and then\nreallocate that*.\n\nI thought about it yesterday and one issue with this approach is how would\nyou know the length of the packet to be sent, as we can’t go back in\nPqSendBuffer. Also realloc is quite an expensive operation.\n\nPreviously I suggested to include an offset into stringinfo and if it is large\nenough we will have an opportunity to send it directly and it will not\nrequire a lot of changes.\n\n\nOn Fri, Mar 6, 2020 at 10:45 Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On March 5, 2020 1:23:21 PM PST, Aleksei Ivanov <iv.alekseii@gmail.com>\n> wrote:\n> >Thank you for your reply!\n> >\n> >Yes, you are right there will be a separate call to send the data, but\n> >is\n> >copying data each time more costly operation than just one syscall?\n>\n> Yes, it's very likely to be more expensive to execute a syscall in a lot\n> of cases. They've gotten a lot more expensive with all the security issues.\n>\n> >Besides, if we already have a ready message packet to be sent why\n> >should we\n> >wait?\n>\n> In a lot of cases we'll send a number of small messages after each other.\n> We don't want to send those out separately, that'd just increase overhead.\n>\n>\n> But in some paths/workloads the copy is quite noticable. I've mused before\n> whether we could extend StringInfo to handle cases like this. E.g. 
by\n> having StringInfo have two lengths. One that is the offset to the start of\n> the allocated memory (0 for plain StringInfos), and one for the length of\n> the string being built.\n>\n> Then we could get a StringInfo pointing directly to the current insertion\n> point in the send buffer. To support growing it, enlargeStringInfo would\n> first subtract the offset to the start of the allocation, and then\n> reallocate that.\n>\n> I can imagine that bring useful in a number of places. And because there\n> only would be additional overhead when actually growing the StringInfo, I\n> don't think the cost would be measurable.\n>\n> Andres\n> --\n> Sent from my Android device with K-9 Mail. Please excuse my brevity.\n>\n", "msg_date": "Fri, 6 Mar 2020 11:09:23 -0800", "msg_from": "Aleksei Ivanov <iv.alekseii@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: PqSendBuffer removal" }, { "msg_contents": "Hi,\n\nOn 2020-03-06 11:09:23 -0800, Aleksei Ivanov wrote:\n> *>Then we could get a StringInfo pointing directly to the current insertion\n> point in the send buffer. To support growing it, enlargeStringInfo would\n> first subtract the offset to the start of the allocation, and then\n> reallocate that*.\n>\n> I thought about it yesterday and one issue with this approach is how would\n> you known the length of the packet to be sent. As we can’t returned back in\n> PqSendBuffer. Also realloc is quite expensive operation.\n\nCould you expand a bit on what you see as the problem? Because I'm not\nfollowing?\n\nWhat does any of this have to do with realloc performance? 
I mean, the\nbuffer would just scale up once, so the cost of that would be very\nquickly amortized?\n\nWhat I'm thinking is that we'd have pg_beginmessage() (potentially a\ndifferently named version) initialize the relevant StringInfo basically as\n\n(StringInfoData){\n .data = PqSendPointer,\n .len = 0,\n .alloc_offset = PqSendBuffer - PqSendBuffer\n}\n\nand that pq_endmessage would then advance the equivalent (see below [1]) of\nwhat today is PqSendPointer to be PqSendPointer += StringInfo->len;\n\nThat'd mean that we'd never need to copy data in/out of the send buffer\nanymore, because we'd directly construct the message in the send\nbuffer. Pretty much all important FE/BE communication goes through\npq_beginmessage[_reuse()].\n\nWe'd have to add some defenses against building multiple messages at the\nsame time. But neither do I think that is common, nor does it seem hard\nto defend against: A simple counter should do the trick?\n\nRegards,\nAndres\n\n\n[1] Obviously the above sketch doesn't quite work that way. We can't\njust have stringinfo reallocate the buffer. 
Feels quite solvable though.\n\n\n", "msg_date": "Sat, 7 Mar 2020 10:33:45 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposal: PqSendBuffer removal" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> What I'm thinking is that we'd have pg_beginmessage() (potentially a\n> different named version) initialize the relevant StringInfo basically as\n\n> (StringInfoData){\n> .data = PqSendPointer,\n> .len = 0,\n> .alloc_offset = PqSendBuffer - PqSendBuffer\n> }\n\nThis seems way overcomplicated compared to what I suggested (ie,\njust let internal_putbytes bypass the buffer in cases where the\ndata would get flushed next time through its loop anyway).\nWhat you're suggesting would be a lot more invasive and restrictive\n--- for example, why is it a good idea to have a hard-wired\nassumption that we can't build more than one message at once?\n\nI'm also concerned that trying to do too much optimization here will\nbreak one of the goals of the existing code, which is to not get into\na situation where an OOM failure causes a wire protocol violation\nbecause we've already sent part of a message but are no longer able to\nsend the rest of it. To ensure that doesn't happen, we must construct\nthe whole message before we start to send it, and we can't let\nbuffering of the last message be too entwined with construction of the\nnext one. 
Between that and the (desirable) arms-length separation\nbetween datatype I/O functions and the actual I/O, a certain amount of\ndata copying seems unavoidable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 07 Mar 2020 13:54:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: PqSendBuffer removal" }, { "msg_contents": "Hi,\n\nOn 2020-03-07 13:54:57 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > What I'm thinking is that we'd have pg_beginmessage() (potentially a\n> > different named version) initialize the relevant StringInfo basically as\n> \n> > (StringInfoData){\n> > .data = PqSendPointer,\n> > .len = 0,\n> > .alloc_offset = PqSendBuffer - PqSendBuffer\n> > }\n> \n> This seems way overcomplicated compared to what I suggested (ie,\n> just let internal_putbytes bypass the buffer in cases where the\n> data would get flushed next time through its loop anyway).\n\nWell, we quite frequently send out multiple messages in a row, without a\nflush inbetween. It'd be nice if we could avoid both copying buffers for\neach message, as well as allocating a new stringinfo.\n\nWe've reduced the number of wholesale stringinfo reallocations with\npq_beginmessage_reuse(), which is e.g. significant when actually\nreturning tuples, and that was a noticable performance improvement.\n\nI don't believe that the copy is a performance relevant factor solely\nfor messages that are individually too large to fit in the send\nbuffer. For one, there'll often be some pending send data from a\nprevious \"round\", which'd mean we'd need to call send() more often, or\nuse vectorized IO (i.e. switch to writev()). But also,\n\n\n> What you're suggesting would be a lot more invasive and restrictive\n> --- for example, why is it a good idea to have a hard-wired\n> assumption that we can't build more than one message at once?\n\nWell, we don't seem to have many (any?) places where that's not the\ncase. 
And having to use only one layer of buffering for outgoing data\ndoes seem advantageous to me. It'd not be hard to fall back to a\nseparate buffer just for the cases where there are multiple messages\nbuilt concurrently, if we want to support that.\n\n\n> I'm also concerned that trying to do too much optimization here will\n> break one of the goals of the existing code, which is to not get into\n> a situation where an OOM failure causes a wire protocol violation\n> because we've already sent part of a message but are no longer able to\n> send the rest of it. To ensure that doesn't happen, we must construct\n> the whole message before we start to send it, and we can't let\n> buffering of the last message be too entwined with construction of the\n> next one. Between that and the (desirable) arms-length separation\n> between datatype I/O functions and the actual I/O, a certain amount of\n> data copying seems unavoidable.\n\nSure. But I don't see why that requires two levels of buffering for\nmessages? If we were to build the message in the output buffer, resizing\nas needed, we can send the data once the message is complete, or not at\nall.\n\nI don't think anything on the datatype I/O level would be affected?\n\nWhile I think it'd be quite desirable to avoid e.g. the separate\nstringinfo allocation for send functions, I think that's quite a\nseparate project. One which I have no really good idea to tackle.\n\nGreetings,\n\nAndres Freund\n\n\n[1] Since I had looked it up:\n\nWe do a separate message for each of:\n1) result description\n2) each result row\n3) ReadyForQuery\n\nAnd we separately call through PQcommMethods for things like\npq_putemptymessage() and uses of pq_putmessage() not going through\npq_endmessage. 
The former is called a lot, especially when using the\nextended query protocol (which we want clients to use!).\n\n\nFor a SELECT 1 in the simple protocol we end up calling putmessage via:\n1) SendRowDescriptionMessage\n2) printtup()\n3) EndCommand()\n4) ReadyForQuery()\n\nFor extended:\n1) exec_parse_message()\n2) exec_bind_message()\n3) exec_describe_portal_message()\n4) printtup()\n5) EndCommand()\n6) ReadyForQuery()\n\n\n", "msg_date": "Mon, 9 Mar 2020 13:26:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposal: PqSendBuffer removal" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-07 13:54:57 -0500, Tom Lane wrote:\n>> This seems way overcomplicated compared to what I suggested (ie,\n>> just let internal_putbytes bypass the buffer in cases where the\n>> data would get flushed next time through its loop anyway).\n\n> Well, we quite frequently send out multiple messages in a row, without a\n> flush inbetween. It'd be nice if we could avoid both copying buffers for\n> each message, as well as allocating a new stringinfo.\n\nThat gets you right into the situation where trouble adding more messages\ncould corrupt/destroy messages that were supposedly already sent (but in\nreality aren't flushed to the client quite yet). I really think that\nthere is not enough win available here to justify introducing that kind\nof fragility.\n\nTo be blunt, no actual evidence has been offered in this thread that\nit's worth changing anything at all in this area. All of the bytes\nin question eventually have to be delivered to the client, which is\ngoing to involve two kernel-space/user-space copy steps along with\nwho-knows-what network transmission overhead. 
The idea that an\nextra process-local memcpy or two is significant compared to that\nseems like mostly wishful thinking to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Mar 2020 17:32:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal: PqSendBuffer removal" }, { "msg_contents": "On Sat, 7 Mar 2020 at 02:45, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On March 5, 2020 1:23:21 PM PST, Aleksei Ivanov <iv.alekseii@gmail.com> wrote:\n> >Thank you for your reply!\n> >\n> >Yes, you are right there will be a separate call to send the data, but\n> >is\n> >copying data each time more costly operation than just one syscall?\n>\n> Yes, it's very likely to be more expensive to execute a syscall in a lot of cases. They've gotten a lot more expensive with all the security issues.\n\nGood to know.\n\n> >Besides, if we already have a ready message packet to be sent why\n> >should we\n> >wait?\n>\n> In a lot of cases we'll send a number of small messages after each other. We don't want to send those out separately, that'd just increase overhead.\n\nRight. We presently set `TCP_NODELAY` to disable Nagle's algorithm, so\nwe'd tend to generate these messages as separate packets as well as\nseparate syscalls, making it doubly undesirable.\n\nWe can dynamically twiddle TCP_NODELAY like many other applications do\nto optimise the wire packets, but not the syscalls. It'd actually cost\nextra syscalls.\n\n> But in some paths/workloads the copy is quite noticable. I've mused before whether we could extend StringInfo to handle cases like this. E.g. by having StringInfo have two lengths. One that is the offset to the start of the allocated memory (0 for plain StringInfos), and one for the length of the string being built.\n>\n> Then we could get a StringInfo pointing directly to the current insertion point in the send buffer. 
To support growing it, enlargeStringInfo would first subtract the offset to the start of the allocation, and then reallocate that.\n>\n> I can imagine that bring useful in a number of places. And because there only would be additional overhead when actually growing the StringInfo, I don't think the cost would be measurable.\n\nThat sounds pretty sensible as it'd be minimally intrusive.\n\nI've wondered whether we can further optimise some uses by having\nlibpq-be manage an iovec for us instead, much like we support iovec\nfor shm_mq. Instead of a StringInfo we'd use an iovec wrapped by a\nlibpq-managed struct. libpq would reserve the first few entries in the\nmanaged iovec for the message header. Variants of pq_sendint64(...)\netc would add entries to the iovec and could be inlined since they'd\njust be convenience routines. The message would be flushed by having\nlibpq call writev() on the iovec container.\n\nWe'd want a wrapper struct for the iovec so we could have libpq keep a\ncursor for the next entry in the iovec. For libpq-fe it'd also contain\na map of which iovec entries need to be free()d; for libpq-be we'd\nprobably palloc(), maybe with a child memory context. To support a\nstack-allocated iovec for when we know all the message fields in\nadvance we'd have an init function that takes the address of the\npreallocated iovec and its length limit.\n\nWe could also support a libpq-wrapped iovec where libpq can realloc it\nif it fills; with the stack-allocated variant, the caller would be\nresponsible for managing the max size.\n\nThat way we can do zero-copy scatter-gather I/O for messages that\ndon't require binary-to-text-format transformations etc.\n\nBTW, if we change StringInfo, I'd like to also officially bless the\nusage pattern where we wrap a buffer in a StringInfo so we can use\npq_getmsgint64 etc on it. 
Add a initConstStringInfo(StringInfo si,\nconst char * buf, size_t buflen) or something that assigns the\nStringInfo values and sets maxlen = -1. The only in-core usage I see\nfor this so far is in src/backend/replication/logical/worker.c but\nit's used extremely heavily in pglogical etc. It'd just be a\nconvenience function that blesses and documents existing usage.\n\nBut like Tom I really want to first come back to the evidence. Why\nshould we bother? Are we solving an actual problem here? PostgreSQL is\nimmensely memory-allocation-happy and copy-happy. Shouldn't we be more\ninterested in things like reducing the cost of multiple copies and\ntransform passes of Datum values? Especially since that's an actual\noperational pain point when you're working with multi-hundred-megabyte\nbytea or text fields.\n\nCan you come up with some profiling/performance numbers that track\ntime spent on memory copying in the areas you propose to target, plus\nmalloc overheads? With a tool like systemtap or perf it should not be\noverly difficult to do so by making targeted probes that filter based\non callstack, or on file / line-range or function.\n\n\n", "msg_date": "Wed, 11 Mar 2020 11:44:37 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: PqSendBuffer removal" } ]
[ { "msg_contents": "On 2018-03-06 14:50, Simon Riggs wrote:\n> On 6 March 2018 at 11:24, Dmitry Ivanov <d.ivanov@postgrespro.ru> \n> wrote:\n>>> In PG11, I propose the following command, sticking mostly to Ants'\n>>> syntax, and allowing to wait for multiple events before it returns. \n>>> It\n>>> doesn't hold snapshot and will not get cancelled by Hot Standby.\n>>> \n>>> WAIT FOR event [, event ...] options\n>>> \n>>> event is\n>>> LSN value\n>>> TIMESTAMP value\n>>> \n>>> options\n>>> TIMEOUT delay\n>>> UNTIL TIMESTAMP timestamp\n>>> (we have both, so people don't need to do math, they can use \n>>> whichever\n>>> they have)\n>> \n>> \n>> I have a (possibly) dumb question: if we have specified several \n>> events,\n>> should WAIT finish if only one of them triggered? It's not immediately\n>> obvious if we're waiting for ALL of them to trigger, or just one will\n>> suffice (ANY). IMO the syntax could be extended to something like:\n>> \n>> WAIT FOR [ANY | ALL] event [, event ...] options,\n>> \n>> with ANY being the default variant.\n> \n> +1\n\nHere I made new patch of feature, discussed above.\n\nWAIT FOR - wait statement to pause beneath statements\n==========\n\nSynopsis\n==========\n WAIT FOR [ANY | SOME | ALL] event [, event ...] 
options\n and event is:\n LSN value\n TIMESTAMP value\n\n and options is:\n TIMEOUT delay\n UNTIL TIMESTAMP timestamp\nDescription\n==========\nWAIT FOR - pause the statements that follow until the event happens\n(Don’t process new queries until an event happens).\n\nHow to use it\n==========\nWAIT FOR LSN ‘LSN’ [, timeout in ms];\n\n#Wait until LSN 0/303EC60 is replayed, or 10 seconds have passed.\nWAIT FOR ANY LSN ‘0/303EC60’, TIMEOUT 10000;\n\n#Or same without timeout.\nWAIT FOR LSN ‘0/303EC60’;\n\n#Or wait for some timestamp.\nWAIT FOR TIMESTAMP '2020-01-02 17:20:19.028161+03';\n\n#Wait until ALL events occur: LSN to be replayed and timestamp\npassed.\nWAIT FOR ALL LSN ‘0/303EC60’, TIMESTAMP '2020-01-28 11:10:39.021341+03';\n\nNotice: WAIT FOR will release on PostmasterDeath or Interruption events\nif they come earlier than the LSN or timeout.\n\nTesting the implementation\n======================\nThe implementation was tested with src/test/recovery/t/018_waitfor.pl\n\n-- \nIvan Kartyshov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 06 Mar 2020 00:24:01 +0300", "msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "Hello.\r\n\r\nI looked at this briefly but have not tested it.\r\n\r\nAt Fri, 06 Mar 2020 00:24:01 +0300, Kartyshov Ivan <i.kartyshov@postgrespro.ru> wrote in \r\n> On 2018-03-06 14:50, Simon Riggs wrote:\r\n> > On 6 March 2018 at 11:24, Dmitry Ivanov <d.ivanov@postgrespro.ru>\r\n> > wrote:\r\n> >>> In PG11, I propose the following command, sticking mostly to Ants'\r\n> >>> syntax, and allowing to wait for multiple events before it returns. It\r\n> >>> doesn't hold snapshot and will not get cancelled by Hot Standby.\r\n> >>> WAIT FOR event [, event ...] 
options\r\n> >>> event is\r\n> >>> LSN value\r\n> >>> TIMESTAMP value\r\n> >>> options\r\n> >>> TIMEOUT delay\r\n> >>> UNTIL TIMESTAMP timestamp\r\n> >>> (we have both, so people don't need to do math, they can use whichever\r\n> >>> they have)\r\n> >> I have a (possibly) dumb question: if we have specified several\r\n> >> events,\r\n> >> should WAIT finish if only one of them triggered? It's not immediately\r\n> >> obvious if we're waiting for ALL of them to trigger, or just one will\r\n> >> suffice (ANY). IMO the syntax could be extended to something like:\r\n> >> WAIT FOR [ANY | ALL] event [, event ...] options,\r\n> >> with ANY being the default variant.\r\n> > +1\r\n> \r\n> Here I made new patch of feature, discussed above.\r\n> \r\n> WAIT FOR - wait statement to pause beneath statements\r\n> ==========\r\n> \r\n> Synopsis\r\n> ==========\r\n> WAIT FOR [ANY | SOME | ALL] event [, event ...] options\r\n> and event is:\r\n> LSN value\r\n> TIMESTAMP value\r\n> \r\n> and options is:\r\n> TIMEOUT delay\r\n> UNTIL TIMESTAMP timestamp\r\n\r\nThe syntax seems confusing. What happens if we type in the\r\ncommand \"WAIT FOR TIMESTAMP '...' UNTIL TIMESTAMP '....'\"? It seems\r\nto me the options are useless. Couldn't the TIMEOUT option be a part of the\r\nevent? I know gram.y doesn't accept that syntax but it is not\r\napparent from the description above.\r\n\r\nAs I read through the previous thread, one of the reasons for\r\nimplementing this feature as a syntax is that it was intended to be\r\ncombined into the BEGIN statement. 
If there is no use case for the feature in the middle\r\nof a transaction, why don't you turn it into a part of the BEGIN command?\r\n\r\n> Description\r\n> ==========\r\n> WAIT FOR - make to wait statements (that are beneath) on sleep until\r\n> event happens (Don’t process new queries until an event happens).\r\n...\r\n> Notice: WAIT FOR will release on PostmasterDeath or Interruption\r\n> events\r\n> if they come earlier then LSN or timeout.\r\n\r\nI think interrupts ought to result in ERROR.\r\n\r\nwait.c adds a fair amount of code and uses a proc-array-based\r\napproach. But Thomas suggested a queue-based approach, and I also think\r\nit is better. We already have a queue-based mechanism that behaves\r\nalmost the same as this feature in the commit code on the master side. It\r\navoids spurious backend wakeups. Couldn't we extend SyncRepWaitForLSN\r\nor share a part of the code/infrastructure so that this feature can\r\nshare the code?\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Fri, 06 Mar 2020 14:54:45 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "As discussed earlier, I added the WAIT FOR clause to the BEGIN/START \nstatement.\n\nSynopsis\n==========\nBEGIN [ WORK | TRANSACTION ] [ transaction_mode[, ...] 
] wait_for_event\n where transaction_mode is one of:\n ISOLATION LEVEL { SERIALIZABLE | REPEATABLE READ | READ \nCOMMITTED | READ UNCOMMITTED }\n READ WRITE | READ ONLY\n [ NOT ] DEFERRABLE\n\n WAIT FOR [ANY | SOME | ALL] event [, event ...]\n and event is:\n LSN value [options]\n TIMESTAMP value\n\n and options is:\n TIMEOUT delay\n UNTIL TIMESTAMP timestamp\n\n\n\n-- \nIvan Kartyshov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sat, 21 Mar 2020 14:16:11 +0300", "msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On 2020-03-21 14:16, Kartyshov Ivan wrote:\n> As it was discussed earlier, I added wait for statement into\n> begin/start statement.\nThanks! To address the discussion: I like the idea of having WAIT as a \npart of BEGIN statement rather than a separate command, as Thomas Munro \nsuggested. That way, the syntax itself enforces that WAIT FOR LSN will \nactually take effect, even for single-snapshot transactions. It seems \nmore convenient for the user, who won't have to remember the details \nabout how WAIT interacts with isolation levels.\n\n\n> BEGIN [ WORK | TRANSACTION ] [ transaction_mode[, ...] ] wait_for_event\nNot sure about this, but could we add \"WAIT FOR ..\" as another \ntransaction_mode rather than a separate thing? That way, user won't have \nto worry about the order. As of now, one should remember to always put \nWAIT FOR as the Last parameter in the BEGIN statement.\n\n\n> and event is:\n> LSN value [options]\n> TIMESTAMP value\nI would maybe remove WAIT FOR TIMESTAMP. As Robert Haas has pointed out, \nit seems a lot like pg_sleep_until(). 
Besides, it doesn't necessarily \nneed to be connected to transaction start, which makes it different from \nWAIT FOR LSN - so I wouldn't mix them together.\n\n\nI had another look at the code:\n\n\n===\nIn WaitShmemSize() you might want to use functions that check for \noverflow - add_size()/mul_size(). They're used in similar situations, \nfor example in BTreeShmemSize().\n\n\n===\nThis is how WaitUtility() is called - note that time_val will always be \n > 0:\n+    if (time_val <= 0)\n+        time_val = 1;\n+...\n+    res = WaitUtility(lsn, (int)(time_val * 1000), dest);\n\n(time_val * 1000) is passed to WaitUtility() as the delay argument. And \ninside WaitUtility() we have this:\n\n+if (delay > 0)\n+    latch_events = WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH;\n+else\n+    latch_events = WL_LATCH_SET | WL_POSTMASTER_DEATH;\n\nSince we always pass a delay value greater than 0, we'll never get to \nthe \"else\" clause here and we'll never be ready to wait for LSN forever. \nPerhaps due to that, the current test outputs this after a simple WAIT \nFOR LSN command:\npsql:<stdin>:1: NOTICE:  LSN is not reached.\n\n\n===\nSpeaking of tests,\n\nWhen I tried to test BEGIN TRANSACTION WAIT FOR LSN, I got a segfault:\nLOG: statement: BEGIN TRANSACTION WAIT FOR LSN '0/3002808'\nLOG: server process (PID 10385) was terminated by signal 11: \nSegmentation fault\nDETAIL: Failed process was running: COMMIT\n\nCould you add some more tests to the patch when this is fixed? With WAIT \nas part of BEGIN statement + with things such as WAIT FOR ALL ... / WAIT \nFOR ANY ... / WAIT FOR LSN ... 
UNTIL TIMESTAMP ...\n\n\n===\nIn WaitSetLatch() we should probably check backend for NULL before \ncalling SetLatch(&backend->procLatch)\n\nWe might also need to check wait statement for NULL in these two cases:\n+  case T_TransactionStmt:\n+  {...\n+      result = transformWaitForStmt(pstate, (WaitStmt *) stmt->wait);\n\ncase TRANS_STMT_START:\n{...\n+ WaitStmt *waitstmt = (WaitStmt *) stmt->wait;\n+ res = WaitMain(waitstmt, dest);\n\n\n===\nAfter we added the \"wait\" attribute to TransactionStmt struct, do we \nperhaps need to add something to _copyTransactionStmt() / \n_equalTransactionStmt()?\n\n--\nAnna Akenteva\nPostgres Professional:\nThe Russian Postgres Company\nhttp://www.postgrespro.com\n\n\n", "msg_date": "Wed, 25 Mar 2020 21:10:59 +0300", "msg_from": "Anna Akenteva <a.akenteva@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "Anna, thank you for your review.\n\nOn 2020-03-25 21:10, Anna Akenteva wrote:\n> On 2020-03-21 14:16, Kartyshov Ivan wrote:\n>> and event is:\n>> LSN value [options]\n>> TIMESTAMP value\n> I would maybe remove WAIT FOR TIMESTAMP. As Robert Haas has pointed\n> out, it seems a lot like pg_sleep_until(). 
Besides, it doesn't\n> necessarily need to be connected to transaction start, which makes it\n> different from WAIT FOR LSN - so I wouldn't mix them together.\nI don't mind.\nBut I think we should get one more opinions on this point.\n\n> ===\n> This is how WaitUtility() is called - note that time_val will always be \n> > 0:\n> +    if (time_val <= 0)\n> +        time_val = 1;\n> +...\n> +    res = WaitUtility(lsn, (int)(time_val * 1000), dest);\n> \n> (time_val * 1000) is passed to WaitUtility() as the delay argument.\n> And inside WaitUtility() we have this:\n> \n> +if (delay > 0)\n> +    latch_events = WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH;\n> +else\n> +    latch_events = WL_LATCH_SET | WL_POSTMASTER_DEATH;\n> \n> Since we always pass a delay value greater than 0, we'll never get to\n> the \"else\" clause here and we'll never be ready to wait for LSN\n> forever. Perhaps due to that, the current test outputs this after a\n> simple WAIT FOR LSN command:\n> psql:<stdin>:1: NOTICE:  LSN is not reached.\nI fix it, and Interruptions in last patch.\n\nAnna, feel free to work on this patch.\n\n-- \nIvan Kartyshov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 27 Mar 2020 04:15:59 +0300", "msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On 2020-03-27 04:15, Kartyshov Ivan wrote:\n> Anna, feel free to work on this patch.\n\nIvan and I worked on this patch a bit more. We fixed the bugs that we \ncould find and cleaned up the code. For now, we've kept both options: \nWAIT as a standalone statement and WAIT as a part of BEGIN. The new \npatch is attached.\n\nThe syntax looks a bit different now:\n\n- WAIT FOR [ANY | ALL] event [, ...]\n- BEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] 
] [ WAIT FOR \n[ANY | ALL] event [, ...]]\nwhere event is one of:\n LSN value\n TIMEOUT number_of_milliseconds\n timestamp\n\nNow, one event cannot contain both an LSN and a TIMEOUT. With such \nsyntax, the behaviour seems to make sense. For the (default) WAIT FOR \nALL strategy, we pick the maximum LSN and maximum allowed timeout, and \nwait for the LSN till the timeout is over. If no timeout is specified, \nwe wait forever. If no LSN is specified, we just wait for the time to \npass. For the WAIT FOR ANY strategy, it's the same but we pick minimum \nLSN and timeout.\n\nThere are still some questions left:\n1) Should we only keep the BEGIN option, or does the standalone command \nhave potential after all?\n2) Should we change the grammar so that WAIT can be in any position of \nthe BEGIN statement, not necessarily in the end? Ivan and I haven't come \nto a consensus about this, so more opinions would be helpful.\n3) Since we added the \"wait\" attribute to TransactionStmt struct, do we \nneed to add something to _copyTransactionStmt() / \n_equalTransactionStmt()?\n\n-- \nAnna Akenteva\nPostgres Professional:\nThe Russian Postgres Company\nhttp://www.postgrespro.com", "msg_date": "Wed, 01 Apr 2020 02:26:54 +0300", "msg_from": "Anna Akenteva <a.akenteva@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On 2020-04-01 02:26, Anna Akenteva wrote:\n> On 2020-03-27 04:15, Kartyshov Ivan wrote:\n>> Anna, feel free to work on this patch.\n> \n> Ivan and I worked on this patch a bit more. We fixed the bugs that we\n> could find and cleaned up the code. For now, we've kept both options:\n> WAIT as a standalone statement and WAIT as a part of BEGIN. The new\n> patch is attached.\n> \n> The syntax looks a bit different now:\n> \n> - WAIT FOR [ANY | ALL] event [, ...]\n> - BEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] 
] [ WAIT FOR\n> [ANY | ALL] event [, ...]]\n> where event is one of:\n> LSN value\n> TIMEOUT number_of_milliseconds\n> timestamp\n> \n> Now, one event cannot contain both an LSN and a TIMEOUT.\n> \n\nIn my understanding the whole idea of having TIMEOUT was to do something \nlike 'Do wait for this LSN to be replicated, but no longer than TIMEOUT \nmilliseconds'. What is the point of having plain TIMEOUT? It seems to be \nequivalent to pg_sleep, doesn't it?\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Fri, 03 Apr 2020 17:29:45 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "I did some code cleanup and added tests - both for the standalone WAIT \nFOR statement and for WAIT FOR as a part of BEGIN. The new patch is \nattached.\n\nOn 2020-04-03 17:29, Alexey Kondratov wrote:\n> On 2020-04-01 02:26, Anna Akenteva wrote:\n>> \n>> - WAIT FOR [ANY | ALL] event [, ...]\n>> - BEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ] [ WAIT FOR\n>> [ANY | ALL] event [, ...]]\n>> where event is one of:\n>> LSN value\n>> TIMEOUT number_of_milliseconds\n>> timestamp\n>> \n>> Now, one event cannot contain both an LSN and a TIMEOUT.\n>> \n> \n> In my understanding the whole idea of having TIMEOUT was to do\n> something like 'Do wait for this LSN to be replicated, but no longer\n> than TIMEOUT milliseconds'. 
What is the point of having plain TIMEOUT?\n> It seems to be equivalent to pg_sleep, doesn't it?\n> \n\nIn the patch that I reviewed, you could do things like:\n WAIT FOR\n LSN lsn0,\n LSN lsn1 TIMEOUT time1,\n LSN lsn2 TIMEOUT time2;\nand such a statement was in practice equivalent to\n WAIT FOR LSN(max(lsn0, lsn1, lsn2)) TIMEOUT (max(time1, time2))\n\nAs you can see, even though grammatically lsn1 is grouped with time1 and \nlsn2 is grouped with time2, both timeouts that we specified are not \nconnected to their respective LSN-s, and instead they kinda act like \nglobal timeouts. Therefore, I didn't see a point in keeping TIMEOUT \nnecessarily grammatically connected to LSN.\n\nIn the new syntax our statement would look like this:\n WAIT FOR LSN lsn0, LSN lsn1, LSN lsn2, TIMEOUT time1, TIMEOUT time2;\nTIMEOUT-s are not forced to be grouped with LSN-s anymore, which makes \nit more clear that all specified TIMEOUTs will be global and will apply \nto all LSN-s at once.\n\nThe point of having TIMEOUT is still to let us limit the time of waiting \nfor LSNs. It's just that with the new syntax, we can also use TIMEOUT \nwithout an LSN. You are right, such a case is equivalent to pg_sleep. \nOne way to avoid that is to prohibit waiting for TIMEOUT without \nspecifying an LSN. 
Do you think we should do that?\n\n-- \nAnna Akenteva\nPostgres Professional:\nThe Russian Postgres Company\nhttp://www.postgrespro.com", "msg_date": "Fri, 03 Apr 2020 21:51:13 +0300", "msg_from": "Anna Akenteva <a.akenteva@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "Hi!\n\nOn Fri, Apr 3, 2020 at 9:51 PM Anna Akenteva <a.akenteva@postgrespro.ru> wrote:\n> In the patch that I reviewed, you could do things like:\n> WAIT FOR\n> LSN lsn0,\n> LSN lsn1 TIMEOUT time1,\n> LSN lsn2 TIMEOUT time2;\n> and such a statement was in practice equivalent to\n> WAIT FOR LSN(max(lsn0, lsn1, lsn2)) TIMEOUT (max(time1, time2))\n>\n> As you can see, even though grammatically lsn1 is grouped with time1 and\n> lsn2 is grouped with time2, both timeouts that we specified are not\n> connected to their respective LSN-s, and instead they kinda act like\n> global timeouts. Therefore, I didn't see a point in keeping TIMEOUT\n> necessarily grammatically connected to LSN.\n>\n> In the new syntax our statement would look like this:\n> WAIT FOR LSN lsn0, LSN lsn1, LSN lsn2, TIMEOUT time1, TIMEOUT time2;\n> TIMEOUT-s are not forced to be grouped with LSN-s anymore, which makes\n> it more clear that all specified TIMEOUTs will be global and will apply\n> to all LSN-s at once.\n>\n> The point of having TIMEOUT is still to let us limit the time of waiting\n> for LSNs. It's just that with the new syntax, we can also use TIMEOUT\n> without an LSN. You are right, such a case is equivalent to pg_sleep.\n> One way to avoid that is to prohibit waiting for TIMEOUT without\n> specifying an LSN. Do you think we should do that?\n\nI think specifying multiple LSNs/TIMEOUTs is kind of ridiculous. We\ncan assume that client application is smart enough to calculate\nminimum/maximum on its side. When multiple LSNs/TIMEOUTs are\nspecified, what should we wait for? Reaching all the LSNs? Reaching\nany of LSNs? 
Are timeouts independent from LSNs or stuck together?\nSo if we didn't manage to reach LSN1 in TIMEOUT1, then we don't wait\nfor LSN2 with TIMEOUT2 (or not)?\n\nI think that now we would be fine with single LSN and single TIMEOUT.\nIn future we may add multiple LSNs/TIMEOUTs and/or support for\nexpressions as LSNs/TIMEOUTs if we figure out it's necessary.\n\nI also think that coupling waiting for an LSN with the beginning of a\ntransaction is a good idea. A separate WAIT FOR LSN statement called in\nthe middle of a transaction looks problematic to me. Imagine we have RR\nisolation and already acquired the snapshot. Then our snapshot can\nblock applying wal records, which we are waiting for. That would be an\nimplicit deadlock. It would be nice to evade such deadlocks by\ndesign.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sat, 4 Apr 2020 03:14:01 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On 2020-04-03 21:51, Anna Akenteva wrote:\n> I did some code cleanup and added tests - both for the standalone WAIT\n> FOR statement and for WAIT FOR as a part of BEGIN. 
The new patch is\n> attached.\n\nI did more cleanup and code optimization on waiting events on latch.\nAnd rebase patch.\n\n-- \nIvan Kartyshov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 05 Apr 2020 02:56:31 +0300", "msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On 2020-04-04 03:14, Alexander Korotkov wrote:\n> I think that now we would be fine with single LSN and single TIMEOUT.\n> In future we may add multiple LSNs/TIMEOUTs or/and support for\n> expressions as LSNs/TIMEOUTs if we figure out it's necessary.\n> \n> I also think it's good to couple waiting for lsn with beginning of\n> transaction is good idea. Separate WAIT FOR LSN statement called in\n> the middle of transaction looks problematic for me. Imagine we have RR\n> isolation and already acquired the snapshot. Then out snapshot can\n> block applying wal records, which we are waiting for. That would be\n> implicit deadlock. It would be nice to evade such deadlocks by\n> design.\nOk, here is a new version of patch with single LSN and TIMEOUT.\n\nSynopsis\n==========\nBEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ] [WAIT FOR LSN \n'lsn' [ TIMEOUT 'value']]\nand\nSTART TRANSACTION [ transaction_mode [, ...] ] [WAIT FOR LSN 'lsn' [ \nTIMEOUT 'value']]\n where lsn is result of pg_current_wal_flush_lsn on master.\n and value is uint time interval in milliseconds.\nDescription\n==========\nBEGIN/START...WAIT FOR - pause the start of transaction until a \nspecified LSN has\nbeen replayed. (Don’t open transaction if lsn is not reached on \ntimeout).\n\nHow to use it\n==========\nWAIT FOR LSN ‘LSN’ [, timeout in ms];\n\n# Before starting transaction, wait until LSN 0/84832E8 is replayed. 
\nWait time is\nnot limited here because a timeout was not specified\nBEGIN WAIT FOR LSN '0/84832E8';\n\n# Before starting transaction, wait until LSN 0/84832E8 is replayed. \nLimit the wait\ntime with 10 seconds, and if LSN is not reached by then, don't start the \ntransaction.\nSTART TRANSACTION WAIT FOR LSN '0/8DFFB88' TIMEOUT 10000;\n\n# Same as previous, but with transaction isolation level = REPEATABLE \nREAD\nBEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ WAIT FOR LSN \n'0/815C0F1' TIMEOUT 10000;\n\nNotice: WAIT FOR will release on PostmasterDeath or Interruption events\nif they come earlier than LSN or timeout.\n\nTesting the implementation\n======================\nThe implementation was tested with src/test/recovery/t/020_begin_wait.pl\n\n-- \nIvan Kartyshov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 07 Apr 2020 00:58:18 +0300", "msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On Tue, Apr 7, 2020 at 12:58 AM Kartyshov Ivan\n<i.kartyshov@postgrespro.ru> wrote:\n> On 2020-04-04 03:14, Alexander Korotkov wrote:\n> > I think that now we would be fine with single LSN and single TIMEOUT.\n> > In future we may add multiple LSNs/TIMEOUTs or/and support for\n> > expressions as LSNs/TIMEOUTs if we figure out it's necessary.\n> >\n> > I also think it's good to couple waiting for lsn with beginning of\n> > transaction is good idea. Separate WAIT FOR LSN statement called in\n> > the middle of transaction looks problematic for me. Imagine we have RR\n> > isolation and already acquired the snapshot. Then out snapshot can\n> > block applying wal records, which we are waiting for. That would be\n> > implicit deadlock. 
It would be nice to evade such deadlocks by\n> > design.\n> Ok, here is a new version of patch with single LSN and TIMEOUT.\n\nI think this is quite a small feature, which has already received a fair\namount of review. The last version is very pared down. But I think it would be\ngood to commit some very basic version, which is at least some\nprogress in the area and could be extended in future. I'm going to\npass through the code tomorrow and commit this unless I find major\nissues or somebody objects.\n\n------\nAlexander Korotkov\n\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 7 Apr 2020 03:25:47 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On 2020-04-07 00:58, Kartyshov Ivan wrote:\n> Ok, here is a new version of patch with single LSN and TIMEOUT.\n\nI had a look at the code and did some more code cleanup, with Ivan's \npermission.\nThis is what I did:\n- Removed \"WAIT FOR\" command tag from cmdtaglist.h and renamed WaitStmt \nto WaitClause (since there's no standalone WAIT FOR command anymore)\n- Added _copyWaitClause() and _equalWaitClause()\n- Removed unused #include-s from utility.c\n- Adjusted tests and documentation\n- Fixed/added some code comments\n\nI have a couple of questions about WaitUtility() though:\n- When waiting forever (due to not specifying a timeout), isn't 60 \nseconds too long of an interval to check for interrupts?\n- If we did specify a timeout, it might be a very long one. In this \ncase, shouldn't we also make sure to wake up sometimes to check for \ninterrupts?\n- Is it OK that specifying timeout = 0 (BEGIN WAIT FOR LSN ... 
TIMEOUT \n0) is the same as not specifying timeout at all?\n\n-- \nAnna Akenteva\nPostgres Professional:\nThe Russian Postgres Company\nhttp://www.postgrespro.com", "msg_date": "Tue, 07 Apr 2020 05:25:53 +0300", "msg_from": "Anna Akenteva <a.akenteva@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On Tue, Apr 7, 2020 at 7:56 AM Anna Akenteva <a.akenteva@postgrespro.ru> wrote:\n>\n> On 2020-04-07 00:58, Kartyshov Ivan wrote:\n> > Ok, here is a new version of patch with single LSN and TIMEOUT.\n>\n> I had a look at the code and did some more code cleanup, with Ivan's\n> permission.\n> This is what I did:\n> - Removed \"WAIT FOR\" command tag from cmdtaglist.h and renamed WaitStmt\n> to WaitClause (since there's no standalone WAIT FOR command anymore)\n> - Added _copyWaitClause() and _equalWaitClause()\n> - Removed unused #include-s from utility.c\n> - Adjusted tests and documentation\n> - Fixed/added some code comments\n>\n> I have a couple of questions about WaitUtility() though:\n> - When waiting forever (due to not specifying a timeout), isn't 60\n> seconds too long of an interval to check for interrupts?\n> - If we did specify a timeout, it might be a very long one. In this\n> case, shouldn't we also make sure to wake up sometimes to check for\n> interrupts?\n>\n\nRight, we should probably wait for 100ms before checking the\ninterrupts. See the similar logic in pg_promote where we wait for\nspecified number of seconds.\n\n> - Is it OK that specifying timeout = 0 (BEGIN WAIT FOR LSN ... 
TIMEOUT\n> 0) is the same as not specifying timeout at all?\n>\n\nYes, that sounds reasonable to me.\n\nReview comments:\n--------------------------\n1.\n+/*\n+ * Delete wait event of the current backend from the shared memory array.\n+ *\n+ * TODO: Consider state cleanup on backend failure.\n+ * Check:\n+ * 1) nomal|smart|fast|immediate stop\n+ * 2) SIGKILL and SIGTERM\n+ */\n+static void\n+DeleteEvent(void)\n\nI don't see how this is implemented or called to handle any errors.\nFor example, in function WaitUtility, if WaitLatch errors out due to\nany error, then the event won't be deleted. I think we can't assume\nWaitLatch or any other code in this code path will never error out.\nFor ex. WaitLatch---->WaitEventSetWaitBlock() can error out. Also, in\nthe future we can add more code which can error out.\n\n2.\n+ /*\n+ * If received an interruption from CHECK_FOR_INTERRUPTS,\n+ * then delete the current event from array.\n+ */\n+ if (InterruptPending)\n+ {\n+ DeleteEvent();\n+ ProcessInterrupts();\n+ }\n\nWe generally do this type of handling via CHECK_FOR_INTERRUPTS. One\nreason is that it behaves slightly differently on Windows. I am not\nsure why we want to do it differently here. This looks quite ad hoc to me\nand may not be correct. If we handle this event in the error path, then\nwe might not need to do some special handling.\n\n3.\n+/*\n+ * On WAIT use a latch to wait till LSN is replayed,\n+ * postmaster dies or timeout happens.\n+ * Returns 1 if LSN was reached and 0 otherwise.\n+ */\n+int\n+WaitUtility(XLogRecPtr target_lsn, const float8 secs)\n\nIsn't it better to have a return value as bool? 
IOW, why does this\nfunction need int as its return value?\n\n4.\n+#define GetNowFloat() ((float8) GetCurrentTimestamp() / 1000000.0)\n\nThis same define is used elsewhere in the code as well; maybe we can\ndefine it in some central place and use it.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Apr 2020 15:28:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On Tue, Apr 7, 2020 at 5:56 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n>\n> On Tue, Apr 7, 2020 at 12:58 AM Kartyshov Ivan\n> <i.kartyshov@postgrespro.ru> wrote:\n> > On 2020-04-04 03:14, Alexander Korotkov wrote:\n> > > I think that now we would be fine with single LSN and single TIMEOUT.\n> > > In future we may add multiple LSNs/TIMEOUTs or/and support for\n> > > expressions as LSNs/TIMEOUTs if we figure out it's necessary.\n> > >\n> > > I also think that coupling waiting for lsn with the beginning of a\n> > > transaction is a good idea. A separate WAIT FOR LSN statement called in\n> > > the middle of a transaction looks problematic to me. Imagine we have RR\n> > > isolation and already acquired the snapshot. Then our snapshot can\n> > > block applying wal records, which we are waiting for. That would be\n> > > an implicit deadlock. It would be nice to evade such deadlocks by\n> > > design.\n> > Ok, here is a new version of patch with single LSN and TIMEOUT.\n>\n> I think this is quite a small feature, which has already received quite an\n> amount of review. The last version is quite pared down. But I think it would be\n> good to commit some very basic version, which is at least some\n> progress in the area and could be extended in the future. 
I'm going to\n> pass through the code tomorrow and commit this unless I find major\n> issues or somebody objects.\n>\n\nI have gone through this thread and skimmed through the patch and I am\nnot sure if we can say that this patch is ready to go. First, I don't\nthink we have a consensus on the syntax being used in the patch\n(various people didn't agree to the LSN-specific syntax). They wanted a\nmore generic syntax, and I see that we tried to implement it and it\nturned out to be a bit complex, but that doesn't mean we should just give up on\nthe idea and take the simplest approach, and that too without a broader\nagreement. Second, on my quick review, it seems there are a few\nthings, like error handling and interrupt checking, which need more work.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Apr 2020 16:02:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On 2020-04-07 13:32, Amit Kapila wrote:\n> First, I don't\n> think we have a consensus on the syntax being used in the patch\n> (various people didn't agree to the LSN-specific syntax). They wanted a\n> more generic syntax, and I see that we tried to implement it and it\n> turned out to be a bit complex, but that doesn't mean we should just give up on\n> the idea and take the simplest approach, and that too without a broader\n> agreement.\n\nThank you for your comments!\n\nInitially, the syntax used to be \"WAITLSN\", which confined us to only \nwaiting for LSN-s and not anything else. So we switched to \"WAIT FOR \nLSN\", which would allow us to add variations like \"WAIT FOR XID\" or \n\"WAIT FOR COMMIT TOKEN\" in the future if we wanted. 
A few people seemed \nto imply that this kind of syntax is expandable enough:\n\nOn 2018-02-01 14:47, Simon Riggs wrote:\n> I agree that WAIT LSN is good syntax because this allows us to wait\n> for something else in future.\n\nOn 2017-10-31 12:42:56, Ants Aasma wrote:\n> For lack of a better proposal I would like something along the lines \n> of:\n> WAIT FOR state_id[, state_id] [ OPTIONS (..)]\n\nAs for giving up waiting for multiple events: we can only wait for LSN-s \nat the moment, and there seems to be no point in waiting for multiple \nLSN-s at once, because it's equivalent to waiting for the biggest LSN. \nSo we opted for simpler grammar for now, only letting the user specify \none LSN and one TIMEOUT. If in the future we allow waiting for something \nelse, like XID-s, we can expand the grammar as needed.\n\nWhat are your own thoughts on the syntax?\n\n-- \nAnna Akenteva\nPostgres Professional:\nThe Russian Postgres Company\nhttp://www.postgrespro.com\n\n\n", "msg_date": "Tue, 07 Apr 2020 15:07:42 +0300", "msg_from": "Anna Akenteva <a.akenteva@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On Tue, Apr 7, 2020 at 3:07 PM Anna Akenteva <a.akenteva@postgrespro.ru> wrote:\n> On 2017-10-31 12:42:56, Ants Aasma wrote:\n> > For lack of a better proposal I would like something along the lines\n> > of:\n> > WAIT FOR state_id[, state_id] [ OPTIONS (..)]\n>\n> As for giving up waiting for multiple events: we can only wait for LSN-s\n> at the moment, and there seems to be no point in waiting for multiple\n> LSN-s at once, because it's equivalent to waiting for the biggest LSN.\n> So we opted for simpler grammar for now, only letting the user specify\n> one LSN and one TIMEOUT. 
If in the future we allow waiting for something\n> else, like XID-s, we can expand the grammar as needed.\n\n+1\nIn the latest version of the patch we have a very brief and simple syntax\nthat allows waiting for a given LSN with a given timeout. In the future we can\nexpand this syntax in different ways. I don't see the current syntax\nlimiting us in any way.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 7 Apr 2020 15:16:09 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On Tue, Apr 7, 2020 at 5:37 PM Anna Akenteva <a.akenteva@postgrespro.ru> wrote:\n>\n> On 2020-04-07 13:32, Amit Kapila wrote:\n> > First, I don't\n> > think we have a consensus on the syntax being used in the patch\n> > (various people didn't agree to the LSN-specific syntax). They wanted a\n> > more generic syntax, and I see that we tried to implement it and it\n> > turned out to be a bit complex, but that doesn't mean we should just give up on\n> > the idea and take the simplest approach, and that too without a broader\n> > agreement.\n>\n> Thank you for your comments!\n>\n> Initially, the syntax used to be \"WAITLSN\", which confined us to only\n> waiting for LSN-s and not anything else. So we switched to \"WAIT FOR\n> LSN\", which would allow us to add variations like \"WAIT FOR XID\" or\n> \"WAIT FOR COMMIT TOKEN\" in the future if we wanted. 
A few people seemed\n> to imply that this kind of syntax is expandable enough:\n>\n> On 2018-02-01 14:47, Simon Riggs wrote:\n> > I agree that WAIT LSN is good syntax because this allows us to wait\n> > for something else in future.\n>\n> On 2017-10-31 12:42:56, Ants Aasma wrote:\n> > For lack of a better proposal I would like something along the lines\n> > of:\n> > WAIT FOR state_id[, state_id] [ OPTIONS (..)]\n>\n> As for giving up waiting for multiple events: we can only wait for LSN-s\n> at the moment, and there seems to be no point in waiting for multiple\n> LSN-s at once, because it's equivalent to waiting for the biggest LSN.\n> So we opted for simpler grammar for now, only letting the user specify\n> one LSN and one TIMEOUT. If in the future we allow waiting for something\n> else, like XID-s, we can expand the grammar as needed.\n>\n> What are your own thoughts on the syntax?\n>\n\nI don't know how users can specify the LSN value, but OTOH I could see that\nif users can somehow provide the correct value of the commit LSN for which\nthey want to wait, then it could work out. It is possible that I\nmisread and we have a consensus on WAIT FOR LSN [option], because what\nSimon and Ants have proposed includes multiple states/events, and\nit might be fine to have just one event for now.\n\nDoes anyone else want to share an opinion on the syntax?\n\nI think even if we are good with the syntax, I could see that the code is not\ncompletely ready to go, as mentioned in a few comments raised by me. I\nam not sure if we want to commit it in the current form and then\nimprove it after feature freeze. 
If it is possible to fix the loose ends\nquickly and there are no more comments by anyone then probably we\nmight be able to commit it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Apr 2020 18:59:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On 2020-04-07 12:58, Amit Kapila wrote:\n> \n> Review comments:\n> 1.\n> +static void\n> +DeleteEvent(void)\n> I don't see how this is implemented or called to handle any errors.\n> \n> 2.\n> + if (InterruptPending)\n> + {\n> + DeleteEvent();\n> + ProcessInterrupts();\n> + }\n> We generally do this type of handling via CHECK_FOR_INTERRUPTS.\n> \n> 3.\n> +int\n> +WaitUtility(XLogRecPtr target_lsn, const float8 secs)\n> Isn't it better to have a return value as bool?\n> \n> 4.\n> +#define GetNowFloat() ((float8) GetCurrentTimestamp() / 1000000.0)\n> This same define is used elsewhere in the code as well, may be we can\n> define it in some central place and use it.\n\nThank you for your review!\nIvan and I have worked on the patch and tried to address your comments:\n\n0. Now we wake up at least every 100ms to check for interrupts.\n1. Now we call DeleteWaitedLSN() from \nProcessInterrupts()=>LockErrorCleanup(). It seems that we can only exit \nthe WAIT cycle improperly due to interrupts, so this should be enough \n(?)\n2. Now we use CHECK_FOR_INTERRUPTS() instead of ProcessInterrupts()\n3. Now WaitUtility() returns bool rather than int\n4. 
Now GetNowFloat() is only defined in one place in the code\n\nWhat we changed additionally:\n- Prohibited using WAIT FOR LSN on master\n- Added more tests\n- Checked the code with pgindent and adjusted pgindent/typedefs.list\n- Changed min_lsn's type to pg_atomic_uint64 + fixed how we work with \nmutex\n- Code cleanup in wait.[c|h]: cleaned up #include-s, gave better names \nto functions, changed elog() to ereport()\n\n-- \nAnna Akenteva\nPostgres Professional:\nThe Russian Postgres Company\nhttp://www.postgrespro.com", "msg_date": "Tue, 07 Apr 2020 22:58:01 +0300", "msg_from": "Anna Akenteva <a.akenteva@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On Tue, Apr 7, 2020 at 10:58 PM Anna Akenteva <a.akenteva@postgrespro.ru> wrote:\n> Thank you for your review!\n> Ivan and I have worked on the patch and tried to address your comments:\n\nI've pushed this. I promise to do careful post-commit review and\nresolve any issues arising.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 7 Apr 2020 23:55:56 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On 2020-04-08 00:27, Tom Lane wrote:\n> Alexander Korotkov <akorotkov@postgresql.org> writes:\n>> WAIT FOR LSN lsn [ TIMEOUT timeout ]\n> \n> This seems like a really carelessly chosen syntax --- *three* new\n> keywords, when you probably didn't need any. 
Are you not aware that\n> there is distributed overhead in the grammar for every keyword?\n> Plus, each new keyword carries the risk of breaking existing\n> applications, since it no longer works as an alias-not-preceded-by-AS.\n> \n\nTo avoid creating new keywords, we could change the syntax in the following \nway:\nWAIT FOR => DEPENDS ON\nLSN => EVENT\nTIMEOUT => WITH INTERVAL\n\nSo\nSTART TRANSACTION WAIT FOR LSN '0/3F07A6B1' TIMEOUT 5000;\nwould instead look like\nSTART TRANSACTION DEPENDS ON EVENT '0/3F07A6B1' WITH INTERVAL '5 \nseconds';\n\n[1] \nhttps://www.postgresql.org/message-id/28209.1586294824%40sss.pgh.pa.us\n\n-- \nIvan Kartyshov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 08 Apr 2020 02:14:48 +0300", "msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On Wed, Apr 8, 2020 at 2:14 AM Kartyshov Ivan\n<i.kartyshov@postgrespro.ru> wrote:\n> On 2020-04-08 00:27, Tom Lane wrote:\n> > Alexander Korotkov <akorotkov@postgresql.org> writes:\n> >> WAIT FOR LSN lsn [ TIMEOUT timeout ]\n> >\n> > This seems like a really carelessly chosen syntax --- *three* new\n> > keywords, when you probably didn't need any. Are you not aware that\n> > there is distributed overhead in the grammar for every keyword?\n> > Plus, each new keyword carries the risk of breaking existing\n> > applications, since it no longer works as an alias-not-preceded-by-AS.\n> >\n>\n> To avoid creating new keywords, we could change the syntax in the following\n> way:\n> WAIT FOR => DEPENDS ON\n\nLooks OK to me.\n\n> LSN => EVENT\n\nI think it's too generic. Not every event is an lsn. TBH, lsn is not\nan event at all :)\n\nI wonder if we can still use the word lsn, but without making it a keyword.\nCan we take an arbitrary non-quoted literal there and check it later?\n\n> TIMEOUT => WITH INTERVAL\n\nI'm not yet sure about this. 
Probably there are better options.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 8 Apr 2020 02:52:55 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "At Wed, 8 Apr 2020 02:52:55 +0300, Alexander Korotkov <a.korotkov@postgrespro.ru> wrote in \r\n> On Wed, Apr 8, 2020 at 2:14 AM Kartyshov Ivan\r\n> <i.kartyshov@postgrespro.ru> wrote:\r\n> > On 2020-04-08 00:27, Tom Lane wrote:\r\n> > > Alexander Korotkov <akorotkov@postgresql.org> writes:\r\n> > >> WAIT FOR LSN lsn [ TIMEOUT timeout ]\r\n> > >\r\n> > > This seems like a really carelessly chosen syntax --- *three* new\r\n> > > keywords, when you probably didn't need any. Are you not aware that\r\n> > > there is distributed overhead in the grammar for every keyword?\r\n> > > Plus, each new keyword carries the risk of breaking existing\r\n> > > applications, since it no longer works as an alias-not-preceded-by-AS.\r\n> > >\r\n> >\r\n> > To avoid creating new keywords, we could change the syntax in the following\r\n> > way:\r\n> > WAIT FOR => DEPENDS ON\r\n> \r\n> Looks OK to me.\r\n> \r\n> > LSN => EVENT\r\n> \r\n> I think it's too generic. Not every event is an lsn. TBH, lsn is not\r\n> an event at all :)\r\n> \r\n> I wonder if we can still use the word lsn, but without making it a keyword.\r\n> Can we take an arbitrary non-quoted literal there and check it later?\r\n> \r\n> > TIMEOUT => WITH INTERVAL\r\n> \r\n> I'm not yet sure about this. 
Probably there are better options.\r\n\r\nHow about something like the following?\r\n\r\nBEGIN AFTER ColId Sconst\r\nBEGIN FOLLOWING ColId Sconst\r\n\r\nUNTIL <absolute time>;\r\nLIMIT BY <interval>;\r\nWITHIN Iconst;\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Wed, 08 Apr 2020 10:09:45 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On 2020-04-08 04:09, Kyotaro Horiguchi wrote:\n> How about something like the following?\n> \n> BEGIN AFTER ColId Sconst\n> BEGIN FOLLOWING ColId Sconst\n> \n> UNTIL <absolute time>;\n> LIMIT BY <interval>;\n> WITHIN Iconst;\n> \n> regards.\n\nI like your suggested keywords! I think that \"AFTER\" + \"WITHIN\" sound \nthe most natural. We could completely give up the LSN keyword for now. \nThe final command could look something like:\n\nBEGIN AFTER '0/303EC60' WITHIN '5 seconds';\nor\nBEGIN AFTER '0/303EC60' WITHIN 5000;\n\nI'd like to hear others' opinions on the syntax as well.\n\n-- \nAnna Akenteva\nPostgres Professional:\nThe Russian Postgres Company\nhttp://www.postgrespro.com\n\n\n", "msg_date": "Wed, 08 Apr 2020 22:36:28 +0300", "msg_from": "Anna Akenteva <a.akenteva@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "Anna Akenteva <a.akenteva@postgrespro.ru> writes:\n> I'd like to hear others' opinions on the syntax as well.\n\nPardon me for coming very late to the party, but it seems like there are\nother questions that ought to be answered before we worry about any of\nthis. Why is this getting grafted onto BEGIN/START TRANSACTION in the\nfirst place? It seems like it would be just as useful as a separate\ncommand, if not more so. You could always start a transaction just\nafter waiting. 
Conversely, there might be reasons to want to wait\nwithin an already-started transaction.\n\nIf it could survive as a separate command, then I'd humbly suggest\nthat it requires no grammar work at all. You could just invent one\nor more functions that take suitable parameters.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Apr 2020 16:35:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "At Wed, 08 Apr 2020 16:35:46 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Anna Akenteva <a.akenteva@postgrespro.ru> writes:\n> > I'd like to hear others' opinions on the syntax as well.\n> \n> Pardon me for coming very late to the party, but it seems like there are\n> other questions that ought to be answered before we worry about any of\n> this. Why is this getting grafted onto BEGIN/START TRANSACTION in the\n> first place? It seems like it would be just as useful as a separate\n> command, if not more so. You could always start a transaction just\n> after waiting. Conversely, there might be reasons to want to wait\n> within an already-started transaction.\n> \n> If it could survive as a separate command, then I'd humbly suggest\n> that it requires no grammar work at all. You could just invent one\n> or more functions that take suitable parameters.\n\nThe rationale for not being a fmgr function is stated in the following\ncomments.\n\nhttps://www.postgresql.org/message-id/CAEepm%3D0V74EApmfv%3DMGZa24Ac_pV1vGrp3Ovnv-3rUXwxu9epg%40mail.gmail.com\n| because it doesn't work for our 2 higher isolation levels as\n| mentioned.\"\n\nhttps://www.postgresql.org/message-id/CA%2BTgmob-aG3Lqh6OpvMDYTNR5eyq94VugyEejyk7pLhE9uwnyA%40mail.gmail.com\n\n| IMHO, trying to do this using a function-based interface is a really\n| bad idea for exactly the reasons you mention. 
I don't see why we'd\n| resist the idea of core syntax here; transactions are a core part of\n| PostgreSQL.\n\nIt seemed to me that they suggested it be a part of the BEGIN\ncommand, but the next proposed patch implemented a \"WAIT FOR\" command\nfor reasons that are unclear to me. I don't object to the separate command if\nit is more useful than being a part of the BEGIN command.\n\nBy the way, for example, pg_current_wal_lsn() is a volatile function\nand repeated calls within a SERIALIZABLE transaction can return\ndifferent values.\n\nIf there's no necessity for this feature to be a core command, I think\nI would like it to be a function.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 09 Apr 2020 16:11:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "\n\nOn 2020/04/09 16:11, Kyotaro Horiguchi wrote:\n> At Wed, 08 Apr 2020 16:35:46 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n>> Anna Akenteva <a.akenteva@postgrespro.ru> writes:\n>>> I'd like to hear others' opinions on the syntax as well.\n>>\n>> Pardon me for coming very late to the party, but it seems like there are\n>> other questions that ought to be answered before we worry about any of\n>> this. Why is this getting grafted onto BEGIN/START TRANSACTION in the\n>> first place? It seems like it would be just as useful as a separate\n>> command, if not more so. You could always start a transaction just\n>> after waiting. Conversely, there might be reasons to want to wait\n>> within an already-started transaction.\n>>\n>> If it could survive as a separate command, then I'd humbly suggest\n>> that it requires no grammar work at all. 
You could just invent one\n>> or more functions that take suitable parameters.\n> \n> The rationale for not being a fmgr function is stated in the following\n> comments.\n\nThis issue happens because the function is executed after BEGIN? If yes,\nwhat about executing the function (i.e., as separate transaction) before BEGIN?\nIf so, the snapshot taken in the function doesn't affect the subsequent\ntransaction whatever its isolation level is.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 9 Apr 2020 16:35:22 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> On 2020/04/09 16:11, Kyotaro Horiguchi wrote:\n>> At Wed, 08 Apr 2020 16:35:46 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n>>> Why is this getting grafted onto BEGIN/START TRANSACTION in the\n>>> first place?\n\n>> The rationale for not being a fmgr function is stated in the following\n>> comments. [...]\n\n> This issue happens because the function is executed after BEGIN? 
If yes,\n> what about executing the function (i.e., as separate transaction) before BEGIN?\n> If so, the snapshot taken in the function doesn't affect the subsequent\n> transaction whatever its isolation level is.\n\nI wonder whether making it a procedure, rather than a plain function,\nwould help any.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Apr 2020 09:33:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On 2020-04-09 16:33, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> On 2020/04/09 16:11, Kyotaro Horiguchi wrote:\n>>> At Wed, 08 Apr 2020 16:35:46 -0400, Tom Lane <tgl@sss.pgh.pa.us> \n>>> wrote in\n>>>> Why is this getting grafted onto BEGIN/START TRANSACTION in the\n>>>> first place?\n> \n>>> The rationale for not being a fmgr function is stated in the \n>>> following\n>>> comments. [...]\n> \n>> This issue happens because the function is executed after BEGIN? If \n>> yes,\n>> what about executing the function (i.e., as separate transaction) \n>> before BEGIN?\n>> If so, the snapshot taken in the function doesn't affect the \n>> subsequent\n>> transaction whatever its isolation level is.\n> \n> I wonder whether making it a procedure, rather than a plain function,\n> would help any.\n> \n\nJust another idea, in case one still decides to go with a separate \nstatement + BEGIN integration instead of a function. We could use \na parenthesized options list here. This is already implemented for VACUUM, \nREINDEX, etc. 
There was an idea to allow CONCURRENTLY in REINDEX there \n[1] and recently this was proposed again for new options [2], since it \nis much more extensible from the grammar perspective.\n\nThat way, the whole feature may look like:\n\nWAIT (LSN '16/B374D848', TIMEOUT 100);\n\nand/or\n\nBEGIN\nWAIT (LSN '16/B374D848', WHATEVER_OPTION_YOU_WANT);\n...\nCOMMIT;\n\nIt requires only one reserved keyword 'WAIT'. The advantage of this \napproach is that it can be extended to support xid, timestamp, csn or \nanything else that may be invented in the future, without affecting the \ngrammar.\n\nWhat do you think?\n\nPersonally, I find this syntax to be more convenient and human-readable \ncompared with a function call:\n\nSELECT pg_wait_for_lsn('16/B374D848');\nBEGIN;\n\n\n[1] \nhttps://www.postgresql.org/message-id/aad2ec49-5142-7356-ffb2-a9b2649cdd1f%402ndquadrant.com\n\n[2] \nhttps://www.postgresql.org/message-id/20200401060334.GB142683%40paquier.xyz\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Thu, 09 Apr 2020 21:16:05 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "\n\nOn 2020/04/10 3:16, Alexey Kondratov wrote:\n> On 2020-04-09 16:33, Tom Lane wrote:\n>> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>>> On 2020/04/09 16:11, Kyotaro Horiguchi wrote:\n>>>> At Wed, 08 Apr 2020 16:35:46 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n>>>>> Why is this getting grafted onto BEGIN/START TRANSACTION in the\n>>>>> first place?\n>>\n>>>> The rationale for not being a fmgr function is stated in the following\n>>>> comments. [...]\n>>\n>>> This issue happens because the function is executed after BEGIN? 
If yes,\n>>> what about executing the function (i.e., as separate transaction) before BEGIN?\n>>> If so, the snapshot taken in the function doesn't affect the subsequent\n>>> transaction whatever its isolation level is.\n>>\n>> I wonder whether making it a procedure, rather than a plain function,\n>> would help any.\n>>\n> \n> Just another idea in case if one will still decide to go with a separate statement + BEGIN integration instead of a function. We could use parenthesized options list here. This is already implemented for VACUUM, REINDEX, etc. There was an idea to allow CONCURRENTLY in REINDEX there [1] and recently this was proposed again for new options [2], since it is much more extensible from the grammar perspective.\n> \n> That way, the whole feature may look like:\n> \n> WAIT (LSN '16/B374D848', TIMEOUT 100);\n> \n> and/or\n> \n> BEGIN\n> WAIT (LSN '16/B374D848', WHATEVER_OPTION_YOU_WANT);\n> ...\n> COMMIT;\n> \n> It requires only one reserved keyword 'WAIT'. The advantage of this approach is that it can be extended to support xid, timestamp, csn or anything else, that may be invented in the future, without affecting the grammar.\n> \n> What do you think?\n> \n> Personally, I find this syntax to be more convenient and human-readable compared with function call:\n> \n> SELECT pg_wait_for_lsn('16/B374D848');\n> BEGIN;\n\nI can imagine that some users want to specify the LSN to wait for,\nfrom the result of another query, for example,\nSELECT pg_wait_for_lsn(lsn) FROM xxx. 
If this is a valid use case,\nisn't the function better?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 10 Apr 2020 11:25:02 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On 2020-04-10 05:25, Fujii Masao wrote:\n> On 2020/04/10 3:16, Alexey Kondratov wrote:\n>> Just another idea, in case one still decides to go with a \n>> separate statement + BEGIN integration instead of a function. We could \n>> use a parenthesized options list here. This is already implemented for \n>> VACUUM, REINDEX, etc. There was an idea to allow CONCURRENTLY in \n>> REINDEX there [1] and recently this was proposed again for new options \n>> [2], since it is much more extensible from the grammar perspective.\n>> \n>> That way, the whole feature may look like:\n>> \n>> WAIT (LSN '16/B374D848', TIMEOUT 100);\n>> \n>> and/or\n>> \n>> BEGIN\n>> WAIT (LSN '16/B374D848', WHATEVER_OPTION_YOU_WANT);\n>> ...\n>> COMMIT;\n>> \n>> It requires only one reserved keyword 'WAIT'. The advantage of this \n>> approach is that it can be extended to support xid, timestamp, csn or \n>> anything else that may be invented in the future, without affecting \n>> the grammar.\n>> \n>> What do you think?\n>> \n>> Personally, I find this syntax to be more convenient and \n>> human-readable compared with a function call:\n>> \n>> SELECT pg_wait_for_lsn('16/B374D848');\n>> BEGIN;\n> \n> I can imagine that some users want to specify the LSN to wait for,\n> from the result of another query, for example,\n> SELECT pg_wait_for_lsn(lsn) FROM xxx. If this is a valid use case,\n> isn't the function better?\n> \n\nI think that the main purpose of the feature is to achieve \nread-your-writes consistency while using an async replica for reads. 
In \nthat case the lsn of the last modification is stored inside the application, so \nthere is no need to do any query for that. Moreover, you cannot store \nthis lsn inside the database, since reads are distributed across all \nreplicas (+ primary).\n\nThus, I could imagine that 'xxx' in your example stands for some kind of \nstored procedure that fetches the lsn from off-postgres storage, but it \nlooks like a very narrow case to count on, doesn't it?\n\nAnyway, I am not against implementing this as a function. That was just \nanother option to consider.\n\nJust realized that the last patch I have seen does not allow using \nwait on the primary. It may be a problem if reads are pooled not only across \nreplicas, but on the primary as well, which should be quite usual I guess. \nIn that case the application does not know whether the request will be processed \non a replica or on the primary. I think it should be allowed without any \nwarning, or with a LOG/DEBUG message at most saying that there was no \nwaiting performed.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Fri, 10 Apr 2020 18:08:59 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "Hi,\n\nOn 2020-04-10 11:25:02 +0900, Fujii Masao wrote:\n> > BEGIN\n> > WAIT (LSN '16/B374D848', WHATEVER_OPTION_YOU_WANT);\n> > ...\n> > COMMIT;\n> > \n> > It requires only one reserved keyword 'WAIT'. 
The advantage of this approach is that it can be extended to support xid, timestamp, csn or anything else, that may be invented in the future, without affecting the grammar.\n> > \n> > What do you think?\n> > \n> > Personally, I find this syntax to be more convenient and human-readable compared with function call:\n> > \n> > SELECT pg_wait_for_lsn('16/B374D848');\n> > BEGIN;\n> \n> I can imagine that some users want to specify the LSN to wait for,\n> from the result of another query, for example,\n> SELECT pg_wait_for_lsn(lsn) FROM xxx. If this is valid use case,\n> isn't the function better?\n\nI don't think a function is a good idea - it'll cause a snapshot to be\nheld while waiting. Which in turn will cause hot_standby_feedback to not\nbe able to report an increased xmin up. And it will possibly hit\nsnapshot recovery conflicts.\n\nWhereas explicit syntax, especially if a transaction control statement,\nwon't have that problem.\n\nI'd personally look at 'AFTER' instead of 'WAIT'. Upthread you talked\nabout a reserved keyword - why does it have to be reserved?\n\n\nFWIW, I'm not really convinced there needs to be bespoke timeout syntax\nfor this feature. I can see reasons why you'd not just want to rely on\nstatement_timeout, but at least that should be discussed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Apr 2020 12:33:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I don't think a function is a good idea - it'll cause a snapshot to be\n> held while waiting. Which in turn will cause hot_standby_feedback to not\n> be able to report an increased xmin up. 
And it will possibly hit\n> snapshot recovery conflicts.\n\nGood point, but we could address that by making it a procedure no?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Apr 2020 16:29:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "Hi,\n\nOn 2020-04-10 16:29:39 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I don't think a function is a good idea - it'll cause a snapshot to be\n> > held while waiting. Which in turn will cause hot_standby_feedback to not\n> > be able to report an increased xmin up. And it will possibly hit\n> > snapshot recovery conflicts.\n>\n> Good point, but we could address that by making it a procedure no?\n\nProbably. Don't think we have great infrastructure for builtin\nprocedures yet though? We'd presumably not want to use plpgsql.\n\nISTM that we can make it BEGIN AFTER 'xx/xx' or such, which'd not\nrequire any keywords, it'd be easier to use than a procedure.\n\nWith a separate procedure, you'd likely need more roundtrips / complex\nlogic at the client. You either need to check first if the procedure\nerrored ou, and then send the BEGIN, or send both together and separate\nout potential errors.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Apr 2020 14:06:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-04-10 16:29:39 -0400, Tom Lane wrote:\n>> Good point, but we could address that by making it a procedure no?\n\n> Probably. Don't think we have great infrastructure for builtin\n> procedures yet though? We'd presumably not want to use plpgsql.\n\nDon't think anyone's tried yet. 
It's not instantly clear that the\namount of code needed would be more than comes along with new\nsyntax, though.\n\n> ISTM that we can make it BEGIN AFTER 'xx/xx' or such, which'd not\n> require any keywords, it'd be easier to use than a procedure.\n\nI still don't see a good argument for tying this to BEGIN. If it\nhas to be a statement, why not a standalone statement?\n\n(I also have a lurking suspicion that this shouldn't be SQL at all\nbut part of the replication command set.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 Apr 2020 17:17:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "Hi,\n\nOn 2020-04-10 17:17:10 -0400, Tom Lane wrote:\n> > ISTM that we can make it BEGIN AFTER 'xx/xx' or such, which'd not\n> > require any keywords, it'd be easier to use than a procedure.\n> \n> I still don't see a good argument for tying this to BEGIN. If it\n> has to be a statement, why not a standalone statement?\n\nBecause the goal is to start a transaction where a certain action from\nthe primary is visible.\n\nI think there's also some advantages of having it in a single statement\nfor poolers. If a pooler analyzes BEGIN AFTER 'xx/xx' it could\ne.g. redirect the transaction to a node that's caught up far enough,\ninstead of blocking. But that can't work even close to as easily if it's\nsomething that has to be executed before transaction begin.\n\n\n> (I also have a lurking suspicion that this shouldn't be SQL at all\n> but part of the replication command set.)\n\nHm? I'm not quite following. The feature is useful to achieve\nread-your-own-writes consistency. 
Consider\n\nPrimary: INSERT INTO app_users ...; SELECT pg_current_wal_lsn();\nStandby: BEGIN AFTER 'returned/lsn';\nStandby: SELECT i_am_a_set_of_very_expensive_queries FROM ..., app_users;\n\nwithout the AFTER/WAIT whatnot, you cannot rely on the insert having\nbeen replicated to the standby.\n\nOffloading queries from the write node to replicas is a pretty standard\ntechnique for scaling out databases (including PG). We just make it\nharder than necessary.\n\nHow would this be part of the replication command set? This shouldn't\nrequire replication permissions for the user executing the queries.\nWhile I'm in favor of merging the replication protocol entirely with the\nnormal protocol, I've so far received very little support for that\nproposition...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Apr 2020 14:44:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On 2020-04-11 00:44, Andres Freund wrote:\n> I think there's also some advantages of having it in a single statement\n> for poolers. If a pooler analyzes BEGIN AFTER 'xx/xx' it could\n> e.g. redirect the transaction to a node that's caught up far enough,\n> instead of blocking. But that can't work even close to as easily if \n> it's\n> something that has to be executed before transaction begin.\n> \n\nI think that's a good point.\n\nAlso, I'm not sure how we'd expect a wait-for-LSN procedure to work \ninside a single-snapshot transaction. Would it throw an error inside a \nRR transaction block? 
Would it give a warning?\n\n-- \nAnna Akenteva\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 14 Apr 2020 12:52:07 +0300", "msg_from": "Anna Akenteva <a.akenteva@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "This patch require some rewording of documentation/comments and variable names\nafter the language change introduced by 229f8c219f8f..a9a4a7ad565b, the thread\nbelow can be used as reference for how to change:\n\nhttps://www.postgresql.org/message-id/flat/20200615182235.x7lch5n6kcjq4aue%40alap3.anarazel.de\n\ncheers ./daniel\n\n\n\n\n", "msg_date": "Mon, 13 Jul 2020 13:21:25 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On 2020-07-13 14:21, Daniel Gustafsson wrote:\n> This patch require some rewording of documentation/comments and \n> variable names\n> after the language change introduced by 229f8c219f8f..a9a4a7ad565b, the \n> thread\n> below can be used as reference for how to change:\n> \n> https://www.postgresql.org/message-id/flat/20200615182235.x7lch5n6kcjq4aue%40alap3.anarazel.de\n> \n\nThank you for the heads up!\n\nI updated the most recent patch and removed the use of \"master\" from it, \nreplacing it with \"primary\".\n\n-- \nAnna Akenteva\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 18 Aug 2020 13:12:51 +0300", "msg_from": "Anna Akenteva <a.akenteva@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On Tue, Aug 18, 2020 at 01:12:51PM +0300, Anna Akenteva wrote:\n> I updated the most recent patch and removed the use of \"master\" from it,\n> replacing it with \"primary\".\n\nThis is failing to apply lately, causing the CF 
bot to complain:\nhttp://cfbot.cputube.org/patch_29_772.log\n--\nMichael", "msg_date": "Thu, 24 Sep 2020 13:51:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "Anna Akenteva писал 2020-04-08 22:36:\n> On 2020-04-08 04:09, Kyotaro Horiguchi wrote:\n> \n> I like your suggested keywords! I think that \"AFTER\" + \"WITHIN\" sound\n> the most natural. We could completely give up the LSN keyword for now.\n> The final command could look something like:\n> \n> BEGIN AFTER ‘0/303EC60’ WITHIN '5 seconds';\n> or\n> BEGIN AFTER ‘0/303EC60’ WITHIN 5000;\n\n\nHello,\n\nI've changed the syntax of the command from BEGIN [ WAIT FOR LSN value [ \nTIMEOUT delay ]] to BEGIN [ AFTER value [ WITHIN delay ]] and removed \nall the unnecessary keywords.\n\nBest regards,\nAlexandra Pervushina.", "msg_date": "Fri, 02 Oct 2020 15:02:33 +0300", "msg_from": "a.pervushina@postgrespro.ru", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "Hello,\n\nI've changed the BEGIN WAIT FOR LSN statement to core functions \npg_waitlsn, pg_waitlsn_infinite and pg_waitlsn_no_wait.\nCurrently the functions work inside repeatable read transactions, but \nwaitlsn creates a snapshot if called first in a transaction block, which \ncan possibly lead the transaction to working incorrectly, so the \nfunction gives a warning.\n\nUsage examples\n==========\nselect pg_waitlsn(‘LSN’, timeout);\nselect pg_waitlsn_infinite(‘LSN’);\nselect pg_waitlsn_no_wait(‘LSN’);", "msg_date": "Wed, 18 Nov 2020 15:05:00 +0300", "msg_from": "a.pervushina@postgrespro.ru", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "Hello.\r\n\r\nAt Wed, 18 Nov 2020 15:05:00 +0300, a.pervushina@postgrespro.ru wrote in \r\n> I've changed the BEGIN WAIT FOR LSN statement to 
core functions\r\n> pg_waitlsn, pg_waitlsn_infinite and pg_waitlsn_no_wait.\r\n> Currently the functions work inside repeatable read transactions, but\r\n> waitlsn creates a snapshot if called first in a transaction block,\r\n> which can possibly lead the transaction to working incorrectly, so the\r\n> function gives a warning.\r\n\r\nAccording to the discuttion here, implementing as functions is not\r\noptimal. As a Poc, I made it as a procedure. However I'm not sure it\r\nis the correct implement as a native procedure but it seems working as\r\nexpected.\r\n\r\n> Usage examples\r\n> ==========\r\n> select pg_waitlsn(‘LSN’, timeout);\r\n> select pg_waitlsn_infinite(‘LSN’);\r\n> select pg_waitlsn_no_wait(‘LSN’);\r\n\r\nThe first and second usage is coverd by a single procedure. The last\r\nfunction is equivalent to pg_last_wal_replay_lsn(). As the result, the\r\nfollowing procedure is provided in the attached.\r\n\r\npg_waitlsn(wait_lsn pg_lsn, timeout integer DEFAULT -1)\r\n\r\nAny opinions mainly compared to implementation as a command?\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center", "msg_date": "Thu, 21 Jan 2021 17:30:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On Thu, Jan 21, 2021 at 1:30 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> Hello.\n>\n> At Wed, 18 Nov 2020 15:05:00 +0300, a.pervushina@postgrespro.ru wrote in\n> > I've changed the BEGIN WAIT FOR LSN statement to core functions\n> > pg_waitlsn, pg_waitlsn_infinite and pg_waitlsn_no_wait.\n> > Currently the functions work inside repeatable read transactions, but\n> > waitlsn creates a snapshot if called first in a transaction block,\n> > which can possibly lead the transaction to working incorrectly, so the\n> > function gives a warning.\n>\n> According to the discuttion here, implementing as functions 
is not\n> optimal. As a Poc, I made it as a procedure. However I'm not sure it\n> is the correct implement as a native procedure but it seems working as\n> expected.\n>\n> > Usage examples\n> > ==========\n> > select pg_waitlsn(‘LSN’, timeout);\n> > select pg_waitlsn_infinite(‘LSN’);\n> > select pg_waitlsn_no_wait(‘LSN’);\n>\n> The first and second usage is coverd by a single procedure. The last\n> function is equivalent to pg_last_wal_replay_lsn(). As the result, the\n> following procedure is provided in the attached.\n>\n> pg_waitlsn(wait_lsn pg_lsn, timeout integer DEFAULT -1)\n>\n> Any opinions mainly compared to implementation as a command?\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n\nThe patch (pg_waitlsn_v10_2_kh.patch) does not compile successfully and has\ncompilation errors. Can you please take a look?\n\nhttps://cirrus-ci.com/task/6241565996744704\n\nxlog.c:45:10: fatal error: commands/wait.h: No such file or directory\n#include \"commands/wait.h\"\n^~~~~~~~~~~~~~~~~\ncompilation terminated.\nmake[4]: *** [<builtin>: xlog.o] Error 1\nmake[4]: *** Waiting for unfinished jobs....\nmake[3]: *** [../../../src/backend/common.mk:39: transam-recursive] Error 2\nmake[2]: *** [common.mk:39: access-recursive] Error 2\nmake[1]: *** [Makefile:42: all-backend-recurse] Error 2\nmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n\nI am changing the status to \"Waiting on Author\"\n\n\n\n\n-- \nIbrar Ahmed\n\nOn Thu, Jan 21, 2021 at 1:30 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:Hello.\n\nAt Wed, 18 Nov 2020 15:05:00 +0300, a.pervushina@postgrespro.ru wrote in \n> I've changed the BEGIN WAIT FOR LSN statement to core functions\n> pg_waitlsn, pg_waitlsn_infinite and pg_waitlsn_no_wait.\n> Currently the functions work inside repeatable read transactions, but\n> waitlsn creates a snapshot if called first in a transaction block,\n> which can possibly lead the transaction to working incorrectly, so the\n> function gives a 
warning.\n\nAccording to the discuttion here, implementing as functions is not\noptimal. As a Poc, I made it as a procedure. However I'm not sure it\nis the correct implement as a native procedure but it seems working as\nexpected.\n\n> Usage examples\n> ==========\n> select pg_waitlsn(‘LSN’, timeout);\n> select pg_waitlsn_infinite(‘LSN’);\n> select pg_waitlsn_no_wait(‘LSN’);\n\nThe first and second usage is coverd by a single procedure. The last\nfunction is equivalent to pg_last_wal_replay_lsn(). As the result, the\nfollowing procedure is provided in the attached.\n\npg_waitlsn(wait_lsn pg_lsn, timeout integer DEFAULT -1)\n\nAny opinions mainly compared to implementation as a command?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\nThe patch (pg_waitlsn_v10_2_kh.patch) does not compile successfully and has compilation errors. Can you please take a look?https://cirrus-ci.com/task/6241565996744704xlog.c:45:10: fatal error: commands/wait.h: No such file or directory#include \"commands/wait.h\"^~~~~~~~~~~~~~~~~compilation terminated.make[4]: *** [<builtin>: xlog.o] Error 1make[4]: *** Waiting for unfinished jobs....make[3]: *** [../../../src/backend/common.mk:39: transam-recursive] Error 2make[2]: *** [common.mk:39: access-recursive] Error 2make[1]: *** [Makefile:42: all-backend-recurse] Error 2make: *** [GNUmakefile:11: all-src-recurse] Error 2I am changing the status to  \"Waiting on Author\"-- Ibrar Ahmed", "msg_date": "Thu, 18 Mar 2021 18:57:15 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "At Thu, 18 Mar 2021 18:57:15 +0500, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote in \r\n> On Thu, Jan 21, 2021 at 1:30 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\r\n> wrote:\r\n> \r\n> > Hello.\r\n> >\r\n> > At Wed, 18 Nov 2020 15:05:00 +0300, a.pervushina@postgrespro.ru wrote in\r\n> > > I've changed the BEGIN WAIT FOR 
LSN statement to core functions\r\n> > > pg_waitlsn, pg_waitlsn_infinite and pg_waitlsn_no_wait.\r\n> > > Currently the functions work inside repeatable read transactions, but\r\n> > > waitlsn creates a snapshot if called first in a transaction block,\r\n> > > which can possibly lead the transaction to working incorrectly, so the\r\n> > > function gives a warning.\r\n> >\r\n> > According to the discuttion here, implementing as functions is not\r\n> > optimal. As a Poc, I made it as a procedure. However I'm not sure it\r\n> > is the correct implement as a native procedure but it seems working as\r\n> > expected.\r\n> >\r\n> > > Usage examples\r\n> > > ==========\r\n> > > select pg_waitlsn(‘LSN’, timeout);\r\n> > > select pg_waitlsn_infinite(‘LSN’);\r\n> > > select pg_waitlsn_no_wait(‘LSN’);\r\n> >\r\n> > The first and second usage is coverd by a single procedure. The last\r\n> > function is equivalent to pg_last_wal_replay_lsn(). As the result, the\r\n> > following procedure is provided in the attached.\r\n> >\r\n> > pg_waitlsn(wait_lsn pg_lsn, timeout integer DEFAULT -1)\r\n> >\r\n> > Any opinions mainly compared to implementation as a command?\r\n> >\r\n> > regards.\r\n> >\r\n> > --\r\n> > Kyotaro Horiguchi\r\n> > NTT Open Source Software Center\r\n> >\r\n> \r\n> The patch (pg_waitlsn_v10_2_kh.patch) does not compile successfully and has\r\n> compilation errors. 
Can you please take a look?\r\n> \r\n> https://cirrus-ci.com/task/6241565996744704\r\n> \r\n> xlog.c:45:10: fatal error: commands/wait.h: No such file or directory\r\n> #include \"commands/wait.h\"\r\n> ^~~~~~~~~~~~~~~~~\r\n> compilation terminated.\r\n> make[4]: *** [<builtin>: xlog.o] Error 1\r\n> make[4]: *** Waiting for unfinished jobs....\r\n> make[3]: *** [../../../src/backend/common.mk:39: transam-recursive] Error 2\r\n> make[2]: *** [common.mk:39: access-recursive] Error 2\r\n> make[1]: *** [Makefile:42: all-backend-recurse] Error 2\r\n> make: *** [GNUmakefile:11: all-src-recurse] Error 2\r\n> \r\n> I am changing the status to \"Waiting on Author\"\r\n\r\nAnna is the autor. The \"patch\" was just to show how we can implement\r\nthe feature as a procedure. (Sorry for the bad mistake I made.)\r\n\r\nThe patch still applies to the master. So I resend just rebased\r\nversion as v10_2, and attached the \"PoC\" as *.txt which applies on top\r\nof the patch.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n\n\ndiff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql\nindex 0dca65dc7b..635508639a 100644\n--- a/src/backend/catalog/system_views.sql\n+++ b/src/backend/catalog/system_views.sql\n@@ -1474,6 +1474,10 @@ LANGUAGE internal\n STRICT IMMUTABLE PARALLEL SAFE\n AS 'unicode_is_normalized';\n \n+CREATE OR REPLACE PROCEDURE\n+ pg_waitlsn(wait_lsn pg_lsn, timeout integer DEFAULT -1)\n+ LANGUAGE internal AS 'pg_waitlsn';\n+\n --\n -- The default permissions for functions mean that anyone can execute them.\n -- A number of functions shouldn't be executable by just anyone, but rather\ndiff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat\nindex c11387961e..7f25938cbc 100644\n--- a/src/include/catalog/pg_proc.dat\n+++ b/src/include/catalog/pg_proc.dat\n@@ -11426,4 +11426,8 @@\n proargnames => '{trg_lsn}',\n prosrc => 'pg_waitlsn_no_wait' },\n \n+{ oid => '9313', descr => 'wait 
for LSN to be replayed',\n+ proname => 'pg_waitlsn', prokind => 'p',prorettype => 'void', proargtypes => 'pg_lsn int4',\n+ proargnames => '{wait_lsn,timeout}',\n+ prosrc => 'pg_waitlsn' }\n ]", "msg_date": "Mon, 22 Mar 2021 14:05:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" } ]
[ { "msg_contents": "Currently the pc files use hard coded paths for \"includedir\" and\n\"libdir.\"\n\nExample:\n\n Cflags: -I/usr/include\n Libs: -L/usr/lib -lpq\n\nThis is not very fortunate when cross compiling inside a buildroot,\nwhere the includes and libs are inside a staging directory, because this\nintroduces host paths into the build:\n\n checking for pkg-config... /builder/shared-workdir/build/sdk/staging_dir/host/bin/pkg-config\n checking for PostgreSQL libraries via pkg_config... -L/usr/lib <----\n\nThis commit addresses this by doing the following two things:\n\n 1. Instead of hard coding the paths in \"Cflags\" and \"Libs\"\n \"${includedir}\" and \"${libdir}\" are used. Note: these variables can\n be overridden on the pkg-config command line\n (\"--define-variable=libdir=/some/path\").\n\n 2. Add the variables \"prefix\" and \"exec_prefix\". If \"includedir\"\n and/or \"libdir\" are using these then construct them accordingly.\n This is done because buildroots (for instance OpenWrt) tend to\n rename the real pkg-config and call it indirectly from a script\n that sets \"prefix\", \"exec_prefix\" and \"bindir\", like so:\n\n pkg-config.real --define-variable=prefix=${STAGING_PREFIX} \\\n --define-variable=exec_prefix=${STAGING_PREFIX} \\\n --define-variable=bindir=${STAGING_PREFIX}/bin $@\n\nExample #1: user calls ./configure with \"--libdir=/some/lib\" and\n\"--includedir=/some/include\":\n\n prefix=/usr/local/pgsql\n exec_prefix=/usr/local/pgsql\n includedir=/some/include\n libdir=/some/lib\n\n Name: libpq\n Description: PostgreSQL libpq library\n Url: http://www.postgresql.org/\n Version: 12.1\n Requires:\n Requires.private:\n Cflags: -I${includedir}\n Libs: -L${libdir} -lpq\n Libs.private: -lcrypt -lm\n\nExample #2: user calls ./configure with no arguments:\n\n prefix=/usr/local/pgsql\n exec_prefix=/usr/local/pgsql\n includedir=${prefix}/include\n libdir=${exec_prefix}/lib\n\n Name: libpq\n Description: PostgreSQL libpq library\n Url: http://www.postgresql.org/\n Version: 12.1\n Requires:\n Requires.private:\n Cflags: -I${includedir}\n Libs: -L${libdir} -lpq\n Libs.private: -lcrypt -lm\n\nLike this the paths can be forced into the staging directory when using\na buildroot setup:\n\n checking for pkg-config... /home/sk/tmp/openwrt/staging_dir/host/bin/pkg-config\n checking for PostgreSQL libraries via pkg_config... -L/home/sk/tmp/openwrt/staging_dir/target-mips_24kc_musl/usr/lib\n\nSigned-off-by: Sebastian Kemper <sebastian_ml@gmx.net>\n---\n src/Makefile.shlib | 19 ++++++++++++++++---\n 1 file changed, 16 insertions(+), 3 deletions(-)\n\ndiff --git a/src/Makefile.shlib b/src/Makefile.shlib\nindex 29a7f6d38c..33c23dabdd 100644\n--- a/src/Makefile.shlib\n+++ b/src/Makefile.shlib\n@@ -387,14 +387,27 @@ endif # PORTNAME == cygwin || PORTNAME == win32\n\n\n %.pc: $(MAKEFILE_LIST)\n-\techo 'Name: lib$(NAME)' >$@\n+\techo 'prefix=$(prefix)' >$@\n+\techo 'exec_prefix=$(exec_prefix)' >>$@\n+ifeq ($(patsubst $(prefix)/%,,$(includedir)),)\n+\techo 'includedir=$${prefix}/$(patsubst $(prefix)/%,%,$(includedir))' >>$@\n+else\n+\techo 'includedir=$(includedir)' >>$@\n+endif\n+ifeq ($(patsubst $(exec_prefix)/%,,$(libdir)),)\n+\techo 'libdir=$${exec_prefix}/$(patsubst $(exec_prefix)/%,%,$(libdir))' >>$@\n+else\n+\techo 'libdir=$(libdir)' >>$@\n+endif\n+\techo >>$@\n+\techo 'Name: lib$(NAME)' >>$@\n \techo 'Description: PostgreSQL lib$(NAME) library' >>$@\n \techo 'Url: $(PACKAGE_URL)' >>$@\n \techo 'Version: $(VERSION)' >>$@\n \techo 'Requires: ' >>$@\n \techo 'Requires.private: $(PKG_CONFIG_REQUIRES_PRIVATE)' >>$@\n-\techo 'Cflags: -I$(includedir)' >>$@\n-\techo 'Libs: -L$(libdir) -l$(NAME)' >>$@\n+\techo 'Cflags: -I$${includedir}' >>$@\n+\techo 'Libs: -L$${libdir} -l$(NAME)' >>$@\n # Record -L flags that the user might have passed in to the PostgreSQL\n # build to locate third-party libraries (e.g., ldap, ssl). Filter out\n # those that point inside the build or source tree. Use sort to\n--\n2.24.1\n\n\n\n", "msg_date": "Thu, 5 Mar 2020 22:38:29 +0100", "msg_from": "Sebastian Kemper <sebastian_ml@gmx.net>", "msg_from_op": true, "msg_subject": "[PATCH] Make pkg-config files cross-compile friendly" }, { "msg_contents": "On 05.03.20 22:38, Sebastian Kemper wrote:\n> This commit addresses this by doing the following two things:\n> \n> 1. Instead of hard coding the paths in \"Cflags\" and \"Libs\"\n> \"${includedir}\" and \"${libdir}\" are used. Note: these variables can\n> be overridden on the pkg-config command line\n> (\"--define-variable=libdir=/some/path\").\n> \n> 2. Add the variables \"prefix\" and \"exec_prefix\". If \"includedir\"\n> and/or \"libdir\" are using these then construct them accordingly.\n> This is done because buildroots (for instance OpenWrt) tend to\n> rename the real pkg-config and call it indirectly from a script\n> that sets \"prefix\", \"exec_prefix\" and \"bindir\", like so:\n\nCommitted. I simplified your code a little bit, and I also made it so \nthat exec_prefix is set to ${prefix} by default. That way it matches \nwhat most other .pc files I have found do.\n\n\n", "msg_date": "Fri, 3 Sep 2021 16:58:33 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Make pkg-config files cross-compile friendly" } ]
[ { "msg_contents": "Hi all,\n\nAs of the thread which led to addd034 (please see\nhttps://www.postgresql.org/message-id/E1j9ioh-0005Kn-4O@gemulon.postgresql.org,\nand sorry about that), it happens that we don't have any tests which\nvalidate the internal data checksum implementation present in core as\nof checksum_impl.h. pageinspect includes a SQL-callable function to\ncalculate the checksum of a page, mentioned by David in CC, and only\none test exists to make sure that a checksum is not NULL, but it does\nnot really help if the formula is touched.\n\nAttached is a patch to close the gap by adding new tests to\npageinspect aimed at detecting any formula change. The trick is to\nmake the page data representative enough so as it is possible to\ndetect problems if any part of the formulas are changed, like updates\nof pg_checksum_block or checksumBaseOffsets.\n\nAny thoughts or other ideas?\nThanks,\n--\nMichael", "msg_date": "Fri, 6 Mar 2020 16:52:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "More tests to stress directly checksum_impl.h" }, { "msg_contents": "On 3/6/20 2:52 AM, Michael Paquier wrote:\n> \n> As of the thread which led to addd034 (please see\n> https://www.postgresql.org/message-id/E1j9ioh-0005Kn-4O@gemulon.postgresql.org,\n> and sorry about that), it happens that we don't have any tests which\n> validate the internal data checksum implementation present in core as\n> of checksum_impl.h. pageinspect includes a SQL-callable function to\n> calculate the checksum of a page, mentioned by David in CC, and only\n> one test exists to make sure that a checksum is not NULL, but it does\n> not really help if the formula is touched.\n> \n> Attached is a patch to close the gap by adding new tests to\n> pageinspect aimed at detecting any formula change. 
The trick is to\n> make the page data representative enough so as it is possible to\n> detect problems if any part of the formulas are changed, like updates\n> of pg_checksum_block or checksumBaseOffsets.\n> \n> Any thoughts or other ideas?\n\nThis looks sensible to me. The only downside is that it needs to be in \na contrib test rather than in the core tests, but it is far better than \nnothing.\n\nI'll be interested to see what the build farm thinks of it. Since we \ntreat the page as an array of uint32_t while checksumming it seems that \nendianness will be a factor in the checksum. My guess is that the first \nthree tests (01, 04, FF) will work on any endianness and the last three \ntests will not.\n\nregards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 6 Mar 2020 09:48:50 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: More tests to stress directly checksum_impl.h" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Attached is a patch to close the gap by adding new tests to\n> pageinspect aimed at detecting any formula change. The trick is to\n> make the page data representative enough so as it is possible to\n> detect problems if any part of the formulas are changed, like updates\n> of pg_checksum_block or checksumBaseOffsets.\n> Any thoughts or other ideas?\n\nI wonder whether big-endian machines will compute the same values.\nA quick look at our checksum implementation makes it look like the\nresults will depend on the endianness.\n\nBetween that and the BLCKSZ dependency, it's not clear that we can\ntest this with just a plain old expected-file test case. 
Might\nneed to fall back to a TAP test.\n\nAnother way would be variant output files, which could be a sane\nsolution if we put this in its own test script.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Mar 2020 15:04:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More tests to stress directly checksum_impl.h" }, { "msg_contents": "On Fri, Mar 06, 2020 at 03:04:27PM -0500, Tom Lane wrote:\n> Between that and the BLCKSZ dependency, it's not clear that we can\n> test this with just a plain old expected-file test case. Might\n> need to fall back to a TAP test.\n\nPerhaps the dependency of page.sql on 8kB pages could be improved,\nstill I am not sure either that testing checksums is worth the\ncomplexity of a new TAP test dependent on pageinspect (5a9323e has\nremoved such a dependency recently for example).\n\n> Another way would be variant output files, which could be a sane\n> solution if we put this in its own test script.\n\nAn extra option would be to just choose values which have the same\nordering as long as these are enough to break with changes in the\nformula, as mentioned by David, and add a comment about this\nassumption in the tests. I am not sure either if this option has more\nadvantages than the others, but it has at least the merit to be the\nsimplest one.\n\n(It is kind of hard to find a qemu image with big endian lately?)\n--\nMichael", "msg_date": "Sat, 7 Mar 2020 14:06:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: More tests to stress directly checksum_impl.h" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Mar 06, 2020 at 03:04:27PM -0500, Tom Lane wrote:\n>> Between that and the BLCKSZ dependency, it's not clear that we can\n>> test this with just a plain old expected-file test case. 
Might\n>> need to fall back to a TAP test.\n\n> Perhaps the dependency of page.sql on 8kB pages could be improved,\n> still I am not sure either that testing checksums is worth the\n> complexity of a new TAP test dependent on pageinspect (5a9323e has\n> removed such a dependency recently for example).\n\nYeah, a TAP test is a mighty expensive solution.\n\n>> Another way would be variant output files, which could be a sane\n>> solution if we put this in its own test script.\n\nI think this way could work; see attached.\n\nI'm not sure if it's actually worth providing the variants for non-8K\nblock sizes. While running the tests to construct those, I was reminded\nthat not only do several of the other pageinspect tests \"fail\" at\nnondefault block sizes, but so do the core regression tests and some\nother tests as well. We are a long way from having check-world pass\nwith nondefault block sizes, so maybe this test doesn't need to either.\nHowever, there's something to be said for memorializing the behavior\nwe expect.\n\n> (It is kind of hard to find a qemu image with big endian lately?)\n\nThe boneyard over on my other desk has actual hardware ;-)\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 07 Mar 2020 13:22:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More tests to stress directly checksum_impl.h" }, { "msg_contents": "On 3/7/20 1:22 PM, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> \n>>> Another way would be variant output files, which could be a sane\n>>> solution if we put this in its own test script.\n> \n> I think this way could work; see attached.\n> \n> I'm not sure if it's actually worth providing the variants for non-8K\n> block sizes. While running the tests to construct those, I was reminded\n> that not only do several of the other pageinspect tests \"fail\" at\n> nondefault block sizes, but so do the core regression tests and some\n> other tests as well. 
We are a long way from having check-world pass\n> with nondefault block sizes, so maybe this test doesn't need to either.\n> However, there's something to be said for memorializing the behavior\n> we expect.\n\nNice! Looks like I was wrong about the checksums being the same on le/be \nsystems for repeated byte values. On closer inspection it looks like >> \n17 at least ensures this will not be true.\n\nGood to know.\n\nThanks,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Sat, 7 Mar 2020 13:46:43 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: More tests to stress directly checksum_impl.h" }, { "msg_contents": "On Sat, Mar 07, 2020 at 01:46:43PM -0500, David Steele wrote:\n> Nice! Looks like I was wrong about the checksums being the same on le/be\n> systems for repeated byte values. On closer inspection it looks like >> 17\n> at least ensures this will not be true.\n\nThanks for the computations with big-endian! I would have just gone\ndown to the 8kB page for the expected results by seeing three other\ntests blowing up, but no objection to what you have here either. I\nhave checked the computations with little-endian from your patch and\nthese are correct.\n--\nMichael", "msg_date": "Sun, 8 Mar 2020 12:15:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: More tests to stress directly checksum_impl.h" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Thanks for the computations with big-endian! I would have just gone\n> down to the 8kB page for the expected results by seeing three other\n> tests blowing up, but no objection to what you have here either. 
I\n> have checked the computations with little-endian from your patch and\n> these are correct.\n\nAfter thinking more I concluded that the extra expected files would\njust be a waste of tarball space, at least till such time as we make\na push to fix all the regression tests to be blocksize-independent.\n\nPushed it with just the 8K files.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 08 Mar 2020 15:12:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More tests to stress directly checksum_impl.h" }, { "msg_contents": "On Sun, Mar 08, 2020 at 03:12:11PM -0400, Tom Lane wrote:\n> After thinking more I concluded that the extra expected files would\n> just be a waste of tarball space, at least till such time as we make\n> a push to fix all the regression tests to be blocksize-independent.\n\nMakes sense.\n\n> Pushed it with just the 8K files.\n\nThanks!\n--\nMichael", "msg_date": "Mon, 9 Mar 2020 10:03:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: More tests to stress directly checksum_impl.h" } ]
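The thread above turns on how checksum_impl.h mixes each 32-bit word of a page, and in particular on the `>> 17` fold David mentions. As a rough illustration, here is a simplified single-lane model; only the FNV prime (16777619) and the 17-bit shift are taken from the actual algorithm as I recall it, while the real code keeps 32 parallel sums over the page, so treat this as a sketch rather than the implementation:

```python
FNV_PRIME = 16777619  # FNV prime used by the page-checksum algorithm
MASK32 = 0xFFFFFFFF

def checksum_comp(checksum, value):
    # One mixing round, modelled on the CHECKSUM_COMP macro: xor in the
    # next 32-bit word, multiply by the FNV prime, then xor in a 17-bit
    # right shift of the intermediate value.
    tmp = (checksum ^ value) & MASK32
    return ((tmp * FNV_PRIME) & MASK32) ^ (tmp >> 17)

def toy_page_checksum(page_bytes, byteorder="little"):
    # Toy single-lane walk over a page, reduced to a non-zero 16-bit
    # value the way the real code does with (sum % 65535) + 1. The
    # byteorder argument shows where endianness enters: the page is
    # consumed as native 32-bit words.
    s = 0
    for i in range(0, len(page_bytes), 4):
        s = checksum_comp(s, int.from_bytes(page_bytes[i:i + 4], byteorder))
    return (s % 65535) + 1
```

Because the page is read as native 32-bit words, the checksum values hard-coded in the regression output are endianness-sensitive, which is why the thread needed big-endian hardware to verify them.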
[ { "msg_contents": "On 2020-03-06 08:54, Kyotaro Horiguchi wrote:\n> The syntax seems getting confused. What happens if we typed in the\n> command \"WAIT FOR TIMESTAMP '...' UNTIL TIMESTAMP '....'\"? It seems\n> to me the options is useles. Couldn't the TIMEOUT option be a part of\n> event? I know gram.y doesn't accept that syntax but it is not\n> apparent from the description above.\n\nI`ll fix the doc file.\n\nSynopsis\n==========\n WAIT FOR [ANY | SOME | ALL] event [, event ...]\n and event is:\n LSN value [options]\n TIMESTAMP value\n\n and options is:\n TIMEOUT delay\n UNTIL TIMESTAMP timestamp\n\n> As I read through the previous thread, one of the reason for this\n> feature implemented as a syntax is it was inteded to be combined into\n> BEGIN statement. If there is not any use case for the feature midst\n> of a transaction, why don't you turn it into a part of BEGIN command?\n\nIt`s seem to have some limitations on hot standbys. I`ll take few days\nto make a prototype.\n\n>> Description\n>> ==========\n>> WAIT FOR - make to wait statements (that are beneath) on sleep until\n>> event happens (Don’t process new queries until an event happens).\n> ...\n>> Notice: WAIT FOR will release on PostmasterDeath or Interruption\n>> events\n>> if they come earlier then LSN or timeout.\n> \n> I think interrupts ought to result in ERROR.\n> \n> wait.c adds a fair amount of code and uses proc-array based\n> approach. But Thomas suggested queue-based approach and I also think\n> it is better. We already have a queue-based mechanism that behaves\n> almost the same with this feature in the comit code on master-side. It\n> avoids spurious backend wakeups. 
Couldn't we extend SyncRepWaitForLSN\n> or share a part of the code/infrastructures so that this feature can\n> share the code?\n\nI`ll take a look on.\n\nThank you for your review.\n\nRebased patch is attached.\n-- \nIvan Kartyshov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Fri, 06 Mar 2020 15:21:49 +0300", "msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "I just wanted to express my excitement that this is being picked up again.\nI was very much looking forward to this years ago, and the use case for me\nis still there, so I am excited to see this moving again.", "msg_date": "Fri, 6 Mar 2020 11:55:38 -0500", "msg_from": "Adam Brusselback <adambrusselback@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "Sorry, I have some troubles on email sending.\nOn 2020-03-06 08:54, Kyotaro Horiguchi wrote:\n> The syntax seems getting confused. What happens if we typed in the\n> command \"WAIT FOR TIMESTAMP '...' UNTIL TIMESTAMP '....'\"? It seems\n> to me the options is useles. Couldn't the TIMEOUT option be a part of\n> event? I know gram.y doesn't accept that syntax but it is not\n> apparent from the description above.\n\nI`ll fix the doc file.\n\nSynopsis\n==========\n WAIT FOR [ANY | SOME | ALL] event [, event ...]\n and event is:\n LSN value [options]\n TIMESTAMP value\n\n and options is:\n TIMEOUT delay\n UNTIL TIMESTAMP timestamp\n\n> As I read through the previous thread, one of the reason for this\n> feature implemented as a syntax is it was inteded to be combined into\n> BEGIN statement. 
If there is not any use case for the feature midst\n> of a transaction, why don't you turn it into a part of BEGIN command?\n\nIt`s seem to have some limitations on hot standbys. I`ll take few days\nto make a prototype.\n\n>> Description\n>> ==========\n>> WAIT FOR - make to wait statements (that are beneath) on sleep until\n>> event happens (Don’t process new queries until an event happens).\n> ...\n>> Notice: WAIT FOR will release on PostmasterDeath or Interruption\n>> events\n>> if they come earlier then LSN or timeout.\n> \n> I think interrupts ought to result in ERROR.\n> \n> wait.c adds a fair amount of code and uses proc-array based\n> approach. But Thomas suggested queue-based approach and I also think\n> it is better. We already have a queue-based mechanism that behaves\n> almost the same with this feature in the comit code on master-side. It\n> avoids spurious backend wakeups. Couldn't we extend SyncRepWaitForLSN\n> or share a part of the code/infrastructures so that this feature can\n> share the code?\n\nI`ll take a look on.\n\nThank you for your review.\n\nRebased patch is attached.\n\n-- \nIvan Kartyshov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 06 Mar 2020 22:42:35 +0300", "msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "I made some improvements over old implementation WAIT FOR.\n\nSynopsis\n==========\n WAIT FOR [ANY | SOME | ALL] event [, event ...]\n and event is:\n LSN value options\n TIMESTAMP value\n\n and options is:\n TIMEOUT delay\n UNTIL TIMESTAMP timestamp\n\nALL - option used by default.\n\nP.S. 
Now I testing BEGIN base WAIT prototype as discussed earlier.\n\n-- \nIvan Kartyshov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 17 Mar 2020 15:47:54 +0300", "msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" }, { "msg_contents": "On 2020-03-17 15:47, Kartyshov Ivan wrote:\n> Synopsis\n> ==========\n> WAIT FOR [ANY | SOME | ALL] event [, event ...]\nI'm confused as to what SOME would mean in this\ncommand's syntax, but I can see you removed it\nfrom gram.y since the last patch. Did you\ndecide to not implement it after all?\n\nAlso, I had a look at the code and tested it a bit.\n\n================\nIf I specify many events, here's what happens:\n\nFor WAIT_FOR_ALL strategy, it chooses\n- maximum LSN\n- maximum delay\nand waits for the resulting event.\n\nFor WAIT_FOR_ANY strategy - same, but it uses\nminimal LSN/delay.\n\nIn other words, statements\n (1) WAIT FOR ALL\n LSN '7F97208' TIMEOUT 11,\n LSN '3002808' TIMEOUT 50;\n (2) WAIT FOR ANY\n LSN '7F97208' TIMEOUT 11,\n LSN '3002808' TIMEOUT 50;\nare essentially equivalent to:\n (1) WAIT FOR LSN '7F97208' TIMEOUT 50;\n (2) WAIT FOR LSN '3002808' TIMEOUT 11;\n\nIt seems a bit counter-intuitive to me, because\nI expected events to be treated independently.\nIs this the expected behaviour?\n\n================\nIn utility.c:\n if (event->delay < time_val)\n time_val = event->delay / 1000;\n\nSince event->delay is an int, the result will\nbe zero for any delay value less than 1000.\nI suggest either dividing by 1000.0 or\nexplicitly converting int to float.\n\nAlso, shouldn't event->delay be divided\nby 1000 in the 'if' part as well?\n\n================\nYou compare two LSN-s using pg_lsn_cmp():\n res = DatumGetUInt32(\n DirectFunctionCall2(pg_lsn_cmp,\n lsn, trg_lsn));\n\nAs far as I understand, it'd be enough to use\noperators such as \"<=\", as you do in 
wait.c:\n /* If LSN has been replayed */\n if (trg_lsn <= cur_lsn)\n\n-- \nAnna Akenteva\nPostgres Professional:\nThe Russian Postgres Company\nhttp://www.postgrespro.com\n\n\n", "msg_date": "Sat, 21 Mar 2020 13:51:13 +0300", "msg_from": "Anna Akenteva <a.akenteva@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed" } ]
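Anna's review above pins down two concrete behaviours that can be restated as a runnable model. The function below is illustrative pseudologic for the event-collapsing she observed (ALL takes the maximum LSN and timeout, ANY the minimum), reusing the LSN/timeout pairs from her example; it is not code from the patch:

```python
def collapse_events(strategy, events):
    # events: list of (lsn, timeout) pairs, with LSNs as integers.
    # Per the review, WAIT FOR ALL reduces to the maximum LSN/timeout
    # and WAIT FOR ANY to the minimum, instead of treating each event
    # independently.
    pick = max if strategy == "ALL" else min
    return (pick(lsn for lsn, _ in events),
            pick(timeout for _, timeout in events))

# The two statements from the review.
events = [(0x7F97208, 11), (0x3002808, 50)]

# The integer-division pitfall she flags in utility.c: an int delay of
# 999 ms divided by 1000 truncates to 0 seconds, while dividing by
# 1000.0 keeps the sub-second part.
assert 999 // 1000 == 0
assert 999 / 1000.0 == 0.999
```

The asserts reproduce her point that the two multi-event statements behave like single-event waits on LSN '7F97208' TIMEOUT 50 and LSN '3002808' TIMEOUT 11 respectively.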
[ { "msg_contents": "In his long bootstrap-reworking thread[1] John Naylor initially proposed\nmoving the DECLARE_INDEX lines from indexing.h to each of the\ncorresponding catalog files. However, in the end that wasn't done;\nthese lines are still in indexing.h. Is there a reason for this?\nWouldn't it make more sense to have the indexes for pg_attribute appear\nin catalog/pg_attribute.h, and so forth?\n\nI was unable to find a rebuttal of the move; maybe it was just\nneglected because of fog-of-war.\n\n[1] https://postgr.es/m/CAJVSVGWO48JbbwXkJz_yBFyGYW-M9YWxnPdxJBUosDC9ou_F0Q@mail.gmail.com\n\n\n-- \n�lvaro Herrera\n\n\n", "msg_date": "Fri, 6 Mar 2020 15:20:53 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "move DECLARE_INDEX from indexing.h?" }, { "msg_contents": "On 2020-Mar-06, Alvaro Herrera wrote:\n\n> In his long bootstrap-reworking thread[1] John Naylor initially proposed\n> moving the DECLARE_INDEX lines from indexing.h to each of the\n> corresponding catalog files. However, in the end that wasn't done;\n> these lines are still in indexing.h. Is there a reason for this?\n> Wouldn't it make more sense to have the indexes for pg_attribute appear\n> in catalog/pg_attribute.h, and so forth?\n\n(In a quick experiment, simply moving the pg_aggregate indexes from\nindexing.h to pg_aggregate.h appears to work with no further changes.)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 6 Mar 2020 15:31:47 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: move DECLARE_INDEX from indexing.h?" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> In his long bootstrap-reworking thread[1] John Naylor initially proposed\n> moving the DECLARE_INDEX lines from indexing.h to each of the\n> corresponding catalog files. 
However, in the end that wasn't done;\n> these lines are still in indexing.h. Is there a reason for this?\n> Wouldn't it make more sense to have the indexes for pg_attribute appear\n> in catalog/pg_attribute.h, and so forth?\n\nFWIW, I think it's just fine as-is, for the same reason that CREATE INDEX\nis a separate command from CREATE TABLE. The indexes on a table are not\npart of the table data; to some extent they're an implementation detail.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Mar 2020 14:30:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: move DECLARE_INDEX from indexing.h?" } ]
[ { "msg_contents": "I noticed a weird thing about rangetypes API while reviewing multiranges\n-- the combination of range_deserialize immediately followed by\nrange_get_flags. It seems quite odd to have range_deserialize return\nonly one flag (empty) and force callers to use a separate\nrange_get_flags call in order to fetch anything else they need. I\npropose that it's simpler to have range_deserialize have an out param\nfor flags (replacing \"empty\"), and callers can examine \"IsEmpty\" from\nthat using a macro accessor. So there are two macros now:\nRangeFlagsIsEmpty() takes the 'flags' value and return whether the bit\nis set. Its companion RangeIsEmpty() does the range_get_flags() dance.\n\nThe attached patch does that, with a net savings of 8 lines of code in\nrangetypes.c. I know, it's not amazing. But it's slightly cleaner this\nway IMO.\n\nThe reason things are this way is that initially (commit 4429f6a9e3e1)\nwere all opaque; the external observer could only see \"empty\" when\ndeserializing the value. Then commit 37ee4b75db8f added\nrange_get_flags() to obtain the flags from a range, but at that point it\nwas only used in places that did not deserialized the range anyway, so\nit was okay. I think it was commit c66e4f138b04 that ended up having\nboth deserialize and get_flags in succession. So things are weird now,\nbut they have not always been like that.\n\n\nI also chose to represent the flags out param as uint8 in\nrange_deserialize. With this change, the compiler warns for callers\nusing the old API (it used to take bool *), giving a chance to update.\n\n-- \n�lvaro Herrera", "msg_date": "Fri, 6 Mar 2020 17:03:43 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "range_deserialize + range_get_flags" } ]
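The refactoring described above replaces the deserialize-then-range_get_flags() dance with a flags out parameter plus macro accessors. A minimal model of the accessor side, assuming the flag bit values as I recall them from rangetypes.h (RANGE_EMPTY = 0x01, and so on); only the access pattern matters here, not the C code itself:

```python
RANGE_EMPTY = 0x01   # range is empty
RANGE_LB_INC = 0x02  # lower bound is inclusive
RANGE_UB_INC = 0x04  # upper bound is inclusive

def range_flags_is_empty(flags):
    # Analogue of the proposed RangeFlagsIsEmpty() macro: a caller that
    # has already deserialized the range tests the returned flags
    # directly, with no second range_get_flags() call.
    return (flags & RANGE_EMPTY) != 0
```

Its companion RangeIsEmpty() would still exist for callers that only hold the range datum and need the flags fetched for them.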
[ { "msg_contents": "Hi:\n  I'm a brand new postgresql contributor(may be not yet since my first\npatch[1]\n is still in review stage). and have 2 questions about the CI process and to\n see if the questions really exist or if I can help.\n\n  Based on the facts that 1). The test cases may succeed locally but may\nbe failed\nin CI for some reasons. 2). The newer version of the patch need to be\nsubmitted\nwith new a email reply. 3). Reviewer & committer bandwidth is precious.\nso it would\nbe not good to reply the email just for fix some tiny errors too many\ntimes. so do we\nneed a method of updating patch without disturbing the email discussion?\nOne proposal\nis people still can updating their patch with pull request in github, and\nour another CI\nsetup can watch the PR and trigger automatically. Once it really succeed,\nthe contributor\ncan generate these patch and send to email group for reviewers.\n\n  Another question I have is do we need a method to let the contributor to\ninteractively test\nthese code on the given environment? I guess many people doesn't have a\nwindows\nenvironment. We may be able to provide a windows on cloud and if people\nneed that,\nthey can ask for an account (just live for a shorter period) with an email.\n\n  In summary, are the 2 questions above really questions? If yes, are my\nproposals good?\nIf yes, I would like to help on either the software part or hardware part\n. do we have other\nrequirements we can think together?\n\n[1] https://commitfest.postgresql.org/27/2433/\n\nRegards Andy Fan.", "msg_date": "Sat, 7 Mar 2020 11:53:32 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Questions about the CI process and proposal" }, { "msg_contents": "On 2020-03-07 04:53, Andy Fan wrote:\n>   Based on the facts that  1).  The test cases may succeed locally but \n> may be failed\n> in CI for some reasons.  2).  The newer version of the patch need to be \n> submitted\n> with new a email reply.  3). Reviewer &  committer bandwidth is \n> precious.  so it would\n> be not good to reply the email just for fix some tiny errors too many \n> times.\n\nThis is not a problem.\n\n> so do we\n> need a method of updating patch without disturbing the email discussion? \n\nI don't think so. 
Note also that it's not only about the verbal \ndiscussion but also about having a unified and uniform record about what \nwas sent by whom and when and how.\n\n> One proposal\n> is people still can updating their patch with pull request in github, \n> and our another CI\n> setup can watch the PR and trigger automatically.  Once it really \n> succeed, the contributor\n> can generate these patch and send to email group for reviewers.\n\nYou can do this now by sticking in your own travis or appveyor files and \npushing to your own github account. I do this from time to time.\n\n>   Another question I have is do we need a method to let the \n> contributor to interactively test\n> these code on the given environment?  I guess many people doesn't have a \n> windows\n> environment.   We may be able to provide a windows on cloud and if \n> people need that,\n> they can ask for an account (just live for a shorter period) with an email.\n\nSee my recent blog post: \nhttps://www.2ndquadrant.com/en/blog/developing-postgresql-windows-part-2/\n\nActually part 3 is going to be about how to use CI for Windows, so \nyou're just a bit ahead of me here. :)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 7 Mar 2020 08:31:55 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Questions about the CI process and proposal" }, { "msg_contents": "Andy>1). The test cases may succeed locally but\nAndy> may be failed\nAndy> in CI for some reasons\n\nPeter> This is not a problem\n\nI would disagree. A patch might easily make the database incompatible with\nclients like JDBC.\nDo current PostgreSQL tests catch that?\nI don't think so.\nHowever, that can be configured in PR CI.\n\nPeter>You can do this now by sticking in your own travis or appveyor files\nand\nPeter>pushing to your own github account. 
I do this from time to time\n\nDo you expect that everybody reinvents the wheel?\n\n---\n\nI've recently come across https://gitgitgadget.github.io/\nIt is a tool that converts GitHub PRs into mailing list messages (and back).\n\nWhat it does is enable contributors to send patches to the mailing list by\ncreating PRs.\n\nOf course, it does not replace the mailing list, however, it cross-posts\ncomments (e.g. it posts email responses as GitHub comments).\nSample PR: https://github.com/gitgitgadget/git/pull/525\n\nIt could significantly help contributors in the following ways:\n1) One can create PR without really sending a patch (e.g. to estimate the\nnumber of broken tests)\n2) It would be much easier to test patches since the number of CI checks\ncan easily exceed the number of tests in make check-*.\n\nIt would help reviewers as well:\n1) GitHub shows colored diffs which help to understand the patch\n2) There's a \"suggest change\" feature which helps for cases like \"fixing\ntypos\".\n3) PR shows if the patch applies at all, and it shows which tests fail\n4) It opens a way to trigger extra checks. For example, PR CI could trigger\ntests for **clients** like Java, C#, Ruby, etc, etc\n\nWDYT on configuring gitgitgadget (or something like that) for PostgreSQL?\n\nVladimir", "msg_date": "Sat, 7 Mar 2020 12:12:05 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Questions about the CI process and proposal" } ]
[ { "msg_contents": "I noticed that catalog/objectaddress.h includes utils/acl.h for no \napparent reason. It turns out this used to be needed but not anymore. \nSo removed it and cleaned up the fallout. Patch attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 7 Mar 2020 08:25:22 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Remove utils/acl.h from catalog/objectaddress.h" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I noticed that catalog/objectaddress.h includes utils/acl.h for no \n> apparent reason. It turns out this used to be needed but not anymore. \n> So removed it and cleaned up the fallout. Patch attached.\n\nSeems reasonable. One thing I noticed is that if you are including\nnodes/parsenodes.h explicitly in objectaddress.h, there seems little\npoint in the #include \"nodes/pg_list.h\" right beside it.\n\nSometime we really ought to make an effort to make our header inclusions\nless of a mass of spaghetti. But this patch needn't take on that load.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 08 Mar 2020 14:28:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove utils/acl.h from catalog/objectaddress.h" }, { "msg_contents": "On 2020-Mar-07, Peter Eisentraut wrote:\n\n> I noticed that catalog/objectaddress.h includes utils/acl.h for no apparent\n> reason. It turns out this used to be needed but not anymore. So removed it\n> and cleaned up the fallout. 
Patch attached.\n\nparser/parse_node.h already includes nodes/parsenodes.h, so the seeming\nredundancy in places such as \n\n> diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h\n> index c27d255d8d..be63e043c6 100644\n> --- a/src/include/commands/vacuum.h\n> +++ b/src/include/commands/vacuum.h\n> @@ -19,6 +19,7 @@\n>  #include \"catalog/pg_statistic.h\"\n>  #include \"catalog/pg_type.h\"\n>  #include \"nodes/parsenodes.h\"\n> +#include \"parser/parse_node.h\"\n\n(and others) is not just apparent; it's also redundant in practice. 
And\n> it's not like parse_node.h is ever going to be able not to depend on\n> parsenodes.h, so I would vote to remove nodes/parsenodes.h from the\n> headers where you're adding parser/parse_node.h.\n\nOK, committed with your and Tom's changes.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 10 Mar 2020 10:35:51 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Remove utils/acl.h from catalog/objectaddress.h" } ]
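The argument made above - that nodes/parsenodes.h becomes a redundant direct include wherever parser/parse_node.h is also included, because the latter already pulls it in - is a transitive-closure check. A small sketch on a hand-written toy include graph (the graph below is illustrative, not parsed from the actual tree):

```python
def transitive_includes(graph, header, seen=None):
    # All headers reachable from `header` through the include graph.
    if seen is None:
        seen = set()
    for inc in graph.get(header, []):
        if inc not in seen:
            seen.add(inc)
            transitive_includes(graph, inc, seen)
    return seen

def redundant_direct_includes(graph, header):
    # A direct include is redundant if some *other* direct include of
    # the same header already reaches it transitively.
    direct = graph.get(header, [])
    return {inc for inc in direct
            if any(inc in transitive_includes(graph, other)
                   for other in direct if other != inc)}

# Toy graph mirroring the vacuum.h example from the thread.
graph = {
    "commands/vacuum.h": ["nodes/parsenodes.h", "parser/parse_node.h"],
    "parser/parse_node.h": ["nodes/parsenodes.h"],
}
```

On this graph the check flags nodes/parsenodes.h as removable from vacuum.h, matching the patch as committed.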
[ { "msg_contents": "Dear hackers,\n\nI know that it is possible to receive packets in binary format using\ndeclare binary cursor and then fetching the result.\n\nBut is it possible just using ordinary select from simple query to specify\nthat I want to receive the result in binary?\n\nBest regards,", "msg_date": "Fri, 6 Mar 2020 23:37:40 -0800", "msg_from": "Aleksei Ivanov <iv.alekseii@gmail.com>", "msg_from_op": true, "msg_subject": "Question: Select messages using binary format" }, { "msg_contents": "Hi\n\nOn Sat, 7 Mar 2020 at 08:38, Aleksei Ivanov <iv.alekseii@gmail.com>\nwrote:\n\n> Dear hackers,\n>\n> I know that it is possible to receive packets in binary format using\n> declare binary cursor and then fetching the result.\n>\n> But is it possible just using ordinary select from simple query to specify\n> that I want to receive the result in binary?\n>\n\nIt depends on the interface that you use. 
The C API - libpq - allows you to specify the result format\nper query\n\nhttps://www.postgresql.org/docs/current/libpq-exec.html\n\nPQexecParams *resultFormat*\n\n\nAs far as I know, you cannot do this from the SQL level.\n\nRegards\n\nPavel\n\n\n> Best regards,\n>\n", "msg_date": "Sat, 7 Mar 2020 09:51:48 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question: Select messages using binary format" } ]
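Following up Pavel's answer: with libpq, passing resultFormat = 1 to PQexecParams makes every value come back in the type's binary send format rather than text, and for the integer types that format is big-endian (network byte order) two's complement. The decoding side can be shown without a server; the helper names below are illustrative, not part of any client library:

```python
import struct

def decode_int4(cell: bytes) -> int:
    # An int4 column in binary result format: 4 bytes, big-endian,
    # signed -- the raw bytes PQgetvalue() hands back when the result
    # format is binary.
    return struct.unpack("!i", cell)[0]

def decode_int8(cell: bytes) -> int:
    # int8 (bigint): 8 bytes, big-endian, signed.
    return struct.unpack("!q", cell)[0]
```

This also matches Pavel's "you cannot do this from the SQL level": the format choice lives in the wire protocol's Bind message (per-result-column format codes), which is exactly what PQexecParams drives.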
[ { "msg_contents": "I have added mention of the new SQL standard part SQL/MDA \n(multi-dimensional arrays) to the documentation.\n\nThis is not the same thing as the existing support for multidimensional \narrays in PostgreSQL. SQL/MDA targets huge arrays, aggregation over \nslices, export as images, for applications in fields such as physics and \nastronomy -- as I understand it. Something to look into perhaps at some \npoint, if there is interest, but right now it's just a mention that it's \nnot supported.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 7 Mar 2020 11:05:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "SQL/MDA added to docs" } ]
[ { "msg_contents": "pageinspect: Fix types used for bt_metap() columns.\n\nThe data types that contrib/pageinspect's bt_metap() function were\ndeclared to return as OUT arguments were wrong in some cases. For\nexample, the oldest_xact column (a TransactionId/xid field) was declared\ninteger/int4 within the pageinspect extension's sql file. This led to\nerrors when an oldest_xact value that exceeded 2^31-1 was encountered.\nSome of the other columns were defined incorrectly ever since\npageinspect was first introduced, though they were far less likely to\nproduce problems in practice.\n\nFix these issues by changing the declaration of bt_metap() to\nconsistently use data types that can reliably represent all possible\nvalues. This fixes things on HEAD only. No backpatch, since it doesn't\nseem like there is a safe way to fix the issue without including a new\nversion of the pageinspect extension (HEAD/Postgres 13 already\nintroduced a new version of the extension). Besides, the oldest_xact\nissue has been around since the release of Postgres 11, and we haven't\nheard any complaints about it before now.\n\nAlso, throw an error when we detect a bt_metap() declaration that must\nbe from an old version of the pageinspect extension by examining the\nnumber of attributes from the tuple descriptor for the return tuples.\nIt seems better to throw an error in a reliable and obvious way\nfollowing a Postgres upgrade, rather than letting bt_metap() fail\nunpredictably. 
The problem is fundamentally with the CREATE FUNCTION\ndeclared data types themselves, so I see no sensible alternative.\n\nReported-By: Victor Yegorov\nBug: #16285\nDiscussion: https://postgr.es/m/16285-df8fc1000ab3d5fc@postgresql.org\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/691e8b2e1889d61df47ae76601fa9db6cbac6f1c\n\nModified Files\n--------------\ncontrib/pageinspect/btreefuncs.c | 27 +++++++++++++++++++++++----\ncontrib/pageinspect/pageinspect--1.7--1.8.sql | 12 ++++++------\n2 files changed, 29 insertions(+), 10 deletions(-)", "msg_date": "Sun, 08 Mar 2020 00:45:22 +0000", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "pgsql: pageinspect: Fix types used for bt_metap() columns." }, { "msg_contents": "On 2020-Mar-08, Peter Geoghegan wrote:\n\n> Fix these issues by changing the declaration of bt_metap() to\n> consistently use data types that can reliably represent all possible\n> values. This fixes things on HEAD only. No backpatch, since it doesn't\n> seem like there is a safe way to fix the issue without including a new\n> version of the pageinspect extension (HEAD/Postgres 13 already\n> introduced a new version of the extension). 
Besides, the oldest_xact\n> issue has been around since the release of Postgres 11, and we haven't\n> heard any complaints about it before now.\n\nThis may be a good time to think through about how to implement a\nversion history for an extension that enables users to go from pg12's\ncurrent 1.7 pageinspect to a new fixed version in pg12, say 1.7.1, and\nin HEAD provide an upgrade path from both 1.7 and 1.7.1 to master's 1.8.\nThen you can pg_upgrade from pg12 to pg13 having either 1.7 or 1.7.1,\nand you will be able to get to 1.8 nonetheless.\n\nDoes that make sense?\n\nThe current problem might not be serious enough to warrant actually\nwriting the code that would be needed, but I propose to at least think\nabout it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 9 Mar 2020 12:55:24 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: pageinspect: Fix types used for bt_metap() columns." }, { "msg_contents": "On Mon, Mar 9, 2020 at 8:55 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> This may be a good time to think through about how to implement a\n> version history for an extension that enables users to go from pg12's\n> current 1.7 pageinspect to a new fixed version in pg12, say 1.7.1, and\n> in HEAD provide an upgrade path from both 1.7 and 1.7.1 to master's 1.8.\n> Then you can pg_upgrade from pg12 to pg13 having either 1.7 or 1.7.1,\n> and you will be able to get to 1.8 nonetheless.\n>\n> Does that make sense?\n\nSort of. The main problem with that idea is that it requires the user\nto notice that the problem is in the extension definition itself,\nwhich is a pretty rare edge case -- how many people will follow\nthrough with that? 

You could deliberately break it so they'd have to\nnotice it, which is what I did here, but I don't think that users\nwould appreciate seeing that in a point release. Especially for\nsomething like bt_metap(), which kind of worked before.\n\nThere are also implementation problems. You might need rather a lot of\nupgrade paths. While the most significant problem by far here is with\nthe oldest_xact column, this bug was in the earliest version of the\npageinspect extension. Even 1.0 uses int4 where it should use\nsomething that works as BlockNumber (I used int8 for this here).\nThat's just messy.\n\n> The current problem might not be serious enough to warrant actually\n> writing the code that would be needed, but I propose to at least think\n> about it.\n\nI briefly considered doing something like targeting the backbranches\nby making oldest_xact display XIDs greater than 2^31-1 as negative\nvalues, or as NULL, with a NOTICE message. I quickly concluded that\nthe cure would be worse than the disease.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 9 Mar 2020 10:05:25 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: pgsql: pageinspect: Fix types used for bt_metap() columns." } ]
[ { "msg_contents": "Show opclass and opfamily related information in psql\n\nThis commit provides psql commands for listing operator classes, operator\nfamilies and its contents in psql. New commands will be useful for exploring\ncapabilities of both builtin opclasses/opfamilies as well as\nopclasses/opfamilies defined in extensions.\n\nDiscussion: https://postgr.es/m/1529675324.14193.5.camel%40postgrespro.ru\nAuthor: Sergey Cherkashin, Nikita Glukhov, Alexander Korotkov\nReviewed-by: Michael Paquier, Alvaro Herrera, Arthur Zakirov\nReviewed-by: Kyotaro Horiguchi, Andres Freund\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/b0b5e20cd8d1a58a8782d5dc806a5232db116e2f\n\nModified Files\n--------------\ndoc/src/sgml/ref/psql-ref.sgml | 91 ++++++++++\nsrc/bin/psql/command.c | 33 +++-\nsrc/bin/psql/describe.c | 335 +++++++++++++++++++++++++++++++++++++\nsrc/bin/psql/describe.h | 19 +++\nsrc/bin/psql/help.c | 4 +\nsrc/bin/psql/tab-complete.c | 16 +-\nsrc/test/regress/expected/psql.out | 162 ++++++++++++++++++\nsrc/test/regress/sql/psql.sql | 18 ++\n8 files changed, 676 insertions(+), 2 deletions(-)", "msg_date": "Sun, 08 Mar 2020 10:35:29 +0000", "msg_from": "Alexander Korotkov <akorotkov@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Show opclass and opfamily related information in psql" }, { "msg_contents": "On 2020-Mar-08, Alexander Korotkov wrote:\n\n> Show opclass and opfamily related information in psql\n> \n> This commit provides psql commands for listing operator classes, operator\n> families and its contents in psql. New commands will be useful for exploring\n> capabilities of both builtin opclasses/opfamilies as well as\n> opclasses/opfamilies defined in extensions.\n\nI had chance to use these new commands this morning. 
I noticed the\nORDER BY clause of \\dAo is not very useful; for example:\n\n=# \\dAo+ brin datetime_minmax_ops \n List of operators of operator families\n AM │ Opfamily Name │ Operator │ Strategy │ Purpose │ Sort opfamily \n──────┼─────────────────────┼───────────────────────────────────────────────────────────────┼──────────┼─────────┼───────────────\n brin │ datetime_minmax_ops │ < (date, date) │ 1 │ search │ \n brin │ datetime_minmax_ops │ < (date, timestamp with time zone) │ 1 │ search │ \n brin │ datetime_minmax_ops │ < (date, timestamp without time zone) │ 1 │ search │ \n brin │ datetime_minmax_ops │ < (timestamp with time zone, date) │ 1 │ search │ \n brin │ datetime_minmax_ops │ < (timestamp with time zone, timestamp with time zone) │ 1 │ search │ \n brin │ datetime_minmax_ops │ < (timestamp with time zone, timestamp without time zone) │ 1 │ search │ \n brin │ datetime_minmax_ops │ < (timestamp without time zone, date) │ 1 │ search │ \n brin │ datetime_minmax_ops │ < (timestamp without time zone, timestamp with time zone) │ 1 │ search │ \n brin │ datetime_minmax_ops │ < (timestamp without time zone, timestamp without time zone) │ 1 │ search │ \n brin │ datetime_minmax_ops │ <= (date, date) │ 2 │ search │ \n brin │ datetime_minmax_ops │ <= (date, timestamp with time zone) │ 2 │ search │ \n brin │ datetime_minmax_ops │ <= (date, timestamp without time zone) │ 2 │ search │ \n brin │ datetime_minmax_ops │ <= (timestamp with time zone, date) │ 2 │ search │ \n brin │ datetime_minmax_ops │ <= (timestamp with time zone, timestamp with time zone) │ 2 │ search │ \n brin │ datetime_minmax_ops │ <= (timestamp with time zone, timestamp without time zone) │ 2 │ search │ \n\nNote how operator for strategy 1 are all together, then strategy 2, and\nso on. 
But I think we'd prefer the operators to be grouped together for\nthe same types (just like \\dAp already works); so I would change the clause\nfrom:\n ORDER BY 1, 2, o.amopstrategy, 3;\nto:\n ORDER BY 1, 2, pg_catalog.format_type(o.amoplefttype, NULL), pg_catalog.format_type(o.amoprighttype, NULL), o.amopstrategy;\n\nwhich gives this table:\n\n AM │ Opfamily Name │ Operator │ Strategy │ Purpose │ Sort opfamily \n──────┼─────────────────────┼───────────────────────────────────────────────────────────────┼──────────┼─────────┼───────────────\n brin │ datetime_minmax_ops │ < (date, date) │ 1 │ search │ \n brin │ datetime_minmax_ops │ <= (date, date) │ 2 │ search │ \n brin │ datetime_minmax_ops │ = (date, date) │ 3 │ search │ \n brin │ datetime_minmax_ops │ >= (date, date) │ 4 │ search │ \n brin │ datetime_minmax_ops │ > (date, date) │ 5 │ search │ \n brin │ datetime_minmax_ops │ < (date, timestamp with time zone) │ 1 │ search │ \n brin │ datetime_minmax_ops │ <= (date, timestamp with time zone) │ 2 │ search │ \n brin │ datetime_minmax_ops │ = (date, timestamp with time zone) │ 3 │ search │ \n brin │ datetime_minmax_ops │ >= (date, timestamp with time zone) │ 4 │ search │ \n brin │ datetime_minmax_ops │ > (date, timestamp with time zone) │ 5 │ search │ \n\nAlso, while I'm going about this, ISTM it'd make sense to\nlist same-class operators first, followed by cross-class operators.\nThat requires to add \"o.amoplefttype = o.amoprighttype DESC,\" after\n\"ORDER BY 1, 2,\". 
For brin's integer_minmax_ops, the resulting list\nwould have first (bigint,bigint) then (integer,integer) then\n(smallint,smallint), then all the rest:\n\n brin │ integer_minmax_ops │ < (bigint, bigint) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (bigint, bigint) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (bigint, bigint) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (bigint, bigint) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (bigint, bigint) │ 5 │ search │ \n brin │ integer_minmax_ops │ < (integer, integer) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (integer, integer) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (integer, integer) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (integer, integer) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (integer, integer) │ 5 │ search │ \n brin │ integer_minmax_ops │ < (smallint, smallint) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (smallint, smallint) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (smallint, smallint) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (smallint, smallint) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (smallint, smallint) │ 5 │ search │ \n brin │ integer_minmax_ops │ < (bigint, integer) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (bigint, integer) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (bigint, integer) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (bigint, integer) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (bigint, integer) │ 5 │ search │ \n brin │ integer_minmax_ops │ < (bigint, smallint) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (bigint, smallint) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (bigint, smallint) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (bigint, smallint) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (bigint, smallint) │ 5 │ search │ \n brin │ integer_minmax_ops │ < (integer, bigint) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (integer, bigint) │ 2 │ search │ \n brin │ integer_minmax_ops │ = 
(integer, bigint) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (integer, bigint) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (integer, bigint) │ 5 │ search │ \n brin │ integer_minmax_ops │ < (integer, smallint) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (integer, smallint) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (integer, smallint) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (integer, smallint) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (integer, smallint) │ 5 │ search │ \n brin │ integer_minmax_ops │ < (smallint, bigint) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (smallint, bigint) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (smallint, bigint) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (smallint, bigint) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (smallint, bigint) │ 5 │ search │ \n brin │ integer_minmax_ops │ < (smallint, integer) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (smallint, integer) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (smallint, integer) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (smallint, integer) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (smallint, integer) │ 5 │ search │ \n\ninstead of listing putting cross-type ops that have bigint first, which\nare of secundary importance, which is what you get without it:\n\n brin │ integer_minmax_ops │ < (bigint, bigint) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (bigint, bigint) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (bigint, bigint) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (bigint, bigint) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (bigint, bigint) │ 5 │ search │ \n brin │ integer_minmax_ops │ < (bigint, integer) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (bigint, integer) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (bigint, integer) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (bigint, integer) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (bigint, integer) │ 5 │ search │ \n brin │ 
integer_minmax_ops │ < (bigint, smallint) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (bigint, smallint) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (bigint, smallint) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (bigint, smallint) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (bigint, smallint) │ 5 │ search │ \n brin │ integer_minmax_ops │ < (integer, bigint) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (integer, bigint) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (integer, bigint) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (integer, bigint) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (integer, bigint) │ 5 │ search │ \n brin │ integer_minmax_ops │ < (integer, integer) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (integer, integer) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (integer, integer) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (integer, integer) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (integer, integer) │ 5 │ search │ \n brin │ integer_minmax_ops │ < (integer, smallint) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (integer, smallint) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (integer, smallint) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (integer, smallint) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (integer, smallint) │ 5 │ search │ \n brin │ integer_minmax_ops │ < (smallint, bigint) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (smallint, bigint) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (smallint, bigint) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (smallint, bigint) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (smallint, bigint) │ 5 │ search │ \n brin │ integer_minmax_ops │ < (smallint, integer) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (smallint, integer) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (smallint, integer) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (smallint, integer) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (smallint, 
integer) │ 5 │ search │ \n brin │ integer_minmax_ops │ < (smallint, smallint) │ 1 │ search │ \n brin │ integer_minmax_ops │ <= (smallint, smallint) │ 2 │ search │ \n brin │ integer_minmax_ops │ = (smallint, smallint) │ 3 │ search │ \n brin │ integer_minmax_ops │ >= (smallint, smallint) │ 4 │ search │ \n brin │ integer_minmax_ops │ > (smallint, smallint) │ 5 │ search │ \n\nwhich in my mind is a clear improvement.\n\nSo I propose the attached patch.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 11 May 2020 17:08:56 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Show opclass and opfamily related information in psql" }, { "msg_contents": "\nI would appreciate opinions from the patch authors on this ordering\nchange (rationale in previous email). I forgot to CC Sergei and Nikita.\n\n> diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c\n> index 8dca6d8bb4..9bd0bf8356 100644\n> --- a/src/bin/psql/describe.c\n> +++ b/src/bin/psql/describe.c\n> @@ -6288,7 +6288,11 @@ listOpFamilyOperators(const char *access_method_pattern,\n> \t\tprocessSQLNamePattern(pset.db, &buf, family_pattern, have_where, false,\n> \t\t\t\t\t\t\t \"nsf.nspname\", \"of.opfname\", NULL, NULL);\n> \n> -\tappendPQExpBufferStr(&buf, \"ORDER BY 1, 2, o.amopstrategy, 3;\");\n> +\tappendPQExpBufferStr(&buf, \"ORDER BY 1, 2,\\n\"\n> +\t\t\t\t\t\t \" o.amoplefttype = o.amoprighttype DESC,\\n\"\n> +\t\t\t\t\t\t \" pg_catalog.format_type(o.amoplefttype, NULL),\\n\"\n> +\t\t\t\t\t\t \" pg_catalog.format_type(o.amoprighttype, NULL),\\n\"\n> +\t\t\t\t\t\t \" o.amopstrategy;\");\n> \n> \tres = PSQLexec(buf.data);\n> \ttermPQExpBuffer(&buf);\n\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 12 May 2020 14:09:58 -0400", "msg_from": "Alvaro Herrera 
<alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Show opclass and opfamily related information in psql" }, { "msg_contents": "\nI would appreciate opinions from the patch authors on this ordering\nchange (rationale in previous email). I forgot to CC Sergei and Nikita.\n\n> diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c\n> index 8dca6d8bb4..9bd0bf8356 100644\n> --- a/src/bin/psql/describe.c\n> +++ b/src/bin/psql/describe.c\n> @@ -6288,7 +6288,11 @@ listOpFamilyOperators(const char *access_method_pattern,\n> \t\tprocessSQLNamePattern(pset.db, &buf, family_pattern, have_where, false,\n> \t\t\t\t\t\t\t \"nsf.nspname\", \"of.opfname\", NULL, NULL);\n> \n> -\tappendPQExpBufferStr(&buf, \"ORDER BY 1, 2, o.amopstrategy, 3;\");\n> +\tappendPQExpBufferStr(&buf, \"ORDER BY 1, 2,\\n\"\n> +\t\t\t\t\t\t \" o.amoplefttype = o.amoprighttype DESC,\\n\"\n> +\t\t\t\t\t\t \" pg_catalog.format_type(o.amoplefttype, NULL),\\n\"\n> +\t\t\t\t\t\t \" pg_catalog.format_type(o.amoprighttype, NULL),\\n\"\n> +\t\t\t\t\t\t \" o.amopstrategy;\");\n> \n> \tres = PSQLexec(buf.data);\n> \ttermPQExpBuffer(&buf);\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 12 May 2020 14:09:58 -0400", "msg_from": "Alvaro Herrera 

For brin's integer_minmax_ops, the resulting list\n> would have first (bigint,bigint) then (integer,integer) then\n> (smallint,smallint), then all the rest:\n\n+1\n\nNikita, what do you think?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Thu, 14 May 2020 12:52:10 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: pgsql: Show opclass and opfamily related information in psql" }, { "msg_contents": "On 14.05.2020 12:52, Alexander Korotkov wrote:\n\n> Nikita, what do you think?\n>\nI agree that this patch is an improvement.\n\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\nOn 14.05.2020 12:52, Alexander Korotkov wrote:\n\n\nNikita, what do you think?\n\n\n\n\nI agree that this patch is an improvement.\n\n -- \n Nikita Glukhov\n Postgres Professional: http://www.postgrespro.com\n The Russian Postgres Company", "msg_date": "Thu, 14 May 2020 13:30:31 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: pgsql: Show opclass and opfamily related information in psql" }, { "msg_contents": "On Thu, May 14, 2020 at 1:30 PM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> I agree that this patch is an improvement.\n\nOK, I'm going to push this patch if no objections.\n(Sergey doesn't seem to continue involvement in PostgreSQL\ndevelopment, so it doesn't look like we should wait for him)\n\n------\nAlexander Korotkov\n\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Thu, 14 May 2020 13:34:29 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: pgsql: Show opclass and opfamily related information in psql" }, { "msg_contents": "On Thu, May 14, 2020 at 1:34 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> 
wrote:\n> On Thu, May 14, 2020 at 1:30 PM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> > I agree that this patch is an improvement.\n>\n> OK, I'm going to push this patch if no objections.\n> (Sergey doesn't seem to continue involvement in PostgreSQL\n> development, so it doesn't look like we should wait for him)\n\nPushed. I also applied the same ordering modification to \\dAp.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sun, 17 May 2020 12:47:27 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: pgsql: Show opclass and opfamily related information in psql" } ]
[ { "msg_contents": "While working on a patch, I noticed this pre-existing behavior, which seems to\nbe new since v11, maybe due to changes to SRF.\n\n|postgres=# SELECT pg_ls_dir('.') LIMIT 1;\n|WARNING: 1 temporary files and directories not closed at end-of-transaction\n|pg_ls_dir | pg_dynshmem\n\n|postgres=# SELECT pg_ls_waldir() LIMIT 1;\n|WARNING: 1 temporary files and directories not closed at end-of-transaction\n|-[ RECORD 1 ]+-------------------------------------------------------------\n|pg_ls_waldir | (00000001000031920000007B,16777216,\"2020-03-08 03:50:34-07\")\n\n\nNote, that doesn't happen with \"SELECT * FROM\".\n\nI'm not sure what the solution is to that, but my patch was going to make it\nworse rather than better for pg_ls_tmpdir.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 8 Mar 2020 12:31:03 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> While working on a patch, I noticed this pre-existing behavior, which seems to\n> be new since v11, maybe due to changes to SRF.\n\n> |postgres=# SELECT pg_ls_dir('.') LIMIT 1;\n> |WARNING: 1 temporary files and directories not closed at end-of-transaction\n\nHmm, actually it looks to me like pg_ls_dir has been broken forever.\nThe reason the warning didn't show up before v11 is that CleanupTempFiles\ndidn't bleat about leaked \"allocated\" directories before that\n(cf 9cb7db3f0).\n\nI guess we ought to change that function to use returns-a-tuplestore\nprotocol instead of thinking it can hold a directory open across calls.\nIt's not hard to think of use-cases where the existing behavior would\ncause issues worse than a nanny-ish WARNING, especially on platforms\nwith tight \"ulimit -n\" limits.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 08 Mar 2020 14:37:49 -0400", "msg_from": "Tom Lane 
<tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "On Sun, Mar 08, 2020 at 02:37:49PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > While working on a patch, I noticed this pre-existing behavior, which seems to\n> > be new since v11, maybe due to changes to SRF.\n> \n> > |postgres=# SELECT pg_ls_dir('.') LIMIT 1;\n> > |WARNING: 1 temporary files and directories not closed at end-of-transaction\n> \n> Hmm, actually it looks to me like pg_ls_dir has been broken forever.\n> The reason the warning didn't show up before v11 is that CleanupTempFiles\n> didn't bleat about leaked \"allocated\" directories before that\n> (cf 9cb7db3f0).\n> \n> I guess we ought to change that function to use returns-a-tuplestore\n> protocol instead of thinking it can hold a directory open across calls.\n> It's not hard to think of use-cases where the existing behavior would\n> cause issues worse than a nanny-ish WARNING, especially on platforms\n> with tight \"ulimit -n\" limits.\n\nThanks for the analysis.\n\nDo you mean it should enumerate all files during the initial SRF call, or use\nsomething other than the SRF_* macros ?\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 8 Mar 2020 14:14:56 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. 
not closed at\n end-of-transaction" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sun, Mar 08, 2020 at 02:37:49PM -0400, Tom Lane wrote:\n>> I guess we ought to change that function to use returns-a-tuplestore\n>> protocol instead of thinking it can hold a directory open across calls.\n>> It's not hard to think of use-cases where the existing behavior would\n>> cause issues worse than a nanny-ish WARNING, especially on platforms\n>> with tight \"ulimit -n\" limits.\n\n> Thanks for the analysis.\n\n> Do you mean it should enumerate all files during the initial SRF call, or use\n> something other than the SRF_* macros ?\n\nIt has to enumerate all the files during the first call. I suppose it\ncould do that and then still hand back the results one-at-a-time, but\nthere seems little point compared to filling a tuplestore immediately.\nSo probably the SRF_ macros are useless here.\n\nAnother possible solution is to register an exprstate-shutdown hook to\nensure the resource is cleaned up, but I'm not very happy with that\nbecause it does nothing to prevent the hazard of overrunning the\navailable resources if you have several of these active at once.\n\nI've just finished scanning the source code and concluding that all\nof these functions are similarly broken:\n\npg_ls_dir\npg_ls_dir_files\npg_tablespace_databases\npg_logdir_ls_internal\npg_timezone_names\npgrowlocks\n\nThe first five risk leaking an open directory, the last risks leaking\nan active tablescan and open relation.\n\nI don't see anything in the documentation (either funcapi.h or\nxfunc.sgml) warning that the function might not get run to completion,\neither ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 08 Mar 2020 15:40:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. 
not closed at\n end-of-transaction" }, { "msg_contents": "I wrote:\n> I've just finished scanning the source code and concluding that all\n> of these functions are similarly broken:\n> pg_ls_dir\n> pg_ls_dir_files\n> pg_tablespace_databases\n> pg_logdir_ls_internal\n> pg_timezone_names\n> pgrowlocks\n\nBTW, another thing I noticed while looking around is that some of\nthe functions using SRF_RETURN_DONE() think they should clean up\nmemory beforehand. This is a waste of code/cycles, as long as the\nmemory was properly allocated in funcctx->multi_call_memory_ctx,\nbecause funcapi.c takes care of deleting that context.\n\nWe should probably document that *any* manual cleanup before\nSRF_RETURN_DONE() is an antipattern. If you have to have cleanup,\nit needs to be done via RegisterExprContextCallback instead.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 08 Mar 2020 16:30:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "On Sun, Mar 08, 2020 at 03:40:09PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Sun, Mar 08, 2020 at 02:37:49PM -0400, Tom Lane wrote:\n> >> I guess we ought to change that function to use returns-a-tuplestore\n> >> protocol instead of thinking it can hold a directory open across calls.\n> > Thanks for the analysis.\n> > Do you mean it should enumerate all files during the initial SRF call, or use\n> > something other than the SRF_* macros ?\n> It has to enumerate all the files during the first call. 
I suppose it\n\n> I've just finished scanning the source code and concluding that all\n> of these functions are similarly broken:\n> pg_ls_dir_files\n\nI patched this one to see what it looks like and to allow /hopefully/ moving\nforward one way or another with the pg_ls_tmpfile() patch set (or at least\navoid trying to do anything there which is too inconsistent with this fix).\n\n> I don't see anything in the documentation (either funcapi.h or\n> xfunc.sgml) warning that the function might not get run to completion,\n> either ...\n\nAlso, at first glance, these seem to be passing constant \"randomAccess=true\"\nrather than (bool) (rsinfo->allowedModes&SFRM_Materialize_Random)\n\n$ git grep -wl SFRM_Materialize |xargs grep -l 'tuplestore_begin_heap(true'\ncontrib/dblink/dblink.c\ncontrib/pageinspect/brinfuncs.c\ncontrib/pg_stat_statements/pg_stat_statements.c\nsrc/backend/access/transam/xlogfuncs.c\nsrc/backend/commands/event_trigger.c\nsrc/backend/commands/extension.c\nsrc/backend/foreign/foreign.c\nsrc/backend/replication/logical/launcher.c\nsrc/backend/replication/logical/logicalfuncs.c\nsrc/backend/replication/logical/origin.c\nsrc/backend/replication/slotfuncs.c\nsrc/backend/replication/walsender.c\nsrc/backend/storage/ipc/shmem.c\nsrc/backend/utils/adt/pgstatfuncs.c\nsrc/backend/utils/misc/guc.c\nsrc/backend/utils/misc/pg_config.c\n\n-- \nJustin", "msg_date": "Wed, 11 Mar 2020 06:19:21 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. 
not closed at\n end-of-transaction" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n>>> On Sun, Mar 08, 2020 at 02:37:49PM -0400, Tom Lane wrote:\n>>>> I guess we ought to change that function to use returns-a-tuplestore\n>>>> protocol instead of thinking it can hold a directory open across calls.\n\n> I patched this one to see what it looks like and to allow /hopefully/ moving\n> forward one way or another with the pg_ls_tmpfile() patch set (or at least\n> avoid trying to do anything there which is too inconsistent with this fix).\n\nI reviewed this, added some test cases, and pushed it, so that we can see\nif the buildfarm finds anything wrong. (I'm not expecting that, because\nthis should all be pretty portable, but you never know.) Assuming not,\nwe need to fix the other functions similarly, and then do something about\nrevising the documentation to warn against this coding style. Do you\nwant to have a go at that?\n\n> Also, at first glance, these seem to be passing constant \"randomAccess=true\"\n> rather than (bool) (rsinfo->allowedModes&SFRM_Materialize_Random)\n\nHm. Not a bug, but possibly a performance issue, if the tuplestore\ngets big enough for that to matter. (I think it doesn't matter until\nwe start spilling to temp files.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Mar 2020 15:32:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. 
not closed at\n end-of-transaction" }, { "msg_contents": "On Wed, Mar 11, 2020 at 03:32:38PM -0400, Tom Lane wrote:\n> > I patched this one to see what it looks like and to allow /hopefully/ moving\n> > forward one way or another with the pg_ls_tmpfile() patch set (or at least\n> > avoid trying to do anything there which is too inconsistent with this fix).\n> \n> I reviewed this, added some test cases, and pushed it, so that we can see\n\nThanks, tests were on my periphery..\n\n| In passing, fix bogus error report for stat() failure: it was\n| whining about the directory when it should be fingering the\n| individual file. Doubtless a copy-and-paste error.\n\nThanks again ; that was my 0001 patch on the other thread. No rebase conflict\neven ;)\nhttps://www.postgresql.org/message-id/20191228101650.GG12890%40telsasoft.com\n\n> Do you want to have a go at that?\n\nFirst draft attached. Note that I handled pg_ls_dir, even though I'm proposing\non the other thread to collapse/merge/meld it with pg_ls_dir_files [0].\nPossibly that's a bad idea with tuplestore, due to returning a scalar vs a row\nand needing to conditionally call CreateTemplateTupleDesc vs\nget_call_result_type. I'll rebase that patch later today.\n\nI didn't write test cases yet. Also didn't look for functions not on your\nlist.\n\nI noticed this doesn't actually do anything, but kept it for now...except in\npg_ls_dir error case:\n\nsrc/include/utils/tuplestore.h:/* tuplestore_donestoring() used to be required, but is no longer used */\nsrc/include/utils/tuplestore.h:#define tuplestore_donestoring(state) ((void) 0)\n\nI found a few documentation bits that I think aren't relevant, but could\npossibly be misread to encourage the bad coding practice. This is about *sql*\nfunctions:\n\n|37.5.8. 
SQL Functions Returning Sets\n|When an SQL function is declared as returning SETOF sometype, the function's\n|final query is executed TO COMPLETION, and each row it outputs is returned as\n|an element of the result set.\n|...\n|Set-returning functions in the select list are always evaluated as though they\n|are on the inside of a nested-loop join with the rest of the FROM clause, so\n|that the function(s) are run TO COMPLETION before the next row from the FROM\n|clause is considered.\n\n-- \nJustin\n\n[0] https://www.postgresql.org/message-id/20200310183037.GA29065%40telsasoft.com\nv9-0008-generalize-pg_ls_dir_files-and-retire-pg_ls_dir.patch", "msg_date": "Thu, 12 Mar 2020 07:11:56 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "On Sun, Mar 08, 2020 at 04:30:44PM -0400, Tom Lane wrote:\n> BTW, another thing I noticed while looking around is that some of\n> the functions using SRF_RETURN_DONE() think they should clean up\n> memory beforehand. This is a waste of code/cycles, as long as the\n> memory was properly allocated in funcctx->multi_call_memory_ctx,\n> because funcapi.c takes care of deleting that context.\n> \n> We should probably document that *any* manual cleanup before\n> SRF_RETURN_DONE() is an antipattern. If you have to have cleanup,\n> it needs to be done via RegisterExprContextCallback instead.\n\nThis part appears to be already in place since\ne4186762ffaa4188e16702e8f4f299ea70988b96:\n\n|The memory context that is current when the SRF is called is a transient\n|context that will be cleared between calls. This means that you do not need to\n|call pfree on everything you allocated using palloc; it will go away anyway.\n|However, if you want to allocate any data structures to live across calls, you\n|need to put them somewhere else. 
The memory context referenced by\n|multi_call_memory_ctx is a suitable location for any data that needs to survive\n|until the SRF is finished running. In most cases, this means that you should\n|switch into multi_call_memory_ctx while doing the first-call setup.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 12 Mar 2020 07:12:06 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "Nitpick: please see c4dcd9144ba6.\n\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Wed, 11 Mar 2020 10:09:18 -0500\n> Subject: [PATCH] SRF: avoid leaking resources if not run to completion\n> \n> Change to return a tuplestore populated immediately and returned in full.\n> \n> Discussion: https://www.postgresql.org/message-id/20200308173103.GC1357%40telsasoft.com\n\nI wonder if this isn't saying that the whole value-per-call protocol is\nbogus, in that it seems impossible to write a useful function with it.\nMaybe we should add one final call with a special flag \"function\nshutdown\" or something, so that these resources can be released if the\nSRF isn't run to completion?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 12 Mar 2020 09:49:17 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. 
not closed at\n end-of-transaction" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I wonder if this isn't saying that the whole value-per-call protocol is\n> bogus, in that it seems impossible to write a useful function with it.\n\nOnly if you have a *very* narrow definition of \"useful function\".\nIf you look through SRF_RETURN_DONE callers, only a small minority\nare trying to do resource cleanup beforehand.\n\n> Maybe we should add one final call with a special flag \"function\n> shutdown\" or something, so that these resources can be released if the\n> SRF isn't run to completion?\n\nWe already have an appropriate mechanism for cleaning up resources,\nie RegisterExprContextCallback. I do not think what you're suggesting\ncould be made to work without an incompatible break in the API for\nSRFs, so it's not an improvement over telling people more forcefully\nabout how to do resource cleanup correctly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Mar 2020 10:05:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "On Thu, Mar 12, 2020 at 07:11:56AM -0500, Justin Pryzby wrote:\n> > Do you want to have a go at that?\n> \n> First draft attached. Note that I handled pg_ls_dir, even though I'm proposing\n> on the other thread to collapse/merge/meld it with pg_ls_dir_files [0].\n> Possibly that's a bad idea with tuplestore, due to returning a scalar vs a row\n> and needing to conditionally call CreateTemplateTupleDesc vs\n> get_call_result_type. I'll rebase that patch later today.\n> \n> I didn't write test cases yet. 
Also didn't look for functions not on your\n> list.\n> \n> I noticed this doesn't actually do anything, but kept it for now...except in\n> pg_ls_dir error case:\n> \n> src/include/utils/tuplestore.h:/* tuplestore_donestoring() used to be required, but is no longer used */\n> src/include/utils/tuplestore.h:#define tuplestore_donestoring(state) ((void) 0)\n\nv2 attached - I will add to next CF in case you want to defer it until later.\n\n-- \nJustin", "msg_date": "Mon, 16 Mar 2020 10:53:06 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> v2 attached - I will add to next CF in case you want to defer it until later.\n\nThanks, reviewed and pushed. Since this is a bug fix (at least in part)\nI didn't want to wait.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 16 Mar 2020 21:38:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "On Mon, Mar 16, 2020 at 09:38:50PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > v2 attached - I will add to next CF in case you want to defer it until later.\n> \n> Thanks, reviewed and pushed. Since this is a bug fix (at least in part)\n> I didn't want to wait.\n\nThanks for fixing my test case and pushing.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 16 Mar 2020 21:00:17 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. 
not closed at\n end-of-transaction" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Thanks for fixing my test case and pushing.\n\nThe buildfarm just showed up another instability in the test cases\nwe added:\n\n=========================== regression.diffs ================\ndiff -U3 /home/bf/build/buildfarm-idiacanthus/HEAD/pgsql.build/../pgsql/src/test/regress/expected/misc_functions.out /home/bf/build/buildfarm-idiacanthus/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/regress/results/misc_functions.out\n--- /home/bf/build/buildfarm-idiacanthus/HEAD/pgsql.build/../pgsql/src/test/regress/expected/misc_functions.out\t2020-03-17 08:14:50.292037956 +0100\n+++ /home/bf/build/buildfarm-idiacanthus/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/regress/results/misc_functions.out\t2020-03-28 13:55:12.490024822 +0100\n@@ -169,11 +169,7 @@\n \n select (w).size = :segsize as ok\n from (select pg_ls_waldir() w) ss where length((w).name) = 24 limit 1;\n- ok \n-----\n- t\n-(1 row)\n-\n+ERROR: could not stat file \"pg_wal/000000010000000000000078\": No such file or directory\n select count(*) >= 0 as ok from pg_ls_archive_statusdir();\n ok \n ----\n\nIt's pretty obvious what happened here: concurrent activity renamed or\nremoved the WAL segment between when we saw it in the directory and\nwhen we tried to stat() it.\n\nThis seems like it would be just as much of a hazard for field usage\nas it is for regression testing, so I propose that we fix these\ndirectory-scanning functions to silently ignore ENOENT failures from\nstat(). Are there any for which we should not do that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Mar 2020 13:13:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. 
not closed at\n end-of-transaction" }, { "msg_contents": "On Sat, Mar 28, 2020 at 01:13:54PM -0400, Tom Lane wrote:\n> The buildfarm just showed up another instability in the test cases\n> we added:\n\nYea, as you said, this is an issue with the *testcase*. The function behavior\ndidn't change, we just weren't previously exercising it.\n\n> select (w).size = :segsize as ok\n> from (select pg_ls_waldir() w) ss where length((w).name) = 24 limit 1;\n> - ok \n> -----\n> - t\n> -(1 row)\n> -\n> +ERROR: could not stat file \"pg_wal/000000010000000000000078\": No such file or directory\n> select count(*) >= 0 as ok from pg_ls_archive_statusdir();\n> ok \n> ----\n> \n> It's pretty obvious what happened here: concurrent activity renamed or\n> removed the WAL segment between when we saw it in the directory and\n> when we tried to stat() it.\n> \n> This seems like it would be just as much of a hazard for field usage\n> as it is for regression testing,\n\nThat's clearly true for pg_ls_waldir(), which calls pg_ls_dir_files, and\nincludes some metadata columns.\n\n> so I propose that we fix these directory-scanning functions to silently\n> ignore ENOENT failures from stat(). Are there any for which we should not do\n> that?\n\nI think it's reasonable to ignore transient ENOENT for tmpdir, logdir, and\nprobably archive_statusdir. That doesn't currently affect pg_ls_dir(), which\nlists files but not metadata for an arbitrary dir, so doesn't call stat().\n\nNote that dangling links in the other functions currently cause (wrong [0])\nerror. I guess it should be documented that broken links will be ignored due\nto ENOENT.\n\nMaybe we should lstat() the file to determine if it's a dangling link; if\nlstat() fails, then skip it. Currently, we use stat(), which shows metadata of\na link's *target*. 
Maybe we'd change that.\n\nNote that I have a patch which generalizes pg_ls_dir_files and makes\npg_ls_dir() a simple wrapper, so if that's pursued, they would behave the same\nunless I add another flag to do otherwise (but behaving the same has its\nmerits). It already uses lstat() to show links to dirs as isdir=no, which was\nneeded to avoid recursing into links-to-dirs in the new helper function\npg_ls_dir_recurse(). https://commitfest.postgresql.org/26/2377/\n\n[0] Which you fixed in 085b6b667 and I previously fixed at:\nhttps://www.postgresql.org/message-id/attachment/106478/v2-0001-BUG-in-errmsg.patch\n|$ sudo ln -s foo /var/log/postgresql/bar\n|ts=# SELECT * FROM pg_ls_logdir() ORDER BY 3;\n|ERROR: could not stat directory \"/var/log/postgresql\": No such file or directory\n\n\n", "msg_date": "Sat, 28 Mar 2020 13:39:04 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sat, Mar 28, 2020 at 01:13:54PM -0400, Tom Lane wrote:\n>> so I propose that we fix these directory-scanning functions to silently\n>> ignore ENOENT failures from stat(). Are there any for which we should not do\n>> that?\n\n> Maybe we should lstat() the file to determine if it's a dangling link; if\n> lstat() fails, then skip it. Currently, we use stat(), which shows metdata of\n> a link's *target*. Maybe we'd change that.\n\nHm, good point that ENOENT could refer to a symlink's target. Still,\nI'm not sure it's worth going out of our way to disambiguate that,\ngiven that these directories aren't really supposed to contain symlinks.\n(And on the third hand, if they aren't supposed to, then maybe these\nfunctions needn't look through any symlinks? 
In which case just\nsubstituting lstat for stat would resolve the ambiguity.)\n\n> Note that I have a patch which generalizes pg_ls_dir_files and makes\n> pg_ls_dir() a simple wrapper, so if that's pursued, they would behave the same\n> unless I add another flag to do otherwise (but behaving the same has its\n> merits). It already uses lstat() to show links to dirs as isdir=no, which was\n> needed to avoid recursing into links-to-dirs in the new helper function\n> pg_ls_dir_recurse(). https://commitfest.postgresql.org/26/2377/\n\nI think we need a back-patchable fix for the ENOENT failure, seeing that\nwe back-patched the new regression test; intermittent buildfarm failures\nare no fun in any branch. So new functions aren't too relevant here,\nalthough it's fair to look ahead at whether the same behavior will be\nappropriate for them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Mar 2020 15:07:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "I wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>> Maybe we should lstat() the file to determine if it's a dangling link; if\n>> lstat() fails, then skip it. Currently, we use stat(), which shows metdata of\n>> a link's *target*. Maybe we'd change that.\n\n> Hm, good point that ENOENT could refer to a symlink's target. Still,\n> I'm not sure it's worth going out of our way to disambiguate that,\n> given that these directories aren't really supposed to contain symlinks.\n> (And on the third hand, if they aren't supposed to, then maybe these\n> functions needn't look through any symlinks? 
In which case just\n> substituting lstat for stat would resolve the ambiguity.)\n\nAfter looking at the callers of pg_ls_dir_files, and noticing that\nit's already defined to ignore anything that's not a regular file,\nI think switching to lstat makes sense.\n\nI also grepped the other uses of ReadDir[Extended], and didn't see\nany other ones that seemed desperately in need of changing.\n\nSo the attached seems like a sufficient fix.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 29 Mar 2020 12:37:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "On Sun, Mar 29, 2020 at 12:37:05PM -0400, Tom Lane wrote:\n> I wrote:\n> > Justin Pryzby <pryzby@telsasoft.com> writes:\n> >> Maybe we should lstat() the file to determine if it's a dangling link; if\n> >> lstat() fails, then skip it. Currently, we use stat(), which shows metdata of\n> >> a link's *target*. Maybe we'd change that.\n> \n> > Hm, good point that ENOENT could refer to a symlink's target. Still,\n> > I'm not sure it's worth going out of our way to disambiguate that,\n> > given that these directories aren't really supposed to contain symlinks.\n> > (And on the third hand, if they aren't supposed to, then maybe these\n> > functions needn't look through any symlinks? In which case just\n> > substituting lstat for stat would resolve the ambiguity.)\n> \n> After looking at the callers of pg_ls_dir_files, and noticing that\n> it's already defined to ignore anything that's not a regular file,\n> I think switching to lstat makes sense.\n\nYea, only pg_ls_dir() shows special file types (and currently the others even\nhide dirs).\n\nThe essence of your patch is to ignore ENOENT, but you also changed to use\nlstat(), which seems unrelated. That means we'll now hide (non-broken)\nsymlinks. Is that intentional/needed ? I guess maybe you're trying to fix the\nbug (?) 
that symlinks aren't skipped? If so, I guess it should be a separate\ncommit, or the commit message should say so. I think the doc update is already\nhandled by: 8b6d94cf6c8319bfd6bebf8b863a5db586c19c3b (we didn't used to say we\nskipped specials, and now we say we do, and we'll follow through RSN and\nactually do it, too).\n\n> diff --git a/src/backend/utils/adt/genfile.c b/src/backend/utils/adt/genfile.c\n> index 01185f2..8429a12 100644\n> --- a/src/backend/utils/adt/genfile.c\n> +++ b/src/backend/utils/adt/genfile.c\n> @@ -596,10 +596,15 @@ pg_ls_dir_files(FunctionCallInfo fcinfo, const char *dir, bool missing_ok)\n> \n> \t\t/* Get the file info */\n> \t\tsnprintf(path, sizeof(path), \"%s/%s\", dir, de->d_name);\n> -\t\tif (stat(path, &attrib) < 0)\n> +\t\tif (lstat(path, &attrib) < 0)\n> +\t\t{\n> +\t\t\t/* Ignore concurrently-deleted files, else complain */\n> +\t\t\tif (errno == ENOENT)\n> +\t\t\t\tcontinue;\n> \t\t\tereport(ERROR,\n> \t\t\t\t\t(errcode_for_file_access(),\n> \t\t\t\t\t errmsg(\"could not stat file \\\"%s\\\": %m\", path)));\n> +\t\t}\n> \n> \t\t/* Ignore anything but regular files */\n> \t\tif (!S_ISREG(attrib.st_mode))\n\n\n\n", "msg_date": "Sun, 29 Mar 2020 12:14:15 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. 
Is that intentional/needed ?\n\nWell, the following comment says \"ignore anything but regular files\",\nso I'm supposing that that is the behavior that we actually want here\nand failed to implement correctly. There might be scope for\nadditional directory-reading functions, but I'd think you'd want\nmore information (such as the file type) returned from anything\nthat doesn't act this way.\n\nIn practice, since these directories shouldn't contain symlinks,\nit's likely moot. The only place in PG data directories where\nwe actually expect symlinks is pg_tablespace ... and that contains\nsymlinks to directories, so that this function would ignore them\nanyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 29 Mar 2020 13:22:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "On Sun, Mar 29, 2020 at 01:22:04PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Sun, Mar 29, 2020 at 12:37:05PM -0400, Tom Lane wrote:\n> >> After looking at the callers of pg_ls_dir_files, and noticing that\n> >> it's already defined to ignore anything that's not a regular file,\n> >> I think switching to lstat makes sense.\n> \n> > Yea, only pg_ls_dir() shows special file types (and currently the others even\n> > hide dirs).\n> \n> > The essence of your patch is to ignore ENOENT, but you also changed to use\n> > lstat(), which seems unrelated. That means we'll now hide (non-broken)\n> > symlinks. Is that intentional/needed ?\n> \n> Well, the following comment says \"ignore anything but regular files\",\n> so I'm supposing that that is the behavior that we actually want here\n> and failed to implement correctly. 
There might be scope for\n> additional directory-reading functions, but I'd think you'd want\n> more information (such as the file type) returned from anything\n> that doesn't act this way.\n\nMaybe pg_stat_file() deserves similar attention ? Right now, it'll fail on a\nbroken link. If we changed it to lstat(), then it'd work, but it'd also show\nmetadata for the *link* rather than its target.\n\nPatch proposed as v14-0001 patch here may be relevant:\nhttps://www.postgresql.org/message-id/20200317190401.GY26184%40telsasoft.com\n- indicating if it is a directory. Typical usages include:\n+ indicating if it is a directory (or a symbolic link to a directory).\n...\n\n> In practice, since these directories shouldn't contain symlinks,\n> it's likely moot. The only place in PG data directories where\n> we actually expect symlinks is pg_tablespace ... and that contains\n> symlinks to directories, so that this function would ignore them\n> anyway.\n\nI wouldn't hesitate to make symlinks, at least in log. It's surprising when\nfiles are hidden, but I won't argue about the best behavior here.\n\nI'm thinking of distributions or local configurations that use\n/var/log/postgresql. I didn't remember or didn't realize, but it looks like\ndebian's packages use logging_collector=off and then launch postmaster with\n2> /var/log/postgres/... It seems reasonable to do something like:\nlog/huge-querylog.csv => /zfs/compressed/...\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 29 Mar 2020 15:12:15 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "\nHello Justin,\n\n>> Well, the following comment says \"ignore anything but regular files\",\n>> so I'm supposing that that is the behavior that we actually want here\n>> and failed to implement correctly. 
There might be scope for\n>> additional directory-reading functions, but I'd think you'd want\n>> more information (such as the file type) returned from anything\n>> that doesn't act this way.\n>\n> Maybe pg_stat_file() deserves similar attention ? Right now, it'll fail on a\n> broken link. If we changed it to lstat(), then it'd work, but it'd also show\n> metadata for the *link* rather than its target.\n\nYep. I think this traditional answer is the rational answer.\n\nAs I wrote about an earlier version of the patch, ISTM that instead of \nreinventing, extending, adapting various ls variants (with/without \nmetadata, which show only files, which shows target of links, which shows \ndirectory, etc.) we would just need *one* postgres \"ls\" implementation \nwhich would be like \"ls -la arg\" (returns file type, dates), and then \neverything else is a wrapper around that with appropriate filtering that \ncan be done at the SQL level, like you started with recurse.\n\nIt would reduce the amount of C code and I find the SQL-level approach \nquite elegant.\n\n-- \nFabien.\n\n\n", "msg_date": "Mon, 30 Mar 2020 07:16:17 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> As I wrote about an earlier version of the patch, ISTM that instead of \n> reinventing, extending, adapting various ls variants (with/without \n> metadata, which show only files, which shows target of links, which shows \n> directory, etc.) 
we would just need *one* postgres \"ls\" implementation \n> which would be like \"ls -la arg\" (returns file type, dates), and then \n> everything else is a wrapper around that with appropriate filtering that \n> can be done at the SQL level, like you started with recurse.\n\nYeah, I agree that some new function that can represent symlinks\nexplicitly in its output is the place to deal with this, for\npeople who want to deal with it.\n\nIn the meantime, there's still the question of what pg_ls_dir_files\nshould do exactly. Are we content to have it ignore symlinks?\nI remain inclined to think that's the right thing given its current\nbrief.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Mar 2020 10:44:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "Hello,\n\n>> As I wrote about an earlier version of the patch, ISTM that instead of\n>> reinventing, extending, adapting various ls variants (with/without\n>> metadata, which show only files, which shows target of links, which shows\n>> directory, etc.) we would just need *one* postgres \"ls\" implementation\n>> which would be like \"ls -la arg\" (returns file type, dates), and then\n>> everything else is a wrapper around that with appropriate filtering that\n>> can be done at the SQL level, like you started with recurse.\n>\n> Yeah, I agree that some new function that can represent symlinks\n> explicitly in its output is the place to deal with this, for\n> people who want to deal with it.\n>\n> In the meantime, there's still the question of what pg_ls_dir_files\n> should do exactly. 
Are we content to have it ignore symlinks?\nI remain inclined to think that's the right thing given its current\nbrief.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Mar 2020 10:44:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "Hello,\n\n>> As I wrote about an earlier version of the patch, ISTM that instead of\n>> reinventing, extending, adapting various ls variants (with/without\n>> metadata, which show only files, which shows target of links, which shows\n>> directory, etc.) we would just need *one* postgres \"ls\" implementation\n>> which would be like \"ls -la arg\" (returns file type, dates), and then\n>> everything else is a wrapper around that with appropriate filtering that\n>> can be done at the SQL level, like you started with recurse.\n>\n> Yeah, I agree that some new function that can represent symlinks\n> explicitly in its output is the place to deal with this, for\n> people who want to deal with it.\n>\n> In the meantime, there's still the question of what pg_ls_dir_files\n> should do exactly. Are we content to have it ignore symlinks?\n> I remain inclined to think that's the right thing given its current\n> brief.\n\nMy 0.02€:\n\nI agree that it is enough to reproduce the current behavior of various \nexisting pg_ls* functions, but on the other hand outputting a column type \nchar like ls (-, d, l…) looks like really no big deal. I'd say that the \nonly reason not to do it may be to pass this before feature freeze.\n\n-- \nFabien.", "msg_date": "Tue, 31 Mar 2020 07:36:03 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "On Tue, Mar 31, 2020 at 07:36:03AM +0200, Fabien COELHO wrote:\n> > > As I wrote about an earlier version of the patch, ISTM that instead of\n> > > reinventing, extending, adapting various ls variants (with/without\n> > > metadata, which show only files, which shows target of links, which shows\n> > > directory, etc.) we would just need *one* postgres \"ls\" implementation\n> > > which would be like \"ls -la arg\" (returns file type, dates), and then\n> > > everything else is a wrapper around that with appropriate filtering that\n> > > can be done at the SQL level, like you started with recurse.\n> > \n> > Yeah, I agree that some new function that can represent symlinks\n> > explicitly in its output is the place to deal with this, for\n> > people who want to deal with it.\n> > \n> > In the meantime, there's still the question of what pg_ls_dir_files\n> > should do exactly. Are we content to have it ignore symlinks?\n> > I remain inclined to think that's the right thing given its current\n> > brief.\n> \n> My 0.02€:\n> \n> I agree that it is enough to reproduce the current behavior of various\n> existing pg_ls* functions, but on the other hand outputting a column type\n> char like ls (-, d, l…) looks like really no big deal. 
I'd say that the only\n> reason not to do it may be to pass this before feature freeze.\n\nRemember, there's two threads here, and this one is about the bug in stable\nreleases ($SUBJECT), and now the instability in the test that was added with\nits fix.\n\nI suggest to leave stat() alone in your patch for stable releases. I think\nit's okay if we change behavior so that a broken symlink is skipped instead of\nerroring (as a side effect of skipping ENOENT with stat()). But not okay if we\nchange pg_ls_logdir() to hide symlinks in back branches.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 31 Mar 2020 03:06:43 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I suggest to leave stat() alone in your patch for stable releases. I think\n> it's okay if we change behavior so that a broken symlink is skipped instead of\n> erroring (as a side effect of skipping ENOENT with stat()). But not okay if we\n> change pg_ls_logdir() to hide symlinks in back branches.\n\nMeh. I'm not really convinced, but in the absence of anyone expressing\nsupport for my position, I'll do it that way. I don't think it's worth\ndoing both a stat and lstat to tell the difference between file-is-gone\nand file-is-a-broken-symlink.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Mar 2020 12:41:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "Hello hackers,\n31.03.2020 19:41, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>> I suggest to leave stat() alone in your patch for stable releases. 
I think\n>> it's okay if we change behavior so that a broken symlink is skipped instead of\n>> erroring (as a side effect of skipping ENOENT with stat()). But not okay if we\n>> change pg_ls_logdir() to hide symlinks in back branches.\n> Meh. I'm not really convinced, but in the absence of anyone expressing\n> support for my position, I'll do it that way. I don't think it's worth\n> doing both a stat and lstat to tell the difference between file-is-gone\n> and file-is-a-broken-symlink.\nAs we've discovered in Bug #[16161], stat() for \"concurrently-deleted\nfile\" can also return ERROR_ACCESS_DENIED on Windows. It seems that\npg_upgradeCheck failures seen on\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=fairywren&br=REL_13_STABLE\nwere caused by the same issue.\nShouldn't pg_ls_dir_files() retry stat() on ERROR_ACCESS_DENIED just\nlike the pgwin32_open() does to ignore files in \"delete pending\" state?\n\n[16161]\nhttps://www.postgresql.org/message-id/16161-7a985d2f1bbe8f71%40postgresql.org\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 13 Nov 2020 23:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> Shouldn't pg_ls_dir_files() retry stat() on ERROR_ACCESS_DENIED just\n> like the pgwin32_open() does to ignore files in \"delete pending\" state?\n\nThat would soon lead us to changing every stat() caller in the system\nto have Windows-specific looping logic. No thanks. If we need to do\nthis, let's put in a Windows wrapper layer comparable to pgwin32_open()\nfor open().\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Nov 2020 15:16:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. 
not closed at\n end-of-transaction" }, { "msg_contents": "13.11.2020 23:16, Tom Lane wrote:\n> Alexander Lakhin <exclusion@gmail.com> writes:\n>> Shouldn't pg_ls_dir_files() retry stat() on ERROR_ACCESS_DENIED just\n>> like the pgwin32_open() does to ignore files in \"delete pending\" state?\n> That would soon lead us to changing every stat() caller in the system\n> to have Windows-specific looping logic. No thanks. If we need to do\n> this, let's put in a Windows wrapper layer comparable to pgwin32_open()\n> for open().\nMaybe pgwin32_safestat() should perform this... For now it checks\nGetLastError() for ERROR_DELETE_PENDING, but as we've found out\npreviously this error code in fact is not returned by the OS.\nAnd if the fix is not going to be quick, probably it should be discussed\nin another thread...\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 13 Nov 2020 23:30:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" }, { "msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> 13.11.2020 23:16, Tom Lane wrote:\n>> That would soon lead us to changing every stat() caller in the system\n>> to have Windows-specific looping logic. No thanks. If we need to do\n>> this, let's put in a Windows wrapper layer comparable to pgwin32_open()\n>> for open().\n\n> Maybe pgwin32_safestat() should perform this...\n\nUh ... now that you mention it, that's gone since bed90759f.\n\nThere is code in win32stat.c that purports to cope with this case, though\nI've not tested it personally.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Nov 2020 16:02:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg11+: pg_ls_*dir LIMIT 1: temporary files .. not closed at\n end-of-transaction" } ]
[ { "msg_contents": "I recently noticed while setting up a test environment that attempting to\nconnect to a standby running without hot_standby=on results in a fairly\ngeneric error (I believe \"the database system is starting up\"). I don't\nhave my test setup running right now, so can't confirm with a repro case at\nthe moment, but with a little bit of spelunking I noticed that error text\nonly shows up in src/backend/postmaster/postmaster.c when\nport->canAcceptConnections has the value CAC_STARTUP.\n\nIdeally the error message would include something along the lines of \"The\nserver is running as a standby but cannot accept connections with\nhot_standby=off\".\n\nI wanted to get some initial feedback on the idea before writing a patch:\ndoes that seem like a reasonable change? Is it actually plausible to\ndistinguish between this state and \"still recovering\" (i.e., when starting\nup a hot standby but initial recovery hasn't completed so it legitimately\ncan't accept connections yet)? If so, should we include the possibility if\nhot_standby isn't on, just in case?\n\nThe enum value CAC_STARTUP is defined in src/include/libpq/libpq-be.h,\nwhich makes me wonder if changing this value would result in a wire\nprotocol change/something the client wants to know about? If so, I assume\nit's not reasonable to change the value, but would it still be reasonable\nto change the error text?\n\nThanks,\nJames Coleman", "msg_date": "Sun, 8 Mar 2020 20:12:21 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "Hi,\n\nOn 2020-03-08 20:12:21 -0400, James Coleman wrote:\n> I recently noticed while setting up a test environment that attempting to\n> connect to a standby running without hot_standby=on results in a fairly\n> generic error (I believe \"the database system is starting up\"). 
I don't\n> have my test setup running right now, so can't confirm with a repro case at\n> the moment, but with a little bit of spelunking I noticed that error text\n> only shows up in src/backend/postmaster/postmaster.c when\n> port->canAcceptConnections has the value CAC_STARTUP.\n> \n> Ideally the error message would include something along the lines of \"The\n> server is running as a standby but cannot accept connections with\n> hot_standby=off\".\n\nYea, something roughly like that would be good.\n\n\n> I wanted to get some initial feedback on the idea before writing a patch:\n> does that seem like a reasonable change? Is it actually plausible to\n> distinguish between this state and \"still recovering\" (i.e., when starting\n> up a hot standby but initial recovery hasn't completed so it legitimately\n> can't accept connections yet)? If so, should we include the possibility if\n> hot_standby isn't on, just in case?\n\nYes, it is feasible to distinguish those cases. And we should, if we're\ngoing to change things around.\n\n\n> The enum value CAC_STARTUP is defined in src/include/libpq/libpq-be.h,\n> which makes me wonder if changing this value would result in a wire\n> protocol change/something the client wants to know about? If so, I assume\n> it's not reasonable to change the value, but would it still be reasonable\n> to change the error text?\n\nThe value shouldn't be visible to clients in any way. 
While not obvious\nfrom the name, there's this comment at the top of the header:\n\n *\t Note that this is backend-internal and is NOT exported to clients.\n *\t Structs that need to be client-visible are in pqcomm.h.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Mar 2020 15:28:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Mon, Mar 9, 2020 at 6:28 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-03-08 20:12:21 -0400, James Coleman wrote:\n> > I recently noticed while setting up a test environment that attempting to\n> > connect to a standby running without hot_standby=on results in a fairly\n> > generic error (I believe \"the database system is starting up\"). I don't\n> > have my test setup running right now, so can't confirm with a repro case at\n> > the moment, but with a little bit of spelunking I noticed that error text\n> > only shows up in src/backend/postmaster/postmaster.c when\n> > port->canAcceptConnections has the value CAC_STARTUP.\n> >\n> > Ideally the error message would include something along the lines of \"The\n> > server is running as a standby but cannot accept connections with\n> > hot_standby=off\".\n>\n> Yea, something roughly like that would be good.\n\nAwesome, thanks for the early feedback!\n\n> > I wanted to get some initial feedback on the idea before writing a patch:\n> > does that seem like a reasonable change? Is it actually plausible to\n> > distinguish between this state and \"still recovering\" (i.e., when starting\n> > up a hot standby but initial recovery hasn't completed so it legitimately\n> > can't accept connections yet)? If so, should we include the possibility if\n> > hot_standby isn't on, just in case?\n>\n> Yes, it is feasible to distinguish those cases. 
And we should, if we're\n> going to change things around.\n\nI'll look into this hopefully soon, but it's helpful to know that it's\npossible. Is it basically along the lines of checking to see if the\nLSN is past the minimum recovery point?\n\n> > The enum value CAC_STARTUP is defined in src/include/libpq/libpq-be.h,\n> > which makes me wonder if changing this value would result in a wire\n> > protocol change/something the client wants to know about? If so, I assume\n> > it's not reasonable to change the value, but would it still be reasonable\n> > to change the error text?\n>\n> The value shouldn't be visible to clients in any way. While not obvious\n> from the name, there's this comment at the top of the header:\n>\n> * Note that this is backend-internal and is NOT exported to clients.\n> * Structs that need to be client-visible are in pqcomm.h.\n\nThis is also helpful.\n\nThanks,\nJames\n\n\n", "msg_date": "Mon, 9 Mar 2020 18:40:32 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "Hi,\n\nOn 2020-03-09 18:40:32 -0400, James Coleman wrote:\n> On Mon, Mar 9, 2020 at 6:28 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I wanted to get some initial feedback on the idea before writing a patch:\n> > > does that seem like a reasonable change? Is it actually plausible to\n> > > distinguish between this state and \"still recovering\" (i.e., when starting\n> > > up a hot standby but initial recovery hasn't completed so it legitimately\n> > > can't accept connections yet)? If so, should we include the possibility if\n> > > hot_standby isn't on, just in case?\n> >\n> > Yes, it is feasible to distinguish those cases. And we should, if we're\n> > going to change things around.\n> \n> I'll look into this hopefully soon, but it's helpful to know that it's\n> possible. 
Is it basically along the lines of checking to see if the\n> LSN is past the minimum recovery point?\n\nNo, I don't think that's the right approach. IIRC the startup process\n(i.e. the one doing the WAL replay) signals postmaster once consistency\nhas been achieved. So you can just use that state.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Mar 2020 17:06:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Mon, Mar 9, 2020 at 8:06 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-03-09 18:40:32 -0400, James Coleman wrote:\n> > On Mon, Mar 9, 2020 at 6:28 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > I wanted to get some initial feedback on the idea before writing a patch:\n> > > > does that seem like a reasonable change? Is it actually plausible to\n> > > > distinguish between this state and \"still recovering\" (i.e., when starting\n> > > > up a hot standby but initial recovery hasn't completed so it legitimately\n> > > > can't accept connections yet)? If so, should we include the possibility if\n> > > > hot_standby isn't on, just in case?\n> > >\n> > > Yes, it is feasible to distinguish those cases. And we should, if we're\n> > > going to change things around.\n> >\n> > I'll look into this hopefully soon, but it's helpful to know that it's\n> > possible. Is it basically along the lines of checking to see if the\n> > LSN is past the minimum recovery point?\n>\n> No, I don't think that's the right approach. IIRC the startup process\n> (i.e. the one doing the WAL replay) signals postmaster once consistency\n> has been achieved. 
So you can just use that state.\n\nI've taken that approach in the attached patch (I'd expected to wait\nuntil later to work on this...but it seemed pretty small so I ended up\nhacking on it this evening).\n\nI don't have tests included: I tried intentionally breaking the\nexisting behavior (returning no error when hot_standby=off), but\nrunning make check-world (including tap tests) didn't find any\nbreakages. I can look into that more deeply at some point, but if you\nhappen to know a place we test similar things, then I'd be happy to\nhear it.\n\nOne other question: how is error message translation handled? I\nhaven't added entries to the relevant files, but also I'm obviously\nnot qualified to write them.\n\nThanks,\nJames", "msg_date": "Mon, 9 Mar 2020 22:05:16 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nI applied the patch to the latest master branch and run a test below. The error messages have been separated. 
Below are the test steps.\r\n\r\n### setup primary server\r\ninitdb -D /tmp/primary/data\r\nmkdir /tmp/archive_dir\r\necho \"archive_mode='on'\" >> /tmp/primary/data/postgresql.conf\r\necho \"archive_command='cp %p /tmp/archive_dir/%f'\" >> /tmp/primary/data/postgresql.conf\r\npg_ctl -D /tmp/primary/data -l /tmp/primary-logs start\r\n\r\n### setup hot standby server\r\npg_basebackup -p 5432 -w -R -D /tmp/hotstandby\r\necho \"primary_conninfo='host=127.0.0.1 port=5432 user=pgdev'\" >> /tmp/hotstandby/postgresql.conf\r\necho \"restore_command='cp /tmp/archive_dir/%f %p'\" >> /tmp/hotstandby/postgresql.conf\r\necho \"hot_standby = off\" >> /tmp/hotstandby/postgresql.conf\r\npg_ctl -D /tmp/hotstandby -l /tmp/hotstandby-logs -o \"-p 5433\" start\r\n\r\n### keep trying to connect to hot standby server in order to get the error messages in different stages.\r\nwhile true; do echo \"`date`\"; psql postgres -p 5433 -c \"SELECT txid_current_snapshot();\"; sleep 0.2; done\r\n\r\n### before the patch\r\npsql: error: could not connect to server: FATAL: the database system is starting up\r\n...\r\n\r\n### after the patch, got different messages, one message indicates hot_standby is off\r\npsql: error: could not connect to server: FATAL: the database system is starting up\r\n...\r\npsql: error: could not connect to server: FATAL: the database system is up, but hot_standby is off\r\n...", "msg_date": "Thu, 02 Apr 2020 21:52:35 +0000", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Thu, Apr 2, 2020 at 5:53 PM David Zhang <david.zhang@highgo.ca> wrote:\n>\n> The following review has been posted through the commitfest application:\n> make installcheck-world: not tested\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> I applied the patch to the latest master branch and run a test below. 
The error messages have been separated. Below is the test steps.\n>\n> ### setup primary server\n> initdb -D /tmp/primary/data\n> mkdir /tmp/archive_dir\n> echo \"archive_mode='on'\" >> /tmp/primary/data/postgresql.conf\n> echo \"archive_command='cp %p /tmp/archive_dir/%f'\" >> /tmp/primary/data/postgresql.conf\n> pg_ctl -D /tmp/primary/data -l /tmp/primary-logs start\n>\n> ### setup host standby server\n> pg_basebackup -p 5432 -w -R -D /tmp/hotstandby\n> echo \"primary_conninfo='host=127.0.0.1 port=5432 user=pgdev'\" >> /tmp/hotstandby/postgresql.conf\n> echo \"restore_command='cp /tmp/archive_dir/%f %p'\" >> /tmp/hotstandby/postgresql.conf\n> echo \"hot_standby = off\" >> /tmp/hotstandby/postgresql.conf\n> pg_ctl -D /tmp/hotstandby -l /tmp/hotstandby-logs -o \"-p 5433\" start\n>\n> ### keep trying to connect to hot standby server in order to get the error messages in different stages.\n> while true; do echo \"`date`\"; psql postgres -p 5433 -c \"SELECT txid_current_snapshot();\" sleep 0.2; done\n>\n> ### before the patch\n> psql: error: could not connect to server: FATAL: the database system is starting up\n> ...\n>\n> ### after the patch, got different messages, one message indicates hot_standby is off\n> psql: error: could not connect to server: FATAL: the database system is starting up\n> ...\n> psql: error: could not connect to server: FATAL: the database system is up, but hot_standby is off\n> ...\n\nThanks for the review and testing!\n\nJames\n\n\n", "msg_date": "Fri, 3 Apr 2020 09:49:44 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "\n\nOn 2020/04/03 22:49, James Coleman wrote:\n> On Thu, Apr 2, 2020 at 5:53 PM David Zhang <david.zhang@highgo.ca> wrote:\n>>\n>> The following review has been posted through the commitfest application:\n>> make installcheck-world: not tested\n>> Implements feature: tested, passed\n>> Spec 
compliant: not tested\n>> Documentation: not tested\n>>\n>> I applied the patch to the latest master branch and run a test below. The error messages have been separated. Below is the test steps.\n>>\n>> ### setup primary server\n>> initdb -D /tmp/primary/data\n>> mkdir /tmp/archive_dir\n>> echo \"archive_mode='on'\" >> /tmp/primary/data/postgresql.conf\n>> echo \"archive_command='cp %p /tmp/archive_dir/%f'\" >> /tmp/primary/data/postgresql.conf\n>> pg_ctl -D /tmp/primary/data -l /tmp/primary-logs start\n>>\n>> ### setup host standby server\n>> pg_basebackup -p 5432 -w -R -D /tmp/hotstandby\n>> echo \"primary_conninfo='host=127.0.0.1 port=5432 user=pgdev'\" >> /tmp/hotstandby/postgresql.conf\n>> echo \"restore_command='cp /tmp/archive_dir/%f %p'\" >> /tmp/hotstandby/postgresql.conf\n>> echo \"hot_standby = off\" >> /tmp/hotstandby/postgresql.conf\n>> pg_ctl -D /tmp/hotstandby -l /tmp/hotstandby-logs -o \"-p 5433\" start\n>>\n>> ### keep trying to connect to hot standby server in order to get the error messages in different stages.\n>> while true; do echo \"`date`\"; psql postgres -p 5433 -c \"SELECT txid_current_snapshot();\" sleep 0.2; done\n>>\n>> ### before the patch\n>> psql: error: could not connect to server: FATAL: the database system is starting up\n>> ...\n>>\n>> ### after the patch, got different messages, one message indicates hot_standby is off\n>> psql: error: could not connect to server: FATAL: the database system is starting up\n>> ...\n>> psql: error: could not connect to server: FATAL: the database system is up, but hot_standby is off\n>> ...\n> \n> Thanks for the review and testing!\n\nThanks for the patch! Here is the comment from me.\n\n+\t\telse if (!FatalError && pmState == PM_RECOVERY)\n+\t\t\treturn CAC_STANDBY; /* connection disallowed on non-hot standby */\n\nEven if hot_standby is enabled, pmState seems to indicate PM_RECOVERY\nuntil recovery has reached a consistent state. No? 
So, if my understanding\nis right, \"FATAL: the database system is up, but hot_standby is off\" can\nbe logged even when hot_standby is on. Which sounds very confusing.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 30 Jul 2020 00:24:26 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Wed, Jul 29, 2020 at 11:24 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/04/03 22:49, James Coleman wrote:\n> > On Thu, Apr 2, 2020 at 5:53 PM David Zhang <david.zhang@highgo.ca> wrote:\n> >>\n> >> The following review has been posted through the commitfest application:\n> >> make installcheck-world: not tested\n> >> Implements feature: tested, passed\n> >> Spec compliant: not tested\n> >> Documentation: not tested\n> >>\n> >> I applied the patch to the latest master branch and run a test below. The error messages have been separated. 
Below is the test steps.\n> >>\n> >> ### setup primary server\n> >> initdb -D /tmp/primary/data\n> >> mkdir /tmp/archive_dir\n> >> echo \"archive_mode='on'\" >> /tmp/primary/data/postgresql.conf\n> >> echo \"archive_command='cp %p /tmp/archive_dir/%f'\" >> /tmp/primary/data/postgresql.conf\n> >> pg_ctl -D /tmp/primary/data -l /tmp/primary-logs start\n> >>\n> >> ### setup host standby server\n> >> pg_basebackup -p 5432 -w -R -D /tmp/hotstandby\n> >> echo \"primary_conninfo='host=127.0.0.1 port=5432 user=pgdev'\" >> /tmp/hotstandby/postgresql.conf\n> >> echo \"restore_command='cp /tmp/archive_dir/%f %p'\" >> /tmp/hotstandby/postgresql.conf\n> >> echo \"hot_standby = off\" >> /tmp/hotstandby/postgresql.conf\n> >> pg_ctl -D /tmp/hotstandby -l /tmp/hotstandby-logs -o \"-p 5433\" start\n> >>\n> >> ### keep trying to connect to hot standby server in order to get the error messages in different stages.\n> >> while true; do echo \"`date`\"; psql postgres -p 5433 -c \"SELECT txid_current_snapshot();\" sleep 0.2; done\n> >>\n> >> ### before the patch\n> >> psql: error: could not connect to server: FATAL: the database system is starting up\n> >> ...\n> >>\n> >> ### after the patch, got different messages, one message indicates hot_standby is off\n> >> psql: error: could not connect to server: FATAL: the database system is starting up\n> >> ...\n> >> psql: error: could not connect to server: FATAL: the database system is up, but hot_standby is off\n> >> ...\n> >\n> > Thanks for the review and testing!\n>\n> Thanks for the patch! Here is the comment from me.\n>\n> + else if (!FatalError && pmState == PM_RECOVERY)\n> + return CAC_STANDBY; /* connection disallowed on non-hot standby */\n>\n> Even if hot_standby is enabled, pmState seems to indicate PM_RECOVERY\n> until recovery has reached a consistent state. No? So, if my understanding\n> is right, \"FATAL: the database system is up, but hot_standby is off\" can\n> be logged even when hot_standby is on. 
Which sounds very confusing.\n\nThat's a good point. I've attached a corrected version.\n\nI still don't have a good idea for how to add a test for this change.\nIf a test for this is warranted, I'd be interested in any ideas.\n\nJames", "msg_date": "Fri, 31 Jul 2020 16:18:54 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "\n\nOn 2020/08/01 5:18, James Coleman wrote:\n> On Wed, Jul 29, 2020 at 11:24 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/04/03 22:49, James Coleman wrote:\n>>> On Thu, Apr 2, 2020 at 5:53 PM David Zhang <david.zhang@highgo.ca> wrote:\n>>>>\n>>>> The following review has been posted through the commitfest application:\n>>>> make installcheck-world: not tested\n>>>> Implements feature: tested, passed\n>>>> Spec compliant: not tested\n>>>> Documentation: not tested\n>>>>\n>>>> I applied the patch to the latest master branch and run a test below. The error messages have been separated. 
Below is the test steps.\n>>>>\n>>>> ### setup primary server\n>>>> initdb -D /tmp/primary/data\n>>>> mkdir /tmp/archive_dir\n>>>> echo \"archive_mode='on'\" >> /tmp/primary/data/postgresql.conf\n>>>> echo \"archive_command='cp %p /tmp/archive_dir/%f'\" >> /tmp/primary/data/postgresql.conf\n>>>> pg_ctl -D /tmp/primary/data -l /tmp/primary-logs start\n>>>>\n>>>> ### setup host standby server\n>>>> pg_basebackup -p 5432 -w -R -D /tmp/hotstandby\n>>>> echo \"primary_conninfo='host=127.0.0.1 port=5432 user=pgdev'\" >> /tmp/hotstandby/postgresql.conf\n>>>> echo \"restore_command='cp /tmp/archive_dir/%f %p'\" >> /tmp/hotstandby/postgresql.conf\n>>>> echo \"hot_standby = off\" >> /tmp/hotstandby/postgresql.conf\n>>>> pg_ctl -D /tmp/hotstandby -l /tmp/hotstandby-logs -o \"-p 5433\" start\n>>>>\n>>>> ### keep trying to connect to hot standby server in order to get the error messages in different stages.\n>>>> while true; do echo \"`date`\"; psql postgres -p 5433 -c \"SELECT txid_current_snapshot();\" sleep 0.2; done\n>>>>\n>>>> ### before the patch\n>>>> psql: error: could not connect to server: FATAL: the database system is starting up\n>>>> ...\n>>>>\n>>>> ### after the patch, got different messages, one message indicates hot_standby is off\n>>>> psql: error: could not connect to server: FATAL: the database system is starting up\n>>>> ...\n>>>> psql: error: could not connect to server: FATAL: the database system is up, but hot_standby is off\n>>>> ...\n>>>\n>>> Thanks for the review and testing!\n>>\n>> Thanks for the patch! Here is the comment from me.\n>>\n>> + else if (!FatalError && pmState == PM_RECOVERY)\n>> + return CAC_STANDBY; /* connection disallowed on non-hot standby */\n>>\n>> Even if hot_standby is enabled, pmState seems to indicate PM_RECOVERY\n>> until recovery has reached a consistent state. No? So, if my understanding\n>> is right, \"FATAL: the database system is up, but hot_standby is off\" can\n>> be logged even when hot_standby is on. 
Which sounds very confusing.\n> \n> That's a good point. I've attached a corrected version.\n\nThanks for updating the patch! But it failed to be applied to the master branch\ncleanly because of the recent commit 0038f94387. So could you update the patch\nagain?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 19 Aug 2020 01:25:48 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Tue, Aug 18, 2020 at 12:25 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> Thanks for updating the patch! But it failed to be applied to the master branch\n> cleanly because of the recent commit 0038f94387. So could you update the patch\n> again?\n\nUpdated patch attached.\n\nJames", "msg_date": "Tue, 8 Sep 2020 13:17:48 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "Hi Fujii,\n\nOn 9/8/20 1:17 PM, James Coleman wrote:\n> On Tue, Aug 18, 2020 at 12:25 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> Thanks for updating the patch! But it failed to be applied to the master branch\n>> cleanly because of the recent commit 0038f94387. 
So could you update the patch\n>> again?\n> \n> Updated patch attached.\n\nAny thoughts on the updated patch?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 5 Mar 2021 08:45:17 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "\n\nOn 2021/03/05 22:45, David Steele wrote:\n> Hi Fujii,\n> \n> On 9/8/20 1:17 PM, James Coleman wrote:\n>> On Tue, Aug 18, 2020 at 12:25 PM Fujii Masao\n>> <masao.fujii@oss.nttdata.com> wrote:\n>>> Thanks for updating the patch! But it failed to be applied to the master branch\n>>> cleanly because of the recent commit 0038f94387. So could you update the patch\n>>> again?\n>>\n>> Updated patch attached.\n> \n> Any thoughts on the updated patch?\n\nThanks for the ping!\n\nWith the patch, if hot_standby is enabled, the message\n\"the database system is starting up\" is output while the server is\nin PM_RECOVERY state until it reaches the consistent recovery point.\nOn the other hand, if hot_standby is not enabled, the message\n\"the database system is up, but hot_standby is off\" is output even\nwhile the server is in that same situation. That is, opposite\nmessages can be output for the same situation based on the setting\nof hot_standby. One message is \"system is starting up\", the other\nis \"system is up\". 
Isn't this rather confusing?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 6 Mar 2021 02:36:55 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Fri, Mar 5, 2021 at 12:36 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/03/05 22:45, David Steele wrote:\n> > Hi Fujii,\n> >\n> > On 9/8/20 1:17 PM, James Coleman wrote:\n> >> On Tue, Aug 18, 2020 at 12:25 PM Fujii Masao\n> >> <masao.fujii@oss.nttdata.com> wrote:\n> >>> Thanks for updating the patch! But it failed to be applied to the master branch\n> >>> cleanly because of the recent commit 0038f94387. So could you update the patch\n> >>> again?\n> >>\n> >> Updated patch attached.\n> >\n> > Any thoughts on the updated patch?\n>\n> Thanks for the ping!\n>\n> With the patch, if hot_standby is enabled, the message\n> \"the database system is starting up\" is output while the server is\n> in PM_RECOVERY state until it reaches the consistent recovery point.\n> On the other hand, if hot_standby is not enabled, the message\n> \"the database system is up, but hot_standby is off\" is output even\n> while the server is in that same situation. That is, opposite\n> messages can be output for the same situation based on the setting\n> of hot_standby. One message is \"system is starting up\", the other\n> is \"system is up\". Isn't this rather confusing?\n\nDo you have any thoughts on what you'd like to see the message be? 
I\ncould change the PM_RECOVERY (without hot standby enabled) to return\nCAC_RECOVERY which would give us the message \"the database system is\nin recovery mode\", but that would be a change from what that state\nreturns now in a way that's unrelated to the goal of the patch.\n\nThanks,\nJames\n\n\n", "msg_date": "Fri, 5 Mar 2021 15:04:50 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On 2021-Mar-05, James Coleman wrote:\n\n> Do you have any thoughts on what you'd like to see the message be? I\n> could change the PM_RECOVERY (without hot standby enabled) to return\n> CAC_RECOVERY which would give us the message \"the database system is\n> in recovery mode\", but that would be a change from what that state\n> returns now in a way that's unrelated to the goal of the patch.\n\nHere's an idea:\n\n* hot_standby=on, before reaching consistent state\n FATAL: database is not accepting connections\n DETAIL: Consistent state has not yet been reached.\n\n* hot_standby=off, past consistent state\n FATAL: database is not accepting connections\n DETAIL: Hot standby mode is disabled.\n\n* hot_standby=off, before reaching consistent state\n FATAL: database is not accepting connections\n DETAIL: Hot standby mode is disabled.\n or maybe\n DETAIL: Consistent state has not yet been reached, and hot standby mode is disabled.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"Ed is the standard text editor.\"\n http://groups.google.com/group/alt.religion.emacs/msg/8d94ddab6a9b0ad3\n\n\n", "msg_date": "Fri, 5 Mar 2021 17:37:49 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "\n\nOn 2021/03/06 5:37, Alvaro Herrera wrote:\n> On 2021-Mar-05, James Coleman wrote:\n> \n>> Do you have any thoughts on what you'd like to see the 
message be? I\n>> could change the PM_RECOVERY (without hot standby enabled) to return\n>> CAC_RECOVERY which would give us the message \"the database system is\n>> in recovery mode\", but that would be a change from what that state\n>> returns now in a way that's unrelated to the goal of the patch.\n> \n> Here's an idea:\n> \n> * hot_standby=on, before reaching consistent state\n> FATAL: database is not accepting connections\n> DETAIL: Consistent state has not yet been reached.\n> \n> * hot_standby=off, past consistent state\n> FATAL: database is not accepting connections\n> DETAIL: Hot standby mode is disabled.\n> \n> * hot_standby=off, before reaching consistent state\n> FATAL: database is not accepting connections\n\nThis idea looks good to me!\n\n\n> DETAIL: Hot standby mode is disabled.\n> or maybe\n> DETAIL: Consistent state has not yet been reached, and hot standby mode is disabled.\n\nI prefer the former message. Because the latter message means that\nwe need to output the different messages based on whether the consistent\nstate is reached or not, and the following would be necessary to implement\nthat. 
This looks a bit overkill to me against the purpose, at least for me.\n\n- The startup process needs to send a new signal\n (e.g., PMSIGNAL_RECOVERY_CONSISTENT) to postmaster when the consistent\n state has been reached, to let postmaster know that state.\n\n- When receiving that signal, postmaster needs to move its state to a new state\n (e.g., PM_CONSISTENT).\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sun, 7 Mar 2021 23:39:10 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Sun, Mar 7, 2021 at 3:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/03/06 5:37, Alvaro Herrera wrote:\n> > On 2021-Mar-05, James Coleman wrote:\n> >\n> >> Do you have any thoughts on what you'd like to see the message be? I\n> >> could change the PM_RECOVERY (without hot standby enabled) to return\n> >> CAC_RECOVERY which would give us the message \"the database system is\n> >> in recovery mode\", but that would be a change from what that state\n> >> returns now in a way that's unrelated to the goal of the patch.\n> >\n> > Here's an idea:\n> >\n> > * hot_standby=on, before reaching consistent state\n> > FATAL: database is not accepting connections\n> > DETAIL: Consistent state has not yet been reached.\n> >\n> > * hot_standby=off, past consistent state\n> > FATAL: database is not accepting connections\n> > DETAIL: Hot standby mode is disabled.\n> >\n> > * hot_standby=off, before reaching consistent state\n> > FATAL: database is not accepting connections\n>\n> This idea looks good to me!\n\n+1.\n\n\n> > DETAIL: Hot standby mode is disabled.\n> > or maybe\n> > DETAIL: Consistent state has not yet been reached, and hot standby mode is disabled.\n>\n> I prefer the former message. 
Because the latter message means that\n> we need to output the different messages based on whether the consistent\n> state is reached or not, and the following would be necessary to implement\n> that. This looks a bit like overkill for the purpose, at least to me.\n\nAgreed. If hot standby is off, why would the admin care about whether\nit's consistent yet or not?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sun, 7 Mar 2021 18:11:09 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On 2021-Mar-07, Magnus Hagander wrote:\n\n> On Sun, Mar 7, 2021 at 3:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> > > Here's an idea:\n> > >\n> > > * hot_standby=on, before reaching consistent state\n> > > FATAL: database is not accepting connections\n> > > DETAIL: Consistent state has not yet been reached.\n> > >\n> > > * hot_standby=off, past consistent state\n> > > FATAL: database is not accepting connections\n> > > DETAIL: Hot standby mode is disabled.\n> > >\n> > > * hot_standby=off, before reaching consistent state\n> > > FATAL: database is not accepting connections\n[...]\n> > > DETAIL: Hot standby mode is disabled.\n\n> > I prefer the former message. Because the latter message means that\n> > we need to output the different messages based on whether the consistent\n> > state is reached or not, and the following would be necessary to implement\n> > that. This looks a bit like overkill for the purpose, at least to me.\n> \n> Agreed. If hot standby is off, why would the admin care about whether\n> it's consistent yet or not?\n\nGreat, so we're agreed on the messages to emit. 
James, are you updating\nyour patch, considering Fujii's note about the new signal and pmstate\nthat need to be added?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Tue, 9 Mar 2021 10:47:34 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Tue, Mar 9, 2021 at 8:47 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Mar-07, Magnus Hagander wrote:\n>\n> > On Sun, Mar 7, 2021 at 3:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> > > > Here's an idea:\n> > > >\n> > > > * hot_standby=on, before reaching consistent state\n> > > > FATAL: database is not accepting connections\n> > > > DETAIL: Consistent state has not yet been reached.\n> > > >\n> > > > * hot_standby=off, past consistent state\n> > > > FATAL: database is not accepting connections\n> > > > DETAIL: Hot standby mode is disabled.\n> > > >\n> > > > * hot_standby=off, before reaching consistent state\n> > > > FATAL: database is not accepting connections\n> [...]\n> > > > DETAIL: Hot standby mode is disabled.\n>\n> > > I prefer the former message. Because the latter message means that\n> > > we need to output the different messages based on whether the consistent\n> > > state is reached or not, and the following would be necessary to implement\n> > > that. This looks a bit like overkill for the purpose, at least to me.\n> >\n> > Agreed. If hot standby is off, why would the admin care about whether\n> > it's consistent yet or not?\n>\n> Great, so we're agreed on the messages to emit. 
James, are you updating\n> your patch, considering Fujii's note about the new signal and pmstate\n> that need to be added?\n\nPerhaps I'm missing something, but I was under the impression the\n\"prefer the former message\" meant we were not adding a new signal and\npmstate?\n\nJames\n\n\n", "msg_date": "Tue, 9 Mar 2021 09:03:16 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On 2021-Mar-09, James Coleman wrote:\n\n> On Tue, Mar 9, 2021 at 8:47 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Mar-07, Magnus Hagander wrote:\n> >\n> > > On Sun, Mar 7, 2021 at 3:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> > Great, so we're agreed on the messages to emit. James, are you updating\n> > your patch, considering Fujii's note about the new signal and pmstate\n> > that need to be added?\n> \n> Perhaps I'm missing something, but I was under the impression the\n> \"prefer the former message\" meant we were not adding a new signal and\n> pmstate?\n\nEh, I read that differently. I was proposing two options for the DETAIL\nline in that case:\n\n> DETAIL: Hot standby mode is disabled.\n> or maybe\n> DETAIL: Consistent state has not yet been reached, and hot standby mode is disabled.\n\nand both Fujii and Magnus said they prefer the first option over the\nsecond option. 
I don't read any of them as saying that they would like\nto do something else (including not doing anything).\n\nMaybe I misinterpreted them?\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Tue, 9 Mar 2021 11:07:35 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Tue, Mar 9, 2021 at 9:07 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Mar-09, James Coleman wrote:\n>\n> > On Tue, Mar 9, 2021 at 8:47 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > On 2021-Mar-07, Magnus Hagander wrote:\n> > >\n> > > > On Sun, Mar 7, 2021 at 3:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > >\n> > > Great, so we're agreed on the messages to emit. James, are you updating\n> > > your patch, considering Fujii's note about the new signal and pmstate\n> > > that need to be added?\n> >\n> > Perhaps I'm missing something, but I was under the impression the\n> > \"prefer the former message\" meant we were not adding a new signal and\n> > pmstate?\n>\n> Eh, I read that differently. I was proposing two options for the DETAIL\n> line in that case:\n>\n> > DETAIL: Hot standby mode is disabled.\n> > or maybe\n> > DETAIL: Consistent state has not yet been reached, and hot standby mode is disabled.\n>\n> and both Fujii and Magnus said they prefer the first option over the\n> second option. 
I don't read any of them as saying that they would like\n> to do something else (including not doing anything).\n>\n> Maybe I misinterpreted them?\n\nYes, I think they both agreed on the \"DETAIL: Hot standby mode is\ndisabled.\" message, but that alternative meant not needing to add any\nnew signals and pm states, correct?\n\nThanks,\nJames\n\n\n", "msg_date": "Tue, 9 Mar 2021 09:11:53 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Tue, Mar 9, 2021 at 3:07 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Mar-09, James Coleman wrote:\n>\n> > On Tue, Mar 9, 2021 at 8:47 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > On 2021-Mar-07, Magnus Hagander wrote:\n> > >\n> > > > On Sun, Mar 7, 2021 at 3:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > >\n> > > Great, so we're agreed on the messages to emit. James, are you updating\n> > > your patch, considering Fujii's note about the new signal and pmstate\n> > > that need to be added?\n> >\n> > Perhaps I'm missing something, but I was under the impression the\n> > \"prefer the former message\" meant we were not adding a new signal and\n> > pmstate?\n>\n> Eh, I read that differently. I was proposing two options for the DETAIL\n> line in that case:\n>\n> > DETAIL: Hot standby mode is disabled.\n> > or maybe\n> > DETAIL: Consistent state has not yet been reached, and hot standby mode is disabled.\n>\n> and both Fujii and Magnus said they prefer the first option over the\n> second option. I don't read any of them as saying that they would like\n> to do something else (including not doing anything).\n>\n> Maybe I misinterpreted them?\n\nThat is indeed what I meant as well.\n\nThe reference to \"the former\" as being the \"first of the two new\noptions\", not the \"old option\". 
That is, \"DETAIL: Hot standby mode is\ndisabled.\".\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 9 Mar 2021 15:12:09 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On 2021-Mar-09, James Coleman wrote:\n\n> Yes, I think they both agreed on the \"DETAIL: Hot standby mode is\n> disabled.\" message, but that alternative meant not needing to add any\n> new signals and pm states, correct?\n\nAh, I see! I was thinking that you still needed the state and signal in\norder to print the correct message in hot-standby mode, but that's\n(obviously!) wrong. So you're right that no signal/state are needed.\n\nThanks\n\n-- \nÁlvaro Herrera Valdivia, Chile\nSi no sabes adonde vas, es muy probable que acabes en otra parte.\n\n\n", "msg_date": "Tue, 9 Mar 2021 11:17:22 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Tue, Mar 9, 2021 at 9:17 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Mar-09, James Coleman wrote:\n>\n> > Yes, I think they both agreed on the \"DETAIL: Hot standby mode is\n> > disabled.\" message, but that alternative meant not needing to add any\n> > new signals and pm states, correct?\n>\n> Ah, I see! I was thinking that you still needed the state and signal in\n> order to print the correct message in hot-standby mode, but that's\n> (obviously!) wrong. So you're right that no signal/state are needed.\n\nCool. 
And yes, I'm planning to update the patch soon.\n\nThanks,\nJames\n\n\n", "msg_date": "Tue, 9 Mar 2021 09:19:37 -0500", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "\n\nOn 2021/03/09 23:19, James Coleman wrote:\n> On Tue, Mar 9, 2021 at 9:17 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>\n>> On 2021-Mar-09, James Coleman wrote:\n>>\n>>> Yes, I think they both agreed on the \"DETAIL: Hot standby mode is\n>>> disabled.\" message, but that alternative meant not needing to add any\n>>> new signals and pm states, correct?\n>>\n>> Ah, I see! I was thinking that you still needed the state and signal in\n>> order to print the correct message in hot-standby mode, but that's\n>> (obviously!) wrong. So you're right that no signal/state are needed.\n> \n> Cool. And yes, I'm planning to update the patch soon.\n\n+1. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 9 Mar 2021 23:27:11 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Tue, Mar 9, 2021 at 9:27 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/03/09 23:19, James Coleman wrote:\n> > On Tue, Mar 9, 2021 at 9:17 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >>\n> >> On 2021-Mar-09, James Coleman wrote:\n> >>\n> >>> Yes, I think they both agreed on the \"DETAIL: Hot standby mode is\n> >>> disabled.\" message, but that alternative meant not needing to add any\n> >>> new signals and pm states, correct?\n> >>\n> >> Ah, I see! I was thinking that you still needed the state and signal in\n> >> order to print the correct message in hot-standby mode, but that's\n> >> (obviously!) wrong. 
So you're right that no signal/state are needed.\n> >\n> > Cool. And yes, I'm planning to update the patch soon.\n>\n> +1. Thanks!\n\nHere's an updated patch; I think I've gotten what Alvaro suggested.\n\nThanks,\nJames", "msg_date": "Fri, 19 Mar 2021 10:35:42 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On 2021/03/19 23:35, James Coleman wrote:\n> Here's an updated patch; I think I've gotten what Alvaro suggested.\n\nThanks for updating the patch! But I was thinking that our consensus is\nsomething like the attached patch. Could you check this patch?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 23 Mar 2021 02:49:10 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Mon, Mar 22, 2021 at 1:49 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/03/19 23:35, James Coleman wrote:\n> > Here's an updated patch; I think I've gotten what Alvaro suggested.\n>\n> Thanks for updating the patch! But I was thinking that our consensus is\n> something like the attached patch. Could you check this patch?\n\nAs far as I can tell (I might be missing something) your v5 patch does\nthe same thing, albeit with different code organization. It did catch\nthough that I'd neglected to add the DETAIL line as separate from the\nerrmsg line.\n\nIs the attached (in the style of my v4, since I'm not following why we\nneed to move the standby determination logic into a new\nCAC_NOCONSISTENT block) what you're thinking? 
Or is there something\nelse I'm missing?\n\nThanks,\nJames", "msg_date": "Mon, 22 Mar 2021 14:25:40 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "\n\nOn 2021/03/23 3:25, James Coleman wrote:\n> On Mon, Mar 22, 2021 at 1:49 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2021/03/19 23:35, James Coleman wrote:\n>>> Here's an updated patch; I think I've gotten what Alvaro suggested.\n>>\n>> Thanks for updating the patch! But I was thinking that our consensus is\n>> something like the attached patch. Could you check this patch?\n> \n> As far as I can tell (I might be missing something) your v5 patch does\n> the same thing, albeit with different code organization. It did catch\n> though that I'd neglected to add the DETAIL line as separate from the\n> errmsg line.\n> \n> Is the attached (in the style of my v4, since I'm not following why we\n> need to move the standby determination logic into a new\n> CAC_NOCONSISTENT block) what you're thinking? Or is there something\n> else I'm missing?\n\nI just did that to avoid adding more CAC_state. But basically it's\nok to check hot standby at either canAcceptConnections() or\nProcessStartupPacket().\n\n \t\tcase CAC_STARTUP:\n \t\t\tereport(FATAL,\n \t\t\t\t\t(errcode(ERRCODE_CANNOT_CONNECT_NOW),\n-\t\t\t\t\t errmsg(\"the database system is starting up\")));\n+\t\t\t\t\t errmsg(\"the database system is not accepting connections\"),\n+\t\t\t\t\t errdetail(\"Consistent recovery state has not been yet reached.\")));\n\nDo you want to report this message even in crash recovery? 
Since crash\nrecovery is basically not so much related to \"consistent recovery state\",\nat least for me the original message seems more suitable for crash recovery.\n\nAlso if we adopt this message, the server with hot_standby=off reports\n\"Consistent recovery state has not been yet reached.\" in PM_STARTUP,\nbut stops reporting this message at PM_RECOVERY even if the consistent\nrecovery state has not been reached yet. Instead, it reports \"Hot standby\nmode is disabled.\" at PM_RECOVERY. Isn't this transition of message confusing?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 23 Mar 2021 03:52:21 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Mon, Mar 22, 2021 at 2:52 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/03/23 3:25, James Coleman wrote:\n> > On Mon, Mar 22, 2021 at 1:49 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2021/03/19 23:35, James Coleman wrote:\n> >>> Here's an updated patch; I think I've gotten what Alvaro suggested.\n> >>\n> >> Thanks for updating the patch! But I was thinking that our consensus is\n> >> something like the attached patch. Could you check this patch?\n> >\n> > As far as I can tell (I might be missing something) your v5 patch does\n> > the same thing, albeit with different code organization. It did catch\n> > though that I'd neglected to add the DETAIL line as separate from the\n> > errmsg line.\n> >\n> > Is the attached (in the style of my v4, since I'm not following why we\n> > need to move the standby determination logic into a new\n> > CAC_NOCONSISTENT block) what you're thinking? Or is there something\n> > else I'm missing?\n>\n> I just did that to avoid adding more CAC_state. 
But basically it's\n> ok to check hot standby at either canAcceptConnections() or\n> ProcessStartupPacket().\n>\n> case CAC_STARTUP:\n> ereport(FATAL,\n> (errcode(ERRCODE_CANNOT_CONNECT_NOW),\n> - errmsg(\"the database system is starting up\")));\n> + errmsg(\"the database system is not accepting connections\"),\n> + errdetail(\"Consistent recovery state has not been yet reached.\")));\n>\n> Do you want to report this message even in crash recovery? Since crash\n> recovery is basically not so much related to \"consistent recovery state\",\n> at least for me the original message seems more suitable for crash recovery.\n>\n> Also if we adopt this message, the server with hot_standby=off reports\n> \"Consistent recovery state has not been yet reached.\" in PM_STARTUP,\n> but stops reporting this message at PM_RECOVERY even if the consistent\n> recovery state has not been reached yet. Instead, it reports \"Hot standby\n> mode is disabled.\" at PM_RECOVERY. Isn't this transition of message confusing?\n\nAre you saying we should only change the message for a single case:\nthe case where we'd otherwise allow connections but EnableHotStandby\nis false? I believe that's what the original patch did, but then\nAlvaro's proposal included changing additional messages.\n\nJames Coleman\n\n\n", "msg_date": "Mon, 22 Mar 2021 14:59:40 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "\n\nOn 2021/03/23 3:59, James Coleman wrote:\n> Are you saying we should only change the message for a single case:\n> the case where we'd otherwise allow connections but EnableHotStandby\n> is false?\n\nNo. Let me clarify my opinion.\n\nAt PM_STARTUP, \"the database system is starting up\" should be logged\nwhatever the setting of hot_standby is. This is the same as the original\nbehavior. During crash recovery, this message is output. 
Also at archive\nrecovery or standby server, until the startup process sends\nPMSIGNAL_RECOVERY_STARTED, this message is logged.\n\nAt PM_RECOVERY, originally \"the database system is starting up\" was logged\nwhatever the setting of hot_standby is. My opinion is the same as our\nconsensus, i.e., \"the database system is not accepting connections\" and\n\"Hot standby mode is disabled.\" are logged if hot_standby is disabled.\n\"the database system is not accepting connections\" and \"Consistent\n recovery state has not been yet reached.\" are logged if hot_standby is\n enabled.\n\nAfter the consistent recovery state is reached, if hot_standby is disabled,\nthe postmaster state is still PM_RECOVERY. So \"Hot standby mode is disabled.\"\nis still logged in this case. This is also different behavior from the original.\nIf hot_standby is enabled, read-only connections can be accepted because\nthe consistent state is reached. So no message needs to be logged.\n\nTherefore for now what we've not reached the consensus is what message\nshould be logged at PM_STARTUP. I'm thinking it's better to log\n\"the database system is starting up\" in that case because of the reasons\nthat I explained upthread.\n\nThought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 23 Mar 2021 14:46:13 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Tue, Mar 23, 2021 at 1:46 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/03/23 3:59, James Coleman wrote:\n> > Are you saying we should only change the message for a single case:\n> > the case where we'd otherwise allow connections but EnableHotStandby\n> > is false?\n>\n> No. 
Let me clarify my opinion.\n>\n> At PM_STARTUP, \"the database system is starting up\" should be logged\n> whatever the setting of hot_standby is. This is the same as the original\n> behavior. During crash recovery, this message is output. Also at archive\n> recovery or standby server, until the startup process sends\n> PMSIGNAL_RECOVERY_STARTED, this message is logged.\n>\n> At PM_RECOVERY, originally \"the database system is starting up\" was logged\n> whatever the setting of hot_standby is. My opinion is the same as our\n> consensus, i.e., \"the database system is not accepting connections\" and\n> \"Hot standby mode is disabled.\" are logged if hot_standby is disabled.\n> \"the database system is not accepting connections\" and \"Consistent\n> recovery state has not been yet reached.\" are logged if hot_standby is\n> enabled.\n>\n> After the consistent recovery state is reached, if hot_standby is disabled,\n> the postmaster state is still PM_RECOVERY. So \"Hot standby mode is disabled.\"\n> is still logged in this case. This is also different behavior from the original.\n> If hot_standby is enabled, read-only connections can be accepted because\n> the consistent state is reached. So no message needs to be logged.\n>\n> Therefore for now what we've not reached the consensus is what message\n> should be logged at PM_STARTUP. 
I'm thinking it's better to log\n> \"the database system is starting up\" in that case because of the reasons\n> that I explained upthread.\n>\n> Thought?\n\nI understand your point now, and I agree, that makes sense.\n\nThe attached takes a similar approach to your v5, but I've used\nCAC_NOTCONSISTENT instead of CAC_NOCONSISTENT because I think it reads\nbetter (CAC_INCONSISTENT would technically be better English,\nbut...also it doesn't parallel the code and error message).\n\nThoughts?\n\nJames Coleman", "msg_date": "Tue, 23 Mar 2021 09:16:47 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On 2021-Mar-23, James Coleman wrote:\n\n> On Tue, Mar 23, 2021 at 1:46 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> > Therefore for now what we've not reached the consensus is what message\n> > should be logged at PM_STARTUP. I'm thinking it's better to log\n> > \"the database system is starting up\" in that case because of the reasons\n> > that I explained upthread.\n\n> I understand your point now, and I agree, that makes sense.\n\nPlease note that PM_STARTUP mode is very very short-lived. It only\nstarts happening when postmaster launches the startup process, and\nbefore the startup process begins WAL replay (as changed by\nsigusr1_handler in postmaster.c). Once WAL replay begins, the PM status\nchanges to PM_RECOVERY. So I don't think we really care all that much\nwhat message is logged in this case. It changes very quickly into the\nCAC_NOTCONSISTENT message anyway. 
For this state, it seems okay with\neither what James submitted in v7, or what Fujii said.\n\nHowever, for this one\n\n+ case CAC_NOTCONSISTENT:\n+ if (EnableHotStandby)\n+ ereport(FATAL,\n+ (errcode(ERRCODE_CANNOT_CONNECT_NOW),\n+ errmsg(\"the database system is not accepting connections\"),\n+ errdetail(\"Consistent recovery state has not been yet reached.\")));\n\nMaybe it makes sense to say \"... is not accepting connections *yet*\".\nThat'd be a tad redundant with what the DETAIL says, but that seems\nacceptable.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n", "msg_date": "Tue, 23 Mar 2021 13:20:11 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> However, for this one\n\n> + case CAC_NOTCONSISTENT:\n> + if (EnableHotStandby)\n> + ereport(FATAL,\n> + (errcode(ERRCODE_CANNOT_CONNECT_NOW),\n> + errmsg(\"the database system is not accepting connections\"),\n> + errdetail(\"Consistent recovery state has not been yet reached.\")));\n\n> Maybe it makes sense to say \"... is not accepting connections *yet*\".\n\n+1, but I think \"... 
is not yet accepting connections\" is slightly\nbetter style.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Mar 2021 12:34:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Tue, Mar 23, 2021 at 12:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > However, for this one\n>\n> > + case CAC_NOTCONSISTENT:\n> > + if (EnableHotStandby)\n> > + ereport(FATAL,\n> > + (errcode(ERRCODE_CANNOT_CONNECT_NOW),\n> > + errmsg(\"the database system is not accepting connections\"),\n> > + errdetail(\"Consistent recovery state has not been yet reached.\")));\n>\n> > Maybe it makes sense to say \"... is not accepting connections *yet*\".\n>\n> +1, but I think \"... is not yet accepting connections\" is slightly\n> better style.\n\nAll right, see attached v8.\n\nJames Coleman", "msg_date": "Tue, 23 Mar 2021 12:48:32 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "\n\nOn 2021/03/24 1:20, Alvaro Herrera wrote:\n> On 2021-Mar-23, James Coleman wrote:\n> \n>> On Tue, Mar 23, 2021 at 1:46 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n>>> Therefore for now what we've not reached the consensus is what message\n>>> should be logged at PM_STARTUP. I'm thinking it's better to log\n>>> \"the database system is starting up\" in that case because of the reasons\n>>> that I explained upthread.\n> \n>> I understand your point now, and I agree, that makes sense.\n> \n> Please note that PM_STARTUP mode is very very short-lived. It only\n> starts happening when postmaster launches the startup process, and\n> before the startup process begins WAL replay (as changed by\n> sigusr1_handler in postmaster.c). 
Once WAL replay begins, the PM status\n> changes to PM_RECOVERY.\n\nTrue if archive recovery or standby server. But during crash recovery\npostmaster sits in PM_STARTUP mode. So I guess that we still see\nthe log messages for PM_STARTUP lots of times.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 24 Mar 2021 04:56:36 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On 2021-Mar-24, Fujii Masao wrote:\n\n> On 2021/03/24 1:20, Alvaro Herrera wrote:\n\n> > Please note that PM_STARTUP mode is very very short-lived. It only\n> > starts happening when postmaster launches the startup process, and\n> > before the startup process begins WAL replay (as changed by\n> > sigusr1_handler in postmaster.c). Once WAL replay begins, the PM status\n> > changes to PM_RECOVERY.\n> \n> True if archive recovery or standby server. But during crash recovery\n> postmaster sits in PM_STARTUP mode. So I guess that we still see\n> the log messages for PM_STARTUP lots of times.\n\nHmm ... true, and I had missed that this is what you had already said\nupthread. 
In this case, should we add a DETAIL line for this message?\n\nFATAL: the database system is starting up\nDETAIL: WAL is being applied to recover from a system crash.\nor\nDETAIL: The system is applying WAL to recover from a system crash.\nor\nDETAIL: The startup process is applying WAL to recover from a system crash.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"La conclusión que podemos sacar de esos estudios es que\nno podemos sacar ninguna conclusión de ellos\" (Tanenbaum)\n\n\n", "msg_date": "Tue, 23 Mar 2021 17:17:11 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> FATAL: the database system is starting up\n> DETAIL: WAL is being applied to recover from a system crash.\n> or\n> DETAIL: The system is applying WAL to recover from a system crash.\n> or\n> DETAIL: The startup process is applying WAL to recover from a system crash.\n\nI don't think the postmaster has enough context to know if that's\nactually true. 
It just launches the startup process and waits for\nresults. If somebody saw this during a normal (non-crash) startup,\nthey'd be justifiably alarmed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Mar 2021 16:59:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "\n\nOn 2021/03/24 5:59, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> FATAL: the database system is starting up\n>> DETAIL: WAL is being applied to recover from a system crash.\n>> or\n>> DETAIL: The system is applying WAL to recover from a system crash.\n>> or\n>> DETAIL: The startup process is applying WAL to recover from a system crash.\n> \n> I don't think the postmaster has enough context to know if that's\n> actually true. It just launches the startup process and waits for\n> results. If somebody saw this during a normal (non-crash) startup,\n> they'd be justifiably alarmed.\n\nYes, so logging \"the database system is starting up\" seems enough to me.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 24 Mar 2021 10:46:21 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On 2021-Mar-24, Fujii Masao wrote:\n\n> On 2021/03/24 5:59, Tom Lane wrote:\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > FATAL: the database system is starting up\n> > > DETAIL: WAL is being applied to recover from a system crash.\n> > > or\n> > > DETAIL: The system is applying WAL to recover from a system crash.\n> > > or\n> > > DETAIL: The startup process is applying WAL to recover from a system crash.\n> > \n> > I don't think the postmaster has enough context to know if that's\n> > actually true. It just launches the startup process and waits for\n> > results. If somebody saw this during a normal (non-crash) startup,\n> > they'd be justifiably alarmed.\n> \n> Yes, so logging \"the database system is starting up\" seems enough to me.\n\nNo objection.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Cuando no hay humildad las personas se degradan\" (A. 
Christie)\n\n\n", "msg_date": "Wed, 24 Mar 2021 04:59:01 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On 2021/03/24 16:59, Alvaro Herrera wrote:\n> On 2021-Mar-24, Fujii Masao wrote:\n> \n>> On 2021/03/24 5:59, Tom Lane wrote:\n>>> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>>>> FATAL: the database system is starting up\n>>>> DETAIL: WAL is being applied to recover from a system crash.\n>>>> or\n>>>> DETAIL: The system is applying WAL to recover from a system crash.\n>>>> or\n>>>> DETAIL: The startup process is applying WAL to recover from a system crash.\n>>>\n>>> I don't think the postmaster has enough context to know if that's\n>>> actually true. It just launches the startup process and waits for\n>>> results. If somebody saw this during a normal (non-crash) startup,\n>>> they'd be justifiably alarmed.\n>>\n>> Yes, so logging \"the database system is starting up\" seems enough to me.\n> \n> No objection.\n\nThanks! So I changed the message reported at PM_STARTUP to that one,\nbased on v8 patch that James posted upthread. I also ran pgindent for\nthe patch. 
Attached is the updated version of the patch.\n\nBarring any objection, I will commit this.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 24 Mar 2021 18:55:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Wed, Mar 24, 2021 at 5:55 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/03/24 16:59, Alvaro Herrera wrote:\n> > On 2021-Mar-24, Fujii Masao wrote:\n> >\n> >> On 2021/03/24 5:59, Tom Lane wrote:\n> >>> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> >>>> FATAL: the database system is starting up\n> >>>> DETAIL: WAL is being applied to recover from a system crash.\n> >>>> or\n> >>>> DETAIL: The system is applying WAL to recover from a system crash.\n> >>>> or\n> >>>> DETAIL: The startup process is applying WAL to recover from a system crash.\n> >>>\n> >>> I don't think the postmaster has enough context to know if that's\n> >>> actually true. It just launches the startup process and waits for\n> >>> results. If somebody saw this during a normal (non-crash) startup,\n> >>> they'd be justifiably alarmed.\n> >>\n> >> Yes, so logging \"the database system is starting up\" seems enough to me.\n> >\n> > No objection.\n>\n> Thanks! So I changed the message reported at PM_STARTUP to that one,\n> based on v8 patch that James posted upthread. I also ran pgindent for\n> the patch. Attached is the updated version of the patch.\n>\n> Barring any objection, I will commit this.\n\nThat looks good to me. 
Thanks for working on this.\n\nJames Coleman\n\n\n", "msg_date": "Wed, 24 Mar 2021 09:06:19 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "\n\nOn 2021/03/24 22:06, James Coleman wrote:\n> That looks good to me. Thanks for working on this.\n\nThanks! I pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 25 Mar 2021 10:43:44 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" }, { "msg_contents": "On Wed, Mar 24, 2021 at 9:43 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/03/24 22:06, James Coleman wrote:\n> > That looks good to me. Thanks for working on this.\n>\n> Thanks! I pushed the patch.\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\nThanks for reviewing and committing!\n\nJames\n\n\n", "msg_date": "Fri, 9 Apr 2021 13:52:06 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicer error when connecting to standby with hot_standby=off" } ]
[ { "msg_contents": "Hello.\n\nWhen I created an event trigger for ddl_command_end, I think the only\nmeans to identify for what the trigger function is called is\npg_event_trigger_ddl_commands() so I wrote as the following function\nand defined an event trigger for ddl_command_end.\n\nCREATE OR REPLACE FUNCTION hoge() RETURNS event_trigger AS $$\nDECLARE\n cmd record = pg_event_trigger_ddl_commands();\nBEGIN\n RAISE NOTICE '\"%\" \"%\" \"%\"',\n cmd.command_tag, cmd.object_type, cmd.object_identity;\nEND\n$$ LANGUAGE plpgsql;\n\nCREATE EVENT TRIGGER hoge_trigger ON ddl_command_end EXECUTE FUNCTION hoge();\n\nFinally I got an ERROR while DROP.\n\n=# CREATE TABLE t (a int);\nNOTICE: \"CREATE TABLE\" \"table\" \"public.t\"\nCREATE TABLE\npostgres=# DROP TABLE t;\nERROR: record \"cmd\" is not assigned yet\nDETAIL: The tuple structure of a not-yet-assigned record is indeterminate.\nCONTEXT: PL/pgSQL function hoge() line 5 at RAISE\n\nThe function doesn't return a record for DROP statements.\n\nThe documentation is written as the follows:\n\nhttps://postgresql.org/docs/current/event-trigger-definition.html\n> The ddl_command_end event occurs just after the execution of this same\n> set of commands. To obtain more details on the DDL operations that\n> took place, use the set-returning function\n> pg_event_trigger_ddl_commands() from the ddl_command_end event trigger\n> code (see Section 9.28). Note that the trigger fires after the actions\n> have taken place (but before the transaction commits), and thus the\n> system catalogs can be read as already changed.\n\nSo I think at least pg_event_trigger_ddl_command must return a record\nfor all commands that trigger ddl_command_end and the record should\nhave the correct command_tag. DROP TABLE is currently classified as\nsupporting event trigger. 
If we don't do that, any workaround and\ndocumentation is needed.\n\nI may be missing something, andt any opinions, thoughts or suggestions\nare welcome.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 09 Mar 2020 16:52:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "DROP and ddl_command_end." }, { "msg_contents": "On Mon, Mar 9, 2020 at 3:54 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> I may be missing something, andt any opinions, thoughts or suggestions\n> are welcome.\n\nSince it's a set-returning function, I would have expected that\ninstead of trying to assign the result to a variable, you'd loop over\nit using FOR var IN query.\n\nBut if that's the problem, the error message is a bit odd.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 9 Mar 2020 13:29:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: DROP and ddl_command_end." }, { "msg_contents": "Thanks.\n\nAt Mon, 9 Mar 2020 13:29:47 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Mon, Mar 9, 2020 at 3:54 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > I may be missing something, andt any opinions, thoughts or suggestions\n> > are welcome.\n> \n> Since it's a set-returning function, I would have expected that\n> instead of trying to assign the result to a variable, you'd loop over\n> it using FOR var IN query.\n\nYes, right and I know. I intended the sample being simple, but sorry\nfor the bogus example. But the problem is not there. The problem is,\nthe trigger is called for DROP, the function returns no tuples. I'm\nnot sure DROP is the only command to cause the behavior, but if no\ntuples means DROP, we should document that behavior. 
Otherwise, we\nneed to document something like:\n\n\"pg_event_trigger_ddl_commands() may omit some of the commands and may\n return no tuples.\"\n\nBut it is quite odd.\n\n> But if that's the problem, the error message is a bit odd.\n\nThe error message is correct if we allow zero-tuple return from the\nfunction. Is it useful if we return DROP event with more information\nthan just DROP <OBJTYPE>?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 10 Mar 2020 12:52:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: DROP and ddl_command_end." }, { "msg_contents": "On Mon, Mar 9, 2020 at 11:54 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Yes, right and I know. I intended the sample being simple, but sorry\n> for the bogus example. But the problem is not there. The problem is,\n> the trigger is called for DROP, the function returns no tuples. I'm\n> not sure DROP is the only command to cause the behavior, but if no\n> tuples means DROP, we should document that behavior. Otherwise, we\n> need to document something like:\n>\n> \"pg_event_trigger_ddl_commands() may omit some of the commands and may\n> return no tuples.\"\n>\n> But it is quite odd.\n\nWell, I'm not sure what you're saying here. It seems like you're\nsaying the feature is broken. If that's true, instead of documenting\nthat it doesn't work, shouldn't we fix it?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 12 Mar 2020 09:06:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: DROP and ddl_command_end." } ]
[ { "msg_contents": "MySQL has a really useful feature they call the query rewrite cache. The\noptimizer checks incoming queries to see if a known better rewrite has been\nplaced within the query rewrite cache table. If one is found, the rewrite\nreplaces the incoming query before sending it to the execution engine. This\ncapability allows for one to fix poorly performing queries in 3rd party\napplication code that cannot be modified. For example, suppose a 3rd party\napplication contains the following inefficient query: SELECT COUNT(*) FROM\ntable WHERE SUBSTRING(column,1,3) = 'ABC'. One can place the following\nrewrite in the query rewrite cache: SELECT COUNT(*) FROM table WHERE column\nLIKE 'ABC%'. The original query cannot use an index while the rewrite can.\nSince it's a 3rd party application there is really no other way to make\nsuch an improvement. The existing rewrite rules in PostgreSQL are too\nnarrowly defined to permit such a substitution as the incoming query could\ninvolve many tables, so what's needed is a general \"if input SQL string\nmatches X then replace it with Y\". This check could be placed at the\nbeginning of the parser.c code. Suggest that the matching code should first\ncheck the string lengths and hash values before checking entire string\nmatch for efficiency.", "msg_date": "Mon, 9 Mar 2020 07:46:49 -0500", "msg_from": "Bert Scalzo <bertscalzo2@gmail.com>", "msg_from_op": true, "msg_subject": "New feature request: Query Rewrite Cache" } ]
[ { "msg_contents": "Hi,\n\nI am very interested in the Develop Performance Farm Benchmarks and Website (2020) project as one of the GSOC project. Is it possible to link me up with Andreas Scherbaum to discuss more and further understand the project?\n\nRegards,\nWen Rei\nMEng Electrical and Electronics, ECS\nUniversity of Southampton", "msg_date": "Mon, 9 Mar 2020 20:35:10 +0000", "msg_from": "\"do w.r. (wrd1e16)\" <wrd1e16@ecs.soton.ac.uk>", "msg_from_op": true, "msg_subject": "GSOC 2020 - Develop Performance Farm Benchmarks and Website (2020)" }, { "msg_contents": "Hi,\n\nOn Mon, Mar 09, 2020 at 08:35:10PM +0000, do w.r. (wrd1e16) wrote:\n> I am very interested in the Develop Performance Farm Benchmarks and Website (2020) project as one of the GSOC project. Is it possible to link me up with Andreas Scherbaum to discuss more and further understand the project?\n\nI suggest reaching out on the #gsoc2020-students slack channel. Details\non that, and other Postgres specific GSoC information, if you haven't\nalready seen it: https://wiki.postgresql.org/wiki/GSoC\n\nRegards,\nMark\n\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n", "msg_date": "Wed, 11 Mar 2020 19:21:08 -0700", "msg_from": "Mark Wong <mark@2ndQuadrant.com>", "msg_from_op": false, "msg_subject": "Re: GSOC 2020 - Develop Performance Farm Benchmarks and Website\n (2020)" } ]
[ { "msg_contents": "I extracted from the latest multirange patch a bit that creates a new\nroutine CastCreate() in src/backend/catalog/pg_cast.c. It contains the\ncatalog-accessing bits to create a new cast. It seems harmless, so I\nthought I'd apply it to get rid of a couple of hunks in the large patch.\n\n(I also threw in a move of get_cast_oid from functioncmds.c to\nlsyscache.c, which seems its natural place; at first I thought to put it\nin catalog/pg_cast.c but really it's not a great place IMO. This\nfunction was invented out of whole cloth in commit fd1843ff8979. I also\ncontemplated the move of CreateCast and DropCastById from functioncmds.c\nto some new place, but creating a new commands/castcmds.c seemed a bit\nexcessive, so I left them in their current locations.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"The problem with the future is that it keeps turning into the present\"\n(Hobbes)", "msg_date": "Mon, 9 Mar 2020 18:00:03 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "time for catalog/pg_cast.c?" }, { "msg_contents": "On 2020-Mar-09, Alvaro Herrera wrote:\n\n> I extracted from the latest multirange patch a bit that creates a new\n> routine CastCreate() in src/backend/catalog/pg_cast.c. It contains the\n> catalog-accessing bits to create a new cast. 
It seems harmless, so I\n> thought I'd apply it to get rid of a couple of hunks in the large patch.\n\nI forgot to \"git add\" this comment addition before sending:\n\n/*\n * ----------------------------------------------------------------\n *\t\tCastCreate\n *\n * Forms and inserts catalog tuples for a new cast being created.\n * Caller must have already checked privileges, and done consistency\n * checks on the given datatypes and cast function (if applicable).\n *\n * 'behavior' indicates the dependency that the new cast will have on\n * its input and output types and the cast function.\n * ----------------------------------------------------------------\n */\nObjectAddress\nCastCreate(Oid sourcetypeid, Oid targettypeid, Oid funcid, char castcontext,\n\t\t char castmethod, DependencyType behavior)\n\n\nI think the only API consideration here is for 'castcontext', which we\npass here as the pg_type.h symbol, but could alternatively be passed as\nCoercionContext enum values (from primnodes.h). I think the pg_cast.h\nchar is okay, but maybe somebody has a different opinion.\nWe could also add some trivial asserts (like if procoid is not invalid,\nthen method is function, etc.), but it doesn't seem worth fussing too\nmuch over.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 9 Mar 2020 18:14:44 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: time for catalog/pg_cast.c?" }, { "msg_contents": "I would even say that DropCastById belongs in the new file, which is\njust the attached. However, none of the Drop.*ById or Remove.*ById\nfunctions seem to be in backend/catalog/ at all, and moving just a\nsingle one seems to make things even more inconsistent. 
I think all\nthese catalog-accessing functions should be in backend/catalog/ but I'm\nnot in a hurry to patch half of backend/commands/ to move them all.\n\n(I think the current arrangement is just fallout from having created the\ndependency.c system to drop objects, which rid us of a bunch of bespoke\ndeletion-handling code.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 10 Mar 2020 11:41:20 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: time for catalog/pg_cast.c?" }, { "msg_contents": "On 2020-Mar-09, Alvaro Herrera wrote:\n\n> I extracted from the latest multirange patch a bit that creates a new\n> routine CastCreate() in src/backend/catalog/pg_cast.c. It contains the\n> catalog-accessing bits to create a new cast. It seems harmless, so I\n> thought I'd apply it to get rid of a couple of hunks in the large patch.\n\nPushed, thanks.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 10 Mar 2020 12:49:22 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: time for catalog/pg_cast.c?" } ]
[ { "msg_contents": "Hello,\n\nThis is a call for committers, reviewers and users,\nregarding \"planning counters in pg_stat_statements\"\npatch [1] but not only.\n\nHistorically, this version of pg_stat_statements\nwith planning counters was performing 3 calls to \npgss_store() for non utility statements in:\n1 - pgss_post_parse_analyze (init entry with queryid \n and store query text)\n2 - pgss_planner_hook (to store planning counters)\n3 - pgss_ExecutorEnd (to store execution counters)\n\nThen a new version was proposed to remove one call \nto pgss_store() by adding the query string to the \nplanner pg_plan_query():\n1 - pgss_planner_hook (to store planning counters)\n2 - pgss_ExecutorEnd (to store execution counters)\n\nMany performances tests where performed concluding\nthat there is no impact on this subject.\n\nPatch \"to pass query string to the planner\", could be \ncommitted by itself, and (maybe) used by other extensions.\n\nIf this was done, this new version of pgss with planning\ncounters could be committed as well, or even later \n(being used as a non core extension starting with pg13). 
\n\nSo please give us your feedback regarding this patch \n\"to pass query string to the planner\", if you have other \nuse cases, or any comment regarding core architecture.\n\nnote:\nA problem was discovered during IVM testing,\nbecause some queries without sql text where planned\nwithout being parsed, finishing in pgss with a zero \nqueryid.\n\nA work arround is to set track_planning = false,\nwe have chosen to fix that in pgss by ignoring\nzero queryid inside pgss_planner_hook.\n\nThanks in advance\nRegards\nPAscal\n \n[1] \" https://www.postgresql.org/message-id/20200309103142.GA45401%40nol\n<planning counters in pg_stat_statements> \"\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Mon, 9 Mar 2020 14:31:27 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": true, "msg_subject": "Patch: to pass query string to pg_plan_query()" }, { "msg_contents": "\n\nOn 2020/03/10 6:31, legrand legrand wrote:\n> Hello,\n> \n> This is a call for committers, reviewers and users,\n> regarding \"planning counters in pg_stat_statements\"\n> patch [1] but not only.\n\nDoes anyone object to this patch? I'm thinking to commit it separetely\nat first before committing the planning_counter_in_pg_stat_statements\npatch.\n\n> Historically, this version of pg_stat_statements\n> with planning counters was performing 3 calls to\n> pgss_store() for non utility statements in:\n> 1 - pgss_post_parse_analyze (init entry with queryid\n> and store query text)\n> 2 - pgss_planner_hook (to store planning counters)\n> 3 - pgss_ExecutorEnd (to store execution counters)\n> \n> Then a new version was proposed to remove one call\n> to pgss_store() by adding the query string to the\n> planner pg_plan_query():\n\nBut pgss_store() still needs to be called three times even in\nnon-utility command if the query has constants. 
Right?\n\n> 1 - pgss_planner_hook (to store planning counters)\n> 2 - pgss_ExecutorEnd (to store execution counters)\n> \n> Many performances tests where performed concluding\n> that there is no impact on this subject.\n\nSounds good!\n\n> Patch \"to pass query string to the planner\", could be\n> committed by itself, and (maybe) used by other extensions.\n> \n> If this was done, this new version of pgss with planning\n> counters could be committed as well, or even later\n> (being used as a non core extension starting with pg13).\n> \n> So please give us your feedback regarding this patch\n> \"to pass query string to the planner\", if you have other\n> use cases, or any comment regarding core architecture.\n\n*As far as I heard*, pg_hint_plan extension uses very tricky way to\nextract query string in the planner hook. So this patch would be\nvery helpful to make pg_hint_plan avoid using such tricky way.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 26 Mar 2020 22:54:35 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Patch: to pass query string to pg_plan_query()" }, { "msg_contents": "On Thu, Mar 26, 2020 at 10:54:35PM +0900, Fujii Masao wrote:\n> \n> On 2020/03/10 6:31, legrand legrand wrote:\n> > Hello,\n> > \n> > This is a call for committers, reviewers and users,\n> > regarding \"planning counters in pg_stat_statements\"\n> > patch [1] but not only.\n> \n> Does anyone object to this patch? 
I'm thinking to commit it separetely\n> at first before committing the planning_counter_in_pg_stat_statements\n> patch.\n> \n> > Historically, this version of pg_stat_statements\n> > with planning counters was performing 3 calls to\n> > pgss_store() for non utility statements in:\n> > 1 - pgss_post_parse_analyze (init entry with queryid\n> > and store query text)\n> > 2 - pgss_planner_hook (to store planning counters)\n> > 3 - pgss_ExecutorEnd (to store execution counters)\n> > \n> > Then a new version was proposed to remove one call\n> > to pgss_store() by adding the query string to the\n> > planner pg_plan_query():\n> \n> But pgss_store() still needs to be called three times even in\n> non-utility command if the query has constants. Right?\n\nYes indeed, this version is actually adding the 3rd pgss_store call. Passing\nthe query string is a collateral requirement in case the entry disappeared\nbetween post parse analysis and planning (which is quite possible with prepared\nstatements at least), as pgss will in this case fallback storing the as-is\nquery string, which is still better that no query text at all.\n\n> > 1 - pgss_planner_hook (to store planning counters)\n> > 2 - pgss_ExecutorEnd (to store execution counters)\n> > \n> > Many performances tests where performed concluding\n> > that there is no impact on this subject.\n> \n> Sounds good!\n> \n> > Patch \"to pass query string to the planner\", could be\n> > committed by itself, and (maybe) used by other extensions.\n> > \n> > If this was done, this new version of pgss with planning\n> > counters could be committed as well, or even later\n> > (being used as a non core extension starting with pg13).\n> > \n> > So please give us your feedback regarding this patch\n> > \"to pass query string to the planner\", if you have other\n> > use cases, or any comment regarding core architecture.\n> \n> *As far as I heard*, pg_hint_plan extension uses very tricky way to\n> extract query string in the planner hook. 
So this patch would be\n> very helpful to make pg_hint_plan avoid using such tricky way.\n\n+1\n\n\n", "msg_date": "Thu, 26 Mar 2020 15:40:09 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch: to pass query string to pg_plan_query()" }, { "msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> Does anyone object to this patch? I'm thinking to commit it separetely\n> at first before committing the planning_counter_in_pg_stat_statements\n> patch.\n\nI took a quick look through v9-0001-Pass-query-string-to-the-planner.patch\nand it's fine by me. It also matches up with something I've wanted to\ndo for awhile, which is to make the query string available during\nplanning and execution so that we can produce error cursors for\nrun-time errors, when relevant.\n\n(It's a little weird that the patch doesn't make standard_planner\nactually *do* anything with the string, like say save it into\nthe PlannerInfo struct. But that can come later I guess.)\n\nNote that I wouldn't want to bet that all of these call sites always have\nnon-null query strings to pass; but probably most of the time they will.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Mar 2020 11:44:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Patch: to pass query string to pg_plan_query()" }, { "msg_contents": "On Thu, Mar 26, 2020 at 11:44:44AM -0400, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> > Does anyone object to this patch? I'm thinking to commit it separetely\n> > at first before committing the planning_counter_in_pg_stat_statements\n> > patch.\n> \n> I took a quick look through v9-0001-Pass-query-string-to-the-planner.patch\n> and it's fine by me. 
It also matches up with something I've wanted to\n> do for awhile, which is to make the query string available during\n> planning and execution so that we can produce error cursors for\n> run-time errors, when relevant.\n> \n> (It's a little weird that the patch doesn't make standard_planner\n> actually *do* anything with the string, like say save it into\n> the PlannerInfo struct. But that can come later I guess.)\n> \n> Note that I wouldn't want to bet that all of these call sites always have\n> non-null query strings to pass; but probably most of the time they will.\n\nSurprinsingly, the whole regression tests pass flawlessly with an non-null\nquery string assert, but we did had some discussion about it. The pending IVM\npatch would break that assumption, same as some non trivial extensions like\ncitus (see\nhttps://www.postgresql.org/message-id/flat/CAFMSG9HJQr%3DH8doWJOp%3DwqyKbVqxMLkk_Qu2KfpmkKvS-Xn7qQ%40mail.gmail.com#ab8ea541b8c8464f7b52ba6d8d480b7d\nand later), so we didn't make it a hard requirement.\n\n\n", "msg_date": "Thu, 26 Mar 2020 17:45:41 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Patch: to pass query string to pg_plan_query()" }, { "msg_contents": "Tom Lane-2 wrote\n> Fujii Masao &lt;\n\n> masao.fujii@.nttdata\n\n> &gt; writes:\n>> Does anyone object to this patch? I'm thinking to commit it separetely\n>> at first before committing the planning_counter_in_pg_stat_statements\n>> patch.\n> \n> I took a quick look through v9-0001-Pass-query-string-to-the-planner.patch\n> and it's fine by me. 
It also matches up with something I've wanted to\n> do for awhile, which is to make the query string available during\n> planning and execution so that we can produce error cursors for\n> run-time errors, when relevant.\n> \n> [...]\n> \n> \t\t\tregards, tom lane\n\n\nGreat !\nGood news ;o)\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Fri, 27 Mar 2020 10:27:52 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Patch: to pass query string to pg_plan_query()" } ]
[ { "msg_contents": "Dear developers,\n\nDebian (and Ubuntu) are beginning to remove foo-config legacy scripts.\nAlready, xml2-config has been flagged for removal, with packages being\nasked to switch to pkg-config.\n\nThis patch uses pkg-config's PKG_CHECK_MODULES macro to detect libxml2\nor, if pkg-config is not available, falls back to xml2-confg.\n\nThe patch was created against the master branch of git.\n\nI have built PostgreSQL and run `make check`. All 196 tests passed.\n\nHugh", "msg_date": "Tue, 10 Mar 2020 21:53:14 +1100", "msg_from": "Hugh McMaster <hugh.mcmaster@outlook.com>", "msg_from_op": true, "msg_subject": "[PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "> On 10 Mar 2020, at 11:53, Hugh McMaster <hugh.mcmaster@outlook.com> wrote:\n\n> Debian (and Ubuntu) are beginning to remove foo-config legacy scripts.\n> Already, xml2-config has been flagged for removal, with packages being\n> asked to switch to pkg-config.\n> \n> This patch uses pkg-config's PKG_CHECK_MODULES macro to detect libxml2\n> or, if pkg-config is not available, falls back to xml2-confg.\n\nThis was previously discussed in 20200120204715.GA73984@msg.df7cb.de which\nended without a real conclusion on what could/should be done (except that\nnothing *had* to be done).\n\nWhat is the situation on non-Debian/Ubuntu systems (BSD's, macOS etc etc)? 
Is\nit worth adding pkg-config support if we still need a fallback to xml2-config?\n\ncheers ./daniel\n\n", "msg_date": "Tue, 10 Mar 2020 12:41:51 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "On 2020-03-10 12:41, Daniel Gustafsson wrote:\n>> On 10 Mar 2020, at 11:53, Hugh McMaster <hugh.mcmaster@outlook.com> wrote:\n> \n>> Debian (and Ubuntu) are beginning to remove foo-config legacy scripts.\n>> Already, xml2-config has been flagged for removal, with packages being\n>> asked to switch to pkg-config.\n>>\n>> This patch uses pkg-config's PKG_CHECK_MODULES macro to detect libxml2\n>> or, if pkg-config is not available, falls back to xml2-confg.\n> \n> This was previously discussed in 20200120204715.GA73984@msg.df7cb.de which\n> ended without a real conclusion on what could/should be done (except that\n> nothing *had* to be done).\n> \n> What is the situation on non-Debian/Ubuntu systems (BSD's, macOS etc etc)? Is\n> it worth adding pkg-config support if we still need a fallback to xml2-config?\n\nBtw., here is an older thread for the same issue \n<https://www.postgresql.org/message-id/flat/1358164265.29612.7.camel%40vanquo.pezone.net>. 
\n Might be worth reflecting on the issues discussed there.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 10 Mar 2020 18:38:41 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "> On 10 Mar 2020, at 18:38, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2020-03-10 12:41, Daniel Gustafsson wrote:\n>>> On 10 Mar 2020, at 11:53, Hugh McMaster <hugh.mcmaster@outlook.com> wrote:\n>>> Debian (and Ubuntu) are beginning to remove foo-config legacy scripts.\n>>> Already, xml2-config has been flagged for removal, with packages being\n>>> asked to switch to pkg-config.\n>>> \n>>> This patch uses pkg-config's PKG_CHECK_MODULES macro to detect libxml2\n>>> or, if pkg-config is not available, falls back to xml2-confg.\n>> This was previously discussed in 20200120204715.GA73984@msg.df7cb.de which\n>> ended without a real conclusion on what could/should be done (except that\n>> nothing *had* to be done).\n>> What is the situation on non-Debian/Ubuntu systems (BSD's, macOS etc etc)? Is\n>> it worth adding pkg-config support if we still need a fallback to xml2-config?\n> \n> Btw., here is an older thread for the same issue <https://www.postgresql.org/message-id/flat/1358164265.29612.7.camel%40vanquo.pezone.net>. 
Might be worth reflecting on the issues discussed there.\n\nThanks, didn't realize that the subject had been up for discussion earlier as\nwell.\n\nFor me, the duplication aspect is the most troubling, since we'd still need the\nxml2-config fallback and thus won't be able to simplify the code.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 10 Mar 2020 21:49:49 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "On Tue, 10 Mar 2020 at 22:41, Daniel Gustafsson wrote:\n>\n> > On 10 Mar 2020, at 11:53, Hugh McMaster <hugh.mcmaster@outlook.com> wrote:\n> > This patch uses pkg-config's PKG_CHECK_MODULES macro to detect libxml2\n> > or, if pkg-config is not available, falls back to xml2-confg.\n>\n> This was previously discussed in 20200120204715.GA73984@msg.df7cb.de which\n> ended without a real conclusion on what could/should be done (except that\n> nothing *had* to be done).\n>\n> What is the situation on non-Debian/Ubuntu systems (BSD's, macOS etc etc)? Is\n> it worth adding pkg-config support if we still need a fallback to xml2-config?\n\nTo the best of my knowledge, FreeBSD, macOS, OpenSUSE, Solaris etc.\nall support detection of libxml2 via pkg-config.\n\nOne way or another, xml2-config is going away, whether by decision of\na package maintainer or upstream.\n\nfreetype-config was deprecated upstream a few years ago. Upstream ICU\nwill also disable the installation of icu-config by default from April\nthis year. Some systems, such as Debian, have not shipped icu-config\nfor a year or so.\n\nThe PHP project last year switched to using pkg-config by default for\nall libraries supplying a .pc file. PHP's build scripts do not fall\nback to legacy scripts.\n\nAnother reason for switching is that xml2-config incorrectly outputs\nstatic libraries when called with `--libs`. You have to call `--libs\n--dynamic` to output -lxml2 only. 
Debian patches the script to avoid\nthis unusual behaviour.\n\n\n", "msg_date": "Wed, 11 Mar 2020 21:24:10 +1100", "msg_from": "Hugh McMaster <hugh.mcmaster@outlook.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "On Wed, 11 Mar 2020 at 07:49, Daniel Gustafsson wrote:\n> > On 10 Mar 2020, at 18:38, Peter Eisentraut wrote:\n> > Btw., here is an older thread for the same issue <https://www.postgresql.org/message-id/flat/1358164265.29612.7.camel%40vanquo.pezone.net>. Might be worth reflecting on the issues discussed there.\n>\n> Thanks, didn't realize that the subject had been up for discussion earlier as\n> well.\n\nInteresting thread. The issue of precedence (e.g. pkg-config over\nxml2-config) is still relevant, although arguably less so today, due\nto the far greater availability of pkg-config. Some packages choose to\nfall back to xml2-config, say, if they need to support old or\nsoon-to-be EOL systems lacking pkg-config. 
These situations are\nincreasingly rare.\n\nThe thread is correct on multi-arch header and library directories.\nThat said, pkg-config can handle this easily.\n\n> For me, the duplication aspect is the most troubling, since we'd still need the\n> xml2-config fallback and thus won't be able to simplify the code.\n\nconfigure.in shows that ICU only uses the PKG_CHECK_MODULES macro.\nlibxml2, libxslt and other dependencies could also switch.\n\nUsing AC_CHECK_LIB to add libraries (such as -lxml2) to $LIBS isn't\nprobably the most ideal method (I'd recommend adding pkg-config's\nnative X_CFLAGS and X_LIBS variables as necessary to $LIBS, $CPPFLAGS\netc.), but that's a topic for another thread.\n\n\n", "msg_date": "Wed, 11 Mar 2020 21:37:01 +1100", "msg_from": "Hugh McMaster <hugh.mcmaster@outlook.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 10 Mar 2020, at 18:38, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>> Btw., here is an older thread for the same issue <https://www.postgresql.org/message-id/flat/1358164265.29612.7.camel%40vanquo.pezone.net>. Might be worth reflecting on the issues discussed there.\n\n> Thanks, didn't realize that the subject had been up for discussion earlier as\n> well.\n> For me, the duplication aspect is the most troubling, since we'd still need the\n> xml2-config fallback and thus won't be able to simplify the code.\n\nYeah, but at least it's concentrated in a few lines in configure.in.\n\nI think that the main objection to this is the documentation/confusion\nissues raised by Noah in that old thread. 
Still, we probably don't\nhave much choice given that some distros are going to remove xml2-config.\nIn that connection, Hugh's patch lacks docs which is entirely not OK,\nbut the doc changes in Peter's old patch look workable.\n\nI wonder whether we ought to try to align this with our documented\nprocedure for falling back if you have no icu-config but want to\nuse ICU; that part of the docs suggests setting ICU_CFLAGS and ICU_LIBS\nmanually. The patch as it stands doesn't seem to support manually\ngiving XML2_CFLAGS and XML2_LIBS, but it looks like it could easily\nbe adjusted to allow that.\n\nAlso, I see that pkg.m4 says\n\ndnl Note that if there is a possibility the first call to\ndnl PKG_CHECK_MODULES might not happen, you should be sure to include an\ndnl explicit call to PKG_PROG_PKG_CONFIG in your configure.ac\n\nwhich we are not doing. We got away with that as long as there was only\none PKG_CHECK_MODULES call ... but with two, I'd expect that the second\none will fall over if the first one isn't executed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Mar 2020 12:39:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "On Thu, 12 Mar 2020 at 03:39, Tom Lane wrote:\n\n> Daniel Gustafsson writes:\n> > For me, the duplication aspect is the most troubling, since we'd still\n> need the\n> > xml2-config fallback and thus won't be able to simplify the code.\n>\n> Yeah, but at least it's concentrated in a few lines in configure.in.\n>\n> I think that the main objection to this is the documentation/confusion\n> issues raised by Noah in that old thread. Still, we probably don't\n> have much choice given that some distros are going to remove xml2-config.\n> In that connection, Hugh's patch lacks docs which is entirely not OK,\n> but the doc changes in Peter's old patch look workable.\n\n\nDocumentation can be added easily. 
:-)\n\nI wonder whether we ought to try to align this with our documented\n> procedure for falling back if you have no icu-config but want to\n> use ICU; that part of the docs suggests setting ICU_CFLAGS and ICU_LIBS\n> manually.\n\n\nUnless your system has ICU installed in a non-standard location, there is\nno need to set those variables, as PKG_CHECK_MODULES will handle that for\nyou. `./configure --help` also provides the relevant documentation on\noverriding pkg-config’s X_CFLAGS and X_LIBS.\n\nThe patch as it stands doesn't seem to support manually\n> giving XML2_CFLAGS and XML2_LIBS, but it looks like it could easily\n> be adjusted to allow that.\n\n\nWhat I said for ICU also applies here (in fact, to all use of\nPKG_CHECK_MODULES). For the fallback, a minor rework is required.\n\nThe question is really whether we want to maintain a fallback to\nxml2-config. To give more context, I gave a more detailed assessment of the\nsituation in an earlier email to this list. (Personally, I don’t think we\nshould.)\n\nDo note also that xslt-config will also be a problem at some point.\n\nAlso, I see that pkg.m4 says\n>\n> dnl Note that if there is a possibility the first call to\n> dnl PKG_CHECK_MODULES might not happen, you should be sure to include an\n> dnl explicit call to PKG_PROG_PKG_CONFIG in your configure.ac\n>\n> which we are not doing. We got away with that as long as there was only\n> one PKG_CHECK_MODULES call ... 
but with two, I'd expect that the second\n> one will fall over if the first one isn't executed.\n\n\nYes, that macro needs to be set.\n\nHugh", "msg_date": "Thu, 12 Mar 2020 09:19:35 +1100", "msg_from": "Hugh McMaster <hugh.mcmaster@outlook.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "Hugh McMaster <hugh.mcmaster@outlook.com> writes:\n> The question is really whether we want to maintain a fallback to\n> xml2-config. To give more context, I gave a more detailed assessment of the\n> situation in an earlier email to this list. (Personally, I don’t think we\n> should.)\n\nI think that that is not optional. We try to maintain portability\nto old systems as well as new.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Mar 2020 18:33:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "I poked at this issue a bit more and realized that Hugh's patch\nflat-out breaks building --with-libxml in environments without\npkg-config, because PKG_CHECK_MODULES just gives up and dies\nif there's no pkg-config (as we'd already found out in connection\nwith ICU). That won't win us any friends, so the attached revision\ndoesn't call PKG_CHECK_MODULES unless we found pkg-config. 
I also\nconcluded that if the user has set XML2_CONFIG, it's pretty clear\nthat her intent is to use whatever that is pointing at, so we should\nnot use pkg-config in that case either.\n\nAlso, I'd been going back and forth about whether it was worth\ndocumenting XML2_CFLAGS/XML2_LIBS, but I realized that use of\nPKG_CHECK_MODULES(XML2, ...) basically forces the issue for us:\nit does AC_ARG_VAR on them, which puts them into configure's\n--help output and makes configure picky about caching them.\nSo we can't really pretend they're boring implementation detail.\n\nSo the attached mostly adopts Peter's old suggested docs, but\nI added discussion of XML2_CFLAGS/XML2_LIBS and dropped the mention\nof forcing matters with --with-libs/--with-libraries (not because\nthat doesn't work anymore but because it seemed like we were offering\nquite enough alternatives already).\n\nI'd originally thought that we might back-patch this, but I'm now of\nthe opinion that we probably should not. If pkg-config is present,\nthis can change the default behavior about where we get libxml from,\nwhich seems like something not to do in minor releases. (OTOH, it'd\nonly matter if the default pkg-config choice is different from the\ndefault xml2-config choice, so maybe the risk of breakage is small\nenough to be acceptable?)\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 12 Mar 2020 12:39:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "> On 12 Mar 2020, at 17:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I also\n> concluded that if the user has set XML2_CONFIG, it's pretty clear\n> that her intent is to use whatever that is pointing at, so we should\n> not use pkg-config in that case either.\n\n+1\n\n> I'd originally thought that we might back-patch this, but I'm now of\n> the opinion that we probably should not. 
If pkg-config is present,\n> this can change the default behavior about where we get libxml from,\n> which seems like something not to do in minor releases. (OTOH, it'd\n> only matter if the default pkg-config choice is different from the\n> default xml2-config choice, so maybe the risk of breakage is small\n> enough to be acceptable?)\n\nI read this is as a preventative patch to stay ahead of future changes to\npackaging. If these changes do materialize, won't they be equally likely to\nhit installations for backbranch minors as v13? Changing behavior in a minor\nrelease is however not ideal, but perhaps the risk/benefit analysis comes down\nto releases not compiling being worse?\n\nI haven't had the chance to test the patch, but the changes to configure.in\nreads perfectly fine. In the docs though:\n\n> + To use a libxml installation that is in an unusual location, you\n\nWe refer to both libxml and libxml2 in these paragraphs. Since upstream is\nconsistently referring to it as libxml2, maybe we should take this as\nopportunity to switch to that for the docs?\n\ncheers ./daniel\n\n", "msg_date": "Thu, 12 Mar 2020 22:00:33 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "On Fri, 13 Mar 2020 at 03:39, Tom Lane wrote:\n> I poked at this issue a bit more and realized that Hugh's patch\n> flat-out breaks building --with-libxml in environments without\n> pkg-config, because PKG_CHECK_MODULES just gives up and dies\n> if there's no pkg-config (as we'd already found out in connection\n> with ICU).\n\nAs you found out, that is by design. PKG_CHECK_MODULES actually checks\nfor pkg-config via PKG_PROG_PKG_CONFIG, but only in the first\nexpansion of PKG_CHECK_MODULES. If the first instance is in a\nconditional, then the check for pkg-config is also in that\nconditional. Once a build system begins using pkg-config without a\nfallback (e.g. 
like for ICU), pkg-config becomes a build dependency\n(and, yes, I realise ICU isn't mandatory here).\n\nThat won't win us any friends, so the attached revision\n> doesn't call PKG_CHECK_MODULES unless we found pkg-config.\n\nDid you mean to terminate configure if pkg-config cannot find\nlibxml-2.0 or the library is too old? Your doc changes don't indicate\nthat intent, nor was it prior behaviour, but some projects like that\nbehaviour and others don't.\n\n> I also concluded that if the user has set XML2_CONFIG, it's pretty clear\n> that her intent is to use whatever that is pointing at, so we should\n> not use pkg-config in that case either.\n>\n> Also, I'd been going back and forth about whether it was worth\n> documenting XML2_CFLAGS/XML2_LIBS, but I realized that use of\n> PKG_CHECK_MODULES(XML2, ...) basically forces the issue for us:\n> it does AC_ARG_VAR on them, which puts them into configure's\n> --help output and makes configure picky about caching them.\n> So we can't really pretend they're boring implementation detail.\n\nYou might consider this an edge case, but you override custom\nXML2_CFLAGS/LIBS if xml2-config is detected.\n\n> So the attached mostly adopts Peter's old suggested docs, but\n> I added discussion of XML2_CFLAGS/XML2_LIBS and dropped the mention\n> of forcing matters with --with-libs/--with-libraries (not because\n> that doesn't work anymore but because it seemed like we were offering\n> quite enough alternatives already).\n>\n> I'd originally thought that we might back-patch this, but I'm now of\n> the opinion that we probably should not. If pkg-config is present,\n> this can change the default behavior about where we get libxml from,\n> which seems like something not to do in minor releases. 
(OTOH, it'd\n> only matter if the default pkg-config choice is different from the\n> default xml2-config choice, so maybe the risk of breakage is small\n> enough to be acceptable?)\n\n\n", "msg_date": "Fri, 13 Mar 2020 23:26:03 +1100", "msg_from": "Hugh McMaster <hugh.mcmaster@outlook.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "Hugh McMaster <hugh.mcmaster@outlook.com> writes:\n> On Fri, 13 Mar 2020 at 03:39, Tom Lane wrote:\n>> That won't win us any friends, so the attached revision\n>> doesn't call PKG_CHECK_MODULES unless we found pkg-config.\n\n> Did you mean to terminate configure if pkg-config cannot find\n> libxml-2.0 or the library is too old? Your doc changes don't indicate\n> that intent, nor was it prior behaviour, but some projects like that\n> behaviour and others don't.\n\nYeah, a potential edge-case here is that pkg-config is in the PATH\nbut it has no information about libxml2 (I doubt we need to consider\nthe risk that it has info about a pre-2.6.23 version). Looking\nagain at the generated configure code, I realize I shouldn't have\nleft off the ACTION-IF-NOT-FOUND argument --- the default is to\nthrow an error, but we'd rather fall through and try to use xml2-config.\nThe eventual AC_CHECK_LIB(xml2, ...) test will catch the situation\nwhere the library isn't there. 
Updated patch attached.\n\n> You might consider this an edge case, but you override custom\n> XML2_CFLAGS/LIBS if xml2-config is detected.\n\nYeah, if pkg-config fails and xml2-config is present, that's true.\nSince those weren't there before, and we now document them as just\na fallback solution, I think that's fine.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 13 Mar 2020 12:06:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 12 Mar 2020, at 17:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'd originally thought that we might back-patch this, but I'm now of\n>> the opinion that we probably should not. If pkg-config is present,\n>> this can change the default behavior about where we get libxml from,\n>> which seems like something not to do in minor releases. (OTOH, it'd\n>> only matter if the default pkg-config choice is different from the\n>> default xml2-config choice, so maybe the risk of breakage is small\n>> enough to be acceptable?)\n\n> I read this is as a preventative patch to stay ahead of future changes to\n> packaging. If these changes do materialize, won't they be equally likely to\n> hit installations for backbranch minors as v13?\n\nYeah, that's the argument *for* back-patching. Question is whether it\noutweighs the risk of silently breaking somebody's build by linking\nto the wrong libxml2 version.\n\nI could go either way, honestly. The risk doesn't seem large, but\nit's not zero.\n\n> We refer to both libxml and libxml2 in these paragraphs. Since upstream is\n> consistently referring to it as libxml2, maybe we should take this as\n> opportunity to switch to that for the docs?\n\nI think we're kind of stuck with \"--with-libxml\". 
Conceivably we\ncould introduce \"--with-libxml2\", redefine the old switch as an\nobsolete alias, and start saying \"libxml2\" instead of \"libxml\".\nBut I'm not sure that's worth the trouble, and it seems like\nmaterial for a different patch anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Mar 2020 12:14:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "On Sat, 14 Mar 2020 at 03:06, Tom Lane wrote:\n> Looking again at the generated configure code, I realize I shouldn't have\n> left off the ACTION-IF-NOT-FOUND argument --- the default is to\n> throw an error, but we'd rather fall through and try to use xml2-config.\n> The eventual AC_CHECK_LIB(xml2, ...) test will catch the situation\n> where the library isn't there. Updated patch attached.\n\nThe updated patch works fine. Thanks for working on this!\n\nHugh\n\n\n", "msg_date": "Mon, 16 Mar 2020 22:49:05 +1100", "msg_from": "Hugh McMaster <hugh.mcmaster@outlook.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "> On 13 Mar 2020, at 17:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n\n>> I read this is as a preventative patch to stay ahead of future changes to\n>> packaging. If these changes do materialize, won't they be equally likely to\n>> hit installations for backbranch minors as v13?\n> \n> Yeah, that's the argument *for* back-patching. Question is whether it\n> outweighs the risk of silently breaking somebody's build by linking\n> to the wrong libxml2 version.\n\nCorrect, my argument is that breakage can be expected equally across branches,\nso I think back-patching should be seriously considered.\n\n>> We refer to both libxml and libxml2 in these paragraphs. 
Since upstream is\n>> consistently referring to it as libxml2, maybe we should take this as\n>> opportunity to switch to that for the docs?\n> \n> I think we're kind of stuck with \"--with-libxml\". Conceivably we\n> could introduce \"--with-libxml2\", redefine the old switch as an\n> obsolete alias, and start saying \"libxml2\" instead of \"libxml\".\n> But I'm not sure that's worth the trouble, and it seems like\n> material for a different patch anyway.\n\nAbsolutely, thats why I referred to changing mentions of libxml in the docs\nonly where we refer to the product and not the switch (the latter was not very\nclear in my email though). Also, shouldn't libxml2 be within <productname>\ntags like OpenSSL and LLVM et.al?\n\ncheers ./daniel\n\n", "msg_date": "Mon, 16 Mar 2020 15:30:17 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 13 Mar 2020, at 17:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, that's the argument *for* back-patching. Question is whether it\n>> outweighs the risk of silently breaking somebody's build by linking\n>> to the wrong libxml2 version.\n\n> Correct, my argument is that breakage can be expected equally across branches,\n> so I think back-patching should be seriously considered.\n\nYou're right that the risk of breakage (of either type) is about the same\nacross branches; but the project's conventions are not. We try to avoid\nunnecessary changes in back branches.\n\nStill, after further reflection, I think the odds favor back-patching.\nThis patch could only break things on systems where\n\n(a) There's more than one libxml2 installation, which is already a\ntiny minority use-case. 
It seems very unlikely to hurt any packagers\nfollowing typical build processes, for instance.\n\nAND\n\n(b) the default pkg-config and default xml2-config results differ.\nThat seems even more unlikely.\n\nNow, breakage is certainly possible. A counterexample to (b) is that\nif you wanted to build using a non-default libxml2 installation, you\nmight've tried to select that by putting its xml2-config into your\nPATH ahead of the system version, rather than setting XML2_CONFIG.\nPost-patch, we'd consult pkg-config first and presumably end up\nwith the system libxml2.\n\nStill, I think the number of people who'd get bit by that could be\ncounted without running out of fingers, while it seems quite likely\nthat many people will soon need to build our back branches on\nplatforms that won't have xml2-config.\n\nSo I'm now leaning to \"back-patch and make sure to mention this in\nthe next release notes\". Barring objections, I'll do that soon.\n\n> Absolutely, thats why I referred to changing mentions of libxml in the docs\n> only where we refer to the product and not the switch (the latter was not very\n> clear in my email though). Also, shouldn't libxml2 be within <productname>\n> tags like OpenSSL and LLVM et.al?\n\nI don't have a problem with s/libxml/libxml2/ in the running text\n(without changing the switch name). Can't get too excited about\n<productname> though. 
I think we've only consistently applied\nthat tag to PostgreSQL itself.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 16 Mar 2020 12:12:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "> On 16 Mar 2020, at 17:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Still, I think the number of people who'd get bit by that could be\n> counted without running out of fingers, while it seems quite likely\n> that many people will soon need to build our back branches on\n> platforms that won't have xml2-config.\n\nI agree with this assessment.\n\n> So I'm now leaning to \"back-patch and make sure to mention this in\n> the next release notes\". Barring objections, I'll do that soon.\n\nNone from me.\n\n> I don't have a problem with s/libxml/libxml2/ in the running text\n> (without changing the switch name).\n\n+1\n\n> Can't get too excited about\n> <productname> though. I think we've only consistently applied\n> that tag to PostgreSQL itself.\n\nFair enough. Looking at a random sample it seems we use a bit of a mix of\nnothing, <application /> and <productname /> (the latter ones being more or\nless equal in DocBook IIRC) to mark up externals.\n\ncheers ./daniel\n\n\n", "msg_date": "Mon, 16 Mar 2020 22:04:05 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 16 Mar 2020, at 17:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So I'm now leaning to \"back-patch and make sure to mention this in\n>> the next release notes\". Barring objections, I'll do that soon.\n\n> None from me.\n\nDone. In the event, it only seemed practical to back-patch as far as\nv10. 
9.x didn't use pkg-config for anything, so our infrastructure\nfor it isn't there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 17 Mar 2020 12:11:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Use PKG_CHECK_MODULES to detect the libxml2 library" } ]
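For reference, the detection order the thread above converged on — run PKG_PROG_PKG_CONFIG unconditionally (per the pkg.m4 caveat Tom quoted), honor an explicit XML2_CONFIG setting, try pkg-config otherwise, and fall back to xml2-config when pkg-config fails or is absent — can be sketched as a configure.in fragment roughly like this. This is an illustrative sketch only, not the committed patch; the exact wiring, the `have_libxml2_pkg_config` variable, and the use of PostgreSQL's PGAC_PATH_PROGS helper are assumptions made for the example.

```m4
dnl Illustrative sketch of the detection order discussed above;
dnl not the exact code that was committed.
dnl PKG_PROG_PKG_CONFIG must run unconditionally: per pkg.m4, a later
dnl PKG_CHECK_MODULES call misbehaves if the first one was skipped
dnl inside a conditional.
PKG_PROG_PKG_CONFIG

if test "$with_libxml" = yes ; then
  have_libxml2_pkg_config=no
  dnl An explicit XML2_CONFIG means the user wants that installation,
  dnl so do not consult pkg-config in that case.
  if test -z "$XML2_CONFIG" -a -n "$PKG_CONFIG"; then
    PKG_CHECK_MODULES([XML2], [libxml-2.0 >= 2.6.23],
                      [have_libxml2_pkg_config=yes],
                      [have_libxml2_pkg_config=no])
  fi
  if test "$have_libxml2_pkg_config" = no ; then
    dnl Legacy fallback for systems without pkg-config or libxml-2.0.pc.
    PGAC_PATH_PROGS(XML2_CONFIG, xml2-config)
    if test -n "$XML2_CONFIG"; then
      XML2_CFLAGS=`$XML2_CONFIG --cflags`
      XML2_LIBS=`$XML2_CONFIG --libs`
    fi
  fi
fi
```

Note that supplying an explicit ACTION-IF-NOT-FOUND argument is what keeps PKG_CHECK_MODULES from aborting configure, matching Tom's observation that the macro's default action on failure is to throw an error; the eventual AC_CHECK_LIB(xml2, ...) test still catches the case where no usable library is found at all.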
[ { "msg_contents": "Hello,\nAfter upgrade from 11.2 to 12.2 I found, that build of ecpg component\ndepends on pgcommon_shlib and pgport_shlib. But build of ecpg\ndoesn't include build of pgcommon_shlib and pgport_shlib. That means, if I\nwant to build ecpg, first I need to build pgcommon_shlib and pgport_shlib\nand after that I am able to build ecpg.\n\nI would like to ask if this behavior is expected or not ? Because previous\nversion doesn't require this separate builds.\n\nThanks\nFilip Januš", "msg_date": "Tue, 10 Mar 2020 13:47:14 +0100", "msg_from": "Filip Janus <fjanus@redhat.com>", "msg_from_op": true, "msg_subject": "Ecpg dependency" }, { "msg_contents": "On Tue, Mar 10, 2020 at 01:47:14PM +0100, Filip Janus wrote:\n> Hello,\n> After upgrade from 11.2 to 12.2 I found, that build of ecpg component depends\n> on pgcommon_shlib and pgport_shlib. But build of ecpg doesn't include build\n> of pgcommon_shlib and pgport_shlib. That means, if I want to build ecpg, first\n> I need to build pgcommon_shlib and pgport_shlib and after that I am able to\n> build ecpg.\n> \n> I would like to ask if this behavior is expected or not ? Because previous\n> version doesn't require this separate builds.\n\nAh, I see the problem, and this is a new bug in PG 12. The attached\npatch fixes PG 12 and master.\n \n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +", "msg_date": "Sat, 21 Mar 2020 14:14:44 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Ecpg dependency" }, { "msg_contents": "On Sat, Mar 21, 2020 at 02:14:44PM -0400, Bruce Momjian wrote:\n> On Tue, Mar 10, 2020 at 01:47:14PM +0100, Filip Janus wrote:\n> > Hello,\n> > After upgrade from 11.2 to 12.2 I found, that build of ecpg component depends\n> > on pgcommon_shlib and pgport_shlib. But build of ecpg doesn't include build\n> > of pgcommon_shlib and pgport_shlib. That means, if I want to build ecpg, first\n> > I need to build pgcommon_shlib and pgport_shlib and after that I am able to\n> > build ecpg.\n> > \n> > I would like to ask if this behavior is expected or not ? Because previous\n> > version doesn't require this separate builds.\n> \n> Ah, I see the problem, and this is a new bug in PG 12. The attached\n> patch fixes PG 12 and master.\n\n> + all-lib: | submake-libpgport\n\nOh, I forgot to mention I got this line from\nsrc/interfaces/libpq/Makefile:\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
That means, if I want to build ecpg, first\n>> > I need to build pgcommon_shlib and pgport_shlib and after that I am able to\n>> > build ecpg.\n>> > \n>> > I would like to ask if this behavior is expected or not? Because previous\n>> > version doesn't require these separate builds.\n>> \n>> Ah, I see the problem, and this is a new bug in PG 12. The attached\n>> patch fixes PG 12 and master.\n>\n>> + all-lib: | submake-libpgport\n>\n> Oh, I forgot to mention I got this line from\n> src/interfaces/libpq/Makefile:\n\nAnd that line is wrong, but my patch to fix it¹ seems to have fallen\nbetween the cracks.\n\n[1] https://www.postgresql.org/message-id/flat/871rsa13ae.fsf%40wibble.ilmari.org \n\nAdding the dependency to `all-lib` only fixes it for serial builds. To\nfix it properly, so it works with parallel builds (e.g. 'make -j4 -C\nsrc/interfaces/ecpg'), the dependency needs to be declared via\nSHLIB_PREREQS, as attached.\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl", "msg_date": "Sat, 21 Mar 2020 19:30:48 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker)", "msg_from_op": false, "msg_subject": "Re: Ecpg dependency" }, { "msg_contents": "On Sat, Mar 21, 2020 at 07:30:48PM +0000, Dagfinn Ilmari Mannsåker wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> \n> > On Sat, Mar 21, 2020 at 02:14:44PM -0400, Bruce Momjian wrote:\n> >> On Tue, Mar 10, 2020 at 01:47:14PM +0100, Filip Janus wrote:\n> >> > Hello,\n> >> > After upgrade from 11.2 to 12.2 I found that build of ecpg component depends\n> >> > on pgcommon_shlib and pgport_shlib. But build of ecpg doesn't include build\n> >> > of pgcommon_shlib and pgport_shlib.
That means, if I want to build ecpg, first\n> >> > I need to build pgcommon_shlib and pgport_shlib and after that I am able to\n> >> > build ecpg.\n> >> > \n> >> > I would like to ask if this behavior is expected or not? Because previous\n> >> > version doesn't require these separate builds.\n> >> \n> >> Ah, I see the problem, and this is a new bug in PG 12. The attached\n> >> patch fixes PG 12 and master.\n> >\n> >> + all-lib: | submake-libpgport\n> >\n> > Oh, I forgot to mention I got this line from\n> > src/interfaces/libpq/Makefile:\n> \n> And that line is wrong, but my patch to fix it¹ seems to have fallen\n> between the cracks.\n> \n> [1] https://www.postgresql.org/message-id/flat/871rsa13ae.fsf%40wibble.ilmari.org \n> \n> Adding the dependency to `all-lib` only fixes it for serial builds. To\n> fix it properly, so it works with parallel builds (e.g. 'make -j4 -C\n> src/interfaces/ecpg'), the dependency needs to be declared via\n> SHLIB_PREREQS, as attached\n\nOh, good catch. I did not notice that patch before. Adding that change\nto src/interfaces/ecpg/pgtypeslib/Makefile fixes the stand-alone\ncompile.\n\nThe attached patch does this, and changes libpq to use it too, so\nparallel Make works there too.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be.
+\n+ Ancient Roman grave inscription +", "msg_date": "Sat, 21 Mar 2020 18:13:03 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Ecpg dependency" }, { "msg_contents": "On Sat, Mar 21, 2020 at 06:13:03PM -0400, Bruce Momjian wrote:\n> On Sat, Mar 21, 2020 at 07:30:48PM +0000, Dagfinn Ilmari Mannsåker wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > On Sat, Mar 21, 2020 at 02:14:44PM -0400, Bruce Momjian wrote:\n> > > Oh, I forgot to mention I got this line from\n> > > src/interfaces/libpq/Makefile:\n> > \n> > And that line is wrong, but my patch to fix it¹ seems to have fallen\n> > between the cracks.\n> > \n> > [1] https://www.postgresql.org/message-id/flat/871rsa13ae.fsf%40wibble.ilmari.org \n> > \n> > Adding the dependency to `all-lib` only fixes it for serial builds. To\n> > fix it properly, so it works with parallel builds (e.g. 'make -j4 -C\n> > src/interfaces/ecpg'), the dependency needs to be declared via\n> > SHLIB_PREREQS, as attached\n> \n> Oh, good catch. I did not notice that patch before. Adding that change\n> to src/interfaces/ecpg/pgtypeslib/Makefile fixes the stand-alone\n> compile.\n\nPatch applied and backpatched to PG 12. Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 31 Mar 2020 14:18:31 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Ecpg dependency" } ]
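The fix this thread converges on can be sketched as a Makefile fragment. This is a sketch only: the thread does not quote the applied patch in full, so the file placement and comments below are assumptions reconstructed from the messages above.

```make
# Sketch of the fix discussed above, e.g. in
# src/interfaces/ecpg/pgtypeslib/Makefile (and likewise for libpq).
# SHLIB_PREREQS is consumed by Makefile.shlib, which orders the submake
# before the link step even under parallel builds such as
# "make -j4 -C src/interfaces/ecpg".
SHLIB_PREREQS = submake-libpgport

# The first attempt, copied from src/interfaces/libpq/Makefile, works only
# for serial builds: an order-only prerequisite of the all-lib target is
# not seen by sibling targets running concurrently under make -jN.
# all-lib: | submake-libpgport
```

Whether `submake-libpgport` also covers building `pgcommon_shlib`, as mentioned in the first message, depends on how that submake target is defined in Makefile.global, which the thread does not quote; treat the fragment as illustrative rather than as the committed patch.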
[ { "msg_contents": "Short version: Currently if the server is built with --with-llvm the\n-devel packages must depend on clang for PGXS to work, even though\nllvm is otherwise optional. This is a particular problem on older\nplatforms where 3rd party LLVM may be required to build the server's\nllvmjit support. Work around by skipping the default .bc generation if\nno clang is found by PGXS, as if $(with_llvm) was false.\n\nDetail:\n\nIf PostgreSQL is configured with --with--lvm it writes with_llvm=yes\ninto Makefile.global via AC_SUBST, along with the CLANG path and the\npath to the LLVM_TOOLSET if supplied. PGXS sets up a %.bc dependency\nfor OBJS if it detects that the server was compiled with llvm support.\n\nIf clang is not found at PGXS extension build-time the extension build\nwill then fail, despite the user not having installed the\npostgresql11-llvmjit (or whatever) package and the extension not\ndeclaring any explicit LLVM dependency or requirement.\n\nAndres and others went to a great deal of effort to make it possible\nto separate PostgreSQL's LLVM support, so a server built with LLVM\nsupport doesn't actually have a runtime dependency on llvm unless the\nllvmjit module is loaded. This allows packagers to separate it out and\navoid the need to declare an llvm dependency on the main server\npackage.\n\nI've found that this falls down for -devel packages like those the\nPGDG Yum team produces. The problem arises when a container or VM is\nused to build the server and its Makefile.global etc. Then the -devel\npackage containing Makefile.global and other PGXS bits are installed\non a new machine that does not have llvm's clang. 
PGXS builds will\nfail when they attempt to generate bytecode, since they expect clang\nto be present at the path baked in to Makefile.global - but it's\nabsent.\n\nI propose that per the attached patch PGXS should simply skip adding\nthe automatic dependency for .bc files if clang cannot be found.\nExtensions may still choose to explicitly declare the rule in their\nown Makefile if they want to force bitcode generation.\n\nIf we want to get fancier about it, we could instead split the llvm\nsupport out from Makefile.global into a Makefile.llvm or similar,\nwhich is then conditionally included by Makefile.global if it exists.\nMakefile.llvm would be packaged in a new postgresqlXX-llvmjit-devel\npackage since distros get uppity if you put makefile fragments in\nruntime packages. This package would depend on llvm-toolset and clang.\nIf you install postgresqlXX-devel but not postgresqlXX-llvmjit-devel\nyou don't get bitcode and don't need clang. If you install\npostgresqlXX-llvmjit-devel, the same clang as we built the server with\nis declared as a dependency and Makefile.llvm is included, so it all\nworks.\n\nBut I don't think it's worth the hassle and I'd rather just skip\nautomatic bitcode generation if we don't find clang.\n\nSee also yum packagers report at\nhttps://www.postgresql.org/message-id/CAMsr+YHokx0rWLV561z3=gAi6CM4YJekgCLkqmDwQSTEjVYhuw@mail.gmail.com\n.\n\n--\n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Wed, 11 Mar 2020 11:25:28 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "[PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "On Wed, 2020-03-11 at 11:25 +0800, Craig Ringer wrote:\n> Short version: Currently if the server is built with --with-llvm the\n> -devel packages must depend on clang for PGXS to work, even though\n> llvm is otherwise optional. 
This is a particular problem on older\n> platforms where 3rd party LLVM may be required to build the server's\n> llvmjit support. Work around by skipping the default .bc generation if\n> no clang is found by PGXS, as if $(with_llvm) was false.\n\n+1\n\nI have struggled with this, as have several users trying to build oracle_fdw.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 11 Mar 2020 05:28:04 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "On Wed, 11 Mar 2020 at 05:28, Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> On Wed, 2020-03-11 at 11:25 +0800, Craig Ringer wrote:\n> > Short version: Currently if the server is built with --with-llvm the\n> > -devel packages must depend on clang for PGXS to work, even though\n> > llvm is otherwise optional. This is a particular problem on older\n> > platforms where 3rd party LLVM may be required to build the server's\n> > llvmjit support. Work around by skipping the default .bc generation if\n> > no clang is found by PGXS, as if $(with_llvm) was false.\n>\n> +1\n>\n> I have struggled with this, as have several users trying to build\n> oracle_fdw.\n>\n\n+1, I had a similar experience with other extensions.", "msg_date": "Wed, 11 Mar 2020 06:43:24 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "On Wed, 11 Mar 2020 at 06:43, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Wed, 11 Mar 2020 at 05:28, Laurenz Albe <laurenz.albe@cybertec.at>\n> wrote:\n>\n> > On Wed, 2020-03-11 at 11:25 +0800, Craig Ringer wrote:\n> > > Short version: Currently if the server is built with --with-llvm the\n> > > -devel packages must depend on clang for PGXS to work, even though\n> > > llvm is otherwise optional. This is a particular problem on older\n> > > platforms where 3rd party LLVM may be required to build the server's\n> > > llvmjit support. Work around by skipping the default .bc generation if\n> > > no clang is found by PGXS, as if $(with_llvm) was false.\n> >\n> > +1\n> >\n> > I have struggled with this, as have several users trying to build\n> > oracle_fdw.\n> >\n>\n> +1, I had a similar experience with other extensions.\n>\n\n+1\n\nPavel", "msg_date": "Wed, 11 Mar 2020 06:47:10 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "On Wed, 11 Mar 2020 at 13:47, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> On Wed, 11 Mar 2020 at 06:43, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> On Wed, 11 Mar 2020 at 05:28, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>>>\n>>> On Wed, 2020-03-11 at 11:25 +0800, Craig Ringer wrote:\n>>> > Short version: Currently if the server is built with --with-llvm the\n>>> > -devel packages must depend on clang for PGXS to work, even though\n>>> > llvm is otherwise optional. This is a particular problem on older\n>>> > platforms where 3rd party LLVM may be required to build the server's\n>>> > llvmjit support.
Work around by skipping the default .bc generation if\n>>> > no clang is found by PGXS, as if $(with_llvm) was false.\n>>>\n>>> +1\n>>>\n>>> I have struggled with this, as have several users trying to build oracle_fdw.\n>>\n>>\n>> +1, I had similar experience with other extensions.\n>\n> +1\n\nBTW, as a workaround in the mean time you can suppress bitcode\ngeneration by building your ext with:\n\n make with_llvm=no all\n\nSimilarly, if postgres is not using your ccache for builds because\nit's invoking a baked-in compiler path, you can run\n\n make CC=$(type -p gcc) GCC=$(type -p gcc) all\n\nto force fresh path-lookups that should find your ccache-wrappers for\nthe compiler.\n\nThis will also work around problems where ccache was used to build\npostgres, so ccache-wrapper paths got baked into Makefile.in, so your\nPGXS builds fail with something like:\n\n make: /usr/lib64/ccache/gcc: Command not found\n\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n", "msg_date": "Wed, 11 Mar 2020 14:56:46 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "Hi,\n\nOn 2020-03-11 11:25:28 +0800, Craig Ringer wrote:\n> I propose that per the attached patch PGXS should simply skip adding\n> the automatic dependency for .bc files if clang cannot be found.\n> Extensions may still choose to explicitly declare the rule in their\n> own Makefile if they want to force bitcode generation.\n\nHm, that seems like it could also cause silent failures (e.g. 
after a\npackage upgrade or such).\n\nHow about erroring out, but with an instruction that llvm can be\ndisabled with make NO_LLVM=1 or such?\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Wed, 11 Mar 2020 12:43:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "At Wed, 11 Mar 2020 12:43:22 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2020-03-11 11:25:28 +0800, Craig Ringer wrote:\n> > I propose that per the attached patch PGXS should simply skip adding\n> > the automatic dependency for .bc files if clang cannot be found.\n> > Extensions may still choose to explicitly declare the rule in their\n> > own Makefile if they want to force bitcode generation.\n> \n> Hm, that seems like it could also cause silent failures (e.g. after a\n> package upgrade or such).\n> \n> How about erroring out, but with an instruction that llvm can be\n> disabled with make NO_LLVM=1 or such?\n\n+1 for requiring such options for the same reason. 
The current patch\ndisables LLVM for the environment where clang is installed but ccache\nis not, while building an extension based on the postgresql-devel package.\n(ccache is in EPEL on RHEL/CentOS.)\n\nA bit aside from LLVM itself, I'd like CC/CPP/CLANG to fall back to\nthe ones in the PATH on the running environment if they don't exist.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 12 Mar 2020 14:59:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "On Thu, 12 Mar 2020 at 03:43, Andres Freund <andres@anarazel.de> wrote:\n\n> On 2020-03-11 11:25:28 +0800, Craig Ringer wrote:\n> > I propose that per the attached patch PGXS should simply skip adding\n> > the automatic dependency for .bc files if clang cannot be found.\n> > Extensions may still choose to explicitly declare the rule in their\n> > own Makefile if they want to force bitcode generation.\n>\n> Hm, that seems like it could also cause silent failures (e.g. after a\n> package upgrade or such).\n>\n> How about erroring out, but with an instruction that llvm can be\n> disabled with make NO_LLVM=1 or such?\n\nI thought about that at first, but that'll only benefit people who're\nhand-compiling things, and it's already possible with\n\n make with_llvm=no ...\n\nThe proportion of people hand-compiling is an ever-shrinking\nproportion of the user base. When something's nested inside an rpm\nspecfile inside a docker container inside a bash script inside another\nDocker container on an AWS instance .... not so fun. They might be\nable to inject it into the environment. But often not.\n\nExtensions that explicitly must generate bytecode may add their own\ndependency rule.
Or we could make skipping bytecode generation if llvm\ncannot be found at build-time something the extension can turn off\nwith a PGXS option, as suggested upthread.\n\nI'm reluctant to go with anything that requires each existing\nextension to be patched because that introduces a huge lag time for\nthis change to actually help anyone out.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n", "msg_date": "Thu, 12 Mar 2020 14:08:31 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "At Thu, 12 Mar 2020 14:08:31 +0800, Craig Ringer <craig@2ndquadrant.com> wrote in \n> On Thu, 12 Mar 2020 at 03:43, Andres Freund <andres@anarazel.de> wrote:\n> \n> > On 2020-03-11 11:25:28 +0800, Craig Ringer wrote:\n> > > I propose that per the attached patch PGXS should simply skip adding\n> > > the automatic dependency for .bc files if clang cannot be found.\n> > > Extensions may still choose to explicitly declare the rule in their\n> > > own Makefile if they want to force bitcode generation.\n> >\n> > Hm, that seems like it could also cause silent failures (e.g. after a\n> > package upgrade or such).\n> >\n> > How about erroring out, but with an instruction that llvm can be\n> > disabled with make NO_LLVM=1 or such?\n> \n> I thought about that at first, but that'll only benefit people who're\n> hand-compiling things, and it's already possible with\n> \n> make with_llvm=no ...\n> \n> The proportion of people hand-compiling is an ever-shrinking\n> proportion of the user base. When something's nested inside an rpm\n> specfile inside a docker container inside a bash script inside another\n> Docker container on an AWS instance .... not so fun. They might be\n> able to inject it into the environment. 
But often not.\n> \n> Extensions that explicitly must generate bytecode may add their own\n> dependency rule. Or we could make skipping bytecode generation if llvm\n> cannot be found at build-time something the extension can turn off\n> with a PGXS option, as suggested upthread.\n\nFWIW, the patch causes bitcode generation to be skipped (almost silently)\nwhen clang is installed but ccache is not, with -devel packages for\nCentOS 8. Couldn't we make the bitcode generation a separate make\ntarget, at least for PGXS builds? Rather than turning it on and off by\na condition which doesn't seem to have a clear relation to the\nnecessity of bitcode generation?\n\n> I'm reluctant to go with anything that requires each existing\n> extension to be patched because that introduces a huge lag time for\n> this change to actually help anyone out.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 12 Mar 2020 17:42:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "Hi,\n\nOn 2020-03-12 14:08:31 +0800, Craig Ringer wrote:\n> On Thu, 12 Mar 2020 at 03:43, Andres Freund <andres@anarazel.de> wrote:\n> \n> > On 2020-03-11 11:25:28 +0800, Craig Ringer wrote:\n> > > I propose that per the attached patch PGXS should simply skip adding\n> > > the automatic dependency for .bc files if clang cannot be found.\n> > > Extensions may still choose to explicitly declare the rule in their\n> > > own Makefile if they want to force bitcode generation.\n> >\n> > Hm, that seems like it could also cause silent failures (e.g.
after a\n> > package upgrade or such).\n> >\n> > How about erroring out, but with an instruction that llvm can be\n> > disabled with make NO_LLVM=1 or such?\n> \n> I thought about that at first, but that'll only benefit people who're\n> hand-compiling things, and it's already possible with\n> \n> make with_llvm=no ...\n\nWell, the difference is that you'd be told about it, instead of getting\na hard to parse error message.\n\n\n> The proportion of people hand-compiling is an ever-shrinking\n> proportion of the user base.\n\nThose not building themselves aren't going to care either.\n\n\n> When something's nested inside an rpm specfile inside a docker\n> container inside a bash script inside another Docker container on an\n> AWS instance .... not so fun. They might be able to inject it into the\n> environment. But often not.\n\nUh, I have very little pity with that argument. If you're script\nbuilding stuff, you can also specify the full dependencies.\n\n\n> Extensions that explicitly must generate bytecode may add their own\n> dependency rule.\n\nYou're just moving to per-extension work with that. The likely outcome\nis that you're just not going to have bitcode for numerous extensions,\nbecause there's nothing warning you against an incomplete setup. And\nthat hard dependency then also has to take into account whether PG was\nbuilt with llvm enabled or not. That's not a good direction.\n\n\n> Or we could make skipping bytecode generation if llvm cannot be found\n> at build-time something the extension can turn off with a PGXS option,\n> as suggested upthread.\n\nSee above.\n\n\n> I'm reluctant to go with anything that requires each existing\n> extension to be patched because that introduces a huge lag time for\n> this change to actually help anyone out.\n\nThat's precisely what you're proposing, though. 
Just in the inverse.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 Mar 2020 12:25:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "On Thu, 12 Mar 2020 at 16:25, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-03-12 14:08:31 +0800, Craig Ringer wrote:\n> >\n> > I thought about that at first, but that'll only benefit people who're\n> > hand-compiling things, and it's already possible with\n> >\n> > make with_llvm=no ...\n>\n> Well, the difference is that you'd be told about it, instead of getting\n> a hard to parse error message.\n>\nWhat about adding a WARNING but not erroring out if LLVM isn't found? Adding\nan additional option (if LLVM isn't found) is annoying because it means\nadding instructions to the README of all extensions. What are the side effects\nof not providing .bc files? It seems some extensions won't benefit from\nLLVM.\n\n\nRegards,\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 12 Mar 2020 17:22:09 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "Hi,\n\nOn 2020-03-12 17:22:09 -0300, Euler Taveira wrote:\n> On Thu, 12 Mar 2020 at 16:25, Andres Freund <andres@anarazel.de> wrote:\n>\n> > Hi,\n> >\n> > On 2020-03-12 14:08:31 +0800, Craig Ringer wrote:\n> > >\n> > > I thought about that at first, but that'll only benefit people who're\n> > > hand-compiling things, and it's already possible with\n> > >\n> > > make with_llvm=no ...\n> >\n> > Well, the difference is that you'd be told about it, instead of getting\n> > a hard to parse error message.\n> >\n> What about adding a WARNING but not erroring out if LLVM isn't found? Adding\n> an additional option (if LLVM isn't found) is annoying because it means\n> adding instructions to the README of all extensions.\n\nIMO only if the packager screwed up. The package\nthat includes pgxs and headers should have the dependencies on llvm. Which\ne.g. debian's does:\n\n$ apt show postgresql-server-dev-12\nPackage: postgresql-server-dev-12\nVersion: 12.2-1+b1\nPriority: optional\nSection: libdevel\nSource: postgresql-12 (12.2-1)\nMaintainer: Debian PostgreSQL Maintainers <team+postgresql@tracker.debian.org>\nInstalled-Size: 5,327 kB\nDepends: clang-9, libpq-dev (>= 12~~), llvm-9-dev, postgresql-client-12, postgresql-common (>= 142~)\nBreaks: postgresql-server-dev-all (<< 181~)\nHomepage: http://www.postgresql.org/\nDownload-Size: 919 kB\n\n\nI haven't looked up the dependencies for the rpm packages including the\nheaders.\n\n\nIt can make sense to split the *binary* packages so that the llvm\ndependency is not incurred by default, even when the server was compiled\nwith LLVM support.
But I see very little point in doing so for -dev[el]\npackages.\n\n\n> What are the side effects of not providing .bc files?\n\nPrevent JITing the functions in that extension?\n\n\n> It seems some extensions won't benefit from LLVM.\n\nSure, and those can disable it?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 Mar 2020 13:34:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "Sorry, that mail is almost a duplicate of another one, which was sent\nby accident.\n\nAt Thu, 12 Mar 2020 14:59:44 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> +1 for requiring such options for the same reason. The current patch\n> disables LLVM for the environment where clang is installed but ccache\n> is not, while building an extension based on the postgresql-devel package.\n> (ccache is in EPEL on RHEL/CentOS.)\n> \n> A bit aside from LLVM itself, I'd like CC/CPP/CLANG to fall back to\n> the ones in the PATH on the running environment if they don't exist.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 13 Mar 2020 10:11:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "On Fri, 13 Mar 2020 at 04:35, Andres Freund <andres@anarazel.de> wrote:\n\n>\n> IMO only if the packager screwed up. The package\n> that includes pgxs and headers should have the dependencies on llvm. Which\n> e.g. debian's does:\n>\n\nYes, I agree that the underlying issue is mainly with packaging.\n\nThis proposal came out of chasing down some packaging problems relating to\nthe EL-7 -devel packages for Pg 11 and 12, per linked mails in initial\npost.
They don't declare a runtime dependency on llvm toolset or clang, so\nthey're basically broken given the way we assume the presence of those\ntools.\n\nBut\n\n(a) do we really want to force everyone to pull in clang and the llvm\ntoolset when they install the -devel pkg, even if they don't install or\nneed the postgresqlNN-llvmjit package?\n(b) EL-7 doesn't have a usable llvm/clang version even in EPEL, you have to\nadd a separate SCL LLVM toolset repo. So adding a dependency on\nllvm-toolset into EL-7's postgresql11-devel and postgresql12-devel is most\nundesirable, especially in a point release, as it'll make lots of stuff\nexplode messily.\n\nI learned (b) the hard way. Don't do that.\n\nIf the consensus is that this is a packaging issue, (a) is fine, and we\nshould just quit whining and install a suitable clang/llvm, I'll cope with\nthat. I'll ask yum-packagers to patch Makefile.global for EL-7 with a\nworkaround like the one proposed here and for newer RH where a suitable\nLLVM is available, just declare it as a dependency of the -devel pkg going\nforward then make lots of noise about the change.\n\nBut the problem is that even for newer RH \"enterprise\" distros LLVM/clang\nlive in EPEL, and IIRC we don't presently require any dependencies from\nEPEL to install the base postgresqlNN-* packages (except llvmjit). 
So we'd\nbe making EPEL a new repo dependency for the -devel pkg and that's not\nsomething I'm too fond of doing in a minor release.\n\nThe alternative would be to detect a missing clang and emit a much more\ninformative error than the current one that explicitly suggests retrying\nwith\n\n make with_llvm=no\n\nor setting with_llvm=no in the environment.\n\nThe whole thing is a mess caused by this \"enterprise-y\" repository split\nbetween core and \"extras\" and I'm rather frustrated by the whole thing, but\nthe current situation isn't much fun for users.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n
", "msg_date": "Fri, 13 Mar 2020 14:08:12 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "On 2020-03-13 14:08:12 +0800, Craig Ringer wrote:\n> The alternative would be to detect a missing clang and emit a much more\n> informative error than the current one that explicitly suggests retrying\n> with\n> \n>     make with_llvm=no\n> \n> or setting with_llvm=no in the environment.\n\nThat, that, that's what I suggested upthread?\n\n\n", "msg_date": "Fri, 13 Mar 2020 00:04:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "On Fri, 13 Mar 2020 at 15:04, Andres Freund <andres@anarazel.de> wrote:\n\n> On 
2020-03-13 14:08:12 +0800, Craig Ringer wrote:\n> > The alternative would be to detect a missing clang and emit a much more\n> > informative error than the current one that explicitly suggests retrying\n> > with\n> >\n> > make with_llvm=no\n> >\n> > or setting with_llvm=no in the environment.\n>\n> That, that, that's what I suggested upthread?\n>\n\nYes, and I still don't like it. \"with_llvm\" is the actual already-working\noption. I'd rather make this not randomly explode for users, but failing\nthat we can just hack the Makefile in the rpms for EL-7 (where it's a\nparticular mess) and rely on an error message for other cases.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n
", "msg_date": "Sun, 15 Mar 2020 09:28:08 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "On 2020-03-15 02:28, Craig Ringer wrote:\n> On Fri, 13 Mar 2020 at 15:04, Andres Freund <andres@anarazel.de \n> <mailto:andres@anarazel.de>> wrote:\n> \n>     On 2020-03-13 14:08:12 +0800, Craig Ringer wrote:\n>      > The alternative would be to detect a missing clang and emit a\n>     much more\n>      > informative error than the current one that explicitly suggests\n>     retrying\n>      > with\n>      >\n>      >     make with_llvm=no\n>      >\n>      > or setting with_llvm=no in the environment.\n> \n>     That, that, that's what I suggested upthread?\n> \n> \n> Yes, and I still don't like it. \"with_llvm\" is the actual \n> already-working option. I'd rather make this not randomly explode for \n> users, but failing that we can just hack the Makefile in the rpms for \n> EL-7 (where it's a particular mess) and rely on an error message for \n> other cases.\n\nI don't really get the problem.  with_llvm=no works, so it can be used.\n\nOptions that automatically disable things based on what is installed in \nthe build environment are bad ideas.  For instance, we on purpose don't \nhave configure decide anything based on whether readline is installed. 
\nEither you select it or you don't, there is no \"auto\" mode.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 19 Mar 2020 11:47:37 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" }, { "msg_contents": "On Thu, 19 Mar 2020 at 18:47, Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-03-15 02:28, Craig Ringer wrote:\n> > On Fri, 13 Mar 2020 at 15:04, Andres Freund <andres@anarazel.de\n> > <mailto:andres@anarazel.de>> wrote:\n> >\n> > On 2020-03-13 14:08:12 +0800, Craig Ringer wrote:\n> > > The alternative would be to detect a missing clang and emit a\n> > much more\n> > > informative error than the current one that explicitly suggests\n> > retrying\n> > > with\n> > >\n> > > make with_llvm=no\n> > >\n> > > or setting with_llvm=no in the environment.\n> >\n> > That, that, that's what I suggested upthread?\n> >\n> >\n> > Yes, and I still don't like it. \"with_llvm\" is the actual\n> > already-working option. I'd rather make this not randomly explode for\n> > users, but failing that we can just hack the Makefile in the rpms for\n> > EL-7 (where it's a particular mess) and rely on an error message for\n> > other cases.\n>\n> I don't really get the problem. with_llvm=no works, so it can be used.\n>\n> Options that automatically disable things based on what is installed in\n> the build environment are bad ideas. For instance, we on purpose don't\n> have configure decide anything based on whether readline is installed.\n> Either you select it or you don't, there is no \"auto\" mode.\n>\n>\nFine with me. I wrote it before identifying that with_llvm=no was a viable\nworkaround.\n\nThe whole thing is a bit ugly, but if the fix isn't clearly better than the\nproblem the fix shouldn't go in. 
This way it'll be searchable-for at least.\n\nI think we'll be adopting some kind of ugly workaround like this for the\nCentOS 7 packages in PGDG yum because they're a bit of a special case,\nsince the llvm support requires an additional 3rd party that isn't declared\nas a build-depend on the devel package. But that can be done at packaging\nlevel + a small patch applied during package builds for CentOS 7 only.\n\nWithdrawn.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n
", "msg_date": "Thu, 23 Apr 2020 12:57:55 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Skip llvm bytecode generation if LLVM is missing" } ]
[ { "msg_contents": "In your case, the WHERE clauses would get pushed down into the subquery\nfor both queries, with/without the ROLLUP. But since the subquery uses\ngrouping/grouping sets, the WHERE clauses would be put in HAVING of the\nsubquery.\n\nThen when we plan for the subquery, we will decide whether a HAVING\nclause can be transfered into WHERE. Usually we do not do that if there\nare any nonempty grouping sets. Because if any referenced column isn't\npresent in all the grouping sets, moving such a clause into WHERE would\npotentially change the results.\n\nThanks\nRichard\n", "msg_date": "Wed, 11 Mar 2020 16:59:26 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Optimizer Doesn't Push Down Where Expressions on Rollups" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> In your case, the WHERE clauses would get pushed down into the subquery\n> for both queries, with/without the ROLLUP. But since the subquery uses\n> grouping/grouping sets, the WHERE clauses would be put in HAVING of the\n> subquery.\n\nRight, we do successfully push the clauses into HAVING of the subquery.\n\n> Then when we plan for the subquery, we will decide whether a HAVING\n> clause can be transfered into WHERE. Usually we do not do that if there\n> are any nonempty grouping sets. 
Because if any referenced column isn't\n> present in all the grouping sets, moving such a clause into WHERE would\n> potentially change the results.\n\nYeah.  I think that it might be safe if the proposed clause can\nbe proven strict for (some subset of?) the grouping columns, because\nthat would eliminate the rollup grouping sets where those columns\ncome out NULL because they aren't being grouped on.  (This could then\nalso factor into throwing away those grouping sets, perhaps.)\n\nAnyway, this is not a bug; it's a proposed planner improvement, and\nnot a trivial one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Mar 2020 10:06:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Optimizer Doesn't Push Down Where Expressions on Rollups" }, { "msg_contents": "Just in case it wasn’t obvious from the example, I’m talking about only cases where all the groups in the grouping set share a subset of columns in common and those are the columns being conditioned in the WHERE clause. \n\nI get it if it’s not something actionable, but it is a bummer to see query time explode when going from one grouping set to two grouping sets. :/ \n\nCheers, and I’ve been a big fan of y’alls work for going on two decades now. Thank you! \n\n> On Mar 11, 2020, at 7:06 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Richard Guo <guofenglinux@gmail.com> writes:\n>> In your case, the WHERE clauses would get pushed down into the subquery\n>> for both queries, with/without the ROLLUP. But since the subquery uses\n>> grouping/grouping sets, the WHERE clauses would be put in HAVING of the\n>> subquery.\n> \n> Right, we do successfully push the clauses into HAVING of the subquery.\n> \n>> Then when we plan for the subquery, we will decide whether a HAVING\n>> clause can be transfered into WHERE. Usually we do not do that if there\n>> are any nonempty grouping sets. 
Because if any referenced column isn't\n>> present in all the grouping sets, moving such a clause into WHERE would\n>> potentially change the results.\n> \n> Yeah. I think that it might be safe if the proposed clause can\n> be proven strict for (some subset of?) the grouping columns, because\n> that would eliminate the rollup grouping sets where those columns\n> come out NULL because they aren't being grouped on. (This could then\n> also factor into throwing away those grouping sets, perhaps.)\n> \n> Anyway, this is not a bug; it's a proposed planner improvement, and\n> not a trivial one.\n> \n> \t\t\tregards, tom lane\n\n\n\n", "msg_date": "Wed, 11 Mar 2020 09:58:11 -0700", "msg_from": "Logan Bowers <logan.bowers@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizer Doesn't Push Down Where Expressions on Rollups" }, { "msg_contents": "On Wed, Mar 11, 2020 at 10:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > In your case, the WHERE clauses would get pushed down into the subquery\n> > for both queries, with/without the ROLLUP. But since the subquery uses\n> > grouping/grouping sets, the WHERE clauses would be put in HAVING of the\n> > subquery.\n>\n> Right, we do successfully push the clauses into HAVING of the subquery.\n>\n> > Then when we plan for the subquery, we will decide whether a HAVING\n> > clause can be transfered into WHERE. Usually we do not do that if there\n> > are any nonempty grouping sets. Because if any referenced column isn't\n> > present in all the grouping sets, moving such a clause into WHERE would\n> > potentially change the results.\n>\n> Yeah. I think that it might be safe if the proposed clause can\n> be proven strict for (some subset of?) the grouping columns, because\n> that would eliminate the rollup grouping sets where those columns\n> come out NULL because they aren't being grouped on. 
(This could then\n> also factor into throwing away those grouping sets, perhaps.)\n>\n\nThis seems correct to me. If we can prove the HAVING clause is strict\nfor some grouping columns, then we can throw away the grouping sets that\ndo not contain these grouping columns, since their results would be\neliminated by this HAVING clause. After that we can move this HAVING\nclause to WHERE. I'm thinking about this example:\n\nselect c1, c2, sum(c4) from t group by\n    grouping sets ((c1, c2), (c2, c3), (c1, c4)) having c2 = 2;\n\nselect c1, c2, sum(c4) from t group by\n    grouping sets ((c1, c2), (c2, c3)) having c2 = 2;\n\nselect c1, c2, sum(c4) from t where c2 = 2 group by\n    grouping sets ((c1, c2), (c2, c3));\n\n\nFor non-strict HAVING clause, if its referenced columns are present in\nall the grouping sets, I think we should also be able to move it to\nWHERE.\n\nThanks\nRichard\n
", "msg_date": "Thu, 12 Mar 2020 18:22:25 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Optimizer Doesn't Push Down Where Expressions on Rollups" }, { "msg_contents": "Hi,\n\n(cc'ing -hackers)\n\nWe used to push down clauses from HAVING to WHERE when grouping sets are\nused in 61444bfb and then reverted it in a6897efa because of wrong\nresults issue. As now there are people suffering from performance issue\nas described in [1], I'm wondering if we should give it another try.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/17F738BE-8D45-422C-BAD0-ACA3090BF46D%40gmail.com\n\nThanks\nRichard\n\nOn Thu, Mar 12, 2020 at 6:22 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> On Wed, Mar 11, 2020 at 10:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Richard Guo <guofenglinux@gmail.com> writes:\n>> > In your case, the WHERE clauses would get pushed down into the subquery\n>> > for both queries, with/without the ROLLUP. 
But since the subquery uses\n>> > grouping/grouping sets, the WHERE clauses would be put in HAVING of the\n>> > subquery.\n>>\n>> Right, we do successfully push the clauses into HAVING of the subquery.\n>>\n>> > Then when we plan for the subquery, we will decide whether a HAVING\n>> > clause can be transfered into WHERE. Usually we do not do that if there\n>> > are any nonempty grouping sets. Because if any referenced column isn't\n>> > present in all the grouping sets, moving such a clause into WHERE would\n>> > potentially change the results.\n>>\n>> Yeah.  I think that it might be safe if the proposed clause can\n>> be proven strict for (some subset of?) the grouping columns, because\n>> that would eliminate the rollup grouping sets where those columns\n>> come out NULL because they aren't being grouped on.  (This could then\n>> also factor into throwing away those grouping sets, perhaps.)\n>>\n>\n> This seems correct to me. If we can prove the HAVING clause is strict\n> for some grouping columns, then we can throw away the grouping sets that\n> do not contain these grouping columns, since their results would be\n> eliminated by this HAVING clause. After that we can move this HAVING\n> clause to WHERE. I'm thinking about this example:\n>\n> select c1, c2, sum(c4) from t group by\n>     grouping sets ((c1, c2), (c2, c3), (c1, c4)) having c2 = 2;\n>\n> select c1, c2, sum(c4) from t group by\n>     grouping sets ((c1, c2), (c2, c3)) having c2 = 2;\n>\n> select c1, c2, sum(c4) from t where c2 = 2 group by\n>     grouping sets ((c1, c2), (c2, c3));\n>\n>\n> For non-strict HAVING clause, if its referenced columns are present in\n> all the grouping sets, I think we should also be able to move it to\n> WHERE.\n>\n> Thanks\n> Richard\n>
", "msg_date": "Thu, 19 Mar 2020 17:28:19 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Optimizer Doesn't Push Down Where Expressions on Rollups" } ]
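[Editorial note] The transform discussed in the thread above — a strict HAVING predicate on a grouping column lets the planner discard every grouping set that does not group by that column, after which the predicate can safely run as WHERE — can be illustrated with a small Python model. This is an illustration only, not PostgreSQL code; the `aggregate` helper and the sample data are invented for the sketch.

```python
def aggregate(rows, grouping_sets):
    """Model GROUPING SETS: one aggregation pass per set; columns not in a
    set come out as None (the NULLs a rollup produces); sum(c4) per group."""
    out = {}
    for i, gset in enumerate(grouping_sets):
        for row in rows:
            key = tuple(row[c] if c in gset else None for c in ("c1", "c2", "c3"))
            out[(i, key)] = out.get((i, key), 0) + row["c4"]
    return out

rows = [
    {"c1": 1, "c2": 2, "c3": 3, "c4": 10},
    {"c1": 1, "c2": 2, "c3": 4, "c4": 20},
    {"c1": 2, "c2": 5, "c3": 3, "c4": 30},
]
sets = [("c1", "c2"), ("c2", "c3"), ("c1", "c3")]

# HAVING c2 = 2 applied after aggregation; being strict, it rejects groups
# where c2 came out NULL (None) -- i.e. every row of the third set.
having = {k: v for k, v in aggregate(rows, sets).items() if k[1][1] == 2}

# Transformed plan: drop the grouping sets that don't group by c2, then
# evaluate the predicate as WHERE, before aggregation.
where = {}
filtered = [r for r in rows if r["c2"] == 2]
for i, gset in enumerate(sets):
    if "c2" not in gset:
        continue  # strict predicate would wipe this set's output anyway
    for (_, key), v in aggregate(filtered, [gset]).items():
        where[(i, key)] = v

assert having == where  # same result, but far fewer rows aggregated
```

The final assertion is the point: filtering early and pruning the c2-less grouping set produces exactly the rows the late HAVING filter would have kept.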
[ { "msg_contents": "Hi all,\nPlease check the below behavior for the \"SERIAL\" datatype.\n\npostgres=# CREATE TABLE t1(c1 int, c2 serial);\nCREATE TABLE\npostgres=# insert into t1 values (generate_series(1,3));\nINSERT 0 3\npostgres=# insert into t1 values (generate_series(4,6));\nINSERT 0 3\npostgres=# select * from t1;\n c1 | c2 \n----+----\n  1 |  1\n  2 |  2\n  3 |  3\n  4 |  5\n  5 |  6\n  6 |  7\n(6 rows)\n\nIn this above case, the serial column \"c2\" is skipping the value \"4\" in\nselect output.\nIs this an expected behavior?\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com\n", "msg_date": "Wed, 11 Mar 2020 15:45:42 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": true, "msg_subject": "SERIAL datatype column skipping values." }, { "msg_contents": "On 3/11/20 11:15 AM, Prabhat Sahu wrote:\n> Hi all,\n> Please check the below behavior for the \"SERIAL\" datatype.\n> \n> [...]\n> \n> In this above case, the serial column \"c2\" is skipping the value \"4\" in \n> select output.\n> Is this an expected behavior?\n\nCurious, it seems like DEFAULT expressions of a table are executed an \nextra time if a set returning function is used like in your example. And \nthe SERIAL type is implemented using DEFAULT.\n\nOn the other hand if you use \"INSERT ... 
SELECT\" the DEFAULT expression \nis only executed once per row inserted.\n\n# CREATE FUNCTION test_default() RETURNS int LANGUAGE plpgsql AS $$\nBEGIN\n RAISE NOTICE 'Ran test_default()';\n RETURN 42;\nEND\n$$;\nCREATE FUNCTION\n\n# CREATE TABLE t2 (c1 int, c2 int DEFAULT test_default());\nCREATE TABLE\n\n# INSERT INTO t2 VALUES (generate_series(1,2));\nNOTICE: Ran test_default()\nNOTICE: Ran test_default()\nNOTICE: Ran test_default()\nINSERT 0 2\n\n# INSERT INTO t2 SELECT generate_series(1,2);\nNOTICE: Ran test_default()\nNOTICE: Ran test_default()\nINSERT 0 2\n\nAndreas\n\n\n", "msg_date": "Wed, 11 Mar 2020 15:01:41 +0100", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: SERIAL datatype column skipping values." }, { "msg_contents": "Andreas Karlsson <andreas@proxel.se> writes:\n> On 3/11/20 11:15 AM, Prabhat Sahu wrote:\n>> Is this an expected behavior?\n\n> Curious, it seems like DEFAULT expressions of a table are executed an \n> extra time if a set returning function is used like in your example. And \n> the SERIAL type is implemented using DEFAULT.\n\nYeah, it's the same as if you do\n\nregression=# select generate_series(1,2), test_default();\nNOTICE: Ran test_default()\nNOTICE: Ran test_default()\nNOTICE: Ran test_default()\n generate_series | test_default \n-----------------+--------------\n 1 | 42\n 2 | 42\n(2 rows)\n\nThe generated plan is\n\nregression=# explain verbose select generate_series(1,2), test_default();\n QUERY PLAN \n-------------------------------------------------\n ProjectSet (cost=0.00..0.28 rows=2 width=8)\n Output: generate_series(1, 2), test_default()\n -> Result (cost=0.00..0.01 rows=1 width=0)\n(3 rows)\n\nand if you read nodeProjectSet.c you'll see that it needs to evaluate\nthe target list three times. 
On the third iteration, generate_series()\nreturns isdone == ExprEndResult indicating that it has no more results,\nso we don't emit an output tuple --- but we still run test_default()\nwhile scanning the tlist.\n\nPossibly the planner should try to avoid putting volatile expressions\ninto ProjectSet's tlist. On the other hand, it's worked this way for\nan awfully long time, so I wonder if anyone is relying on the behavior.\nEven in versions before we used ProjectSet nodes, you still see three\ncalls to the volatile function.\n\nAnyway, to get back to the OP's implied question, no you should never\nassume that a SERIAL column's values won't have holes in the sequence.\nRolled-back transactions will have that effect in any case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Mar 2020 11:15:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SERIAL datatype column skipping values." } ]
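[Editorial note] Tom's description of nodeProjectSet.c above can be modeled in a few lines of Python. This is an illustrative sketch only; `project_set` and `nextval` are invented stand-ins, not PostgreSQL APIs. The target list is evaluated once per scan; on the final scan the SRF reports it is done and no row is emitted, but the volatile DEFAULT has already run — which is why the sequence burns one extra value per INSERT and the thread's output skips "4".

```python
calls = 0
seq = 0

def nextval():
    """Stands in for the sequence behind the SERIAL column's DEFAULT."""
    global calls, seq
    calls += 1
    seq += 1
    return seq

def project_set(srf_values):
    """Rough model of a ProjectSet node: evaluate the whole target list
    once per scan, stopping only after the SRF reports it is exhausted."""
    it = iter(srf_values)
    rows = []
    while True:
        default_val = nextval()   # volatile expression runs on every scan
        nxt = next(it, None)      # SRF result; None models ExprEndResult
        if nxt is None:
            break                 # SRF done: no output row is emitted
        rows.append((nxt, default_val))
    return rows

# insert into t1 values (generate_series(1,3));  -- c2 is SERIAL
first = project_set([1, 2, 3])   # uses sequence values 1..3, burns 4
# insert into t1 values (generate_series(4,6));
second = project_set([4, 5, 6])  # resumes at 5, so c2 skips 4
print(first + second)  # [(1, 1), (2, 2), (3, 3), (4, 5), (5, 6), (6, 7)]
```

The printed pairs reproduce the (c1, c2) table from the first message of the thread: three scans produce rows, and the fourth scan of each statement consumes a sequence value without emitting a row.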