[ { "msg_contents": "Hello.\n\n# I'm not sure -hackers is the place for this, though..\n\nI noticed that the official PG12-devel RPM package for RHEL8 mandates\nccache being installed on building of an extension.\n\n$ grep ccache /usr/pgsql-12/lib/pgxs/src/Makefile.global \nCLANG = /usr/lib64/ccache/clang\n# ccache loses .gcno files\n\nHowever it can be changed by explicitly setting CLANG, it seems that\nthat setting is by accident since gcc is not ccache'ified. Anyway, I'm\nnot sure but, I think that that decision should be left with extension\ndevelopers. Is it intentional?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 30 Jan 2020 20:36:17 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "ccache is required by postgresql12-devel RPM" }, { "msg_contents": "On Thu, Jan 30, 2020 at 08:36:17PM +0900, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> # I'm not sure -hackers is the place for this, though..\n> \n> I noticed that the official PG12-devel RPM package for RHEL8 mandates\n> ccache being installed on building of an extension.\n> \n> $ grep ccache /usr/pgsql-12/lib/pgxs/src/Makefile.global \n> CLANG = /usr/lib64/ccache/clang\n> # ccache loses .gcno files\n> \n> However it can be changed by explicitly setting CLANG, it seems that\n> that setting is by accident since gcc is not ccache'ified. Anyway, I'm\n> not sure but, I think that that decision should be left with extension\n> developers. Is it intentional?\n\nThat certainly seems wrong. Is it this URL?\n\n\thttps://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm\n\nCC'ing Devrim.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Sat, 7 Mar 2020 13:20:43 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: ccache is required by postgresql12-devel RPM" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Thu, Jan 30, 2020 at 08:36:17PM +0900, Kyotaro Horiguchi wrote:\n>> I noticed that the official PG12-devel RPM package for RHEL8 mandates\n>> ccache being installed on building of an extension.\n>> \n>> $ grep ccache /usr/pgsql-12/lib/pgxs/src/Makefile.global \n>> CLANG = /usr/lib64/ccache/clang\n>> # ccache loses .gcno files\n>> \n>> However it can be changed by explicitly setting CLANG, it seems that\n>> that setting is by accident since gcc is not ccache'ified. Anyway, I'm\n>> not sure but, I think that that decision should be left with extension\n>> developers. Is it intentional?\n\n> That certainly seems wrong. Is it this URL?\n\nI can't get too excited about this. ccache has been part of the standard\ndevelopment environment on Red Hat systems for at least a decade, and thus\n/usr/lib64/ccache has been in the standard PATH setting for just as long.\nDevrim would have to go out of his way to force CLANG to not show up that\nway in the configure result, and I don't see that it would be an\nimprovement for any real-world usage of the RPM results.\n\nThe core reason why it shows up that way is that llvm.m4 uses\nPGAC_PATH_PROGS to set CLANG, while we don't forcibly path-ify\nthe CC setting. 
But arguably the latter is a bug, not the former.\nI recall that there's been some discussion about how it'd be safer\nif configure made sure that all the tool names it records are fully\npath-ified.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 07 Mar 2020 13:41:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ccache is required by postgresql12-devel RPM" }, { "msg_contents": "At Sat, 07 Mar 2020 13:41:58 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Thu, Jan 30, 2020 at 08:36:17PM +0900, Kyotaro Horiguchi wrote:\n> >> I noticed that the official PG12-devel RPM package for RHEL8 mandates\n> >> ccache being installed on building of an extension.\n> >> \n> >> $ grep ccache /usr/pgsql-12/lib/pgxs/src/Makefile.global \n> >> CLANG = /usr/lib64/ccache/clang\n> >> # ccache loses .gcno files\n> >> \n> >> However it can be changed by explicitly setting CLANG, it seems that\n> >> that setting is by accident since gcc is not ccache'ified. Anyway, I'm\n> >> not sure but, I think that that decision should be left with extension\n> >> developers. Is it intentional?\n> \n> > That certainly seems wrong. Is it this URL?\n\nMaybe, and 10 and 11 have the same configuration. For clarity, my\nobjective is just to know whether it is deliberate or not.\n\n> I can't get too excited about this. ccache has been part of the standard\n> development environment on Red Hat systems for at least a decade, and thus\n> /usr/lib64/ccache has been in the standard PATH setting for just as long.\n> Devrim would have to go out of his way to force CLANG to not show up that\n> way in the configure result, and I don't see that it would be an\n> improvement for any real-world usage of the RPM results.\n\nI don't remember how I configured development environments both with\nCentOS7/8 but both of them initially didn't have ccache. And I'm not\nthe only one who stumbled on the CLANG setting. 
But we might be a\nsmall minority..\n\n> The core reason why it shows up that way is that llvm.m4 uses\n> PGAC_PATH_PROGS to set CLANG, while we don't forcibly path-ify\n> the CC setting. But arguably the latter is a bug, not the former.\n> I recall that there's been some discussion about how it'd be safer\n> if configure made sure that all the tool names it records are fully\n> path-ified.\n\nAlthough I'm not sure we should path-ify tool chains on building\nextensions, if the -devel package doesn't work without ccache, the\npackage needs to depend on ccache package. But such a policy creates\nmany bothersome dependency on every tool. I'm not sure how to deal\nwith that...\n\nSo, I'm satisfied to know it is the intended behavior.\n\nThank you for the explanation, Tom.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 09 Mar 2020 18:03:45 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ccache is required by postgresql12-devel RPM" } ]
[ { "msg_contents": "Hi,\n\nwe had a customer who ran pg_checksums on their v12 cluster and got a\nconfusing error:\n\n|pg_checksums: error: invalid segment number 0 in file name \"/PG-\n|Data/foo_12_data/pg_tblspc/16402/PG_10_201707211/16390/pg_internal.init\n|.10028\"\n\nTurns out the customer ran a pg_upgrade in copy mode before and started\nup the old cluster again which pg_checksums decided to check as well -\nnote the PG_10_201707211 in the error message. The attached patch is a\nstab at teaching pg_checksums to only check its own\nTABLESPACE_VERSION_DIRECTORY directory. I guess this implies that it\nwould ignore tablespace directories of outdated catversion instances\nduring development, which I think should be ok, but others might not\nagree?\n\nThe other question is whether it is possible to end up with a\npg_internal.init.$PID file in a running cluster. E.g. if an instance\ncrashes and gets started up again - are those cleaned up during crash\nrecovery, or should pg_checksums ignore them? Right now pg_checksums\nonly checks against a list of filenames and only skips on exact matches\nnot prefixes so that might take a bit of work.\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. 
Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz", "msg_date": "Thu, 30 Jan 2020 18:11:22 +0100", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": true, "msg_subject": "[Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On Thu, Jan 30, 2020 at 06:11:22PM +0100, Michael Banck wrote:\n> The other question is whether it is possible to end up with a\n> pg_internal.init.$PID file in a running cluster. E.g. if an instance\n> crashes and gets started up again - are those cleaned up during crash\n> recovery, or should pg_checksums ignore them? Right now pg_checksums\n> only checks against a list of filenames and only skips on exact matches\n> not prefixes so that might take a bit of work.\n\nIndeed, with a bad timing and a crash in the middle of\nwrite_relcache_init_file(), it could be possible to have such a file\nleft around in the data folder. Having a past tablespace version left\naround after an upgrade is a pilot error in my opinion because\npg_upgrade generates a script to cleanup past tablespaces, no? So\nyour patch does not look like a good idea to me. And now that I look\nat it, if we happen to leave behind a temporary file for\npg_internal.init, backups fail with the following error:\n2020-01-31 13:26:18.345 JST [102076] 010_pg_basebackup.pl ERROR:\ninvalid segment number 0 in file \"pg_internal.init.123\"\n\nSo, I think that it would be better to change basebackup.c,\npg_checksums and pg_rewind so as files are excluded if there is a\nprefix match with the exclude lists. 
Please see the attached.\n--\nMichael", "msg_date": "Fri, 31 Jan 2020 13:53:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "At Fri, 31 Jan 2020 13:53:52 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Jan 30, 2020 at 06:11:22PM +0100, Michael Banck wrote:\n> > The other question is whether it is possible to end up with a\n> > pg_internal.init.$PID file in a running cluster. E.g. if an instance\n> > crashes and gets started up again - are those cleaned up during crash\n> > recovery, or should pg_checksums ignore them? Right now pg_checksums\n> > only checks against a list of filenames and only skips on exact matches\n> > not prefixes so that might take a bit of work.\n> \n> Indeed, with a bad timing and a crash in the middle of\n> write_relcache_init_file(), it could be possible to have such a file\n> left around in the data folder. Having a past tablespace version left\n> around after an upgrade is a pilot error in my opinion because\n> pg_upgrade generates a script to cleanup past tablespaces, no? So\n> your patch does not look like a good idea to me. And now that I look\n> at it, if we happen to leave behind a temporary file for\n> pg_internal.init, backups fail with the following error:\n> 2020-01-31 13:26:18.345 JST [102076] 010_pg_basebackup.pl ERROR:\n> invalid segment number 0 in file \"pg_internal.init.123\"\n\nAgreed.\n\n> So, I think that it would be better to change basebackup.c,\n> pg_checksums and pg_rewind so as files are excluded if there is a\n> prefix match with the exclude lists. Please see the attached.\n\nAgreed that the tools should ignore such files. Looking excludeFile,\nit seems to me that basebackup makes sure to exclude only files that\nshould harm. 
I'm not sure whether it's explicitly, but\ntablespace_map.old and backup_label.old are not excluded.\n\nThe patch excludes harmless files such as \"backup_label.20200131\"\nand the two files above.\n\nI don't think that is a problem right away, of course. It looks good\nto me except for the possible excessive exclusion. So, I don't object\nit if we don't mind that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 31 Jan 2020 17:30:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "At Fri, 31 Jan 2020 17:30:43 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I don't think that is a problem right away, of course. It looks good\n> to me except for the possible excessive exclusion. So, I don't object\n> it if we don't mind that.\n\nThat's a bit wrong. All the discussion is only on excludeFiles. I\nthink we should refrain from letting more files match to\nnoChecksumFiles.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 31 Jan 2020 17:39:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "Hi,\n\nAm Freitag, den 31.01.2020, 13:53 +0900 schrieb Michael Paquier:\n> On Thu, Jan 30, 2020 at 06:11:22PM +0100, Michael Banck wrote:\n> Having a past tablespace version left\n> around after an upgrade is a pilot error in my opinion because\n> pg_upgrade generates a script to cleanup past tablespaces, no? So\n> your patch does not look like a good idea to me. \n\nNot sure I agree with it, but if that (i.e. 
after pg_upgrade in copy\nmode, you have no business to use the old cluster as well as the new\none) is project policy, fair enough.\n\nHowever, Postgres does not disallow to just create tablespaces in the\nsame location from two different versions, so you don't need the\npg_upgrade scenario to get into this (pg_checksums checking the wrong\ncluster's data) problem:\n\npostgres@kohn:~$ psql -p 5437 -c \"CREATE TABLESPACE bar LOCATION '/var/lib/postgresql/bar'\"\nCREATE TABLESPACE\npostgres@kohn:~$ psql -p 5444 -c \"CREATE TABLESPACE bar LOCATION '/var/lib/postgresql/bar'\"\nCREATE TABLESPACE\npostgres@kohn:~$ ls bar\nPG_11_201809051 PG_12_201909212\npostgres@kohn:~$ touch bar/PG_11_201809051/pg_internal.init.123\npostgres@kohn:~$ pg_ctlcluster 12 main stop\n sudo systemctl stop postgresql@12-main\npostgres@kohn:~$ LANG=C /usr/lib/postgresql/12/bin/pg_checksums -D /var/lib/postgresql/12/main\npg_checksums: error: invalid segment number 0 in file name\n\"/var/lib/postgresql/12/main/pg_tblspc/16396/PG_11_201809051/pg_internal\n.init.123\"\n\nI believe this is in order to allow pg_upgrade to run in the first\nplace. But is this pilot error as well? In any case, it is a situation\nwe allow to happen so IMO we should fix pg_checksums to skip the foreign\ntablespaces.\n\nAs an aside, I would advocate to just skip files which fail the segment\nnumber determination step (and maybe log a warning), not bail out.\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. 
Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n\n", "msg_date": "Fri, 31 Jan 2020 10:59:30 +0100", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": true, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "Am Freitag, den 31.01.2020, 13:53 +0900 schrieb Michael Paquier:\n> Indeed, with a bad timing and a crash in the middle of\n> write_relcache_init_file(), it could be possible to have such a file\n> left around in the data folder. Having a past tablespace version\n> left\n> around after an upgrade is a pilot error in my opinion because\n> pg_upgrade generates a script to cleanup past tablespaces, no?\n\nI'm surprised, why should that be a problem in copy mode? For me this is\na fair use case to test upgrades, e.g. for development purposes, if\nsomeone wants to still have application tests against the current old\nversion, for fallback and whatever. And people might not want such\nupgrades as a \"fire-and-forget\" task. 
We even have the --clone feature\nnow, making this even faster.\n\nIf our project policy is to never ever touch a pg_upgrade'd PostgreSQL\ninstance again in copy mode, I wasn't aware of it.\n\nAnd to be honest, even PostgreSQL itself allows you to reuse tablespace\nlocations for multiple instances as well, so the described problem\nshould exist not in upgraded clusters only.\n\n\n\tBernd\n\n\n\n\n", "msg_date": "Fri, 31 Jan 2020 11:33:34 +0100", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On Fri, Jan 31, 2020 at 11:33:34AM +0100, Bernd Helmle wrote:\n> And to be honest, even PostgreSQL itself allows you to reuse tablespace\n> locations for multiple instances as well, so the described problem\n> should exist not in upgraded clusters only.\n\nFair point. Now, while the proposed patch is right to use\nTABLESPACE_VERSION_DIRECTORY, shouldn't we use strncmp based on the\nlength of TABLESPACE_VERSION_DIRECTORY instead of de->d_name? It\nseems also to me that the code as proposed is rather fragile, and that\nwe had better be sure that the check only happens when we are scanning\nentries within pg_tblspc.\n\nThe issue with pg_internal.init.XX is quite different, so I think that\nit would be better to commit that separately first.\n--\nMichael", "msg_date": "Tue, 18 Feb 2020 15:15:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On Fri, Jan 31, 2020 at 05:39:36PM +0900, Kyotaro Horiguchi wrote:\n> At Fri, 31 Jan 2020 17:30:43 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n>> I don't think that is a problem right away, of course. It looks good\n>> to me except for the possible excessive exclusion. So, I don't object\n>> it if we don't mind that.\n> \n> That's a bit wrong. 
All the discussion is only on excludeFiles. I\n> think we should refrain from letting more files match to\n> noChecksumFiles.\n\nI am not sure what you are saying here. Are you saying that we should\nnot use a prefix matching for that part? Or are you saying that we\nshould not touch this list at all?\n\nPlease note that pg_internal.init is listed within noChecksumFiles in\nbasebackup.c, so we would miss any temporary pg_internal.init.PID if\nwe don't check after the file prefix and the base backup would issue\nextra WARNING messages, potentially masking messages that could\nmatter. So let's fix that as well.\n\nI agree that a side effect of this change would be to discard anything\nprefixed with \"backup_label\" or \"tablespace_map\", including any old,\nrenamed files. Do you know of any backup solutions which could be\nimpacted by that? I am adding David Steele and Stephen Frost in CC so\nas they can comment based on their experience in this area. I recall\nthat backrest stuff uses the replication protocol, but I may be\nwrong.\n--\nMichael", "msg_date": "Wed, 19 Feb 2020 17:13:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On 2/19/20 2:13 AM, Michael Paquier wrote:\n> On Fri, Jan 31, 2020 at 05:39:36PM +0900, Kyotaro Horiguchi wrote:\n>> At Fri, 31 Jan 2020 17:30:43 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>>> I don't think that is a problem right away, of course. It looks good\n>>> to me except for the possible excessive exclusion. So, I don't object\n>>> it if we don't mind that.\n>>\n>> That's a bit wrong. All the discussion is only on excludeFiles. I\n>> think we should refrain from letting more files match to\n>> noChecksumFiles.\n> \n> I am not sure what you are saying here. Are you saying that we should\n> not use a prefix matching for that part? 
Or are you saying that we\n> should not touch this list at all?\n\nPerhaps he is saying that if it is already excluded it won't be \nchecksummed. So, if pg_internal.init* is excluded from the backup, that \nis all that is needed. If so, I agree. This might not help \npg_verify_checksums, though, except that it should be applying the same \nrules.\n\n> Please note that pg_internal.init is listed within noChecksumFiles in\n> basebackup.c, so we would miss any temporary pg_internal.init.PID if\n> we don't check after the file prefix and the base backup would issue\n> extra WARNING messages, potentially masking messages that could\n> matter. So let's fix that as well.\n\nAgreed. Though, I think pg_internal.init.XX should be excluded from the \nbackup as well.\n\nAs far as I can see, the pg_internal.init.XX will not be cleaned up by \nPostgreSQL on startup. I've only tested this in 9.6 so far, but I don't \nsee any differences in the code since then that would lead to better \nbehavior. Perhaps that's also something we should fix?\n\n> I agree that a side effect of this change would be to discard anything\n> prefixed with \"backup_label\" or \"tablespace_map\", including any old,\n> renamed files. Do you know of any backup solutions which could be\n> impacted by that? I am adding David Steele and Stephen Frost in CC so\n> as they can comment based on their experience in this area. I recall\n> that backrest stuff uses the replication protocol, but I may be\n> wrong.\n\nI'm really not a fan of a blind prefix match. I think we should stick \nwith only excluding files that are created by Postgres. So \nbackup_label.old and tablespace_map.old should just be added to the \nexclude list. 
That's how we have it in pgBackRest.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 19 Feb 2020 12:37:00 -0600", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On 1/31/20 3:59 AM, Michael Banck wrote:\n> Hi,\n> \n> Am Freitag, den 31.01.2020, 13:53 +0900 schrieb Michael Paquier:\n>> On Thu, Jan 30, 2020 at 06:11:22PM +0100, Michael Banck wrote:\n>> Having a past tablespace version left\n>> around after an upgrade is a pilot error in my opinion because\n>> pg_upgrade generates a script to cleanup past tablespaces, no? So\n>> your patch does not look like a good idea to me.\n> \n> Not sure I agree with it, but if that (i.e. after pg_upgrade in copy\n> mode, you have no business to use the old cluster as well as the new\n> one) is project policy, fair enough.\n\nI don't see how this is project policy. The directories for other \nversions of Postgres should be ignored as they are in other utilities, \ne.g. 
pg_basebackup.\n\n> However, Postgres does not disallow to just create tablespaces in the\n> same location from two different versions, so you don't need the\n> pg_upgrade scenario to get into this (pg_checksums checking the wrong\n> cluster's data) problem:\n\nExactly.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 19 Feb 2020 12:42:53 -0600", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On Wed, Feb 19, 2020 at 12:37:00PM -0600, David Steele wrote:\n> On 2/19/20 2:13 AM, Michael Paquier wrote:\n>> Please note that pg_internal.init is listed within noChecksumFiles in\n>> basebackup.c, so we would miss any temporary pg_internal.init.PID if\n>> we don't check after the file prefix and the base backup would issue\n>> extra WARNING messages, potentially masking messages that could\n>> matter. So let's fix that as well.\n> \n> Agreed. Though, I think pg_internal.init.XX should be excluded from the\n> backup as well.\n\nSure. That's the intention. pg_rewind, pg_checksums and basebackup.c\nare all the things on my list.\n\n> As far as I can see, the pg_internal.init.XX will not be cleaned up by\n> PostgreSQL on startup. I've only tested this in 9.6 so far, but I don't see\n> any differences in the code since then that would lead to better behavior.\n> Perhaps that's also something we should fix?\n\nNot sure that it is worth spending cycles on that at the beginning of\nrecovery as when a mapping file is written its temporary entry\ntruncates any existing one present matching its name.\n\n> I'm really not a fan of a blind prefix match. I think we should stick with\n> only excluding files that are created by Postgres.\n\nThinking more on that, excluding any backup_label with a custom suffix\nworries me as it could cause a potential breakage for existing backup\nsolutions. 
So attached is an updated patch making things in a\nsmarter way: I have added to the exclusion lists the possibility to\nmatch an entry based on its prefix, or not, the choice being optional.\nThis solves the problem with pg_internal.PID and is careful to not\nexclude unnecessary entries like suffixed backup labels or such. This\nleads to some extra duplication within pg_rewind, basebackup.c and\npg_checksums but I think we can live with that, and that makes\nback-patching simpler. Refactoring is still tricky though as it\nrelates to the use of paths across the backend and the frontend..\n\n> So backup_label.old and\n> tablespace_map.old should just be added to the exclude list. That's how we\n> have it in pgBackRest.\n\nThat would be a behavior change. We could change that on HEAD, but I\ndon't think that this can be back-patched as this does not cause an\nactual problem.\n\nFor now, my proposal is to fix the prefix first, and then let's look\nat the business with tablespaces where needed. \n--\nMichael", "msg_date": "Thu, 20 Feb 2020 15:55:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On 2/20/20 12:55 AM, Michael Paquier wrote:\n> On Wed, Feb 19, 2020 at 12:37:00PM -0600, David Steele wrote:\n> \n>> As far as I can see, the pg_internal.init.XX will not be cleaned up by\n>> PostgreSQL on startup. I've only tested this in 9.6 so far, but I don't see\n>> any differences in the code since then that would lead to better behavior.\n>> Perhaps that's also something we should fix?\n> \n> Not sure that it is worth spending cycles on that at the beginning of\n> recovery as when a mapping file is written its temporary entry\n> truncates any existing one present matching its name.\n\nBut since the name includes the backend's pid you would need to get \nlucky and have a new backend with the same pid create the file after a \nrestart. 
I tried it and the old temp file was left behind after restart \nand first connection to the database.\n\nI doubt this is a big issue in the field, but it seems like it would be \nnice to do something about it.\n\n>> I'm really not a fan of a blind prefix match. I think we should stick with\n>> only excluding files that are created by Postgres.\n> \n> Thinking more on that, excluding any backup_label with a custom suffix\n> worries me as it could cause a potential breakage for exiting backup\n> solutions. So attached is an updated patch making things in a\n> smarter way: I have added to the exclusion lists the possibility to\n> match an entry based on its prefix, or not, the choice being optional.\n> This solves the problem with pg_internal.PID and is careful to not\n> exclude unnecessary entries like suffixed backup labels or such. This\n> leads to some extra duplication within pg_rewind, basebackup.c and\n> pg_checksums but I think we can live with that, and that makes\n> back-patching simpler. Refactoring is still tricky though as it\n> relates to the use of paths across the backend and the frontend..\n\nI'm not excited about the amount of code duplication between these three \ntools. I know this was because of back-patching various issues in the \npast, but I really think we need to unify these data \nstructures/functions in HEAD.\n\n>> So backup_label.old and\n>> tablespace_map.old should just be added to the exclude list. That's how we\n>> have it in pgBackRest.\n> \n> That would be a behavior change. 
We could change that on HEAD, but I\n> don't think that this can be back-patched as this does not cause an\n> actual problem.\n\nRight, that should be in HEAD.\n\n> For now, my proposal is to fix the prefix first, and then let's look\n> at the business with tablespaces where needed.\n\nOK.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Thu, 20 Feb 2020 07:37:15 -0600", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "Am Dienstag, den 18.02.2020, 15:15 +0900 schrieb Michael Paquier:\n> Fair point. Now, while the proposed patch is right to use\n> TABLESPACE_VERSION_DIRECTORY, shouldn't we use strncmp based on the\n> length of TABLESPACE_VERSION_DIRECTORY instead of de->d_name? It\n> seems also to me that the code as proposed is rather fragile, and\n> that\n> we had better be sure that the check only happens when we are\n> scanning\n> entries within pg_tblspc.\n> \n\nYes, after thinking and playing around with it a little I share your\nposition. You can still easily cause pg_checksums to error out by just\nhaving arbitrary files around in the reference tablespace locations.\nThough I don't think this is something of a big issue, it looks strange\nand misleading if pg_checksums just complains about files not belonging\nto the scanned PostgreSQL data directory (even we explicitly note in\nthe docs that even tablespace locations are somehow taboo for DBAs to\nput other files and/or directories in there).\n\nSo I propose a different approach like the attached patch tries to\nimplement: instead of just blindly iterating over directory contents\nand filter them out, reference the tablespace location and\nTABLESPACE_VERSION_DIRECTORY directly. This is done by a new function\nscan_tablespaces() which is specialized in just following the\nsymlinks/junctions in pg_tblspc and call scan_directory() with just\nwhat it has found there. 
It will also honour directories, just in case\nan experienced DBA has copied over the tablespace into pg_tblspc\ndirectly.\n\n> The issue with pg_internal.init.XX is quite different, so I think\n> that\n> it would be better to commit that separately first.\n\nAgreed.\n\nThanks,\n\tBernd", "msg_date": "Thu, 20 Feb 2020 17:38:15 +0100", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On Thu, Feb 20, 2020 at 07:37:15AM -0600, David Steele wrote:\n> But since the name includes the backend's pid you would need to get lucky\n> and have a new backend with the same pid create the file after a restart. I\n> tried it and the old temp file was left behind after restart and first\n> connection to the database.\n> \n> I doubt this is a big issue in the field, but it seems like it would be nice\n> to do something about it.\n\nThe natural area to do that would be around ResetUnloggedRelations().\nStill that would require combine both operations to not do any\nunnecessary lookups at the data folder paths.\n\n> I'm not excited about the amount of code duplication between these three\n> tools. I know this was because of back-patching various issues in the past,\n> but I really think we need to unify these data structures/functions in HEAD.\n\nThe lists are duplicated because we have never really figured out how\nto combine this code in one place. 
The idea was to have all the data\nfolder path logic and the lists within one header shared between the\nfrontend and backend but there was not much support for that on HEAD.\n\n>> For now, my proposal is to fix the prefix first, and then let's look\n>> at the business with tablespaces where needed.\n> \n> OK.\n\nI'll let this patch round for a couple of extra day, and revisit it at\nthe beginning of next week.\n--\nMichael", "msg_date": "Fri, 21 Feb 2020 15:07:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On Thu, Feb 20, 2020 at 05:38:15PM +0100, Bernd Helmle wrote:\n> So i propose a different approach like the attached patch tries to\n> implement: instead of just blindly iterating over directory contents\n> and filter them out, reference the tablespace location and\n> TABLESPACE_VERSION_DIRECTORY directly. This is done by a new function\n> scan_tablespaces() which is specialized in just follow the\n> symlinks/junctions in pg_tblspc and call scan_directory() with just\n> what it has found there. It will also honour directories, just in case\n> an experienced DBA has copied over the tablespace into pg_tblspc\n> directly.\n\n+ if (S_ISREG(st.st_mode))\n+ {\n+ pg_log_debug(\"ignoring file %s in pg_tblspc\", de->d_name);\n+ continue;\n+ }\nWe don't do that for the normal directory scan path, so it does not\nstrike me as a good idea on consistency ground. As a whole, I don't\nsee much point in having a separate routine which is just roughly a\nduplicate of scan_directory(), and I think that we had better just add\nthe check looking for matches with TABLESPACE_VERSION_DIRECTORY\ndirectly when having a directory, if subdir is \"pg_tblspc\". 
That\nalso makes the patch much shorter.\n\n+ * the direct path to it and check via lstat wether it exists.\ns/wether/whether/, repeated three times.\n\nWe should have some TAP tests for that. The first patch of this\nthread from Michael had some, but I would just have added a dummy\ntablespace with an empty file in 002_actions.pl, triggering an error\nif pg_checksums is not fixed. Dummy entries around the place where\ndummy temp files are added would be fine.\n--\nMichael", "msg_date": "Fri, 21 Feb 2020 15:36:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "Thank you David for decrypting my previous mail, and your\ntranslation was correct.\n\nAt Fri, 21 Feb 2020 15:07:12 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Feb 20, 2020 at 07:37:15AM -0600, David Steele wrote:\n> > But since the name includes the backend's pid you would need to get lucky\n> > and have a new backend with the same pid create the file after a restart. I\n> > tried it and the old temp file was left behind after restart and first\n> > connection to the database.\n> > \n> > I doubt this is a big issue in the field, but it seems like it would be nice\n> > to do something about it.\n> \n> The natural area to do that would be around ResetUnloggedRelations().\n> Still that would require combine both operations to not do any\n> unnecessary lookups at the data folder paths.\n> \n> > I'm not excited about the amount of code duplication between these three\n> > tools. I know this was because of back-patching various issues in the past,\n> > but I really think we need to unify these data structures/functions in HEAD.\n> \n> The lists are duplicated because we have never really figured out how\n> to combine this code in one place. 
The idea was to have all the data\n> folder path logic and the lists within one header shared between the\n> frontend and backend but there was not much support for that on HEAD.\n> \n> >> For now, my proposal is to fix the prefix first, and then let's look\n> >> at the business with tablespaces where needed.\n> > \n> > OK.\n\n> I'll let this patch round for a couple of extra day, and revisit it at\n> the beginning of next week.\n\n\nThank you for the version.\nI didn't look at it closely, but it looks to be in the direction I wanted.\nAt a quick look, the following section caught my eye.\n\n+\t\t\t\tif (strncmp(de->d_name, excludeFiles[excludeIdx].name,\n+\t\t\t\t\t\t\tstrlen(excludeFiles[excludeIdx].name)) == 0)\n+\t\t\t\t{\n+\t\t\t\t\telog(DEBUG1, \"file \\\"%s\\\" matching prefix \\\"%s\\\" excluded from backup\",\n+\t\t\t\t\t\t de->d_name, excludeFiles[excludeIdx].name);\n+\t\t\t\t\texcludeFound = true;\n+\t\t\t\t\tbreak;\n+\t\t\t\t}\n+\t\t\t}\n+\t\t\telse\n+\t\t\t{\n+\t\t\t\tif (strcmp(de->d_name, excludeFiles[excludeIdx].name) == 0)\n+\t\t\t\t{\n+\t\t\t\t\telog(DEBUG1, \"file \\\"%s\\\" excluded from backup\", de->d_name);\n+\t\t\t\t\texcludeFound = true;\n+\t\t\t\t\tbreak;\n+\t\t\t\t}\n\nThe two str[n]cmps are different only in matching length. 
I don't\nthink we need to differentiate the two messages there, so we\ncould reduce the code as:\n\n| cmplen = strlen(excludeFiles[].name);\n| if (!prefix_match)\n| cmplen++;\n| if (strncmp(d_name, excludeFiles[].name, cmplen) == 0)\n| ...\n \nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 21 Feb 2020 17:37:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "Hi Michael,\n\nOn 2/20/20 11:07 PM, Michael Paquier wrote:\n > On Thu, Feb 20, 2020 at 07:37:15AM -0600, David Steele wrote:\n >> But since the name includes the backend's pid you would need to get \nlucky\n >> and have a new backend with the same pid create the file after a \nrestart. I\n >> tried it and the old temp file was left behind after restart and first\n >> connection to the database.\n >>\n >> I doubt this is a big issue in the field, but it seems like it would \nbe nice\n >> to do something about it.\n >\n > The natural area to do that would be around ResetUnloggedRelations().\n > Still that would require combine both operations to not do any\n > unnecessary lookups at the data folder paths.\n >\n >> I'm not excited about the amount of code duplication between these three\n >> tools. 
I know this was because of back-patching various issues in \nthe past,\n >> but I really think we need to unify these data structures/functions \nin HEAD.\n >\n > The lists are duplicated because we have never really figured out how\n > to combine this code in one place. The idea was to have all the data\n > folder path logic and the lists within one header shared between the\n > frontend and backend but there was not much support for that on HEAD.\n\nDo you have the thread? I'd like to see what was proposed and what the \nobjections were.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 21 Feb 2020 08:13:34 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On 2/21/20 1:36 AM, Michael Paquier wrote:\n > On Thu, Feb 20, 2020 at 05:38:15PM +0100, Bernd Helmle wrote:\n >> So i propose a different approach like the attached patch tries to\n >> implement: instead of just blindly iterating over directory contents\n >> and filter them out, reference the tablespace location and\n >> TABLESPACE_VERSION_DIRECTORY directly. This is done by a new function\n >> scan_tablespaces() which is specialized in just follow the\n >> symlinks/junctions in pg_tblspc and call scan_directory() with just\n >> what it has found there. It will also honour directories, just in case\n >> an experienced DBA has copied over the tablespace into pg_tblspc\n >> directly.\n >\n > + if (S_ISREG(st.st_mode))\n > + {\n > + pg_log_debug(\"ignoring file %s in pg_tblspc\", de->d_name);\n > + continue;\n > + }\n > We don't do that for the normal directory scan path, so it does not\n > strike me as a good idea on consistency ground. 
As a whole, I don't\n > see much point in having a separate routine which is just roughly a\n > duplicate of scan_directory(), and I think that we had better just add\n > the check looking for matches with TABLESPACE_VERSION_DIRECTORY\n > directly when having a directory, if subdir is \"pg_tblspc\". That\n > also makes the patch much shorter.\n\n+1. This is roughly what pg_basebackup does and it seems simpler to me.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Fri, 21 Feb 2020 08:18:51 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On Fri, Feb 21, 2020 at 05:37:15PM +0900, Kyotaro Horiguchi wrote:\n> The two str[n]cmps are different only in matching length. I don't\n> think we don't need to differentiate the two message there, so we\n> could reduce the code as:\n> \n> | cmplen = strlen(excludeFiles[].name);\n> | if (!prefix_patch)\n> | cmplen++;\n> | if (strncmp(d_name, excludeFilep.name, cmplen) == 0)\n> | ...\n\nGood idea. Let's do things as you suggest.\n--\nMichael", "msg_date": "Sun, 23 Feb 2020 16:08:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On Fri, Feb 21, 2020 at 08:13:34AM -0500, David Steele wrote:\n> Do you have the thread? 
I'd like to see what was proposed and what the\n> objections were.\n\nHere you go:\nhttps://www.postgresql.org/message-id/20180205071022.GA17337@paquier.xyz\n--\nMichael", "msg_date": "Sun, 23 Feb 2020 16:12:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On Fri, 2020-02-21 at 15:36 +0900, Michael Paquier wrote:\n> We don't do that for the normal directory scan path, so it does not\n> strike me as a good idea on consistency ground. As a whole, I don't\n> see much point in having a separate routine which is just roughly a\n> duplicate of scan_directory(), and I think that we had better just\n> add\n> the check looking for matches with TABLESPACE_VERSION_DIRECTORY\n> directly when having a directory, if subdir is \"pg_tblspc\". That\n> also makes the patch much shorter.\n\nTo be honest, I dislike both: the one doubles logic (note: I don't\nsee it necessarily as 100% code duplication, since the semantics of\nscan_tablespaces() are different: it serves as a driver for\nscan_directories() and just resolves entries in pg_tblspc directly).\n\nThe other makes scan_directories() more complicated to read and special\ncases just a single directory in an otherwise more or less generic\nfunction. E.g. it makes me uncomfortable if we get a pg_tblspc\nsomewhere other than PGDATA (if someone managed to create such a\ndirectory in a foreign tablespace location, for example), so we should\nmaintain an additional check that we really operate on the pg_tblspc we\nhave to. 
That was the reason I moved it into a separate function.\n\nThat said, I'll provide an updated patch with your ideas.\n\n\tBernd\n\n\n\n\n", "msg_date": "Mon, 24 Feb 2020 13:11:10 +0100", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On Sun, Feb 23, 2020 at 04:08:58PM +0900, Michael Paquier wrote:\n> Good idea. Let's do things as you suggest.\n\nApplied and back-patched this one down to 11.\n--\nMichael", "msg_date": "Mon, 24 Feb 2020 21:26:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "Hi Michael,\n\nOn 2/24/20 7:26 AM, Michael Paquier wrote:\n> On Sun, Feb 23, 2020 at 04:08:58PM +0900, Michael Paquier wrote:\n>> Good idea. Let's do things as you suggest.\n> \n> Applied and back-patched this one down to 11.\n\nFWIW, we took a slightly narrower approach to this issue in the \npgBackRest patch (attached).\n\nI don't have an issue with the prefix approach since it works and the \nPostgres project is very likely to catch it if there is a change in \nbehavior.\n\nFor third-party projects, though, it might pay to be more conservative \nin case the behavior changes in the future, i.e. \npg_internal.init[something] (but not pg_internal\\.init[0-9]+) becomes valid.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net", "msg_date": "Mon, 24 Feb 2020 19:44:04 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On Mon, Feb 24, 2020 at 01:11:10PM +0100, Bernd Helmle wrote:\n> The other makes scan_directories() complicated to read and special\n> cases just a single directory in an otherwise more or less generic\n> function. E.g. 
it makes me uncomfortable if we get a pg_tblspc\n> somewhere else than PGDATA (if someone managed to create such a\n> directory in a foreign tablespace location for example), so we should\n> maintain an additional check if we really operate on the pg_tblspc we\n> have to. That was the reason(s) i've moved it into a separate function.\n\nWe are just discussing the code path involving scanning a\ndirectory, so that does not seem that bad to me. I really think that\nwe should avoid duplicating the same logic around, and that we should\nremain consistent with non-directory entries in those paths,\ncomplaining with a proper failure if extra, unwanted files are\npresent.\n--\nMichael", "msg_date": "Tue, 25 Feb 2020 11:33:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On Tuesday, 25.02.2020 at 11:33 +0900, Michael Paquier wrote:\n> I really think that\n> we should avoid duplicating the same logic around, and that we should\n> remain consistent with non-directory entries in those paths,\n> complaining with a proper failure if extra, unwanted files are\n> present.\n\nOkay, please find an updated patch attached.\n\nMy feeling is that in the case we cannot successfully resolve a\ntablespace location from pg_tblspc, we should error out, but I could\nimagine that people would like to have just a warning instead.\n\nI've updated the TAP test for pg_checksums by adding a dummy\nsubdirectory into the tablespace directory already created for the\ncorrupted relfilenode test, containing a file to process in case an\nunpatched pg_checksums is run. 
With the patch attached, these\ndirectories simply won't be considered to check.\n\nThanks,\n\n\tBernd", "msg_date": "Wed, 26 Feb 2020 18:02:22 +0100", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" }, { "msg_contents": "On Wed, Feb 26, 2020 at 06:02:22PM +0100, Bernd Helmle wrote:\n> My feeling is that in the case we cannot successfully resolve a\n> tablespace location from pg_tblspc, we should error out, but i could\n> imagine that people would like to have just a warning instead.\n\nThanks, this patch is much cleaner in its approach, and I don't have\nmuch to say about it except that the error message for lstat() should\nbe more consistent with the one above in scan_directory(). The\nversion for v11 has required a bit of rework, but nothing huge\neither.\n\n> I've updated the TAP test for pg_checksums by adding a dummy\n> subdirectory into the tablespace directory already created for the\n> corrupted relfilenode test, containing a file to process in case an\n> unpatched pg_checksums is run. With the patch attached, these\n> directories simply won't be considered to check.\n\nWhat you have here is much more simple than the original proposal, so\nI kept it. Applied and back-patched down to 11.\n--\nMichael", "msg_date": "Thu, 27 Feb 2020 15:48:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [Patch] Make pg_checksums skip foreign tablespace directories" } ]
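The exclude-list handling settled in the thread above — exact entries plus prefix entries such as `pg_internal.init`, so that leftover temporary files like `pg_internal.init.12345` are skipped too — can be modeled in a few lines. This is an illustrative sketch in Python, not the committed C code; the entry list is abbreviated and the function name is made up for the example.

```python
# Illustrative model (not the PostgreSQL C source) of the exclude-list
# matching discussed above: prefix entries such as "pg_internal.init"
# also skip temporary files like "pg_internal.init.12345", while exact
# entries must match the whole file name. The list is abbreviated.
EXCLUDE = [
    ("pg_internal.init", True),   # prefix entry
    ("postmaster.pid", False),    # exact entry
    ("postmaster.opts", False),   # exact entry
]

def is_excluded(filename):
    for name, is_prefix in EXCLUDE:
        cmplen = len(name)
        if not is_prefix:
            # Mimic Kyotaro's strncmp trick: comparing one extra byte
            # (the terminating NUL in C) turns the bounded prefix
            # comparison into an exact match.
            cmplen += 1
        if filename[:cmplen] == name:
            return True
    return False
```

Comparing one extra byte is what keeps an exact entry like `postmaster.pid` from accidentally matching a longer name such as `postmaster.pid.bak`, while the prefix entry still catches every `pg_internal.init.<pid>` left behind by a crashed backend.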
[ { "msg_contents": "Using today's HEAD, the regression database cannot be dumped and\nrestored normally. Since the buildfarm isn't all red, I suppose\nit works in --binary-upgrade mode ... but if I just do\n\n$ make installcheck # to set up the test database\n$ pg_dump -Fc regression >r.dump\n$ createdb r2\n$ pg_restore -d r2 r.dump\n\nI get\n\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 6016; 2604 24926 DEFAULT gtest1_1 b postgres\npg_restore: error: could not execute query: ERROR: column \"b\" of relation \"gtest1_1\" is a generated column\nCommand was: ALTER TABLE ONLY public.gtest1_1 ALTER COLUMN b SET DEFAULT (a * 2);\n\n\npg_restore: from TOC entry 6041; 2604 25966 DEFAULT gtest30_1 b postgres\npg_restore: error: could not execute query: ERROR: cannot use column reference in DEFAULT expression\nCommand was: ALTER TABLE ONLY public.gtest30_1 ALTER COLUMN b SET DEFAULT (a * 2);\n\n\npg_restore: warning: errors ignored on restore: 2\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 30 Jan 2020 13:54:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Brokenness in dump/restore for GENERATED expressions" }, { "msg_contents": "On 2020-01-30 19:54, Tom Lane wrote:\n> Using today's HEAD, the regression database cannot be dumped and\n> restored normally. Since the buildfarm isn't all red, I suppose\n> it works in --binary-upgrade mode ... 
but if I just do\n> \n> $ make installcheck # to set up the test database\n> $ pg_dump -Fc regression >r.dump\n> $ createdb r2\n> $ pg_restore -d r2 r.dump\n> \n> I get\n> \n> pg_restore: while PROCESSING TOC:\n> pg_restore: from TOC entry 6016; 2604 24926 DEFAULT gtest1_1 b postgres\n> pg_restore: error: could not execute query: ERROR: column \"b\" of relation \"gtest1_1\" is a generated column\n> Command was: ALTER TABLE ONLY public.gtest1_1 ALTER COLUMN b SET DEFAULT (a * 2);\n> \n> \n> pg_restore: from TOC entry 6041; 2604 25966 DEFAULT gtest30_1 b postgres\n> pg_restore: error: could not execute query: ERROR: cannot use column reference in DEFAULT expression\n> Command was: ALTER TABLE ONLY public.gtest30_1 ALTER COLUMN b SET DEFAULT (a * 2);\n\nThis is the same issue as \n<https://www.postgresql.org/message-id/15830.1575468847@sss.pgh.pa.us>. \nI will work on it this week.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 3 Feb 2020 14:19:11 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Brokenness in dump/restore for GENERATED expressions" } ]
[ { "msg_contents": "When I was working on the test_json stuff yesterday, I noticed that\nthere are some unexpected (by me at least) things installed when we do\nan MSVC build:\n\n$ ls -l bin| egrep 'regress|isolation'\n-rwxr-xr-x 1 pgrunner None 72192 Jan 30 07:51 isolationtester.exe\n-rwxr-xr-x 1 pgrunner None 112640 Jan 30 07:51 pg_isolation_regress.exe\n-rwxr-xr-x 1 pgrunner None 112128 Jan 30 07:51 pg_regress.exe\n-rwxr-xr-x 1 pgrunner None 112640 Jan 30 07:51 pg_regress_ecpg.exe\n\nThis is made all the more obscure by the fact that the install script\ndoesn't tell you exactly what it's installing, unlike the \"make\"\ndriven install. There could well be other things that are installed\nthat shouldn't be.\n\nSo I think we need to do several things:\n\n. make the install script more verbose\n. work out how to ensure the things above (and test_json when we add\nit) are not installed.\n. check that nothing else is installed that shouldn't be.\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 31 Jan 2020 12:47:29 +1030", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "MSVC installs too much stuff?" 
}, { "msg_contents": "On Fri, Jan 31, 2020 at 12:47:29PM +1030, Andrew Dunstan wrote:\n> When I was working on the test_json stuff yesterday, I noticed that\n> there are some unexpected (by me at least) things installed when we do\n> an MSVC build:\n> \n> $ ls -l bin| egrep 'regress|isolation'\n> -rwxr-xr-x 1 pgrunner None 72192 Jan 30 07:51 isolationtester.exe\n> -rwxr-xr-x 1 pgrunner None 112640 Jan 30 07:51 pg_isolation_regress.exe\n> -rwxr-xr-x 1 pgrunner None 112128 Jan 30 07:51 pg_regress.exe\n> -rwxr-xr-x 1 pgrunner None 112640 Jan 30 07:51 pg_regress_ecpg.exe\n> \n> This is made all the more obscure by the fact that the install script\n> doesn't tell you exactly what it's installing, unlike the \"make\"\n> driven install. There could well be other things that are installed\n> that shouldn't be.\n\n+1. Looking at vcregress.pl, all four are always invoked from the\nroot of the build folder.\n\n> So I think we need to do several things:\n> \n> . make the install script more verbose\n> . work out how to ensure the things above (and test_json when we add\n> it) are not installed.\n> . check that nothing else is installed that shouldn't be.\n\nHmm. It seems to me that an exclusion list with patterns to match\nshould be enough in Install.pm. Having only one code path for the\nfiltering would be nice, which means merging CopyFiles and\nCopySetOfFiles.\n--\nMichael", "msg_date": "Fri, 31 Jan 2020 14:26:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: MSVC installs too much stuff?" 
}, { "msg_contents": "On Fri, 31 Jan 2020 at 13:27, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jan 31, 2020 at 12:47:29PM +1030, Andrew Dunstan wrote:\n> > When I was working on the test_json stuff yesterday, I noticed that\n> > there are some unexpected (by me at least) things installed when we do\n> > an MSVC build:\n> >\n> > $ ls -l bin| egrep 'regress|isolation'\n> > -rwxr-xr-x 1 pgrunner None 72192 Jan 30 07:51 isolationtester.exe\n> > -rwxr-xr-x 1 pgrunner None 112640 Jan 30 07:51 pg_isolation_regress.exe\n> > -rwxr-xr-x 1 pgrunner None 112128 Jan 30 07:51 pg_regress.exe\n> > -rwxr-xr-x 1 pgrunner None 112640 Jan 30 07:51 pg_regress_ecpg.exe\n\nThese tools should be installed. They are useful, important in fact,\nfor testing extensions.\n\nIn *nix builds we install them to\n$PREFIX/lib/postgresql/pgxs/src/test/regress/pg_regress etc.\n\nOn Windows we don't have PGXS. It probably doesn't make sense to\ninstall them to the pgxs dir. So putting them in bin is pretty\nreasonable.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n", "msg_date": "Fri, 31 Jan 2020 14:05:16 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: MSVC installs too much stuff?" 
}, { "msg_contents": "On Fri, Jan 31, 2020 at 4:35 PM Craig Ringer <craig@2ndquadrant.com> wrote:\n>\n> On Fri, 31 Jan 2020 at 13:27, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Fri, Jan 31, 2020 at 12:47:29PM +1030, Andrew Dunstan wrote:\n> > > When I was working on the test_json stuff yesterday, I noticed that\n> > > there are some unexpected (by me at least) things installed when we do\n> > > an MSVC build:\n> > >\n> > > $ ls -l bin| egrep 'regress|isolation'\n> > > -rwxr-xr-x 1 pgrunner None 72192 Jan 30 07:51 isolationtester.exe\n> > > -rwxr-xr-x 1 pgrunner None 112640 Jan 30 07:51 pg_isolation_regress.exe\n> > > -rwxr-xr-x 1 pgrunner None 112128 Jan 30 07:51 pg_regress.exe\n> > > -rwxr-xr-x 1 pgrunner None 112640 Jan 30 07:51 pg_regress_ecpg.exe\n>\n> These tools should be installed. They are useful, important in fact,\n> for testing extensions.\n>\n> In *nix builds we install them to\n> $PREFIX/lib/postgresql/pgxs/src/test/regress/pg_regress etc.\n>\n> On Windows we don't have PGXS. It probably doesn't make sense to\n> install them to the pgxs dir. So putting them in bin is pretty\n> reasonable.\n\nOh, Ha! Forget I spoke.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 10 Feb 2020 10:48:27 +1030", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: MSVC installs too much stuff?" } ]
[ { "msg_contents": "Hello,\n\nPlease find a one-liner patch in the attachment.\n\nThis patch fixes a size parameter of `pg_strncasecmp` which compared a\n\"string\" literal with a variable by passing a size of 5 while the \"string\"\nliteral has 6 bytes.\n\nThis issue can be observed with the following query (where 'X' is any\ncharacter other than 'g' and null byte):\n\n select json_to_tsvector('\"abc\"'::json, '\"strinX\"')\n\nBefore this patch this query returns the `'abc':1` result instead of\nfailing with the following error:\n\n wrong flag in flag array: \"strinX\"\n\nBy the way, the `strncasecmp` usages around the fixed line could use\n`strcasecmp` which doesn't accept the `size_t n` argument.\n\n---\nRegards,\nDisconnect3d", "msg_date": "Fri, 31 Jan 2020 04:18:09 +0100", "msg_from": "Dominik Czarnota <dominik.b.czarnota@gmail.com>", "msg_from_op": true, "msg_subject": "PATCH: Fix wrong size argument to pg_strncasecmp" }, { "msg_contents": "Dominik Czarnota <dominik.b.czarnota@gmail.com> writes:\n> This patch fixes a size parameter of `pg_strncasecmp` which compared a\n> \"string\" literal with a variable by passing a size of 5 while the \"string\"\n> literal has 6 bytes.\n\nPushed, thanks for the report!\n\n> By the way, the `strncasecmp` usages around the fixed line could use\n> `strcasecmp` which doesn't accept the `size_t n` argument.\n\nMaybe. It's not clear to me that it'd be okay to assume that the\nvariable input string is null-terminated.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 31 Jan 2020 17:28:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PATCH: Fix wrong size argument to pg_strncasecmp" } ]
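The off-by-one fixed in this thread is easy to model: bounding the case-insensitive comparison at 5 bytes instead of 6 means only `strin` is actually checked, so a flag like `strinX` is wrongly accepted. Below is a small sketch of the before/after behavior — an illustrative model in Python, not the actual PostgreSQL source.

```python
# Illustrative model (not the actual jsonfuncs.c code) of the bug fixed
# above: a bounded case-insensitive comparison against the 6-byte
# literal "string", with the n argument wrongly set to 5.
def match_flag(flag, n):
    # Rough analogue of pg_strncasecmp(flag, "string", n) == 0.
    return flag[:n].lower() == "string"[:n].lower()

def is_string_flag_buggy(flag):
    return match_flag(flag, 5)   # only "strin" is compared

def is_string_flag_fixed(flag):
    return match_flag(flag, 6)   # all six bytes are compared
```

As Tom Lane notes in the thread, switching to an unbounded `strcasecmp` would only be safe if the input were guaranteed to be NUL-terminated, which is why the bounded form with the corrected length was kept.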
[ { "msg_contents": "Hi:\n\nI wrote a patch to erase the distinctClause if the result is unique by\ndefinition. I noticed this because a user switched code from Oracle\nto PG and found the performance was bad due to this, so I adapted PG for\nthis as well.\n\nThis patch doesn't help a well-written SQL statement, but some drawbacks\nof a SQL statement may not be very obvious; since the cost of the check is\npretty low, I think it would be ok to add.\n\nPlease see the patch for details.\n\nThank you.", "msg_date": "Fri, 31 Jan 2020 20:39:37 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "Updated the patch to consider semi/anti joins.\n\nCan anyone help to review this patch?\n\nThanks\n\n\nOn Fri, Jan 31, 2020 at 8:39 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Hi:\n>\n> I wrote a patch to erase the distinctClause if the result is unique by\n> definition, I find this because a user switch this code from oracle\n> to PG and find the performance is bad due to this, so I adapt pg for\n> this as well.\n>\n> This patch doesn't work for a well-written SQL, but some drawback\n> of a SQL may be not very obvious, since the cost of checking is pretty\n> low as well, so I think it would be ok to add..\n>\n> Please see the patch for details.\n>\n> Thank you.\n>", "msg_date": "Thu, 6 Feb 2020 14:01:27 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "Hi Andy,\nWhat might help is to add more description to your email message like\ngiving examples to explain your idea.\n\nAnyway, I looked at the testcases you added for examples.\n+create table select_distinct_a(a int, b char(20), c char(20) not null, d\nint, e int, primary key(a, b));\n+set enable_mergejoin to off;\n+set enable_hashjoin to 
off;\n+-- no node for distinct.\n+explain (costs off) select distinct * from select_distinct_a;\n+ QUERY PLAN\n+-------------------------------\n+ Seq Scan on select_distinct_a\n+(1 row)\n\n From this example, it seems that the distinct operation can be dropped\nbecause (a, b) is a primary key. Is my understanding correct?\n\nI like the idea since it eliminates one expensive operation.\n\nHowever the patch as presented has some problems\n1. What happens if the primary key constraint or NOT NULL constraint gets\ndropped between a prepare and execute? The plan will no more be valid and\nthus execution may produce non-distinct results. PostgreSQL has similar\nconcept of allowing non-grouping expression as part of targetlist when\nthose expressions can be proved to be functionally dependent on the GROUP\nBY clause. See check_functional_grouping() and its caller. I think,\nDISTINCT elimination should work on similar lines.\n2. For the same reason described in check_functional_grouping(), using\nunique indexes for eliminating DISTINCT should be discouraged.\n3. If you could eliminate DISTINCT you could similarly eliminate GROUP BY\nas well\n4. The patch works only at the query level, but that functionality can be\nexpanded generally to other places which add Unique/HashAggregate/Group\nnodes if the underlying relation can be proved to produce distinct rows.\nBut that's probably more work since we will have to label paths with unique\nkeys similar to pathkeys.\n5. 
Have you tested this OUTER joins, which can render inner side nullable?\n\nOn Thu, Feb 6, 2020 at 11:31 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> update the patch with considering the semi/anti join.\n>\n> Can anyone help to review this patch?\n>\n> Thanks\n>\n>\n> On Fri, Jan 31, 2020 at 8:39 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>> Hi:\n>>\n>> I wrote a patch to erase the distinctClause if the result is unique by\n>> definition, I find this because a user switch this code from oracle\n>> to PG and find the performance is bad due to this, so I adapt pg for\n>> this as well.\n>>\n>> This patch doesn't work for a well-written SQL, but some drawback\n>> of a SQL may be not very obvious, since the cost of checking is pretty\n>> low as well, so I think it would be ok to add..\n>>\n>> Please see the patch for details.\n>>\n>> Thank you.\n>>\n>\n\n-- \n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Fri, 7 Feb 2020 21:24:27 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "Hi Ashutosh:\n Thanks for your time.\n\nOn Fri, Feb 7, 2020 at 11:54 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> Hi Andy,\n> What might help is to add more description to your email message like\n> giving examples to explain your idea.\n>\n> Anyway, I looked at the testcases you added for examples.\n> +create table select_distinct_a(a int, b char(20), c char(20) not null,\n> d int, e int, primary key(a, b));\n> +set enable_mergejoin to off;\n> +set enable_hashjoin to off;\n> +-- no node for distinct.\n> +explain (costs off) select distinct * from select_distinct_a;\n> + QUERY PLAN\n> +-------------------------------\n> + Seq Scan on select_distinct_a\n> +(1 row)\n>\n> From this example, it seems that the distinct operation can be dropped\n> because (a, b) is a primary key. Is my understanding correct?\n>\n\nYes, you are correct. Actually I added then to commit message,\nbut it's true that I should have copied them in this email body as\n well. so copy it now.\n\n
What happens if the primary key constraint or NOT NULL constraint gets\n> dropped between a prepare and execute? The plan will no more be valid and\n> thus execution may produce non-distinct results.\n>\n\nWill this still be an issue if user use doesn't use a \"read uncommitted\"\nisolation level? I suppose it should be ok for this case. But even though\nI should add an isolation level check for this. Just added that in the\npatch\nto continue discussing of this issue.\n\n\n> PostgreSQL has similar concept of allowing non-grouping expression as part\n> of targetlist when those expressions can be proved to be functionally\n> dependent on the GROUP BY clause. See check_functional_grouping() and its\n> caller. I think, DISTINCT elimination should work on similar lines.\n>\n2. For the same reason described in check_functional_grouping(), using\n> unique indexes for eliminating DISTINCT should be discouraged.\n>\n\nI checked the comments of check_functional_grouping, the reason is\n\n * Currently we only check to see if the rel has a primary key that is a\n * subset of the grouping_columns. We could also use plain unique\nconstraints\n * if all their columns are known not null, but there's a problem: we need\n * to be able to represent the not-null-ness as part of the constraints\nadded\n * to *constraintDeps. FIXME whenever not-null constraints get represented\n * in pg_constraint.\n\nActually I am doubtful the reason for pg_constraint since we still be able\nto get the not null information from relation->rd_attr->attrs[n].attnotnull\nwhich\nis just what this patch did.\n\n3. If you could eliminate DISTINCT you could similarly eliminate GROUP BY\n> as well\n>\n\nThis is a good point. The rules may have some different for join, so I\nprefer\nto to focus on the current one so far.\n\n\n> 4. 
The patch works only at the query level, but that functionality can be\n> expanded generally to other places which add Unique/HashAggregate/Group\n> nodes if the underlying relation can be proved to produce distinct rows.\n> But that's probably more work since we will have to label paths with unique\n> keys similar to pathkeys.\n>\n\nDo you mean adding some information into PlannerInfo, and when we create\na node for Unique/HashAggregate/Group, we can just create a dummy node?\n\n\n> 5. Have you tested this OUTER joins, which can render inner side nullable?\n>\n\nYes, that part was missed in the test case. I just added them.\n\nOn Thu, Feb 6, 2020 at 11:31 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>> update the patch with considering the semi/anti join.\n>>\n>> Can anyone help to review this patch?\n>>\n>> Thanks\n>>\n>>\n>> On Fri, Jan 31, 2020 at 8:39 PM Andy Fan <zhihui.fan1213@gmail.com>\n>> wrote:\n>>\n>>> Hi:\n>>>\n>>> I wrote a patch to erase the distinctClause if the result is unique by\n>>> definition, I find this because a user switch this code from oracle\n>>> to PG and find the performance is bad due to this, so I adapt pg for\n>>> this as well.\n>>>\n>>> This patch doesn't work for a well-written SQL, but some drawback\n>>> of a SQL may be not very obvious, since the cost of checking is pretty\n>>> low as well, so I think it would be ok to add..\n>>>\n>>> Please see the patch for details.\n>>>\n>>> Thank you.\n>>>\n>>\n>\n> --\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>", "msg_date": "Sat, 8 Feb 2020 15:22:47 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Sat, Feb 8, 2020 at 12:53 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Hi Ashutosh:\n> Thanks for your time.\n>\n> On Fri, Feb 7, 2020 at 11:54 PM Ashutosh Bapat <\n> ashutosh.bapat.oss@gmail.com> wrote:\n>\n>> Hi Andy,\n>> What might help 
is to add more description to your email message like\n>> giving examples to explain your idea.\n>>\n>> Anyway, I looked at the testcases you added for examples.\n>> +create table select_distinct_a(a int, b char(20), c char(20) not null,\n>> d int, e int, primary key(a, b));\n>> +set enable_mergejoin to off;\n>> +set enable_hashjoin to off;\n>> +-- no node for distinct.\n>> +explain (costs off) select distinct * from select_distinct_a;\n>> + QUERY PLAN\n>> +-------------------------------\n>> + Seq Scan on select_distinct_a\n>> +(1 row)\n>>\n>> From this example, it seems that the distinct operation can be dropped\n>> because (a, b) is a primary key. Is my understanding correct?\n>>\n>\n> Yes, you are correct. Actually I added then to commit message,\n> but it's true that I should have copied them in this email body as\n> well. so copy it now.\n>\n> [PATCH] Erase the distinctClause if the result is unique by\n> definition\n>\n\nI forgot to mention this in the last round of comments. Your patch was\nactually removing distictClause from the Query structure. Please avoid\ndoing that. If you remove it, you are also removing the evidence that this\nQuery had a DISTINCT clause in it.\n\n\n>\n>\n> However the patch as presented has some problems\n> 1. What happens if the primary key constraint or NOT NULL constraint gets\n> dropped between a prepare and execute? The plan will no more be valid and\n> thus execution may produce non-distinct results.\n>\n> Will this still be an issue if user use doesn't use a \"read uncommitted\"\n> isolation level? I suppose it should be ok for this case. But even though\n> I should add an isolation level check for this. Just added that in the\n> patch\n> to continue discussing of this issue.\n>\n\nIn PostgreSQL there's no \"read uncommitted\". 
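The single-relation rules from the commit message quoted above (primary key contained in the target list; a unique key contained in the target list with all of its columns NOT NULL; GROUP BY columns contained in the target list) can be sketched as a small predicate. This is a simplified model over plain column names — the actual patch works on Vars, pg_index entries, and attnotnull:

```python
# Sketch of the "result is unique by definition" test for one relation.
# Simplified: columns are plain names, not Vars.

def distinct_is_redundant(target_cols, pkey_cols=(), unique_keys=(),
                          notnull_cols=(), group_by=()):
    tset = set(target_cols)
    # Rule 1: the primary key is contained in the target list.
    if pkey_cols and set(pkey_cols) <= tset:
        return True
    # Rule 2: some unique key is in the target list and all of its
    # columns are known NOT NULL (NULLs would defeat the proof, since
    # a unique index admits multiple NULLs).
    for uk in unique_keys:
        if set(uk) <= tset and set(uk) <= set(notnull_cols):
            return True
    # Rule 3: every GROUP BY column appears in the target list, so the
    # grouped result already has one row per distinct key.
    if group_by and set(group_by) <= tset:
        return True
    return False

# SELECT DISTINCT a, b, c FROM t  with PRIMARY KEY (a, b)   -> redundant
print(distinct_is_redundant(["a", "b", "c"], pkey_cols=["a", "b"]))  # True
# SELECT DISTINCT b FROM t  with UNIQUE(b) but b nullable   -> keep it
print(distinct_is_redundant(["b"], unique_keys=[["b"]]))             # False
print(distinct_is_redundant(["b"], unique_keys=[["b"]],
                            notnull_cols=["b"]))                     # True
```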
But that doesn't matter since\na query can be prepared outside a transaction and executed within one or\nmore subsequent transactions.\n\n\n>\n>\n>> PostgreSQL has similar concept of allowing non-grouping expression as\n>> part of targetlist when those expressions can be proved to be functionally\n>> dependent on the GROUP BY clause. See check_functional_grouping() and its\n>> caller. I think, DISTINCT elimination should work on similar lines.\n>>\n> 2. For the same reason described in check_functional_grouping(), using\n>> unique indexes for eliminating DISTINCT should be discouraged.\n>>\n>\n> I checked the comments of check_functional_grouping, the reason is\n>\n> * Currently we only check to see if the rel has a primary key that is a\n> * subset of the grouping_columns. We could also use plain unique\n> constraints\n> * if all their columns are known not null, but there's a problem: we need\n> * to be able to represent the not-null-ness as part of the constraints\n> added\n> * to *constraintDeps. FIXME whenever not-null constraints get represented\n> * in pg_constraint.\n>\n> Actually I am doubtful the reason for pg_constraint since we still be able\n> to get the not null information from\n> relation->rd_attr->attrs[n].attnotnull which\n> is just what this patch did.\n>\n\nThe problem isn't whether not-null-less can be inferred or not, the problem\nis whether that can be guaranteed across planning and execution of query\n(prepare and execute for example.) The constraintDep machinary registers\nthe constraints used for preparing plan and invalidates the plan if any of\nthose constraints change after plan is created.\n\n\n>\n> 3. If you could eliminate DISTINCT you could similarly eliminate GROUP BY\n>> as well\n>>\n>\n> This is a good point. The rules may have some different for join, so I\n> prefer\n> to to focus on the current one so far.\n>\n\nI doubt that since DISTINCT is ultimately carried out as Grouping\noperation. 
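Since DISTINCT is ultimately carried out as a grouping operation, the same uniqueness proof would let the planner skip a Group/Unique step outright: if a column set known to be unique (and NOT NULL) is contained in the grouping key, every group has exactly one input row. A hypothetical sketch of that shortcut, not planner code:

```python
# If the grouping key contains a known unique key, grouping is a no-op:
# each input row is its own group.

def eliminate_grouping(rows, group_cols, unique_key):
    """Group rows by group_cols, skipping the work when unique_key (a
    column set known unique and NOT NULL) is contained in group_cols."""
    if set(unique_key) <= set(group_cols):
        return list(rows)           # one row per group already
    seen, out = set(), []
    for r in rows:
        k = tuple(r[c] for c in group_cols)
        if k not in seen:
            seen.add(k)
            out.append(r)
    return out

rows = [{"a": 1, "b": 10}, {"a": 2, "b": 10}]
# GROUP BY a, with a unique: grouping can be elided.
print(len(eliminate_grouping(rows, ["a"], unique_key=["a"])))   # 2
# GROUP BY b: b is not unique, real grouping is still required.
print(len(eliminate_grouping(rows, ["b"], unique_key=["a"])))   # 1
```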
But anyway, I won't hang upon that.\n\n>\n>\n>> 4. The patch works only at the query level, but that functionality can be\n>> expanded generally to other places which add Unique/HashAggregate/Group\n>> nodes if the underlying relation can be proved to produce distinct rows.\n>> But that's probably more work since we will have to label paths with unique\n>> keys similar to pathkeys.\n>>\n>\n> Do you mean adding some information into PlannerInfo, and when we create\n> a node for Unique/HashAggregate/Group, we can just create a dummy node?\n>\n\nNot so much as PlannerInfo but something on lines of PathKey. See PathKey\nstructure and related code. What I envision is PathKey class is also\nannotated with the information whether that PathKey implies uniqueness.\nE.g. a PathKey derived from a Primary index would imply uniqueness also. A\nPathKey derived from say Group operation also implies uniqueness. Then just\nby looking at the underlying Path we would be able to say whether we need\nGroup/Unique node on top of it or not. I think that would make it much\nwider usecase and a very useful optimization.\n\n--\nBest Wishes,\nAshutosh Bapat\n\nOn Sat, Feb 8, 2020 at 12:53 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:Hi Ashutosh:   Thanks for your time. On Fri, Feb 7, 2020 at 11:54 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:Hi Andy,What might help is to add more description to your email message like giving examples to explain your idea.Anyway, I looked at the testcases you added for examples.+create table select_distinct_a(a int, b char(20),  c char(20) not null,  d int, e int, primary key(a, b));+set enable_mergejoin to off;+set enable_hashjoin to off;+-- no node for distinct.+explain (costs off) select distinct * from select_distinct_a;+          QUERY PLAN           +-------------------------------+ Seq Scan on select_distinct_a+(1 row)From this example, it seems that the distinct operation can be dropped because (a, b) is a primary key. 
Is my understanding correct?Yes, you are correct.   Actually I added then to commit message,but it's true that I should have copied them in this email body as well.  so copy it now. [PATCH] Erase the distinctClause if the result is unique by definitionI forgot to mention this in the last round of comments. Your patch was actually removing distictClause from the Query structure. Please avoid doing that. If you remove it, you are also removing the evidence that this Query had a DISTINCT clause in it. However the patch as presented has some problems1. What happens if the primary key constraint or NOT NULL constraint gets dropped between a prepare and execute? The plan will no more be valid and thus execution may produce non-distinct results.Will this still be an issue if user use doesn't use a \"read uncommitted\" isolation level?  I suppose it should be ok for this case.  But even thoughI should add an isolation level check for this.  Just added that in the patchto continue discussing of this issue. In PostgreSQL there's no \"read uncommitted\". But that doesn't matter since a query can be prepared outside a transaction and executed within one or more subsequent transactions.   PostgreSQL has similar concept of allowing non-grouping expression as part of targetlist when those expressions can be proved to be functionally dependent on the GROUP BY clause. See check_functional_grouping() and its caller. I think, DISTINCT elimination should work on similar lines.2. For the same reason described in check_functional_grouping(), using unique indexes for eliminating DISTINCT should be discouraged. I checked the comments of check_functional_grouping,  the reason is  * Currently we only check to see if the rel has a primary key that is a * subset of the grouping_columns.  
We could also use plain unique constraints * if all their columns are known not null, but there's a problem: we need * to be able to represent the not-null-ness as part of the constraints added * to *constraintDeps.  FIXME whenever not-null constraints get represented * in pg_constraint.Actually I am doubtful the reason for pg_constraint since we still be able to get the not null information from relation->rd_attr->attrs[n].attnotnull which is just what this patch did.   The problem isn't whether not-null-less can be inferred or not, the problem is whether that can be guaranteed across planning and execution of query (prepare and execute for example.) The constraintDep machinary registers the constraints used for preparing plan and invalidates the plan if any of those constraints change after plan is created. 3. If you could eliminate DISTINCT you could similarly eliminate GROUP BY as wellThis is a good point.   The rules may have some different for join,  so I prefer to to focus on the current one so far.I doubt that since DISTINCT is ultimately carried out as Grouping operation. But anyway, I won't hang upon that.  4. The patch works only at the query level, but that functionality can be expanded generally to other places which add Unique/HashAggregate/Group nodes if the underlying relation can be proved to produce distinct rows. But that's probably more work since we will have to label paths with unique keys similar to pathkeys. Do you mean adding some information into PlannerInfo,  and when we create a node for Unique/HashAggregate/Group,  we can just create a dummy node? Not so much as PlannerInfo but something on lines of PathKey. See PathKey structure and related code. What I envision is PathKey class is also annotated with the information whether that PathKey implies uniqueness. E.g. a PathKey derived from a Primary index would imply uniqueness also. A PathKey derived from say Group operation also implies uniqueness. 
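The proposal of labelling paths with the keys they are provably unique on, by analogy with PathKeys, could look roughly like this. The names (`Path.unique_keys`, `needs_unique_node`) are invented for the example — nothing like this exists in the tree:

```python
# Illustrative sketch of annotating planner paths with "unique keys",
# analogous to pathkeys; all names here are hypothetical.

class Path:
    def __init__(self, desc, unique_keys=()):
        self.desc = desc
        # Column sets this path is guaranteed distinct on, e.g. from a
        # primary-key index scan or the output of a Group node.
        self.unique_keys = [frozenset(k) for k in unique_keys]

def needs_unique_node(path, distinct_cols):
    """A Unique/HashAggregate on distinct_cols is redundant if the path
    is already unique on some subset of those columns."""
    want = set(distinct_cols)
    return not any(uk <= want for uk in path.unique_keys)

pk_scan = Path("Index Scan using t_pkey", unique_keys=[("a", "b")])
seq_scan = Path("Seq Scan on t")

print(needs_unique_node(pk_scan, ["a", "b", "c"]))   # False: elide Unique
print(needs_unique_node(seq_scan, ["a", "b", "c"]))  # True: keep Unique
```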
Then just by looking at the underlying Path we would be able to say whether we need Group/Unique node on top of it or not. I think that would make it much wider usecase and a very useful optimization.--Best Wishes,Ashutosh Bapat", "msg_date": "Mon, 10 Feb 2020 21:52:40 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n>> On Sat, Feb 8, 2020 at 12:53 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> Do you mean adding some information into PlannerInfo, and when we create\n>> a node for Unique/HashAggregate/Group, we can just create a dummy node?\n\n> Not so much as PlannerInfo but something on lines of PathKey. See PathKey\n> structure and related code. What I envision is PathKey class is also\n> annotated with the information whether that PathKey implies uniqueness.\n> E.g. a PathKey derived from a Primary index would imply uniqueness also. A\n> PathKey derived from say Group operation also implies uniqueness. Then just\n> by looking at the underlying Path we would be able to say whether we need\n> Group/Unique node on top of it or not. I think that would make it much\n> wider usecase and a very useful optimization.\n\nFWIW, that doesn't seem like a very prudent approach to me, because it\nconfuses sorted-ness with unique-ness. 
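That distinction — sorted-ness versus unique-ness — is visible in a toy hash aggregate: the output is provably distinct on the grouping key even though no ordering is produced, which is why a sort-oriented PathKey label alone cannot carry the uniqueness property (illustrative model only):

```python
# Hash-based grouping: the result is unique on the key but carries no
# ordering guarantee, so uniqueness cannot ride along on sort pathkeys.

def hash_distinct(rows, key):
    groups = {}
    for r in rows:
        groups.setdefault(tuple(r[c] for c in key), r)  # first row wins
    return list(groups.values())    # insertion order, not sorted order

rows = [{"a": 3}, {"a": 1}, {"a": 3}, {"a": 2}]
out = hash_distinct(rows, ["a"])
print([r["a"] for r in out])        # [3, 1, 2] -- unique, but unsorted
```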
PathKeys are about sorting,\nbut it's possible to have uniqueness guarantees without having sorted\nanything, for instance via hashed grouping.\n\nI haven't looked at this patch, but I'd expect it to use infrastructure\nrelated to query_is_distinct_for(), and that doesn't deal in PathKeys.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Feb 2020 12:27:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Tue, Feb 11, 2020 at 12:22 AM Ashutosh Bapat <\nashutosh.bapat.oss@gmail.com> wrote:\n\n>\n>\n>>\n>> [PATCH] Erase the distinctClause if the result is unique by\n>> definition\n>>\n>\n> I forgot to mention this in the last round of comments. Your patch was\n> actually removing distictClause from the Query structure. Please avoid\n> doing that. If you remove it, you are also removing the evidence that this\n> Query had a DISTINCT clause in it.\n>\n\nYes, I removed it because it is the easiest way to do it. what is the\npurpose of keeping the evidence?\n\n\n>\n>\n>>\n>>\n>> However the patch as presented has some problems\n>> 1. What happens if the primary key constraint or NOT NULL constraint gets\n>> dropped between a prepare and execute? The plan will no more be valid and\n>> thus execution may produce non-distinct results.\n>>\n>> Will this still be an issue if user use doesn't use a \"read uncommitted\"\n>> isolation level? I suppose it should be ok for this case. But even\n>> though\n>> I should add an isolation level check for this. 
Just added that in the\n>> patch\n>> to continue discussing of this issue.\n>>\n>\n> In PostgreSQL there's no \"read uncommitted\".\n>\n\nThanks for the hint, I just noticed read uncommitted is treated as read\ncommitted\n in Postgresql.\n\n\n> But that doesn't matter since a query can be prepared outside a\n> transaction and executed within one or more subsequent transactions.\n>\n\nSuppose after a DDL, the prepared statement need to be re-parsed/planned\nif it is not executed or it will prevent the DDL to happen.\n\nThe following is my test.\n\npostgres=# create table t (a int primary key, b int not null, c int);\nCREATE TABLE\npostgres=# insert into t values(1, 1, 1), (2, 2, 2);\nINSERT 0 2\npostgres=# create unique index t_idx1 on t(b);\nCREATE INDEX\n\npostgres=# prepare st as select distinct b from t where c = $1;\nPREPARE\npostgres=# explain execute st(1);\n QUERY PLAN\n-------------------------------------------------\n Seq Scan on t (cost=0.00..1.02 rows=1 width=4)\n Filter: (c = 1)\n(2 rows)\n...\npostgres=# explain execute st(1);\n QUERY PLAN\n-------------------------------------------------\n Seq Scan on t (cost=0.00..1.02 rows=1 width=4)\n Filter: (c = $1)\n(2 rows)\n\n-- session 2\npostgres=# alter table t alter column b drop not null;\nALTER TABLE\n\n-- session 1:\npostgres=# explain execute st(1);\n QUERY PLAN\n-------------------------------------------------------------\n Unique (cost=1.03..1.04 rows=1 width=4)\n -> Sort (cost=1.03..1.04 rows=1 width=4)\n Sort Key: b\n -> Seq Scan on t (cost=0.00..1.02 rows=1 width=4)\n Filter: (c = $1)\n(5 rows)\n\n-- session 2\npostgres=# insert into t values (3, null, 3), (4, null, 3);\nINSERT 0 2\n\n-- session 1\npostgres=# execute st(3);\n b\n---\n\n(1 row)\n\nand if we prepare sql outside a transaction, and execute it in the\ntransaction, the other session can't drop the constraint until the\ntransaction is ended.\n\n\n\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>\n\nOn Tue, Feb 11, 2020 at 12:22 AM Ashutosh 
Bapat <ashutosh.bapat.oss@gmail.com> wrote:[PATCH] Erase the distinctClause if the result is unique by definitionI forgot to mention this in the last round of comments. Your patch was actually removing distictClause from the Query structure. Please avoid doing that. If you remove it, you are also removing the evidence that this Query had a DISTINCT clause in it.Yes, I removed it because it is the easiest way to do it.  what is the purpose of keeping the evidence?  However the patch as presented has some problems1. What happens if the primary key constraint or NOT NULL constraint gets dropped between a prepare and execute? The plan will no more be valid and thus execution may produce non-distinct results.Will this still be an issue if user use doesn't use a \"read uncommitted\" isolation level?  I suppose it should be ok for this case.  But even thoughI should add an isolation level check for this.  Just added that in the patchto continue discussing of this issue. In PostgreSQL there's no \"read uncommitted\". Thanks for the hint, I just noticed read uncommitted is treated as read committed in Postgresql.  But that doesn't matter since a query can be prepared outside a transaction and executed within one or more subsequent transactions. Suppose after a DDL, the prepared statement need to be re-parsed/planned if it is not executed or it will prevent the DDL to happen.  The following is my test. 
postgres=# create table t (a int primary key, b int not null,  c int);CREATE TABLEpostgres=# insert into t values(1, 1, 1), (2, 2, 2);INSERT 0 2postgres=# create unique index t_idx1 on t(b);CREATE INDEXpostgres=# prepare st as select distinct b from t where c = $1;PREPAREpostgres=# explain execute st(1);                   QUERY PLAN------------------------------------------------- Seq Scan on t  (cost=0.00..1.02 rows=1 width=4)   Filter: (c = 1)(2 rows)...postgres=# explain execute st(1);                   QUERY PLAN------------------------------------------------- Seq Scan on t  (cost=0.00..1.02 rows=1 width=4)   Filter: (c = $1)(2 rows)-- session 2postgres=# alter table t alter column b drop not null;ALTER TABLE-- session 1:postgres=# explain execute st(1);                         QUERY PLAN------------------------------------------------------------- Unique  (cost=1.03..1.04 rows=1 width=4)   ->  Sort  (cost=1.03..1.04 rows=1 width=4)         Sort Key: b         ->  Seq Scan on t  (cost=0.00..1.02 rows=1 width=4)               Filter: (c = $1)(5 rows)-- session 2postgres=# insert into t values (3, null, 3), (4, null, 3);INSERT 0 2-- session 1postgres=# execute st(3); b---(1 row)and if we prepare sql outside a transaction, and execute it in the transaction, the other session can't drop the constraint until the transaction is ended.  --Best Wishes,Ashutosh Bapat", "msg_date": "Tue, 11 Feb 2020 10:57:26 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Tue, Feb 11, 2020 at 10:57:26AM +0800, Andy Fan wrote:\n> On Tue, Feb 11, 2020 at 12:22 AM Ashutosh Bapat <\n> ashutosh.bapat.oss@gmail.com> wrote:\n>\n> > I forgot to mention this in the last round of comments. Your patch was\n> > actually removing distictClause from the Query structure. Please avoid\n> > doing that. 
If you remove it, you are also removing the evidence that this\n> > Query had a DISTINCT clause in it.\n> >\n>\n> Yes, I removed it because it is the easiest way to do it. what is the\n> purpose of keeping the evidence?\n>\n> >> However the patch as presented has some problems\n> >> 1. What happens if the primary key constraint or NOT NULL constraint gets\n> >> dropped between a prepare and execute? The plan will no more be valid and\n> >> thus execution may produce non-distinct results.\n>\n> > But that doesn't matter since a query can be prepared outside a\n> > transaction and executed within one or more subsequent transactions.\n> >\n>\n> Suppose after a DDL, the prepared statement need to be re-parsed/planned\n> if it is not executed or it will prevent the DDL to happen.\n>\n> The following is my test.\n>\n> postgres=# create table t (a int primary key, b int not null, c int);\n> CREATE TABLE\n> postgres=# insert into t values(1, 1, 1), (2, 2, 2);\n> INSERT 0 2\n> postgres=# create unique index t_idx1 on t(b);\n> CREATE INDEX\n>\n> postgres=# prepare st as select distinct b from t where c = $1;\n> PREPARE\n> postgres=# explain execute st(1);\n> QUERY PLAN\n> -------------------------------------------------\n> Seq Scan on t (cost=0.00..1.02 rows=1 width=4)\n> Filter: (c = 1)\n> (2 rows)\n> ...\n> postgres=# explain execute st(1);\n> QUERY PLAN\n> -------------------------------------------------\n> Seq Scan on t (cost=0.00..1.02 rows=1 width=4)\n> Filter: (c = $1)\n> (2 rows)\n>\n> -- session 2\n> postgres=# alter table t alter column b drop not null;\n> ALTER TABLE\n>\n> -- session 1:\n> postgres=# explain execute st(1);\n> QUERY PLAN\n> -------------------------------------------------------------\n> Unique (cost=1.03..1.04 rows=1 width=4)\n> -> Sort (cost=1.03..1.04 rows=1 width=4)\n> Sort Key: b\n> -> Seq Scan on t (cost=0.00..1.02 rows=1 width=4)\n> Filter: (c = $1)\n> (5 rows)\n>\n> -- session 2\n> postgres=# insert into t values (3, null, 3), (4, null, 
3);\n> INSERT 0 2\n>\n> -- session 1\n> postgres=# execute st(3);\n> b\n> ---\n>\n> (1 row)\n>\n> and if we prepare sql outside a transaction, and execute it in the\n> transaction, the other session can't drop the constraint until the\n> transaction is ended.\n\nAnd what if you create a view on top of a query containing a distinct clause\nrather than using prepared statements? FWIW your patch doesn't handle such\ncase at all, without even needing to drop constraints:\n\nCREATE TABLE t (a int primary key, b int not null, c int);\nINSERT INTO t VALUEs(1, 1, 1), (2, 2, 2);\nCREATE UNIQUE INDEX t_idx1 on t(b);\nCREATE VIEW v1 AS SELECT DISTINCT b FROM t;\nEXPLAIN SELECT * FROM v1;\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\n\nI also think this is not the right way to handle this optimization.\n\n\n", "msg_date": "Tue, 11 Feb 2020 08:57:51 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Tue, Feb 11, 2020 at 3:56 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Tue, Feb 11, 2020 at 10:57:26AM +0800, Andy Fan wrote:\n> > On Tue, Feb 11, 2020 at 12:22 AM Ashutosh Bapat <\n> > ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > > I forgot to mention this in the last round of comments. Your patch was\n> > > actually removing distictClause from the Query structure. Please avoid\n> > > doing that. If you remove it, you are also removing the evidence that\n> this\n> > > Query had a DISTINCT clause in it.\n> > >\n> >\n> > Yes, I removed it because it is the easiest way to do it. what is the\n> > purpose of keeping the evidence?\n> >\n> > >> However the patch as presented has some problems\n> > >> 1. What happens if the primary key constraint or NOT NULL constraint\n> gets\n> > >> dropped between a prepare and execute? 
The plan will no more be valid\n> and\n> > >> thus execution may produce non-distinct results.\n> >\n> > > But that doesn't matter since a query can be prepared outside a\n> > > transaction and executed within one or more subsequent transactions.\n> > >\n> >\n> > Suppose after a DDL, the prepared statement need to be re-parsed/planned\n> > if it is not executed or it will prevent the DDL to happen.\n> >\n> > The following is my test.\n> >\n> > postgres=# create table t (a int primary key, b int not null, c int);\n> > CREATE TABLE\n> > postgres=# insert into t values(1, 1, 1), (2, 2, 2);\n> > INSERT 0 2\n> > postgres=# create unique index t_idx1 on t(b);\n> > CREATE INDEX\n> >\n> > postgres=# prepare st as select distinct b from t where c = $1;\n> > PREPARE\n> > postgres=# explain execute st(1);\n> > QUERY PLAN\n> > -------------------------------------------------\n> > Seq Scan on t (cost=0.00..1.02 rows=1 width=4)\n> > Filter: (c = 1)\n> > (2 rows)\n> > ...\n> > postgres=# explain execute st(1);\n> > QUERY PLAN\n> > -------------------------------------------------\n> > Seq Scan on t (cost=0.00..1.02 rows=1 width=4)\n> > Filter: (c = $1)\n> > (2 rows)\n> >\n> > -- session 2\n> > postgres=# alter table t alter column b drop not null;\n> > ALTER TABLE\n> >\n> > -- session 1:\n> > postgres=# explain execute st(1);\n> > QUERY PLAN\n> > -------------------------------------------------------------\n> > Unique (cost=1.03..1.04 rows=1 width=4)\n> > -> Sort (cost=1.03..1.04 rows=1 width=4)\n> > Sort Key: b\n> > -> Seq Scan on t (cost=0.00..1.02 rows=1 width=4)\n> > Filter: (c = $1)\n> > (5 rows)\n> >\n> > -- session 2\n> > postgres=# insert into t values (3, null, 3), (4, null, 3);\n> > INSERT 0 2\n> >\n> > -- session 1\n> > postgres=# execute st(3);\n> > b\n> > ---\n> >\n> > (1 row)\n> >\n> > and if we prepare sql outside a transaction, and execute it in the\n> > transaction, the other session can't drop the constraint until the\n> > transaction is ended.\n>\n> And what 
if you create a view on top of a query containing a distinct\n> clause\n> rather than using prepared statements? FWIW your patch doesn't handle such\n> case at all, without even needing to drop constraints:\n\n\n>\nCREATE TABLE t (a int primary key, b int not null, c int);\n> INSERT INTO t VALUEs(1, 1, 1), (2, 2, 2);\n> CREATE UNIQUE INDEX t_idx1 on t(b);\n> CREATE VIEW v1 AS SELECT DISTINCT b FROM t;\n> EXPLAIN SELECT * FROM v1;\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n>\n>\nThanks for pointing it out. This is unexpected based on my current\nknowledge, I\nwill check that.\n\n\n> I also think this is not the right way to handle this optimization.\n>\n\nI started to check query_is_distinct_for when Tom point it out, but still\ndoesn't\nunderstand the context fully. I will take your finding with this as well.\n\nOn Tue, Feb 11, 2020 at 3:56 PM Julien Rouhaud <rjuju123@gmail.com> wrote:On Tue, Feb 11, 2020 at 10:57:26AM +0800, Andy Fan wrote:\n> On Tue, Feb 11, 2020 at 12:22 AM Ashutosh Bapat <\n> ashutosh.bapat.oss@gmail.com> wrote:\n>\n> > I forgot to mention this in the last round of comments. Your patch was\n> > actually removing distictClause from the Query structure. Please avoid\n> > doing that. If you remove it, you are also removing the evidence that this\n> > Query had a DISTINCT clause in it.\n> >\n>\n> Yes, I removed it because it is the easiest way to do it.  what is the\n> purpose of keeping the evidence?\n>\n> >> However the patch as presented has some problems\n> >> 1. What happens if the primary key constraint or NOT NULL constraint gets\n> >> dropped between a prepare and execute? 
The plan will no more be valid and\n> >> thus execution may produce non-distinct results.\n>\n> > But that doesn't matter since a query can be prepared outside a\n> > transaction and executed within one or more subsequent transactions.\n> >\n>\n> Suppose after a DDL, the prepared statement need to be re-parsed/planned\n> if it is not executed or it will prevent the DDL to happen.\n>\n> The following is my test.\n>\n> postgres=# create table t (a int primary key, b int not null,  c int);\n> CREATE TABLE\n> postgres=# insert into t values(1, 1, 1), (2, 2, 2);\n> INSERT 0 2\n> postgres=# create unique index t_idx1 on t(b);\n> CREATE INDEX\n>\n> postgres=# prepare st as select distinct b from t where c = $1;\n> PREPARE\n> postgres=# explain execute st(1);\n>                    QUERY PLAN\n> -------------------------------------------------\n>  Seq Scan on t  (cost=0.00..1.02 rows=1 width=4)\n>    Filter: (c = 1)\n> (2 rows)\n> ...\n> postgres=# explain execute st(1);\n>                    QUERY PLAN\n> -------------------------------------------------\n>  Seq Scan on t  (cost=0.00..1.02 rows=1 width=4)\n>    Filter: (c = $1)\n> (2 rows)\n>\n> -- session 2\n> postgres=# alter table t alter column b drop not null;\n> ALTER TABLE\n>\n> -- session 1:\n> postgres=# explain execute st(1);\n>                          QUERY PLAN\n> -------------------------------------------------------------\n>  Unique  (cost=1.03..1.04 rows=1 width=4)\n>    ->  Sort  (cost=1.03..1.04 rows=1 width=4)\n>          Sort Key: b\n>          ->  Seq Scan on t  (cost=0.00..1.02 rows=1 width=4)\n>                Filter: (c = $1)\n> (5 rows)\n>\n> -- session 2\n> postgres=# insert into t values (3, null, 3), (4, null, 3);\n> INSERT 0 2\n>\n> -- session 1\n> postgres=# execute st(3);\n>  b\n> ---\n>\n> (1 row)\n>\n> and if we prepare sql outside a transaction, and execute it in the\n> transaction, the other session can't drop the constraint until the\n> transaction is ended.\n\nAnd what if you create a 
view on top of a query containing a distinct clause\nrather than using prepared statements?  FWIW your patch doesn't handle such\ncase at all, without even needing to drop constraints:  \nCREATE TABLE t (a int primary key, b int not null,  c int);\nINSERT INTO t VALUEs(1, 1, 1), (2, 2, 2);\nCREATE UNIQUE INDEX t_idx1 on t(b);\nCREATE VIEW v1 AS SELECT DISTINCT b FROM t;\nEXPLAIN SELECT * FROM v1;\nserver closed the connection unexpectedly\n        This probably means the server terminated abnormally\n        before or while processing the request.\nThanks for pointing it out.  This is unexpected based on my current knowledge, I will check that.\nI also think this is not the right way to handle this optimization. I started to check query_is_distinct_for when Tom pointed it out, but still don't understand the context fully. I will look into your finding as well.", "msg_date": "Tue, 11 Feb 2020 17:17:15 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Tue, Feb 11, 2020 at 3:56 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> >\n> > and if we prepare sql outside a transaction, and execute it in the\n> > transaction, the other session can't drop the constraint until the\n> > transaction is ended.\n>\n> And what if you create a view on top of a query containing a distinct\n> clause\n> rather than using prepared statements?
FWIW your patch doesn't handle such\n> case at all, without even needing to drop constraints:\n>\n> CREATE TABLE t (a int primary key, b int not null, c int);\n> INSERT INTO t VALUEs(1, 1, 1), (2, 2, 2);\n> CREATE UNIQUE INDEX t_idx1 on t(b);\n> CREATE VIEW v1 AS SELECT DISTINCT b FROM t;\n> EXPLAIN SELECT * FROM v1;\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n>\n>\nThis error can be fixed with\n\n- num_of_rtables = bms_num_members(non_semi_anti_relids);\n+ num_of_rtables = list_length(query->rtable);\n\nThis test case also be added into the patch.\n\n\n> I also think this is not the right way to handle this optimization.\n>\n\ndo you have any other concerns?", "msg_date": "Tue, 11 Feb 2020 20:14:14 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Tue, Feb 11, 2020 at 08:14:14PM +0800, Andy Fan wrote:\n> On Tue, Feb 11, 2020 at 3:56 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> > >\n> > > and if we prepare sql outside a transaction, and execute it in the\n> > > transaction, the other session can't drop the constraint until the\n> > > transaction is ended.\n> >\n> > And what if you create a view on top of a query containing a distinct\n> > clause\n> > rather than using prepared statements? 
FWIW your patch doesn't handle such\n> > case at all, without even needing to drop constraints:\n> >\n> > CREATE TABLE t (a int primary key, b int not null, c int);\n> > INSERT INTO t VALUEs(1, 1, 1), (2, 2, 2);\n> > CREATE UNIQUE INDEX t_idx1 on t(b);\n> > CREATE VIEW v1 AS SELECT DISTINCT b FROM t;\n> > EXPLAIN SELECT * FROM v1;\n> > server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> >\n> >\n> This error can be fixed with\n> \n> - num_of_rtables = bms_num_members(non_semi_anti_relids);\n> + num_of_rtables = list_length(query->rtable);\n> \n> This test case also be added into the patch.\n> \n> \n> > I also think this is not the right way to handle this optimization.\n> >\n> \n> do you have any other concerns?\n\nYes, it seems to be broken as soon as you alter the view's underlying table:\n\n=# CREATE TABLE t (a int primary key, b int not null, c int);\nCREATE TABLE\n\n=# INSERT INTO t VALUEs(1, 1, 1), (2, 2, 2);\nINSERT 0 2\n\n=# CREATE UNIQUE INDEX t_idx1 on t(b);\nCREATE INDEX\n\n=# CREATE VIEW v1 AS SELECT DISTINCT b FROM t;\nCREATE VIEW\n\n=# EXPLAIN SELECT * FROM v1;\n QUERY PLAN\n-------------------------------------------------\n Seq Scan on t (cost=0.00..1.02 rows=2 width=4)\n(1 row)\n\n=# EXPLAIN SELECT DISTINCT b FROM t;\n QUERY PLAN\n-------------------------------------------------\n Seq Scan on t (cost=0.00..1.02 rows=2 width=4)\n(1 row)\n\n=# ALTER TABLE t ALTER COLUMN b DROP NOT NULL;\nALTER TABLE\n\n=# EXPLAIN SELECT * FROM v1;\n QUERY PLAN\n-------------------------------------------------\n Seq Scan on t (cost=0.00..1.02 rows=2 width=4)\n(1 row)\n\n=# EXPLAIN SELECT DISTINCT b FROM t;\n QUERY PLAN\n-------------------------------------------------------------\n Unique (cost=1.03..1.04 rows=2 width=4)\n -> Sort (cost=1.03..1.03 rows=2 width=4)\n Sort Key: b\n -> Seq Scan on t (cost=0.00..1.02 rows=2 width=4)\n(4 rows)\n\n\n\n", "msg_date": "Tue, 11 Feb 
2020 13:43:11 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Mon, Feb 10, 2020 at 10:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> >> On Sat, Feb 8, 2020 at 12:53 PM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n> >> Do you mean adding some information into PlannerInfo, and when we\n> create\n> >> a node for Unique/HashAggregate/Group, we can just create a dummy node?\n>\n> > Not so much as PlannerInfo but something on lines of PathKey. See PathKey\n> > structure and related code. What I envision is PathKey class is also\n> > annotated with the information whether that PathKey implies uniqueness.\n> > E.g. a PathKey derived from a Primary index would imply uniqueness also.\n> A\n> > PathKey derived from say Group operation also implies uniqueness. Then\n> just\n> > by looking at the underlying Path we would be able to say whether we need\n> > Group/Unique node on top of it or not. I think that would make it much\n> > wider usecase and a very useful optimization.\n>\n> FWIW, that doesn't seem like a very prudent approach to me, because it\n> confuses sorted-ness with unique-ness. PathKeys are about sorting,\n> but it's possible to have uniqueness guarantees without having sorted\n> anything, for instance via hashed grouping.\n>\n\n> I haven't looked at this patch, but I'd expect it to use infrastructure\n> related to query_is_distinct_for(), and that doesn't deal in PathKeys.\n>\n> Thanks for the pointer. I think there's another problem with my approach.\nPathKeys are specific to paths since the order of the result depends upon\nthe Path. But uniqueness is a property of the result i.e. relation and thus\nshould be attached to RelOptInfo as query_is_distinct_for() does. 
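For what it's worth, the uniqueness rule this thread keeps circling — a unique index whose columns are all NOT NULL proves the rows distinct on those columns — can be sketched outside the planner. This is an illustrative sketch only; the function and data shapes are invented stand-ins, not PostgreSQL's actual query_is_distinct_for() API:

```python
# Illustrative sketch only (not PostgreSQL code): rows are provably
# distinct on `required_cols` if some unique index covers a subset of
# them and every column of that index is NOT NULL (a unique index
# still admits multiple NULL rows, so nullability matters).
def is_provably_distinct(required_cols, unique_indexes, not_null_cols):
    required = set(required_cols)
    for index_cols in unique_indexes:
        cols = set(index_cols)
        if cols <= set(not_null_cols) and cols <= required:
            return True
    return False

# mirrors the thread's table: t(a int primary key, b int not null, c int)
# with a unique index on b
indexes = [("a",), ("b",)]
not_null = {"a", "b"}
print(is_provably_distinct({"b"}, indexes, not_null))  # True: DISTINCT b is redundant
print(is_provably_distinct({"c"}, indexes, not_null))  # False: a Unique node is still needed
```

Dropping the NOT NULL on b, as in the sessions quoted above, removes b from the not-null set and the check correctly fails again — which is exactly why a cached decision must be invalidated by such DDL.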
I think\nuniquness should bubble up the RelOptInfo tree, annotating each RelOptInfo\nwith the minimum set of TLEs which make the result from that relation\nunique. Thus we could eliminate extra Group/Unique node if the underlying\nRelOptInfo's unique column set is subset of required uniqueness.\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Tue, 11 Feb 2020 21:59:06 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Tue, Feb 11, 2020 at 8:27 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Tue, Feb 11, 2020 at 12:22 AM Ashutosh Bapat <\n> ashutosh.bapat.oss@gmail.com> wrote:\n>\n>>\n>>\n>>>\n>>> [PATCH] Erase the distinctClause if the result is unique by\n>>> definition\n>>>\n>>\n>> I forgot to mention this in the last round of comments. Your patch was\n>> actually removing distictClause from the Query structure. Please avoid\n>> doing that. If you remove it, you are also removing the evidence that this\n>> Query had a DISTINCT clause in it.\n>>\n>\n> Yes, I removed it because it is the easiest way to do it. what is the\n> purpose of keeping the evidence?\n>\n\nJulien's example provides an explanation for this. The Query structure is\nserialised into a view definition. Removing distinctClause from there means\nthat the view will never try to produce unique results.\n\n>\n>\n\n\n>\n> Suppose after a DDL, the prepared statement need to be re-parsed/planned\n> if it is not executed or it will prevent the DDL to happen.\n>\n\nThe query will be replanned.
I am not sure about reparsed though.\n\n\n>\n>\n> -- session 2\n> postgres=# alter table t alter column b drop not null;\n> ALTER TABLE\n>\n> -- session 1:\n> postgres=# explain execute st(1);\n> QUERY PLAN\n> -------------------------------------------------------------\n> Unique (cost=1.03..1.04 rows=1 width=4)\n> -> Sort (cost=1.03..1.04 rows=1 width=4)\n> Sort Key: b\n> -> Seq Scan on t (cost=0.00..1.02 rows=1 width=4)\n> Filter: (c = $1)\n> (5 rows)\n>\n\nSince this prepared statement is parameterised PostgreSQL is replanning it\nevery time it gets executed. It's not using a stored prepared plan. Try\nwithout parameters. Also make sure that a prepared plan is used for\nexecution and not a new plan.\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Tue, 11 Feb 2020 22:06:17 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Tue, Feb 11, 2020 at 10:06:17PM +0530, Ashutosh Bapat wrote:\n> On Tue, Feb 11, 2020 at 8:27 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> > On Tue, Feb 11, 2020 at 12:22 AM Ashutosh Bapat <\n> > ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> >>>\n> >>> [PATCH] Erase the distinctClause if the result is unique by\n> >>> definition\n> >>\n> >> I forgot to mention this in the last round of comments. Your patch was\n> >> actually removing distictClause from the Query structure. Please avoid\n> >> doing that. If you remove it, you are also removing the evidence that this\n> >> Query had a DISTINCT clause in it.\n> >\n> > Yes, I removed it because it is the easiest way to do it. what is the\n> > purpose of keeping the evidence?\n> >\n>\n> Julien's example provides an explanation for this. The Query structure is\n> serialised into a view definition. Removing distinctClause from there means\n> that the view will never try to produce unique results.\n\nAnd also I think that this approach will have a lot of other unexpected side\neffects.
Isn't changing the Query going to affect pg_stat_statements queryid\ncomputing for instance?\n\n\n", "msg_date": "Thu, 13 Feb 2020 10:39:45 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Thu, Feb 13, 2020 at 5:39 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Tue, Feb 11, 2020 at 10:06:17PM +0530, Ashutosh Bapat wrote:\n> > On Tue, Feb 11, 2020 at 8:27 AM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n> >\n> > > On Tue, Feb 11, 2020 at 12:22 AM Ashutosh Bapat <\n> > > ashutosh.bapat.oss@gmail.com> wrote:\n> > >\n> > >>>\n> > >>> [PATCH] Erase the distinctClause if the result is unique by\n> > >>> definition\n> > >>\n> > >> I forgot to mention this in the last round of comments. Your patch was\n> > >> actually removing distictClause from the Query structure. Please avoid\n> > >> doing that. If you remove it, you are also removing the evidence that\n> this\n> > >> Query had a DISTINCT clause in it.\n> > >\n> > > Yes, I removed it because it is the easiest way to do it. what is the\n> > > purpose of keeping the evidence?\n> > >\n> >\n> > Julien's example provides an explanation for this. The Query structure is\n> > serialised into a view definition. Removing distinctClause from there\n> means\n> > that the view will never try to produce unique results.\n>\n> And also I think that this approach will have a lot of other unexpected\n> side\n> effects. Isn't changing the Query going to affect pg_stat_statements\n> queryid\n> computing for instance?\n>\n\nThanks, the 2 factors above are pretty valuable. 
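Julien's concern can be illustrated generically: a queryid is a fingerprint of the post-parse-analysis query tree, so mutating that tree changes the id. The following is a toy stand-in, not pg_stat_statements' actual hashing:

```python
import hashlib

# Toy fingerprint over a dict standing in for a Query tree: any
# in-place change (like silently erasing the distinct clause) yields
# a different id, splitting one logical query's stats in two.
def query_fingerprint(tree):
    return hashlib.sha256(repr(sorted(tree.items())).encode()).hexdigest()[:12]

with_distinct = {"rtable": ("t",), "target": ("b",), "hasDistinct": True}
erased = {**with_distinct, "hasDistinct": False}
print(query_fingerprint(with_distinct) != query_fingerprint(erased))  # True
```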
so erasing the\ndistinctClause is not reasonable, I will try another way.", "msg_date": "Thu, 13 Feb 2020 19:36:52 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Wed, Feb 12, 2020 at 12:36 AM Ashutosh Bapat <\nashutosh.bapat.oss@gmail.com> wrote:\n\n>\n>\n> On Tue, Feb 11, 2020 at 8:27 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>>\n>>\n>> On Tue, Feb 11, 2020 at 12:22 AM Ashutosh Bapat <\n>> ashutosh.bapat.oss@gmail.com> wrote:\n>>\n>>>\n>>>\n>>>>\n>>>> [PATCH] Erase the distinctClause if the result is unique by\n>>>> definition\n>>>>\n>>>\n>>> I forgot to mention this in the last round of comments. Your patch was\n>>> actually removing distictClause from the Query structure. Please avoid\n>>> doing that. If you remove it, you are also removing the evidence that this\n>>> Query had a DISTINCT clause in it.\n>>>\n>>\n>> Yes, I removed it because it is the easiest way to do it. what is the\n>> purpose of keeping the evidence?\n>>\n>\n> Julien's example provides an explanation for this. The Query structure is\n> serialised into a view definition. Removing distinctClause from there means\n> that the view will never try to produce unique results.\n>\n>>\n>>\n>\nActually it is not true. If a view is used in the query, the definition\nwill be *copied*\ninto the query tree. so if we modify the query tree, the definition of the\nview never\ntouched.
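The "copied" behaviour Andy describes can be shown with a generic stand-in — Python's deepcopy playing the role of the planner copying a stored tree before working on it (purely illustrative, not PostgreSQL's copyObject()):

```python
import copy

# The stored definition (view or cached plan source) is deep-copied
# before planning, so planner-side edits never leak back into it.
stored_tree = {"distinctClause": ["b"], "rtable": ["t"]}

working_tree = copy.deepcopy(stored_tree)
working_tree["distinctClause"] = []   # planner-local simplification

print(stored_tree["distinctClause"])  # ['b'] -- the stored tree is untouched
```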
The issue of Julien reported is because of a typo error.\n\n-- session 2\n>> postgres=# alter table t alter column b drop not null;\n>> ALTER TABLE\n>>\n>> -- session 1:\n>> postgres=# explain execute st(1);\n>> QUERY PLAN\n>> -------------------------------------------------------------\n>> Unique (cost=1.03..1.04 rows=1 width=4)\n>> -> Sort (cost=1.03..1.04 rows=1 width=4)\n>> Sort Key: b\n>> -> Seq Scan on t (cost=0.00..1.02 rows=1 width=4)\n>> Filter: (c = $1)\n>> (5 rows)\n>>\n>\n> Since this prepared statement is parameterised PostgreSQL is replanning it\n> every time it gets executed. It's not using a stored prepared plan. Try\n> without parameters. Also make sure that a prepared plan is used for\n> execution and not a new plan.\n>\n\nEven for parameterised prepared statement, it is still possible to\ngenerate an generic\nplan. so it will not replanning every time. But no matter generic plan or\nnot, after a DDL like\nchanging the NOT NULL constraints. pg will generated a plan based on the\nstored query\ntree. However, the query tree will be *copied* again to generate a new\nplan. so even I\nmodified the query tree, everything will be ok as well.\n\nAt last, I am agreed with that modifying the query tree is not a good\nidea.\nso my updated patch doesn't use it any more.", "msg_date": "Mon, 24 Feb 2020 20:38:58 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "Hi All:\n\nHere is the updated patch.
It used some functions from\nquery_is_distinct_for.\nI check the query's distinctness in create_distinct_paths, if it is\ndistinct already,\nit will not generate the paths for that. so at last the query tree is\nuntouched.\n\nPlease see if you have any comments. Thanks\n\nOn Mon, Feb 24, 2020 at 8:38 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Wed, Feb 12, 2020 at 12:36 AM Ashutosh Bapat <\n> ashutosh.bapat.oss@gmail.com> wrote:\n>\n>>\n>>\n>> On Tue, Feb 11, 2020 at 8:27 AM Andy Fan <zhihui.fan1213@gmail.com>\n>> wrote:\n>>\n>>>\n>>>\n>>> On Tue, Feb 11, 2020 at 12:22 AM Ashutosh Bapat <\n>>> ashutosh.bapat.oss@gmail.com> wrote:\n>>>\n>>>>\n>>>>\n>>>>>\n>>>>> [PATCH] Erase the distinctClause if the result is unique by\n>>>>> definition\n>>>>>\n>>>>\n>>>> I forgot to mention this in the last round of comments. Your patch was\n>>>> actually removing distictClause from the Query structure. Please avoid\n>>>> doing that. If you remove it, you are also removing the evidence that this\n>>>> Query had a DISTINCT clause in it.\n>>>>\n>>>\n>>> Yes, I removed it because it is the easiest way to do it. what is the\n>>> purpose of keeping the evidence?\n>>>\n>>\n>> Julien's example provides an explanation for this. The Query structure is\n>> serialised into a view definition. Removing distinctClause from there means\n>> that the view will never try to produce unique results.\n>>\n>>>\n>>>\n>>\n> Actually it is not true. If a view is used in the query, the definition\n> will be *copied*\n> into the query tree. so if we modify the query tree, the definition of\n> the view never\n> touched.
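The shape of the approach described at the top of this message — decide distinctness at path-generation time and skip building the extra node — might look like this in pseudo-Python (all names here are stand-ins for the real planner functions, not PostgreSQL's API):

```python
# Sketch of the approach: leave the Query untouched and, in
# create_distinct_paths-style code, skip the Sort/Unique (or
# HashAggregate) paths when the input is already provably distinct.
def create_distinct_paths_sketch(input_paths, distinct_cols, is_provably_distinct):
    if is_provably_distinct(distinct_cols):
        return list(input_paths)  # input paths already satisfy DISTINCT
    return [("Unique", p) for p in input_paths]

paths = ["SeqScan(t)"]
print(create_distinct_paths_sketch(paths, {"b"}, lambda cols: True))   # ['SeqScan(t)']
print(create_distinct_paths_sketch(paths, {"c"}, lambda cols: False))  # [('Unique', 'SeqScan(t)')]
```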
The issue of Julien reported is because of a typo error.\n>\n> -- session 2\n>>> postgres=# alter table t alter column b drop not null;\n>>> ALTER TABLE\n>>>\n>>> -- session 1:\n>>> postgres=# explain execute st(1);\n>>> QUERY PLAN\n>>> -------------------------------------------------------------\n>>> Unique (cost=1.03..1.04 rows=1 width=4)\n>>> -> Sort (cost=1.03..1.04 rows=1 width=4)\n>>> Sort Key: b\n>>> -> Seq Scan on t (cost=0.00..1.02 rows=1 width=4)\n>>> Filter: (c = $1)\n>>> (5 rows)\n>>>\n>>\n>> Since this prepared statement is parameterised PostgreSQL is replanning\n>> it every time it gets executed. It's not using a stored prepared plan. Try\n>> without parameters. Also make sure that a prepared plan is used for\n>> execution and not a new plan.\n>>\n>\n> Even for parameterised prepared statement, it is still possible to\n> generate an generic\n> plan. so it will not replanning every time. But no matter generic plan or\n> not, after a DDL like\n> changing the NOT NULL constraints. pg will generated a plan based on the\n> stored query\n> tree. However, the query tree will be *copied* again to generate a new\n> plan. so even I\n> modified the query tree, everything will be ok as well.\n>\n> At last, I am agreed with that modifying the query tree is not a good\n> idea.\n> so my updated patch doesn't use it any more.\n>", "msg_date": "Mon, 24 Feb 2020 20:44:26 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> Please see if you have any comments. Thanks\n\nThe cfbot isn't at all happy with this. 
Its linux build is complaining\nabout a possibly-uninitialized variable, and then giving up:\n\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/656722993\n\nThe Windows build isn't using -Werror, but it is crashing in at least\ntwo different spots in the regression tests:\n\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.81778\n\nI've not attempted to identify the cause of that.\n\nAt a high level, I'm a bit disturbed that this focuses only on DISTINCT\nand doesn't (appear to) have any equivalent intelligence for GROUP BY,\nthough surely that offers much the same opportunities for optimization.\nIt seems like it'd be worthwhile to take a couple steps back and see\nif we couldn't recast the logic to work for both.\n\nSome other random comments:\n\n* Don't especially like the way you broke query_is_distinct_for()\ninto two functions, especially when you then introduced a whole\nlot of other code in between. That's just making reviewer's job\nharder to see what changed. It makes the comments a bit disjointed\ntoo, that is where you even had any. (Zero introductory comment\nfor query_is_distinct_agg is *not* up to project coding standards.\nThere are a lot of other undercommented places in this patch, too.)\n\n* Definitely don't like having query_distinct_through_join re-open\nall the relations. The data needed for this should get absorbed\nwhile plancat.c has the relations open at the beginning. (Non-nullness\nof columns, in particular, seems like it'll be useful for other\npurposes; I'm a bit surprised the planner isn't using that already.)\n\n* In general, query_distinct_through_join seems hugely messy, expensive,\nand not clearly correct. 
If it is correct, the existing comments sure\naren't enough to explain what it is doing or why.\n\n* Not entirely convinced that a new GUC is needed for this, but if\nit is, you have to document it.\n\n* I wouldn't bother with bms_array_free(), nor with any of the other\ncleanup you've got at the bottom of query_distinct_through_join.\nThe planner leaks *lots* of memory, and this function isn't going to\nbe called so many times that it'll move the needle.\n\n* There seem to be some pointless #include additions, eg in planner.c\nthe added code doesn't look to justify any of them. Please also\navoid unnecessary whitespace changes, those also slow down reviewing.\n\n* I see you decided to add a new regression test file select_distinct_2.\nThat's a poor choice of name because it conflicts with our rules for the\nnaming of alternative output files. Besides which, you forgot to plug\nit into the test schedule files, so it isn't actually getting run.\nIs there a reason not to just add the new test cases to select_distinct?\n\n* There are some changes in existing regression cases that aren't\nvisibly related to the stated purpose of the patch, eg it now\nnotices that \"select distinct max(unique2) from tenk1\" doesn't\nrequire an explicit DISTINCT step. That's not wrong, but I wonder\nif maybe you should subdivide this patch into more than one patch,\nbecause that must be coming from some separate change. I'm also\nwondering what caused the plan change in expected/join.out.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 01 Mar 2020 15:46:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "Thank you Tom for the review!\n\nOn Mon, Mar 2, 2020 at 4:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > Please see if you have any comments. Thanks\n>\n> The cfbot isn't at all happy with this. 
Its linux build is complaining\n> about a possibly-uninitialized variable, and then giving up:\n>\n> https://travis-ci.org/postgresql-cfbot/postgresql/builds/656722993\n>\n> The Windows build isn't using -Werror, but it is crashing in at least\n> two different spots in the regression tests:\n>\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.81778\n>\n> I've not attempted to identify the cause of that.\n>\n>\nBefore I submit the patch, I can make sure \"make check-world\" is\nsuccessful, but\nsince the compile option is not same, so I didn't catch\nthe possibly-uninitialized\nvariable. As for the crash on the windows, I didn't get the enough\ninformation\nnow, I will find a windows server and reproduce the cases.\n\nI just found the link http://commitfest.cputube.org/ this morning, I will\nmake sure\nthe next patch can go pass this test.\n\n\nAt a high level, I'm a bit disturbed that this focuses only on DISTINCT\n> and doesn't (appear to) have any equivalent intelligence for GROUP BY,\n> though surely that offers much the same opportunities for optimization.\n> It seems like it'd be worthwhile to take a couple steps back and see\n> if we couldn't recast the logic to work for both.\n>\n>\nOK, Looks group by is a bit harder than distinct since the aggregation\nfunction.\nI will go through the code to see why to add this logic.\n\n\n> Some other random comments:\n>\n> * Don't especially like the way you broke query_is_distinct_for()\n> into two functions, especially when you then introduced a whole\n> lot of other code in between.\n\n\nThis is not expected by me until you point it out. In this case, I have to\nbreak the query_is_distinct_for to two functions, but it true that we\nshould put the two functions together.\n\n\nThat's just making reviewer's job\n> harder to see what changed. It makes the comments a bit disjointed\n> too, that is where you even had any. 
(Zero introductory comment\n> for query_is_distinct_agg is *not* up to project coding standards.\n> There are a lot of other undercommented places in this patch, too.)\n>\n> * Definitely don't like having query_distinct_through_join re-open\n> all the relations. The data needed for this should get absorbed\n> while plancat.c has the relations open at the beginning. (Non-nullness\n> of columns, in particular, seems like it'll be useful for other\n> purposes; I'm a bit surprised the planner isn't using that already.)\n>\n\nI can add new attributes to RelOptInfo and fill the value in\nget_relation_info\ncall.\n\n\n> * In general, query_distinct_through_join seems hugely messy, expensive,\n> and not clearly correct. If it is correct, the existing comments sure\n> aren't enough to explain what it is doing or why.\n\n\n>\nRemoving the relation_open call can make it a bit simpler, I will try more\ncomment to make it clearer in the following patch.\n\n\n> * There seem to be some pointless #include additions, eg in planner.c\n> the added code doesn't look to justify any of them. Please also\n> avoid unnecessary whitespace changes, those also slow down reviewing.\n>\n>\nThat may because I added the header file some time 1 and then refactored\nthe code later then forget the remove the header file accordingly. Do we\nneed\nto relay on experience to tell if the header file is needed or not, or do\nhave have\nany code to tell it automatically?\n\n\n> * I see you decided to add a new regression test file select_distinct_2.\n> That's a poor choice of name because it conflicts with our rules for the\n> naming of alternative output files. Besides which, you forgot to plug\n> it into the test schedule files, so it isn't actually getting run.\n> Is there a reason not to just add the new test cases to select_distinct?\n>\n>\nAdding it to select_distinct.sql is ok for me as well. 
Actually I have no\nobvious reason to add the new file.\n\n\n> * There are some changes in existing regression cases that aren't\n> visibly related to the stated purpose of the patch, eg it now\n> notices that \"select distinct max(unique2) from tenk1\" doesn't\n> require an explicit DISTINCT step.  That's not wrong, but I wonder\n> if maybe you should subdivide this patch into more than one patch,\n> because that must be coming from some separate change.  I'm also\n> wondering what caused the plan change in expected/join.out.\n>\n\nIn my view it should be in the same patch; the logic here is that we\nhave DISTINCT in the SQL and the query is already distinct because of the max\nfunction (the rule is defined in query_is_distinct_agg, which is split\nfrom\nthe original query_is_distinct_for function).\n\n\n> regards, tom lane\n>\n
", "msg_date": "Tue, 3 Mar 2020 01:24:57 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Tue, Mar 3, 2020 at 1:24 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Thank you Tom for the review!\n>\n> On Mon, Mar 2, 2020 at 4:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Andy Fan <zhihui.fan1213@gmail.com> writes:\n>> > Please see if you have any comments.   Thanks\n>>\n>> The cfbot isn't at all happy with this.  Its linux build is complaining\n>> about a possibly-uninitialized variable, and then giving up:\n>>\n>> https://travis-ci.org/postgresql-cfbot/postgresql/builds/656722993\n>>\n>> The Windows build isn't using -Werror, but it is crashing in at least\n>> two different spots in the regression tests:\n>>\n>>\n>> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.81778\n>>\n>> I've not attempted to identify the cause of that.\n>>\n>>\n> Before I submit the patch, I can make sure \"make check-world\" is\n> successful, but\n> since the compile option is not same,  so I didn't catch\n> the possibly-uninitialized\n> variable.   
As for the crash on the windows, I didn't get the enough\n> information\n> now, I will find a windows server and reproduce the cases.\n>\n> I just found the link http://commitfest.cputube.org/ this morning, I will\n> make sure\n> the next patch can go pass this test.\n>\n>\n> At a high level, I'm a bit disturbed that this focuses only on DISTINCT\n>> and doesn't (appear to) have any equivalent intelligence for GROUP BY,\n>> though surely that offers much the same opportunities for optimization.\n>> It seems like it'd be worthwhile to take a couple steps back and see\n>> if we couldn't recast the logic to work for both.\n>>\n>>\n> OK,  Looks group by is a bit harder  than distinct since the aggregation\n> function.\n> I will go through the code to see why to add this logic.\n>\n>\n\nCan we guarantee that any_aggr_func(a) == a if only 1 row is returned?  If so, we\ncan do\nsome work on the pathtarget/reltarget by transforming the Aggref to a raw\nexpr.\nI checked the execution path of the aggregation call; it looks like it depends on the\nAgg node,\nwhich is the very thing we want to remove.\n\n>\n>> * There seem to be some pointless #include additions, eg in planner.c\n>> the added code doesn't look to justify any of them.  Please also\n>> avoid unnecessary whitespace changes, those also slow down reviewing.\n>>\n>\nFixed some typo errors.\n\nThat may be because I added the header file at one point and then refactored\nthe code but forgot to remove the header file when it was no longer\nnecessary.\nDo we need to rely on experience to tell whether a header file is needed,\nor do we have any tool to tell it automatically?\n\n\n regards,  Andy Fan\n\n
Do we need to relay on experience to tell if the header file is needed or not, or do we have any tool to tell it automatically?    regards,  Andy Fan", "msg_date": "Tue, 3 Mar 2020 04:25:51 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": ">\n>\n>> * There are some changes in existing regression cases that aren't\n>> visibly related to the stated purpose of the patch, eg it now\n>> notices that \"select distinct max(unique2) from tenk1\" doesn't\n>> require an explicit DISTINCT step. That's not wrong, but I wonder\n>> if maybe you should subdivide this patch into more than one patch,\n>> because that must be coming from some separate change. I'm also\n>> wondering what caused the plan change in expected/join.out.\n>>\n>\n> Per my purpose it should be in the same patch, the logical here is we\n> have distinct in the sql and the query is distinct already since the max\n> function (the rule is defined in query_is_distinct_agg which is splited\n> from\n> the original query_is_distinct_for clause).\n>\n\nI think I was right until I come\ninto contrib/postgres_fdw/sql/postgres_fdw.sql.\nPer my understanding, the query the result of \"select max(a) from t\" is\nunique\nsince the aggregation function and has no group clause there. 
But in the\npostgres_fdw.sql case, Query->hasAggs is true for \"select distinct\n(select count(*) filter (where t2.c2 = 6 and t2.c1 < 10) from ft1 t1 where\nt1.c1 = 6)\nfrom ft2 t2 where t2.c2 % 6 = 0 order by 1;\"   This looks very strange to\nme.\nIs my understanding wrong, or is there a bug here?\n\nquery->hasAggs was set to true in the following call stack:\n\n    pstate->p_hasAggs = true;\n\n..\n\n   qry->hasAggs = pstate->p_hasAggs;\n\n\n0  in check_agglevels_and_constraints of parse_agg.c:343\n1  in transformAggregateCall of parse_agg.c:236\n2  in ParseFuncOrColumn of parse_func.c:805\n3  in transformFuncCall of parse_expr.c:1558\n4  in transformExprRecurse of parse_expr.c:265\n5  in transformExpr of parse_expr.c:155\n6  in transformTargetEntry of parse_target.c:105\n7  in transformTargetList of parse_target.c:193\n8  in transformSelectStmt of analyze.c:1224\n9  in transformStmt of analyze.c:301\n\nYou can see the new updated patch, which should fix all the issues you point\nout\nexcept the one about supporting GROUP BY.   The other reason this patch will\nnot be the final one is that the changes to postgres_fdw.out are too\narbitrary.\nUploading it now just for reference. 
(The newly introduced GUC variable can\nbe\nremoved at the end; keeping it now just makes the testing easier.)\n\n\nAt a high level, I'm a bit disturbed that this focuses only on DISTINCT\n>>> and doesn't (appear to) have any equivalent intelligence for GROUP BY,\n>>> though surely that offers much the same opportunities for optimization.\n>>> It seems like it'd be worthwhile to take a couple steps back and see\n>>> if we couldn't recast the logic to work for both.\n>>>\n>>>\n>> OK,  Looks group by is a bit harder  than distinct since the aggregation\n>> function.\n>> I will go through the code to see where to add this logic.\n>>\n>>\n>\n> Can we grantee  any_aggr_func(a) == a  if only 1 row returned,  if so, we\n> can do\n> some work on the pathtarget/reltarget by transforming the Aggref to raw\n> expr.\n> I checked the execution path of the aggregation call, looks it depends on\n> Agg node\n> which is the thing we want to remove.\n>\n\nWe can't guarantee that any_aggr_func(a) == a when only 1 row is returned, so the\nabove\nmethod doesn't work.  Do you have any suggestions for this?", "msg_date": "Wed, 4 Mar 2020 21:13:54 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "Upload the newest patch so that the cfbot can pass.  The last patch failed\nbecause some explain statements were missing (costs off).\n\nI'm still figuring out how to handle aggregation calls without an\naggregation path.  Probably we can get there by hacking some\nExprEvalPushStep for Aggref node.  
But since the current patch is not tied\nto this closely, I would put this patch up for review first.\n\n\n\nOn Wed, Mar 4, 2020 at 9:13 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n>>\n>>> * There are some changes in existing regression cases that aren't\n>>> visibly related to the stated purpose of the patch, eg it now\n>>> notices that \"select distinct max(unique2) from tenk1\" doesn't\n>>> require an explicit DISTINCT step.  That's not wrong, but I wonder\n>>> if maybe you should subdivide this patch into more than one patch,\n>>> because that must be coming from some separate change.  I'm also\n>>> wondering what caused the plan change in expected/join.out.\n>>>\n>>\n>> Per my purpose it should be in the same patch,  the logical here is we\n>> have distinct in the sql and the query is distinct already since the max\n>> function (the rule is defined in query_is_distinct_agg which is splited\n>> from\n>> the original query_is_distinct_for clause).\n>>\n>\n> I think I was right until I come\n> into contrib/postgres_fdw/sql/postgres_fdw.sql.\n> Per my understanding, the query the result of \"select max(a) from t\" is\n> unique\n> since the aggregation function and has no group clause there. 
But in the\n> postgres_fdw.sql case, the Query->hasAggs is true for \"select distinct\n> (select count(*) filter (where t2.c2 = 6 and t2.c1 < 10) from ft1 t1 where\n> t1.c1 = 6)\n> from ft2 t2 where t2.c2 % 6 = 0 order by 1;\" This looks very strange to\n> me.\n> Is my understanding wrong or there is a bug here?\n>\n> query->hasAggs was set to true in the following call stack.\n>\n> pstate->p_hasAggs = true;\n>\n> ..\n>\n> qry->hasAggs = pstate->p_hasAggs;\n>\n>\n> 0 in check_agglevels_and_constraints of parse_agg.c:343\n> 1 in transformAggregateCall of parse_agg.c:236\n> 2 in ParseFuncOrColumn of parse_func.c:805\n> 3 in transformFuncCall of parse_expr.c:1558\n> 4 in transformExprRecurse of parse_expr.c:265\n> 5 in transformExpr of parse_expr.c:155\n> 6 in transformTargetEntry of parse_target.c:105\n> 7 in transformTargetList of parse_target.c:193\n> 8 in transformSelectStmt of analyze.c:1224\n> 9 in transformStmt of analyze.c:301\n>\n> You can see the new updated patch which should fix all the issues you\n> point out\n> except the one for supporting group by. The another reason for this\n> patch will\n> not be the final one is because the changes for postgres_fdw.out is too\n> arbitrary.\n> uploading it now just for reference. 
(The new introduced guc variable can\n> be\n> removed at last, keeping it now just make sure the testing is easier.)\n>\n>\n> At a high level, I'm a bit disturbed that this focuses only on DISTINCT\n>>>> and doesn't (appear to) have any equivalent intelligence for GROUP BY,\n>>>> though surely that offers much the same opportunities for optimization.\n>>>> It seems like it'd be worthwhile to take a couple steps back and see\n>>>> if we couldn't recast the logic to work for both.\n>>>>\n>>>>\n>>> OK, Looks group by is a bit harder than distinct since the aggregation\n>>> function.\n>>> I will go through the code to see where to add this logic.\n>>>\n>>>\n>>\n>> Can we grantee any_aggr_func(a) == a if only 1 row returned, if so, we\n>> can do\n>> some work on the pathtarget/reltarget by transforming the Aggref to raw\n>> expr.\n>> I checked the execution path of the aggregation call, looks it depends on\n>> Agg node\n>> which is the thing we want to remove.\n>>\n>\n> We can't grantee any_aggr_func(a) == a when only 1 row returned, so the\n> above\n> method doesn't work. do you have any suggestion for this?\n>", "msg_date": "Fri, 6 Mar 2020 19:46:51 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Sat, 7 Mar 2020 at 00:47, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> Upload the newest patch so that the cfbot can pass. The last patch failed\n> because some explain without the (cost off).\n\nI've only really glanced at this patch, but I think we need to do this\nin a completely different way.\n\nI've been mentioning UniqueKeys around this mailing list for quite a\nwhile now [1]. To summarise the idea:\n\n1. Add a new List field to RelOptInfo named unique_keys\n2. 
During get_relation_info() process the base relation's unique\nindexes and add to the RelOptInfo's unique_keys list the indexed\nexpressions from each unique index (this may need to be delayed until\ncheck_index_predicates() since predOK is only set there)\n3. Perhaps in add_paths_to_joinrel(), or maybe when creating the join\nrel itself (I've not looked for the best location in detail),\ndetermine if the join can cause rows to be duplicated. If it can't,\nthen add the UniqueKeys from that rel.  For example: SELECT * FROM t1\nINNER JOIN t2 ON t1.unique = t2.not_unique; would have the joinrel for\n{t1,t2} only take the unique keys from t2 (t1 can't duplicate t2 rows\nsince it's an equijoin and t1.unique has a unique index). If the\ncondition was t1.unique = t2.unique then we could take the unique keys\nfrom both sides of the join, and with t1.non_unique = t2.non_unique,\nwe can take neither.\n4. When creating the GROUP BY paths (when there are no aggregates),\ndon't bother doing anything if the input rel's unique keys are a\nsubset of the GROUP BY clause. Otherwise, create the group by paths\nand tag the new unique keys onto the GROUP BY rel.\n5. When creating the DISTINCT paths, don't bother if the input rel's\nunique keys are a subset of the distinct clause.\n\n4 and 5 will mean that: SELECT DISTINCT non_unique FROM t1 GROUP BY\nnon_unique will just uniquify once for the GROUP BY and not for the\ndistinct.  SELECT DISTINCT unique FROM t1 GROUP BY unique; won't do\nanything to uniquify the results.\n\nBecause both 4 and 5 require that the uniquekeys are a subset of the\ndistinct/group by clause, an empty uniquekey set would mean that the\nRelOptInfo returns no more than 1 row.  That would allow your:\n\nSELECT DISTINCT max(non_unique) FROM t1; to skip doing the DISTINCT part.\n\nThere's a separate effort in\nhttps://commitfest.postgresql.org/27/1741/ to implement some parts of\nthe uniquekeys idea.  
However the implementation currently only covers\nadding the unique keys to Paths, not to RelOptInfos.\n\nI also believe that the existing code in analyzejoins.c should be\nedited to make use of unique keys rather than how it looks at unique\nindexes and group by / distinct clauses.\n\n[1] https://www.postgresql.org/search/?m=1&ln=pgsql-hackers&q=uniquekeys\n\n\n", "msg_date": "Tue, 10 Mar 2020 11:21:27 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "Hi David:\n\n\n> 3. Perhaps in add_paths_to_joinrel(), or maybe when creating the join\n> rel itself (I've not looked for the best location in detail),\n> determine if the join can cause rows to be duplicated. If it can't,\n> then add the UniqueKeys from that rel.\n\n\nI have some concerns about this method; maybe I misunderstand\nsomething, and if so, please advise.\n\nMy current implementation calculates the uniqueness for each\nBaseRel only, but in your way it looks like we need to calculate the\nUniquePathKey for both BaseRel and JoinRel.   This makes more of a\ndifference for multi-table joins.    Another concern is that UniquePathKey\nis designed for a general purpose, so we would need to maintain it\nregardless of the distinctClause/groupbyClause.\n\n\n> For example: SELECT * FROM t1\n> INNER JOIN t2 ON t1.unique = t2.not_unique; would have the joinrel for\n> {t1,t2} only take the unique keys from t2 (t1 can't duplicate t2 rows\n> since it's an eqijoin and t1.unique has a unique index).\n>\n\nThanks for raising this.  My current rule requires that *every* relation yield\na\n
Actually I want to make\nthe rule less strict, for example, we may just need 1 table yields unique\nresult\nand the final result will be unique as well under some join type.\n\nAs for the t1 INNER JOIN t2 ON t1.unique = t2.not_unique; looks we can't\nremove the distinct based on this.\n\ncreate table m1(a int primary key, b int);\ncreate table m2(a int primary key, b int);\ninsert into m1 values(1, 1), (2, 1);\ninsert into m2 values(1, 1), (2, 1);\nselect distinct m1.a from m1, m2 where m1.a = m2.b;\n\n\n\n> SELECT DISTINCT max(non_unique) FROM t1; to skip doing the DISTINCT part.\n>\n\nActually I want to keep the distinct for this case now. One reason is\nthere are only 1\nrow returned, so the distinct erasing can't help much. The more important\nreason is\nQuery->hasAggs is true for \"select distinct (select count(*) filter (where\nt2.c2 = 6\nand t2.c1 < 10) from ft1 t1 where t1.c1 = 6) from ft2 t2 where t2.c2 % 6 =\n0 order by 1;\"\n(this sql came from postgres_fdw.sql).\n\nThere's a separate effort in\n> https://commitfest.postgresql.org/27/1741/ to implement some parts of\n> the uniquekeys idea. However the implementation currently only covers\n> adding the unique keys to Paths, not to RelOptInfos\n\n\nThanks for this info. I guess this patch is not merged so far, but looks\nthe cfbot\nfor my patch [1] failed due to this :( search\n\"explain (costs off) select distinct on(pk1) pk1, pk2 from\nselect_distinct_a;\"\n\n\n> I also believe that the existing code in analyzejoins.c should be\n> edited to make use of unique keys rather than how it looks at unique\n> indexes and group by / distinct clauses.\n>\n> I can do this after we have agreement on the UniquePath.\n\nFor my cbbot failure, another strange thing is \"A\" appear ahead of \"a\" after\nthe order by.. Still didn't find out why.\n\n[1]\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.83298\n\nRegards\nAndy Fan\n\nHi David: 3. 
Perhaps in add_paths_to_joinrel(), or maybe when creating the joinrel itself (I've not looked for the best location in detail),determine if the join can cause rows to be duplicated. If it can't,then add the UniqueKeys from that rel.I have some concerns about this method,  maybe I misunderstand something, if so, please advise.  In my current implementation, it calculates the uniqueness for eachBaseRel only, but in your way,  looks we need to calculate theUniquePathKey for both BaseRel and JoinRel.   This makes more difference for multi table join.    Another concern is UniquePathKeyis designed for a general purpose,  we need to maintain it no matterdistinctClause/groupbyClause.    For example: SELECT * FROM t1INNER JOIN t2 ON t1.unique = t2.not_unique; would have the joinrel for{t1,t2} only take the unique keys from t2 (t1 can't duplicate t2 rowssince it's an eqijoin and t1.unique has a unique index). Thanks for raising this.  My current rule requires *every* relation yields a unique result and *no matter* with the join method.  Actually I want to makethe rule less strict, for example, we  may just need 1 table yields unique resultand the final result will be unique as well under some join type. As for the t1 INNER JOIN t2 ON t1.unique = t2.not_unique;  looks we can't remove the distinct based on this. create table m1(a int primary key, b int);create table m2(a int primary key, b int);insert into m1 values(1, 1), (2, 1);insert into m2 values(1, 1), (2, 1);select distinct m1.a from m1, m2 where m1.a = m2.b; SELECT DISTINCT max(non_unique) FROM t1; to skip doing the DISTINCT part. Actually I want to keep the distinct for this case now.  One reason is there are only 1 row returned, so the distinct erasing can't help much.   The more important reason isQuery->hasAggs is true for \"select distinct  (select count(*) filter (where t2.c2 = 6 and t2.c1 < 10) from ft1 t1 where t1.c1 = 6)  from ft2 t2 where t2.c2 % 6 = 0 order by 1;\"(this sql came from postgres_fdw.sql).  
", "msg_date": "Tue, 10 Mar 2020 16:19:03 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Tue, Mar 10, 2020 at 3:51 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Sat, 7 Mar 2020 at 00:47, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > Upload the newest patch so that the cfbot can pass.  The last patch failed\n> > because some explain without the (cost off).\n>\n> I've only really glanced at this patch, but I think we need to do this\n> in a completely different way.\n>\n> I've been mentioning UniqueKeys around this mailing list for quite a\n> while now [1]. To summarise the idea:\n>\n> 1. Add a new List field to RelOptInfo named unique_keys\n> 2. During get_relation_info() process the base relation's unique\n> indexes and add to the RelOptInfo's unique_keys list the indexed\n> expressions from each unique index (this may need to be delayed until\n> check_index_predicates() since predOK is only set there)\n> 3. 
Perhaps in add_paths_to_joinrel(), or maybe when creating the join\n> rel itself (I've not looked for the best location in detail),\n>\n\nbuild_*_join_rel() will be a good place for this. The paths created might\ntake advantage of this information for costing.\n\n\n> determine if the join can cause rows to be duplicated. If it can't,\n> then add the UniqueKeys from that rel.  For example: SELECT * FROM t1\n> INNER JOIN t2 ON t1.unique = t2.not_unique; would have the joinrel for\n> {t1,t2} only take the unique keys from t2 (t1 can't duplicate t2 rows\n> since it's an eqijoin and t1.unique has a unique index).\n\n\nThis is interesting.\n\n\n> If the\n> condition was t1.unique = t2.unique then we could take the unique keys\n> from both sides of the join, and with t1.non_unique = t2.non_unique,\n> we can take neither.\n> 4. When creating the GROUP BY paths (when there are no aggregates),\n> don't bother doing anything if the input rel's unique keys are a\n> subset of the GROUP BY clause. Otherwise, create the group by paths\n> and tag the new unique keys onto the GROUP BY rel.\n> 5. When creating the DISTINCT paths, don't bother if the input rel has\n> unique keys are a subset of the distinct clause.\n>\n\nThanks for laying this out in more detail. Two more cases can be added to\nthis:\n6. When creating the RelOptInfo for a grouped/aggregated result, if all the\ncolumns of the group by clause are part of the result, i.e. the targetlist, the\ncolumns in the group by clause serve as the unique keys of the result. So the\ncorresponding RelOptInfo can be marked as such.\n7. The result of DISTINCT is unique for the columns contained in the\nDISTINCT clause. Hence we can add those columns to the unique_key of the\nRelOptInfo representing the result of the distinct clause.\n8. 
If each partition of a partitioned table has a unique key with the same\ncolumns in it and that happens to be a superset of the partition key, then\nthe whole partitioned table gets that unique key as well.\n\nWith this we could actually pass the uniqueness information through\nSubquery scans as well, and the overall query will benefit from that.\n\n\n>\n> 4 and 5 will mean that: SELECT DISTINCT non_unique FROM t1 GROUP BY\n> non_unique will just uniquify once for the GROUP BY and not for the\n> distinct.  SELECT DISTINCT unique FROM t1 GROUP BY unique; won't do\n> anything to uniquify the results.\n>\n> Because both 4 and 5 require that the uniquekeys are a subset of the\n> distinct/group by clause, an empty uniquekey set would mean that the\n> RelOptInfo returns no more than 1 row.  That would allow your:\n>\n> SELECT DISTINCT max(non_unique) FROM t1; to skip doing the DISTINCT part.\n>\n> There's a separate effort in\n> https://commitfest.postgresql.org/27/1741/ to implement some parts of\n> the uniquekeys idea.  However the implementation currently only covers\n> adding the unique keys to Paths, not to RelOptInfos.\n>\n\nI haven't looked at that patch, but as discussed upthread, in this case we\nwant the uniqueness associated with the RelOptInfo and not the path.\n\n\n>\n> I also believe that the existing code in analyzejoins.c should be\n> edited to make use of unique keys rather than how it looks at unique\n> indexes and group by / distinct clauses.\n>\n\n+1.\n-- \nBest Wishes,\nAshutosh Bapat\n\n
Hence we can add those columns to the unique_key of the RelOptInfo representing the result of the distinct clause.8. If each partition of a partitioned table has a unique key with the same columns in it and that happens to be superset of the partition key, then the whole partitioned table gets that unique key as well.With this we could actually pass the uniqueness information through Subquery scans as well and the overall query will benefit with that. \n\n4 and 5 will mean that: SELECT DISTINCT non_unique FROM t1 GROUP BY\nnon_unique will just uniquify once for the GROUP BY and not for the\ndistinct.  SELECT DISTINCT unique FROM t1 GROUP BY unique; won't do\nanything to uniquify the results.\n\nBecause both 4 and 5 require that the uniquekeys are a subset of the\ndistinct/group by clause, an empty uniquekey set would mean that the\nRelOptInfo returns no more than 1 row.  That would allow your:\n\nSELECT DISTINCT max(non_unique) FROM t1; to skip doing the DISTINCT part.\n\nThere's a separate effort in\nhttps://commitfest.postgresql.org/27/1741/ to implement some parts of\nthe uniquekeys idea.  However the implementation currently only covers\nadding the unique keys to Paths, not to RelOptInfos.I haven't looked at that patch, but as discussed upthread, in this case we want the uniqueness associated with the RelOptInfo and not the path. \n\nI also believe that the existing code in analyzejoins.c should be\nedited to make use of unique keys rather than how it looks at unique\nindexes and group by / distinct clauses.+1.-- Best Wishes,Ashutosh Bapat", "msg_date": "Tue, 10 Mar 2020 15:23:44 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "Hi Andy,\n\nOn Tue, Mar 10, 2020 at 1:49 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> Hi David:\n>\n>>\n>> 3. 
Perhaps in add_paths_to_joinrel(), or maybe when creating the join\n>> rel itself (I've not looked for the best location in detail),\n>> determine if the join can cause rows to be duplicated. If it can't,\n>> then add the UniqueKeys from that rel.\n>\n>\n> I have some concerns about this method, maybe I misunderstand\n> something, if so, please advise.\n>\n> In my current implementation, it calculates the uniqueness for each\n> BaseRel only, but in your way, it looks like we need to calculate the\n> UniquePathKey for both BaseRel and JoinRel. This makes a bigger\n> difference for multi-table joins.\n\nI didn't understand this concern. I think it would be better to do it\nfor all kinds of relation types (base, join, etc.). This way we are sure\nthat one method works across the planner to eliminate the need for\nDistinct or grouping. If we just implement something for base\nrelations right now and don't do that for joins, there is a chance\nthat that method may not work for joins when we come to implement it.\n\n> Another concern is UniquePathKey\n> is designed for a general purpose, so we need to maintain it no matter\n> distinctClause/groupbyClause.\n\nThis should be ok. The time spent in annotating a RelOptInfo about\nuniqueness is not going to be a lot. But doing so would help generic\nelimination of Distinct/Group/Unique operations. What is\nUniquePathKey? I didn't find it in your patch or in the code.\n\n>\n>\n>>\n>> For example: SELECT * FROM t1\n>> INNER JOIN t2 ON t1.unique = t2.not_unique; would have the joinrel for\n>> {t1,t2} only take the unique keys from t2 (t1 can't duplicate t2 rows\n>> since it's an equijoin and t1.unique has a unique index).\n>\n>\n> Thanks for raising this. My current rule requires that *every* relation yield a\n> unique result, *no matter* the join method. 
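(A toy model of the propagation rule being discussed — plain Python, not from any patch, and all names are invented for illustration. The idea: one side's unique keys survive into the join rel only when the other side's join columns cover one of that other side's own unique keys, so that each row can match at most one row from that side.)

```python
# Toy model of join-level unique-key propagation (not PostgreSQL code).
# Column sets are modelled as frozensets of column names.

def covers_unique_key(unique_keys, join_cols):
    """True if some unique key of a rel is a subset of its join columns,
    i.e. a row from the other side can match at most one row here."""
    return any(key <= join_cols for key in unique_keys)

def joinrel_unique_keys(rel1_keys, rel2_keys, rel1_join_cols, rel2_join_cols):
    """Unique keys an inner equijoin result keeps, per the rule sketched above."""
    result = []
    # rel2 can't duplicate rel1 rows => rel1's keys stay unique in the join
    if covers_unique_key(rel2_keys, rel2_join_cols):
        result.extend(rel1_keys)
    # and vice versa
    if covers_unique_key(rel1_keys, rel1_join_cols):
        result.extend(rel2_keys)
    return result

# SELECT * FROM t1 INNER JOIN t2 ON t1.unique = t2.not_unique
t1_keys = [frozenset({"t1.unique"})]
t2_keys = [frozenset({"t2.unique"})]
keys = joinrel_unique_keys(t1_keys, t2_keys,
                           frozenset({"t1.unique"}),      # t1's join column
                           frozenset({"t2.not_unique"}))  # t2's join column
# only t2's unique keys survive: t1 can't duplicate t2 rows
print(keys)  # [frozenset({'t2.unique'})]
```

With t1.unique = t2.unique both lists survive, and with t1.non_unique = t2.non_unique neither does, matching the three cases described above.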
Actually I want to make\n> the rule less strict; for example, we may just need one table to yield a unique result,\n> and the final result will be unique as well under some join types.\n\nThat is desirable.\n\n>\n> As for the t1 INNER JOIN t2 ON t1.unique = t2.not_unique; it looks like we can't\n> remove the distinct based on this.\n>\n> create table m1(a int primary key, b int);\n> create table m2(a int primary key, b int);\n> insert into m1 values(1, 1), (2, 1);\n> insert into m2 values(1, 1), (2, 1);\n> select distinct m1.a from m1, m2 where m1.a = m2.b;\n\nIIUC, David's rule is the other way round. "select distinct m2.a from m1,\nm2 where m1.a = m2.b" won't need a DISTINCT node since the result of\njoining m1 and m2 has a unique value of m2.a for each row. In your\nexample the join will produce two rows (m1.a, m1.b, m2.a, m2.b) (1, 1,\n1, 1) and (1, 1, 2, 1), where m2.a is the unique key.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 10 Mar 2020 19:20:23 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "Hi Tom & David & Bapat:\n\nThanks for your review so far. I want to summarize the current issues to\nhelp\nour following discussion.\n\n1. Shall we bypass the AggNode as well with the same logic?\n\nI think yes, since the rules to bypass an AggNode and a UniqueNode are exactly the\nsame.\nThe difficulty of bypassing AggNode is that the current aggregation function\ncall is closely\ncoupled with AggNode. In the past few days, I have made the aggregation\ncall able to\nrun without an AggNode (at least I tested sum (without finalized fn) and avg\n(with finalized fn)).\nBut there are a few things to do, like acl check, anynull check and maybe\nmore checks.\nThere are also some MemoryContext mess-ups that need fixing.\nI still need some time for this goal, so I think the complexity of it\ndeserves another thread\nto discuss it. Any thoughts?\n\n2. 
Shall we use the UniquePath as David suggested?\n\nActually I am trending to this way now. David, can you share more insights\nabout the\nbenefits of UniquePath? Costing should be one of them; another one\nmay be\nchanging a semi join to a normal join as the current innerrel_is_unique\ndid. Any others?\n\n3. Can we make the rule more general?\n\nCurrently it requires that every relation yield a unique result. David & Bapat\nprovided another example:\nselect m2.pk from m1, m2 where m1.pk = m2.non_unique_key. That's\ninteresting and not easy to\nhandle in my current framework. This is another reason I want to take the\nUniquePath framework.\n\nDo we have any other rules to think about before implementing it?\n\nThanks for your feedback.\n\n\n> This should be ok. The time spent in annotating a RelOptInfo about\n> uniqueness is not going to be a lot. But doing so would help generic\n> elimination of Distinct/Group/Unique operations. What is\n> UniquePathKey; I didn't find this in your patch or in the code.\n>\n> This is a proposal from David, so not in current patch/code :)\n\nRegards\nAndy Fan", "msg_date": "Tue, 10 Mar 2020 23:41:11 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Wed, 11 Mar 2020 at 02:50, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Tue, Mar 10, 2020 at 1:49 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > In my current implementation, it calculates the uniqueness for each\n> > BaseRel only, but in your way, looks we need to calculate the\n> > UniquePathKey for both BaseRel and JoinRel. This makes more\n> > difference for multi table join.\n>\n> I didn't understand this concern. I think, it would be better to do it\n> for all kinds of relation types base, join etc. This way we are sure\n> that one method works across the planner to eliminate the need for\n> Distinct or grouping. 
If we just implement something for base\n> relations right now and don't do that for joins, there is a chance\n> that that method may not work for joins when we come to implement it.\n\nYeah, it seems to me that we're seeing more and more features that\nrequire knowledge of uniqueness of a RelOptInfo. The skip scans patch\nneeds to know if a join will cause row duplication so it knows if the\nskip scan path can be joined to without messing up the uniqueness of\nthe skip scan. Adding more and more places that loop over the rel's\nindexlist just does not seem the right way to do it, especially so\nwhen you have to dissect the join rel down to its base rel components\nto check which indexes there are. Having the knowledge on-hand at the\nRelOptInfo level means we no longer have to look at indexes for unique\nproofs.\n\n> > Another concern is UniquePathKey\n> > is designed for a general purpose, we need to maintain it no matter\n> > distinctClause/groupbyClause.\n>\n> This should be ok. The time spent in annotating a RelOptInfo about\n> uniqueness is not going to be a lot. But doing so would help generic\n> elimination of Distinct/Group/Unique operations. What is\n> UniquePathKey; I didn't find this in your patch or in the code.\n\nPossibly a misinterpretation. There is some overlap between this patch\nand the skip scan patch, both would like to skip doing explicit work\nto implement DISTINCT. Skip scans just go about it by adding new path\ntypes that scan the index and only gathers up unique values. In that\ncase, the RelOptInfo won't contain the unique keys, but the skip scan\npath will. 
How I imagine both of these patches working together in\ncreate_distinct_paths() is that we first check if the DISTINCT clause\nis a subset of a set of the RelOptInfo's unique keys (this patch);\nelse we check if there are any paths with uniquekeys that we can use\nto perform a no-op on the DISTINCT clause (the skip scan patch); if\nnone of those apply, we create the required paths to uniquify the\nresults.\n\n\n", "msg_date": "Wed, 11 Mar 2020 11:48:48 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Tue, 10 Mar 2020 at 21:19, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>> SELECT DISTINCT max(non_unique) FROM t1; to skip doing the DISTINCT part.\n>\n>\n> Actually I want to keep the distinct for this case now. One reason is there are only 1\n> row returned, so the distinct erasing can't help much. The more important reason is\n> Query->hasAggs is true for "select distinct (select count(*) filter (where t2.c2 = 6\n> and t2.c1 < 10) from ft1 t1 where t1.c1 = 6) from ft2 t2 where t2.c2 % 6 = 0 order by 1;"\n> (this sql came from postgres_fdw.sql).\n\nI think that sort of view is part of the problem here. 
If you want to\ninvent some new way to detect uniqueness that does not count that case\nthen we have more code with more possible places to have bugs.\n\nFWIW, query_is_distinct_for() does detect that case with:\n\n/*\n* If we have no GROUP BY, but do have aggregates or HAVING, then the\n* result is at most one row so it's surely unique, for any operators.\n*/\nif (query->hasAggs || query->havingQual)\nreturn true;\n\nwhich can be seen by the fact that the following find the unique join on t2.\n\npostgres=# explain verbose select * from t1 inner join (select\ncount(*) c from t1) t2 on t1.a=t2.c;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Hash Join (cost=41.91..84.25 rows=13 width=12)\n Output: t1.a, (count(*))\n Inner Unique: true\n Hash Cond: (t1.a = (count(*)))\n -> Seq Scan on public.t1 (cost=0.00..35.50 rows=2550 width=4)\n Output: t1.a\n -> Hash (cost=41.89..41.89 rows=1 width=8)\n Output: (count(*))\n -> Aggregate (cost=41.88..41.88 rows=1 width=8)\n Output: count(*)\n -> Seq Scan on public.t1 t1_1 (cost=0.00..35.50\nrows=2550 width=0)\n Output: t1_1.a\n(12 rows)\n\nIt will be very simple to add an empty List of UniqueKeys to the GROUP\nBY's RelOptInfo to indicate that all expressions are unique. That way\nany code that checks if some of the RelOptInfo's unique keys are a\nsubset of some expressions they'd like to know are unique, then\nthey'll get a match.\n\nIt does not really matter how much effort is saved in your example\nabove. The UniqueKey infrastructure won't know how much effort\nproperly adding all the uniquekeys will save. 
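(To illustrate the subset test being described — and the convention that an empty unique-key set means "at most one row" — here is a toy Python model; it is not PostgreSQL code and all names are invented:)

```python
# Toy model of the DISTINCT no-op test (not PostgreSQL code).

def distinct_is_noop(rel_unique_keys, distinct_cols):
    """DISTINCT can be skipped if some unique key of the input rel is a
    subset of the DISTINCT column list.  An empty key (frozenset()) is a
    subset of anything, which encodes "the rel returns at most one row"."""
    return any(key <= distinct_cols for key in rel_unique_keys)

# A rel with aggregates and no GROUP BY returns at most one row,
# so it gets the empty unique key and any DISTINCT over it is a no-op:
one_row_rel_keys = [frozenset()]
assert distinct_is_noop(one_row_rel_keys, frozenset({"max_col"}))

# Ordinary cases: DISTINCT column list covering (or not) a unique key.
assert distinct_is_noop([frozenset({"pk"})], frozenset({"pk", "b"}))
assert not distinct_is_noop([frozenset({"pk"})], frozenset({"b"}))
```

The same subset test serves both cases, which is why the empty-key convention costs no extra code in the consumers.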
It should just add all\nthe keys it can and let whichever code cares about that reap the\nbenefits.\n\n\n", "msg_date": "Wed, 11 Mar 2020 12:05:02 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Wed, Mar 11, 2020 at 6:49 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 11 Mar 2020 at 02:50, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Tue, Mar 10, 2020 at 1:49 PM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n> > > In my current implementation, it calculates the uniqueness for each\n> > > BaseRel only, but in your way, looks we need to calculate the\n> > > UniquePathKey for both BaseRel and JoinRel. This makes more\n> > > difference for multi table join.\n> >\n> > I didn't understand this concern. I think, it would be better to do it\n> > for all kinds of relation types base, join etc. This way we are sure\n> > that one method works across the planner to eliminate the need for\n> > Distinct or grouping. If we just implement something for base\n> > relations right now and don't do that for joins, there is a chance\n> > that that method may not work for joins when we come to implement it.\n>\n> Yeah, it seems to me that we're seeing more and more features that\n> require knowledge of uniqueness of a RelOptInfo. The skip scans patch\n> needs to know if a join will cause row duplication so it knows if the\n> skip scan path can be joined to without messing up the uniqueness of\n> the skip scan. Adding more and more places that loop over the rel's\n> indexlist just does not seem the right way to do it, especially so\n> when you have to dissect the join rel down to its base rel components\n> to check which indexes there are. 
Having the knowledge on-hand at the\n> RelOptInfo level means we no longer have to look at indexes for unique\n> proofs.\n>\n> > > Another concern is UniquePathKey\n> > > is designed for a general purpose, we need to maintain it no matter\n> > > distinctClause/groupbyClause.\n> >\n> > This should be ok. The time spent in annotating a RelOptInfo about\n> > uniqueness is not going to be a lot. But doing so would help generic\n> > elimination of Distinct/Group/Unique operations. What is\n> > UniquePathKey; I didn't find this in your patch or in the code.\n>\n> Possibly a misinterpretation. There is some overlap between this patch\n> and the skip scan patch, both would like to skip doing explicit work\n> to implement DISTINCT. Skip scans just go about it by adding new path\n> types that scan the index and only gathers up unique values. In that\n> case, the RelOptInfo won't contain the unique keys, but the skip scan\n> path will. How I imagine both of these patches working together in\n> create_distinct_paths() is that we first check if the DISTINCT clause\n> is a subset of the a set of the RelOptInfo's unique keys (this patch),\n> else we check if there are any paths with uniquekeys that we can use\n> to perform a no-op on the DISTINCT clause (the skip scan patch), if\n> none of those apply, we create the required paths to uniquify the\n> results.\n>\n\nNow I am convinced that we should maintain UniquePath on RelOptInfo,\nI would see how to work with "Index Skip Scan" patch.", "msg_date": "Wed, 11 Mar 2020 12:23:30 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Tue, Mar 10, 2020 at 9:12 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>\n> Hi Tom & David & Bapat:\n>\n> Thanks for your review so far. I want to summarize the current issues to help\n> our following discussion.\n>\n> 1. Shall we bypass the AggNode as well with the same logic.\n>\n> I think yes, since the rules to bypass a AggNode and UniqueNode is exactly same.\n> The difficulty of bypassing AggNode is the current aggregation function call is closely\n> coupled with AggNode. 
In the past few days, I have made the aggregation call able to\n> run without an AggNode (at least I tested sum (without finalized fn) and avg (with finalized fn)).\n> But there are a few things to do, like acl check, anynull check and maybe more checks.\n> There are also some MemoryContext mess-ups that need fixing.\n> I still need some time for this goal, so I think the complexity of it deserves another thread\n> to discuss it. Any thoughts?\n\nI think if the relation underlying an Agg node is known to be unique\nfor the given groupByClause, we could safely use the AGG_SORTED strategy.\nThough the input is not explicitly ordered, it is trivially sorted (each group\nhas exactly one row), so for every row the Agg\nnode can combine/finalize the aggregate result immediately.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 11 Mar 2020 21:12:43 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Wed, Mar 11, 2020 at 4:19 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 11 Mar 2020 at 02:50, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Tue, Mar 10, 2020 at 1:49 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > > In my current implementation, it calculates the uniqueness for each\n> > > BaseRel only, but in your way, looks we need to calculate the\n> > > UniquePathKey for both BaseRel and JoinRel. This makes more\n> > > difference for multi table join.\n> >\n> > I didn't understand this concern. I think, it would be better to do it\n> > for all kinds of relation types base, join etc. This way we are sure\n> > that one method works across the planner to eliminate the need for\n> > Distinct or grouping. 
If we just implement something for base\n> > relations right now and don't do that for joins, there is a chance\n> > that that method may not work for joins when we come to implement it.\n>\n> Yeah, it seems to me that we're seeing more and more features that\n> require knowledge of uniqueness of a RelOptInfo. The skip scans patch\n> needs to know if a join will cause row duplication so it knows if the\n> skip scan path can be joined to without messing up the uniqueness of\n> the skip scan. Adding more and more places that loop over the rel's\n> indexlist just does not seem the right way to do it, especially so\n> when you have to dissect the join rel down to its base rel components\n> to check which indexes there are. Having the knowledge on-hand at the\n> RelOptInfo level means we no longer have to look at indexes for unique\n> proofs.\n\n+1. Yep. When we break a join down to its base relations, partitioned\nrelations pose another challenge: the partitioned relation may not\nhave an index on it per se, but each partition may have one, and the\nindex key happens to be part of the partition key. That case would be\neasy to track through RelOptInfo instead of breaking a base rel down\ninto its child rels.\n\n>\n> > > Another concern is UniquePathKey\n> > > is designed for a general purpose, we need to maintain it no matter\n> > > distinctClause/groupbyClause.\n> >\n> > This should be ok. The time spent in annotating a RelOptInfo about\n> > uniqueness is not going to be a lot. But doing so would help generic\n> > elimination of Distinct/Group/Unique operations. What is\n> > UniquePathKey; I didn't find this in your patch or in the code.\n>\n> Possibly a misinterpretation. There is some overlap between this patch\n> and the skip scan patch, both would like to skip doing explicit work\n> to implement DISTINCT. Skip scans just go about it by adding new path\n> types that scan the index and only gathers up unique values. 
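(The partitioned-table condition discussed above — a key is unique across the whole partitioned table only if every partition has it and it contains the partition key — can be modelled with a toy sketch. Plain Python, not PostgreSQL code; names invented:)

```python
# Toy model of lifting unique keys from partitions to the parent rel
# (not PostgreSQL code).

def parent_unique_keys(child_unique_keys, partition_cols):
    """child_unique_keys: one list of frozenset keys per partition.
    A key is unique table-wide only if every partition has it AND it
    includes the partition key, so the same key value cannot occur in
    two different partitions."""
    if not child_unique_keys:
        return []
    common = set(child_unique_keys[0])
    for keys in child_unique_keys[1:]:
        common &= set(keys)              # key must exist on every partition
    return [key for key in common if partition_cols <= key]

# Two partitions, each with a unique index on (id, region),
# table partitioned by region:
kids = [[frozenset({"id", "region"})], [frozenset({"id", "region"})]]
print(parent_unique_keys(kids, frozenset({"region"})))
# [frozenset({'id', 'region'})]
```

A per-partition unique index on (id) alone would be rejected here, since ids could repeat across partitions.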
In that\n> case, the RelOptInfo won't contain the unique keys, but the skip scan\n> path will. How I imagine both of these patches working together in\n> create_distinct_paths() is that we first check if the DISTINCT clause\n> is a subset of the a set of the RelOptInfo's unique keys (this patch),\n> else we check if there are any paths with uniquekeys that we can use\n> to perform a no-op on the DISTINCT clause (the skip scan patch), if\n> none of those apply, we create the required paths to uniquify the\n> results.\n\nLooks good to me. But I have not seen index skip patch.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 11 Mar 2020 21:17:20 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Wed, 11 Mar 2020 at 17:23, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> Now I am convinced that we should maintain UniquePath on RelOptInfo,\n> I would see how to work with \"Index Skip Scan\" patch.\n\nI've attached a very early proof of concept patch for unique keys.\nThe NULL detection stuff is not yet hooked up, so it'll currently do\nthe wrong thing for NULLable columns. I've left some code in there\nwith my current idea of how to handle that, but I'll need to add more\ncode both to look at the catalogue tables to see if there's a NOT NULL\nconstraint and also to check for strict quals that filter out NULLs.\n\nAdditionally, I've not hooked up the collation checking stuff yet. 
I\njust wanted to see if it would work ok for non-collatable types first.\n\nI've added a couple of lines to create_distinct_paths() to check if\nthe input_rel has the required UniqueKeys to skip doing the DISTINCT.\nIt seems to work, but my tests so far are limited to:\n\ncreate table t1(a int primary key, b int);\ncreate table t2(a int primary key, b int);\n\npostgres=# -- t2 could duplicate t1, don't remove DISTINCT\npostgres=# explain (costs off) select distinct t1.a from t1 inner join\nt2 on t1.a = t2.b;\n QUERY PLAN\n----------------------------------\n HashAggregate\n Group Key: t1.a\n -> Hash Join\n Hash Cond: (t2.b = t1.a)\n -> Seq Scan on t2\n -> Hash\n -> Seq Scan on t1\n(7 rows)\n\n\npostgres=# -- neither rel can duplicate the other due to join on PK.\nRemove DISTINCT\npostgres=# explain (costs off) select distinct t1.a from t1 inner join\nt2 on t1.a = t2.a;\n QUERY PLAN\n----------------------------\n Hash Join\n Hash Cond: (t1.a = t2.a)\n -> Seq Scan on t1\n -> Hash\n -> Seq Scan on t2\n(5 rows)\n\n\npostgres=# -- t2.a cannot duplicate t1 and t1.a is unique. Remove DISTINCT\npostgres=# explain (costs off) select distinct t1.a from t1 inner join\nt2 on t1.b = t2.a;\n QUERY PLAN\n----------------------------\n Hash Join\n Hash Cond: (t1.b = t2.a)\n -> Seq Scan on t1\n -> Hash\n -> Seq Scan on t2\n(5 rows)\n\n\npostgres=# -- t1.b can duplicate t2.a. Don't remove DISTINCT\npostgres=# explain (costs off) select distinct t2.a from t1 inner join\nt2 on t1.b = t2.a;\n QUERY PLAN\n----------------------------------\n HashAggregate\n Group Key: t2.a\n -> Hash Join\n Hash Cond: (t1.b = t2.a)\n -> Seq Scan on t1\n -> Hash\n -> Seq Scan on t2\n(7 rows)\n\n\npostgres=# -- t1.a cannot duplicate t2.a. 
Remove DISTINCT.\npostgres=# explain (costs off) select distinct t2.a from t1 inner join\nt2 on t1.a = t2.b;\n QUERY PLAN\n----------------------------\n Hash Join\n Hash Cond: (t2.b = t1.a)\n -> Seq Scan on t2\n -> Hash\n -> Seq Scan on t1\n(5 rows)\n\nI've also left a bunch of XXX comments for things that I know need more thought.\n\nI believe we can propagate the joinrel's unique keys where the patch\nis currently doing it. I understand that in\npopulate_joinrel_with_paths() we do things like swapping LEFT JOINs\nfor RIGHT JOINs and switch the input rels around, but we do so only\nbecause it's equivalent, so I don't currently see why we can't take\nthe jointype for the SpecialJoinInfo. I need to know that as I'll need\nto ignore pushed down RestrictInfos for outer joins.\n\nI'm posting now as I know I've been mentioning this UniqueKeys idea\nfor quite a while and if it's not something that's going to get off\nthe ground, then it's better to figure that out now.", "msg_date": "Thu, 12 Mar 2020 20:51:11 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "Hi David:\n\nOn Thu, Mar 12, 2020 at 3:51 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 11 Mar 2020 at 17:23, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > Now I am convinced that we should maintain UniquePath on RelOptInfo,\n> > I would see how to work with \"Index Skip Scan\" patch.\n>\n> I've attached a very early proof of concept patch for unique keys.\n> The NULL detection stuff is not yet hooked up, so it'll currently do\n> the wrong thing for NULLable columns. 
I've left some code in there\n> with my current idea of how to handle that, but I'll need to add more\n> code both to look at the catalogue tables to see if there's a NOT NULL\n> constraint and also to check for strict quals that filter out NULLs.\n>\n> Additionally, I've not hooked up the collation checking stuff yet. I\n> just wanted to see if it would work ok for non-collatable types first.\n>\n> I've added a couple of lines to create_distinct_paths() to check if\n> the input_rel has the required UniqueKeys to skip doing the DISTINCT.\n> It seems to work, but my tests so far are limited to:\n>\n> create table t1(a int primary key, b int);\n> create table t2(a int primary key, b int);\n>\n> postgres=# -- t2 could duplicate t1, don't remove DISTINCT\n> postgres=# explain (costs off) select distinct t1.a from t1 inner join\n> t2 on t1.a = t2.b;\n> QUERY PLAN\n> ----------------------------------\n> HashAggregate\n> Group Key: t1.a\n> -> Hash Join\n> Hash Cond: (t2.b = t1.a)\n> -> Seq Scan on t2\n> -> Hash\n> -> Seq Scan on t1\n> (7 rows)\n>\n>\n> postgres=# -- neither rel can duplicate the other due to join on PK.\n> Remove DISTINCT\n> postgres=# explain (costs off) select distinct t1.a from t1 inner join\n> t2 on t1.a = t2.a;\n> QUERY PLAN\n> ----------------------------\n> Hash Join\n> Hash Cond: (t1.a = t2.a)\n> -> Seq Scan on t1\n> -> Hash\n> -> Seq Scan on t2\n> (5 rows)\n>\n>\n> postgres=# -- t2.a cannot duplicate t1 and t1.a is unique. Remove DISTINCT\n> postgres=# explain (costs off) select distinct t1.a from t1 inner join\n> t2 on t1.b = t2.a;\n> QUERY PLAN\n> ----------------------------\n> Hash Join\n> Hash Cond: (t1.b = t2.a)\n> -> Seq Scan on t1\n> -> Hash\n> -> Seq Scan on t2\n> (5 rows)\n>\n>\n> postgres=# -- t1.b can duplicate t2.a. 
Don't remove DISTINCT\n> postgres=# explain (costs off) select distinct t2.a from t1 inner join\n> t2 on t1.b = t2.a;\n> QUERY PLAN\n> ----------------------------------\n> HashAggregate\n> Group Key: t2.a\n> -> Hash Join\n> Hash Cond: (t1.b = t2.a)\n> -> Seq Scan on t1\n> -> Hash\n> -> Seq Scan on t2\n> (7 rows)\n>\n>\n> postgres=# -- t1.a cannot duplicate t2.a. Remove DISTINCT.\n> postgres=# explain (costs off) select distinct t2.a from t1 inner join\n> t2 on t1.a = t2.b;\n> QUERY PLAN\n> ----------------------------\n> Hash Join\n> Hash Cond: (t2.b = t1.a)\n> -> Seq Scan on t2\n> -> Hash\n> -> Seq Scan on t1\n> (5 rows)\n>\n> I've also left a bunch of XXX comments for things that I know need more\n> thought.\n>\n> I believe we can propagate the joinrel's unique keys where the patch\n> is currently doing it. I understand that in\n> populate_joinrel_with_paths() we do things like swapping LEFT JOINs\n> for RIGHT JOINs and switch the input rels around, but we do so only\n> because it's equivalent, so I don't currently see why we can't take\n> the jointype for the SpecialJoinInfo. I need to know that as I'll need\n> to ignore pushed down RestrictInfos for outer joins.\n>\n> I'm posting now as I know I've been mentioning this UniqueKeys idea\n> for quite a while and if it's not something that's going to get off\n> the ground, then it's better to figure that out now.\n>\n\nThanks for the code! Here is some points from me.\n\n1. 
for populate_baserel_uniquekeys, we need to handle the \"pk = Const\" case as\nwell\n(relation_has_unique_for has similar logic); currently the following\ndistinct path is still\nthere.\n\npostgres=# explain select distinct b from t100 where pk = 1;\n QUERY PLAN\n----------------------------------------------------------------------------------\n Unique (cost=8.18..8.19 rows=1 width=4)\n -> Sort (cost=8.18..8.19 rows=1 width=4)\n Sort Key: b\n -> Index Scan using t100_pkey on t100 (cost=0.15..8.17 rows=1\nwidth=4)\n Index Cond: (pk = 1)\n(5 rows)\n\nI think in this case, we can add both (pk) and (b) as the UniquePaths.\nIf so, we\ncan get more opportunities to reach our goal.\n\n2. As for propagate_unique_keys_to_joinrel, we can add 1 more\nUniquePath as\n(rel1_unique_paths, rel2_unique_paths) if the current rules don't apply,\nor else the following cases can't be handled.\n\npostgres=# explain select distinct t100.pk, t101.pk from t100, t101;\n QUERY PLAN\n--------------------------------------------------------------------------------\n Unique (cost=772674.11..810981.11 rows=5107600 width=8)\n -> Sort (cost=772674.11..785443.11 rows=5107600 width=8)\n Sort Key: t100.pk, t101.pk\n -> Nested Loop (cost=0.00..63915.85 rows=5107600 width=8)\n -> Seq Scan on t100 (cost=0.00..32.60 rows=2260 width=4)\n -> Materialize (cost=0.00..43.90 rows=2260 width=4)\n -> Seq Scan on t101 (cost=0.00..32.60 rows=2260\nwidth=4)\n(7 rows)\n\nBut if we add such a rule, the unique paths probably become much longer, so\nwe need\na strategy to tell if a UniquePath is useful for our query; if not, we\ncan ignore it.\nrel->reltarget may be good info for such an optimization. 
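To make the rel->reltarget idea a bit more concrete, here is a toy model of that filter (Python, purely for illustration; the function name and the data shapes are invented here, not taken from the patch): a unique key is only worth keeping for a rel if every expression in the key is still visible in the rel's target list.

```python
# Toy model only: invented names, not the patch's C code.
# A unique key can only be referenced above this rel if every
# expression in the key still appears in the rel's target list;
# otherwise keeping it just lengthens rel->uniquekeys for nothing.
def useful_uniquekeys(uniquekeys, reltarget_exprs):
    target = set(reltarget_exprs)
    return [uk for uk in uniquekeys if set(uk) <= target]

# "select b from t where m = 1" with unique keys (uk1), (uk2), (uk3):
# none of the key columns is in the target list, so all can be dropped.
print(useful_uniquekeys([("uk1",), ("uk2",), ("uk3",)], ["b"]))  # []
# A key whose columns all survive into the target list is kept.
print(useful_uniquekeys([("pk",), ("b",)], ["pk", "b"]))
```

Under that assumption, the "select b from t where m = 1" case above keeps none of its three keys.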
I think we can\ntake this into\nconsideration for pupulate_baserel_uniquekeys as well.\n\nFor the non_null info, Tom suggested to add maintain such info RelOptInfo,\nI have done that for the not_null_info for basic relation catalog, I think\nwe can\nmaintain the same flag for joinrel and the not null info from\nfind_nonnullable_vars as\nwell, but I still didn't find a good place to add that so far.\n\n\nA small question about the following code:\n\n+ if (relation_has_uniquekeys_for(root, input_rel,\nget_sortgrouplist_exprs(parse->distinctClause, parse->targetList), false))\n+ {\n+\n+ add_path(distinct_rel, (Path *)cheapest_input_path);\n+\n+ /* XXX yeah yeah, need to call the hooks etc. */\n+\n+ /* Now choose the best path(s) */\n+ set_cheapest(distinct_rel);\n+\n+ return distinct_rel;\n+ }\n\nSince we don't create new RelOptInfo/Path, do we need to call add_path and\nset_cheapest?\n\n\nBest Regards\nAndy Fan", "msg_date": "Fri, 13 Mar 2020 09:47:10 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Fri, 13 Mar 2020 at 14:47, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> 1. for pupulate_baserel_uniquekeys, we need handle the \"pk = Const\" as well.\n> (relation_has_unqiue_for has a similar logic) currently the following distinct path is still\n> there.\n\nYeah, I left a comment in propagate_unique_keys_to_joinrel() to\nmention that still needs to be done.\n\n> postgres=# explain select distinct b from t100 where pk = 1;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------\n> Unique (cost=8.18..8.19 rows=1 width=4)\n> -> Sort (cost=8.18..8.19 rows=1 width=4)\n> Sort Key: b\n> -> Index Scan using t100_pkey on t100 (cost=0.15..8.17 rows=1 width=4)\n> Index Cond: (pk = 1)\n> (5 rows)\n>\n> I think in this case, we can add both (pk) and (b) as the UniquePaths. If so we\n> can get more opportunities to reach our goal.\n\nThe UniqueKeySet containing \"b\" could only be added in the\ndistinct_rel in the upper planner. It must not change the input_rel\nfor the distinct.\n\nIt's likely best to steer clear of calling UniqueKeys UniquePaths as\nit might confuse people. The term \"path\" is used in PostgreSQL as a\nlightweight representation containing all the information required to\nbuild a plan node in createplan.c. More details in\nsrc/backend/optimizer/README.\n\n> 2. 
As for the propagate_unique_keys_to_joinrel, we can add 1 more UniquePath as\n> (rel1_unique_paths, rel2_unique_paths) if the current rules doesn't apply.\n> or else the following cases can't be handled.\n>\n> postgres=# explain select distinct t100.pk, t101.pk from t100, t101;\n> QUERY PLAN\n> --------------------------------------------------------------------------------\n> Unique (cost=772674.11..810981.11 rows=5107600 width=8)\n> -> Sort (cost=772674.11..785443.11 rows=5107600 width=8)\n> Sort Key: t100.pk, t101.pk\n> -> Nested Loop (cost=0.00..63915.85 rows=5107600 width=8)\n> -> Seq Scan on t100 (cost=0.00..32.60 rows=2260 width=4)\n> -> Materialize (cost=0.00..43.90 rows=2260 width=4)\n> -> Seq Scan on t101 (cost=0.00..32.60 rows=2260 width=4)\n> (7 rows)\n\nI don't really follow what you mean here. It seems to me there's no\nway we can skip doing DISTINCT in the case above. If you've just\nmissed out the join clause and you meant to have \"WHERE t100.pk =\nt101.pk\", then we can likely fix that later with some sort of\nfunctional dependency tracking. Likely we can just add a Relids field\nto UniqueKeySet to track which relids are functionally dependant on a\nrow from the UniqueKeySet's uk_exprs. That might be as simple as just\npull_varnos from the non-matched exprs and checking to ensure the\nresult is a subset of functionally dependant rels. I'd need to give\nthat more thought.\n\nWas this a case you had working in your patch?\n\n> But if we add such rule, the unique paths probably become much longer, so we need\n> a strategy to tell if the UniquePath is useful for our query, if not, we can ignore that.\n> rel->reltarget maybe a good info for such optimization. 
I think we can take this into\n> consideration for pupulate_baserel_uniquekeys as well.\n\nI don't really think the number of unique indexes in a base rel will\nreally ever get out of hand for legitimate cases.\npropagate_unique_keys_to_joinrel is just concatenating baserel\nUniqueKeySets to the joinrel. They're not copied, so it's just tagging\npointers onto the end of an array, which is at best a memcpy(), or at\nworst a realloc() then memcpy(). That's not so costly.\n\n> For the non_null info, Tom suggested to add maintain such info RelOptInfo,\n> I have done that for the not_null_info for basic relation catalog, I think we can\n> maintain the same flag for joinrel and the not null info from find_nonnullable_vars as\n> well, but I still didn't find a good place to add that so far.\n\nI'd considered just adding a get_notnull() function to lsyscache.c.\nJust below get_attname() looks like a good spot. I imagined just\nsetting the bit in the UniqueKeySet's non_null_keys field\ncorresponding to the column position from the index. I could see the\nbenefit of having a field in RelOptInfo if there was some way to\ndetermine the not-null properties of all columns in the table at once,\nbut there's not, so we're likely best just looking at the ones that\nthere are unique indexes on.\n\n> A small question about the following code:\n>\n> + if (relation_has_uniquekeys_for(root, input_rel, get_sortgrouplist_exprs(parse->distinctClause, parse->targetList), false))\n> + {\n> +\n> + add_path(distinct_rel, (Path *)cheapest_input_path);\n> +\n> + /* XXX yeah yeah, need to call the hooks etc. */\n> +\n> + /* Now choose the best path(s) */\n> + set_cheapest(distinct_rel);\n> +\n> + return distinct_rel;\n> + }\n>\n> Since we don't create new RelOptInfo/Path, do we need to call add_path and set_cheapest?\n\nThe distinct_rel already exists. add_path() is the standard way we\nhave of adding paths to the rel's pathlist. Why would you want to\nbypass that? 
set_cheapest() is our standard way of looking at the\npathlist and figuring out the least costly one. It's not a very hard\njob to do when there's just 1 path. Not sure why you'd want to do it\nanother way.\n\n\n", "msg_date": "Fri, 13 Mar 2020 16:46:27 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Fri, Mar 13, 2020 at 11:46 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 13 Mar 2020 at 14:47, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > 1. for pupulate_baserel_uniquekeys, we need handle the \"pk = Const\"\n> as well.\n> > (relation_has_unqiue_for has a similar logic) currently the following\n> distinct path is still\n> > there.\n>\n> Yeah, I left a comment in propagate_unique_keys_to_joinrel() to\n> mention that still needs to be done.\n\n> postgres=# explain select distinct b from t100 where pk = 1;\n> > QUERY PLAN\n> >\n> ----------------------------------------------------------------------------------\n> > Unique (cost=8.18..8.19 rows=1 width=4)\n> > -> Sort (cost=8.18..8.19 rows=1 width=4)\n> > Sort Key: b\n> > -> Index Scan using t100_pkey on t100 (cost=0.15..8.17 rows=1\n> width=4)\n> > Index Cond: (pk = 1)\n> > (5 rows)\n> >\n> > I think in this case, we can add both (pk) and (b) as the\n> UniquePaths. If so we\n> > can get more opportunities to reach our goal.\n>\n> The UniqueKeySet containing \"b\" could only be added in the\n> distinct_rel in the upper planner. It must not change the input_rel\n> for the distinct.\n>\n> I think we maintain UniqueKey even without distinct_rel, so at this\nstage,\nCan we say b is unique for this(no is possible)? If yes, we probably\nneed to set that information without consider the distinct clause.\n\nIt's likely best to steer clear of calling UniqueKeys UniquePaths as\n> it might confuse people. 
The term \"path\" is used in PostgreSQL as a\n> lightweight representation containing all the information required to\n> build a plan node in createplan.c. More details in\n> src/backend/optimizer/README.\n>\n>\nOK.\n\n\n> > 2. As for the propagate_unique_keys_to_joinrel, we can add 1 more\n> UniquePath as\n> > (rel1_unique_paths, rel2_unique_paths) if the current rules doesn't\n> apply.\n> > or else the following cases can't be handled.\n> >\n> > postgres=# explain select distinct t100.pk, t101.pk from t100, t101;\n> > QUERY PLAN\n> >\n> --------------------------------------------------------------------------------\n> > Unique (cost=772674.11..810981.11 rows=5107600 width=8)\n> > -> Sort (cost=772674.11..785443.11 rows=5107600 width=8)\n> > Sort Key: t100.pk, t101.pk\n> > -> Nested Loop (cost=0.00..63915.85 rows=5107600 width=8)\n> > -> Seq Scan on t100 (cost=0.00..32.60 rows=2260 width=4)\n> > -> Materialize (cost=0.00..43.90 rows=2260 width=4)\n> > -> Seq Scan on t101 (cost=0.00..32.60 rows=2260\n> width=4)\n> > (7 rows)\n>\n> I don't really follow what you mean here. It seems to me there's no\n> way we can skip doing DISTINCT in the case above. If you've just\n> missed out the join clause and you meant to have \"WHERE t100.pk =\n> t101.pk\", then we can likely fix that later with some sort of\n> functional dependency tracking.\n\n\nIn the above case the result should be unique, the knowledge behind that\nis if *we join 2 unique results in any join method, the result is unique as\nwell*\nin the above example, the final unique Key is (t100.pk, t101.pk).\n\n\n> Likely we can just add a Relids field\n> to UniqueKeySet to track which relids are functionally dependant on a\n> row from the UniqueKeySet's uk_exprs. That might be as simple as just\n> pull_varnos from the non-matched exprs and checking to ensure the\n> result is a subset of functionally dependant rels. 
I'd need to give\n> that more thought.\n>\n> Was this a case you had working in your patch?\n>\n\nI think we can do that after I get your UniqueKey idea, so, no, my\nprevious patch is not as smart\nas yours:)\n\n\n> > But if we add such rule, the unique paths probably become much longer,\n> so we need\n> > a strategy to tell if the UniquePath is useful for our query, if not,\n> we can ignore that.\n> > rel->reltarget maybe a good info for such optimization. I think we can\n> take this into\n> > consideration for pupulate_baserel_uniquekeys as well.\n>\n> I don't really think the number of unique indexes in a base rel will\n> really ever get out of hand for legitimate cases.\n> propagate_unique_keys_to_joinrel is just concatenating baserel\n> UniqueKeySets to the joinrel. They're not copied, so it's just tagging\n> pointers onto the end of an array, which is at best a memcpy(), or at\n> worst a realloc() then memcpy(). That's not so costly.\n>\n\nThe memcpy is not the key concern here. My main point is we need\nto focus on the length of RelOptInfo->uniquekeys. For example:\nt has 3 uk like this (uk1), (uk2), (uk3). And the query is\nselect b from t where m = 1; If so, there is no need to add these 3\nto UniqueKeys, so that we can keep rel->uniquekeys shorter.\n\nThe length of rel->uniquekeys may be a concern if we add the rule\nI suggested above, the (t100.pk, t101.pk) case. Think about this\nfor example:\n\n1. select .. from t1, t2, t3, t4...;\n2. suppose each table has 2 UniqueKeys, named (t{m}_uk{n})\n3. follow my above rule, (t1.pk1, t2.pk) is a UniqueKey for joinrel.\n4. 
suppose we join with the following order (t1 vs t2 vs t3 vs t4)\n\nFor (t1 vs t2), we need to add 4 more UniqueKeys for this joinrel:\n(t1_uk1, t2_uk1), (t1_uk1, t2_uk2), (t1_uk2, t2_uk1), (t1_uk2, t2_uk2)\n\nAfter we come to join the last one, the joinrel->uniquekeys will be much\nlonger,\nwhich makes the scan of it less efficient.\n\nBut this will not be an issue if my above rule is not adopted, so\nwe need to talk about that first.\n\n> For the non_null info, Tom suggested to add maintain such info\n> RelOptInfo,\n> > I have done that for the not_null_info for basic relation catalog, I\n> think we can\n> > maintain the same flag for joinrel and the not null info from\n> find_nonnullable_vars as\n> > well, but I still didn't find a good place to add that so far.\n>\n> I'd considered just adding a get_notnull() function to lsyscache.c.\n> Just below get_attname() looks like a good spot. I imagined just\n> setting the bit in the UniqueKeySet's non_null_keys field\n> corresponding to the column position from the index. I could see the\n> benefit of having a field in RelOptInfo if there was some way to\n> determine the not-null properties of all columns in the table at once,\n>\n\ndo you mean get the non-null properties from the catalog or from restrictinfo?\nif you mean the catalog, get_relation_info may be a good place for that.\n\nbut there's not, so we're likely best just looking at the ones that\n> there are unique indexes on.\n>\n\n\n> > A small question about the following code:\n> >\n> > + if (relation_has_uniquekeys_for(root, input_rel,\n> get_sortgrouplist_exprs(parse->distinctClause, parse->targetList), false))\n> > + {\n> > +\n> > + add_path(distinct_rel, (Path *)cheapest_input_path);\n> > +\n> > + /* XXX yeah yeah, need to call the hooks etc. 
*/\n> > +\n> > + /* Now choose the best path(s) */\n> > + set_cheapest(distinct_rel);\n> > +\n> > + return distinct_rel;\n> > + }\n> >\n> > Since we don't create new RelOptInfo/Path, do we need to call add_path\n> and set_cheapest?\n>\n> The distinct_rel already exists. add_path() is the standard way we\n> have of adding paths to the rel's pathlist. Why would you want to\n> bypass that? set_cheapest() is our standard way of looking at the\n> pathlist and figuring out the least costly one. It's not a very hard\n> job to do when there's just 1 path. Not sure why you'd want to do it\n> another way.\n>\n\nI got the point now. In this case, you create an new RelOptInfo\nnamed distinct_rel, so we *must* set it. Can we just return the input_rel\nin this case? if we can, we don't need that.", "msg_date": "Fri, 13 Mar 2020 12:50:55 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "Hi All:\n\nI have re-implemented the patch based on David's suggestion/code, Looks it\nworks well. The updated patch mainly includes:\n\n1. Maintain the not_null_colno in RelOptInfo, which includes the not null\nfrom\n catalog and the not null from vars.\n2. Add the restictinfo check at populate_baserel_uniquekeys. If we are sure\n about only 1 row returned, I add each expr in rel->reltarget->expr as a\nunique key.\n like (select a, b, c from t where pk = 1), the uk will be ( (a), (b),\n(c) )\n3. postpone the propagate_unique_keys_to_joinrel call to\npopulate_joinrel_with_paths\n since we know jointype at that time. so we can handle the semi/anti join\nspecially.\n4. Add the rule I suggested above, if both of the 2 relation yields the a\nunique result,\n the join result will be unique as well. the UK can be ( (rel1_uk1,\nrel1_uk2).. )\n5. If the unique key is impossible to be referenced by others, we can\nsafely ignore\n it in order to keep the (join)rel->unqiuekeys short.\n6. I only consider the not null check/opfamily check for the uniquekey\nwhich comes\n from UniqueIndex. I think that should be correct.\n7. I defined each uniquekey as List of Expr, so I didn't introduce new\nnode type.\n8. 
checked the uniquekeys's information before create_distinct_paths and\n create_group_paths ignore the new paths to be created if the\nsortgroupclauses\n is unique already or else create it and add the new uniquekey to the\n distinctrel/grouprel.\n\nThere are some things I still be in-progress, like:\n1. Partition table.\n2. union/union all\n3. maybe refactor the is_innerrel_unqiue_for/query_is_distinct_for to use\nUniqueKey\n4. if we are sure the groupby clause is unique, and we have aggregation\ncall, maybe we\nshould try Bapat's suggestion, we can use sort rather than hash. The\nstrategy sounds\nawesome, but I didn't check the details so far.\n5. more clearer commit message.\n6. any more ?\n\nAny feedback is welcome, Thanks for you for your any ideas, suggestions,\ndemo code!\n\nBest Regards\nAndy Fan", "msg_date": "Mon, 16 Mar 2020 01:01:11 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Mon, 16 Mar 2020 at 06:01, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> Hi All:\n>\n> I have re-implemented the patch based on David's suggestion/code, Looks it\n> works well. The updated patch mainly includes:\n>\n> 1. Maintain the not_null_colno in RelOptInfo, which includes the not null from\n> catalog and the not null from vars.\n\nWhat about non-nullability that we can derive from means other than\nNOT NULL constraints. Where will you track that now that you've\nremoved the UniqueKeySet type?\n\nTraditionally we use attno or attnum rather than colno for variable\nnames containing attribute numbers\n\n> 3. postpone the propagate_unique_keys_to_joinrel call to populate_joinrel_with_paths\n> since we know jointype at that time. so we can handle the semi/anti join specially.\n\nok, but the join type was known already where I was calling the\nfunction from. It just wasn't passed to the function.\n\n> 4. 
Add the rule I suggested above, if both of the 2 relation yields the a unique result,\n> the join result will be unique as well. the UK can be ( (rel1_uk1, rel1_uk2).. )\n\nI see. So basically you're saying that the joinrel's uniquekeys should\nbe the cartesian product of the unique rels from either side of the\njoin. I wonder if that's a special case we need to worry about too\nmuch. Surely it only applies for clauseless joins.\n\n> 5. If the unique key is impossible to be referenced by others, we can safely ignore\n> it in order to keep the (join)rel->unqiuekeys short.\n\nYou could probably have an equivalent of has_useful_pathkeys() and\npathkeys_useful_for_ordering()\n\n> 6. I only consider the not null check/opfamily check for the uniquekey which comes\n> from UniqueIndex. I think that should be correct.\n> 7. I defined each uniquekey as List of Expr, so I didn't introduce new node type.\n\nWhere will you store the collation Oid? I left comments to mention\nthat needed to be checked but just didn't wire it up.\n\n\n", "msg_date": "Wed, 18 Mar 2020 14:56:08 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "Hi David:\n\nThanks for your time.\n\nOn Wed, Mar 18, 2020 at 9:56 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Mon, 16 Mar 2020 at 06:01, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > Hi All:\n> >\n> > I have re-implemented the patch based on David's suggestion/code, Looks\n> it\n> > works well. The updated patch mainly includes:\n> >\n> > 1. Maintain the not_null_colno in RelOptInfo, which includes the not\n> null from\n> > catalog and the not null from vars.\n>\n> What about non-nullability that we can derive from means other than\n> NOT NULL constraints. 
Where will you track that now that you've\n> removed the UniqueKeySet type?\n>\n\nI tracked it in 'deconstruct_recurse', just before\nthe distribute_qual_to_rels call.\n\n+ ListCell *lc;\n+ foreach(lc, find_nonnullable_vars(qual))\n+ {\n+ Var *var = lfirst_node(Var, lc);\n+ RelOptInfo *rel =\nroot->simple_rel_array[var->varno];\n+ if (var->varattno > InvalidAttrNumber)\n+ rel->not_null_cols =\nbms_add_member(rel->not_null_cols, var->varattno);\n+ }\n\n\n> Traditionally we use attno or attnum rather than colno for variable\n> names containing attribute numbers\n>\n\nCurrently I use a list of Var for a UnqiueKey, I guess it is ok?\n\n\n>\n> > 3. postpone the propagate_unique_keys_to_joinrel call to\n> populate_joinrel_with_paths\n> > since we know jointype at that time. so we can handle the semi/anti\n> join specially.\n>\n> ok, but the join type was known already where I was calling the\n> function from. It just wasn't passed to the function.\n>\n> > 4. Add the rule I suggested above, if both of the 2 relation yields the\n> a unique result,\n> > the join result will be unique as well. the UK can be ( (rel1_uk1,\n> rel1_uk2).. )\n>\n> I see. So basically you're saying that the joinrel's uniquekeys should\n> be the cartesian product of the unique rels from either side of the\n> join. I wonder if that's a special case we need to worry about too\n> much. Surely it only applies for clauseless joins\n\n\nSome other cases we may need this as well:). like select m1.pk, m2.pk\nfrom m1, m2\nwhere m1.b = m2.b;\n\nThe cartesian product of the unique rels will make the unqiue keys too\nlong, so I maintain\nthe UnqiueKeyContext to make it short. The idea is if (UK1) is unique\nalready, no bother\nto add another UK as (UK1, UK2) which is just a superset of it.\n\n\n>\n>\n> 5. 
If the unique key is impossible to be referenced by others, we can\n> safely ignore\n> > it in order to keep the (join)rel->unqiuekeys short.\n>\n> You could probably have an equivalent of has_useful_pathkeys() and\n> pathkeys_useful_for_ordering()\n>\n>\nThanks for suggestion, I will do so in the v5-xx.patch.\n\n\n> > 6. I only consider the not null check/opfamily check for the uniquekey\n> which comes\n> > from UniqueIndex. I think that should be correct.\n> > 7. I defined each uniquekey as List of Expr, so I didn't introduce new\n> node type.\n>\n> Where will you store the collation Oid? I left comments to mention\n> that needed to be checked but just didn't wire it up.\n>\n\nThis is too embarrassed, I am not sure if it is safe to ignore it. I\nremoved it due to\nthe following reasons (sorry for that I didn't explain it carefully for the\nlast email).\n\n1. When we choose if an UK is usable, we have chance to compare the\ncollation info\nfor restrictinfo (where uk = 1) or target list (select uk from t) with\nthe indexinfo's collation,\nthe targetlist one has more trouble since we need to figure out the default\ncollation for it.\nHowever relation_has_unique_index_for has the same situation as us, it\nignores it as well.\nSee comment /* XXX at some point we may need to check collations here too.\n*/. It think\nif there are some reasons we can ignore that.\n\n2. What we expect from UK is:\na). Where m1.uniquekey = m2.b m2.uk will not be duplicated by this\njoinclause. Here\nif m1.uk has different collation, it will raise runtime error.\nb). Where m1.uniquekey collate 'xxxx' = m2.b. We may can't depends on\nthe run-time error this time. But if we are sure that *if uk is uk at\ndefault collation is unique,\nthen (uk collcate 'other-colation') is unique as well**. if so we may safe\nignore it as well.\nc). select uniquekey from t / select uniquekey collate 'xxxx' from t.\nThis have the same\nrequirement as item b).\n\n3). 
Looks maintain the collation for our case totally is a big effort,\nand user rarely use it, If\nmy expectation for 2.b is not true, I prefer to detect such case (user use\na different collation),\nwe can just ignore the UK for that.\n\nBut After all, I think this should be an open question for now.\n\n---\nAt last, I am so grateful for your suggestion/feedback, that's really\nheuristic and constructive.\nAnd so thanks Tom's for the quick review and suggest to add a new fields\nfor RelOptInfo,\nwithout it I don't think I can add a new field to a so important struct.\nAnd also thanks Bapat who\nexplains the thing more detailed. I'm now writing the code for partition\nindex stuff, which\nis a bit of boring, since every partition may have different unique index.\nI am expecting that\nI can finish it in the following 2 days, and hope you can have another\nround of review again.\n\nThanks for your feedback!\n\nBest Regards\nAndy Fan", "msg_date": "Wed, 18 Mar 2020 10:56:58 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "On Wed, 18 Mar 2020 at 15:57, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> I'm now writing the code for partition index stuff, which\n> is a bit of boring, since every partition may have different unique index.\n\nWhy is that case so different?\n\nFor a partitioned table to have a valid unique index, a unique index\nmust exist on each partition having columns that are a superset of the\npartition key columns. An IndexOptInfo will exist on the partitioned\ntable's RelOptInfo, in this case.\n\nAt the leaf partition level, wouldn't you just add the uniquekeys the\nsame as we do for base rels? Maybe only do it if\nenable_partitionwise_aggregation is on. Otherwise, I don't think we'll\ncurrently have a need for them. Currently, we don't do unique joins\nfor partition-wise joins. 
Perhaps uniquekeys will be a good way to fix\nthat omission in the future.\n\n\n", "msg_date": "Wed, 18 Mar 2020 17:12:56 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "Hi David:\n\nOn Wed, Mar 18, 2020 at 12:13 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 18 Mar 2020 at 15:57, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > I'm now writing the code for partition index stuff, which\n> > is a bit of boring, since every partition may have different unique\n> index.\n>\n> Why is that case so different?\n>\n> For a partitioned table to have a valid unique index, a unique index\n> must exist on each partition having columns that are a superset of the\n> partition key columns. An IndexOptInfo will exist on the partitioned\n> table's RelOptInfo, in this case.\n>\n> The main difference are caused:\n\n1. we can create unique index on some of partition only.\n\ncreate table q100 (a int, b int, c int) partition by range (b);\ncreate table q100_1 partition of q100 for values from (1) to (10);\ncreate table q100_2 partition of q100 for values from (11) to (20);\ncreate unique index q100_1_c on q100_1(c); // user may create this index\non q100_1 only\n\n2. The unique index may not contains part key as above.\n\nFor the above case, even the same index on all the partition as well, we\nstill can't\nuse it since the it unique on local partition only.\n\n3. So the unique index on a partition table can be used only if it\ncontains the partition key\nAND exists on all the partitions.\n\n4. When we apply the uniquekey_is_useful_for_rel, I compare the\ninformation between ind->indextlist\nand rel->reltarget, but the indextlist has a wrong varno, we we have to\nchange the varno with\nChangeVarNodes for the indextlist from childrel since the varno is for\nchildrel.\n\n5. 
When we detect the uk = 1 case, the uk is also present with\nparentrel->relid information, which\nwe may requires the ChangeVarNodes on childrel->indexinfo->indextlist as\nwell.\n\nEven the rules looks long, The run time should be very short since\nusually we don't have\nmany unique index on partition table.\n\n\n> At the leaf partition level, wouldn't you just add the uniquekeys the\n> same as we do for base rels?\n\n\nYes, But due to the uk of a childrel may be not useful for parent rel (the\nuk only exist\nin one partiton), so I think we can bypass if it is a child rel case?\n\n\nBest Regards\nAndy Fan", "msg_date": "Wed, 18 Mar 2020 12:57:57 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" }, { "msg_contents": "I have started the new thread [1] to continue talking about this.\nMr. cfbot is happy now.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAKU4AWrwZMAL%3DuaFUDMf4WGOVkEL3ONbatqju9nSXTUucpp_pw%40mail.gmail.com\n\nThanks\n\n>", "msg_date": "Wed, 25 Mar 2020 23:20:30 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Erase the distinctClause if the result is unique by\n definition" } ]
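
The superset-pruning idea discussed in this thread — once (uk1) is known unique, a longer key such as (uk1, uk2) carries no new information and only makes later scans of the uniquekey list slower — can be sketched outside the planner. This is an illustrative Python model of the rule, not the patch's C code; the function names are invented for the example:

```python
from itertools import product

def prune_superset_keys(keys):
    """Drop every key that is a superset of some other key: once (uk1)
    is known unique, (uk1, uk2) adds no information and only makes later
    scans of the uniquekey list slower."""
    keys = sorted(keys, key=len)            # check small keys first
    kept = []
    for key in keys:
        if not any(other <= key for other in kept):
            kept.append(key)
    return kept

def join_uniquekeys(outer_keys, inner_keys):
    """For a join where each side already yields unique rows, every
    pairing of an outer key with an inner key is unique over the join
    result; prune the pairings down to the minimal set."""
    combined = [outer | inner for outer, inner in product(outer_keys, inner_keys)]
    return prune_superset_keys(combined)
```

For the (t1_uk1, t1_uk2) x (t2_uk1, t2_uk2) case from the thread, all four pairings survive pruning, which is exactly why keeping the per-relation key lists short matters.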
[ { "msg_contents": "Eventually we find out that logical replication in the current version \nof Postgres works significantly slower on table with replica identity \nfull than old pglogical implementation.\n\nThe comment to RelationFindReplTupleSeq says:\n\n     Note that this stops on the first matching tuple.\n\nBut actually this function continue traversal until end of the table \neven if tuple was found.\nI wonder if break; should be added to the end of for loop.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Fri, 31 Jan 2020 16:46:36 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Missing break in RelationFindReplTupleSeq" }, { "msg_contents": "On 2020-Jan-31, Konstantin Knizhnik wrote:\n\n> Eventually we find out that logical replication in the current version of\n> Postgres works significantly slower on table with replica identity full than\n> old pglogical implementation.\n> \n> The comment to RelationFindReplTupleSeq says:\n> \n>     Note that this stops on the first matching tuple.\n> \n> But actually this function continue traversal until end of the table even if\n> tuple was found.\n> I wonder if break; should be added to the end of for loop.\n\nWow, you're right, and the \"break\" is missing there. 
I propose it like\nthis.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 31 Jan 2020 10:58:31 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Missing break in RelationFindReplTupleSeq" }, { "msg_contents": "On 2020-Jan-31, Alvaro Herrera wrote:\n\n> On 2020-Jan-31, Konstantin Knizhnik wrote:\n> \n> > Eventually we find out that logical replication in the current version of\n> > Postgres works significantly slower on table with replica identity full than\n> > old pglogical implementation.\n> > \n> > The comment to RelationFindReplTupleSeq says:\n> > \n> >     Note that this stops on the first matching tuple.\n> > \n> > But actually this function continue traversal until end of the table even if\n> > tuple was found.\n> > I wonder if break; should be added to the end of for loop.\n> \n> Wow, you're right, and the \"break\" is missing there. I propose it like\n> this.\n\nPushed, thanks for reporting.\n\nI had one very strange thing happen while testing this -- I put the\ntests to run in all branches in parallel, and they took about 12 minutes\nto finish instead of the normal 5. I tried to repeat this result but\nwas unable to do so. My only hypothesis is that my laptop entered some\nkind of low-performance mode.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 3 Feb 2020 19:04:04 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Missing break in RelationFindReplTupleSeq" } ]
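
The cost of the missing `break` reported in this thread is easy to demonstrate in miniature. Below is an illustrative Python sketch — not the actual C function — of a sequential scan that honors the "stops on the first matching tuple" contract; the fix is the analogous early exit from the scan loop:

```python
def find_repl_tuple_seq(table, match):
    """Sequentially scan `table` (a list of row dicts) and stop on the
    first matching tuple -- the behavior the function's comment promises.
    Without the early exit (the missing `break`), the scan would keep
    walking to the end of the table even after a match was found."""
    for row in table:
        if match(row):
            return row      # found it: stop scanning immediately
    return None
```

On a table with replica identity full, every replicated UPDATE/DELETE triggers such a scan, which is why the absence of the early exit showed up as a large slowdown.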
[ { "msg_contents": "I would like to introduce the ability to get object DDL (server-side) by introducing a new function with roughly the following prototype:\n\nget_ddl(regclass)\nRETURNS text\nLANGUAGE C STRICT PARALLEL SAFE;\n\nA previous conversation seemed to encourage the development of this feature\n\nhttps://www.postgresql.org/message-id/CADkLM=fxfsrHASKk_bY_A4uomJ1Te5MfGgD_rwwQfV8wP68ewg@mail.gmail.com\n\nI would like to start work on this patch but wanted acceptance on the function signature. \n\nThank you!\n\n\n", "msg_date": "Fri, 31 Jan 2020 13:59:28 -0500", "msg_from": "\"Jordan Deitch\" <jd@rsa.pub>", "msg_from_op": true, "msg_subject": "get a relations DDL server-side" }, { "msg_contents": "\"Jordan Deitch\" <jd@rsa.pub> writes:\n> I would like to introduce the ability to get object DDL (server-side) by introducing a new function with roughly the following prototype:\n> get_ddl(regclass)\n> RETURNS text\n> LANGUAGE C STRICT PARALLEL SAFE;\n\nUmm ... \"regclass\" would only be appropriate for relations.\n\nIf you actually want to support more than one type of object with a single\nfunction, you'll need two OIDs. Catalog's OID and object's OID are the\nusual choices, per pg_describe_object() and similar functions.\n\nI don't think \"get_ddl\" is a particularly apt function name, either.\nIt ignores the precedent of existing functions with essentially this\nsame functionality, such as pg_get_triggerdef(), pg_get_constraintdef(),\netc. One wonders why duplicate that existing functionality, so maybe\nyou should think about adding per-object-type functions instead of\ntrying to make one function to rule them all.\n\nThe larger reason why this doesn't exist already, BTW, is that we've\ntended to find that it's not all that useful to get back a monolithic\nchunk of DDL text for complicated objects such as tables. 
You should\nprovide a little more clarity as to what use-case you foresee, because\notherwise there are just a *whole* lot of things that aren't clear.\nSome examples:\n\n* Should the output include a CREATE COMMENT if the object has a comment?\n* What about ownership and ACL (grants)?\n* On tables, are foreign keys part of the table, or are they distinct\n objects? (Hint: both answers can be correct depending on use-case)\n* How about indexes, and do you want to treat constraint indexes\n differently from other ones? (Constraint indexes *could* be made\n part of the table's DDL, but other indexes need a separate CREATE)\n* Do you need options, such as whether to pretty-print expressions?\n\nYou might also find it instructive to dig through the archives for\npast discussions about moving more of pg_dump's logic into the server;\nthat's the area where this has come up over and over.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 31 Jan 2020 15:01:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: get a relations DDL server-side" } ]
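
Tom's point about identifying arbitrary objects by catalog OID plus object OID, and reusing the existing per-object-type deparse functions, can be illustrated from a client's perspective. The `pg_get_*def()` functions named below are real PostgreSQL server functions; the dispatch table and `build_ddl_query()` helper are hypothetical, invented only for this sketch:

```python
# Hypothetical client-side dispatcher, for illustration only.  The
# pg_get_*def() deparse functions are real server-side functions; the
# DEPARSE_BY_CATALOG table and build_ddl_query() helper are invented here.
DEPARSE_BY_CATALOG = {
    "pg_trigger": "pg_get_triggerdef",
    "pg_constraint": "pg_get_constraintdef",
    "pg_proc": "pg_get_functiondef",
}

def build_ddl_query(catalog, oid):
    """Identify an object the way pg_describe_object() does -- catalog
    plus object OID -- and route to the matching per-type deparse call."""
    func = DEPARSE_BY_CATALOG.get(catalog)
    if func is None:
        raise ValueError(f"no deparse function known for catalog {catalog!r}")
    return f"SELECT {func}({int(oid)})"
```

The gap Tom describes is visible even in this toy: object types with no server-side deparse function (tables among them) have no entry to dispatch to.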
[ { "msg_contents": "We recently noticed that vacuum buffer counters wraparound in extreme\ncases, with ridiculous results. Example:\n\n2020-01-06 16:38:38.010 EST [45625-1] app= LOG: automatic vacuum of table \"somtab.sf.foobar\": index scans: 17\n pages: 0 removed, 207650641 remain, 0 skipped due to pins, 13419403 skipped frozen\n tuples: 141265419 removed, 3186614627 remain, 87783760 are dead but not yet removable\n buffer usage: -2022059267 hits, -17141881 misses, 1252507767 dirtied\n avg read rate: -0.043 MB/s, avg write rate: 3.146 MB/s\n system usage: CPU 107819.92s/2932957.75u sec elapsed 3110498.10 sec\n\nThat's to be expected, as tables exist that are large enough for 4 billion\nbuffer accesses to be a possibility. Let's widen the counters, as in the\nattached patch.\n\nI propose to backpatch this.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 31 Jan 2020 17:59:26 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "widen vacuum buffer counters" }, { "msg_contents": "On Fri, Jan 31, 2020 at 9:59 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> We recently noticed that vacuum buffer counters wraparound in extreme\n> cases, with ridiculous results. Example:\n>\n> 2020-01-06 16:38:38.010 EST [45625-1] app= LOG: automatic vacuum of table \"somtab.sf.foobar\": index scans: 17\n> pages: 0 removed, 207650641 remain, 0 skipped due to pins, 13419403 skipped frozen\n> tuples: 141265419 removed, 3186614627 remain, 87783760 are dead but not yet removable\n> buffer usage: -2022059267 hits, -17141881 misses, 1252507767 dirtied\n> avg read rate: -0.043 MB/s, avg write rate: 3.146 MB/s\n> system usage: CPU 107819.92s/2932957.75u sec elapsed 3110498.10 sec\n>\n> That's to be expected, as tables exist that are large enough for 4 billion\n> buffer accesses to be a possibility. 
Let's widen the counters, as in the\n> attached patch.\n>\n> I propose to backpatch this.\n\n+1, and patch LGTM.\n\n\n", "msg_date": "Fri, 31 Jan 2020 22:10:50 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: widen vacuum buffer counters" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> We recently noticed that vacuum buffer counters wraparound in extreme\n> cases, with ridiculous results.\n\nUgh.\n\n> I propose to backpatch this.\n\n+1 for widening these counters, but since they're global variables, -0.2\nor so for back-patching. I don't know of any reason that an extension\nwould be touching these, but I feel like the problem isn't severe enough\nto justify taking an ABI-break risk.\n\nAlso, %zd is the wrong format code for int64. Recommended practice\nthese days is to use \"%lld\" with an explicit cast of the printf argument\nto long long (just to be sure). That doesn't work safely before v12,\nand if you did insist on back-patching further, you'd need to jump\nthrough hoops to avoid having platform-specific format codes in a\ntranslatable string. (The side-effects for translation seem like\nan independent argument against back-patching.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 31 Jan 2020 17:13:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: widen vacuum buffer counters" }, { "msg_contents": "On Fri, Jan 31, 2020 at 05:13:53PM -0500, Tom Lane wrote:\n> +1 for widening these counters, but since they're global variables, -0.2\n> or so for back-patching. I don't know of any reason that an extension\n> would be touching these, but I feel like the problem isn't severe enough\n> to justify taking an ABI-break risk.\n\nI would not recommend doing a back-patch because of that. I don't\nthink that's worth taking any risk. Extension authors can have a lot\nof imagination.\n\n> Also, %zd is the wrong format code for int64. 
Recommended practice\n> these days is to use \"%lld\" with an explicit cast of the printf argument\n> to long long (just to be sure). That doesn't work safely before v12,\n> and if you did insist on back-patching further, you'd need to jump\n> through hoops to avoid having platform-specific format codes in a\n> translatable string. (The side-effects for translation seem like\n> an independent argument against back-patching.)\n\nSurely you meant INT64_FORMAT here? Anyway, looking at the patch,\ncouldn't we just use uint64?\n--\nMichael", "msg_date": "Sat, 1 Feb 2020 18:52:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: widen vacuum buffer counters" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Jan 31, 2020 at 05:13:53PM -0500, Tom Lane wrote:\n>> Also, %zd is the wrong format code for int64. Recommended practice\n>> these days is to use \"%lld\" with an explicit cast of the printf argument\n>> to long long (just to be sure). That doesn't work safely before v12,\n>> and if you did insist on back-patching further, you'd need to jump\n>> through hoops to avoid having platform-specific format codes in a\n>> translatable string. (The side-effects for translation seem like\n>> an independent argument against back-patching.)\n\n> Surely you meant INT64_FORMAT here?\n\nNo, because that varies depending on platform, so using it in a\ntranslatable string is a bad idea. See e.g. 6a1cd8b92.\n\n> Anyway, looking at the patch,\n> couldn't we just use uint64?\n\nYeah, I was wondering if those counters shouldn't be unsigned, too.\nProbably doesn't matter once we widen them to 64 bits though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 01 Feb 2020 10:26:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: widen vacuum buffer counters" }, { "msg_contents": "On 2020-Jan-31, Tom Lane wrote:\n\n> Also, %zd is the wrong format code for int64. 
Recommended practice\n> these days is to use \"%lld\" with an explicit cast of the printf argument\n> to long long (just to be sure). That doesn't work safely before v12,\n> and if you did insist on back-patching further, you'd need to jump\n> through hoops to avoid having platform-specific format codes in a\n> translatable string. (The side-effects for translation seem like\n> an independent argument against back-patching.)\n\nPushed with that change; did not backpatch, because I don't think it's\nreally worth the possible breakage :-)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 5 Feb 2020 17:18:08 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: widen vacuum buffer counters" } ]
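For anyone applying Tom's recommendation from this thread in their own code, the pattern can be sketched as below. The helper name and message text are illustrative only (this is not PostgreSQL source); the point is the explicit cast to long long, which keeps "%lld" correct even on platforms where a 64-bit integer is typedef'd as plain long, and keeps the format string identical everywhere, unlike the platform-specific INT64_FORMAT:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical helper: format two 64-bit counters into buf using
 * "%lld" plus an explicit (long long) cast on each argument, as
 * recommended above.  Because "%lld" does not vary per platform,
 * the message string stays safe to mark as translatable.
 */
int
format_buffer_usage(char *buf, size_t buflen, int64_t hits, int64_t misses)
{
    return snprintf(buf, buflen, "buffer usage: %lld hits, %lld misses",
                    (long long) hits, (long long) misses);
}
```

Values past the 32-bit wraparound point, which is exactly the bug that motivated the patch, round-trip correctly through this formatting.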
[ { "msg_contents": "Hi,\n\npg_current_wal_flush_lsn() returns pg_lsn: Get current write-ahead log flush location\npg_current_wal_insert_lsn() returns pg_lsn: Get current write-ahead log insert location\npg_current_wal_lsn() returns pg_lsn: Get current write-ahead log write location\n\nI guess write is about how many bytes written in shared cache, and\nflush is flush to file, which makes it persistent.\n\nCan anybody give an official explanation?\nThanks.\n\nRegards,\nJinhua Luo\n\n\n", "msg_date": "Sat, 1 Feb 2020 11:18:42 +0800", "msg_from": "Jinhua Luo <luajit.io@gmail.com>", "msg_from_op": true, "msg_subject": "What's difference among insert/write/flush lsn?" }, { "msg_contents": "On Sat, Feb 1, 2020 at 11:18:42AM +0800, Jinhua Luo wrote:\n> Hi,\n> \n> pg_current_wal_flush_lsn() returns pg_lsn: Get current write-ahead log flush location\n> pg_current_wal_insert_lsn() returns pg_lsn: Get current write-ahead log insert location\n> pg_current_wal_lsn() returns pg_lsn: Get current write-ahead log write location\n> \n> I guess write is about how many bytes written in shared cache, and\n> flush is flush to file, which makes it persistent.\n> \n> Can anybody give an official explanation?\n\nI think the insert location is where data is being added to WAL, the\nwrite location is where it was last written to the file system, and\nflush is the last time it was flushed to storage.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EnterpriseDB                             https://enterprisedb.com\n\n+ As you are, so once was I.  As I am, so you will be. +\n+                      Ancient Roman grave inscription +\n\n\n", "msg_date": "Sat, 7 Mar 2020 18:24:56 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: What's difference among insert/write/flush lsn?" } ]
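Bruce's answer in the thread above can be made concrete: the three functions report three positions in the WAL stream that only ever move forward, with flush <= write <= insert at all times. The toy model below is purely illustrative (it is not PostgreSQL's actual XLog code); it only shows when each position advances:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of the three WAL positions:
 *   insert - pg_current_wal_insert_lsn(): end of the last record
 *            placed into the in-memory WAL buffers
 *   write  - pg_current_wal_lsn(): how far WAL has been written
 *            out to the file system via write()
 *   flush  - pg_current_wal_flush_lsn(): how far WAL has been
 *            fsync'd, i.e. is durable on storage
 */
typedef struct
{
    uint64_t insert_lsn;
    uint64_t write_lsn;
    uint64_t flush_lsn;
} toy_wal;

/* Inserting a record only advances the in-memory insert position. */
void toy_insert(toy_wal *w, uint64_t reclen) { w->insert_lsn += reclen; }

/* Writing pushes buffered WAL out to the kernel, up to the insert point. */
void toy_write(toy_wal *w) { w->write_lsn = w->insert_lsn; }

/* Flushing (fsync) makes whatever was already written durable. */
void toy_flush(toy_wal *w) { w->flush_lsn = w->write_lsn; }
```

On a live server the same ordering can be observed by selecting pg_current_wal_flush_lsn(), pg_current_wal_lsn() and pg_current_wal_insert_lsn() in one query and comparing the results.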
[ { "msg_contents": "Hello\n\nAt my current job, we have a lot of multi-tenant databases, thus with tables \ncontaining a tenant_id column. Such a column introduces a severe bias in \nstatistics estimation since any other FK in the next columns is very likely to \nhave a functional dependency on the tenant id. We found several queries where \nthis functional dependency messed up the estimations so much the optimizer \nchose wrong plans.\nWhen we tried to use extended statistics with CREATE STATISTIC on tenant_id, \nother_id, we noticed that the current implementation for detecting functional \ndependency lacks two features (at least in our use case):\n- support for IN clauses\n- support for the array contains operator (that could be considered as a \nspecial case of IN)\n\nAfter digging in the source code, I think the lack of support for IN clauses \nis an oversight and due to the fact that IN clauses are ScalarArrayOpExpr \ninstead of OpExpr. The attached patch fixes this by simply copying the code-\npath for OpExpr and changing the type name. It compiles and the results are \ncorrect, with a dependency being correctly taken into consideration when \nestimating rows. If you think such a copy paste is bad and should be factored \nin another static bool function, please say so and I will happily provide an \nupdated patch.\nThe lack of support for @> operator, on the other hand, seems to be a decision \ntaken when writing the initial code, but I can not find any mathematical nor \ntechnical reason for it. The current code restricts dependency calculation to \nthe = operator, obviously because inequality operators are not going to \nwork... but array contains is just several = operators grouped, thus the same \nfor the dependency calculation. 
The second patch refactors the operator check \nin order to also include array contains.\n\nI tested the patches on current HEAD, but I can test and provide back-ported \nversions of the patch for other versions if needed (this code path hardly \nchanged since its introduction in 10).\n\nBest regards\n\n Pierre Ducroquet", "msg_date": "Sat, 01 Feb 2020 08:51:04 +0100", "msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>", "msg_from_op": true, "msg_subject": "PATCH: add support for IN and @> in functional-dependency statistics\n use" }, { "msg_contents": "On Sat, Feb 01, 2020 at 08:51:04AM +0100, Pierre Ducroquet wrote:\n>Hello\n>\n>At my current job, we have a lot of multi-tenant databases, thus with tables\n>containing a tenant_id column. Such a column introduces a severe bias in\n>statistics estimation since any other FK in the next columns is very likely to\n>have a functional dependency on the tenant id. We found several queries where\n>this functional dependency messed up the estimations so much the optimizer\n>chose wrong plans.\n>When we tried to use extended statistics with CREATE STATISTIC on tenant_id,\n>other_id, we noticed that the current implementation for detecting functional\n>dependency lacks two features (at least in our use case):\n>- support for IN clauses\n>- support for the array contains operator (that could be considered as a\n>special case of IN)\n>\n\nThanks for the patch. I don't think the lack of support for these clause\ntypes is an oversight - we haven't done them because we were not quite\nsure the functional dependencies can actually apply to them. But maybe\nwe can support them, I'm not against that in principle.\n\n>After digging in the source code, I think the lack of support for IN clauses\n>is an oversight and due to the fact that IN clauses are ScalarArrayOpExpr\n>instead of OpExpr. The attached patch fixes this by simply copying the code-\n>path for OpExpr and changing the type name. 
It compiles and the results are\n>correct, with a dependency being correctly taken into consideration when\n>estimating rows. If you think such a copy\n>paste is bad and should be factored\n>in another static bool function, please say so and I will happily provide an\n>updated patch.\n\nHmmm. Consider a query like this:\n\n ... WHERE tenant_id = 1 AND another_column IN (2,3)\n\nwhich kinda contradicts the idea of functional dependencies that knowing\na value in one column, tells us something about a value in a second\ncolumn. But that assumes a single value, which is not quite true here.\n\nThe above query is essentially the same thing as\n\n ... WHERE (tenant_id=1 AND (another_column=2 OR another_column=3))\n\nand also\n\n ... WHERE (tenant_id=1 AND another_column=2)\n OR (tenant_id=1 AND another_column=3)\n\nat which point we could apply functional dependencies - but we'd do it\nonce for each AND-clause, and then combine the results to compute\nselectivity for the OR clause.\n\nBut this means that if we apply functional dependencies directly to the\noriginal clause, it'll be inconsistent. Which seems a bit unfortunate.\n\nOr do I get this wrong?\n\nBTW the code added in the 0001 patch is the same as for is_opclause, so\nmaybe we can simply do\n\n if (is_opclause(rinfo->clause) ||\n IsA(rinfo->clause, ScalarArrayOpExpr))\n {\n ...\n }\n\ninstead of just duplicating the code. We also need at least some\nregression tests, testing functional dependencies with this clause type.\n\n>The lack of support for @> operator, on the other hand, seems to be a decision\n>taken when writing the initial code, but I can not find any mathematical nor\n>technical reason for it. The current code restricts dependency calculation to\n>the = operator, obviously because inequality operators are not going to\n>work... but array contains is just several = operators grouped, thus the same\n>for the dependency calculation. 
The second patch refactors the operator check\n>in order to also include array contains.\n>\n\nNo concrete opinion on this yet. I think my concerns are pretty similar\nto the IN clause, although I'm not sure what you mean by \"this could be\nconsidered as special case of IN\".\n\n\n\n>I tested the patches on current HEAD, but I can test and provide back-ported\n>versions of the patch for other versions if needed (this code path hardly\n>changed since its introduction in 10).\n\nI think the chance of this getting backpatched is zero, because it might\neasily break existing apps.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 1 Feb 2020 15:24:46 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Saturday, February 1, 2020 3:24:46 PM CET Tomas Vondra wrote:\n> On Sat, Feb 01, 2020 at 08:51:04AM +0100, Pierre Ducroquet wrote:\n> >Hello\n> >\n> >At my current job, we have a lot of multi-tenant databases, thus with\n> >tables containing a tenant_id column. Such a column introduces a severe\n> >bias in statistics estimation since any other FK in the next columns is\n> >very likely to have a functional dependency on the tenant id. We found\n> >several queries where this functional dependency messed up the estimations\n> >so much the optimizer chose wrong plans.\n> >When we tried to use extended statistics with CREATE STATISTIC on\n> >tenant_id, other_id, we noticed that the current implementation for\n> >detecting functional dependency lacks two features (at least in our use\n> >case):\n> >- support for IN clauses\n> >- support for the array contains operator (that could be considered as a\n> >special case of IN)\n> \n> Thanks for the patch. 
I don't think the lack of support for these clause\n> types is an oversight - we haven't done them because we were not quite\n> sure the functional dependencies can actually apply to them. But maybe\n> we can support them, I'm not against that in principle.\n> \n> >After digging in the source code, I think the lack of support for IN\n> >clauses is an oversight and due to the fact that IN clauses are\n> >ScalarArrayOpExpr instead of OpExpr. The attached patch fixes this by\n> >simply copying the code- path for OpExpr and changing the type name. It\n> >compiles and the results are correct, with a dependency being correctly\n> >taken into consideration when estimating rows. If you think such a copy\n> >paste is bad and should be factored in another static bool function,\n> >please say so and I will happily provide an updated patch.\n> \n> Hmmm. Consider a query like this:\n> \n> ... WHERE tenant_id = 1 AND another_column IN (2,3)\n> \n> which kinda contradicts the idea of functional dependencies that knowing\n> a value in one column, tells us something about a value in a second\n> column. But that assumes a single value, which is not quite true here.\n> \n> The above query is essentially the same thing as\n> \n> ... WHERE (tenant_id=1 AND (another_column=2 OR another_column=3))\n> \n> and also\n> \n> ... WHERE (tenant_id=1 AND another_column=2)\n> OR (tenant_id=1 AND another_column=3)\n> \n> at wchich point we could apply functional dependencies - but we'd do it\n> once for each AND-clause, and then combine the results to compute\n> selectivity for the OR clause.\n> \n> But this means that if we apply functional dependencies directly to the\n> original clause, it'll be inconsistent. 
Which seems a bit unfortunate.\n> \n> Or do I get this wrong?\n\nIn my tests, I've got a table with two columns a and b, generated this way:\n CREATE TABLE test (a INT, b INT)\n AS SELECT i, i/10 FROM \n generate_series(1, 100000) s(i),\n generate_series(1, 5) x;\n\nWith statistics defined on the a, b columns\n\nHere are the estimated selectivity results without any patch:\n\nSELECT * FROM test WHERE a = 1 AND b = 1 : 5\nSELECT * FROM test WHERE a = 1 AND (b = 1 OR b = 2) : 1\nSELECT * FROM test WHERE (a = 1 AND b = 1) OR (a = 1 AND b = 2) : 1\nSELECT * FROM test WHERE a = 1 AND b IN (1, 2) : 1\n\nWith the patch, the estimated rows of the last query goes back to 5, which is \nmore logical. The other ones don't change.\n\n> BTW the code added in the 0001 patch is the same as for is_opclause, so\n> maybe we can simply do\n> \n> if (is_opclause(rinfo->clause) ||\n> IsA(rinfo->clause, ScalarArrayOpExpr))\n> {\n> ...\n> }\n> \n> instead of just duplicating the code.\n\nI would love doing that, but the ScalarArrayOpExpr and OpExpr are not binary \ncompatible for the members used here. In ScalarArrayOpExpr, on AMD64, args is \nat offset 24 and opno at 4, while they are at 32 and 4 in OpExpr. I can work \naround with this kind of code, but I don't like it much:\nList *args;\nOid opno;\nif (IsA(rinfo->clause, OpExpr))\n{\n /* If it's an opclause, we will check for Var = Const or Const = Var. */\n OpExpr\t *expr = (OpExpr *) rinfo->clause;\n args = expr->args;\n opno = expr->opno;\n}\nelse if (IsA(rinfo->clause, ScalarArrayOpExpr))\n{\n /* If it's a ScalarArrayOpExpr, check for Var IN Const. */\n ScalarArrayOpExpr *expr = (ScalarArrayOpExpr *) rinfo->clause;\n args = expr->args;\n opno = expr->opno;\n}\n\nOr I can rewrite it in C++ to play with templates... 
:)\n\n> We also need some at least some\n> regression tests, testing functional dependencies with this clause type.\n\nAgreed\n\n> >The lack of support for @> operator, on the other hand, seems to be a\n> >decision taken when writing the initial code, but I can not find any\n> >mathematical nor technical reason for it. The current code restricts\n> >dependency calculation to the = operator, obviously because inequality\n> >operators are not going to work... but array contains is just several =\n> >operators grouped, thus the same for the dependency calculation. The\n> >second patch refactors the operator check in order to also include array\n> >contains.\n> \n> No concrete opinion on this yet. I think my concerns are pretty similar\n> to the IN clause, although I'm not sure what you mean by \"this could be\n> considered as special case of IN\".\n\nI meant from a mathematical point of view.\n\n> >I tested the patches on current HEAD, but I can test and provide\n> >back-ported versions of the patch for other versions if needed (this code\n> >path hardly changed since its introduction in 10).\n> \n> I think the chance of this getting backpatched is zero, because it might\n> easily break existing apps.\n\nI understand\n\n\n\n\n", "msg_date": "Sun, 02 Feb 2020 10:59:32 +0100", "msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>", "msg_from_op": true, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Sun, Feb 02, 2020 at 10:59:32AM +0100, Pierre Ducroquet wrote:\n>On Saturday, February 1, 2020 3:24:46 PM CET Tomas Vondra wrote:\n>> On Sat, Feb 01, 2020 at 08:51:04AM +0100, Pierre Ducroquet wrote:\n>> >Hello\n>> >\n>> >At my current job, we have a lot of multi-tenant databases, thus with\n>> >tables containing a tenant_id column. 
Such a column introduces a severe\n>> >bias in statistics estimation since any other FK in the next columns is\n>> >very likely to have a functional dependency on the tenant id. We found\n>> >several queries where this functional dependency messed up the estimations\n>> >so much the optimizer chose wrong plans.\n>> >When we tried to use extended statistics with CREATE STATISTIC on\n>> >tenant_id, other_id, we noticed that the current implementation for\n>> >detecting functional dependency lacks two features (at least in our use\n>> >case):\n>> >- support for IN clauses\n>> >- support for the array contains operator (that could be considered as a\n>> >special case of IN)\n>>\n>> Thanks for the patch. I don't think the lack of support for these clause\n>> types is an oversight - we haven't done them because we were not quite\n>> sure the functional dependencies can actually apply to them. But maybe\n>> we can support them, I'm not against that in principle.\n>>\n>> >After digging in the source code, I think the lack of support for IN\n>> >clauses is an oversight and due to the fact that IN clauses are\n>> >ScalarArrayOpExpr instead of OpExpr. The attached patch fixes this by\n>> >simply copying the code- path for OpExpr and changing the type name. It\n>> >compiles and the results are correct, with a dependency being correctly\n>> >taken into consideration when estimating rows. If you think such a copy\n>> >paste is bad and should be factored in another static bool function,\n>> >please say so and I will happily provide an updated patch.\n>>\n>> Hmmm. Consider a query like this:\n>>\n>> ... WHERE tenant_id = 1 AND another_column IN (2,3)\n>>\n>> which kinda contradicts the idea of functional dependencies that knowing\n>> a value in one column, tells us something about a value in a second\n>> column. But that assumes a single value, which is not quite true here.\n>>\n>> The above query is essentially the same thing as\n>>\n>> ... 
WHERE (tenant_id=1 AND (another_column=2 OR another_column=3))\n>>\n>> and also\n>>\n>> ... WHERE (tenant_id=1 AND another_column=2)\n>> OR (tenant_id=1 AND another_column=3)\n>>\n>> at wchich point we could apply functional dependencies - but we'd do it\n>> once for each AND-clause, and then combine the results to compute\n>> selectivity for the OR clause.\n>>\n>> But this means that if we apply functional dependencies directly to the\n>> original clause, it'll be inconsistent. Which seems a bit unfortunate.\n>>\n>> Or do I get this wrong?\n>\n>In my tests, I've got a table with two columns a and b, generated this way:\n> CREATE TABLE test (a INT, b INT)\n> AS SELECT i, i/10 FROM\n> generate_series(1, 100000) s(i),\n> generate_series(1, 5) x;\n>\n>With statistics defined on the a, b columns\n>\n>Here are the estimated selectivity results without any patch:\n>\n>SELECT * FROM test WHERE a = 1 AND b = 1 : 5\n>SELECT * FROM test WHERE a = 1 AND (b = 1 OR b = 2) : 1\n>SELECT * FROM test WHERE (a = 1 AND b = 1) OR (a = 1 AND b = 2) : 1\n>SELECT * FROM test WHERE a = 1 AND b IN (1, 2) : 1\n>\n>With the patch, the estimated rows of the last query goes back to 5, which is\n>more logical. The other ones don't change.\n>\n\nYes, I think you're right. I've been playing with this a bit more, and I\nthink you're right we can support it the way you propose.\n\nI'm still a bit annoyed by the inconsistency this might/does introduce.\nConsider for example these clauses:\n\na) ... WHERE a = 10 AND b IN (100, 200)\nb) ... WHERE a = 10 AND (b = 100 OR b = 200)\nc) ... WHERE (a = 10 AND b = 100) OR (a = 10 AND b = 200)\n\nAll three cases are logically equivalent and do return the same set of\nrows. But we estimate them differently, arriving at different estimates.\n\nCase (a) is the one you improve in your patch. 
Case (c) is actually not\npossible in practice, because we rewrite it as (b) during planning.\n\nBut (b) is estimated very differently, because we don't recognize the OR\nclause as supported by functional dependencies. On the one hand I'm sure\nit's not the first case where we already estimate equivalent clauses\ndifferently. On the other hand I wonder how difficult would it be to\nsupport this specific type of OR clause (with all expressions having the\nform \"Var = Const\" and all Vars referencing the same rel).\n\nI'm not going to block the patch because of this, of course. Similarly,\nit'd be nice to add support for ScalarArrayOpExpr to MCV stats, not just\nfunctional dependencies ...\n\n>> BTW the code added in the 0001 patch is the same as for is_opclause, so\n>> maybe we can simply do\n>>\n>> if (is_opclause(rinfo->clause) ||\n>> IsA(rinfo->clause, ScalarArrayOpExpr))\n>> {\n>> ...\n>> }\n>>\n>> instead of just duplicating the code.\n>\n>I would love doing that, but the ScalarArrayOpExpr and OpExpr are not binary\n>compatible for the members used here. In ScalarArrayOpExpr, on AMD64, args is\n>at offset 24 and opno at 4, while they are at 32 and 4 in OpExpr. I can work\n>around with this kind of code, but I don't like it much:\n>List *args;\n>Oid opno;\n>if (IsA(rinfo->clause, OpExpr))\n>{\n> /* If it's an opclause, we will check for Var = Const or Const = Var. */\n> OpExpr\t *expr = (OpExpr *) rinfo->clause;\n> args = expr->args;\n> opno = expr->opno;\n>}\n>else if (IsA(rinfo->clause, ScalarArrayOpExpr))\n>{\n> /* If it's a ScalarArrayOpExpr, check for Var IN Const. */\n> ScalarArrayOpExpr *expr = (ScalarArrayOpExpr *) rinfo->clause;\n> args = expr->args;\n> opno = expr->opno;\n>}\n>\n\nOh, right. I'm dumb, I missed this obvious detail. I blame belgian beer.\n\n>Or I can rewrite it in C++ to play with templates... 
:)\n>\n\nPlease don't ;-)\n\n>> We also need some at least some\n>> regression tests, testing functional dependencies with this clause type.\n>\n>Agreed\n>\n>> >The lack of support for @> operator, on the other hand, seems to be a\n>> >decision taken when writing the initial code, but I can not find any\n>> >mathematical nor technical reason for it. The current code restricts\n>> >dependency calculation to the = operator, obviously because inequality\n>> >operators are not going to work... but array contains is just several =\n>> >operators grouped, thus the same for the dependency calculation. The\n>> >second patch refactors the operator check in order to also include array\n>> >contains.\n>>\n>> No concrete opinion on this yet. I think my concerns are pretty similar\n>> to the IN clause, although I'm not sure what you mean by \"this could be\n>> considered as special case of IN\".\n>\n>I meant from a mathematical point of view.\n>\n\nCan you elaborate a bit? I still don't understand how it's just \"several\nequality operators grouped\".\n\nI think the challenge here is in applying the functional dependency\ncomputed for the whole array to individual elements. I'm not sure we can\ndo that.\n\nFor example, with a table like this:\n\n CREATE TABLE t (a int, b int[]);\n CREATE STATISTICS s (dependencies) ON a, b FROM t;\n\nLet's say the functional dependency is \"perfect\" i.e. 
has strength 1.0.\nBut that only tells us dependency for complete array values, we don't\nknow how much information we gain by knowledge of subset of the values.\n\nFor example, all the arrays may contain {1, 2, 3} as subset, and then\nsome \"unique\" element, like this:\n\n INSERT INTO t SELECT i/1000, ARRAY[1,2,3, 4 + i/100]\n FROM generate_series(1,1000000) s(i);\n\nand then do a query like this:\n\n select * from t where a = 10 and b @> ARRAY[1,2];\n\nWithout extended stats, it's estimated like this:\n\n QUERY PLAN\n---------------------------------------------------------------\n Seq Scan on t (cost=0.00..24346.00 rows=997 width=41)\n (actual time=1.391..140.261 rows=1000 loops=1)\n Filter: ((b @> '{1,2}'::integer[]) AND (a = 10))\n Rows Removed by Filter: 999000\n Planning Time: 0.052 ms\n Execution Time: 140.707 ms\n(5 rows)\n\nbut the moment you create functional stats, you get this:\n\n QUERY PLAN\n---------------------------------------------------------------\n Seq Scan on t (cost=0.00..24346.00 rows=1000000 width=41)\n (actual time=1.432..143.047 rows=1000 loops=1)\n Filter: ((b @> '{1,2}'::integer[]) AND (a = 10))\n Rows Removed by Filter: 999000\n Planning Time: 0.099 ms\n Execution Time: 143.527 ms\n(5 rows)\n\nThat doesn't seem very useful :-(\n\n>> >I tested the patches on current HEAD, but I can test and provide\n>> >back-ported versions of the patch for other versions if needed (this code\n>> >path hardly changed since its introduction in 10).\n>>\n>> I think the chance of this getting backpatched is zero, because it might\n>> easily break existing apps.\n>\n>I understand\n>\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 2 Feb 2020 19:41:34 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { 
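To connect this back to the estimates quoted earlier in the thread (5 rows when the dependency is applied to "a = 1 AND b = 1", 1 row under the independence assumption): for a dependency a => b with degree f, the dependencies code combines the clause selectivities as P(a,b) = P(a) * (f + (1 - f) * P(b)). The sketch below evaluates just that formula; it is a standalone illustration, not the backend implementation in dependencies.c:

```c
#include <assert.h>
#include <math.h>

/*
 * Selectivity of (clause_a AND clause_b) under a functional dependency
 * a => b with the given degree:
 *
 *     P(a,b) = P(a) * (degree + (1 - degree) * P(b))
 *
 * degree = 0 reduces to the independence assumption P(a) * P(b);
 * degree = 1 means b is fully implied by a, so the clause on b
 * adds no further restriction.
 */
double
dependency_selectivity(double degree, double sel_a, double sel_b)
{
    return sel_a * (degree + (1.0 - degree) * sel_b);
}
```

With the 500000-row test table from earlier in the thread (each a value occurring 5 times, each b value 50 times), degree 1.0 gives 500000 * 1e-5 = 5 rows, while degree 0 gives about 0.0005 rows, which the planner clamps to 1. That reproduces the 5 vs. 1 estimates discussed above.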
"msg_contents": "Hi Pierre,\n\nI've looked at this patch series, hoping to get it close to committable.\nHere is a somewhat improved version of the patch series, split into 5\npieces. The first 4 parts are about applying functional dependencies to\nScalarArrayOpExpr clauses. The last part is about doing the same thing\nfor MCV lists, so it seemed fine to post it here.\n\n0001 is the patch you posted back in October\n\n0002 simplifies the handling logic a bit, because ScalarArrayOpExpr can\nonly have form (Var op Const) but not (Const op Var).\n\n0003 fixes what I think is a bug - ScalarArrayOpExpr can represent three\ndifferent cases:\n\n * Var op ANY ()\n * Var IN () -- special case of ANY\n * Var op ALL ()\n\nI don't think functional dependencies can handle the ALL case, we need\nto reject it by checking the useOr flag.\n\n0004 adds queries to the stats_ext test suite, to test all of this (some\nof the cases illustrate the need for 0003, I think)\n\n0005 allows estimation of ScalarArrayOpExpr by MCV lists, including\nregression tests etc.\n\nWill you have time to look at this, particularly 0001-0004, but maybe\neven the 0005 part?\n\nAs for the second part of your patch (the one allowing estimation of\narray containment queries), I still think that's not something we can\nreasonably do without also building statistics on elements (which is\nwhat we have in pg_stats but not pg_stats_ext).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 5 Mar 2020 03:34:14 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "[ For the sake of the archives, some of the discussion on the other\nthread [1-3] should really have been on this thread. 
]\n\nOn Sun, 2 Feb 2020 at 18:41, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> I think the challenge here is in applying the functional dependency\n> computed for the whole array to individual elements. I'm not sure we can\n> do that.\n>\n> For example, with a table like this:\n>\n> CREATE TABLE t (a int, b int[]);\n> CREATE STATISTICS s (dependencies) ON a, b FROM t;\n>\n> Let's say the functional dependency is \"perfect\" i.e. has strength 1.0.\n> But that only tells us dependency for complete array values, we don't\n> know how much information we gain by knowledge of subset of the values.\n>\n\nThe more I think about this example, the more I think this is really\njust a special case of the more general problem of compatibility of\nclauses. Once you add support for IN (...) clauses, any query of the\nform\n\n SELECT ... WHERE (any clauses on col a) AND (any clauses on col b)\n\ncan be recast as\n\n SELECT ... WHERE a IN (...) AND b IN (...)\n\nso any counter-example with bad estimates produced with a query in the\nfirst form can also be written in the second form.\n\nI think we should really be thinking in terms of making a strong\nfunctional dependency (a => b) applicable generally to queries in the\nfirst form, which will work well if the clauses on b are compatible\nwith those on b, but not if they're incompatible. 
However, that's not\nso very different from the current state without extended stats, which\nassumes independence, and will return poor estimates if the\ncolumns/clauses aren't independent.\n\nSo I'd be tempted to apply a tidied up version of the patch from [3],\nand then lift all restrictions from dependency_is_compatible_clause(),\nother than the requirement that the clause refer to a single variable.\n\nRegards,\nDean\n\n[1] https://www.postgresql.org/message-id/CAEZATCXaNFZyOhR4XXAfkvj1tibRBEjje6ZbXwqWUB_tqbH%3Drw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/20200309181915.5lxhuw2qxoihfoqo%40development\n[3] https://www.postgresql.org/message-id/CAEZATCUic8PwhTnexC%2BUx-Z_e5MhWD-8jk%3DJ1MtnVW8TJD%2BVHw%40mail.gmail.com\n\n\n", "msg_date": "Thu, 12 Mar 2020 10:25:41 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Thu, Mar 12, 2020 at 10:25:41AM +0000, Dean Rasheed wrote:\n>[ For the sake of the archives, some of the discussion on the other\n>thread [1-3] should really have been on this thread. ]\n>\n>On Sun, 2 Feb 2020 at 18:41, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> I think the challenge here is in applying the functional dependency\n>> computed for the whole array to individual elements. I'm not sure we can\n>> do that.\n>>\n>> For example, with a table like this:\n>>\n>> CREATE TABLE t (a int, b int[]);\n>> CREATE STATISTICS s (dependencies) ON a, b FROM t;\n>>\n>> Let's say the functional dependency is \"perfect\" i.e. has strength 1.0.\n>> But that only tells us dependency for complete array values, we don't\n>> know how much information we gain by knowledge of subset of the values.\n>>\n>\n>The more I think about this example, the more I think this is really\n>just a special case of the more general problem of compatibility of\n>clauses. 
Once you add support for IN (...) clauses, any query of the\n>form\n>\n> SELECT ... WHERE (any clauses on col a) AND (any clauses on col b)\n>\n>can be recast as\n>\n> SELECT ... WHERE a IN (...) AND b IN (...)\n>\n>so any counter-example with bad estimates produced with a query in the\n>first form can also be written in the second form.\n>\n>I think we should really be thinking in terms of making a strong\n>functional dependency (a => b) applicable generally to queries in the\n>first form, which will work well if the clauses on b are compatible\n>with those on b, but not if they're incompatible. However, that's not\n>so very different from the current state without extended stats, which\n>assumes independence, and will return poor estimates if the\n>columns/clauses aren't independent.\n>\n\nI'm sorry, but I don't see how we could do this for arbitrary clauses. I\nthink we could do that for clauses that have equality semantics and\nreference column values as a whole. So I think it's possible to do this\nfor IN clauses (which is what the first part of the patch does), but I\ndon't think we can do it for the containment operator.\n\nI.e. we can do that for\n\n WHERE a IN (...) AND b IN (...)\n\nbut I don't see how we could do that for\n\n WHERE a @> (...) AND b @> (...)\n\nI don't think the dependency degree gives us any reliable insight into\nstatistical dependency of elements of the values.\n\nOr maybe we're just talking about different things? 
You seem to be\ntalking abotu IN clauses (which I think is doable), but my question was\nabout using functional dependencies to estimate array containment\nclauses (which I think is not really doable).\n\n>So I'd be tempted to apply a tidied up version of the patch from [3],\n>and then lift all restrictions from dependency_is_compatible_clause(),\n>other than the requirement that the clause refer to a single variable.\n>\n\nI haven't looked at the patch from [3] closely yet, but you're right\n\n P(A & B) <= Min(P(A), P(B))\n\nand the approach you proposed seems reasonable. I don't think how we\ncan just remove all the restriction on clause type - the restriction\nthat dependencies only handle equality-like clauses seems pretty much\nbaked into the dependencies.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 12 Mar 2020 18:30:47 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Thu, 12 Mar 2020 at 17:30, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> I'm sorry, but I don't see how we could do this for arbitrary clauses. I\n> think we could do that for clauses that have equality semantics and\n> reference column values as a whole. So I think it's possible to do this\n> for IN clauses (which is what the first part of the patch does), but I\n> don't think we can do it for the containment operator.\n>\n> I.e. we can do that for\n>\n> WHERE a IN (...) AND b IN (...)\n>\n\nHmm, the difficulty always comes back to the compatibility of the\nclauses though. It's easy to come up with artificial examples for\nwhich functional dependencies come up with bad estimates, even with\njust = and IN (...) operators. 
For example, given a perfect\ncorrelation like\n\n a | b\n -------\n 1 | 1\n 2 | 2\n 3 | 3\n : | :\n\nyou only need to write a query like \"WHERE a IN (1,3,5,7,9,...) AND b\nIN (2,4,6,8,...)\" to get a very bad estimate from functional\ndependencies.\n\nHowever, I don't think such artificial examples are that useful. I\nthink you have to think in terms of real data distributions together\nwith real queries expected to go with them. For example:\n\nUsing the OP's original example of a multi-tenant system, you might\nwell have a table with columns (product_type, tenant_id) and a\nfunctional dependency product_type => tenant_id. In that case, it\ncould well be very useful in optimising queries like \"WHERE\nproduct_type IN (X,Y,Z) AND tenant_id = 123\".\n\nBut this isn't necessarily limited to = and IN (...). For example,\nconsider a table with UK-based geographic data with columns (location\npoint, postcode text). Then there would be a strong functional\ndependency location => postcode (and possibly also the other way\nround, depending on how dense the points were). That dependency could\nbe used to estimate much more general queries like \"WHERE location <@\nsome_area AND postcode ~ '^CB.*'\", where there may be no useful stats\non location, but a histogram on postcode might give a reasonable\nestimate.\n\nThis also extends to inequalities. For example a table with columns\n(weight, category) might have a strong functional dependency weight =>\ncategory. Then a query like \"WHERE weight > 10 AND weight < 20 AND\ncategory = 'large'\" could get a decent estimate from a histogram on\nthe weight column, and then use the functional dependency to note that\nthat implies the category. Note that such an example would work with\nmy patch from the other thread, because it groups clauses by column,\nand uses clauselist_selectivity_simple() on them. 
So in this case, the\nclauses \"weight > 10 AND weight < 20\" would be estimated together, and\nwould be able to make use of the RangeQueryClause code.\n\nOf course, it's equally easy to come up with counter-example queries\nfor any of those examples, where using the functional dependency would\nproduce a poor estimate. Ultimately, it's up to the user to decide\nwhether or not to build functional dependency statistics, and that\ndecision needs to be based not just on the data distribution, but also\non the types of queries expected.\n\nGiven the timing though, perhaps it is best to limit this to IN (..)\nclauses for PG13, and we can consider other possibilities later.\n\nRegards,\nDean\n\n\n", "msg_date": "Fri, 13 Mar 2020 08:42:49 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Fri, Mar 13, 2020 at 08:42:49AM +0000, Dean Rasheed wrote:\n> On Thu, 12 Mar 2020 at 17:30, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> > I'm sorry, but I don't see how we could do this for arbitrary clauses. I\n> > think we could do that for clauses that have equality semantics and\n> > reference column values as a whole. So I think it's possible to do this\n> > for IN clauses (which is what the first part of the patch does), but I\n> > don't think we can do it for the containment operator.\n> >\n> > I.e. we can do that for\n> >\n> > WHERE a IN (...) AND b IN (...)\n> >\n> \n> Hmm, the difficulty always comes back to the compatibility of the\n> clauses though. It's easy to come up with artificial examples for\n> which functional dependencies come up with bad estimates, even with\n> just = and IN (...) operators. For example, given a perfect\n> correlation like\n> \n> a | b\n> -------\n> 1 | 1\n> 2 | 2\n> 3 | 3\n> : | :\n> \n> you only need to write a query like \"WHERE a IN (1,3,5,7,9,...) 
AND b\n> IN (2,4,6,8,...)\" to get a very bad estimate from functional\n> dependencies.\n\nWow, that is a very good example --- the arrays do not tie elements in\none array to elements in another array; good point. I get it now!\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Fri, 13 Mar 2020 10:19:21 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Fri, Mar 13, 2020 at 08:42:49AM +0000, Dean Rasheed wrote:\n>On Thu, 12 Mar 2020 at 17:30, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> I'm sorry, but I don't see how we could do this for arbitrary clauses. I\n>> think we could do that for clauses that have equality semantics and\n>> reference column values as a whole. So I think it's possible to do this\n>> for IN clauses (which is what the first part of the patch does), but I\n>> don't think we can do it for the containment operator.\n>>\n>> I.e. we can do that for\n>>\n>> WHERE a IN (...) AND b IN (...)\n>>\n>\n>Hmm, the difficulty always comes back to the compatibility of the\n>clauses though. It's easy to come up with artificial examples for\n>which functional dependencies come up with bad estimates, even with\n>just = and IN (...) operators. For example, given a perfect\n>correlation like\n>\n> a | b\n> -------\n> 1 | 1\n> 2 | 2\n> 3 | 3\n> : | :\n>\n>you only need to write a query like \"WHERE a IN (1,3,5,7,9,...) AND b\n>IN (2,4,6,8,...)\" to get a very bad estimate from functional\n>dependencies.\n>\n>However, I don't think such artificial examples are that useful. I\n>think you have to think in terms of real data distributions together\n>with real queries expected to go with them. 
For example:\n>\n>Using the OP's original example of a multi-tenant system, you might\n>well have a table with columns (product_type, tenant_id) and a\n>functional dependency product_type => tenant_id. In that case, it\n>could well be very useful in optimising queries like \"WHERE\n>product_type IN (X,Y,Z) AND tenant_id = 123\".\n>\n>But this isn't necessarily limited to = and IN (...). For example,\n>consider a table with UK-based geographic data with columns (location\n>point, postcode text). Then there would be a strong functional\n>dependency location => postcode (and possibly also the other way\n>round, depending on how dense the points were). That dependency could\n>be used to estimate much more general queries like \"WHERE location <@\n>some_area AND postcode ~ '^CB.*'\", where there may be no useful stats\n>on location, but a histogram on postcode might give a reasonable\n>estimate.\n>\n>This also extends to inequalities. For example a table with columns\n>(weight, category) might have a strong functional dependency weight =>\n>category. Then a query like \"WHERE weight > 10 AND weight < 20 AND\n>category = 'large'\" could get a decent estimate from a histogram on\n>the weight column, and then use the functional dependency to note that\n>that implies the category. Note that such an example would work with\n>my patch from the other thread, because it groups clauses by column,\n>and uses clauselist_selectivity_simple() on them. So in this case, the\n>clauses \"weight > 10 AND weight < 20\" would be estimated together, and\n>would be able to make use of the RangeQueryClause code.\n>\n>Of course, it's equally easy to come up with counter-example queries\n>for any of those examples, where using the functional dependency would\n>produce a poor estimate. 
Ultimately, it's up to the user to decide\n>whether or not to build functional dependency statistics, and that\n>decision needs to be based not just on the data distribution, but also\n>on the types of queries expected.\n>\n\nWell, yeah. I'm sure we can produce countless examples where applying\nthe functional dependencies to additional types of clauses helps a lot.\nI'm somewhat hesitant to just drop any restrictions, though, because\nit's equally simple to produce examples with poor results.\n\nThe main issue I have with just applying dependencies to arbitrary\nclauses is that it uses the \"degree\" computed for the value as a whole,\nand then applies it to estimate dependency between pieces of the values.\n\nThe IN() clause does not have this problem; the other cases like @> or\npattern matching do.\n\n>Given the timing though, perhaps it is best to limit this to IN (..)\n>clauses for PG13, and we can consider other possibilities later.\n>\n\nYeah, I was gonna propose the same thing. I'll get the IN bit committed\nshortly (both for dependencies and MCV), along with improved handling of\nOR clauses and some additional regression tests to increase the coverage.\n\nThen we can discuss these improvements in more detail for PG14.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 13 Mar 2020 17:09:44 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "Hi,\n\nI've pushed the first part of the patch, adding ScalarArrayOpExpr as a\nsupported clause for functional dependencies, and then also doing the\nsame for MCV lists.\n\nAs discussed, I'm not going to do anything about the array containment\nclauses for PG13; that needs more discussion.\n\nI have a bunch of additional improvements for extended stats, discussed\nin [1]. 
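(Aside, purely illustrative: Dean's counter-example upthread can be checked with a small standalone simulation — this is not the planner code, just the new combination formula applied to made-up, perfectly correlated data:)

```python
# Illustrative simulation of the upthread counter-example: a and b are
# perfectly correlated (b == a), so the dependency degree f is 1.0.
rows = [(i, i) for i in range(100)]          # a | b, with b == a
n = len(rows)

def true_sel(a_vals, b_vals):
    """Exact selectivity of: a IN (a_vals) AND b IN (b_vals)."""
    return sum(a in a_vals and b in b_vals for a, b in rows) / n

def dep_estimate(a_vals, b_vals, f=1.0):
    """Dependency-based estimate: f * Min(P(a), P(b)) + (1-f) * P(a) * P(b)."""
    p_a = sum(a in a_vals for a, _ in rows) / n
    p_b = sum(b in b_vals for _, b in rows) / n
    return f * min(p_a, p_b) + (1.0 - f) * p_a * p_b

odds = set(range(1, 100, 2))
evens = set(range(0, 100, 2))
print(true_sel(odds, odds), dep_estimate(odds, odds))    # 0.5 0.5  (good)
print(true_sel(odds, evens), dep_estimate(odds, evens))  # 0.0 0.5  (bad)
```

With aligned IN lists the dependency-based estimate matches the true selectivity exactly, while disjoint lists — a query contradicting the dependency — get 0.5 instead of 0.0, which is exactly the trade-off discussed above.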
I'll wait a bit for buildfarm and maybe some feedback before\npushing those.\n\n\n[1] https://www.postgresql.org/message-id/flat/20200113230008.g67iyk4cs3xbnjju@development\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 14 Mar 2020 16:21:59 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "Hi,\n\nI realized there's one more thing that probably needs discussing.\nEssentially, these two clause types are the same:\n\n a IN (1, 2, 3)\n\n (a = 1 OR a = 2 OR a = 3)\n\nbut with 8f321bd1 we only recognize the first one as compatible with\nfunctional dependencies. It was always the case that we estimated those\ntwo clauses a bit differently, but the differences were usually small.\nBut now that we recognize IN as compatible with dependencies, the\ndifference may be much larger, which bugs me a bit ...\n\nSo I wonder if we should recognize the special form of an OR clause,\nwith all Vars referencing the same attribute etc., and treat this as\nsupported by functional dependencies - the attached patch does that.\nFor MCV lists there's already no difference because OR clauses are\nsupported.\n\nThe question is whether we want to do this, and whether we should also\nteach the per-column estimates to recognize this special case of IN\nclause. 
That would allow producing exactly the same estimates even with\nfunctional dependencies - recognizing the OR clause as supported gets us\nonly half-way there, because we still use estimates for each clause (and\nthose will be slightly different).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 14 Mar 2020 19:45:35 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Sat, 14 Mar 2020 at 18:45, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> I realized there's one more thing that probably needs discussing.\n> Essentially, these two clause types are the same:\n>\n> a IN (1, 2, 3)\n>\n> (a = 1 OR a = 2 OR a = 3)\n>\n> but with 8f321bd1 we only recognize the first one as compatible with\n> functional dependencies. It was always the case that we estimated those\n> two clauses a bit differently, but the differences were usually small.\n> But now that we recognize IN as compatible with dependencies, the\n> difference may be much larger, which bugs me a bit ...\n>\n> So I wonder if we should recognize the special form of an OR clause,\n> with all Vars referencing to the same attribute etc. and treat this as\n> supported by functional dependencies - the attached patch does that.\n> MCV lists there's already no difference because OR clauses are\n> supported.\n>\n\nMakes sense, and the patch looks straightforward enough.\n\n> The question is whether we want to do this, and whether we should also\n> teach the per-column estimates to recognize this special case of IN\n> clause.\n\nI'm not convinced about that second part though. I'd say that\nrecognising the OR clause for functional dependencies is sufficient to\nprevent the large differences in estimates relative to the equivalent\nIN clauses. 
The small differences between the way that OR and IN\nclauses are handled have always been there, and I think that changing\nthat is out of scope for this work.\n\nThe other thing that I'm still concerned about is the possibility of\nreturning estimates with P(a,b) > P(a) or P(b). I think that such a\nthing becomes much more likely with the new types of clause supported\nhere, because they now allow multiple values from each column, where\nbefore we only allowed one. I took another look at the patch I posted\non the other thread, and I've convinced myself that it's correct.\nAttached is an updated version, with some cosmetic tidying up and now\nwith some additional regression tests.\n\nRegards,\nDean", "msg_date": "Tue, 17 Mar 2020 12:42:52 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Tue, Mar 17, 2020 at 12:42:52PM +0000, Dean Rasheed wrote:\n>On Sat, 14 Mar 2020 at 18:45, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> I realized there's one more thing that probably needs discussing.\n>> Essentially, these two clause types are the same:\n>>\n>> a IN (1, 2, 3)\n>>\n>> (a = 1 OR a = 2 OR a = 3)\n>>\n>> but with 8f321bd1 we only recognize the first one as compatible with\n>> functional dependencies. It was always the case that we estimated those\n>> two clauses a bit differently, but the differences were usually small.\n>> But now that we recognize IN as compatible with dependencies, the\n>> difference may be much larger, which bugs me a bit ...\n>>\n>> So I wonder if we should recognize the special form of an OR clause,\n>> with all Vars referencing to the same attribute etc. 
and treat this as\n>> supported by functional dependencies - the attached patch does that.\n>> MCV lists there's already no difference because OR clauses are\n>> supported.\n>>\n>\n>Makes sense, and the patch looks straightforward enough.\n>\n>> The question is whether we want to do this, and whether we should also\n>> teach the per-column estimates to recognize this special case of IN\n>> clause.\n>\n>I'm not convinced about that second part though. I'd say that\n>recognising the OR clause for functional dependencies is sufficient to\n>prevent the large differences in estimates relative to the equivalent\n>IN clauses. The small differences between the way that OR and IN\n>clauses are handled have always been there, and I think that changing\n>that is out of scope for this work.\n>\n\nNot sure. I think the inconsistency between plan and extended stats may\nbe a bit surprising, but I agree that issue may be negligible.\n\n>The other thing that I'm still concerned about is the possibility of\n>returning estimates with P(a,b) > P(a) or P(b). I think that such a\n>thing becomes much more likely with the new types of clause supported\n>here, because they now allow multiple values from each column, where\n>before we only allowed one. I took another look at the patch I posted\n>on the other thread, and I've convinced myself that it's correct.\n>Attached is an updated version, with some cosmetic tidying up and now\n>with some additional regression tests.\n>\n\nYeah, I agree that's something we need to fix. 
Do you plan to push the\nfix, or do you want me to do it?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 17 Mar 2020 16:37:06 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Tue, 17 Mar 2020 at 15:37, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Tue, Mar 17, 2020 at 12:42:52PM +0000, Dean Rasheed wrote:\n>\n> >The other thing that I'm still concerned about is the possibility of\n> >returning estimates with P(a,b) > P(a) or P(b). I think that such a\n> >thing becomes much more likely with the new types of clause supported\n> >here, because they now allow multiple values from each column, where\n> >before we only allowed one. I took another look at the patch I posted\n> >on the other thread, and I've convinced myself that it's correct.\n> >Attached is an updated version, with some cosmetic tidying up and now\n> >with some additional regression tests.\n>\n> Yeah, I agree that's something we need to fix. Do you plan to push the\n> fix, or do you want me to do it?\n>\n\nI can push it. Have you had a chance to review it?\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 17 Mar 2020 16:14:26 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Tue, Mar 17, 2020 at 04:14:26PM +0000, Dean Rasheed wrote:\n>On Tue, 17 Mar 2020 at 15:37, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Tue, Mar 17, 2020 at 12:42:52PM +0000, Dean Rasheed wrote:\n>>\n>> >The other thing that I'm still concerned about is the possibility of\n>> >returning estimates with P(a,b) > P(a) or P(b). 
I think that such a\n>> >thing becomes much more likely with the new types of clause supported\n>> >here, because they now allow multiple values from each column, where\n>> >before we only allowed one. I took another look at the patch I posted\n>> >on the other thread, and I've convinced myself that it's correct.\n>> >Attached is an updated version, with some cosmetic tidying up and now\n>> >with some additional regression tests.\n>>\n>> Yeah, I agree that's something we need to fix. Do you plan to push the\n>> fix, or do you want me to do it?\n>>\n>\n>I can push it. Have you had a chance to review it?\n>\n\nNot yet, I'll take a look today.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 17 Mar 2020 18:05:17 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Tue, Mar 17, 2020 at 06:05:17PM +0100, Tomas Vondra wrote:\n>On Tue, Mar 17, 2020 at 04:14:26PM +0000, Dean Rasheed wrote:\n>>On Tue, 17 Mar 2020 at 15:37, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>>\n>>>On Tue, Mar 17, 2020 at 12:42:52PM +0000, Dean Rasheed wrote:\n>>>\n>>>>The other thing that I'm still concerned about is the possibility of\n>>>>returning estimates with P(a,b) > P(a) or P(b). I think that such a\n>>>>thing becomes much more likely with the new types of clause supported\n>>>>here, because they now allow multiple values from each column, where\n>>>>before we only allowed one. I took another look at the patch I posted\n>>>>on the other thread, and I've convinced myself that it's correct.\n>>>>Attached is an updated version, with some cosmetic tidying up and now\n>>>>with some additional regression tests.\n>>>\n>>>Yeah, I agree that's something we need to fix. 
Do you plan to push the\n>>>fix, or do you want me to do it?\n>>>\n>>\n>>I can push it. Have you had a chance to review it?\n>>\n>\n>Not yet, I'll take a look today.\n>\n\nOK, I took a look. I think from the correctness POV the patch is OK, but\nI think the dependencies_clauselist_selectivity() function now got a bit\ntoo complex. I've been able to parse it now, but I'm sure I'll have\ntrouble in the future :-(\n\nCan we refactor / split it somehow and move bits of the logic to smaller\nfunctions, or something like that?\n\nAnother thing I'd like to suggest is keeping the \"old\" formula, and\ninstead of just replacing it with\n\n P(a,b) = f * Min(P(a), P(b)) + (1-f) * P(a) * P(b)\n\nbut explaining how the old formula may produce nonsensical selectivity,\nand how the new formula addresses that issue.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 18 Mar 2020 01:29:46 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Tue, Mar 17, 2020 at 04:37:06PM +0100, Tomas Vondra wrote:\n>On Tue, Mar 17, 2020 at 12:42:52PM +0000, Dean Rasheed wrote:\n>>On Sat, 14 Mar 2020 at 18:45, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>>\n>>>I realized there's one more thing that probably needs discussing.\n>>>Essentially, these two clause types are the same:\n>>>\n>>> a IN (1, 2, 3)\n>>>\n>>> (a = 1 OR a = 2 OR a = 3)\n>>>\n>>>but with 8f321bd1 we only recognize the first one as compatible with\n>>>functional dependencies. 
It was always the case that we estimated those\n>>>two clauses a bit differently, but the differences were usually small.\n>>>But now that we recognize IN as compatible with dependencies, the\n>>>difference may be much larger, which bugs me a bit ...\n>>>\n>>>So I wonder if we should recognize the special form of an OR clause,\n>>>with all Vars referencing to the same attribute etc. and treat this as\n>>>supported by functional dependencies - the attached patch does that.\n>>>MCV lists there's already no difference because OR clauses are\n>>>supported.\n>>>\n>>\n>>Makes sense, and the patch looks straightforward enough.\n>>\n>>>The question is whether we want to do this, and whether we should also\n>>>teach the per-column estimates to recognize this special case of IN\n>>>clause.\n>>\n>>I'm not convinced about that second part though. I'd say that\n>>recognising the OR clause for functional dependencies is sufficient to\n>>prevent the large differences in estimates relative to the equivalent\n>>IN clauses. The small differences between the way that OR and IN\n>>clauses are handled have always been there, and I think that changing\n>>that is out of scope for this work.\n>>\n>\n>Not sure. I think the inconsistency between plan and extended stats may\n>be a bit surprising, but I agree that issue may be negligible.\n>\n\nOK, I've pushed the change recognizing the special case of OR clauses as\nsupported by functional dependencies. 
I've left the estimation of the\nclause itself as is; we can address that in the future if needed.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 18 Mar 2020 16:55:43 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Wed, 18 Mar 2020 at 00:29, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> OK, I took a look. I think from the correctness POV the patch is OK, but\n> I think the dependencies_clauselist_selectivity() function now got a bit\n> too complex. I've been able to parse it now, but I'm sure I'll have\n> trouble in the future :-(\n>\n> Can we refactor / split it somehow and move bits of the logic to smaller\n> functions, or something like that?\n>\n\nYeah, it has gotten a bit long. It's somewhat tricky splitting it up,\nbecause of the number of shared variables used throughout the\nfunction, but here's an updated patch splitting it into what seemed\nlike the 2 most logical pieces. The first piece (still in\ndependencies_clauselist_selectivity()) works out what dependencies\ncan/should be applied, and the second piece in a new function does the\nactual work of applying the list of functional dependencies to the\nclause list.\n\nI think that has made it easier to follow, and it has also reduced the\ncomplexity of the final \"no applicable stats\" branch.\n\n> Another thing I'd like to suggest is keeping the \"old\" formula, and\n> instead of just replacing it with\n>\n> P(a,b) = f * Min(P(a), P(b)) + (1-f) * P(a) * P(b)\n>\n> but explaining how the old formula may produce nonsensical selectivity,\n> and how the new formula addresses that issue.\n>\n\nI think this is purely a comment issue? 
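(A quick numeric sketch of why this matters — assuming the old formula was P(a,b) = P(a) * (f + (1 - f) * P(b)), which is my reading of the pre-patch code, so treat the exact form as an assumption:)

```python
# Illustrative only: compare the assumed "old" dependency formula with
# the new capped formula from the patch.
def old_estimate(f, p_a, p_b):
    # Assumed pre-patch form: P(a,b) = P(a) * (f + (1 - f) * P(b))
    return p_a * (f + (1.0 - f) * p_b)

def new_estimate(f, p_a, p_b):
    # New form: P(a,b) = f * Min(P(a), P(b)) + (1 - f) * P(a) * P(b)
    return f * min(p_a, p_b) + (1.0 - f) * p_a * p_b

f, p_a, p_b = 1.0, 0.5, 0.1   # perfect dependency, but P(a) >> P(b)
print(old_estimate(f, p_a, p_b))  # 0.5 -- nonsensical: exceeds P(b) = 0.1
print(new_estimate(f, p_a, p_b))  # 0.1 -- capped at Min(P(a), P(b))
```

With f = 1 and P(a) = 0.5, P(b) = 0.1, the old form returns 0.5 — five times P(b) — while the new one returns 0.1; and since P(a) * P(b) <= Min(P(a), P(b)), the new estimate can never exceed either marginal.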
I've added some more extensive\ncomments attempting to justify the formulae.\n\nRegards,\nDean", "msg_date": "Thu, 19 Mar 2020 19:53:39 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Thu, Mar 19, 2020 at 07:53:39PM +0000, Dean Rasheed wrote:\n>On Wed, 18 Mar 2020 at 00:29, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> OK, I took a look. I think from the correctness POV the patch is OK, but\n>> I think the dependencies_clauselist_selectivity() function now got a bit\n>> too complex. I've been able to parse it now, but I'm sure I'll have\n>> trouble in the future :-(\n>>\n>> Can we refactor / split it somehow and move bits of the logic to smaller\n>> functions, or something like that?\n>>\n>\n>Yeah, it has gotten a bit long. It's somewhat tricky splitting it up,\n>because of the number of shared variables used throughout the\n>function, but here's an updated patch splitting it into what seemed\n>like the 2 most logical pieces. The first piece (still in\n>dependencies_clauselist_selectivity()) works out what dependencies\n>can/should be applied, and the second piece in a new function does the\n>actual work of applying the list of functional dependencies to the\n>clause list.\n>\n>I think that has made it easier to follow, and it has also reduced the\n>complexity of the final \"no applicable stats\" branch.\n>\n\nSeems OK to me.\n\nI'd perhaps name deps_clauselist_selectivity differently, it's a bit too\nsimilar to dependencies_clauselist_selectivity. Perhaps something like\nclauselist_apply_dependencies? 
But that's a minor detail.\n\n>> Another thing I'd like to suggest is keeping the \"old\" formula, and\n>> instead of just replacing it with\n>>\n>> P(a,b) = f * Min(P(a), P(b)) + (1-f) * P(a) * P(b)\n>>\n>> but explaining how the old formula may produce nonsensical selectivity,\n>> and how the new formula addresses that issue.\n>>\n>\n>I think this is purely a comment issue? I've added some more extensive\n>comments attempting to justify the formulae.\n>\n\nYes, it was purely a comment issue. Seems fine now.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 25 Mar 2020 01:28:01 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Wed, 25 Mar 2020 at 00:28, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> Seems OK to me.\n>\n> I'd perhaps name deps_clauselist_selectivity differently, it's a bit too\n> similar to dependencies_clauselist_selectivity. Perhaps something like\n> clauselist_apply_dependencies? But that's a minor detail.\n>\n\nOK, I've pushed that with your recommendation for that function name.\n\nRegards,\nDean\n\n\n", "msg_date": "Sat, 28 Mar 2020 13:18:17 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Sat, 28 Mar 2020 at 13:18, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> OK, I've pushed that with your recommendation for that function name.\n>\n\nDoes this now complete everything that you wanted to do for functional\ndependency stats for PG13? Re-reading the thread, I couldn't see\nanything else that needed looking at. 
If that's the case, the CF entry\ncan be closed.\n\nRegards,\nDean\n\n\n", "msg_date": "Sun, 29 Mar 2020 10:22:25 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "On Sun, Mar 29, 2020 at 10:22:25AM +0100, Dean Rasheed wrote:\n>On Sat, 28 Mar 2020 at 13:18, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>>\n>> OK, I've pushed that with your recommendation for that function name.\n>>\n>\n>Does this now complete everything that you wanted to do for functional\n>dependency stats for PG13? Re-reading the thread, I couldn't see\n>anything else that needed looking at. If that's the case, the CF entry\n>can be closed.\n>\n\nYes. There were two improvements proposed, we've committed one of them\n(the IN/ANY operator handling) and the other (containment) needs more\ndiscussion. So I think it's OK to mark this either as committed or maybe\nreturned with feedback.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 29 Mar 2020 17:27:36 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" }, { "msg_contents": "> On 29 Mar 2020, at 17:27, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> On Sun, Mar 29, 2020 at 10:22:25AM +0100, Dean Rasheed wrote:\n>> On Sat, 28 Mar 2020 at 13:18, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n\n>>> OK, I've pushed that with your recommendation for that function name.\n>> \n>> Does this now complete everything that you wanted to do for functional\n>> dependency stats for PG13? Re-reading the thread, I couldn't see\n>> anything else that needed looking at. If that's the case, the CF entry\n>> can be closed.\n> \n> Yes. 
There were two improvements proposed, we've committed one of them\n> (the IN/ANY operator handling) and the other (containment) needs more\n> discussion. So I think it's OK to mark this either as committed or maybe\n> returned with feedback.\n\nSince there hasn't been more discussion on the second item I've closed this\nitem as committed. The containment part can be opened as a new CF entry.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 1 Jul 2020 10:47:46 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: PATCH: add support for IN and @> in functional-dependency\n statistics use" } ]
[ { "msg_contents": "Prevent running pg_basebackup as root\n\nSimilarly to pg_upgrade, pg_ctl and initdb, a root user is able to use\n--version and --help, but cannot execute the actual operation to avoid\nthe creation of files with permissions incompatible with the\npostmaster.\n\nThis is a behavior change, so not back-patching is done.\n\nAuthor: Ian Barwick\nDiscussion: https://postgr.es/m/CABvVfJVqOdD2neLkYdygdOHvbWz_5K_iWiqY+psMfA=FeAa3qQ@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/7bae0ad9fcb76b28410571dc71edfdc3175c4a02\n\nModified Files\n--------------\nsrc/bin/pg_basebackup/pg_basebackup.c | 16 ++++++++++++++++\n1 file changed, 16 insertions(+)", "msg_date": "Sat, 01 Feb 2020 09:33:20 +0000", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "pgsql: Prevent running pg_basebackup as root" }, { "msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> Prevent running pg_basebackup as root\n> \n> Similarly to pg_upgrade, pg_ctl and initdb, a root user is able to use\n> --version and --help, but cannot execute the actual operation to avoid\n> the creation of files with permissions incompatible with the\n> postmaster.\n> \n> This is a behavior change, so not back-patching is done.\n\nWhile it's maybe not ideal, surely there isn't an actual issue if\npg_basebackup is run as root with -Ft, is there..?\n\nThere's possibly something to be said about the fact that we hard-code\nthe username/groupname in the tar file too (interestingly, we actually\ndo pass through the uid/gid..)- perhaps we should actually be passing\nthe username/groupname through, but if we did do something like that\nthen having pg_basebackup running as root would be necessary if we want\nto preserve the file ownership.\n\nIn any case, sorry for not responding on this sooner (was traveling for\nFOSDEM and such), but I'm not really convinced this is something we want\nand it 
certainly breaks at least somewhat reasonable use-cases when you\nthink about using pg_basebackup with -Ft. In that vein, this change is\nkinda like saying \"you can't run pg_dump as root\"..\n\nThanks,\n\nStephen", "msg_date": "Wed, 5 Feb 2020 12:22:59 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prevent running pg_basebackup as root" }, { "msg_contents": "On Wed, Feb 05, 2020 at 12:22:59PM -0500, Stephen Frost wrote:\n> In any case, sorry for not responding on this sooner (was traveling for\n> FOSDEM and such), but I'm not really convinced this is something we want\n> and it certainly breaks at least somewhat reasonable use-cases when you\n> think about using pg_basebackup with -Ft. In that vein, this change is\n> kinda like saying \"you can't run pg_dump as root\"..\n\nIt seems to me that this is entirely different than the case of\npg_dump, as it is possible to restore a dump even as root, something\nthat cannot happen with physical backups without an extra chmod -R.\nYou have a point with -Ft as untaring the tarballs from a base backup\ntaken with pg_basebackup -Ft used by root generates files owned by the\noriginal user. -Fp enforces the files to be owned by the user taking\nthe backup, which makes the most sense, so for consistency with the\nother tools preventing root to run pg_basebackup makes sense to me\nwith -Fp. Any thoughts from others to restrict the tool with -Fp but \nnot with -Ft? 
The argument of consistency mattered for me first for\nboth formats.\n--\nMichael", "msg_date": "Thu, 6 Feb 2020 16:04:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pgsql: Prevent running pg_basebackup as root" }, { "msg_contents": "On Thu, Feb 6, 2020 at 8:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Feb 05, 2020 at 12:22:59PM -0500, Stephen Frost wrote:\n> > In any case, sorry for not responding on this sooner (was traveling for\n> > FOSDEM and such), but I'm not really convinced this is something we want\n> > and it certainly breaks at least somewhat reasonable use-cases when you\n> > think about using pg_basebackup with -Ft. In that vein, this change is\n> > kinda like saying \"you can't run pg_dump as root\"..\n>\n> It seems to me that this is entirely different than the case of\n> pg_dump, as it is possible to restore a dump even as root, something\n> that cannot happen with physical backups without an extra chmod -R.\n\nI don't see how that's relevant? And yes, you can restore physical\nbackups this way too, if the userids match. (though see Stephens\ncomment about the username, but that's independent of this issue)\n\nAnd pg_basebackup is about taking backups, not restores :)\n\n\n> You have a point with -Ft as untaring the tarballs from a base backup\n> taken with pg_basebackup -Ft used by root generates files owned by the\n> original user. -Fp enforces the files to be owned by the user taking\n> the backup, which makes the most sense, so for consistency with the\n> other tools preventing root to run pg_basebackup makes sense to me\n> with -Fp. Any thoughts from others to restrict the tool with -Fp but\n> not with -Ft? The argument of consistency mattered for me first for\n> both formats.\n\nI think having -Fp and -Ft consistent is a lot more important than\nbeing consistent with other tools that aren't really that closely\nrelated. 
And it's already inconsistent against probably the most\nrelated command, being pg_dump.\n\nSo *very* strong objection to makeing -Fp and -Ft behave differently\nin this regard.\n\n\nI agree with Stephen that this seems to be misguided, and my vote is\nto revert. I would've also objected had you given more than 2 days\nwarning before committing, and it happened to be during FOSDEM. I saw\nthe original email which clearly said it'd be in the March commitfest,\nso I figured I'd have time...\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Thu, 6 Feb 2020 13:02:07 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prevent running pg_basebackup as root" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Thu, Feb 6, 2020 at 8:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Feb 05, 2020 at 12:22:59PM -0500, Stephen Frost wrote:\n> > > In any case, sorry for not responding on this sooner (was traveling for\n> > > FOSDEM and such), but I'm not really convinced this is something we want\n> > > and it certainly breaks at least somewhat reasonable use-cases when you\n> > > think about using pg_basebackup with -Ft. In that vein, this change is\n> > > kinda like saying \"you can't run pg_dump as root\"..\n> >\n> > It seems to me that this is entirely different than the case of\n> > pg_dump, as it is possible to restore a dump even as root, something\n> > that cannot happen with physical backups without an extra chmod -R.\n> \n> I don't see how that's relevant? And yes, you can restore physical\n> backups this way too, if the userids match. 
(though see Stephens\n> comment about the username, but that's independent of this issue)\n\nRight.\n\n> And pg_basebackup is about taking backups, not restores :)\n\nYes- one of the downsides of pg_basebackup is that it doesn't really do\nmuch for you when it comes to restores, in fact.. Something that will\nhave to change if it starts doing incrementals of some kind. That's\nmostly orthogonal to this discussion though.\n\n> > You have a point with -Ft as untaring the tarballs from a base backup\n> > taken with pg_basebackup -Ft used by root generates files owned by the\n> > original user. -Fp enforces the files to be owned by the user taking\n> > the backup, which makes the most sense, so for consistency with the\n> > other tools preventing root to run pg_basebackup makes sense to me\n> > with -Fp. Any thoughts from others to restrict the tool with -Fp but\n> > not with -Ft? The argument of consistency mattered for me first for\n> > both formats.\n\nErm- no, with -Ft + untar-as-root they get owned by \"postgres\", NOT the\noriginal user. That's what I was pointing out up-thread (since it seems\nto be confusing- and clearly not always well understood..) and it's an\nissue imv, but it's independent of this, so probably deserves its own\nthread if someone wants to do something about that.\n\nHaving -Fp run-as-root result in the files being owned by root isn't\ngood and I agree that's unfortunate and it would be good to fix it, but\npreventing pg_basebackup from ever being run as root isn't a good\nsolution to that issue.\n\n> I think having -Fp and -Ft consistent is a lot more important than\n> being consistent with other tools that aren't really that closely\n> related. 
And it's already inconsistent against probably the most\n> related command, being pg_dump.\n\nYeah, I agree on consistency here being important too, and that pg_dump\nis a closer command to be thinking about than initdb and friends.\n\n> So *very* strong objection to makeing -Fp and -Ft behave differently\n> in this regard.\n\nWhat we aren't consistent about today is what happens when you do:\n\n- Backup as root with -Ft\n- Untar results as root\n\n- Backup as root with -Fp\n\nand that really seems less than ideal, but I don't think the answer is\n\"don't allow backing up as root\".\n\n> I agree with Stephen that this seems to be misguided, and my vote is\n> to revert. I would've also objected had you given more than 2 days\n> warning before committing, and it happened to be during FOSDEM. I saw\n> the original email which clearly said it'd be in the March commitfest,\n> so I figured I'd have time...\n\nYeah, I also agree with reverting this change. Even if we can come to\nsomething we all agree on, I'm pretty confident it's not going to be\nexactly this patch, so let's back it out for now and discuss it further\non the -hackers thread.\n\nThanks,\n\nStephen", "msg_date": "Thu, 6 Feb 2020 09:44:07 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prevent running pg_basebackup as root" }, { "msg_contents": "On Thu, Feb 06, 2020 at 09:44:07AM -0500, Stephen Frost wrote:\n> * Magnus Hagander (magnus@hagander.net) wrote:\n>> On Thu, Feb 6, 2020 at 8:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>> You have a point with -Ft as untaring the tarballs from a base backup\n>>> taken with pg_basebackup -Ft used by root generates files owned by the\n>>> original user. -Fp enforces the files to be owned by the user taking\n>>> the backup, which makes the most sense, so for consistency with the\n>>> other tools preventing root to run pg_basebackup makes sense to me\n>>> with -Fp. 
Any thoughts from others to restrict the tool with -Fp but\n>>> not with -Ft? The argument of consistency mattered for me first for\n>>> both formats.\n> \n> Erm- no, with -Ft + untar-as-root they get owned by \"postgres\", NOT the\n> original user. That's what I was pointing out up-thread (since it seems\n> to be confusing- and clearly not always well understood..) and it's an\n> issue imv, but it's independent of this, so probably deserves its own\n> thread if someone wants to do something about that.\n\nHmm. I don't think that you are completely correct here either as it\ndepends on if the OS user \"postgres\" exists or not. As mentioned in\nhttps://www.gnu.org/software/tar/manual/tar.html#SEC138, if the user\nname cannot be found in /etc/passwd, then tar switches to the user ID\n(if one does not have any user or group named \"postgres\", then the\nfiles are untar'ed with the same user and group as the one running the\ncluster and that's to the UID and GID set by tarCreateHeader, as you\nsay). I think that it is a problem to not have more documentation on\nthe matter (now there is just a small mention in the base backup\nrestore about being sure to have the proper permissions). And it may\nbe interesting to add into pg_basebackup options to enforce the user\nand/or group similarly to what tar does with --owner and --group?\n\n>> I agree with Stephen that this seems to be misguided, and my vote is\n>> to revert. I would've also objected had you given more than 2 days\n>> warning before committing, and it happened to be during FOSDEM. I saw\n>> the original email which clearly said it'd be in the March commitfest,\n>> so I figured I'd have time...\n> \n> Yeah, I also agree with reverting this change. 
Even if we can come to\n> something we all agree on, I'm pretty confident it's not going to be\n> exactly this patch, so let's back it out for now and discuss it further\n> on the -hackers thread.\n\nOK, done that part as of dcddc3f.\n--\nMichael", "msg_date": "Fri, 7 Feb 2020 10:55:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pgsql: Prevent running pg_basebackup as root" }, { "msg_contents": "Hi,\n\nOn 2020-02-06 13:02:07 +0100, Magnus Hagander wrote:\n> I agree with Stephen that this seems to be misguided, and my vote is\n> to revert.\n\n+1. I honestly don't think we should increase the number of \"root\ndisallowed\" tools unless actually necessary.\n\nMaybe that's looking too far into the future, but I'd like to see\nimprovements to pg_basebackup that make it integrate with root requiring\ntooling, to do more efficient base backups. E.g. having pg_basebackup\nhandle start/stop backup and WAL handling, but do the actual backup of\nthe data via a snapshot mechanism (yes, one needs start/stop backup in\nthe general case, for multiple FSs), would be nice.\n\nBtw, I think it's good form in a discussion like this to CC the original\nauthor. I'll also add a reference to this discussion from the -hackers\nthread.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 Feb 2020 18:07:02 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prevent running pg_basebackup as root" }, { "msg_contents": "On 2020/02/07 11:07, Andres Freund wrote:\n> Hi,\n> \n> On 2020-02-06 13:02:07 +0100, Magnus Hagander wrote:\n>> I agree with Stephen that this seems to be misguided, and my vote is\n>> to revert.\n> \n> +1. 
I honestly don't think we should increase the number of \"root\n> disallowed\" tools unless actually necessary.\n> \n> Maybe that's looking too far into the future, but I'd like to see\n> improvements to pg_basebackup that make it integrate with root requiring\n> tooling, to do more efficient base backups. E.g. having pg_basebackup\n> handle start/stop backup and WAL handling, but do the actual backup of\n> the data via a snapshot mechanism (yes, one needs start/stop backup in\n> the general case, for multiple FSs), would be nice.\n> \n> Btw, I think it's good form in a discussion like this to CC the original\n> author. I'll also add a reference to this discussion from the -hackers\n> thread.\n\nThanks for the notification.\n\nPoints raised upthread seem reasonable enough; to be honest I was expecting\nthis patch to hang around a bit longer anway, because (as so often) there's\nsome aspect which wouldn't have occurred to me.\n\n\nRegards\n\nIan Barwick\n\n-- \nIan Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Fri, 7 Feb 2020 11:23:56 +0900", "msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prevent running pg_basebackup as root" }, { "msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Thu, Feb 06, 2020 at 09:44:07AM -0500, Stephen Frost wrote:\n> > Erm- no, with -Ft + untar-as-root they get owned by \"postgres\", NOT the\n> > original user. That's what I was pointing out up-thread (since it seems\n> > to be confusing- and clearly not always well understood..) and it's an\n> > issue imv, but it's independent of this, so probably deserves its own\n> > thread if someone wants to do something about that.\n> \n> Hmm. I don't think that you are completely correct here either as it\n> depends on if the OS user \"postgres\" exists or not. 
\n\nYes, I do know what happens if the named user doesn't exist, but in the\ngeneral case, where the 'postgres' user does exist, they'll get owned by\n'postgres'.\n\n> As mentioned in\n> https://www.gnu.org/software/tar/manual/tar.html#SEC138, if the user\n> name cannot be found in /etc/passwd, then tar switches to the user ID\n> (if one does not have any user or group named \"postgres\", then the\n> files are untar'ed with the same user and group as the one running the\n> cluster and that's to the UID and GID set by tarCreateHeader, as you\n> say). I think that it is a problem to not have more documentation on\n> the matter (now there is just a small mention in the base backup\n> restore about being sure to have the proper permissions). And it may\n> be interesting to add into pg_basebackup options to enforce the user\n> and/or group similarly to what tar does with --owner and --group?\n\nYes, I agree with improving the documentation and with adding such\noptions.\n\n> >> I agree with Stephen that this seems to be misguided, and my vote is\n> >> to revert. I would've also objected had you given more than 2 days\n> >> warning before committing, and it happened to be during FOSDEM. I saw\n> >> the original email which clearly said it'd be in the March commitfest,\n> >> so I figured I'd have time...\n> > \n> > Yeah, I also agree with reverting this change. 
Even if we can come to\n> > something we all agree on, I'm pretty confident it's not going to be\n> > exactly this patch, so let's back it out for now and discuss it further\n> > on the -hackers thread.\n> \n> OK, done that part as of dcddc3f.\n\nGreat, thanks!\n\nStephen", "msg_date": "Fri, 7 Feb 2020 14:22:09 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pgsql: Prevent running pg_basebackup as root" }, { "msg_contents": "Greetings,\n\n(Moving to -hackers, changing thread title)\n\n* Andres Freund (andres@anarazel.de) wrote:\n> Maybe that's looking too far into the future, but I'd like to see\n> improvements to pg_basebackup that make it integrate with root requiring\n> tooling, to do more efficient base backups. E.g. having pg_basebackup\n> handle start/stop backup and WAL handling, but do the actual backup of\n> the data via a snapshot mechanism (yes, one needs start/stop backup in\n> the general case, for multiple FSs), would be nice.\n\nThe challenge with this approach is that you need to drop the 'backup\nlabel' file into place as part of this operation, either by putting it\ninto the snapshot after it's been taken, or by putting it into the data\ndirectory at restore time. 
Of course, you have to keep track of WAL\nanyway from the time the snapshots are taken until the restore is done,\nso it's certainly possible, as with all of this, it's just somewhat\ncomplicated.\n\nCertainly open to ideas on how to improve this.\n\nThanks,\n\nStephen", "msg_date": "Fri, 7 Feb 2020 14:56:47 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "pg_basebackup and snapshots" }, { "msg_contents": "Hi,\n\nOn 2020-02-07 14:56:47 -0500, Stephen Frost wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n> > Maybe that's looking too far into the future, but I'd like to see\n> > improvements to pg_basebackup that make it integrate with root requiring\n> > tooling, to do more efficient base backups. E.g. having pg_basebackup\n> > handle start/stop backup and WAL handling, but do the actual backup of\n> > the data via a snapshot mechanism (yes, one needs start/stop backup in\n> > the general case, for multiple FSs), would be nice.\n> \n> The challenge with this approach is that you need to drop the 'backup\n> label' file into place as part of this operation, either by putting it\n> into the snapshot after it's been taken, or by putting it into the data\n> directory at restore time. Of course, you have to keep track of WAL\n> anyway from the time the snapshots are taken until the restore is done,\n> so it's certainly possible, as with all of this, it's just somewhat\n> complicated.\n\nIt's not dead trivial, but also doesn't seem *that* hard to me compared\nto the other challenges of adding features like this? How to best\napproach it I think depends somewhat on what exact type of backup\n(mainly whether to set up a new system or to make a PITR base backup)\nwe'd want to focus on. And what kind of snapshotting system / what kind\nof target data store.\n\nPlenty of snapshotting systems allow write access to the snapshot once\nit finished, so that's one way one can deal with that. 
I have a hard\ntime believing that it'd be hard to have pg_basebackup delay writing the\nbackup label in that case. The WAL part would probably be harder, since\nthere we want to start writing before the snapshot is done. And copying\nall the WAL at the end isn't enticing either.\n\nFor the PITR base backup case it'd definitely be nice to support writing\n(potentially with callbacks instead of implementing all of them in core)\ninto $cloud_provider's blob store, without having to transfer all data\nfirst through a replication connection and then again to the blob store\n(and without manually implementing non-exclusive base backup). Adding\nWAL after the fact to the same blob really a thing for anything like\nthat (obviously - even if one can hack it by storing tars etc).\n\nWonder if the the WAL part in particular would actually be best solved\nby having recovery probe more than one WAL directory when looking for\nWAL segments (i.e. doing so before switching methods). Much faster than\nusing restore_command, and what one really wants in a pretty decent\nnumber of cases. And it'd allow to just restore the base backup\n(e.g. mount [copy of] the snapshot) and the received WAL stream\nseparately, without needing more complicated orchestration.\n\n\nPerhaps I am also answering something completely besides what you were\nwondering about?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Feb 2020 12:21:56 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup and snapshots" }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2020-02-07 14:56:47 -0500, Stephen Frost wrote:\n> > * Andres Freund (andres@anarazel.de) wrote:\n> > > Maybe that's looking too far into the future, but I'd like to see\n> > > improvements to pg_basebackup that make it integrate with root requiring\n> > > tooling, to do more efficient base backups. E.g. 
having pg_basebackup\n> > > handle start/stop backup and WAL handling, but do the actual backup of\n> > > the data via a snapshot mechanism (yes, one needs start/stop backup in\n> > > the general case, for multiple FSs), would be nice.\n> > \n> > The challenge with this approach is that you need to drop the 'backup\n> > label' file into place as part of this operation, either by putting it\n> > into the snapshot after it's been taken, or by putting it into the data\n> > directory at restore time. Of course, you have to keep track of WAL\n> > anyway from the time the snapshots are taken until the restore is done,\n> > so it's certainly possible, as with all of this, it's just somewhat\n> > complicated.\n> \n> It's not dead trivial, but also doesn't seem *that* hard to me compared\n> to the other challenges of adding features like this? How to best\n> approach it I think depends somewhat on what exact type of backup\n> (mainly whether to set up a new system or to make a PITR base backup)\n> we'd want to focus on. And what kind of snapshotting system / what kind\n> of target data store.\n\nI'm also not sure that pg_basebackup is the right tool for this though,\nreally, given the complications and how it's somewhat beyond what\npg_basebackup's mandate is. This isn't something you'd like do\nremotely, for example, due to the need to take the snapshot, mount the\nsnapshot, etc. I don't see this as really in line with \"just another\noption to -F\", there'd be a fair bit of configuring, it seems, and a\ngood deal of what pg_basebackup would really be doing with this feature\nis just running bits of code the user has given us, except for the\nactual calls to PG to do start/stop backup.\n\n> Plenty of snapshotting systems allow write access to the snapshot once\n> it finished, so that's one way one can deal with that. I have a hard\n> time believing that it'd be hard to have pg_basebackup delay writing the\n> backup label in that case. 
The WAL part would probably be harder, since\n> there we want to start writing before the snapshot is done. And copying\n> all the WAL at the end isn't enticing either.\n\npg_basebackup already delays writing out the backup label until the end.\n\nBut, yes, there's also timing issues to deal with, which are complicated\nbecause there isn't just a syscall we can use to say \"take a snapshot\nfor us\" or to say \"mount this snapshot over here\" (at least, not in any\nkind of portable way, even in places where such things do exist). Maybe\nwe could have shell commands that a user provides for \"take a snapshot\"\nand \"mount this snapshot\", but putting all of that on the user has its\nown drawbacks (more on that below..).\n\n> For the PITR base backup case it'd definitely be nice to support writing\n> (potentially with callbacks instead of implementing all of them in core)\n> into $cloud_provider's blob store, without having to transfer all data\n> first through a replication connection and then again to the blob store\n> (and without manually implementing non-exclusive base backup). Adding\n> WAL after the fact to the same blob really a thing for anything like\n> that (obviously - even if one can hack it by storing tars etc).\n\nWe seem to be mixing things now.. You've moved into talking about 'blob\nstores' which are rather different from snapshots, no? 
I certainly agree\nwith the general idea of supporting blob stores (pgbackrest has\nsupported s3 for quite some time, with a nicely pluggable architecture\nthat we'll be using to write drivers for other blob storage, all in very\nwell tested C99 code, and it's all done directly, if you want, without\ngoing over the network in some other way first..).\n\nI don't really care for the idea of using callbacks for this, at least\nif what you mean by \"callback\" is \"something like archive_command\".\nThere's a lot of potential failure cases and issues, writing to most s3\nstores requires retries, and getting it all to work right when you're\ngoing through a shell to run some other command to actually get the data\nacross safely and durably is, ultimately, a bit of a mess. I feel like\nwe should be learning from the mess that is archive_command and avoiding\nanything like that if at all possible when it comes to moving data\naround that needs to be confirmed durably written. Making users have to\npiece together the bits to make it work just isn't a good idea either\n(see, again, archive command, and our own documentation for why that's a\nbad idea...).\n\n> Wonder if the the WAL part in particular would actually be best solved\n> by having recovery probe more than one WAL directory when looking for\n> WAL segments (i.e. doing so before switching methods). Much faster than\n> using restore_command, and what one really wants in a pretty decent\n> number of cases. And it'd allow to just restore the base backup\n> (e.g. mount [copy of] the snapshot) and the received WAL stream\n> separately, without needing more complicated orchestration.\n\nThat looks to be pretty orthogonal to the original discussion, but it\ndoesn't seem like a terrible idea. 
I'd want David's thoughts on it, but\nit seems like this might work pretty well for pgbackrest- we already\npull down WAL in advance of the restore_command asking for it and store\nit nearby so we can swap it into place about as fast as possible. Being\nable to give a directory instead would be nice, although you have to\nfigure out which WAL is going to be needed (which timeline, what time or\nrecovery point for PITR, etc) and that information isn't passed to the\nrecovery_command currently. We are working presently on adding support\nto pgbackrest to better understand the point in time being asked by the\nuser for a restore, and we have plans to scan the WAL and track recovery\npoints, and we should know the timeline they're asking for, so maybe\nonce all that's done we will just 'know' what PG is going to ask for and\ncan prep it into a directory, but I don't think it really makes sense to\nassume that all of the WAL that might ever be asked for is going to be\nin one directory or that users will necessairly be happy with having\nwhat would potentially be a pretty large volume have all of the WAL to\nperform the restore with. Having something fetch WAL and feed it into\nthe directory, maintaining some user-defined size, and then having\nsomething (PG maybe?) remove WAL when done might work..\n\nIf we were doing all of this from scratch, or without a\n'restore_command' kind of interface, I feel like we'd have 3 or 4\ndifferent patches to choose from that implemented s3 support in core,\npotentially with all of this pre-fetching and queue'ing. The restore\ncommand approach does mean that older versions of PG can leverage a tool\nlike pgbackrest to get these features though, so I guess that's a\npositive for it. 
Certainly, one of the reasons we've hacked on\npgbackrest with these things is because we can support *existing*\ndeployments, whereas something in core wouldn't be available until at\nleast next year and you'd have to get people upgraded to it and such..\n\n> Perhaps I am also answering something completely besides what you were\n> wondering about?\n\nThere definitely are a few different threads and thoughts in here...\nThey're mostly about backups and PITR of some sort though, so I'm happy\nto chat about them. :)\n\nThanks,\n\nStephen", "msg_date": "Fri, 7 Feb 2020 16:13:39 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup and snapshots" } ]
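The -Ft ownership behaviour debated above comes down to the fact that a tar header carries both the numeric uid/gid and the textual uname/gname, and that GNU tar, when extracting as root, resolves the name first and falls back to the numeric id only when no such account exists. pg_basebackup hard-codes the names to "postgres" while passing the files' numeric ids through. A small stdlib sketch mimicking that header layout — Python's tarfile performs the same name-first lookup on extraction; the uid 26 is just an arbitrary example value, not something the thread specifies:

```python
import io
import tarfile

# Build an in-memory tar member the way pg_basebackup does: numeric
# uid/gid taken from the server's files, but uname/gname fixed to
# "postgres" regardless of the actual account name.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"dummy relation file"
    info = tarfile.TarInfo(name="base/1/1234")
    info.size = len(data)
    info.uid, info.gid = 26, 26                      # numeric ids of the backed-up files
    info.uname, info.gname = "postgres", "postgres"  # hard-coded names
    tar.addfile(info, io.BytesIO(data))

# On extraction as root, the *name* is looked up first and the numeric
# id is only a fallback -- so untarring on a host that has a "postgres"
# account yields files owned by that account, whatever its local uid is,
# while a host without one gets uid/gid 26 verbatim.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    member = tar.getmember("base/1/1234")
    print(member.uname, member.uid)                  # postgres 26
```

An --owner/--group style override of the kind suggested earlier in the thread would amount to letting the caller choose the uname/gname written here instead of the fixed string.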
[ { "msg_contents": "Hi,\n\nI've started a new separate thread from the previous long thread[1]\nfor internal key management system to PostgreSQL. As I mentioned in\nthe previous mail[2], we've decided to step back and focus on only\ninternal key management system for PG13. The internal key management\nsystem introduces the functionality to PostgreSQL that allows user to\nencrypt and decrypt data without knowing the actual key. Besides, it\nwill be able to be integrated with transparent data encryption in the\nfuture.\n\nThe basic idea is that PostgreSQL generates the master encryption key\nwhich is further protected by the user-provided passphrase. The key\nmanagement system provides two functions to wrap and unwrap the secret\nby the master encryption key. A user generates a secret key locally\nand send it to PostgreSQL to wrap it using by pg_kmgr_wrap() and save\nit somewhere. Then the user can use the encrypted secret key to\nencrypt data and decrypt data by something like:\n\nINSERT INTO tbl VALUES (pg_encrypt('user data', pg_kmgr_unwrap('xxxxx'));\nSELECT pg_decrypt(secret_column, pg_kmgr_unwrap('xxxxx')) FROM tbl;\n\nWhere 'xxxxx' is the result of pg_kmgr_wrap function.\n\nThat way we can get something encrypted and decrypted without ever\nknowing the actual key that was used to encrypt it.\n\nI'm currently updating the patch and will submit it.\n\nOn Sun, 2 Feb 2020 at 00:37, Sehrope Sarkuni <sehrope@jackdb.com> wrote:\n>\n> On Fri, Jan 31, 2020 at 1:21 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > On Thu, 30 Jan 2020 at 20:36, Sehrope Sarkuni <sehrope@jackdb.com> wrote:\n> > > That\n> > > would allow the internal usage to have a fixed output length of\n> > > LEN(IV) + LEN(HMAC) + LEN(DATA) = 16 + 32 + 64 = 112 bytes.\n> >\n> > Probably you meant LEN(DATA) is 32? DATA will be an encryption key for\n> > AES256 (master key) internally generated.\n>\n> No it should be 64-bytes. 
That way we can have separate 32-byte\n> encryption key (for AES256) and 32-byte MAC key (for HMAC-SHA256).\n>\n> While it's common to reuse the same 32-byte key for both AES256 and an\n> HMAC-SHA256 and there aren't any known issues with doing so, when\n> designing something from scratch it's more secure to use entirely\n> separate keys.\n\nThe HMAC key you mentioned above is not the same as the HMAC key\nderived from the user provided passphrase, right? That is, individual\nkey needs to have its IV and HMAC key. Given that the HMAC key used\nfor HMAC(IV || ENCRYPT(KEY, IV, DATA)) is the latter key (derived from\npassphrase), what will be the former key used for?\n\n>\n> > > For the user facing piece, padding would enabled to support arbitrary\n> > > input data lengths. That would make the output length grow by up to\n> > > 16-bytes (rounding the data length up to the AES block size) plus one\n> > > more byte if a version field is added.\n> >\n> > I think the length of padding also needs to be added to the output.\n> > Anyway, in the first version the same methods of wrapping/unwrapping\n> > key are used for both internal use and user facing function. And user\n> > input key needs to be a multiple of 16 bytes value.\n>\n> A separate length field does not need to be added as the\n> padding-enabled output will already include it at the end[1]. 
This\n> would be handled automatically by the OpenSSL encryption / decryption\n> operations (if it's enabled):\n>\n\nYes, right.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/031401d3f41d%245c70ed90%241552c8b0%24%40lab.ntt.co.jp\n[2] https://www.postgresql.org/message-id/CAD21AoD8QT0TWs3ma-aB821vwDKa1X519y1w3yrRKkAWjhZcrw%40mail.gmail.com\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 2 Feb 2020 09:02:04 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Internal key management system" }, { "msg_contents": "\nHello Masahiko-san,\n\n> I've started a new separate thread from the previous long thread[1]\n> for internal key management system to PostgreSQL. As I mentioned in\n> the previous mail[2], we've decided to step back and focus on only\n> internal key management system for PG13. The internal key management\n> system introduces the functionality to PostgreSQL that allows user to\n> encrypt and decrypt data without knowing the actual key. Besides, it\n> will be able to be integrated with transparent data encryption in the\n> future.\n>\n> The basic idea is that PostgreSQL generates the master encryption key\n> which is further protected by the user-provided passphrase. The key\n> management system provides two functions to wrap and unwrap the secret\n> by the master encryption key. A user generates a secret key locally\n\nIn understand that the secret key is sent in the clear for being encrypted \nby a master key.\n\n> and send it to PostgreSQL to wrap it using by pg_kmgr_wrap() and save\n> it somewhere. 
Then the user can use the encrypted secret key to\n> encrypt data and decrypt data by something like:\n>\n> INSERT INTO tbl VALUES (pg_encrypt('user data', pg_kmgr_unwrap('xxxxx'));\n> SELECT pg_decrypt(secret_column, pg_kmgr_unwrap('xxxxx')) FROM tbl;\n>\n> Where 'xxxxx' is the result of pg_kmgr_wrap function.\n\nI'm lost. If pg_{en,de}crypt and pg_kmgr_unwrap are functions, what \nprevent users to:\n\n SELECT pg_kmgr_unwrap('xxxx');\n\nso as to recover the key, somehow in contradiction to \"allows user to \nencrypt and decrypt data without knowing the actual key\".\n\nWhen dealing with cryptography and key management, I can only recommand \nextreme caution.\n\n-- \nFabien.\n\n\n", "msg_date": "Sun, 2 Feb 2020 09:05:38 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hi;\n\nSo I actually have tried to do carefully encrypted data in Postgres via\npg_crypto. I think the key management problems in PostgreSQL are separable\nfrom table-level encryption. In particular the largest problem right now\nwith having encrypted attributes is accidental key disclosure. I think if\nwe solve key management in a way that works for encrypted attributes first,\nwe can then add encrypted tables later.\n\nAdditionally big headaches come with key rotation. So here are my thoughts\nhere. This is a fairly big topic. And I am not sure it can be done\nincrementally as much as that seems to doom big things in the community,\nbut I think it could be done with a major push by a combination of big\nplayers, such as Second Quadrant.\n\n\nOn Sun, Feb 2, 2020 at 3:02 AM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> Hi,\n>\n> I've started a new separate thread from the previous long thread[1]\n> for internal key management system to PostgreSQL. 
As I mentioned in\n> the previous mail[2], we've decided to step back and focus on only\n> internal key management system for PG13. The internal key management\n> system introduces the functionality to PostgreSQL that allows user to\n> encrypt and decrypt data without knowing the actual key. Besides, it\n> will be able to be integrated with transparent data encryption in the\n> future.\n>\n> The basic idea is that PostgreSQL generates the master encryption key\n> which is further protected by the user-provided passphrase. The key\n> management system provides two functions to wrap and unwrap the secret\n> by the master encryption key. A user generates a secret key locally\n> and send it to PostgreSQL to wrap it using by pg_kmgr_wrap() and save\n> it somewhere. Then the user can use the encrypted secret key to\n> encrypt data and decrypt data by something like:\n>\n\nSo my understanding is that you would then need something like:\n\n1. Symmetric keys for actual data storage. These could never be stored in\nthe clear.\n2. User public/private keys to use to access data storage keys. The\nprivate key would need to be encrypted with passphrases. And the server\nneeds to access the private key.\n3. Symmetric secret keys to encrypt private keys\n4. A key management public/private key pair used to exchange the password\nfor the private key.\n\n>\n> INSERT INTO tbl VALUES (pg_encrypt('user data', pg_kmgr_unwrap('xxxxx'));\n> SELECT pg_decrypt(secret_column, pg_kmgr_unwrap('xxxxx')) FROM tbl;\n>\n\nIf you get anything wrong you risk logs being useful to break the\nencryption keys and make data access easy. You don't want\npg_kmgr_unwrap('xxxx') in your logs.\n\nHere is what I would suggest: a protocol extension to do the key exchange.\nIn other words, protocol messages to:\n1. Request data exchange server public key.\n2. Send server public-key encrypted symmetric key. 
Make sure it is\nproperly padded etc.\n\nThese are safe still only over SSL with sslmode=verify-full since otherwise\nyou might be vulnerable to an MITM attack.\n\nThen the keys should be stored in something like CacheMemoryContext and\npg_encrypt()/pg_decrypt() would have access to them along with appropriate\n catalogs needed to get to the storage keys themselves.\n\n\n\n\n>\n> Where 'xxxxx' is the result of pg_kmgr_wrap function.\n>\n> That way we can get something encrypted and decrypted without ever\n> knowing the actual key that was used to encrypt it.\n>\n> I'm currently updating the patch and will submit it.\n>\n\nThe above though is only a small part of the problem. What we also need\nare a variety of new DDL commands specifically for key management. This is\nneeded because without commands of this sort, we cannot make absolutely\nsure that the commands are never logged. These commands MUST not have keys\nlogged and therefore must have keys stripped prior to logging. If I were\ndesigning this:\n\n1. Users on an SSL connection would be able to: CREATE ENCRYPTION USER\nKEY PAIR WITH PASSWORD 'xyz' which would automatically rotate keys.\n2. Superusers could: ALTER SYSTEM ROTATE ENCRYPTION EXCHANGE KEY PAIR;\n3. Add an ENCRYPTED attribute to columns and disallow indexing of\nENCRYPTED columns. This would store keys for the columns encrypted with\nuser public keys where they have access.\n4. 
Allow superusers to ALTER TABLE foo ALTER encrypted_column ROTATE KEYS;\nwhich would naturally require a full table rewrite.\n\nNow, what that proposal does not provide is the use of encryption to\nenforce finer-grained access such as per-row keys but that's another topic\nand maybe something we don't need.\n\nHowever I hope that explains what I see as a version of a minimum viable\ninfrastructure here.\n\n>\n> On Sun, 2 Feb 2020 at 00:37, Sehrope Sarkuni <sehrope@jackdb.com> wrote:\n> >\n> > On Fri, Jan 31, 2020 at 1:21 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > On Thu, 30 Jan 2020 at 20:36, Sehrope Sarkuni <sehrope@jackdb.com>\n> wrote:\n> > > > That\n> > > > would allow the internal usage to have a fixed output length of\n> > > > LEN(IV) + LEN(HMAC) + LEN(DATA) = 16 + 32 + 64 = 112 bytes.\n> > >\n> > > Probably you meant LEN(DATA) is 32? DATA will be an encryption key for\n> > > AES256 (master key) internally generated.\n> >\n> > No it should be 64-bytes. That way we can have separate 32-byte\n> > encryption key (for AES256) and 32-byte MAC key (for HMAC-SHA256).\n> >\n> > While it's common to reuse the same 32-byte key for both AES256 and an\n> > HMAC-SHA256 and there aren't any known issues with doing so, when\n> > designing something from scratch it's more secure to use entirely\n> > separate keys.\n>\n> The HMAC key you mentioned above is not the same as the HMAC key\n> derived from the user provided passphrase, right? That is, individual\n> key needs to have its IV and HMAC key. Given that the HMAC key used\n> for HMAC(IV || ENCRYPT(KEY, IV, DATA)) is the latter key (derived from\n> passphrase), what will be the former key used for?\n>\n> >\n> > > > For the user facing piece, padding would enabled to support arbitrary\n> > > > input data lengths. 
That would make the output length grow by up to\n> > > > 16-bytes (rounding the data length up to the AES block size) plus one\n> > > > more byte if a version field is added.\n> > >\n> > > I think the length of padding also needs to be added to the output.\n> > > Anyway, in the first version the same methods of wrapping/unwrapping\n> > > key are used for both internal use and user facing function. And user\n> > > input key needs to be a multiple of 16 bytes value.\n> >\n> > A separate length field does not need to be added as the\n> > padding-enabled output will already include it at the end[1]. This\n> > would be handled automatically by the OpenSSL encryption / decryption\n> > operations (if it's enabled):\n> >\n>\n> Yes, right.\n>\n> Regards,\n>\n> [1]\n> https://www.postgresql.org/message-id/031401d3f41d%245c70ed90%241552c8b0%24%40lab.ntt.co.jp\n> [2]\n> https://www.postgresql.org/message-id/CAD21AoD8QT0TWs3ma-aB821vwDKa1X519y1w3yrRKkAWjhZcrw%40mail.gmail.com\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin", "msg_date": "Mon, 3 Feb 2020 05:37:01 +0300", "msg_from": "Chris Travers <chris.travers@adjust.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sun, 2 Feb 2020 at 17:05, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Masahiko-san,\n>\n> > I've started a new separate thread from the previous long thread[1]\n> > for internal key management system to PostgreSQL. As I mentioned in\n> > the previous mail[2], we've decided to step back and focus on only\n> > internal key management system for PG13. The internal key management\n> > system introduces the functionality to PostgreSQL that allows user to\n> > encrypt and decrypt data without knowing the actual key. 
Besides, it\n> > will be able to be integrated with transparent data encryption in the\n> > future.\n> >\n> > The basic idea is that PostgreSQL generates the master encryption key\n> > which is further protected by the user-provided passphrase. The key\n> > management system provides two functions to wrap and unwrap the secret\n> > by the master encryption key. A user generates a secret key locally\n>\n> In understand that the secret key is sent in the clear for being encrypted\n> by a master key.\n\nYeah we need to be careful about the secret key not being logged in\nany logs such as server logs for example when log_statement = 'all'. I\nguess that wrapping key doesn't often happen during service running\nbut does once at development phase. So it would not be a big problem\nbut probably we need to have something to deal with it.\n\n>\n> > and send it to PostgreSQL to wrap it using by pg_kmgr_wrap() and save\n> > it somewhere. Then the user can use the encrypted secret key to\n> > encrypt data and decrypt data by something like:\n> >\n> > INSERT INTO tbl VALUES (pg_encrypt('user data', pg_kmgr_unwrap('xxxxx'));\n> > SELECT pg_decrypt(secret_column, pg_kmgr_unwrap('xxxxx')) FROM tbl;\n> >\n> > Where 'xxxxx' is the result of pg_kmgr_wrap function.\n>\n> I'm lost. If pg_{en,de}crypt and pg_kmgr_unwrap are functions, what\n> prevent users to:\n>\n> SELECT pg_kmgr_unwrap('xxxx');\n>\n> so as to recover the key, somehow in contradiction to \"allows user to\n> encrypt and decrypt data without knowing the actual key\".\n\nI might be missing your point but the above 'xxxx' is the wrapped key\nwrapped by the master key stored in PostgreSQL server. So user doesn't\nneed to know the raw secret key to encrypt/decrypt the data. Even if a\nmalicious user gets 'xxxx' they cannot know the actual secret key\nwithout the master key. pg_kmgr_wrap and pg_kmgr_unwrap are functions\nand it's possible for user to know the raw secret key by using\npg_kmgr_unwrap(). 
The master key stored in PostgreSQL server will never be\nrevealed.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 4 Feb 2020 12:17:14 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sat, Feb 1, 2020 at 7:02 PM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n> On Sun, 2 Feb 2020 at 00:37, Sehrope Sarkuni <sehrope@jackdb.com> wrote:\n> >\n> > On Fri, Jan 31, 2020 at 1:21 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > On Thu, 30 Jan 2020 at 20:36, Sehrope Sarkuni <sehrope@jackdb.com>\nwrote:\n> > > > That\n> > > > would allow the internal usage to have a fixed output length of\n> > > > LEN(IV) + LEN(HMAC) + LEN(DATA) = 16 + 32 + 64 = 112 bytes.\n> > >\n> > > Probably you meant LEN(DATA) is 32? DATA will be an encryption key for\n> > > AES256 (master key) internally generated.\n> >\n> > No it should be 64-bytes. That way we can have separate 32-byte\n> > encryption key (for AES256) and 32-byte MAC key (for HMAC-SHA256).\n> >\n> > While it's common to reuse the same 32-byte key for both AES256 and an\n> > HMAC-SHA256 and there aren't any known issues with doing so, when\n> > designing something from scratch it's more secure to use entirely\n> > separate keys.\n>\n> The HMAC key you mentioned above is not the same as the HMAC key\n> derived from the user provided passphrase, right? That is, individual\n> key needs to have its IV and HMAC key. Given that the HMAC key used\n> for HMAC(IV || ENCRYPT(KEY, IV, DATA)) is the latter key (derived from\n> passphrase), what will be the former key used for?\n\nIt's not derived from the passphrase, it's unlocked by the passphrase\n(along with the master encryption key). The server will have 64-bytes of
The server will have 64-bytes of\nrandom data, saved encrypted in pg_control, which can be treated as two\nseparate 32-byte keys, let's call them master_encryption_key and\nmaster_mac_key. The 64-bytes is unlocked by decrypting it with the user\npassphrase at startup (which itself would be split into a pair of\nencryption and MAC keys to do the unlocking).\n\nThe wrap and unwrap operations would use both keys:\n\nwrap(plain_text, encryption_key, mac_key) {\n // Generate random IV:\n iv = pg_strong_random(16);\n // Encrypt:\n cipher_text = encrypt_aes256_cbc(encryption_key, iv, plain_text);\n // Compute MAC on all inputs:\n mac = hmac_sha256(mac_key, encryption_key || iv || cipher_text);\n // Concat user facing pieces together\n wrapped = mac || iv || cipher_text;\n return wrapped;\n}\n\nunwrap(wrapped, encryption_key, mac_key) {\n // Split wrapped into its pieces:\n actual_mac = wrapped.slice(0, 32);\n iv = wrapped.slice(0 + 32, 16);\n cipher_text = wrapped.slice(0 + 32 + 16);\n // Compute MAC on all inputs:\n expected_mac = hmac_sha256(mac_key, encryption_key || iv ||\ncipher_text);\n // Compare MAC vs value in wrapped:\n if (expected_mac != actual_mac) { return Error(\"MAC does not match\"); }\n // MAC matches so decrypt:\n plain_text = decrypt_aes256_cbc(encryption_key, iv, cipher_text);\n return plain_text;\n}\n\nEvery input to the encryption operation, including the encryption key, must\nbe included into the HMAC calculation. If you use the same key for both\nencryption and MAC that's not required as it's already part of the MAC\nprocess as the key. Using separate keys requires explicitly adding in the\nencryption key into the MAC input to ensure that it the correct key prior\nto decryption in the unwrap operation. 
Any additional parts of the wrapped\noutput (ex: a \"version\" byte for the algos or padding choices) should also\nbe included.\n\nThe wrap / unwrap above would be used with the encryption and mac keys\nderived from the user passphrase to unlock the master_encryption_key and\nmaster_mac_key from pg_control. Then those would be used by the higher\nlevel functions:\n\npg_kmgr_wrap(plain_text) {\n return wrap(plain_text, master_encryption_key, master_mac_key);\n}\n\npg_kmgr_unwrap(wrapped) {\n return unwrap(wrapped, master_encryption_key, master_mac_key);\n}\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/", "msg_date": "Wed, 5 Feb 2020 08:28:22 -0500", "msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Mon, Feb 3, 2020 at 10:18 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> > I'm lost. If pg_{en,de}crypt and pg_kmgr_unwrap are functions, what\n> > prevent users to:\n> >\n> > SELECT pg_kmgr_unwrap('xxxx');\n> >\n> > so as to recover the key, somehow in contradiction to \"allows user to\n> > encrypt and decrypt data without knowing the actual key\".\n>\n> I might be missing your point but the above 'xxxx' is the wrapped key\n> wrapped by the master key stored in PostgreSQL server. So user doesn't\n> need to know the raw secret key to encrypt/decrypt the data. Even if a\n> malicious user gets 'xxxx' they cannot know the actual secret key\n> without the master key. pg_kmgr_wrap and pg_kmgr_unwrap are functions\n> and it's possible for user to know the raw secret key by using\n> pg_kmgr_unwrap(). The master key stored in PostgreSQL server never be\n> revealed.\n\nI think I have the same confusion as Fabien. 
Isn't it bad if somebody\njust runs pg_kmgr_unwrap() and records the return value? Now they've\nstolen your encryption key, it seems.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 6 Feb 2020 15:30:02 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Since the user does not need to know the master secret key used to cipher the data, I don't think we should expose \"pg_kmgr_unwrap(\"xxxx\")\" SQL function to the user at all.\n\nThe wrapped key \"xxxx\" is stored in control data and it is possible to obtain by malicious user and steal the key by running SELECT pg_kmgr_unwrap(\"xxxx\"). \n\nEven the user is righteous, it may not be very straightforward for that user to obtain the wrapped key \"xxxx\" to use in the unwrap function.\n\n\n\npg_kmgr_(un)wrap function is in discussion because encrypt and decrypt function require the master secret key as input argument. \n\nI would suggest using cluster passphrase as input instead of master key, so the user does not have to obtain the master key using pg_kmgr_unwrap(\"xxxx\") in order to use the encrypt and decrypt function. \n\nThe passphrase is in fact not stored anywhere in the system and we have to be careful that this passphrase is not shown in any activity log\n\n\n\nso instead of:\n\n------------------\n\n\nINSERT INTO tbl VALUES (pg_encrypt('user data', pg_kmgr_unwrap('xxxxx'));\n\nSELECT pg_decrypt(secret_column, pg_kmgr_unwrap('xxxxx')) FROM tbl;\n\n\n\nit would become:\n\n------------------\n\nINSERT INTO tbl VALUES (pg_encrypt('user data', 'cluster_pass_phrase');\n\nSELECT pg_decrypt(secret_column, 'cluster_pass_phrase') FROM tbl;\n\n\n\npg_decrypt will then have to:\n\n\n\n1. derive the cluster pass phrase into KEK and HMAC key \n\n2. verify pass phrase by comparing MAC\n\n3. 
unwrap the key - Sehrope suggests a good approach to make wrap/unwrap function more secure by adding MAC verification and randomed IV instead of default. I think it is good\n\n4. decrypt the data\n\n5. return\n\n\n\nUsing passphrase instead of master key to encrypt and decrypt function will also make front end tool integration simpler, as the front end tool also do not need to know the master key so it does not need to derive KEK or unwrap the key...etc. \n\nNot sure if you guys agree?\n\n\n\nThanks!\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca\n\n\n\n\n\n\n---- On Thu, 06 Feb 2020 12:30:02 -0800 Robert Haas <robertmhaas@gmail.com> wrote ----\n\n\n\nOn Mon, Feb 3, 2020 at 10:18 PM Masahiko Sawada \n<mailto:masahiko.sawada@2ndquadrant.com> wrote: \n> > I'm lost. If pg_{en,de}crypt and pg_kmgr_unwrap are functions, what \n> > prevent users to: \n> > \n> > SELECT pg_kmgr_unwrap('xxxx'); \n> > \n> > so as to recover the key, somehow in contradiction to \"allows user to \n> > encrypt and decrypt data without knowing the actual key\". \n> \n> I might be missing your point but the above 'xxxx' is the wrapped key \n> wrapped by the master key stored in PostgreSQL server. So user doesn't \n> need to know the raw secret key to encrypt/decrypt the data. Even if a \n> malicious user gets 'xxxx' they cannot know the actual secret key \n> without the master key. pg_kmgr_wrap and pg_kmgr_unwrap are functions \n> and it's possible for user to know the raw secret key by using \n> pg_kmgr_unwrap(). The master key stored in PostgreSQL server never be \n> revealed. \n \nI think I have the same confusion as Fabien. Isn't it bad if somebody \njust runs pg_kmgr_unwrap() and records the return value? Now they've \nstolen your encryption key, it seems. 
\n \n-- \nRobert Haas \nEnterpriseDB: http://www.enterprisedb.com \nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 06 Feb 2020 13:36:58 -0800", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Fri, 7 Feb 2020 at 05:30, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Feb 3, 2020 at 10:18 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > > I'm lost. 
If pg_{en,de}crypt and pg_kmgr_unwrap are functions, what\n> > > prevent users to:\n> > >\n> > > SELECT pg_kmgr_unwrap('xxxx');\n> > >\n> > > so as to recover the key, somehow in contradiction to \"allows user to\n> > > encrypt and decrypt data without knowing the actual key\".\n> >\n> > I might be missing your point but the above 'xxxx' is the wrapped key\n> > wrapped by the master key stored in PostgreSQL server. So user doesn't\n> > need to know the raw secret key to encrypt/decrypt the data. Even if a\n> > malicious user gets 'xxxx' they cannot know the actual secret key\n> > without the master key. pg_kmgr_wrap and pg_kmgr_unwrap are functions\n> > and it's possible for user to know the raw secret key by using\n> > pg_kmgr_unwrap(). The master key stored in PostgreSQL server never be\n> > revealed.\n>\n> I think I have the same confusion as Fabien. Isn't it bad if somebody\n> just runs pg_kmgr_unwrap() and records the return value? Now they've\n> stolen your encryption key, it seems.\n\nThis feature protects data from disk theft but cannot protect data\nfrom attackers who are able to access the PostgreSQL server. In this\ndesign the application side is still responsible for managing the wrapped\nsecret in order to protect it from attackers. This is the same as when\nwe use pgcrypto now. The difference is that the data is safe even if\nattackers steal the wrapped secret and the disk. The data cannot be\ndecrypted without the passphrase (which can be stored in another\nkey management system) or without access to the postgres server. IOW, for\nexample, attackers can get the data if they get the wrapped secret\nmanaged by the application side, then run pg_kmgr_unwrap() to get the\nsecret, and then steal the disk.\n\nAnother idea we discussed is to internally integrate pgcrypto with the\nkey management system. That is, the key management system has one\nmaster key and provides a C function to pass the master key to other\npostgres modules. 
pgcrypto uses that function and provides new\nencryption and decryption functions, something like\npg_encrypt_with_key() and pg_decrypt_with_key(), which\nencrypt/decrypt the given data with the master key stored in the database\ncluster. That way the user still doesn't have to know the encryption key\nand we can protect data from disk theft. But the downside would be\nthat we have only one encryption key and that we might need to change\npgcrypto quite a bit.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 7 Feb 2020 11:18:29 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hi,\n\nOn 2020-02-07 11:18:29 +0900, Masahiko Sawada wrote:\n> Another idea we discussed is to internally integrate pgcrypto with the\n> key management system.\n\nPerhaps this has already been discussed (I only briefly looked): I'd\nstrongly advise against having any new infrastructure depend on\npgcrypto. Its code quality imo is well below our standards and contains\nserious red flags like very outdated copies of cryptography algorithm\nimplementations. I think we should consider deprecating and removing\nit, not expanding its use. 
It certainly shouldn't be involved in any\npotential disk encryption system at a later stage.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 Feb 2020 18:36:00 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Fri, 7 Feb 2020 at 11:36, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-02-07 11:18:29 +0900, Masahiko Sawada wrote:\n> > Another idea we discussed is to internally integrate pgcrypto with the\n> > key management system.\n>\n> Perhaps this has already been discussed (I only briefly looked): I'd\n> strongly advise against having any new infrastrure depend on\n> pgcrypto. Its code quality imo is well below our standards and contains\n> serious red flags like very outdated copies of cryptography algorithm\n> implementations. I think we should consider deprecating and removing\n> it, not expanding its use. It certainly shouldn't be involved in any\n> potential disk encryption system at a later stage.\n\nThank you for the advice.\n\nYeah I'm not going to use pgcrypto for transparent data encryption.\nThe KMS patch includes the new basic infrastructure for cryptographic\nfunctions (mainly AES-CBC). I'm thinking we can expand that\ninfrastructure so that we can also use it for TDE purpose by\nsupporting new cryptographic functions such as AES-CTR. 
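As background for why AES-CTR is attractive for the TDE use case: CTR mode turns a block cipher into a seekable stream cipher, so any 16-byte block of a page can be encrypted or decrypted independently by keying the cipher with a per-block counter. The sketch below shows only the counter-mode structure; an HMAC-SHA256-based keystream stands in for the AES block cipher (an assumption for illustration, since this is not the patch's code and Python's standard library has no AES):

```python
import hmac, hashlib

BLOCK = 16  # AES block size in bytes

def _keystream_block(key: bytes, nonce: bytes, counter: int) -> bytes:
    # Stand-in PRF for the AES block encryption of (nonce || counter);
    # illustrative only, not a real cipher.
    msg = nonce + counter.to_bytes(8, "big")
    return hmac.new(key, msg, hashlib.sha256).digest()[:BLOCK]

def ctr_xcrypt(key: bytes, nonce: bytes, data: bytes,
               first_block: int = 0) -> bytes:
    # In CTR mode encryption and decryption are the same XOR operation.
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        ks = _keystream_block(key, nonce, first_block + i // BLOCK)
        chunk = data[i:i + BLOCK]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)
```

Because block i depends only on the counter value, a single block in the middle of a relation file can be rewritten without touching its neighbors, which CBC cannot do since each CBC block chains on the previous ciphertext block.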
Anyway, I\nagree to not have it depend on pgcrypto.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 7 Feb 2020 20:44:31 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hi,\n\nOn 2020-02-07 20:44:31 +0900, Masahiko Sawada wrote:\n> Yeah I'm not going to use pgcrypto for transparent data encryption.\n> The KMS patch includes the new basic infrastructure for cryptographic\n> functions (mainly AES-CBC). I'm thinking we can expand that\n> infrastructure so that we can also use it for TDE purpose by\n> supporting new cryptographic functions such as AES-CTR. Anyway, I\n> agree to not have it depend on pgcrypto.\n\nI thought for a minute, before checking the patch, that you were saying\nabove that the KMS patch includes its *own* implementation of\ncryptographic functions. I think it's pretty crucial that it continues\nnot to do that...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Feb 2020 10:24:40 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sat, 8 Feb 2020 at 03:24, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-02-07 20:44:31 +0900, Masahiko Sawada wrote:\n> > Yeah I'm not going to use pgcrypto for transparent data encryption.\n> > The KMS patch includes the new basic infrastructure for cryptographic\n> > functions (mainly AES-CBC). I'm thinking we can expand that\n> > infrastructure so that we can also use it for TDE purpose by\n> > supporting new cryptographic functions such as AES-CTR. 
Anyway, I\n> > agree to not have it depend on pgcrypto.\n>\n> I thought for a minute, before checking the patch, that you were saying\n> above that the KMS patch includes its *own* implementation of\n> cryptographic functions. I think it's pretty crucial that it continues\n> not to do that...\n\nI meant that we're going to use OpenSSL for AES encryption and\ndecryption independent of pgcrypto's openssl code, as the first step.\nThat is, KMS is available only when configured --with-openssl. And\nhopefully we eventually merge these openssl code and have pgcrypto use\nit, like when we introduced SCRAM.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 8 Feb 2020 14:48:54 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sat, Feb 08, 2020 at 02:48:54PM +0900, Masahiko Sawada wrote:\n>On Sat, 8 Feb 2020 at 03:24, Andres Freund <andres@anarazel.de> wrote:\n>>\n>> Hi,\n>>\n>> On 2020-02-07 20:44:31 +0900, Masahiko Sawada wrote:\n>> > Yeah I'm not going to use pgcrypto for transparent data encryption.\n>> > The KMS patch includes the new basic infrastructure for cryptographic\n>> > functions (mainly AES-CBC). I'm thinking we can expand that\n>> > infrastructure so that we can also use it for TDE purpose by\n>> > supporting new cryptographic functions such as AES-CTR. Anyway, I\n>> > agree to not have it depend on pgcrypto.\n>>\n>> I thought for a minute, before checking the patch, that you were saying\n>> above that the KMS patch includes its *own* implementation of\n>> cryptographic functions. 
I think it's pretty crucial that it continues\n>> not to do that...\n>\n>I meant that we're going to use OpenSSL for AES encryption and\n>decryption independent of pgcrypto's openssl code, as the first step.\n>That is, KMS is available only when configured --with-openssl. And\n>hopefully we eventually merge these openssl code and have pgcrypto use\n>it, like when we introduced SCRAM.\n>\n\nI don't think it's very likely we'll ever merge any openssl code into\nour repository, e.g. because of licensing. But we already have an AES\nimplementation in pgcrypto - why not use that? I'm not saying we\nshould make this depend on pgcrypto, but maybe we should move the AES\nlibrary from pgcrypto into src/common or something like that.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 8 Feb 2020 16:08:26 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hi,\n\nI wonder if this is meant to support external KMS systems/services like\nVault (from HashiCorp) or CloudHSM (from AWS) or a hardware HSM. AFAICS\nthe current implementation does not allow storing keys in such external\nsystems, right? But it seems kinda reasonable to want to do that, when\nalready using the HSM for other parts of the system.\n\nNow, I'm not saying the first version we commit has to support this, or\nthat it necessarily makes sense. 
But for example MariaDB seems to\nsupport this [1].\n\n[1] https://mariadb.com/kb/en/encryption-key-management/\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 8 Feb 2020 16:16:55 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hi, \n\nOn February 8, 2020 7:08:26 AM PST, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>On Sat, Feb 08, 2020 at 02:48:54PM +0900, Masahiko Sawada wrote:\n>>On Sat, 8 Feb 2020 at 03:24, Andres Freund <andres@anarazel.de> wrote:\n>>>\n>>> Hi,\n>>>\n>>> On 2020-02-07 20:44:31 +0900, Masahiko Sawada wrote:\n>>> > Yeah I'm not going to use pgcrypto for transparent data\n>encryption.\n>>> > The KMS patch includes the new basic infrastructure for\n>cryptographic\n>>> > functions (mainly AES-CBC). I'm thinking we can expand that\n>>> > infrastructure so that we can also use it for TDE purpose by\n>>> > supporting new cryptographic functions such as AES-CTR. Anyway, I\n>>> > agree to not have it depend on pgcrypto.\n>>>\n>>> I thought for a minute, before checking the patch, that you were\n>saying\n>>> above that the KMS patch includes its *own* implementation of\n>>> cryptographic functions. I think it's pretty crucial that it\n>continues\n>>> not to do that...\n>>\n>>I meant that we're going to use OpenSSL for AES encryption and\n>>decryption independent of pgcrypto's openssl code, as the first step.\n>>That is, KMS is available only when configured --with-openssl. And\n>>hopefully we eventually merge these openssl code and have pgcrypto use\n>>it, like when we introduced SCRAM.\n>>\n>\n>I don't think it's very likely we'll ever merge any openssl code into\n>our repository, e.g. because of licensing. But we already have AES\n>implementation in pgcrypto - why not to use that? 
I'm not saying we\n>should make this depend on pgcrypto, but maybe we should move the AES\n>library from pgcrypto into src/common or something like that.\n\nThe code uses functions exposed by openssl, it doesn't copy their code.\n\nAnd no, I don't think we should copy the implementation from pgcrypto - it's not good. We should remove it entirely.\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Sat, 08 Feb 2020 07:47:24 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sat, Feb 08, 2020 at 02:48:54PM +0900, Masahiko Sawada wrote:\n>On Sat, 8 Feb 2020 at 03:24, Andres Freund <andres@anarazel.de> wrote:\n>>\n>> Hi,\n>>\n>> On 2020-02-07 20:44:31 +0900, Masahiko Sawada wrote:\n>> > Yeah I'm not going to use pgcrypto for transparent data encryption.\n>> > The KMS patch includes the new basic infrastructure for cryptographic\n>> > functions (mainly AES-CBC). I'm thinking we can expand that\n>> > infrastructure so that we can also use it for TDE purpose by\n>> > supporting new cryptographic functions such as AES-CTR. Anyway, I\n>> > agree to not have it depend on pgcrypto.\n>>\n>> I thought for a minute, before checking the patch, that you were saying\n>> above that the KMS patch includes its *own* implementation of\n>> cryptographic functions. I think it's pretty crucial that it continues\n>> not to do that...\n>\n>I meant that we're going to use OpenSSL for AES encryption and\n>decryption independent of pgcrypto's openssl code, as the first step.\n>That is, KMS is available only when configured --with-openssl. 
And\n>>>hopefully we eventually merge these openssl code and have pgcrypto use\n>>>it, like when we introduced SCRAM.\n>>>\n>>\n>>I don't think it's very likely we'll ever merge any openssl code into\n>>our repository, e.g. because of licensing. But we already have AES\n>>implementation in pgcrypto - why not to use that? I'm not saying we\n>>should make this depend on pgcrypto, but maybe we should move the AES\n>>library from pgcrypto into src/common or something like that.\n>\n>The code uses functions exposed by openssl, it doesn't copy there code.\n>\n\nSure, I know the code is currently calling openssl functions. I was\nresponding to Masahiko-san's message that we might eventually merge this\nopenssl code into our tree.\n\n>And no, I don't think we should copy the implemented from pgcrypto -\n>it's not good. We should remove it entirely.\n\nOK, no opinion on the quality of this implementation.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 8 Feb 2020 17:53:23 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Sat, Feb 08, 2020 at 07:47:24AM -0800, Andres Freund wrote:\n>> On February 8, 2020 7:08:26 AM PST, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>>> I don't think it's very likely we'll ever merge any openssl code into\n>>>> our repository, e.g. because of licensing. But we already have AES\n>>>> implementation in pgcrypto - why not to use that? I'm not saying we\n>>>> should make this depend on pgcrypto, but maybe we should move the AES\n>>>> library from pgcrypto into src/common or something like that.\n\n>> The code uses functions exposed by openssl, it doesn't copy there code.\n\n> Sure, I know the code is currently calling ooenssl functions. 
I was\n> responding to Masahiko-san's message that we might eventually merge this\n> openssl code into our tree.\n\nNo. This absolutely, positively, will not happen. There will never be\ncrypto functions in our core tree, because then there'd be no recourse for\npeople who want to use Postgres in countries with restrictions on crypto\nsoftware. It's hard enough for them that we have such code in contrib\n--- but at least they can remove pgcrypto and be legal. If it's in\nsrc/common then they're stuck.\n\nFor the same reason, I don't think that an \"internal key management\"\nfeature in the core code is ever going to be acceptable. It has to\nbe an extension. (But, as long as it's an extension, whether it's\nbringing its own crypto or relying on some other extension for that\ndoesn't matter from the legal standpoint.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 08 Feb 2020 12:07:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sun, 9 Feb 2020 at 01:53, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sat, Feb 08, 2020 at 07:47:24AM -0800, Andres Freund wrote:\n> >Hi,\n> >\n> >On February 8, 2020 7:08:26 AM PST, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> >>On Sat, Feb 08, 2020 at 02:48:54PM +0900, Masahiko Sawada wrote:\n> >>>On Sat, 8 Feb 2020 at 03:24, Andres Freund <andres@anarazel.de> wrote:\n> >>>>\n> >>>> Hi,\n> >>>>\n> >>>> On 2020-02-07 20:44:31 +0900, Masahiko Sawada wrote:\n> >>>> > Yeah I'm not going to use pgcrypto for transparent data\n> >>encryption.\n> >>>> > The KMS patch includes the new basic infrastructure for\n> >>cryptographic\n> >>>> > functions (mainly AES-CBC). I'm thinking we can expand that\n> >>>> > infrastructure so that we can also use it for TDE purpose by\n> >>>> > supporting new cryptographic functions such as AES-CTR. 
Anyway, I\n> >>>> > agree to not have it depend on pgcrypto.\n> >>>>\n> >>>> I thought for a minute, before checking the patch, that you were\n> >>saying\n> >>>> above that the KMS patch includes its *own* implementation of\n> >>>> cryptographic functions. I think it's pretty crucial that it\n> >>continues\n> >>>> not to do that...\n> >>>\n> >>>I meant that we're going to use OpenSSL for AES encryption and\n> >>>decryption independent of pgcrypto's openssl code, as the first step.\n> >>>That is, KMS is available only when configured --with-openssl. And\n> >>>hopefully we eventually merge these openssl code and have pgcrypto use\n> >>>it, like when we introduced SCRAM.\n> >>>\n> >>\n> >>I don't think it's very likely we'll ever merge any openssl code into\n> >>our repository, e.g. because of licensing. But we already have AES\n> >>implementation in pgcrypto - why not to use that? I'm not saying we\n> >>should make this depend on pgcrypto, but maybe we should move the AES\n> >>library from pgcrypto into src/common or something like that.\n> >\n> >The code uses functions exposed by openssl, it doesn't copy there code.\n> >\n>\n> Sure, I know the code is currently calling ooenssl functions. I was\n> responding to Masahiko-san's message that we might eventually merge this\n> openssl code into our tree.\n\nSorry for confusing you. What I wanted to say is to write AES\nencryption code in src/common using the openssl library as the first step,\napart from pgcrypto's openssl code, and then merge these two code\nlibraries into src/common as the next step. That is, it's moving the AES\nlibrary from pgcrypto to src/common as you mentioned. 
IIRC when we\nintroduced SCRAM we moved sha2 library from pgcrypto to src/common.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 9 Feb 2020 10:11:30 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "From: Andres Freund <andres@anarazel.de>\n> Perhaps this has already been discussed (I only briefly looked): I'd\n> strongly advise against having any new infrastrure depend on\n> pgcrypto. Its code quality imo is well below our standards and contains\n> serious red flags like very outdated copies of cryptography algorithm\n> implementations. I think we should consider deprecating and removing\n> it, not expanding its use. It certainly shouldn't be involved in any\n> potential disk encryption system at a later stage.\n\n+1\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Mon, 10 Feb 2020 00:23:15 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Internal key management system" }, { "msg_contents": "On Wed, 5 Feb 2020 at 22:28, Sehrope Sarkuni <sehrope@jackdb.com> wrote:\n>\n> On Sat, Feb 1, 2020 at 7:02 PM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n> > On Sun, 2 Feb 2020 at 00:37, Sehrope Sarkuni <sehrope@jackdb.com> wrote:\n> > >\n> > > On Fri, Jan 31, 2020 at 1:21 AM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > On Thu, 30 Jan 2020 at 20:36, Sehrope Sarkuni <sehrope@jackdb.com> wrote:\n> > > > > That\n> > > > > would allow the internal usage to have a fixed output length of\n> > > > > LEN(IV) + LEN(HMAC) + LEN(DATA) = 16 + 32 + 64 = 112 bytes.\n> > > >\n> > > > Probably you meant LEN(DATA) is 32? 
DATA will be an encryption key for\n> > > > AES256 (master key) internally generated.\n> > >\n> > > No it should be 64-bytes. That way we can have separate 32-byte\n> > > encryption key (for AES256) and 32-byte MAC key (for HMAC-SHA256).\n> > >\n> > > While it's common to reuse the same 32-byte key for both AES256 and an\n> > > HMAC-SHA256 and there aren't any known issues with doing so, when\n> > > designing something from scratch it's more secure to use entirely\n> > > separate keys.\n> >\n> > The HMAC key you mentioned above is not the same as the HMAC key\n> > derived from the user provided passphrase, right? That is, individual\n> > key needs to have its IV and HMAC key. Given that the HMAC key used\n> > for HMAC(IV || ENCRYPT(KEY, IV, DATA)) is the latter key (derived from\n> > passphrase), what will be the former key used for?\n>\n> It's not derived from the passphrase, it's unlocked by the passphrase (along with the master encryption key). The server will have 64-bytes of random data, saved encrypted in pg_control, which can be treated as two separate 32-byte keys, let's call them master_encryption_key and master_mac_key. 
The 64 bytes are unlocked by decrypting them with the user passphrase at startup (which itself would be split into a pair of encryption and MAC keys to do the unlocking).\n>\n> The wrap and unwrap operations would use both keys:\n>\n> wrap(plain_text, encryption_key, mac_key) {\n> // Generate random IV:\n> iv = pg_strong_random(16);\n> // Encrypt:\n> cipher_text = encrypt_aes256_cbc(encryption_key, iv, plain_text);\n> // Compute MAC on all inputs:\n> mac = hmac_sha256(mac_key, encryption_key || iv || cipher_text);\n> // Concat user facing pieces together\n> wrapped = mac || iv || cipher_text;\n> return wrapped;\n> }\n>\n> unwrap(wrapped, encryption_key, mac_key) {\n> // Split wrapped into its pieces:\n> actual_mac = wrapped.slice(0, 32);\n> iv = wrapped.slice(0 + 32, 16);\n> cipher_text = wrapped.slice(0 + 32 + 16);\n> // Compute MAC on all inputs:\n> expected_mac = hmac_sha256(mac_key, encryption_key || iv || cipher_text);\n> // Compare MAC vs value in wrapped:\n> if (expected_mac != actual_mac) { return Error(\"MAC does not match\"); }\n> // MAC matches so decrypt:\n> plain_text = decrypt_aes256_cbc(encryption_key, iv, cipher_text);\n> return plain_text;\n> }\n>\n> Every input to the encryption operation, including the encryption key, must be included in the HMAC calculation. If you use the same key for both encryption and MAC that's not required as it's already part of the MAC process as the key. Using separate keys requires explicitly adding the encryption key into the MAC input to ensure that it is the correct key prior to decryption in the unwrap operation. Any additional parts of the wrapped output (ex: a \"version\" byte for the algos or padding choices) should also be included.\n>\n> The wrap / unwrap above would be used with the encryption and mac keys derived from the user passphrase to unlock the master_encryption_key and master_mac_key from pg_control. 
Then those would be used by the higher level functions:\n>\n> pg_kmgr_wrap(plain_text) {\n> return wrap(plain_text, master_encryption_key, master_mac_key);\n> }\n>\n> pg_kmgr_unwrap(wrapped) {\n> return unwrap(wrapped, master_encryption_key, master_mac_key);\n> }\n\nThank you for explaining the details. I had missed something.\n\nAttached updated patch incorporated all comments I got so far. The changes are:\n\n* Renamed data_encryption_cipher to key_management_cipher\n* Renamed pg_kmgr_wrap and pg_kmgr_unwrap to pg_wrap_key and pg_unwrap_key\n* Changed wrap and unwrap procedure based on the comments\n* Removed the restriction of requiring the input key being a multiple\nof 16 bytes.\n* Created a context dedicated to wrap and unwrap data\n\nDocumentation and regression tests are still missing.\n\nRegarding key rotation, currently we allow online key rotation by\ndoing pg_rotate_encryption_key after changing\ncluster_passphrase_command and loading. But if the server crashed\nduring key rotation it might require the old passphrase in spite of\nthe passphrase command in postgresql.conf having been changed. We need\nto deal with it but I'm not sure the best approach. Possibly having a\nnew frontend tool that changes the key offline would be a safe\napproach.\n\nRegards,\n\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 10 Feb 2020 15:15:32 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hi,\n\nOn 2020-02-08 12:07:57 -0500, Tom Lane wrote:\n> For the same reason, I don't think that an \"internal key management\"\n> feature in the core code is ever going to be acceptable. It has to\n> be an extension. 
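The encrypt-then-MAC wrap/unwrap design in the pseudocode quoted earlier in the thread can also be expressed as a runnable sketch. An HMAC-derived keystream stands in for AES-256-CBC below (an assumption for illustration, since Python's standard library has no AES); what the sketch demonstrates is the structure: a random IV, a MAC computed over encryption key, IV, and ciphertext together, and MAC verification before any decryption is attempted.

```python
import os, hmac, hashlib

def _xor_stream(key: bytes, iv: bytes, data: bytes) -> bytes:
    # Stand-in for AES-256-CBC: an HMAC-based keystream (illustration only).
    out, counter = bytearray(), 0
    while len(out) < len(data):
        block = hmac.new(key, iv + counter.to_bytes(4, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def wrap(plain_text: bytes, enc_key: bytes, mac_key: bytes) -> bytes:
    iv = os.urandom(16)
    cipher_text = _xor_stream(enc_key, iv, plain_text)
    # The MAC covers every input, including the encryption key itself.
    mac = hmac.new(mac_key, enc_key + iv + cipher_text,
                   hashlib.sha256).digest()
    return mac + iv + cipher_text

def unwrap(wrapped: bytes, enc_key: bytes, mac_key: bytes) -> bytes:
    actual_mac, iv, cipher_text = wrapped[:32], wrapped[32:48], wrapped[48:]
    expected = hmac.new(mac_key, enc_key + iv + cipher_text,
                        hashlib.sha256).digest()
    # Constant-time comparison, and no decryption on a MAC mismatch.
    if not hmac.compare_digest(expected, actual_mac):
        raise ValueError("MAC does not match")
    return _xor_stream(enc_key, iv, cipher_text)
```

Flipping any bit of the wrapped blob makes unwrap() fail before decryption, which is the property the quoted design is after: a stolen or corrupted wrapped key cannot be silently decrypted into garbage.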
(But, as long as it's an extension, whether it's\n> bringing its own crypto or relying on some other extension for that\n> doesn't matter from the legal standpoint.)\n\nI'm not convinced by that. We have optional in-core functionality that\nrequires external libraries, and we add more cases, if necessary. Given\nthat the goal of this work is to be useful for on-disk encryption, I\ndon't see moving it into an extension being viable?\n\nI am somewhat doubtful that the, imo significant, complexity of the\nfeature is worth it, but that's imo a different discussion.\n\n\n> > Sure, I know the code is currently calling ooenssl functions. I was\n> > responding to Masahiko-san's message that we might eventually merge this\n> > openssl code into our tree.\n> \n> No. This absolutely, positively, will not happen. There will never be\n> crypto functions in our core tree, because then there'd be no recourse for\n> people who want to use Postgres in countries with restrictions on crypto\n> software. It's hard enough for them that we have such code in contrib\n> --- but at least they can remove pgcrypto and be legal. If it's in\n> src/common then they're stuck\n\nIsn't that basically a problem of the past by now? Partially due to\nchanged laws (e.g. France, which used to be a problematic case), but\nalso because it's basically futile to have import restrictions on\ncryptography by now. Just about every larger project contains\nsignificant amounts of cryptographic code and it's entirely impractical\nto operate anything interfacing with network without some form of\ntransport encryption. 
And just about all open source distribution\nmechanism have stopped separating out crypto code a long time ago.\n\nI however do agree that we should strive to not introduce cryptographic\ncode into the pg source tree - nobody here seems to have even close to\nenough experience to maintaining / writing that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Feb 2020 17:57:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Mon, Feb 10, 2020 at 05:57:47PM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2020-02-08 12:07:57 -0500, Tom Lane wrote:\n> > For the same reason, I don't think that an \"internal key management\"\n> > feature in the core code is ever going to be acceptable. It has to\n> > be an extension. (But, as long as it's an extension, whether it's\n> > bringing its own crypto or relying on some other extension for that\n> > doesn't matter from the legal standpoint.)\n> \n> I'm not convinced by that. We have optional in-core functionality that\n> requires external libraries, and we add more cases, if necessary.\n\nTake for example libreadline, without which our CLI is at best\ndysfunctional.\n\n> > > Sure, I know the code is currently calling ooenssl functions. I\n> > > was responding to Masahiko-san's message that we might\n> > > eventually merge this openssl code into our tree.\n> > \n> > No. This absolutely, positively, will not happen. There will\n> > never be crypto functions in our core tree, because then there'd\n> > be no recourse for people who want to use Postgres in countries\n> > with restrictions on crypto software. It's hard enough for them\n> > that we have such code in contrib --- but at least they can remove\n> > pgcrypto and be legal. If it's in src/common then they're stuck\n> \n> Isn't that basically a problem of the past by now?\n> \n> Partially due to changed laws (e.g. 
France, which used to be a\n> problematic case),\n\nIt's less of a problem than it once was, but it's not exactly gone yet.\nhttps://en.wikipedia.org/wiki/Restrictions_on_the_import_of_cryptography\n\n> but also because it's basically futile to have\n> import restrictions on cryptography by now. Just about every larger\n> project contains significant amounts of cryptographic code and it's\n> entirely impractical to operate anything interfacing with network\n> without some form of transport encryption. And just about all open\n> source distribution mechanism have stopped separating out crypto\n> code a long time ago.\n\nThat's true. We have access to legal counsel. Maybe it's worth asking\nthem how best to include cryptographic functionality, \"how\" being the\nquestion one asks when one wants to get a positive response.\n\n> I however do agree that we should strive to not introduce\n> cryptographic code into the pg source tree - nobody here seems to\n> have even close to enough experience to maintaining / writing that.\n\n+1 for not turning ourselves into implementers of cryptographic\nprimitives.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Tue, 11 Feb 2020 10:18:04 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Tue, 11 Feb 2020 at 10:57, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-02-08 12:07:57 -0500, Tom Lane wrote:\n> > For the same reason, I don't think that an \"internal key management\"\n> > feature in the core code is ever going to be acceptable. It has to\n> > be an extension. 
(But, as long as it's an extension, whether it's\n> > bringing its own crypto or relying on some other extension for that\n> > doesn't matter from the legal standpoint.)\n>\n> I'm not convinced by that. We have optional in-core functionality that\n> requires external libraries, and we add more cases, if necessary. Given\n> that the goal of this work is to be useful for on-disk encryption, I\n> don't see moving it into an extension being viable?\n\nAs far as I have researched, it is significantly hard to implement\ntransparent data encryption without introducing it into core. Adding a\nhook at the point where data is flushed to disk -- for encryption,\ncompression, or tracking dirty blocks -- has been proposed before, but\nit has been rejected every time.\n\n>\n> I am somewhat doubtful that the, imo significant, complexity of the\n> feature is worth it, but that's imo a different discussion.\n>\n>\n> > > Sure, I know the code is currently calling openssl functions. I was\n> > > responding to Masahiko-san's message that we might eventually merge this\n> > > openssl code into our tree.\n> >\n> > No. This absolutely, positively, will not happen. There will never be\n> > crypto functions in our core tree, because then there'd be no recourse for\n> > people who want to use Postgres in countries with restrictions on crypto\n> > software. It's hard enough for them that we have such code in contrib\n> > --- but at least they can remove pgcrypto and be legal. If it's in\n> > src/common then they're stuck\n>\n> Isn't that basically a problem of the past by now? Partially due to\n> changed laws (e.g. France, which used to be a problematic case), but\n> also because it's basically futile to have\n> import restrictions on cryptography by now. Just about every larger\n> project contains significant amounts of cryptographic code and it's\n> entirely impractical to operate anything interfacing with network\n> without some form of transport encryption. 
And just about all open source distribution\n> mechanisms have stopped separating out crypto code a long time ago.\n>\n> I however do agree that we should strive to not introduce cryptographic\n> code into the pg source tree\n\nThat doesn't cover the case where we introduce code into core that\ncalls the OpenSSL cryptographic library, though. Is that right?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 13 Feb 2020 13:00:04 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Thu, Feb 6, 2020 at 4:37 PM Cary Huang <cary.huang@highgo.ca> wrote:\n> Since the user does not need to know the master secret key used to cipher the data, I don't think we should expose the pg_kmgr_unwrap('xxxx') SQL function to the user at all.\n> The wrapped key \"xxxx\" is stored in control data and it is possible for a malicious user to obtain it and steal the key by running SELECT pg_kmgr_unwrap('xxxx').\n> Even if the user is righteous, it may not be very straightforward for that user to obtain the wrapped key \"xxxx\" to use in the unwrap function.\n\nI agree.\n\n> so instead of:\n> ------------------\n> INSERT INTO tbl VALUES (pg_encrypt('user data', pg_kmgr_unwrap('xxxxx')));\n> SELECT pg_decrypt(secret_column, pg_kmgr_unwrap('xxxxx')) FROM tbl;\n>\n> it would become:\n> ------------------\n> INSERT INTO tbl VALUES (pg_encrypt('user data', 'cluster_pass_phrase'));\n> SELECT pg_decrypt(secret_column, 'cluster_pass_phrase') FROM tbl;\n\nThe second one is certainly better than the first one, as it prevents\nthe key from being stolen. 
It's still pretty bad, though, because the\nsupposedly-secret passphrase will end up in the server log.\n\nI have a hard time believing that this feature as currently proposed\nis worth anything.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 14 Feb 2020 10:37:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Thu, Feb 6, 2020 at 9:19 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> This feature protects data from disk thefts but cannot protect data\n> from attackers who are able to access PostgreSQL server. In this\n> design application side still is responsible for managing the wrapped\n> secret in order to protect it from attackers. This is the same as when\n> we use pgcrypto now. The difference is that data is safe even if\n> attackers steal the wrapped secret and the disk. The data cannot be\n> decrypted either without the passphrase which can be stored to other\n> key management systems or without accessing postgres server. IOW for\n> example, attackers can get the data if they get the wrapped secret\n> managed by application side then run pg_kmgr_unwrap() to get the\n> secret and then steal the disk.\n\nIf you only care about protecting against the theft of the disk, you\nmight as well just encrypt the whole filesystem, which will probably\nperform better and probably be a lot harder to break since you won't\nhave short encrypted strings but instead large encrypted blocks of\ndata. Moreover, I think a lot of people who are interested in these\nkinds of features are hoping for more than just protecting against the\ntheft of the disk. 
While some people may be hoping for too much in\nthis area, setting your sights only on encryption at rest seems like a\nfairly low bar.\n\nIt also doesn't seem very likely to actually provide any security.\nYou're talking about sending the encryption key in the query string,\nwhich means that there's a good chance it's going to end up in a log\nfile someplace. One way that could happen is if the user has\nconfigured log_statement=all or log_min_duration_statement, but it\ncould also happen any time the query throws an error. In theory, you\nmight arrange for the log messages to be sent to another server that\nis protected by separate layers of security, but a lot of people are\ngoing to just log locally. And, even if you do have a separate server,\ndo you really want to have the logfile over there be full of\npasswords? I know I can be awfully negative sometimes, but this\nseems like a weakness so serious as to make this whole thing\neffectively useless.\n\nOne way to plug this hole is to use new protocol messages for key\nexchanges. For example, suppose that after authentication is complete,\nyou can send the server a new protocol message: KeyPassphrase\n<key-name> <passphrase>. The server stores the passphrase in\nbackend-private memory and returns ReadyForQuery, and does not log the\nmessage payload anywhere. Now you do this:\n\nINSERT INTO tbl VALUES (pg_encrypt('user data', 'key-name'));\nSELECT pg_decrypt(secret_column, 'key-name') FROM tbl;\n\nIf the passphrase for the named key has not been loaded into the\ncurrent session's memory, this produces an error; otherwise, it looks\nup the passphrase and uses it to do the decryption. Now the passphrase\nnever gets logged anywhere, and, also, the user can't persuade the\nserver to provide it with the encryption key, because there's no\nSQL-level function to access that data.\n\nWe could take it a step further: suppose that encryption is a column\nproperty, and the value of the property is a key name. 
If the user\nhasn't sent a KeyPassphrase message with the relevant key name,\nattempts to access that column just error out. If they have, then the\nserver just does the encryption and decryption automatically. Now the\nuser can just do:\n\nINSERT INTO tbl VALUES ('user data');\nSELECT secret_column FROM tbl;\n\nIt's a huge benefit if the SQL doesn't need to be changed. All that an\napplication needs to do in order to use encryption in this scenario is\nuse PQsetKeyPassphrase() or whatever before doing whatever else they\nwant to do.\n\nEven with these changes, the security of this whole approach can be\ncriticized on the basis that a good amount of information about the\ndata can be inferred without decrypting anything. You can tell which\nencrypted values are long and which are short. If someone builds an\nindex on the column, you can tell the order of all the encrypted\nvalues even though you may not know what the actual values are. Those\ncould well be meaningful information leaks, but I think such a system\nmight still be of use for certain purposes.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 14 Feb 2020 11:00:45 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hi \n\n\n\nI have tried the attached kms_v3 patch and have some comments:\n\n\n\n1) In the comments, I think you meant hmac + iv + encrypted data instead of iv + hmac + encrypted data? \n\n\n\n---> in kmgr_wrap_key( ):\n\n+       /*\n\n+        * Assemble the wrapped key. The order of the wrapped key is iv, hmac and\n\n+        * encrypted data.\n\n+        */\n\n\n\n\n\n2) I see that create_keywrap_ctx function in kmgr_utils.c and regular cipher context init will both call ossl_aes256_encrypt_init to initialise context for encryption and key wrapping. 
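The layout question in point (1) can be pinned down with a toy model. To be clear, the field sizes below (a 32-byte HMAC-SHA256 tag, a 16-byte IV) and the stdlib-only placeholder "ciphertext" are illustrative assumptions, not the patch's actual constants or cipher:

```python
import hashlib
import hmac
import os

HMAC_LEN = 32   # assumed: HMAC-SHA256 tag size
IV_LEN = 16     # assumed: AES-block-sized IV

def wrap(mac_key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    # Assemble as hmac || iv || ciphertext; the tag covers iv || ciphertext,
    # so tampering with either part is detected on unwrap.
    tag = hmac.new(mac_key, iv + ciphertext, hashlib.sha256).digest()
    return tag + iv + ciphertext

def unwrap(mac_key: bytes, wrapped: bytes) -> bytes:
    tag = wrapped[:HMAC_LEN]
    iv = wrapped[HMAC_LEN:HMAC_LEN + IV_LEN]
    ct = wrapped[HMAC_LEN + IV_LEN:]
    expected = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("HMAC verification failed")
    return ct  # a real unwrap would now decrypt ct with iv

mac_key = os.urandom(32)
blob = wrap(mac_key, os.urandom(IV_LEN), b"fake-ciphertext")
assert unwrap(mac_key, blob) == b"fake-ciphertext"
```

Whichever order the patch settles on, the parsing offsets in unwrap just have to match the assembly order in wrap -- the mismatch between the comment and the code is exactly what point (1) is flagging.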
In ossl_aes256_encrypt_init, the cipher method always initialises to aes-256-cbc, which is ok for keywrap because under CBC block cipher mode, we only had to supply one unique IV as initial value. But for actual WAL and buffer encryption that will come in later, I think the discussion is to use CTR block cipher mode, which requires one unique IV for each block, and the sequence id from WAL and buffer can be used to derive unique IV for each block for better security? I think it would be better to allow caller to decide which EVP_CIPHER to initialize? EVP_aes_256_cbc(), EVP_aes_256_ctr() or others?\n\n\n\n+ossl_aes256_encrypt_init(pg_cipher_ctx *ctx, uint8 *key)\n\n\n\n+{\n\n\n+       if (!EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, NULL, NULL))\n\n+               return false;\n\n+       if (!EVP_CIPHER_CTX_set_key_length(ctx, PG_AES256_KEY_LEN))\n\n+               return false;\n\n+       if (!EVP_EncryptInit_ex(ctx, NULL, NULL, key, NULL))\n\n+               return false;\n\n+\n\n+       /*\n\n+        * Always enable padding. We don't need to check the return\n\n+        * value as EVP_CIPHER_CTX_set_padding always returns 1.\n\n+        */\n\n+       EVP_CIPHER_CTX_set_padding(ctx, 1);\n\n+\n\n+       return true;\n\n+}\n\n\n\n3) Following up point 2), I think we should enhance the enum to include not only the Encryption algorithm and key size, but also the block cipher mode as well because having all 3 pieces of information can describe exactly how KMS is performing the encryption and decryption. 
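On the per-block IV question in point (2): a CTR-mode IV does not need to be random, only never reused under the same key, so deterministic derivation from existing identifiers works. A stdlib-only sketch -- the 8-byte segment / 4-byte block field widths here are made up for illustration, not what any patch actually does:

```python
import struct

def ctr_iv(segment_no: int, block_no: int) -> bytes:
    # Pack the identifiers into the high 12 bytes of a 16-byte IV and
    # leave the low 4 bytes zero for the counter that AES-CTR increments
    # while encrypting within the block. Distinct (segment_no, block_no)
    # pairs yield distinct IVs, so no counter value repeats under one key.
    return struct.pack(">QII", segment_no, block_no, 0)

ivs = {ctr_iv(seg, blk) for seg in range(8) for blk in range(8)}
assert len(ivs) == 64                      # all 64 IVs are distinct
assert all(len(iv) == 16 for iv in ivs)    # AES block size
```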
So when we call \"ossl_aes256_encrypt_init\", we can include the new enum as input parameter and it will initialise the EVP_CIPHER_CTX with either EVP_aes_256_cbc() or EVP_aes_256_ctr() for different purposes (key wrapping, or WAL encryption..etc).\n\n\n\n---> kmgr.h\n\n+/* Value of key_management_cipher */\n\n\n\n\n\n\n+enum\n\n+{\n\n+       KMGR_CIPHER_OFF = 0,\n\n+       KMGR_CIPHER_AES256\n\n+};\n\n+\n\n\n\nso it would become \n\n+enum\n\n+{\n\n+       KMGR_CIPHER_OFF = 0,\n\n+       KMGR_CIPHER_AES256_CBC = 1,\n\n+       KMGR_CIPHER_AES256_CTR = 2\n\n+};\n\n\n\nIf you agree with this change, several other places will need to be changed as well, such as \"kmgr_cipher_string\", \"kmgr_cipher_value\" and the initdb code....\n\n\n\n4) the pg_wrap_key and pg_unwrap_key SQL functions defined in kmgr.c\n\nI tried these new SQL functions and found that the pg_unwrap_key will produce the original key with 4 bytes less. This is because the result length is not set long enough to accommodate the 4 byte VARHDRSZ header used by the multi-type variable.\n\n\n\nthe len variable in SET_VARSIZE(res, len) should include also the variable header VARHDRSZ. Now it is 4 byte short so it will produce incomplete output.\n\n\n\n---> pg_unwrap_key function in kmgr.c\n\n+       if (!kmgr_unwrap_key(UnwrapCtx, (uint8 *) VARDATA_ANY(data), datalen,\n\n\n+                                                (uint8 *) VARDATA(res), &len))\n\n+               ereport(ERROR,\n\n+                               (errmsg(\"could not unwrap the given secret\")));\n\n+\n\n+       /*\n\n+        * The size of unwrapped key can be smaller than the size estimated\n\n+        * before unwrapping since the padding is removed during unwrapping.\n\n+        */\n\n+       SET_VARSIZE(res, len);\n\n+       PG_RETURN_BYTEA_P(res);\n\n\n\nI am only testing their functionalities with random key as input data. 
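The off-by-four in point (4) is plain header accounting: SET_VARSIZE records the *total* datum size, header included, so passing only the payload length silently drops the last VARHDRSZ bytes on read-back. A schematic model (the varlena header is reduced here to a bare 4-byte little-endian length, ignoring PostgreSQL's real flag bits):

```python
import struct

VARHDRSZ = 4  # size of the 4-byte varlena length header

def set_varsize(payload: bytes, total_size: int) -> bytes:
    # Model of SET_VARSIZE(res, total_size): the header stores the
    # size of header plus payload, not the payload alone.
    return struct.pack("<I", total_size) + payload

def read_varlena(datum: bytes) -> bytes:
    total, = struct.unpack_from("<I", datum)
    return datum[VARHDRSZ:total]  # payload runs from after the header to total

key = b"0123456789abcdef"
short = read_varlena(set_varsize(key, len(key)))            # SET_VARSIZE(res, len)
right = read_varlena(set_varsize(key, len(key) + VARHDRSZ)) # len + VARHDRSZ
assert short == key[:-VARHDRSZ]  # 4 bytes missing, matching the symptom
assert right == key
```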
It is currently not possible for a user to obtain the wrapped key from the server in order to use these wrap/unwrap functions. I personally don't think it is a good idea to expose these functions to user\n\n\n\nthank you\n\n\n\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca\n\n\n\n---- On Fri, 14 Feb 2020 08:00:45 -0800 Robert Haas <robertmhaas@gmail.com> wrote ----\n\n\nOn Thu, Feb 6, 2020 at 9:19 PM Masahiko Sawada \n<mailto:masahiko.sawada@2ndquadrant.com> wrote: \n> This feature protects data from disk thefts but cannot protect data \n> from attackers who are able to access PostgreSQL server. In this \n> design application side still is responsible for managing the wrapped \n> secret in order to protect it from attackers. This is the same as when \n> we use pgcrypto now. The difference is that data is safe even if \n> attackers steal the wrapped secret and the disk. The data cannot be \n> decrypted either without the passphrase which can be stored to other \n> key management systems or without accessing postgres server. IOW for \n> example, attackers can get the data if they get the wrapped secret \n> managed by application side then run pg_kmgr_unwrap() to get the \n> secret and then steal the disk. \n \nIf you only care about protecting against the theft of the disk, you \nmight as well just encrypt the whole filesystem, which will probably \nperform better and probably be a lot harder to break since you won't \nhave short encrypted strings but instead large encrypted blocks of \ndata. Moreover, I think a lot of people who are interested in these \nkinds of features are hoping for more than just protecting against the \ntheft of the disk. While some people may be hoping for too much in \nthis area, setting your sights only on encryption at rest seems like a \nfairly low bar. \n \nIt also doesn't seem very likely to actually provide any security. 
\nYou're talking about sending the encryption key in the query string, \nwhich means that there's a good chance it's going to end up in a log \nfile someplace. One way that could happen is if the user has \nconfigured log_statement=all or log_min_duration_statement, but it \ncould also happen any time the query throws an error. In theory, you \nmight arrange for the log messages to be sent to another server that \nis protected by separate layers of security, but a lot of people are \ngoing to just log locally. And, even if you do have a separate server, \ndo you really want to have the logfile over there be full of \npasswords? I know I can be awfully negative some times, but that it \nseems like a weakness so serious as to make this whole thing \neffectively useless. \n \nOne way to plug this hole is to use new protocol messages for key \nexchanges. For example, suppose that after authentication is complete, \nyou can send the server a new protocol message: KeyPassphrase \n<key-name> <passphrase>. The server stores the passphrase in \nbackend-private memory and returns ReadyForQuery, and does not log the \nmessage payload anywhere. Now you do this: \n \nINSERT INTO tbl VALUES (pg_encrypt('user data', 'key-name'); \nSELECT pg_decrypt(secret_column, 'key-name') FROM tbl; \n \nIf the passphrase for the named key has not been loaded into the \ncurrent session's memory, this produces an error; otherwise, it looks \nup the passphrase and uses it to do the decryption. Now the passphrase \nnever gets logged anywhere, and, also, the user can't persuade the \nserver to provide it with the encryption key, because there's no \nSQL-level function to access that data. \n \nWe could take it a step further: suppose that encryption is a column \nproperty, and the value of the property is a key name. If the user \nhasn't sent a KeyPassphrase message with the relevant key name, \nattempts to access that column just error out. 
If they have, then the \nserver just does the encryption and decryption automatically. Now the \nuser can just do: \n \nINSERT INTO tbl VALUES ('user data'); \nSELECT secret_column FROM tbl; \n \nIt's a huge benefit if the SQL doesn't need to be changed. All that an \napplication needs to do in order to use encryption in this scenario is \nuse PQsetKeyPassphrase() or whatever before doing whatever else they \nwant to do. \n \nEven with these changes, the security of this whole approach can be \ncriticized on the basis that a good amount of information about the \ndata can be inferred without decrypting anything. You can tell which \nencrypted values are long and which are short. If someone builds an \nindex on the column, you can tell the order of all the encrypted \nvalues even though you may not know what the actual values are. Those \ncould well be meaningful information leaks, but I think such a system \nmight still be of use for certain purposes. \n \n-- \nRobert Haas \nEnterpriseDB: http://www.enterprisedb.com \nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 18 Feb 2020 16:29:15 -0800", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sat, 15 Feb 2020 at 01:00, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Feb 6, 2020 at 9:19 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > This feature protects data from disk thefts but cannot protect data\n> > from attackers who are able to access PostgreSQL server. In this\n> > design application side still is responsible for managing the wrapped\n> > secret in order to protect it from attackers. This is the same as when\n> > we use pgcrypto now. The difference is that data is safe even if\n> > attackers steal the wrapped secret and the disk. The data cannot be\n> > decrypted either without the passphrase which can be stored to other\n> > key management systems or without accessing postgres server. 
IOW for\n> example, attackers can get the data if they get the wrapped secret\n> managed by application side then run pg_kmgr_unwrap() to get the\n> secret and then steal the disk.\n>\n> If you only care about protecting against the theft of the disk, you\n> might as well just encrypt the whole filesystem, which will probably\n> perform better and probably be a lot harder to break since you won't\n> have short encrypted strings but instead large encrypted blocks of\n> data. Moreover, I think a lot of people who are interested in these\n> kinds of features are hoping for more than just protecting against the\n> theft of the disk. While some people may be hoping for too much in\n> this area, setting your sights only on encryption at rest seems like a\n> fairly low bar.\n\nThis feature also protects data from being read directly out of the\ndatabase files. And it's also good that it's platform-independent.\n\nTo be clear, let me summarize the scenarios where we will be able to\nprotect data and where we won't. We can put the cluster key, which is\nobtained by cluster_passphrase_command, into another component in the\nsystem, ideally a KMS. The user key is wrapped and\nsaved to an application server or somewhere it can be obtained promptly.\nThe PostgreSQL server has the master key on disk, wrapped by\nthe cluster key, along with the user data encrypted by the user key.\nWhile the PostgreSQL server is running, the user can unwrap the user\nkey using pg_unwrap_key. Given that attackers stole the\ndatabase disk that includes encrypted user data and the wrapped master\nkey, what they need to complete their attack is (1) the wrapped user\nkey and access to the PostgreSQL server, (2) the cluster key and the\nwrapped user key, or (3) the master key and the wrapped user key. 
They\ncannot get user data with only one of those secrets: the cluster key,\nthe master key or the wrapped user key.\n\nIn case (1), PostgreSQL needs to be running and they need to be able\nto access a PostgreSQL server, which may require a password, to\nexecute pg_unwrap_key with the wrapped user key they stole. In case\n(2), since the wrapped user key is stored in the application server\nand will likely be accessible without special privileges, it may be\neasy for attackers to get it. In addition, however, they need to\nattack the KMS to get the cluster key. Finally, in case (3), again, they\nmay be able to steal the wrapped user key. But they also need to be\nable to log in to the OS in an unauthorized way and then illegally read\nthe PostgreSQL shared buffers.\n\nISTM none of these cases will be easy for attackers.\n\n>\n> It also doesn't seem very likely to actually provide any security.\n> You're talking about sending the encryption key in the query string,\n> which means that there's a good chance it's going to end up in a log\n> file someplace. One way that could happen is if the user has\n> configured log_statement=all or log_min_duration_statement, but it\n> could also happen any time the query throws an error. In theory, you\n> might arrange for the log messages to be sent to another server that\n> is protected by separate layers of security, but a lot of people are\n> going to just log locally. And, even if you do have a separate server,\n> do you really want to have the logfile over there be full of\n> passwords? I know I can be awfully negative sometimes, but this\n> seems like a weakness so serious as to make this whole thing\n> effectively useless.\n>\n\nSince the user key could end up in the server logs, attackers would be\nable to get user data by stealing only the database disk if the server\nlogs locally. But I personally think this is not a serious enough\nproblem to make the feature meaningless; it depends on the use case. 
Users\nwill likely have one user key per user, or one key for the whole instance.\nSo for example, in the case where the system doesn't add new users\nwhile running, the user can wrap the user key before the system starts\nserving, and therefore needs to pay attention only at that time.\nIf users can take care of that, we can accept such a restriction.\n\n> One way to plug this hole is to use new protocol messages for key\n> exchanges. For example, suppose that after authentication is complete,\n> you can send the server a new protocol message: KeyPassphrase\n> <key-name> <passphrase>. The server stores the passphrase in\n> backend-private memory and returns ReadyForQuery, and does not log the\n> message payload anywhere. Now you do this:\n>\n> INSERT INTO tbl VALUES (pg_encrypt('user data', 'key-name'));\n> SELECT pg_decrypt(secret_column, 'key-name') FROM tbl;\n>\n> If the passphrase for the named key has not been loaded into the\n> current session's memory, this produces an error; otherwise, it looks\n> up the passphrase and uses it to do the decryption. Now the passphrase\n> never gets logged anywhere, and, also, the user can't persuade the\n> server to provide it with the encryption key, because there's no\n> SQL-level function to access that data.\n>\n> We could take it a step further: suppose that encryption is a column\n> property, and the value of the property is a key name. If the user\n> hasn't sent a KeyPassphrase message with the relevant key name,\n> attempts to access that column just error out. If they have, then the\n> server just does the encryption and decryption automatically. Now the\n> user can just do:\n>\n> INSERT INTO tbl VALUES ('user data');\n> SELECT secret_column FROM tbl;\n>\n> It's a huge benefit if the SQL doesn't need to be changed. All that an\n> application needs to do in order to use encryption in this scenario is\n> use PQsetKeyPassphrase() or whatever before doing whatever else they\n> want to do.\n\nYour idea seems good. 
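The quoted KeyPassphrase flow can be modelled in a few lines: a backend-private map from key name to passphrase, consulted (but never returned) by the encrypt/decrypt entry points. This is a toy model of the proposed protocol only -- the names are made up, and the XOR keystream is a stand-in cipher so the sketch stays self-contained:

```python
import hashlib
from itertools import cycle, islice

class Session:
    def __init__(self):
        self._passphrases = {}  # backend-private; no SQL-level accessor

    def key_passphrase(self, name: str, passphrase: str) -> None:
        # Models the KeyPassphrase protocol message: store it, never log it.
        self._passphrases[name] = passphrase

    def _keystream(self, name: str, n: int) -> bytes:
        if name not in self._passphrases:
            raise LookupError(f"passphrase for key {name!r} not loaded")
        pad = hashlib.sha256(self._passphrases[name].encode()).digest()
        return bytes(islice(cycle(pad), n))

    def encrypt(self, data: bytes, name: str) -> bytes:
        return bytes(a ^ b for a, b in zip(data, self._keystream(name, len(data))))

    decrypt = encrypt  # the XOR placeholder is its own inverse

sess = Session()
try:
    sess.encrypt(b"user data", "k1")  # errors out: passphrase not loaded
    raise AssertionError("expected LookupError")
except LookupError:
    pass
sess.key_passphrase("k1", "secret passphrase")
ct = sess.encrypt(b"user data", "k1")
assert sess.decrypt(ct, "k1") == b"user data"
```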
I think the point, from a development perspective, is\nwhether it's worth having such a dedicated feature in order to provide\ntransparent encryption using pgcrypto. That is, looking at this\nfeature as a building block of transparent data-at-rest encryption,\nsuch changes might be overkill. Generally, encrypting data using\npgcrypto is not good in terms of performance. With transparent data\nencryption, PostgreSQL would be able to encrypt data with a key stored\ninside the database. As I mentioned above, if this feature can cover a\ncertain use case, it might be enough as is.\n\n>\n> Even with these changes, the security of this whole approach can be\n> criticized on the basis that a good amount of information about the\n> data can be inferred without decrypting anything. You can tell which\n> encrypted values are long and which are short. If someone builds an\n> index on the column, you can tell the order of all the encrypted\n> values even though you may not know what the actual values are. Those\n> could well be meaningful information leaks, but I think such a system\n> might still be of use for certain purposes.\n\nYeah, that's another reason why I personally hesitate to use pgcrypto\nas a transparent data encryption feature.
It's still under discussion\nthat what data needs to be encrypted by the transparent data at rest\nencryption but it would be much better than pgcrypto's one from that\nperspective.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 20 Feb 2020 00:44:27 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Wed, 19 Feb 2020 at 09:29, Cary Huang <cary.huang@highgo.ca> wrote:\n>\n> Hi\n>\n> I have tried the attached kms_v3 patch and have some comments:\n>\n> 1) In the comments, I think you meant hmac + iv + encrypted data instead of iv + hmac + encrypted data?\n>\n> ---> in kmgr_wrap_key( ):\n> + /*\n> + * Assemble the wrapped key. The order of the wrapped key is iv, hmac and\n> + * encrypted data.\n> + */\n\nRight, will fix.\n\n>\n>\n> 2) I see that create_keywrap_ctx function in kmgr_utils.c and regular cipher context init will both call ossl_aes256_encrypt_init to initialise context for encryption and key wrapping. In ossl_aes256_encrypt_init, the cipher method always initialises to aes-256-cbc, which is ok for keywrap because under CBC block cipher mode, we only had to supply one unique IV as initial value. But for actual WAL and buffer encryption that will come in later, I think the discussion is to use CTR block cipher mode, which requires one unique IV for each block, and the sequence id from WAL and buffer can be used to derive unique IV for each block for better security? I think it would be better to allow caller to decide which EVP_CIPHER to initialize? 
EVP_aes_256_cbc(), EVP_aes_256_ctr() or others?\n>\n> +ossl_aes256_encrypt_init(pg_cipher_ctx *ctx, uint8 *key)\n> +{\n> + if (!EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, NULL, NULL))\n> + return false;\n> + if (!EVP_CIPHER_CTX_set_key_length(ctx, PG_AES256_KEY_LEN))\n> + return false;\n> + if (!EVP_EncryptInit_ex(ctx, NULL, NULL, key, NULL))\n> + return false;\n> +\n> + /*\n> + * Always enable padding. We don't need to check the return\n> + * value as EVP_CIPHER_CTX_set_padding always returns 1.\n> + */\n> + EVP_CIPHER_CTX_set_padding(ctx, 1);\n> +\n> + return true;\n> +}\n\nIt seems good. We can expand it to make caller decide the block cipher\nmode of operation and key length. I removed such code from the\nprevious patch to make it simple since currently we support only\nAES-256 CBC.\n\n>\n> 3) Following up point 2), I think we should enhance the enum to include not only the Encryption algorithm and key size, but also the block cipher mode as well because having all 3 pieces of information can describe exactly how KMS is performing the encryption and decryption. 
So when we call \"ossl_aes256_encrypt_init\", we can include the new enum as input parameter and it will initialise the EVP_CIPHER_CTX with either EVP_aes_256_cbc() or EVP_aes_256_ctr() for different purposes (key wrapping, or WAL encryption..etc).\n>\n> ---> kmgr.h\n> +/* Value of key_management_cipher */\n> +enum\n> +{\n> + KMGR_CIPHER_OFF = 0,\n> + KMGR_CIPHER_AES256\n> +};\n> +\n>\n> so it would become\n> +enum\n> +{\n> + KMGR_CIPHER_OFF = 0,\n> + KMGR_CIPHER_AES256_CBC = 1,\n> + KMGR_CIPHER_AES256_CTR = 2\n> +};\n>\n> If you agree with this change, several other places will need to be changed as well, such as \"kmgr_cipher_string\", \"kmgr_cipher_value\" and the initdb code....\n\nKMGR_CIPHER_XXX is relevant with cipher mode used by KMS and KMS would\nstill use AES256 CBC even if we had TDE which would use AES256 CTR.\n\nAfter more thoughts, I think currently we can specify -e aes-256 to\ninitdb but actually this is not necessary. When\n--cluster-passphrase-command specified, we enable the internal KMS and\nalways use AES256 CBC. Something like -e option will be needed after\nsupporting TDE with several cipher options. Thoughts?\n\n>\n> 4) the pg_wrap_key and pg_unwrap_key SQL functions defined in kmgr.c\n> I tried these new SQL functions and found that the pg_unwrap_key will produce the original key with 4 bytes less. This is because the result length is not set long enough to accommodate the 4 byte VARHDRSZ header used by the multi-type variable.\n>\n> the len variable in SET_VARSIZE(res, len) should include also the variable header VARHDRSZ. 
Now it is 4 byte short so it will produce incomplete output.\n>\n> ---> pg_unwrap_key function in kmgr.c\n> + if (!kmgr_unwrap_key(UnwrapCtx, (uint8 *) VARDATA_ANY(data), datalen,\n> + (uint8 *) VARDATA(res), &len))\n> + ereport(ERROR,\n> + (errmsg(\"could not unwrap the given secret\")));\n> +\n> + /*\n> + * The size of unwrapped key can be smaller than the size estimated\n> + * before unwrapping since the padding is removed during unwrapping.\n> + */\n> + SET_VARSIZE(res, len);\n> + PG_RETURN_BYTEA_P(res);\n>\n> I am only testing their functionalities with random key as input data. It is currently not possible for a user to obtain the wrapped key from the server in order to use these wrap/unwrap functions. I personally don't think it is a good idea to expose these functions to user\n\nThank you for testing. I'm going to include regression tests and\ndocumentation in the next version patch.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 20 Feb 2020 16:16:33 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Thu, 20 Feb 2020 at 16:16, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 19 Feb 2020 at 09:29, Cary Huang <cary.huang@highgo.ca> wrote:\n> >\n> > Hi\n> >\n> > I have tried the attached kms_v3 patch and have some comments:\n> >\n> > 1) In the comments, I think you meant hmac + iv + encrypted data instead of iv + hmac + encrypted data?\n> >\n> > ---> in kmgr_wrap_key( ):\n> > + /*\n> > + * Assemble the wrapped key. 
The order of the wrapped key is iv, hmac and\n> > + * encrypted data.\n> > + */\n>\n> Right, will fix.\n>\n> >\n> >\n> > 2) I see that create_keywrap_ctx function in kmgr_utils.c and regular cipher context init will both call ossl_aes256_encrypt_init to initialise context for encryption and key wrapping. In ossl_aes256_encrypt_init, the cipher method always initialises to aes-256-cbc, which is ok for keywrap because under CBC block cipher mode, we only had to supply one unique IV as initial value. But for actual WAL and buffer encryption that will come in later, I think the discussion is to use CTR block cipher mode, which requires one unique IV for each block, and the sequence id from WAL and buffer can be used to derive unique IV for each block for better security? I think it would be better to allow caller to decide which EVP_CIPHER to initialize? EVP_aes_256_cbc(), EVP_aes_256_ctr() or others?\n> >\n> > +ossl_aes256_encrypt_init(pg_cipher_ctx *ctx, uint8 *key)\n> > +{\n> > + if (!EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, NULL, NULL))\n> > + return false;\n> > + if (!EVP_CIPHER_CTX_set_key_length(ctx, PG_AES256_KEY_LEN))\n> > + return false;\n> > + if (!EVP_EncryptInit_ex(ctx, NULL, NULL, key, NULL))\n> > + return false;\n> > +\n> > + /*\n> > + * Always enable padding. We don't need to check the return\n> > + * value as EVP_CIPHER_CTX_set_padding always returns 1.\n> > + */\n> > + EVP_CIPHER_CTX_set_padding(ctx, 1);\n> > +\n> > + return true;\n> > +}\n>\n> It seems good. We can expand it to make caller decide the block cipher\n> mode of operation and key length. 
I removed such code from the\n> previous patch to make it simple since currently we support only\n> AES-256 CBC.\n>\n> >\n> > 3) Following up point 2), I think we should enhance the enum to include not only the Encryption algorithm and key size, but also the block cipher mode as well because having all 3 pieces of information can describe exactly how KMS is performing the encryption and decryption. So when we call \"ossl_aes256_encrypt_init\", we can include the new enum as input parameter and it will initialise the EVP_CIPHER_CTX with either EVP_aes_256_cbc() or EVP_aes_256_ctr() for different purposes (key wrapping, or WAL encryption..etc).\n> >\n> > ---> kmgr.h\n> > +/* Value of key_management_cipher */\n> > +enum\n> > +{\n> > + KMGR_CIPHER_OFF = 0,\n> > + KMGR_CIPHER_AES256\n> > +};\n> > +\n> >\n> > so it would become\n> > +enum\n> > +{\n> > + KMGR_CIPHER_OFF = 0,\n> > + KMGR_CIPHER_AES256_CBC = 1,\n> > + KMGR_CIPHER_AES256_CTR = 2\n> > +};\n> >\n> > If you agree with this change, several other places will need to be changed as well, such as \"kmgr_cipher_string\", \"kmgr_cipher_value\" and the initdb code....\n>\n> KMGR_CIPHER_XXX is relevant with cipher mode used by KMS and KMS would\n> still use AES256 CBC even if we had TDE which would use AES256 CTR.\n>\n> After more thoughts, I think currently we can specify -e aes-256 to\n> initdb but actually this is not necessary. When\n> --cluster-passphrase-command specified, we enable the internal KMS and\n> always use AES256 CBC. Something like -e option will be needed after\n> supporting TDE with several cipher options. Thoughts?\n>\n> >\n> > 4) the pg_wrap_key and pg_unwrap_key SQL functions defined in kmgr.c\n> > I tried these new SQL functions and found that the pg_unwrap_key will produce the original key with 4 bytes less. 
This is because the result length is not set long enough to accommodate the 4 byte VARHDRSZ header used by the multi-type variable.\n> >\n> > the len variable in SET_VARSIZE(res, len) should include also the variable header VARHDRSZ. Now it is 4 byte short so it will produce incomplete output.\n> >\n> > ---> pg_unwrap_key function in kmgr.c\n> > + if (!kmgr_unwrap_key(UnwrapCtx, (uint8 *) VARDATA_ANY(data), datalen,\n> > + (uint8 *) VARDATA(res), &len))\n> > + ereport(ERROR,\n> > + (errmsg(\"could not unwrap the given secret\")));\n> > +\n> > + /*\n> > + * The size of unwrapped key can be smaller than the size estimated\n> > + * before unwrapping since the padding is removed during unwrapping.\n> > + */\n> > + SET_VARSIZE(res, len);\n> > + PG_RETURN_BYTEA_P(res);\n> >\n> > I am only testing their functionalities with random key as input data. It is currently not possible for a user to obtain the wrapped key from the server in order to use these wrap/unwrap functions. I personally don't think it is a good idea to expose these functions to user\n>\n> Thank you for testing. I'm going to include regression tests and\n> documentation in the next version patch.\n>\n\nAttached the updated version patch. In this version, I've removed -e\noption of initdb that was used to specify the encryption algorithm and\nkey length like aes-256. The cipher algorithm and key length used by\nKMS is fixed, aes-256, so it's no longer necessary as long as we\nsupport only KMS. When we introduce transparent data encryption and\nwe'd like to support multiple options we will have such option.\nTherefore, the internal KMS is enabled when PostgreSQL is built with\n--with-openssl and --cluster-passphrase-command is specified to\ninitdb. 
The patch includes minimal doc and regression tests.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 25 Feb 2020 10:55:09 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hi \n\nI would like to share with you a front end patch based on kms_v4.patch that you have shared, called \"kms_v4_fe.patch\". \n\n\n\nThe patch integrates front end tool pg_waldump with the KMSv4 and obtain encryption and decryption cipher contexts from the KMS backend. These cipher contexts can then be used in subsequent encryption and decryption operations provided by cipher.h when TDE is enabled. I added two common functions in your kmgr_utils that other front end tools affected by TDE can also use to obtain the cipher contexts. Do let me know if this is how you would envision KMS APIs to be used by a front end. \n\n\n\ncheers\n\n\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca\n\n\n\n\n---- On Mon, 24 Feb 2020 17:55:09 -0800 Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote ----\n\n\n\nOn Thu, 20 Feb 2020 at 16:16, Masahiko Sawada \n<mailto:masahiko.sawada@2ndquadrant.com> wrote: \n> \n> On Wed, 19 Feb 2020 at 09:29, Cary Huang <mailto:cary.huang@highgo.ca> wrote: \n> > \n> > Hi \n> > \n> > I have tried the attached kms_v3 patch and have some comments: \n> > \n> > 1) In the comments, I think you meant hmac + iv + encrypted data instead of iv + hmac + encrypted data? \n> > \n> > ---> in kmgr_wrap_key( ): \n> > + /* \n> > + * Assemble the wrapped key. The order of the wrapped key is iv, hmac and \n> > + * encrypted data. \n> > + */ \n> \n> Right, will fix. 
\n> \n> > \n> > \n> > 2) I see that create_keywrap_ctx function in kmgr_utils.c and regular cipher context init will both call ossl_aes256_encrypt_init to initialise context for encryption and key wrapping. In ossl_aes256_encrypt_init, the cipher method always initialises to aes-256-cbc, which is ok for keywrap because under CBC block cipher mode, we only had to supply one unique IV as initial value. But for actual WAL and buffer encryption that will come in later, I think the discussion is to use CTR block cipher mode, which requires one unique IV for each block, and the sequence id from WAL and buffer can be used to derive unique IV for each block for better security? I think it would be better to allow caller to decide which EVP_CIPHER to initialize? EVP_aes_256_cbc(), EVP_aes_256_ctr() or others? \n> > \n> > +ossl_aes256_encrypt_init(pg_cipher_ctx *ctx, uint8 *key) \n> > +{ \n> > + if (!EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, NULL, NULL)) \n> > + return false; \n> > + if (!EVP_CIPHER_CTX_set_key_length(ctx, PG_AES256_KEY_LEN)) \n> > + return false; \n> > + if (!EVP_EncryptInit_ex(ctx, NULL, NULL, key, NULL)) \n> > + return false; \n> > + \n> > + /* \n> > + * Always enable padding. We don't need to check the return \n> > + * value as EVP_CIPHER_CTX_set_padding always returns 1. \n> > + */ \n> > + EVP_CIPHER_CTX_set_padding(ctx, 1); \n> > + \n> > + return true; \n> > +} \n> \n> It seems good. We can expand it to make caller decide the block cipher \n> mode of operation and key length. I removed such code from the \n> previous patch to make it simple since currently we support only \n> AES-256 CBC. \n> \n> > \n> > 3) Following up point 2), I think we should enhance the enum to include not only the Encryption algorithm and key size, but also the block cipher mode as well because having all 3 pieces of information can describe exactly how KMS is performing the encryption and decryption. 
So when we call \"ossl_aes256_encrypt_init\", we can include the new enum as input parameter and it will initialise the EVP_CIPHER_CTX with either EVP_aes_256_cbc() or EVP_aes_256_ctr() for different purposes (key wrapping, or WAL encryption..etc). \n> > \n> > ---> kmgr.h \n> > +/* Value of key_management_cipher */ \n> > +enum \n> > +{ \n> > + KMGR_CIPHER_OFF = 0, \n> > + KMGR_CIPHER_AES256 \n> > +}; \n> > + \n> > \n> > so it would become \n> > +enum \n> > +{ \n> > + KMGR_CIPHER_OFF = 0, \n> > + KMGR_CIPHER_AES256_CBC = 1, \n> > + KMGR_CIPHER_AES256_CTR = 2 \n> > +}; \n> > \n> > If you agree with this change, several other places will need to be changed as well, such as \"kmgr_cipher_string\", \"kmgr_cipher_value\" and the initdb code.... \n> \n> KMGR_CIPHER_XXX is relevant with cipher mode used by KMS and KMS would \n> still use AES256 CBC even if we had TDE which would use AES256 CTR. \n> \n> After more thoughts, I think currently we can specify -e aes-256 to \n> initdb but actually this is not necessary. When \n> --cluster-passphrase-command specified, we enable the internal KMS and \n> always use AES256 CBC. Something like -e option will be needed after \n> supporting TDE with several cipher options. Thoughts? \n> \n> > \n> > 4) the pg_wrap_key and pg_unwrap_key SQL functions defined in kmgr.c \n> > I tried these new SQL functions and found that the pg_unwrap_key will produce the original key with 4 bytes less. This is because the result length is not set long enough to accommodate the 4 byte VARHDRSZ header used by the multi-type variable. \n> > \n> > the len variable in SET_VARSIZE(res, len) should include also the variable header VARHDRSZ. Now it is 4 byte short so it will produce incomplete output. 
\n> > \n> > ---> pg_unwrap_key function in kmgr.c \n> > + if (!kmgr_unwrap_key(UnwrapCtx, (uint8 *) VARDATA_ANY(data), datalen, \n> > + (uint8 *) VARDATA(res), &len)) \n> > + ereport(ERROR, \n> > + (errmsg(\"could not unwrap the given secret\"))); \n> > + \n> > + /* \n> > + * The size of unwrapped key can be smaller than the size estimated \n> > + * before unwrapping since the padding is removed during unwrapping. \n> > + */ \n> > + SET_VARSIZE(res, len); \n> > + PG_RETURN_BYTEA_P(res); \n> > \n> > I am only testing their functionalities with random key as input data. It is currently not possible for a user to obtain the wrapped key from the server in order to use these wrap/unwrap functions. I personally don't think it is a good idea to expose these functions to user \n> \n> Thank you for testing. I'm going to include regression tests and \n> documentation in the next version patch. \n> \n \nAttached the updated version patch. In this version, I've removed -e \noption of initdb that was used to specify the encryption algorithm and \nkey length like aes-256. The cipher algorithm and key length used by \nKMS is fixed, aes-256, so it's no longer necessary as long as we \nsupport only KMS. When we introduce transparent data encryption and \nwe'd like to support multiple options we will have such option. \nTherefore, the internal KMS is enabled when PostgreSQL is built with \n--with-openssl and --cluster-passphrase-command is specified to \ninitdb. The patch includes minimal doc and regression tests. 
\n \nRegards, \n \n-- \nMasahiko Sawada http://www.2ndQuadrant.com/ \nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 25 Feb 2020 12:50:18 -0800", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hi Masahiko\n\nPlease see below my comments regarding kms_v4.patch that you have shared earlier.\n\n(1)\nThere is a discrepancy between the documentation and the actual function definition. For example in func.sgml, it states pg_wrap_key takes argument in text data type but in pg_proc.dat and kmgr.c, the function is defined to take argument in bytea data type.\n\n\n\n===> doc/src/sgml/func.sgml\n\n+         <entry>\n\n+          <indexterm>\n\n+           <primary>pg_wrap_key</primary>\n\n+          </indexterm>\n\n+          <literal><function>pg_wrap_key(<parameter>data</parameter> <type>text</type>)</function></literal>\n\n+         </entry>\n\n+         <entry>\n\n+          <type>bytea</type>\n\n+         </entry>\n\n\n\n===> src/include/catalog/pg_proc.dat\n\n+{ oid => '8201', descr => 'wrap the given secret',\n\n+  proname => 'pg_wrap_key',\n\n+  provolatile => 'v', prorettype => 'bytea',\n\n+  proargtypes => 'bytea', prosrc => 'pg_wrap_key' },\n\n\n\n===> src/backend/crypto/kmgr.c\n\n+Datum\n\n+pg_wrap_key(PG_FUNCTION_ARGS)\n\n+{\n\n+       bytea      *data = PG_GETARG_BYTEA_PP(0);\n\n+       bytea      *res;\n\n+       int                     datalen;\n\n+       int                     reslen;\n\n+       int                     len;\n\n+\n\n\n\n(2)\n\nI think the documentation needs to make clear the difference between a key and a user secret. Some parts of it are trying to mix the 2 terms together when they shouldn't. To my understanding, a key is expressed as binary data that is actually used in the encryption and decryption operations. 
A user secret, on the other hand, is more like a passphrase, expressed as a string, that is used to derive an encryption key for subsequent encryption operations.\n\n\n\nIf I just look at the function names \"pg_wrap_key\" and \"pg_unwrap_key\", I immediately feel that these functions are used to encapsulate and uncover cryptographic key materials. The input and output to these 2 functions should all be key materials expressed in bytea data type. In previous email discussion, there was only one key material in discussion, called the master key (generated during initdb and stored in the cluster), and this somehow automatically makes people (including myself) assume pg_wrap_key and pg_unwrap_key are to be used on this master key, raising a bunch of security concerns around it.\n\n\n\nHaving read the documentation provided by the patch describing pg_wrap_key and pg_unwrap_key, they seem to serve another purpose. It states that pg_wrap_key is used to encrypt a user-supplied secret (text) with the master key and produce a wrapped secret, while pg_unwrap_key does the opposite, so we can prevent the user from having to enter the secret in plaintext when using pgcrypto functions. \n\n\n\nThis use case is OK, but a user secret is not really cryptographic key material, is it?
It is more similar to a secret passphrase expressed in text and pg_wrap_key is merely used to turn the passphrase into a wrapped passphrase expressed in bytea.\n\n\n\nIf I give pg_wrap_key with a real key material expressed in bytea, I will not be able to unwrap it properly:\n\n\n\nSelect pg_unwrap_key (pg_wrap_key(E'\\\\x2b073713476f9f0e761e45c64be8175424d2742e7d53df90b6416f1d84168e8a') );\n\n\n\n                pg_unwrap_key                \n\n----------------------------------------------\n\n+\\x077\\x13Go\\x0Ev\\x1EEK\\x17T$t.}SߐAo\\x1D\\x16\n\n(1 row)\n\n\n\nMaybe we should rename these SQL functions like this to prevent confusion.\n\n=> pg_wrap_secret (takes a text, returns a bytea)\n\n=> pg_unwrap_secret(takes a bytea, returns a text)\n\n\n\nif there is a use case for users to encapsulate key materials then we can define 2 more wrap functions for these, if there is no use case, don't bother:\n\n=> pg_wrap_key (takes a bytea, returns a bytea)\n\n=> pg_unwrap_key (takes a bytea, returns a bytea)\n\n\n\n(3)\n\nI would rephrase \"chapter 32: Encryption Key Management Part III. Server Administration\" documentation like this:\n\n\n\n=====================\n\nPostgreSQL supports Encryption Key Management System, which is enabled when PostgreSQL is built with --with-openssl option and cluster_passphrase_command is specified during initdb process. The user-provided cluster_passphrase_command in postgresql.conf and the cluster_passphrase_command specified during initdb process must match, otherwise, the database cluster will not start up.\n\n\n\nThe user-provided cluster passphrase is derived into a Key Encryption Key (KEK), which is used to encapsulate the Master Encryption Key generated during the initdb process. 
The encapsulated Master Encryption Key is stored inside the database cluster.\n\n\n\nEncryption Key Management System provides several functions to allow users to use the master encryption key to wrap and unwrap their own encryption secrets during encryption and decryption operations. This feature allows users to encrypt and decrypt data without knowing the actual key.\n\n=====================\n\n\n\n(4)\n\nI would rephrase \"chapter 32.2: Wrap and Unwrap user secret\" documentation like this: Note that I rephrase based on (2) and uses pg_(un)wrap_secret instead.\n\n\n\n=====================\nEncryption key management System provides several functions described in Table 9.97, to wrap and unwrap user secrets with the Master Encryption Key, which is uniquely and securely stored inside the database cluster.\n\n\n\nThese functions allow user to encrypt and decrypt user data without having to provide user encryption secret in plain text. One possible use case is to use encryption key management together with pgcrypto. User wraps the user encryption secret with pg_wrap_secret() and passes the wrapped encryption secret to the pgcrypto encryption functions. The wrapped secret can be stored in the application server or somewhere secured and should be obtained promptly for cryptographic operations with pgcrypto.\n\n[same examples follow after...]\n\n=====================\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca\n\n\n\n---- On Tue, 25 Feb 2020 12:50:18 -0800 Cary Huang <mailto:cary.huang@highgo.ca> wrote ----\n\n\nHi \n\nI would like to share with you a front end patch based on kms_v4.patch that you have shared, called \"kms_v4_fe.patch\". \n\n\n\nThe patch integrates front end tool pg_waldump with the KMSv4 and obtain encryption and decryption cipher contexts from the KMS backend. 
These cipher contexts can then be used in subsequent encryption and decryption operations provided by cipher.h when TDE is enabled. I added two common functions in your kmgr_utils that other front end tools affected by TDE can also use to obtain the cipher contexts. Do let me know if this is how you would envision KMS APIs to be used by a front end. \n\n\n\ncheers\n\n\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca\n\n\n\n\n---- On Mon, 24 Feb 2020 17:55:09 -0800 Masahiko Sawada <mailto:masahiko.sawada@2ndquadrant.com> wrote ----\n\n\n\n\n\n\n\n\n\n\n\nOn Thu, 20 Feb 2020 at 16:16, Masahiko Sawada \n<mailto:masahiko.sawada@2ndquadrant.com> wrote: \n> \n> On Wed, 19 Feb 2020 at 09:29, Cary Huang <mailto:cary.huang@highgo.ca> wrote: \n> > \n> > Hi \n> > \n> > I have tried the attached kms_v3 patch and have some comments: \n> > \n> > 1) In the comments, I think you meant hmac + iv + encrypted data instead of iv + hmac + encrypted data? \n> > \n> > ---> in kmgr_wrap_key( ): \n> > + /* \n> > + * Assemble the wrapped key. The order of the wrapped key is iv, hmac and \n> > + * encrypted data. \n> > + */ \n> \n> Right, will fix. \n> \n> > \n> > \n> > 2) I see that create_keywrap_ctx function in kmgr_utils.c and regular cipher context init will both call ossl_aes256_encrypt_init to initialise context for encryption and key wrapping. In ossl_aes256_encrypt_init, the cipher method always initialises to aes-256-cbc, which is ok for keywrap because under CBC block cipher mode, we only had to supply one unique IV as initial value. But for actual WAL and buffer encryption that will come in later, I think the discussion is to use CTR block cipher mode, which requires one unique IV for each block, and the sequence id from WAL and buffer can be used to derive unique IV for each block for better security? I think it would be better to allow caller to decide which EVP_CIPHER to initialize? 
EVP_aes_256_cbc(), EVP_aes_256_ctr() or others? \n> > \n> > +ossl_aes256_encrypt_init(pg_cipher_ctx *ctx, uint8 *key) \n> > +{ \n> > + if (!EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, NULL, NULL)) \n> > + return false; \n> > + if (!EVP_CIPHER_CTX_set_key_length(ctx, PG_AES256_KEY_LEN)) \n> > + return false; \n> > + if (!EVP_EncryptInit_ex(ctx, NULL, NULL, key, NULL)) \n> > + return false; \n> > + \n> > + /* \n> > + * Always enable padding. We don't need to check the return \n> > + * value as EVP_CIPHER_CTX_set_padding always returns 1. \n> > + */ \n> > + EVP_CIPHER_CTX_set_padding(ctx, 1); \n> > + \n> > + return true; \n> > +} \n> \n> It seems good. We can expand it to make caller decide the block cipher \n> mode of operation and key length. I removed such code from the \n> previous patch to make it simple since currently we support only \n> AES-256 CBC. \n> \n> > \n> > 3) Following up point 2), I think we should enhance the enum to include not only the Encryption algorithm and key size, but also the block cipher mode as well because having all 3 pieces of information can describe exactly how KMS is performing the encryption and decryption. So when we call \"ossl_aes256_encrypt_init\", we can include the new enum as input parameter and it will initialise the EVP_CIPHER_CTX with either EVP_aes_256_cbc() or EVP_aes_256_ctr() for different purposes (key wrapping, or WAL encryption..etc). \n> > \n> > ---> kmgr.h \n> > +/* Value of key_management_cipher */ \n> > +enum \n> > +{ \n> > + KMGR_CIPHER_OFF = 0, \n> > + KMGR_CIPHER_AES256 \n> > +}; \n> > + \n> > \n> > so it would become \n> > +enum \n> > +{ \n> > + KMGR_CIPHER_OFF = 0, \n> > + KMGR_CIPHER_AES256_CBC = 1, \n> > + KMGR_CIPHER_AES256_CTR = 2 \n> > +}; \n> > \n> > If you agree with this change, several other places will need to be changed as well, such as \"kmgr_cipher_string\", \"kmgr_cipher_value\" and the initdb code.... 
\n> \n> KMGR_CIPHER_XXX is relevant with cipher mode used by KMS and KMS would \n> still use AES256 CBC even if we had TDE which would use AES256 CTR. \n> \n> After more thoughts, I think currently we can specify -e aes-256 to \n> initdb but actually this is not necessary. When \n> --cluster-passphrase-command specified, we enable the internal KMS and \n> always use AES256 CBC. Something like -e option will be needed after \n> supporting TDE with several cipher options. Thoughts? \n> \n> > \n> > 4) the pg_wrap_key and pg_unwrap_key SQL functions defined in kmgr.c \n> > I tried these new SQL functions and found that the pg_unwrap_key will produce the original key with 4 bytes less. This is because the result length is not set long enough to accommodate the 4 byte VARHDRSZ header used by the multi-type variable. \n> > \n> > the len variable in SET_VARSIZE(res, len) should include also the variable header VARHDRSZ. Now it is 4 byte short so it will produce incomplete output. \n> > \n> > ---> pg_unwrap_key function in kmgr.c \n> > + if (!kmgr_unwrap_key(UnwrapCtx, (uint8 *) VARDATA_ANY(data), datalen, \n> > + (uint8 *) VARDATA(res), &len)) \n> > + ereport(ERROR, \n> > + (errmsg(\"could not unwrap the given secret\"))); \n> > + \n> > + /* \n> > + * The size of unwrapped key can be smaller than the size estimated \n> > + * before unwrapping since the padding is removed during unwrapping. \n> > + */ \n> > + SET_VARSIZE(res, len); \n> > + PG_RETURN_BYTEA_P(res); \n> > \n> > I am only testing their functionalities with random key as input data. It is currently not possible for a user to obtain the wrapped key from the server in order to use these wrap/unwrap functions. I personally don't think it is a good idea to expose these functions to user \n> \n> Thank you for testing. I'm going to include regression tests and \n> documentation in the next version patch. \n> \n \nAttached the updated version patch. 
In this version, I've removed -e \noption of initdb that was used to specify the encryption algorithm and \nkey length like aes-256. The cipher algorithm and key length used by \nKMS is fixed, aes-256, so it's no longer necessary as long as we \nsupport only KMS. When we introduce transparent data encryption and \nwe'd like to support multiple options we will have such option. \nTherefore, the internal KMS is enabled when PostgreSQL is built with \n--with-openssl and --cluster-passphrase-command is specified to \ninitdb. The patch includes minimal doc and regression tests. \n \nRegards, \n \n-- \nMasahiko Sawada http://www.2ndQuadrant.com/ \nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nHi Masahiko\n\nPlease see below my comments regarding kms_v4.patch that you have shared earlier.\n\n(1)\nThere is a discrepancy between the documentation and the actual function definition. For example in func.sgml, it states pg_wrap_key takes argument in text data type but in pg_proc.dat and kmgr.c, the function is defined to take argument in bytea data type.\n\n===> doc/src/sgml/func.sgml\n+         <entry>\n+          <indexterm>\n+           <primary>pg_wrap_key</primary>\n+          </indexterm>\n+          <literal><function>pg_wrap_key(<parameter>data</parameter> <type>text</type>)</function></literal>\n+         </entry>\n+         <entry>\n+          <type>bytea</type>\n+         </entry>\n\n===> src/include/catalog/pg_proc.dat\n+{ oid => '8201', descr => 'wrap the given secret',\n+  proname => 'pg_wrap_key',\n+  provolatile => 'v', prorettype => 'bytea',\n+  proargtypes => 'bytea', prosrc => 'pg_wrap_key' },\n\n===> src/backend/crypto/kmgr.c\n+Datum\n+pg_wrap_key(PG_FUNCTION_ARGS)\n+{\n+       bytea      *data = PG_GETARG_BYTEA_PP(0);\n+       bytea      *res;\n+       int                     datalen;\n+       int                     reslen;\n+       int                     len;\n+\n\n(2)\nI think the documentation needs to make clear the difference between a key and a user secret. 
Some parts of it are trying to mix the 2 terms together when they shouldn't. To my understanding, a key is expressed as binary data that is actually used in the encryption and decryption operations. A user secret, on the other hand, is more like a passphrase, expressed as string, that is used to derive a encryption key for subsequent encryption operations.\n\nIf I just look at the function names \"pg_wrap_key\" and \"pg_unwrap_key\", I immediately feel that these functions are used to encapsulate and uncover cryptographic key materials. The input and output to these 2 functions should all be key materials expressed in bytea data type. In previous email discussion, there was only one key material in discussion, called the master key (generated during initdb and stored in cluster), and this somehow automatically makes people (including myself) associate pg_wrap_key and pg_unwrap_key to be used on this master key and raise a bunch of security concerns around it.\n\nHaving read the documentation provided by the patch describing pg_wrap_key and pg_unwrap_key, they seem to serve another purpose. It states that pg_wrap_key is used to encrypt a user-supplied secret (text) with the master key and produce a wrapped secret while pg_unwrap_key does the opposite, so we can prevent user from having to enter the secret in plaintext when using pgcrypto functions. This use case is ok but user secret is not really a cryptographic key material is it? 
It is more similar to a secret passphrase expressed in text and pg_wrap_key is merely used to turn the passphrase into a wrapped passphrase expressed in bytea.\n\nIf I give pg_wrap_key with a real key material expressed in bytea, I will not be able to unwrap it properly:\n\nSelect pg_unwrap_key (pg_wrap_key(E'\\\\x2b073713476f9f0e761e45c64be8175424d2742e7d53df90b6416f1d84168e8a') );\n\n                pg_unwrap_key\n----------------------------------------------\n +\\x077\\x13Go\\x0Ev\\x1EEK\\x17T$t.}SߐAo\\x1D\\x16\n(1 row)\n\nMaybe we should rename these SQL functions like this to prevent confusion.\n=> pg_wrap_secret (takes a text, returns a bytea)\n=> pg_unwrap_secret(takes a bytea, returns a text)\n\nif there is a use case for users to encapsulate key materials then we can define 2 more wrap functions for these, if there is no use case, don't bother:\n=> pg_wrap_key (takes a bytea, returns a bytea)\n=> pg_unwrap_key (takes a bytea, returns a bytea)\n\n(3)\nI would rephrase \"chapter 32: Encryption Key Management Part III. Server Administration\" documentation like this:\n\n=====================\nPostgreSQL supports Encryption Key Management System, which is enabled when PostgreSQL is built with --with-openssl option and cluster_passphrase_command is specified during initdb process. The user-provided cluster_passphrase_command in postgresql.conf and the cluster_passphrase_command specified during initdb process must match, otherwise, the database cluster will not start up.\n\nThe user-provided cluster passphrase is derived into a Key Encryption Key (KEK), which is used to encapsulate the Master Encryption Key generated during the initdb process. The encapsulated Master Encryption Key is stored inside the database cluster.\n\nEncryption Key Management System provides several functions to allow users to use the master encryption key to wrap and unwrap their own encryption secrets during encryption and decryption operations. 
This feature allows users to encrypt and decrypt data without knowing the actual key.\n=====================\n\n(4)\nI would rephrase \"chapter 32.2: Wrap and Unwrap user secret\" documentation like this: Note that I rephrase based on (2) and uses pg_(un)wrap_secret instead.\n\n=====================\nEncryption key management System provides several functions described in Table 9.97, to wrap and unwrap user secrets with the Master Encryption Key, which is uniquely and securely stored inside the database cluster.\n\nThese functions allow user to encrypt and decrypt user data without having to provide user encryption secret in plain text. One possible use case is to use encryption key management together with pgcrypto. User wraps the user encryption secret with pg_wrap_secret() and passes the wrapped encryption secret to the pgcrypto encryption functions. The wrapped secret can be stored in the application server or somewhere secured and should be obtained promptly for cryptographic operations with pgcrypto.\n\n[same examples follow after...]\n=====================\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\ncary.huang@highgo.ca\nwww.highgo.ca\n\n---- On Tue, 25 Feb 2020 12:50:18 -0800 Cary Huang <cary.huang@highgo.ca> wrote ----\n\nHi\n\nI would like to share with you a front end patch based on kms_v4.patch that you have shared, called \"kms_v4_fe.patch\". The patch integrates front end tool pg_waldump with the KMSv4 and obtain encryption and decryption cipher contexts from the KMS backend. These cipher contexts can then be used in subsequent encryption and decryption operations provided by cipher.h when TDE is enabled. I added two common functions in your kmgr_utils that other front end tools affected by TDE can also use to obtain the cipher contexts. Do let me know if this is how you would envision KMS APIs to be used by a front end. \n\ncheers\n\nCary Huang\n-------------\nHighGo Software Inc. 
(Canada)cary.huang@highgo.cawww.highgo.ca---- On Mon, 24 Feb 2020 17:55:09 -0800 Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote ----On Thu, 20 Feb 2020 at 16:16, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote: > > On Wed, 19 Feb 2020 at 09:29, Cary Huang <cary.huang@highgo.ca> wrote: > > > > Hi > > > > I have tried the attached kms_v3 patch and have some comments: > > > > 1) In the comments, I think you meant hmac + iv + encrypted data instead of iv + hmac + encrypted data? > > > > ---> in kmgr_wrap_key( ): > > + /* > > + * Assemble the wrapped key. The order of the wrapped key is iv, hmac and > > + * encrypted data. > > + */ > > Right, will fix. > > > > > > > 2) I see that create_keywrap_ctx function in kmgr_utils.c and regular cipher context init will both call ossl_aes256_encrypt_init to initialise context for encryption and key wrapping. In ossl_aes256_encrypt_init, the cipher method always initialises to aes-256-cbc, which is ok for keywrap because under CBC block cipher mode, we only had to supply one unique IV as initial value. But for actual WAL and buffer encryption that will come in later, I think the discussion is to use CTR block cipher mode, which requires one unique IV for each block, and the sequence id from WAL and buffer can be used to derive unique IV for each block for better security? I think it would be better to allow caller to decide which EVP_CIPHER to initialize? EVP_aes_256_cbc(), EVP_aes_256_ctr() or others? > > > > +ossl_aes256_encrypt_init(pg_cipher_ctx *ctx, uint8 *key) > > +{ > > + if (!EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, NULL, NULL)) > > + return false; > > + if (!EVP_CIPHER_CTX_set_key_length(ctx, PG_AES256_KEY_LEN)) > > + return false; > > + if (!EVP_EncryptInit_ex(ctx, NULL, NULL, key, NULL)) > > + return false; > > + > > + /* > > + * Always enable padding. We don't need to check the return > > + * value as EVP_CIPHER_CTX_set_padding always returns 1. 
> > + */ > > + EVP_CIPHER_CTX_set_padding(ctx, 1); > > + > > + return true; > > +} > > It seems good. We can expand it to make caller decide the block cipher > mode of operation and key length. I removed such code from the > previous patch to make it simple since currently we support only > AES-256 CBC. > > > > > 3) Following up point 2), I think we should enhance the enum to include not only the Encryption algorithm and key size, but also the block cipher mode as well because having all 3 pieces of information can describe exactly how KMS is performing the encryption and decryption. So when we call \"ossl_aes256_encrypt_init\", we can include the new enum as input parameter and it will initialise the EVP_CIPHER_CTX with either EVP_aes_256_cbc() or EVP_aes_256_ctr() for different purposes (key wrapping, or WAL encryption..etc). > > > > ---> kmgr.h > > +/* Value of key_management_cipher */ > > +enum > > +{ > > + KMGR_CIPHER_OFF = 0, > > + KMGR_CIPHER_AES256 > > +}; > > + > > > > so it would become > > +enum > > +{ > > + KMGR_CIPHER_OFF = 0, > > + KMGR_CIPHER_AES256_CBC = 1, > > + KMGR_CIPHER_AES256_CTR = 2 > > +}; > > > > If you agree with this change, several other places will need to be changed as well, such as \"kmgr_cipher_string\", \"kmgr_cipher_value\" and the initdb code.... > > KMGR_CIPHER_XXX is relevant with cipher mode used by KMS and KMS would > still use AES256 CBC even if we had TDE which would use AES256 CTR. > > After more thoughts, I think currently we can specify -e aes-256 to > initdb but actually this is not necessary. When > --cluster-passphrase-command specified, we enable the internal KMS and > always use AES256 CBC. Something like -e option will be needed after > supporting TDE with several cipher options. Thoughts? > > > > > 4) the pg_wrap_key and pg_unwrap_key SQL functions defined in kmgr.c > > I tried these new SQL functions and found that the pg_unwrap_key will produce the original key with 4 bytes less. 
This is because the result length is not set long enough to accommodate the 4 byte VARHDRSZ header used by the multi-type variable. > > > > the len variable in SET_VARSIZE(res, len) should include also the variable header VARHDRSZ. Now it is 4 byte short so it will produce incomplete output. > > > > ---> pg_unwrap_key function in kmgr.c > > + if (!kmgr_unwrap_key(UnwrapCtx, (uint8 *) VARDATA_ANY(data), datalen, > > + (uint8 *) VARDATA(res), &len)) > > + ereport(ERROR, > > + (errmsg(\"could not unwrap the given secret\"))); > > + > > + /* > > + * The size of unwrapped key can be smaller than the size estimated > > + * before unwrapping since the padding is removed during unwrapping. > > + */ > > + SET_VARSIZE(res, len); > > + PG_RETURN_BYTEA_P(res); > > > > I am only testing their functionalities with random key as input data. It is currently not possible for a user to obtain the wrapped key from the server in order to use these wrap/unwrap functions. I personally don't think it is a good idea to expose these functions to user > > Thank you for testing. I'm going to include regression tests and > documentation in the next version patch. > Attached the updated version patch. In this version, I've removed -e option of initdb that was used to specify the encryption algorithm and key length like aes-256. The cipher algorithm and key length used by KMS is fixed, aes-256, so it's no longer necessary as long as we support only KMS. When we introduce transparent data encryption and we'd like to support multiple options we will have such option. Therefore, the internal KMS is enabled when PostgreSQL is built with --with-openssl and --cluster-passphrase-command is specified to initdb. The patch includes minimal doc and regression tests. 
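To make the cluster-passphrase handling described above concrete, here is a rough sketch of how a passphrase supplied by --cluster-passphrase-command can be derived into a key-encryption key (KEK). This is an illustration only — the KDF choice, the salt, and the iteration count are assumptions for the sketch, and the actual patch performs this step internally through OpenSSL:

```python
import hashlib

# Illustrative only: derive a 256-bit key-encryption key (KEK) from a
# cluster passphrase.  PBKDF2-HMAC-SHA256 with a demo salt and iteration
# count stands in for whatever KDF the patch really uses via OpenSSL.
def derive_kek(passphrase: str, salt: bytes, iterations: int = 100_000) -> bytes:
    return hashlib.pbkdf2_hmac(
        "sha256", passphrase.encode("utf-8"), salt, iterations, dklen=32
    )

kek = derive_kek("my cluster passphrase", b"fixed-demo-salt")
print(len(kek))  # 32 bytes == 256 bits
```

The same passphrase and salt always yield the same KEK, which is what lets the server re-derive it at startup and check it against the wrapped master key stored in the cluster.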
Regards, -- Masahiko Sawada http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 02 Mar 2020 15:48:53 -0800", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Tue, 3 Mar 2020 at 08:49, Cary Huang <cary.huang@highgo.ca> wrote:\n>\n> Hi Masahiko\n> Please see below my comments regarding kms_v4.patch that you have shared earlier.\n\nThank you for reviewing this patch!\n\n>\n> (1)\n> There is a discrepancy between the documentation and the actual function definition. For example in func.sgml, it states pg_wrap_key takes argument in text data type but in pg_proc.dat and kmgr.c, the function is defined to take argument in bytea data type.\n>\n> ===> doc/src/sgml/func.sgml\n> + <entry>\n> + <indexterm>\n> + <primary>pg_wrap_key</primary>\n> + </indexterm>\n> + <literal><function>pg_wrap_key(<parameter>data</parameter> <type>text</type>)</function></literal>\n> + </entry>\n> + <entry>\n> + <type>bytea</type>\n> + </entry>\n>\n> ===> src/include/catalog/pg_proc.dat\n> +{ oid => '8201', descr => 'wrap the given secret',\n> + proname => 'pg_wrap_key',\n> + provolatile => 'v', prorettype => 'bytea',\n> + proargtypes => 'bytea', prosrc => 'pg_wrap_key' },\n>\n> ===> src/backend/crypto/kmgr.c\n> +Datum\n> +pg_wrap_key(PG_FUNCTION_ARGS)\n> +{\n> + bytea *data = PG_GETARG_BYTEA_PP(0);\n> + bytea *res;\n> + int datalen;\n> + int reslen;\n> + int len;\n\nFixed.\n\n> +\n>\n> (2)\n> I think the documentation needs to make clear the difference between a key and a user secret. Some parts of it are trying to mix the 2 terms together when they shouldn't. To my understanding, a key is expressed as binary data that is actually used in the encryption and decryption operations. 
A user secret, on the other hand, is more like a passphrase, expressed as string, that is used to derive a encryption key for subsequent encryption operations.\n>\n> If I just look at the function names \"pg_wrap_key\" and \"pg_unwrap_key\", I immediately feel that these functions are used to encapsulate and uncover cryptographic key materials. The input and output to these 2 functions should all be key materials expressed in bytea data type. In previous email discussion, there was only one key material in discussion, called the master key (generated during initdb and stored in cluster), and this somehow automatically makes people (including myself) associate pg_wrap_key and pg_unwrap_key to be used on this master key and raise a bunch of security concerns around it.\n>\n> Having read the documentation provided by the patch describing pg_wrap_key and pg_unwrap_key, they seem to serve another purpose. It states that pg_wrap_key is used to encrypt a user-supplied secret (text) with the master key and produce a wrapped secret while pg_unwrap_key does the opposite, so we can prevent user from having to enter the secret in plaintext when using pgcrypto functions.\n>\n> This use case is ok but user secret is not really a cryptographic key material is it? 
It is more similar to a secret passphrase expressed in text and pg_wrap_key is merely used to turn the passphrase into a wrapped passphrase expressed in bytea.\n>\n> If I give pg_wrap_key with a real key material expressed in bytea, I will not be able to unwrap it properly:\n>\n> Select pg_unwrap_key (pg_wrap_key(E'\\\\x2b073713476f9f0e761e45c64be8175424d2742e7d53df90b6416f1d84168e8a') );\n>\n> pg_unwrap_key\n> ----------------------------------------------\n> +\\x077\\x13Go\\x0Ev\\x1EEK\\x17T$t.}SߐAo\\x1D\\x16\n> (1 row)\n>\n> Maybe we should rename these SQL functions like this to prevent confusion.\n> => pg_wrap_secret (takes a text, returns a bytea)\n> => pg_unwrap_secret(takes a bytea, returns a text)\n\nAgreed to change argument types. User secret will be normally text\npassword as we do with pgcrypto. So probably these functions can cover\nmost cases. I changed the function name to pg_wrap and pg_unwrap\nbecause these functions generically wrap and unwrap the given data.\n\n>\n> if there is a use case for users to encapsulate key materials then we can define 2 more wrap functions for these, if there is no use case, don't bother:\n> => pg_wrap_key (takes a bytea, returns a bytea)\n> => pg_unwrap_key (takes a bytea, returns a bytea)\n\n+1. Like pgcrypto has both pgp_sym_encrypt_bytea and pgp_sym_encrypt,\nmaybe we can have such functions.\n\n>\n> (3)\n> I would rephrase \"chapter 32: Encryption Key Management Part III. Server Administration\" documentation like this:\n>\n> =====================\n> PostgreSQL supports Encryption Key Management System, which is enabled when PostgreSQL is built with --with-openssl option and cluster_passphrase_command is specified during initdb process. 
The user-provided cluster_passphrase_command in postgresql.conf and the cluster_passphrase_command specified during initdb process must match, otherwise, the database cluster will not start up.\n>\n> The user-provided cluster passphrase is derived into a Key Encryption Key (KEK), which is used to encapsulate the Master Encryption Key generated during the initdb process. The encapsulated Master Encryption Key is stored inside the database cluster.\n>\n> Encryption Key Management System provides several functions to allow users to use the master encryption key to wrap and unwrap their own encryption secrets during encryption and decryption operations. This feature allows users to encrypt and decrypt data without knowing the actual key.\n> =====================\n>\n> (4)\n> I would rephrase \"chapter 32.2: Wrap and Unwrap user secret\" documentation like this: Note that I rephrase based on (2) and uses pg_(un)wrap_secret instead.\n>\n> =====================\n> Encryption key management System provides several functions described in Table 9.97, to wrap and unwrap user secrets with the Master Encryption Key, which is uniquely and securely stored inside the database cluster.\n>\n> These functions allow user to encrypt and decrypt user data without having to provide user encryption secret in plain text. One possible use case is to use encryption key management together with pgcrypto. User wraps the user encryption secret with pg_wrap_secret() and passes the wrapped encryption secret to the pgcrypto encryption functions. The wrapped secret can be stored in the application server or somewhere secured and should be obtained promptly for cryptographic operations with pgcrypto.\n> [same examples follow after...]\n> =====================\n\nThank you for suggesting the updated sentences. 
I've updated the docs\nbased on your suggestions.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 3 Mar 2020 17:58:11 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Dear Sawada-san\n\nI don't know if my environment or email system is weird, but the V5\npatch file is only include simply a changed list.\nand previous V4 patch file size was 64kb, but the v5 patch file size was 2kb.\nCan you check it?\n\nBest regards.\nMoon.\n\nOn Tue, Mar 3, 2020 at 5:58 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 3 Mar 2020 at 08:49, Cary Huang <cary.huang@highgo.ca> wrote:\n> >\n> > Hi Masahiko\n> > Please see below my comments regarding kms_v4.patch that you have shared earlier.\n>\n> Thank you for reviewing this patch!\n>\n> >\n> > (1)\n> > There is a discrepancy between the documentation and the actual function definition. 
For example in func.sgml, it states pg_wrap_key takes argument in text data type but in pg_proc.dat and kmgr.c, the function is defined to take argument in bytea data type.\n> >\n> > ===> doc/src/sgml/func.sgml\n> > + <entry>\n> > + <indexterm>\n> > + <primary>pg_wrap_key</primary>\n> > + </indexterm>\n> > + <literal><function>pg_wrap_key(<parameter>data</parameter> <type>text</type>)</function></literal>\n> > + </entry>\n> > + <entry>\n> > + <type>bytea</type>\n> > + </entry>\n> >\n> > ===> src/include/catalog/pg_proc.dat\n> > +{ oid => '8201', descr => 'wrap the given secret',\n> > + proname => 'pg_wrap_key',\n> > + provolatile => 'v', prorettype => 'bytea',\n> > + proargtypes => 'bytea', prosrc => 'pg_wrap_key' },\n> >\n> > ===> src/backend/crypto/kmgr.c\n> > +Datum\n> > +pg_wrap_key(PG_FUNCTION_ARGS)\n> > +{\n> > + bytea *data = PG_GETARG_BYTEA_PP(0);\n> > + bytea *res;\n> > + int datalen;\n> > + int reslen;\n> > + int len;\n>\n> Fixed.\n>\n> > +\n> >\n> > (2)\n> > I think the documentation needs to make clear the difference between a key and a user secret. Some parts of it are trying to mix the 2 terms together when they shouldn't. To my understanding, a key is expressed as binary data that is actually used in the encryption and decryption operations. A user secret, on the other hand, is more like a passphrase, expressed as string, that is used to derive a encryption key for subsequent encryption operations.\n> >\n> > If I just look at the function names \"pg_wrap_key\" and \"pg_unwrap_key\", I immediately feel that these functions are used to encapsulate and uncover cryptographic key materials. The input and output to these 2 functions should all be key materials expressed in bytea data type. 
In previous email discussion, there was only one key material in discussion, called the master key (generated during initdb and stored in cluster), and this somehow automatically makes people (including myself) associate pg_wrap_key and pg_unwrap_key to be used on this master key and raise a bunch of security concerns around it.\n> >\n> > Having read the documentation provided by the patch describing pg_wrap_key and pg_unwrap_key, they seem to serve another purpose. It states that pg_wrap_key is used to encrypt a user-supplied secret (text) with the master key and produce a wrapped secret while pg_unwrap_key does the opposite, so we can prevent user from having to enter the secret in plaintext when using pgcrypto functions.\n> >\n> > This use case is ok but user secret is not really a cryptographic key material is it? It is more similar to a secret passphrase expressed in text and pg_wrap_key is merely used to turn the passphrase into a wrapped passphrase expressed in bytea.\n> >\n> > If I give pg_wrap_key with a real key material expressed in bytea, I will not be able to unwrap it properly:\n> >\n> > Select pg_unwrap_key (pg_wrap_key(E'\\\\x2b073713476f9f0e761e45c64be8175424d2742e7d53df90b6416f1d84168e8a') );\n> >\n> > pg_unwrap_key\n> > ----------------------------------------------\n> > +\\x077\\x13Go\\x0Ev\\x1EEK\\x17T$t.}SߐAo\\x1D\\x16\n> > (1 row)\n> >\n> > Maybe we should rename these SQL functions like this to prevent confusion.\n> > => pg_wrap_secret (takes a text, returns a bytea)\n> > => pg_unwrap_secret(takes a bytea, returns a text)\n>\n> Agreed to change argument types. User secret will be normally text\n> password as we do with pgcrypto. So probably these functions can cover\n> most cases. 
I changed the function name to pg_wrap and pg_unwrap\n> because these functions generically wrap and unwrap the given data.\n>\n> >\n> > if there is a use case for users to encapsulate key materials then we can define 2 more wrap functions for these, if there is no use case, don't bother:\n> > => pg_wrap_key (takes a bytea, returns a bytea)\n> > => pg_unwrap_key (takes a bytea, returns a bytea)\n>\n> +1. Like pgcrypto has both pgp_sym_encrypt_bytea and pgp_sym_encrypt,\n> maybe we can have such functions.\n>\n> >\n> > (3)\n> > I would rephrase \"chapter 32: Encryption Key Management Part III. Server Administration\" documentation like this:\n> >\n> > =====================\n> > PostgreSQL supports Encryption Key Management System, which is enabled when PostgreSQL is built with --with-openssl option and cluster_passphrase_command is specified during initdb process. The user-provided cluster_passphrase_command in postgresql.conf and the cluster_passphrase_command specified during initdb process must match, otherwise, the database cluster will not start up.\n> >\n> > The user-provided cluster passphrase is derived into a Key Encryption Key (KEK), which is used to encapsulate the Master Encryption Key generated during the initdb process. The encapsulated Master Encryption Key is stored inside the database cluster.\n> >\n> > Encryption Key Management System provides several functions to allow users to use the master encryption key to wrap and unwrap their own encryption secrets during encryption and decryption operations. 
This feature allows users to encrypt and decrypt data without knowing the actual key.\n> > =====================\n> >\n> > (4)\n> > I would rephrase \"chapter 32.2: Wrap and Unwrap user secret\" documentation like this: Note that I rephrase based on (2) and uses pg_(un)wrap_secret instead.\n> >\n> > =====================\n> > Encryption key management System provides several functions described in Table 9.97, to wrap and unwrap user secrets with the Master Encryption Key, which is uniquely and securely stored inside the database cluster.\n> >\n> > These functions allow user to encrypt and decrypt user data without having to provide user encryption secret in plain text. One possible use case is to use encryption key management together with pgcrypto. User wraps the user encryption secret with pg_wrap_secret() and passes the wrapped encryption secret to the pgcrypto encryption functions. The wrapped secret can be stored in the application server or somewhere secured and should be obtained promptly for cryptographic operations with pgcrypto.\n> > [same examples follow after...]\n> > =====================\n>\n> Thank you for suggesting the updated sentences. I've updated the docs\n> based on your suggestions.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 6 Mar 2020 15:24:47 +0900", "msg_from": "\"Moon, Insung\" <tsukiwamoon.pgsql@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Fri, 6 Mar 2020 at 15:25, Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n>\n> Dear Sawada-san\n>\n> I don't know if my environment or email system is weird, but the V5\n> patch file is only include simply a changed list.\n> and previous V4 patch file size was 64kb, but the v5 patch file size was 2kb.\n> Can you check it?\n>\n\nThank you! 
I'd attached wrong file.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 6 Mar 2020 15:31:00 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Fri, Mar 6, 2020 at 03:31:00PM +0900, Masahiko Sawada wrote:\n> On Fri, 6 Mar 2020 at 15:25, Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n> >\n> > Dear Sawada-san\n> >\n> > I don't know if my environment or email system is weird, but the V5\n> > patch file is only include simply a changed list.\n> > and previous V4 patch file size was 64kb, but the v5 patch file size was 2kb.\n> > Can you check it?\n> >\n> \n> Thank you! I'd attached wrong file.\n\nLooking at this thread, I wanted to make a few comments:\n\nEveryone seems to think pgcrypto need some maintenance. Who would like\nto take on that task?\n\nThis feature does require openssl since all the encryption/decryption\nhappen via openssl function calls.\n\nThree are three levels of encrypt here:\n\n1. The master key generated during initdb\n\n2. The passphrase to unlock the master key at boot time. Is that\noptional or required? \n\n3. The wrap/unwrap key, which can be per-user/application/cluster\n\nIn the patch, the doc heading \"Cluster Encryption Key Rotation\" seems\nconfusing. Doesn't that change the pass phrase? Why refer to it as the\ncluster encryption key here?\n\nCould the wrap functions expose the master encryption key by passing in\nempty string or null? I wonder if we should create a derived key from\nthe master key to use for pg_wrap/pg_unwrap, maybe by appending a fixed\nstring to all strings supplied to these functions. 
We could create\nanother derived key for use in block-level encryption, so we are sure\nthe two key spaces would never overlap.\n\npgcryptokey shows a method for creating a secret between client and\nserver using SQL that does not expose the secret in the server logs:\n\n\thttps://momjian.us/download/pgcryptokey/\n\nI assume we will create a 256-bit key for the master key, but give users\nan option of 128-bit vs 256-bit keys for block-level encryption. \n256-bit keys are considered necessary for security against future\nquantum computing attacks.\n\nThis looks like a bug in the patch:\n\n- This parameter can only be set in the <filename>postgresql.conf</filename>\n+ This parameter can only be set in the <filename>postgresql.confo</filename>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Wed, 11 Mar 2020 19:13:44 -0400", "msg_from": "Bruce Momjian <bruce.momjian@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Thu, 12 Mar 2020 at 08:13, Bruce Momjian\n<bruce.momjian@enterprisedb.com> wrote:\n>\n> On Fri, Mar 6, 2020 at 03:31:00PM +0900, Masahiko Sawada wrote:\n> > On Fri, 6 Mar 2020 at 15:25, Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n> > >\n> > > Dear Sawada-san\n> > >\n> > > I don't know if my environment or email system is weird, but the V5\n> > > patch file is only include simply a changed list.\n> > > and previous V4 patch file size was 64kb, but the v5 patch file size was 2kb.\n> > > Can you check it?\n> > >\n> >\n> > Thank you! I'd attached wrong file.\n>\n> Looking at this thread, I wanted to make a few comments:\n>\n> Everyone seems to think pgcrypto need some maintenance. 
Who would like\n> to take on that task?\n>\n> This feature does require openssl since all the encryption/decryption\n> happen via openssl function calls.\n>\n> Three are three levels of encrypt here:\n>\n> 1. The master key generated during initdb\n>\n> 2. The passphrase to unlock the master key at boot time. Is that\n> optional or required?\n\nThe passphrase is required if the internal kms is enabled during\ninitdb. Currently hashing the passphrase is also required but it could\nbe optional. Even if we make hashing optional, we still require\nopenssl to wrap and unwrap.\n\n>\n> 3. The wrap/unwrap key, which can be per-user/application/cluster\n>\n> In the patch, the doc heading \"Cluster Encryption Key Rotation\" seems\n> confusing. Doesn't that change the pass phrase? Why refer to it as the\n> cluster encryption key here?\n\nAgreed, changed to \"Changing Cluster Passphrase\".\n\n>\n> Could the wrap functions expose the master encryption key by passing in\n> empty string or null?\n\nCurrently the wrap function returns NULL if null is passed, and\ndoesn't expose the master encryption key even if empty string is\npassed because we add random IV for each wrapping.\n\n> I wonder if we should create a derived key from\n> the master key to use for pg_wrap/pg_unwrap, maybe by appending a fixed\n> string to all strings supplied to these functions. 
We could create\n> another derived key for use in block-level encryption, so we are sure\n> the two key spaces would never overlap.\n\nCurrently the master key is 32 bytes but you mean to add fixed string\nto the master key to derive a new key?\n\n>\n> pgcryptokey shows a method for creating a secret between client and\n> server using SQL that does not expose the secret in the server logs:\n>\n> https://momjian.us/download/pgcryptokey/\n>\n> I assume we will create a 256-bit key for the master key, but give users\n> an option of 128-bit vs 256-bit keys for block-level encryption.\n> 256-bit keys are considered necessary for security against future\n> quantum computing attacks.\n\n256-bit keys are more weaker than 128-bit key in terms of quantum\ncomputing attacks?\n\n>\n> This looks like a bug in the patch:\n>\n> - This parameter can only be set in the <filename>postgresql.conf</filename>\n> + This parameter can only be set in the <filename>postgresql.confo</filename>\n\nFixed.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 16 Mar 2020 16:13:21 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Mon, Mar 16, 2020 at 04:13:21PM +0900, Masahiko Sawada wrote:\n> On Thu, 12 Mar 2020 at 08:13, Bruce Momjian\n> <bruce.momjian@enterprisedb.com> wrote:\n> >\n> > On Fri, Mar 6, 2020 at 03:31:00PM +0900, Masahiko Sawada wrote:\n> > > On Fri, 6 Mar 2020 at 15:25, Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n> > > >\n> > > > Dear Sawada-san\n> > > >\n> > > > I don't know if my environment or email system is weird, but the V5\n> > > > patch file is only include simply a changed list.\n> > > > and previous V4 patch file size was 64kb, but the v5 patch file size was 2kb.\n> > > > Can you check it?\n> > > >\n> > >\n> > > Thank you! 
I'd attached wrong file.\n> >\n> > Looking at this thread, I wanted to make a few comments:\n> >\n> > Everyone seems to think pgcrypto need some maintenance. Who would like\n> > to take on that task?\n> >\n> > This feature does require openssl since all the encryption/decryption\n> > happen via openssl function calls.\n> >\n> > Three are three levels of encrypt here:\n> >\n> > 1. The master key generated during initdb\n> >\n> > 2. The passphrase to unlock the master key at boot time. Is that\n> > optional or required?\n> \n> The passphrase is required if the internal kms is enabled during\n> initdb. Currently hashing the passphrase is also required but it could\n> be optional. Even if we make hashing optional, we still require\n> openssl to wrap and unwrap.\n\nI think openssl should be required for any of this --- that is what I\nwas asking.\n\n> > Could the wrap functions expose the master encryption key by passing in\n> > empty string or null?\n> \n> Currently the wrap function returns NULL if null is passed, and\n> doesn't expose the master encryption key even if empty string is\n> passed because we add random IV for each wrapping.\n\nOK, good, makes sense, but you see why I am asking? We never want the\nmaster key to be visible.\n\n> > I wonder if we should create a derived key from\n> > the master key to use for pg_wrap/pg_unwrap, maybe by appending a fixed\n> > string to all strings supplied to these functions. 
We could create\n> > another derived key for use in block-level encryption, so we are sure\n> > the two key spaces would never overlap.\n> \n> Currently the master key is 32 bytes but you mean to add fixed string\n> to the master key to derive a new key?\n\nYes, that was my idea --- make a separate keyspace for wrap/unwrap and\nblock-level encryption.\n\n> > pgcryptokey shows a method for creating a secret between client and\n> > server using SQL that does not expose the secret in the server logs:\n> >\n> > https://momjian.us/download/pgcryptokey/\n> >\n> > I assume we will create a 256-bit key for the master key, but give users\n> > an option of 128-bit vs 256-bit keys for block-level encryption.\n> > 256-bit keys are considered necessary for security against future\n> > quantum computing attacks.\n> \n> 256-bit keys are more weaker than 128-bit key in terms of quantum\n> computing attacks?\n\nNo, I was saying we are using 256-bits for the master key and might\nallow 128 or 256 keys for block encryption.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 16 Mar 2020 14:18:12 -0400", "msg_from": "Bruce Momjian <bruce.momjian@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Sending to pgsql-hackers again.\n\nOn Tue, 17 Mar 2020 at 03:18, Bruce Momjian\n<bruce.momjian@enterprisedb.com> wrote:\n>\n> On Mon, Mar 16, 2020 at 04:13:21PM +0900, Masahiko Sawada wrote:\n> > On Thu, 12 Mar 2020 at 08:13, Bruce Momjian\n> > <bruce.momjian@enterprisedb.com> wrote:\n> > >\n> > > On Fri, Mar 6, 2020 at 03:31:00PM +0900, Masahiko Sawada wrote:\n> > > > On Fri, 6 Mar 2020 at 15:25, Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n> > > > >\n> > > > > Dear Sawada-san\n> > > > >\n> > > > > I don't know if my environment or email system is weird, but the V5\n> > > > > patch file is only include simply a changed list.\n> > > > > and previous V4 patch file size was 64kb, but the v5 patch file size was 2kb.\n> > > > > Can you check it?\n> > > > >\n> > > >\n> > > > Thank you! I'd attached wrong file.\n> > >\n> > > Looking at this thread, I wanted to make a few comments:\n> > >\n> > > Everyone seems to think pgcrypto need some maintenance. Who would like\n> > > to take on that task?\n> > >\n> > > This feature does require openssl since all the encryption/decryption\n> > > happen via openssl function calls.\n> > >\n> > > Three are three levels of encrypt here:\n> > >\n> > > 1. The master key generated during initdb\n> > >\n> > > 2. The passphrase to unlock the master key at boot time. Is that\n> > > optional or required?\n> >\n> > The passphrase is required if the internal kms is enabled during\n> > initdb. Currently hashing the passphrase is also required but it could\n> > be optional. 
Even if we make hashing optional, we still require\n> > openssl to wrap and unwrap.\n>\n> I think openssl should be required for any of this --- that is what I\n> was asking.\n>\n> > > Could the wrap functions expose the master encryption key by passing in\n> > > empty string or null?\n> >\n> > Currently the wrap function returns NULL if null is passed, and\n> > doesn't expose the master encryption key even if empty string is\n> > passed because we add random IV for each wrapping.\n>\n> OK, good, makes sense, but you see why I am asking? We never want the\n> master key to be visible.\n\nUnderstood.\n\n>\n> > > I wonder if we should create a derived key from\n> > > the master key to use for pg_wrap/pg_unwrap, maybe by appending a fixed\n> > > string to all strings supplied to these functions. We could create\n> > > another derived key for use in block-level encryption, so we are sure\n> > > the two key spaces would never overlap.\n> >\n> > Currently the master key is 32 bytes but you mean to add fixed string\n> > to the master key to derive a new key?\n>\n> Yes, that was my idea --- make a separate keyspace for wrap/unwrap and\n> block-level encryption.\n\nI understand that your idea is to append a fixed-length string to the\n256-bit key in order to separate the key spaces. But if we do that, I\nthink the effective key strength would drop to that of the shorter,\nvariable part of the key, depending on how we choose the fixed string.\nI think if we want to have multiple key spaces, we need to derive keys\nfrom the master key using a KDF. 
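For illustration, here is a minimal sketch of that kind of derivation\n(RFC 5869 HKDF with SHA-256, written with only the Python standard\nlibrary). The labels and the zero master key are made-up examples; this\nshows the idea, not the patch's OpenSSL-based implementation:\n\n```python\nimport hashlib\nimport hmac\n\ndef hkdf_sha256(master_key: bytes, info: bytes, length: int = 32) -> bytes:\n    """RFC 5869 HKDF with SHA-256: extract a PRK, then expand it with a\n    per-purpose 'info' label so each purpose gets an independent key."""\n    hash_len = hashlib.sha256().digest_size\n    # Extract: PRK = HMAC-Hash(salt, IKM); an absent salt is a zero-filled one.\n    prk = hmac.new(b"\\x00" * hash_len, master_key, hashlib.sha256).digest()\n    # Expand: T(n) = HMAC-Hash(PRK, T(n-1) || info || n)\n    okm, block = b"", b""\n    n_blocks = -(-length // hash_len)  # ceiling division\n    for counter in range(1, n_blocks + 1):\n        block = hmac.new(prk, block + info + bytes([counter]),\n                         hashlib.sha256).digest()\n        okm += block\n    return okm[:length]\n\nmk = bytes(32)  # stand-in for the 256-bit master encryption key\n\n# Distinct labels carve out non-overlapping key spaces from one master key.\nuser_key = hkdf_sha256(mk, b"USER_KEY:")\nblock_key = hkdf_sha256(mk, b"TDE_KEY:")\nassert user_key != block_key and len(user_key) == len(block_key) == 32\n```\n\nSince the derived keys come out of a PRF, knowing the fixed label does\nnot reduce the strength of the master key the way appending a known\nstring to the key material would.\n\n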
How do you think we can have the fixed length\nstring?\n\n>\n> > > pgcryptokey shows a method for creating a secret between client and\n> > > server using SQL that does not expose the secret in the server logs:\n> > >\n> > > https://momjian.us/download/pgcryptokey/\n> > >\n> > > I assume we will create a 256-bit key for the master key, but give users\n> > > an option of 128-bit vs 256-bit keys for block-level encryption.\n> > > 256-bit keys are considered necessary for security against future\n> > > quantum computing attacks.\n> >\n> > 256-bit keys are more weaker than 128-bit key in terms of quantum\n> > computing attacks?\n>\n> No, I was saying we are using 256-bits for the master key and might\n> allow 128 or 256 keys for block encryption.\n\nYes, we might have 128 and 256 keys for block encryption. The current\npatch doesn't have option, supports only 256 bits key for the master\nkey.\n\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 19 Mar 2020 15:59:07 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Thu, 19 Mar 2020 at 15:59, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> Sending to pgsql-hackers again.\n>\n> On Tue, 17 Mar 2020 at 03:18, Bruce Momjian\n> <bruce.momjian@enterprisedb.com> wrote:\n> >\n> > On Mon, Mar 16, 2020 at 04:13:21PM +0900, Masahiko Sawada wrote:\n> > > On Thu, 12 Mar 2020 at 08:13, Bruce Momjian\n> > > <bruce.momjian@enterprisedb.com> wrote:\n> > > >\n> > > > On Fri, Mar 6, 2020 at 03:31:00PM +0900, Masahiko Sawada wrote:\n> > > > > On Fri, 6 Mar 2020 at 15:25, Moon, Insung <tsukiwamoon.pgsql@gmail.com> wrote:\n> > > > > >\n> > > > > > Dear Sawada-san\n> > > > > >\n> > > > > > I don't know if my environment or email system is weird, but the V5\n> > > > > > patch file is only include simply 
a changed list.\n> > > > > > and previous V4 patch file size was 64kb, but the v5 patch file size was 2kb.\n> > > > > > Can you check it?\n> > > > > >\n> > > > >\n> > > > > Thank you! I'd attached wrong file.\n> > > >\n> > > > Looking at this thread, I wanted to make a few comments:\n> > > >\n> > > > Everyone seems to think pgcrypto need some maintenance. Who would like\n> > > > to take on that task?\n> > > >\n> > > > This feature does require openssl since all the encryption/decryption\n> > > > happen via openssl function calls.\n> > > >\n> > > > Three are three levels of encrypt here:\n> > > >\n> > > > 1. The master key generated during initdb\n> > > >\n> > > > 2. The passphrase to unlock the master key at boot time. Is that\n> > > > optional or required?\n> > >\n> > > The passphrase is required if the internal kms is enabled during\n> > > initdb. Currently hashing the passphrase is also required but it could\n> > > be optional. Even if we make hashing optional, we still require\n> > > openssl to wrap and unwrap.\n> >\n> > I think openssl should be required for any of this --- that is what I\n> > was asking.\n> >\n> > > > Could the wrap functions expose the master encryption key by passing in\n> > > > empty string or null?\n> > >\n> > > Currently the wrap function returns NULL if null is passed, and\n> > > doesn't expose the master encryption key even if empty string is\n> > > passed because we add random IV for each wrapping.\n> >\n> > OK, good, makes sense, but you see why I am asking? We never want the\n> > master key to be visible.\n>\n> Understood.\n>\n> >\n> > > > I wonder if we should create a derived key from\n> > > > the master key to use for pg_wrap/pg_unwrap, maybe by appending a fixed\n> > > > string to all strings supplied to these functions. 
We could create\n> > > > another derived key for use in block-level encryption, so we are sure\n> > > > the two key spaces would never overlap.\n> > >\n> > > Currently the master key is 32 bytes but you mean to add fixed string\n> > > to the master key to derive a new key?\n> >\n> > Yes, that was my idea --- make a separate keyspace for wrap/unwrap and\n> > block-level encryption.\n>\n> I understand that your idea is to include fixed length string to the\n> 256 bit key in order to separate key space. But if we do that, I think\n> that the key strength would actually be the same as the strength of\n> weaker key length, depending on how we have the fixed string. I think\n> if we want to have multiple key spaces, we need to derive keys from the\n> master key using KDF.\n\nOr we can simply generate a different encryption key for block\nencryption. Therefore we will end up with having two encryption keys\ninside database. Maybe we can discuss this after the key manager has\nbeen introduced.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 19 Mar 2020 18:32:57 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Thu, 19 Mar 2020 at 18:32, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 19 Mar 2020 at 15:59, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > Sending to pgsql-hackers again.\n> >\n> > On Tue, 17 Mar 2020 at 03:18, Bruce Momjian\n> > <bruce.momjian@enterprisedb.com> wrote:\n> > >\n> > > On Mon, Mar 16, 2020 at 04:13:21PM +0900, Masahiko Sawada wrote:\n> > > > On Thu, 12 Mar 2020 at 08:13, Bruce Momjian\n> > > > <bruce.momjian@enterprisedb.com> wrote:\n> > > > >\n> > > > > On Fri, Mar 6, 2020 at 03:31:00PM +0900, Masahiko Sawada wrote:\n> > > > > > On Fri, 6 Mar 2020 at 15:25, Moon, Insung 
<tsukiwamoon.pgsql@gmail.com> wrote:\n> > > > > > >\n> > > > > > > Dear Sawada-san\n> > > > > > >\n> > > > > > > I don't know if my environment or email system is weird, but the V5\n> > > > > > > patch file is only include simply a changed list.\n> > > > > > > and previous V4 patch file size was 64kb, but the v5 patch file size was 2kb.\n> > > > > > > Can you check it?\n> > > > > > >\n> > > > > >\n> > > > > > Thank you! I'd attached wrong file.\n> > > > >\n> > > > > Looking at this thread, I wanted to make a few comments:\n> > > > >\n> > > > > Everyone seems to think pgcrypto need some maintenance. Who would like\n> > > > > to take on that task?\n> > > > >\n> > > > > This feature does require openssl since all the encryption/decryption\n> > > > > happen via openssl function calls.\n> > > > >\n> > > > > Three are three levels of encrypt here:\n> > > > >\n> > > > > 1. The master key generated during initdb\n> > > > >\n> > > > > 2. The passphrase to unlock the master key at boot time. Is that\n> > > > > optional or required?\n> > > >\n> > > > The passphrase is required if the internal kms is enabled during\n> > > > initdb. Currently hashing the passphrase is also required but it could\n> > > > be optional. Even if we make hashing optional, we still require\n> > > > openssl to wrap and unwrap.\n> > >\n> > > I think openssl should be required for any of this --- that is what I\n> > > was asking.\n> > >\n> > > > > Could the wrap functions expose the master encryption key by passing in\n> > > > > empty string or null?\n> > > >\n> > > > Currently the wrap function returns NULL if null is passed, and\n> > > > doesn't expose the master encryption key even if empty string is\n> > > > passed because we add random IV for each wrapping.\n> > >\n> > > OK, good, makes sense, but you see why I am asking? 
We never want the\n> > > master key to be visible.\n> >\n> > Understood.\n> >\n> > >\n> > > > > I wonder if we should create a derived key from\n> > > > > the master key to use for pg_wrap/pg_unwrap, maybe by appending a fixed\n> > > > > string to all strings supplied to these functions. We could create\n> > > > > another derived key for use in block-level encryption, so we are sure\n> > > > > the two key spaces would never overlap.\n> > > >\n> > > > Currently the master key is 32 bytes but you mean to add fixed string\n> > > > to the master key to derive a new key?\n> > >\n> > > Yes, that was my idea --- make a separate keyspace for wrap/unwrap and\n> > > block-level encryption.\n> >\n> > I understand that your idea is to include fixed length string to the\n> > 256 bit key in order to separate key space. But if we do that, I think\n> > that the key strength would actually be the same as the strength of\n> > weaker key length, depending on how we have the fixed string. I think\n> > if we want to have multiple key spaces, we need to derive keys from the\n> > master key using KDF.\n>\n> Or we can simply generate a different encryption key for block\n> encryption. Therefore we will end up with having two encryption keys\n> inside database. Maybe we can discuss this after the key manager has\n> been introduced.\n>\n\nAttached updated version patch. 
This patch incorporated the comments\nand changed pg_upgrade so that we take over the master encryption key\nfrom the old cluster to the new one if both enable key management.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 19 Mar 2020 21:33:09 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Thu, Mar 19, 2020 at 06:32:57PM +0900, Masahiko Sawada wrote:\n> On Thu, 19 Mar 2020 at 15:59, Masahiko Sawada\n> > I understand that your idea is to include fixed length string to the\n> > 256 bit key in order to separate key space. But if we do that, I think\n> > that the key strength would actually be the same as the strength of\n> > weaker key length, depending on how we have the fixed string. I think\n> > if we want to have multiple key spaces, we need to derive keys from the\n> > master key using KDF.\n> \n> Or we can simply generate a different encryption key for block\n> encryption. Therefore we will end up with having two encryption keys\n> inside database. Maybe we can discuss this after the key manager has\n> been introduced.\n\nI know Sehrope liked derived keys so let's get his feedback on this. We\nmight want to have two keys anyway for key rotation purposes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 19 Mar 2020 09:00:54 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Thu, 19 Mar 2020 at 22:00, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Mar 19, 2020 at 06:32:57PM +0900, Masahiko Sawada wrote:\n> > On Thu, 19 Mar 2020 at 15:59, Masahiko Sawada\n> > > I understand that your idea is to include fixed length string to the\n> > > 256 bit key in order to separate key space. But if we do that, I think\n> > > that the key strength would actually be the same as the strength of\n> > > weaker key length, depending on how we have the fixed string. I think\n> > > if we want to have multiple key spaces, we need to derive keys from the\n> > > master key using KDF.\n> >\n> > Or we can simply generate a different encryption key for block\n> > encryption. Therefore we will end up with having two encryption keys\n> > inside database. Maybe we can discuss this after the key manager has\n> > been introduced.\n>\n> I know Sehrope liked derived keys so let's get his feedback on this. We\n> might want to have two keys anyway for key rotation purposes.\n>\n\nAgreed. 
Maybe we can derive a key for the use of the wrap and unwrap SQL\ninterface by something like HKDF(MK, 'USER_KEY:') or HKDF(MK, 'USER_KEY:' ||\nsystem_identifier).\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 19 Mar 2020 23:42:36 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Thu, Mar 19, 2020 at 11:42:36PM +0900, Masahiko Sawada wrote:\n> On Thu, 19 Mar 2020 at 22:00, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Thu, Mar 19, 2020 at 06:32:57PM +0900, Masahiko Sawada wrote:\n> > > On Thu, 19 Mar 2020 at 15:59, Masahiko Sawada\n> > > > I understand that your idea is to include fixed length string to the\n> > > > 256 bit key in order to separate key space. But if we do that, I think\n> > > > that the key strength would actually be the same as the strength of\n> > > > weaker key length, depending on how we have the fixed string. I think\n> > > > if we want to have multiple key spaces, we need to derive keys from the\n> > > > master key using KDF.\n> > >\n> > > Or we can simply generate a different encryption key for block\n> > > encryption. Therefore we will end up with having two encryption keys\n> > > inside database. Maybe we can discuss this after the key manager has\n> > > been introduced.\n> >\n> > I know Sehrope liked derived keys so let's get his feedback on this. We\n> > might want to have two keys anyway for key rotation purposes.\n> >\n> \n> Agreed. 
Maybe we can derive a key for the use of wrap and unwrap SQL\n> interface by like HKDF(MK, 'USER_KEY:') or HKDF(KM, 'USER_KEY:' ||\n> system_identifier).\n\nWell, the issue is if the user can control the user key, there might be\na way to make the user key do nothing.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 19 Mar 2020 11:35:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Fri, Mar 20, 2020 at 0:35 Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Mar 19, 2020 at 11:42:36PM +0900, Masahiko Sawada wrote:\n> > On Thu, 19 Mar 2020 at 22:00, Bruce Momjian <bruce@momjian.us> wrote:\n> > >\n> > > On Thu, Mar 19, 2020 at 06:32:57PM +0900, Masahiko Sawada wrote:\n> > > > On Thu, 19 Mar 2020 at 15:59, Masahiko Sawada\n> > > > > I understand that your idea is to include fixed length string to\n> the\n> > > > > 256 bit key in order to separate key space. But if we do that, I\n> think\n> > > > > that the key strength would actually be the same as the strength of\n> > > > > weaker key length, depending on how we have the fixed string. I\n> think\n> > > > > if we want to have multiple key spaces, we need to derive keys\n> from the\n> > > > > master key using KDF.\n> > > >\n> > > > Or we can simply generate a different encryption key for block\n> > > > encryption. Therefore we will end up with having two encryption keys\n> > > > inside database. Maybe we can discuss this after the key manager has\n> > > > been introduced.\n> > >\n> > > I know Sehrope liked derived keys so let's get his feedback on this.\n> We\n> > > might want to have two keys anyway for key rotation purposes.\n> > >\n> >\n> > Agreed. 
Maybe we can derive a key for the use of wrap and unwrap SQL\n> > interface by like HKDF(MK, 'USER_KEY:') or HKDF(KM, 'USER_KEY:' ||\n> > system_identifier).\n>\n> Well, the issue is if the user can control the user key, there is might be\n> a way to make the user key do nothing.\n\n\nWell I meant ‘USER_KEY:’ is a fixed length string for the key used for wrap\nand unwrap SQL interface functions. So user cannot control it. We will have\nanother key derived by, for example, HKDF(MK, ‘TDE_KEY:’ ||\nsystem_identifier) for block encryption.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Fri, Mar 20, 2020 at 0:35 Bruce Momjian <bruce@momjian.us> wrote:On Thu, Mar 19, 2020 at 11:42:36PM +0900, Masahiko Sawada wrote:\n> On Thu, 19 Mar 2020 at 22:00, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Thu, Mar 19, 2020 at 06:32:57PM +0900, Masahiko Sawada wrote:\n> > > On Thu, 19 Mar 2020 at 15:59, Masahiko Sawada\n> > > > I understand that your idea is to include fixed length string to the\n> > > > 256 bit key in order to separate key space. But if we do that, I think\n> > > > that the key strength would actually be the same as the strength of\n> > > > weaker key length, depending on how we have the fixed string. I think\n> > > > if we want to have multiple key spaces, we need to derive keys from the\n> > > > master key using KDF.\n> > >\n> > > Or we can simply generate a different encryption key for block\n> > > encryption. Therefore we will end up with having two encryption keys\n> > > inside database. Maybe we can discuss this after the key manager has\n> > > been introduced.\n> >\n> > I know Sehrope liked derived keys so let's get his feedback on this.  We\n> > might want to have two keys anyway for key rotation purposes.\n> >\n> \n> Agreed. 
Maybe we can derive a key for the use of wrap and unwrap SQL\n> interface by like HKDF(MK, 'USER_KEY:') or HKDF(KM, 'USER_KEY:' ||\n> system_identifier).\n\nWell, the issue is if the user can control the user key, there is might be\na way to make the user key do nothing.Well I meant ‘USER_KEY:’ is a fixed length string for the key used for wrap and unwrap SQL interface functions. So user cannot control it. We will have another key derived by, for example, HKDF(MK, ‘TDE_KEY:’ || system_identifier) for block encryption.Regards, -- Masahiko Sawada            http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 20 Mar 2020 00:50:27 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Fri, Mar 20, 2020 at 12:50:27AM +0900, Masahiko Sawada wrote:\n> On Fri, Mar 20, 2020 at 0:35 Bruce Momjian <bruce@momjian.us> wrote:\n> Well, the issue is if the user can control the user key, there is might be\n> a way to make the user key do nothing.\n> \n> Well I meant ‘USER_KEY:’ is a fixed length string for the key used for wrap and\n> unwrap SQL interface functions. So user cannot control it. We will have another\n> key derived by, for example, HKDF(MK, ‘TDE_KEY:’ || system_identifier) for\n> block encryption.\n\nOK, yes, something liek that might make sense.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 19 Mar 2020 12:38:40 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Fri, 20 Mar 2020 at 01:38, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Fri, Mar 20, 2020 at 12:50:27AM +0900, Masahiko Sawada wrote:\n> > On Fri, Mar 20, 2020 at 0:35 Bruce Momjian <bruce@momjian.us> wrote:\n> > Well, the issue is if the user can control the user key, there is might be\n> > a way to make the user key do nothing.\n> >\n> > Well I meant ‘USER_KEY:’ is a fixed length string for the key used for wrap and\n> > unwrap SQL interface functions. So user cannot control it. We will have another\n> > key derived by, for example, HKDF(MK, ‘TDE_KEY:’ || system_identifier) for\n> > block encryption.\n>\n> OK, yes, something liek that might make sense.\n>\n\nAttached the updated version patch. The patch introduces KDF to derive\na new key from the master encryption key. We use the derived key for\npg_wrap and pg_unwrap SQL functions, instead of directly using the\nmaster encryption key. In the future, we will be able to have a\nseparate derived keys for block encryption. As a result of using KDF,\nthe minimum version of OpenSSL when enabling key management is 1.1.0.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 20 Mar 2020 15:17:47 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Thu, Mar 19, 2020 at 09:33:09PM +0900, Masahiko Sawada wrote:\n> Attached updated version patch. 
This patch incorporated the comments\n> and changed pg_upgrade so that we take over the master encryption key\n> from the old cluster to the new one if both enable key management.\n\nWe had a crypto team meeting today, and came away with a few ideas:\n\nWe should create an SQL-level master key that is different from the\nblock-level master key. By using separate keys, and not deriving them\nfrom a single key, they keys can be rotated and migrated to a different\ncluster independently. For example, users might want to create a new\ncluster with a new block-level key, but might want to copy the SQL-level\nkey from the old cluster to the new cluster. Both keys would be\nunlocked with the same passphrase.\n\nI was confused by how the wrap/unwrap work. Here is an example from\nthe proposed doc patch:\n\n\t+<programlisting>\n\t+=# SELECT pg_wrap('user sercret key');\n\t+ pg_wrap\n\t+--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\t+ \\xb2c89f76f04f95d029f179e0fc3df4ed7254127b5562a9e27d42d1cd037c942dea65ce7c0750c520fa4f4e90481c9eb7e1e42a068248c262c1a6f25c6eab64303b1154ccc9a14361223641aab4a7aabe\n\t+(1 row)\n\t+</programlisting>\n\t+\n\t+ <para>\n\t+ Once wrapping the user key, user can encrypt and decrypt user data using the\n\t+ wrapped user key togehter with the key unwrap functions:\n\t+ </para>\n\t+\n\t+<programlisting>\n\t+ =# INSERT INTO tbl\n\t+ VALUES (pgp_sym_encrypt('secret data',\n\t+ pg_unwrap('\\xb2c89f76f04f95d029f179e0fc3df4ed7254127b5562a9e27d42d1cd037c942dea65ce7c0750c520fa4f4e90481c9eb7e1e42a068248c262c1a6f25c6eab64303b1154ccc9a14361223641aab4a7aabe')));\n\t+ INSERT 1\n\t+\n\t+ =# SELECT * FROM tbl;\n\t+ col\n\t+--------------------------------------------------------------------------------------------------------------------------------------------------------------\n\t+ 
\\xc30d04070302a199ee38bea0320b75d23c01577bb3ffb315d67eecbeca3e40e869cea65efbf0b470f805549af905f94d94c447fbfb8113f585fc86b30c0bd784b10c9857322dc00d556aa8de14\n\t+(1 row)\n\t+\n\t+ =# SELECT pgp_sym_decrypt(col,\n\t+ pg_unwrap('\\xb2c89f76f04f95d029f179e0fc3df4ed7254127b5562a9e27d42d1cd037c942dea65ce7c0750c520fa4f4e90481c9eb7e1e42a068248c262c1a6f25c6eab64303b1154ccc9a14361223641aab4a7aabe')) as col\n\t+ FROM tbl;\n\t+ col\n\t+------------------\n\t+ user secret data\n\nAll pg_wrap() does is to take the user string, in this case 'user\nsercret key' and encrypt it with the SQL-level master key. It doesn't\nmix the SQL-level master key into the output, which is what I originally\nthought. This means that the pg_unwrap() call above just returns 'user\nsercret key'.\n\nHow would this be used? Users would call pg_wrap() once, and store the\nresult on the client. The client could then use the output of pg_wrap()\nin all future sessions, without exposing 'user sercret key', which is\nthe key used to encrypt user data.\n\nThe passing of the parameter to pg_wrap() has to be done in a way that\ndoesn't permanently record the parameter anywhere, like in the logs. \npgcryptokey (https://momjian.us/download/pgcryptokey/) has a method of\ndoing this. 
This is how it passes the data encryption key without\nmaking it visible in the logs, using psql:\n\n\tSELECT get_shared_key()\n \\gset\n \\set enc_access_password `echo 'my secret' | tr -d '\\n' | openssl dgst -sha256 -binary | gpg2 --symmetric --batch --cipher-algo AES128 --passphrase :'get_shared_key' | xxd -plain | tr -d '\\n'`\n SELECT set_session_access_password(:'enc_access_password');\n\nRemoving the sanity checks and user-interface niceties, it is\ninternally doing this:\n\n SELECT set_config('pgcryptokey.shared_key',\n encode(gen_random_bytes(32), 'hex'),\n FALSE) AS get_shared_key\n\t\\gset\n \\set enc_access_password `echo 'my secret' | tr -d '\\n' | openssl dgst -sha256 -binary | gpg2 --symmetric --batch --cipher-algo AES128 --passphrase :'get_shared_key' | xxd -plain | tr -d '\\n'`\n SELECT set_config('pgcryptokey.access_password',\n encode(pgp_sym_decrypt_bytea(decode(:'enc_access_password', 'hex'),\n :'get_shared_key'),\n 'hex'),\n FALSE) || NULL;\n\nIn English, what it does is: the server generates a random key, stores it\nin a server-side variable, and sends it to the client. The client\nhashes a user-supplied key and encrypts it with the random key it got\nfrom the server, and sends it to the server. The server decrypts it\nusing the key it sent (stored in a server-side variable) and stores\nthe result in another server-side variable. Perhaps this can be added\nto our docs as a way of calling pg_wrap().\n\nWhat good is this feature? Well, the user-supplied data encryption key\nlike 'user sercret key', which is used to encrypt user data, is not\nvisible in the query or the server logs. The wrapped password is\nvisible, but to use it you must be able to connect to a running server\n(to unwrap it), or have a shut-down server and know the passphrase. 
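That exchange can be sketched end to end. This is a toy model using\nonly the Python standard library, with an HMAC-SHA256 keystream\nstanding in for the gpg2 AES128 step, so it shows the message flow\nrather than the real cipher:\n\n```python\nimport hashlib\nimport hmac\nimport secrets\n\ndef stream_xor(key: bytes, data: bytes) -> bytes:\n    """Toy symmetric cipher: XOR against an HMAC-SHA256 keystream. It\n    stands in for the AES128 step; encrypt and decrypt are the same call."""\n    keystream = b""\n    counter = 0\n    while len(keystream) < len(data):\n        keystream += hmac.new(key, counter.to_bytes(4, "big"),\n                              hashlib.sha256).digest()\n        counter += 1\n    return bytes(a ^ b for a, b in zip(data, keystream))\n\n# 1. The server generates a random shared key, stores it in a session\n#    variable, and sends it to the client.\nshared_key = secrets.token_bytes(32)\n\n# 2. The client hashes its secret passphrase and encrypts the hash with\n#    the shared key, so the secret never appears in a query or the logs.\naccess_password = hashlib.sha256(b"my secret").digest()\nwire_blob = stream_xor(shared_key, access_password)\n\n# 3. The server decrypts the blob with the shared key it kept.\nrecovered = stream_xor(shared_key, wire_blob)\nassert recovered == access_password\n```\n\nThe same pattern would let a client pass a key to pg_wrap() without the\nkey ever appearing in the statement text.\n\n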
\nRead access to the file system is not sufficient since there is no\naccess to the pass phrase.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Fri, 20 Mar 2020 16:30:00 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sat, 21 Mar 2020 at 05:30, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Mar 19, 2020 at 09:33:09PM +0900, Masahiko Sawada wrote:\n> > Attached updated version patch. This patch incorporated the comments\n> > and changed pg_upgrade so that we take over the master encryption key\n> > from the old cluster to the new one if both enable key management.\n>\n> We had a crypto team meeting today, and came away with a few ideas:\n>\n> We should create an SQL-level master key that is different from the\n> block-level master key. By using separate keys, and not deriving them\n> from a single key, they keys can be rotated and migrated to a different\n> cluster independently. For example, users might want to create a new\n> cluster with a new block-level key, but might want to copy the SQL-level\n> key from the old cluster to the new cluster. Both keys would be\n> unlocked with the same passphrase.\n\nI've updated the patch according to yesterday's meeting. As in the above\ndescription by Bruce, the current patch has two encryption keys.\nPreviously we had the master key in pg_control, but due to exceeding\nthe safe size limit of pg_control I moved the two keys to a dedicated\nfile located at global/pg_key. A wrapped key is 128 bytes, and the\ntotal size including the two wrapped keys became 552 bytes while the\nsafe limit is 512 bytes.\n\nDuring pg_upgrade we copy the key file from the old cluster to the new\ncluster. 
Therefore we can unwrap the data that is wrapped on the old\ncluster on the new cluster.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 21 Mar 2020 14:12:46 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sat, Mar 21, 2020 at 02:12:46PM +0900, Masahiko Sawada wrote:\n> On Sat, 21 Mar 2020 at 05:30, Bruce Momjian <bruce@momjian.us> wrote:\n> > We should create an SQL-level master key that is different from the\n> > block-level master key. By using separate keys, and not deriving them\n> > from a single key, they keys can be rotated and migrated to a different\n> > cluster independently. For example, users might want to create a new\n> > cluster with a new block-level key, but might want to copy the SQL-level\n> > key from the old cluster to the new cluster. Both keys would be\n> > unlocked with the same passphrase.\n> \n> I've updated the patch according to yesterday's meeting. As the above\n> description by Bruce, the current patch have two encryption keys.\n> Previously we have the master key in pg_control but due to exceeding\n> the safe size limit of pg_control I moved two keys to the dedicated\n> file located at global/pg_key. A wrapped key is 128 bytes and the\n> total size including two wrapped key became 552 bytes while safe limit\n> is 512 bytes.\n> \n> During pg_upgrade we copy the key file from the old cluster to the new\n> cluster. Therefore we can unwrap the data that is wrapped on the old\n> cluster on the new cluster.\n\nI wonder if we should just use two files, one for each key.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Sat, 21 Mar 2020 10:01:02 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sat, Mar 21, 2020 at 10:01:02AM -0400, Bruce Momjian wrote:\n> On Sat, Mar 21, 2020 at 02:12:46PM +0900, Masahiko Sawada wrote:\n> > On Sat, 21 Mar 2020 at 05:30, Bruce Momjian <bruce@momjian.us> wrote:\n> > > We should create an SQL-level master key that is different from the\n> > > block-level master key. By using separate keys, and not deriving them\n> > > from a single key, they keys can be rotated and migrated to a different\n> > > cluster independently. For example, users might want to create a new\n> > > cluster with a new block-level key, but might want to copy the SQL-level\n> > > key from the old cluster to the new cluster. Both keys would be\n> > > unlocked with the same passphrase.\n> > \n> > I've updated the patch according to yesterday's meeting. As the above\n> > description by Bruce, the current patch have two encryption keys.\n> > Previously we have the master key in pg_control but due to exceeding\n> > the safe size limit of pg_control I moved two keys to the dedicated\n> > file located at global/pg_key. A wrapped key is 128 bytes and the\n> > total size including two wrapped key became 552 bytes while safe limit\n> > is 512 bytes.\n> > \n> > During pg_upgrade we copy the key file from the old cluster to the new\n> > cluster. 
Therefore we can unwrap the data that is wrapped on the old\n> > cluster on the new cluster.\n> \n> I wonder if we should just use two files, one for each key.\n\nActually, I think we need three files:\n\n* TDE WAL key file\n* TDE block key file\n* SQL-level file\n\nPrimaries and standbys have to use the same TDE WAL key file, but can\nuse different TDE block key files to allow for key rotation, so having\nseparate files makes sense --- maybe they need to be in their own\ndirectory.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Sat, 21 Mar 2020 10:50:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sat, 21 Mar 2020 at 23:50, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Sat, Mar 21, 2020 at 10:01:02AM -0400, Bruce Momjian wrote:\n> > On Sat, Mar 21, 2020 at 02:12:46PM +0900, Masahiko Sawada wrote:\n> > > On Sat, 21 Mar 2020 at 05:30, Bruce Momjian <bruce@momjian.us> wrote:\n> > > > We should create an SQL-level master key that is different from the\n> > > > block-level master key. By using separate keys, and not deriving them\n> > > > from a single key, they keys can be rotated and migrated to a different\n> > > > cluster independently. For example, users might want to create a new\n> > > > cluster with a new block-level key, but might want to copy the SQL-level\n> > > > key from the old cluster to the new cluster. Both keys would be\n> > > > unlocked with the same passphrase.\n> > >\n> > > I've updated the patch according to yesterday's meeting. 
As the above\n> > > description by Bruce, the current patch have two encryption keys.\n> > > Previously we have the master key in pg_control but due to exceeding\n> > > the safe size limit of pg_control I moved two keys to the dedicated\n> > > file located at global/pg_key. A wrapped key is 128 bytes and the\n> > > total size including two wrapped key became 552 bytes while safe limit\n> > > is 512 bytes.\n> > >\n> > > During pg_upgrade we copy the key file from the old cluster to the new\n> > > cluster. Therefore we can unwrap the data that is wrapped on the old\n> > > cluster on the new cluster.\n> >\n> > I wonder if we should just use two files, one for each key.\n>\n> Actually, I think we need three files:\n>\n> * TDE WAL key file\n> * TDE block key file\n> * SQL-level file\n>\n> Primaries and standbys have to use the same TDE WAL key file, but can\n> use different TDE block key files to allow for key rotation, so having\n> separate files makes sense --- maybe they need to be in their own\n> directory.\n\nI've considered to have separate key files once but it would make\nthings complex to update multiple files atomically. Postgres server\nwill never start if it crashes in the middle of cluster passphrase\nrotation. Can we consider to have keys related to TDE after we\nintroduce the basic key management system? 
Probably having keys in a\nseparate file rather than in pg_control file would be better but we\ndon't need these keys so far.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 23 Mar 2020 15:55:34 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Mon, Mar 23, 2020 at 03:55:34PM +0900, Masahiko Sawada wrote:\n> On Sat, 21 Mar 2020 at 23:50, Bruce Momjian <bruce@momjian.us> wrote:\n> > Actually, I think we need three files:\n> >\n> > * TDE WAL key file\n> > * TDE block key file\n> > * SQL-level file\n> >\n> > Primaries and standbys have to use the same TDE WAL key file, but can\n> > use different TDE block key files to allow for key rotation, so having\n> > separate files makes sense --- maybe they need to be in their own\n> > directory.\n> \n> I've considered to have separate key files once but it would make\n> things complex to update multiple files atomically. Postgres server\n> will never start if it crashes in the middle of cluster passphrase\n> rotation. Can we consider to have keys related to TDE after we\n> introduce the basic key management system? Probably having keys in a\n> separate file rather than in pg_control file would be better but we\n> don't need these keys so far.\n\nWell, we need to be able to upgrade this so we have to set it up now in\na way that allows that.\n\nI am not sure we have ever had a case where we needed to update multiple\nfiles atomically at the same time, without the help of WAL.\n\nPerhaps we should put the three keys in separate files in a directory\ncalled 'cryptokeys', and when we change the pass phrase, we create a new\ndirectory called 'cryptokeys.new'. Then once we have created the files\nin there with the new pass phrase, we remove cryptokeys and rename\ndirectory cryptokeys.new to cryptokeys. 
On boot, if cryptokeys exists\nand cryptokeys.new does too, remove cryptokeys.new because we crashed\nduring key rotation. If cryptokeys.new exists and cryptokeys doesn't,\nwe rename cryptokeys.new to cryptokeys because we crashed before the\nrename.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 23 Mar 2020 18:14:56 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Tue, 24 Mar 2020 at 07:15, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, Mar 23, 2020 at 03:55:34PM +0900, Masahiko Sawada wrote:\n> > On Sat, 21 Mar 2020 at 23:50, Bruce Momjian <bruce@momjian.us> wrote:\n> > > Actually, I think we need three files:\n> > >\n> > > * TDE WAL key file\n> > > * TDE block key file\n> > > * SQL-level file\n> > >\n> > > Primaries and standbys have to use the same TDE WAL key file, but can\n> > > use different TDE block key files to allow for key rotation, so having\n> > > separate files makes sense --- maybe they need to be in their own\n> > > directory.\n> >\n> > I've considered to have separate key files once but it would make\n> > things complex to update multiple files atomically. Postgres server\n> > will never start if it crashes in the middle of cluster passphrase\n> > rotation. Can we consider to have keys related to TDE after we\n> > introduce the basic key management system? 
Probably having keys in a\n> > separate file rather than in pg_control file would be better but we\n> > don't need these keys so far.\n>\n> Well, we need to be able to upgrade this so we have to set it up now in\n> a way that allows that.\n>\n> I am not sure we have ever had a case where we needed to update multiple\n> files atomically at the same time, without the help of WAL.\n>\n> Perhaps we should put the three keys in separate files in a directory\n> called 'cryptokeys', and when we change the pass phrase, we create a new\n> directory called 'cryptokeys.new'. Then once we have created the files\n> in there with the new pass phrase, we remove cryptokeys and rename\n> directory cryptokeys.new to cryptokeys. On boot, if cryptokeys exists\n> and cryptokeys.new does too, remove cryptokeys.new because we crashed\n> during key rotation, If cryptokeys.new exists and cryptokeys doesn't,\n> we rename cryptokeys.new to cryptokeys because we crashed before the\n> rename.\n\nThat seems to work fine.\n\nSo we will have pg_cryptokeys within PGDATA and each key is stored\ninto separate file named the key id such as \"sql\", \"tde-wal\" and\n\"tde-block\". I'll update the patch and post.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 24 Mar 2020 14:29:57 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Tue, Mar 24, 2020 at 02:29:57PM +0900, Masahiko Sawada wrote:\n> That seems to work fine.\n> \n> So we will have pg_cryptokeys within PGDATA and each key is stored\n> into separate file named the key id such as \"sql\", \"tde-wal\" and\n> \"tde-block\". I'll update the patch and post.\n\nYes, that makes sense to me.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. 
As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 24 Mar 2020 10:15:21 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Tue, 24 Mar 2020 at 23:15, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, Mar 24, 2020 at 02:29:57PM +0900, Masahiko Sawada wrote:\n> > That seems to work fine.\n> >\n> > So we will have pg_cryptokeys within PGDATA and each key is stored\n> > into separate file named the key id such as \"sql\", \"tde-wal\" and\n> > \"tde-block\". I'll update the patch and post.\n>\n> Yes, that makes sense to me.\n>\n\nI've attached the updated patch. With the patch, we have three\ninternal keys: the SQL key, the TDE-block key and the TDE-wal key. Only\nthe SQL key can be used so far, to wrap and unwrap user secrets via the\npg_wrap and pg_unwrap SQL functions. Each key is saved to a single file\nlocated in pg_cryptokeys. After initdb with the key manager enabled, the\npg_cryptokeys directory has the following files:\n\n$ ll data/pg_cryptokeys\ntotal 12K\n-rw------- 1 masahiko staff 132 Mar 25 15:45 0000\n-rw------- 1 masahiko staff 132 Mar 25 15:45 0001\n-rw------- 1 masahiko staff 132 Mar 25 15:45 0002\n\nI used the integer id rather than string id to make the code simple.\n\nOn cluster passphrase rotation, we update all keys atomically using a\ntemporary directory as follows:\n\n1. Derive the new passphrase\n2. Wrap all internal keys with the new passphrase\n3. Save all internal keys to the temp directory\n4. Remove the original directory, pg_cryptokeys\n5. Rename the temp directory to pg_cryptokeys\n\nIn case of failure during rotation, pg_cryptokeys and\npg_cryptokeys_tmp can be left in an incomplete state. 
We recover it by\nchecking if the temporary directory exists and the wrapped keys in the\ntemporary directory are valid.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 25 Mar 2020 17:51:08 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Wed, Mar 25, 2020 at 05:51:08PM +0900, Masahiko Sawada wrote:\n> On Tue, 24 Mar 2020 at 23:15, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Tue, Mar 24, 2020 at 02:29:57PM +0900, Masahiko Sawada wrote:\n> > > That seems to work fine.\n> > >\n> > > So we will have pg_cryptokeys within PGDATA and each key is stored\n> > > into separate file named the key id such as \"sql\", \"tde-wal\" and\n> > > \"tde-block\". I'll update the patch and post.\n> >\n> > Yes, that makes sense to me.\n> >\n> \n> I've attached the updated patch. With the patch, we have three\n> internal keys: SQL key, TDE-block key and TDE-wal key. Only SQL key\n> can be used so far to wrap and unwrap user secret via pg_wrap and\n> pg_unwrap SQL functions. Each keys is saved to the single file located\n> at pg_cryptokeys. After initdb with enabling key manager, the\n> pg_cryptokeys directory has the following files:\n> \n> $ ll data/pg_cryptokeys\n> total 12K\n> -rw------- 1 masahiko staff 132 Mar 25 15:45 0000\n> -rw------- 1 masahiko staff 132 Mar 25 15:45 0001\n> -rw------- 1 masahiko staff 132 Mar 25 15:45 0002\n> \n> I used the integer id rather than string id to make the code simple.\n\nGreat, thanks. I assume the final version will use file names.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Fri, 27 Mar 2020 17:30:55 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hi\n\nI had a look at the kms_v9 patch and have some comments\n\n\n\n--> pg_upgrade.c\n\nkeys are copied correctly, but as pg_upgrade progresses further, it will try to start the new_cluster from the \"issue_warnings_and_set_wal_level()\" function, which is called after key copy. The new cluster will fail to start due to the mismatch between cluster_passphrase_command and the newly copied keys. This causes pg_upgrade to always finish with failure. We could move \"copy_master_encryption_key()\" to be called after \"issue_warnings_and_set_wal_level()\" and this will make pg_upgrade finish with success, but the user will still have to manually correct the \"cluster_passphrase_command\" param on the new cluster in order for it to start up correctly. Should pg_upgrade also take care of copying the \"cluster_passphrase_command\" param from the old to the new cluster after it has copied the encryption keys so users don't have to do this step? If the expectation is for users to manually correct \"cluster_passphrase_command\" param after successful pg_upgrade and key copy, then there should be a message to remind the users to do so. \n\n\n\n-->Kmgr.c \n\n+\t/*\n\n+\t * If there is only temporary directory, it means that the previous\n\n+\t * rotation failed after wrapping the all internal keys by the new\n\n+\t * passphrase. 
Therefore we use the new cluster passphrase.\n\n+\t */\n\n+\tif (stat(KMGR_DIR, &st) != 0)\n\n+\t{\n\n+\t\tereport(DEBUG1,\n\n+\t\t\t\t(errmsg(\"both directories %s and %s exist, use the newly wrapped keys\",\n\n+\t\t\t\t\t\tKMGR_DIR, KMGR_TMP_DIR)));\n\n\n\nI think the error message should say \"there is only temporary directory exist\" instead of \"both directories exist\"\n\n\n\nthanks!\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca\n\n\n\n\n\n\n---- On Wed, 25 Mar 2020 01:51:08 -0700 Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote ----\n\n\n\nOn Tue, 24 Mar 2020 at 23:15, Bruce Momjian <mailto:bruce@momjian.us> wrote: \n> \n> On Tue, Mar 24, 2020 at 02:29:57PM +0900, Masahiko Sawada wrote: \n> > That seems to work fine. \n> > \n> > So we will have pg_cryptokeys within PGDATA and each key is stored \n> > into separate file named the key id such as \"sql\", \"tde-wal\" and \n> > \"tde-block\". I'll update the patch and post. \n> \n> Yes, that makes sense to me. \n> \n \nI've attached the updated patch. With the patch, we have three \ninternal keys: SQL key, TDE-block key and TDE-wal key. Only SQL key \ncan be used so far to wrap and unwrap user secret via pg_wrap and \npg_unwrap SQL functions. Each keys is saved to the single file located \nat pg_cryptokeys. After initdb with enabling key manager, the \npg_cryptokeys directory has the following files: \n \n$ ll data/pg_cryptokeys \ntotal 12K \n-rw------- 1 masahiko staff 132 Mar 25 15:45 0000 \n-rw------- 1 masahiko staff 132 Mar 25 15:45 0001 \n-rw------- 1 masahiko staff 132 Mar 25 15:45 0002 \n \nI used the integer id rather than string id to make the code simple. \n \nWhen cluster passphrase rotation, we update all keys atomically using \ntemporary directory as follows: \n \n1. Derive the new passphrase \n2. Wrap all internal keys with the new passphrase \n3. Save all internal keys to the temp directory \n4. 
Remove the original directory, pg_cryptokeys \n5. Rename the temp directory to pg_cryptokeys \n \nIn case of failure during rotation, pg_cyrptokeys and \npg_cyrptokeys_tmp can be left in an incomplete state. We recover it by \nchecking if the temporary directory exists and the wrapped keys in the \ntemporary directory are valid. \n \nRegards, \n \n-- \nMasahiko Sawada http://www.2ndQuadrant.com/ \nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 30 Mar 2020 17:36:24 -0700", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Tue, 31 Mar 2020 at 09:36, Cary Huang <cary.huang@highgo.ca> wrote:\n>\n> Hi\n> I had a look on kms_v9 patch and have some comments\n>\n> --> pg_upgrade.c\n> keys are copied correctly, but as pg_upgrade progresses further, it will try to start the new_cluster from \"issue_warnings_and_set_wal_level()\" function, which is called after key copy. The new cluster will fail to start due to the mismatch between cluster_passphrase_command and the newly copied keys. This causes pg_upgrade to always finish with failure. We could move \"copy_master_encryption_key()\" to be called after \"issue_warnings_and_set_wal_level()\" and this will make pg_upgrade to finish with success, but user will still have to manually correct the \"cluster_passphrase_command\" param on the new cluster in order for it to start up correctly. Should pg_upgrade also take care of copying \"cluster_passphrase_command\" param from old to new cluster after it has copied the encryption keys so users don't have to do this step? If the expectation is for users to manually correct \"cluster_passphrase_command\" param after successful pg_upgrade and key copy, then there should be a message to remind the users to do so.\n\nI think both the old cluster and the new cluster must be initialized\nwith the same passphrase at initdb. 
Specifying the different\npassphrase command to the new cluster at initdb and changing it after\npg_upgrade doesn't make sense. Also I don't think we need to copy\ncluster_passphrase_command same as other GUC parameters.\n\nI've changed the patch so that pg_upgrade copies the crypto keys only\nif both new and old cluster enable the key management. User must\nspecify the same passphrase command to both old and new cluster, which\nis not cumbersome, I think. I also added the description about this to\nthe doc.\n\n>\n> -->Kmgr.c\n> + /*\n> + * If there is only temporary directory, it means that the previous\n> + * rotation failed after wrapping the all internal keys by the new\n> + * passphrase. Therefore we use the new cluster passphrase.\n> + */\n> + if (stat(KMGR_DIR, &st) != 0)\n> + {\n> + ereport(DEBUG1,\n> + (errmsg(\"both directories %s and %s exist, use the newly wrapped keys\",\n> + KMGR_DIR, KMGR_TMP_DIR)));\n>\n> I think the error message should say \"there is only temporary directory exist\" instead of \"both directories exist\"\n\nYou're right. Fixed.\n\nI've attached the new version patch.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 31 Mar 2020 13:30:19 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hello\n\n\n\nThanks a lot for the patch, I think in terms of functionality, the patch provides very straightforward functionalities regarding key management. In terms of documentation, I think the patch is still lacking some pieces of information that kind of prevent people from fully understanding how KMS works and how it can be used and why, (at least that is the impression I got from the zoom meeting recordings :p). 
I spent some time today revisiting the key-management documentation in the patch and rephrased and restructured it based on my current understanding of the latest KMS design. I mentioned all 3 application level keys that we have agreed on, and emphasized explaining the SQL level encryption key because that is the key that can be used right now. For the block and WAL level keys, we can add more information here once they are actually used in the TDE development. \n\n\n\nPlease see below the KMS documentation that I have revised, and I hope it will be clearer and easier for people to understand KMS. Feel free to make adjustments. Please note that we use the terms \"wrap\" and \"unwrap\" a lot in our past discussions. Originally we used the terms within a context involving Key encryption keys (KEK). For example, \"KMS wraps a master key with KEK\". Later, we used the same terms in a context involving encrypting a user secret / password. For example, \"KMS wraps a user secret with SQL key\". In my opinion, both make sense but it may be confusing to people having the same term used differently. So in my revision below, the terms \"wrap\" and \"unwrap\" refer to encrypting or decrypting a user secret / password as they are used in \"pg_wrap() and pg_unwrap()\". I use the terms \"encapsulate\" and \"restore\" when KEK is used to encrypt or decrypt a key.\n\n\n\n\n\n\n\nChapter 32: Encryption Key Management \n\n----------------------------------------------\n\n\nPostgreSQL supports an internal Encryption Key Management System, which is designed to manage the life cycles of cryptographic keys within the PostgreSQL system. This includes dealing with their generation, storage, usage and rotation.\n\n\n\nEncryption Key Management is enabled when PostgreSQL is built with --with-openssl and a cluster passphrase command is specified during initdb. 
The cluster passphrase provided by the --cluster-passphrase-command option during initdb and the one generated by cluster_passphrase_command in postgresql.conf must match; otherwise, the database cluster will not start up.\n\n\n\n32.1. Key Generation and Derivation\n\n------------------------------------------\n\n\n\nWhen the cluster_passphrase_command option is specified to initdb, the process will derive the cluster passphrase into a Key Encryption Key (KEK) and a HMAC Key using a key derivation protocol before the actual generation of the application level cryptographic keys.\n\n\n\n-Key Encryption Key (KEK)\n\nKEK is primarily used to encapsulate or restore a given application level cryptographic key\n\n\n\n-HMAC Key\n\nHMAC key is used to compute the hash of a given application level cryptographic key for integrity check purposes\n\n\n\nThese 2 keys are not stored physically within the PostgreSQL cluster as they are designed to be derived from the correctly configured cluster passphrase.\n\n\n\nThe Encryption Key Management System currently manages 3 application level cryptographic keys that have different purposes and usages within the PostgreSQL system, and these are generated using pg_strong_random() after the KEK and HMAC key derivation during the initdb process.\n\n\n\nThe 3 keys are:\n\n\n\n-SQL Level Key\n\nThe SQL Level Key is used to wrap and unwrap a user secret / passphrase via the pg_wrap() and pg_unwrap() SQL functions. These 2 functions are designed to be used in conjunction with the cryptographic functions provided by the pgcrypto extension to perform column level encryption/decryption without having to supply a clear text user secret or passphrase that is required by many pgcrypto functions as input. 
Please refer to the [Wrap and Unwrap User Secret] section for usage examples.

-Block Level Key

The Block Level Key is primarily used to encrypt / decrypt buffers as part of the Transparent Data Encryption (TDE) feature.

-WAL Level Key

The WAL Level Key is primarily used to encrypt / decrypt WAL files as part of the Transparent Data Encryption (TDE) feature.

The 3 application level keys above will be encapsulated and hashed using the KEK and HMAC key mentioned above before they are physically stored in the pg_cryptokeys directory within the cluster.

32.2. Key Initialization
-------------------------

When a PostgreSQL cluster with encryption key management enabled is started, the cluster_passphrase_command parameter in postgresql.conf will be evaluated and the cluster passphrase will be derived into the KEK and HMAC key in the same way as during initdb.

After that, the 3 encapsulated application level cryptographic keys will be retrieved from the pg_cryptokeys directory to be restored and integrity-checked by the key management system using the derived KEK and HMAC key. If this process fails, it is likely that the cluster passphrase supplied to the cluster is not the same as the one supplied to the initdb process. The cluster will refuse to start in this case and the user has to manually correct the cluster passphrase.

32.3. Wrap and Unwrap User Secret
----------------------------------------

The encryption key management system provides the pg_wrap() and pg_unwrap() SQL functions (listed in Table 9.97) to perform wrap and unwrap operations on a user secret with the SQL level encryption key. The SQL level encryption key is one of the 3 application level keys generated during the initdb process when a cluster passphrase is supplied.

When the pg_wrap() and pg_unwrap() functions are invoked, the SQL level encryption key will internally be used to perform the encryption and decryption operation with an HMAC-based integrity check.
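The wrap operation just described follows an encrypt-then-MAC pattern. The sketch below is a conceptual illustration only — the PBKDF2-based key derivation, the toy SHA-256 keystream (standing in for a real OpenSSL cipher) and the blob layout are all assumptions made for the example, not the actual pg_wrap()/pg_unwrap() implementation:

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream (SHA-256 in counter mode); a real implementation
    # would use a vetted cipher such as AES from OpenSSL.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def wrap(sql_key: bytes, mac_key: bytes, secret: bytes) -> bytes:
    """Encrypt the user secret with the SQL level key, then append an
    HMAC over the ciphertext for the integrity check (encrypt-then-MAC)."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(secret, keystream(sql_key, nonce, len(secret))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unwrap(sql_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    """Verify the HMAC first, then decrypt; a bad tag means the blob was
    tampered with or the wrong keys are in use."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(sql_key, nonce, len(ct))))

# Round trip: unwrap recovers exactly what wrap protected.
sql_key, mac_key = b"k" * 32, b"h" * 32
blob = wrap(sql_key, mac_key, b"my secret password")
assert unwrap(sql_key, mac_key, blob) == b"my secret password"
```

Any modification of the stored blob makes the integrity check fail, which mirrors the HMAC-based check described above; from the user's perspective only the opaque wrapped blob is ever handled.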
From the user's point of view, he or she is not aware of the actual SQL level encryption key used internally by both wrap functions.

One possible use case is to combine pg_wrap() and pg_unwrap() with pgcrypto. The user wraps the user encryption secret with the pg_wrap function and passes the wrapped secret to the pg_unwrap function inside the pgcrypto encryption functions. The wrapped secret can be stored in the application server or somewhere secured, and retrieved when needed for cryptographic operations with pgcrypto.

Here is an example that shows how to encrypt and decrypt data together with the wrap and unwrap functions:

=# SELECT pg_wrap('my secret password');
                                                                              pg_wrap
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
\xb2c89f76f04f95d029f179e0fc3df4ed7254127b5562a9e27d42d1cd037c942dea65ce7c0750c520fa4f4e90481c9eb7e1e42a068248c262c1a6f25c6eab64303b1154ccc9a14361223641aab4a7aabe
(1 row)

Once the user secret is wrapped, the user can encrypt and decrypt user data using the wrapped secret together with the unwrap function:

=# INSERT INTO tbl
        VALUES (pgp_sym_encrypt('secret data',
                                 pg_unwrap('\xb2c89f76f04f95d029f179e0fc3df4ed7254127b5562a9e27d42d1cd037c942dea65ce7c0750c520fa4f4e90481c9eb7e1e42a068248c262c1a6f25c6eab64303b1154ccc9a14361223641aab4a7aabe')));
INSERT 1

=# SELECT * FROM tbl;
                                                                             col
--------------------------------------------------------------------------------------------------------------------------------------------------------------
\xc30d04070302a199ee38bea0320b75d23c01577bb3ffb315d67eecbeca3e40e869cea65efbf0b470f805549af905f94d94c447fbfb8113f585fc86b30c0bd784b10c9857322dc00d556aa8de14
(1 row)

=# SELECT pgp_sym_decrypt(col,
                           pg_unwrap('\xb2c89f76f04f95d029f179e0fc3df4ed7254127b5562a9e27d42d1cd037c942dea65ce7c0750c520fa4f4e90481c9eb7e1e42a068248c262c1a6f25c6eab64303b1154ccc9a14361223641aab4a7aabe')) as col
    FROM tbl;
     col
--------------
 secret data
(1 row)

The data 'secret data' is effectively encrypted with the user secret 'my secret password', but with the wrap and unwrap functions the user does not need to know the actual user secret during the operation.


32.4. Key Rotation Process
------------------------------

Encryption keys in general are not meant to be used indefinitely: the longer the same key is in use, the higher the chance of it being breached. Performing key rotation on a regular basis helps meet standardized security practices such as PCI-DSS, and it is good security practice to limit the number of encrypted bytes available for a specific key version. Key lifetimes are based on key length, key strength, algorithm and the total number of bytes enciphered. The key management system provides an efficient method to perform key rotation.

Please be aware that the phrase "key rotation" here only refers to the rotation of the KEK and HMAC keys. The 3 application level encryption keys (SQL, Block and WAL levels) are not rotated; they will in fact be the same before and after a "key rotation". This can be justified because the actual keys are never stored anywhere physically, presented to the user or captured in logging. What is being rotated here are the KEK and HMAC keys, which are responsible for encapsulating and restoring the actual application level encryption keys.

Since both the KEK and HMAC keys are derived from the cluster passphrase, "key rotation" ultimately refers to the rotation of the cluster passphrase and the derivation of new KEK and HMAC keys from the new cluster passphrase.
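Conceptually, the rotation restores each application level key with the old KEK and re-encapsulates it with a KEK derived from the new passphrase. The sketch below is a simplified illustration under assumed primitives (PBKDF2 as the KDF, an XOR keystream standing in for the real key-wrapping cipher); the server code instead operates on the key files in the pg_cryptokeys directory:

```python
import hashlib

def derive_kek(passphrase: bytes) -> bytes:
    # Stand-in KDF; the server derives both a KEK and an HMAC key
    # from the cluster passphrase via its key derivation protocol.
    return hashlib.pbkdf2_hmac("sha256", passphrase, b"per-cluster-salt", 100_000)

def xor_wrap(kek: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real key-wrapping cipher. XOR with a
    # KEK-derived stream is symmetric, so the same call also unwraps.
    stream = hashlib.sha256(kek).digest()
    return bytes(a ^ b for a, b in zip(key, stream))

def rotate(old_passphrase, new_passphrase, encapsulated_keys):
    """Restore each application level key with the old KEK and
    re-encapsulate it with the new KEK. The application level keys
    themselves are unchanged by the rotation."""
    old_kek = derive_kek(old_passphrase)
    new_kek = derive_kek(new_passphrase)
    return [xor_wrap(new_kek, xor_wrap(old_kek, blob)) for blob in encapsulated_keys]

# The stored blobs change, but the key they protect does not:
sql_key = bytes(range(32))
stored = [xor_wrap(derive_kek(b"old passphrase"), sql_key)]
rotated = rotate(b"old passphrase", b"new passphrase", stored)
assert xor_wrap(derive_kek(b"new passphrase"), rotated[0]) == sql_key
```

This is why the application level keys can stay constant across rotations: only their encapsulated on-disk form is rewritten.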
The new set of KEK and HMAC keys can then be used to encapsulate all 3 application level encryption keys and store the new results in the pg_cryptokeys directory.

To rotate the cluster passphrase, the user first needs to update cluster_passphrase_command in postgresql.conf and then execute the pg_rotate_cluster_passphrase() SQL function to initiate the rotation.


Cary Huang
-------------
HighGo Software Inc. (Canada)
cary.huang@highgo.ca
http://www.highgo.ca


---- On Mon, 30 Mar 2020 21:30:19 -0700 Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote ----

On Tue, 31 Mar 2020 at 09:36, Cary Huang <cary.huang@highgo.ca> wrote:
>
> Hi
> I had a look on the kms_v9 patch and have some comments
>
> --> pg_upgrade.c
> keys are copied correctly, but as pg_upgrade progresses further, it will try to start the new_cluster from the "issue_warnings_and_set_wal_level()" function, which is called after the key copy. The new cluster will fail to start due to the mismatch between cluster_passphrase_command and the newly copied keys. This causes pg_upgrade to always finish with failure. We could move "copy_master_encryption_key()" to be called after "issue_warnings_and_set_wal_level()" and this will make pg_upgrade finish with success, but the user will still have to manually correct the "cluster_passphrase_command" param on the new cluster in order for it to start up correctly. Should pg_upgrade also take care of copying the "cluster_passphrase_command" param from the old to the new cluster after it has copied the encryption keys, so users don't have to do this step? If the expectation is for users to manually correct the "cluster_passphrase_command" param after a successful pg_upgrade and key copy, then there should be a message to remind the users to do so.

I think both the old cluster and the new cluster must be initialized
with the same passphrase at initdb. Specifying a different
passphrase command to the new cluster at initdb and changing it after
pg_upgrade doesn't make sense. Also I don't think we need to copy
cluster_passphrase_command same as other GUC parameters.

I've changed the patch so that pg_upgrade copies the crypto keys only
if both the new and old cluster enable the key management. The user must
specify the same passphrase command to both the old and new cluster,
which is not cumbersome, I think. I also added the description about
this to the doc.

>
> -->Kmgr.c
> + /*
> + * If there is only temporary directory, it means that the previous
> + * rotation failed after wrapping the all internal keys by the new
> + * passphrase. Therefore we use the new cluster passphrase.
> + */
> + if (stat(KMGR_DIR, &st) != 0)
> + {
> + ereport(DEBUG1,
> + (errmsg("both directories %s and %s exist, use the newly wrapped keys",
> + KMGR_DIR, KMGR_TMP_DIR)));
>
> I think the error message should say "there is only temporary directory exist" instead of "both directories exist"

You're right. Fixed.

I've attached the new version patch.

Regards,

--
Masahiko Sawada http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
In\n> terms of documentation, I think the patch is still lacking some pieces of\n> information that kind of prevent people from fully understanding how KMS\n> works and how it can be used and why, (at least that is the impression I\n> got from the zoom meeting recordings :p). I spent some time today\n> revisiting the key-management documentation in the patch and rephrase and\n> restructure it based on my current understanding of latest KMS design. I\n> mentioned all 3 application level keys that we have agreed and emphasize on\n> explaining the SQL level encryption key because that is the key that can be\n> used right now. Block and WAL levels keys we can add here more information\n> once they are actually used in the TDE development.\n>\n> Please see below the KMS documentation that I have revised and I hope it\n> will be more clear and easier for people to understand KMS. Feel free to\n> make adjustments. Please note that we use the term \"wrap\" and \"unwrap\" a\n> lot in our past discussions. Originally we used the terms within a context\n> involving Key encryption keys (KEK). For example, \"KMS wraps a master key\n> with KEK\". Later, we used the same term in a context involving encrypting\n> user secret /password. For example, \"KMS wraps a user secret with SQL key\".\n> In my opinion, both make sense but it may be confusing to people having the\n> same term used differently. So in my revision below, the terms \"wrap\" and\n> \"unwrap\" refer to encrypting or decrypting user secret / password as they\n> are used in \"pg_wrap() and pg_unwrap()\". I use the terms \"encapsulate\" and\n> \"restore\" when KEK is used to encrypt or decrypt a key.\n>\n>\n>\n> Chapter 32: Encryption Key Management\n> ----------------------------------------------\n>\n> PostgreSQL supports internal Encryption Key Management System, which is\n> designed to manage the life cycles of cryptographic keys within the\n> PostgreSQL system. 
This includes dealing with their generation, storage,\n> usage and rotation.\n>\n> Encryption Key Management is enabled when PostgreSQL is build\n> with --with-openssl and cluster passphrase command is specified\n> during initdb. The cluster passphrase provided\n> by --cluster-passphrase-command option during initdb and the one generated\n> by cluster_passphrase_command in the postgresql.conf must match, otherwise,\n> the database cluster will not start up.\n>\n> 32.1 Key Generations and Derivations\n> ------------------------------------------\n>\n> When cluster_passphrase_command option is specified to the initdb, the\n> process will derive the cluster passphrase into a Key Encryption Key (KEK)\n> and a HMAC Key using key derivation protocol before the actual generation\n> of application level cryptographic level keys.\n>\n> -Key Encryption Key (KEK)\n> KEK is primarily used to encapsulate or restore a given application level\n> cryptographic key\n>\n> -HMAC Key\n> HMAC key is used to compute the HASH of a given application level\n> cryptographic key for integrity check purposes\n>\n> These 2 keys are not stored physically within the PostgreSQL cluster as\n> they are designed to be derived from the correctly configured cluster\n> passphrase.\n>\n> Encryption Key Management System currently manages 3 application level\n> cryptographic keys that have different purposes and usages within the\n> PostgreSQL system and these are generated using pg_strong_random() after\n> KEK and HMAC key derivation during initdb process.\n>\n> The 3 keys are:\n>\n> -SQL Level Key\n> SQL Level Key is used to wrap and unwrap a user secret / passphrase via\n> pg_wrap() and pg_unwrap() SQL functions. 
These 2 functions are designed to\n> be used in conjunction with the cryptographic functions provided by\n> pgcrypto extension to perform column level encryption/decryption without\n> having to supply a clear text user secret or passphrase that is required by\n> many pgcrypto functions as input. Please refer to [Wrap and Unwrap User\n> Secret section] for usage examples.\n>\n> -Block Level Key\n> Block Level Key is primarily used to encrypt / decrypt buffers as part of\n> the Transparent Data Encryption (TDE) feature\n>\n> -WAL Level Key\n> WAL Level Key is primarily used to encrypt / decrypt WAL files as part of\n> the Transparent Data Encryption (TDE) feature\n>\n> The 3 application level keys above will be encapsulated and hashed using\n> KEK and HMAC key mentioned above before they are physically stored to\n> pg_cryptokeys directory within the cluster.\n>\n> 32.1. Key Initialization\n> -------------------------\n>\n> When a PostgreSQL cluster with encryption key management enabled is\n> started, the cluster_passphrase_command parameter in postgresql.conf will\n> be evaluated and the cluster passphrase will be derived into KEK and HMAC\n> Key in similar ways as initdb.\n>\n> After that, the 3 encapsulated application level cryptographic keys will\n> be retrieved from pg_cryptokeys directory to be restored and\n> integrity-checked by the key management system using the derived KEK and\n> HMAC key. If this process fails, it is likely that the cluster passphrase\n> supplied to the cluster is not the same as that supplied to the initdb\n> process. The cluster will refuse to start in this case and user has to\n> manually correct the cluster passphrase.\n>\n> 32.2. Wrap and Unwrap User Secret\n> ----------------------------------------\n> Encryption key management system provides pg_wrap() and pg_unwrap SQL\n> functions (listed in Table 9.97) to perform wrap and unwrap operations on\n> user secret with the SQL level encryption key. 
The SQL level encryption key\n> is one of the 3 application level keys generated during initdb process when\n> cluster_passphrase is supplied.\n>\n> When pg_wrap() and pg_unwrap() functions are invoked, SQL level encryption\n> key will internally be used to perform the encryption and decryption\n> operation with HMAC-based integrity check. From user's point of view, he or\n> she is not aware of the actual SQL level encryption key used internally by\n> both wrap functions\n>\n> One possible use case is to combine pg_wrap() and pg_unwrap()\n> with pgcrypto. User wraps the user encryption secret with pg_wrap function\n> and passes the wrapped encryption secret to pg_unwrap function for\n> the pgcrypto encryption functions. The wrapped secret can be stored in the\n> application server or somewhere secured and should be obtained promptly for\n> cryptographic operation with pgcrypto.\n>\n> Here is an example that shows how to encrypt and decrypt data together\n> with wrap and unwrap functions:\n> =# SELECT pg_wrap('my secret passward');\n>\n> pg_wrap\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> \\xb2c89f76f04f95d029f179e0fc3df4ed7254127b5562a9e27d42d1cd037c942dea65ce7c0750c520fa4f4e90481c9eb7e1e42a068248c262c1a6f25c6eab64303b1154ccc9a14361223641aab4a7aabe\n> (1 row)\n> Once wrapping the user key, user can encrypt and decrypt user data using\n> the wrapped user key together with the key unwrap functions:\n> =# INSERT INTO tbl\n> VALUES (pgp_sym_encrypt('secret data',\n>\n> pg_unwrap('\\xb2c89f76f04f95d029f179e0fc3df4ed7254127b5562a9e27d42d1cd037c942dea65ce7c0750c520fa4f4e90481c9eb7e1e42a068248c262c1a6f25c6eab64303b1154ccc9a14361223641aab4a7aabe')));\n> INSERT 1\n> =# SELECT * FROM tbl;\n>\n> col\n>\n> 
--------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> \\xc30d04070302a199ee38bea0320b75d23c01577bb3ffb315d67eecbeca3e40e869cea65efbf0b470f805549af905f94d94c447fbfb8113f585fc86b30c0bd784b10c9857322dc00d556aa8de14\n> (1 row)\n> =# SELECT pgp_sym_decrypt(col,\n>\n> pg_unwrap('\\xb2c89f76f04f95d029f179e0fc3df4ed7254127b5562a9e27d42d1cd037c942dea65ce7c0750c520fa4f4e90481c9eb7e1e42a068248c262c1a6f25c6eab64303b1154ccc9a14361223641aab4a7aabe'))\n> as col\n> FROM tbl;\n> col\n> --------------\n> secret data\n> (1 row)\n> The data 'secret data' is practically encrypted by the user secret 'my\n> secret passward' but using wrap and unwrap functions user don't need to\n> know the actual user secret during operation.\n>\n>\n> 32.3. Key Rotation Process\n> ------------------------------\n>\n> Encryption keys in general are not interminable, the longer the same key\n> is in use, the chance of it being breached increases. Performing key\n> rotation on a regular basis help meet standardized security practices such\n> as PCI-DSS and it is a good practice in security to limit the number of\n> encrypted bytes available for a specific key version. The key lifetimse are\n> based on key length, key strength, algorithm and total number of bytes\n> enciphered. The key management systems provides a efficient method to\n> perform key rotation.\n>\n> Please be aware that the phrase \"key rotation\" here only refers to the\n> rotation of KEK and HMAC keys. The 3 application level encryption keys\n> (SQL, Block and WAL levels) are not rotated; they will in fact be the same\n> before and after a \"key rotation.\" This can be justified because the actual\n> keys are never stored anywhere physically, presented to user or captured in\n> logging. 
What is being rotated here is the KEK and HMAC keys who are\n> responsible for encapsulating and restoring the actual application level\n> encryption keys.\n>\n> Since both KEK and HMAC keys are derived from a cluster passphrase, the\n> \"key rotation\" ultimately refers to the rotation of cluster passphrase and\n> deriving a new KEK and HMAC keys from the new cluster passphrase. The new\n> set of KEK and HMAC keys can then be used to encapsulate all 3 application\n> level encryptions keys and store the new results in pg_cryptokeys directory.\n>\n> To rotate the cluster passphrase, user firstly needs to\n> update cluster_passphrase_command in the postgresql.conf and then\n> execute pg_rotate_cluster_passphrase() SQL function to initiate the\n> rotation.\n>\n>\n>\n> Cary Huang\n> -------------\n> HighGo Software Inc. (Canada)\n> cary.huang@highgo.ca\n> www.highgo.ca\n>\n>\n> ---- On Mon, 30 Mar 2020 21:30:19 -0700 *Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com <masahiko.sawada@2ndquadrant.com>>*\n> wrote ----\n>\n> On Tue, 31 Mar 2020 at 09:36, Cary Huang <cary.huang@highgo.ca> wrote:\n> >\n> > Hi\n> > I had a look on kms_v9 patch and have some comments\n> >\n> > --> pg_upgrade.c\n> > keys are copied correctly, but as pg_upgrade progresses further, it will\n> try to start the new_cluster from \"issue_warnings_and_set_wal_level()\"\n> function, which is called after key copy. The new cluster will fail to\n> start due to the mismatch between cluster_passphrase_command and the newly\n> copied keys. This causes pg_upgrade to always finish with failure. We could\n> move \"copy_master_encryption_key()\" to be called after\n> \"issue_warnings_and_set_wal_level()\" and this will make pg_upgrade to\n> finish with success, but user will still have to manually correct the\n> \"cluster_passphrase_command\" param on the new cluster in order for it to\n> start up correctly. 
Should pg_upgrade also take care of copying\n> \"cluster_passphrase_command\" param from old to new cluster after it has\n> copied the encryption keys so users don't have to do this step? If the\n> expectation is for users to manually correct \"cluster_passphrase_command\"\n> param after successful pg_upgrade and key copy, then there should be a\n> message to remind the users to do so.\n>\n> I think both the old cluster and the new cluster must be initialized\n> with the same passphrase at initdb. Specifying the different\n> passphrase command to the new cluster at initdb and changing it after\n> pg_upgrade doesn't make sense. Also I don't think we need to copy\n> cluster_passphrase_command same as other GUC parameters.\n>\n> I've changed the patch so that pg_upgrade copies the crypto keys only\n> if both new and old cluster enable the key management. User must\n> specify the same passphrase command to both old and new cluster, which\n> is not cumbersome, I think. I also added the description about this to\n> the doc.\n>\n> >\n> > -->Kmgr.c\n> > + /*\n> > + * If there is only temporary directory, it means that the previous\n> > + * rotation failed after wrapping the all internal keys by the new\n> > + * passphrase. Therefore we use the new cluster passphrase.\n> > + */\n> > + if (stat(KMGR_DIR, &st) != 0)\n> > + {\n> > + ereport(DEBUG1,\n> > + (errmsg(\"both directories %s and %s exist, use the newly wrapped\n> keys\",\n> > + KMGR_DIR, KMGR_TMP_DIR)));\n> >\n> > I think the error message should say \"there is only temporary directory\n> exist\" instead of \"both directories exist\"\n>\n> You're right. 
Fixed.\n>\n> I've attached the new version patch.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\n>\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nEMAIL: mailto: ahsan.hadi@highgo.ca\n\nHi Bruce/Joe,In the last meeting we discussed the need for improving the documentation for KMS so it is easier to understand the interface. Cary from highgo had a go at doing that, please see the previous email on this thread from Cary and let us know if it looks good...?-- Ahsan", "msg_date": "Wed, 8 Apr 2020 08:56:12 +0500", "msg_from": "Ahsan Hadi <ahsan.hadi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hi all\n\n\n\nI am sharing here a document patch based on top of kms_v10 that was shared awhile back. This document patch aims to cover more design details of the current KMS design and to help people understand KMS better. Please let me know if you have any more comments.\n\n\n\nthank you\n\n\n\nBest regards\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca\n\n\n\n\n---- On Tue, 07 Apr 2020 20:56:12 -0700 Ahsan Hadi <mailto:ahsan.hadi@gmail.com> wrote ----\n\n\n\nHi Bruce/Joe,\n\n\n\nIn the last meeting we discussed the need for improving the documentation for KMS so it is easier to understand the interface. Cary from highgo had a go at doing that, please see the previous email on this thread from Cary and let us know if it looks good...?\n\n\n\n-- Ahsan \n\n\n\n\nOn Wed, Apr 8, 2020 at 3:46 AM Cary Huang <mailto:cary.huang@highgo.ca> wrote:\n\n\n\n\n\n\n\n\n-- \n\nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca/\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nEMAIL: mailto: \n\n\n\n\n\nHello\n\n\n\nThanks a lot for the patch, I think in terms of functionality, the patch provides very straightforward functionalities regarding key management. In terms of documentation, I think the patch is still lacking some pieces of information that kind of prevent people from fully understanding how KMS works and how it can be used and why, (at least that is the impression I got from the zoom meeting recordings :p). 
I spent some time today revisiting the key-management documentation in the patch and rephrased and restructured it based on my current understanding of the latest KMS design. I mentioned all 3 application level keys that we have agreed on and emphasized explaining the SQL level encryption key because that is the key that can be used right now. For the Block and WAL level keys, we can add more information here once they are actually used in the TDE development. \n\n\n\nPlease see below the KMS documentation that I have revised and I hope it will be more clear and easier for people to understand KMS. Feel free to make adjustments. Please note that we use the term \"wrap\" and \"unwrap\" a lot in our past discussions. Originally we used the terms within a context involving Key encryption keys (KEK). For example, \"KMS wraps a master key with KEK\". Later, we used the same term in a context involving encrypting user secret /password. For example, \"KMS wraps a user secret with SQL key\". In my opinion, both make sense but it may be confusing to people having the same term used differently. So in my revision below, the terms \"wrap\" and \"unwrap\" refer to encrypting or decrypting user secret / password as they are used in \"pg_wrap() and pg_unwrap()\". I use the terms \"encapsulate\" and \"restore\" when KEK is used to encrypt or decrypt a key.\n\n\n\n\n\n\n\nChapter 32: Encryption Key Management \n\n----------------------------------------------\n\n\n\nPostgreSQL supports an internal Encryption Key Management System, which is designed to manage the life cycles of cryptographic keys within the PostgreSQL system. This includes dealing with their generation, storage, usage and rotation.\n\n\n\nEncryption Key Management is enabled when PostgreSQL is built with --with-openssl and a cluster passphrase command is specified during initdb. 
The cluster passphrase provided by the --cluster-passphrase-command option during initdb and the one generated by cluster_passphrase_command in the postgresql.conf must match; otherwise, the database cluster will not start up.\n\n\n\n32.1 Key Generations and Derivations\n\n------------------------------------------\n\n\n\nWhen the cluster_passphrase_command option is specified to initdb, the process will derive the cluster passphrase into a Key Encryption Key (KEK) and an HMAC Key using a key derivation protocol before the actual generation of application level cryptographic keys.\n\n\n\n-Key Encryption Key (KEK)\n\nKEK is primarily used to encapsulate or restore a given application level cryptographic key\n\n\n\n-HMAC Key\n\nHMAC key is used to compute the HASH of a given application level cryptographic key for integrity check purposes\n\n\n\nThese 2 keys are not stored physically within the PostgreSQL cluster as they are designed to be derived from the correctly configured cluster passphrase.\n\n\n\nEncryption Key Management System currently manages 3 application level cryptographic keys that have different purposes and usages within the PostgreSQL system and these are generated using pg_strong_random() after KEK and HMAC key derivation during the initdb process.\n\n\n\nThe 3 keys are:\n\n\n\n-SQL Level Key\n\nSQL Level Key is used to wrap and unwrap a user secret / passphrase via pg_wrap() and pg_unwrap() SQL functions. These 2 functions are designed to be used in conjunction with the cryptographic functions provided by the pgcrypto extension to perform column level encryption/decryption without having to supply a clear text user secret or passphrase that is required by many pgcrypto functions as input. 
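The derivation step described above (one cluster passphrase yielding both a KEK and an HMAC key) can be modeled with a short, standalone Python sketch. Everything here is illustrative only: the function name, the use of PBKDF2-HMAC-SHA-256, the fixed salt, and the iteration count are assumptions for the sketch, not the actual KDF or parameters used by the KMS patch.

```python
import hashlib

def derive_cluster_keys(passphrase, salt):
    """Toy model of deriving a KEK and an HMAC key from one passphrase.

    The real patch uses its own key derivation protocol; PBKDF2 and the
    parameters below are stand-ins chosen for illustration.
    """
    # Derive 64 bytes of key material, then split it into two 32-byte keys:
    # the first half acts as the KEK, the second half as the HMAC key.
    material = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000, dklen=64)
    return material[:32], material[32:]

kek, hmac_key = derive_cluster_keys(b"my cluster passphrase", b"fixed-demo-salt")
# The same passphrase deterministically yields the same key pair, which is
# why a mismatched passphrase at startup makes key restoration fail.
assert (kek, hmac_key) == derive_cluster_keys(b"my cluster passphrase", b"fixed-demo-salt")
assert kek != hmac_key
```

The determinism shown by the final assertions is the property the startup check relies on: a wrong passphrase derives a different KEK/HMAC pair, so the stored keys fail integrity verification and the cluster refuses to start.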
Please refer to [Wrap and Unwrap User Secret section] for usage examples.\n\n\n\n-Block Level Key\n\nBlock Level Key is primarily used to encrypt / decrypt buffers as part of the Transparent Data Encryption (TDE) feature\n\n\n\n-WAL Level Key\n\nWAL Level Key is primarily used to encrypt / decrypt WAL files as part of the Transparent Data Encryption (TDE) feature\n\n\n\nThe 3 application level keys above will be encapsulated and hashed using the KEK and HMAC key mentioned above before they are physically stored to the pg_cryptokeys directory within the cluster.\n\n\n\n32.1. Key Initialization\n\n-------------------------\n\n\n\nWhen a PostgreSQL cluster with encryption key management enabled is started, the cluster_passphrase_command parameter in postgresql.conf will be evaluated and the cluster passphrase will be derived into KEK and HMAC Key in the same way as during initdb.\n\n\n\nAfter that, the 3 encapsulated application level cryptographic keys will be retrieved from the pg_cryptokeys directory to be restored and integrity-checked by the key management system using the derived KEK and HMAC key. If this process fails, it is likely that the cluster passphrase supplied to the cluster is not the same as that supplied to the initdb process. The cluster will refuse to start in this case and the user has to manually correct the cluster passphrase.\n\n\n\n32.2. Wrap and Unwrap User Secret\n\n----------------------------------------\n\nThe encryption key management system provides pg_wrap() and pg_unwrap() SQL functions (listed in Table 9.97) to perform wrap and unwrap operations on a user secret with the SQL level encryption key. The SQL level encryption key is one of the 3 application level keys generated during the initdb process when cluster_passphrase is supplied.\n\n\n\nWhen pg_wrap() and pg_unwrap() functions are invoked, the SQL level encryption key will internally be used to perform the encryption and decryption operation with an HMAC-based integrity check. 
From the user's point of view, he or she is not aware of the actual SQL level encryption key used internally by both wrap functions.\n\n\n\nOne possible use case is to combine pg_wrap() and pg_unwrap() with pgcrypto. The user wraps the user encryption secret with the pg_wrap function and passes the wrapped encryption secret to the pg_unwrap function for the pgcrypto encryption functions. The wrapped secret can be stored in the application server or somewhere secured and should be obtained promptly for cryptographic operation with pgcrypto.\n\n\n\nHere is an example that shows how to encrypt and decrypt data together with wrap and unwrap functions:\n\n=# SELECT pg_wrap('my secret password');\n\n                                                                              pg_wrap\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\\xb2c89f76f04f95d029f179e0fc3df4ed7254127b5562a9e27d42d1cd037c942dea65ce7c0750c520fa4f4e90481c9eb7e1e42a068248c262c1a6f25c6eab64303b1154ccc9a14361223641aab4a7aabe\n\n(1 row)\n\nOnce the user key is wrapped, the user can encrypt and decrypt user data using the wrapped user key together with the key unwrap functions:\n\n=# INSERT INTO tbl\n\n        VALUES (pgp_sym_encrypt('secret data',\n\n                                 
row)\n\n=# SELECT pgp_sym_decrypt(col,\n\n                           pg_unwrap('\\xb2c89f76f04f95d029f179e0fc3df4ed7254127b5562a9e27d42d1cd037c942dea65ce7c0750c520fa4f4e90481c9eb7e1e42a068248c262c1a6f25c6eab64303b1154ccc9a14361223641aab4a7aabe')) as col\n\n    FROM tbl;\n\n     col\n\n--------------\n\nsecret data\n\n(1 row)\n\nThe data 'secret data' is practically encrypted by the user secret 'my secret passward' but using wrap and unwrap functions user don't need to know the actual user secret during operation.\n\n\n\n\n\n32.3. Key Rotation Process\n\n------------------------------\n\n\n\nEncryption keys in general are not interminable, the longer the same key is in use, the chance  of it being breached increases. Performing key rotation on a regular basis help meet standardized security practices such as PCI-DSS and it is a good practice in security to limit the number of encrypted bytes available for a specific key version. The key lifetimse are based on key length, key strength, algorithm and total number of bytes enciphered. The key management systems provides a efficient method to perform key rotation.\n\n\n\nPlease be aware that the phrase \"key rotation\" here only refers to the rotation of KEK and HMAC keys. The 3 application level encryption keys (SQL, Block and WAL levels) are not rotated; they will in fact be the same before and after a \"key rotation.\" This can be justified because the actual keys are never stored anywhere physically, presented to user or captured in logging. What is being rotated here is the KEK and HMAC keys who are responsible for encapsulating and restoring the actual application level encryption keys.\n\n\n\nSince both KEK and HMAC keys are derived from a cluster passphrase, the \"key rotation\" ultimately refers to the rotation of cluster passphrase and deriving a new KEK and HMAC keys from the new cluster passphrase. 
The new set of KEK and HMAC keys can then be used to encapsulate all 3 application level encryptions keys and store the new results in pg_cryptokeys directory.\n\n\n\nTo rotate the cluster passphrase, user firstly needs to update cluster_passphrase_command in the postgresql.conf and then execute pg_rotate_cluster_passphrase() SQL function to initiate the rotation.\n\n\n\n\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca\n\n\n\n\n\n\n---- On Mon, 30 Mar 2020 21:30:19 -0700 Masahiko Sawada <mailto:masahiko.sawada@2ndquadrant.com> wrote ----\n\n\n\nOn Tue, 31 Mar 2020 at 09:36, Cary Huang <mailto:cary.huang@highgo.ca> wrote: \n> \n> Hi \n> I had a look on kms_v9 patch and have some comments \n> \n> --> pg_upgrade.c \n> keys are copied correctly, but as pg_upgrade progresses further, it will try to start the new_cluster from \"issue_warnings_and_set_wal_level()\" function, which is called after key copy. The new cluster will fail to start due to the mismatch between cluster_passphrase_command and the newly copied keys. This causes pg_upgrade to always finish with failure. We could move \"copy_master_encryption_key()\" to be called after \"issue_warnings_and_set_wal_level()\" and this will make pg_upgrade to finish with success, but user will still have to manually correct the \"cluster_passphrase_command\" param on the new cluster in order for it to start up correctly. Should pg_upgrade also take care of copying \"cluster_passphrase_command\" param from old to new cluster after it has copied the encryption keys so users don't have to do this step? If the expectation is for users to manually correct \"cluster_passphrase_command\" param after successful pg_upgrade and key copy, then there should be a message to remind the users to do so. \n \nI think both the old cluster and the new cluster must be initialized \nwith the same passphrase at initdb. 
Specifying the different \npassphrase command to the new cluster at initdb and changing it after \npg_upgrade doesn't make sense. Also I don't think we need to copy \ncluster_passphrase_command same as other GUC parameters. \n \nI've changed the patch so that pg_upgrade copies the crypto keys only \nif both new and old cluster enable the key management. User must \nspecify the same passphrase command to both old and new cluster, which \nis not cumbersome, I think. I also added the description about this to \nthe doc. \n \n> \n> -->Kmgr.c \n> + /* \n> + * If there is only temporary directory, it means that the previous \n> + * rotation failed after wrapping the all internal keys by the new \n> + * passphrase. Therefore we use the new cluster passphrase. \n> + */ \n> + if (stat(KMGR_DIR, &st) != 0) \n> + { \n> + ereport(DEBUG1, \n> + (errmsg(\"both directories %s and %s exist, use the newly wrapped keys\", \n> + KMGR_DIR, KMGR_TMP_DIR))); \n> \n> I think the error message should say \"there is only temporary directory exist\" instead of \"both directories exist\" \n \nYou're right. Fixed. \n \nI've attached the new version patch. \n \nRegards, \n \n-- \nMasahiko Sawada http://www.2ndQuadrant.com/ \nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 01 May 2020 15:16:46 -0700", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sat, 2 May 2020 at 07:17, Cary Huang <cary.huang@highgo.ca> wrote:\n>\n> Hi all\n>\n> I am sharing here a document patch based on top of kms_v10 that was shared awhile back. This document patch aims to cover more design details of the current KMS design and to help people understand KMS better. Please let me know if you have any more comments.\n\nThank you for your patch! I've changed the internal key management\npatch much. 
Here is the summary of the changes:\n\nI've changed the key manager so that it can manage multiple\ncryptographic keys up to 128 bytes long. Currently, all keys managed\nby the key manager need to be pre-defined, and the key manager has\nonly one cryptographic key, the SQL key, which is used to encrypt/decrypt\ndata via the SQL function interface. But it's easy to add new keys for\npotential use cases, for example when we need some keys for\ntransparent data encryption. When the server is starting up, the key\nmanager unwraps the internal key and loads it into shared memory.\nPerhaps we need to protect the loaded key memory space from being\nswapped out using mlock() but it's not implemented yet.\n\nFor the SQL interface, I've changed the patch a lot. The encryption process\nwe called 'wrap' and 'unwrap' is actually authenticated encryption\nwith associated data[1] (AEAD), which is not a dedicated way to wrap\ncryptographic keys. I renamed pg_wrap() and pg_unwrap() to\npg_encrypt() and pg_decrypt() to make these function names more\nunderstandable. These SQL functions encrypt/decrypt data using the SQL\nkey. So currently, there are two usages of pg_encrypt() and\npg_decrypt() functions to encrypt database data:\n\nFirst, we can encrypt data directly using these SQL functions. That\nway, users don't need to manage and know the encryption key; moreover,\nwe enable users to use AEAD without pgcrypto. Second, by wrapping the\nuser secret key using these SQL functions we can use them in\nconjunction with the cryptographic functions provided by pgcrypto.\nUsers can wrap their secret key with the SQL key via pg_encrypt(), and then\nuse the user secret unwrapped by pg_decrypt() during SELECT, INSERT,\nUPDATE, and DELETE operations. 
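The AEAD composition described above (encrypt first, then authenticate the ciphertext with an HMAC) can be sketched in dependency-free Python. To stay within the standard library, the sketch substitutes a toy SHA-256 counter-mode keystream for the real AES-256-CBC cipher, and uses HMAC-SHA-256 rather than HMAC-SHA-512; it illustrates only the encrypt-then-MAC structure, not the actual cipher suite in the patch.

```python
import hashlib, hmac, os

def _keystream(key, nonce, length):
    # Toy keystream (counter-mode SHA-256); stands in for the real block cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def aead_encrypt(enc_key, mac_key, plaintext, aad=b""):
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    # Encrypt-then-MAC: the tag covers the associated data, nonce, and ciphertext.
    tag = hmac.new(mac_key, aad + nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def aead_decrypt(enc_key, mac_key, blob, aad=b""):
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, aad + nonce + ct, hashlib.sha256).digest()
    # Verify the tag before decrypting; any tampering is rejected here.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))

blob = aead_encrypt(b"e" * 32, b"m" * 32, b"user secret")
assert aead_decrypt(b"e" * 32, b"m" * 32, blob) == b"user secret"
```

The point of the check-before-decrypt ordering is that a flipped bit anywhere in the stored blob is detected by the HMAC verification rather than silently producing garbage plaintext, which mirrors the HMAC-based integrity check the key manager performs when restoring keys.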
Here is an example:\n\n-- Wrap user secret and save to 'key' variable, or somewhere\n=# SELECT pg_encrypt('user password') as key \\gset\n\n-- Encrypt/decrypt user data with the secret string 'user password'\nwhich is obtained by unwrapping 'key' variable.\n=# INSERT INTO tbl VALUES (pgp_sym_encrypt('abc123', pg_decrypt(:'key')));\n=# SELECT pgp_sym_decrypt(col, pg_decrypt(:'key')) FROM tbl;\n\nHowever, this usage has a downside that the user secret can be logged to\nserver logs when log_statement = 'all' or an error happens. To deal\nwith this issue I've created a PoC patch on top of the key manager\npatch which adds a libpq function PQencrypt() to encrypt data and a new\npsql meta-command named \\encrypt in order to encrypt data while\neliminating the possibility of the user data being logged.\nPQencrypt() just calls pg_encrypt() via PQfn(). Using this command the\nabove example can become as follows:\n\n-- Encrypt user secret via PQfn and store it to the 'key' variable, or somewhere\n=# \\encrypt\nEnter data:\nEnter it again:\nencrypted data:\n\\x8e17079ed65f570f5adcac9023cb5d079708f34563e62f3f9f1f0f26c7ad4ecf7b90dc199d7b3bbf663c8800d98162d02dc30da247ca4c825f3240c4a7c419a7c8785d9f7f974d0ed310f179ecbbab1ecf38ec48d74d41dd13544595d45d5ec9\n=# \\set key '\\x8e17079ed65f570f5adcac9023cb5d079708f34563e62f3f9f1f0f26c7ad4ecf7b90dc199d7b3bbf663c8800d98162d02dc30da247ca4c825f3240c4a7c419a7c8785d9f7f974d0ed310f179ecbbab1ecf38ec48d74d41dd13544595d45d5ec9\n\n-- Encrypt/decrypt user data with the secret string 'user password'\nwhich is obtained by unwrapping 'key' variable.\n=# INSERT INTO tbl VALUES (pgp_sym_encrypt('abc123', pg_decrypt(:'key')));\n=# SELECT pgp_sym_decrypt(col, pg_decrypt(:'key')) FROM tbl;\n\nBTW after some research, I've found that Always Encrypted, which is a\ndatabase encryption feature provided by SQL Server, uses a quite\nsimilar approach called AEAD_AES_256_CBC_HMAC_SHA_256[2].\nAEAD_AES_256_CBC_HMAC_SHA_256 is actually derived from the\nspecification 
draft[3].\n\nFor the documentation, I've incorporated the proposed update by Cary and\nadded some descriptions, especially for AEAD.\n\nI've separated the patch into several pieces so it can be reviewed\neasily. Here is a short description of each patch:\n\n0001 patch introduces AES256-CBC and HMAC-SHA512 to src/common. These\nfunctions are enabled only when built with --with-openssl.\n\n0002 patch introduces the AEAD algorithm to src/common.\n\n0003 patch introduces the key management module that is split into two\nparts: utility code and backend code. The key manager reads/writes\ncryptographic keys, verifies the given passphrase, and wraps and unwraps\nkeys using AEAD. Currently the key manager has only one internal key,\nthe SQL key.\n\n0004 patch adds two SQL functions: pg_encrypt() and pg_decrypt()\n\n0005 and 0006 patches introduce regression tests and documentation respectively.\n\n0007 patch is a PoC patch that adds the PQencrypt() function and psql's\n\\encrypt meta-command to encrypt data so that the target data is not\nlogged.\n\nRegards,\n\n[1] https://en.wikipedia.org/wiki/Authenticated_encryption#Authenticated_encryption_with_associated_data_(AEAD)\n[2] https://docs.microsoft.com/ja-jp/sql/relational-databases/security/encryption/always-encrypted-cryptography?view=sql-server-ver15\n[3] https://tools.ietf.org/html/draft-mcgrew-aead-aes-cbc-hmac-sha2-05.\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 29 May 2020 14:49:54 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Fri, May 29, 2020 at 1:50 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> However, this usage has a downside that user secret can be logged to\n> server logs when log_statement = 'all' or an error happens. 
To deal\n> with this issue I've created a PoC patch on top of the key manager\n> patch which adds a libpq function PQencrypt() to encrypt data and new\n> psql meta-command named \\encrypt in order to encrypt data while\n> eliminating the possibility of the user data being logged.\n> PQencrypt() just calls pg_encrypt() via PQfn(). Using this command the\n> above example can become as follows:\n\nIf PQfn() calls aren't currently logged, that's probably more of an\noversight due to the feature being almost dead than something upon\nwhich we want to rely.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 29 May 2020 15:20:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hello Masahiko-san,\n\n>> I am sharing here a document patch based on top of kms_v10 that was \n>> shared awhile back. This document patch aims to cover more design \n>> details of the current KMS design and to help people understand KMS \n>> better. Please let me know if you have any more comments.\n\nA few questions and comments, mostly about the design. If I'm off topic, \nor these concerns have been clearly addressed in the thread, please accept \nmy apology.\n\nA lot of what I write is based on guessing from a look at the doc & code \nprovided in the patch. The patch should provide some explanatory README \nabout the overall design.\n\nIt is a lot of code, which for me should not be there, inside the backend. \nCould this whole thing be an extension? I cannot see why not. If it could, \nthen ISTM that it should. If not, what set of features is needed to allow \nthat as an extension? How could pg be improved so that it could be an \nextension?\n\nAlso, I'm not at fully at ease with some of the underlying principles \nbehind this proposal. Are we re-inventing/re-implementing kerberos or \nwhatever? 
The patch should provide some explanatory README \nabout the overall design.\n\nIt is a lot of code, which for me should not be there, inside the backend. \nCould this whole thing be an extension? I cannot see why not. If it could, \nthen ISTM that it should. If not, what set of features is needed to allow \nthat as an extension? How could pg be improved so that it could be an \nextension?\n\nAlso, I'm not fully at ease with some of the underlying principles \nbehind this proposal. Are we re-inventing/re-implementing kerberos or \nwhatever? Are we re-implementing a brand new KMS inside pg? Why have \nour own?\n\nI think that key management should *not* belong to pg itself, but to some \nexternal facility/process with which pg would interact, so that no master \nkey would ever be inside the pg process, and possibly not on the same host, \nif it was me doing it.\n\nIf some extension could provide it inside the process and store things \ninside some pg_cryptokeys directory, then fine if it fits the threat model \nbeing addressed, but the paranoïd user wanting that should have other \noptions which could be summarized as \"outside\".\n\nAnother benefit of \"outside\" is that if there is a security issue attached \nto the kms, then it would not be a pg security issue, and it would not \naffect normal pg users who do not use the feature.\n\nAlso, implementing a crash-safe key rotation algorithm does not belong \ninside the pg backend; that is not its job. Likewise, the AEAD AES-CBC \nHMAC-SHA512 definitely does not belong in the postgres core backend \nimplementation. Why should I use the OpenSSL library and not some other \nfacility?\n\nBasically, I'm -1 on having such a feature right inside pg, and +1 on \nallowing pg to have it outside and interact with it appropriately, \npreferably through an extension which could be in core.\n\nSo my take is that pg should allow an extension to:\n\n - provide a *generic* way to interact with an *external* kms,\n eg by running a command (possibly setuid something) and interacting\n with its stdin/stderr through some trivial text protocol; what the\n command does should be of no concern to pg, and the existing code\n can be wrapped as an example working implementation.\n\n - store some local keys somewhere and provide functions to use these\n keys to encrypt/decrypt stuff, obviously, as generic as possible.\n\n ISTM that what crypto algorithms are actually used should not be\n hardcoded, but I'm not sure how to achieve that. 
Maybe simply by\n redefining the relevant function, maybe at the SQL level.\n\nThere is an open question on how the \"command\" validates that it is indeed \nthe right pg which is interacting with it. This means some authentication, \nprobably some passphrase to provide somehow, probably close to what is \nbeing implemented, so from an interface point of view it could look quite \nthe same, but the key point is that the whole thing would be out of the \npostgres process; only the encryption keys being used would be in postgres,\nand probably only in the process which actually needs them.\n\nRandom comments about details I saw in passing:\n\n* key_management_enabled\n\nkey_management (on|off) ?\n\n* initdb -D dbname --cluster-passphrase-command=\"cat /path/to/passphrase-file\"\n\nPutting an example in the documentation looks like a recommendation. I would \nadd a caveat that doing the above is probably a bad idea.\n\n-- \nFabien.", "msg_date": "Sun, 31 May 2020 10:13:26 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sun, 31 May 2020 at 17:13, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Masahiko-san,\n>\n> >> I am sharing here a document patch based on top of kms_v10 that was\n> >> shared awhile back. This document patch aims to cover more design\n> >> details of the current KMS design and to help people understand KMS\n> >> better. Please let me know if you have any more comments.\n>\n> A few questions and comments, mostly about the design. If I'm off topic,\n> or these concerns have been clearly addressed in the thread, please accept\n> my apology.\n\nThank you for your comments! Please correct me if I'm misunderstanding\nyour questions and comments.\n\n>\n> A lot of what I write is based on guessing from a look at the doc & code\n> provided in the patch. 
The patch should provide some explanatory README\n> about the overall design.\n\nAgreed.\n\n>\n> It is a lot of code, which for me should not be there, inside the backend.\n> Could this whole thing be an extension? I cannot see why not. If it could,\n> then ISTM that it should. If not, what set of features is needed to allow\n> that as an extension? How could pg be improved so that it could be an\n> extension?\n\nLet me explain some background about TDE behind this key manager patch.\n\nThis key manager is aimed at managing cryptographic keys used for\ntransparent data encryption. As a result of the discussion, we\nconcluded it's safer to use multiple keys to encrypt database data\nrather than using one key to encrypt the whole thing, for example, in\norder to make sure different data is not encrypted with the same key\nand IV. Therefore, in terms of TDE, the minimum requirement is that\nPostgreSQL can use multiple keys.\n\nTo use multiple keys in PG, there are roughly two possible designs:\n\n1. Store all keys for TDE in an external KMS, and PG gets them from\nit as needed.\n2. PG manages all keys for TDE internally and protects these keys on\ndisk with a key (i.e. the KEK) stored in an external KMS.\n\nThere are pros and cons to each design. To take one con of #1 as an\nexample, the operations between PG and the external KMS could be\ncomplex: creating, removing and rotating keys and so on. We could\nimplement these operations in an extension to interact with different\nkinds of external KMS, and perhaps we could use KMIP. But the\ndevelopment cost could become high because we might need different\nextensions for each key management solution/service.\n\n#2 is better on that point; the interaction between PG and the KMS is\nonly GET. Other databases that employ a similar approach are SQL\nServer and DB2.\n\nIn terms of the necessity of introducing the key manager into PG core,\nI think at least TDE needs to be implemented in PG core. 
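Design #2 above keeps the PG-to-KMS interaction down to a single GET of a passphrase from which the KEK is derived. A rough sketch of that flow, in Python purely for illustration (the command string, salt handling, and KDF parameters here are assumptions for a self-contained example, not what the patch actually does):

```python
import hashlib
import subprocess

def get_kek(passphrase_command: str, salt: bytes) -> bytes:
    # Run a cluster_passphrase_command-style command and read the
    # passphrase from its stdout; the command itself may talk to any
    # external KMS -- the database side only ever GETs the result.
    passphrase = subprocess.run(
        passphrase_command, shell=True, check=True, capture_output=True
    ).stdout.strip()
    # Stretch the passphrase into a fixed-length key-encryption key (KEK).
    return hashlib.pbkdf2_hmac("sha512", passphrase, salt, 100_000, dklen=32)

kek = get_kek("echo 'example passphrase'", salt=b"cluster-salt")
```

Swapping the passphrase source in the external KMS then only changes what the command prints; the unwrapping and re-wrapping of the internal keys stays inside PG.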
And as this\nkey manager is for managing keys for TDE, I think the key manager also\nneeds to be introduced into the core so that TDE functionality doesn't\ndepend on external modules.\n\n>\n> Also, I'm not fully at ease with some of the underlying principles\n> behind this proposal. Are we re-inventing/re-implementing kerberos or\n> whatever? Are we re-implementing a brand new KMS inside pg? Why have\n> our own?\n\nAs I explained above, this key manager is for managing internal keys\nused by TDE. It's not an alternative to existing key management\nsolutions/services.\n\nThe requirements of this key manager are generating internal keys,\nletting other PG components use them, protecting them with the KEK\nwhen persisted, and supporting KEK rotation. It doesn't have a feature\nfor users to store arbitrary keys into this key manager, unlike other\nkey management solutions/services.\n\n>\n> I think that key management should *not* belong to pg itself, but to some\n> external facility/process with which pg would interact, so that no master\n> key would ever be inside pg process, and possibly not on the same host, if\n> it was me doing it.\n>\n> If some extension could provide it inside the process and stores thing\n> inside some pg_cryptokeys directory, then fine if it fits the threat model\n> being addressed, but the paranoïd user wanting that should have other\n> options which could be summarized as \"outside\".\n>\n> Another benefit of \"outside\" is that if there is a security issue attached\n> to the kms, then it would not be a pg security issue, and it would not\n> affect normal pg users which do not use the feature.\n\nI agree that the key used to encrypt data must not be placed on the\nsame host. But that's true only when the key is not protected, right?\nIn this key manager, since we protect all internal keys with the KEK,\nthere is no problem unless the KEK is leaked. 
The KEK can be obtained from outside key\nmanagement solutions/services through cluster_passphrase_command.\n\n>\n> Also, implementing a crash-safe key rotation algorithm does not look like\n> inside pg backend, that is not its job.\n\nThe key rotation this key manager has is KEK rotation, which is very\nimportant. Without KEK rotation, if the KEK is leaked an attacker can\nget database data by disk theft. Since the KEK is responsible for\nencrypting all internal keys, it's necessary to re-encrypt the internal\nkeys when the KEK is rotated. I think PG is the only role that can do\nthat job.\n\nIn terms of rotation of the internal keys, an idea proposed during the\ndiscussion is to change the internal keys during pg_basebackup: the\nsender transfers database data after decryption and the receiver\nencrypts the received data with internal keys different from what the\nsender has.\n\n> Likewise, the AEAD AES-CBC\n> HMAC-SHA512 does definitely not belong to postgres core backend\n> implementation. Why should I use the OpenSSL library and not some other\n> facility?\n\nThe purpose of AEAD is to do both things: encrypting the internal keys\nand checking their integrity. We cannot do integrity checks using\nonly AES. 
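To illustrate why the HMAC part matters, here is a minimal encrypt-then-MAC sketch in the spirit of the AES-CBC/HMAC-SHA-512 composition discussed in this thread. The "cipher" below is a toy hash-based keystream standing in for AES (an assumption purely to keep the example self-contained); only the MAC-then-verify structure is the point:

```python
import hashlib
import hmac

def _keystream(key: bytes, n: int) -> bytes:
    # Toy stand-in for a real block cipher: NOT AES, illustration only.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def aead_encrypt(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(enc_key, len(plaintext))))
    # Encrypt-then-MAC: the HMAC-SHA-512 tag covers the ciphertext, so
    # any tampering is detected before decryption is even attempted.
    tag = hmac.new(mac_key, ct, hashlib.sha512).digest()
    return ct + tag

def aead_decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    ct, tag = blob[:-64], blob[-64:]
    if not hmac.compare_digest(hmac.new(mac_key, ct, hashlib.sha512).digest(), tag):
        raise ValueError("integrity check failed")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, len(ct))))
```

A cipher alone would happily "decrypt" a flipped bit into garbage; the tag check is what turns tampering into a detectable error.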
Another option could be to use AES key wrapping[1].\n\n>\n> Basically, I'm -1 on having such a feature right inside pg, and +1 on\n> allowing pg to have it outside and interact with it appropriately,\n> preferably through an extension which could be in core.\n>\n> So my take is that pg should allow an extension to:\n>\n> - provide a *generic* way to interact with an *external* kms\n> eg by running a command (possibly setuid something) and interacting\n> with its stdin/stderr what the command does should be of no concern\n> to pg and use some trivial text protocol, and the existing code\n> can be wrapped as an example working implementation.\n>\n> - store some local keys somewhere and provide functions to use these\n> keys to encrypt/decrypt stuff, obviously, as generic as possible.\n>\n> ISTM that what crypto algorithms are actually used should not be\n> hardcoded, but I'm not sure how to achieve that. Maybe simply by\n> redefining the relevant function, maybe at the SQL level.\n>\n\nI think this key manager satisfies the first point by\ncluster_passphrase_command. For the second point, the key manager\nstores local keys inside PG while protecting them with a KEK managed\noutside of PG.\n\nInspired by SQL Server's Always Encrypted, I implemented pg_encrypt()\nand pg_decrypt(), but these are actually not necessary in terms of TDE.\nWe can introduce the key manager with empty internal keys and then\nintroduce TDE by adding the necessary keys.\n\nI agree with the point that crypto algorithms should not be hardcoded.\n\n> There is an open question on how the \"command\" validates that it is indeed\n> the right pg which is interacting with it. 
This means some authentication,\n> probably some passphrase to provide somehow, probably close to what is\n> being implemented, so from an interface point of view, it could look quite\n> the same, but the key point is that the whole thing would be out of\n> postgres process, only encryption keys being used would be in postgres,\n> and probably only in the process which actually needs it.\n>\n\nI might be missing your point, but is the question how to verify that\nthe passphrase given by cluster_passphrase_command is correct?\n\n> Random comments about details I saw in passing:\n>\n> * key_management_enabled\n>\n> key_management (on|off) ?\n>\n> * initdb -D dbname --cluster-passphrase-command=\"cat /path/to/passphrase-file\"\n>\n> Putting example in the documentation looks like a recommendation. It would\n> put a caveat that doing the above is probably a bad idea.\n\nAgreed on the above two points.\n\nRegards,\n\n[1] https://tools.ietf.org/html/rfc3394\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 1 Jun 2020 15:34:10 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sat, 30 May 2020 at 04:20, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, May 29, 2020 at 1:50 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > However, this usage has a downside that user secret can be logged to\n> > server logs when log_statement = 'all' or an error happens. To deal\n> > with this issue I've created a PoC patch on top of the key manager\n> > patch which adds a libpq function PQencrypt() to encrypt data and new\n> > psql meta-command named \\encrypt in order to encrypt data while\n> > eliminating the possibility of the user data being logged.\n> > PQencrypt() just calls pg_encrypt() via PQfn(). 
Using this command the\n> > above example can become as follows:\n>\n> If PQfn() calls aren't currently logged, that's probably more of an\n> oversight due to the feature being almost dead than something upon\n> which we want to rely.\n\nAgreed.\n\nThe patch includes the pg_encrypt() and pg_decrypt() SQL functions\ninspired by Always Encrypted, but these functions are interfaces of\nthe key manager to make it work independently from TDE and are\nactually not necessary in terms of TDE. Perhaps it's better to\nconsider whether it's worth having them after introducing TDE.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 1 Jun 2020 15:58:31 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hi,\n\nI took a step back today and started to think about the purpose of the\ninternal KMS and what it is supposed to do, and how it compares to an\nexternal KMS. Both are intended to manage the life cycle of encryption\nkeys including their generation, protection, storage and rotation. An\nexternal KMS is a more centralized but expensive way to manage\nencryption life cycles, and many deployments actually start with an\ninternal KMS and later migrate to an external one.\n\nAnyhow, the design and implementation of the internal KMS should\ncircle around these stages of the key lifecycle:\n\n1. Key Generation - Yes, the internal KMS module generates keys using\na pseudo-random function, but only the keys for TDE and SQL-level\nkeys. Users cannot request new key generation.\n\n2. Key Protection - Yes, the internal KMS wraps all keys with a KEK\nand an HMAC hash derived from a cluster passphrase.\n\n3. Key Storage - Yes, the wrapped keys are stored in the cluster.\n\n4. 
Key Rotation - Yes, the internal KMS has a SQL method to swap out the\ncluster passphrase, which rotates the KEK and the HMAC key.\n\nI am saying this because I want to make sure we can all agree on the\nscope of the internal KMS. Without a clear scope, this KMS development\nwill seem to go on forever.\n\nIn this patch, the internal KMS exposes pg_encrypt() and pg_decrypt()\n(were pg_wrap() and pg_unwrap() before) to the user to turn a clear\ntext password into some sort of key material based on the SQL-level\nkey generated at initdb. This is used so the user does not have to\nprovide a clear text password to pgp_sym_encrypt() provided by the\npgcrypto extension. The intention is good, I understand, but I don't\nthink it is within the scope of the KMS, and it is definitely not\nwithin the scope of TDE either.\n\nEven if the password can be passed into pgp_sym_encrypt() securely by\nusing the pg_decrypt() function, pgp_sym_encrypt() still has to take\nthis password and derive an encryption key from it using algorithms\nthat the internal KMS does not currently manage. This kind of defeats\nthe purpose of the internal KMS. So simply using pg_encrypt() and\npg_decrypt() is not really a solution to pgcrypto's limitation. This\nshould be another topic/project aimed at improving pgcrypto by\nintegrating it with the internal KMS, similar to TDE, which also has\nto integrate with the internal KMS later.\n\nSo for the internal KMS, the only cryptographic functions needed for\nnow are kmgr_wrap_key() and kmgr_unwrap_key(), to encapsulate and\nrestore the encryption keys to satisfy the \"key protection\" life\ncycle stage. 
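The "key protection" and "key rotation" stages fit together through exactly this wrap/unwrap pair: rotating the passphrase re-wraps the internal keys without touching the data they protect. A rough sketch follows (a toy XOR wrap stands in for the patch's real AEAD wrapping, an assumption made only to keep the example short and self-contained):

```python
import hashlib
import secrets

def wrap(kek: bytes, key: bytes) -> bytes:
    # Toy XOR-based wrapping: NOT the patch's AEAD, illustration only.
    pad = hashlib.sha256(kek).digest()[:len(key)]
    return bytes(a ^ b for a, b in zip(key, pad))

unwrap = wrap  # XOR wrapping is its own inverse (toy property)

internal_key = secrets.token_bytes(32)   # generated once, never changes
old_kek = hashlib.sha256(b"old passphrase").digest()
new_kek = hashlib.sha256(b"new passphrase").digest()

stored = wrap(old_kek, internal_key)
# KEK rotation: unwrap with the old KEK, re-wrap with the new one.
# The internal key, and all data encrypted with it, stays untouched.
stored = wrap(new_kek, unwrap(old_kek, stored))
```

Only the wrapped copies on disk change during rotation, which is why KEK rotation is cheap compared to re-encrypting the whole cluster.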
I don't think pg_encrypt() and pg_decrypt() should be part of the\ninternal KMS.\n\nAnyways, I have also reviewed the patch and have a few comments below:\n\n(1)\n\nThe ciphering algorithm in my opinion should contain the algorithm\nname, key length and block cipher mode, which is similar to openssl's\ndefinition.\n\nInstead of defining a cipher as PG_CIPHER_AES_CBC and having the key\nlength as a separate parameter, I would define them as\n\n#define PG_CIPHER_AES128_CBC 0\n#define PG_CIPHER_AES256_CBC 1\n#define PG_CIPHER_AES128_CTR 2\n#define PG_CIPHER_AES256_CTR 3\n\nI know PG_CIPHER_AES128_CTR and PG_CIPHER_AES256_CTR are not being\nused now as these are for TDE in the future, but we might as well list\nthem here because this KMS is made to work specifically for TDE as I\nunderstand it.\n\n-----------------------------------------------------------------------------------------------------------\n/*\n * Supported symmetric encryption algorithm. These identifiers are passed\n * to the pg_cipher_ctx_create() function, and then the actual encryption\n * implementations need to initialize their context for the given\n * encryption algorithm.\n */\n#define PG_CIPHER_AES_CBC                        0\n#define PG_MAX_CIPHER_ID                        1\n-----------------------------------------------------------------------------------------------------------\n\n(2)\n\nIf the cipher algorithms are defined like (1), then there is no need\nto pass the key length as an argument to the ossl_cipher_ctx_create()\nfunction, because it already knows the key length based on the cipher\ndefinition. 
The fewer arguments the better.\n\n-----------------------------------------------------------------------------------------------------------\nPgCipherCtx *\npg_cipher_ctx_create(int cipher, uint8 *key, int klen)\n{\n    PgCipherCtx *ctx = NULL;\n\n    if (cipher >= PG_MAX_CIPHER_ID)\n        return NULL;\n\n#ifdef USE_OPENSSL\n    ctx = (PgCipherCtx *) palloc0(sizeof(PgCipherCtx));\n\n    ctx->encctx = ossl_cipher_ctx_create(cipher, key, klen, true);\n    ctx->decctx = ossl_cipher_ctx_create(cipher, key, klen, false);\n#endif\n\n    return ctx;\n}\n-----------------------------------------------------------------------------------------------------------\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\ncary.huang@highgo.ca\nhttp://www.highgo.ca\n\n---- On Sun, 31 May 2020 23:58:31 -0700 Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote ----\n\nOn Sat, 30 May 2020 at 04:20, Robert Haas <robertmhaas@gmail.com> wrote: \n> \n> On Fri, May 29, 2020 at 1:50 AM Masahiko Sawada \n> <masahiko.sawada@2ndquadrant.com> wrote: \n> > However, this usage has a downside that user secret can be logged to \n> > server logs when log_statement = 'all' or an error happens. To deal \n> > with this issue I've created a PoC patch on top of the key manager \n> > patch which adds a libpq function PQencrypt() to encrypt data and new \n> > psql meta-command named \\encrypt in order to encrypt data while \n> > eliminating the possibility of the user data being logged. \n> > PQencrypt() just calls pg_encrypt() via PQfn(). Using this command the \n> > above example can become as follows: \n> \n> If PQfn() calls aren't currently logged, that's probably more of an \n> oversight due to the feature being almost dead than something upon \n> which we want to rely. \n \nAgreed. 
\n \nThe patch includes pg_encrypt() and pg_decrypt() SQL functions \ninspired by Always Encryption but these functions are interfaces of \nthe key manager to make it work independently from TDE and are \nactually not necessary in terms of TDE. Perhaps it's better to \nconsider whether it's worth having them after introducing TDE. \n \nRegards, \n \n-- \nMasahiko Sawada http://www.2ndQuadrant.com/ \nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 02 Jun 2020 16:30:28 -0700", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Wed, 3 Jun 2020 at 08:30, Cary Huang <cary.huang@highgo.ca> wrote:\n>\n> Hi\n>\n>\n>\n> I took a step back today and started to think about the purpose of internal KMS and what it is supposed to do, and how it compares to external KMS. 
Both are intended to manage the life cycle of encryptions keys including their generation, protection, storage and rotation. External KMS, on the other hand, is a more centralized but expensive way to manage encryption life cycles and many deployment actually starts with internal KMS and later migrate to external one.\n>\n>\n>\n> Anyhow, the design and implementation of internal KMS should circle around these stages of key lifecycle.\n>\n>\n>\n> 1. Key Generation - Yes, internal KMS module generates keys using pseudo random function, but only the keys for TDE and SQL level keys. Users cannot request new key generation\n> 2. Key Protection - Yes, internal KMS wrap all keys with KEK and HMAC hash derived from a cluster passphrase\n> 3. Key Storage - Yes, the wrapped keys are stored in the cluster\n> 4. Key Rotation - Yes, internal KMS has a SQL method to swap out cluster passphrase, which rotates the KEK and HMAC key\n>\n>\n>\n> I am saying this, because I want to make sure we can all agree on the scope of internal KMS. Without clear scope, this KMS development will seem to go on forever.\n>\n>\n\nYes, the internal KMS is not an alternative to external KMS such as\nAWS KMS, SafeNet Key Secure, and Vault, but a PostgreSQL internal\ncomponent that can work with these external solutions (via\ncluster_passphrase_command). It's the same position as our other\nmanagement components such as bufmgr, smgr, and lmgr.\n\nI agree with this scope. It manages only encryption keys used by PostgreSQL.\n\n>\n> In this patch, the internal KMS exposes pg_encrypt() and pg_decrypt() (was pg_wrap() and pg_unwrap() before) to the user to turn a clear text password into some sort of key material based on the SQL level key generated at initdb. This is used so the user does not have to provide clear text password to pgp_sym_encrypt() provided by pgcrypto extension. 
The intention is good, I understand, but I don't think it is within the scope of KMS and it is definitely not within the scope of TDE either.\n>\n\nI agree that neither pg_encrypt() nor pg_decrypt() is within the scope\nof KMS and TDE. That's why I've split the patch, and that's why I\nrenamed to pg_encrypt() and pg_decrypt() to clarify the purpose of\nthese functions is not key management. Key wrapping and unwrapping is\none of the usages of these functions.\n\nI think we can use the internal KMS for several purposes. It can\nmanage encryption keys not only for cluster-wide TDE but also, for\nexample, for column-level TDE and encryption SQL functions.\npg_encrypt() and pg_decrypt() are one example of the usage of the\ninternal KMS. Originally since we thought KMS and TDE are not\nintroduced at the same release, the idea is come up with so that users\ncan use KMS functionality with some interface. Therefore these SQL\nfunctions are not within the scope of KMS and it should be fine with\nintroducing the internal KMS having 0 keys.\n\n> Even if the password can be passed into pgp_sym_encrypt() securely by using pg_decrypt() function, the pgp_sym_encrypt() still will have to take this password and derive into an encryption key using algorithms that internal KMS does not manage currently. This kind of defeats the purpose of internal KMS. So simply using pg_encrypt() and pg_decrypt() is not really a solution to pgcrypto's limitation.\n\nYeah, when using pgcrypto, user must manage their encryption keys. The\ninternal KMS doesn't help that because it manages only keys internally\nused. 
What pg_encrypt() and pg_decrypt() can help is only to hide the\npassword from server logs.\n\n> This should be in another topic/project that is aimed to improve pgcrypto by integrating it with the internal KMS, similar to TDE where it also has to integrate with the internal KMS later.\n>\n\nAgreed.\n\n> So for internal KMS, the only cryptographic functions needed for now is kmgr_wrap_key() and kmgr_unwrap_key() to encapsulate and restore the encryptions keys to satisfy the \"key protection\" life cycle stage. I don't think pg_encrypt() and pg_decrypt() should be part of internal KMS.\n>\n\nAgreed.\n\n>\n> Anyways, I have also reviewed the patch and have a few comments below:\n>\n>\n>\n> (1)\n>\n> The ciphering algorithm in my opinion should contain the algorithm name, key length and block cipher mode, which is similar to openssl's definition.\n>\n>\n>\n> Instead of defining a cipher as PG_CIPHER_AES_CBC, and have key length as separate parameter, I would define them as\n>\n> #define PG_CIPHER_AES128_CBC 0\n>\n> #define PG_CIPHER_AES256_CBC 1\n>\n> #define PG_CIPHER_AES128_CTR 2\n>\n> #define PG_CIPHER_AES256_CTR 3\n>\n\nAgreed. I was concerned that we will end up having many IDs in the\nfuture for example when porting pgcrypto functions into core but I'm\nokay with that change.\n\n>\n>\n> I know PG_CIPHER_AES128_CTR and PG_CIPHER_AES256_CTR are not being used now as these are for the TDE in future, but might as well list them here because this KMS is made to work specifically for TDE as I understand.\n>\n> -----------------------------------------------------------------------------------------------------------\n>\n> /*\n>\n> * Supported symmetric encryption algorithm. 
These identifiers are passed\n>\n> * to pg_cipher_ctx_create() function, and then actual encryption\n>\n> * implementations need to initialize their context of the given encryption\n>\n> * algorithm.\n>\n> */\n>\n> #define PG_CIPHER_AES_CBC 0\n>\n> #define PG_MAX_CIPHER_ID 1\n>\n> -----------------------------------------------------------------------------------------------------------\n>\n>\n>\n> (2)\n>\n> If the cipher algorithms are defined like (1), then there is no need to pass key length as argument to ossl_cipher_ctx_create() function because it already knows the key length based on the cipher definition. Less argument the better.\n\nAgreed.\n\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 3 Jun 2020 15:14:35 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hello Masahiko-san,\n\n> This key manager is aimed to manage cryptographic keys used for\n> transparent data encryption. As a result of the discussion, we\n> concluded it's safer to use multiple keys to encrypt database data\n> rather than using one key to encrypt the whole thing, for example, in\n> order to make sure different data is not encrypted with the same key\n> and IV. Therefore, in terms of TDE, the minimum requirement is that\n> PostgreSQL can use multiple keys.\n>\n> Using multiple keys in PG, there are roughly two possible designs:\n>\n> 1. Store all keys for TDE into the external KMS, and PG gets them from\n> it as needed.\n\n+1\n\n> 2. PG manages all keys for TDE inside and protect these keys on disk\n> by the key (i.g. KEK) stored in the external KMS.\n\n-1, this is the one where you would need arguing.\n\n> There are pros and cons to each design. If I take one cons of #1 as an\n> example, the operation between PG and the external KMS could be\n> complex. 
The operations could be creating, removing and rotate key and\n> so on.\n\nISTM that only create (delete?) are really needed. Rotating is the problem \nof the KMS itself, thus does not need to be managed by pg under #1.\n\n> We can implement these operations in an extension to interact\n> with different kinds of external KMS, and perhaps we can use KMIP.\n\nI would even put that (KMIP protocol stuff) outside pg core.\n\nEven under #2, if some KMS is implemented and managed by pg, I would put \nthe stuff in a separate process which I would probably run with a \ndifferent uid, so that the KEK is not accessible directly by pg, ever.\n\nOnce KMS interactions are managed with an outside process, then what this \nprocess does becomes an interface, and whether this process actually \nmanages the keys or discuss with some external KMS with some KMIP or \nwhatever is irrelevant to pg. Providing an interface means that anyone \ncould implement their KMS fitting their requirements if they comply with \nthe interface/protocol.\n\nNote that I'd be fine with having the current implementation somehow \nwrapped up as an example KMS.\n\n> But the development cost could become high because we might need \n> different extensions for each key management solutions/services.\n\nYes and no. What I suggest is, I think, pretty simple, and I think I can \nimplement it in a few line of script, so the cost is not high, and having \na separate process looks, to me, like a security win and an extensibility \nwin (i.e. another implementation can be provided).\n\n> #2 is better at that point; the interaction between PG and KMS is only\n> GET.\n\nI think that it could be the same with #1. 
I think that having a separate \nprocess is a reasonable security requirement, and if you do that #1 and #2 \nare more or less the same.\n\n> Other databases employes a similar approach are SQL Server and DB2.\n\nToo bad for them:-) I'd still disagree with having the master key inside \nthe database process, even if Microsoft, IBM and Oracle think it is a good \nidea.\n\n> In terms of the necessity of introducing the key manager into PG core,\n> I think at least TDE needs to be implemented in PG core. And as this\n> key manager is for managing keys for TDE, I think the key manager also\n> needs to be introduced into the core so that TDE functionality doesn't\n> depend on external modules.\n\nHmmm.\n\nMy point is that only interactions should be in core.\n\nThe implementation could be in core, but as a separate process.\n\nI agree that pg needs to be able to manage the DEK, so it needs to store \ndata keys.\n\nI still do not understand why an extension, possibly distributed with pg, \nwould not be ok. There may be good arguments for that, but I do not think \nyou provided any yet.\n\n>> Also, I'm not at fully at ease with some of the underlying principles\n>> behind this proposal. Are we re-inventing/re-implementing kerberos or\n>> whatever? Are we re-implementing a brand new KMS inside pg? Why having\n>> our own?\n>\n> As I explained above, this key manager is for managing internal keys\n> used by TDE. It's not an alternative to existing key management\n> solutions/services.\n\nHmmm. This seels to suggest that interacting with something outside should \nbe an option.\n\n> The requirements of this key manager are generating internal keys,\n> letting other PG components use them, protecting them by KEK when\n> persisting,\n\nIf you want that, I'd still argue that you should have a separate process.\n\n> and support KEK rotation. 
It doesn't have a feature like\n> allowing users to store arbitrary keys into this key manager, like\n> other key management solutions/services have.\n\nHmmm.\n\n> I agree that the key used to encrypt data must not be placed in the\n> same host. But it's true only when the key is not protected, right?\n\nThe DEK is needed when encrypting and decrypting, obviously, so it would \nbe there once obtained, it cannot be helped. My concern is about the KEK, \nwhich AFAICS in your code is somewhere in memory accessible by the \npostgres process, which is a no go for me.\n\nThe definition of \"protected\" is fuzzy, it would depend on what the user \nrequires. 
For the second point, the key manager\n> stores local keys inside PG while protecting them by KEK managed\n> outside of PG.\n\nI do not understand. From what I understood from the code, the KEK is \nloaded into postgres process. That is what I'm disagreeing with, only \nneeded DEK should be there.\n\n[...]\n\n-- \nFabien.", "msg_date": "Wed, 3 Jun 2020 09:16:03 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Wed, 3 Jun 2020 at 16:16, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Masahiko-san,\n>\n> > This key manager is aimed to manage cryptographic keys used for\n> > transparent data encryption. As a result of the discussion, we\n> > concluded it's safer to use multiple keys to encrypt database data\n> > rather than using one key to encrypt the whole thing, for example, in\n> > order to make sure different data is not encrypted with the same key\n> > and IV. Therefore, in terms of TDE, the minimum requirement is that\n> > PostgreSQL can use multiple keys.\n> >\n> > Using multiple keys in PG, there are roughly two possible designs:\n> >\n> > 1. Store all keys for TDE into the external KMS, and PG gets them from\n> > it as needed.\n>\n> +1\n\nIn this approach, encryption keys obtained from the external KMS are\ndirectly used to encrypt/decrypt data. What KEK and DEK are you\nreferring to in this approach?\n\n>\n> > 2. PG manages all keys for TDE inside and protect these keys on disk\n> > by the key (i.g. KEK) stored in the external KMS.\n>\n> -1, this is the one where you would need arguing.\n>\n> > There are pros and cons to each design. If I take one cons of #1 as an\n> > example, the operation between PG and the external KMS could be\n> > complex. The operations could be creating, removing and rotate key and\n> > so on.\n>\n> ISTM that only create (delete?) are really needed. 
Rotating is the problem\n> of the KMS itself, thus does not need to be managed by pg under #1.\n\nWith your idea how is the key rotation going to be performed? After\ninvoking key rotation on the external KMS we need to re-encrypt all\ndata encrypted with the old keys? Or you assume that the external KMS\nemployes something like 2-tier key hierarchy?\n\n>\n> > We can implement these operations in an extension to interact\n> > with different kinds of external KMS, and perhaps we can use KMIP.\n>\n> I would even put that (KMIP protocol stuff) outside pg core.\n>\n> Even under #2, if some KMS is implemented and managed by pg, I would put\n> the stuff in a separate process which I would probably run with a\n> different uid, so that the KEK is not accessible directly by pg, ever.\n>\n> Once KMS interactions are managed with an outside process, then what this\n> process does becomes an interface, and whether this process actually\n> manages the keys or discuss with some external KMS with some KMIP or\n> whatever is irrelevant to pg. Providing an interface means that anyone\n> could implement their KMS fitting their requirements if they comply with\n> the interface/protocol.\n\nJust to be clear we don't keep KEK on neither shared memory nor disk.\nPostmaster and a backend who executes pg_rotate_cluster_passphrase()\nget KEK and use it to (re-)encrypt internal keys. But after that they\nimmediately free it. The encryption keys we need to store inside\nPostgreSQL are DEK.\n\n>\n> Note that I'd be fine with having the current implementation somehow\n> wrapped up as an example KMS.\n>\n> > But the development cost could become high because we might need\n> > different extensions for each key management solutions/services.\n>\n> Yes and no. What I suggest is, I think, pretty simple, and I think I can\n> implement it in a few line of script, so the cost is not high, and having\n> a separate process looks, to me, like a security win and an extensibility\n> win (i.e. 
another implementation can be provided).\n\nHow can we get multiple keys from the external KMS? I think we will\nneed to save something like identifiers for each encryption key\nPostgres needs in the core and ask the external KMS for the key by the\nidentifier via an extension. Is that right?\n\n>\n> > #2 is better at that point; the interaction between PG and KMS is only\n> > GET.\n>\n> I think that it could be the same with #1. I think that having a separate\n> process is a reasonable security requirement, and if you do that #1 and #2\n> are more or less the same.\n>\n> > Other databases employes a similar approach are SQL Server and DB2.\n>\n> Too bad for them:-) I'd still disagree with having the master key inside\n> the database process, even if Microsoft, IBM and Oracle think it is a good\n> idea.\n>\n> > In terms of the necessity of introducing the key manager into PG core,\n> > I think at least TDE needs to be implemented in PG core. And as this\n> > key manager is for managing keys for TDE, I think the key manager also\n> > needs to be introduced into the core so that TDE functionality doesn't\n> > depend on external modules.\n>\n> Hmmm.\n>\n> My point is that only interactions should be in core.\n>\n> The implementation could be in core, but as a separate process.\n>\n> I agree that pg needs to be able to manage the DEK, so it needs to store\n> data keys.\n>\n> I still do not understand why an extension, possibly distributed with pg,\n> would not be ok. There may be good arguments for that, but I do not think\n> you provided any yet.\n\nHmm I think I don't fully understand your idea yet. With the current\npatch, KEK could be obtained by either postmaster or backend processs\nwho execute pg_rotate_cluster_passphrase() and KEK isn't stored\nanywhere on shared memory and disk. 
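The discipline described here, where the KEK lives only briefly in a process's local memory and is freed right after use, depends on the cleanup not being optimized away: a plain memset() of a buffer that is about to go out of scope is a dead store the compiler may elide. A portable sketch of the usual fix (the wipe name is illustrative; explicit_bzero() or C11's memset_s() serve the same purpose where available):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Zero a buffer through a volatile pointer so the stores cannot be
 * treated as dead writes and removed by the optimizer.
 */
static void
wipe(void *buf, size_t len)
{
	volatile uint8_t *p = (volatile uint8_t *) buf;

	while (len-- > 0)
		*p++ = 0;
}
```

With this pattern, kmgr_verify_passphrase()-style code can derive the KEK into a stack buffer, unwrap the internal keys, and wipe the buffer before returning, so the KEK's lifetime really is just the duration of the unwrap calls.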
With your idea, KEK always is\nobtained by the particular process by a way provided by an extension.\nIs my understanding right?\n\n>\n> >> Also, I'm not at fully at ease with some of the underlying principles\n> >> behind this proposal. Are we re-inventing/re-implementing kerberos or\n> >> whatever? Are we re-implementing a brand new KMS inside pg? Why having\n> >> our own?\n> >\n> > As I explained above, this key manager is for managing internal keys\n> > used by TDE. It's not an alternative to existing key management\n> > solutions/services.\n>\n> Hmmm. This seels to suggest that interacting with something outside should\n> be an option.\n>\n> > The requirements of this key manager are generating internal keys,\n> > letting other PG components use them, protecting them by KEK when\n> > persisting,\n>\n> If you want that, I'd still argue that you should have a separate process.\n>\n> > and support KEK rotation. It doesn’t have a feature like\n> > allowing users to store arbitrary keys into this key manager, like\n> > other key management solutions/services have.\n>\n> Hmmm.\n>\n> > I agree that the key used to encrypt data must not be placed in the\n> > same host. But it's true only when the key is not protected, right?\n>\n> The DEK is needed when encrypting and decrypting, obviously, so it would\n> be there once obtained, it cannot be helped. My concern is about the KEK,\n> which AFAICS in your code is somewhere in memory accessible by the\n> postgres process, which is a no go for me.\n\nNo. In the current patch, we don't save KEK anywhere on shared memory\nand disk. Once a process (postmaster or backend) used KEK stored in a\nPgAeadCtx it frees this context. We put the internal keys, DEK, in the\nshared buffer during startup.\n\n>\n> The definition of \"protected\" is fuzzy, it would depend on what the user\n> requires. 
Maybe protected for someone is \"in a file which is only readable\n> by postgres\", and for someone else it means \"inside an external hardware\n> components activated by the fingerprint of the CEO\".\n>\n> > In\n> > this key manager, since we protect all internal keys by KEK it's no\n> > problem unless KEK is leaked. KEK can be obtained from outside key\n> > management solutions/services through cluster_passphrase_command.\n>\n> Again, I do not think that the KEK should be in postgres process, ever.\n>\n> >>\n> >> Also, implementing a crash-safe key rotation algorithm does not look like\n> >> inside pg backend, that is not its job.\n> >\n> > The key rotation this key manager has is KEK rotation, which is very\n> > important. Without KEK rotation, when KEK is leaked an attacker can\n> > get database data by disk theft. Since KEK is responsible for\n> > encrypting all internal keys it's necessary to re-encrypt the internal\n> > keys when KEK is rotated. I think PG is the only role that can do that\n> > job.\n>\n> I'm not claiming that KEK rotation is a bad thing, I'm saying that it\n> should not be postgres problem. My issue is where you put the thing, not\n> about the thing itself.\n>\n> > I think this key manager satisfies the fist point by\n> > cluster_passphrase_command. For the second point, the key manager\n> > stores local keys inside PG while protecting them by KEK managed\n> > outside of PG.\n>\n> I do not understand. From what I understood from the code, the KEK is\n> loaded into postgres process. 
That is what I'm disagreeing with, only\n> needed DEK should be there.\n\nPlease refer to kmgr_verify_passphrase() that is responsible for\nderiving KEK from passphrase, checking if the given passphrase is\ncorrect by unwrapping the internal keys, and storing the internal keys\ninto the shared buffer:\n\n+bool\n+kmgr_verify_passphrase(char *passphrase, int passlen,\n+ CryptoKey *keys_in, CryptoKey *keys_out, int nkeys)\n+{\n+ PgAeadCtx *tmpctx;\n+ uint8 user_enckey[PG_AEAD_ENC_KEY_LEN];\n+ uint8 user_hmackey[PG_AEAD_MAC_KEY_LEN];\n+\n+ /*\n+ * Create temporary wrap context with encryption key and HMAC key extracted\n+ * from the passphrase.\n+ */\n+ kmgr_derive_keys(passphrase, passlen, user_enckey, user_hmackey);\n+ tmpctx = pg_create_aead_ctx(user_enckey, user_hmackey);\n+\n+ for (int i = 0; i < nkeys; i++)\n+ {\n+\n+ if (!kmgr_unwrap_key(tmpctx, &(keys_in[i]), &(keys_out[i])))\n+ {\n+ /* The passphrase is not correct */\n+ pg_free_aead_ctx(tmpctx);\n+ return false;\n+ }\n+ }\n+\n+ /* The passphrase is correct, free the cipher context */\n+ pg_free_aead_ctx(tmpctx);\n+\n+ return true;\n+}\n\nWe free tmpctx having KEK immediately after use. Or your argument is\nthat we should not put KEK even onto a postgres process's local\nmemory?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 3 Jun 2020 19:34:46 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Wed, Jun 3, 2020 at 09:16:03AM +0200, Fabien COELHO wrote:\n> > > Also, I'm not at fully at ease with some of the underlying principles\n> > > behind this proposal. Are we re-inventing/re-implementing kerberos or\n> > > whatever? Are we re-implementing a brand new KMS inside pg? 
Why having\n> > > our own?\n> > \n> > As I explained above, this key manager is for managing internal keys\n> > used by TDE. It's not an alternative to existing key management\n> > solutions/services.\n> \n> Hmmm. This seems to suggest that interacting with something outside should\n> be an option.\n\nOur goal is not to implement every possible security idea someone has,\nbecause we will never finish, and the final result would be too complex\nto be usable. You will need to explain exactly why having a separate\nprocess has value over coding/user complexity, and you will need to get\nagreement from a sufficient number of people to move that idea forward.\n\n> > The key rotation this key manager has is KEK rotation, which is very\n> > important. Without KEK rotation, when KEK is leaked an attacker can\n> > get database data by disk theft. Since KEK is responsible for\n> > encrypting all internal keys it's necessary to re-encrypt the internal\n> > keys when KEK is rotated. I think PG is the only role that can do that\n> > job.\n> \n> I'm not claiming that KEK rotation is a bad thing, I'm saying that it should\n> not be postgres problem. My issue is where you put the thing, not about the\n> thing itself.\n> \n> > I think this key manager satisfies the first point by\n> > cluster_passphrase_command. For the second point, the key manager\n> > stores local keys inside PG while protecting them by KEK managed\n> > outside of PG.\n> \n> I do not understand. From what I understood from the code, the KEK is loaded\n> into postgres process. That is what I'm disagreeing with, only needed DEK\n> should be there.\n\nOne option would be to send the data needing to be encrypted to an\nexternal command, and get the decrypted data back. In that way, the KEK\nis never on the Postgres server. 
However, the API for doing such an\ninterface seems very complex and could lead to failures.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 3 Jun 2020 15:57:31 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hello Bruce,\n\n>> Hmmm. This seels to suggest that interacting with something outside \n>> should be an option.\n>\n> Our goal is not to implement every possible security idea someone has,\n> because we will never finish, and the final result would be too complex\n> to be unable.\n\nSure. I'm trying to propose something both simple and extensible, so that \nother people could plug their own KMS if they are not fully satisfied with \nthe way the internal pg KMS works, which IMHO should be the case if \nsomeone is motivated and paranoid enough to setup a KMS in the first \nplace.\n\n> You will need to explain exactly why having a separate process has value \n> over coding/user complexity, and you will need to get agreement from a \n> sufficient number of people to move that idea forward.\n\nISTM that the value is simple: The whole KMS idea turns around a \"KEK\", \nwhich is a secret key which allows to unlock/retrieve/recompute many data \nkeys, aka DEKs. 
Losing the KEK basically means losing all data keys, \npast, present and possibly future, depending on how the KEK/DEK mechanism \noperates internally.\n\nSo the thing you should not want is to lose your KEK.\n\nKeeping it inside pg process means that any pg process compromise would \nresult in the KEK being compromised as well, while the whole point of \ndoing this KMS business was to provide security by isolating realms of \ndata encryption.\n\nIf you provide an interface instead, which I'm advocating, then where the \nKEK is does not concern pg, which has just to ask for DEKs. A compromise \nof pg would compromise local DEKs, but not the KEK \"master key\". The KEK \nmay be somewhere on the same host, in another process, or possibly on \nanother host, on some attached specialized quantum hardware inaccessible to \nhuman beings. Postgres should not decide where the user should put its \nKEK, because it would depend on the threat model being addressed that you \ndo not know.\n\nFrom an implementation point of view, what I suggest is reasonably simple \nand allows people to interface with the KMS of their choice, including one \nbased on the patch, which would be a demo about what can be done, but \nother systems would be accessible just as well. The other software \nengineering aspect is that a KMS is a complex/sensitive thing, so \nreinventing our own and forcing it on users seems like a bad idea.\n\n>> From what I understood from the code, the KEK is loaded into postgres \n>> process. That is what I'm disagreeing with, only needed DEK should be \n>> there.\n>\n> One option would be to send the data needing to be encrypted to an\n> external command, and get the decrypted data back. In that way, the KEK\n> is never on the Postgres server. 
However, the API for doing such an\n> interface seems very complex and could lead to failures.\n\nI was more thinking of an interface to retrieve DEKs, but to still keep \nencryption/decryption inside postgres, to limit traffic, but what you \nsuggest could be a valid option, so maybe should be allowed.\n\nI disagree with the implementation complexity, though.\n\nBasically the protocol only function is sending \"GET \n<opaque-key-identifier>\" and retrieving a response which is either the DEK \nor an error, which looks like a manageable complexity. Attached a \nsimplistic server-side implementation of that for illustration.\n\nIf you want externalized DEK as well, it would be sending \"ENC/DEC \n<key-identifier> <data>\" and the response is an error, or the translated \ndata. Looks manageable as well. Allowing both approaches looks ok.\n\nObviously it requires some more thinking and design, but my point is that \npostgres should not hold a KEK, ever, nor presume how DEK are to be \nmanaged by a DMS, and that is not very difficult to achieve by putting it \noutside of pg and defining how interactions take place. Providing a \nreference/example implementation would be nice as well, and Masahiko-san \ncode can be rewrapped quite easily.\n\n-- \nFabien.", "msg_date": "Fri, 5 Jun 2020 15:34:54 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Fri, Jun 5, 2020 at 03:34:54PM +0200, Fabien COELHO wrote:\n> Obviously it requires some more thinking and design, but my point is that\n> postgres should not hold a KEK, ever, nor presume how DEK are to be managed\n> by a DMS, and that is not very difficult to achieve by putting it outside of\n> pg and defining how interactions take place. 
Providing a reference/example\n> implementation would be nice as well, and Masahiko-san code can be rewrapped\n> quite easily.\n\nWell, the decrypted keys are already stored in backend memory, so what\nrisk does haveing the KEK in memory for a brief period avoid?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 10 Jun 2020 13:40:45 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "\nHello Bruce,\n\nSorry for the length (yet again) of this answer, I'm trying to make my \npoint as clear as possible.\n\n>> Obviously it requires some more thinking and design, but my point is that\n>> postgres should not hold a KEK, ever, nor presume how DEK are to be managed\n>> by a DMS, and that is not very difficult to achieve by putting it outside of\n>> pg and defining how interactions take place. Providing a reference/example\n>> implementation would be nice as well, and Masahiko-san code can be rewrapped\n>> quite easily.\n>\n> Well, the decrypted keys are already stored in backend memory,\n\nThe fact that if the pg process is compromised then the DEK and data \nencrypted are compromised is more or less unavoidable (maybe only the data \ncould be compromised though, but not the DEK, depending on how the \nencryption/decryption operates).\n\n> so what risk does haveing the KEK in memory for a brief period avoid?\n\nMy understanding is that the KEK does not protect one key, but all keys, \nthus all data, possibly even past or future, so it loss impacts more than \nthe here and now local process.\n\nIf the KEK is ever present in pg process, it presumes that the threat \nmodel being addressed allows its loss if the process is compromised, i.e. 
\nall (past, present, future) security properties are void once the process \nis compromised.\n\nThis may be okay for some use cases, but I can easily foresee that it \nwould not be for all. I can think of use cases where the user/auditor says \nthat the KEK should be elsewhere, and I would tend to agree.\n\nSo my point is that the implementation should allow it, i.e. define a \nsimple interface, and possibly a reference implementation with good \nproperties which might fit some/typical security requirements, and the \npatch mostly fits that need, but for the possible isolation of the KEK.\n\nISTM that a reasonable point of comparision is the design of kerberos, \nwith a central authority/server which authenticate parties and allow them \nto authenticate one another and communicate securely.\n\nThe design means that any compromised host/service would compromise all \nits interaction with other parties, but not communications between third \nparties. The compromission stays local, with the exception is the kerberos \nserver itself, which somehow holds all the keys.\n\nFor me the KEK is basically the kerberos server, you should provide means \nto allow the user to isolate that where they think it should be, and not \nenforce that it is within postgres process.\n\nAnother point is that what I suggest does not seem very hard from an \nimplementation point of view, and allows extensibility, which is also a \nwin.\n\nLastly, I still think that all this, whatever the design, should be \npackaged as an extension, unless it is really impossible to do so. 
I would \nalso try to separate the extension for KMS interaction with the extension \nfor actually using the keys, so that the user could change the underlying \ncryptographic primitives as they see fit.\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 11 Jun 2020 09:07:40 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Thu, 11 Jun 2020 at 16:07, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Bruce,\n>\n> Sorry for the length (yet again) of this answer, I'm trying to make my\n> point as clear as possible.\n\nThank you for your explanation!\n\n>\n> >> Obviously it requires some more thinking and design, but my point is that\n> >> postgres should not hold a KEK, ever, nor presume how DEK are to be managed\n> >> by a DMS, and that is not very difficult to achieve by putting it outside of\n> >> pg and defining how interactions take place. Providing a reference/example\n> >> implementation would be nice as well, and Masahiko-san code can be rewrapped\n> >> quite easily.\n> >\n> > Well, the decrypted keys are already stored in backend memory,\n>\n> The fact that if the pg process is compromised then the DEK and data\n> encrypted are compromised is more or less unavoidable (maybe only the data\n> could be compromised though, but not the DEK, depending on how the\n> encryption/decryption operates).\n>\n> > so what risk does haveing the KEK in memory for a brief period avoid?\n>\n> My understanding is that the KEK does not protect one key, but all keys,\n> thus all data, possibly even past or future, so it loss impacts more than\n> the here and now local process.\n>\n> If the KEK is ever present in pg process, it presumes that the threat\n> model being addressed allows its loss if the process is compromised, i.e.\n> all (past, present, future) security properties are void once the process\n> is compromised.\n\nWhy we should not put KEK in pg process but 
it's okay for other\nprocesses? I guess you're talking about a threat when a malicious user\nlogged in to the OS (or at least accessible) but I thought there is no\ndifference between pg process and other processes in terms of the\nprocess being compromised. So the solution, in that case, would be to\noutsource encryption/decryption to external servers as Bruce\nmentioned.\n\nRegards,\n\n\n--\nMasahiko Sawada            http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 11 Jun 2020 17:39:35 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "\nHello Masahiko-san,\n\n>> If the KEK is ever present in pg process, it presumes that the threat\n>> model being addressed allows its loss if the process is compromised, i.e.\n>> all (past, present, future) security properties are void once the process\n>> is compromised.\n>\n> Why we should not put KEK in pg process but it's okay for other\n> processes?\n\nMy point is \"elsewhere\".\n\nIndeed, it could be on another process on the same host, in which case I'd \nrather have the process run under a different uid, which means another \ncompromission would be required if pg is compromised locally ; it could \nalso be in a process on another host ; it could be on some special \nhardware. Your imagination is the limit.\n\n> I guess you're talking about a threat when a malicious user logged in to the OS \n> (or at least accessible) but I thought there is no difference between pg \n> process and other processes in terms of the process being compromised.\n\nProcesses are isolated based on uid, unless root is compromised.
Once an id \nis compromised (eg \"postgres\"), the hacker basically has access to all \nfiles and processes accessible to that id.\n\n> So the solution, in that case, would be to outsource \n> encryption/decryption to external servers as Bruce mentioned.\n\nHosting stuff (keys, encryption) on another server is indeed an option if \n\"elsewhere\" is implemented.\n\n From a design point of view:\n\n 0. KEK, DEK & crypto are managed by pg\n\n 1. DEK & crypto are managed by pg,\n but KEK is outside pg.\n\n 2. everything is managed out of pg.\n\nI think that both 1 & 2 are valid options, which do not require the same \ninterface. If you have 1, you can do 0 by giving KEK to a pg process.\n\nHow DEK are identified and created with the KEK should also be something \nopen, left to the implementation; the interface should not need to know.\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 11 Jun 2020 13:03:15 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Thu, 11 Jun 2020 at 20:03, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Masahiko-san,\n>\n> >> If the KEK is ever present in pg process, it presumes that the threat\n> >> model being addressed allows its loss if the process is compromised, i.e.\n> >> all (past, present, future) security properties are void once the process\n> >> is compromised.\n> >\n> > Why we should not put KEK in pg process but it's okay for other\n> > processes?\n>\n> My point is \"elsewhere\".\n>\n> Indeed, it could be on another process on the same host, in which case I'd\n> rather have the process run under a different uid, which means another\n> compromission would be required if pg is compromised locally ; it could\n> also be in a process on another host ; it could be on some special\n> hardware.
Your imagination is the limit.\n>\n> > I guess you're talking about a threat when a malicious user logged in to the OS\n> > (or at least accessible) but I thought there is no difference between pg\n> > process and other processes in terms of the process being compromised.\n>\n> Processes are isolated based on uid, unless root is compromised. Once an id\n> is compromised (eg \"postgres\"), the hacker basically has access to all\n> files and processes accessible to that id.\n>\n> > So the solution, in that case, would be to outsource\n> > encryption/decryption to external servers as Bruce mentioned.\n>\n> Hosting stuff (keys, encryption) on another server is indeed an option if\n> \"elsewhere\" is implemented.\n\nIf I understand your idea correctly we put both DEK and KEK\n\"elsewhere\", and a postgres process gets only DEK from it. It seems to\nme this idea assumes that the place storing encryption keys employs a\n2-tier key hierarchy or similar thing. What if the user wants to or\nhas to manage a single encryption key? For example, storing an\nencryption key for PostgreSQL TDE into a file in a safe server instead\nof KMS using DEK and KEK because of budgets or requirements whatever.\nIn this case, if the user does key rotation, that encryption key would\nneed to be rotated, resulting in the user needing to re-encrypt all\ndatabase data encrypted with the old key. It should work but what do you\nthink about how postgres does key rotation and re-encryption?\n\nRegards,\n\n--\nMasahiko Sawada            http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 12 Jun 2020 15:09:39 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "\nHello Masahiko-san,\n\nI'm not sure I understood your concern.
I try to answer below.\n\n> If I understand your idea correctly we put both DEK and KEK\n> \"elsewhere\", and a postgres process gets only DEK from it.\n\nYes, that is one of the options.\n\n> It seems to me this idea assumes that the place storing encryption keys \n> employs a 2-tier key hierarchy or similar thing.\n\nISTM that there is no such assumption. There is the assumption that there \nis an interface to retrieve DEK. What is done behind the interface to \nretrieve this DEK should be irrelevant to pg. Having them secured by a \nKEK looks like a reasonable design, though. Maybe keys are actually \nstored. Maybe they are computed based on something, eg key identifier and \nsome secret. Maybe there is indeed a 2-tier something. Maybe whatever.\n\n> What if the user wants to or has to manage a single encryption key?\n\nThen it has one key identifier and it retrieves one key from the DMS. \nHaving a \"management system\" for a singleton looks like overkill though, \nbut it should work.\n\n> For example, storing an encryption key for PostgreSQL TDE into a file in \n> a safe server instead of KMS using DEK and KEK because of budgets or \n> requirements whatever.\n\nGood.
If you have an interface to retrieve a key, then it can probably \ncontact said server to get it when needed?\n\n> In this case, if the user does key rotation, that encryption key would\n> need to be rotated, resulting in the user needing to re-encrypt all\n> database data encrypted with the old key.\n\nSure, by definition actually changing the key requires a \ndecryption/encryption cycle on all data.\n\n> It should work but what do you think about how postgres does key \n> rotation and re-encryption?\n\nIf pg actually has the DEK, then it means that while the re-encryption is \nperformed it has to manage two keys simultaneously, this is a question for \nwhat is done on pg server with the keys, not really about the DMS ?\n\nIf the \"elsewhere\" service does the encryption, maybe the protocol could \ninclude it, eg something like:\n\nREC key1-id key2-id data-encrypted-with-key1\n -> data-encrypted-with-key2\n\nBut it could also achieve the same thing with two commands, eg:\n\nDEC key1-id data-encrypted-with-key1\n -> clear-text-data\n\nENC key2-id clear-text-data\n -> data-encrypted-with-key2\n\nThe question is what should be put in the protocol, and I would tend to \nthink that some careful design time should be put in it.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 12 Jun 2020 09:17:37 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Fri, 12 Jun 2020 at 16:17, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Masahiko-san,\n>\n> I'm not sure I understood your concern. I try to answer below.\n>\n> > If I understand your idea correctly we put both DEK and KEK\n> > \"elsewhere\", and a postgres process gets only DEK from it.\n>\n> Yes, that is one of the options.\n>\n> > It seems to me this idea assumes that the place storing encryption keys\n> > employs a 2-tier key hierarchy or similar thing.\n>\n> ISTM that there is no such assumption.
There is the assumption that there\n> is an interface to retrieve DEK. What is done being the interface to\n> retrieve this DEK should be irrelevant to pg. Having them secure by a\n> KEK looks like an reasonable design, though. Maybe keys are actually\n> stored. Maybe thay are computed based on something, eg key identifier and\n> some secret. Maybe there is indeed a 2-tier something. Maybe whatever.\n>\n> > What if the user wants to or has to manage a single encryption key?\n>\n> Then it has one key identifier and it retrieve one key from the DMS.\n> Having a \"management system\" for a singleton looks like overkill though,\n> but it should work.\n>\n> > For example, storing an encryption key for PostgreSQL TDE into a file in\n> > a safe server instead of KMS using DEK and KEK because of budgets or\n> > requirements whatever.\n>\n> Good. If you have an interface to retrieve a key, then it can probably\n> contact said server to get it when needed?\n>\n> > In this case, if the user does key rotation, that encryption key would\n> > need to be rotated, resulting in the user would need to re-encrypt all\n> > database data encrypted with old key.\n>\n> Sure, by definition actually changing the key requires a\n> decryption/encryption cycle on all data.\n>\n> > It should work but what do you think about how postgres does key\n> > rotation and re-encryption?\n>\n> If pg actually has the DEK, then it means that while the re-encryption is\n> performed it has to manage two keys simultenaously, this is a question for\n> what is done on pg server with the keys, not really about the DMS ?\n\nYes. Your explanation made my concern clear. 
Thanks!\n\n>\n> If the \"elsewhere\" service does the encryption, maybe the protocol could\n> include it, eg something like:\n>\n> REC key1-id key2-id data-encrypted-with-key1\n> -> data-encrypted-with-key2\n>\n> But it could also achieve the same thing with two commands, eg:\n>\n> DEC key1-id data-encrypted-with-key1\n> -> clear-text-data\n>\n> ENC key2-id clear-text-data\n> -> data-encrypted-with-key2\n>\n> The question is what should be put in the protocol, and I would tend to\n> think that some careful design time should be put in it.\n>\n\nSummarizing the discussed points so far, I think that the major\nadvantage points of your idea compared to the current patch's\narchitecture are:\n\n* More secure. Because it never loads KEK in postgres processes we can\nlower the likelihood of KEK leakage.\n* More extensible. We will be able to implement more protocols to\noutsource other operations to the external place.\n\nOn the other hand, here are some downsides and issues:\n\n* The external place needs to manage more encryption keys than the\ncurrent patch does. Some cloud key management services are charged by\nthe number of active keys and key operations. So the number of keys\npostgres requires affects the charges. It'd be worse if we were to\nhave keys per table.\n\n* If this approach supports only GET protocol, the user needs to\ncreate encryption keys with appropriate ids in advance so that\npostgres can get keys by id. If postgres TDE creates keys as needed,\nCREATE protocol would also be required.\n\n* If we need only GET protocol, the current approach (i.e.\ncluster_passphrase_command) would be more simple. I imagine the\ninterface between postgres and the extension is a C function. This\napproach is more extensible but it also means extensions need to\nsupport multiple protocols, leading to increased complexity and\ndevelopment cost.\n\n* This approach necessarily doesn’t eliminate the data leakage threat\ncompletely caused by process compromisation.
Since DEK is placed in\npostgres process memory, it’s still possible that if a postgres\nprocess is compromised the attacker can steal database data. The\nbenefit of lowering the possibility of KEK leakage is to deal with the\nthreat that the attacker sees database data encrypted by past or\nfuture DEK protected by the stolen KEK.\n\n* An open question is, as you previously mentioned, how to verify the\nkey obtained from the external place is the right key.\n\nAnything else we need to note?\n\nFinally, please understand I’m not controverting your idea but just\ntrying to understand which approach is better from multiple\nperspectives.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 12 Jun 2020 23:46:14 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Fri, Jun 12, 2020 at 09:17:37AM +0200, Fabien COELHO wrote:\n> The question is what should be put in the protocol, and I would tend to\n> think that some careful design time should be put in it.\n\nI still don't see the value of this vs. its complexity.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 12 Jun 2020 13:33:38 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hello Masahiko-san,\n\n> Summarizing the discussed points so far, I think that the major\n> advantage points of your idea comparing to the current patch's\n> architecture are:\n>\n> * More secure. Because it never loads KEK in postgres processes we can\n> lower the likelihood of KEK leakage.\n\nYes.\n\n> * More extensible. 
We will be able to implement more protocols to\n> outsource other operations to the external place.\n\nYes.\n\n> On the other hand, here are some downsides and issues:\n>\n> * The external place needs to manage more encryption keys than the\n> current patch does.\n\nWhy? If the external place is just a separate process on the same host, \nprobably it would manage the very same amount as what your patch.\n\n> Some cloud key management services are charged by the number of active \n> keys and key operations. So the number of keys postgres requires affects \n> the charges. It'd be worse if we were to have keys per table.\n\nPossibly. Note that you do not have to use a cloud storage paid as a \nservice. However, you could do it if there is an interface, because it \nwould allow postgres to do so if the user wishes that. That is the point \nof having an interface that can be implemented differently for different \nuse cases.\n\n> * If this approach supports only GET protocol, the user needs to\n> create encryption keys with appropriate ids in advance so that\n> postgres can get keys by id. If postgres TDE creates keys as needed,\n> CREATE protocol would also be required.\n\nI'm not sure. ISTM that if there is a KMS to manage keys, it could be its \nresponsability to actually create a key, however the client (pg) would \nhave to request it, basically say \"given me a new key for this id\".\n\nThis could even work with a \"get\" command only, if the KMS is expected to \ncreate a new key when asked for a key which does not exists yet. ISTM that \nthe client could (should?) only have to create identifiers for its keys.\n\n> * If we need only GET protocol, the current approach (i.g.\n> cluser_passphase_command) would be more simple. I imagine the\n> interface between postgres and the extension is C function.\n\nYes. 
ISTM that can be pretty simple, something like:\n\nA guc to define the process to start the interface (having a process means \nthat its uid can be changed), which would communicate on its stdin/stdout.\n\nA guc to define how to interact with the interface (eg whether DEK are \nretrieved, or whether the interface is to ask for encryption/decryption, \nor possibly some other modes).\n\nA few function:\n\n - set_key(<local-id:int>, <key-identifier:bytea>);\n # may retrieve the DEK, or only note that local id of some key.\n\n - encode(<local-id:int>, <data:bytea>) -> <encrypted-data:bytea>\n # may fail if no key is associated to local-id\n # or if the service is down somehow\n\n - decode(<local-id>, <encrypted-data>) -> <data>\n # could also fail if there is some integrity check associated\n\n> This approach is more extensible\n\nYep.\n\n> but it also means extensions need to support multiple protocols, leading \n> to increase complexity and development cost.\n\nI do not understand what you mean by \"multiple protocols\". For me there is \none protocol, possibly a few commands in this protocol between client \n(postgres) and DMS. Anyway, sending \"GET <key-id>\" to retreive a DEK, for \ninstance, does not sound \"complex\". 
Here is some pseudo code:\n\nFor get_key:\n\n if (mode of operation is to have DEKs locally)\n try\n send to KMS \"get <key-id>\"\n keys[local-id] = answer\n catch & rethrow possible errors\n elif (mode is to keep DEKs remote)\n key_id[local-id] = key-id;\n else ...\n\nFor encode, the code is basically:\n\n if (has_key(local-id))\n if (mode of operation is to have DEKs locally)\n return some_encode(key[local-id], data);\n elif (mode is to keep DEKs remote)\n send to KMS \"encode key_id[local-id] data\"\n return answer; # or error\n else ...\n else\n throw error local-id has no associated key;\n\ndecode is more or less the same as encode.\n\nAnother thing to consider is how the client \"proves\" its identity \nto the KMS interface, which might suggest some provisions when starting a \nprocess, but you already have things in your patch to deal with the KEK, \nwhich could be turned into some generic auth.\n\n> * This approach necessarily doesn’t eliminate the data leakage threat\n> completely caused by process compromisation.\n\nSure, if the process has decrypted data or DEK or whatever, then the \nprocess compromission leaks these data. My point is to try to limit the \nleakage potential of a process compromission.\n\n> Since DEK is placed in postgres process memory,\n\nMay be placed, depending on the mode of operation.\n\n> it’s still possible that if a postgres process is compromised the \n> attacker can steal database data.\n\nObviously.
This cannot be helped if pg is to hold unencrypted data.\n\n> The benefit of lowering the possibility of KEK leakage is to deal with \n> the threat that the attacker sees database data encrypted by past or \n> future DEK protected by the stolen KEK.\n\nYes.\n\n> * An open question is, as you previously mentioned, how to verify the\n> key obtained from the external place is the right key.\n\nIt would succeed in decrypting data if there is some associated integrity \ncheck.\n\nNote that from a cryptographic point of view, depending on the use case, \nit may be a desirable property that you cannot tell whether it is the \nright one.\n\n> Anything else we need to note?\n\nDunno.\n\nI would like to see some threat model, and what properties you would \nexpect depending on the hypothesis.\n\nFor instance, I guess that the minimal you would like is that stolen \ndatabase cold data (PGDATA contents) should not allow recovering clear \ncontents, but only encrypted stuff which is the whole point of encrypting \ndata in the first place.
This is the \"here and now\".\n\nISTM that the only possible achievement of the current patch is the above.\n\nThen you should also consider past data (prior states of PGDATA which may \nhave been stored somewhere the attacker might recover) and future data \n(that the attacker may be able to recover later).\n\nNow what happens on those (past, present, future) data on:\n\n - stolen DEK\n\n - stolen KEK\n\n - stolen full cold data (whole disk stolen)\n\n - access to process & live data\n (pg account compromission at some point in time)\n\n - access process & live data & ability to issue more commands at some\n point in time...\n\n - access to full host live data (root compromission)\n\n - ...\n\n - network full compromission (eg AD has been subverted, this is the usual\n target for taking down everything on a network if every\n authentication and authorization is managed by it, which is often\n the case in a corporate network).\n\n - the pg admin is working for the attacker...\n\n - the sys admin is working for the attacker...\n\n - ...\n\nIn the end anyway you would lose, the question is how soon, how many \ncompromissions are necessary.\n\n> Finally, please understand I�m not controverting your idea but just\n> trying to understand which approach is better from multiple\n> perspectives.\n\nThe point of a discussion is basically to present arguments.\n\n-- \nFabien.", "msg_date": "Fri, 12 Jun 2020 22:59:37 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "\nHello Bruce.\n\n>> The question is what should be put in the protocol, and I would tend to\n>> think that some careful design time should be put in it.\n>\n> I still don't see the value of this vs. its complexity.\n\nDunno. 
I'm looking for the value of having such a thing in core.\n\nISTM that there are no clear design goals of the system, no clear \ndescription of the target use case(s), no clear explanations of the \nunderlying choices (in something like a README), no saying what it \nachieves and what it does not achieve. It is only code.\n\nIf the proposed thing is very specific to one use case, which may be more \nor less particular, then I'd say the stuff should really be an external \nextension, and you do not need to ask for a review. Call it pgcryptoXYZ \nand it is done.\n\nHowever, if the stuff is amenable to many/more use cases, then it may \nstill be an extension because it is specialized somehow and not everyone \nwould like to have it if they do not use it, but having it in core would \nbe much more justified. Also, it would have to be a little more \"complex\" \nto be extensible, sure. I do not think that it needs to be very complex in \nthe end, but it needs to be carefully designed to be extensible.\n\nNote I still do not see why it should be in core directly, i.e. not an \nextension. I'm yet to see a convincing argument about that.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 12 Jun 2020 23:26:42 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sat, 13 Jun 2020 at 05:59, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Masahiko-san,\n>\n> > Summarizing the discussed points so far, I think that the major\n> > advantage points of your idea comparing to the current patch's\n> > architecture are:\n> >\n> > * More secure. Because it never loads KEK in postgres processes we can\n> > lower the likelihood of KEK leakage.\n>\n> Yes.\n>\n> > * More extensible. 
We will be able to implement more protocols to\n> > outsource other operations to the external place.\n>\n> Yes.\n>\n> > On the other hand, here are some downsides and issues:\n> >\n> > * The external place needs to manage more encryption keys than the\n> > current patch does.\n>\n> Why? If the external place is just a separate process on the same host,\n> probably it would manage the very same amount as what your patch.\n\nIn the current patch, the external place needs to manage only one key\nwhereas postgres needs to manages multiple DEKs. But with your idea,\nthe external place needs to manage both KEK and DEKs.\n\n>\n> > Some cloud key management services are charged by the number of active\n> > keys and key operations. So the number of keys postgres requires affects\n> > the charges. It'd be worse if we were to have keys per table.\n>\n> Possibly. Note that you do not have to use a cloud storage paid as a\n> service. However, you could do it if there is an interface, because it\n> would allow postgres to do so if the user wishes that. That is the point\n> of having an interface that can be implemented differently for different\n> use cases.\n\nThe same is true for the current patch. The user can get KEK from\nanywhere they want using cluster_passphrase_command. But as I\nmentioned above the number of keys that the user manages outside\npostgres is different.\n\n>\n> > * If this approach supports only GET protocol, the user needs to\n> > create encryption keys with appropriate ids in advance so that\n> > postgres can get keys by id. If postgres TDE creates keys as needed,\n> > CREATE protocol would also be required.\n>\n> I'm not sure. 
ISTM that if there is a KMS to manage keys, it could be its\n> responsability to actually create a key, however the client (pg) would\n> have to request it, basically say \"given me a new key for this id\".\n>\n> This could even work with a \"get\" command only, if the KMS is expected to\n> create a new key when asked for a key which does not exists yet. ISTM that\n> the client could (should?) only have to create identifiers for its keys.\n\nYeah, it depends on KMS, meaning we need different extensions for\ndifferent KMS. A KMS might support an interface that creates key if\nnot exist during GET but some KMS might support CREATE and GET\nseparately.\n\n>\n> > * If we need only GET protocol, the current approach (i.g.\n> > cluser_passphase_command) would be more simple. I imagine the\n> > interface between postgres and the extension is C function.\n>\n> Yes. ISTM that can be pretty simple, something like:\n>\n> A guc to define the process to start the interface (having a process means\n> that its uid can be changed), which would communicate on its stdin/stdout.\n>\n> A guc to define how to interact with the interface (eg whether DEK are\n> retrieved, or whether the interface is to ask for encryption/decryption,\n> or possibly some other modes).\n>\n> A few function:\n>\n> - set_key(<local-id:int>, <key-identifier:bytea>);\n> # may retrieve the DEK, or only note that local id of some key.\n>\n> - encode(<local-id:int>, <data:bytea>) -> <encrypted-data:bytea>\n> # may fail if no key is associated to local-id\n> # or if the service is down somehow\n>\n> - decode(<local-id>, <encrypted-data>) -> <data>\n> # could also fail if there is some integrity check associated\n>\n> > This approach is more extensible\n>\n> Yep.\n>\n> > but it also means extensions need to support multiple protocols, leading\n> > to increase complexity and development cost.\n>\n> I do not understand what you mean by \"multiple protocols\". 
For me there is\n> one protocol, possibly a few commands in this protocol between client\n> (postgres) and DMS. Anyway, sending \"GET <key-id>\" to retreive a DEK, for\n> instance, does not sound \"complex\". Here is some pseudo code:\n>\n> For get_key:\n>\n> if (mode of operation is to have DEKS locally)\n> try\n> send to KMS \"get <key-id>\"\n> keys[local-id] = answer\n> catch & rethrow possible errors\n> elif (mode is to keep DEKs remote)\n> key_id[local-id] = key-id;\n> else ...\n>\n> For encode, the code is basically:\n>\n> if (has_key(local-id))\n> if (mode of operation is to have DEKs locally)\n> return some_encode(key[local-id], data);\n> elif (mode is to keep DEKs remote)\n> send to KMS \"encode key_id[local-id] data\"\n> return answer; # or error\n> else ...\n> else\n> throw error local-id has no associated key;\n>\n> decode is more or less the same as encode.\n>\n> There is another thing to consider is how the client \"proves\" its identity\n> to the KMS interface, which might suggest some provisions when starting a\n> process, but you already have things in your patch to deal with the KEK,\n> which could be turned into some generic auth.\n>\n> > * This approach necessarily doesn’t eliminate the data leakage threat\n> > completely caused by process compromisation.\n>\n> Sure, if the process as decrypted data or DEK or whatever, then the\n> process compromission leaks these data. My point is to try to limit the\n> leakage potential of a process compromission.\n>\n> > Since DEK is placed in postgres process memory,\n>\n> May be placed, depending on the mode of operation.\n>\n> > it’s still possible that if a postgres process is compromised the\n> > attacker can steal database data.\n>\n> Obviously. 
This cannot be helped if pg is to hold unencrypted data.\n>\n> > The benefit of lowering the possibility of KEK leakage is to deal with\n> > the threat that the attacker sees database data encrypted by past or\n> > future DEK protected by the stolen KEK.\n>\n> Yes.\n>\n> > * An open question is, as you previously mentioned, how to verify the\n> > key obtained from the external place is the right key.\n>\n> It woud succeed in decrypting data if there is some associated integrity\n> check.\n>\n> Note that from a cryptographic point if view, depending on the use case,\n> it may be a desirable property that you cannot tell whether it is the\n> right one.\n>\n> > Anything else we need to note?\n>\n> Dunno.\n>\n> I would like to see some thread model, and what properties you would\n> expect depending on the hypothesis.\n>\n> For instance, I guess that the minimal you would like is that stolen\n> database cold data (PGDATA contents) should not allow to recover clear\n> contents, but only encrypted stuff which is the whole point of encrypting\n> data in the first place. This is the \"here and now\".\n>\n> ISTM that the only possible achievement of the current patch is the above.\n>\n> Then you should also consider past data (prior states of PGDATA which may\n> have been stored somewhere the attacker might recover) and future data\n> (that the attacker may be able to recover later).\n>\n\nThe current patch can protect old data and future data from theft\nusing KEK rotation for the case of KEK theft. 
The user needs to rotate the\nKEK on a regular basis.\n\n> Now what happens on those (past, present, future) data on:\n>\n> - stolen DEK\n>\n> - stolen KEK\n>\n> - stolen full cold data (whole disk stolen)\n>\n> - access to process & live data\n> (pg account compromise at some point in time)\n>\n> - access process & live data & ability to issue more commands at some\n> point in time...\n>\n> - access to full host live data (root compromise)\n>\n> - ...\n>\n> - full network compromise (eg AD has been subverted, this is the usual\n> target for taking down everything on a network if every\n> authentication and authorization is managed by it, which is often\n> the case in a corporate network).\n>\n> - the pg admin is working for the attacker...\n>\n> - the sys admin is working for the attacker...\n>\n> - ...\n>\n> In the end anyway you would lose, the question is how soon, how many\n> compromises are necessary.\n>\n> > Finally, please understand I’m not controverting your idea but just\n> > trying to understand which approach is better from multiple\n> > perspectives.\n>\n> The point of a discussion is basically to present arguments.\n\nMy point is the same as Bruce. I'm concerned about the fact that even\nif we introduce this approach the present data could still be stolen\nwhen a postgres process is compromised. It seems to me that your\napproach is extensible and can protect data from threats in addition\nto the threats that the current patch can protect against, but it would\nbring some cost and complexity compared to the current patch. 
I'd like\nto hear opinions from other hackers in the community.\n\nI think the actual code would help to explain how your approach is not\ncomplexed.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 14 Jun 2020 19:00:44 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "\nHello Masahiko-san,\n\n>>> * The external place needs to manage more encryption keys than the\n>>> current patch does.\n>>\n>> Why? If the external place is just a separate process on the same host,\n>> probably it would manage the very same amount as what your patch.\n>\n> In the current patch, the external place needs to manage only one key\n> whereas postgres needs to manages multiple DEKs. But with your idea,\n> the external place needs to manage both KEK and DEKs.\n\nHmmm. I do not see a good use case for a \"management system\" which would \nonly have to manage a singleton. ISTM that one point of using one KEK is \nto allows several DEKs under it. Maybe I have missed something in your \npatch, but only one key is a very restricted use case.\n\n>>> Some cloud key management services are charged by the number of active\n>>> keys and key operations. So the number of keys postgres requires affects\n>>> the charges. It'd be worse if we were to have keys per table.\n>>\n>> Possibly. Note that you do not have to use a cloud storage paid as a\n>> service. However, you could do it if there is an interface, because it\n>> would allow postgres to do so if the user wishes that. That is the point\n>> of having an interface that can be implemented differently for different\n>> use cases.\n>\n> The same is true for the current patch. The user can get KEK from\n> anywhere they want using cluster_passphrase_command.\n\nYep. 
Somehow I'm proposing to have a command to get DEKs instead of just \nthe KEK, otherwise it is not that far.\n\n> But as I mentioned above the number of keys that the user manages \n> outside postgres is different.\n\nYep, and I do not think that \"only one key\" approach is good. I really \nmissed something in the patch. From a use case point of view, I thought \nthat the user could have has many keys has they see fit. Maybe one per \ncluser, or database, or table, or a row if for some reason the application \nwould demand it. I do not think that the pg should decide that, among \nother things. That is why I'm constantly refering to a \"key identifier\", \nand in the pseudo code I added a \"local id\" (typically an int).\n\n>>> * If this approach supports only GET protocol, the user needs to\n>>> create encryption keys with appropriate ids in advance so that\n>>> postgres can get keys by id. If postgres TDE creates keys as needed,\n>>> CREATE protocol would also be required.\n>>\n>> I'm not sure. ISTM that if there is a KMS to manage keys, it could be its\n>> responsability to actually create a key, however the client (pg) would\n>> have to request it, basically say \"given me a new key for this id\".\n>>\n>> This could even work with a \"get\" command only, if the KMS is expected to\n>> create a new key when asked for a key which does not exists yet. ISTM that\n>> the client could (should?) only have to create identifiers for its keys.\n>\n> Yeah, it depends on KMS, meaning we need different extensions for \n> different KMS. A KMS might support an interface that creates key if not \n> exist during GET but some KMS might support CREATE and GET separately.\n\nI disagree that it is necessary, but this is debatable. 
The KMS-side \ninterface code could take care of that, eg:\n\n if command is \"get X\"\n if (X does not exist in KMS)\n create a new key stored in KMS, return it;\n else\n return KMS-stored key;\n ...\n\nSo you can still have a \"GET\" only interface which adapts to the \"final\"\nKMS. Basically, the glue code which implements the interface for the KMS \ncan include some logic to adapt to the KMS point of view.\n\n>>> * If we need only GET protocol, the current approach (i.g.\n\n>> The point of a discussion is basically to present arguments.\n>\n> My point is the same as Bruce. I'm concerned about the fact that even\n> if we introduce this approach the present data could still be stolen\n> when a postgres process is compromised.\n\nYes, sure.\n\n> It seems to me that your approach is extensible and can protect data \n> from threats in addition to threats that the current patch can protect \n> but it would bring some costs and complexity instead comparing to the \n> current patch. I'd like to hear opinions from other hackers in the \n> community.\n\nI'd like an extensible design to have anything in core. As I said in an \nother mail, if you want to handle a somehow restricted use case, just \nprovide an external extension and do nothing in core, please. Put in core \nsomething that people with a slightly different use case or auditor can \nbuild on as well. The current patch makes a dozen hard-coded decisions \nwhich it should not, IMHO.\n\n> I think the actual code would help to explain how your approach is not\n> complexed.\n\nI provided quite some pseudo code, including some python. 
I'm not planning \nto redevelop the whole thing: my contribution is a review, currently about \nthe overall design, then if somehow I agree on the design, I would look at \nthe code more precisely.\n\n-- \nFabien.\n\n\n", "msg_date": "Sun, 14 Jun 2020 12:39:07 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sun, 14 Jun 2020 at 19:39, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Masahiko-san,\n>\n> >>> * The external place needs to manage more encryption keys than the\n> >>> current patch does.\n> >>\n> >> Why? If the external place is just a separate process on the same host,\n> >> probably it would manage the very same amount as what your patch.\n> >\n> > In the current patch, the external place needs to manage only one key\n> > whereas postgres needs to manages multiple DEKs. But with your idea,\n> > the external place needs to manage both KEK and DEKs.\n>\n> Hmmm. I do not see a good use case for a \"management system\" which would\n> only have to manage a singleton. ISTM that one point of using one KEK is\n> to allows several DEKs under it. Maybe I have missed something in your\n> patch, but only one key is a very restricted use case.\n>\n> >>> Some cloud key management services are charged by the number of active\n> >>> keys and key operations. So the number of keys postgres requires affects\n> >>> the charges. It'd be worse if we were to have keys per table.\n> >>\n> >> Possibly. Note that you do not have to use a cloud storage paid as a\n> >> service. However, you could do it if there is an interface, because it\n> >> would allow postgres to do so if the user wishes that. That is the point\n> >> of having an interface that can be implemented differently for different\n> >> use cases.\n> >\n> > The same is true for the current patch. The user can get KEK from\n> > anywhere they want using cluster_passphrase_command.\n>\n> Yep. 
Somehow I'm proposing to have a command to get DEKs instead of just\n> the KEK, otherwise it is not that far.\n>\n> > But as I mentioned above the number of keys that the user manages\n> > outside postgres is different.\n>\n> Yep, and I do not think that \"only one key\" approach is good. I really\n> missed something in the patch. From a use case point of view, I thought\n> that the user could have has many keys has they see fit. Maybe one per\n> cluser, or database, or table, or a row if for some reason the application\n> would demand it. I do not think that the pg should decide that, among\n> other things. That is why I'm constantly refering to a \"key identifier\",\n> and in the pseudo code I added a \"local id\" (typically an int).\n\nWhat I referred to \"only one key\" is KEK. In the current patch,\npostgres needs to manage multiple DEKs and fetches one KEK from\nsomewhere. According to the recent TDE discussion, we would have one\nDEK for all tables/indexes encryption and one DEK for WAL encryption\nas the first step.\n\n>\n> >>> * If this approach supports only GET protocol, the user needs to\n> >>> create encryption keys with appropriate ids in advance so that\n> >>> postgres can get keys by id. If postgres TDE creates keys as needed,\n> >>> CREATE protocol would also be required.\n> >>\n> >> I'm not sure. ISTM that if there is a KMS to manage keys, it could be its\n> >> responsability to actually create a key, however the client (pg) would\n> >> have to request it, basically say \"given me a new key for this id\".\n> >>\n> >> This could even work with a \"get\" command only, if the KMS is expected to\n> >> create a new key when asked for a key which does not exists yet. ISTM that\n> >> the client could (should?) only have to create identifiers for its keys.\n> >\n> > Yeah, it depends on KMS, meaning we need different extensions for\n> > different KMS. 
A KMS might support an interface that creates a key if it does not\n> > exist during GET but some KMS might support CREATE and GET separately.\n>\n> I disagree that it is necessary, but this is debatable. The KMS-side\n> interface code could take care of that, eg:\n>\n> if command is \"get X\"\n> if (X does not exist in KMS)\n> create a new key stored in KMS, return it;\n> else\n> return KMS-stored key;\n> ...\n>\n> So you can still have a \"GET\" only interface which adapts to the \"final\"\n> KMS. Basically, the glue code which implements the interface for the KMS\n> can include some logic to adapt to the KMS point of view.\n\nIs the above code for the extension side, right? For example, if\nusers want to use a cloud KMS, say AWS KMS, to store DEKs and KEK they\nneed an extension that is loaded to postgres and can communicate with\nAWS KMS. I imagine that such an extension needs to be written in C, the\ncommunication between the extension and AWS KMS uses the AWS KMS API,\nand the communication between postgres core and the extension uses a\ntext protocol. When postgres core needs a DEK identified by KEY-A, it\nasks for the extension to get the DEK by passing something like\n“GET KEY-A” message, and then the extension asks AWS KMS about the\nexistence of that key, creates it if it does not exist, and returns it\nto the postgres core. Is my understanding right?\n\nWhen we have the TDE feature in the future, we would also need to change\nfrontend tools such as pg_waldump and pg_rewind that read database\nfiles so that they can read encrypted files. It means that these\nfront-end tools also somehow need to communicate with the external\nplace to get DEKs in order to decrypt encrypted database files. In\nyour idea, what do you think about how we can support it?\n\n>\n> >>> * If we need only GET protocol, the current approach (i.g.\n>\n> >> The point of a discussion is basically to present arguments.\n> >\n> > My point is the same as Bruce. 
I'm concerned about the fact that even\n> > if we introduce this approach the present data could still be stolen\n> > when a postgres process is compromised.\n>\n> Yes, sure.\n>\n> > It seems to me that your approach is extensible and can protect data\n> > from threats in addition to threats that the current patch can protect\n> > but it would bring some costs and complexity instead comparing to the\n> > current patch. I'd like to hear opinions from other hackers in the\n> > community.\n>\n> I'd like an extensible design to have anything in core. As I said in an\n> other mail, if you want to handle a somehow restricted use case, just\n> provide an external extension and do nothing in core, please. Put in core\n> something that people with a slightly different use case or auditor can\n> build on as well. The current patch makes a dozen hard-coded decisions\n> which it should not, IMHO.\n\nIt might have confused you that I included key manager and encryption\nSQL functions to the patches but this key manager has been designed\ndedicated to only TDE. It might be better to remove both SQL interface\nand SQL key we discussed from the patch set as they are actually not\nnecessary for TDE purposes. Aside from the security risk you\nmentioned, it was a natural design decision for me that we have our\nkey manager component in postgres core that is responsible for\nmanaging encryption keys for our TDE. To make the key manager and TDE\nsimple as much as possible, we discussed that we will have\ncluster-wide TDE and key manager that manages a few encryption keys\nused by TDE (e.g. one key for table/index encryption and another key\nfor WAL encryption), as the first step.\n\n>\n> > I think the actual code would help to explain how your approach is not\n> > complexed.\n>\n> I provided quite some pseudo code, including some python. 
I'm not planning\n> to redevelop the whole thing: my contribution is a review, currently about\n> the overall design, then if somehow I agree on the design, I would look at\n> the code more precisely.\n\nHmm, understood. Let's wait for comments from other members.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 17 Jun 2020 15:52:03 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hi all\n\n\n\nHaving read through the discussion, I have some comments and suggestions that I would like to share. \n\n\n\nI think it is still quite early to even talk about external key management system even if it is running on the same host as PG. This is most likely achieved as an extension that can provide communication to external key server and it would be a separate project/discussion. I think the focus now is to finalize on the internal KMS design, and we can discuss about how to migrate internally managed keys to the external when the time is right.\n\n\n\nKey management system is generally built to manage the life cycle of cryptographic keys, so our KMS in my opinion needs to be built with key life cycle in mind such as:\n\n\n\n* Key generation\n* key protection\n* key storage\n* key rotation\n\n* key rewrap\n* key disable/enable\n* key destroy\n\n\n\nKMS should not perform the above life cycle management by itself automatically or hardcoded, instead it should expose some interfaces to the end user or even a backend process like TDE to trigger the above. \n\nThe only key KMS should manage by itself is the KEK, which is derived from cluster passphrase value. This is fine in my opinion. 
This KEK should exist only within KMS to perform key protection (by wrapping) and key storage (save as file).\n\n\n\nThe other life cycle stages should be just left as interfaces, waiting for somebody to request KMS to perform. Somebody could be an end user or a back-end process like TDE.\n\n\n\nUsing TDE as an example, when TDE initializes, it calls KMS's key_generation interface to get however many keys it needs. KMS should not return the keys in clear text or hex; it can return something like a key ID.\n\nUsing a regular user as an example, each user can also call KMS's key_generation interface to create a cryptographic key for their own purpose. KMS should also return just a key ID, and this key should be bound to the user. We can limit that each user can have only one key managed, and a regular user can only manage his/her own key with KMS: rotate, destroy, disable, etc.; he/she cannot manage others' keys.\n\n\n\nSuper user (or key admin), however, can do all kinds of management to all keys (generated by TDE or by other users). He or she can do key rotation, key rewrap, disable or destroy. Here we will need to think about how to prevent this user from misusing the key management functions. \n\n\n\n
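As an illustration of the life-cycle interface described above — the KMS hands out opaque key IDs rather than raw key bytes, and owners can rotate/disable/destroy only their own keys — here is a minimal Python sketch; the class and method names are hypothetical, not a proposed API:

```python
# Toy in-memory KMS: generate_key returns an opaque key ID, never the key
# material itself; per-owner checks mirror the "a regular user can only
# manage his/her own key" rule. All names are illustrative.
import secrets

class InternalKms:
    def __init__(self):
        self._keys = {}  # key_id -> dict(material, owner, enabled)

    def generate_key(self, owner: str) -> str:
        key_id = f"{owner}-{secrets.token_hex(4)}"
        self._keys[key_id] = {"material": secrets.token_bytes(32),
                              "owner": owner, "enabled": True}
        return key_id                      # caller only ever sees the ID

    def _check(self, key_id, requester):
        rec = self._keys[key_id]
        if requester != rec["owner"] and requester != "superuser":
            raise PermissionError("not your key")
        return rec

    def rotate(self, key_id, requester):   # new material under the same ID
        self._check(key_id, requester)["material"] = secrets.token_bytes(32)

    def disable(self, key_id, requester):
        self._check(key_id, requester)["enabled"] = False

    def destroy(self, key_id, requester):
        self._check(key_id, requester)
        del self._keys[key_id]
```

A real implementation would of course persist keys wrapped by the KEK rather than hold them in memory, but the shape of the interface is the point here.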
This may be a little slower due to the lookup, but we can have a variation of the function where KMS can look up the key with supplied key ID and convert it to encryption context and return it back to TDE. Then TDE can use this context to call another wrapper function for encryption without lookup all the time. If an end user wants to encrypt something, it will also call KMS's wrapper function and supply the key ID in the same way. \n\n\n\nI know that there is a discussion on moving the cryptographic functions as extension. In an already running PG, it is fine, but when TDE and XLOG bootstraps during initdb, it requires cryptographic function to encrypt the initial WAL file and during initdb i dont think it has access to any cryptographic function provided by the extension. we may need to include limited cryptographic function within KMS and TDE so it is enough to finish initdb with intial WAl encrypted.\n\n\n\nThis is just my thought how this KMS and TDE should look like. \n\n\nBest\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca\n\n\n\n\n---- On Tue, 16 Jun 2020 23:52:03 -0700 Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote ----\n\n\n\nOn Sun, 14 Jun 2020 at 19:39, Fabien COELHO <mailto:coelho@cri.ensmp.fr> wrote: \n> \n> \n> Hello Masahiko-san, \n> \n> >>> * The external place needs to manage more encryption keys than the \n> >>> current patch does. \n> >> \n> >> Why? If the external place is just a separate process on the same host, \n> >> probably it would manage the very same amount as what your patch. \n> > \n> > In the current patch, the external place needs to manage only one key \n> > whereas postgres needs to manages multiple DEKs. But with your idea, \n> > the external place needs to manage both KEK and DEKs. \n> \n> Hmmm. I do not see a good use case for a \"management system\" which would \n> only have to manage a singleton. 
ISTM that one point of using one KEK is \n> to allows several DEKs under it. Maybe I have missed something in your \n> patch, but only one key is a very restricted use case. \n> \n> >>> Some cloud key management services are charged by the number of active \n> >>> keys and key operations. So the number of keys postgres requires affects \n> >>> the charges. It'd be worse if we were to have keys per table. \n> >> \n> >> Possibly. Note that you do not have to use a cloud storage paid as a \n> >> service. However, you could do it if there is an interface, because it \n> >> would allow postgres to do so if the user wishes that. That is the point \n> >> of having an interface that can be implemented differently for different \n> >> use cases. \n> > \n> > The same is true for the current patch. The user can get KEK from \n> > anywhere they want using cluster_passphrase_command. \n> \n> Yep. Somehow I'm proposing to have a command to get DEKs instead of just \n> the KEK, otherwise it is not that far. \n> \n> > But as I mentioned above the number of keys that the user manages \n> > outside postgres is different. \n> \n> Yep, and I do not think that \"only one key\" approach is good. I really \n> missed something in the patch. From a use case point of view, I thought \n> that the user could have has many keys has they see fit. Maybe one per \n> cluser, or database, or table, or a row if for some reason the application \n> would demand it. I do not think that the pg should decide that, among \n> other things. That is why I'm constantly refering to a \"key identifier\", \n> and in the pseudo code I added a \"local id\" (typically an int). \n \nWhat I referred to \"only one key\" is KEK. In the current patch, \npostgres needs to manage multiple DEKs and fetches one KEK from \nsomewhere. According to the recent TDE discussion, we would have one \nDEK for all tables/indexes encryption and one DEK for WAL encryption \nas the first step. 
\n \n> \n> >>> * If this approach supports only GET protocol, the user needs to \n> >>> create encryption keys with appropriate ids in advance so that \n> >>> postgres can get keys by id. If postgres TDE creates keys as needed, \n> >>> CREATE protocol would also be required. \n> >> \n> >> I'm not sure. ISTM that if there is a KMS to manage keys, it could be its \n> >> responsability to actually create a key, however the client (pg) would \n> >> have to request it, basically say \"given me a new key for this id\". \n> >> \n> >> This could even work with a \"get\" command only, if the KMS is expected to \n> >> create a new key when asked for a key which does not exists yet. ISTM that \n> >> the client could (should?) only have to create identifiers for its keys. \n> > \n> > Yeah, it depends on KMS, meaning we need different extensions for \n> > different KMS. A KMS might support an interface that creates key if not \n> > exist during GET but some KMS might support CREATE and GET separately. \n> \n> I disagree that it is necessary, but this is debatable. The KMS-side \n> interface code could take care of that, eg: \n> \n> if command is \"get X\" \n> if (X does not exist in KMS) \n> create a new key stored in KMS, return it; \n> else \n> return KMS-stored key; \n> ... \n> \n> So you can still have a \"GET\" only interface which adapts to the \"final\" \n> KMS. Basically, the glue code which implements the interface for the KMS \n> can include some logic to adapt to the KMS point of view. \n \nIs the above code is for the extension side, right? For example, if \nusers want to use a cloud KMS, say AWS KMS, to store DEKs and KEK they \nneed an extension that is loaded to postgres and can communicate with \nAWS KMS. I imagine that such extension needs to be written in C, the \ncommunication between the extension uses AWS KMS API, and the \ncommunication between postgres core and the extension uses text \nprotocol. 
When postgres core needs a DEK identified by KEY-A, it asks \nfor the extension to get the DEK by passing something like “GET KEY-A” \nmessage, and then the extension asks the existence of that key to AWK \nKMS, creates if not exist and returns it to the postgres core. Is my \nunderstanding right? \n \nWhen we have TDE feature in the future, we would also need to change \nfrontend tools such as pg_waldump and pg_rewind that read database \nfiles so that they can read encrypted files. It means that these \nfrond-end tools also somehow need to communicate with the external \nplace to get DEKs in order to decrypt encrypted database files. In \nyour idea, what do you think about how we can support it? \n \n> \n> >>> * If we need only GET protocol, the current approach (i.g. \n> \n> >> The point of a discussion is basically to present arguments. \n> > \n> > My point is the same as Bruce. I'm concerned about the fact that even \n> > if we introduce this approach the present data could still be stolen \n> > when a postgres process is compromised. \n> \n> Yes, sure. \n> \n> > It seems to me that your approach is extensible and can protect data \n> > from threats in addition to threats that the current patch can protect \n> > but it would bring some costs and complexity instead comparing to the \n> > current patch. I'd like to hear opinions from other hackers in the \n> > community. \n> \n> I'd like an extensible design to have anything in core. As I said in an \n> other mail, if you want to handle a somehow restricted use case, just \n> provide an external extension and do nothing in core, please. Put in core \n> something that people with a slightly different use case or auditor can \n> build on as well. The current patch makes a dozen hard-coded decisions \n> which it should not, IMHO. \n \nIt might have confused you that I included key manager and encryption \nSQL functions to the patches but this key manager has been designed \ndedicated to only TDE. 
It might be better to remove both SQL interface \nand SQL key we discussed from the patch set as they are actually not \nnecessary for TDE purposes. Aside from the security risk you \nmentioned, it was a natural design decision for me that we have our \nkey manager component in postgres core that is responsible for \nmanaging encryption keys for our TDE. To make the key manager and TDE \nsimple as much as possible, we discussed that we will have \ncluster-wide TDE and key manager that manages a few encryption keys \nused by TDE (e.g. one key for table/index encryption and another key \nfor WAL encryption), as the first step. \n \n> \n> > I think the actual code would help to explain how your approach is not \n> > complexed. \n> \n> I provided quite some pseudo code, including some python. I'm not planning \n> to redevelop the whole thing: my contribution is a review, currently about \n> the overall design, then if somehow I agree on the design, I would look at \n> the code more precisely. \n \nHmm, understood. Let's wait for comments from other members. \n \nRegards, \n \n-- \nMasahiko Sawada http://www.2ndQuadrant.com/ \nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\nHi allHaving read through the discussion, I have some comments and suggestions that I would like to share. I think it is still quite early to even talk about external key management system even if it is running on the same host as PG. This is most likely achieved as an extension that can provide communication to external key server and it would be a separate project/discussion. 
I think the focus now is to finalize on the internal KMS design, and we can discuss about how to migrate internally managed keys to the external when the time is right.Key management system is generally built to manage the life cycle of cryptographic keys, so our KMS in my opinion needs to be built with key life cycle in mind such as:* Key generation* key protection* key storage* key rotation* key rewrap* key disable/enable* key destroyKMS should not perform the above life cycle management by itself automatically or hardcoded, instead it should expose some interfaces to the end user or even a backend process like TDE to trigger the above. The only key KMS should manage by itself is the KEK, which is derived from cluster passphrase value. This is fine in my opinion. This KEK should exist only within KMS to perform key protection (by wrapping) and key storage (save as file).The other life cycle stages should be just left as interfaces, waiting for somebody to request KMS to perform. Somebody could be end user or back end process like TDE.Using TDE as example, when TDE initializes, it calls KMS's key_generation interface to get however many keys that it needs, KMS should not return the keys in clear text of hex, it can return something like a key ID.Using regular user as example, each user can also call KMS's key_generation interface to create a cryptographic key for their own purpose, KMS should also return just an key ID and this key should be bound to the user and we can limit that each user can have only one key managed, and regular user can only manage his/her own key with KMS, rotate, destroy, disable..etc; he/she cannot manage others' keyssuper user (or key admin), however, can do all kinds of management to all keys, (generated by TDE or by other users). He or she can do key rotation, key rewrap, disable or destroy. Here we will need to think about how to prevent this user from misusing the key management functions. 
Super user should also be able to view the status of all the keys managed, information such as: * date of generation* key ID* owner* status* key length* suggested date of rotation..etc etc* expiry date??to actually perform the encryption with keys managed by internal KMS, I suggest adding a set of wrapper functions within KMS using the Key ID as input argument. So for example, TDE wants to encrypt some data, it will call KMS's wrapper encryption function with Key ID supplied, KMS looked up the key with  key ID ,verify caller's permission and translate these parameters and feed to pgcrypto for example. This may be a little slower due to the lookup, but we can have a variation of the function where KMS can look up the key with supplied key ID and convert it to encryption context and return it back to TDE. Then TDE can use this context to call another wrapper function for encryption without lookup all the time. If an end user wants to encrypt something, it will also call KMS's wrapper function and supply the key ID in the same way. I know that there is a discussion on moving the cryptographic functions as extension. In an already running PG, it is fine, but when TDE and XLOG bootstraps during initdb, it requires cryptographic function to encrypt the initial WAL file and during initdb i dont think it has access to any cryptographic function provided by the extension. we may need to include limited cryptographic function within KMS and TDE so it is enough to finish initdb with intial WAl encrypted.This is just my thought how this KMS and TDE should look like. BestCary Huang-------------HighGo Software Inc. (Canada)cary.huang@highgo.cawww.highgo.ca---- On Tue, 16 Jun 2020 23:52:03 -0700 Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote ----On Sun, 14 Jun 2020 at 19:39, Fabien COELHO <coelho@cri.ensmp.fr> wrote: > > > Hello Masahiko-san, > > >>> * The external place needs to manage more encryption keys than the > >>> current patch does. > >> > >> Why? 
If the external place is just a separate process on the same host, > >> probably it would manage the very same amount as what your patch. > > > > In the current patch, the external place needs to manage only one key > > whereas postgres needs to manages multiple DEKs. But with your idea, > > the external place needs to manage both KEK and DEKs. > > Hmmm. I do not see a good use case for a \"management system\" which would > only have to manage a singleton. ISTM that one point of using one KEK is > to allows several DEKs under it. Maybe I have missed something in your > patch, but only one key is a very restricted use case. > > >>> Some cloud key management services are charged by the number of active > >>> keys and key operations. So the number of keys postgres requires affects > >>> the charges. It'd be worse if we were to have keys per table. > >> > >> Possibly. Note that you do not have to use a cloud storage paid as a > >> service. However, you could do it if there is an interface, because it > >> would allow postgres to do so if the user wishes that. That is the point > >> of having an interface that can be implemented differently for different > >> use cases. > > > > The same is true for the current patch. The user can get KEK from > > anywhere they want using cluster_passphrase_command. > > Yep. Somehow I'm proposing to have a command to get DEKs instead of just > the KEK, otherwise it is not that far. > > > But as I mentioned above the number of keys that the user manages > > outside postgres is different. > > Yep, and I do not think that \"only one key\" approach is good. I really > missed something in the patch. From a use case point of view, I thought > that the user could have has many keys has they see fit. Maybe one per > cluser, or database, or table, or a row if for some reason the application > would demand it. I do not think that the pg should decide that, among > other things. 
That is why I'm constantly refering to a \"key identifier\", > and in the pseudo code I added a \"local id\" (typically an int). What I referred to \"only one key\" is KEK. In the current patch, postgres needs to manage multiple DEKs and fetches one KEK from somewhere. According to the recent TDE discussion, we would have one DEK for all tables/indexes encryption and one DEK for WAL encryption as the first step. > > >>> * If this approach supports only GET protocol, the user needs to > >>> create encryption keys with appropriate ids in advance so that > >>> postgres can get keys by id. If postgres TDE creates keys as needed, > >>> CREATE protocol would also be required. > >> > >> I'm not sure. ISTM that if there is a KMS to manage keys, it could be its > >> responsability to actually create a key, however the client (pg) would > >> have to request it, basically say \"given me a new key for this id\". > >> > >> This could even work with a \"get\" command only, if the KMS is expected to > >> create a new key when asked for a key which does not exists yet. ISTM that > >> the client could (should?) only have to create identifiers for its keys. > > > > Yeah, it depends on KMS, meaning we need different extensions for > > different KMS. A KMS might support an interface that creates key if not > > exist during GET but some KMS might support CREATE and GET separately. > > I disagree that it is necessary, but this is debatable. The KMS-side > interface code could take care of that, eg: > > if command is \"get X\" > if (X does not exist in KMS) > create a new key stored in KMS, return it; > else > return KMS-stored key; > ... > > So you can still have a \"GET\" only interface which adapts to the \"final\" > KMS. Basically, the glue code which implements the interface for the KMS > can include some logic to adapt to the KMS point of view. Is the above code is for the extension side, right? 
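The "get"-with-create-on-miss glue in the quoted pseudo code could be sketched as a tiny key-retrieval command. This is only a toy sketch with illustrative names; nothing here is code from the patch or from any real KMS:

```python
# Toy sketch of a GET-only key-retrieval command: the client only ever
# says "GET <id>"; the glue code creates the key on first request, so no
# separate CREATE verb is needed.  All names here are illustrative.
import secrets

class ToyKeyStore:
    """Stands in for whatever backing store or external KMS is used."""
    def __init__(self):
        self._keys = {}

    def get(self, key_id: str) -> bytes:
        # create-on-miss, as in the quoted pseudo code
        if key_id not in self._keys:
            self._keys[key_id] = secrets.token_bytes(32)  # new 256-bit key
        return self._keys[key_id]

def handle_command(store: ToyKeyStore, line: str) -> str:
    """One request/reply exchange of a hypothetical pg/command protocol."""
    verb, _, key_id = line.strip().partition(" ")
    if verb == "GET" and key_id:
        return "OK " + store.get(key_id).hex()
    return "ERR unknown command"
```

Repeated "GET wal" requests return the same key, whether or not the backing store had it beforehand, so the client never needs to know which case it hit.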
For example, if users want to use a cloud KMS, say AWS KMS, to store DEKs and the KEK, they need an extension that is loaded into postgres and can communicate with AWS KMS. I imagine that such an extension needs to be written in C, the communication between the extension and AWS KMS uses the AWS KMS API, and the communication between postgres core and the extension uses a text protocol. When postgres core needs a DEK identified by KEY-A, it asks the extension to get the DEK by passing something like a “GET KEY-A” message, and then the extension asks AWS KMS whether that key exists, creates it if not, and returns it to postgres core. Is my understanding right? When we have the TDE feature in the future, we would also need to change front-end tools such as pg_waldump and pg_rewind that read database files so that they can read encrypted files. It means that these front-end tools also somehow need to communicate with the external place to get DEKs in order to decrypt encrypted database files. In your idea, what do you think about how we can support it? > > >>> * If we need only GET protocol, the current approach (i.g. > > >> The point of a discussion is basically to present arguments. > > > > My point is the same as Bruce. I'm concerned about the fact that even > > if we introduce this approach the present data could still be stolen > > when a postgres process is compromised. > > Yes, sure. > > > It seems to me that your approach is extensible and can protect data > > from threats in addition to threats that the current patch can protect > > but it would bring some costs and complexity instead comparing to the > > current patch. I'd like to hear opinions from other hackers in the > > community. > > I'd like an extensible design to have anything in core. As I said in an > other mail, if you want to handle a somehow restricted use case, just > provide an external extension and do nothing in core, please. 
Put in core > something that people with a slightly different use case or auditor can > build on as well. The current patch makes a dozen hard-coded decisions > which it should not, IMHO. It might have confused you that I included key manager and encryption SQL functions to the patches but this key manager has been designed dedicated to only TDE. It might be better to remove both SQL interface and SQL key we discussed from the patch set as they are actually not necessary for TDE purposes. Aside from the security risk you mentioned, it was a natural design decision for me that we have our key manager component in postgres core that is responsible for managing encryption keys for our TDE. To make the key manager and TDE simple as much as possible, we discussed that we will have cluster-wide TDE and key manager that manages a few encryption keys used by TDE (e.g. one key for table/index encryption and another key for WAL encryption), as the first step. > > > I think the actual code would help to explain how your approach is not > > complexed. > > I provided quite some pseudo code, including some python. I'm not planning > to redevelop the whole thing: my contribution is a review, currently about > the overall design, then if somehow I agree on the design, I would look at > the code more precisely. Hmm, understood. Let's wait for comments from other members. Regards, -- Masahiko Sawada http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 18 Jun 2020 10:41:28 -0700", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On 18/6/20 19:41, Cary Huang wrote:\n> Hi all\n>\n> Having read through the discussion, I have some comments and \n> suggestions that I would like to share.\n>\n> I think it is still quite early to even talk about external key \n> management system even if it is running on the same host as PG. 
This \n> is most likely achieved as an extension that can provide communication \n> to external key server and it would be a separate project/discussion. \n> I think the focus now is to finalize on the internal KMS design, and \n> we can discuss about how to migrate internally managed keys to the \n> external when the time is right.\n\nAs long as there exists a clean interface, and the \"default\" (internal) \nbackend is a provider of said functionality, it'll be fine.\n\nGiven that having different KMS within a single instance (e.g. per \ndatabase) is quite unlikely, I suggest just exposing hook-like \nfunction-pointer variables and be done with it. Requiring a preloaded \nlibrary for this purpose doesn't seem too restrictive ---at least at \nthis stage--- and can be very easily evolved in the future --- \nsuper-simple API which receives a struct made of function pointers, plus \nanother function to reset it to \"internal defaults\" and that's it.\n\n>\n> Key management system is generally built to manage the life cycle of \n> cryptographic keys, so our KMS in my opinion needs to be built with \n> key life cycle in mind such as:\n>\n> * Key generation\n> * key protection\n> * key storage\n> * key rotation\n> * key rewrap\n> * key disable/enable\n> * key destroy\n\nAdd the support functions for your suggested \"key information\" \nfunctionality, and that's a very rough first draft of the API ...\n\n> KMS should not perform the above life cycle management by itself \n> automatically or hardcoded, instead it should expose some interfaces \n> to the end user or even a backend process like TDE to trigger the above.\n> The only key KMS should manage by itself is the KEK, which is derived \n> from cluster passphrase value. This is fine in my opinion. This KEK \n> should exist only within KMS to perform key protection (by wrapping) \n> and key storage (save as file).\n\nAsking for the \"cluster password\" is something better left optional / \nmade easily overrideable ... 
or we risk thousands of clusters suddenly \nnot working after a reboot.... :S\n\n\nJust my .02€\n\n\nThanks,\n\n     J.L.", "msg_date": "Thu, 18 Jun 2020 20:21:56 +0200", "msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Hello Masahiko-san,\n\n> What I referred to \"only one key\" is KEK.\n\nOk, sorry, I misunderstood.\n\n>>> Yeah, it depends on KMS, meaning we need different extensions for\n>>> different KMS. 
A KMS might support an interface that creates key if not\n>>> exist during GET but some KMS might support CREATE and GET separately.\n>>\n>> I disagree that it is necessary, but this is debatable. The KMS-side\n>> interface code could take care of that, eg:\n>>\n>> if command is \"get X\"\n>> if (X does not exist in KMS)\n>> create a new key stored in KMS, return it;\n>> else\n>> return KMS-stored key;\n>> ...\n>>\n>> So you can still have a \"GET\" only interface which adapts to the \"final\"\n>> KMS. Basically, the glue code which implements the interface for the KMS\n>> can include some logic to adapt to the KMS point of view.\n>\n> Is the above code is for the extension side, right?\n\nSuch a code could be in the command with which pg communicates (eg through \nits stdin/stdout, or whatever) to get keys.\n\npg talks to the command, the command can do anything, such as storing keys \nor communicating with an external service to retrieve them, anything \nreally, that is the point.\n\nI'm advocating defining the pg/command protocol, something along \"GET xxx\" \nas you wrote, and possibly provide a possible/reasonable command \nimplementation, which would be part of the code you put in your patch, \nonly it would be in the command instead of postgres.\n\n> For example, if users want to use a cloud KMS, say AWS KMS, to store \n> DEKs and KEK they need an extension that is loaded to postgres and can \n> communicate with AWS KMS. I imagine that such extension needs to be \n> written in C,\n\nWhy? I could write it in bash, probably. Ok, maybe not so good for suid, \nbut in principle it could be anything. I'd probably write it in C, though.\n\n> the communication between the extension uses AWS KMS API, and the \n> communication between postgres core and the extension uses text \n> protocol.\n\nI'm not sure of the word \"extension\" above. For me the postgres side could \nbe an extension as in \"CREATE EXTENSION\". 
The command itself could be \nprovided in the extension code, but would not be in the \"CREATE \nEXTENSION\", it would be something run independently.\n\n> When postgres core needs a DEK identified by KEY-A, it asks \n> for the extension to get the DEK by passing something like “GET KEY-A” \n> message, and then the extension asks the existence of that key to AWK \n> KMS, creates if not exist and returns it to the postgres core. Is my \n> understanding right?\n\nYes. The command in the use-case you outline would just be an \nintermediary, but for another use-case it would do the storing. The point \nof aiming at extensibility is that, from pg's point of view, the external \ncommands provide keys, but what these commands really do to do this can be \nanything.\n\n> When we have TDE feature in the future, we would also need to change\n> frontend tools such as pg_waldump and pg_rewind that read database\n> files so that they can read encrypted files. It means that these\n> frond-end tools also somehow need to communicate with the external\n> place to get DEKs in order to decrypt encrypted database files. In\n> your idea, what do you think about how we can support it?\n\nHmmm. My idea was that the natural interface would be to get things \nthrough postgres. 
For a debug tool such as pg_waldump, probably it needs \nto be adapted if it needs to decrypt data to operate.\n\nNow I'm not sure I understood, because of the DEK are managed by postgres \nin your patch, waldump and other external commands would have no access to \nthe decrypted data anyway, so the issue would be the same?\n\nWith file-level encryption, obviously all commands which have to read and \nunderstand the files have to be adapted if they are to still work, which \nis another argument to have some interface rather than integrated \nserver-side stuff, because these external commands would need to be able \nto get keys and use them as well.\n\nOr I misunderstood something.\n\n>> I'd like an extensible design to have anything in core. As I said in an\n>> other mail, if you want to handle a somehow restricted use case, just\n>> provide an external extension and do nothing in core, please. Put in core\n>> something that people with a slightly different use case or auditor can\n>> build on as well. The current patch makes a dozen hard-coded decisions\n>> which it should not, IMHO.\n>\n> It might have confused you that I included key manager and encryption\n> SQL functions to the patches but this key manager has been designed\n> dedicated to only TDE.\n\nHmmm. This is NOT AT ALL what the patch does. The documentation in your \npatch talks about \"column level encryption\", which is an application \nthing. Now you seem to say that it does not matter and can be removed \nbecause the use case is elsewhere.\n\n> It might be better to remove both SQL interface\n> and SQL key we discussed from the patch set as they are actually not\n> necessary for TDE purposes.\n\nThe documentation part of the patch, at no point, talks about TDE \n(transparent data encryption), which is a file-level encryption as far as \nI understand it, i.e. 
whole files are encrypted.\n\nI'm lost, because if you want to do that you cannot easily use \npadding/HMAC and so on, because they would change block sizes, and probably \nyou would use CTR instead of CBC to be able to decrypt data selectively.\n\nSo you certainly succeeded in confusing me deeply:-)\n\n> Aside from the security risk you mentioned, it was a natural design \n> decision for me that we have our key manager component in postgres core \n> that is responsible for managing encryption keys for our TDE.\n\nThe patch really needs a README to explain what it really does, and why, \nand how, and what is the threat model, what are the choices (there should \nbe as few as possible), how it can/could be extended.\n\nI've looked at the whole patch, and I could not find the place where files \nare actually encrypted/decrypted at a low level, that I would expect for \na file encryption implementation.\n\n> To make the key manager and TDE simple as much as possible, we discussed \n> that we will have cluster-wide TDE and key manager that manages a few \n> encryption keys used by TDE (e.g. one key for table/index encryption and \n> another key for WAL encryption), as the first step.\n\nHmmm. Ok. So in fact all that is for TDE, *but* the patch does not do TDE, \nbut provides a column-oriented SQL-level encryption, which is unrelated to \nyour actual objective, which is to do file-level encryption in the end.\n\nHowever, for TDE, it may be that you cannot do it with a pg extension because \nfor the extension to work the database must work, which would require some \n\"data\" files not to be encrypted in the first place. That seems like a \ngood argument to actually have something in core.\n\nProbably for TDE you only want the configuration file not to be encrypted.\n\nI'd still advocate to have the key management system possibly outside of \npg, and have pg interact with it to get keys when needed. Probably key ids \nwould be the relative file names in that case. 
The approach of \nexternalizing encryption/decryption would be totally impractical for \nperformance reasons, though.\n\nI see value in Cary Huang's suggestion on the thread to have dynamically \nloaded functions implement an interface. That would at least allow removing \nsome hardcoded choices such as what cypher is actually used, key \nsizes, and so on. One possible implementation would be to manage things \nmore or less internally as you do, another to fork an external command and \ntalk with it to do the same.\n\nHowever, I do not share the actual interface briefly outlined: I do not \nthink pg should have to care about key management functions such as \nrotation, generation or derivation, storage… the interest of pg should be \nlimited to retrieving the keys it needs. That does not mean such functions \ndo not have security value and should not be implemented; I'd say that it \nshould not be visible/hardcoded in the pg/kms interface, especially if \nthis interface is expected to be generic.\n\nAs I see it, a pg/kms C-level loadable interface would provide the \nfollowing functions:\n\n// options would be supplied by a guc and allow to initialize the \n// interface with the relevant data, whatever the underlying \n// implementation needs.\nerror kms_init(char *options);\n\n// associate opaque key identifier to a local id\nerror kms_key(local_id int, int key_id_len, byte *key_id);\n\nor maybe something like:\n\n// would return the local id attributed to the key\nerror/int kms_key(key_id_len, key_id);\n\n// the actual functions should be clarified\n// for TDE file-level, probably the encrypted length is the same as the \n// input, you cannot have padding, hmac or whatever added.\n// for SQL app-level, the rules could be different\nerror kms_(en|de)crypt(local_id int, int mode, int len,\n byte *in, byte *out);\n\n// maybe\nerror kms_key_forget(local_id int);\nerror kms_destroy(…);\n\n// maybe, to allow extensibility and genericity\n// eg kms_command(\"rotate keys 
with new kek=123\");\nerror kms_command(char *cmd);\n\nI'm a little bit unsure that there should be only one KMS active ever, \nthough: a file-level vs app-level encryption could have quite different \nconstraints. Also, should the app-level encryption be able to access keys\nloaded for file-level encryption?\n\n-- \nFabien.", "msg_date": "Fri, 19 Jun 2020 08:43:53 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Fri, 19 Jun 2020 at 15:44, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Masahiko-san,\n>\n> > What I referred to \"only one key\" is KEK.\n>\n> Ok, sorry, I misunderstood.\n>\n> >>> Yeah, it depends on KMS, meaning we need different extensions for\n> >>> different KMS. A KMS might support an interface that creates key if not\n> >>> exist during GET but some KMS might support CREATE and GET separately.\n> >>\n> >> I disagree that it is necessary, but this is debatable. The KMS-side\n> >> interface code could take care of that, eg:\n> >>\n> >> if command is \"get X\"\n> >> if (X does not exist in KMS)\n> >> create a new key stored in KMS, return it;\n> >> else\n> >> return KMS-stored key;\n> >> ...\n> >>\n> >> So you can still have a \"GET\" only interface which adapts to the \"final\"\n> >> KMS. 
Basically, the glue code which implements the interface for the KMS\n> >> can include some logic to adapt to the KMS point of view.\n> >\n> > Is the above code is for the extension side, right?\n>\n> Such a code could be in the command with which pg communicates (eg through\n> its stdin/stdout, or whatever) to get keys.\n>\n> pg talks to the command, the command can do anything, such as storing keys\n> or communicating with an external service to retrieve them, anything\n> really, that is the point.\n>\n> I'm advocating defining the pg/command protocol, something along \"GET xxx\"\n> as you wrote, and possibly provide a possible/reasonable command\n> implementation, which would be part of the code you put in your patch,\n> only it would be in the command instead of postgres.\n>\n> > For example, if users want to use a cloud KMS, say AWS KMS, to store\n> > DEKs and KEK they need an extension that is loaded to postgres and can\n> > communicate with AWS KMS. I imagine that such extension needs to be\n> > written in C,\n>\n> Why? I could write it in bash, probably. Ok, maybe not so good for suid,\n> but in principle it could be anything. I'd probably write it in C, though.\n>\n> > the communication between the extension uses AWS KMS API, and the\n> > communication between postgres core and the extension uses text\n> > protocol.\n>\n> I'm not sure of the word \"extension\" above. For me the postgres side could\n> be an extension as in \"CREATE EXTENSION\". The command itself could be\n> provided in the extension code, but would not be in the \"CREATE\n> EXTENSION\", it would be something run independently.\n\nOh, I imagined extensions that can be installed by CREATE EXTENSION or\nspecifying it to shared_preload_libraries.\n\nIf the command runs in the background to talk with postgres there are\nsome problems that we need to deal with. For instance, what if the\nprocess downs? Does the postmaster re-execute it? How does it work in\nsingle-user mode? etc. 
It seems to me it will bring additional\ncomplexity.\n\n>\n> > When postgres core needs a DEK identified by KEY-A, it asks\n> > for the extension to get the DEK by passing something like “GET KEY-A”\n> > message, and then the extension asks the existence of that key to AWK\n> > KMS, creates if not exist and returns it to the postgres core. Is my\n> > understanding right?\n>\n> Yes. The command in the use-case you outline would just be an\n> intermediary, but for another use-case it would do the storing. The point\n> of aiming at extensibility if that from pg point of view the external\n> commands provide keys, but what these commands really do to do this can be\n> anything.\n>\n> > When we have TDE feature in the future, we would also need to change\n> > frontend tools such as pg_waldump and pg_rewind that read database\n> > files so that they can read encrypted files. It means that these\n> > frond-end tools also somehow need to communicate with the external\n> > place to get DEKs in order to decrypt encrypted database files. In\n> > your idea, what do you think about how we can support it?\n>\n> Hmmm. My idea was that the natural interface would be to get things\n> through postgres. For a debug tool such as pg_waldump, probably it needs\n> to be adapted if it needs to decrypt data to operate.\n>\n> Now I'm not sure I understood, because of the DEK are managed by postgres\n> in your patch, waldump and other external commands would have no access to\n> the decrypted data anyway, so the issue would be the same?\n\nWith the current patch, we will be able to add a\n--cluster-passphrase-command command-line option to front-end tools\nthat want to read encrypted database files. The front-end tools\nexecute the specified command to get the KEK and unwrap the DEKs. 
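For illustration, that flow (run the passphrase command, derive and verify the KEK, then unwrap a DEK) might be sketched as below. This is a hedged sketch only: the KDF parameters are made up, and an XOR keystream stands in for a real cipher such as AES key wrap purely to show the call shape; it is not the patch's actual code.

```python
# Hedged sketch of "derive KEK from passphrase, verify, unwrap DEK".
# The KDF parameters are illustrative, and the XOR keystream is a
# placeholder for a real key-wrapping cipher.
import hashlib
import hmac
import secrets

def derive_kek(passphrase: bytes, salt: bytes) -> bytes:
    # illustrative KDF parameters, not the patch's actual choices
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000, dklen=32)

def _keystream(key: bytes, n: int) -> bytes:
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def wrap_dek(kek: bytes, dek: bytes) -> bytes:
    ct = bytes(a ^ b for a, b in zip(dek, _keystream(kek, len(dek))))
    tag = hmac.new(kek, ct, hashlib.sha256).digest()
    return ct + tag  # ciphertext || 32-byte integrity tag

def unwrap_dek(kek: bytes, wrapped: bytes) -> bytes:
    ct, tag = wrapped[:-32], wrapped[-32:]
    if not hmac.compare_digest(hmac.new(kek, ct, hashlib.sha256).digest(), tag):
        raise ValueError("wrong passphrase (KEK) or corrupted key file")
    return bytes(a ^ b for a, b in zip(ct, _keystream(kek, len(ct))))
```

In this shape a front-end tool would run the passphrase command, call derive_kek(), and unwrap_dek() each stored wrapped DEK; a wrong passphrase fails the integrity check instead of silently yielding garbage keys.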
The\nfunctions such as running passphrase command, verifying the passphrase\nis correct, and getting wrapped DEKs from the database cluster are\nimplemented in src/common so both can use these functions.\n\n>\n> With file-level encryption, obviously all commands which have to read and\n> understand the files have to be adapted if they are to still work, which\n> is another argument to have some interface rather than integrated\n> server-side stuff, because these external commands would need to be able\n> to get keys and use them as well.\n>\n> Or I misunderstood something.\n>\n> >> I'd like an extensible design to have anything in core. As I said in an\n> >> other mail, if you want to handle a somehow restricted use case, just\n> >> provide an external extension and do nothing in core, please. Put in core\n> >> something that people with a slightly different use case or auditor can\n> >> build on as well. The current patch makes a dozen hard-coded decisions\n> >> which it should not, IMHO.\n> >\n> > It might have confused you that I included key manager and encryption\n> > SQL functions to the patches but this key manager has been designed\n> > dedicated to only TDE.\n>\n> Hmmm. This is NOT AT ALL what the patch does. The documentation in your\n> patch talks about \"column level encryption\", which is an application\n> thing. Now you seem to say that it does not matter and can be removed\n> because the use case is elsewhere.\n>\n> > It might be better to remove both SQL interface\n> > and SQL key we discussed from the patch set as they are actually not\n> > necessary for TDE purposes.\n>\n> The documentation part of the patch, at no point, talks about TDE\n> (transparent data encryption), which is a file-level encryption as far as\n> I understand it, i.e. whole files are encrypted.\n\nI should have described the positioning of these patches.. 
The current\npatch is divided into 7 patches but there are two patches for\ndifferent purposes.\n\nThe first patch is to add an internal key manager. This is\nPostgreSQL’s internal component to manage cryptographic keys mainly\nfor TDE. Originally I proposed both key manager and TDE[1] but these\nare actually independent of each other and we can discuss them\nseparately. Therefore, I started a new thread to discuss only the key\nmanager. According to the discussion so far, since TDE doesn't need to\ndynamically register DEKs the key manager doesn't have an interface to\nregister DEKs for now. We cannot do anything with only the first patch\nbut the plan is to implement TDE on top of the key manager. So we can\nimplement the key manager patch and TDE patch separately but these\nactually depend on each other.\n\nThe second patch is to make the key manager usable even without TDE. The\nidea of this patch is to help the incremental development of TDE.\nThere was a discussion that since the development of both the key\nmanager and TDE will take a long time, it’s better to make the key\nmanager work alone by providing SQL interfaces to it. This patch adds\nnew SQL functions: pg_wrap() and pg_unwrap(), to wrap and unwrap an\narbitrary user secret with an encryption key, called the SQL internal key,\nwhich is managed and stored by the key manager. What this patch does\nis to register the SQL internal key to the key manager and add SQL\nfunctions that wrap and unwrap the given string in the same way as the key\nwrapping used by the key manager. These functions were later renamed to\npg_encrypt() and pg_decrypt().\n\nGiven that the purpose of the key manager is to help TDE, discussing\nthe SQL interface part (i.e., the second patch) deviates from the\noriginal purpose. I think we should discuss the design and\nimplementation of the key manager first and then other additional\ninterfaces. 
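The first-patch key manager described here (a small fixed set of internal DEKs registered at startup, addressed by internal id, with callers getting crypto operations rather than raw key material) can be modeled roughly as follows. The identifiers and the XOR placeholder cipher are purely illustrative, not the patch's design:

```python
# Toy model of the internal key manager: a fixed set of keys registered
# at startup, addressed by internal id; callers get encrypt/decrypt
# operations, never the raw key bytes.
import hashlib
import secrets

KEY_TABLE, KEY_WAL, KEY_SQL = range(3)   # the few internal DEKs

class KeyManager:
    def __init__(self):
        self._deks = {}

    def register_key(self, key_id: int) -> None:
        if key_id in self._deks:
            raise ValueError("key already registered")
        self._deks[key_id] = secrets.token_bytes(32)

    def encrypt(self, key_id: int, data: bytes) -> bytes:
        return self._xor(key_id, data)    # placeholder for a real cipher

    def decrypt(self, key_id: int, data: bytes) -> bytes:
        return self._xor(key_id, data)    # XOR stream is symmetric

    def _xor(self, key_id: int, data: bytes) -> bytes:
        key = self._deks[key_id]          # KeyError if never registered
        stream = b""
        ctr = 0
        while len(stream) < len(data):
            stream += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return bytes(a ^ b for a, b in zip(data, stream))
```

In this toy model, the second patch's SQL functions would simply be wrappers around encrypt/decrypt calls on the SQL internal key.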
So I’ve attached a new version patch and removed the\nsecond patch part so that we can focus on only the key manager part.\n\n>\n> I'm lost, because if you want to do that you cannot easily use\n> padding/HMAC and so because they would change block sizes, and probably\n> you would use CRT instead of CBC to be able to decrypt data selectively.\n>\n\nThe patch introducing TDE will add CTR mode. The padding/HMAC is for\nonly wrapping DEKs by the key manager. Since this patch adds only key\nmanager it has only the encryption methods it needs.\n\n> So you certainly succeeded in confusing me deeply:-)\n>\n> > Aside from the security risk you mentioned, it was a natural design\n> > decision for me that we have our key manager component in postgres core\n> > that is responsible for managing encryption keys for our TDE.\n>\n> The patch really needs a README to explain what it really does, and why,\n> and how, and what is the thread model, what are the choices (there should\n> be as few as possible), how it can/could be extended.\n>\n> I've looked at the whole patch, and I could not find the place where files\n> are actually encrypted/decrypted at a low level, that I would expect for\n> file encryption implementation.\n\nAs I explained above, the patch introduces only the key manager which\nwill be a building block of TDE.\n\n>\n> > To make the key manager and TDE simple as much as possible, we discussed\n> > that we will have cluster-wide TDE and key manager that manages a few\n> > encryption keys used by TDE (e.g. one key for table/index encryption and\n> > another key for WAL encryption), as the first step.\n>\n> Hmmm. Ok. 
So in fact all that is for TDE, *but* the patch does not do TDE,\n> but provides a column-oriented SQL-level encryption, which is unrelated to\n> your actual objective, which is to do file-level encryption in the end.\n>\n> However, for TDE, it may that you cannot do it with a pg extension because\n> for the extension to work the database must work, which would require some\n> \"data\" files not to be encrypted in the first place. That seems like a\n> good argument to actually have something in core.\n>\n> Probably for TDE you only want the configuration file not to be encrypted.\n\nYeah, for TDE, what we have discussed is to encrypt only tables,\nindexes, temporary files, and WAL (and possibly other database files\nthat could have or help to infer user sensitive data). And each\ntable/index page first several bytes in the page header are not\nencrypted.\n\n>\n> I'd still advocate to have the key management system possibly outside of\n> pg, and have pg interact with it to get keys when needed. Probably key ids\n> would be the relative file names in that case. The approach of\n> externalizing encryption/decryption would be totally impractical for\n> performance reasons, though.\n>\n> I see value in Cary Huang suggestion on the thread to have dynamically\n> loaded functions implement an interface. That would at least allow to\n> remove some hardcoded choices such as what cypher is actually used, key\n> sizes, and so on. One possible implementation would be to manage things\n> more or less internally as you do, another to fork an external command and\n> talk with it to do the same.\n>\n> However, I do not share the actual interface briefly outlined: I do not\n> thinkpg should have to care about key management functions such as\n> rotation, generation or derivation, storage… the interest of pg should be\n> limited to retrieving the keys it needs. 
That does not mean such functions\n> do not have security value and should not be implemented, I'd say that it\n> should not be visible/hardcoded in the pg/kms interface, especially if\n> this interface is expected to be generic.\n\nSince the current key manager is designed only for TDE or similar\nencryption features inside PostgreSQL, the usage of the key manager is\nlimited. It’s minimal and simple, but just extensible enough that\nPostgreSQL internal modules such as TDE can register their\ncryptographic keys. I would agree to a more extensible design if it were\nexpected to cover generic use cases, but currently it isn't.\n\n>\n> As I see it, a pg/kms C-level loadable interface would provide the\n> following function:\n>\n> // options would be supplied by a guc and allow to initialize the\n> // interface with the relevant data, whatever the underlying\n> // implementation needs.\n> error kms_init(char *options);\n>\n> // associate opaque key identifier to a local id\n> error kms_key(local_id int, int key_id_len, byte *key_id);\n>\n> or maybe something like:\n>\n> // would return the local id attributed to the key\n> error/int kms_key(key_id_len, key_id);\n>\n> // the actual functions should be clarified\n> // for TDE file-level, probably the encrypted length is the same as the\n> // input, you cannot have padding, hmac or whatever added.\n> // for SQL app-level, the rules could be different\n> error kms_(en|de)crypt(local_id int, int mode, int len,\n> byte *in, byte *out);\n>\n> // maybe\n> error kms_key_forget(local_id int);\n> error kms_destroy(…);\n>\n> // maybe, to allow extensibility and genericity\n> // eg kms_command(\"rotate keys with new kek=123\");\n> error kms_command(char *cmd);\n\nMy first proposal implemented a similar thing: a C-level loadable\ninterface to get, generate, and rotate the KEK. But the concern was the\ndevelopment cost and complexity vs benefit. 
And I don't think it's a\ngood idea to support both key management and encryption/decryption in\nthe kms interface.\n\n>\n> I'm a little bit unsure that there should be only one KMS active ever,\n> though: a file-level vs app-level encryption could have quite different\n> constraints. Also, should the app-level encryption be able to access keys\n> loaded for file-level encryption?\n>\n\nYou mean app-level encryption would also use encryption keys obtained from postgres?\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoBjrbxvaMpTApX1cEsO%3D8N%3Dnc2xVZPB0d9e-VjJ%3DYaRnw%40mail.gmail.com\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 23 Jun 2020 22:46:14 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Tue, 23 Jun 2020 at 22:46, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 19 Jun 2020 at 15:44, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> >\n> >\n> > Hello Masahiko-san,\n> >\n> > > What I referred to \"only one key\" is KEK.\n> >\n> > Ok, sorry, I misunderstood.\n> >\n> > >>> Yeah, it depends on KMS, meaning we need different extensions for\n> > >>> different KMS. A KMS might support an interface that creates key if not\n> > >>> exist during GET but some KMS might support CREATE and GET separately.\n> > >>\n> > >> I disagree that it is necessary, but this is debatable. The KMS-side\n> > >> interface code could take care of that, eg:\n> > >>\n> > >> if command is \"get X\"\n> > >> if (X does not exist in KMS)\n> > >> create a new key stored in KMS, return it;\n> > >> else\n> > >> return KMS-stored key;\n> > >> ...\n> > >>\n> > >> So you can still have a \"GET\" only interface which adapts to the \"final\"\n> > >> KMS. 
Basically, the glue code which implements the interface for the KMS\n> > >> can include some logic to adapt to the KMS point of view.\n> > >\n> > > Is the above code for the extension side, right?\n> >\n> > Such a code could be in the command with which pg communicates (eg through\n> > its stdin/stdout, or whatever) to get keys.\n> >\n> > pg talks to the command, the command can do anything, such as storing keys\n> > or communicating with an external service to retrieve them, anything\n> > really, that is the point.\n> >\n> > I'm advocating defining the pg/command protocol, something along \"GET xxx\"\n> > as you wrote, and possibly provide a possible/reasonable command\n> > implementation, which would be part of the code you put in your patch,\n> > only it would be in the command instead of postgres.\n> >\n> > > For example, if users want to use a cloud KMS, say AWS KMS, to store\n> > > DEKs and KEK they need an extension that is loaded to postgres and can\n> > > communicate with AWS KMS. I imagine that such an extension needs to be\n> > > written in C,\n> >\n> > Why? I could write it in bash, probably. Ok, maybe not so good for suid,\n> > but in principle it could be anything. I'd probably write it in C, though.\n> >\n> > > the communication between the extension uses AWS KMS API, and the\n> > > communication between postgres core and the extension uses text\n> > > protocol.\n> >\n> > I'm not sure of the word \"extension\" above. For me the postgres side could\n> > be an extension as in \"CREATE EXTENSION\". The command itself could be\n> > provided in the extension code, but would not be in the \"CREATE\n> > EXTENSION\", it would be something run independently.\n>\n> Oh, I imagined extensions that can be installed by CREATE EXTENSION or\n> specifying it in shared_preload_libraries.\n>\n> If the command runs in the background to talk with postgres there are\n> some problems that we need to deal with. For instance, what if the\n> process goes down? 
Does the postmaster re-execute it? How does it work in\n> single-user mode? etc. It seems to me it will bring another\n> complexity.\n>\n> >\n> > > When postgres core needs a DEK identified by KEY-A, it asks\n> > > for the extension to get the DEK by passing something like “GET KEY-A”\n> > > message, and then the extension asks the existence of that key to AWS\n> > > KMS, creates it if it does not exist, and returns it to the postgres core. Is my\n> > > understanding right?\n> >\n> > Yes. The command in the use-case you outline would just be an\n> > intermediary, but for another use-case it would do the storing. The point\n> > of aiming at extensibility is that from pg's point of view the external\n> > commands provide keys, but what these commands really do to do this can be\n> > anything.\n> >\n> > > When we have TDE feature in the future, we would also need to change\n> > > frontend tools such as pg_waldump and pg_rewind that read database\n> > > files so that they can read encrypted files. It means that these\n> > > front-end tools also somehow need to communicate with the external\n> > > place to get DEKs in order to decrypt encrypted database files. In\n> > > your idea, what do you think about how we can support it?\n> >\n> > Hmmm. My idea was that the natural interface would be to get things\n> > through postgres. For a debug tool such as pg_waldump, probably it needs\n> > to be adapted if it needs to decrypt data to operate.\n> >\n> > Now I'm not sure I understood, because the DEKs are managed by postgres\n> > in your patch, so waldump and other external commands would have no access to\n> > the decrypted data anyway, so the issue would be the same?\n>\n> With the current patch, we will be able to add a\n> --cluster-passphrase-command command-line option to front-end tools\n> that want to read encrypted database files. The front-end tools\n> execute the specified command to get the KEK and unwrap DEKs. 
The\n> functions such as running the passphrase command, verifying that the passphrase\n> is correct, and getting wrapped DEKs from the database cluster are\n> implemented in src/common so both can use these functions.\n>\n> >\n> > With file-level encryption, obviously all commands which have to read and\n> > understand the files have to be adapted if they are to still work, which\n> > is another argument to have some interface rather than integrated\n> > server-side stuff, because these external commands would need to be able\n> > to get keys and use them as well.\n> >\n> > Or I misunderstood something.\n> >\n> > >> I'd like an extensible design to have anything in core. As I said in\n> > >> another mail, if you want to handle a somehow restricted use case, just\n> > >> provide an external extension and do nothing in core, please. Put in core\n> > >> something that people with a slightly different use case or auditor can\n> > >> build on as well. The current patch makes a dozen hard-coded decisions\n> > >> which it should not, IMHO.\n> > >\n> > > It might have confused you that I included the key manager and encryption\n> > > SQL functions in the patches but this key manager has been designed\n> > > dedicated only to TDE.\n> >\n> > Hmmm. This is NOT AT ALL what the patch does. The documentation in your\n> > patch talks about \"column level encryption\", which is an application\n> > thing. Now you seem to say that it does not matter and can be removed\n> > because the use case is elsewhere.\n> >\n> > > It might be better to remove both the SQL interface\n> > > and the SQL key we discussed from the patch set as they are actually not\n> > > necessary for TDE purposes.\n> >\n> > The documentation part of the patch, at no point, talks about TDE\n> > (transparent data encryption), which is a file-level encryption as far as\n> > I understand it, i.e. whole files are encrypted.\n>\n> I should have described the positioning of these patches. 
The current\n> patch is divided into 7 patches but there are two patches for\n> different purposes.\n>\n> The first patch is to add an internal key manager. This is\n> PostgreSQL’s internal component to manage cryptographic keys mainly\n> for TDE. Originally I proposed both the key manager and TDE[1] but these\n> are actually independent of each other and we can discuss them\n> separately. Therefore, I started a new thread to discuss only the key\n> manager. According to the discussion so far, since TDE doesn't need to\n> dynamically register DEKs the key manager doesn't have an interface to\n> register DEKs for now. We cannot do anything with only the first patch,\n> but the plan is to implement TDE on top of the key manager. So we can\n> implement the key manager patch and TDE patch separately but these\n> actually depend on each other.\n>\n> The second patch is to make the key manager usable even without TDE. The\n> idea of this patch is to help the incremental development of TDE.\n> There was a discussion that since the development of both the key\n> manager and TDE will take a long time, it’s better to make the key\n> manager work alone by providing SQL interfaces to it. This patch adds\n> new SQL functions: pg_wrap() and pg_unwrap(), to wrap and unwrap an\n> arbitrary user secret with the encryption key called the SQL internal key,\n> which is managed and stored by the key manager. What this patch does\n> is to register the SQL internal key to the key manager and add SQL\n> functions that wrap and unwrap the given string in the same way as the key\n> wrapping used by the key manager. These functions were renamed to\n> pg_encrypt() and pg_decrypt().\n>\n> Given that the purpose of the key manager is to help TDE, discussing\n> the SQL interface part (i.e., the second patch) deviates from the\n> original purpose. I think we should discuss the design and\n> implementation of the key manager first and then other additional\n> interfaces. 
So I’ve attached a new version patch and removed the\nsecond patch part so that we can focus on only the key manager part.\n>\n\nSince the previous patch set conflicts with the current HEAD, I've\nattached the rebased patch set.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 31 Jul 2020 16:06:38 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Fri, Jul 31, 2020 at 04:06:38PM +0900, Masahiko Sawada wrote:\n> Since the previous patch set conflicts with the current HEAD, I've\n> attached the rebased patch set.\n\nPatch 0002 fails to apply, so a rebase is needed.\n--\nMichael", "msg_date": "Tue, 8 Sep 2020 17:23:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Fri, Jul 31, 2020 at 04:06:38PM +0900, Masahiko Sawada wrote:\n> > Given that the purpose of the key manager is to help TDE, discussing\n> > the SQL interface part (i.e., the second patch) deviates from the\n> > original purpose. I think we should discuss the design and\n> > implementation of the key manager first and then other additional\n> > interfaces. So I’ve attached a new version patch and removed the\n> > second patch part so that we can focus on only the key manager part.\n> >\n> \n> Since the previous patch set conflicts with the current HEAD, I've\n> attached the rebased patch set.\n\nI have updated the attached patch and am hoping to move this feature\nforward. The changes I made are:\n\n* handle merge conflicts\n* changed ssl initialization to match other places in our code\n* changed StrNCpy() to strlcpy()\n* update the docs\n\nThe first three were needed to get it to compile. I then ran some tests\nusing the attached shell script as my password script. 
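For reference, a minimal stand-in for such a passphrase script might look like the following. This is an illustrative sketch, not the actual attached script; the file path and the secret are placeholders, and for the sake of a self-contained demo the passphrase file is created by the script itself, whereas in reality it would be provisioned separately:

```shell
# Illustrative cluster_passphrase_command stand-in: the passphrase
# lives in a file readable only by the server user, and the script
# writes it to stdout for the server to read.
pass_file=$(mktemp)
printf 'hunter2\n' > "$pass_file"
chmod 600 "$pass_file"
IFS= read -r pass < "$pass_file"   # read strips the trailing newline
printf '%s\n' "$pass"
rm -f "$pass_file"
```

The server would invoke this once per key load, which is also why a script that prompts interactively gets called more than once during initdb, as described below.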
First, I found\nthat initdb called the script twice. The first call worked fine, but\nthe second call would accept a password that didn't match the first\ncall. This is because there are no keys defined, so there is nothing\nfor kmgr_verify_passphrase() to check for passkey verification, so it\njust succeeds. In fact, I can't figure out how to create any keys with\nthe patch, and pg_encrypt() is documented, but not defined anywhere.\n\nSecond, in testing starting/stopping the server, pg_ctl doesn't allow\nthe cluster_passphrase_command to read from /dev/tty, which I think is a\nrequirement because the command could likely require a user-supplied\nunlock key, even if that is not the actual passphrase, just like ssl\nkeys. This is because pg_ctl calls setsid() just before calling execl()\nto start the server, and setsid() disassociates itself from the\ncontrolling terminal. I think the fix is to remove setsid() from pg_ctl\nand add a postmaster flag to call setsid() after it has potentially\ncalled cluster_passphrase_command, and pg_ctl would use that flag.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Fri, 16 Oct 2020 16:24:37 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Second, in testing starting/stopping the server, pg_ctl doesn't allow\n> the cluster_passphrase_command to read from /dev/tty, which I think is a\n> requirement because the command could likely require a user-supplied\n> unlock key, even if that is not the actual passphrase, just like ssl\n> keys. This is because pg_ctl calls setsid() just before calling execl()\n> to start the server, and setsid() disassociates itself from the\n> controlling terminal. 
I think the fix is to remove setsid() from pg_ctl\n> and add a postmaster flag to call setsid() after it has potentially\n> called cluster_passphrase_command, and pg_ctl would use that flag.\n\nWe discussed that and rejected it in the thread leading up to\nbb24439ce [1]. The primary problem being that it's not very clear\nwhen the postmaster should daemonize itself, and later generally\nisn't better. I doubt that this proposal is doing anything to\nclarify that situation.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CAEET0ZH5Bf7dhZB3mYy8zZQttJrdZg_0Wwaj0o1PuuBny1JkEw%40mail.gmail.com", "msg_date": "Fri, 16 Oct 2020 16:56:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Fri, Oct 16, 2020 at 04:56:47PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Second, in testing starting/stopping the server, pg_ctl doesn't allow\n> > the cluster_passphrase_command to read from /dev/tty, which I think is a\n> > requirement because the command could likely require a user-supplied\n> > unlock key, even if that is not the actual passphrase, just like ssl\n> > keys. This is because pg_ctl calls setsid() just before calling execl()\n> > to start the server, and setsid() disassociates itself from the\n> > controlling terminal. I think the fix is to remove setsid() from pg_ctl\n> > and add a postmaster flag to call setsid() after it has potentially\n> > called cluster_passphrase_command, and pg_ctl would use that flag.\n> \n> We discussed that and rejected it in the thread leading up to\n> bb24439ce [1]. The primary problem being that it's not very clear\n> when the postmaster should daemonize itself, and later generally\n> isn't better. I doubt that this proposal is doing anything to\n> clarify that situation.\n\nAgreed. No reason to destabilize the postmaster for this. 
What about\nhaving pg_ctl open /dev/tty, and then pass in an open file descriptor\nthat is a copy of /dev/tty, that can be closed by the postmaster after\nthe cluster_passphrase_command? I just tested this and it worked.\n\nI am thinking we would pass the file descriptor number to the postmaster\nvia a command-line argument. Ideally we would pass in the device name\nof /dev/tty, but I don't know of a good way to do that.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 16 Oct 2020 18:51:11 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Sat, 17 Oct 2020 at 05:24, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Fri, Jul 31, 2020 at 04:06:38PM +0900, Masahiko Sawada wrote:\n> > > Given that the purpose of the key manager is to help TDE, discussing\n> > > the SQL interface part (i.g., the second patch) deviates from the\n> > > original purpose. I think we should discuss the design and\n> > > implementation of the key manager first and then other additional\n> > > interfaces. So I’ve attached a new version patch and removed the\n> > > second patch part so that we can focus on only the key manager part.\n> > >\n> >\n> > Since the previous patch sets conflicts with the current HEAD, I've\n> > attached the rebased patch set.\n>\n> I have updated the attached patch and am hoping to move this feature\n> forward. The changes I made are:\n>\n> * handle merge conflicts\n> * changed ssl initialization to match other places in our code\n> * changed StrNCpy() to strlcpy\n> * update the docs\n\nThank you for updating the patch!\n\n>\n> The first three were needed to get it to compile. I then ran some tests\n> using the attached shell script as my password script. First, I found\n> that initdb called the script twice. 
\n> that initdb called the script twice. 
The first call worked fine, but\n> the second call would accept a password that didn't match the first\n> call. This is because there are no keys defined, so there is nothing\n> for kmgr_verify_passphrase() to check for passkey verification, so it\n> just succeeds. In fact, I can't figure out how to create any keys with\n> the patch,\n\nThe patch introduces only key management infrastructure but with no\nkey. Currently, there is no interface to dynamically add a new\nencryption key. We need to add the new key ID to the internalKeyLengths\narray and increase KMGR_MAX_INTERNAL_KEY. The plan was to add a\nsubsequent patch that adds functionality using encryption keys managed\nby the key manager. The patch was to add two SQL functions: pg_wrap()\nand pg_unwrap(), along with the internal key wrap key.\n\nIIUC, what to integrate with the key manager is still an open\nquestion. The idea of pg_wrap() and pg_unwrap() seems good but it\nstill has the problem that the password could be logged to the server\nlog when the user wraps it. Of course, since the key manager is\noriginally designed for cluster-wide transparent encryption, TDE will\nbe one of the users of the key manager. But there was a discussion that\nit’s better to introduce the key manager first and to make it have\nsmall functions or integrate with existing features such as pgcrypto\nbecause TDE development might take time over the releases. 
So I'm\nthinking of dealing with the problem the pg_wrap idea has, or of making\npgcrypto use the key manager so that it doesn't require the user to\npass the password as a function argument.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 19 Oct 2020 12:15:24 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Mon, Oct 19, 2020 at 11:16 AM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\nThe patch introduces only key management infrastructure but with no\n> key. Currently, there is no interface to dynamically add a new\n> encryption key.\n\n\nI'm a bit confused by the exact intent and use cases behind this patch.\nhttps://www.postgresql.org/message-id/17156d2e419.12a27f6df87825.436300492203108132%40highgo.ca\nthat was somewhat helpful but not entirely clear.\n\nThe main intent of this proposal seems to be to power TDE-style encryption\nof data at rest, with a single master key for the entire cluster. Has any\nconsideration been given to user- or role-level key management as part of\nthis, or is that expected to be done separately and protected by the master\nkey supplied by this patch?\n\nIf so, what if I have a HSM (or virtualised or paravirt or network proxied\nHSM) that I want to use to manage my database keys, such that the database\nmaster key is protected by the HSM? Say I want to put my database key in a\nsmartcard, my machine's TPM, a usb HSM, a virtual HSM provided by my\nVM/cloud platform, etc?\n\nAs far as I can tell with the current design I'd have to encrypt my unlock\npassphrase and put it in the cluster_passphrase_command script or its\narguments. The script would ask the HSM to decrypt the key passphrase and\nwrite that over stdio where Pg would read it and use it to decrypt the\nmaster key(s). 
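A sketch of that workaround, with a local RSA key standing in for the HSM-held key so the whole flow can be shown end to end (all file names and the passphrase itself are illustrative, not from the patch):

```shell
# One-time setup: encrypt the real passphrase with the (public half
# of the) key holder's key and keep only the ciphertext on disk. At
# server start, the wrapper run as cluster_passphrase_command asks
# the key holder to decrypt it and writes the result to stdout.
set -e
dir=$(mktemp -d)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
    -out "$dir/hsm-key.pem" 2>/dev/null
printf 'correct horse battery staple' |
    openssl pkeyutl -encrypt -inkey "$dir/hsm-key.pem" \
        -out "$dir/passphrase.enc"
# What the wrapper script would do when the server invokes it:
pass=$(openssl pkeyutl -decrypt -inkey "$dir/hsm-key.pem" \
    -in "$dir/passphrase.enc")
printf '%s\n' "$pass"
rm -rf "$dir"
```

With a real HSM the private key never leaves the device, so the decrypt step would go through the HSM's interface instead of a local PEM file.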
That would work - but it should not be necessary and it\nweakens the protection offered by the HSM considerably.\n\nI suggest we allow the user to supply their own KEK via a\ncluster_encryption_key GUC. If set, Pg would create an SSLContext with the\nsupplied key and use that SSLContext to decrypt the application keys - with\nno intermediate KEK-derivation based on cluster_passphrase_command\nperformed. cluster_encryption_key could be set to an openssl engine URI, in\nwhich case OpenSSL would transparently use the supplied engine (usually a\nHSM) to decrypt the application keys. We'd install the\ncluster_passphrase_command as an openssl askpass callback so that if the\nHSM requires an unlock password it can be provided - like how it's done for\nlibpq in Pg 13. Some thought is required for how to do key rotation here,\nthough it matters a great deal less when a HSM is managing key escrow.\n\nFor example if I want to lock my database with a YubiHSM I would configure\nsomething like:\n\n cluster_encryption_key = 'pkcs11:token=YubiHSM;id=0:0001;type=private'\n\nThe DB would be encrypted and decrypted using application keys unlocked by\nthe HSM. Backups of the database, stolen disk images, etc, would be\nunreadable unless you have access to another HSM with the same key loaded.\n\nIf cluster_encryption_key is unset, Pg would perform its own KEK derivation\nbased on cluster_passphrase_command as currently implemented.\n\nI really don't think we should be adopting something that doesn't consider\nplatform based hardware key escrow and protection.", "msg_date": "Mon, 26 Oct 2020 22:05:10 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Greetings,\n\n* Craig Ringer (craig.ringer@enterprisedb.com) wrote:\n> On Mon, Oct 19, 2020 at 11:16 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n> > The patch introduces only key management infrastructure but with no\n> > key. Currently, there is no interface to dynamically add a new\n> > encryption key.\n> \n> I'm a bit confused by the exact intent and use cases behind this patch.\n> https://www.postgresql.org/message-id/17156d2e419.12a27f6df87825.436300492203108132%40highgo.ca\n> that was somewhat helpful but not entirely clear.\n> \n> The main intent of this proposal seems to be to power TDE-style encryption\n> of data at rest, with a single master key for the entire cluster. 
Has any\n> consideration been given to user- or role-level key management as part of\n> this, or is that expected to be done separately and protected by the master\n> key supplied by this patch?\n\nI've not been following very closely, but I definitely agree with the\ngeneral feedback here (more on that below), but to this point- I do\nbelieve that was the intent, or at least I sure hope that it was. Being\nable to have user/role keys would certainly be good. Having a way for a\nuser to log in and unlock their key would also be really nice.\n\n> If so, what if I have a HSM (or virtualised or paravirt or network proxied\n> HSM) that I want to use to manage my database keys, such that the database\n> master key is protected by the HSM? Say I want to put my database key in a\n> smartcard, my machine's TPM, a usb HSM, a virtual HSM provided by my\n> VM/cloud platform, etc?\n> \n> As far as I can tell with the current design I'd have to encrypt my unlock\n> passphrase and put it in the cluster_passphrase_command script or its\n> arguments. The script would ask the HSM to decrypt the key passphrase and\n> write that over stdio where Pg would read it and use it to decrypt the\n> master key(s). That would work - but it should not be necessary and it\n> weakens the protection offered by the HSM considerably.\n\nYeah, I do think this is how you'd need to do it and I agree that it'd\nbe better to offer an option that can go to the HSM directly. That\nsaid- I don't think we necessarily want to throw out the command-based\noption, as users may wish to use a vaulting solution or similar instead\nof an HSM. What I am curious about though- what are the thoughts around\nusing a vaulting solution's command-line tool vs. writing code to work\nwith an API? Between these various options, what are the risks of\nhaving a script vs. using an API and would one or the other weaken the\noverall solution? 
Or is what's really needed here a way to tell us\nwhether it's a passphrase we're getting or a proper key, regardless of the\nmethod being used to fetch it?\n\n> I suggest we allow the user to supply their own KEK via a\n> cluster_encryption_key GUC. If set, Pg would create an SSLContext with the\n> supplied key and use that SSLContext to decrypt the application keys - with\n> no intermediate KEK-derivation based on cluster_passphrase_command\n> performed. cluster_encryption_key could be set to an openssl engine URI, in\n> which case OpenSSL would transparently use the supplied engine (usually a\n> HSM) to decrypt the application keys. We'd install the\n> cluster_passphrase_command as an openssl askpass callback so that if the\n> HSM requires an unlock password it can be provided - like how it's done for\n> libpq in Pg 13. Some thought is required for how to do key rotation here,\n> though it matters a great deal less when a HSM is managing key escrow.
Backups of the database, stolen disk images, etc, would be\n> unreadable unless you have access to another HSM with the same key loaded.\n\nWell, you would surely just need the key, since you could change the PG\nconfig to fetch the key from whereever you have it, you wouldn't need an\nactual HSM..\n\n> If cluster_encryption_key is unset, Pg would perform its own KEK derivation\n> based on cluster_passphrase_command as currently implemented.\n\nTo what I was suggesting above- what if we just had a GUC that's\n\"kek_method\" with options 'passphrase' and 'direct', where passphrase\ngoes through KEK and 'direct' doesn't, which just changes how we treat\nthe results of called cluster_passphrase_command?\n\n> I really don't think we should be adopting something that doesn't consider\n> platform based hardware key escrow and protection.\n\nI agree that we should consider platform based hardware key escrow and\nprotection. I'm generally supportive of trying to do so in a way that\nkeeps things very flexible for users without us having to write a lot of\ncode that's either library-specific or solution-specific.\n\nThanks,\n\nStephen", "msg_date": "Mon, 26 Oct 2020 11:02:36 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Mon, Oct 26, 2020 at 11:02 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n\nTL;DR:\n\n* Important to check that key rotation is possible on a replica, i.e.\nprimary and standby can have different cluster passphrase and KEK\nencrypting the same WAL and heap keys;\n* with a HSM we can't read the key out, so a pluggable KEK operations\ncontext or a configurable URI for the KEK is necessary\n* I want the SQL key and SQL wrap/unwrap part in a separate patch, I\ndon't think it's fully baked and oppose its inclusion in its current\nform\n* Otherwise this looks good so far\n\nExplanation and argument for why below.\n\n> I've not been following very 
closely, but I definitely agree with the\n> general feedback here (more on that below), but to this point- I do\n> believe that was the intent, or at least I sure hope that it was. Being\n> able to have user/role keys would certainly be good. Having a way for a\n> user to log in and unlock their key would also be really nice.\n\nRight. AFAICS this is supposed to provide the foundation layer for\nwhole-cluster encryption, and it looks ok for that, caveat about HSMs\naside. I see nothing wrong with using a single key for heap (and one\nfor WAL, or even the same key). Finer grained and autovacuum etc\nbecomes seriously painful.\n\nI want to take a closer look at how the current implementation will\nplay with physical replication. I assume the WAL and heap keys have to\nbe constant for the full cluster lifetime, short of a dump and reload.\nBut I want to make sure that the KEK+HMAC can differ from one node to\nanother, i.e. that we can perform KEK rotation on a replica to\nre-encrypt the WAL and heap keys against a new KEK. This is important\nfor backups, and also for effective use of a HSM where we may not want\nor be able to have the same key on a primary and its replicas. If it\nisn't already supported it looks like it should be simple, but it's\nIMO important not to miss.\n\nThe main issue I have so far is that I don't think the SQL key\nactually fits well with the current proposal. Its proposed interface\nand use cases are incomplete, it doesn't fully address key leak risks,\nthere's no user access control, etc. Also the SQL key part could be\nimplemented on top of the base cluster encryption part, I don't see\nwhy it actually has to integrate with the whole-cluster key management\ndirectly.\n\nSQL KEY\n----\n\nI'm not against the SQL key and wrap/unwrap functionality - quite the\ncontrary, I think it's really important to have something like it. 
But\nis it appropriate to have a single, fixed-for-cluster-lifetime key for\nthis, one with no SQL-level access control over who can or cannot use\nit, etc? The material encrypted with the key is user-exposed so key\nrotation is an issue, but is not addressed here. And the interface\ndoesn't really solve the numerous potential problems with key material\nleaks through logs and error messages.\n\nI just think that if we bake in the proposed user visible wrap/unwrap\ninterface now we're going to regret it later. How will it work when we\nwant to add user- or role- level access control for database-stored\nkeys? When we want to introduce a C-level API for extensions to work\ndirectly with encrypted data like they can work currently with TOASTed\ndata, to prevent decrypted data from ever becoming SQL function\narguments subject to possible leakage and to allow implementation of\nalways-encrypted data types, etc?\n\nMost importantly - I don't think the SQL key adds anything really\ncrucial that we cannot do at the SQL level with an extension. An\nextension \"pg_wrap\" could provide pg_wrap() and pg_unwrap() already,\nusing a single master key much like the SQL key proposed in this\npatch. To store the master key it could:\n\n1. Derive the key at shared_preload_libraries time in the same way\nthis key management system proposes to do so, using a\npg_wrap.pg_wrap_passphrase_command ; or\n2. Read the key from a PEM file on disk, accepting a passphrase to\ndecrypt it via a password command GUC or an SQL-level function call;\nor\n3. Read the key from a pg_wrap.pg_wrap_keys extension catalog, which\nis superuser-only and which is protected by whole-db encryption if in\nuse;\n4. Like (3) but use generic WAL and a custom relation to make the\nin-db key store opaque without use of extensions like pageinspect.\n\nThat way we haven't baked some sort of limited wrap/unwrap into Pg's\nlong term user visible API. 
I'd be totally happy for such a SQL key\nwrap/unwrap to become part of pgcrypto, or a separate extension that\nuses pgcrypto, if you're worried about having it available to users. I\njust don't really want it in src/backend in its current form.\n\nTo simplify (1) we could make the implementation of the KEK/HMAC\nderivation accessible from extensions and allow them to provide their\nown password callback, though that might make life harder if we want\nto change things later, and it'd mean that you couldn't use the\nextension on a db that was not configured for whole db encryption. So\nI'd actually rather make key derivation and storage the extension's\nproblem for now.\n\nOTHER TRANSPARENT ENCRYPTION USE CASES\n----\n\nDoes this patch get in the way of supporting other kinds of\ntransparent encryption that are frequently requested and are in use on\nother systems already?\n\nI don't think so. Whole-cluster encryption is quite separate and the\nproposed patch doesn't seem to do anything that'd make table-, row- or\ncolumn-level encryption, per-user key management, etc any harder.\n\nSpecific use cases I looked at:\n\n* Finer grained keying than whole-cluster for transparent\nencryption-at-rest. As soon as we have relations that require user\nsession supplied information to allow the backend to read the relation\nwe get into a real mess with autovacuum, logical decoding, etc. So if\nanyone wants to implement that sorts of thing they're probably going\nto want to do so separately to block-level whole-cluster encryption,\nin a way that preserves the normal page and page item structure and\nencrypts the row data only.\n\n* Client-driver-assisted transparently encrypted\nat-rest-and-in-transit data, where the database engine doesn't have\nthe encrypt/decrypt keys at all. 
Again in this case they're going to\nhave to do that at the row level or column level, not the block\n(relfilenode extents and WAL) level, otherwise we can't provide\nautovacuum etc.\n\n> > If so, what if I have a HSM [...]\n>\n> [...] I agree that it'd\n> be better to offer an option that can go to the HSM directly.\n\nRight.\n\n> That\n> said- I don't think we necessarily want to throw out tho command-based\n> option, as users may wish to use a vaulting solution or similar instead\n> of an HSM.\n\nI agree. I wasn't proposing to throw out the command based approach,\njust provide a way to inform postgres that it should do operations\nwith the KEK using an external engine instead of deriving its own KEK\nfrom a passphrase and other inputs.\n\n> What I am curious about though- what are the thoughts around\n> using a vaulting solution's command-line tool vs. writing code to work\n> with an API?\n\nI think the code that fetches the cluster passphrase from a command\nshould be interceptable by a hook, so organisations with Wacky\nSecurity Policies Written By People Who Have Heard About Computers But\nNever Used One can jump through the necessary hoops. I am of course\nabsolutely not speaking from experience here, no, not at all... see\nssl_passphrase_function in src/backend/libpq/be-secure-openssl.c, and\nsee src/test/modules/ssl_passphrase_callback/ssl_passphrase_func.c .\n\nSo I suggest something like that - a hook that by default calls an\nexternal command but can be overridden by an extension. It wouldn't be\nopenssl specific like the server key passphrase example though. That\nwas done with an openssl specific hook because we don't know if we're\ngoing to need a passphrase at all until openssl has opened the key. In\nthe cluster encryption case we'll know if we're doing our own KEK+HMAC\ngeneration or not without having to ask the SSL library.\n\n> Between these various options, what are the risks of\n> having a script vs. 
using an API and would one or the other weaken the\n> overall solution? Or is what's really needed here is a way to tell us\n> if it's a passphrase we're getting or a proper key, regardless of the\n> method being used to fetch it?\n\nFor various vault systems I don't think it matters at all whether the\nsecret they manage is the key, input used to generate the key, or\ninput used to decrypt a key stored elsewhere. Either way they have the\npivotal secret. So I don't see much point allowing the command to\nreturn a fully formed key.\n\nThe point of a HSM that you don't get to read the key. Pg can never\nread the key, it can only perform encrypt and decrypt operations on\nthe key using the HSM via the SSL library:\n\nPg -> openssl:\n \"this is the ciphertext of the wal_key. Please decrypt it for me.\"\nopenssl -> engine layer\n \"engine, please decrypt this\"\npkcs#11 engine-> pkcs#11 provider:\n \"please decrypt this\"\npkcs#11 provider -> HSM-specific libraries, network proxies, whatever:\n \"please decrypt this\"\n \"... here's the plaintext\"\n<- flows back up\n\nSo the KEK used to encrypt the main cluster keys for heap and wal\nencryption is never readable by Pg. It usually never enters host\nmemory - in the case of a HSM, the ciphertext is sent over USB or PCIe\nto the HSM and the cleartext comes back.\n\nIn openssl, the default engine is file-based with host software crypto\nimplementations. You can specify alternate engines using various\nOpenSSL APIs, or you can specify them by supplying a URI where you'd\nusually supply a file path to a key.\n\nI'm proposing we make it easy to supply a key URI and let openssl\nhandle the engine etc. It's far from perfect, and it's really meant as\na fallback to allow apps that don't natively understand SSL engines\netc to still use them in a limited capacity.\n\nWhat I'd *prefer* to do is make the function that sets up the KEK\nhookable. 
So by default we'd call a function that'd read the external\npassphrase from a command and use that to generate KEK+HMAC. But an\nextension hook installed at shared_preload_libraries time could\noverride the behaviour completely and return its own implementation.\n\nIn the phrasing of the patch as written that'd probably be a hook that\nreturns its own pg_cipher_ctx* , overriding the default\nimplementation of pg_cipher_ctx * pg_cipher_ctx_create(void); . For a\nHSM the returned context would delegate operations to the HSM.\n\n> This really locks us into OpenSSL for this, which I don't particularly\n> like.\n\nWe're pretty locked into openssl already. I don't like it either, it\nwas just the option that has the least impact/delay on the main work\non this patch.\n\nI'd rather abstract KEK operations behind a context object-like struct\nwith function pointer members, like we do in many other places in Pg.\nMake the default one do the dance of reading the external passphrase\nand generating the KEK on the fly. Allow plugins to override it with\ntheir own, and let them set it up to delegate to a HSM or whatever\nelse they want.\n\nThen ship a simple openssl based default implementation of HSM support\nthat can be shoved in shared_preload_libraries. Or if we don't like\nusing s_p_l, add a separate GUC for cluster_encryption_key_manager or\nwhatever, and a different entrypoint, instead of having s_p_l call\n_PG_init() to register a hook.\n\n> > For example if I want to lock my database with a YubiHSM I would configure\n> > something like:\n> >\n> > cluster_encryption_key = 'pkcs11:token=YubiHSM;id=0:0001;type=private'\n> >\n> > The DB would be encrypted and decrypted using application keys unlocked by\n> > the HSM. 
Backups of the database, stolen disk images, etc, would be\n> > unreadable unless you have access to another HSM with the same key loaded.\n>\n> Well, you would surely just need the key, since you could change the PG\n> config to fetch the key from wherever you have it, you wouldn't need an\n> actual HSM.\n\nRight - if your HSM was programmed by generating a key and storing\nthat into the HSM and you have that key backed up in file form\nsomewhere, you could likely put it in a pem file and use that directly\nby pointing Pg at the file instead of an engine URI.\n\nBut you might not even have the key. In some HSM implementations the\nkey is completely sealed - you can program new HSMs to have the same\nkey by using the same configuration, but you cannot actually obtain\nthe key short of attacks on the HSM hardware itself. That's very much\nby design - the HSM configuration is usually on an air-gapped system,\nand it isn't sufficient to decrypt anything unless you also have\naccess to a copy of the HSM hardware itself. Obviously you accept the\nrisks if you take that approach, and you must have an escape route\nwhere you can re-encrypt the material protected by the HSM against\nsome other key. But it's not at all uncommon.\n\nKey rotation is obviously vital to make this vaguely sane. In Pg's\ncase you'd have to change the key configuration, then trigger a key\nrotation step, which would decrypt with a context obtained from the\nold config then encrypt with a context obtained from the new config.\n\n> > If cluster_encryption_key is unset, Pg would perform its own KEK derivation\n> > based on cluster_passphrase_command as currently implemented.\n>\n> To what I was suggesting above- what if we just had a GUC that's\n> \"kek_method\" with options 'passphrase' and 'direct', where passphrase\n> goes through KEK and 'direct' doesn't, which just changes how we treat\n> the result of calling cluster_passphrase_command?\n\nThat won't work for a HSM. 
It is not possible to extract the key.\n\"direct\" cannot be implemented.\n\n\n", "msg_date": "Tue, 27 Oct 2020 15:07:22 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Mon, Oct 26, 2020 at 10:05:10PM +0800, Craig Ringer wrote:\n> For example if I want to lock my database with a YubiHSM I would configure\n> something like:\n> \n>     cluster_encryption_key = 'pkcs11:token=YubiHSM;id=0:0001;type=private'\n\nWell, openssl uses a prefix before the password string, e.g.:\n\n* pass:password\n* env:var\n* file:pathname\n* fd:number\n* stdin\n\nSee 'man openssl'. I always thought that API was ugly, but I now see\nthe value in it. We could implement a 'command:' prefix now, and maybe\na 'pass:' one, and allow other methods like 'pkcs11' later.\n\nI can also imagine using the 'file' one to allow the key to be placed on\nan encrypted file system that has to be mounted for Postgres to start. 
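The dispatch on such a prefix would be trivial to write; here is a hypothetical sketch (not code from any patch, and the prefix set is only illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical sketch only: split a key-source string of the form
 * "method:rest" (e.g. "file:/etc/pg/kek.pem", "command:/usr/bin/getkey",
 * "pkcs11:token=YubiHSM;id=0:0001") into its method prefix and the
 * remainder.  Returns 0 on success, -1 if no recognized prefix is found.
 */
static int
split_key_source(const char *src, char *method, size_t method_sz,
				 const char **rest)
{
	const char *known[] = {"pass", "env", "file", "fd", "command", "pkcs11"};
	const char *colon = strchr(src, ':');
	size_t		len;
	size_t		i;

	if (colon == NULL)
		return -1;
	len = (size_t) (colon - src);
	if (len == 0 || len >= method_sz)
		return -1;
	for (i = 0; i < sizeof(known) / sizeof(known[0]); i++)
	{
		if (strlen(known[i]) == len && strncmp(src, known[i], len) == 0)
		{
			memcpy(method, src, len);
			method[len] = '\0';
			*rest = colon + 1;		/* text after the first ':' */
			return 0;
		}
	}
	return -1;
}
```

Note the first ':' decides the method, so a 'pkcs11' URI keeps its own embedded colons intact in the remainder.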
\nYou could also have the key on a USB device that has to be inserted to\nbe used, and the 'file' is on the USB key --- seems clearer than having\nto create a script to 'cat' the file.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 27 Oct 2020 07:15:25 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Tue, Oct 27, 2020 at 03:07:22PM +0800, Craig Ringer wrote:\n> On Mon, Oct 26, 2020 at 11:02 PM Stephen Frost <sfrost@snowman.net> wrote:\n> \n> \n> TL;DR:\n> \n> * Important to check that key rotation is possible on a replica, i.e.\n> primary and standby can have different cluster passphrase and KEK\n> encrypting the same WAL and heap keys;\n> * with a HSM we can't read the key out, so a pluggable KEK operations\n> context or a configurable URI for the KEK is necessary\n> * I want the SQL key and SQL wrap/unwrap part in a separate patch, I\n> don't think it's fully baked and oppose its inclusion in its current\n> form\n> * Otherwise this looks good so far\n> \n> Explanation and argument for why below.\n> \n> > I've not been following very closely, but I definitely agree with the\n> > general feedback here (more on that below), but to this point- I do\n> > believe that was the intent, or at least I sure hope that it was. Being\n> > able to have user/role keys would certainly be good. Having a way for a\n> > user to log in and unlock their key would also be really nice.\n> \n> Right. AFAICS this is supposed to provide the foundation layer for\n> whole-cluster encryption, and it looks ok for that, caveat about HSMs\n> aside. I see nothing wrong with using a single key for heap (and one\n> for WAL, or even the same key). 
Finer grained and autovacuum etc\n> becomes seriously painful.\n\nYou need to use separate keys for heap/index and WAL so you can\nreplicate to another server that uses a different heap/index key, but\nthe same WAL. You can then fail-over to the replica and change the WAL\nkey to complete full key rotation. The replication protocol needs to\ndecrypt, and the receiving end has to encrypt with a different\nheap/index key. This is the key rotation method that is planned. This\nis another good reason the keys should be in a separate directory so\nthey can be easily copied or replaced.\n\n> I want to take a closer look at how the current implementation will\n> play with physical replication. I assume the WAL and heap keys have to\n> be constant for the full cluster lifetime, short of a dump and reload.\n\nThe WAL key can change if you are willing to stop/start the server. You\nonly read the WAL during crash recovery.\n\n> The main issue I have so far is that I don't think the SQL key\n> actually fits well with the current proposal. Its proposed interface\n> and use cases are incomplete, it doesn't fully address key leak risks,\n> there's no user access control, etc. Also the SQL key part could be\n> implemented on top of the base cluster encryption part, I don't see\n> why it actually has to integrate with the whole-cluster key management\n> directly.\n\nAgreed. Maybe we should just focus on the TDE use now. I do think the\ncurrent patch is not committable because, with no defined\nkeys, there is no way to validate the boot-time password. The no-key\ncase should be an unsupported configuration. Maybe we need to just\ncreate one key just to verify the boot password.\n\n> \n> SQL KEY\n> ----\n> \n> I'm not against the SQL key and wrap/unwrap functionality - quite the\n> contrary, I think it's really important to have something like it. 
But\n> is it appropriate to have a single, fixed-for-cluster-lifetime key for\n> this, one with no SQL-level access control over who can or cannot use\n> it, etc? The material encrypted with the key is user-exposed so key\n> rotation is an issue, but is not addressed here. And the interface\n> doesn't really solve the numerous potential problems with key material\n> leaks through logs and error messages.\n> \n> I just think that if we bake in the proposed user visible wrap/unwrap\n> interface now we're going to regret it later. How will it work when we\n> want to add user- or role- level access control for database-stored\n> keys? When we want to introduce a C-level API for extensions to work\n> directly with encrypted data like they can work currently with TOASTed\n> data, to prevent decrypted data from ever becoming SQL function\n> arguments subject to possible leakage and to allow implementation of\n> always-encrypted data types, etc?\n> \n> Most importantly - I don't think the SQL key adds anything really\n> crucial that we cannot do at the SQL level with an extension. An\n> extension \"pg_wrap\" could provide pg_wrap() and pg_unwrap() already,\n> using a single master key much like the SQL key proposed in this\n> patch. To store the master key it could:\n\nThe idea of the SQL key was to give the boot key a use, but I am now\nseeing that the SQL key is just holding us back, and is preventing the\nboot testing that is a requirement. Maybe we just need to forget the\nSQL part and focus on the TDE usage now, and come back to the SQL part. \nI am also not 100% clear on the usefulness of the SQL key.\n\n> OTHER TRANSPARENT ENCRYPTION USE CASES\n> ----\n> \n> Does this patch get in the way of supporting other kinds of\n> transparent encryption that are frequently requested and are in use on\n> other systems already?\n> \n> I don't think so. 
Whole-cluster encryption is quite separate and the\n> proposed patch doesn't seem to do anything that'd make table-, row- or\n> column-level encryption, per-user key management, etc any harder.\n\nI think those all are very different and will require more user-level\nfeatures than what is being done here.\n\n> Specific use cases I looked at:\n> \n> * Finer grained keying than whole-cluster for transparent\n> encryption-at-rest. As soon as we have relations that require user\n> session supplied information to allow the backend to read the relation\n> we get into a real mess with autovacuum, logical decoding, etc. So if\n> anyone wants to implement that sorts of thing they're probably going\n> to want to do so separately to block-level whole-cluster encryption,\n> in a way that preserves the normal page and page item structure and\n> encrypts the row data only.\n\nAgreed.\n\n> * Client-driver-assisted transparently encrypted\n> at-rest-and-in-transit data, where the database engine doesn't have\n> the encrypt/decrypt keys at all. 
Again in this case they're going to\n> have to do that at the row level or column level, not the block\n> (relfilenode extents and WAL) level, otherwise we can't provide\n> autovacuum etc.\n\nYes, this is all going to have to be user-level.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 27 Oct 2020 07:34:07 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Tue, 27 Oct 2020, 19:15 Bruce Momjian, <bruce@momjian.us> wrote:\n\n> We could implement a 'command:' prefix now, and maybe\n> a 'pass:' one, and allow other methods like 'pkcs11' later.\n>\n\nWe don't need to do anything except provide a way to tell OpenSSL where to\nget the KEK from, for situations where having Pg generate it internally\nundesirable.\n\nI proposed a simple GUC that we could supply to OpenSSL as a key path\nbecause it's simple. It's definitely not best.\n\nIn my prior mail I outlined what I think is a better way. Abstract key key\ninitialisation - passphrase fetching KEK/HMAC loading and all of it -\nbehind a pluggable interface. Looking at the patch, it's mostly there\nalready. We just need a way to hook the key loading and setup so it can be\noverridden to use whatever method is required. Then KEK operations to\nencrypt and decrypt the heap and WAL keys happen via that abstraction.\n\nThat way Pg does not have to care about the details of hardware key\nmanagement, PKCS#11 or OpenSSL engines, etc.\n\nA little thought is needed to make key rotation work well. Especially when\nyou want to switch from cluster passphrase to a plugin that supports use of\na HVM escrowed key, or vice versa.\n\nBut most of what's needed looks like it's there already. 
It's just down to\nmaking sure the key loading and initialisation is overrideable.
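To make "overrideable" concrete, I am thinking of roughly this shape (all names here are hypothetical, nothing below is from the patch):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical sketch of a pluggable KEK interface.  The default provider
 * would derive the KEK from the cluster passphrase; a plugin could install
 * its own ops that delegate wrap/unwrap to an engine or HSM, so the key
 * manager never sees the KEK itself.
 */
typedef struct KekOps
{
	const char *name;
	/* wrap/unwrap a data key; return 0 on success */
	int			(*wrap) (void *state, const unsigned char *in, size_t inlen,
						 unsigned char *out, size_t *outlen);
	int			(*unwrap) (void *state, const unsigned char *in, size_t inlen,
						   unsigned char *out, size_t *outlen);
	void	   *state;
} KekOps;

/* Stand-in "default" op: a plain copy, NOT real cryptography. */
static int
copy_op(void *state, const unsigned char *in, size_t inlen,
		unsigned char *out, size_t *outlen)
{
	(void) state;
	memcpy(out, in, inlen);
	*outlen = inlen;
	return 0;
}

static KekOps default_kek_ops = {"passphrase-derived", copy_op, copy_op, NULL};
static KekOps *installed_kek_ops = NULL;	/* set by a plugin, if any */

/* The key manager asks for the active ops instead of deriving a KEK itself. */
static KekOps *
get_kek_ops(void)
{
	return installed_kek_ops ? installed_kek_ops : &default_kek_ops;
}
```

A HSM-backed plugin would supply wrap/unwrap callbacks that hand the ciphertext to the engine, so the KEK never has to be readable by Pg at all.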
", "msg_date": "Tue, 27 Oct 2020 22:02:53 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Tue, Oct 27, 2020 at 10:02:53PM +0800, Craig Ringer wrote:\n> On Tue, 27 Oct 2020, 19:15 Bruce Momjian, <bruce@momjian.us> wrote:\n> We don't need to do anything except provide a way to tell OpenSSL where to get\n> the KEK from, for situations where having Pg generate it internally\n> undesirable.\n> \n> I proposed a simple GUC that we could supply to OpenSSL as a key path because\n> it's simple. It's definitely not best.\n> \n> In my prior mail I outlined what I think is a better way. Abstract key key\n> initialisation - passphrase fetching KEK/HMAC loading and all of it - behind a\n> pluggable interface. Looking at the patch, it's mostly there already. We just\n> need a way to hook the key loading and setup so it can be overridden to use\n> whatever method is required. Then KEK operations to encrypt and decrypt the\n> heap and WAL keys happen via that abstraction.\n> \n> That way Pg does not have to care about the details of hardware key management,\n> PKCS#11 or OpenSSL engines, etc.\n> \n> A little thought is needed to make key rotation work well. Especially when you\n> want to switch from cluster passphrase to a plugin that supports use of a HVM\n> escrowed key, or vice versa.\n> \n> But most of what's needed looks like it's there already. It's just down to\n> making sure the key loading and initialisation is overrideable.\n\nI don't know much about how to hook into that stuff so if you have an\nidea, I am all ears. I have used OpenSSL with Yubikey via pksc11. 
You\ncan see the use of it on slide 57 and following:\n\n\thttps://momjian.us/main/writings/crypto_hw_config.pdf#page=57\n\nInterestingly, that still needed the user to type in a key to unlock the\nYubikey, so we might need PKCS11 and a password for the same server\nstart.\n\nI would like to get this moving forward so I will work on the idea of\npassing an open /dev/tty file descriptor from pg_ctl to the postmaster\non start.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 27 Oct 2020 10:20:35 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Tue, Oct 27, 2020 at 10:20:35AM -0400, Bruce Momjian wrote:\n> I don't know much about how to hook into that stuff so if you have an\n> idea, I am all ears. I have used OpenSSL with Yubikey via pksc11. You\n> can see the use of it on slide 57 and following:\n> \n> \thttps://momjian.us/main/writings/crypto_hw_config.pdf#page=57\n> \n> Interestingly, that still needed the user to type in a key to unlock the\n> Yubikey, so we might need PKCS11 and a password for the same server\n> start.\n> \n> I would like to get this moving forward so I will work on the idea of\n> passing an open /dev/tty file descriptor from pg_ctl to the postmaster\n> on start.\n\nHere is an updated patch that uses an argument to pass an open /dev/tty\nfile descriptor to the postmaster. It uses -R for initdb/pg_ctl, -R ###\nfor postmaster/postgres, and %R for cluster_passphrase_command. 
Here is\na sample session:\n\n-->\t$ initdb -R --cluster-passphrase-command '/tmp/pass_fd.sh \"%p\" %R'\n\tThe files belonging to this database system will be owned by user \"postgres\".\n\tThis user must also own the server process.\n\t\n\tThe database cluster will be initialized with locale \"en_US.UTF-8\".\n\tThe default database encoding has accordingly been set to \"UTF8\".\n\tThe default text search configuration will be set to \"english\".\n\t\n\tData page checksums are disabled.\n\tKey management system is enabled.\n\t\n\tfixing permissions on existing directory /u/pgsql/data ... ok\n\tcreating subdirectories ... ok\n\tselecting dynamic shared memory implementation ... posix\n\tselecting default max_connections ... 100\n\tselecting default shared_buffers ... 128MB\n\tselecting default time zone ... America/New_York\n\tcreating configuration files ... ok\n\trunning bootstrap script ...\n-->\tEnter database encryption pass phrase: B1D7B405EDCD97B7351DD3B7AE0637775FFBC6A2C2EEADAEC009A75A58A79F50\n\tok\n\tperforming post-bootstrap initialization ...\n-->\tEnter database encryption pass phrase: B1D7B405EDCD97B7351DD3B7AE0637775FFBC6A2C2EEADAEC009A75A58A79F50\n\tok\n\tsyncing data to disk ... ok\n\t\n\tinitdb: warning: enabling \"trust\" authentication for local connections\n\tYou can change this by editing pg_hba.conf or using the option -A, or\n\t--auth-local and --auth-host, the next time you run initdb.\n\t\n\tSuccess. You can now start the database server using:\n\t\n\t pg_ctl -D /u/pgsql/data -l logfile start\n\t\n\t$ pg_ctl stop\n\tpg_ctl: PID file \"/u/pgsql/data/postmaster.pid\" does not exist\n\tIs server running?\n-->\t$ pg_ctl -l /u/pg/server.log -R start\n\twaiting for server to start...\n-->\tEnter database encryption pass phrase: B1D7B405EDCD97B7351DD3B7AE0637775FFBC6A2C2EEADAEC009A75A58A79F50\n\t done\n\tserver started\n\nAttached is my updated patch, based on Masahiko Sawada's patch, and my\npass_fd.sh script. 
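For reference, the postmaster side of -R could read the terminal reply from the inherited descriptor roughly like this (a simplified sketch, not the code in the attached patch):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <unistd.h>

/*
 * Simplified sketch (not the attached patch's code): read one
 * newline-terminated passphrase from an already-open descriptor, such as
 * the /dev/tty fd handed down via -R.  Returns the number of bytes
 * stored, or -1 on error.
 */
static int
read_passphrase_fd(int fd, char *buf, size_t bufsize)
{
	size_t		used = 0;

	if (bufsize == 0)
		return -1;
	while (used < bufsize - 1)
	{
		char		c;
		ssize_t		n = read(fd, &c, 1);

		if (n < 0)
			return -1;			/* read error */
		if (n == 0 || c == '\n')
			break;				/* EOF or end of line */
		buf[used++] = c;
	}
	buf[used] = '\0';
	return (int) used;
}
```

The real code would of course also have to handle echo disabling and interrupted reads on the tty.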
\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Tue, 27 Oct 2020 21:43:14 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Wed, Oct 28, 2020 at 9:43 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n\n> I don't know much about how to hook into that stuff so if you have an\n> idea, I am all ears.\n\n\nYeah, I have a reasonable idea. The main thing will be to re-read the patch\nand put it into more concrete terms, which I'll try to find time for soon.\nI need to find time to craft a proper demo that uses a virtual hsm, and can\nalso demonstrate how to use the host TPM or a Yubikey using the simple\nopenssl engine interfaces or a URI.\n\n\n I have used OpenSSL with Yubikey via pksc11. You\n> can see the use of it on slide 57 and following:\n>\n> https://momjian.us/main/writings/crypto_hw_config.pdf#page=57\n>\n> Interestingly, that still needed the user to type in a key to unlock the\n> Yubikey, so we might need PKCS11 and a password for the same server\n> start.\n>\n\n\nYes, that's possible. But in that case the passphrase will be asked for by\nopenssl only when required, and we'll need to supply an openssl askpass\nhook.\n\nOn Wed, Oct 28, 2020 at 9:43 AM Bruce Momjian <bruce@momjian.us> wrote:>I don't know much about how to hook into that stuff so if you have anidea, I am all ears.Yeah, I have a reasonable idea. The main thing will be to re-read the patch and put it into more concrete terms, which I'll try to find time for soon. I need to find time to craft a proper demo that uses a virtual hsm, and can also demonstrate how to use the host TPM or a Yubikey using the simple openssl engine interfaces or a URI.  I have used OpenSSL with Yubikey via pksc11.  
You\n> can see the use of it on slide 57 and following:\n>\n> https://momjian.us/main/writings/crypto_hw_config.pdf#page=57\n>\n> Interestingly, that still needed the user to type in a key to unlock the\n> Yubikey, so we might need PKCS11 and a password for the same server\n> start.\n>\n\n\nYes, that's possible. But in that case the passphrase will be asked for by\nopenssl only when required, and we'll need to supply an openssl askpass\nhook.
This may require some \"static inline\"\nwrappers or helpers.\n* PgKeyWrapCtx.key is unused and should probably be deleted\n* HMAC support should go into PgCipherCtx so that we can have a single\nPgCipherCtx that supports cipher ops, cipher+HMAC ops, or just HMAC ops\n* PgKeyWrapCtx.cipherctx should then be supplemented with a hmacctx. It\nshould be legal to set cipherctx and hmacctx to the same value, since in\nsome cases it won't be easy to initialize the backing implementation\nseparately for key and HMAC.\n\nThe patch I've been hacking together will look like this, though I haven't\ngot far along with it yet:\n\n* Give each PgKeyWrapCtx a 'key_name' field to identify it\n* Extract default passphrase based KEK creation into separate function that\nreturns\n a new PgKeyWrapCtx for the KEK, currently called\n kmgr_create_kek_context_from_cluster_passphrase()\n* BootStrapKmgr() and pg_rotate_cluster_passphrase() call\n kmgr_create_kek_context_from_cluster_passphrase()\n instead of doing their own ctx creation\n* [TODO] Replace kmgr_verify_passphrase() with kmgr_verify_ctx(...)\n which takes a PgKeyWrapCtx instead of constructing its own from a\npassphrase\n* [TODO] In InitializeKmgr() use kmgr_verify_ctx() instead of explicit\npassphrase fetch\n* [TODO] Teach PgCipherCtx about HMAC operations\n* [TODO] replace PgKeyWrapCtx.mackey with another PgCipherCtx\n* [TODO] add PgKeyWrapCtx.teardown_cb callback to be called before free\n* [TODO] add a kmgr_create_kek_context() that checks for a hook/plugin\n or other means of loading a non-default means of getting a KEK\n PgKeyWrapContext, and calls\nkmgr_create_kek_context_from_cluster_passphrase()\n by default\n* [TODO] replace calls to kmgr_create_kek_context_from_cluster_passphrase()\n with calls to kmgr_create_kek_context()\n\nThat should hide the details of HMAC operations and of KEK creation from\nkmgr_* .\n\nThen via a TBD configuration mechanism we'd be able to select a method of\ncreating the PgKeyWrapCtx for 
the KEK and its contained PgCipherCtx\nimplementations for cipher and HMAC operations, then use that without\ncaring about how it works internally.\n\nThe key manager no longer has to care if the KEK was created by reading a\npassword from a command and deriving the KEK and HMAC. Or whether it's\nactually backed by an OpenSSL engine that delegates to PKCS#11. kmgr ops\ncan just request the KEK context and use it.\n\nFORK?\n----\n\nOne possible problem with this is that we should not assume we can perform\nKEK operations in postmaster children, since there's no guarantee we can\nuse whatever sits behind a PgCipherCtx after fork(). But AFAICS the KEK\ndoesn't live beyond the various kmgr_ operations as it is, so there's no\nreason it should ever have to be carried over a fork() anyway.\n\nCONFIGURING SOURCE OF KEK\n---\n\nRe the configuration mechanism: the usual way Pg does things is to provide\na global foo_hook_type foo_hook. The foo() function checks for foo_hook and\ncalls it if it's non-null, otherwise it calls the default implementation in\nstandard_foo(). A hook may choose to override standard_foo() completely, or\ntake its own actions before or after. The hook is installed by an extension\nloaded in shared_preload_libraries.\n\nThere are a couple of issues with using this method for kmgr:\n\n* the kmgr appears to need to be able to work in frontend code (?)\n* for key rotation we need to be able to change KEKs, and possibly KEK\nacquisition methods, at runtime\n\nso I'm inclined to handle this a bit like we do for logical decoding output\nplugins instead. Use a normal Pg extension library with PG_MODULE_MAGIC,\nbut dlsym() a different entrypoint. Have that entrypoint populate a struct\nof API function pointers. The kmgr can use that struct to request KEK\nloading. 
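[Editor's note: the struct-of-function-pointers entrypoint sketched above could look something like the following C sketch. All names here — KekContext, KekProviderAPI, kmgr_load_provider, the "_PG_kek_provider_init" entrypoint — are hypothetical illustrations, not symbols from the actual patch.]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Rough sketch of the dlsym()-loaded provider API discussed above.  All
 * names (KekContext, KekProviderAPI, kmgr_load_provider) are hypothetical;
 * the real patch uses PgKeyWrapCtx and friends.
 */
typedef struct KekContext
{
    const char *provider_name;  /* which provider produced this context */
    void       *state;          /* opaque provider state (engine handle, ...) */
} KekContext;

typedef struct KekProviderAPI
{
    /* acquire or derive the KEK; the raw KEK itself is never exposed */
    KekContext *(*create_context) (const char *config);
    /* wrap/unwrap a data key under the KEK */
    int         (*wrap) (KekContext *ctx, const unsigned char *in,
                         size_t inlen, unsigned char *out, size_t *outlen);
    int         (*unwrap) (KekContext *ctx, const unsigned char *in,
                           size_t inlen, unsigned char *out, size_t *outlen);
    /* zero and release any key material */
    void        (*teardown) (KekContext *ctx);
} KekProviderAPI;

/* Built-in default: KEK derived from the cluster passphrase command. */
static KekContext passphrase_context = {"cluster_passphrase", NULL};

static KekContext *
passphrase_create_context(const char *config)
{
    (void) config;              /* would run the passphrase command here */
    return &passphrase_context;
}

static void
passphrase_teardown(KekContext *ctx)
{
    (void) ctx;                 /* would zero key material here */
}

/* wrap/unwrap left NULL in this sketch; a real provider must supply them */
static const KekProviderAPI passphrase_provider = {
    passphrase_create_context, NULL, NULL, passphrase_teardown
};

/*
 * If a plugin is configured, dlopen() it and call a well-known entrypoint
 * that fills in a KekProviderAPI; otherwise fall back to the built-in
 * passphrase-based provider.
 */
const KekProviderAPI *
kmgr_load_provider(const char *kmgr_plugin)
{
    if (kmgr_plugin == NULL || kmgr_plugin[0] == '\0')
        return &passphrase_provider;
    /* dlopen()/dlsym("_PG_kek_provider_init") elided in this sketch */
    return NULL;
}
```

With something of this shape the key manager never has to know whether the KEK comes from a passphrase command, a vault, or an HSM — it only calls through the struct.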
If no kmgr plugin is configured, use the default API struct that\ndoes KEK loading based on password.\n\nWhen re-keying, we'd (re)load the kmgr KEK library, possibly a different\none to that used at startup, or if the user switched to the default method\nwe'd use the default API struct.\n\nTo the user this would probably look like\n\n kmgr_plugin = 'kmgr_openssl_engine'\n kmgr_openssl_engine.key_uri = 'pkcs11:foo;bar;baz'\n\nor however else we feel like spelling it.\n\nReasonable?
", "msg_date": "Wed, 28 Oct 2020 17:16:32 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Tue, 27 Oct 2020 at 20:34, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, Oct 27, 2020 at 03:07:22PM +0800, Craig Ringer wrote:\n> > On Mon, Oct 26, 2020 at 11:02 PM Stephen Frost <sfrost@snowman.net> wrote:\n> >\n> >\n> > TL;DR:\n> >\n> > * Important to check that key rotation is possible on a replica, i.e.\n> > primary and standby can have different cluster passphrase and KEK\n> > encrypting the same WAL and heap keys;\n> > * with a HSM we can't read the key out, so a pluggable KEK operations\n> > context or a configurable URI for the KEK is necessary\n> > * I want the SQL key and SQL wrap/unwrap part in 
a separate patch, I\n> > don't think it's fully baked and oppose its inclusion in its current\n> > form\n> > * Otherwise this looks good so far\n> >\n> > Explanation and argument for why below.\n> >\n> > > I've not been following very closely, but I definitely agree with the\n> > > general feedback here (more on that below), but to this point- I do\n> > > believe that was the intent, or at least I sure hope that it was. Being\n> > > able to have user/role keys would certainly be good. Having a way for a\n> > > user to log in and unlock their key would also be really nice.\n> >\n> > Right. AFAICS this is supposed to provide the foundation layer for\n> > whole-cluster encryption, and it looks ok for that, caveat about HSMs\n> > aside. I see nothing wrong with using a single key for heap (and one\n> > for WAL, or even the same key). Finer grained and autovacuum etc\n> > becomes seriously painful.\n>\n> You need to use separate keys for heap/index and WAL so you can\n> replicate to another server that uses a different heap/index key, but\n> the same WAL. You can then fail-over to the replica and change the WAL\n> key to complete full key rotation. The replication protocol needs to\n> decrypt, and the receiving end has to encrypt with a different\n> heap/index key. This is the key rotation method this is planned. This\n> is another good reason the keys should be in a separate directory so\n> they can be easily copied or replaced.\n\nI think it's better we decrypt WAL in the xlogreader layer, instead of\ndoing it in the replication protocol. That way, we also can support frontend\ntools that need to read WAL such as pg_waldump and pg_rewind as well\nas logical decoding.\n\n>\n> > I want to take a closer look at how the current implementation will\n> > play with physical replication. I assume the WAL and heap keys have to\n> > be constant for the full cluster lifetime, short of a dump and reload.\n>\n> The WAL key can change if you are willing to stop/start the server. 
You\n> only read the WAL during crash recovery.\n\nWe might need to consider having multiple key generations, rather than\nin-place rotation. If we simply update the WAL key in-place in the\nprimary, archive WALs restored via restore_command cannot be decrypted\nin the replica. We might need to do generation management for WAL key\nand provide the functionality to purge old WAL keys.\n\n> >\n> > SQL KEY\n> > ----\n> >\n> > I'm not against the SQL key and wrap/unwrap functionality - quite the\n> > contrary, I think it's really important to have something like it. But\n> > is it appropriate to have a single, fixed-for-cluster-lifetime key for\n> > this, one with no SQL-level access control over who can or cannot use\n> > it, etc? The material encrypted with the key is user-exposed so key\n> > rotation is an issue, but is not addressed here. And the interface\n> > doesn't really solve the numerous potential problems with key material\n> > leaks through logs and error messages.\n> >\n> > I just think that if we bake in the proposed user visible wrap/unwrap\n> > interface now we're going to regret it later. How will it work when we\n> > want to add user- or role- level access control for database-stored\n> > keys? When we want to introduce a C-level API for extensions to work\n> > directly with encrypted data like they can work currently with TOASTed\n> > data, to prevent decrypted data from ever becoming SQL function\n> > arguments subject to possible leakage and to allow implementation of\n> > always-encrypted data types, etc?\n> >\n> > Most importantly - I don't think the SQL key adds anything really\n> > crucial that we cannot do at the SQL level with an extension. An\n> > extension \"pg_wrap\" could provide pg_wrap() and pg_unwrap() already,\n> > using a single master key much like the SQL key proposed in this\n> > patch. 
To store the master key it could:\n>\n> The idea of the SQL key was to give the boot key a use, but I am now\n> seeing that the SQL key is just holding us back, and is preventing the\n> boot testing that is a requirement. Maybe we just need to forget the\n> SQL part and focus on the TDE usage now, and come back to the SQL part.\n> I am also not 100% clear on the usefulness of the SQL key.\n\nI agree to focus on the TDE usage now.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 28 Oct 2020 21:24:35 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Wed, Oct 28, 2020 at 09:24:35PM +0900, Masahiko Sawada wrote:\n> On Tue, 27 Oct 2020 at 20:34, Bruce Momjian <bruce@momjian.us> wrote:\n> > You need to use separate keys for heap/index and WAL so you can\n> > replicate to another server that uses a different heap/index key, but\n> > the same WAL. You can then fail-over to the replica and change the WAL\n> > key to complete full key rotation. The replication protocol needs to\n> > decrypt, and the receiving end has to encrypt with a different\n> > heap/index key. This is the key rotation method this is planned. This\n> > is another good reason the keys should be in a separate directory so\n> > they can be easily copied or replaced.\n> \n> I think it's better we decrypt WAL in the xlogreader layer, instead of\n> doing in replication protocol. That way, we also can support frontend\n> tools that need to read WAL such as pg_waldump and pg_rewind as well\n> as logical decoding.\n\nSure. I was just saying the heap/index files coming from the primary\nshould have decrypted heap/index blocks, but I was not sure what level\nit should happen. 
If the data coming out the primary is encrypted, you\nwould need the old (to decrypt) and new (to encrypt) keys on the\nstandby, which seems too complex.\n\nTo clarify, the data and heap/index pages in the WAL are only encrypted\nwith the WAL key, but when pg_basebackup is streaming the files from\nPGDATA, it shouldn't be encrypted, or encrypted only with the WAL key,\nat the time of transfer since the receiver should be re-encrypting it. \nIf that will not work, we should know now.\n\n> > > I want to take a closer look at how the current implementation will\n> > > play with physical replication. I assume the WAL and heap keys have to\n> > > be constant for the full cluster lifetime, short of a dump and reload.\n> >\n> > The WAL key can change if you are willing to stop/start the server. You\n> only read the WAL during crash recovery.\n> \n> We might need to consider having multiple key generations, rather than\n> in-place rotation. If we simply update the WAL key in-place in the\n> primary, archive WALs restored via restore_command cannot be decrypted\n> in the replica. We might need to do generation management for WAL key\n> and provide the functionality to purge old WAL keys.\n\nSince we have the keys stored in the file system, I think we will use a\ncommand-line tool that can access both old and new keys and re-encrypt\nthe archived WAL. I think old/new keys inside the server is too\ncomplex.\n\n> > The idea of the SQL key was to give the boot key a use, but I am now\n> > seeing that the SQL key is just holding us back, and is preventing the\n> > boot testing that is a requirement. 
Maybe we just need to forget the\n> > SQL part and focus on the TDE usage now, and come back to the SQL part.\n> > I am also not 100% clear on the usefulness of the SQL key.\n> \n> I agree to focus on the TDE usage now.\n\nI admit the SQL key idea was mine, and I now see it was a bad idea since\nit just adds confusion and doesn't add value.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 28 Oct 2020 12:24:47 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "Greetings,\n\n* Craig Ringer (craig.ringer@enterprisedb.com) wrote:\n> On Mon, Oct 26, 2020 at 11:02 PM Stephen Frost <sfrost@snowman.net> wrote:\n> \n> TL;DR:\n> \n> * Important to check that key rotation is possible on a replica, i.e.\n> primary and standby can have different cluster passphrase and KEK\n> encrypting the same WAL and heap keys;\n\nI agree that key rotation would certainly be good to have.\n\n> * with a HSM we can't read the key out, so a pluggable KEK operations\n> context or a configurable URI for the KEK is necessary\n\nThere's a lot of options around HSMs, the Linux crypto API, potential\ndifferent encryption libraries, et al. One thing that I'm not sure\nwe're being clear enough on here is when we're talking about a KEK (key\nencryption key) vs. when we're talking about actually off-loading all of\nthe encryption to an HSM or to an OpenSSL engine (which might in turn\nuse the Linux crypto API...), etc.\n\nAgreed that, with some HSMs, we aren't able to actually pull out the\nkey. Depending on the HSM, it may or may not be able to perform\nencryption and decryption with any kind of speed and therefore we should\nhave options which don't require that. 
This would be the typical case\nwhere we'd have a KEK which encrypts a key we have stored and then that\nkey is what's actually used for the encryption/decryption of the data.\n\n> * I want the SQL key and SQL wrap/unwrap part in a separate patch, I\n> don't think it's fully baked and oppose its inclusion in its current\n> form\n\nI'm generally a fan of having something at the SQL level, but I agree\nthat it doesn't need to be part of this initial capability and could be\ndone later as a separate patch.\n\n> Most importantly - I don't think the SQL key adds anything really\n> crucial that we cannot do at the SQL level with an extension. An\n> extension \"pg_wrap\" could provide pg_wrap() and pg_unwrap() already,\n> using a single master key much like the SQL key proposed in this\n> patch. To store the master key it could:\n\nLots of things can be done in extensions but, at least for my part, I'd\nmuch rather see us build in an SQL key capability (with things like\ngrammar support and being able to tie it to a role cleanly) than to try\nand figure out how to make this work as an extension.\n\n> That way we haven't baked some sort of limited wrap/unwrap into Pg's\n> long term user visible API. I'd be totally happy for such a SQL key\n> wrap/unwrap to become part of pgcrypto, or a separate extension that\n> uses pgcrypto, if you're worried about having it available to users. I\n> just don't really want it in src/backend in its current form.\n\nThere's no shortage of interfaces that exist in other database systems\nfor this that we can look at to help guide us in coming up with a good\nAPI here. All that said, we can debate that on another thread and\nindependently of this discussion around TDE.\n\n> OTHER TRANSPARENT ENCRYPTION USE CASES\n> ----\n> \n> Does this patch get in the way of supporting other kinds of\n> transparent encryption that are frequently requested and are in use on\n> other systems already?\n> \n> I don't think so. 
Whole-cluster encryption is quite separate and the\n> proposed patch doesn't seem to do anything that'd make table-, row- or\n> column-level encryption, per-user key management, etc any harder.\n> \n> Specific use cases I looked at:\n> \n> * Finer grained keying than whole-cluster for transparent\n> encryption-at-rest. As soon as we have relations that require user\n> session supplied information to allow the backend to read the relation\n> we get into a real mess with autovacuum, logical decoding, etc. So if\n> anyone wants to implement that sorts of thing they're probably going\n> to want to do so separately to block-level whole-cluster encryption,\n> in a way that preserves the normal page and page item structure and\n> encrypts the row data only.\n\nI tend to agree with this.\n\n> * Client-driver-assisted transparently encrypted\n> at-rest-and-in-transit data, where the database engine doesn't have\n> the encrypt/decrypt keys at all. Again in this case they're going to\n> have to do that at the row level or column level, not the block\n> (relfilenode extents and WAL) level, otherwise we can't provide\n> autovacuum etc.\n\n+100 to having client-driver-assisted encryption, this solves real\nattack vectors which traditional TDE simply doesn't, compared to\nfilesystem or block device level encryption (even though lots of\npeople seem to think it does, which is bizarre to me).\n\n> > That\n> > said- I don't think we necessarily want to throw out tho command-based\n> > option, as users may wish to use a vaulting solution or similar instead\n> > of an HSM.\n> \n> I agree. 
I wasn't proposing to throw out the command based approach,\n> just provide a way to inform postgres that it should do operations\n> with the KEK using an external engine instead of deriving its own KEK\n> from a passphrase and other inputs.\n\nI would think we'd want to enable admins to be able to control if what\nis being provided is a KEK (where the key is then decrypted by PG and PG\nthen uses whatever libraries it's built with to perform the encryption\nand decryption in PG process space), or an engine/offloading\nconfiguration (where PG doesn't ever see the actual key and all\nencryption and decryption is done outside of PG's control by an HSM or\nthe Linux kernel through the crypto API or whatever).\n\nThe use-cases I'm thinking about:\n\n- User has a Yubikey, but would like PG to be able to write more than\n one block at a time. In this case, the Yubikey would have a KEK which\n PG doesn't ever see. PG would have an encrypted blob that it then\n asks the yubikey to decrypt which contains the actual key that's then\n kept in PG's memory to perform the encryption/decryption. Naturally,\n if that key is stolen then an attacker could decrypt the entire\n database, even if they don't have the yubikey. An attacker could\n acquire that key by having sufficient access on the PG sever to be\n able to read PG's memory.\n\n- User has a Thales Luna PCIe HSM, or similar. In this case, the user\n wants *all* of the encryption/decryption happening on the HSM and none\n of it happening in PG space, making it impossible for an attacker to\n acquire the actual key.\n\n- User has a yubikey, similar to #1, but would like to have the Linux\n kernel used to safe-guard the actual key used. 
This is a bit of an\n in-between area between the first case above and the second-\n specifically, a yubikey could have the KEK but then the actual data\n encryption key isn't given to PG, it's put into the Linux kernel's\n keyring and PG uses (perhaps through OpenSSL) the Linux crypto API to\n off-load the actual encryption and decryption to have that happening\n outside of PG's process space. This would make it much more difficult\n for an attacker to acquire the key if they only have control over PG\n or the postgres unix account, since the Linux kernel would prevent\n access to it, but it wouldn't require a HSM crypto accelerator. Of\n course, should an attacker gain root or direct physical access to the\n system somehow, they might be able to acquire the actual data\n encryption key that way.\n\n- User has a vaulting solution, and perhaps wants to store the actual\n encryption/decryption key there, or perhaps the user wants to store a\n passphrase in the vault and have PG derive the actual key from that.\n Either seems like it could be reasonable.\n\n- User hasn't got anything special and just wants to keep it simple by\n using a passphrase that's entered when PG is started up.\n\n> > What I am curious about though- what are the thoughts around\n> > using a vaulting solution's command-line tool vs. writing code to work\n> > with an API?\n> \n> I think the code that fetches the cluster passphrase from a command\n> should be interceptable by a hook, so organisations with Wacky\n> Security Policies Written By People Who Have Heard About Computers But\n> Never Used One can jump through the necessary hoops. I am of course\n> absolutely not speaking from experience here, no, not at all... see\n> ssl_passphrase_function in src/backend/libpq/be-secure-openssl.c, and\n> see src/test/modules/ssl_passphrase_callback/ssl_passphrase_func.c .\n> \n> So I suggest something like that - a hook that by default calls an\n> external command but can by overridden by an extension. 
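[Editor's note: for readers unfamiliar with the convention referenced here, the usual PostgreSQL hook shape works roughly as below. This is a minimal sketch only; kek_setup_hook, standard_setup_kek and my_setup_kek are illustrative names, not symbols from the patch or from be-secure-openssl.c.]

```c
#include <assert.h>
#include <stddef.h>

/*
 * The conventional PostgreSQL hook shape: a global function pointer that
 * an extension loaded via shared_preload_libraries can set from _PG_init().
 * All names below are illustrative only.
 */
typedef int (*kek_setup_hook_type) (void);

/* NULL unless an extension has installed an override */
kek_setup_hook_type kek_setup_hook = NULL;

/* Default: run the external passphrase command and derive KEK+HMAC keys. */
static int
standard_setup_kek(void)
{
    return 0;                   /* 0 = success in this sketch */
}

/* Dispatcher: call the hook if installed, else the built-in default. */
int
setup_kek(void)
{
    if (kek_setup_hook != NULL)
        return kek_setup_hook();
    return standard_setup_kek();
}

/* What an extension's override might look like. */
int
my_setup_kek(void)
{
    /* e.g. fetch the KEK from a vault or HSM instead of running a command */
    return 1;
}
```

An extension would assign kek_setup_hook from its _PG_init(), either replacing the default behaviour entirely or wrapping it.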
It wouldn't be\n> openssl specific like the server key passphrase example though. That\n> was done with an openssl specific hook because we don't know if we're\n> going to need a passphrase at all until openssl has opened the key. In\n> the cluster encryption case we'll know if we're doing our own KEK+HMAC\n> generation or not without having to ask the SSL library.\n\nWhat I'm wondering about here is if we should make it an explicit option\nfor a user to pick through the server configuration about if they're\ngiving PG a direct key to use, a KEK that's actually meant to decrypt\nthe data key, a way to fetch the direct key or the KEK, or a engine\nwhich has the KEK to ask to decrypt the data key, etc. If we can come\nup with a way to configure PG that will support the different use cases\noutlined above without being overly complicated, that'd be great. I'm\nnot sure that I see that in what you've proposed here, but maybe by\ngoing through each of the use-cases and showing how a user would\nconfigure PG for each with this proposal, I will.\n\n> > Between these various options, what are the risks of\n> > having a script vs. using an API and would one or the other weaken the\n> > overall solution? Or is what's really needed here is a way to tell us\n> > if it's a passphrase we're getting or a proper key, regardless of the\n> > method being used to fetch it?\n> \n> For various vault systems I don't think it matters at all whether the\n> secret they manage is the key, input used to generate the key, or\n> input used to decrypt a key stored elsewhere. Either way they have the\n> pivotal secret. So I don't see much point allowing the command to\n> return a fully formed key.\n\nI hadn't really considered that to be a distinction either, so I'm glad\nthat it sounds like we agreed on that point.\n\n> The point of a HSM that you don't get to read the key. 
Pg can never\n> read the key, it can only perform encrypt and decrypt operations on\n> the key using the HSM via the SSL library:\n\nThis really depends on exactly what \"key\" is being referred to here, and\nwhere the encryption/decryption is happening. Hopefully the above use\ncases help clarify.\n\n> Pg -> openssl:\n> \"this is the ciphertext of the wal_key. Please decrypt it for me.\"\n> openssl -> engine layer\n> \"engine, please decrypt this\"\n> pkcs#11 engine-> pkcs#11 provider:\n> \"please decrypt this\"\n> pkcs#11 provider -> HSM-specific libraries, network proxies, whatever:\n> \"please decrypt this\"\n> \"... here's the plaintext\"\n> <- flows back up\n\nRight- in this case, ultimately, the actual key used for the encryption\nand decryption ends up in PG's memory space as plaintext and could\ntherefore be acquired by an attacker with access to PG memory space.\n\n> So the KEK used to encrypt the main cluster keys for heap and wal\n> encryption is never readable by Pg. It usually never enters host\n> memory - in the case of a HSM, the ciphertext is sent over USB or PCIe\n> to the HSM and the cleartext comes back.\n\nAgreed, the KEK isn't, but that isn't actually all that interesting\nsince the KEK isn't needed to decrypt the data.\n\n> In openssl, the default engine is file-based with host software crypto\n> implementations. You can specify alternate engines using various\n> OpenSSL APIs, or you can specify them by supplying a URI where you'd\n> usually supply a file path to a key.\n\nRight.\n\n> I'm proposing we make it easy to supply a key URI and let openssl\n> handle the engine etc. 
It's far from perfect, and it's really meant as\n> a fallback to allow apps that don't natively understand SSL engines\n> etc to still use them in a limited capacity.\n\nI agree that it doesn't seem like a bad approach to expose that URI, but\nI'm not sure that's really the end of it since there's going to be cases\nwhere people would like to have a KEK on a yubikey and there'll be other\ncases where people would like to offload all of the encryption and\ndecryption to a HSM crypto accelerator and, ideally, we'd allow them to\nbe able to configure PG for either of those cases.\n\n> What I'd *prefer* to do is make the function that sets up the KEK\n> hookable. So by default we'd call a function that'd read the external\n> passphrase from a command use that to generate KEK+HMAC. But an\n> extension hook installed at shared_preload_libraries time could\n> override the behaviour completely and return its own implementation.\n\nI don't see a problem with adding hooks, where they make sense, but we\nshould also make things work in a sensible way and a way that works with\nat least the use-cases that I've outlined, ideally, without having to go\nget an extension or write C code.\n\n> > This really locks us into OpenSSL for this, which I don't particularly\n> > like.\n> \n> We're pretty locked into openssl already. I don't like it either, it\n> was just the option that has the least impact/delay on the main work\n> on this patch.\n\nThere's an active patch that's been worked on for quite some time that's\ngetting some renewed interest in adding NSS support, something I\ncertainly support also, so we really shouldn't be taking steps that end\nup making it more difficult to support alternatives. 
Perhaps a generic\n'key URI' type of option wouldn't be too bad, and each library we\nsupport could parse that string out based on what information it needs\n(eg: for NSS, a database + key nickname could be provided in some\nspecific format), but overall we certainly shouldn't be baking things in\nwhich are very OpenSSL-specific and exposed to users.\n\n> I'd rather abstract KEK operations behind a context object-like struct\n> with function pointer members, like we do in many other places in Pg.\n> Make the default one do the dance of reading the external passphrase\n> and generating the KEK on the fly. Allow plugins to override it with\n> their own, and let them set it up to delegate to a HSM or whatever\n> else they want.\n> \n> Then ship a simple openssl based default implementation of HSM support\n> that can be shoved in shared_preload_libraries. Or if we don't like\n> using s_p_l, add a separate GUC for cluster_encryption_key_manager or\n> whatever, and a different entrypoint, instead of having s_p_l call\n> _PG_init() to register a hook.\n\nI definitely think we want to support things directly in PG and not\nrequire an extension or something to be in s_p_l for this.\n\n> > > For example if I want to lock my database with a YubiHSM I would configure\n> > > something like:\n> > >\n> > > cluster_encryption_key = 'pkcs11:token=YubiHSM;id=0:0001;type=private'\n> > >\n> > > The DB would be encrypted and decrypted using application keys unlocked by\n> > > the HSM. 
Backups of the database, stolen disk images, etc, would be\n> > > unreadable unless you have access to another HSM with the same key loaded.\n> >\n> > Well, you would surely just need the key, since you could change the PG\n> > config to fetch the key from wherever you have it, you wouldn't need an\n> > actual HSM.\n> \n> Right - if your HSM was programmed by generating a key and storing\n> that into the HSM and you have that key backed up in file form\n> somewhere, you could likely put it in a pem file and use that directly\n> by pointing Pg at the file instead of an engine URI.\n\nSure.\n\n> But you might not even have the key. In some HSM implementations the\n> key is completely sealed - you can program new HSMs to have the same\n> key by using the same configuration, but you cannot actually obtain\n> the key short of attacks on the HSM hardware itself. That's very much\n> by design - the HSM configuration is usually on an air-gapped system,\n> and it isn't sufficient to decrypt anything unless you also have\n> access to a copy of the HSM hardware itself. Obviously you accept the\n> risks if you take that approach, and you must have an escape route\n> where you can re-encrypt the material protected by the HSM against\n> some other key. But it's not at all uncommon.\n\nRight, but in such cases you'd need an HSM that's able to perform\nencryption and decryption at some reasonable rate.\n\n> Key rotation is obviously vital to make this vaguely sane. 
In Pg's\n> case you'd have to change the key configuration, then trigger a key\n> rotation step, which would decrypt with a context obtained from the\n> old config then encrypt with a context obtained from the new config.\n\nYes, key rotation is an important part.\n\n> > > If cluster_encryption_key is unset, Pg would perform its own KEK derivation\n> > > based on cluster_passphrase_command as currently implemented.\n> >\n> > To what I was suggesting above- what if we just had a GUC that's\n> > \"kek_method\" with options 'passphrase' and 'direct', where passphrase\n> > goes through KEK and 'direct' doesn't, which just changes how we treat\n> > the results of called cluster_passphrase_command?\n> \n> That won't work for a HSM. It is not possible to extract the key.\n> \"direct\" cannot be implemented.\n\nPerhaps the above helps explain what I was getting at there.\n\nThanks,\n\nStephen", "msg_date": "Wed, 28 Oct 2020 13:22:25 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Wed, Oct 28, 2020 at 12:02:46PM +0800, Craig Ringer wrote:\n> On Wed, Oct 28, 2020 at 9:43 AM Bruce Momjian <bruce@momjian.us> wrote:\n> I have used OpenSSL with Yubikey via pkcs11. You\n> can see the use of it on slide 57 and following:\n> \n>     https://momjian.us/main/writings/crypto_hw_config.pdf#page=57\n> \n> Interestingly, that still needed the user to type in a key to unlock the\n> Yubikey, so we might need PKCS11 and a password for the same server\n> start.\n> \n> Yes, that's possible. 
But in that case the passphrase will be asked for by\n> openssl only when required, and we'll need to supply an openssl askpass hook.\n\nWhat we _will_ need is access to a /dev/tty file descriptor, and this\npatch does that, though it closes it as soon as the internal keys are\nunlocked so the terminal can be disconnected from the database\nprocesses.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 28 Oct 2020 14:29:16 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Wed, Oct 28, 2020 at 02:29:16PM -0400, Bruce Momjian wrote:\n> On Wed, Oct 28, 2020 at 12:02:46PM +0800, Craig Ringer wrote:\n> > Yes, that's possible. But in that case the passphrase will be asked for by\n> > openssl only when required, and we'll need to supply an openssl askpass hook.\n> \n> What we _will_ need is access to a /dev/tty file descriptor, and this\n> patch does that, though it closes it as soon as the internal keys are\n> unlocked so the terminal can be disconnected from the database\n> processes.\n\nFYI, the file descriptor facility will eventually allow for SSL\ncertificate unlocking passwords to be prompted from the terminal,\ninstead of requiring the use of ssl_passphrase_command, but let's get\nthe facility fully completed first.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 28 Oct 2020 15:12:19 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Wed, Oct 28, 2020 at 05:16:32PM +0800, Craig Ringer wrote:\n> On Wed, Oct 28, 2020 at 12:02 PM Craig Ringer <craig.ringer@enterprisedb.com>\n> wrote:\n> \n> 
On Wed, Oct 28, 2020 at 9:43 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> \n> I don't know much about how to hook into that stuff so if you have an\n> idea, I am all ears.\n> \n> \n> Yeah, I have a reasonable idea. The main thing will be to re-read the patch\n> and put it into more concrete terms, which I'll try to find time for soon.\n> I need to find time to craft a proper demo that uses a virtual hsm, and can\n> also demonstrate how to use the host TPM or a Yubikey using the simple\n> openssl engine interfaces or a URI.\n> \n> \n> Do you have this in a public git tree anywhere? If not please consider using\n> \"git format-patch -v 1 -1\" or similar to generate it, so I can \"git am\" the\n> patch.\n\nI have made a github branch, and will keep it updated:\n\n\thttps://github.com/bmomjian/postgres/tree/key\n\nI am also attaching an updated patch.\n\n> A few comments on the patch as I read through. Some will be addressed by the\n> quick PoC I'm preparing for pluggable key derivation, some won't. In no\n> particular order:\n> \n> * The term KEK only appears in the docs; where it appears in the sources it's\n> lower case. I suggest making \"KEK\" grep-able in the sources.\n\nFixed.\n\n> * BootStrapKmgr() says to call once on \"system install\" . I suggest \"initdb\".\n\nDone.\n\n> * The jumble of #ifdef FRONTEND in src/common/kmgr_utils.c shouldn't remain in\n> the final patch if possible. This may require some \"static inline\" wrappers or\n> helpers.\n\nI can do this if you give me more details.\n\n> * PgKeyWrapCtx.key is unused and should probably be deleted\n\nRemoved.\n\n> * HMAC support should go into PgCipherCtx so that we can have a single\n> PgCipherCtx that supports cipher ops, cipher+HMAC ops, or just HMAC ops\n> * PgKeyWrapCtx.cipherctx should then be supplemented with a hmacctx. 
It should\n> be legal to set cipherctx and hmacctx to the same value, since in some cases\n> the it won't be easy to initialize the backing implementation separately for\n> key and HMAC.\n\nSorry, I don't know how to do the above items.\n\n> FORK?\n> ----\n> \n> One possible problem with this is that we should not assume we can perform KEK\n> operations in postmaster children, since there's no guarantee we can use\n> whatever sits behind a PgCipherCtx after fork(). But AFAICS the KEK doesn't\n> live beyond the various kmgr_ operations as it is, so there's no reason it\n> should ever have to be carried over a fork() anyway.\n\nYes, I think so.\n\n> * the kmgr appears to need to be able to work in frontend code (?)\n> * for key rotation we need to be able to change KEKs, and possibly KEK\n> acquisition methods, at runtime\n\nWe might need to change the KEK using a command-line tool so we can more\neasily prompt for the new KEK.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Wed, 28 Oct 2020 16:23:59 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" }, { "msg_contents": "On Thu, Oct 29, 2020 at 1:22 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n\n> > Most importantly - I don't think the SQL key adds anything really\n> > crucial that we cannot do at the SQL level with an extension. An\n> > extension \"pg_wrap\" could provide pg_wrap() and pg_unwrap() already,\n> > using a single master key much like the SQL key proposed in this\n> > patch. 
To store the master key it could:\n>\n> Lots of things can be done in extensions but, at least for my part, I'd\n> much rather see us build in an SQL key capability (with things like\n> grammar support and being able to tie it to a role cleanly) than to try\n> and figure out how to make this work as an extension.\n>\n\nI agree with you there. I'm suggesting that this first patch focus on full\non-disk encryption, and that someone who desperately needs SQL-level keyops\ncould build on this patch in an extension.\n\nI definitely don't want an extension to be the preferred / blessed way to\ndo those things, I'm only pointing out that deferring the SQL-level stuff\ndoesn't prevent someone from doing it if they need those capabilities\nbefore a mature core patch is ready for them. Trying to roll the SQL-level\nstuff into this patch will distract from getting the basics working and\neither cause massive scope creep or leave us with a seriously limited\ninterface that will make doing it right later much harder.\n\n> +100 to having client-driver-assisted encryption, this solves real\n> attack vectors which traditional TDE simply doesn't, compared to\n> filesystem or block device level encryption (even though lots of\n> people seem to think it does, which is bizarre to me).\n\nMany things people believe about security are bizarre to me. I stopped\nbeing surprised a long time ago...\n\n> I would think we'd want to enable admins to be able to control if what\n> is being provided is a KEK (where the key is then decrypted by PG and PG\n> then uses whatever libraries it's built with to perform the encryption\n> and decryption in PG process space), or an engine/offloading\n> configuration (where PG doesn't ever see the actual key and all\n> encryption and decryption is done outside of PG's control by an HSM or\n> the Linux kernel through the crypto API or whatever).\n\nI had that in mind too, but deliberately did not raise it because I don't\nthink it's necessary to address that when introducing the basics of full\non-disk encryption.\n\nI just don't think there are enough users who both have access to a high\nperformance PCIe or SoC based crypto offload engine and could tolerate the\nlimited database throughput they'd get when using even the most optimised\ncrypto offload engine out there. Most HSMs are optimised for SSL/TLS and\nfor asymmetric crypto ops using RSA etc, plus small-packet AES. There are\nalso crypto offload cards for high throughput bulk symmetric AES etc but\nthey don't all have HSM-like secrecy features, plus the cost tends to be\nabsolutely staggering.\n\nSo I thought it made sense to focus on the KEK for now. I don't think\nmanaging the WAL and heap keys in a HSM is a realistic use case for all but\nthe tiniest possible set of users, and the complexity we'd have to deal with\nin terms of key rotations etc would be much greater.\n\n> The use-cases I'm thinking about:\n>\n> - User has a Yubikey, but would like PG to be able to write more than\n> one block at a time. In this case, the Yubikey would have a KEK which\n> PG doesn't ever see.\n\n\nYes. This is the main case I'm focused on making it possible to add support\nfor. Not necessarily in the first cut of this patch, but I want to at least\nensure that this patch doesn't bake in so many assumptions about the KEK\nthat it'd be really hard to add external KEK management later.\n\n> PG would have an encrypted blob that it then\n> asks the yubikey to decrypt which contains the actual key that's then\n> kept in PG's memory to perform the encryption/decryption. Naturally,\n> if that key is stolen then an attacker could decrypt the entire\n> database, even if they don't have the yubikey. An attacker could\n> acquire that key by having sufficient access on the PG server to be\n> able to read PG's memory.\n>\n\nCorrect. Or they could gain the rights to run code as the postgres unix\nuser and ask the HSM to decrypt the cluster keys for them - assuming the\nHSM doesn't have any external authorization channel or checks, like PIN\nentry, touch-test for physical access, or the like.\n\nFor that to actually be useful they also have to have a copy of the\ndatabase's on-disk representation - copy it off, steal a backup, etc. If\nthey gained enough access to copy the whole DB off they can probably just\nas easily pg_dump it though; the only way to prevent that kind of attack is\nto use client-driver-side encryption which is a totally different topic.\n\nStealing a backup then separately breaking into a running instance with\nmatching keys to steal the key is a pretty high bar to set.\n\nThe main weakness here is with replicas. But it doesn't really matter if\nthe replicas have the same heap and WAL keys as the primary or not, if the\nattacker compromises one replica your data is still exposed.\n\n> - User has a Thales Luna PCIe HSM, or similar. In this case, the user\n> wants *all* of the encryption/decryption happening on the HSM and none\n> of it happening in PG space, making it impossible for an attacker to\n> acquire the actual key.\n>\n\nRight. They can still pg_dump it, or trick Pg into decrypting it for them\nin other ways, but they cannot steal the key then use it to decrypt a\nstolen copy of the DB itself.\n\nPer above, though, I don't think this actually adds all that much real\nworld security over protecting the KEK.\n\nI mean - face it, building database encryption into postgres itself isn't\nthat much stronger than doing it at the filesystem level in a LUKS volume\nor similar. It's a marketing thing as much as a real world security\nbenefit. The interesting parts only really come when the DB doesn't even\nhave access to the keys (client-driver encryption) or only has transient\naccess to the keys (session-level client secret key unlock), neither of\nwhich are within the scope of this proposed patch.\n\nHandling encryption at the Pg level is mainly nice for backup protection.\n\nTo be clear I'm not against making this possible, and think it should\nactually be relatively simple to do if we use proper key ops abstractions,\nI just don't think it's all that interesting or important. It could also\nget very hairy when dealing with postgres's fork() based processing...\n\n> - User has a yubikey, similar to #1, but would like to have the Linux\n> kernel used to safe-guard the actual key used.\n>\n\nThat really works the same as #2, Pg is using some kind of engine to handle\ncrypto ops on the WAL and heap keys rather than doing them in-process\nitself. It doesn't matter what the engine is or where it lives - in\nsoftware in the kernel, in a PCIe card that costs more than a luxury car,\nor whatever else.\n\n> - User has a vaulting solution, and perhaps wants to store the actual\n> encryption/decryption key there, or perhaps the user wants to store a\n> passphrase in the vault and have PG derive the actual key from that.\n> Either seems like it could be reasonable.\n>\n\nSure. Storing the KEK-generation passphrase in a vault is possible with the\nproposed approach as-is.\n\nIf they want to store the actual KEK in a vault they could do so in much\nthe same way as a HSM or anything else, and Pg does not have to care. So\nlong as we provide a way to plug in KEK loading and we only do KEK\ncrypt/decrypt/verify ops via an API like the one we have already for\nPgCipherCtx it just doesn't matter exactly where the KEK lives.\n\nInstead of\n\n    cluster_crypto_method = 'openssl_engine'\n\nto load cluster_crypto_openssl_engine.so and have that provide the\nPgKeyWrapCtx with PgCipherCtx, you'd\n\n    cluster_crypto_method = 'loadkey'\n\nand have the cluster_crypto_loadkey.so accept a keyfile path or read-key\ncommand.\n\nPersonally I think the default method of generating a KEK from a passphrase\nshould be behind the same kind of abstraction, but I don't get to have a\nstrong opinion on that if I am not currently prepared to write all the code\nfor it.\n\n> What I'm wondering about here is if we should make it an explicit option\n> for a user to pick through the server configuration about if they're\n> giving PG a direct key to use, a KEK that's actually meant to decrypt\n> the data key, a way to fetch the direct key or the KEK, or an engine\n> which has the KEK to ask to decrypt the data key, etc.\n\n\n-1\n\nWe can't anticipate all the things users will want, and if we try we'll\nland up with a horribly complex set of configuration options.\n\nWe should provide an interface people can use to implement and load what\nthey want to do, then provide a simple default implementation that does the\nbasic passphrase based setup.\n\nWant anything else? Load a plugin.\n\nThat way we aren't stuck supporting some weird and random openssl-specific\nGUCs once we eventually support host crypto libraries. And re-keying the\nKEK becomes as simple as \"load new KEK module and write the cluster keys\nusing the new KEK module\". Code for re-keying etc doesn't have to know all\nthe details.\n\nThis approach was taken for logical decoding and I think it was 100% the\nright one. We should go for something like it here too.\n\nI don't want to go full plugin crazy. I've used Jenkins, I know the pain\nthat \"everything is a plugin\" brings. But in the right places, and with\ngood default plugin implementations bundled with the server (like we have\nwith pgoutput) having plugin interfaces at the correct boundaries works\nreally well.\n\n> If we can come\n> up with a way to configure PG that will support the different use cases\n> outlined above without being overly complicated, that'd be great. I'm\n> not sure that I see that in what you've proposed here, but maybe by\n> going through each of the use-cases and showing how a user would\n> configure PG for each with this proposal, I will.\n>\n\nInteractive password prompt, using Bruce's %R file descriptor passing:\n\n    cluster_encryption = 'password'\n    cluster_encryption_password.password_command = ' IFS=$'\\n' read -s -p\n\"Prompt: \" -r -u %R PASS && echo $PASS '\n\nwhich will do the same as this:\n\n    $ IFS=$'\\n' read -s -p \"Prompt: \" -r -u 0 PASS ; echo; echo $PASS\n    Prompt:\n    pass word here\n\nA pretty script or default command would obviously be appropriate here, I'm\njust showing how basic it is.\n\nThe same thing as above would work for a vault tool that passes the key on\nstdin, or that passes a file descriptor for an unlinked tempfile the\npassword can be read from.\n\nPassword fetched by obfuscated command or from some vault tool etc:\n\n    cluster_encryption = 'password'\n    cluster_encryption_password.password_command =\n'/usr/bin/read-my-secret-password'\n\nRead key from a file on a short-lived mount, usb key that's physically\nremoved after loading, or whatever:\n\n    cluster_encryption = 'keyfile'\n    cluster_encryption_keyfile.key_file = '/mnt/secretusb/key.pem'\n\nRead whole key from a command, vault tool, etc in case you wanted that\ninstead:\n\n    cluster_encryption = 'keyfile'\n    cluster_encryption_keyfile.key_command = '/bin/my-vault-tool get-key\nfoo'\n\nUse AWS CloudHSM for your KEK:\n\n    cluster_encryption = 'openssl_engine'\n    cluster_encryption_openssl.engine = 'cloudhsm'\n    cluster_encryption_openssl.key = 'mycloudkeyname'\n\nKeep the key in the host TPM and use it to perform KEK ops, assuming you\nhave p11-kit and you generated a key in the TPM with the tpm2 tools:\n\n    cluster_encryption = 'openssl_engine'\n    cluster_encryption_openssl_engine.engine = 'pkcs11'\n    cluster_encryption_openssl_engine.key =\n'pkcs11:module-path=/usr/lib64/pkcs11/libtpm2_pkcs11.so;model=TPM2'\n\nKeep the key in an OpenSC-supported smartcard or key like a yubikey and use\nit via OpenSC to perform KEK ops, once the key is appropriately configured\nwith the card tools and assuming p11-kit:\n\n    cluster_encryption = 'openssl_engine'\n    cluster_encryption_openssl_engine.engine = 'pkcs11'\n    cluster_encryption_openssl_engine.key =\n'pkcs11:module-path=/usr/lib64/pkcs11/opensc-pkcs11.so;token=%2FCN%3Dpg%2F'\n\n\n... etc\n\n> I agree that it doesn't seem like a bad approach to expose that URI, but\n> I'm not sure that's really the end of it since there's going to be cases\n> where people would like to have a KEK on a yubikey and there'll be other\n> cases where people would like to offload all of the encryption and\n> decryption to a HSM crypto accelerator and, ideally, we'd allow them to\n> be able to configure PG for either of those cases.\n>\n\nSure, eventually.\n\nI don't think it's necessarily that hard either. If you wanted you could\nprobably put the WAL and heap key acquisition behind a pluggable interface\ntoo, and use the same KeyWrapCtx and PgCipherCtx to abstract their use.\n\nfork() could be exciting, but mostly that's a matter of adding before-fork\nand after-fork APIs to let the plugin do the right thing depending on the\nunderlying library it uses.\n\n> I don't see a problem with adding hooks, where they make sense, but we\n> should also make things work in a sensible way and a way that works with\n> at least the use-cases that I've outlined, ideally, without having to go\n> get an extension or write C code.\n>\n\nI think the sensible use case *is* the generated password, simple\nconfiguration.\n\nWhat I'd ideally like to do is have that as a sort of default\ncluster_encryption_plugin called 'password' per the imaginary config I\noutlined above.\n\nThen we could bundle an openssl_engine plugin that would let you do pretty\nmuch anything else by configuring openssl, using openssl engines directly\nor via pkcs#11, etc.\n\n\n> There's an active patch that's been worked on for quite some time that's\n> getting some renewed interest in adding NSS support, something I\n> certainly support also, so we really shouldn't be taking steps that end\n> up making it more difficult to support alternatives.\n\n\nRight.\n\nSo in the plugin based approach above that would mean providing a\ncluster_encryption_plugin='nss' .\n\nIf we extend PgCipherCtx to support HMAC it should be fairly\nstraightforward.\n\n> I definitely think we want to support things directly in PG and not\n> require an extension or something to be in s_p_l for this.\n>\n\nAlternative proposed above - support dynamic loading but use a separate\nentrypoint. And if we want we can compile in \"plugins\" anyway. The\ninterface should be the same whether dynamically loaded or baked in.\n\n> > But you might not even have the key. 
In some HSM implementations the\n> > key is completely sealed - you can program new HSMs to have the same\n> > key by using the same configuration, but you cannot actually obtain\n> > the key short of attacks on the HSM hardware itself. That's very much\n> > by design - the HSM configuration is usually on an air-gapped system,\n> > and it isn't sufficient to decrypt anything unless you also have\n> > access to a copy of the HSM hardware itself. Obviously you accept the\n> > risks if you take that approach, and you must have an escape route\n> > where you can re-encrypt the material protected by the HSM against\n> > some other key. But it's not at all uncommon.\n>\n> Right, but in such cases you'd need an HSM that's able to perform\n> encryption and decryption at some reasonable rate.\n>\n\nNo, you just have to use it to decrypt and load the WAL and heap keys at\nstartup.\n\n\nI understand why you're exploring the idea of full crypto offload, but I\npersonally think it's premature. However the same sorts of things that\nwould allow HSM use instead of a password would also be necessary steps\ntoward what you propose.\n\nOn Thu, Oct 29, 2020 at 1:22 AM Stephen Frost <sfrost@snowman.net> wrote: \n> Most importantly - I don't think the SQL key adds anything really\n> crucial that we cannot do at the SQL level with an extension.  An\n> extension \"pg_wrap\" could provide pg_wrap() and pg_unwrap() already,\n> using a single master key much like the SQL key proposed in this\n> patch. To store the master key it could:\n\nLots of things can be done in extensions but, at least for my part, I'd\nmuch rather see us build in an SQL key capability (with things like\ngrammar support and being able to tie to to a role cleanly) than to try\nand figure out how to make this work as an extension.I agree with you there. 
I'm suggesting that this first patch focus on full on-disk encryption, and that someone who desperately needs SQL-level keyops could build on this patch in an extension.I definitely don't want an extension to be the preferred / blessed way to do those things, I'm only pointing out that deferring the SQL-level stuff doesn't prevent someone from doing it if they need those capabilities before a mature core patch is ready for them. Trying to roll the SQL-level stuff into this patch will distract from getting the basics working and either cause massive scope creep or leave us with a seriously limited interface that will make doing it right later much harder.\n+100 to having client-driver-assisted encryption, this solves real\nattack vectors which traditional TDE simply doesn't, compared to\nfilesystem or block device level encryption (even though lots of\npeople seem to think it does, which is bizarre to me).Many things people believe about security are bizarre to me. I stopped being surprised a long time ago...\n\nI would think we'd want to enable admins to be able to control if what\nis being provided is a KEK (where the key is then decrypted by PG and PG\nthen uses whatever libraries it's built with to perform the encryption\nand decryption in PG process space), or an engine/offloading\nconfiguration (where PG doesn't ever see the actual key and all\nencryption and decryption is done outside of PG's control by an HSM or\nthe Linux kernel through the crypto API or whatever).I had that in mind too, but deliberately did not raise it because I don't think it's necessary to address that when introducing the basics of full on-disk encryption.I just don't think there are enough users who both have access to a high performance PCIe or SoC based crypto offload engine and could tolerate the limited database throughput they'd get when using even the most optimised crypto offload engine out there. 
Most HSMs are optimised for SSL/TLS and for asymmetric crypto ops using RSA etc, plus small-packet AES. There are also crypto offload cards for high throughput bulk symmetric AES etc but they don't all have HSM-like secrecy features, plus the cost tends to be absolutely staggering. So I thought it made sense to focus on the KEK for now. I don't think managing the WAL and heap keys in a HSM is a realistic use case for all but the tinest possible set of users, and the complexity we'd have to deal with in terms of key rotations etc would be much greater.\n\nThe use-cases I'm thinking about:\n\n- User has a Yubikey, but would like PG to be able to write more than\n  one block at a time.  In this case, the Yubikey would have a KEK which\n  PG doesn't ever see.Yes. This is the main case I'm focused on making it possible to add support for. Not necessarily in the first cut of this patch, but I want to at least ensure that this patch doesn't bake in so many assumptions about the KEK that it'd be really hard to add external KEK management later.   PG would have an encrypted blob that it then\n  asks the yubikey to decrypt which contains the actual key that's then\n  kept in PG's memory to perform the encryption/decryption.  Naturally,\n  if that key is stolen then an attacker could decrypt the entire\n  database, even if they don't have the yubikey.  An attacker could\n  acquire that key by having sufficient access on the PG sever to be\n  able to read PG's memory.Correct. Or they could gain the rights to run code as the postgres unix user and ask the HSM to decrypt the cluster keys for them - assuming the HSM doesn't have any external authorization channel or checks, like PIN entry, touch-test for physical access, or the like.For that to actually be useful they also have to have a copy of the database's on-disk representation - copy it off, steal a backup, etc. 
If they gained enough access to copy the whole DB off they can probably just as easily pg_dump it though; the only way to prevent that kind of attack is to use client-driver-side encryption which is a totally different topic.Stealing a backup then separately breaking into a running instance with matching keys to steal the key is a pretty high bar to set.The main weakness here is with replicas. But it doesn't really matter if the replicas have the same heap and WAL keys as the primary or not, if the attacker compromises one replica your data is still exposed. \n- User has a Thales Luna PCIe HSM, or similar.  In this case, the user\n  wants *all* of the encryption/decryption happening on the HSM and none\n  of it happening in PG space, making it impossible for an attacker to\n  acquire the actual key. Right. They can still pg_dump it, or trick Pg into decrypting it for them in other ways, but they cannot steal the key then use it to decrypt a stolen copy of the DB itself.Per above, though, I don't think this actually adds all that much real world security over protecting the KEK.I mean - face it, building database encryption into postgres itself isn't that much stronger than doing it at the filesystem level in a LUKS volume or similar. It's a marketing thing as much as a real world security benefit. The interesting parts only really come when the DB doesn't even have access to the keys (client-driver encryption) or only has transient access to the keys (session-level client secret key unlock), neither of which are within the scope of this proposed patch.Handling encryption at the Pg level is mainly nice for backup protection.To be clear I'm not against making this possible, and think it should actually be relatively simple to do if we use proper key ops abstractions, I just don't think it's all that interesting or important. 
It could also get very hairy when dealing with postgres's fork() based processing...\n\n- User has a yubikey, similar to #1, but would like to have the Linux\n  kernel used to safe-guard the actual key used.That really works the same as #2, Pg is using some kind of engine to handle crypto ops on the WAL and heap keys rather than doing them in-process itself. It doesn't matter what the engine is or where it lives - in software in the kernel, in a PCIe card that costs more than a luxury car, or whatever else. \n- User has a vaulting solution, and perhaps wants to store the actual\n  encryption/decryption key there, or perhaps the user wants to store a\n  passphrase in the vault and have PG derive the actual key from that.\n  Either seems like it could be reasonable.Sure. Storing the KEK-generation passphrase in a vault is possible with the proposed approach as-is.If they want to store the actual KEK in a vault they could do so in much the same way as a HSM or anything else, and Pg does not have to care. So long as we provide a way to plug in KEK loading and we only do KEK crypt/decrypt/verify ops via an API like the one we have already for PgCipherCtx it just doesn't matter exactly where the KEK lives.Instead of    cluster_crypto_method = 'openssl_engine'to load cluster_crypto_openssl_engine.so and have that provide the PgKeyWrapCtx with PgCipherCtx, you'd    cluster_crypto_method = 'loadkey'and have the cluster_crypto_loadkey.so accept a keyfile path or read-key command.Personally I think the default method of generating a kek from a passphrase should be behind the same kind of abstraction, but I don't get to have a strong opinion on that if I am not currently prepared to write all the code for it. 
\nWhat I'm wondering about here is if we should make it an explicit option\nfor a user to pick through the server configuration about if they're\ngiving PG a direct key to use, a KEK that's actually meant to decrypt\nthe data key, a way to fetch the direct key or the KEK, or a engine\nwhich has the KEK to ask to decrypt the data key, etc.\n\n-1\n\nWe can't anticipate all the things users will want, and if we try we'll land up with a horribly complex set of configuration options.\n\nWe should provide an interface people can use to implement and load what they want to do, then provide a simple default implementation that does the basic passphrase based setup.\n\nWant anything else? Load a plugin.\n\nThat way we aren't stuck supporting some weird and random openssl-specific GUCs once we eventually support host crypto libraries. And re-keying the KEK becomes as simple as \"load new KEK module and write the cluster keys using the new KEK module\". Code for re-keying etc doesn't have to know all the details.\n\nThis approach was taken for logical decoding and I think it was 100% the right one. We should go for something like it here too.\n\nI don't want to go full plugin crazy. I've used Jenkins, I know the pain that \"everything is a plugin\" brings. But in the right places, and with good default plugin implementations bundled with the server (like we have with pgoutput) having plugin interfaces at the correct boundaries works really well.\n\nIf we can come\nup with a way to configure PG that will support the different use cases\noutlined above without being overly complicated, that'd be great.  
I'm\nnot sure that I see that in what you've proposed here, but maybe by\ngoing through each of the use-cases and showing how a user would\nconfigure PG for each with this proposal, I will.\n\nInteractive password prompt, using Bruce's %R file descriptor passing:\n\n      cluster_encryption = 'password'\n      cluster_encryption_password.password_command = ' IFS=$'\\n' read -s -p \"Prompt: \" -r -u %R PASS && echo $PASS '\n\nwhich will do the same as this:\n\n    $ IFS=$'\\n' read -s -p \"Prompt: \" -r -u 0 PASS ; echo; echo $PASS\n    Prompt: \n    pass word here\n\nA pretty script or default command would obviously be appropriate here, I'm just showing how basic it is.\n\nThe same thing as above would work for a vault tool that passes the key on stdin, or that passes a file descriptor for an unlinked tempfile the password can be read from.\n\nPassword fetched by obfuscated command or from some vault tool etc:\n\n      cluster_encryption = 'password'\n      cluster_encryption_password.password_command = '/usr/bin/read-my-secret-password'\n\nRead key from a file on a short-lived mount, usb key that's physically removed after loading, or whatever:\n\n     cluster_encryption = 'keyfile'\n     cluster_encryption_keyfile.key_file = '/mnt/secretusb/key.pem'\n\nRead whole key from a command, vault tool, etc in case you wanted that instead:\n\n     cluster_encryption = 'keyfile'\n     cluster_encryption_keyfile.key_command = '/bin/my-vault-tool get-key foo'\n\nUse AWS CloudHSM for your KEK:\n\n    cluster_encryption = 'openssl_engine'\n    cluster_encryption_openssl.engine = 'cloudhsm'\n    cluster_encryption_openssl.key = 'mycloudkeyname'\n\nKeep the key in the host TPM and use it to perform KEK ops, assuming you have p11-kit and you generated a key in the TPM with the tpm2 tools:\n\n     cluster_encryption = 'openssl_engine'\n     cluster_encryption_openssl_engine.engine = 'pkcs11'\n     cluster_encryption_openssl_engine.key = 'pkcs11:module-path=/usr/lib64/pkcs11/libtpm2_pkcs11.so;model=TPM2'\n\nKeep the key in an OpenSC-supported smartcard or key like a 
yubikey and use it via OpenSC to perform KEK ops, once the key is appropriately configured with the card tools and assuming p11-kit:\n\n    cluster_encryption = 'openssl_engine'\n    cluster_encryption_openssl_engine.engine = 'pkcs11'\n    cluster_encryption_openssl_engine.key = 'pkcs11;module-path=/usr/lib64/pkcs11/opensc-pkcs11.so;token=%2FCN%3Dpg%2F'\n\n... etc\n\nI agree that it doesn't seem like a bad approach to expose that URI, but\nI'm not sure that's really the end of it since there's going to be cases\nwhere people would like to have a KEK on a yubikey and there'll be other\ncases where people would like to offload all of the encryption and\ndecryption to a HSM crypto accelerator and, ideally, we'd allow them to\nbe able to configure PG for either of those cases.\n\nSure, eventually.\n\nI don't think it's necessarily that hard either. If you wanted you could probably put the WAL and heap key acquisition behind a pluggable interface too, and use the same KeyWrapCtx and PgCipherCtx to abstract their use.\n\nfork() could be exciting, but mostly that's a matter of adding before-fork and after-fork APIs to let the plugin do the right thing depending on the underlying library it uses.\n\nI don't see a problem with adding hooks, where they make sense, but we\nshould also make things work in a sensible way and a way that works with\nat least the use-cases that I've outlined, ideally, without having to go\nget an extension or write C code.\n\nI think the sensible use case *is* the generated password, simple configuration.\n\nWhat I'd ideally like to do is have that as a sort of default cluster_encryption_plugin called 'password' per the imaginary config I outlined above.\n\nThen we could bundle an openssl_engine plugin that would let you do pretty much anything else by configuring openssl, using openssl engines directly or via pkcs#11, etc. 
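As a sanity check that the fd-passing password_command idea above composes in plain shell, here is a hedged sketch of such a helper. The interface (a %R-style descriptor number passed as the first argument) is hypothetical and illustrative only, not from any posted patch:

```shell
#!/bin/sh
# Hedged sketch of a %R-style password helper: the server would
# substitute an inherited file descriptor number for %R and run
# something like "helper %R"; the helper reads one line from that
# descriptor and prints the passphrase on stdout.  Interface and
# names are assumptions made for illustration.

read_passphrase() {
    fd=${1:-0}                         # default to stdin when no fd is given
    IFS= read -r pass <&$fd || return 1
    printf '%s\n' "$pass"
}

# demo: simulate the server handing the secret over on fd 3
tmp=$(mktemp)
printf 's3cret\n' > "$tmp"
exec 3< "$tmp"
read_passphrase 3
exec 3<&-
rm -f "$tmp"
```

Because the secret travels over an inherited descriptor rather than argv or the environment, it never shows up in `ps` output, which is most of the point of the %R mechanism.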
\n\nThere's an active patch that's been worked on for quite some time that's\ngetting some renewed interest in adding NSS support, something I\ncertainly support also, so we really shouldn't be taking steps that end\nup making it more difficult to support alternatives.\n\nRight.\n\nSo in the plugin based approach above that would mean providing a cluster_encryption_plugin='nss'.\n\nIf we extend PgCipherCtx to support HMAC it should be fairly straightforward.\n\nI definitely think we want to support things directly in PG and not\nrequire an extension or something to be in s_p_l for this.\n\nAlternative proposed above - support dynamic loading but use a separate entrypoint. And if we want we can compile in \"plugins\" anyway. The interface should be the same whether dynamically loaded or baked in. \n> But you might not even have the key. In some HSM implementations the\n> key is completely sealed - you can program new HSMs to have the same\n> key by using the same configuration, but you cannot actually obtain\n> the key short of attacks on the HSM hardware itself. That's very much\n> by design - the HSM configuration is usually on an air-gapped system,\n> and it isn't sufficient to decrypt anything unless you also have\n> access to a copy of the HSM hardware itself. Obviously you accept the\n> risks if you take that approach, and you must have an escape route\n> where you can re-encrypt the material protected by the HSM against\n> some other key. But it's not at all uncommon.\n\nRight, but in such cases you'd need an HSM that's able to perform\nencryption and decryption at some reasonable rate.\n\nNo, you just have to use it to decrypt and load the WAL and heap keys at startup. I understand why you're exploring the idea of full crypto offload, but I personally think it's premature. 
However the same sorts of things that would allow HSM use instead of a password would also be necessary steps toward what you propose.", "msg_date": "Thu, 29 Oct 2020 14:10:56 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Internal key management system" } ]
[ { "msg_contents": "Other options are preserved by ALTER (and CLUSTER ON is and most obviously\nshould be preserved by CLUSTER's rewrite), so I think (SET) CLUSTER should be\npreserved by ALTER, too.\n\nAs far as I can see, this should be the responsibility of something in the\nvicinity of ATPostAlterTypeParse/RememberIndexForRebuilding.\n\nAttach patch sketches a fix.\n\nts=# SET client_min_messages=debug; DROP TABLE t; CREATE TABLE t(i int); CREATE INDEX ON t(i)WITH(fillfactor=11, vacuum_cleanup_index_scale_factor=12); CLUSTER t USING t_i_key; ALTER TABLE t ALTER i TYPE bigint; \\d t\nSET\nDEBUG: drop auto-cascades to type t\nDEBUG: drop auto-cascades to type t[]\nDEBUG: drop auto-cascades to index t_i_idx\nDROP TABLE\nCREATE TABLE\nDEBUG: building index \"t_i_idx\" on table \"t\" serially\nCREATE INDEX\nERROR: index \"t_i_key\" for table \"t\" does not exist\nDEBUG: rewriting table \"t\"\nDEBUG: building index \"t_i_idx\" on table \"t\" serially\nDEBUG: drop auto-cascades to type pg_temp_3091172777\nDEBUG: drop auto-cascades to type pg_temp_3091172777[]\nALTER TABLE\n Table \"public.t\"\n Column | Type | Collation | Nullable | Default \n--------+--------+-----------+----------+---------\n i | bigint | | | \nIndexes:\n \"t_i_idx\" btree (i) WITH (fillfactor='11', vacuum_cleanup_index_scale_factor='12')", "msg_date": "Sun, 2 Feb 2020 10:17:18 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "Hi Justin,\n\nOn Mon, Feb 3, 2020 at 1:17 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Other options are preserved by ALTER (and CLUSTER ON is and most obviously\n> should be preserved by CLUSTER's rewrite), so I think (SET) CLUSTER should be\n> preserved by ALTER, too.\n\nYes.\n\ncreate table foo (a int primary key);\ncluster foo;\nERROR: there is no previously clustered index for table \"foo\"\ncluster foo using foo_pkey;\nalter table foo alter a type 
bigint;\ncluster foo;\nERROR: there is no previously clustered index for table \"foo\"\n\nWith your patch, this last error doesn't occur.\n\nLike you, I too suspect that losing indisclustered like this is\nunintentional, so should be fixed.\n\n> As far as I can see, this should be the responsibility of something in the\n> vicinity of ATPostAlterTypeParse/RememberIndexForRebuilding.\n>\n> Attach patch sketches a fix.\n\nWhile your sketch hits pretty close, it could be done a bit\ndifferently. For one, I don't like the way it's misusing\nchangedIndexOids and changedIndexDefs.\n\nInstead, we can do something similar to what\nRebuildConstraintComments() does for constraint comments. For\nexample, we can have a PreserveClusterOn() that adds a AT_ClusterOn\ncommand into table's AT_PASS_OLD_INDEX pass commands. Attached patch\nshows what I'm thinking. I also added representative tests.\n\nThanks,\nAmit", "msg_date": "Wed, 5 Feb 2020 15:53:45 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Wed, Feb 05, 2020 at 03:53:45PM +0900, Amit Langote wrote:\n> Hi Justin,\n> \n> On Mon, Feb 3, 2020 at 1:17 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Other options are preserved by ALTER (and CLUSTER ON is and most obviously\n> > should be preserved by CLUSTER's rewrite), so I think (SET) CLUSTER should be\n> > preserved by ALTER, too.\n> \n> Yes.\n> \n> create table foo (a int primary key);\n> cluster foo;\n> ERROR: there is no previously clustered index for table \"foo\"\n> cluster foo using foo_pkey;\n> alter table foo alter a type bigint;\n> cluster foo;\n> ERROR: there is no previously clustered index for table \"foo\"\n> \n> With your patch, this last error doesn't occur.\n> \n> Like you, I too suspect that losing indisclustered like this is\n> unintentional, so should be fixed.\n\nThanks for checking.\n\nIt doesn't need to be said, but your patch is 
obviously superior.\n\nI ran into this while looking into a suggestion from Alvaro that ALTER should\nrewrite in order of a clustered index (if any) rather than in pre-existing heap\norder (more on that another day). So while this looks like a bug, and I can't\nthink how a backpatch would break something, my suggestion is that backpatching\na fix is of low value, so it's only worth +0.\n\nThanks\nJustin\n\n\n", "msg_date": "Wed, 5 Feb 2020 02:32:55 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Thu, Feb 6, 2020 at 10:31 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Feb 6, 2020 at 8:44 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Wed, Feb 05, 2020 at 03:53:45PM +0900, Amit Langote wrote:\n> > > diff --git a/src/test/regress/sql/alter_table.sql b/src/test/regress/sql/alter_table.sql\n> > > +-- alter type shouldn't lose clustered index\n> >\n> > My only suggestion is to update the comment\n> > +-- alter type rewrite/rebuild should preserve cluster marking on index\n>\n> Sure, done.\n\nDeja vu. Last two messages weren't sent to the list; updated patch attached.\n\nThanks,\nAmit", "msg_date": "Thu, 6 Feb 2020 18:14:16 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "I wondered if it wouldn't be better if CLUSTER ON was stored in pg_class as the\nOid of a clustered index, rather than a boolean in pg_index.\n\nThat likely would've avoided (or at least exposed) this issue.\nAnd avoids the possibility of having two indices marked as \"clustered\".\nThese would be more trivial:\nmark_index_clustered\n/* We need to find the index that has indisclustered set. 
*/\n\n\n", "msg_date": "Thu, 6 Feb 2020 08:44:26 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On 2020-Feb-06, Justin Pryzby wrote:\n\n> I wondered if it wouldn't be better if CLUSTER ON was stored in pg_class as the\n> Oid of a clustered index, rather than a boolean in pg_index.\n\nMaybe. Do you want to try a patch?\n\n> That likely would've avoided (or at least exposed) this issue.\n> And avoids the possibility of having two indices marked as \"clustered\".\n> These would be more trivial:\n> mark_index_clustered\n> /* We need to find the index that has indisclustered set. */\n\nYou need to be careful when dropping the index ...\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 6 Feb 2020 14:24:47 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Fri, Feb 7, 2020 at 2:24 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Feb-06, Justin Pryzby wrote:\n> > I wondered if it wouldn't be better if CLUSTER ON was stored in pg_class as the\n> > Oid of a clustered index, rather than a boolean in pg_index.\n>\n> Maybe. Do you want to try a patch?\n\n+1\n\nThanksm\nAmit\n\n\n", "msg_date": "Fri, 7 Feb 2020 17:42:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Thu, Feb 06, 2020 at 02:24:47PM -0300, Alvaro Herrera wrote:\n> On 2020-Feb-06, Justin Pryzby wrote:\n> \n> > I wondered if it wouldn't be better if CLUSTER ON was stored in pg_class as the\n> > Oid of a clustered index, rather than a boolean in pg_index.\n> \n> Maybe. 
Do you want to try a patch?\n\n> That likely would've avoided (or at least exposed) this issue.\n> And avoids the possibility of having two indices marked as \"clustered\".\n> These would be more trivial:\n> mark_index_clustered\n> /* We need to find the index that has indisclustered set. */\n\nYou need to be careful when dropping the index ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 6 Feb 2020 14:24:47 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Fri, Feb 7, 2020 at 2:24 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Feb-06, Justin Pryzby wrote:\n> > I wondered if it wouldn't be better if CLUSTER ON was stored in pg_class as the\n> > Oid of a clustered index, rather than a boolean in pg_index.\n>\n> Maybe. Do you want to try a patch?\n\n+1\n\nThanks,\nAmit", "msg_date": "Fri, 7 Feb 2020 17:42:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Thu, Feb 06, 2020 at 02:24:47PM -0300, Alvaro Herrera wrote:\n> On 2020-Feb-06, Justin Pryzby wrote:\n> \n> > I wondered if it wouldn't be better if CLUSTER ON was stored in pg_class as the\n> > Oid of a clustered index, rather than a boolean in pg_index.\n> \n> Maybe. 
Do you want to try a patch?\n>\n> I think the attached is 80% complete (I didn't touch pg_dump).\n>\n> One objection to this change would be that all relations (including indices)\n> end up with relclustered fields, and pg_index already has a number of bools, so\n> it's not like this one bool is wasting a byte.\n>\n> I think relisclustered was a's clever way of avoiding that overhead (c0ad5953).\n> So I would be -0.5 on moving it to pg_class..\n\nAre you still for fixing ALTER TABLE losing relisclustered with the\npatch we were working on earlier [1], if not for moving relisclustered\nto pg_class anymore?\n\nI have read elsewhere [2] that forcing ALTER TABLE to rewrite in\nclustered order might not be a good option, but maybe that one is a\nmore radical proposal than this.\n\nThanks,\nAmit\n\n[1] https://postgr.es/m/CA%2BHiwqEt1HnXYckCdaO8%2BpOoFs7NNS5byoZ6Xg2B7epKbhS85w%40mail.gmail.com\n[2] https://postgr.es/m/10984.1581181029%40sss.pgh.pa.us\n\n\n", "msg_date": "Mon, 17 Feb 2020 14:31:42 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index (consider moving\n indisclustered to pg_class)" }, { "msg_contents": "On Mon, Feb 17, 2020 at 02:31:42PM +0900, Amit Langote wrote:\n> Hi Justin,\n> \n> On Fri, Feb 7, 2020 at 11:39 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Thu, Feb 06, 2020 at 02:24:47PM -0300, Alvaro Herrera wrote:\n> > > On 2020-Feb-06, Justin Pryzby wrote:\n> > >\n> > > > I wondered if it wouldn't be better if CLUSTER ON was stored in pg_class as the\n> > > > Oid of a clustered index, rather than a boolean in pg_index.\n> > >\n> > > Maybe. 
Do you want to try a patch?\n> >\n> > I think the attached is 80% complete (I didn't touch pg_dump).\n> >\n> > One objection to this change would be that all relations (including indices)\n> > end up with relclustered fields, and pg_index already has a number of bools, so\n> > it's not like this one bool is wasting a byte.\n> >\n> > I think relisclustered was a's clever way of avoiding that overhead (c0ad5953).\n> > So I would be -0.5 on moving it to pg_class..\n\nIn case there's any confusion: \"a's\" was probably me halfway changing\n\"someone's\" to \"a\".\n\n> Are you still for fixing ALTER TABLE losing relisclustered with the\n> patch we were working on earlier [1], if not for moving relisclustered\n> to pg_class anymore?\n\nThanks for remembering this one.\n\nI think your patch is the correct fix.\n\nI forgot to say it, but moving relisclustered to pg_class doesn't help to avoid\nlosting indisclustered: it still needs a fix just like this. Anyway, I\nwithdrew my suggestion for moving to pg_class, since it has more overhead, even\nfor pg_class rows for relations which can't have indexes.\n\n> I have read elsewhere [2] that forcing ALTER TABLE to rewrite in\n> clustered order might not be a good option, but maybe that one is a\n> more radical proposal than this.\n\nRight; your fix seems uncontroversial. 
I ran into this (indisclustered) bug\nwhile starting to write that patch for \"ALTER rewrite in clustered order\".\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 16 Feb 2020 23:49:49 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index (consider moving\n indisclustered to pg_class)" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I think the attached is 80% complete (I didn't touch pg_dump).\n> One objection to this change would be that all relations (including indices)\n> end up with relclustered fields, and pg_index already has a number of bools, so\n> it's not like this one bool is wasting a byte.\n> I think relisclustered was a's clever way of avoiding that overhead (c0ad5953).\n> So I would be -0.5 on moving it to pg_class..\n> But I think 0001 and 0002 are worthy. Maybe the test in 0002 should live\n> somewhere else.\n\n0001 has been superseded by events (faade5d4c), so the cfbot is choking\non that one's failure to apply, and not testing any further. Please\nrepost without 0001 so that we can get this testing again.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Feb 2020 18:26:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index (consider moving\n indisclustered to pg_class)" }, { "msg_contents": "On Fri, Feb 28, 2020 at 06:26:04PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > I think the attached is 80% complete (I didn't touch pg_dump).\n> > One objection to this change would be that all relations (including indices)\n> > end up with relclustered fields, and pg_index already has a number of bools, so\n> > it's not like this one bool is wasting a byte.\n> > I think relisclustered was a's clever way of avoiding that overhead (c0ad5953).\n> > So I would be -0.5 on moving it to pg_class..\n> > But I think 0001 and 0002 are worthy. 
Maybe the test in 0002 should live\n> > somewhere else.\n> \n> 0001 has been superseded by events (faade5d4c), so the cfbot is choking\n> on that one's failure to apply, and not testing any further. Please\n> repost without 0001 so that we can get this testing again.\n\nI've just noticed while working on (1) that this separately affects REINDEX\nCONCURRENTLY, which would be a new bug in v12. Without CONCURRENTLY there's no\nissue. I guess we need a separate patch for that case.\n\n(1) https://commitfest.postgresql.org/27/2269/\n\nThe ALTER bug goes back further and its fix should be a kept separate.\n\npostgres=# DROP TABLE tt; CREATE TABLE tt(i int unique); CLUSTER tt USING tt_i_key; CLUSTER tt; REINDEX INDEX tt_i_key; CLUSTER tt;\nDROP TABLE\nCREATE TABLE\nCLUSTER\nCLUSTER\nREINDEX\nCLUSTER\n\npostgres=# DROP TABLE tt; CREATE TABLE tt(i int unique); CLUSTER tt USING tt_i_key; CLUSTER tt; REINDEX INDEX CONCURRENTLY tt_i_key; CLUSTER tt;\nDROP TABLE\nCREATE TABLE\nCLUSTER\nCLUSTER\nREINDEX\nERROR: there is no previously clustered index for table \"tt\"\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 28 Feb 2020 20:42:02 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Fri, Feb 28, 2020 at 08:42:02PM -0600, Justin Pryzby wrote:\n> On Fri, Feb 28, 2020 at 06:26:04PM -0500, Tom Lane wrote:\n> > Justin Pryzby <pryzby@telsasoft.com> writes:\n> > > I think the attached is 80% complete (I didn't touch pg_dump).\n> > > One objection to this change would be that all relations (including indices)\n> > > end up with relclustered fields, and pg_index already has a number of bools, so\n> > > it's not like this one bool is wasting a byte.\n> > > I think relisclustered was a's clever way of avoiding that overhead (c0ad5953).\n> > > So I would be -0.5 on moving it to pg_class..\n> > > But I think 0001 and 0002 are worthy. 
Maybe the test in 0002 should live\n> > > somewhere else.\n> > \n> > 0001 has been superseded by events (faade5d4c), so the cfbot is choking\n> > on that one's failure to apply, and not testing any further. Please\n> > repost without 0001 so that we can get this testing again.\n> \n> I've just noticed while working on (1) that this separately affects REINDEX\n> CONCURRENTLY, which would be a new bug in v12. Without CONCURRENTLY there's no\n> issue. I guess we need a separate patch for that case.\n\nRebased Amit's patch and included my own 0002 to fix the REINDEX CONCURRENTLY\nissue.\n\n-- \nJustin", "msg_date": "Sat, 29 Feb 2020 10:52:58 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Sat, Feb 29, 2020 at 10:52:58AM -0600, Justin Pryzby wrote:\n> Rebased Amit's patch and included my own 0002 to fix the REINDEX CONCURRENTLY\n> issue.\n\nI have looked at 0002 as that concerns me.\n\n> +SELECT indexrelid::regclass FROM pg_index WHERE indrelid='concur_clustered'::regclass;\n> + indexrelid \n> +------------------------\n> + concur_clustered_i_idx\n> +(1 row)\n\nThis test should check after indisclustered. Except that, the patch\nis fine so I'll apply it if there are no objections.\n--\nMichael", "msg_date": "Mon, 2 Mar 2020 12:28:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Mon, Mar 02, 2020 at 12:28:18PM +0900, Michael Paquier wrote:\n> This test should check after indisclustered. 
Except that, the patch\n> is fine so I'll apply it if there are no objections.\n\nI got a second look at this one, and applied it down to 12 after some\nsmall modifications in the new test and in the comments.\n--\nMichael", "msg_date": "Tue, 3 Mar 2020 10:14:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Mon, Mar 02, 2020 at 12:28:18PM +0900, Michael Paquier wrote:\n> > +SELECT indexrelid::regclass FROM pg_index WHERE indrelid='concur_clustered'::regclass;\n> \n> This test should check after indisclustered. Except that, the patch\n> is fine so I'll apply it if there are no objections.\n\nOops - I realized that, but didn't send a new patch before you noticed - thanks\nfor handling it.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 2 Mar 2020 19:31:32 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "@cfbot: resending with only Amit's 0001, since Michael pushed a variation on\n0002.\n\n-- \nJustin", "msg_date": "Mon, 2 Mar 2020 19:36:25 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nI tested the patch on the master branch (a77315fdf2a197a925e670be2d8b376c4ac02efc) and it works fine.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Thu, 05 Mar 2020 20:11:10 +0000", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> @cfbot: resending with 
only Amit's 0001, since Michael pushed a variation on\n> 0002.\n\nBoy, I really dislike this patch. ATPostAlterTypeParse is documented as\nusing the supplied definition string, and nothing else, to reconstruct\nthe index. This breaks that without even the courtesy of documenting\nthe breakage. Moreover, the reason why it's designed like that is to\navoid requiring the old index objects to still be accessible. So I'm\nsurprised that this hack works at all. I don't think it would have\nworked at the time the code was first written, and I think it's imposing\na constraint we'll have problems with (again?) in future.\n\nThe right way to fix this is to note the presence of the indisclustered\nflag when we're examining the index earlier, and include a suitable\ncommand in the definition string. So probably pg_get_indexdef_string()\nis what needs to change. It doesn't look like that's used anywhere\nelse, so we can just redefine its behavior as needed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Mar 2020 13:19:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Fri, Mar 13, 2020 at 2:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > @cfbot: resending with only Amit's 0001, since Michael pushed a variation on\n> > 0002.\n\nThank you for taking a look at it.\n\n> Boy, I really dislike this patch. ATPostAlterTypeParse is documented as\n> using the supplied definition string, and nothing else, to reconstruct\n> the index. This breaks that without even the courtesy of documenting\n> the breakage. Moreover, the reason why it's designed like that is to\n> avoid requiring the old index objects to still be accessible. So I'm\n> surprised that this hack works at all. 
I don't think it would have\n> worked at the time the code was first written, and I think it's imposing\n> a constraint we'll have problems with (again?) in future.\n\nOkay, so maybe in the middle of ATPostAlterTypeParse() is not a place\nto do it, but don't these arguments apply to\nRebuildConstraintComment(), which I based the patch on?\n\n> The right way to fix this is to note the presence of the indisclustered\n> flag when we're examining the index earlier, and include a suitable\n> command in the definition string. So probably pg_get_indexdef_string()\n> is what needs to change. It doesn't look like that's used anywhere\n> else, so we can just redefine its behavior as needed.\n\nI came across a commit that recently went in:\n\ncommit 1cc9c2412cc9a2fbe6a381170097d315fd40ccca\nAuthor: Peter Eisentraut <peter@eisentraut.org>\nDate: Fri Mar 13 11:28:11 2020 +0100\n\n Preserve replica identity index across ALTER TABLE rewrite\n\nwhich fixes something very similar to what we are trying to with this\npatch. The way it's done looks to me very close to what you are\ntelling. I have updated the patch to be similar to the above fix.\n\n--\nThank you,\nAmit", "msg_date": "Mon, 16 Mar 2020 16:01:42 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Mon, Mar 16, 2020 at 04:01:42PM +0900, Amit Langote wrote:\n> I came across a commit that recently went in:\n> \n> commit 1cc9c2412cc9a2fbe6a381170097d315fd40ccca\n> Author: Peter Eisentraut <peter@eisentraut.org>\n> Date: Fri Mar 13 11:28:11 2020 +0100\n> \n> Preserve replica identity index across ALTER TABLE rewrite\n> \n> which fixes something very similar to what we are trying to with this\n> patch. The way it's done looks to me very close to what you are\n> telling. 
I have updated the patch to be similar to the above fix.\n\nYes, I noticed it too.\n\nShould we use your get_index_isclustered more widely ?\n\nAlso, should we call it \"is_index_clustered\", since otherwise it sounds alot\nlike \"+get_index_clustered\" (without \"is\"), which sounds like it takes a table\nand returns which index is clustered. That might be just as useful for some of\nthese callers.\n\n-- \nJustin", "msg_date": "Mon, 16 Mar 2020 08:27:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On 2020-Mar-16, Justin Pryzby wrote:\n\n> Also, should we call it \"is_index_clustered\", since otherwise it sounds alot\n> like \"+get_index_clustered\" (without \"is\"), which sounds like it takes a table\n> and returns which index is clustered. That might be just as useful for some of\n> these callers.\n\nAmit's proposed name seems to match lsyscache.c usual conventions better.\n\n> Should we use your get_index_isclustered more widely ?\n\nYeah, in cluster(), mark_index_clustered().\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 16 Mar 2020 11:25:23 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Mon, Mar 16, 2020 at 11:25:23AM -0300, Alvaro Herrera wrote:\n> On 2020-Mar-16, Justin Pryzby wrote:\n> \n> > Also, should we call it \"is_index_clustered\", since otherwise it sounds alot\n> > like \"+get_index_clustered\" (without \"is\"), which sounds like it takes a table\n> > and returns which index is clustered. 
That might be just as useful for some of\n> > these callers.\n> \n> Amit's proposed name seems to match lsyscache.c usual conventions better.\n\nThere were no get_index_isvalid() (introduced by me) or\nget_index_isreplident() (introduced by Peter) until last week, and\nthose names have been chosen to be consistent with the existing\nget_index_column_opclass(), so using get_index_isclustered is in my\nopinion the most consistent choice.\n\n> Yeah, in cluster(), mark_index_clustered().\n\nPatch 0002 from Justin does that, I would keep this refactoring as\nHEAD-only material though, and I don't spot any other code paths in\nneed of patching.\n\nThe commit message of patch 0001 is not what you wanted I guess.\n\n if (OidIsValid(indexOid))\n {\n- indexTuple = SearchSysCache1(INDEXRELID, ObjectIdGetDatum(indexOid));\n- if (!HeapTupleIsValid(indexTuple))\n- elog(ERROR, \"cache lookup failed for index %u\", indexOid);\n- indexForm = (Form_pg_index) GETSTRUCT(indexTuple);\n-\n- if (indexForm->indisclustered)\n- {\n- ReleaseSysCache(indexTuple);\n+ if (get_index_isclustered(indexOid))\n return;\n- }\n-\n- ReleaseSysCache(indexTuple);\n }\nNo need for two layers of if(s) here.\n\n+create index alttype_cluster_a on alttype_cluster (a);\n+alter table alttype_cluster cluster on alttype_cluster_a;\n+select indisclustered from pg_index where indrelid = 'alttype_cluster'::regclass;\n\nWould it make sense to add a second index not used for clustering to\ncheck after the case where isclustered is false? A second thing would\nbe to check if relfilenode values match before and after each DDL to\nmake sure that a rewrite happened or not, see check_ddl_rewrite() for\nexample in alter_table.sql.\n\nKeeping both RememberClusterOnForRebuilding and\nRememberReplicaIdentityForRebuilding as separate looks fine to me. \nThe code could be a bit more organized though. 
We have\nRememberIndexForRebuilding which may go through\nRememberConstraintForRebuilding if the relation OID changed is a\nconstraint, and both register the replindent or isclustered\ninformation if present. Not really something for this patch set to\ncare about, just a thought while reading this code.\n\nWhile looking at this bug, I have spotted a behavior which is perhaps\nnot welcome. Take this test case:\ncreate table aa (a int);\ninsert into aa values (1), (1);\ncreate unique index concurrently aai on aa(a); -- fails\nalter table aa alter column a type bigint;\n\nThis generates the following error:\nERROR: 23505: could not create unique index \"aai\"\nDETAIL: Key (a)=(1) is duplicated.\nSCHEMA NAME: public\nTABLE NAME: aa\nCONSTRAINT NAME: aai\nLOCATION: comparetup_index_btree, tuplesort.c:4049\n\nAfter a REINDEX CONCURRENTLY, we may leave behind an invalid index\non the relation's toast table or even normal indexes. CREATE INDEX\nCONCURRENTLY may also leave behind invalid indexes. If triggering an\nALTER TABLE that causes a rewrite of the relation, we have the\nfollowing behavior:\n- An invalid toast index is correctly discarded, keeping one valid\ntoast index. No problem here.\n- Invalid non-toast indexes are all rebuilt. 
If the index relies on a\nconstraint then ALTER TABLE would fail, like the above.\n\nI am wondering if there is an argument for not including invalid\nindexes in the rebuilt version, even if the existing behavior may be\nuseful for some users.\n--\nMichael", "msg_date": "Tue, 17 Mar 2020 14:33:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Tue, Mar 17, 2020 at 02:33:32PM +0900, Michael Paquier wrote:\n> > Yeah, in cluster(), mark_index_clustered().\n> \n> Patch 0002 from Justin does that, I would keep this refactoring as\n> HEAD-only material though, and I don't spot any other code paths in\n> need of patching.\n> \n> The commit message of patch 0001 is not what you wanted I guess.\n\nThat's what git-am does, and I didn't find any option to make it less\nunreadable. I guess I should just delete the email body it inserts.\n\n| The commit message is formed by the title taken from the \"Subject: \", a\n| blank line and the body of the message up to where the patch begins. Excess\n| whitespace at the end of each line is automatically stripped.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 17 Mar 2020 11:20:44 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Tue, Mar 17, 2020 at 11:20:44AM -0500, Justin Pryzby wrote:\n> On Tue, Mar 17, 2020 at 02:33:32PM +0900, Michael Paquier wrote:\n>> Patch 0002 from Justin does that, I would keep this refactoring as\n>> HEAD-only material though, and I don't spot any other code paths in\n>> need of patching.\n>> \n>> The commit message of patch 0001 is not what you wanted I guess.\n> \n> That's what git-am does, and I didn't find any option to make it less\n> unreadable. 
I guess I should just delete the email body it inserts.\n\nStrange...\n\nAnyway, Tom, Alvaro, are you planning to look at what is proposed on\nthis thread? I don't want to step on your toes if that's the case and\nit seems to me that the approach taken by the patch is sound, using as\nbasic fix the addition of an AT_ClusterOn sub-command to the list of\ncommands to execute when rebuilding the table, ensuring that any\nfollow-up CLUSTER command will use the correct index.\n--\nMichael", "msg_date": "Wed, 18 Mar 2020 11:48:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Wed, Mar 18, 2020 at 11:48:37AM +0900, Michael Paquier wrote:\n> Anyway, Tom, Alvaro, are you planning to look at what is proposed on\n> this thread? I don't want to step on your toes if that's the case and\n> it seems to me that the approach taken by the patch is sound, using as\n> basic fix the addition of an AT_ClusterOn sub-command to the list of\n> commands to execute when rebuilding the table, ensuring that any\n> follow-up CLUSTER command will use the correct index.\n\nHearing nothing, I have been looking at the patches sent upthread, and\ndid some modifications as per the attached for 0001. The logic looked\nfine to me and it is unchanged as you are taking care of normal\nindexes as well as constraint indexes. 
Please note that I have\ntweaked some comments, and removed what was on top of\nRememberConstraintForRebuilding() as that was just a duplicate.\nRegarding the tests, I was annoyed by the fact that the logic was not\nmanipulating two indexes at the same time on the relation rewritten\nwith a normal and a constraint index, and cross-checking both at the\nsame time to see which one is clustered after each rewrite is good for\nconsistency.\n\nNow, regarding patch 0002, note that you have a problem for this part:\n- tuple = SearchSysCache1(INDEXRELID, ObjectIdGetDatum(indexOid));\n- if (!HeapTupleIsValid(tuple)) /* probably can't happen */\n- {\n- relation_close(OldHeap, AccessExclusiveLock);\n- pgstat_progress_end_command();\n- return;\n- }\n- indexForm = (Form_pg_index) GETSTRUCT(tuple);\n- if (!indexForm->indisclustered)\n+ if (!get_index_isclustered(indexOid))\n {\n- ReleaseSysCache(tuple);\n relation_close(OldHeap, AccessExclusiveLock);\n pgstat_progress_end_command();\n return;\n }\n- ReleaseSysCache(tuple);\n\nOn an invalid tuple for pg_index, the new code would issue an error,\nwhile the old code would just return. And it seems to me that this\ncan lead to problems because the parent relation is processed and\nlocked at the beginning of cluster_rel(), *after* we know the index\nOID to work on. 
The refactoring is fine for the other two areas\nthough, so I think that there is still value to apply\nget_index_isclustered() within mark_index_clustered() and cluster(),\nand I would suggest to keep 0002 to that.\n\nJustin, what do you think?\n--\nMichael", "msg_date": "Thu, 2 Apr 2020 15:14:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Thu, Apr 02, 2020 at 03:14:21PM +0900, Michael Paquier wrote:\n> Now, regarding patch 0002, note that you have a problem for this part:\n> - tuple = SearchSysCache1(INDEXRELID, ObjectIdGetDatum(indexOid));\n> - if (!HeapTupleIsValid(tuple)) /* probably can't happen */\n> - {\n> - relation_close(OldHeap, AccessExclusiveLock);\n> - pgstat_progress_end_command();\n> - return;\n> - }\n> - indexForm = (Form_pg_index) GETSTRUCT(tuple);\n> - if (!indexForm->indisclustered)\n> + if (!get_index_isclustered(indexOid))\n> {\n> - ReleaseSysCache(tuple);\n> relation_close(OldHeap, AccessExclusiveLock);\n> pgstat_progress_end_command();\n> return;\n> }\n> - ReleaseSysCache(tuple);\n> \n> On an invalid tuple for pg_index, the new code would issue an error,\n> while the old code would just return. And it seems to me that this\n> can lead to problems because the parent relation is processed and\n> locked at the beginning of cluster_rel(), *after* we know the index\n> OID to work on.\n\n> The refactoring is fine for the other two areas\n> though, so I think that there is still value to apply\n> get_index_isclustered() within mark_index_clustered() and cluster(),\n> and I would suggest to keep 0002 to that.\n> \n> Justin, what do you think?\n\nSounds right. 
Or else get_index_isclustered() could be redefined to take a\nboolean \"do_elog\" flag, and if syscache fails and that's false, then return\nfalse instead of ERROR.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 2 Apr 2020 01:52:09 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Thu, Apr 02, 2020 at 01:52:09AM -0500, Justin Pryzby wrote:\n> Sounds right. Or else get_index_isclustered() could be redefined to take a\n> boolean \"do_elog\" flag, and if syscache fails and that's false, then return\n> false instead of ERROR.\n\nNot sure if that's completely right to do either. For one, it is not\nconsistent with the surroundings as of lsyscache.c.\n--\nMichael", "msg_date": "Thu, 2 Apr 2020 16:24:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On 2020-Apr-02, Michael Paquier wrote:\n\n> Now, regarding patch 0002, note that you have a problem for this part:\n> - tuple = SearchSysCache1(INDEXRELID, ObjectIdGetDatum(indexOid));\n> - if (!HeapTupleIsValid(tuple)) /* probably can't happen */\n> - {\n> - relation_close(OldHeap, AccessExclusiveLock);\n> - pgstat_progress_end_command();\n> - return;\n> - }\n> - indexForm = (Form_pg_index) GETSTRUCT(tuple);\n> - if (!indexForm->indisclustered)\n> + if (!get_index_isclustered(indexOid))\n> {\n> - ReleaseSysCache(tuple);\n> relation_close(OldHeap, AccessExclusiveLock);\n> pgstat_progress_end_command();\n> return;\n> }\n> - ReleaseSysCache(tuple);\n> \n> On an invalid tuple for pg_index, the new code would issue an error,\n> while the old code would just return.\n\nI don't think we need to worry about that problem, because we already\nchecked that the pg_class tuple for the index is there two lines above.\nThe pg_index tuple cannot have gone away on its own; and the index can't\nbe deleted either, 
because cluster_rel holds AEL on the table. There\nisn't \"probably\" about the can't-happen condition, is there?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 2 Apr 2020 04:38:36 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Thu, Apr 02, 2020 at 04:24:03PM +0900, Michael Paquier wrote:\n> On Thu, Apr 02, 2020 at 01:52:09AM -0500, Justin Pryzby wrote:\n>> Sounds right. Or else get_index_isclustered() could be redefined to take a\n>> boolean \"do_elog\" flag, and if syscache fails and that's false, then return\n>> false instead of ERROR.\n> \n> Not sure if that's completely right to do either. For one, it is not\n> consistent with the surroundings as of lsyscache.c.\n\nActually, we do have some missing_ok flags lying around already in\nlsyscache.c, so it would be much more consistent to use that name\ninstead of the do_elog you are suggesting. Could you update the\npatch to reflect that?\n--\nMichael", "msg_date": "Thu, 2 Apr 2020 16:39:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Thu, Apr 02, 2020 at 04:38:36AM -0300, Alvaro Herrera wrote:\n> I don't think we need to worry about that problem, because we already\n> checked that the pg_class tuple for the index is there two lines above.\n> The pg_index tuple cannot have gone away on its own; and the index can't\n> be deleted either, because cluster_rel holds AEL on the table. There\n> isn't \"probably\" about the can't-happen condition, is there?\n\nYes, you are right here. 
I was wondering about an interference with\nthe multi-relation cluster that would not lock the parent relation at\nthe upper level of cluster() but the check on the existence of the\nindex makes sure that we'll never see an invalid entry in pg_index, so\nlet's keep patch 0002 as originally presented. As the commit tree is\ngoing to be rather crowded until feature freeze on Sunday, I'll wait\nuntil Monday or Tuesday to finalize this patch set.\n\nNow, would it be better to apply the refactoring patch for HEAD before\nfeature freeze, or are people fine if this is committed a bit after?\nPatch 0002 is neither a new feature, nor an actual bug, and just some\ncode cleanup, but I am a bit worried about applying that cleanup on\nHEAD just after the freeze.\n--\nMichael", "msg_date": "Fri, 3 Apr 2020 15:54:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" }, { "msg_contents": "On Fri, Apr 03, 2020 at 03:54:38PM +0900, Michael Paquier wrote:\n> Now, would it be better to apply the refactoring patch for HEAD before\n> feature freeze, or are people fine if this is committed a bit after?\n> Patch 0002 is neither a new feature, nor an actual bug, and just some\n> code cleanup, but I am a bit worried about applying that cleanup on\n> HEAD just after the freeze.\n\nI have worked more on this one this morning and just applied the bug\nfix down to 9.5, and the refactoring on HEAD. Thanks!\n--\nMichael", "msg_date": "Mon, 6 Apr 2020 11:47:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ALTER tbl rewrite loses CLUSTER ON index" } ]
[ { "msg_contents": "Hello all,\n\nLike the title says, using \"gin_fuzzy_search_limit\" degrades speed when it\nhas a relatively low setting.\n\nWhat do I mean by \"relatively low\"? An example would be when a table with a\nGIN index has many millions of rows and a particular keyword search has\n1,000,000 possible results because the keyword is very common (or it's just\nthat the table is so supremely large that even a somewhat common keyword\nappears enough to return one million results). However, you only want to\nreturn around 100 random results from that one million, so you set\ngin_fuzzy_search_limit to 100. That limit is relatively low when you look\nat the ratio of the limit value to the possible results: 100 / 1,000,000 =\n0.0001. You'll find the query is very slow for such a low ratio. It isn't\nso slow if gin_fuzzy_search_limit is 100 but the keyword search has only a\ntotal of 10,000 possible results (resulting in a higher ratio of 0.1).\n\nThis would explain why in the documentation it is said that \"From\nexperience, values in the thousands (e.g., 5000 — 20000) work well\". It's\nnot so common to have queries that return large enough result sets such\nthat gin_fuzzy_search_limit values between 5,000 and 20,000 would result in\nlow ratios and so result in the performance issue I've observed (these\ngin_fuzzy_search_limit values have relatively high ratios between 0.005 and\n0.02 if you have 1,000,000 results for a keyword search). 
However, if you\ndesire a lower gin_fuzzy_search_limit such as 100, while also having a\nrelatively larger table, you'll find this slowness issue.\n\nI discussed this issue more and the reason for it in my original bug\nreport:\nhttps://www.postgresql.org/message-id/16220-1a0a4f0cb67cafdc@postgresql.org\n\nAttached is SQL to test and observe this issue and also attached is a patch\nI want to eventually submit to a commitfest.\n\nBest regards,\nAdé", "msg_date": "Sun, 2 Feb 2020 13:06:01 -0500", "msg_from": "=?UTF-8?B?QWTDqQ==?= <ade.hey@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Fix for slow GIN index queries when \"gin_fuzzy_search_limit\"\n setting is relatively small for large tables" }, { "msg_contents": "=?UTF-8?B?QWTDqQ==?= <ade.hey@gmail.com> writes:\n> Like the title says, using \"gin_fuzzy_search_limit\" degrades speed when it\n> has a relatively low setting.\n> ...\n> Attached is SQL to test and observe this issue and also attached is a patch\n> I want to eventually submit to a commitfest.\n\nI took a brief look at this. It seems like what you're actually trying\nto accomplish is to ensure that entryLoadMoreItems's \"stepright\" path\nis taken, instead of re-descending the index from the root. Okay,\nI can see why that'd be a big win, but why are you tying it to the\ndropItem behavior? It should apply any time we're iterating this loop\nmore than once. IOW, it seems to me like the logic about when to step\nright is just kinda broken, and this is a band-aid rather than a full fix.\nThe performance hit is worse for fuzzy-search mode because it will\niterate the loop more (relative to the amount of work done elsewhere),\nbut there's still a potential for wasted work.\n\nActually, a look at the code coverage report shows that the\nnot-step-right path is never taken at all in our standard regression\ntests. 
Maybe that just says bad things about the tests' coverage, but\nnow I'm wondering whether we could just flush that code path altogether,\nand assume that we should always step right at this point.\n\n[ cc'ing heikki and alexander, who seem to have originated that code\nat 626a120656a75bf4fe64b1d0d83c23cb38d3771a. The commit message says\nit saves a lot of I/O, but I'm wondering if this report disproves that.\nIn any case the lack of test coverage is pretty damning. ]\n\nWhile we're here, what do you think about the comment in the other\ncode branch just above:\n\n\t\t/* XXX: shouldn't we apply the fuzzy search limit here? */\n\nI'm personally inclined to suspect that the fuzzy-search business has\ngot lots of bugs, which haven't been noticed because (a) it's so squishily\ndefined that one can hardly tell whether a given result is buggy or not,\nand (b) nearly nobody uses it anyway (possibly not unrelated to (a)).\nAs somebody who evidently is using it, you're probably more motivated\nto track down bugs than the rest of us.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Mar 2020 19:16:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix for slow GIN index queries when\n \"gin_fuzzy_search_limit\" setting is relatively small for large tables" }, { "msg_contents": "Hi, Tom. Thanks for taking a look.\n\n\n> It seems like what you're actually trying\n> to accomplish is to ensure that entryLoadMoreItems's \"stepright\" path\n> is taken, instead of re-descending the index from the root.\n\n\nWhat I was primarily trying to do is make sure that when entryLoadMoreItems\nis called, it loads new items that it didn’t load previously, which would\noccur in special cases. But the solution to that goal does result in the\n\"stepright\" path being used. 
I explain the main goal in a segment of my bug\nreport this way, though it's a bit longwinded (from\nhttps://www.postgresql.org/message-id/16220-1a0a4f0cb67cafdc@postgresql.org\n):\n\n\nSince the program doesn't load all items into memory at once, it calls the\n> \"entryLoadMoreItems\" function when it needs to get another page of items to\n> iterate through. The \"entryLoadMoreItems\" function calls are passed an\n> \"advancePast\" variable as an argument. This variable decides what leaf page\n> in the items/posting tree should more items be retrieved from. Usually when\n> iterating through all possible results, execution will exit the do-while\n> loop responsible for iteration in order to perform some important actions\n> (including the updating of the \"advancePast\" variable) before returning\n> into\n> the loop again to iterate over more items. However, when \"dropItem\" returns\n> true in succession a great many times, the do-while loop can not be exited\n> for updating the \"advancePast\" variable until a non-drop finally occurs.\n> When this \"advancePast\" variable is not updated it leads to calls to\n> \"entryLoadMoreItems\" repeatedly returning the same items when stuck in the\n> do-while loop by a high succession of dropped items (because \"advancePast\"\n> is never updated to a value after items already iterated through).\n\n\n\n> Along with the issue of returning the same items, there's the issue of how\n> the \"entryLoadMoreItems\" function traverses the posting tree from the root\n> each time it's called while stuck in the do-while loop. This especially is\n> the cause for the bad performance seen for low \"gin_fuzzy_search_limit\"\n> values. In some cases, this situation is made even worse when \"advancePast\"\n> is set to a value that leads to loading a page of items that has relatively\n> few items actually past \"advancePast\", and so it must almost immediately\n> call \"entryLoadMoreItems\" again. 
But because \"advancePast\" never gets\n> updated, this results in a higher than usual succession of\n> \"entryLoadMoreItems\" function calls (the program loads the same page,\n> iterates over the same relatively few items until it goes and loads the\n> same\n> page again), with each call requiring traversal from the root of the\n> posting\n> tree down to the same leaf page as before.\n\n\n\n> My patch makes it so that when stuck in the do-while loop after many\n> successive \"dropItems\" returning true, the program instead now loads actual\n> new items by passing the last item dropped into the \"entryLoadMoreItems\"\n> function instead of the \"advancePast\" variable that can't be appropriately\n> updated while stuck in the do-while loop. This means \"entryLoadMoreItems\"\n> will instead load items ordered right after the last dropped item. This\n> last\n> item dropped is also the current item (\"curItem\") and so the\n> \"entryLoadMoreItems\" can directly obtain the next page of items by making a\n> step right from the current page, instead of traversing from the root of\n> the\n> posting tree, which is the most important fix for performance.\n\n\nIn regards to this:\n\nWhile we're here, what do you think about the comment in the other\n> code branch just above:\n> /* XXX: shouldn't we apply the fuzzy search limit here? */\n> I'm personally inclined to suspect that the fuzzy-search business has\n> got lots of bugs, which haven't been noticed because (a) it's so squishily\n> defined that one can hardly tell whether a given result is buggy or not,\n> and (b) nearly nobody uses it anyway (possibly not unrelated to (a)).\n> As somebody who evidently is using it, you're probably more motivated\n> to track down bugs than the rest of us.\n\n\nI think the comment is correct. It should be applied if you are to stay\nconsistent. 
Like the comment above that comment says, that code branch is\nfor the two cases of either (1) reaching the last page of a posting tree or\n(2) when an \"entry\"/keyword has so few results that the item pointers fit\nin just one page containing a posting list. If there is a chance of a\ndropped item in the other pages of the posting tree, there should be a\nchance of dropped items in the last page too for consistency sake at least.\nAnd there should also be a chance of dropped items when iterating a single\nposting list of entry with relatively few results. Placing \"||\n(entry->reduceResult == true && dropItem(entry))\" at the end of the while\ncondition should be all that's needed to apply the fuzzy search limit there.\n\nAnd I agree that probably usage of the fuzzy search feature is extremely\nrare and the way I'm using it probably even more rare. So thank you for\ntaking a look at it. It's a really great feature for me though and I'm glad\nthe creator placed it in.\n\nRegards,\nAde\n\nOn Tue, Mar 10, 2020 at 7:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> =?UTF-8?B?QWTDqQ==?= <ade.hey@gmail.com> writes:\n> > Like the title says, using \"gin_fuzzy_search_limit\" degrades speed when\n> it\n> > has a relatively low setting.\n> > ...\n> > Attached is SQL to test and observe this issue and also attached is a\n> patch\n> > I want to eventually submit to a commitfest.\n>\n> I took a brief look at this. It seems like what you're actually trying\n> to accomplish is to ensure that entryLoadMoreItems's \"stepright\" path\n> is taken, instead of re-descending the index from the root. Okay,\n> I can see why that'd be a big win, but why are you tying it to the\n> dropItem behavior? It should apply any time we're iterating this loop\n> more than once. 
IOW, it seems to me like the logic about when to step\n> right is just kinda broken, and this is a band-aid rather than a full fix.\n> The performance hit is worse for fuzzy-search mode because it will\n> iterate the loop more (relative to the amount of work done elsewhere),\n> but there's still a potential for wasted work.\n>\n> Actually, a look at the code coverage report shows that the\n> not-step-right path is never taken at all in our standard regression\n> tests. Maybe that just says bad things about the tests' coverage, but\n> now I'm wondering whether we could just flush that code path altogether,\n> and assume that we should always step right at this point.\n>\n> [ cc'ing heikki and alexander, who seem to have originated that code\n> at 626a120656a75bf4fe64b1d0d83c23cb38d3771a. The commit message says\n> it saves a lot of I/O, but I'm wondering if this report disproves that.\n> In any case the lack of test coverage is pretty damning. ]\n>\n> While we're here, what do you think about the comment in the other\n> code branch just above:\n>\n> /* XXX: shouldn't we apply the fuzzy search limit here? */\n>\n> I'm personally inclined to suspect that the fuzzy-search business has\n> got lots of bugs, which haven't been noticed because (a) it's so squishily\n> defined that one can hardly tell whether a given result is buggy or not,\n> and (b) nearly nobody uses it anyway (possibly not unrelated to (a)).\n> As somebody who evidently is using it, you're probably more motivated\n> to track down bugs than the rest of us.\n>\n> regards, tom lane\n>\n", "msg_date": "Thu, 12 Mar 2020 00:22:54 -0400", "msg_from": "=?UTF-8?B?QWTDqQ==?= <ade.hey@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix for slow GIN index queries when\n \"gin_fuzzy_search_limit\"\n setting is relatively small for large tables" }, { "msg_contents": "=?UTF-8?B?QWTDqQ==?= <ade.hey@gmail.com> writes:\n>> It seems like what you're actually trying\n>> to accomplish is to ensure that entryLoadMoreItems's \"stepright\" path\n>> is taken, instead of re-descending the index from the root.\n\n> What I was primarily trying to do is make sure that when entryLoadMoreItems\n> is called, it loads new items that it didn't load previously, which would\n> occur in special cases. But the solution to that goal does result in the\n> \"stepright\" path being used.\n\nOh, hm, now I see what you mean: as things stand, it's likely to\nrepeatedly (and very expensively) reload the same page we were\nalready on, until the random dropItem() test finally accepts some\nitem from that page. Ick.\n\nI think though that the fix can be a bit less messy than you have here,\nbecause advancePast is just a local variable in entryGetItem, so we\ncan overwrite it without any problem. So what I want to do is just\nupdate it to equal entry->curItem before looping around. But shoving\nthat assignment into the while-condition was too ugly for my taste\n(and no, I didn't like your assignment there either). 
So I ended up\nrefactoring the do-loop into a for-loop with internal break conditions,\nas attached.\n\nI also made the posting-list case handle reduction in the same way,\nand for good measure changed the bitmap-result case to look similar,\nwhich caused me to notice that it had a bug too: the \"continue\" case\nwithin that loop failed to reset gotitem = false as it should,\nif we'd looped around after rejecting an item due to reduceResult.\nAs far as I can see, that would lead to returning the previously-\nrejected curItem value, which isn't fatal but it's still wrong.\nSo I just got rid of the gotitem variable altogether; it really\nwasn't helping with either clarity or correctness.\n\nThis patch also adds a couple of test cases so that we have more\ncode coverage in this area. The overall coverage of ginget.c\nis still mighty lame, but at least we're going through some of\nthese lines that we weren't before.\n\nI'm inclined to back-patch this. Given how fuzzy the definition\nof gin_fuzzy_search_limit is, it seems unlikely that anyone would\nbe depending on the current buggy behavior.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 02 Apr 2020 19:18:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix for slow GIN index queries when\n \"gin_fuzzy_search_limit\" setting is relatively small for large tables" }, { "msg_contents": "I wrote:\n> I'm inclined to back-patch this. Given how fuzzy the definition\n> of gin_fuzzy_search_limit is, it seems unlikely that anyone would\n> be depending on the current buggy behavior.\n\nAnd done. Thanks for the bug report and patch!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Apr 2020 13:18:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix for slow GIN index queries when\n \"gin_fuzzy_search_limit\" setting is relatively small for large tables" }, { "msg_contents": "Great. 
Thanks for refactoring it further and fixing other bugs in there\n(and making it more clean too)!\n\nRegards,\nAde\n\nOn Fri, Apr 3, 2020 at 1:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > I'm inclined to back-patch this. Given how fuzzy the definition\n> > of gin_fuzzy_search_limit is, it seems unlikely that anyone would\n> > be depending on the current buggy behavior.\n>\n> And done. Thanks for the bug report and patch!\n>\n> regards, tom lane\n>\n", "msg_date": "Fri, 3 Apr 2020 20:38:50 -0400", "msg_from": "=?UTF-8?B?QWTDqQ==?= <ade.hey@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix for slow GIN index queries when\n \"gin_fuzzy_search_limit\"\n setting is relatively small for large tables" } ]
[ { "msg_contents": "Hackers,\n\nI have implemented $subject, attached.\n\n\nWhile reviewing the \"New SQL counter statistics view (pg_stat_sql)” thread [1], I came across Andres’ comment\n\n> That's not really something in this patch, but a lot of this would be\n> better if we didn't internally have command tags as strings, but as an\n> enum. We should really only convert to a string with needed. That\n> we're doing string comparisons on Portal->commandTag is just plain bad.\n> \n> \n> \n> If so, we could also make this whole patch a lot cheaper - have a fixed\n> size array that has an entry for every possible tag (possibly even in\n> shared memory, and then use atomics there).\n\n\n\nI put the CommandTag enum in src/common because there wasn’t any reason not to do so. It seems plausible that frontend test frameworks might want access to this enum. I don’t have any frontend code using it yet, nor any concrete plans for that. I’m indifferent about this, and will move it into src/backend if folks think that’s better.\n\n\nIn commands/event_trigger.c, I changed the separation between EVENT_TRIGGER_COMMAND_TAG_NOT_SUPPORTED and EVENT_TRIGGER_COMMAND_TAG_NOT_RECOGNIZED. It used to claim not to recognize command tags that are indeed recognized elsewhere in the system, but simply not expected here. It now returns “not supported” for them, and only returns “not recognized” for special enum values COMMANDTAG_NULL and COMMANDTAG_UNKNOWN, as well as values outside the recognized range of the enum. I’m happy to change my implementation to preserve the old behavior if necessary. Is there a backward compatibility issue here? It does not impact regression test output for me to change this, but that’s not definitive….\n\nI have extended the event_trigger.sql regression test, with new expected output, and when applying that change to master, the test fails due to the “not supported” vs. “not recognized” distinction. 
I have kept this regression test change in its own patch file, 0002. The differences when applied to master look like:\n\n> create event trigger regress_event_trigger_ALTER_SYSTEM on ddl_command_start\n> when tag in ('ALTER SYSTEM')\n> execute procedure test_event_trigger2();\n> -ERROR: event triggers are not supported for ALTER SYSTEM\n> +ERROR: filter value \"ALTER SYSTEM\" not recognized for filter variable \"tag\"\n\n\n\nPreventCommandIfReadOnly and PreventCommandIfParallelMode sometimes take a commandTag, but in commands/sequence.c they take strings “nextval()” and “setval()”. Likewise, PreventCommandDuringRecovery takes \"txid_current()” in adt/txid.c. I had to work around this a little, which was not hard to do, but it made me wonder if command tags and these sorts of functions shouldn’t be unified a bit more. They don’t really look consistent with all the other values in the CommandTag enum, so I left them out. I’m open to opinions about this.\n\n\nThere was some confusion in the code between a commandTag and a completionTag, with the commandTag getting overwritten with the completionTag over the course of execution. I’ve split that out into two distinctly separate concepts, which I think makes the code easier to grok. I’ve added a portal->completionTag field that is a fixed size buffer rather than a palloc’d string, to match how completionTag works elsewhere. But the old code that was overwriting the commandTag (a palloc’d string) with a completionTag (a char[] buffer) was using pstrdup for that purpose. I’m now using strlcpy. I don’t care much which way to go here (buffer vs. palloc’d string). 
Let me know if using a fixed-size buffer as I’ve done bothers anybody.\n\n\nThere were some instances of things like:\n\n strcpy(completionTag, portal->commandTag);\n\nwhich should have more properly been\n\n strlcpy(completionTag, portal->commandTag, COMPLETION_TAG_BUFSIZE);\n\nI don’t know if any of these were live bugs, but they seemed like traps for the future, should any new commandTag length overflow the buffer size. I think this patch fixes all of those cases.\n\n\nGenerating CommandTag enum values from user queries and then converting those back to string for printing or use in set_ps_display results in normalization of the commandTag, by which I mean that it becomes all uppercase. I don’t know of any situations where this would matter, but I can’t say for sure that it doesn’t. Anybody have thoughts on that?\n \n\n[1] https://www.postgresql.org/message-id/flat/CAJrrPGeY4xujjoR=z=KoyRMHEK_pSjjp=7VBhOAHq9rfgpV7QQ@mail.gmail.com\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 2 Feb 2020 16:41:01 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Portal->commandTag as an enum" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> I put the CommandTag enum in src/common because there wasn’t any reason\n> not to do so. It seems plausible that frontend test frameworks might want\n> access to this enum.\n\nAu contraire, that's an absolutely fundamental mistake. There is\nzero chance of this enum holding still across PG versions, so if\nwe expose it to frontend code, we're going to have big problems\nwith cross-version compatibility. See our historical problems with\nassuming the enum for character set encodings was the same between\nfrontend and backend ... 
and that set is orders of magnitude more\nstable than this one.\n\nAs I recall, the hardest problem with de-string-ifying this is the fact\nthat for certain tags we include a rowcount in the string. I'd like to\nsee that undone --- we have to keep it like that on-the-wire to avoid a\nprotocol break, but it'd be best if noplace inside the backend did it that\nway, and we converted at the last moment before sending a CommandComplete\nto the client. Your references to \"completion tags\" make it sound like\nyou've only half done the conversion (though I've not read the patch\nin enough detail to verify).\n\nBTW, the size of the patch is rather depressing, especially if it's\nonly half done. Unlike Andres, I'm not even a little bit convinced\nthat this is worth the amount of code churn it'll cause. Have you\ndone any code-size or speed comparisons?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 Feb 2020 21:14:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "\n\n> On Feb 2, 2020, at 6:14 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> I put the CommandTag enum in src/common because there wasn’t any reason\n>> not to do so. It seems plausible that frontend test frameworks might want\n>> access to this enum.\n> \n\nThanks for looking!\n\n> Au contraire, that's an absolutely fundamental mistake. There is\n> zero chance of this enum holding still across PG versions, so if\n> we expose it to frontend code, we're going to have big problems\n> with cross-version compatibility. See our historical problems with\n> assuming the enum for character set encodings was the same between\n> frontend and backend ... 
and that set is orders of magnitude more\n> stable than this one.\n\nI completely agree that this enum cannot be expected to remain stable across versions.\n\nFor the purposes of this patch, which has nothing to do with frontend tools, this issue doesn’t matter to me. I’m happy to move this into src/backend.\n\nIs there no place to put code which would be useful for frontend tools without implying stability? Sure, psql and friends can’t use it, because they need to be able to connect to servers of other versions. But why couldn’t a test framework tool use something like this? Could we have someplace like src/common/volatile for this sort of thing?\n\n> \n> As I recall, the hardest problem with de-string-ifying this is the fact\n> that for certain tags we include a rowcount in the string. I'd like to\n> see that undone --- we have to keep it like that on-the-wire to avoid a\n> protocol break, but it'd be best if noplace inside the backend did it that\n> way, and we converted at the last moment before sending a CommandComplete\n> to the client. Your references to \"completion tags\" make it sound like\n> you've only half done the conversion (though I've not read the patch\n> in enough detail to verify).\n\nIn v1, I stayed closer to the existing code structure than you are requesting. I like the direction you’re suggesting that I go, and I’ve begun that transition in anticipation of posting a v2 patch set soon.\n\n> BTW, the size of the patch is rather depressing, especially if it's\n> only half done. Unlike Andres, I'm not even a little bit convinced\n> that this is worth the amount of code churn it'll cause. Have you\n> done any code-size or speed comparisons?\n\nA fair amount of the code churn is replacing strings with their enum equivalent, creating the enum itself, and creating a data table mapping enums to strings. 
The churn doesn’t look too bad to me when viewing the original vs new code diff side-by-side.\n\nThe second file (v1-0002…) is entirely an extension of the regression tests. Applying v1-0001… doesn’t entail needing to apply v1-0002… as the code being tested exists before and after the patch. If you don’t want to apply that regression test change, that’s fine. It just provides more extensive coverage of event_triggers over different command tags.\n\nThere will be a bit more churn in v2, since I’m changing the code flow a bit more to avoid generating the strings until they are about to get sent to the client, per your comments above. That has the advantage that multiple places in the old code where the completionTag was parsed to get the nprocessed count back out now doesn’t need any parsing.\n\nI’ll include stats about code-size and speed when I post v2.\n\nThanks again for reviewing my patch idea!\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 3 Feb 2020 09:41:56 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "> On Feb 3, 2020, at 9:41 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> In v1, I stayed closer to the existing code structure than you are requesting. I like the direction you’re suggesting that I go, and I’ve begun that transition in anticipation of posting a v2 patch set soon.\n\nOk, here is v2, attached.\n\nIn master, a number of functions pass a char *completionTag argument (really a char completionTag[COMPLETION_TAG_BUFSIZE]) which gets filled in with the string to return to the client from EndCommand. 
I have removed that kind of logic:\n\n- /* save the rowcount if we're given a completionTag to fill */\n- if (completionTag)\n- snprintf(completionTag, COMPLETION_TAG_BUFSIZE,\n- \"SELECT \" UINT64_FORMAT,\n- queryDesc->estate->es_processed);\n\nIn the patch, this is replaced with a new struct, QueryCompletionData. That bit of code above is replaced with:\n\n+ /* save the rowcount if we're given a qc to fill */\n+ if (qc)\n+ SetQC(qc, COMMANDTAG_SELECT, queryDesc->estate->es_processed, DISPLAYFORMAT_NPROCESSED);\n\nFor wire protocol compatibility, we have to track the display format. When this gets to EndCommand, the display format allows the string to be written exactly as the client will expect. If we ever get to the point where we can break with that compatibility, the third member of this struct, display_format, can be removed.\n\nWhere string parsing is being done in master to get the count back out, it changes to look like this:\n\n- if (strncmp(completionTag, \"SELECT \", 7) == 0)\n- _SPI_current->processed =\n- pg_strtouint64(completionTag + 7, NULL, 10);\n+ if (qcdata.commandTag == COMMANDTAG_SELECT)\n+ _SPI_current->processed = qcdata.nprocessed;\n\nOne of the advantages to the patch is that the commandTag for a portal is not overwritten by the commandTag in the QueryCompletionData, meaning for example that if an EXECUTE command returns the string “UPDATE 0”, the portal->commandTag remains COMMANDTAG_EXECUTE while the qcdata.commandTag becomes COMMANDTAG_UPDATE. 
This could be helpful to code trying to track how many operations of a given type have run.\n\nIn event_trigger.c, in master there are ad-hoc comparisons against c-strings:\n\n- /*\n- * Handle some idiosyncratic special cases.\n- */\n- if (pg_strcasecmp(tag, \"CREATE TABLE AS\") == 0 ||\n- pg_strcasecmp(tag, \"SELECT INTO\") == 0 ||\n- pg_strcasecmp(tag, \"REFRESH MATERIALIZED VIEW\") == 0 ||\n- pg_strcasecmp(tag, \"ALTER DEFAULT PRIVILEGES\") == 0 ||\n- pg_strcasecmp(tag, \"ALTER LARGE OBJECT\") == 0 ||\n- pg_strcasecmp(tag, \"COMMENT\") == 0 ||\n- pg_strcasecmp(tag, \"GRANT\") == 0 ||\n- pg_strcasecmp(tag, \"REVOKE\") == 0 ||\n- pg_strcasecmp(tag, \"DROP OWNED\") == 0 ||\n- pg_strcasecmp(tag, \"IMPORT FOREIGN SCHEMA\") == 0 ||\n- pg_strcasecmp(tag, \"SECURITY LABEL\") == 0)\n\nThese are replaced by switch() case statements over the possible commandTags:\n\n+ switch (commandTag)\n+ {\n+ /*\n+ * Supported idiosyncratic special cases.\n+ */\n+ case COMMANDTAG_ALTER_DEFAULT_PRIVILEGES:\n+ case COMMANDTAG_ALTER_LARGE_OBJECT:\n+ case COMMANDTAG_COMMENT:\n+ case COMMANDTAG_CREATE_TABLE_AS:\n+ case COMMANDTAG_DROP_OWNED:\n+ case COMMANDTAG_GRANT:\n+ case COMMANDTAG_IMPORT_FOREIGN_SCHEMA:\n+ case COMMANDTAG_REFRESH_MATERIALIZED_VIEW:\n+ case COMMANDTAG_REVOKE:\n+ case COMMANDTAG_SECURITY_LABEL:\n+ case COMMANDTAG_SELECT_INTO:\n\nI think this is easier to read, verify, and maintain. The compiler can help if you leave a command tag out of the list, which the compiler cannot help discover in master as it is currently written. But I also think all those pg_strcasecmp calls are likely more expensive at runtime.\n\nIn master, EventTriggerCacheItem tracks a sorted array of palloc’d cstrings. 
In the patch, that becomes a Bitmapset over the enum:\n\ntypedef struct\n {\n Oid fnoid; /* function to be called */\n char enabled; /* as SESSION_REPLICATION_ROLE_* */\n- int ntags; /* number of command tags */\n- char **tag; /* command tags in SORTED order */\n+ Bitmapset *tagset; /* command tags, or NULL if empty */\n } EventTriggerCacheItem;\n\nThe code in evtcache.c is shorter and, in my opinion, easier to read. In filter_event_trigger, rather than running bsearch through a sorted array of strings, it just runs bms_is_member.\n\nI’ve kept this change to the event trigger code in its own separate patch file, to make the change easier to review in isolation.\n\n> I’ll include stats about code-size and speed when I post v2.\n\nThe benchmarks are from tpc-b_96.sql. I think I’ll need to adjust the benchmark to put more emphasis on the particular code that I’m changing, but I have run this standard benchmark for this email:\n\nFor master (1fd687a035):\n\n\tpostgresql % find src -type f -name \"*.c\" -or -name \"*.h\" | xargs cat | wc\n\t 1482117 5690660 45256959\n\n\tpostgresql % find src -type f -name \"*.o\" | xargs cat | wc\n\t 38283 476264 18999164\n\nAverages for test set 1 by scale:\nset\tscale\ttps\tavg_latency\t90%<\tmax_latency\n1\t1\t3741\t1.734\t3.162\t133.718\n1\t9\t6124\t0.904\t1.05\t230.547\n1\t81\t5921\t0.931\t1.015\t67.023\n\nAverages for test set 1 by clients:\nset\tclients\ttps\tavg_latency\t90%<\tmax_latency\n1\t1\t2163\t0.461\t0.514\t24.414\n1\t4\t5968\t0.675\t0.791\t40.354\n1\t16\t7655\t2.433\t3.922\t366.519\n\n\nFor command tag patch (branched from 1fd687a035):\n\n\tpostgresql % find src -type f -name \"*.c\" -or -name \"*.h\" | xargs cat | wc\n\t 1482969 5691908 45281399\n\n\tpostgresql % find src -type f -name \"*.o\" | xargs cat | wc\n\t 38209 476243 18999752\n\n\nAverages for test set 1 by 
scale:\nset\tscale\ttps\tavg_latency\t90%<\tmax_latency\n1\t1\t3877\t1.645\t3.066\t24.973\n1\t9\t6383\t0.859\t1.032\t64.566\n1\t81\t5945\t0.925\t1.023\t162.9\n\nAverages for test set 1 by clients:\nset\tclients\ttps\tavg_latency\t90%<\tmax_latency\n1\t1\t2141\t0.466\t0.522\t11.531\n1\t4\t5967\t0.673\t0.783\t136.882\n1\t16\t8096\t2.292\t3.817\t104.026\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 4 Feb 2020 18:18:52 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "Hi,\n\nOn 2020-02-04 18:18:52 -0800, Mark Dilger wrote:\n> In master, a number of functions pass a char *completionTag argument (really a char completionTag[COMPLETION_TAG_BUFSIZE]) which gets filled in with the string to return to the client from EndCommand. I have removed that kind of logic:\n>\n> - /* save the rowcount if we're given a completionTag to fill */\n> - if (completionTag)\n> - snprintf(completionTag, COMPLETION_TAG_BUFSIZE,\n> - \"SELECT \" UINT64_FORMAT,\n> - queryDesc->estate->es_processed);\n>\n> In the patch, this is replaced with a new struct, QueryCompletionData. That bit of code above is replaced with:\n>\n> + /* save the rowcount if we're given a qc to fill */\n> + if (qc)\n> + SetQC(qc, COMMANDTAG_SELECT, queryDesc->estate->es_processed, DISPLAYFORMAT_NPROCESSED);\n>\n> For wire protocol compatibility, we have to track the display format.\n> When this gets to EndCommand, the display format allows the string to\n> be written exactly as the client will expect. If we ever get to the\n> point where we can break with that compatibility, the third member of\n> this struct, display_format, can be removed.\n\nHm. 
While I like not having this as strings a lot, I wish we could get\nrid of this displayformat stuff.\n\n\n\n> These are replaced by switch() case statements over the possible commandTags:\n>\n> + switch (commandTag)\n> + {\n> + /*\n> + * Supported idiosyncratic special cases.\n> + */\n> + case COMMANDTAG_ALTER_DEFAULT_PRIVILEGES:\n> + case COMMANDTAG_ALTER_LARGE_OBJECT:\n> + case COMMANDTAG_COMMENT:\n> + case COMMANDTAG_CREATE_TABLE_AS:\n> + case COMMANDTAG_DROP_OWNED:\n> + case COMMANDTAG_GRANT:\n> + case COMMANDTAG_IMPORT_FOREIGN_SCHEMA:\n> + case COMMANDTAG_REFRESH_MATERIALIZED_VIEW:\n> + case COMMANDTAG_REVOKE:\n> + case COMMANDTAG_SECURITY_LABEL:\n> + case COMMANDTAG_SELECT_INTO:\n\nThe number of these makes me wonder if we should just have a metadata\ntable in one place, instead of needing to edit multiple\nlocations. Something like\n\nconst ... CommandTagBehaviour[] = {\n [COMMANDTAG_INSERT] = {\n .display_processed = true, .display_oid = true, ...},\n [COMMANDTAG_CREATE_TABLE_AS] = {\n .event_trigger = true, ...},\n\nwith the zero initialized defaults being the common cases.\n\nNot sure if it's worth going there. 
But it's maybe worth thinking about\nfor a minute?\n\n\n> Averages for test set 1 by scale:\n> set\tscale\ttps\tavg_latency\t90%<\tmax_latency\n> 1\t1\t3741\t1.734\t3.162\t133.718\n> 1\t9\t6124\t0.904\t1.05\t230.547\n> 1\t81\t5921\t0.931\t1.015\t67.023\n>\n> Averages for test set 1 by clients:\n> set\tclients\ttps\tavg_latency\t90%<\tmax_latency\n> 1\t1\t2163\t0.461\t0.514\t24.414\n> 1\t4\t5968\t0.675\t0.791\t40.354\n> 1\t16\t7655\t2.433\t3.922\t366.519\n>\n>\n> For command tag patch (branched from 1fd687a035):\n>\n> \tpostgresql % find src -type f -name \"*.c\" -or -name \"*.h\" | xargs cat | wc\n> \t 1482969 5691908 45281399\n>\n> \tpostgresql % find src -type f -name \"*.o\" | xargs cat | wc\n> \t 38209 476243 18999752\n>\n>\n> Averages for test set 1 by scale:\n> set\tscale\ttps\tavg_latency\t90%<\tmax_latency\n> 1\t1\t3877\t1.645\t3.066\t24.973\n> 1\t9\t6383\t0.859\t1.032\t64.566\n> 1\t81\t5945\t0.925\t1.023\t162.9\n>\n> Averages for test set 1 by clients:\n> set\tclients\ttps\tavg_latency\t90%<\tmax_latency\n> 1\t1\t2141\t0.466\t0.522\t11.531\n> 1\t4\t5967\t0.673\t0.783\t136.882\n> 1\t16\t8096\t2.292\t3.817\t104.026\n\nNot bad.\n\n\n> diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c\n> index 9aa2b61600..5322c14ce4 100644\n> --- a/src/backend/commands/async.c\n> +++ b/src/backend/commands/async.c\n> @@ -594,7 +594,7 @@ pg_notify(PG_FUNCTION_ARGS)\n> \t\tpayload = text_to_cstring(PG_GETARG_TEXT_PP(1));\n>\n> \t/* For NOTIFY as a statement, this is checked in ProcessUtility */\n> -\tPreventCommandDuringRecovery(\"NOTIFY\");\n> +\tPreventCommandDuringRecovery(COMMANDTAG_NOTIFY);\n>\n> \tAsync_Notify(channel, payload);\n>\n> diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c\n> index 40a8ec1abd..4828e75bd5 100644\n> --- a/src/backend/commands/copy.c\n> +++ b/src/backend/commands/copy.c\n> @@ -1063,7 +1063,7 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n>\n> \t\t/* check read-only transaction and parallel mode 
*/\n> \t\tif (XactReadOnly && !rel->rd_islocaltemp)\n> -\t\t\tPreventCommandIfReadOnly(\"COPY FROM\");\n> +\t\t\tPreventCommandIfReadOnly(COMMANDTAG_COPY_FROM);\n>\n> \t\tcstate = BeginCopyFrom(pstate, rel, stmt->filename, stmt->is_program,\n> \t\t\t\t\t\t\t NULL, stmt->attlist, stmt->options);\n\nI'm not sure this really ought to be part of this change - seems like a\nsomewhat independent change to me. With less obvious benefits.\n\n\n\n> static event_trigger_command_tag_check_result\n> -check_ddl_tag(const char *tag)\n> +check_ddl_tag(CommandTag commandTag)\n> {\n> -\tconst char *obtypename;\n> -\tconst event_trigger_support_data *etsd;\n> +\tswitch (commandTag)\n> +\t{\n> +\t\t\t/*\n> +\t\t\t * Supported idiosyncratic special cases.\n> +\t\t\t */\n> +\t\tcase COMMANDTAG_ALTER_DEFAULT_PRIVILEGES:\n> +\t\tcase COMMANDTAG_ALTER_LARGE_OBJECT:\n> +\t\tcase COMMANDTAG_COMMENT:\n> +\t\tcase COMMANDTAG_CREATE_TABLE_AS:\n> +\t\tcase COMMANDTAG_DROP_OWNED:\n> +\t\tcase COMMANDTAG_GRANT:\n> +\t\tcase COMMANDTAG_IMPORT_FOREIGN_SCHEMA:\n> +\t\tcase COMMANDTAG_REFRESH_MATERIALIZED_VIEW:\n> +\t\tcase COMMANDTAG_REVOKE:\n> +\t\tcase COMMANDTAG_SECURITY_LABEL:\n> +\t\tcase COMMANDTAG_SELECT_INTO:\n>\n> -\t/*\n> -\t * Handle some idiosyncratic special cases.\n> -\t */\n> -\tif (pg_strcasecmp(tag, \"CREATE TABLE AS\") == 0 ||\n> -\t\tpg_strcasecmp(tag, \"SELECT INTO\") == 0 ||\n> -\t\tpg_strcasecmp(tag, \"REFRESH MATERIALIZED VIEW\") == 0 ||\n> -\t\tpg_strcasecmp(tag, \"ALTER DEFAULT PRIVILEGES\") == 0 ||\n> -\t\tpg_strcasecmp(tag, \"ALTER LARGE OBJECT\") == 0 ||\n> -\t\tpg_strcasecmp(tag, \"COMMENT\") == 0 ||\n> -\t\tpg_strcasecmp(tag, \"GRANT\") == 0 ||\n> -\t\tpg_strcasecmp(tag, \"REVOKE\") == 0 ||\n> -\t\tpg_strcasecmp(tag, \"DROP OWNED\") == 0 ||\n> -\t\tpg_strcasecmp(tag, \"IMPORT FOREIGN SCHEMA\") == 0 ||\n> -\t\tpg_strcasecmp(tag, \"SECURITY LABEL\") == 0)\n> -\t\treturn EVENT_TRIGGER_COMMAND_TAG_OK;\n> +\t\t\t/*\n> +\t\t\t * Supported CREATE commands\n> +\t\t\t */\n> 
+\t\tcase COMMANDTAG_CREATE_ACCESS_METHOD:\n> +\t\tcase COMMANDTAG_CREATE_AGGREGATE:\n> +\t\tcase COMMANDTAG_CREATE_CAST:\n> +\t\tcase COMMANDTAG_CREATE_COLLATION:\n> +\t\tcase COMMANDTAG_CREATE_CONSTRAINT:\n> +\t\tcase COMMANDTAG_CREATE_CONVERSION:\n> +\t\tcase COMMANDTAG_CREATE_DOMAIN:\n> +\t\tcase COMMANDTAG_CREATE_EXTENSION:\n> +\t\tcase COMMANDTAG_CREATE_FOREIGN_DATA_WRAPPER:\n> +\t\tcase COMMANDTAG_CREATE_FOREIGN_TABLE:\n> +\t\tcase COMMANDTAG_CREATE_FUNCTION:\n> +\t\tcase COMMANDTAG_CREATE_INDEX:\n> +\t\tcase COMMANDTAG_CREATE_LANGUAGE:\n> +\t\tcase COMMANDTAG_CREATE_MATERIALIZED_VIEW:\n> +\t\tcase COMMANDTAG_CREATE_OPERATOR:\n> +\t\tcase COMMANDTAG_CREATE_OPERATOR_CLASS:\n> +\t\tcase COMMANDTAG_CREATE_OPERATOR_FAMILY:\n> +\t\tcase COMMANDTAG_CREATE_POLICY:\n> +\t\tcase COMMANDTAG_CREATE_PROCEDURE:\n> +\t\tcase COMMANDTAG_CREATE_PUBLICATION:\n> +\t\tcase COMMANDTAG_CREATE_ROUTINE:\n> +\t\tcase COMMANDTAG_CREATE_RULE:\n> +\t\tcase COMMANDTAG_CREATE_SCHEMA:\n> +\t\tcase COMMANDTAG_CREATE_SEQUENCE:\n> +\t\tcase COMMANDTAG_CREATE_SERVER:\n> +\t\tcase COMMANDTAG_CREATE_STATISTICS:\n> +\t\tcase COMMANDTAG_CREATE_SUBSCRIPTION:\n> +\t\tcase COMMANDTAG_CREATE_TABLE:\n> +\t\tcase COMMANDTAG_CREATE_TEXT_SEARCH_CONFIGURATION:\n> +\t\tcase COMMANDTAG_CREATE_TEXT_SEARCH_DICTIONARY:\n> +\t\tcase COMMANDTAG_CREATE_TEXT_SEARCH_PARSER:\n> +\t\tcase COMMANDTAG_CREATE_TEXT_SEARCH_TEMPLATE:\n> +\t\tcase COMMANDTAG_CREATE_TRANSFORM:\n> +\t\tcase COMMANDTAG_CREATE_TRIGGER:\n> +\t\tcase COMMANDTAG_CREATE_TYPE:\n> +\t\tcase COMMANDTAG_CREATE_USER_MAPPING:\n> +\t\tcase COMMANDTAG_CREATE_VIEW:\n>\n> -\t/*\n> -\t * Otherwise, command should be CREATE, ALTER, or DROP.\n> -\t */\n> -\tif (pg_strncasecmp(tag, \"CREATE \", 7) == 0)\n> -\t\tobtypename = tag + 7;\n> -\telse if (pg_strncasecmp(tag, \"ALTER \", 6) == 0)\n> -\t\tobtypename = tag + 6;\n> -\telse if (pg_strncasecmp(tag, \"DROP \", 5) == 0)\n> -\t\tobtypename = tag + 5;\n> -\telse\n> -\t\treturn 
EVENT_TRIGGER_COMMAND_TAG_NOT_RECOGNIZED;\n> +\t\t\t/*\n> +\t\t\t * Supported ALTER commands\n> +\t\t\t */\n> +\t\tcase COMMANDTAG_ALTER_ACCESS_METHOD:\n> +\t\tcase COMMANDTAG_ALTER_AGGREGATE:\n> +\t\tcase COMMANDTAG_ALTER_CAST:\n> +\t\tcase COMMANDTAG_ALTER_COLLATION:\n> +\t\tcase COMMANDTAG_ALTER_CONSTRAINT:\n> +\t\tcase COMMANDTAG_ALTER_CONVERSION:\n> +\t\tcase COMMANDTAG_ALTER_DOMAIN:\n> +\t\tcase COMMANDTAG_ALTER_EXTENSION:\n> +\t\tcase COMMANDTAG_ALTER_FOREIGN_DATA_WRAPPER:\n> +\t\tcase COMMANDTAG_ALTER_FOREIGN_TABLE:\n> +\t\tcase COMMANDTAG_ALTER_FUNCTION:\n> +\t\tcase COMMANDTAG_ALTER_INDEX:\n> +\t\tcase COMMANDTAG_ALTER_LANGUAGE:\n> +\t\tcase COMMANDTAG_ALTER_MATERIALIZED_VIEW:\n> +\t\tcase COMMANDTAG_ALTER_OPERATOR:\n> +\t\tcase COMMANDTAG_ALTER_OPERATOR_CLASS:\n> +\t\tcase COMMANDTAG_ALTER_OPERATOR_FAMILY:\n> +\t\tcase COMMANDTAG_ALTER_POLICY:\n> +\t\tcase COMMANDTAG_ALTER_PROCEDURE:\n> +\t\tcase COMMANDTAG_ALTER_PUBLICATION:\n> +\t\tcase COMMANDTAG_ALTER_ROUTINE:\n> +\t\tcase COMMANDTAG_ALTER_RULE:\n> +\t\tcase COMMANDTAG_ALTER_SCHEMA:\n> +\t\tcase COMMANDTAG_ALTER_SEQUENCE:\n> +\t\tcase COMMANDTAG_ALTER_SERVER:\n> +\t\tcase COMMANDTAG_ALTER_STATISTICS:\n> +\t\tcase COMMANDTAG_ALTER_SUBSCRIPTION:\n> +\t\tcase COMMANDTAG_ALTER_TABLE:\n> +\t\tcase COMMANDTAG_ALTER_TEXT_SEARCH_CONFIGURATION:\n> +\t\tcase COMMANDTAG_ALTER_TEXT_SEARCH_DICTIONARY:\n> +\t\tcase COMMANDTAG_ALTER_TEXT_SEARCH_PARSER:\n> +\t\tcase COMMANDTAG_ALTER_TEXT_SEARCH_TEMPLATE:\n> +\t\tcase COMMANDTAG_ALTER_TRANSFORM:\n> +\t\tcase COMMANDTAG_ALTER_TRIGGER:\n> +\t\tcase COMMANDTAG_ALTER_TYPE:\n> +\t\tcase COMMANDTAG_ALTER_USER_MAPPING:\n> +\t\tcase COMMANDTAG_ALTER_VIEW:\n>\n> -\t/*\n> -\t * ...and the object type should be something recognizable.\n> -\t */\n> -\tfor (etsd = event_trigger_support; etsd->obtypename != NULL; etsd++)\n> -\t\tif (pg_strcasecmp(etsd->obtypename, obtypename) == 0)\n> +\t\t\t/*\n> +\t\t\t * Supported DROP commands\n> +\t\t\t */\n> +\t\tcase 
COMMANDTAG_DROP_ACCESS_METHOD:\n> +\t\tcase COMMANDTAG_DROP_AGGREGATE:\n> +\t\tcase COMMANDTAG_DROP_CAST:\n> +\t\tcase COMMANDTAG_DROP_COLLATION:\n> +\t\tcase COMMANDTAG_DROP_CONSTRAINT:\n> +\t\tcase COMMANDTAG_DROP_CONVERSION:\n> +\t\tcase COMMANDTAG_DROP_DOMAIN:\n> +\t\tcase COMMANDTAG_DROP_EXTENSION:\n> +\t\tcase COMMANDTAG_DROP_FOREIGN_DATA_WRAPPER:\n> +\t\tcase COMMANDTAG_DROP_FOREIGN_TABLE:\n> +\t\tcase COMMANDTAG_DROP_FUNCTION:\n> +\t\tcase COMMANDTAG_DROP_INDEX:\n> +\t\tcase COMMANDTAG_DROP_LANGUAGE:\n> +\t\tcase COMMANDTAG_DROP_MATERIALIZED_VIEW:\n> +\t\tcase COMMANDTAG_DROP_OPERATOR:\n> +\t\tcase COMMANDTAG_DROP_OPERATOR_CLASS:\n> +\t\tcase COMMANDTAG_DROP_OPERATOR_FAMILY:\n> +\t\tcase COMMANDTAG_DROP_POLICY:\n> +\t\tcase COMMANDTAG_DROP_PROCEDURE:\n> +\t\tcase COMMANDTAG_DROP_PUBLICATION:\n> +\t\tcase COMMANDTAG_DROP_ROUTINE:\n> +\t\tcase COMMANDTAG_DROP_RULE:\n> +\t\tcase COMMANDTAG_DROP_SCHEMA:\n> +\t\tcase COMMANDTAG_DROP_SEQUENCE:\n> +\t\tcase COMMANDTAG_DROP_SERVER:\n> +\t\tcase COMMANDTAG_DROP_STATISTICS:\n> +\t\tcase COMMANDTAG_DROP_SUBSCRIPTION:\n> +\t\tcase COMMANDTAG_DROP_TABLE:\n> +\t\tcase COMMANDTAG_DROP_TEXT_SEARCH_CONFIGURATION:\n> +\t\tcase COMMANDTAG_DROP_TEXT_SEARCH_DICTIONARY:\n> +\t\tcase COMMANDTAG_DROP_TEXT_SEARCH_PARSER:\n> +\t\tcase COMMANDTAG_DROP_TEXT_SEARCH_TEMPLATE:\n> +\t\tcase COMMANDTAG_DROP_TRANSFORM:\n> +\t\tcase COMMANDTAG_DROP_TRIGGER:\n> +\t\tcase COMMANDTAG_DROP_TYPE:\n> +\t\tcase COMMANDTAG_DROP_USER_MAPPING:\n> +\t\tcase COMMANDTAG_DROP_VIEW:\n> +\t\t\treturn EVENT_TRIGGER_COMMAND_TAG_OK;\n> +\n> +\t\t\t/*\n> +\t\t\t * Unsupported CREATE commands\n> +\t\t\t */\n> +\t\tcase COMMANDTAG_CREATE_DATABASE:\n> +\t\tcase COMMANDTAG_CREATE_EVENT_TRIGGER:\n> +\t\tcase COMMANDTAG_CREATE_ROLE:\n> +\t\tcase COMMANDTAG_CREATE_TABLESPACE:\n> +\n> +\t\t\t/*\n> +\t\t\t * Unsupported ALTER commands\n> +\t\t\t */\n> +\t\tcase COMMANDTAG_ALTER_DATABASE:\n> +\t\tcase COMMANDTAG_ALTER_EVENT_TRIGGER:\n> +\t\tcase COMMANDTAG_ALTER_ROLE:\n> 
+\t\tcase COMMANDTAG_ALTER_TABLESPACE:\n> +\n> +\t\t\t/*\n> +\t\t\t * Unsupported DROP commands\n> +\t\t\t */\n> +\t\tcase COMMANDTAG_DROP_DATABASE:\n> +\t\tcase COMMANDTAG_DROP_EVENT_TRIGGER:\n> +\t\tcase COMMANDTAG_DROP_ROLE:\n> +\t\tcase COMMANDTAG_DROP_TABLESPACE:\n> +\n> +\t\t\t/*\n> +\t\t\t * Other unsupported commands. These used to return\n> +\t\t\t * EVENT_TRIGGER_COMMAND_TAG_NOT_RECOGNIZED prior to the\n> +\t\t\t * conversion of commandTag from string to enum.\n> +\t\t\t */\n> +\t\tcase COMMANDTAG_ALTER_SYSTEM:\n> +\t\tcase COMMANDTAG_ANALYZE:\n> +\t\tcase COMMANDTAG_BEGIN:\n> +\t\tcase COMMANDTAG_CALL:\n> +\t\tcase COMMANDTAG_CHECKPOINT:\n> +\t\tcase COMMANDTAG_CLOSE:\n> +\t\tcase COMMANDTAG_CLOSE_CURSOR:\n> +\t\tcase COMMANDTAG_CLOSE_CURSOR_ALL:\n> +\t\tcase COMMANDTAG_CLUSTER:\n> +\t\tcase COMMANDTAG_COMMIT:\n> +\t\tcase COMMANDTAG_COMMIT_PREPARED:\n> +\t\tcase COMMANDTAG_COPY:\n> +\t\tcase COMMANDTAG_COPY_FROM:\n> +\t\tcase COMMANDTAG_DEALLOCATE:\n> +\t\tcase COMMANDTAG_DEALLOCATE_ALL:\n> +\t\tcase COMMANDTAG_DECLARE_CURSOR:\n> +\t\tcase COMMANDTAG_DELETE:\n> +\t\tcase COMMANDTAG_DISCARD:\n> +\t\tcase COMMANDTAG_DISCARD_ALL:\n> +\t\tcase COMMANDTAG_DISCARD_PLANS:\n> +\t\tcase COMMANDTAG_DISCARD_SEQUENCES:\n> +\t\tcase COMMANDTAG_DISCARD_TEMP:\n> +\t\tcase COMMANDTAG_DO:\n> +\t\tcase COMMANDTAG_DROP_REPLICATION_SLOT:\n> +\t\tcase COMMANDTAG_EXECUTE:\n> +\t\tcase COMMANDTAG_EXPLAIN:\n> +\t\tcase COMMANDTAG_FETCH:\n> +\t\tcase COMMANDTAG_GRANT_ROLE:\n> +\t\tcase COMMANDTAG_INSERT:\n> +\t\tcase COMMANDTAG_LISTEN:\n> +\t\tcase COMMANDTAG_LOAD:\n> +\t\tcase COMMANDTAG_LOCK_TABLE:\n> +\t\tcase COMMANDTAG_MOVE:\n> +\t\tcase COMMANDTAG_NOTIFY:\n> +\t\tcase COMMANDTAG_PREPARE:\n> +\t\tcase COMMANDTAG_PREPARE_TRANSACTION:\n> +\t\tcase COMMANDTAG_REASSIGN_OWNED:\n> +\t\tcase COMMANDTAG_REINDEX:\n> +\t\tcase COMMANDTAG_RELEASE:\n> +\t\tcase COMMANDTAG_RESET:\n> +\t\tcase COMMANDTAG_REVOKE_ROLE:\n> +\t\tcase COMMANDTAG_ROLLBACK:\n> +\t\tcase 
COMMANDTAG_ROLLBACK_PREPARED:\n> +\t\tcase COMMANDTAG_SAVEPOINT:\n> +\t\tcase COMMANDTAG_SELECT:\n> +\t\tcase COMMANDTAG_SELECT_FOR_KEY_SHARE:\n> +\t\tcase COMMANDTAG_SELECT_FOR_NO_KEY_UPDATE:\n> +\t\tcase COMMANDTAG_SELECT_FOR_SHARE:\n> +\t\tcase COMMANDTAG_SELECT_FOR_UPDATE:\n> +\t\tcase COMMANDTAG_SET:\n> +\t\tcase COMMANDTAG_SET_CONSTRAINTS:\n> +\t\tcase COMMANDTAG_SHOW:\n> +\t\tcase COMMANDTAG_START_TRANSACTION:\n> +\t\tcase COMMANDTAG_TRUNCATE_TABLE:\n> +\t\tcase COMMANDTAG_UNLISTEN:\n> +\t\tcase COMMANDTAG_UPDATE:\n> +\t\tcase COMMANDTAG_VACUUM:\n> +\t\t\treturn EVENT_TRIGGER_COMMAND_TAG_NOT_SUPPORTED;\n> +\t\tcase COMMANDTAG_UNKNOWN:\n> \t\t\tbreak;\n> -\tif (etsd->obtypename == NULL)\n> -\t\treturn EVENT_TRIGGER_COMMAND_TAG_NOT_RECOGNIZED;\n> -\tif (!etsd->supported)\n> -\t\treturn EVENT_TRIGGER_COMMAND_TAG_NOT_SUPPORTED;\n> -\treturn EVENT_TRIGGER_COMMAND_TAG_OK;\n> +\t}\n> +\treturn EVENT_TRIGGER_COMMAND_TAG_NOT_RECOGNIZED;\n> }\n\nThis is pretty painful.\n\n\n> @@ -745,7 +902,7 @@ EventTriggerCommonSetup(Node *parsetree,\n> \t\treturn NIL;\n>\n> \t/* Get the command tag. 
*/\n> -\ttag = CreateCommandTag(parsetree);\n> +\ttag = GetCommandTagName(CreateCommandTag(parsetree));\n>\n> \t/*\n> \t * Filter list of event triggers by command tag, and copy them into our\n> @@ -2136,7 +2293,7 @@ pg_event_trigger_ddl_commands(PG_FUNCTION_ARGS)\n> \t\t\t\t\t/* objsubid */\n> \t\t\t\t\tvalues[i++] = Int32GetDatum(addr.objectSubId);\n> \t\t\t\t\t/* command tag */\n> -\t\t\t\t\tvalues[i++] = CStringGetTextDatum(CreateCommandTag(cmd->parsetree));\n> +\t\t\t\t\tvalues[i++] = CStringGetTextDatum(GetCommandTagName(CreateCommandTag(cmd->parsetree)));\n> \t\t\t\t\t/* object_type */\n> \t\t\t\t\tvalues[i++] = CStringGetTextDatum(type);\n> \t\t\t\t\t/* schema */\n> @@ -2161,7 +2318,7 @@ pg_event_trigger_ddl_commands(PG_FUNCTION_ARGS)\n> \t\t\t\t/* objsubid */\n> \t\t\t\tnulls[i++] = true;\n> \t\t\t\t/* command tag */\n> -\t\t\t\tvalues[i++] = CStringGetTextDatum(CreateCommandTag(cmd->parsetree));\n> +\t\t\t\tvalues[i++] = CStringGetTextDatum(GetCommandTagName(CreateCommandTag(cmd->parsetree)));\n> \t\t\t\t/* object_type */\n> \t\t\t\tvalues[i++] = CStringGetTextDatum(stringify_adefprivs_objtype(cmd->d.defprivs.objtype));\n> \t\t\t\t/* schema */\n\nSo GetCommandTagName we commonly do twice for some reason? Once in\nEventTriggerCommonSetup() and then again in\npg_event_trigger_ddl_commands()? 
Why is EventTriggerData.tag still the\nstring?\n\n> \tAssert(list_length(plan->plancache_list) == 1);\n> @@ -1469,7 +1469,7 @@ SPI_cursor_open_internal(const char *name, SPIPlanPtr plan,\n> \t\t\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> \t\t\t\t/* translator: %s is a SQL statement name */\n> \t\t\t\t\t\t errmsg(\"%s is not allowed in a non-volatile function\",\n> -\t\t\t\t\t\t\t\tCreateCommandTag((Node *) pstmt))));\n> +\t\t\t\t\t\t\t\tGetCommandTagName(CreateCommandTag((Node *) pstmt)))));\n\nProbably worth having a wrapper for this - these lines are pretty long,\nand there quite a number of cases like it in the patch.\n\n> @@ -172,11 +175,38 @@ EndCommand(const char *commandTag, CommandDest dest)\n> \t\tcase DestRemoteSimple:\n>\n> \t\t\t/*\n> -\t\t\t * We assume the commandTag is plain ASCII and therefore requires\n> -\t\t\t * no encoding conversion.\n> +\t\t\t * We assume the tagname is plain ASCII and therefore\n> +\t\t\t * requires no encoding conversion.\n> \t\t\t */\n> -\t\t\tpq_putmessage('C', commandTag, strlen(commandTag) + 1);\n> -\t\t\tbreak;\n> +\t\t\ttagname = GetCommandTagName(qc->commandTag);\n> +\t\t\tswitch (qc->display_format)\n> +\t\t\t{\n> +\t\t\t\tcase DISPLAYFORMAT_PLAIN:\n> +\t\t\t\t\tpq_putmessage('C', tagname, strlen(tagname) + 1);\n> +\t\t\t\t\tbreak;\n> +\t\t\t\tcase DISPLAYFORMAT_LAST_OID:\n> +\t\t\t\t\t/*\n> +\t\t\t\t\t * We no longer display LastOid, but to preserve the wire protocol,\n> +\t\t\t\t\t * we write InvalidOid where the LastOid used to be written. 
For\n> +\t\t\t\t\t * efficiency in the snprintf(), hard-code InvalidOid as zero.\n> +\t\t\t\t\t */\n> +\t\t\t\t\tAssert(InvalidOid == 0);\n> +\t\t\t\t\tsnprintf(completionTag, COMPLETION_TAG_BUFSIZE,\n> +\t\t\t\t\t\t\t\t\"%s 0 \" UINT64_FORMAT,\n> +\t\t\t\t\t\t\t\ttagname,\n> +\t\t\t\t\t\t\t\tqc->nprocessed);\n> +\t\t\t\t\tpq_putmessage('C', completionTag, strlen(completionTag) + 1);\n> +\t\t\t\t\tbreak;\n> +\t\t\t\tcase DISPLAYFORMAT_NPROCESSED:\n> +\t\t\t\t\tsnprintf(completionTag, COMPLETION_TAG_BUFSIZE,\n> +\t\t\t\t\t\t\t\"%s \" UINT64_FORMAT,\n> +\t\t\t\t\t\t\ttagname,\n> +\t\t\t\t\t\t\tqc->nprocessed);\n> +\t\t\t\t\tpq_putmessage('C', completionTag, strlen(completionTag) + 1);\n> +\t\t\t\t\tbreak;\n> +\t\t\t\tdefault:\n> +\t\t\t\t\telog(ERROR, \"Invalid display_format in EndCommand\");\n> +\t\t\t}\n\nImo there should only be one pq_putmessage(). Also think this type of\ndefault: is a bad idea, just prevents the compiler from warning if we\nwere to ever introduce a new variant of DISPLAYFORMAT_*, without\nproviding any meaningful additional security.\n\n> @@ -855,7 +889,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>\n> \t\tcase T_DiscardStmt:\n> \t\t\t/* should we allow DISCARD PLANS? 
*/\n> -\t\t\tCheckRestrictedOperation(\"DISCARD\");\n> +\t\t\tCheckRestrictedOperation(COMMANDTAG_DISCARD);\n> \t\t\tDiscardCommand((DiscardStmt *) parsetree, isTopLevel);\n> \t\t\tbreak;\n>\n> @@ -974,7 +1008,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objtype))\n> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n> +\t\t\t\t\t\t\t\t\t dest, qc);\n> \t\t\t\telse\n> \t\t\t\t\tExecuteGrantStmt(stmt);\n> \t\t\t}\n> @@ -987,7 +1021,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->removeType))\n> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n> +\t\t\t\t\t\t\t\t\t dest, qc);\n> \t\t\t\telse\n> \t\t\t\t\tExecDropStmt(stmt, isTopLevel);\n> \t\t\t}\n> @@ -1000,7 +1034,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->renameType))\n> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n> +\t\t\t\t\t\t\t\t\t dest, qc);\n> \t\t\t\telse\n> \t\t\t\t\tExecRenameStmt(stmt);\n> \t\t\t}\n> @@ -1013,7 +1047,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objectType))\n> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n> +\t\t\t\t\t\t\t\t\t dest, qc);\n> \t\t\t\telse\n> \t\t\t\t\tExecAlterObjectDependsStmt(stmt, NULL);\n> \t\t\t}\n> @@ -1026,7 +1060,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objectType))\n> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n> 
-\t\t\t\t\t\t\t\t\t dest, completionTag);\n> +\t\t\t\t\t\t\t\t\t dest, qc);\n> \t\t\t\telse\n> \t\t\t\t\tExecAlterObjectSchemaStmt(stmt, NULL);\n> \t\t\t}\n> @@ -1039,7 +1073,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objectType))\n> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n> +\t\t\t\t\t\t\t\t\t dest, qc);\n> \t\t\t\telse\n> \t\t\t\t\tExecAlterOwnerStmt(stmt);\n> \t\t\t}\n> @@ -1052,7 +1086,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objtype))\n> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n> +\t\t\t\t\t\t\t\t\t dest, qc);\n> \t\t\t\telse\n> \t\t\t\t\tCommentObject(stmt);\n> \t\t\t\tbreak;\n> @@ -1065,7 +1099,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objtype))\n> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n> +\t\t\t\t\t\t\t\t\t dest, qc);\n\nNot this patch's fault or task. But I hate this type of code - needing\nto touch a dozen places for new type of statement is just\ninsane. utility.c should long have been rewritten to just have one\nmetadata table for nearly all of this. Perhaps with a few callbacks for\nspecial cases.\n\n\n> +static const char * tag_names[] = {\n> +\t\"???\",\n> +\t\"ALTER ACCESS METHOD\",\n> +\t\"ALTER AGGREGATE\",\n> +\t\"ALTER CAST\",\n\nThis seems problematic to maintain, because the order needs to match\nbetween this and something defined in a header - and there's no\nguarantee a misordering is immediately noticeable. 
We should either go\nfor my metadata table idea, or at least rewrite this, even if more\nverbose, to something like\n\nstatic const char * tag_names[] = {\n [COMMAND_TAG_ALTER_ACCESS_METHOD] = \"ALTER ACCESS METHOD\",\n ...\n\nI think the fact that this would show up in a grep for\nCOMMAND_TAG_ALTER_ACCESS_METHOD is good too.\n\n\n\n> +/*\n> + * Search CommandTag by name\n> + *\n> + * Returns CommandTag, or COMMANDTAG_UNKNOWN if not recognized\n> + */\n> +CommandTag\n> +GetCommandTagEnum(const char *commandname)\n> +{\n> +\tconst char **base, **last, **position;\n> +\tint\t\t result;\n> +\n> +\tOPTIONALLY_CHECK_COMMAND_TAGS();\n> +\tif (commandname == NULL || *commandname == '\\0')\n> +\t\treturn COMMANDTAG_UNKNOWN;\n> +\n> +\tbase = tag_names;\n> +\tlast = tag_names + tag_name_length - 1;\n> +\twhile (last >= base)\n> +\t{\n> +\t\tposition = base + ((last - base) >> 1);\n> +\t\tresult = pg_strcasecmp(commandname, *position);\n> +\t\tif (result == 0)\n> +\t\t\treturn (CommandTag) (position - tag_names);\n> +\t\telse if (result < 0)\n> +\t\t\tlast = position - 1;\n> +\t\telse\n> +\t\t\tbase = position + 1;\n> +\t}\n> +\treturn COMMANDTAG_UNKNOWN;\n> +}\n\nThis seems pretty grotty - but you get rid of it later. 
See my comments there.\n\n\n\n> +#ifdef COMMANDTAG_CHECKING\n> +bool\n> +CheckCommandTagEnum()\n> +{\n> +\tCommandTag\ti, j;\n> +\n> +\tif (FIRST_COMMAND_TAG < 0 || LAST_COMMAND_TAG < 0 || LAST_COMMAND_TAG < FIRST_COMMAND_TAG)\n> +\t{\n> +\t\telog(ERROR, \"FIRST_COMMAND_TAG (%u), LAST_COMMAND_TAG (%u) not reasonable\",\n> +\t\t\t (unsigned int) FIRST_COMMAND_TAG, (unsigned int) LAST_COMMAND_TAG);\n> +\t\treturn false;\n> +\t}\n> +\tif (FIRST_COMMAND_TAG != (CommandTag)0)\n> +\t{\n> +\t\telog(ERROR, \"FIRST_COMMAND_TAG (%u) != 0\", (unsigned int) FIRST_COMMAND_TAG);\n> +\t\treturn false;\n> +\t}\n> +\tif (LAST_COMMAND_TAG != (CommandTag)(tag_name_length - 1))\n> +\t{\n> +\t\telog(ERROR, \"LAST_COMMAND_TAG (%u) != tag_name_length (%u)\",\n> +\t\t\t (unsigned int) LAST_COMMAND_TAG, (unsigned int) tag_name_length);\n> +\t\treturn false;\n> +\t}\n\nThese all seem to want to be static asserts.\n\n\n> +\tfor (i = FIRST_COMMAND_TAG; i < LAST_COMMAND_TAG; i++)\n> +\t{\n> +\t\tfor (j = i+1; j < LAST_COMMAND_TAG; j++)\n> +\t\t{\n> +\t\t\tint cmp = strcmp(tag_names[i], tag_names[j]);\n> +\t\t\tif (cmp == 0)\n> +\t\t\t{\n> +\t\t\t\telog(ERROR, \"Found duplicate tag_name: \\\"%s\\\"\",\n> +\t\t\t\t\ttag_names[i]);\n> +\t\t\t\treturn false;\n> +\t\t\t}\n> +\t\t\tif (cmp > 0)\n> +\t\t\t{\n> +\t\t\t\telog(ERROR, \"Found commandnames out of order: \\\"%s\\\" before \\\"%s\\\"\",\n> +\t\t\t\t\ttag_names[i], tag_names[j]);\n> +\t\t\t\treturn false;\n> +\t\t\t}\n> +\t\t}\n> +\t}\n> +\treturn true;\n> +}\n\nAnd I think we could get rid of this with my earlier suggestions?\n\n\n> +/*\n> + * BEWARE: These are in sorted order, but ordered by their printed\n> + * values in the tag_name list (see common/commandtag.c).\n> + * In particular it matters because the sort ordering changes\n> + * when you replace a space with an underscore. 
To wit:\n> + *\n> + * \"CREATE TABLE\"\n> + * \"CREATE TABLE AS\"\n> + * \"CREATE TABLESPACE\"\n> + *\n> + * but...\n> + *\n> + * CREATE_TABLE\n> + * CREATE_TABLESPACE\n> + * CREATE_TABLE_AS\n> + *\n> + * It also matters that COMMANDTAG_UNKNOWN is written \"???\".\n> + *\n> + * If you add a value here, add it in common/commandtag.c also, and\n> + * be careful to get the ordering right. You can build with\n> + * COMMANDTAG_CHECKING to have this automatically checked\n> + * at runtime, but that adds considerable overhead, so do so sparingly.\n> + */\n> +typedef enum CommandTag\n> +{\n\nThis seems pretty darn nightmarish.\n\n\n> +#define FIRST_COMMAND_TAG COMMANDTAG_UNKNOWN\n> +\tCOMMANDTAG_UNKNOWN,\n> +\tCOMMANDTAG_ALTER_ACCESS_METHOD,\n> +\tCOMMANDTAG_ALTER_AGGREGATE,\n> +\tCOMMANDTAG_ALTER_CAST,\n> +\tCOMMANDTAG_ALTER_COLLATION,\n> +\tCOMMANDTAG_ALTER_CONSTRAINT,\n> +\tCOMMANDTAG_ALTER_CONVERSION,\n> +\tCOMMANDTAG_ALTER_DATABASE,\n> +\tCOMMANDTAG_ALTER_DEFAULT_PRIVILEGES,\n> +\tCOMMANDTAG_ALTER_DOMAIN,\n> [...]\n\nI'm a bit worried that this basically duplicates a good portion of NodeTag, without having otherwise much of a point?\n\n\n> From a70b0cadc1142e92b2354a0ca3cd47aaeb0c148e Mon Sep 17 00:00:00 2001\n> From: Mark Dilger <mark.dilger@enterprisedb.com>\n> Date: Tue, 4 Feb 2020 12:25:05 -0800\n> Subject: [PATCH v2 2/3] Using a Bitmapset of tags rather than a string array.\n> MIME-Version: 1.0\n> Content-Type: text/plain; charset=UTF-8\n> Content-Transfer-Encoding: 8bit\n>\n> EventTriggerCacheItem no longer holds an array of palloc’d tag strings\n> in sorted order, but rather just a Bitmapset over the CommandTags. This\n> makes the code a little simpler and easier to read, in my opinion. In\n> filter_event_trigger, rather than running bsearch through a sorted array\n> of strings, it just runs bms_is_member.\n> ---\n\nIt seems weird to add the bsearch just to remove it immediately again a\npatch later. 
This probably should just go first?\n\n\n\n\n> diff --git a/src/test/regress/sql/event_trigger.sql b/src/test/regress/sql/event_trigger.sql\n> index 346168673d..cad02212ad 100644\n> --- a/src/test/regress/sql/event_trigger.sql\n> +++ b/src/test/regress/sql/event_trigger.sql\n> @@ -10,6 +10,13 @@ BEGIN\n> END\n> $$ language plpgsql;\n>\n> +-- OK\n> +create function test_event_trigger2() returns event_trigger as $$\n> +BEGIN\n> +\tRAISE NOTICE 'test_event_trigger2: % %', tg_event, tg_tag;\n> +END\n> +$$ LANGUAGE plpgsql;\n> +\n> -- should fail, event triggers cannot have declared arguments\n> create function test_event_trigger_arg(name text)\n> returns event_trigger as $$ BEGIN RETURN 1; END $$ language plpgsql;\n> @@ -82,6 +89,783 @@ create event trigger regress_event_trigger2 on ddl_command_start\n> -- OK\n> comment on event trigger regress_event_trigger is 'test comment';\n>\n> +-- These are all unsupported\n> +create event trigger regress_event_triger_NULL on ddl_command_start\n> + when tag in ('')\n> + execute procedure test_event_trigger2();\n> +\n> +create event trigger regress_event_triger_UNKNOWN on ddl_command_start\n> + when tag in ('???')\n> + execute procedure test_event_trigger2();\n> +\n> +create event trigger regress_event_trigger_ALTER_DATABASE on ddl_command_start\n> + when tag in ('ALTER DATABASE')\n> + execute procedure test_event_trigger2();\n[...]\n\nThere got to be a more maintainable way to write this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 4 Feb 2020 19:34:08 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "\n\n> On Feb 4, 2020, at 7:34 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n\nThanks for reviewing! 
I am pretty much in agreement with your comments, below.\n\n> On 2020-02-04 18:18:52 -0800, Mark Dilger wrote:\n>> In master, a number of functions pass a char *completionTag argument (really a char completionTag[COMPLETION_TAG_BUFSIZE]) which gets filled in with the string to return to the client from EndCommand. I have removed that kind of logic:\n>> \n>> - /* save the rowcount if we're given a completionTag to fill */\n>> - if (completionTag)\n>> - snprintf(completionTag, COMPLETION_TAG_BUFSIZE,\n>> - \"SELECT \" UINT64_FORMAT,\n>> - queryDesc->estate->es_processed);\n>> \n>> In the patch, this is replaced with a new struct, QueryCompletionData. That bit of code above is replaced with:\n>> \n>> + /* save the rowcount if we're given a qc to fill */\n>> + if (qc)\n>> + SetQC(qc, COMMANDTAG_SELECT, queryDesc->estate->es_processed, DISPLAYFORMAT_NPROCESSED);\n>> \n>> For wire protocol compatibility, we have to track the display format.\n>> When this gets to EndCommand, the display format allows the string to\n>> be written exactly as the client will expect. If we ever get to the\n>> point where we can break with that compatibility, the third member of\n>> this struct, display_format, can be removed.\n> \n> Hm. While I like not having this as strings a lot, I wish we could get\n> rid of this displayformat stuff.\n\nAgreed, but I don’t know how. 
\n\n>> These are replaced by switch() case statements over the possible commandTags:\n>> \n>> + switch (commandTag)\n>> + {\n>> + /*\n>> + * Supported idiosyncratic special cases.\n>> + */\n>> + case COMMANDTAG_ALTER_DEFAULT_PRIVILEGES:\n>> + case COMMANDTAG_ALTER_LARGE_OBJECT:\n>> + case COMMANDTAG_COMMENT:\n>> + case COMMANDTAG_CREATE_TABLE_AS:\n>> + case COMMANDTAG_DROP_OWNED:\n>> + case COMMANDTAG_GRANT:\n>> + case COMMANDTAG_IMPORT_FOREIGN_SCHEMA:\n>> + case COMMANDTAG_REFRESH_MATERIALIZED_VIEW:\n>> + case COMMANDTAG_REVOKE:\n>> + case COMMANDTAG_SECURITY_LABEL:\n>> + case COMMANDTAG_SELECT_INTO:\n> \n> The number of these makes me wonder if we should just have a metadata\n> table in one place, instead of needing to edit multiple\n> locations. Something like\n> \n> const ... CommandTagBehaviour[] = {\n> [COMMANDTAG_INSERT] = {\n> .display_processed = true, .display_oid = true, ...},\n> [COMMANDTAG_CREATE_TABLE_AS] = {\n> .event_trigger = true, ...},\n> \n> with the zero initialized defaults being the common cases.\n> \n> Not sure if it's worth going there. But it's maybe worth thinking about\n> for a minute?\n\nYes, I was thinking about something like this, only I had in mind a Bitmapset for these. It just so happens that there are 192 enum values, 0..191, which happens to fit in 3 64bit words plus a varlena header. Mind you, that nice property would be immediately blown if we added another entry to the enum. Has anybody made a compile-time static version of Bitmapset? We could store this information in either 24 bytes or 32 bytes, depending on whether we add another enum value.\n\nGetting a little off topic, I was also thinking about having a counting Bitmapset that would store one bit per enum that is included, and then a sparse array of counts, perhaps with one byte counts for [0..127] and 8 byte counts for [128..huge] that we could use in shared memory for the pg_stat_tag work. 
Is there anything like that?\n\nAnyway, I don’t think we should invent lots of different structures for CommandTag tracking, so something that serves double duty might keep the code tighter. I’m already using ordinary Bitmapset over CommandTags in event_trigger, so naturally that comes to mind for this, too.\n\n\n>> Averages for test set 1 by scale:\n>> set\tscale\ttps\tavg_latency\t90%<\tmax_latency\n>> 1\t1\t3741\t1.734\t3.162\t133.718\n>> 1\t9\t6124\t0.904\t1.05\t230.547\n>> 1\t81\t5921\t0.931\t1.015\t67.023\n>> \n>> Averages for test set 1 by clients:\n>> set\tclients\ttps\tavg_latency\t90%<\tmax_latency\n>> 1\t1\t2163\t0.461\t0.514\t24.414\n>> 1\t4\t5968\t0.675\t0.791\t40.354\n>> 1\t16\t7655\t2.433\t3.922\t366.519\n>> \n>> \n>> For command tag patch (branched from 1fd687a035):\n>> \n>> \tpostgresql % find src -type f -name \"*.c\" -or -name \"*.h\" | xargs cat | wc\n>> \t 1482969 5691908 45281399\n>> \n>> \tpostgresql % find src -type f -name \"*.o\" | xargs cat | wc\n>> \t 38209 476243 18999752\n>> \n>> \n>> Averages for test set 1 by scale:\n>> set\tscale\ttps\tavg_latency\t90%<\tmax_latency\n>> 1\t1\t3877\t1.645\t3.066\t24.973\n>> 1\t9\t6383\t0.859\t1.032\t64.566\n>> 1\t81\t5945\t0.925\t1.023\t162.9\n>> \n>> Averages for test set 1 by clients:\n>> set\tclients\ttps\tavg_latency\t90%<\tmax_latency\n>> 1\t1\t2141\t0.466\t0.522\t11.531\n>> 1\t4\t5967\t0.673\t0.783\t136.882\n>> 1\t16\t8096\t2.292\t3.817\t104.026\n> \n> Not bad.\n\nI still need to get a benchmark more targeted at this codepath.\n\n>> diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c\n>> index 9aa2b61600..5322c14ce4 100644\n>> --- a/src/backend/commands/async.c\n>> +++ b/src/backend/commands/async.c\n>> @@ -594,7 +594,7 @@ pg_notify(PG_FUNCTION_ARGS)\n>> \t\tpayload = text_to_cstring(PG_GETARG_TEXT_PP(1));\n>> \n>> \t/* For NOTIFY as a statement, this is checked in ProcessUtility */\n>> -\tPreventCommandDuringRecovery(\"NOTIFY\");\n>> 
+\tPreventCommandDuringRecovery(COMMANDTAG_NOTIFY);\n>> \n>> \tAsync_Notify(channel, payload);\n>> \n>> diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c\n>> index 40a8ec1abd..4828e75bd5 100644\n>> --- a/src/backend/commands/copy.c\n>> +++ b/src/backend/commands/copy.c\n>> @@ -1063,7 +1063,7 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n>> \n>> \t\t/* check read-only transaction and parallel mode */\n>> \t\tif (XactReadOnly && !rel->rd_islocaltemp)\n>> -\t\t\tPreventCommandIfReadOnly(\"COPY FROM\");\n>> +\t\t\tPreventCommandIfReadOnly(COMMANDTAG_COPY_FROM);\n>> \n>> \t\tcstate = BeginCopyFrom(pstate, rel, stmt->filename, stmt->is_program,\n>> \t\t\t\t\t\t\t NULL, stmt->attlist, stmt->options);\n> \n> I'm not sure this really ought to be part of this change - seems like a\n> somewhat independent change to me. With less obvious benefits.\n\nI don’t think I care too much either way. I had some vague ideas about consolidating all of these strings in the backend into one place. 
\n\n>> static event_trigger_command_tag_check_result\n>> -check_ddl_tag(const char *tag)\n>> +check_ddl_tag(CommandTag commandTag)\n>> {\n>> -\tconst char *obtypename;\n>> -\tconst event_trigger_support_data *etsd;\n>> +\tswitch (commandTag)\n>> +\t{\n>> +\t\t\t/*\n>> +\t\t\t * Supported idiosyncratic special cases.\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_ALTER_DEFAULT_PRIVILEGES:\n>> +\t\tcase COMMANDTAG_ALTER_LARGE_OBJECT:\n>> +\t\tcase COMMANDTAG_COMMENT:\n>> +\t\tcase COMMANDTAG_CREATE_TABLE_AS:\n>> +\t\tcase COMMANDTAG_DROP_OWNED:\n>> +\t\tcase COMMANDTAG_GRANT:\n>> +\t\tcase COMMANDTAG_IMPORT_FOREIGN_SCHEMA:\n>> +\t\tcase COMMANDTAG_REFRESH_MATERIALIZED_VIEW:\n>> +\t\tcase COMMANDTAG_REVOKE:\n>> +\t\tcase COMMANDTAG_SECURITY_LABEL:\n>> +\t\tcase COMMANDTAG_SELECT_INTO:\n>> \n>> -\t/*\n>> -\t * Handle some idiosyncratic special cases.\n>> -\t */\n>> -\tif (pg_strcasecmp(tag, \"CREATE TABLE AS\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"SELECT INTO\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"REFRESH MATERIALIZED VIEW\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"ALTER DEFAULT PRIVILEGES\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"ALTER LARGE OBJECT\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"COMMENT\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"GRANT\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"REVOKE\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"DROP OWNED\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"IMPORT FOREIGN SCHEMA\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"SECURITY LABEL\") == 0)\n>> -\t\treturn EVENT_TRIGGER_COMMAND_TAG_OK;\n>> +\t\t\t/*\n>> +\t\t\t * Supported CREATE commands\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_CREATE_ACCESS_METHOD:\n>> +\t\tcase COMMANDTAG_CREATE_AGGREGATE:\n>> +\t\tcase COMMANDTAG_CREATE_CAST:\n>> +\t\tcase COMMANDTAG_CREATE_COLLATION:\n>> +\t\tcase COMMANDTAG_CREATE_CONSTRAINT:\n>> +\t\tcase COMMANDTAG_CREATE_CONVERSION:\n>> +\t\tcase COMMANDTAG_CREATE_DOMAIN:\n>> +\t\tcase COMMANDTAG_CREATE_EXTENSION:\n>> +\t\tcase COMMANDTAG_CREATE_FOREIGN_DATA_WRAPPER:\n>> 
+\t\tcase COMMANDTAG_CREATE_FOREIGN_TABLE:\n>> +\t\tcase COMMANDTAG_CREATE_FUNCTION:\n>> +\t\tcase COMMANDTAG_CREATE_INDEX:\n>> +\t\tcase COMMANDTAG_CREATE_LANGUAGE:\n>> +\t\tcase COMMANDTAG_CREATE_MATERIALIZED_VIEW:\n>> +\t\tcase COMMANDTAG_CREATE_OPERATOR:\n>> +\t\tcase COMMANDTAG_CREATE_OPERATOR_CLASS:\n>> +\t\tcase COMMANDTAG_CREATE_OPERATOR_FAMILY:\n>> +\t\tcase COMMANDTAG_CREATE_POLICY:\n>> +\t\tcase COMMANDTAG_CREATE_PROCEDURE:\n>> +\t\tcase COMMANDTAG_CREATE_PUBLICATION:\n>> +\t\tcase COMMANDTAG_CREATE_ROUTINE:\n>> +\t\tcase COMMANDTAG_CREATE_RULE:\n>> +\t\tcase COMMANDTAG_CREATE_SCHEMA:\n>> +\t\tcase COMMANDTAG_CREATE_SEQUENCE:\n>> +\t\tcase COMMANDTAG_CREATE_SERVER:\n>> +\t\tcase COMMANDTAG_CREATE_STATISTICS:\n>> +\t\tcase COMMANDTAG_CREATE_SUBSCRIPTION:\n>> +\t\tcase COMMANDTAG_CREATE_TABLE:\n>> +\t\tcase COMMANDTAG_CREATE_TEXT_SEARCH_CONFIGURATION:\n>> +\t\tcase COMMANDTAG_CREATE_TEXT_SEARCH_DICTIONARY:\n>> +\t\tcase COMMANDTAG_CREATE_TEXT_SEARCH_PARSER:\n>> +\t\tcase COMMANDTAG_CREATE_TEXT_SEARCH_TEMPLATE:\n>> +\t\tcase COMMANDTAG_CREATE_TRANSFORM:\n>> +\t\tcase COMMANDTAG_CREATE_TRIGGER:\n>> +\t\tcase COMMANDTAG_CREATE_TYPE:\n>> +\t\tcase COMMANDTAG_CREATE_USER_MAPPING:\n>> +\t\tcase COMMANDTAG_CREATE_VIEW:\n>> \n>> -\t/*\n>> -\t * Otherwise, command should be CREATE, ALTER, or DROP.\n>> -\t */\n>> -\tif (pg_strncasecmp(tag, \"CREATE \", 7) == 0)\n>> -\t\tobtypename = tag + 7;\n>> -\telse if (pg_strncasecmp(tag, \"ALTER \", 6) == 0)\n>> -\t\tobtypename = tag + 6;\n>> -\telse if (pg_strncasecmp(tag, \"DROP \", 5) == 0)\n>> -\t\tobtypename = tag + 5;\n>> -\telse\n>> -\t\treturn EVENT_TRIGGER_COMMAND_TAG_NOT_RECOGNIZED;\n>> +\t\t\t/*\n>> +\t\t\t * Supported ALTER commands\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_ALTER_ACCESS_METHOD:\n>> +\t\tcase COMMANDTAG_ALTER_AGGREGATE:\n>> +\t\tcase COMMANDTAG_ALTER_CAST:\n>> +\t\tcase COMMANDTAG_ALTER_COLLATION:\n>> +\t\tcase COMMANDTAG_ALTER_CONSTRAINT:\n>> +\t\tcase COMMANDTAG_ALTER_CONVERSION:\n>> +\t\tcase 
COMMANDTAG_ALTER_DOMAIN:\n>> +\t\tcase COMMANDTAG_ALTER_EXTENSION:\n>> +\t\tcase COMMANDTAG_ALTER_FOREIGN_DATA_WRAPPER:\n>> +\t\tcase COMMANDTAG_ALTER_FOREIGN_TABLE:\n>> +\t\tcase COMMANDTAG_ALTER_FUNCTION:\n>> +\t\tcase COMMANDTAG_ALTER_INDEX:\n>> +\t\tcase COMMANDTAG_ALTER_LANGUAGE:\n>> +\t\tcase COMMANDTAG_ALTER_MATERIALIZED_VIEW:\n>> +\t\tcase COMMANDTAG_ALTER_OPERATOR:\n>> +\t\tcase COMMANDTAG_ALTER_OPERATOR_CLASS:\n>> +\t\tcase COMMANDTAG_ALTER_OPERATOR_FAMILY:\n>> +\t\tcase COMMANDTAG_ALTER_POLICY:\n>> +\t\tcase COMMANDTAG_ALTER_PROCEDURE:\n>> +\t\tcase COMMANDTAG_ALTER_PUBLICATION:\n>> +\t\tcase COMMANDTAG_ALTER_ROUTINE:\n>> +\t\tcase COMMANDTAG_ALTER_RULE:\n>> +\t\tcase COMMANDTAG_ALTER_SCHEMA:\n>> +\t\tcase COMMANDTAG_ALTER_SEQUENCE:\n>> +\t\tcase COMMANDTAG_ALTER_SERVER:\n>> +\t\tcase COMMANDTAG_ALTER_STATISTICS:\n>> +\t\tcase COMMANDTAG_ALTER_SUBSCRIPTION:\n>> +\t\tcase COMMANDTAG_ALTER_TABLE:\n>> +\t\tcase COMMANDTAG_ALTER_TEXT_SEARCH_CONFIGURATION:\n>> +\t\tcase COMMANDTAG_ALTER_TEXT_SEARCH_DICTIONARY:\n>> +\t\tcase COMMANDTAG_ALTER_TEXT_SEARCH_PARSER:\n>> +\t\tcase COMMANDTAG_ALTER_TEXT_SEARCH_TEMPLATE:\n>> +\t\tcase COMMANDTAG_ALTER_TRANSFORM:\n>> +\t\tcase COMMANDTAG_ALTER_TRIGGER:\n>> +\t\tcase COMMANDTAG_ALTER_TYPE:\n>> +\t\tcase COMMANDTAG_ALTER_USER_MAPPING:\n>> +\t\tcase COMMANDTAG_ALTER_VIEW:\n>> \n>> -\t/*\n>> -\t * ...and the object type should be something recognizable.\n>> -\t */\n>> -\tfor (etsd = event_trigger_support; etsd->obtypename != NULL; etsd++)\n>> -\t\tif (pg_strcasecmp(etsd->obtypename, obtypename) == 0)\n>> +\t\t\t/*\n>> +\t\t\t * Supported DROP commands\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_DROP_ACCESS_METHOD:\n>> +\t\tcase COMMANDTAG_DROP_AGGREGATE:\n>> +\t\tcase COMMANDTAG_DROP_CAST:\n>> +\t\tcase COMMANDTAG_DROP_COLLATION:\n>> +\t\tcase COMMANDTAG_DROP_CONSTRAINT:\n>> +\t\tcase COMMANDTAG_DROP_CONVERSION:\n>> +\t\tcase COMMANDTAG_DROP_DOMAIN:\n>> +\t\tcase COMMANDTAG_DROP_EXTENSION:\n>> +\t\tcase 
COMMANDTAG_DROP_FOREIGN_DATA_WRAPPER:\n>> +\t\tcase COMMANDTAG_DROP_FOREIGN_TABLE:\n>> +\t\tcase COMMANDTAG_DROP_FUNCTION:\n>> +\t\tcase COMMANDTAG_DROP_INDEX:\n>> +\t\tcase COMMANDTAG_DROP_LANGUAGE:\n>> +\t\tcase COMMANDTAG_DROP_MATERIALIZED_VIEW:\n>> +\t\tcase COMMANDTAG_DROP_OPERATOR:\n>> +\t\tcase COMMANDTAG_DROP_OPERATOR_CLASS:\n>> +\t\tcase COMMANDTAG_DROP_OPERATOR_FAMILY:\n>> +\t\tcase COMMANDTAG_DROP_POLICY:\n>> +\t\tcase COMMANDTAG_DROP_PROCEDURE:\n>> +\t\tcase COMMANDTAG_DROP_PUBLICATION:\n>> +\t\tcase COMMANDTAG_DROP_ROUTINE:\n>> +\t\tcase COMMANDTAG_DROP_RULE:\n>> +\t\tcase COMMANDTAG_DROP_SCHEMA:\n>> +\t\tcase COMMANDTAG_DROP_SEQUENCE:\n>> +\t\tcase COMMANDTAG_DROP_SERVER:\n>> +\t\tcase COMMANDTAG_DROP_STATISTICS:\n>> +\t\tcase COMMANDTAG_DROP_SUBSCRIPTION:\n>> +\t\tcase COMMANDTAG_DROP_TABLE:\n>> +\t\tcase COMMANDTAG_DROP_TEXT_SEARCH_CONFIGURATION:\n>> +\t\tcase COMMANDTAG_DROP_TEXT_SEARCH_DICTIONARY:\n>> +\t\tcase COMMANDTAG_DROP_TEXT_SEARCH_PARSER:\n>> +\t\tcase COMMANDTAG_DROP_TEXT_SEARCH_TEMPLATE:\n>> +\t\tcase COMMANDTAG_DROP_TRANSFORM:\n>> +\t\tcase COMMANDTAG_DROP_TRIGGER:\n>> +\t\tcase COMMANDTAG_DROP_TYPE:\n>> +\t\tcase COMMANDTAG_DROP_USER_MAPPING:\n>> +\t\tcase COMMANDTAG_DROP_VIEW:\n>> +\t\t\treturn EVENT_TRIGGER_COMMAND_TAG_OK;\n>> +\n>> +\t\t\t/*\n>> +\t\t\t * Unsupported CREATE commands\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_CREATE_DATABASE:\n>> +\t\tcase COMMANDTAG_CREATE_EVENT_TRIGGER:\n>> +\t\tcase COMMANDTAG_CREATE_ROLE:\n>> +\t\tcase COMMANDTAG_CREATE_TABLESPACE:\n>> +\n>> +\t\t\t/*\n>> +\t\t\t * Unsupported ALTER commands\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_ALTER_DATABASE:\n>> +\t\tcase COMMANDTAG_ALTER_EVENT_TRIGGER:\n>> +\t\tcase COMMANDTAG_ALTER_ROLE:\n>> +\t\tcase COMMANDTAG_ALTER_TABLESPACE:\n>> +\n>> +\t\t\t/*\n>> +\t\t\t * Unsupported DROP commands\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_DROP_DATABASE:\n>> +\t\tcase COMMANDTAG_DROP_EVENT_TRIGGER:\n>> +\t\tcase COMMANDTAG_DROP_ROLE:\n>> +\t\tcase 
COMMANDTAG_DROP_TABLESPACE:\n>> +\n>> +\t\t\t/*\n>> +\t\t\t * Other unsupported commands. These used to return\n>> +\t\t\t * EVENT_TRIGGER_COMMAND_TAG_NOT_RECOGNIZED prior to the\n>> +\t\t\t * conversion of commandTag from string to enum.\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_ALTER_SYSTEM:\n>> +\t\tcase COMMANDTAG_ANALYZE:\n>> +\t\tcase COMMANDTAG_BEGIN:\n>> +\t\tcase COMMANDTAG_CALL:\n>> +\t\tcase COMMANDTAG_CHECKPOINT:\n>> +\t\tcase COMMANDTAG_CLOSE:\n>> +\t\tcase COMMANDTAG_CLOSE_CURSOR:\n>> +\t\tcase COMMANDTAG_CLOSE_CURSOR_ALL:\n>> +\t\tcase COMMANDTAG_CLUSTER:\n>> +\t\tcase COMMANDTAG_COMMIT:\n>> +\t\tcase COMMANDTAG_COMMIT_PREPARED:\n>> +\t\tcase COMMANDTAG_COPY:\n>> +\t\tcase COMMANDTAG_COPY_FROM:\n>> +\t\tcase COMMANDTAG_DEALLOCATE:\n>> +\t\tcase COMMANDTAG_DEALLOCATE_ALL:\n>> +\t\tcase COMMANDTAG_DECLARE_CURSOR:\n>> +\t\tcase COMMANDTAG_DELETE:\n>> +\t\tcase COMMANDTAG_DISCARD:\n>> +\t\tcase COMMANDTAG_DISCARD_ALL:\n>> +\t\tcase COMMANDTAG_DISCARD_PLANS:\n>> +\t\tcase COMMANDTAG_DISCARD_SEQUENCES:\n>> +\t\tcase COMMANDTAG_DISCARD_TEMP:\n>> +\t\tcase COMMANDTAG_DO:\n>> +\t\tcase COMMANDTAG_DROP_REPLICATION_SLOT:\n>> +\t\tcase COMMANDTAG_EXECUTE:\n>> +\t\tcase COMMANDTAG_EXPLAIN:\n>> +\t\tcase COMMANDTAG_FETCH:\n>> +\t\tcase COMMANDTAG_GRANT_ROLE:\n>> +\t\tcase COMMANDTAG_INSERT:\n>> +\t\tcase COMMANDTAG_LISTEN:\n>> +\t\tcase COMMANDTAG_LOAD:\n>> +\t\tcase COMMANDTAG_LOCK_TABLE:\n>> +\t\tcase COMMANDTAG_MOVE:\n>> +\t\tcase COMMANDTAG_NOTIFY:\n>> +\t\tcase COMMANDTAG_PREPARE:\n>> +\t\tcase COMMANDTAG_PREPARE_TRANSACTION:\n>> +\t\tcase COMMANDTAG_REASSIGN_OWNED:\n>> +\t\tcase COMMANDTAG_REINDEX:\n>> +\t\tcase COMMANDTAG_RELEASE:\n>> +\t\tcase COMMANDTAG_RESET:\n>> +\t\tcase COMMANDTAG_REVOKE_ROLE:\n>> +\t\tcase COMMANDTAG_ROLLBACK:\n>> +\t\tcase COMMANDTAG_ROLLBACK_PREPARED:\n>> +\t\tcase COMMANDTAG_SAVEPOINT:\n>> +\t\tcase COMMANDTAG_SELECT:\n>> +\t\tcase COMMANDTAG_SELECT_FOR_KEY_SHARE:\n>> +\t\tcase COMMANDTAG_SELECT_FOR_NO_KEY_UPDATE:\n>> +\t\tcase 
COMMANDTAG_SELECT_FOR_SHARE:\n>> +\t\tcase COMMANDTAG_SELECT_FOR_UPDATE:\n>> +\t\tcase COMMANDTAG_SET:\n>> +\t\tcase COMMANDTAG_SET_CONSTRAINTS:\n>> +\t\tcase COMMANDTAG_SHOW:\n>> +\t\tcase COMMANDTAG_START_TRANSACTION:\n>> +\t\tcase COMMANDTAG_TRUNCATE_TABLE:\n>> +\t\tcase COMMANDTAG_UNLISTEN:\n>> +\t\tcase COMMANDTAG_UPDATE:\n>> +\t\tcase COMMANDTAG_VACUUM:\n>> +\t\t\treturn EVENT_TRIGGER_COMMAND_TAG_NOT_SUPPORTED;\n>> +\t\tcase COMMANDTAG_UNKNOWN:\n>> \t\t\tbreak;\n>> -\tif (etsd->obtypename == NULL)\n>> -\t\treturn EVENT_TRIGGER_COMMAND_TAG_NOT_RECOGNIZED;\n>> -\tif (!etsd->supported)\n>> -\t\treturn EVENT_TRIGGER_COMMAND_TAG_NOT_SUPPORTED;\n>> -\treturn EVENT_TRIGGER_COMMAND_TAG_OK;\n>> +\t}\n>> +\treturn EVENT_TRIGGER_COMMAND_TAG_NOT_RECOGNIZED;\n>> }\n> \n> This is pretty painful.\n\nI think it is painful in a different way. The existing code on master is a mess of parsing logic that is harder to reason through, but fewer lines. There are other places in the backend that have long switch statements, so I didn’t feel I was breaking with project style to do this. I also made the switch longer than I had too, by including all enumerated values rather than just the ones that are supported. We could remove the extra cases, but I think that’s only a half measure. Something more like a consolidated table or bitmap seems better.\n\n> \n>> @@ -745,7 +902,7 @@ EventTriggerCommonSetup(Node *parsetree,\n>> \t\treturn NIL;\n>> \n>> \t/* Get the command tag. 
*/\n>> -\ttag = CreateCommandTag(parsetree);\n>> +\ttag = GetCommandTagName(CreateCommandTag(parsetree));\n>> \n>> \t/*\n>> \t * Filter list of event triggers by command tag, and copy them into our\n>> @@ -2136,7 +2293,7 @@ pg_event_trigger_ddl_commands(PG_FUNCTION_ARGS)\n>> \t\t\t\t\t/* objsubid */\n>> \t\t\t\t\tvalues[i++] = Int32GetDatum(addr.objectSubId);\n>> \t\t\t\t\t/* command tag */\n>> -\t\t\t\t\tvalues[i++] = CStringGetTextDatum(CreateCommandTag(cmd->parsetree));\n>> +\t\t\t\t\tvalues[i++] = CStringGetTextDatum(GetCommandTagName(CreateCommandTag(cmd->parsetree)));\n>> \t\t\t\t\t/* object_type */\n>> \t\t\t\t\tvalues[i++] = CStringGetTextDatum(type);\n>> \t\t\t\t\t/* schema */\n>> @@ -2161,7 +2318,7 @@ pg_event_trigger_ddl_commands(PG_FUNCTION_ARGS)\n>> \t\t\t\t/* objsubid */\n>> \t\t\t\tnulls[i++] = true;\n>> \t\t\t\t/* command tag */\n>> -\t\t\t\tvalues[i++] = CStringGetTextDatum(CreateCommandTag(cmd->parsetree));\n>> +\t\t\t\tvalues[i++] = CStringGetTextDatum(GetCommandTagName(CreateCommandTag(cmd->parsetree)));\n>> \t\t\t\t/* object_type */\n>> \t\t\t\tvalues[i++] = CStringGetTextDatum(stringify_adefprivs_objtype(cmd->d.defprivs.objtype));\n>> \t\t\t\t/* schema */\n> \n> So GetCommandTagName we commonly do twice for some reason? Once in\n> EventTriggerCommonSetup() and then again in\n> pg_event_trigger_ddl_commands()? Why is EventTriggerData.tag still the\n> string?\n\nIt is not a string after applying v2-0003…. The main issue I see in the code you are quoting is that CreateCommandTag(cmd->parsetree) is called more than once, and that’s not the consequence of this patch. That’s pre-existing. I didn’t look into it, though I can if you think it is relevant to this patch set. The name of the function, CreateCommandTag, sounds like something I invented as part of this patch, but it pre-exists this patch. 
I only changed its return value from char * to CommandTag.\n\n>> \tAssert(list_length(plan->plancache_list) == 1);\n>> @@ -1469,7 +1469,7 @@ SPI_cursor_open_internal(const char *name, SPIPlanPtr plan,\n>> \t\t\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>> \t\t\t\t/* translator: %s is a SQL statement name */\n>> \t\t\t\t\t\t errmsg(\"%s is not allowed in a non-volatile function\",\n>> -\t\t\t\t\t\t\t\tCreateCommandTag((Node *) pstmt))));\n>> +\t\t\t\t\t\t\t\tGetCommandTagName(CreateCommandTag((Node *) pstmt)))));\n> \n> Probably worth having a wrapper for this - these lines are pretty long,\n> and there quite a number of cases like it in the patch.\n\nActually, I looked at that. The number of them seemed right on the line between making a wrapper and not. I thought the counter-argument was that by making a wrapper that only got used in a few places, I was creating more lines of code and obfuscating what happens. I’m happy to do it your way, if consensus emerges around that.\n\n>> @@ -172,11 +175,38 @@ EndCommand(const char *commandTag, CommandDest dest)\n>> \t\tcase DestRemoteSimple:\n>> \n>> \t\t\t/*\n>> -\t\t\t * We assume the commandTag is plain ASCII and therefore requires\n>> -\t\t\t * no encoding conversion.\n>> +\t\t\t * We assume the tagname is plain ASCII and therefore\n>> +\t\t\t * requires no encoding conversion.\n>> \t\t\t */\n>> -\t\t\tpq_putmessage('C', commandTag, strlen(commandTag) + 1);\n>> -\t\t\tbreak;\n>> +\t\t\ttagname = GetCommandTagName(qc->commandTag);\n>> +\t\t\tswitch (qc->display_format)\n>> +\t\t\t{\n>> +\t\t\t\tcase DISPLAYFORMAT_PLAIN:\n>> +\t\t\t\t\tpq_putmessage('C', tagname, strlen(tagname) + 1);\n>> +\t\t\t\t\tbreak;\n>> +\t\t\t\tcase DISPLAYFORMAT_LAST_OID:\n>> +\t\t\t\t\t/*\n>> +\t\t\t\t\t * We no longer display LastOid, but to preserve the wire protocol,\n>> +\t\t\t\t\t * we write InvalidOid where the LastOid used to be written. 
For\n>> +\t\t\t\t\t * efficiency in the snprintf(), hard-code InvalidOid as zero.\n>> +\t\t\t\t\t */\n>> +\t\t\t\t\tAssert(InvalidOid == 0);\n>> +\t\t\t\t\tsnprintf(completionTag, COMPLETION_TAG_BUFSIZE,\n>> +\t\t\t\t\t\t\t\t\"%s 0 \" UINT64_FORMAT,\n>> +\t\t\t\t\t\t\t\ttagname,\n>> +\t\t\t\t\t\t\t\tqc->nprocessed);\n>> +\t\t\t\t\tpq_putmessage('C', completionTag, strlen(completionTag) + 1);\n>> +\t\t\t\t\tbreak;\n>> +\t\t\t\tcase DISPLAYFORMAT_NPROCESSED:\n>> +\t\t\t\t\tsnprintf(completionTag, COMPLETION_TAG_BUFSIZE,\n>> +\t\t\t\t\t\t\t\"%s \" UINT64_FORMAT,\n>> +\t\t\t\t\t\t\ttagname,\n>> +\t\t\t\t\t\t\tqc->nprocessed);\n>> +\t\t\t\t\tpq_putmessage('C', completionTag, strlen(completionTag) + 1);\n>> +\t\t\t\t\tbreak;\n>> +\t\t\t\tdefault:\n>> +\t\t\t\t\telog(ERROR, \"Invalid display_format in EndCommand\");\n>> +\t\t\t}\n> \n> Imo there should only be one pq_putmessage(). Also think this type of\n> default: is a bad idea, just prevents the compiler from warning if we\n> were to ever introduce a new variant of DISPLAYFORMAT_*, without\n> providing any meaningful additional security.\n\nOk.\n\n>> @@ -855,7 +889,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \n>> \t\tcase T_DiscardStmt:\n>> \t\t\t/* should we allow DISCARD PLANS? 
*/\n>> -\t\t\tCheckRestrictedOperation(\"DISCARD\");\n>> +\t\t\tCheckRestrictedOperation(COMMANDTAG_DISCARD);\n>> \t\t\tDiscardCommand((DiscardStmt *) parsetree, isTopLevel);\n>> \t\t\tbreak;\n>> \n>> @@ -974,7 +1008,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objtype))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n>> \t\t\t\telse\n>> \t\t\t\t\tExecuteGrantStmt(stmt);\n>> \t\t\t}\n>> @@ -987,7 +1021,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->removeType))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n>> \t\t\t\telse\n>> \t\t\t\t\tExecDropStmt(stmt, isTopLevel);\n>> \t\t\t}\n>> @@ -1000,7 +1034,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->renameType))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n>> \t\t\t\telse\n>> \t\t\t\t\tExecRenameStmt(stmt);\n>> \t\t\t}\n>> @@ -1013,7 +1047,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objectType))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n>> \t\t\t\telse\n>> \t\t\t\t\tExecAlterObjectDependsStmt(stmt, NULL);\n>> \t\t\t}\n>> @@ -1026,7 +1060,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objectType))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> 
\t\t\t\t\t\t\t\t\t context, params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n>> \t\t\t\telse\n>> \t\t\t\t\tExecAlterObjectSchemaStmt(stmt, NULL);\n>> \t\t\t}\n>> @@ -1039,7 +1073,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objectType))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n>> \t\t\t\telse\n>> \t\t\t\t\tExecAlterOwnerStmt(stmt);\n>> \t\t\t}\n>> @@ -1052,7 +1086,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objtype))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n>> \t\t\t\telse\n>> \t\t\t\t\tCommentObject(stmt);\n>> \t\t\t\tbreak;\n>> @@ -1065,7 +1099,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objtype))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n> \n> Not this patch's fault or task. But I hate this type of code - needing\n> to touch a dozen places for new type of statement is just\n> insane. utility.c should long have been rewritten to just have one\n> metadata table for nearly all of this. 
Perhaps with a few callbacks for\n> special cases.\n\nNo objection from me, though I’d have to see the alternative and what it does to performance.\n\n>> +static const char * tag_names[] = {\n>> +\t\"???\",\n>> +\t\"ALTER ACCESS METHOD\",\n>> +\t\"ALTER AGGREGATE\",\n>> +\t\"ALTER CAST\",\n> \n> This seems problematic to maintain, because the order needs to match\n> between this and something defined in a header - and there's no\n> guarantee a misordering is immediately noticeable. We should either go\n> for my metadata table idea, or at least rewrite this, even if more\n> verbose, to something like\n> \n> static const char * tag_names[] = {\n> [COMMAND_TAG_ALTER_ACCESS_METHOD] = \"ALTER ACCESS METHOD\",\n> ...\n> \n> I think the fact that this would show up in a grep for\n> COMMAND_TAG_ALTER_ACCESS_METHOD is good too.\n\nI had something closer to what you’re asking for as part of the v1 patch and ripped it out to get the code size down. Avoiding code bloat was one of Tom's concerns. What you are suggesting is admittedly better than what I ripped out, though. 
\n\n>> +/*\n>> + * Search CommandTag by name\n>> + *\n>> + * Returns CommandTag, or COMMANDTAG_UNKNOWN if not recognized\n>> + */\n>> +CommandTag\n>> +GetCommandTagEnum(const char *commandname)\n>> +{\n>> +\tconst char **base, **last, **position;\n>> +\tint\t\t result;\n>> +\n>> +\tOPTIONALLY_CHECK_COMMAND_TAGS();\n>> +\tif (commandname == NULL || *commandname == '\\0')\n>> +\t\treturn COMMANDTAG_UNKNOWN;\n>> +\n>> +\tbase = tag_names;\n>> +\tlast = tag_names + tag_name_length - 1;\n>> +\twhile (last >= base)\n>> +\t{\n>> +\t\tposition = base + ((last - base) >> 1);\n>> +\t\tresult = pg_strcasecmp(commandname, *position);\n>> +\t\tif (result == 0)\n>> +\t\t\treturn (CommandTag) (position - tag_names);\n>> +\t\telse if (result < 0)\n>> +\t\t\tlast = position - 1;\n>> +\t\telse\n>> +\t\t\tbase = position + 1;\n>> +\t}\n>> +\treturn COMMANDTAG_UNKNOWN;\n>> +}\n> \n> This seems pretty grotty - but you get rid of it later. See my comments there.\n> \n> \n> \n>> +#ifdef COMMANDTAG_CHECKING\n>> +bool\n>> +CheckCommandTagEnum()\n>> +{\n>> +\tCommandTag\ti, j;\n>> +\n>> +\tif (FIRST_COMMAND_TAG < 0 || LAST_COMMAND_TAG < 0 || LAST_COMMAND_TAG < FIRST_COMMAND_TAG)\n>> +\t{\n>> +\t\telog(ERROR, \"FIRST_COMMAND_TAG (%u), LAST_COMMAND_TAG (%u) not reasonable\",\n>> +\t\t\t (unsigned int) FIRST_COMMAND_TAG, (unsigned int) LAST_COMMAND_TAG);\n>> +\t\treturn false;\n>> +\t}\n>> +\tif (FIRST_COMMAND_TAG != (CommandTag)0)\n>> +\t{\n>> +\t\telog(ERROR, \"FIRST_COMMAND_TAG (%u) != 0\", (unsigned int) FIRST_COMMAND_TAG);\n>> +\t\treturn false;\n>> +\t}\n>> +\tif (LAST_COMMAND_TAG != (CommandTag)(tag_name_length - 1))\n>> +\t{\n>> +\t\telog(ERROR, \"LAST_COMMAND_TAG (%u) != tag_name_length (%u)\",\n>> +\t\t\t (unsigned int) LAST_COMMAND_TAG, (unsigned int) tag_name_length);\n>> +\t\treturn false;\n>> +\t}\n> \n> These all seem to want to be static asserts.\n> \n> \n>> +\tfor (i = FIRST_COMMAND_TAG; i < LAST_COMMAND_TAG; i++)\n>> +\t{\n>> +\t\tfor (j = i+1; j < LAST_COMMAND_TAG; j++)\n>> 
+\t\t{\n>> +\t\t\tint cmp = strcmp(tag_names[i], tag_names[j]);\n>> +\t\t\tif (cmp == 0)\n>> +\t\t\t{\n>> +\t\t\t\telog(ERROR, \"Found duplicate tag_name: \\\"%s\\\"\",\n>> +\t\t\t\t\ttag_names[i]);\n>> +\t\t\t\treturn false;\n>> +\t\t\t}\n>> +\t\t\tif (cmp > 0)\n>> +\t\t\t{\n>> +\t\t\t\telog(ERROR, \"Found commandnames out of order: \\\"%s\\\" before \\\"%s\\\"\",\n>> +\t\t\t\t\ttag_names[i], tag_names[j]);\n>> +\t\t\t\treturn false;\n>> +\t\t\t}\n>> +\t\t}\n>> +\t}\n>> +\treturn true;\n>> +}\n> \n> And I think we could get rid of this with my earlier suggestions?\n> \n> \n>> +/*\n>> + * BEWARE: These are in sorted order, but ordered by their printed\n>> + * values in the tag_name list (see common/commandtag.c).\n>> + * In particular it matters because the sort ordering changes\n>> + * when you replace a space with an underscore. To wit:\n>> + *\n>> + * \"CREATE TABLE\"\n>> + * \"CREATE TABLE AS\"\n>> + * \"CREATE TABLESPACE\"\n>> + *\n>> + * but...\n>> + *\n>> + * CREATE_TABLE\n>> + * CREATE_TABLESPACE\n>> + * CREATE_TABLE_AS\n>> + *\n>> + * It also matters that COMMANDTAG_UNKNOWN is written \"???\".\n>> + *\n>> + * If you add a value here, add it in common/commandtag.c also, and\n>> + * be careful to get the ordering right. 
You can build with\n>> + * COMMANDTAG_CHECKING to have this automatically checked\n>> + * at runtime, but that adds considerable overhead, so do so sparingly.\n>> + */\n>> +typedef enum CommandTag\n>> +{\n> \n> This seems pretty darn nightmarish.\n> \n> \n>> +#define FIRST_COMMAND_TAG COMMANDTAG_UNKNOWN\n>> +\tCOMMANDTAG_UNKNOWN,\n>> +\tCOMMANDTAG_ALTER_ACCESS_METHOD,\n>> +\tCOMMANDTAG_ALTER_AGGREGATE,\n>> +\tCOMMANDTAG_ALTER_CAST,\n>> +\tCOMMANDTAG_ALTER_COLLATION,\n>> +\tCOMMANDTAG_ALTER_CONSTRAINT,\n>> +\tCOMMANDTAG_ALTER_CONVERSION,\n>> +\tCOMMANDTAG_ALTER_DATABASE,\n>> +\tCOMMANDTAG_ALTER_DEFAULT_PRIVILEGES,\n>> +\tCOMMANDTAG_ALTER_DOMAIN,\n>> [...]\n> \n> I'm a bit worried that this basically duplicates a good portion of NodeTag, without having otherwise much of a point?\n\nI never quite came up with a one-size-fits-all enumeration. There are lots of places where these enumerations seem to almost map onto each other, but with special cases that don’t line up. I’m open to suggestions.\n\n>> From a70b0cadc1142e92b2354a0ca3cd47aaeb0c148e Mon Sep 17 00:00:00 2001\n>> From: Mark Dilger <mark.dilger@enterprisedb.com>\n>> Date: Tue, 4 Feb 2020 12:25:05 -0800\n>> Subject: [PATCH v2 2/3] Using a Bitmapset of tags rather than a string array.\n>> MIME-Version: 1.0\n>> Content-Type: text/plain; charset=UTF-8\n>> Content-Transfer-Encoding: 8bit\n>> \n>> EventTriggerCacheItem no longer holds an array of palloc’d tag strings\n>> in sorted order, but rather just a Bitmapset over the CommandTags. This\n>> makes the code a little simpler and easier to read, in my opinion. In\n>> filter_event_trigger, rather than running bsearch through a sorted array\n>> of strings, it just runs bms_is_member.\n>> ---\n> \n> It seems weird to add the bsearch just to remove it immediately again a\n> patch later. This probably should just go first?\n\nI’m not sure what you mean. 
That bsearch is pre-existing, not mine.\n\n>> diff --git a/src/test/regress/sql/event_trigger.sql b/src/test/regress/sql/event_trigger.sql\n>> index 346168673d..cad02212ad 100644\n>> --- a/src/test/regress/sql/event_trigger.sql\n>> +++ b/src/test/regress/sql/event_trigger.sql\n>> @@ -10,6 +10,13 @@ BEGIN\n>> END\n>> $$ language plpgsql;\n>> \n>> +-- OK\n>> +create function test_event_trigger2() returns event_trigger as $$\n>> +BEGIN\n>> +\tRAISE NOTICE 'test_event_trigger2: % %', tg_event, tg_tag;\n>> +END\n>> +$$ LANGUAGE plpgsql;\n>> +\n>> -- should fail, event triggers cannot have declared arguments\n>> create function test_event_trigger_arg(name text)\n>> returns event_trigger as $$ BEGIN RETURN 1; END $$ language plpgsql;\n>> @@ -82,6 +89,783 @@ create event trigger regress_event_trigger2 on ddl_command_start\n>> -- OK\n>> comment on event trigger regress_event_trigger is 'test comment';\n>> \n>> +-- These are all unsupported\n>> +create event trigger regress_event_triger_NULL on ddl_command_start\n>> + when tag in ('')\n>> + execute procedure test_event_trigger2();\n>> +\n>> +create event trigger regress_event_triger_UNKNOWN on ddl_command_start\n>> + when tag in ('???')\n>> + execute procedure test_event_trigger2();\n>> +\n>> +create event trigger regress_event_trigger_ALTER_DATABASE on ddl_command_start\n>> + when tag in ('ALTER DATABASE')\n>> + execute procedure test_event_trigger2();\n> [...]\n> \n> There got to be a more maintainable way to write this.\n\nYeah, I already conceded to Tom in his review that I’m not wedded to committing this test in any form, let alone in this form. That’s part of why I kept it as a separate patch file. 
But if you like what it is doing, and just don’t like the verbosity, I can try harder to compress it.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 4 Feb 2020 21:09:11 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "Andres,\n\nThe previous patch set seemed to cause confusion, having separated changes into multiple files. The latest patch, heavily influenced by your review, is all in one file, attached.\n\n> On Feb 4, 2020, at 7:34 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> \n>> These are replaced by switch() case statements over the possible commandTags:\n>> \n>> + switch (commandTag)\n>> + {\n>> + /*\n>> + * Supported idiosyncratic special cases.\n>> + */\n>> + case COMMANDTAG_ALTER_DEFAULT_PRIVILEGES:\n>> + case COMMANDTAG_ALTER_LARGE_OBJECT:\n>> + case COMMANDTAG_COMMENT:\n>> + case COMMANDTAG_CREATE_TABLE_AS:\n>> + case COMMANDTAG_DROP_OWNED:\n>> + case COMMANDTAG_GRANT:\n>> + case COMMANDTAG_IMPORT_FOREIGN_SCHEMA:\n>> + case COMMANDTAG_REFRESH_MATERIALIZED_VIEW:\n>> + case COMMANDTAG_REVOKE:\n>> + case COMMANDTAG_SECURITY_LABEL:\n>> + case COMMANDTAG_SELECT_INTO:\n> \n> The number of these makes me wonder if we should just have a metadata\n> table in one place, instead of needing to edit multiple\n> locations. Something like\n> \n> const ... CommandTagBehaviour[] = {\n> [COMMANDTAG_INSERT] = {\n> .display_processed = true, .display_oid = true, ...},\n> [COMMANDTAG_CREATE_TABLE_AS] = {\n> .event_trigger = true, ...},\n> \n> with the zero initialized defaults being the common cases.\n> \n> Not sure if it's worth going there. 
But it's maybe worth thinking about\n> for a minute?\n\nThe v3 patch does something like you suggest.\n\nThe only gotcha I came across while reorganizing the code this way is that exec_replication_command(…) outputs “SELECT” rather than “SELECT <ROWCOUNT>” as is done everywhere else. Strangely, exec_replication_command(…) does output the rowcount for “COPY <ROWCOUNT>”, which matches how COPY is handled elsewhere. I can’t see any logic in this. I’m concerned that outputting “SELECT 0” from exec_replication_command rather than “SELECT” as is currently done will break some client somewhere, though none that I can find.\n\nPutting the display information into the CommandTag behavior table forces the behavior per tag to be the same everywhere, which forces this change on exec_replication_command.\n\nTo get around this, I’ve added an extremely bogus extra boolean argument to EndCommand, force_undecorated_output, that is false from all callers except exec_replication_command(…) in the one spot I described.\n\nI don’t know whether the code should be committed this way, but I need something as a placeholder until I get a better understanding of why exec_replication_command(…) behaves as it does and what I should do about it in the patch.\n\n>> Averages for test set 1 by scale:\n>> set\tscale\ttps\tavg_latency\t90%<\tmax_latency\n>> 1\t1\t3741\t1.734\t3.162\t133.718\n>> 1\t9\t6124\t0.904\t1.05\t230.547\n>> 1\t81\t5921\t0.931\t1.015\t67.023\n>> \n>> Averages for test set 1 by clients:\n>> set\tclients\ttps\tavg_latency\t90%<\tmax_latency\n>> 1\t1\t2163\t0.461\t0.514\t24.414\n>> 1\t4\t5968\t0.675\t0.791\t40.354\n>> 1\t16\t7655\t2.433\t3.922\t366.519\n>> \n>> \n>> For command tag patch (branched from 1fd687a035):\n>> \n>> \tpostgresql % find src -type f -name \"*.c\" -or -name \"*.h\" | xargs cat | wc\n>> \t 1482969 5691908 45281399\n>> \n>> \tpostgresql % find src -type f -name \"*.o\" | xargs cat | wc\n>> \t 38209 476243 18999752\n>> \n>> \n>> Averages for test set 1 by 
scale:\n>> set\tscale\ttps\tavg_latency\t90%<\tmax_latency\n>> 1\t1\t3877\t1.645\t3.066\t24.973\n>> 1\t9\t6383\t0.859\t1.032\t64.566\n>> 1\t81\t5945\t0.925\t1.023\t162.9\n>> \n>> Averages for test set 1 by clients:\n>> set\tclients\ttps\tavg_latency\t90%<\tmax_latency\n>> 1\t1\t2141\t0.466\t0.522\t11.531\n>> 1\t4\t5967\t0.673\t0.783\t136.882\n>> 1\t16\t8096\t2.292\t3.817\t104.026\n> \n> Not bad.\n> \n> \n>> diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c\n>> index 9aa2b61600..5322c14ce4 100644\n>> --- a/src/backend/commands/async.c\n>> +++ b/src/backend/commands/async.c\n>> @@ -594,7 +594,7 @@ pg_notify(PG_FUNCTION_ARGS)\n>> \t\tpayload = text_to_cstring(PG_GETARG_TEXT_PP(1));\n>> \n>> \t/* For NOTIFY as a statement, this is checked in ProcessUtility */\n>> -\tPreventCommandDuringRecovery(\"NOTIFY\");\n>> +\tPreventCommandDuringRecovery(COMMANDTAG_NOTIFY);\n>> \n>> \tAsync_Notify(channel, payload);\n>> \n>> diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c\n>> index 40a8ec1abd..4828e75bd5 100644\n>> --- a/src/backend/commands/copy.c\n>> +++ b/src/backend/commands/copy.c\n>> @@ -1063,7 +1063,7 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n>> \n>> \t\t/* check read-only transaction and parallel mode */\n>> \t\tif (XactReadOnly && !rel->rd_islocaltemp)\n>> -\t\t\tPreventCommandIfReadOnly(\"COPY FROM\");\n>> +\t\t\tPreventCommandIfReadOnly(COMMANDTAG_COPY_FROM);\n>> \n>> \t\tcstate = BeginCopyFrom(pstate, rel, stmt->filename, stmt->is_program,\n>> \t\t\t\t\t\t\t NULL, stmt->attlist, stmt->options);\n> \n> I'm not sure this really ought to be part of this change - seems like a\n> somewhat independent change to me. 
With less obvious benefits.\n\nThis is changed back in v3 to be more like how it was before.\n\n>> static event_trigger_command_tag_check_result\n>> -check_ddl_tag(const char *tag)\n>> +check_ddl_tag(CommandTag commandTag)\n>> {\n>> -\tconst char *obtypename;\n>> -\tconst event_trigger_support_data *etsd;\n>> +\tswitch (commandTag)\n>> +\t{\n>> +\t\t\t/*\n>> +\t\t\t * Supported idiosyncratic special cases.\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_ALTER_DEFAULT_PRIVILEGES:\n>> +\t\tcase COMMANDTAG_ALTER_LARGE_OBJECT:\n>> +\t\tcase COMMANDTAG_COMMENT:\n>> +\t\tcase COMMANDTAG_CREATE_TABLE_AS:\n>> +\t\tcase COMMANDTAG_DROP_OWNED:\n>> +\t\tcase COMMANDTAG_GRANT:\n>> +\t\tcase COMMANDTAG_IMPORT_FOREIGN_SCHEMA:\n>> +\t\tcase COMMANDTAG_REFRESH_MATERIALIZED_VIEW:\n>> +\t\tcase COMMANDTAG_REVOKE:\n>> +\t\tcase COMMANDTAG_SECURITY_LABEL:\n>> +\t\tcase COMMANDTAG_SELECT_INTO:\n>> \n>> -\t/*\n>> -\t * Handle some idiosyncratic special cases.\n>> -\t */\n>> -\tif (pg_strcasecmp(tag, \"CREATE TABLE AS\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"SELECT INTO\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"REFRESH MATERIALIZED VIEW\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"ALTER DEFAULT PRIVILEGES\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"ALTER LARGE OBJECT\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"COMMENT\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"GRANT\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"REVOKE\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"DROP OWNED\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"IMPORT FOREIGN SCHEMA\") == 0 ||\n>> -\t\tpg_strcasecmp(tag, \"SECURITY LABEL\") == 0)\n>> -\t\treturn EVENT_TRIGGER_COMMAND_TAG_OK;\n>> +\t\t\t/*\n>> +\t\t\t * Supported CREATE commands\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_CREATE_ACCESS_METHOD:\n>> +\t\tcase COMMANDTAG_CREATE_AGGREGATE:\n>> +\t\tcase COMMANDTAG_CREATE_CAST:\n>> +\t\tcase COMMANDTAG_CREATE_COLLATION:\n>> +\t\tcase COMMANDTAG_CREATE_CONSTRAINT:\n>> +\t\tcase COMMANDTAG_CREATE_CONVERSION:\n>> +\t\tcase COMMANDTAG_CREATE_DOMAIN:\n>> 
+\t\tcase COMMANDTAG_CREATE_EXTENSION:\n>> +\t\tcase COMMANDTAG_CREATE_FOREIGN_DATA_WRAPPER:\n>> +\t\tcase COMMANDTAG_CREATE_FOREIGN_TABLE:\n>> +\t\tcase COMMANDTAG_CREATE_FUNCTION:\n>> +\t\tcase COMMANDTAG_CREATE_INDEX:\n>> +\t\tcase COMMANDTAG_CREATE_LANGUAGE:\n>> +\t\tcase COMMANDTAG_CREATE_MATERIALIZED_VIEW:\n>> +\t\tcase COMMANDTAG_CREATE_OPERATOR:\n>> +\t\tcase COMMANDTAG_CREATE_OPERATOR_CLASS:\n>> +\t\tcase COMMANDTAG_CREATE_OPERATOR_FAMILY:\n>> +\t\tcase COMMANDTAG_CREATE_POLICY:\n>> +\t\tcase COMMANDTAG_CREATE_PROCEDURE:\n>> +\t\tcase COMMANDTAG_CREATE_PUBLICATION:\n>> +\t\tcase COMMANDTAG_CREATE_ROUTINE:\n>> +\t\tcase COMMANDTAG_CREATE_RULE:\n>> +\t\tcase COMMANDTAG_CREATE_SCHEMA:\n>> +\t\tcase COMMANDTAG_CREATE_SEQUENCE:\n>> +\t\tcase COMMANDTAG_CREATE_SERVER:\n>> +\t\tcase COMMANDTAG_CREATE_STATISTICS:\n>> +\t\tcase COMMANDTAG_CREATE_SUBSCRIPTION:\n>> +\t\tcase COMMANDTAG_CREATE_TABLE:\n>> +\t\tcase COMMANDTAG_CREATE_TEXT_SEARCH_CONFIGURATION:\n>> +\t\tcase COMMANDTAG_CREATE_TEXT_SEARCH_DICTIONARY:\n>> +\t\tcase COMMANDTAG_CREATE_TEXT_SEARCH_PARSER:\n>> +\t\tcase COMMANDTAG_CREATE_TEXT_SEARCH_TEMPLATE:\n>> +\t\tcase COMMANDTAG_CREATE_TRANSFORM:\n>> +\t\tcase COMMANDTAG_CREATE_TRIGGER:\n>> +\t\tcase COMMANDTAG_CREATE_TYPE:\n>> +\t\tcase COMMANDTAG_CREATE_USER_MAPPING:\n>> +\t\tcase COMMANDTAG_CREATE_VIEW:\n>> \n>> -\t/*\n>> -\t * Otherwise, command should be CREATE, ALTER, or DROP.\n>> -\t */\n>> -\tif (pg_strncasecmp(tag, \"CREATE \", 7) == 0)\n>> -\t\tobtypename = tag + 7;\n>> -\telse if (pg_strncasecmp(tag, \"ALTER \", 6) == 0)\n>> -\t\tobtypename = tag + 6;\n>> -\telse if (pg_strncasecmp(tag, \"DROP \", 5) == 0)\n>> -\t\tobtypename = tag + 5;\n>> -\telse\n>> -\t\treturn EVENT_TRIGGER_COMMAND_TAG_NOT_RECOGNIZED;\n>> +\t\t\t/*\n>> +\t\t\t * Supported ALTER commands\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_ALTER_ACCESS_METHOD:\n>> +\t\tcase COMMANDTAG_ALTER_AGGREGATE:\n>> +\t\tcase COMMANDTAG_ALTER_CAST:\n>> +\t\tcase COMMANDTAG_ALTER_COLLATION:\n>> 
+\t\tcase COMMANDTAG_ALTER_CONSTRAINT:\n>> +\t\tcase COMMANDTAG_ALTER_CONVERSION:\n>> +\t\tcase COMMANDTAG_ALTER_DOMAIN:\n>> +\t\tcase COMMANDTAG_ALTER_EXTENSION:\n>> +\t\tcase COMMANDTAG_ALTER_FOREIGN_DATA_WRAPPER:\n>> +\t\tcase COMMANDTAG_ALTER_FOREIGN_TABLE:\n>> +\t\tcase COMMANDTAG_ALTER_FUNCTION:\n>> +\t\tcase COMMANDTAG_ALTER_INDEX:\n>> +\t\tcase COMMANDTAG_ALTER_LANGUAGE:\n>> +\t\tcase COMMANDTAG_ALTER_MATERIALIZED_VIEW:\n>> +\t\tcase COMMANDTAG_ALTER_OPERATOR:\n>> +\t\tcase COMMANDTAG_ALTER_OPERATOR_CLASS:\n>> +\t\tcase COMMANDTAG_ALTER_OPERATOR_FAMILY:\n>> +\t\tcase COMMANDTAG_ALTER_POLICY:\n>> +\t\tcase COMMANDTAG_ALTER_PROCEDURE:\n>> +\t\tcase COMMANDTAG_ALTER_PUBLICATION:\n>> +\t\tcase COMMANDTAG_ALTER_ROUTINE:\n>> +\t\tcase COMMANDTAG_ALTER_RULE:\n>> +\t\tcase COMMANDTAG_ALTER_SCHEMA:\n>> +\t\tcase COMMANDTAG_ALTER_SEQUENCE:\n>> +\t\tcase COMMANDTAG_ALTER_SERVER:\n>> +\t\tcase COMMANDTAG_ALTER_STATISTICS:\n>> +\t\tcase COMMANDTAG_ALTER_SUBSCRIPTION:\n>> +\t\tcase COMMANDTAG_ALTER_TABLE:\n>> +\t\tcase COMMANDTAG_ALTER_TEXT_SEARCH_CONFIGURATION:\n>> +\t\tcase COMMANDTAG_ALTER_TEXT_SEARCH_DICTIONARY:\n>> +\t\tcase COMMANDTAG_ALTER_TEXT_SEARCH_PARSER:\n>> +\t\tcase COMMANDTAG_ALTER_TEXT_SEARCH_TEMPLATE:\n>> +\t\tcase COMMANDTAG_ALTER_TRANSFORM:\n>> +\t\tcase COMMANDTAG_ALTER_TRIGGER:\n>> +\t\tcase COMMANDTAG_ALTER_TYPE:\n>> +\t\tcase COMMANDTAG_ALTER_USER_MAPPING:\n>> +\t\tcase COMMANDTAG_ALTER_VIEW:\n>> \n>> -\t/*\n>> -\t * ...and the object type should be something recognizable.\n>> -\t */\n>> -\tfor (etsd = event_trigger_support; etsd->obtypename != NULL; etsd++)\n>> -\t\tif (pg_strcasecmp(etsd->obtypename, obtypename) == 0)\n>> +\t\t\t/*\n>> +\t\t\t * Supported DROP commands\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_DROP_ACCESS_METHOD:\n>> +\t\tcase COMMANDTAG_DROP_AGGREGATE:\n>> +\t\tcase COMMANDTAG_DROP_CAST:\n>> +\t\tcase COMMANDTAG_DROP_COLLATION:\n>> +\t\tcase COMMANDTAG_DROP_CONSTRAINT:\n>> +\t\tcase COMMANDTAG_DROP_CONVERSION:\n>> +\t\tcase 
COMMANDTAG_DROP_DOMAIN:\n>> +\t\tcase COMMANDTAG_DROP_EXTENSION:\n>> +\t\tcase COMMANDTAG_DROP_FOREIGN_DATA_WRAPPER:\n>> +\t\tcase COMMANDTAG_DROP_FOREIGN_TABLE:\n>> +\t\tcase COMMANDTAG_DROP_FUNCTION:\n>> +\t\tcase COMMANDTAG_DROP_INDEX:\n>> +\t\tcase COMMANDTAG_DROP_LANGUAGE:\n>> +\t\tcase COMMANDTAG_DROP_MATERIALIZED_VIEW:\n>> +\t\tcase COMMANDTAG_DROP_OPERATOR:\n>> +\t\tcase COMMANDTAG_DROP_OPERATOR_CLASS:\n>> +\t\tcase COMMANDTAG_DROP_OPERATOR_FAMILY:\n>> +\t\tcase COMMANDTAG_DROP_POLICY:\n>> +\t\tcase COMMANDTAG_DROP_PROCEDURE:\n>> +\t\tcase COMMANDTAG_DROP_PUBLICATION:\n>> +\t\tcase COMMANDTAG_DROP_ROUTINE:\n>> +\t\tcase COMMANDTAG_DROP_RULE:\n>> +\t\tcase COMMANDTAG_DROP_SCHEMA:\n>> +\t\tcase COMMANDTAG_DROP_SEQUENCE:\n>> +\t\tcase COMMANDTAG_DROP_SERVER:\n>> +\t\tcase COMMANDTAG_DROP_STATISTICS:\n>> +\t\tcase COMMANDTAG_DROP_SUBSCRIPTION:\n>> +\t\tcase COMMANDTAG_DROP_TABLE:\n>> +\t\tcase COMMANDTAG_DROP_TEXT_SEARCH_CONFIGURATION:\n>> +\t\tcase COMMANDTAG_DROP_TEXT_SEARCH_DICTIONARY:\n>> +\t\tcase COMMANDTAG_DROP_TEXT_SEARCH_PARSER:\n>> +\t\tcase COMMANDTAG_DROP_TEXT_SEARCH_TEMPLATE:\n>> +\t\tcase COMMANDTAG_DROP_TRANSFORM:\n>> +\t\tcase COMMANDTAG_DROP_TRIGGER:\n>> +\t\tcase COMMANDTAG_DROP_TYPE:\n>> +\t\tcase COMMANDTAG_DROP_USER_MAPPING:\n>> +\t\tcase COMMANDTAG_DROP_VIEW:\n>> +\t\t\treturn EVENT_TRIGGER_COMMAND_TAG_OK;\n>> +\n>> +\t\t\t/*\n>> +\t\t\t * Unsupported CREATE commands\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_CREATE_DATABASE:\n>> +\t\tcase COMMANDTAG_CREATE_EVENT_TRIGGER:\n>> +\t\tcase COMMANDTAG_CREATE_ROLE:\n>> +\t\tcase COMMANDTAG_CREATE_TABLESPACE:\n>> +\n>> +\t\t\t/*\n>> +\t\t\t * Unsupported ALTER commands\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_ALTER_DATABASE:\n>> +\t\tcase COMMANDTAG_ALTER_EVENT_TRIGGER:\n>> +\t\tcase COMMANDTAG_ALTER_ROLE:\n>> +\t\tcase COMMANDTAG_ALTER_TABLESPACE:\n>> +\n>> +\t\t\t/*\n>> +\t\t\t * Unsupported DROP commands\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_DROP_DATABASE:\n>> +\t\tcase 
COMMANDTAG_DROP_EVENT_TRIGGER:\n>> +\t\tcase COMMANDTAG_DROP_ROLE:\n>> +\t\tcase COMMANDTAG_DROP_TABLESPACE:\n>> +\n>> +\t\t\t/*\n>> +\t\t\t * Other unsupported commands. These used to return\n>> +\t\t\t * EVENT_TRIGGER_COMMAND_TAG_NOT_RECOGNIZED prior to the\n>> +\t\t\t * conversion of commandTag from string to enum.\n>> +\t\t\t */\n>> +\t\tcase COMMANDTAG_ALTER_SYSTEM:\n>> +\t\tcase COMMANDTAG_ANALYZE:\n>> +\t\tcase COMMANDTAG_BEGIN:\n>> +\t\tcase COMMANDTAG_CALL:\n>> +\t\tcase COMMANDTAG_CHECKPOINT:\n>> +\t\tcase COMMANDTAG_CLOSE:\n>> +\t\tcase COMMANDTAG_CLOSE_CURSOR:\n>> +\t\tcase COMMANDTAG_CLOSE_CURSOR_ALL:\n>> +\t\tcase COMMANDTAG_CLUSTER:\n>> +\t\tcase COMMANDTAG_COMMIT:\n>> +\t\tcase COMMANDTAG_COMMIT_PREPARED:\n>> +\t\tcase COMMANDTAG_COPY:\n>> +\t\tcase COMMANDTAG_COPY_FROM:\n>> +\t\tcase COMMANDTAG_DEALLOCATE:\n>> +\t\tcase COMMANDTAG_DEALLOCATE_ALL:\n>> +\t\tcase COMMANDTAG_DECLARE_CURSOR:\n>> +\t\tcase COMMANDTAG_DELETE:\n>> +\t\tcase COMMANDTAG_DISCARD:\n>> +\t\tcase COMMANDTAG_DISCARD_ALL:\n>> +\t\tcase COMMANDTAG_DISCARD_PLANS:\n>> +\t\tcase COMMANDTAG_DISCARD_SEQUENCES:\n>> +\t\tcase COMMANDTAG_DISCARD_TEMP:\n>> +\t\tcase COMMANDTAG_DO:\n>> +\t\tcase COMMANDTAG_DROP_REPLICATION_SLOT:\n>> +\t\tcase COMMANDTAG_EXECUTE:\n>> +\t\tcase COMMANDTAG_EXPLAIN:\n>> +\t\tcase COMMANDTAG_FETCH:\n>> +\t\tcase COMMANDTAG_GRANT_ROLE:\n>> +\t\tcase COMMANDTAG_INSERT:\n>> +\t\tcase COMMANDTAG_LISTEN:\n>> +\t\tcase COMMANDTAG_LOAD:\n>> +\t\tcase COMMANDTAG_LOCK_TABLE:\n>> +\t\tcase COMMANDTAG_MOVE:\n>> +\t\tcase COMMANDTAG_NOTIFY:\n>> +\t\tcase COMMANDTAG_PREPARE:\n>> +\t\tcase COMMANDTAG_PREPARE_TRANSACTION:\n>> +\t\tcase COMMANDTAG_REASSIGN_OWNED:\n>> +\t\tcase COMMANDTAG_REINDEX:\n>> +\t\tcase COMMANDTAG_RELEASE:\n>> +\t\tcase COMMANDTAG_RESET:\n>> +\t\tcase COMMANDTAG_REVOKE_ROLE:\n>> +\t\tcase COMMANDTAG_ROLLBACK:\n>> +\t\tcase COMMANDTAG_ROLLBACK_PREPARED:\n>> +\t\tcase COMMANDTAG_SAVEPOINT:\n>> +\t\tcase COMMANDTAG_SELECT:\n>> +\t\tcase 
COMMANDTAG_SELECT_FOR_KEY_SHARE:\n>> +\t\tcase COMMANDTAG_SELECT_FOR_NO_KEY_UPDATE:\n>> +\t\tcase COMMANDTAG_SELECT_FOR_SHARE:\n>> +\t\tcase COMMANDTAG_SELECT_FOR_UPDATE:\n>> +\t\tcase COMMANDTAG_SET:\n>> +\t\tcase COMMANDTAG_SET_CONSTRAINTS:\n>> +\t\tcase COMMANDTAG_SHOW:\n>> +\t\tcase COMMANDTAG_START_TRANSACTION:\n>> +\t\tcase COMMANDTAG_TRUNCATE_TABLE:\n>> +\t\tcase COMMANDTAG_UNLISTEN:\n>> +\t\tcase COMMANDTAG_UPDATE:\n>> +\t\tcase COMMANDTAG_VACUUM:\n>> +\t\t\treturn EVENT_TRIGGER_COMMAND_TAG_NOT_SUPPORTED;\n>> +\t\tcase COMMANDTAG_UNKNOWN:\n>> \t\t\tbreak;\n>> -\tif (etsd->obtypename == NULL)\n>> -\t\treturn EVENT_TRIGGER_COMMAND_TAG_NOT_RECOGNIZED;\n>> -\tif (!etsd->supported)\n>> -\t\treturn EVENT_TRIGGER_COMMAND_TAG_NOT_SUPPORTED;\n>> -\treturn EVENT_TRIGGER_COMMAND_TAG_OK;\n>> +\t}\n>> +\treturn EVENT_TRIGGER_COMMAND_TAG_NOT_RECOGNIZED;\n>> }\n> \n> This is pretty painful.\n\nYeah, and it’s gone in v3. This sort of logic now lives in the behavior table in src/backend/utils/misc/commandtag.c.\n\n>> @@ -745,7 +902,7 @@ EventTriggerCommonSetup(Node *parsetree,\n>> \t\treturn NIL;\n>> \n>> \t/* Get the command tag. 
*/\n>> -\ttag = CreateCommandTag(parsetree);\n>> +\ttag = GetCommandTagName(CreateCommandTag(parsetree));\n>> \n>> \t/*\n>> \t * Filter list of event triggers by command tag, and copy them into our\n>> @@ -2136,7 +2293,7 @@ pg_event_trigger_ddl_commands(PG_FUNCTION_ARGS)\n>> \t\t\t\t\t/* objsubid */\n>> \t\t\t\t\tvalues[i++] = Int32GetDatum(addr.objectSubId);\n>> \t\t\t\t\t/* command tag */\n>> -\t\t\t\t\tvalues[i++] = CStringGetTextDatum(CreateCommandTag(cmd->parsetree));\n>> +\t\t\t\t\tvalues[i++] = CStringGetTextDatum(GetCommandTagName(CreateCommandTag(cmd->parsetree)));\n>> \t\t\t\t\t/* object_type */\n>> \t\t\t\t\tvalues[i++] = CStringGetTextDatum(type);\n>> \t\t\t\t\t/* schema */\n>> @@ -2161,7 +2318,7 @@ pg_event_trigger_ddl_commands(PG_FUNCTION_ARGS)\n>> \t\t\t\t/* objsubid */\n>> \t\t\t\tnulls[i++] = true;\n>> \t\t\t\t/* command tag */\n>> -\t\t\t\tvalues[i++] = CStringGetTextDatum(CreateCommandTag(cmd->parsetree));\n>> +\t\t\t\tvalues[i++] = CStringGetTextDatum(GetCommandTagName(CreateCommandTag(cmd->parsetree)));\n>> \t\t\t\t/* object_type */\n>> \t\t\t\tvalues[i++] = CStringGetTextDatum(stringify_adefprivs_objtype(cmd->d.defprivs.objtype));\n>> \t\t\t\t/* schema */\n> \n> So GetCommandTagName we commonly do twice for some reason? Once in\n> EventTriggerCommonSetup() and then again in\n> pg_event_trigger_ddl_commands()? 
Why is EventTriggerData.tag still the\n> string?\n\nEventTriggerCommonSetup() gets the command tag enum, not the string, at least in v3.\n\n> \n>> \tAssert(list_length(plan->plancache_list) == 1);\n>> @@ -1469,7 +1469,7 @@ SPI_cursor_open_internal(const char *name, SPIPlanPtr plan,\n>> \t\t\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>> \t\t\t\t/* translator: %s is a SQL statement name */\n>> \t\t\t\t\t\t errmsg(\"%s is not allowed in a non-volatile function\",\n>> -\t\t\t\t\t\t\t\tCreateCommandTag((Node *) pstmt))));\n>> +\t\t\t\t\t\t\t\tGetCommandTagName(CreateCommandTag((Node *) pstmt)))));\n> \n> Probably worth having a wrapper for this - these lines are pretty long,\n> and there quite a number of cases like it in the patch.\n\nI was having some trouble figuring out what to name the wrapper. “CreateCommandTagAndGetName” is nearly as long as the two function names it replaces. “CreateCommandTagName” sounds like you’re creating a name rather than a CommandTag, which is misleading. But then I realized this function was poorly named to begin with. “Create” is an entirely inappropriate verb for what this function does. Even before this patch, it wasn’t creating anything. It was looking up a constant string name. Now it is looking up an enum.\n\nI went with CreateCommandName(…), but I think this leaves a lot to be desired. Thoughts?
\n\n>> @@ -172,11 +175,38 @@ EndCommand(const char *commandTag, CommandDest dest)\n>> \t\tcase DestRemoteSimple:\n>> \n>> \t\t\t/*\n>> -\t\t\t * We assume the commandTag is plain ASCII and therefore requires\n>> -\t\t\t * no encoding conversion.\n>> +\t\t\t * We assume the tagname is plain ASCII and therefore\n>> +\t\t\t * requires no encoding conversion.\n>> \t\t\t */\n>> -\t\t\tpq_putmessage('C', commandTag, strlen(commandTag) + 1);\n>> -\t\t\tbreak;\n>> +\t\t\ttagname = GetCommandTagName(qc->commandTag);\n>> +\t\t\tswitch (qc->display_format)\n>> +\t\t\t{\n>> +\t\t\t\tcase DISPLAYFORMAT_PLAIN:\n>> +\t\t\t\t\tpq_putmessage('C', tagname, strlen(tagname) + 1);\n>> +\t\t\t\t\tbreak;\n>> +\t\t\t\tcase DISPLAYFORMAT_LAST_OID:\n>> +\t\t\t\t\t/*\n>> +\t\t\t\t\t * We no longer display LastOid, but to preserve the wire protocol,\n>> +\t\t\t\t\t * we write InvalidOid where the LastOid used to be written. For\n>> +\t\t\t\t\t * efficiency in the snprintf(), hard-code InvalidOid as zero.\n>> +\t\t\t\t\t */\n>> +\t\t\t\t\tAssert(InvalidOid == 0);\n>> +\t\t\t\t\tsnprintf(completionTag, COMPLETION_TAG_BUFSIZE,\n>> +\t\t\t\t\t\t\t\t\"%s 0 \" UINT64_FORMAT,\n>> +\t\t\t\t\t\t\t\ttagname,\n>> +\t\t\t\t\t\t\t\tqc->nprocessed);\n>> +\t\t\t\t\tpq_putmessage('C', completionTag, strlen(completionTag) + 1);\n>> +\t\t\t\t\tbreak;\n>> +\t\t\t\tcase DISPLAYFORMAT_NPROCESSED:\n>> +\t\t\t\t\tsnprintf(completionTag, COMPLETION_TAG_BUFSIZE,\n>> +\t\t\t\t\t\t\t\"%s \" UINT64_FORMAT,\n>> +\t\t\t\t\t\t\ttagname,\n>> +\t\t\t\t\t\t\tqc->nprocessed);\n>> +\t\t\t\t\tpq_putmessage('C', completionTag, strlen(completionTag) + 1);\n>> +\t\t\t\t\tbreak;\n>> +\t\t\t\tdefault:\n>> +\t\t\t\t\telog(ERROR, \"Invalid display_format in EndCommand\");\n>> +\t\t\t}\n> \n> Imo there should only be one pq_putmessage(). 
Also think this type of\n> default: is a bad idea, just prevents the compiler from warning if we\n> were to ever introduce a new variant of DISPLAYFORMAT_*, without\n> providing any meaningful additional security.\n\nThis is fixed in v3.\n\n>> @@ -855,7 +889,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \n>> \t\tcase T_DiscardStmt:\n>> \t\t\t/* should we allow DISCARD PLANS? */\n>> -\t\t\tCheckRestrictedOperation(\"DISCARD\");\n>> +\t\t\tCheckRestrictedOperation(COMMANDTAG_DISCARD);\n>> \t\t\tDiscardCommand((DiscardStmt *) parsetree, isTopLevel);\n>> \t\t\tbreak;\n>> \n>> @@ -974,7 +1008,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objtype))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n>> \t\t\t\telse\n>> \t\t\t\t\tExecuteGrantStmt(stmt);\n>> \t\t\t}\n>> @@ -987,7 +1021,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->removeType))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n>> \t\t\t\telse\n>> \t\t\t\t\tExecDropStmt(stmt, isTopLevel);\n>> \t\t\t}\n>> @@ -1000,7 +1034,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->renameType))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n>> \t\t\t\telse\n>> \t\t\t\t\tExecRenameStmt(stmt);\n>> \t\t\t}\n>> @@ -1013,7 +1047,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objectType))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> \t\t\t\t\t\t\t\t\t context, 
params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n>> \t\t\t\telse\n>> \t\t\t\t\tExecAlterObjectDependsStmt(stmt, NULL);\n>> \t\t\t}\n>> @@ -1026,7 +1060,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objectType))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n>> \t\t\t\telse\n>> \t\t\t\t\tExecAlterObjectSchemaStmt(stmt, NULL);\n>> \t\t\t}\n>> @@ -1039,7 +1073,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objectType))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n>> \t\t\t\telse\n>> \t\t\t\t\tExecAlterOwnerStmt(stmt);\n>> \t\t\t}\n>> @@ -1052,7 +1086,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objtype))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n>> \t\t\t\telse\n>> \t\t\t\t\tCommentObject(stmt);\n>> \t\t\t\tbreak;\n>> @@ -1065,7 +1099,7 @@ standard_ProcessUtility(PlannedStmt *pstmt,\n>> \t\t\t\tif (EventTriggerSupportsObjectType(stmt->objtype))\n>> \t\t\t\t\tProcessUtilitySlow(pstate, pstmt, queryString,\n>> \t\t\t\t\t\t\t\t\t context, params, queryEnv,\n>> -\t\t\t\t\t\t\t\t\t dest, completionTag);\n>> +\t\t\t\t\t\t\t\t\t dest, qc);\n> \n> Not this patch's fault or task. But I hate this type of code - needing\n> to touch a dozen places for new type of statement is just\n> insane. utility.c should long have been rewritten to just have one\n> metadata table for nearly all of this. 
Perhaps with a few callbacks for\n> special cases.\n\nI’ve decided not to touch this issue. There are no changes here from how it was done in v2.\n\n>> +static const char * tag_names[] = {\n>> +\t\"???\",\n>> +\t\"ALTER ACCESS METHOD\",\n>> +\t\"ALTER AGGREGATE\",\n>> +\t\"ALTER CAST\",\n> \n> This seems problematic to maintain, because the order needs to match\n> between this and something defined in a header - and there's no\n> guarantee a misordering is immediately noticeable. We should either go\n> for my metadata table idea, or at least rewrite this, even if more\n> verbose, to something like\n> \n> static const char * tag_names[] = {\n> [COMMAND_TAG_ALTER_ACCESS_METHOD] = \"ALTER ACCESS METHOD\",\n> ...\n> \n> I think the fact that this would show up in a grep for\n> COMMAND_TAG_ALTER_ACCESS_METHOD is good too.\n\nRewriting this as you suggest does not prevent tag names from being out of sorted order. Version 3 of the patch adds a perl script that reads commandtag.h and commandtag.c during the build process and stops the build with a brief error message if they don’t match, are malformed, or if the sorting order is wrong. The script does not modify the code. It just reviews it for correctness. As such, it probably doesn’t matter whether it runs on all platforms. I did not look into whether this runs on Windows, but if there is any difficulty there, it could simply be disabled on that platform.\n\nIt also doesn’t matter if this perl script gets committed. There is a trade-off here between maintaining the script vs. 
manually maintaining the enum ordering.\n\n>> +/*\n>> + * Search CommandTag by name\n>> + *\n>> + * Returns CommandTag, or COMMANDTAG_UNKNOWN if not recognized\n>> + */\n>> +CommandTag\n>> +GetCommandTagEnum(const char *commandname)\n>> +{\n>> +\tconst char **base, **last, **position;\n>> +\tint\t\t result;\n>> +\n>> +\tOPTIONALLY_CHECK_COMMAND_TAGS();\n>> +\tif (commandname == NULL || *commandname == '\\0')\n>> +\t\treturn COMMANDTAG_UNKNOWN;\n>> +\n>> +\tbase = tag_names;\n>> +\tlast = tag_names + tag_name_length - 1;\n>> +\twhile (last >= base)\n>> +\t{\n>> +\t\tposition = base + ((last - base) >> 1);\n>> +\t\tresult = pg_strcasecmp(commandname, *position);\n>> +\t\tif (result == 0)\n>> +\t\t\treturn (CommandTag) (position - tag_names);\n>> +\t\telse if (result < 0)\n>> +\t\t\tlast = position - 1;\n>> +\t\telse\n>> +\t\t\tbase = position + 1;\n>> +\t}\n>> +\treturn COMMANDTAG_UNKNOWN;\n>> +}\n> \n> This seems pretty grotty - but you get rid of it later. See my comments there.\n\nI kept a form of GetCommandTagEnum.\n\n>> +#ifdef COMMANDTAG_CHECKING\n>> +bool\n>> +CheckCommandTagEnum()\n>> +{\n>> +\tCommandTag\ti, j;\n>> +\n>> +\tif (FIRST_COMMAND_TAG < 0 || LAST_COMMAND_TAG < 0 || LAST_COMMAND_TAG < FIRST_COMMAND_TAG)\n>> +\t{\n>> +\t\telog(ERROR, \"FIRST_COMMAND_TAG (%u), LAST_COMMAND_TAG (%u) not reasonable\",\n>> +\t\t\t (unsigned int) FIRST_COMMAND_TAG, (unsigned int) LAST_COMMAND_TAG);\n>> +\t\treturn false;\n>> +\t}\n>> +\tif (FIRST_COMMAND_TAG != (CommandTag)0)\n>> +\t{\n>> +\t\telog(ERROR, \"FIRST_COMMAND_TAG (%u) != 0\", (unsigned int) FIRST_COMMAND_TAG);\n>> +\t\treturn false;\n>> +\t}\n>> +\tif (LAST_COMMAND_TAG != (CommandTag)(tag_name_length - 1))\n>> +\t{\n>> +\t\telog(ERROR, \"LAST_COMMAND_TAG (%u) != tag_name_length (%u)\",\n>> +\t\t\t (unsigned int) LAST_COMMAND_TAG, (unsigned int) tag_name_length);\n>> +\t\treturn false;\n>> +\t}\n> \n> These all seem to want to be static asserts.\n\nThis is all gone now, either to the perl script or to a 
StaticAssert, or to a bit of both.\n\n>> +\tfor (i = FIRST_COMMAND_TAG; i < LAST_COMMAND_TAG; i++)\n>> +\t{\n>> +\t\tfor (j = i+1; j < LAST_COMMAND_TAG; j++)\n>> +\t\t{\n>> +\t\t\tint cmp = strcmp(tag_names[i], tag_names[j]);\n>> +\t\t\tif (cmp == 0)\n>> +\t\t\t{\n>> +\t\t\t\telog(ERROR, \"Found duplicate tag_name: \\\"%s\\\"\",\n>> +\t\t\t\t\ttag_names[i]);\n>> +\t\t\t\treturn false;\n>> +\t\t\t}\n>> +\t\t\tif (cmp > 0)\n>> +\t\t\t{\n>> +\t\t\t\telog(ERROR, \"Found commandnames out of order: \\\"%s\\\" before \\\"%s\\\"\",\n>> +\t\t\t\t\ttag_names[i], tag_names[j]);\n>> +\t\t\t\treturn false;\n>> +\t\t\t}\n>> +\t\t}\n>> +\t}\n>> +\treturn true;\n>> +}\n> \n> And I think we could get rid of this with my earlier suggestions?\n\nThis is now handled by the perl script, also.\n\n>> +/*\n>> + * BEWARE: These are in sorted order, but ordered by their printed\n>> + * values in the tag_name list (see common/commandtag.c).\n>> + * In particular it matters because the sort ordering changes\n>> + * when you replace a space with an underscore. To wit:\n>> + *\n>> + * \"CREATE TABLE\"\n>> + * \"CREATE TABLE AS\"\n>> + * \"CREATE TABLESPACE\"\n>> + *\n>> + * but...\n>> + *\n>> + * CREATE_TABLE\n>> + * CREATE_TABLESPACE\n>> + * CREATE_TABLE_AS\n>> + *\n>> + * It also matters that COMMANDTAG_UNKNOWN is written \"???\".\n>> + *\n>> + * If you add a value here, add it in common/commandtag.c also, and\n>> + * be careful to get the ordering right. 
You can build with\n>> + * COMMANDTAG_CHECKING to have this automatically checked\n>> + * at runtime, but that adds considerable overhead, so do so sparingly.\n>> + */\n>> +typedef enum CommandTag\n>> +{\n> \n> This seems pretty darn nightmarish.\n\nWell, it does get automatically checked for you.\n\n>> +#define FIRST_COMMAND_TAG COMMANDTAG_UNKNOWN\n>> +\tCOMMANDTAG_UNKNOWN,\n>> +\tCOMMANDTAG_ALTER_ACCESS_METHOD,\n>> +\tCOMMANDTAG_ALTER_AGGREGATE,\n>> +\tCOMMANDTAG_ALTER_CAST,\n>> +\tCOMMANDTAG_ALTER_COLLATION,\n>> +\tCOMMANDTAG_ALTER_CONSTRAINT,\n>> +\tCOMMANDTAG_ALTER_CONVERSION,\n>> +\tCOMMANDTAG_ALTER_DATABASE,\n>> +\tCOMMANDTAG_ALTER_DEFAULT_PRIVILEGES,\n>> +\tCOMMANDTAG_ALTER_DOMAIN,\n>> [...]\n> \n> I'm a bit worried that this basically duplicates a good portion of NodeTag, without having otherwise much of a point?\n\nThere is not enough overlap between NodeTag and CommandTag for any obvious consolidation. Feel free to recommend something specific.\n\n>> From a70b0cadc1142e92b2354a0ca3cd47aaeb0c148e Mon Sep 17 00:00:00 2001\n>> From: Mark Dilger <mark.dilger@enterprisedb.com>\n>> Date: Tue, 4 Feb 2020 12:25:05 -0800\n>> Subject: [PATCH v2 2/3] Using a Bitmapset of tags rather than a string array.\n>> MIME-Version: 1.0\n>> Content-Type: text/plain; charset=UTF-8\n>> Content-Transfer-Encoding: 8bit\n>> \n>> EventTriggerCacheItem no longer holds an array of palloc’d tag strings\n>> in sorted order, but rather just a Bitmapset over the CommandTags. This\n>> makes the code a little simpler and easier to read, in my opinion. In\n>> filter_event_trigger, rather than running bsearch through a sorted array\n>> of strings, it just runs bms_is_member.\n>> ---\n> \n> It seems weird to add the bsearch just to remove it immediately again a\n> patch later. 
This probably should just go first?\n\nI still don’t know what this comment means.\n\n>> diff --git a/src/test/regress/sql/event_trigger.sql b/src/test/regress/sql/event_trigger.sql\n>> index 346168673d..cad02212ad 100644\n>> --- a/src/test/regress/sql/event_trigger.sql\n>> +++ b/src/test/regress/sql/event_trigger.sql\n>> @@ -10,6 +10,13 @@ BEGIN\n>> END\n>> $$ language plpgsql;\n>> \n>> +-- OK\n>> +create function test_event_trigger2() returns event_trigger as $$\n>> +BEGIN\n>> +\tRAISE NOTICE 'test_event_trigger2: % %', tg_event, tg_tag;\n>> +END\n>> +$$ LANGUAGE plpgsql;\n>> +\n>> -- should fail, event triggers cannot have declared arguments\n>> create function test_event_trigger_arg(name text)\n>> returns event_trigger as $$ BEGIN RETURN 1; END $$ language plpgsql;\n>> @@ -82,6 +89,783 @@ create event trigger regress_event_trigger2 on ddl_command_start\n>> -- OK\n>> comment on event trigger regress_event_trigger is 'test comment';\n>> \n>> +-- These are all unsupported\n>> +create event trigger regress_event_triger_NULL on ddl_command_start\n>> + when tag in ('')\n>> + execute procedure test_event_trigger2();\n>> +\n>> +create event trigger regress_event_triger_UNKNOWN on ddl_command_start\n>> + when tag in ('???')\n>> + execute procedure test_event_trigger2();\n>> +\n>> +create event trigger regress_event_trigger_ALTER_DATABASE on ddl_command_start\n>> + when tag in ('ALTER DATABASE')\n>> + execute procedure test_event_trigger2();\n> [...]\n> \n> There got to be a more maintainable way to write this.\n\nThis has all been removed from version 3 of the patch set.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 7 Feb 2020 09:36:36 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "On 2020-Feb-07, Mark Dilger wrote:\n\n> Andres,\n> \n> The previous patch set seemed to cause 
confusion, having separated\n> changes into multiple files. The latest patch, heavily influenced by\n> your review, is all in one file, attached.\n\nCool stuff.\n\nI think it is a little confused about what is source and what is generated.\nI'm not clear why commandtag.c is a C file at all; wouldn't it be\nsimpler to have it as a Data::Dumper file or some sort of Perl struct,\nso that it can be read easily by the Perl file? Similar to the\nsrc/include/catalog/pg_*.dat files. That script can then generate all\nthe needed .c and .h files, which are not going to be part of the source\ntree, and will always be in-sync and won't have the formatting\nstrictness about it. And you won't have the Martian syntax you had to\nuse in the commandtag.c file.\n\nAs for API, please don't shorten things such as SetQC, just use\nSetQueryCompletion. Perhaps call it SetCompletionTag or SetCommandTag?\n(I'm not sure about the name \"QueryCompletionData\"; maybe CommandTag or\nQueryTag would work better for that struct. There seems to be a lot of\neffort in separating QueryCompletion from CommandTag; is that really\nnecessary?) Lastly, we have a convention that we have a struct called\nFooData, and a typedef FooData *Foo, then use the latter in the API.\nWe don't adhere to that 100%, and some people dislike it, but I'd rather\nbe consistent and not be passing \"FooData *\" around; it's just noisier.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 11 Feb 2020 16:09:29 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "\n\n> On Feb 11, 2020, at 11:09 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Feb-07, Mark Dilger wrote:\n> \n>> Andres,\n>> \n>> The previous patch set seemed to cause confusion, having separated\n>> changes into multiple files.
The latest patch, heavily influenced by\n>> your review, is all in one file, attached.\n> \n> Cool stuff.\n\nThanks for the review!\n\n> I think is a little confused about what is source and what is generated.\n\nThe perl file generates nothing. It merely checks that the .h and .c files are valid and consistent with each other.\n\n> I'm not clear why commandtag.c is a C file at all; wouldn't it be\n> simpler to have it as a Data::Dumper file or some sort of Perl struct,\n> so that it can be read easily by the Perl file? Similar to the\n> src/include/catalog/pg_*.dat files. That script can then generate all\n> the needed .c and .h files, which are not going to be part of the source\n> tree, and will always be in-sync and won't have the formatting\n> strictness about it. And you won't have the Martian syntax you had to\n> use in the commandtag.c file.\n\nI thought about generating the files rather than merely checking them. I could see arguments both ways. I wasn’t sure whether there would be broad support for having yet another perl script generating source files, nor for the maintenance burden of having to do that on all platforms. Having a perl script that merely sanity checks the source files has the advantage that there is no requirement for it to function on all platforms. There’s not even a requirement for it to be committed to the tree, since you could also argue that the maintenance burden of the script outweighs the burden of getting the source files right by hand.\n\n> As for API, please don't shorten things such as SetQC, just use\n> SetQueryCompletion. 
Perhaps call it SetCompletionTag or SetCommandTag?.\n> (I'm not sure about the name \"QueryCompletionData\"; maybe CommandTag or\n> QueryTag would work better for that struct.\n\nI am happy to rename it as SetQueryCompletion.\n\n> There seems to be a lot of\n> effort in separating QueryCompletion from CommandTag; is that really\n> necessary?)\n\nFor some code paths, prior to this patch, the commandTag gets changed before returning, and I’m not just talking about the change where the rowcount gets written into the commandTag string. I have a work-in-progress patch to provide system views to track the number of commands executed of each type, and that patch includes the ability to distinguish between what the command started as and what it completed as, so I do want to keep those concepts separate. I rejected the “SetCommandTag” naming suggestion above because we’re really setting information about the completion (what it finished as) and not the command (what it started as). I rejected the “SetCompletionTag” naming because it’s not just the tag that is being set, but both the tag and the row count. I am happy with “SetQueryCompletion” because that naming is consistent with setting the pair of values.\n\n> Lastly, we have a convention that we have a struct called\n> FooData, and a typedef FooData *Foo, then use the latter in the API.\n> We don't adhere to that 100%, and some people dislike it, but I'd rather\n> be consistent and not be passing \"FooData *\" around; it's just noisier.\n\nI’m familiar with the convention, and don’t like it, so I’ll have to look at a better way of naming this. I specifically don’t like it because it makes a mess of using the const qualifier.\n\nOnce again, thanks for the review! 
I will work to get another version of this patch posted around the time I post (separately) the command stats patch.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 11 Feb 2020 12:37:14 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "On 2020-Feb-11, Mark Dilger wrote:\n\n> I thought about generating the files rather than merely checking them.\n> I could see arguments both ways. I wasn’t sure whether there would be\n> broad support for having yet another perl script generating source\n> files, nor for the maintenance burden of having to do that on all\n> platforms. Having a perl script that merely sanity checks the source\n> files has the advantage that there is no requirement for it to\n> function on all platforms. There’s not even a requirement for it to\n> be committed to the tree, since you could also argue that the\n> maintenance burden of the script outweighs the burden of getting the\n> source files right by hand.\n\nNo thanks.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 11 Feb 2020 17:50:47 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "\n\n> On Feb 11, 2020, at 12:50 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Feb-11, Mark Dilger wrote:\n> \n>> I thought about generating the files rather than merely checking them.\n>> I could see arguments both ways. I wasn’t sure whether there would be\n>> broad support for having yet another perl script generating source\n>> files, nor for the maintenance burden of having to do that on all\n>> platforms. 
Having a perl script that merely sanity checks the source\n>> files has the advantage that there is no requirement for it to\n>> function on all platforms. There’s not even a requirement for it to\n>> be committed to the tree, since you could also argue that the\n>> maintenance burden of the script outweighs the burden of getting the\n>> source files right by hand.\n> \n> No thanks.\n\nI’m not sure which option you are voting for:\n\n(Option #1) Have the perl script generate the .c and .h file from a .dat file\n\n(Option #2) Have the perl script validate but not generate the .c and .h files\n\n(Option #3) Have no perl script, with all burden on the programmer to get the .c and .h files right by hand.\n\nI think you’re voting against #3, and I’m guessing you’re voting for #1, but I’m not certain.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 11 Feb 2020 12:54:11 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "On 2020-Feb-11, Mark Dilger wrote:\n\n> > No thanks.\n> \n> I’m not sure which option you are voting for:\n> \n> (Option #1) Have the perl script generate the .c and .h file from a .dat file\n> \n> (Option #2) Have the perl script validate but not generate the .c and .h files\n> \n> (Option #3) Have no perl script, with all burden on the programmer to get the .c and .h files right by hand.\n> \n> I think you’re voting against #3, and I’m guessing you’re voting for #1, but I’m not certain.\n\nI was voting against #2 (burden the programmer with consistency checks\nthat must be fixed by hand, without actually doing the programmatically-\ndoable work), but I don't like #3 either. 
I do like #1.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 11 Feb 2020 18:02:46 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "\n\n> On Feb 11, 2020, at 1:02 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Feb-11, Mark Dilger wrote:\n> \n>>> No thanks.\n>> \n>> I’m not sure which option you are voting for:\n>> \n>> (Option #1) Have the perl script generate the .c and .h file from a .dat file\n>> \n>> (Option #2) Have the perl script validate but not generate the .c and .h files\n>> \n>> (Option #3) Have no perl script, with all burden on the programmer to get the .c and .h files right by hand.\n>> \n>> I think you’re voting against #3, and I’m guessing you’re voting for #1, but I’m not certain.\n> \n> I was voting against #2 (burden the programmer with consistency checks\n> that must be fixed by hand, without actually doing the programmatically-\n> doable work), but I don't like #3 either. I do like #1.\n\nOption #1 works for me. 
If I don’t see any contrary votes before I get back to this patch, I’ll implement it that way for the next version.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 11 Feb 2020 13:05:12 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "> On Feb 11, 2020, at 1:05 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n>> On Feb 11, 2020, at 1:02 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> \n>> On 2020-Feb-11, Mark Dilger wrote:\n>> \n>>>> No thanks.\n>>> \n>>> I’m not sure which option you are voting for:\n>>> \n>>> (Option #1) Have the perl script generate the .c and .h file from a .dat file\n>>> \n>>> (Option #2) Have the perl script validate but not generate the .c and .h files\n>>> \n>>> (Option #3) Have no perl script, with all burden on the programmer to get the .c and .h files right by hand.\n>>> \n>>> I think you’re voting against #3, and I’m guessing you’re voting for #1, but I’m not certain.\n>> \n>> I was voting against #2 (burden the programmer with consistency checks\n>> that must be fixed by hand, without actually doing the programmatically-\n>> doable work), but I don't like #3 either. I do like #1.\n> \n> Option #1 works for me. If I don’t see any contrary votes before I get back to this patch, I’ll implement it that way for the next version.\n\nWhile investigating how best to implement option #1, I took a look at how Catalog.pm does it.\n\nCatalog.pm reads data files and eval()s chunks of them to vivify perl data.\n\n # We're treating the input line as a piece of Perl, so we\n # need to use string eval here. Tell perlcritic we know what\n # we're doing.\n eval '$hash_ref = ' . 
$_; ## no critic (ProhibitStringyEval)\n\nThis would only make sense to me if the string held in $_ had already been checked for safety, but Catalog.pm does very little to verify that the string is safe to eval. The assumption here, so far as I can infer, is that we don’t embed anything dangerous in our .dat files, so this should be ok. That may be true for the moment, but I can imagine a day when we start embedding perl functions as quoted text inside a data file, or shell commands as text which look enough like perl for eval() to be able to execute them. Developers who edit these .dat files and mess up the quoting, and then rerun ‘make’ to get the new .c and .h files generated, may not like the side effects. Perhaps I’m being overly paranoid…. \n\nRather than add more code generation logic based on the design of Catalog.pm, I wrote a perl based data file parser that parses .dat files and returns vivified perl data, as Catalog.pm does, but with much stricter parsing logic to make certain nothing dangerous gets eval()ed. I put the new module in DataFile.pm. The commandtag data has been consolidated into a single .dat file. A new perl script, gencommandtag.pl, parses the commandtag.dat file and generates the .c and .h files. So far, only gencommandtag.pl uses DataFile.pm, but I’ve checked that it can parse all the .dat files currently in the source tree.\n\nThe new parser is more flexible about the structure of the data, which seems good to me for making it easier to add or modify data files in the future. The new parser does not yet have a means of hacking up the data to add autogenerated fields and such that Catalog.pm does, but I think a more clean break between parsing and autovivifying fields would be good anyway. If I get generally favorable reviews of DataFile.pm, I might go refactor Catalog.pm. For now, I’m just leaving Catalog.pm alone. 
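To make the contrast with the stringy eval concrete, here is a minimal, illustrative sketch of the stricter style of parsing (the whitelist pattern and field names are simplified stand-ins, not the actual DataFile.pm code): each input line must match a narrow pattern of key => 'value' pairs, and the hash is vivified directly, so arbitrary perl in the input dies instead of executing:

```perl
use strict;
use warnings;

# Accept only lines made up of simple  key => 'value'  pairs.
# Anything that does not match the whitelist pattern is rejected
# outright instead of being handed to a stringy eval().
sub parse_line
{
	my ($line) = @_;
	my %hash;
	my $pair = qr/(\w+)\s*=>\s*'([^']*)'/;

	die "unsafe or malformed line: $line\n"
	  unless $line =~ /^\s*(?:$pair\s*,?\s*)+$/;

	while ($line =~ /$pair/g)
	{
		$hash{$1} = $2;
	}
	return \%hash;
}

my $row = parse_line(q{tag => 'ALTER TABLE', display_rowcount => 'false'});
print "$row->{tag}\n";    # ALTER TABLE

# A line containing arbitrary code is rejected rather than executed:
eval { parse_line(q{tag => 'x', evil => system('date')}) };
print "rejected\n" if $@;
```

The point of the design is that the failure mode for bad input is a parse error, not code execution.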
\n\n> That script can then generate all\n> the needed .c and .h files, which are not going to be part of the source\n> tree, and will always be in-sync and won't have the formatting\n> strictness about it. And you won't have the Martian syntax you had to\n> use in the commandtag.c file.\n\nI still have the “Martian syntax”, though now it is generated by the perl script. I can get rid of it, but I think Andres liked the Martian stuff.\n\n> We don't adhere to that 100%, and some people dislike it, but I'd rather\n> be consistent and not be passing \"FooData *\" around; it's just noisier.\n\nI renamed QueryCompletionData to just QueryCompletion, and I’m passing pointers to that.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 18 Feb 2020 18:40:34 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "Hi Mark,\n\nOn Wed, Feb 19, 2020 at 10:40 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> This would only make sense to me if the string held in $_ had already been checked for safety, but Catalog.pm does very little to verify that the string is safe to eval. The assumption here, so far as I can infer, is that we don’t embed anything dangerous in our .dat files, so this should be ok. That may be true for the moment, but I can imagine a day when we start embedding perl functions as quoted text inside a data file, or shell commands as text which look enough like perl for eval() to be able to execute them. Developers who edit these .dat files and mess up the quoting, and then rerun ‘make’ to get the new .c and .h files generated, may not like the side effects. Perhaps I’m being overly paranoid….\n\nThe use case for that seems slim. 
However, at a brief glance your\nmodule seems more robust in other ways.\n\n> Rather than add more code generation logic based on the design of Catalog.pm, I wrote a perl based data file parser that parses .dat files and returns vivified perl data, as Catalog.pm does, but with much stricter parsing logic to make certain nothing dangerous gets eval()ed. I put the new module in DataFile.pm.\n> [...]\n> The new parser is more flexible about the structure of the data, which seems good to me for making it easier to add or modify data files in the future. The new parser does not yet have a means of hacking up the data to add autogenerated fields and such that Catalog.pm does, but I think a more clean break between parsing and autovivifying fields would be good anyway.\n\nSeparation of concerns sounds like a good idea, but I've not fully\nthought it through. For one advantage, I think it might be nicer to\nhave indexing.dat and toasting.dat instead of having to dig the info\nout of C macros. This was evident while recently experimenting with\ngenerating catcache control data.\n\nAs for the patch, I have not done a full review, but I have some\ncomments based on a light read-through:\n\nutils/Makefile:\n\n+# location of commandtag.dat\n+headerdir = $(top_srcdir)/src/include/utils\n\nThis variable name is too generic for what the comment says it is. A\nbetter abstraction, if we want one, would be the full path to the\ncommandtag input file. The other script invocations in this Makefile\ndon't do it this way, but that's a separate patch.\n\n+# location to write generated headers\n+sourcedir = $(top_srcdir)/src/backend/utils\n\nCalling the output the source is bound to confuse people. The comment\nimplies all generated headers, not just the ones introduced by the\npatch. I would just output to the current directory (i.e. have an\n--output option with a default empty string). 
Also, if we want to\noutput somewhere else, I would imagine it'd be under the top builddir,\nnot srcdir.\n\n+$(PERL) -I $(top_srcdir)/src/include/utils $<\n--headerdir=$(headerdir) --sourcedir=$(sourcedir)\n--inputfile=$(headerdir)/commandtag.dat\n\n1. headerdir is entirely unused by the script\n2. We can default to working dir for the output as mentioned above\n3. -I $(top_srcdir)/src/include/utils is supposed to point to the dir\ncontaining DataFile.pm, but since gencommandtag.pl has \"use lib...\"\nit's probably not needed here. I don't recall why we keep the \"-I\"\nelsewhere. (ditto in Solution.pm)\n\nI'm thinking it would look something like this:\n\n+$(PERL) $< --inputfile=$(top_srcdir)/src/include/utils/commandtag.dat\n\n--\nutils/misc/Makefile\n\n+all: distprep\n+\n # Note: guc-file.c is not deleted by 'make clean',\n # since we want to ship it in distribution tarballs.\n clean:\n @rm -f lex.yy.c\n+\n+maintainer-clean: clean\n\nSeems non-functional.\n\n--\nDataFiles.pm\n\nI haven't studied this in detail, but I'll suggest that if this meant\nto have wider application, maybe it should live in src/tools ?\n\nI'm not familiar with using different IO routines depending on the OS\n-- what's the benefit of that?\n\n--\ngencommandtag.pl\n\nslurp_without_comments() is unused.\n\nsanity_check_data() seems longer than the main body of the script\n(minus header boilerplate), and I wonder if we can pare it down some.\nFor one, I have difficulty imagining anyone would accidentally type an\nunprintable or non-ascii character in a command tag and somehow not\nrealize it. 
For another, duplicating checks that were done earlier\nseems like a maintenance headache.\n\ndataerror() is defined near the top, but other functions are defined\nat the bottom.\n\n+# Generate all output internally before outputting anything, to avoid\n+# partially overwriting generated files under error conditions\n\nMy personal preference is, having this as a design requirement\nsacrifices readability for unclear gain, especially since a \"chunk\"\nalso includes things like header boilerplate. That said, the script is\nalso short enough that it doesn't make a huge difference either way.\nSpeaking of boilerplate, it's better for readability to separate that\nfrom actual code such as:\n\ntypedef enum CommandTag\n{\n#define FIRST_COMMANDTAG COMMANDTAG_$sorted[0]->{taglabel})\n\n--\ntcop/dest.c\n\n+ * We no longer display LastOid, but to preserve the wire protocol,\n+ * we write InvalidOid where the LastOid used to be written. For\n+ * efficiency in the snprintf(), hard-code InvalidOid as zero.\n\nHmm, is hard-coding zero going to make any difference here?\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 19 Feb 2020 19:31:20 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "> On Feb 19, 2020, at 3:31 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> \n> Hi Mark,\n\nHi John, thanks for reviewing!\n\n> On Wed, Feb 19, 2020 at 10:40 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> This would only make sense to me if the string held in $_ had already been checked for safety, but Catalog.pm does very little to verify that the string is safe to eval. The assumption here, so far as I can infer, is that we don’t embed anything dangerous in our .dat files, so this should be ok. 
That may be true for the moment, but I can imagine a day when we start embedding perl functions as quoted text inside a data file, or shell commands as text which look enough like perl for eval() to be able to execute them. Developers who edit these .dat files and mess up the quoting, and then rerun ‘make’ to get the new .c and .h files generated, may not like the side effects. Perhaps I’m being overly paranoid….\n> \n> The use case for that seems slim. However, at a brief glance your\n> module seems more robust in other ways.\n> \n>> Rather than add more code generation logic based on the design of Catalog.pm, I wrote a perl based data file parser that parses .dat files and returns vivified perl data, as Catalog.pm does, but with much stricter parsing logic to make certain nothing dangerous gets eval()ed. I put the new module in DataFile.pm.\n>> [...]\n>> The new parser is more flexible about the structure of the data, which seems good to me for making it easier to add or modify data files in the future. The new parser does not yet have a means of hacking up the data to add autogenerated fields and such that Catalog.pm does, but I think a more clean break between parsing and autovivifying fields would be good anyway.\n> \n> Separation of concerns sounds like a good idea, but I've not fully\n> thought it through. For one advantage, I think it might be nicer to\n> have indexing.dat and toasting.dat instead of having to dig the info\n> out of C macros. This was evident while recently experimenting with\n> generating catcache control data.\n\nI guess you mean macros DECLARE_UNIQUE_INDEX and DECLARE_TOAST. I don’t mind converting that to .dat files, though I’m mindful of Tom’s concern expressed early in this thread about the amount of code churn. Is there sufficient demand for refactoring this stuff? 
There are more reasons in the conversation below to refactor the perl modules and code generation scripts.\n\n> As for the patch, I have not done a full review, but I have some\n> comments based on a light read-through:\n> \n> utils/Makefile:\n> \n> +# location of commandtag.dat\n> +headerdir = $(top_srcdir)/src/include/utils\n> \n> This variable name is too generic for what the comment says it is. A\n> better abstraction, if we want one, would be the full path to the\n> commandtag input file. The other script invocations in this Makefile\n> don't do it this way, but that's a separate patch.\n> \n> +# location to write generated headers\n> +sourcedir = $(top_srcdir)/src/backend/utils\n> \n> Calling the output the source is bound to confuse people. The comment\n> implies all generated headers, not just the ones introduced by the\n> patch. I would just output to the current directory (i.e. have an\n> --output option with a default empty string). Also, if we want to\n> output somewhere else, I would imagine it'd be under the top builddir,\n> not srcdir.\n> \n> +$(PERL) -I $(top_srcdir)/src/include/utils $<\n> --headerdir=$(headerdir) --sourcedir=$(sourcedir)\n> --inputfile=$(headerdir)/commandtag.dat\n> \n> 1. headerdir is entirely unused by the script\n> 2. We can default to working dir for the output as mentioned above\n> 3. -I $(top_srcdir)/src/include/utils is supposed to point to the dir\n> containing DataFile.pm, but since gencommandtag.pl has \"use lib...\"\n> it's probably not needed here. I don't recall why we keep the \"-I\"\n> elsewhere. (ditto in Solution.pm)\n> \n> I'm thinking it would look something like this:\n> \n> +$(PERL) $< --inputfile=$(top_srcdir)/src/include/utils/commandtag.dat\\\n\nI have taken all this advice in v5 of the patch. 
--inputfile and --outputdir (previously named --sourcedir) are now optional with the defaults as you suggested.\n\n> --\n> utils/misc/Makefile\n> \n> +all: distprep\n> +\n> # Note: guc-file.c is not deleted by 'make clean',\n> # since we want to ship it in distribution tarballs.\n> clean:\n> @rm -f lex.yy.c\n> +\n> +maintainer-clean: clean\n> \n> Seems non-functional.\n\nYeah, I also had an unnecessary addition to .gitignore in that directory. I had originally placed the commandtag stuff here before moving it one directory up. Thanks for catching that.\n\n> --\n> DataFiles.pm\n> \n> I haven't studied this in detail, but I'll suggest that if this meant\n> to have wider application, maybe it should live in src/tools ?\n\nWe don't seem to have a standard place for perl modules. src/test/perl has some that are specifically for tap testing, and src/backend/catalog has some for catalog data file processing. I put DataFile.pm in src/backend/catalog because that's where most data file processing currently is located. src/tools has PerfectHash.pm, and a bunch of Windows-specific modules under src/tools/msvc.\n\n> I'm not familiar with using different IO routines depending on the OS\n> -- what's the benefit of that?\n\nI think you are talking about the slurp_file routine. That came directly from the TestLib.pm module. I don't have enough perl-on-windows experience to comment about why it does things that way. I was reluctant to have DataFile.pm 'use TestLib', since DataFile has absolutely nothing to do with regression testing. I don't like copying the function, either, though I chose that as the lesser evil. Which is more evil is debatable.\n\nsrc/test/perl/ contains SimpleTee.pm and RecursiveCopy.pm, neither of which contain functionality limited to just testing. I think they could be moved to src/tools. src/test/perl/TestLib.pm contains a mixture of testing-specific functions and more general-purpose functions. 
For instance, TestLib.pm contains functions to read in a file or directory (slurp_file(filepath) and slurp_dir(dirpath), respectively). I think we should have just one implementation of those in just one place. Neither TestLib nor DataFile seem appropriate, nor does src/test/perl seem right. I checked whether Perl ships with core module support for this and didn't find anything. There is a cpan module named File::Slurp, but it is not a core module so far as I can tell, and it does more than we want.\n\nShould I submit a separate patch refactoring the location of perl modules and functions of general interest into src/tools? src/tools/perl?\n\nI am not changing DataFile.pm's duplicate copy of slurp_file in v5 of the patch, as I don't yet know the best way to approach the problem. I expect there will have to be a v6 once this has been adequately debated.\n\n> --\n> gencommandtag.pl\n> \n> slurp_without_comments() is unused.\n\nRight. An earlier version of gencommandtag.pl didn't use DataFile.pm, and I neglected to remove this function when I transitioned to using DataFile.pm. Thanks for noticing!\n\n> sanity_check_data() seems longer than the main body of the script\n> (minus header boilerplate), and I wonder if we can pare it down some.\n> For one, I have difficulty imagining anyone would accidentally type an\n> unprintable or non-ascii character in a command tag and somehow not\n> realize it.\n\nI'm uncertain about that. There is logic in EndCommand in tcop/dest.c that specifically warns that no encoding conversion will be performed due to the assumption that command tags contain only 7-bit ascii. I think that's a perfectly reasonable assumption in the C-code, but it needs to be checked by gencommandtag.pl because the bugs that might ensue from inserting an accent character or whatever could be subtle enough to not be caught right away. 
Such mistakes only get easier as time goes by, as the tendency for editors to change your quotes into \"smart quotes\" and such gets more common, and potentially as the assumption that PostgreSQL has been internationalized gets more common. Hopefully, we're moving more and more towards supporting non-ascii in more and more places. It might be less obvious to a contributor some years hence that they cannot stick an accented character into a command tag. (Compare, for example, that it used to be widely accepted that you shouldn't stick spaces and hyphens into file names, but now a fair number of programmers will do that without flinching.)\n\nAs for checking for unprintable characters, the case is weaker. I'm not too motivated to remove the check, though.\n\n> For another, duplicating checks that were done earlier\n> seems like a maintenance headache.\n\nHmmm. As long as gencommandtag.pl is the only user of DataFile.pm, I'm inclined to agree that we're double-checking some things. The code comments I wrote certainly say so. But if DataFile.pm gets wider adoption, it might start to accept more varied input, and then gencommandtag.pl will need to assert its own set of validation. There is also the distinction between checking that the input data file meets the syntax requirements of the *parser* vs. making certain that the vivified perl structures meet the semantic requirements of the *code generator*. 
You may at this point be able to assert that meeting the first guarantees meeting the second, but that can't be expected to hold indefinitely.\n\nIt would be easier to decide these matters if we knew whether commandtag logic will ever be removed and whether DataFile will ever gain wider adoption for code generation purposes....\n\n> dataerror() is defined near the top, but other functions are defined\n> at the bottom.\n\nMoved.\n\n> +# Generate all output internally before outputting anything, to avoid\n> +# partially overwriting generated files under error conditions\n> \n> My personal preference is, having this as a design requirement\n> sacrifices readability for unclear gain, especially since a \"chunk\"\n> also includes things like header boilerplate. That said, the script is\n> also short enough that it doesn't make a huge difference either way.\n\nCatalog.pm writes a temporary file and then moves it to the final file name at the end. DataFile buffers the output and only writes it after all the code generation has succeeded. There is no principled basis for these two modules tackling the same problem in two different ways. Perhaps that's another argument for pulling this kind of functionality out of random places and consolidating it in one or more modules in src/tools.\n\n> Speaking of boilerplate, it's better for readability to separate that\n> from actual code such as:\n> \n> typedef enum CommandTag\n> {\n> #define FIRST_COMMANDTAG COMMANDTAG_$sorted[0]->{taglabel})\n\nGood idea. While I was doing this, I also consolidated the duplicated boilerplate into just one function. I think this function, too, should go in just one perl module somewhere. See boilerplate_header() for details.\n\n> --\n> tcop/dest.c\n> \n> + * We no longer display LastOid, but to preserve the wire protocol,\n> + * we write InvalidOid where the LastOid used to be written. 
For\n> + * efficiency in the snprintf(), hard-code InvalidOid as zero.\n> \n> Hmm, is hard-coding zero going to make any difference here?\n\nPart of the value of refactoring the commandtag logic is to make it easier to remove the whole ugly mess later. Having snprintf write the Oid into the string obfuscates the stupidity of what is really being done here. Putting the zero directly into the format string makes it clearer, to my eyes, that nothing clever is afoot.\n\nI have removed the sentence about efficiency. Thanks for mentioning it.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 19 Feb 2020 13:16:30 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "On Thu, Feb 20, 2020 at 5:16 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On Feb 19, 2020, at 3:31 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> > thought it through. For one advantage, I think it might be nicer to\n> > have indexing.dat and toasting.dat instead of having to dig the info\n> > out of C macros. This was evident while recently experimenting with\n> > generating catcache control data.\n>\n> I guess you mean macros DECLARE_UNIQUE_INDEX and DECLARE_TOAST. I don’t mind converting that to .dat files, though I’m mindful of Tom’s concern expressed early in this thread about the amount of code churn. Is there sufficient demand for refactoring this stuff? There are more reasons in the conversation below to refactor the perl modules and code generation scripts.\n\nYes, I was referring to those macros, but I did not mean to imply you\nshould convert them, either now or ever. I was just thinking out loud\nabout the possibility. 
:-)\n\nThat said, if we ever want Catalog.pm to delegate vivification to\nDataFile.pm, there will eventually need to be a way to optionally\npreserve comments and whitespace.\n\n> I have taken all this advice in v5 of the patch. --inputfile and --outputdir (previously named --sourcedir) are now optional with the defaults as you suggested.\n\n+my $inputfile = '../../include/utils/commandtag.dat';\n\nI think we should skip the default for the input file, since the\nrelative path is fragile and every such script I've seen requires it\nto be passed in.\n\n> > DataFiles.pm\n> > [...]\n> > I'm not familiar with using different IO routines depending on the OS\n> > -- what's the benefit of that?\n>\n> I think you are talking about the slurp_file routine. That came directly from the TestLib.pm module. I don't have enough perl-on-windows experience to comment about why it does things that way.\n\nYes, that one, sorry I wasn't explicit.\n\n> Should I submit a separate patch refactoring the location of perl modules and functions of general interest into src/tools? src/tools/perl?\n\nWe may have accumulated enough disparate functionality by now to\nconsider this, but it sounds like PG14 material in any case.\n\n> I expect there will have to be a v6 once this has been adequately debated.\n\nSo far I haven't heard any justification for why it should remain in\nsrc/backend/catalog, when it has nothing to do with catalogs. 
We don't\nhave to have a standard other-place for there to be a better\nother-place.\n\n> > --\n> > gencommandtag.pl\n\n> > sanity_check_data() seems longer than the main body of the script\n> > (minus header boilerplate), and I wonder if we can pare it down some.\n> > For one, I have difficulty imagining anyone would accidentally type an\n> > unprintable or non-ascii character in a command tag and somehow not\n> > realize it.\n> > [...]\n> > For another, duplicating checks that were done earlier\n> > seems like a maintenance headache.\n>\n> [ detailed rebuttals about the above points ]\n\nThose were just examples that stuck out at me, so even if I were\nconvinced to your side on those, my broader point was: the sanity\ncheck seems way over-engineered for something that spits out an enum\nand an array. Maybe I'm not imaginative enough. I found it hard to\nread in any case.\n\n> Catalog.pm writes a temporary file and then moves it to the final file name at the end. DataFile buffers the output and only writes it after all the code generation has succeeded. There is no principled basis for these two modules tackling the same problem in two different ways.\n\nNot the same problem. The temp files were originally for parallel Make\nhazards, and now to prevent large portions of the backend from being\nrebuilt. I actually think partially written files can be helpful for\ndebugging, so I don't even think it's a problem. But as I said, it\ndoesn't matter terribly much either way.\n\n> > Speaking of boilerplate, it's better for readability to separate that\n> > from actual code such as:\n> >\n> > typedef enum CommandTag\n> > {\n> > #define FIRST_COMMANDTAG COMMANDTAG_$sorted[0]->{taglabel})\n>\n> Good idea. While I was doing this, I also consolidated the duplicated boilerplate into just one function. I think this function, too, should go in just one perl module somewhere. 
See boilerplate_header() for details.\n\nI like this a lot.\n\nWhile thinking, I wonder if it makes sense to have a CodeGen module,\nwhich would contain e.g. the new ParseData function,\nFindDefinedSymbol, and functions for writing boilerplate.\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 21 Feb 2020 10:52:08 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "Thinking about this some more, would it be possible to treat these\nlike we do parser/kwlist.h? Something like this:\n\ncommandtag_list.h:\nPG_COMMANDTAG(ALTER_ACCESS_METHOD, \"ALTER ACCESS METHOD\", true, false,\nfalse, false)\n...\n\nthen, just:\n\n#define PG_COMMANDTAG(taglabel, tagname, event_trigger, table_rewrite,\ndisplay_rowcount, display_oid) label,\n\ntypedef enum CommandTag\n{\n#include \"commandtag_list.h\"\n}\n\n#undef PG_COMMANDTAG\n\n...and then:\n\n#define PG_COMMANDTAG(taglabel, tagname, event_trigger, table_rewrite,\ndisplay_rowcount, display_oid) \\\n{ tagname, event_trigger, table_rewrite, display_rowcount, display_oid },\n\nconst CommandTagBehavior tag_behavior[] =\n{\n#include \"commandtag_list.h\"\n}\n\n#undef PG_COMMANDTAG\n\nI'm hand-waving a bit, and it doesn't have the flexibility of a .dat\nfile, but it's a whole lot simpler.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 21 Feb 2020 16:53:27 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "On 2020-Feb-21, John Naylor wrote:\n\n> Thinking about this some more, would it be possible to treat these\n> like we do parser/kwlist.h? 
Something like this:\n> \n> commandtag_list.h:\n> PG_COMMANDTAG(ALTER_ACCESS_METHOD, \"ALTER ACCESS METHOD\", true, false,\n> false, false)\n> ...\n\nI liked this idea, so I'm halfway on it now.\n\n> I'm hand-waving a bit, and it doesn't have the flexibility of a .dat\n> file, but it's a whole lot simpler.\n\nYeah, I for one don't want to maintain the proposed DataFile.pm.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 28 Feb 2020 17:54:16 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "On 2020-Feb-28, Alvaro Herrera wrote:\n\n> On 2020-Feb-21, John Naylor wrote:\n> \n> > Thinking about this some more, would it be possible to treat these\n> > like we do parser/kwlist.h? Something like this:\n> > \n> > commandtag_list.h:\n> > PG_COMMANDTAG(ALTER_ACCESS_METHOD, \"ALTER ACCESS METHOD\", true, false,\n> > false, false)\n> > ...\n> \n> I liked this idea, so I'm halfway on it now.\n\nHere.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 28 Feb 2020 19:40:30 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "I just realized that we could rename command_tag_display_last_oid() to\nsomething like command_tag_print_a_useless_zero_for_historical_reasons() \nand nothing would be lost.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 28 Feb 2020 19:57:24 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I just 
realized that we could rename command_tag_display_last_oid() to\n> something like command_tag_print_a_useless_zero_for_historical_reasons() \n> and nothing would be lost.\n\nIs there a way to drop that logic altogether by making the tagname string\nbe \"INSERT 0\" for the INSERT case? Or would the zero bleed into other\nplaces where we don't want it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Feb 2020 18:05:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "On 2020-Feb-28, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > I just realized that we could rename command_tag_display_last_oid() to\n> > something like command_tag_print_a_useless_zero_for_historical_reasons() \n> > and nothing would be lost.\n> \n> Is there a way to drop that logic altogether by making the tagname string\n> be \"INSERT 0\" for the INSERT case? Or would the zero bleed into other\n> places where we don't want it?\n\nHmm, interesting thought. But yeah, it would show up in ps display:\n\n\t\tcommandTag = CreateCommandTag(parsetree->stmt);\n\n\t\tset_ps_display(GetCommandTagName(commandTag), false);\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 28 Feb 2020 20:24:17 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "\n\n> On Feb 28, 2020, at 3:05 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> I just realized that we could rename command_tag_display_last_oid() to\n>> something like command_tag_print_a_useless_zero_for_historical_reasons() \n>> and nothing would be lost.\n> \n> Is there a way to drop that logic altogether by making the tagname string\n> be \"INSERT 0\" for the INSERT case? 
Or would the zero bleed into other\n> places where we don't want it?\n\nIn general, I don't think we want to increase the number of distinct tags. Which command you finished running and whether you want a rowcount and/or lastoid are orthogonal issues. We already have problems with there being different commandtags for different versions of morally the same commands. Take for example \"SELECT FOR KEY SHARE\" vs. \"SELECT FOR NO KEY UPDATE\" vs. \"SELECT FOR SHARE\" vs. \"SELECT FOR UPDATE\".\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 28 Feb 2020 16:26:13 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Feb 28, 2020, at 3:05 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Is there a way to drop that logic altogether by making the tagname string\n>> be \"INSERT 0\" for the INSERT case? Or would the zero bleed into other\n>> places where we don't want it?\n\n> In general, I don't think we want to increase the number of distinct\n> tags. 
Which command you finished running and whether you want a\n> rowcount and/or lastoid are orthogonal issues.\n\nWell, my thought is that last_oid is gone and it isn't ever coming back.\nSo the less code we use supporting a dead feature, the better.\n\nIf we can't remove the special case in EndCommand() altogether, I'd be\ninclined to hard-code it as \"if (tag == CMDTAG_INSERT ...\" rather than\nexpend infrastructure on treating last_oid as a live option for commands\nto have.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Feb 2020 20:42:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "\n\n> On Feb 28, 2020, at 5:42 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>> On Feb 28, 2020, at 3:05 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Is there a way to drop that logic altogether by making the tagname string\n>>> be \"INSERT 0\" for the INSERT case? Or would the zero bleed into other\n>>> places where we don't want it?\n> \n>> In general, I don't think we want to increase the number of distinct\n>> tags. Which command you finished running and whether you want a\n>> rowcount and/or lastoid are orthogonal issues.\n> \n> Well, my thought is that last_oid is gone and it isn't ever coming back.\n> So the less code we use supporting a dead feature, the better.\n> \n> If we can't remove the special case in EndCommand() altogether, I'd be\n> inclined to hard-code it as \"if (tag == CMDTAG_INSERT ...\" rather than\n> expend infrastructure on treating last_oid as a live option for commands\n> to have.\n\nYou may want to think about the embedding of InvalidOid into the EndCommand output differently from how you think about the embedding of the rowcount into the EndCommand output, but my preference is to treat these issues the same and make a strong distinction between the commandtag and the embedded oid and/or rowcount. 
It's hard to say how many future features would be crippled by having the embedded InvalidOid in the commandtag, but as an example *right now* in the works, we have a feature to count how many commands of a given type have been executed. It stands to reason that feature, whether accepted in its current form or refactored, would not want to show users a pg_stats table like this:\n\n cnt command\n ---- -------------\n 5 INSERT 0\n 37 SELECT\n\t\nWhat the heck is the zero doing after the INSERT? That's the hardcoded InvalidOid that you are apparently arguing for. You could get around that by having the pg_sql_stats patch have its own separate set of command tag strings, but why would we intentionally design that sort of duplication into the solution?\n\nAs for hardcoding the behavior of whether to embed a rowcount in the output from EndCommand; In src/backend/replication/walsender.c, exec_replication_command() returns \"SELECT\" from EndCommand, and not \"SELECT $rowcount\" like everywhere else. The patch as submitted does not change behavior. It only refactors the code while preserving the current behavior. So we would have to agree that the patch can change how exec_replication_command() behaves and start embedding a rowcount there, too, if we want to make SELECT behave the same everywhere.\n\nThere is another problem, though, which is that if we're hoping to eventually abate this historical behavior and stop embedding InvalidOid and/or rowcount in the commandtag returned from EndCommand, it might be necessary (for backward compatibility with clients) to do that incrementally, in which case we still need the distinction between commandtags and formats to exist in the code. How else can you say that, for example, in the next rev of the protocol that we're not going to embed InvalidOid anymore, but we will continue to return it for clients who connect via the older protocol? 
What if the next rev of the protocol still returns rowcount, but in a way that doesn't require the clients to implement (or link to) a parser that extracts the rowcount by parsing a string?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sat, 29 Feb 2020 10:12:23 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "On 2020-Feb-29, Mark Dilger wrote:\n\n> You may want to think about the embedding of InvalidOid into the EndCommand output differently from how you think about the embedding of the rowcount into the EndCommand output, but my preference is to treat these issues the same and make a strong distinction between the commandtag and the embedded oid and/or rowcount. It's hard to say how many future features would be crippled by having the embedded InvalidOid in the commandtag, but as an example *right now* in the works, we have a feature to count how many commands of a given type have been executed. It stands to reason that feature, whether accepted in its current form or refactored, would not want to show users a pg_stats table like this:\n> \n> cnt command\n> ---- -------------\n> 5 INSERT 0\n> 37 SELECT\n> \t\n> What the heck is the zero doing after the INSERT? That's the hardcoded InvalidOid that you are apparently arguing for. You could get around that by having the pg_sql_stats patch have its own separate set of command tag strings, but why would we intentionally design that sort of duplication into the solution?\n\nThis is what I think Tom means to use in EndCommand:\n\n if (command_tag_display_rowcount(tag) && !force_undecorated_output)\n snprintf(completionTag, COMPLETION_TAG_BUFSIZE,\n tag == CMDTAG_INSERT ?\n \"%s 0 \" UINT64_FORMAT : \"%s \" UINT64_FORMAT,\n tagname, qc->nprocessed);\n else\n ... 
no rowcount ...\n\nThe point is not to change the returned tag in any way -- just to make\nthe method to arrive at it not involve the additional data column in the\ndata file, instead hardcode the behavior in EndCommand. I don't\nunderstand your point of pg_stats_sql having to deal with this in a\nparticular way. How is that patch obtaining the command tags? I would\nhope it calls GetCommandTagName() rather than call CommandEnd, but maybe\nI misunderstand where it hooks.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 2 Mar 2020 13:12:15 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "\n\n> On Mar 2, 2020, at 8:12 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Feb-29, Mark Dilger wrote:\n> \n>> You may want to think about the embedding of InvalidOid into the EndCommand output differently from how you think about the embedding of the rowcount into the EndCommand output, but my preference is to treat these issues the same and make a strong distinction between the commandtag and the embedded oid and/or rowcount. It's hard to say how many future features would be crippled by having the embedded InvalidOid in the commandtag, but as an example *right now* in the works, we have a feature to count how many commands of a given type have been executed. It stands to reason that feature, whether accepted in its current form or refactored, would not want to show users a pg_stats table like this:\n>> \n>> cnt command\n>> ---- -------------\n>> 5 INSERT 0\n>> 37 SELECT\n>> \t\n>> What the heck is the zero doing after the INSERT? That's the hardcoded InvalidOid that you are apparently arguing for. 
You could get around that by having the pg_sql_stats patch have its own separate set of command tag strings, but why would we intentionally design that sort of duplication into the solution?\n> \n> This is what I think Tom means to use in EndCommand:\n> \n> if (command_tag_display_rowcount(tag) && !force_undecorated_output)\n> snprintf(completionTag, COMPLETION_TAG_BUFSIZE,\n> tag == CMDTAG_INSERT ?\n> \"%s 0 \" UINT64_FORMAT : \"%s \" UINT64_FORMAT,\n> tagname, qc->nprocessed);\n> else\n> ... no rowcount ...\n> \n> The point is not to change the returned tag in any way -- just to make\n> the method to arrive at it not involve the additional data column in the\n> data file, instead hardcode the behavior in EndCommand.\n\nThanks, Álvaro, I think I get it now. I thought Tom was arguing to have \"INSERT 0\" rather than \"INSERT\" be the commandtag.\n\n> I don't\n> understand your point of pg_stats_sql having to deal with this in a\n> particular way. How is that patch obtaining the command tags? I would\n> hope it calls GetCommandTagName() rather than call CommandEnd, but maybe\n> I misunderstand where it hooks.\n\nMy objection was based on my misunderstanding of what Tom was requesting.\n\nI can rework the patch the way Tom suggests.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 2 Mar 2020 08:19:56 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "On 2020-Mar-02, Mark Dilger wrote:\n\n> > I don't\n> > understand your point of pg_stats_sql having to deal with this in a\n> > particular way. How is that patch obtaining the command tags? 
I would\n> hope it calls GetCommandTagName() rather than call CommandEnd, but maybe\n> I misunderstand where it hooks.\n> \n> My objection was based on my misunderstanding of what Tom was requesting.\n> \n> I can rework the patch the way Tom suggests.\n\nI already did it :-) Posting in a jiffy\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 2 Mar 2020 13:33:57 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "Here's the patch I propose for commit. I also rewrote the commit\nmessage.\n\nThere are further refinements that can be done, but they don't need to\nbe in the first patch. Notably, the event trigger code can surely do a\nlot better now by translating the tag list to a bitmapset earlier.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 2 Mar 2020 13:53:56 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "On 2020-Mar-02, Alvaro Herrera wrote:\n\n> Here's the patch I propose for commit. I also rewrote the commit\n> message.\n\nBTW I wonder if we should really change the definition of\nEventTriggerData. 
ISTM that it would be sensible to keep it the same\nfor now ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 2 Mar 2020 14:08:29 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "\n\n> On Mar 2, 2020, at 9:08 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Mar-02, Alvaro Herrera wrote:\n> \n>> Here's the patch I propose for commit. I also rewrote the commit\n>> message.\n> \n> BTW I wonder if we should really change the definition of\n> EventTriggerData. ISTM that it would be sensible to keep it the same\n> for now ...\n\nI think it is more natural to change event trigger code to reason natively about CommandTags rather than continuing to reason about strings. The conversion back-and-forth between the enum and the string representation serves no useful purpose that I can see. But I understand if you are just trying to have the patch change fewer parts of the code, and if you feel more comfortable about reverting that part of the patch, as the committer, I think that's your prerogative.\n\nDid you want to do that yourself, or have me do it and resubmit?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 2 Mar 2020 09:17:29 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "On 2020-Mar-02, Mark Dilger wrote:\n\n> I think it is more natural to change event trigger code to reason\n> natively about CommandTags rather than continuing to reason about\n> strings. The conversion back-and-forth between the enum and the\n> string representation serves no useful purpose that I can see. 
But I\n> understand if you are just trying to have the patch change fewer parts\n> of the code, and if you feel more comfortable about reverting that\n> part of the patch, as the committer, I think that's your prerogative.\n\nNah -- after reading it again, that would make no sense. With the\nchange, we're forcing all writers of event trigger functions in C to\nadapt their functions to the new API, but that's okay -- I don't expect\nthere will be many, and we're doing other things to the API anyway.\n\nI pushed it now.\n\nI made very small changes before pushing: notably, I removed the\nInitializeQueryCompletion() call from standard_ProcessUtility; instead\nits callers are supposed to do it. Updated ProcessUtility's comment to\nthat effect.\n\nAlso, the affected plancache.c functions (CreateCachedPlan and\nCreateOneShotCachedPlan) had not had their comments updated. Previously\nthey received compile-time constants, but that was important because\nthey were strings. No longer.\n\nI noticed some other changes that could perhaps be made here, but didn't\ndo them; for instance, in pg_stat_statement we have a comparison to\nCMDTAG_COPY that seems pointless; we could just acquire the value\nalways. I left it alone for now but I think the change is without side\neffects (notably so because most actual DML does not go through\nProcessUtility anyway). 
Also, event_trigger.c could resolve command\nstrings to the tag enum earlier.\n\nThere's also a lot of nonsense in the pquery.c functions, such as this,\n\n\t\t\t\t/*\n\t\t\t\t * Now fetch desired portion of results.\n\t\t\t\t */\n\t\t\t\tnprocessed = PortalRunSelect(portal, true, count, dest);\n\n\t\t\t\t/*\n\t\t\t\t * If the portal result contains a command tag and the caller\n\t\t\t\t * gave us a pointer to store it, copy it and update the\n\t\t\t\t * rowcount.\n\t\t\t\t */\n\t\t\t\tif (qc && portal->qc.commandTag != CMDTAG_UNKNOWN)\n\t\t\t\t{\n\t\t\t\t\tCopyQueryCompletion(qc, &portal->qc);\n\t\t\t\t\tqc->nprocessed = nprocessed;\n\t\t\t\t}\n\nI think we could simplify that by passing the qc.\n\nSimilar consideration with DoCopy; instead of a 'uint64 nprocessed' we\ncould have a *qc to fill in and avoid this bit of silliness,\n\n\t\t\t\tDoCopy(pstate, (CopyStmt *) parsetree,\n\t\t\t\t\t pstmt->stmt_location, pstmt->stmt_len,\n\t\t\t\t\t &processed);\n\t\t\t\tif (qc)\n\t\t\t\t\tSetQueryCompletion(qc, CMDTAG_COPY, processed);\n\nI see no reason to have PortalRun() initialize the qc; ISTM that its\ncallers should do so.\n\nAnd so on.\n\nNothing of that is critical.\n\nThanks for your patch,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 2 Mar 2020 18:57:55 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "\n\n> On Mar 2, 2020, at 1:57 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Mar-02, Mark Dilger wrote:\n> \n>> I think it is more natural to change event trigger code to reason\n>> natively about CommandTags rather than continuing to reason about\n>> strings. The conversion back-and-forth between the enum and the\n>> string representation serves no useful purpose that I can see. 
But I\n>> understand if you are just trying to have the patch change fewer parts\n>> of the code, and if you feel more comfortable about reverting that\n>> part of the patch, as the committer, I think that's your prerogative.\n> \n> Nah -- after reading it again, that would make no sense. With the\n> change, we're forcing all writers of event trigger functions in C to\n> adapt their functions to the new API, but that's okay -- I don't expect\n> there will be many, and we're doing other things to the API anyway.\n> \n> I pushed it now.\n\nThanks! I greatly appreciate your time.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 2 Mar 2020 14:30:58 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "\n\n> On Mar 2, 2020, at 1:57 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> I pushed it now.\n\nThanks again! 
While rebasing some other work on top, I noticed one of your comments is out of date:\n\n--- a/src/include/tcop/cmdtaglist.h\n+++ b/src/include/tcop/cmdtaglist.h\n@@ -23,7 +23,7 @@\n * textual name, so that we can bsearch on it; see GetCommandTagEnum().\n */\n \n-/* symbol name, textual name, event_trigger_ok, table_rewrite_ok, rowcount, last_oid */\n+/* symbol name, textual name, event_trigger_ok, table_rewrite_ok, rowcount */\n PG_CMDTAG(CMDTAG_UNKNOWN, \"???\", false, false, false)\n PG_CMDTAG(CMDTAG_ALTER_ACCESS_METHOD, \"ALTER ACCESS METHOD\", true, false, false)\n PG_CMDTAG(CMDTAG_ALTER_AGGREGATE, \"ALTER AGGREGATE\", true, false, false)\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 4 Mar 2020 09:13:04 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Portal->commandTag as an enum" }, { "msg_contents": "On 2020-Mar-04, Mark Dilger wrote:\n\n> \n> \n> > On Mar 2, 2020, at 1:57 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > \n> > I pushed it now.\n> \n> Thanks again! While rebasing some other work on top, I noticed one of your comments is out of date:\n> \n> --- a/src/include/tcop/cmdtaglist.h\n> +++ b/src/include/tcop/cmdtaglist.h\n> @@ -23,7 +23,7 @@\n> * textual name, so that we can bsearch on it; see GetCommandTagEnum().\n> */\n> \n> -/* symbol name, textual name, event_trigger_ok, table_rewrite_ok, rowcount, last_oid */\n> +/* symbol name, textual name, event_trigger_ok, table_rewrite_ok, rowcount */\n\nOops. Pushed, thanks.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 16 Mar 2020 18:41:58 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Portal->commandTag as an enum" } ]
[ { "msg_contents": "I noticed that \"ctid\" in the select list prevents an index only scan:\n\nCREATE TABLE ios (id bigint NOT NULL, val text NOT NULL);\n\nINSERT INTO ios SELECT i, i::text FROM generate_series(1, 100000) AS i;\n\nCREATE INDEX ON ios (id);\n\nVACUUM (ANALYZE) ios;\n\nEXPLAIN (VERBOSE, COSTS off) SELECT ctid, id FROM ios WHERE id < 100;\n QUERY PLAN \n--------------------------------------------\n Index Scan using ios_id_idx on laurenz.ios\n Output: ctid, id\n Index Cond: (ios.id < 100)\n(3 rows)\n\nThis strikes me as strange, since every index contains \"ctid\".\n\nThis is not an artificial example either, because \"ctid\" is automatically\nadded to all data modifying queries to be able to identify the tuple\nfor EvalPlanQual:\n\nEXPLAIN (VERBOSE, COSTS off) UPDATE ios SET val = '' WHERE id < 100;\n QUERY PLAN \n--------------------------------------------------\n Update on laurenz.ios\n -> Index Scan using ios_id_idx on laurenz.ios\n Output: id, ''::text, ctid\n Index Cond: (ios.id < 100)\n(4 rows)\n\nIs this low hanging fruit? 
If yes, I might take a stab at it.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 03 Feb 2020 20:37:16 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Index only scan and ctid" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> I noticed that \"ctid\" in the select list prevents an index only scan:\n> This strikes me as strange, since every index contains \"ctid\".\n\nThere's no provision for an IOS to return a system column, though.\nNot sure what it'd take to make that possible.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Feb 2020 14:43:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index only scan and ctid" }, { "msg_contents": "On Mon, 2020-02-03 at 14:43 -0500, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > I noticed that \"ctid\" in the select list prevents an index only scan:\n> > This strikes me as strange, since every index contains \"ctid\".\n> \n> There's no provision for an IOS to return a system column, though.\n> Not sure what it'd take to make that possible.\n\nI was reminded what the obvious problem is:\nthe ctid of a heap only tuple is not stored in the index. 
Duh.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 04 Feb 2020 19:11:59 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Index only scan and ctid" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Mon, 2020-02-03 at 14:43 -0500, Tom Lane wrote:\n>> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n>>> I noticed that \"ctid\" in the select list prevents an index only scan:\n>>> This strikes me as strange, since every index contains \"ctid\".\n\n>> There's no provision for an IOS to return a system column, though.\n>> Not sure what it'd take to make that possible.\n\n> I was reminded what the obvious problem is:\n> the ctid of a heap only tuple is not stored in the index. Duh.\n\nDuh ... the members of a HOT chain share the same indexed value(s),\nwhich is why we needn't care exactly which one is live during IOS.\nBut they don't have the same TID. Oh well.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Feb 2020 13:22:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index only scan and ctid" }, { "msg_contents": "For the user visible ctid we could just arbitrarily declare that the ctid\nreturned by an IOS is the head of the HOT update chain instead of the tail.\nIt might be a bit confusing when sequential scans return the tail (or\nwhichever member is visible). But it's not really wrong, all the members of\nthe chain are equally valid answers.\n\nFor a data modifying query -- and it would have to be one targeting some\nother table or else there's no way it could be an IOS -- does having a ctid\nfor the head rather than the tail still work? I'm not clear how EPQ works\nfor such cases. Does it still do an index scan at all or does it just do a\nctid scan? And does it follow HOT update chains if the row was updated?\n\nOn Tue., Feb. 
4, 2020, 13:23 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Mon, 2020-02-03 at 14:43 -0500, Tom Lane wrote:\n> >> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> >>> I noticed that \"ctid\" in the select list prevents an index only scan:\n> >>> This strikes me as strange, since every index contains \"ctid\".\n>\n> >> There's no provision for an IOS to return a system column, though.\n> >> Not sure what it'd take to make that possible.\n>\n> > I was reminded what the obvious problem is:\n> > the ctid of a heap only tuple is not stored in the index. Duh.\n>\n> Duh ... the members of a HOT chain share the same indexed value(s),\n> which is why we needn't care exactly which one is live during IOS.\n> But they don't have the same TID. Oh well.\n>\n> regards, tom lane\n>\n>\n>\n", "msg_date": "Tue, 18 Feb 2020 08:29:39 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Index only scan and ctid" }, { "msg_contents": "Greg Stark <stark@mit.edu> writes:\n> For the user visible ctid we could just arbitrarily declare that the ctid\n> returned by an IOS is the head of the HOT update chain instead of the tail.\n\nNo, I don't think that'd work at all, because that tuple might be dead.\nA minimum expectation is that \"SELECT ... WHERE ctid = 'xxx'\" would return\nthe same data as the IOS, and that would fail because it wouldn't return\nanything.\n\n(In principle I suppose we could *also* redefine what selecting by ctid\nmeans. 
Doubt I want to go there though.)\n\n> For a data modifying query -- and it would have to be one targeting some\n> other table or else there's no way it could be an IOS -- does having a ctid\n> for the head rather than the tail still work?\n\nIf you target a tuple that is live according to your current snapshot,\nbut nonetheless out-of-date, EPQ will chase up to the head for you.\nBut you gotta start with a tuple that is visible to your snapshot.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Feb 2020 09:21:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Index only scan and ctid" } ]
[ { "msg_contents": "Hi all,\n\nAs of [1], I have been playing with the compile time assertions that\nwe have for expressions, declarations and statements. And it happens\nthat it is visibly possible to consolidate the fallback\nimplementations for C and C++. Attached is the result of what I am\ngetting at. I am adding this patch to next CF. Thoughts are\nwelcome.\n\n[1]: https://www.postgresql.org/message-id/201DD0641B056142AC8C6645EC1B5F62014B8E8030@SYD1217\n\nThanks,\n--\nMichael", "msg_date": "Tue, 4 Feb 2020 17:15:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nIn my humble opinion the patch improves readability, hence gets my +1.\r\n\r\nNo further suggestions. Passing on to a committer to judge further.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Wed, 04 Mar 2020 14:34:53 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@pm.me>", "msg_from_op": false, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> As of [1], I have been playing with the compile time assertions that\n> we have for expressions, declarations and statements. And it happens\n> that it is visibly possible to consolidate the fallback\n> implementations for C and C++. Attached is the result of what I am\n> getting at. I am adding this patch to next CF. Thoughts are\n> welcome.\n\ncfbot reports this doesn't work with MSVC. 
Not sure why --- maybe\nit defines __cpp_static_assert differently than you're expecting?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 07 Mar 2020 16:44:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "On Sat, Mar 07, 2020 at 04:44:48PM -0500, Tom Lane wrote:\n> cfbot reports this doesn't work with MSVC. Not sure why --- maybe\n> it defines __cpp_static_assert differently than you're expecting?\n\nI don't think that's the issue. The CF bot uses MSVC 12.0 which\nrefers to the 2013. __cpp_static_assert being introduced in MSVC\n2017, this error is visibly telling us that this environment does not\nlike the C++ fallback implementation, which is actually what my\nprevious version of the patch was using (I can reproduce the error\nwith my MSVC 2015 VM as well). I think that this points to an error\nin the patch: for the refactoring, the fallback implementation of C\nand C++ should use the fallback implementation for C that we have\ncurrently on HEAD.\n\nWith the updated patch attached, the error goes away for me. Let's\nsee what Mr. Robot thinks. The patch was marked as ready for\ncommitter, I am switching it back to \"Needs review\".\n--\nMichael", "msg_date": "Wed, 11 Mar 2020 16:46:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "Thank you for updating the status of the issue.\r\n\r\nI have to admit that I completely missed the CF bot. 
Lesson learned.\r\n\r\nFor whatever is worth, my previous comment that the patch improves\r\nreadability also applies to the updated version of the patch.\r\n\r\nThe CF bot seems happy now, which means that your assessment as\r\nto the error and fix are correct.\r\n\r\nSo :+1: from me.", "msg_date": "Wed, 11 Mar 2020 12:31:19 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@pm.me>", "msg_from_op": false, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "On Wed, Mar 11, 2020 at 12:31:19PM +0000, Georgios Kokolatos wrote:\n> For whatever is worth, my previous comment that the patch improves\n> readability also applies to the updated version of the patch.\n\nv2 has actually less diffs for the C++ part.\n\n> The CF bot seems happy now, which means that your assessment as\n> to the error and fix are correct.\n\nIndeed the bot is happy now. By looking at the patch, one would note\nthat what it just does is unifying the fallback \"hack-ish\"\nimplementations so as C and C++ use the same thing, which is the\nfallback implementation for C of HEAD. I would prefer hear first from\nmore people to see if they like this change. Or not.\n--\nMichael", "msg_date": "Thu, 12 Mar 2020 08:43:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Indeed the bot is happy now. By looking at the patch, one would note\n> that what it just does is unifying the fallback \"hack-ish\"\n> implementations so as C and C++ use the same thing, which is the\n> fallback implementation for C of HEAD. I would prefer hear first from\n> more people to see if they like this change. Or not.\n\nI looked at this and tried it on an old (non HAVE__STATIC_ASSERT)\ngcc version. Seems to work, but I have a couple cosmetic suggestions:\n\n1. 
The comment block above this was never updated to mention that\nwe're now catering for C++ too. Maybe something like\n\n * gcc 4.6 and up supports _Static_assert(), but there are bizarre syntactic\n * placement restrictions. Macros StaticAssertStmt() and StaticAssertExpr()\n * make it safe to use as a statement or in an expression, respectively.\n * The macro StaticAssertDecl() is suitable for use at file scope (outside of\n * any function).\n *\n+ * On recent C++ compilers, we can use standard static_assert().\n+ *\n * Otherwise we fall back on a kluge that assumes the compiler will complain\n * about a negative width for a struct bit-field. This will not include a\n * helpful error message, but it beats not getting an error at all.\n\n\n2. I think you could simplify the #elif to just\n\n#elif defined(__cplusplus) && __cpp_static_assert >= 200410\n\nPer the C standard, an unrecognized identifier in an #if condition\nis replaced by zero. So the condition will come out false as desired\nif __cpp_static_assert isn't defined; you don't need to test that\nseparately.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Mar 2020 00:33:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "On Thu, Mar 12, 2020 at 12:33:21AM -0400, Tom Lane wrote:\n> I looked at this and tried it on an old (non HAVE__STATIC_ASSERT)\n> gcc version. Seems to work, but I have a couple cosmetic suggestions:\n\nThanks for the review.\n\n> 1. The comment block above this was never updated to mention that\n> we're now catering for C++ too. Maybe something like\n> \n> + * On recent C++ compilers, we can use standard static_assert().\n> + *\n\nSounds fine to me. 
Looking here this is present since GCC 4.3:\nhttps://gcc.gnu.org/projects/cxx-status.html#cxx11\n\nFor MSVC, actually I was a bit wrong, only the flavor without error\nmessage is supported since VS 2017, and the one we use is much older:\nhttps://docs.microsoft.com/en-us/cpp/cpp/static-assert?view=vs-2015\n\nSo, should we add a reference about both in the new comment? I would\nactually not add them, so I have used your suggestion in the attached,\nbut the comment block above does that for _Static_assert(). Do you\nthink it is better to add some references to some of those compilers\n(say GCC 4.3, MSVC)? Just stick with your suggestion? Or stick with\nyour version and replace the reference to GCC 4.6 with something like\n\"recent compilers\"?\n\n> 2. I think you could simplify the #elif to just\n> \n> #elif defined(__cplusplus) && __cpp_static_assert >= 200410\n> \n> Per the C standard, an unrecognized identifier in an #if condition\n> is replaced by zero. So the condition will come out false as desired\n> if __cpp_static_assert isn't defined; you don't need to test that\n> separately.\n\nThanks, indeed.\n--\nMichael", "msg_date": "Thu, 12 Mar 2020 16:12:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> So, should we add a reference about both in the new comment? I would\n> actually not add them, so I have used your suggestion in the attached,\n> but the comment block above does that for _Static_assert(). Do you\n> think it is better to add some references to some of those compilers\n> (say GCC 4.3, MSVC)? Just stick with your suggestion? Or stick with\n> your version and replace the reference to GCC 4.6 with something like\n> \"recent compilers\"?\n\nI don't feel a need to expend a whole lot of sweat there. 
The existing\ntext is fine, it just bugged me that the code deals with three cases\nwhile the comment block only acknowledged two. So I'd just go with\nwhat you have in v3.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Mar 2020 09:43:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "On Thu, Mar 12, 2020 at 09:43:54AM -0400, Tom Lane wrote:\n> I don't feel a need to expend a whole lot of sweat there. The existing\n> text is fine, it just bugged me that the code deals with three cases\n> while the comment block only acknowledged two. So I'd just go with\n> what you have in v3.\n\nThanks, Tom. I have committed v3 then.\n--\nMichael", "msg_date": "Fri, 13 Mar 2020 15:12:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "On Fri, Mar 13, 2020 at 03:12:34PM +0900, Michael Paquier wrote:\n> On Thu, Mar 12, 2020 at 09:43:54AM -0400, Tom Lane wrote:\n>> I don't feel a need to expend a whole lot of sweat there. The existing\n>> text is fine, it just bugged me that the code deals with three cases\n>> while the comment block only acknowledged two. So I'd just go with\n>> what you have in v3.\n> \n> Thanks, Tom. I have committed v3 then.\n\nHmm. v3 actually broke the C++ fallback of StaticAssertExpr() and\nStaticAssertStmt() (v1 did not), a simple fix being something like\nthe attached.\n\nThe buildfarm does not really care about that, but it could for\nexample by using the only c++ code compiled in the tree in\nsrc/backend/jit/? 
That also means that only builds using --with-llvm\nwith a compiler old enough would trigger that stuff.\n--\nMichael", "msg_date": "Fri, 13 Mar 2020 20:50:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Hmm. v3 actually broke the C++ fallback of StaticAssertExpr() and\n> StaticAssertStmt() (v1 did not), a simple fix being something like\n> the attached.\n\nThe buildfarm seems happy, so why do you think it's broken?\n\nIf we do need to change it, I'd be inclined to just use the do{}\nblock everywhere, not bothering with the extra #if test.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Mar 2020 11:00:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "On Fri, Mar 13, 2020 at 11:00:33AM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> Hmm. v3 actually broke the C++ fallback of StaticAssertExpr() and\n>> StaticAssertStmt() (v1 did not), a simple fix being something like\n>> the attached.\n> \n> The buildfarm seems happy, so why do you think it's broken?\n\nExtensions like the attached don't appreciate it, and we have nothing\nlike that in core. Perhaps we could, but it is not really appealing\nfor all platforms willing to run the tests to require CXX or such..\n\n> If we do need to change it, I'd be inclined to just use the do{}\n> block everywhere, not bothering with the extra #if test.\n\nNot sure what you mean here because we cannot use the do{} flavor\neither for the C fallback, no? 
See for example the definitions of\nunconstify() in c.h.\n--\nMichael", "msg_date": "Mon, 16 Mar 2020 14:32:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Mar 13, 2020 at 11:00:33AM -0400, Tom Lane wrote:\n>> If we do need to change it, I'd be inclined to just use the do{}\n>> block everywhere, not bothering with the extra #if test.\n\n> Not sure what you mean here because we cannot use the do{} flavor\n> either for the C fallback, no? See for example the definitions of\n> unconstify() in c.h.\n\nSorry for being unclear --- I just meant that we could use do{}\nin StaticAssertStmt for both C and C++. Although now I notice\nthat the code is trying to use StaticAssertStmt for StaticAssertExpr,\nwhich you're right isn't going to do. But I think something like\nthis would work and be a bit simpler than what you proposed:\n\n #else\n /* Fallback implementation for C and C++ */\n #define StaticAssertStmt(condition, errmessage) \\\n-\t((void) sizeof(struct { int static_assert_failure : (condition) ? 1 : -1; }))\n+\tdo { struct static_assert_struct { int static_assert_failure : (condition) ? 1 : -1; }; } while(0)\n #define StaticAssertExpr(condition, errmessage) \\\n-\tStaticAssertStmt(condition, errmessage)\n+\t((void) sizeof(struct { int static_assert_failure : (condition) ? 1 : -1; }))\n #define StaticAssertDecl(condition, errmessage) \\\n\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 16 Mar 2020 10:32:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "On Mon, Mar 16, 2020 at 10:32:36AM -0400, Tom Lane wrote:\n> Sorry for being unclear --- I just meant that we could use do{}\n> in StaticAssertStmt for both C and C++. 
Although now I notice\n> that the code is trying to use StaticAssertStmt for StaticAssertExpr,\n> which you're right isn't going to do. But I think something like\n> this would work and be a bit simpler than what you proposed:\n> \n> #else\n> /* Fallback implementation for C and C++ */\n> #define StaticAssertStmt(condition, errmessage) \\\n> -\t((void) sizeof(struct { int static_assert_failure : (condition) ? 1 : -1; }))\n> +\tdo { struct static_assert_struct { int static_assert_failure : (condition) ? 1 : -1; }; } while(0)\n> #define StaticAssertExpr(condition, errmessage) \\\n> -\tStaticAssertStmt(condition, errmessage)\n> +\t((void) sizeof(struct { int static_assert_failure : (condition) ? 1 : -1; }))\n> #define StaticAssertDecl(condition, errmessage) \\\n\nC++ does not allow defining a struct inside a sizeof() call, so in\nthis case StaticAssertExpr() does not work with the previous extension\nin C++. StaticAssertStmt() does the work though.\n\nOne alternative I can think of for C++ would be something like the\nfollowing, though C does not like this flavor either:\ntypedef char static_assert_struct[condition ? 1 : -1]\n--\nMichael", "msg_date": "Tue, 17 Mar 2020 11:06:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Mar 16, 2020 at 10:32:36AM -0400, Tom Lane wrote:\n>> Sorry for being unclear --- I just meant that we could use do{}\n>> in StaticAssertStmt for both C and C++. Although now I notice\n>> that the code is trying to use StaticAssertStmt for StaticAssertExpr,\n>> which you're right isn't going to do. 
But I think something like\n>> this would work and be a bit simpler than what you proposed:\n>> \n>> #else\n>> /* Fallback implementation for C and C++ */\n>> #define StaticAssertStmt(condition, errmessage) \\\n>> -\t((void) sizeof(struct { int static_assert_failure : (condition) ? 1 : -1; }))\n>> +\tdo { struct static_assert_struct { int static_assert_failure : (condition) ? 1 : -1; }; } while(0)\n>> #define StaticAssertExpr(condition, errmessage) \\\n>> -\tStaticAssertStmt(condition, errmessage)\n>> +\t((void) sizeof(struct { int static_assert_failure : (condition) ? 1 : -1; }))\n>> #define StaticAssertDecl(condition, errmessage) \\\n\n> C++ does not allow defining a struct inside a sizeof() call, so in\n> this case StaticAssertExpr() does not work with the previous extension\n> in C++. StaticAssertStmt() does the work though.\n\n[ scratches head... ] A do{} is okay in an expression in C++ ??\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 16 Mar 2020 22:35:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "On Mon, Mar 16, 2020 at 10:35:05PM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> C++ does not allow defining a struct inside a sizeof() call, so in\n>> this case StaticAssertExpr() does not work with the previous extension\n>> in C++. StaticAssertStmt() does the work though.\n> \n> [ scratches head... ] A do{} is okay in an expression in C++ ??\n\ncpp-fallback-fix.patch in [1] was doing that.\n\nThe fun does not stop here. gcc is fine when using that for C and\nC++:\n#define StaticAssertStmt(condition, errmessage) \\\n do { struct static_assert_struct { int static_assert_failure : (condition) ? 
1 : -1; }; } while(0)\n#define StaticAssertExpr(condition, errmessage) \\\n ((void) ({ StaticAssertStmt(condition, errmessage); }))\n\nBut then problems come from MSVC which does not like the do{} part for\nstatements, and this works:\n#define StaticAssertStmt(condition, errmessage) \\\n ((void) sizeof(struct { int static_assert_failure : (condition) ? 1 : -1; }))\n#define StaticAssertExpr(condition, errmessage) \\\n StaticAssertStmt(condition, errmessage)\n\n[1]: https://postgr.es/m/20200313115033.GA183471@paquier.xyz\n--\nMichael", "msg_date": "Tue, 17 Mar 2020 13:11:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> The fun does not stop here. gcc is fine when using that for C and\n> C++:\n> #define StaticAssertStmt(condition, errmessage) \\\n> do { struct static_assert_struct { int static_assert_failure : (condition) ? 1 : -1; }; } while(0)\n> #define StaticAssertExpr(condition, errmessage) \\\n> ((void) ({ StaticAssertStmt(condition, errmessage); }))\n\nHm, I'm not so sure. I just noticed that cpluspluscheck is failing\nfor me now:\n\n$ src/tools/pginclude/cpluspluscheck\nIn file included from /tmp/cpluspluscheck.HRgpVA/test.cpp:4:\n./src/include/common/int128.h: In function 'void int128_add_int64_mul_int64(INT128*, int64, int64)':\n./src/include/common/int128.h:180: error: types may not be defined in 'sizeof' expressions\n\nwhich of course is pointing at\n\n StaticAssertStmt(((int64) -1 >> 1) == (int64) -1,\n \"arithmetic right shift is needed\");\n\nso the existing \"C and C++\" fallback StaticAssertStmt doesn't work for\nolder g++. (This is g++ 4.4.7 from RHEL6.)\n\n> But then problems come from MSVC which does not like the do{} part for\n> statements, and this works:\n\nHuh? 
Surely do{} is a legal statement.\n\nMaybe we should just revert b7f64c64d instead of putting more time\ninto this. It's looking like we're going to end up with four or so\nimplementations no matter what, so it's getting hard to see any\nreal benefit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Mar 2020 19:22:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "On Sat, Mar 21, 2020 at 07:22:41PM -0400, Tom Lane wrote:\n> which of course is pointing at\n> \n> StaticAssertStmt(((int64) -1 >> 1) == (int64) -1,\n> \"arithmetic right shift is needed\");\n> \n> so the existing \"C and C++\" fallback StaticAssertStmt doesn't work for\n> older g++. (This is g++ 4.4.7 from RHEL6.)\n\nHmm. Thanks. I just have an access down to 7.5 on my machine.\n\n> Huh? Surely do{} is a legal statement.\n\nYep, still my VS-2015 compiler complains when using the fallback with\ndo{} for statements, and I am not sure why. An extra choice coming to\nmy mind would be to use a more native MS implementation, as documented\nhere:\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/assert-asserte-assert-expr-macros?view=vs-2019\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/static-assert-macro?view=vs-2019\n\nThis requires an extra branch in our implementation set which is not\nreally appealing, with I guess the following mapping (not tested):\n- _STATIC_ASSERT for StaticAssertDecl and StaticAssertExpr.\n- _ASSERT_EXPR for and StaticAssertStmt.\nI think that one advantage is that this would allow to simplify the\nfallback implementations for C/C++ to use do{}s.\n\n> Maybe we should just revert b7f64c64d instead of putting more time\n> into this. It's looking like we're going to end up with four or so\n> implementations no matter what, so it's getting hard to see any\n> real benefit.\n\nIndeed. 
I have tried a couple of other things I could think of, but I\ncannot really get down to 3 implementations, so there is no actual\nbenefit.\n\nI have done a complete revert to keep the history cleaner for release\nnotes and such, including this part:\n- * On recent C++ compilers, we can use standard static_assert().\nDon't you think that we should keep this comment at the end? It is\nstill true.\n\nThanks for the discussion!\n--\nMichael", "msg_date": "Mon, 23 Mar 2020 12:58:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sat, Mar 21, 2020 at 07:22:41PM -0400, Tom Lane wrote:\n>> Maybe we should just revert b7f64c64d instead of putting more time\n>> into this. It's looking like we're going to end up with four or so\n>> implementations no matter what, so it's getting hard to see any\n>> real benefit.\n\n> Indeed. I have tried a couple of other things I could think of, but I\n> cannot really get down to 3 implementations, so there is no actual\n> benefit.\n> I have done a complete revert to keep the history cleaner for release\n> notes and such, including this part:\n> - * On recent C++ compilers, we can use standard static_assert().\n> Don't you think that we should keep this comment at the end? It is\n> still true.\n\nYeah, the comment needs an update; but if we have four implementations\nthen it ought to describe each of them, IMO.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Mar 2020 00:22:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" }, { "msg_contents": "On Mon, Mar 23, 2020 at 12:22:48AM -0400, Tom Lane wrote:\n> Yeah, the comment needs an update; but if we have four implementations\n> then it ought to describe each of them, IMO.\n\nI got an idea as per the attached. 
Perhaps you have a better idea?\n--\nMichael", "msg_date": "Thu, 26 Mar 2020 12:58:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Refactor compile-time assertion checks for C/C++" } ]
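Aside on the fallback debated in the thread above: the negative-bit-field trick can be exercised as a self-contained sketch. The MY_* macro names and the wrapper function below are invented here so as not to collide with the real StaticAssertStmt()/StaticAssertExpr() macros in c.h; a false condition would give the bit-field a width of -1 and make compilation fail, which is the whole point of the trick.

```c
#include <assert.h>

/*
 * Illustrative sketch only (hypothetical MY_* names, not PostgreSQL's
 * macros): when the condition is false, the bit-field width becomes -1
 * and the compiler rejects the program, giving a compile-time assert.
 */
#define MY_STATIC_ASSERT_EXPR(condition) \
	((void) sizeof(struct { int static_assert_failure : (condition) ? 1 : -1; }))
#define MY_STATIC_ASSERT_STMT(condition) \
	do { MY_STATIC_ASSERT_EXPR(condition); } while (0)

static int
use_static_asserts(void)
{
	/* usable in statement position... */
	MY_STATIC_ASSERT_STMT(sizeof(int) <= sizeof(long));
	/* ...and, in C, in expression position via the comma operator */
	return (MY_STATIC_ASSERT_EXPR(1 + 1 == 2), 42);
}
```

Compiling the same sketch with a C++ compiler would fail on MY_STATIC_ASSERT_EXPR, since C++ forbids defining a type inside sizeof() --- exactly the incompatibility the messages above are wrestling with.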
[ { "msg_contents": "Hello,\n\nHere are two more bugs I found by running the regression tests\nrepeatedly with dsm_create() hacked to fail at random.\n\n1. If find_or_make_matching_shared_tupledesc() fails, we leave behind\na null pointer in RecordCacheHash, so that a later lookup segfaults.\n\n2. If we do a rescan, then ExecHashJoinReInitializeDSM() needs to\nreturn early if there is no DSM segment, otherwise a TOC lookup raises\na bogus error.\n\nHere are some draft patches.", "msg_date": "Tue, 4 Feb 2020 23:44:48 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "More DSM slot exhaustion bugs" } ]
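The first bug above is an instance of a general caching pattern: if a cache entry becomes visible before the resource behind it is fully created, a failure partway through (here, dsm_create() running out of slots) leaves a poisoned NULL entry for a later lookup to trip over. The toy sketch below is not PostgreSQL code --- all names are invented for illustration --- it only shows the "insert into the cache after the fallible step succeeds" shape of the fix:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy cache keyed by small integers; NULL means "not cached yet". */
#define TOY_CACHE_SIZE 8
static void *toy_cache[TOY_CACHE_SIZE];

/* Flip this to simulate a dsm_create()-style failure while testing. */
static int toy_fail_next_create;

static void *
toy_create_resource(void)
{
	if (toy_fail_next_create)
		return NULL;			/* creation failed, e.g. out of slots */
	return malloc(sizeof(int));
}

/*
 * Look up a slot, creating the resource on demand.  The cache is only
 * updated after creation succeeds, so a failure leaves no NULL
 * placeholder that a later lookup could segfault on.
 */
static void *
toy_cache_lookup(int slot)
{
	if (toy_cache[slot] == NULL)
	{
		void *res = toy_create_resource();

		if (res == NULL)
			return NULL;		/* caller reports the error; cache unchanged */
		toy_cache[slot] = res;
	}
	return toy_cache[slot];
}
```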
[ { "msg_contents": "This patch adds the possibility to use the \"header\" option when using COPY with\nthe text format. A todo entry was opened for this and I updated the tests and\nthe documentation.\n\nThis was previously discussed at https://www.postgresql.org/message-id/flat/CACfv%2BpJ31tesLvncJyP24quo8AE%2BM0GP6p6MEpwPv6yV8%3DsVHQ%40mail.gmail.com\n\nGreetings,\nRémi\n\n---\n doc/src/sgml/ref/copy.sgml | 3 +-\n src/backend/commands/copy.c | 11 ++++---\n src/test/regress/input/copy.source | 46 +++++++++++++++++++----------\n src/test/regress/output/copy.source | 41 +++++++++++++++----------\n 4 files changed, 64 insertions(+), 37 deletions(-)", "msg_date": "Tue, 4 Feb 2020 14:25:03 +0100", "msg_from": "=?UTF-8?q?R=C3=A9mi=20Lapeyre?= <remi.lapeyre@henki.fr>", "msg_from_op": true, "msg_subject": "[PATCH v1] Allow COPY \"text\" format to output a header" }, { "msg_contents": "Hi,\nOn Tue, Feb 4, 2020 at 4:25 PM Rémi Lapeyre <remi.lapeyre@henki.fr> wrote:\n\n> This patch adds the possibility to use the \"header\" option when using COPY\n> with\n> the text format. A todo entry was opened for this and I updated the tests\n> and\n> the documentation.\n>\n> This was previously discussed at\n> https://www.postgresql.org/message-id/flat/CACfv%2BpJ31tesLvncJyP24quo8AE%2BM0GP6p6MEpwPv6yV8%3DsVHQ%40mail.gmail.com\n>\n>\nFWIW there was a more recent proposed patch at\nhttps://www.postgresql.org/message-id/flat/CAF1-J-0PtCWMeLtswwGV2M70U26n4g33gpe1rcKQqe6wVQDrFA@mail.gmail.com\nand among the feedback given was to add a header matching feature on top of this.\n\nregards\nSurafel", "msg_date": "Wed, 5 Feb 2020 09:06:32 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] Allow COPY \"text\" format to output a header" }, { "msg_contents": "> \n> FWIW there was a more recent proposed patch at https://www.postgresql.org/message-id/flat/CAF1-J-0PtCWMeLtswwGV2M70U26n4g33gpe1rcKQqe6wVQDrFA@mail.gmail.com\n> and among the feedback given was to add a header matching feature on top of this.\n\nThanks for the feedback. What should happen now? Can I just move the patch to the current Commitfest and send a new patch to the old thread?\n\n", "msg_date": "Wed, 5 Feb 2020 14:18:59 +0100", "msg_from": "=?utf-8?Q?R=C3=A9mi_Lapeyre?= <remi.lapeyre@henki.fr>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] Allow COPY \"text\" format to output a header" }, { "msg_contents": "On Wed, Feb 5, 2020 at 4:19 PM Rémi Lapeyre <remi.lapeyre@henki.fr> wrote:\n\n> >\n> > FWIW there was a more recent proposed patch at\n> https://www.postgresql.org/message-id/flat/CAF1-J-0PtCWMeLtswwGV2M70U26n4g33gpe1rcKQqe6wVQDrFA@mail.gmail.com\n> > and among the feedback given was to add a header matching feature on top of\n> this.\n>\n> Thanks for the feedback. What should happen now? Can I just move the patch\n> to the current Commitfest and send a new patch to the old thread?\n\n\nBoth ways are possible: you can add a patch with the feedback incorporated\nto this thread, or you can add a new patch to the old thread.\n\nregards\nSurafel", "msg_date": "Thu, 6 Feb 2020 08:53:29 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] Allow COPY \"text\" format to output a header" } ]
[ { "msg_contents": "Add kqueue(2) support to the WaitEventSet API.\n\nUse kevent(2) to wait for events on the BSD family of operating\nsystems and macOS. This is similar to the epoll(2) support added\nfor Linux by commit 98a64d0bd.\n\nAuthor: Thomas Munro\nReviewed-by: Andres Freund, Marko Tiikkaja, Tom Lane\nTested-by: Mateusz Guzik, Matteo Beccati, Keith Fiske, Heikki Linnakangas, Michael Paquier, Peter Eisentraut, Rui DeSousa, Tom Lane, Mark Wong\nDiscussion: https://postgr.es/m/CAEepm%3D37oF84-iXDTQ9MrGjENwVGds%2B5zTr38ca73kWR7ez_tA%40mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/815c2f0972c8386aba7c606f1ee6690d13b04db2\n\nModified Files\n--------------\nconfigure | 4 +-\nconfigure.in | 2 +\nsrc/backend/storage/ipc/latch.c | 300 +++++++++++++++++++++++++++++++++++++++-\nsrc/include/pg_config.h.in | 6 +\nsrc/tools/msvc/Solution.pm | 2 +\n5 files changed, 311 insertions(+), 3 deletions(-)", "msg_date": "Wed, 05 Feb 2020 04:59:10 +0000", "msg_from": "Thomas Munro <tmunro@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "Hi Thomas,\n\nOn Wed, Feb 05, 2020 at 04:59:10AM +0000, Thomas Munro wrote:\n> Add kqueue(2) support to the WaitEventSet API.\n> \n> Use kevent(2) to wait for events on the BSD family of operating\n> systems and macOS. This is similar to the epoll(2) support added\n> for Linux by commit 98a64d0bd.\n\nWorth noting this issue with the test suite of postgres_fdw for\nbuildfarm animal coypu, running on NetBSD:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=coypu&dt=2020-02-19%2023%3A01%3A01\n+ERROR: kqueue failed: Too many open files\n--\nMichael", "msg_date": "Thu, 20 Feb 2020 16:24:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." 
}, { "msg_contents": "On Thu, Feb 20, 2020 at 8:24 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Feb 05, 2020 at 04:59:10AM +0000, Thomas Munro wrote:\n> > Add kqueue(2) support to the WaitEventSet API.\n> >\n> > Use kevent(2) to wait for events on the BSD family of operating\n> > systems and macOS. This is similar to the epoll(2) support added\n> > for Linux by commit 98a64d0bd.\n>\n> Worth noting this issue with the test suite of postgres_fdw for\n> buildfarm animal coypu, running on NetBSD:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=coypu&dt=2020-02-19%2023%3A01%3A01\n> +ERROR: kqueue failed: Too many open files\n\nHmm. So coypu just came back after 48 days, and the new kqueue() code\nfails for process 19829 after successfully running 265 log lines'\nworth of postgres_fdw tests, because it's run out of file\ndescriptors. I can see that WaitLatchOrSocket() actually could leak\nan epoll/kqueue socket if WaitEventSetWait() raises an error, which is\ninteresting, but apparently not the explanation here because we don't\nsee a preceding error report. Another theory would be that this\nmachine has a low max_safe_fds, and NUM_RESERVED_FDS is only just\nenough to handle the various sockets that postgres_fdw.sql creates and\nat some point kqueue()'s demand for just one more pushed it over the\nedge. From the error text and a look at the man page for errno, this\nerror is EMFILE (per process limit, which could be as low as 64)\nrather then ENFILE (system limit).\n\nRemi, any chance you could run gmake installcheck under\ncontrib/postgres_fdw on that host, to see if this is repeatable? Can\nyou tell us about the relevant limits? 
Maybe ulimit -n (for the user\nthat runs the build farm), and also sysctl -a | grep descriptors,\nsysctl -a | grep maxfiles?\n\n\n", "msg_date": "Fri, 21 Feb 2020 00:15:59 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "\n\n> Le 20 févr. 2020 à 12:15, Thomas Munro <thomas.munro@gmail.com> a écrit :\n> \n> On Thu, Feb 20, 2020 at 8:24 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Wed, Feb 05, 2020 at 04:59:10AM +0000, Thomas Munro wrote:\n>>> Add kqueue(2) support to the WaitEventSet API.\n>>> \n>>> Use kevent(2) to wait for events on the BSD family of operating\n>>> systems and macOS. This is similar to the epoll(2) support added\n>>> for Linux by commit 98a64d0bd.\n>> \n>> Worth noting this issue with the test suite of postgres_fdw for\n>> buildfarm animal coypu, running on NetBSD:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=coypu&dt=2020-02-19%2023%3A01%3A01\n>> +ERROR: kqueue failed: Too many open files\n> \n> Hmm. So coypu just came back after 48 days, and the new kqueue() code\n> fails for process 19829 after successfully running 265 log lines'\n> worth of postgres_fdw tests, because it's run out of file\n> descriptors. I can see that WaitLatchOrSocket() actually could leak\n> an epoll/kqueue socket if WaitEventSetWait() raises an error, which is\n> interesting, but apparently not the explanation here because we don't\n> see a preceding error report. Another theory would be that this\n> machine has a low max_safe_fds, and NUM_RESERVED_FDS is only just\n> enough to handle the various sockets that postgres_fdw.sql creates and\n> at some point kqueue()'s demand for just one more pushed it over the\n> edge. 
From the error text and a look at the man page for errno, this\n> error is EMFILE (per process limit, which could be as low as 64)\n> rather then ENFILE (system limit).\n> \n> Remi, any chance you could run gmake installcheck under\n> contrib/postgres_fdw on that host, to see if this is repeatable? Can\n> you tell us about the relevant limits? Maybe ulimit -n (for the user\n> that runs the build farm), and also sysctl -a | grep descriptors,\n> sysctl -a | grep maxfiles?\n\n\nHi,\n\nUnfortunately, coypu went offline again. I will run tests as soon as I can bring it back up.\n\nRémi\n\n", "msg_date": "Thu, 20 Feb 2020 19:03:59 +0100", "msg_from": "=?utf-8?Q?R=C3=A9mi_Zara?= <remi_zara@mac.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "=?utf-8?Q?R=C3=A9mi_Zara?= <remi_zara@mac.com> writes:\n>> Le 20 févr. 2020 à 12:15, Thomas Munro <thomas.munro@gmail.com> a écrit :\n>> Remi, any chance you could run gmake installcheck under\n>> contrib/postgres_fdw on that host, to see if this is repeatable? Can\n>> you tell us about the relevant limits? Maybe ulimit -n (for the user\n>> that runs the build farm), and also sysctl -a | grep descriptors,\n>> sysctl -a | grep maxfiles?\n\n> Unfortunately, coypu went offline again. I will run tests as soon as I can bring it back up.\n\nI have a working NetBSD 8/ppc installation, will try to reproduce there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Feb 2020 13:40:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "[ redirecting to -hackers ]\n\nI wrote:\n> =?utf-8?Q?R=C3=A9mi_Zara?= <remi_zara@mac.com> writes:\n> Le 20 févr. 2020 à 12:15, Thomas Munro <thomas.munro@gmail.com> a écrit :\n>>> Remi, any chance you could run gmake installcheck under\n>>> contrib/postgres_fdw on that host, to see if this is repeatable? 
Can\n>>> you tell us about the relevant limits? Maybe ulimit -n (for the user\n>>> that runs the build farm), and also sysctl -a | grep descriptors,\n>>> sysctl -a | grep maxfiles?\n\n> I have a working NetBSD 8/ppc installation, will try to reproduce there.\n\nYup, it reproduces fine here. I see\n\n$ ulimit -a\n...\nnofiles (-n descriptors) 128\n\nwhich squares with the sysctl values:\n\nproc.curproc.rlimit.descriptors.soft = 128\nproc.curproc.rlimit.descriptors.hard = 1772\nkern.maxfiles = 1772\n\nand also with set_max_safe_fds' results:\n\n2020-02-20 14:29:38.610 EST [2218] DEBUG: max_safe_fds = 115, usable_fds = 125, already_open = 3\n\nIt seems fairly obvious now that I look at it, but: the epoll and kqueue\nvariants of CreateWaitEventSet are both *fundamentally* unsafe, because\nthey assume that they can always get a FD when they want one, which is\nnot a property that we generally want backend code to have. The only\nreason we've not seen this before with epoll is a lack of testing\nunder lots-of-FDs stress.\n\nThe fact that they'll likely leak those FDs on subsequent failures is\nanother not-very-nice property.\n\nI think we ought to redesign this so that those FDs are handed out by\nfd.c, which can ReleaseLruFile() and retry if it gets EMFILE or ENFILE.\nfd.c could also be responsible for the resource tracking needed to\nprevent leakage.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Feb 2020 14:44:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "I wrote:\n> It seems fairly obvious now that I look at it, but: the epoll and kqueue\n> variants of CreateWaitEventSet are both *fundamentally* unsafe, because\n> they assume that they can always get a FD when they want one, which is\n> not a property that we generally want backend code to have. 
The only\n> reason we've not seen this before with epoll is a lack of testing\n> under lots-of-FDs stress.\n> The fact that they'll likely leak those FDs on subsequent failures is\n> another not-very-nice property.\n\nHmmm ... actually, there's a third problem, which is the implicit\nassumption that we can have as many concurrently-active WaitEventSets\nas we like. We can't, if they depend on FDs --- that's a precious\nresource. It doesn't look like we actually ever have more than about\ntwo per process right now, but I'm concerned about what will happen\nas the usage of the feature increases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Feb 2020 14:56:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "On Fri, Feb 21, 2020 at 8:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > It seems fairly obvious now that I look at it, but: the epoll and kqueue\n> > variants of CreateWaitEventSet are both *fundamentally* unsafe, because\n> > they assume that they can always get a FD when they want one, which is\n> > not a property that we generally want backend code to have. The only\n> > reason we've not seen this before with epoll is a lack of testing\n> > under lots-of-FDs stress.\n> > The fact that they'll likely leak those FDs on subsequent failures is\n> > another not-very-nice property.\n>\n> Hmmm ... actually, there's a third problem, which is the implicit\n> assumption that we can have as many concurrently-active WaitEventSets\n> as we like. We can't, if they depend on FDs --- that's a precious\n> resource. 
It doesn't look like we actually ever have more than about\n> two per process right now, but I'm concerned about what will happen\n> as the usage of the feature increases.\n\nOne thing I've been planning to do for 13 is to get rid of all the\ntemporary create/destroy WaitEventSets from the main backend loops.\nMy goal was cutting down on stupid system calls, but this puts a new\nspin on it. I have a patch set to do a bunch of that[1], but now I'm\nthinking that perhaps I need to be even more aggressive about it and\nset up the 'common' long lived WES up front at backend startup, rather\nthan doing it on demand, so that there is no chance of failure due to\nlack of fds once you've started up. I also recently figured out how\nto handle some more places with the common WES. I'll post a new patch\nset over on that thread shortly.\n\nThat wouldn't mean that the postgres_fdw.sql can't fail on a ulimit -n\n= 128 system, though, it might just mean that it's postgres_fdw's\nsocket() call that hits EMFILE rather than WES's kqueue() call while\nrunning that test. I suppose there are two kinds of system: those\nwhere ulimit -n is higher than max_files_per_process (defaults, on\nLinux: 1024 vs 1000) so you have more allowance for sockets and the\nlike, and those where it isn't, like coypu, where NUM_RESERVED_FDS is\nthe only thing ensuring we have some spare fds. I don't know the\nhistory but it looks like NUM_RESERVED_FDS was deliberately or\naccidentally tuned to be just enough to be able to run the regression\ntests (the interesting ones being the ones that use sockets, like\npostgres_fdw.sql), but with a new long lived kqueue() fd in the\npicture, it might have to be increased to cover it, no?\n\nAbout the potential for leaks, Horiguchi-san realised this hazard and\nposted a patch[2] to allow WaitEventSets to be cleaned up by the\nresource owner machinery. 
That's useful for the temporary\nWaitEventSet objects that we'd genuinely need in the patch set that's\npart of: that's for creating a query-lifetime WES to manage N\nconnections to remote shards, and it needs to be cleaned up on\nfailure. For the temporary ones created by WaitLatch(), I suspect\nthey don't really belong in a resource owner: instead we should get\nrid of it using my WaitMyLatch() patch set, and if there are any\nplaces where we can't for some reason (I hope not), perhaps a\ntry/catch block should be used to fix that.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJAC4Oqao%3DqforhNey20J8CiG2R%3DoBPqvfR0vOJrFysGw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/20191206.171211.1119526746053895900.horikyota.ntt%40gmail.com\n\n\n", "msg_date": "Fri, 21 Feb 2020 09:50:29 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> One thing I've been planning to do for 13 is to get rid of all the\n> temporary create/destroy WaitEventSets from the main backend loops.\n> My goal was cutting down on stupid system calls, but this puts a new\n> spin on it. 
I have a patch set to do a bunch of that[1], but now I'm\n> thinking that perhaps I need to be even more aggressive about it and\n> set up the 'common' long lived WES up front at backend startup, rather\n> than doing it on demand, so that there is no chance of failure due to\n> lack of fds once you've started up.\n\n+1\n\n> That wouldn't mean that the postgres_fdw.sql can't fail on a ulimit -n\n> = 128 system, though, it might just mean that it's postgres_fdw's\n> socket() call that hits EMFILE rather than WES's kqueue() call while\n> running that test.\n\nGood point.\n\n> I suppose there are two kinds of system: those\n> where ulimit -n is higher than max_files_per_process (defaults, on\n> Linux: 1024 vs 1000) so you have more allowance for sockets and the\n> like, and those where it isn't, like coypu, where NUM_RESERVED_FDS is\n> the only thing ensuring we have some spare fds. I don't know the\n> history but it looks like NUM_RESERVED_FDS was deliberately or\n> accidentally tuned to be just enough to be able to run the regression\n> tests (the interesting ones being the ones that use sockets, like\n> postgres_fdw.sql), but with a new long lived kqueue() fd in the\n> picture, it might have to be increased to cover it, no?\n\nNo. NUM_RESERVED_FDS was set decades ago, long before any of those tests\nexisted, and it has never been changed AFAIK. It is a bit striking that\nwe just started seeing it be insufficient with this patch. Maybe that's\njust happenstance, but I wonder whether there is a plain old FD leak\ninvolved in addition to the design issue? I'll take a closer look at\nexactly what's open when we hit the error.\n\nThe point about possibly hitting EMFILE in libpq's socket() call is\nan interesting one. 
libpq of course can't do anything to recover\nfrom that (and even if it could, there are lower levels such as a\npossible DNS lookup that we're not going to be able to modify).\nI'm speculating about having postgres_fdw ask fd.c to forcibly\nfree one LRU file before it attempts to open a new libpq connection.\nThat would prevent EMFILE (process-level exhaustion) and it would\nalso provide some small protection against ENFILE (system-wide\nexhaustion), though of course there's no guarantee that someone\nelse doesn't snap up the FD you so graciously relinquished.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Feb 2020 17:05:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "I wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> ... like coypu, where NUM_RESERVED_FDS is\n>> the only thing ensuring we have some spare fds. I don't know the\n>> history but it looks like NUM_RESERVED_FDS was deliberately or\n>> accidentally tuned to be just enough to be able to run the regression\n>> tests (the interesting ones being the ones that use sockets, like\n>> postgres_fdw.sql), but with a new long lived kqueue() fd in the\n>> picture, it might have to be increased to cover it, no?\n\n> No. NUM_RESERVED_FDS was set decades ago, long before any of those tests\n> existed, and it has never been changed AFAIK. It is a bit striking that\n> we just started seeing it be insufficient with this patch. Maybe that's\n> just happenstance, but I wonder whether there is a plain old FD leak\n> involved in addition to the design issue? I'll take a closer look at\n> exactly what's open when we hit the error.\n\nHmm ... looks like I'm wrong: we've been skating way too close to the edge\nfor awhile. 
Here's a breakdown of the open FDs in the backend at the time\nof the failure, excluding stdin/stdout/stderr (which set_max_safe_fds\naccounted for) and FDs pointing to database files:\n\nCOMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME\npostmaste 2657 postgres 3r FIFO 0,8 0t0 20902158 pipe\t\t\tpostmaster_alive_fds[0]\npostmaste 2657 postgres 4u REG 0,9 0 3877 [eventpoll]\t\tFeBeWaitSet's epoll_fd\npostmaste 2657 postgres 7u unix 0xffff880878e21880 0t0 20902664 socket\t\tsocket for a PGconn owned by postgres_fdw\npostmaste 2657 postgres 8u IPv6 20902171 0t0 UDP localhost:40795->localhost:40795\tpgStatSock\npostmaste 2657 postgres 9u unix 0xffff880876903c00 0t0 20902605 /tmp/.s.PGSQL.5432\tMyProcPort->sock\npostmaste 2657 postgres 10r FIFO 0,8 0t0 20902606 pipe\t\t\tselfpipe_readfd\npostmaste 2657 postgres 11w FIFO 0,8 0t0 20902606 pipe\t\t\tselfpipe_writefd\npostmaste 2657 postgres 105u unix 0xffff880878e21180 0t0 20902647 socket\t\tsocket for a PGconn owned by postgres_fdw\npostmaste 2657 postgres 118u unix 0xffff8804772c88c0 0t0 20902650 socket\t\tsocket for a PGconn owned by postgres_fdw\n\nOne thing to notice is that there are only nine, though NUM_RESERVED_FDS\nshould have allowed ten. That's because there are 116 open FDs pointing\nat database files, though max_safe_fds is 115. I'm not sure whether\nthat's OK or an off-by-one error in fd.c's accounting. One of the 116\nis pointing at a WAL segment, and I think we might not be sending that\nthrough the normal VFD path, so it might be \"expected\".\n\nBut anyway, what this shows is that over time we've eaten enough of\nthe \"reserved\" FDs that only three are available for random usage like\npostgres_fdw's, if the process's back is against the wall FD-wise.\nThe postgres_fdw regression test is using all three, meaning there's\nexactly no daylight in that test.\n\nClearly, we gotta do something about that too. 
Maybe bumping up\nNUM_RESERVED_FDS would be a good idea, but I feel like more-honest\naccounting for permanently-eaten FDs would be a better idea. And\nin any case we can't suppose that there's a fixed upper limit on\nthe number of postgres_fdw connections. I'm liking the idea I floated\nearlier of letting postgres_fdw forcibly close the oldest LRU entry.\n\nBTW, you don't need anything very exotic to provoke this error.\nThe results above are from a Linux box; I just did \"ulimit -n 128\"\nbefore starting the postmaster.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Feb 2020 18:55:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "I wrote:\n> Clearly, we gotta do something about that too. Maybe bumping up\n> NUM_RESERVED_FDS would be a good idea, but I feel like more-honest\n> accounting for permanently-eaten FDs would be a better idea. And\n> in any case we can't suppose that there's a fixed upper limit on\n> the number of postgres_fdw connections. I'm liking the idea I floated\n> earlier of letting postgres_fdw forcibly close the oldest LRU entry.\n\nA late-night glimmer of an idea: suppose we make fd.c keep count,\nnot just of the open FDs under its control, but also of open FDs\nnot under its control. The latter count would include the initial\nFDs (stdin/stdout/stderr), and FDs allocated by OpenTransientFile\net al, and we could provide functions for other callers to report\nthat they just allocated or released a FD. So postgres_fdw could\nreport, when it opens or closes a PGconn, that the count of external\nFDs should be increased or decreased. fd.c would then forcibly\nclose VFDs as needed to keep NUM_RESERVED_FDS worth of slop. We\nstill need slop, to provide some daylight for code that isn't aware\nof this mechanism. 
But we could certainly get all these known\nlong-lived FDs to be counted, and that would allow fd.c to reduce\nthe number of open VFDs enough to ensure that some slop remains.\n\nThis idea doesn't fix every possible problem. For instance, if you\nhave a plperlu function that wants to open a bunch of files, I don't\nsee any easy way to ensure we release VFDs to make that possible.\nBut it's sure an improvement on the status quo.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Feb 2020 23:30:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "I wrote:\n>> Clearly, we gotta do something about that too. Maybe bumping up\n>> NUM_RESERVED_FDS would be a good idea, but I feel like more-honest\n>> accounting for permanently-eaten FDs would be a better idea. And\n>> in any case we can't suppose that there's a fixed upper limit on\n>> the number of postgres_fdw connections. I'm liking the idea I floated\n>> earlier of letting postgres_fdw forcibly close the oldest LRU entry.\n\n> A late-night glimmer of an idea: suppose we make fd.c keep count,\n> not just of the open FDs under its control, but also of open FDs\n> not under its control.\n\nHere's a draft patch that does it like that. There are undoubtedly\nmore places that need to be taught to report their FD consumption;\none obvious candidate that I didn't touch is dblink. But this is\nenough to account for all the long-lived \"extra\" FDs that are currently\nopen in a default build, and it passes check-world even with ulimit -n\nset to the minimum that set_max_safe_fds will allow.\n\nOne thing I noticed is that if you open enough postgres_fdw connections\nto cause a failure, that tends to come out as an unintelligible-to-\nthe-layman \"epoll_create1 failed: Too many open files\" error. 
That's\nbecause after postgres_fdw has consumed the last available \"external\nFD\" slot, it tries to use CreateWaitEventSet to wait for input from\nthe remote server, and now that needs to get another external FD.\nI left that alone for the moment, because if you do rejigger the\nWaitEventSet code to avoid dynamically creating/destroying epoll sockets,\nit will stop being a problem. If that doesn't happen, another\npossibility is to reclassify CreateWaitEventSet as a caller that should\nignore \"failure\" from ReserveExternalFD --- but that would open us up\nto problems if a lot of WaitEventSets get created, so it's not a great\nanswer. It'd be okay perhaps if we added a distinction between\nlong-lived and short-lived WaitEventSets (ignoring \"failure\" only for\nthe latter). But I didn't want to go there in this patch.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 23 Feb 2020 16:49:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "I wrote:\n> Here's a draft patch that does it like that.\n\nOn reflection, trying to make ReserveExternalFD serve two different\nuse-cases was pretty messy. Here's a version that splits it into two\nfunctions. I also took the trouble to fix dblink.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 23 Feb 2020 18:24:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "On Mon, Feb 24, 2020 at 12:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> On reflection, trying to make ReserveExternalFD serve two different\n> use-cases was pretty messy. Here's a version that splits it into two\n> functions. 
I also took the trouble to fix dblink.\n\n+ /*\n+ * We don't want more than max_safe_fds / 3 FDs to be consumed for\n+ * \"external\" FDs.\n+ */\n+ if (numExternalFDs < max_safe_fds / 3)\n\nThis looks pretty reasonable to me.\n\nI'll have a new patch set to create a common WES at startup over on\nthat other thread in a day or two.\n\n\n", "msg_date": "Mon, 24 Feb 2020 19:42:52 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "On Mon, Feb 24, 2020 at 7:42 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Feb 24, 2020 at 12:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > On reflection, trying to make ReserveExternalFD serve two different\n> > use-cases was pretty messy. Here's a version that splits it into two\n> > functions. I also took the trouble to fix dblink.\n>\n> + /*\n> + * We don't want more than max_safe_fds / 3 FDs to be consumed for\n> + * \"external\" FDs.\n> + */\n> + if (numExternalFDs < max_safe_fds / 3)\n\nI suppose there may be users who have set ulimit -n high enough to\nsupport an FDW workload that connects to very many hosts, who will now\nneed to set max_files_per_process higher to avoid the new error now\nthat we're doing this accounting. That doesn't seem to be a problem\nin itself, but I wonder if the error message should make it clearer\nthat it's our limit they hit here.\n\n\n", "msg_date": "Mon, 24 Feb 2020 19:53:34 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I suppose there may be users who have set ulimit -n high enough to\n> support an FDW workload that connects to very many hosts, who will now\n> need to set max_files_per_process higher to avoid the new error now\n> that we're doing this accounting. 
That doesn't seem to be a problem\n> in itself, but I wonder if the error message should make it clearer\n> that it's our limit they hit here.\n\nI struggled with the wording of that message, actually. The patch\nproposes\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION),\n+ errmsg(\"could not connect to server \\\"%s\\\"\",\n+ server->servername),\n+ errdetail(\"There are too many open files.\")));\n\nI wanted to say \"The server has too many open files.\" but in context\nit would seem to be talking about the remote server, so I'm not sure\nhow to fix that.\n\nWe could also consider a HINT, along the lines of \"Raise the server's\nmax_files_per_process and/or \\\"ulimit -n\\\" limits.\" This again has\nthe ambiguity about which server, and it also seems dangerously\nplatform-specific.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Feb 2020 09:44:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "\n\n> On Feb 20, 2020, at 8:30 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> This idea doesn't fix every possible problem. For instance, if you\n> have a plperlu function that wants to open a bunch of files, I don't\n> see any easy way to ensure we release VFDs to make that possible.\n> But it's sure an improvement on the status quo.\n\nI understand that you were using plperlu just as an example, but it got me thinking. Could we ship a wrapper using perl's tie() mechanism to call a new spi function to report when a file handle is opened and when it is closed? Most plperlu functions would not participate, since developers will not necessarily know to use the wrapper, but at least they could learn about the wrapper and use it as a work-around if this becomes a problem for them. 
Perhaps the same spi function could be used by other procedural languages.\n\nI can't see this solution working unless the backend can cleanup properly under exceptional conditions, and decrement the counter of used file handles appropriately. But that's the same requirement that postgres_fdw would also have, right? Would the same mechanism work for both?\n\nI imagine something like <PgPerluSafe>::IO::File and <PgPerluSafe>::File::Temp which could be thin wrappers around IO::File and File::Temp that automatically do the tie()ing for you. (Replace <PgPerluSafe> with whichever name seems best.)\n\nIs this too convoluted to be worth the bother?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 24 Feb 2020 10:29:51 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "Hi,\n\nOn 2020-02-24 10:29:51 -0800, Mark Dilger wrote:\n> > On Feb 20, 2020, at 8:30 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > \n> > This idea doesn't fix every possible problem. For instance, if you\n> > have a plperlu function that wants to open a bunch of files, I don't\n> > see any easy way to ensure we release VFDs to make that possible.\n> > But it's sure an improvement on the status quo.\n> \n> I understand that you were using plperlu just as an example, but it\n> got me thinking. Could we ship a wrapper using perl's tie() mechanism\n> to call a new spi function to report when a file handle is opened and\n> when it is closed? Most plperlu functions would not participate,\n> since developers will not necessarily know to use the wrapper, but at\n> least they could learn about the wrapper and use it as a work-around\n> if this becomes a problem for them. 
Perhaps the same spi function\n> could be used by other procedural languages.\n\nWhile we're thinking a bit outside of the box: We could just dup() a\nbunch of fds within fd.c to protect fd.c's fd \"quota\". And then close\nthem when actually needing the fds.\n\nNot really suggesting that we go for that, but it does have some appeal.\n\n\n\n> I can't see this solution working unless the backend can cleanup properly under exceptional conditions, and decrement the counter of used file handles appropriately. But that's the same requirement that postgres_fdw would also have, right? Would the same mechanism work for both?\n\nWe can just throw an error, and all fdw state should get cleaned up. We\ncan't generally rely on that for plperl, as it IIRC can have global state. So\nI don't think they're in the same boat.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 24 Feb 2020 10:41:21 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Feb 20, 2020, at 8:30 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This idea doesn't fix every possible problem. For instance, if you\n>> have a plperlu function that wants to open a bunch of files, I don't\n>> see any easy way to ensure we release VFDs to make that possible.\n>> But it's sure an improvement on the status quo.\n\n> I understand that you were using plperlu just as an example, but it got\n> me thinking. Could we ship a wrapper using perl's tie() mechanism to\n> call a new spi function to report when a file handle is opened and when\n> it is closed? Most plperlu functions would not participate, since\n> developers will not necessarily know to use the wrapper, but at least\n> they could learn about the wrapper and use it as a work-around if this\n> becomes a problem for them.
Perhaps the same spi function could be used\n> by other procedural languages.\n\nHmm. I had thought briefly about getting plperl to do that automatically\nand had concluded that I didn't see a way (though there might be one;\nI'm not much of a Perl expert). But if we accept that changes in the\nplperl function's source code might be needed, it gets a lot easier,\nfor sure.\n\nAnyway, the point of the current patch is to provide the mechanism and\nuse it in a couple of places where we know there's an issue. Improving\nthe PLs is something that could be added later.\n\n> I can't see this solution working unless the backend can cleanup\n> properly under exceptional conditions, and decrement the counter of used\n> file handles appropriately. But that's the same requirement that\n> postgres_fdw would also have, right? Would the same mechanism work for\n> both?\n\nThe hard part is to tie into whatever is responsible for closing the\nkernel FD. If you can ensure that the FD gets closed, you can put\nthe ReleaseExternalFD() call at the same place(s).\n\n> Is this too convoluted to be worth the bother?\n\nSo far we've not seen field reports of PL functions running out of FDs;\nand there's always the ad-hoc solution of making sure the server's\nulimit -n limit is sufficiently larger than max_files_per_process.\nSo I wouldn't put a lot of effort into it right now. But it's nice\nto have an idea about what to do if it does become a hot issue for\nsomebody.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Feb 2020 13:48:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." 
}, { "msg_contents": "On 2020-Feb-24, Tom Lane wrote:\n\n> We could also consider a HINT, along the lines of \"Raise the server's\n> max_files_per_process and/or \\\"ulimit -n\\\" limits.\" This again has\n> the ambiguity about which server, and it also seems dangerously\n> platform-specific.\n\nMaybe talk about \"the local server\" to distinguish from the other one.\n\nAs for the platform dependencies, I see two main options: make the hint\nplatform-specific (i.e have #ifdef branches per platform) or make it\neven more generic, such as \"file descriptor limits\".\n\nA quick search suggests that current Windows itself doesn't typically\nhave such problems:\nhttps://stackoverflow.com/questions/31108693/increasing-no-of-file-handles-in-windows-7-64-bit\nhttps://docs.microsoft.com/es-es/archive/blogs/markrussinovich/pushing-the-limits-of-windows-handles\n\nBut the C runtime does:\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/file-handling?view=vs-2019\nI suppose we do use the C runtime. There's a reference to\n_setmaxstdio() being able to raise the limit from the default of 512 to\nup to 8192 open files. We don't currently invoke that.\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/setmaxstdio?view=vs-2019\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 24 Feb 2020 16:14:53 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." 
}, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Feb-24, Tom Lane wrote:\n>> We could also consider a HINT, along the lines of \"Raise the server's\n>> max_files_per_process and/or \\\"ulimit -n\\\" limits.\" This again has\n>> the ambiguity about which server, and it also seems dangerously\n>> platform-specific.\n\n> Maybe talk about \"the local server\" to distinguish from the other one.\n\nOK by me.\n\n> As for the platform dependencies, I see two main options: make the hint\n> platform-specific (i.e have #ifdef branches per platform) or make it\n> even more generic, such as \"file descriptor limits\".\n\nI thought about platform-specific messages, but it's not clear to me\nwhether our translation infrastructure will find messages that are\ninside #ifdefs ... anyone know? If that does work, I'd be inclined\nto mention ulimit -n on non-Windows and just say nothing about that\non Windows. \"File descriptor limits\" seems too unhelpful here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Feb 2020 14:27:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "Hi,\n\nOn 2020-02-24 16:14:53 -0300, Alvaro Herrera wrote:\n> But the C runtime does:\n> https://docs.microsoft.com/en-us/cpp/c-runtime-library/file-handling?view=vs-2019\n> I suppose we do use the C runtime. There's a reference to\n> _setmaxstdio() being able to raise the limit from the default of 512 to\n> up to 8192 open files. We don't currently invoke that.\n> https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/setmaxstdio?view=vs-2019\n\nIf we ever go for that, we should also consider raising the limit on\nunix systems up to the hard limit when hitting the fd ceiling. I.e. 
get\nthe current limit with getrlimit(RLIMIT_NOFILE) and raise rlim_cur\n[closer] to rlim_max with setrlimit.\n\nPerhaps it'd even be worthwhile to just always raise the limit, if\npossible, in set_max_safe_fds(), by max_safe_fds +\nNUM_RESERVED_FDS. That way PLs, other shared libs, would have a more\nusual amount of FDs available. Rather than a fairly small number, but\nonly when the backend has been running for a while in the right\nworkload.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 24 Feb 2020 12:01:19 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-02-24 16:14:53 -0300, Alvaro Herrera wrote:\n>> I suppose we do use the C runtime. There's a reference to\n>> _setmaxstdio() being able to raise the limit from the default of 512 to\n>> up to 8192 open files. We don't currently invoke that.\n>> https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/setmaxstdio?view=vs-2019\n\n> If we ever go for that, we should also consider raising the limit on\n> unix systems up to the hard limit when hitting the fd ceiling. I.e.
}, { "msg_contents": "I wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> As for the platform dependencies, I see two main options: make the hint\n>> platform-specific (i.e have #ifdef branches per platform) or make it\n>> even more generic, such as \"file descriptor limits\".\n\n> I thought about platform-specific messages, but it's not clear to me\n> whether our translation infrastructure will find messages that are\n> inside #ifdefs ... anyone know?\n\nOh, but of course it does. So let's do\n\n errdetail(\"There are too many open files on the local server.\"),\n#ifndef WIN32\n errhint(\"Raise the server's max_files_per_process and/or \\\"ulimit -n\\\" limits.\")\n#else\n errhint(\"Raise the server's max_files_per_process setting.\")\n#endif\n\nI don't think there's much point in telling Windows users about\n_setmaxstdio() here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Feb 2020 15:30:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "On 2020-Feb-24, Tom Lane wrote:\n\n> I wrote:\n\n> > I thought about platform-specific messages, but it's not clear to me\n> > whether our translation infrastructure will find messages that are\n> > inside #ifdefs ... anyone know?\n> \n> Oh, but of course it does. So let's do\n> \n> errdetail(\"There are too many open files on the local server.\"),\n> #ifndef WIN32\n> errhint(\"Raise the server's max_files_per_process and/or \\\"ulimit -n\\\" limits.\")\n> #else\n> errhint(\"Raise the server's max_files_per_process setting.\")\n> #endif\n\nWFM.\n\n> I don't think there's much point in telling Windows users about\n> _setmaxstdio() here.\n\nYeah, telling users to _setmaxstdio() themselves is useless, because\nthey can't do it; that's something *we* should do. I think the 512\nlimit is a bit low; why not increase that a little bit? 
Maybe just to\nthe Linux default of 1024.\n\nThen again, that would be akin to setrlimit() on Linux. Maybe we can\nconsider that a separate GUC, in a separate patch, with a\nplatform-specific default value that just corresponds to the OS's\ndefault, and the user can set to whatever suits them; then we call\neither _setmaxstdio() or setrlimit().\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 24 Feb 2020 17:55:09 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Feb-24, Tom Lane wrote:\n>> I don't think there's much point in telling Windows users about\n>> _setmaxstdio() here.\n\n> Yeah, telling users to _setmaxstdio() themselves is useless, because\n> they can't do it; that's something *we* should do. I think the 512\n> limit is a bit low; why not increase that a little bit? Maybe just to\n> the Linux default of 1024.\n\n> Then again, that would be akin to setrlimit() on Linux. Maybe we can\n> consider that a separate GUC, in a separate patch, with a\n> platform-specific default value that just corresponds to the OS's\n> default, and the user can set to whatever suits them; then we call\n> either _setmaxstdio() or setrlimit().\n\nWhy not just drive it off max_files_per_process? On Unix, that\nlargely exists to override the ulimit setting anyway. With no\ncomparable knob on a Windows system, we might as well just say\nthat's what you set.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Feb 2020 16:01:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." 
}, { "msg_contents": "On 2020-Feb-24, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n\n> > Then again, that would be akin to setrlimit() on Linux. Maybe we can\n> > consider that a separate GUC, in a separate patch, with a\n> > platform-specific default value that just corresponds to the OS's\n> > default, and the user can set to whatever suits them; then we call\n> > either _setmaxstdio() or setrlimit().\n> \n> Why not just drive it off max_files_per_process? On Unix, that\n> largely exists to override the ulimit setting anyway. With no\n> comparable knob on a Windows system, we might as well just say\n> that's what you set.\n\nThat makes sense to me -- but if we do that, then maybe we should be\ndoing the setrlimit() dance on it too, on Linux^W^W where supported.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 24 Feb 2020 18:17:09 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Feb-24, Tom Lane wrote:\n>> Why not just drive it off max_files_per_process? On Unix, that\n>> largely exists to override the ulimit setting anyway. With no\n>> comparable knob on a Windows system, we might as well just say\n>> that's what you set.\n\n> That makes sense to me -- but if we do that, then maybe we should be\n> doing the setrlimit() dance on it too, on Linux^W^W where supported.\n\nYeah, arguably we could try to setrlimit if max_files_per_process is\nlarger than the ulimit. We should definitely not reduce the ulimit\nif max_files_per_process is smaller, though, since the DBA might\nintentionally be leaving daylight for purposes such as FD-hungry PL\nfunctions. 
On the whole I'm inclined to leave well enough alone on\nthe Unix side --- there's nothing there that the DBA can't set if\nshe wishes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Feb 2020 16:44:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "On Wed, Feb 5, 2020 at 7:59 AM Thomas Munro <tmunro@postgresql.org> wrote:\n> Add kqueue(2) support to the WaitEventSet API.\n>\n> Use kevent(2) to wait for events on the BSD family of operating\n> systems and macOS. This is similar to the epoll(2) support added\n> for Linux by commit 98a64d0bd.\n\nI'm not sure if it was already reported in this thread (it seems to be\nnot at the first glance), but I've discovered following issue on macos\n10.13.6. If backend is under lldb and does XactLockTableWait(), then\nit does proc_exit(1).\n\nThe full reproduction case is following.\n\ns1# create table test (id serial primary key, value int);\ns1# insert into test values (1,1);\ns1# begin;\ns1# update test set value = value + 1 where id = 1;\n\nlldb attached to s2: b proc_exit\ns2# update test set value = value + 1 where id = 1;\n\n(lldb) bt\n* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 2.1\n * frame #0: 0x000000010f9a7f5b postgres`proc_exit(code=1) at ipc.c:107\n frame #1: 0x000000010f9aa3b9\npostgres`WaitEventSetWaitBlock(set=0x00007fabdd847c90, cur_timeout=-1,\noccurred_events=0x00007ffee0796c10, nevents=1) at latch.c:1427\n frame #2: 0x000000010f9a9a43\npostgres`WaitEventSetWait(set=0x00007fabdd847c90, timeout=-1,\noccurred_events=0x00007ffee0796c10, nevents=1,\nwait_event_info=50331652) at latch.c:1237\n frame #3: 0x000000010f9a93b5\npostgres`WaitLatchOrSocket(latch=0x00000001197eacc4, wakeEvents=33,\nsock=-1, timeout=-1, wait_event_info=50331652) at latch.c:428\n frame #4: 0x000000010f9a91c1\npostgres`WaitLatch(latch=0x00000001197eacc4, wakeEvents=33, 
timeout=0,\nwait_event_info=50331652) at latch.c:368\n frame #5: 0x000000010f9d65b6\npostgres`ProcSleep(locallock=0x00007fabdd01d5d8,\nlockMethodTable=0x000000010fdb5cf8) at proc.c:1286\n frame #6: 0x000000010f9c2af9\npostgres`WaitOnLock(locallock=0x00007fabdd01d5d8,\nowner=0x00007fabdf0056d0) at lock.c:1766\n frame #7: 0x000000010f9c13a1\npostgres`LockAcquireExtended(locktag=0x00007ffee0797140, lockmode=5,\nsessionLock=false, dontWait=false, reportMemoryError=true,\nlocallockp=0x0000000000000000) at lock.c:1048\n frame #8: 0x000000010f9c08b5\npostgres`LockAcquire(locktag=0x00007ffee0797140, lockmode=5,\nsessionLock=false, dontWait=false) at lock.c:713\n frame #9: 0x000000010f9bef32 postgres`XactLockTableWait(xid=511,\nrel=0x000000011031e148, ctid=0x00007ffee0797394, oper=XLTW_Update) at\nlmgr.c:658\n frame #10: 0x000000010f4e9cab\npostgres`heap_update(relation=0x000000011031e148,\notid=0x00007ffee0797818, newtup=0x00007fabdd847a48, cid=0,\ncrosscheck=0x0000000000000000, wait=true, tmfd=0x00007ffee07976f0,\nlockmode=0x00007ffee07976d8) at heapam.c:3239\n frame #11: 0x000000010f4f9353\npostgres`heapam_tuple_update(relation=0x000000011031e148,\notid=0x00007ffee0797818, slot=0x00007fabdc828558, cid=0,\nsnapshot=0x00007fabdc818170, crosscheck=0x0000000000000000, wait=true,\ntmfd=0x00007ffee07976f0, lockmode=0x00007ffee07976d8,\nupdate_indexes=0x00007ffee07976d6) at heapam_handler.c:326\n frame #12: 0x000000010f7ba73d\npostgres`table_tuple_update(rel=0x000000011031e148,\notid=0x00007ffee0797818, slot=0x00007fabdc828558, cid=0,\nsnapshot=0x00007fabdc818170, crosscheck=0x0000000000000000, wait=true,\ntmfd=0x00007ffee07976f0, lockmode=0x00007ffee07976d8,\nupdate_indexes=0x00007ffee07976d6) at tableam.h:1293\n frame #13: 0x000000010f7b8952\npostgres`ExecUpdate(mtstate=0x00007fabdc826ca8,\ntupleid=0x00007ffee0797818, oldtuple=0x0000000000000000,\nslot=0x00007fabdc828558, planSlot=0x00007fabdc828408,\nepqstate=0x00007fabdc826da0, estate=0x00007fabdc826920,\ncanSetTag=true) at 
nodeModifyTable.c:1336\n frame #14: 0x000000010f7b6d5a\npostgres`ExecModifyTable(pstate=0x00007fabdc826ca8) at\nnodeModifyTable.c:2246\n frame #15: 0x000000010f780e82\npostgres`ExecProcNodeFirst(node=0x00007fabdc826ca8) at\nexecProcnode.c:444\n frame #16: 0x000000010f779332\npostgres`ExecProcNode(node=0x00007fabdc826ca8) at executor.h:245\n frame #17: 0x000000010f7751b1\npostgres`ExecutePlan(estate=0x00007fabdc826920,\nplanstate=0x00007fabdc826ca8, use_parallel_mode=false,\noperation=CMD_UPDATE, sendTuples=false, numberTuples=0,\ndirection=ForwardScanDirection, dest=0x00007fabdd843840,\nexecute_once=true) at execMain.c:1646\n frame #18: 0x000000010f775072\npostgres`standard_ExecutorRun(queryDesc=0x00007fabdd81ff20,\ndirection=ForwardScanDirection, count=0, execute_once=true) at\nexecMain.c:364\n frame #19: 0x000000010f774e42\npostgres`ExecutorRun(queryDesc=0x00007fabdd81ff20,\ndirection=ForwardScanDirection, count=0, execute_once=true) at\nexecMain.c:308\n frame #20: 0x000000010f9eb63e\npostgres`ProcessQuery(plan=0x00007fabdd8447a8, sourceText=\"update test\nset value = value + 1 where id = 1;\", params=0x0000000000000000,\nqueryEnv=0x0000000000000000, dest=0x00007fabdd843840,\nqc=0x00007ffee0797d70) at pquery.c:160\n frame #21: 0x000000010f9ea71d\npostgres`PortalRunMulti(portal=0x00007fabdd823720, isTopLevel=true,\nsetHoldSnapshot=false, dest=0x00007fabdd843840,\naltdest=0x00007fabdd843840, qc=0x00007ffee0797d70) at pquery.c:1265\n frame #22: 0x000000010f9e9d92\npostgres`PortalRun(portal=0x00007fabdd823720,\ncount=9223372036854775807, isTopLevel=true, run_once=true,\ndest=0x00007fabdd843840, altdest=0x00007fabdd843840,\nqc=0x00007ffee0797d70) at pquery.c:779\n frame #23: 0x000000010f9e5279\npostgres`exec_simple_query(query_string=\"update test set value = value\n+ 1 where id = 1;\") at postgres.c:1236\n frame #24: 0x000000010f9e43b8 postgres`PostgresMain(argc=1,\nargv=0x00007fabdd01fe78, dbname=\"postgres\", username=\"smagen\") at\npostgres.c:4295\n frame #25: 
0x000000010f9147a0\npostgres`BackendRun(port=0x00007fabde000320) at postmaster.c:4510\n frame #26: 0x000000010f913b9a\npostgres`BackendStartup(port=0x00007fabde000320) at postmaster.c:4202\n frame #27: 0x000000010f912aea postgres`ServerLoop at postmaster.c:1727\n frame #28: 0x000000010f9104fa postgres`PostmasterMain(argc=3,\nargv=0x00007fabdbd009b0) at postmaster.c:1400\n frame #29: 0x000000010f7fae19 postgres`main(argc=3,\nargv=0x00007fabdbd009b0) at main.c:210\n frame #30: 0x00007fff69069015 libdyld.dylib`start + 1\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 16 Mar 2020 14:54:43 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "On Tue, Mar 17, 2020 at 12:55 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Wed, Feb 5, 2020 at 7:59 AM Thomas Munro <tmunro@postgresql.org> wrote:\n> > Add kqueue(2) support to the WaitEventSet API.\n> >\n> > Use kevent(2) to wait for events on the BSD family of operating\n> > systems and macOS. This is similar to the epoll(2) support added\n> > for Linux by commit 98a64d0bd.\n>\n> I'm not sure if it was already reported in this thread (it seems to be\n> not at the first glance), but I've discovered following issue on macos\n> 10.13.6. If backend is under lldb and does XactLockTableWait(), then\n> it does proc_exit(1).\n\n/me digs out a Macintosh\n\nReproduced here. The problem seems to be that macOS's getppid()\nreturns the debugger's PID, while the debugger is attached. 
This\ndoesn't happen on FreeBSD (even though the debugger does internally\nbecome the parent, getppid() is careful to return the \"real\" parent\nPID so that user space doesn't notice this trickery; apparently Apple\nmade a different choice).\n\nThe getppid() check is there to close a vanishingly rare race\ncondition: when creating a WaitEventSet, we ask the kernel to tell us\nwhen the postmaster exits, but there is a possibility that the\npostmaster has already exited; normally that causes an error with\nerrno == ESRCH (no such PID, it's already gone), but another unrelated\nprocess might have started that has the same PID, so we check if our\nppid has changed after a successful return code. That's not going to\nwork under a debugger on this OS.\n\nLooking into some options.\n\n\n", "msg_date": "Tue, 17 Mar 2020 09:11:22 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "On 2020-Mar-17, Thomas Munro wrote:\n\n> Reproduced here. The problem seems to be that macOS's getppid()\n> returns the debugger's PID, while the debugger is attached. This\n> doesn't happen on FreeBSD (even though the debugger does internally\n> become the parent, getppid() is careful to return the \"real\" parent\n> PID so that user space doesn't notice this trickery; apparently Apple\n> made a different choice).\n\nWow ... 
Yeah, that was a known problem with FreeBSD, see\nhttps://postgr.es/m/1292851036-sup-5399@alvh.no-ip.org\nEvidently FreeBSD must have fixed it, but macOS has not caught up with\nthat ...\n\n> The getppid() check is there to close a vanishingly rare race\n> condition: when creating a WaitEventSet, we ask the kernel to tell us\n> when the postmaster exits, but there is a possibility that the\n> postmaster has already exited; normally that causes an error with\n> errno == ESRCH (no such PID, it's already gone), but another unrelated\n> process might have started that has the same PID, so we check if our\n> ppid has changed after a successful return code. That's not going to\n> work under a debugger on this OS.\n\nIrk.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 16 Mar 2020 17:30:40 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "On Tue, Mar 17, 2020 at 9:30 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Mar-17, Thomas Munro wrote:\n> > Reproduced here. The problem seems to be that macOS's getppid()\n> > returns the debugger's PID, while the debugger is attached. This\n> > doesn't happen on FreeBSD (even though the debugger does internally\n> > become the parent, getppid() is careful to return the \"real\" parent\n> > PID so that user space doesn't notice this trickery; apparently Apple\n> > made a different choice).\n>\n> Wow ... Yeah, that was a known problem with FreeBSD, see\n> https://postgr.es/m/1292851036-sup-5399@alvh.no-ip.org\n> Evidently FreeBSD must have fixed it, but macOS has not caught up with\n> that ...\n\nOh, interesting. 
Sorry to bring a variant of this problem back.\n\n> > The getppid() check is there to close a vanishingly rare race\n> > condition: when creating a WaitEventSet, we ask the kernel to tell us\n> > when the postmaster exits, but there is a possibility that the\n> > postmaster has already exited; normally that causes an error with\n> > errno == ESRCH (no such PID, it's already gone), but another unrelated\n> > process might have started that has the same PID, so we check if our\n> > ppid has changed after a successful return code. That's not going to\n> > work under a debugger on this OS.\n>\n> Irk.\n\nI'm now far away from my home Mac so I can't test until later but I\nthink we can fix this by double checking with the pipe:\n\n- else if (event->events == WL_POSTMASTER_DEATH && PostmasterPid != getppid())\n+ else if (event->events == WL_POSTMASTER_DEATH &&\n+ PostmasterPid != getppid() &&\n+ !PostmasterIsAliveInternal())\n+ {\n+ /*\n+ * The extra PostmasterIsAliveInternal() check prevents false alarms\n+ * from systems where getppid() returns a debugger PID while being\n+ * traced.\n+ */\n set->report_postmaster_not_running = true;\n+ }\n\nThe fast getppid() check will prevent the slow and redundant\nPostmasterIsAliveInternal() check from being reached on production\nsystems, until the postmaster really is gone in the race scenario\ndescribed.\n\n(Note that all of this per-lock-wait work will go away with\nhttps://commitfest.postgresql.org/27/2452/, so I'm glad Alexander\nfound this now).\n\n\n", "msg_date": "Tue, 17 Mar 2020 10:21:29 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." 
}, { "msg_contents": "On Tue, Mar 17, 2020 at 10:21 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I'm now far away from my home Mac so I can't test until later but I\n> think we can fix this by double checking with the pipe:\n\nPushed.\n\n\n", "msg_date": "Wed, 18 Mar 2020 13:07:10 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Pushed.\n\nprairiedog just turned up a different issue in this area [1].\nI wondered why it hadn't reported in for awhile, and upon\ninvestigation I found that the test run was stuck in the\nfinal pg_dump step of the pg_upgrade test. pg_dump itself\nwas waiting for a query result, while the connected backend\nwas sitting here:\n\n(gdb) bt\n#0 0x9002ec88 in kevent ()\n#1 0x0039cff8 in WaitEventSetWait (set=0x20c502c, timeout=-1, occurred_events=0xbfffdd4c, nevents=1, wait_event_info=100663296) at latch.c:1443\n#2 0x00261d98 in secure_read (port=0x2401ba0, ptr=0x713558, len=8192) at be-secure.c:184\n#3 0x00269d34 in pq_recvbuf () at pqcomm.c:950\n#4 0x00269e24 in pq_getbyte () at pqcomm.c:993\n#5 0x003cec2c in PostgresMain (argc=1, argv=0x38060ac, dbname=0x20c5154 \"regression\", username=0x20c5138 \"buildfarm\") at postgres.c:337\n#6 0x0032de0c in BackendStartup (port=0x2401ba0) at postmaster.c:4510\n#7 0x0032fcf8 in PostmasterMain (argc=1585338749, argv=0x5e7e59b9) at postmaster.c:1727\n#8 0x0026f32c in main (argc=6, argv=0x24009b0) at main.c:210\n\nIt'd appear that we dropped an input-is-available condition.\n\nNow prairiedog is running a museum-grade macOS release, so\nit's hardly impossible that this is a kernel bug not a\nPostgres bug. 
But we shouldn't jump to that conclusion,\neither, given that our kevent support is so new.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2020-03-27%2018%3A55%3A51\nThe log shows a SIGABRT trap, but that's because I manually did \"kill\n-ABRT\" to unblock the buildfarm animal.\n\n\n", "msg_date": "Sat, 28 Mar 2020 14:43:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "On Sun, Mar 29, 2020 at 7:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Pushed.\n>\n> prairiedog just turned up a different issue in this area [1].\n> I wondered why it hadn't reported in for awhile, and upon\n> investigation I found that the test run was stuck in the\n> final pg_dump step of the pg_upgrade test. pg_dump itself\n> was waiting for a query result, while the connected backend\n> was sitting here:\n>\n> (gdb) bt\n> #0 0x9002ec88 in kevent ()\n> #1 0x0039cff8 in WaitEventSetWait (set=0x20c502c, timeout=-1, occurred_events=0xbfffdd4c, nevents=1, wait_event_info=100663296) at latch.c:1443\n> #2 0x00261d98 in secure_read (port=0x2401ba0, ptr=0x713558, len=8192) at be-secure.c:184\n> #3 0x00269d34 in pq_recvbuf () at pqcomm.c:950\n> #4 0x00269e24 in pq_getbyte () at pqcomm.c:993\n> #5 0x003cec2c in PostgresMain (argc=1, argv=0x38060ac, dbname=0x20c5154 \"regression\", username=0x20c5138 \"buildfarm\") at postgres.c:337\n> #6 0x0032de0c in BackendStartup (port=0x2401ba0) at postmaster.c:4510\n> #7 0x0032fcf8 in PostmasterMain (argc=1585338749, argv=0x5e7e59b9) at postmaster.c:1727\n> #8 0x0026f32c in main (argc=6, argv=0x24009b0) at main.c:210\n>\n> It'd appear that we dropped an input-is-available condition.\n>\n> Now prairiedog is running a museum-grade macOS release, so\n> it's hardly impossible that this is a kernel bug not a\n> Postgres bug. 
But we shouldn't jump to that conclusion,\n> either, given that our kevent support is so new.\n\nMy first thought was that it might have been due to the EV_CLEAR flag\nproblem discussed elsewhere, but the failing build has commit 9b8aa092\nso that's not it.\n\nAbout the kernel bug hypothesis: I see that the libevent project\ndoesn't use kqueue on early macOS versions due to some bug that it\ntests for that apparently fails on 10.4/kernel 8.11 (what you have\nthere). Kqueue was added to macOS 10.3 (which pulled a bunch of code\nfrom FreeBSD 5 including this), so in 10.4 I suppose it was still\nsomewhat new. I also found a few other vague complaints about bugs\nfrom that era including some claims of missing events, but without\nconclusions. The kernel source is mirrored on github with change\nhistory[1], but without commit log messages or a public bug tracker\nit's practically impossible for a drive-by reader to figure out what\nwas broken and fixed. That seems like a bit of a wild dino-goose\nchase.\n\nHmm, I see that Remi also runs an ancient PowerPC Mac on macOS\n10.5/Darwin 9.8. His build farm animal \"locust\" hasn't reported in 22\ndays. Remi, is that animal down for other reasons, or could it be\nstuck like this?\n\nFurther evidence for a version-specific problem is that there are\nsurely many in our hacker community working on modern Macs, and I\nhaven't heard of any problems so far. Of course that doesn't rule\nanything out.\n\n[1] https://github.com/apple/darwin-xnu/blob/master/bsd/kern/kern_event.c\n\n\n", "msg_date": "Sun, 29 Mar 2020 11:25:12 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." }, { "msg_contents": "\n\n> On 28 Mar 2020, at 23:25, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> Hmm, I see that Remi also runs an ancient PowerPC Mac on macOS\n> 10.5/Darwin 9.8. His build farm animal \"locust\" hasn't reported in 22\n> days. 
Remi, is that animal down for other reasons, or could it be\n> stuck like this?\n\nHi,\n\nlocust is down, and due to circulation restrictions, I cannot access it for the moment. Sorry.\n\nRémi\n\n", "msg_date": "Mon, 30 Mar 2020 15:42:59 +0200", "msg_from": "Rémi Zara <remi_zara@mac.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add kqueue(2) support to the WaitEventSet API." } ]
[ { "msg_contents": "Hi,\n\nWhen I compiled PostgreSQL with -DLWLOCK_STATS and tried to check\nthe statistics of light-weight locks, I observed that more than one\nstatistics entry was output *for the same backend process and\nthe same lwlock*. For example, I got the following four statistics\nwhen I checked how the process with PID 81141 processed ProcArrayLock.\nThis is strange, and IMO only one statistics entry should be output for\nthe same backend process and lwlock.\n\n$ grep \"PID 81141 lwlock ProcArrayLock\" data/log/postgresql-2020-02-05_141842.log\nPID 81141 lwlock ProcArrayLock 0x111e87780: shacq 4000 exacq 0 blk 0 spindelay 0 dequeue self 0\nPID 81141 lwlock ProcArrayLock 0x111e87780: shacq 2 exacq 0 blk 0 spindelay 0 dequeue self 0\nPID 81141 lwlock ProcArrayLock 0x111e87780: shacq 6001 exacq 1 blk 0 spindelay 0 dequeue self 0\nPID 81141 lwlock ProcArrayLock 0x111e87780: shacq 5 exacq 1 blk 0 spindelay 0 dequeue self 0\n\nThe cause of this issue is that the key variable used for lwlock hash\ntable was not fully initialized. The key consists of two fields and\nthey are initialized as follows. But the following 4 bytes allocated\nfor the alignment were not initialized. So even if the same key was\nspecified, hash_search(HASH_ENTER) could not find the existing\nentry for that key and created a new one.\n\n\tkey.tranche = lock->tranche;\n\tkey.instance = lock;\n\nAttached is the patch fixing this issue by initializing the key\nvariable with zero. In the patched version, I confirmed that only one\nstatistics entry is output for the same process and the same lwlock.\nAlso this patch would reduce the volume of lwlock statistics\nvery much.\n\nThis issue was introduced by commit 3761fe3c20. 
So the patch needs\nto be back-patched to v10.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters", "msg_date": "Wed, 5 Feb 2020 14:43:49 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "A bug in LWLOCK_STATS" }, { "msg_contents": "On Wed, Feb 05, 2020 at 02:43:49PM +0900, Fujii Masao wrote:\n> Hi,\n> \n> When I compiled PostgreSQL with -DLWLOCK_STATS and tried to check\n> the statistics of light-weight locks, I observed that more than one\n> statistics entries were output *for the same backend process and\n> the same lwlock*. For example, I got the following four statistics\n> when I checked how the process with PID 81141 processed ProcArrayLock.\n> This is strange, and IMO only one statistics should be output for\n> the same backend process and lwlock.\n> \n> $ grep \"PID 81141 lwlock ProcArrayLock\" data/log/postgresql-2020-02-05_141842.log\n> PID 81141 lwlock ProcArrayLock 0x111e87780: shacq 4000 exacq 0 blk 0 spindelay 0 dequeue self 0\n> PID 81141 lwlock ProcArrayLock 0x111e87780: shacq 2 exacq 0 blk 0 spindelay 0 dequeue self 0\n> PID 81141 lwlock ProcArrayLock 0x111e87780: shacq 6001 exacq 1 blk 0 spindelay 0 dequeue self 0\n> PID 81141 lwlock ProcArrayLock 0x111e87780: shacq 5 exacq 1 blk 0 spindelay 0 dequeue self 0\n> \n> The cause of this issue is that the key variable used for lwlock hash\n> table was not fully initialized. The key consists of two fields and\n> they are initialized as follows. But the following 4 bytes allocated\n> for the alignment was not initialized. So even if the same key was\n> specified, hash_search(HASH_ENTER) could not find the existing\n> entry for that key and created new one.\n> \n> \tkey.tranche = lock->tranche;\n> \tkey.instance = lock;\n> \n> Attached is the patch fixing this issue by initializing the key\n> variable with zero. 
In the patched version, I confirmed that only one\n>> statistics is output for the same process and the same lwlock.\n>> Also this patch would reduce the volume of lwlock statistics\n>> very much.\n>> \n>> This issue was introduced by commit 3761fe3c20. So the patch needs\n>> to be back-patch to v10.\n\nGood catch! The patch looks good to me. Just in case, I looked at other users\nof HASH_BLOBS and AFAICT there are no other cases of keys that can contain padding\nbytes that aren't memset first.\n\n\n", "msg_date": "Wed, 5 Feb 2020 09:13:42 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A bug in LWLOCK_STATS" }, { "msg_contents": "At Wed, 5 Feb 2020 14:43:49 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> The cause of this issue is that the key variable used for lwlock hash\n> table was not fully initialized. The key consists of two fields and\n> they are initialized as follows. But the following 4 bytes allocated\n> for the alignment was not initialized. So even if the same key was\n> specified, hash_search(HASH_ENTER) could not find the existing\n> entry for that key and created new one.\n> \n> \tkey.tranche = lock->tranche;\n> \tkey.instance = lock;\n> \n> Attached is the patch fixing this issue by initializing the key\n> variable with zero. In the patched version, I confirmed that only one\n> statistics is output for the same process and the same lwlock.\n> Also this patch would reduce the volume of lwlock statistics\n> very much.\n\nNice catch. A brief grepping showed me no other instance of the\nsame issue. I found some composite hash key structs are used without\ninitialization but AFAIS they don't have padding before the last\nmember, or use strcmp or custom comparison functions. (I don't think\nwe have that in the regular paths, though..)\n\n> This issue was introduced by commit 3761fe3c20. 
So the patch needs\n> to be back-patch to v10.\n\nAgreed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 05 Feb 2020 17:25:18 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A bug in LWLOCK_STATS" }, { "msg_contents": "\n\nOn 2020/02/05 17:13, Julien Rouhaud wrote:\n> On Wed, Feb 05, 2020 at 02:43:49PM +0900, Fujii Masao wrote:\n>> Hi,\n>>\n>> When I compiled PostgreSQL with -DLWLOCK_STATS and tried to check\n>> the statistics of light-weight locks, I observed that more than one\n>> statistics entries were output *for the same backend process and\n>> the same lwlock*. For example, I got the following four statistics\n>> when I checked how the process with PID 81141 processed ProcArrayLock.\n>> This is strange, and IMO only one statistics should be output for\n>> the same backend process and lwlock.\n>>\n>> $ grep \"PID 81141 lwlock ProcArrayLock\" data/log/postgresql-2020-02-05_141842.log\n>> PID 81141 lwlock ProcArrayLock 0x111e87780: shacq 4000 exacq 0 blk 0 spindelay 0 dequeue self 0\n>> PID 81141 lwlock ProcArrayLock 0x111e87780: shacq 2 exacq 0 blk 0 spindelay 0 dequeue self 0\n>> PID 81141 lwlock ProcArrayLock 0x111e87780: shacq 6001 exacq 1 blk 0 spindelay 0 dequeue self 0\n>> PID 81141 lwlock ProcArrayLock 0x111e87780: shacq 5 exacq 1 blk 0 spindelay 0 dequeue self 0\n>>\n>> The cause of this issue is that the key variable used for lwlock hash\n>> table was not fully initialized. The key consists of two fields and\n>> they are initialized as follows. But the following 4 bytes allocated\n>> for the alignment was not initialized. So even if the same key was\n>> specified, hash_search(HASH_ENTER) could not find the existing\n>> entry for that key and created new one.\n>>\n>> \tkey.tranche = lock->tranche;\n>> \tkey.instance = lock;\n>>\n>> Attached is the patch fixing this issue by initializing the key\n>> variable with zero. 
In the patched version, I confirmed that only one\n>> statistics is output for the same process and the same lwlock.\n>> Also this patch would reduce the volume of lwlock statistics\n>> very much.\n>>\n>> This issue was introduced by commit 3761fe3c20. So the patch needs\n>> to be back-patch to v10.\n\nPushed.\n\n> Good catch! The patch looks good to me. Just in case I looked at other users\n> of HASH_BLOBS and AFAICT there's no other cases of key that can contain padding\n> bytes that aren't memset first.\n\nThanks Julien and Horiguchi-san for reviewing the patch\nand checking other cases!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 6 Feb 2020 14:49:44 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: A bug in LWLOCK_STATS" } ]
[ { "msg_contents": "I can't figure out why ExecGather/ExecGatherMerge do check whether num_workers\nis non-zero. I think the code would be a bit clearer if these tests were\nreplaced with Assert() statements, as the attached patch does.\n\nIn addition, if my assumptions are correct, I think that only Gather node\nneeds the single_copy field, but GatherPath does not.\n\nIn the patch I also added Assert() to add_partial_path so that I'm more likely\nto catch special cases. Regression tests passed though.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Wed, 05 Feb 2020 10:50:05 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Assumptions about the number of parallel workers" }, { "msg_contents": "Hi,\n\nOn 2020-02-05 10:50:05 +0100, Antonin Houska wrote:\n> I can't figure out why ExecGather/ExecGatherMerge do check whether num_workers\n> is non-zero. I think the code would be a bit clearer if these tests were\n> replaced with Assert() statements, as the attached patch does.\n\nIt's probably related to force_parallel_mode. With that we'll IIRC\ngenerate gather nodes even if num_workers == 0.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Feb 2020 18:50:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Assumptions about the number of parallel workers" }, { "msg_contents": "Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n> \n> On 2020-02-05 10:50:05 +0100, Antonin Houska wrote:\n> > I can't figure out why ExecGather/ExecGatherMerge do check whether num_workers\n> > is non-zero. I think the code would be a bit clearer if these tests were\n> > replaced with Assert() statements, as the attached patch does.\n> \n> It's probably related to force_parallel_mode. 
With that we'll IIRC\n> generate gather nodes even if num_workers == 0.\n\nThose Gather nodes still have non-zero num_workers, see this part of\nstandard_planner:\n\n if (force_parallel_mode != FORCE_PARALLEL_OFF && top_plan->parallel_safe)\n {\n ...\n gather->num_workers = 1;\n\tgather->single_copy = true;\n\nAlso, if it num_workers was zero for any reason, my patch would probably make\nregression tests fail. However I haven't noticed any assertion failure.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Fri, 07 Feb 2020 09:44:34 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Assumptions about the number of parallel workers" }, { "msg_contents": "On Wed, Feb 5, 2020 at 4:49 AM Antonin Houska <ah@cybertec.at> wrote:\n> I can't figure out why ExecGather/ExecGatherMerge do check whether num_workers\n> is non-zero. I think the code would be a bit clearer if these tests were\n> replaced with Assert() statements, as the attached patch does.\n\nHmm. There are some cases where we plan on using a Gather node but\nthen can't actually fire up parallelism because we run out of DSM\nsegments or we run out of background workers. But the Gather is just\npart of the plan, so it would still have num_workers > 0 in those\ncases. 
This might just have been a thinko on my part, but I'm not\ntotally sure.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Feb 2020 10:18:25 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assumptions about the number of parallel workers" }, { "msg_contents": "Hi,\n\nOn 2020-02-07 09:44:34 +0100, Antonin Houska wrote:\n> Those Gather nodes still have non-zero num_workers, see this part of\n> standard_planner:\n> \n> if (force_parallel_mode != FORCE_PARALLEL_OFF && top_plan->parallel_safe)\n> {\n> ...\n> gather->num_workers = 1;\n> \tgather->single_copy = true;\n\nIck. Looks like you might be right...\n\n\n> Also, if it num_workers was zero for any reason, my patch would probably make\n> regression tests fail. However I haven't noticed any assertion failure.\n\nThat however, is not at all guaranteed. The regression tests don't run\n(or at least not much) with force_parallel_mode set. You can try\nyourself though, with something like\n\nPGOPTIONS='-c force_parallel_mode=regress' make check\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Feb 2020 10:28:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Assumptions about the number of parallel workers" } ]
[ { "msg_contents": "Hi,\n\nUser can create database objects such as functions into pg_catalog.\nBut if I'm not missing something, currently there is no\nstraightforward way to identify if the object is a user created object\nor a system object which is created during initdb. If we can do that\nuser will be able to check if malicious functions are not created in\nthe database, which is important from the security perspective.\n\nI've attached PoC patch to introduce a SQL function\npg_is_user_object() that returns true if the given oid is user object\noid, that is greater than or equal to FirstNormalObjectId. Feedback is\nvery welcome.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 5 Feb 2020 20:26:27 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Identifying user-created objects" }, { "msg_contents": "On Wed, Feb 5, 2020 at 8:27 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> User can create database objects such as functions into pg_catalog.\n> But if I'm not missing something, currently there is no\n> straightforward way to identify if the object is a user created object\n> or a system object which is created during initdb. If we can do that\n> user will be able to check if malicious functions are not created in\n> the database, which is important from the security perspective.\n>\n> I've attached PoC patch to introduce a SQL function\n> pg_is_user_object() that returns true if the given oid is user object\n> oid, that is greater than or equal to FirstNormalObjectId. Feedback is\n> very welcome.\n\n+1.\n\nAbout the implementation, how about defining a static inline function,\nsay is_user_object(), next to FirstNormalObjectId's definition and\nmake pg_is_user_object() call it?
There are a few placed in the\nbackend code that perform the same computation as pg_is_user_object(),\nwhich could be changed to use is_user_object() instead.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 6 Feb 2020 16:25:47 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Thu, Feb 06, 2020 at 04:25:47PM +0900, Amit Langote wrote:\n> About the implementation, how about defining a static inline function,\n> say is_user_object(), next to FirstNormalObjectId's definition and\n> make pg_is_user_object() call it?
There are a few placed in the\n> > backend code that perform the same computation as pg_is_user_object(),\n> > which could be changed to use is_user_object() instead.\n>\n> FWIW, if we bother adding SQL functions for that, my first impression\n> was to have three functions, each one of them returning:\n> - FirstNormalObjectId\n> - FirstGenbkiObjectId\n> - FirstNormalObjectId\n\nDid you miss FirstBootstrapObjectId by any chance?\n\nI see the following ranges as defined in transam.h.\n\n1-(FirstGenbkiObjectId - 1): manually assigned OIDs\nFirstGenbkiObjectId-(FirstBootstrapObjectId - 1): genbki.pl assigned OIDs\nFirstBootstrapObjectId-(FirstNormalObjectId - 1): initdb requested\nFirstNormalObjectId or greater: user-defined objects\n\nSawada-san's proposal covers #4. Do we need an SQL function for the\nfirst three? IOW, would the distinction between OIDs belonging to the\nfirst three ranges be of interest to anyone except core PG hackers?\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 6 Feb 2020 16:52:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Thu, Feb 6, 2020 at 8:53 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, Feb 6, 2020 at 4:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Thu, Feb 06, 2020 at 04:25:47PM +0900, Amit Langote wrote:\n> > > About the implementation, how about defining a static inline function,\n> > > say is_user_object(), next to FirstNormalObjectId's definition and\n> > > make pg_is_user_object() call it? 
There are a few placed in the\n> > > backend code that perform the same computation as pg_is_user_object(),\n> > > which could be changed to use is_user_object() instead.\n> >\n> > FWIW, if we bother adding SQL functions for that, my first impression\n> > was to have three functions, each one of them returning:\n> > - FirstNormalObjectId\n> > - FirstGenbkiObjectId\n> > - FirstNormalObjectId\n>\n> Did you miss FirstBootstrapObjectId by any chance?\n>\n> I see the following ranges as defined in transam.h.\n>\n> 1-(FirstGenbkiObjectId - 1): manually assigned OIDs\n> FirstGenbkiObjectId-(FirstBootstrapObjectId - 1): genbki.pl assigned OIDs\n> FirstBootstrapObjectId-(FirstNormalObjectId - 1): initdb requested\n> FirstNormalObjectId or greater: user-defined objects\n>\n> Sawada-san's proposal covers #4. Do we need an SQL function for the\n> first three? IOW, would the distinction between OIDs belonging to the\n> first three ranges be of interest to anyone except core PG hackers?\n\n+1 for #4, but I'm not sure that the other 3 are really interesting to\nhave at SQL level.\n\n\n", "msg_date": "Thu, 6 Feb 2020 08:59:09 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Thu, Feb 06, 2020 at 04:52:48PM +0900, Amit Langote wrote:\n> On Thu, Feb 6, 2020 at 4:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> FWIW, if we bother adding SQL functions for that, my first impression\n>> was to have three functions, each one of them returning:\n>> - FirstNormalObjectId\n>> - FirstGenbkiObjectId\n>> - FirstNormalObjectId\n> \n> Did you miss FirstBootstrapObjectId by any chance?\n\nYep, incorrect copy-pasto.\n--\nMichael", "msg_date": "Thu, 6 Feb 2020 17:11:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Thu, 6 Feb 2020 at 16:53, Amit Langote 
<amitlangote09@gmail.com> wrote:\n>\n> On Thu, Feb 6, 2020 at 4:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Thu, Feb 06, 2020 at 04:25:47PM +0900, Amit Langote wrote:\n> > > About the implementation, how about defining a static inline function,\n> > > say is_user_object(), next to FirstNormalObjectId's definition and\n> > > make pg_is_user_object() call it? There are a few placed in the\n> > > backend code that perform the same computation as pg_is_user_object(),\n> > > which could be changed to use is_user_object() instead.\n> >\n> > FWIW, if we bother adding SQL functions for that, my first impression\n> > was to have three functions, each one of them returning:\n> > - FirstNormalObjectId\n> > - FirstGenbkiObjectId\n> > - FirstNormalObjectId\n>\n> Did you miss FirstBootstrapObjectId by any chance?\n>\n> I see the following ranges as defined in transam.h.\n>\n> 1-(FirstGenbkiObjectId - 1): manually assigned OIDs\n> FirstGenbkiObjectId-(FirstBootstrapObjectId - 1): genbki.pl assigned OIDs\n> FirstBootstrapObjectId-(FirstNormalObjectId - 1): initdb requested\n> FirstNormalObjectId or greater: user-defined objects\n>\n> Sawada-san's proposal covers #4. Do we need an SQL function for the\n> first three? IOW, would the distinction between OIDs belonging to the\n> first three ranges be of interest to anyone except core PG hackers?\n\nYeah I thought of these three values but I'm also not sure it's worth for users.\n\nIf we have these functions returning the values respectively, when we\nwant to check if an oid is assigned during initdb we will end up with\ndoing something like 'WHERE oid >= pg_first_bootstrap_oid() and oid <\npg_first_normal_oid()', which is not intuitive, I think. 
Users have to\nremember the order of these values.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 6 Feb 2020 17:18:59 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Thu, 6 Feb 2020 at 17:18, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 6 Feb 2020 at 16:53, Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > On Thu, Feb 6, 2020 at 4:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > > On Thu, Feb 06, 2020 at 04:25:47PM +0900, Amit Langote wrote:\n> > > > About the implementation, how about defining a static inline function,\n> > > > say is_user_object(), next to FirstNormalObjectId's definition and\n> > > > make pg_is_user_object() call it? There are a few placed in the\n> > > > backend code that perform the same computation as pg_is_user_object(),\n> > > > which could be changed to use is_user_object() instead.\n> > >\n> > > FWIW, if we bother adding SQL functions for that, my first impression\n> > > was to have three functions, each one of them returning:\n> > > - FirstNormalObjectId\n> > > - FirstGenbkiObjectId\n> > > - FirstNormalObjectId\n> >\n> > Did you miss FirstBootstrapObjectId by any chance?\n> >\n> > I see the following ranges as defined in transam.h.\n> >\n> > 1-(FirstGenbkiObjectId - 1): manually assigned OIDs\n> > FirstGenbkiObjectId-(FirstBootstrapObjectId - 1): genbki.pl assigned OIDs\n> > FirstBootstrapObjectId-(FirstNormalObjectId - 1): initdb requested\n> > FirstNormalObjectId or greater: user-defined objects\n> >\n> > Sawada-san's proposal covers #4. Do we need an SQL function for the\n> > first three? 
IOW, would the distinction between OIDs belonging to the\n> > first three ranges be of interest to anyone except core PG hackers?\n>\n> Yeah I thought of these three values but I'm also not sure it's worth for users.\n>\n> If we have these functions returning the values respectively, when we\n> want to check if an oid is assigned during initdb we will end up with\n> doing something like 'WHERE oid >= pg_first_bootstrap_oid() and oid <\n> pg_first_normal_oid()', which is not intuitive, I think. Users have to\n> remember the order of these values.\n>\n\nAttached the updated version patch that includes regression tests. And\nI have registered this to the next commit fest.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 10 Feb 2020 12:24:31 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "Sawada-san,\n\nOn Mon, Feb 10, 2020 at 12:25 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> > > > On Thu, Feb 06, 2020 at 04:25:47PM +0900, Amit Langote wrote:\n> > > > > About the implementation, how about defining a static inline function,\n> > > > > say is_user_object(), next to FirstNormalObjectId's definition and\n> > > > > make pg_is_user_object() call it? There are a few placed in the\n> > > > > backend code that perform the same computation as pg_is_user_object(),\n> > > > > which could be changed to use is_user_object() instead.\n>\n> Attached the updated version patch that includes regression tests. 
And\n> I have registered this to the next commit fest.\n\nThank you.\n\nAny thoughts on the above suggestion?\n\nRegards,\nAmit\n\n\n", "msg_date": "Mon, 10 Feb 2020 12:54:10 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Mon, 10 Feb 2020 at 12:54, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Sawada-san,\n>\n> On Mon, Feb 10, 2020 at 12:25 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > > On Thu, Feb 06, 2020 at 04:25:47PM +0900, Amit Langote wrote:\n> > > > > > About the implementation, how about defining a static inline function,\n> > > > > > say is_user_object(), next to FirstNormalObjectId's definition and\n> > > > > > make pg_is_user_object() call it? There are a few placed in the\n> > > > > > backend code that perform the same computation as pg_is_user_object(),\n> > > > > > which could be changed to use is_user_object() instead.\n> >\n> > Attached the updated version patch that includes regression tests. And\n> > I have registered this to the next commit fest.\n>\n> Thank you.\n>\n> Any thoughts on the above suggestion?\n\nOops, I had overlooked it.
I agree with you.\n\nHow about having it as a macro like:\n\n#define ObjectIdIsUserObject(oid) ((Oid)(oid) >= FirstNormalObjectId)\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 10 Feb 2020 13:06:23 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Mon, Feb 10, 2020 at 1:06 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> On Mon, 10 Feb 2020 at 12:54, Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > Sawada-san,\n> >\n> > On Mon, Feb 10, 2020 at 12:25 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > > > On Thu, Feb 06, 2020 at 04:25:47PM +0900, Amit Langote wrote:\n> > > > > > > About the implementation, how about defining a static inline function,\n> > > > > > > say is_user_object(), next to FirstNormalObjectId's definition and\n> > > > > > > make pg_is_user_object() call it? There are a few placed in the\n> > > > > > > backend code that perform the same computation as pg_is_user_object(),\n> > > > > > > which could be changed to use is_user_object() instead.\n> > >\n> > > Attached the updated version patch that includes regression tests. And\n> > > I have registered this to the next commit fest.\n> >\n> > Thank you.\n> >\n> > Any thoughts on the above suggestion?\n>\n> Oops, I had overlooked it.
I agree with you.\n>\n> How about having it as a macro like:\n>\n> #define ObjectIdIsUserObject(oid) ((Oid)(oid) >= FirstNormalObjectId)\n\nI'm fine with a macro.\n\nThanks,\nAmit\n\n\n", "msg_date": "Mon, 10 Feb 2020 13:16:30 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Mon, Feb 10, 2020 at 01:16:30PM +0900, Amit Langote wrote:\n> On Mon, Feb 10, 2020 at 1:06 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n>> How about having it as a macro like:\n>>\n>> #define ObjectIdIsUserObject(oid) ((Oid)(oid) >= FirstNormalObjectId)\n> \n> I'm fine with a macro.\n\nI am not sure that it is worth having one extra abstraction layer for\nthat.\n--\nMichael", "msg_date": "Mon, 10 Feb 2020 14:09:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Mon, 10 Feb 2020 at 14:09, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Feb 10, 2020 at 01:16:30PM +0900, Amit Langote wrote:\n> > On Mon, Feb 10, 2020 at 1:06 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> >> How about having it as a macro like:\n> >>\n> >> #define ObjectIdIsUserObject(oid) ((Oid)(oid) >= FirstNormalObjectId)\n> >\n> > I'm fine with a macro.\n>\n> I am not sure that it is worth having one extra abstraction layer for\n> that.\n\nHmm I'm not going to insist on that but I thought that it could\nsomewhat improve readability at places where they already compares an\noid to FirstNormalObjectId as Amit mentioned:\n\nsrc/backend/catalog/pg_publication.c: relid >= FirstNormalObjectId;\nsrc/backend/utils/adt/json.c: if (typoid >= FirstNormalObjectId)\nsrc/backend/utils/adt/jsonb.c: if (typoid >= FirstNormalObjectId)\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & 
Services\n\n\n", "msg_date": "Mon, 10 Feb 2020 14:22:54 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Mon, Feb 10, 2020 at 2:23 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> On Mon, 10 Feb 2020 at 14:09, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Mon, Feb 10, 2020 at 01:16:30PM +0900, Amit Langote wrote:\n> > > On Mon, Feb 10, 2020 at 1:06 PM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >> How about having it as a macro like:\n> > >>\n> > >> #define ObjectIdIsUserObject(oid) ((Oid)(oid) >= FirstNormalObjectId)\n> > >\n> > > I'm fine with a macro.\n> >\n> > I am not sure that it is worth having one extra abstraction layer for\n> > that.\n>\n> Hmm I'm not going to insist on that but I thought that it could\n> somewhat improve readability at places where they already compares an\n> oid to FirstNormalObjectId as Amit mentioned:\n>\n> src/backend/catalog/pg_publication.c: relid >= FirstNormalObjectId;\n> src/backend/utils/adt/json.c: if (typoid >= FirstNormalObjectId)\n> src/backend/utils/adt/jsonb.c: if (typoid >= FirstNormalObjectId)\n\nAgree that ObjectIsUserObject(oid) is easier to read than oid >=\nFirstNormalObject.
I would have not bothered, for example, if it was\nsomething like oid >= FirstUserObjectId to begin with.\n\nThanks,\nAmit\n\n\n", "msg_date": "Mon, 10 Feb 2020 14:32:44 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "At Mon, 10 Feb 2020 14:32:44 +0900, Amit Langote <amitlangote09@gmail.com> wrote in \n> On Mon, Feb 10, 2020 at 2:23 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > On Mon, 10 Feb 2020 at 14:09, Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Mon, Feb 10, 2020 at 01:16:30PM +0900, Amit Langote wrote:\n> > > > On Mon, Feb 10, 2020 at 1:06 PM Masahiko Sawada\n> > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >> How about having it as a macro like:\n> > > >>\n> > > >> #define ObjectIdIsUserObject(oid) ((Oid)(oid) >= FirstNormalObjectId)\n> > > >\n> > > > I'm fine with a macro.\n> > >\n> > > I am not sure that it is worth having one extra abstraction layer for\n> > > that.\n> >\n> > Hmm I'm not going to insist on that but I thought that it could\n> > somewhat improve readability at places where they already compares an\n> > oid to FirstNormalObjectId as Amit mentioned:\n> >\n> > src/backend/catalog/pg_publication.c: relid >= FirstNormalObjectId;\n> > src/backend/utils/adt/json.c: if (typoid >= FirstNormalObjectId)\n> > src/backend/utils/adt/jsonb.c: if (typoid >= FirstNormalObjectId)\n> \n> Agree that ObjectIsUserObject(oid) is easier to read than oid >=\n> FirstNormalObject. I would have not bothered, for example, if it was\n> something like oid >= FirstUserObjectId to begin with.\n\nAside from the naming, I'm not sure it's sensible to use\nFirstNormalObjectId since I don't see a clear definition or required\ncharacteristics for \"user created objects\" is. 
If we did CREATE\nTABLE, FUNCTION or maybe any objects during single-user mode before\nthe first object is created during normal multiuser operation, the\n\"user-created(or not?)\" object has an OID less than\nFirstNormalObjectId. If such objects are the \"user created object\", we\nneed FirstUserObjectId defferent from FirstNormalObjectId.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 13 Feb 2020 10:28:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Thu, Feb 13, 2020 at 10:30 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Mon, 10 Feb 2020 14:32:44 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n> > Agree that ObjectIsUserObject(oid) is easier to read than oid >=\n> > FirstNormalObject. I would have not bothered, for example, if it was\n> > something like oid >= FirstUserObjectId to begin with.\n>\n> Aside from the naming, I'm not sure it's sensible to use\n> FirstNormalObjectId since I don't see a clear definition or required\n> characteristics for \"user created objects\" is. If we did CREATE\n> TABLE, FUNCTION or maybe any objects during single-user mode before\n> the first object is created during normal multiuser operation, the\n> \"user-created(or not?)\" object has an OID less than\n> FirstNormalObjectId. If such objects are the \"user created object\", we\n> need FirstUserObjectId defferent from FirstNormalObjectId.\n\nInteresting observation. 
Connecting to database in --single mode,\nwhether done using initdb or directly, is always considered\n\"bootstrapping\", so the OIDs from the bootstrapping range are\nconsumed.\n\n$ postgres --single -D pgdata postgres\n\nPostgreSQL stand-alone backend 13devel\nbackend> create table a (a int);\nbackend> select 'a'::regclass::oid;\n 1: oid (typeid = 26, len = 4, typmod = -1, byval = t)\n ----\n 1: oid = \"14168\" (typeid = 26, len = 4, typmod = -1, byval = t)\n\nHere, FirstBootstrapObjectId < 14168 < FirstNormalObjectId\n\nMaybe we could document that pg_is_user_object() and its internal\ncounterpart returns true only for objects that are created during\n\"normal\" multi-user database operation.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 13 Feb 2020 16:31:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Thu, Feb 13, 2020 at 8:32 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, Feb 13, 2020 at 10:30 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Mon, 10 Feb 2020 14:32:44 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n> > > Agree that ObjectIsUserObject(oid) is easier to read than oid >=\n> > > FirstNormalObject. I would have not bothered, for example, if it was\n> > > something like oid >= FirstUserObjectId to begin with.\n> >\n> > Aside from the naming, I'm not sure it's sensible to use\n> > FirstNormalObjectId since I don't see a clear definition or required\n> > characteristics for \"user created objects\" is. If we did CREATE\n> > TABLE, FUNCTION or maybe any objects during single-user mode before\n> > the first object is created during normal multiuser operation, the\n> > \"user-created(or not?)\" object has an OID less than\n> > FirstNormalObjectId. If such objects are the \"user created object\", we\n> > need FirstUserObjectId defferent from FirstNormalObjectId.\n>\n> Interesting observation. 
Connecting to database in --single mode,\n> whether done using initdb or directly, is always considered\n> \"bootstrapping\", so the OIDs from the bootstrapping range are\n> consumed.\n>\n> $ postgres --single -D pgdata postgres\n>\n> PostgreSQL stand-alone backend 13devel\n> backend> create table a (a int);\n> backend> select 'a'::regclass::oid;\n> 1: oid (typeid = 26, len = 4, typmod = -1, byval = t)\n> ----\n> 1: oid = \"14168\" (typeid = 26, len = 4, typmod = -1, byval = t)\n>\n> Here, FirstBootstrapObjectId < 14168 < FirstNormalObjectId\n\nFTR it's also possible to get the same result using binary mode and\nbinary_upgrade_set_next_XXX functions.\n\n> Maybe we could document that pg_is_user_object() and its internal\n> counterpart returns true only for objects that are created during\n> \"normal\" multi-user database operation.\n\n+1\n\n\n", "msg_date": "Thu, 13 Feb 2020 09:15:02 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Thu, 13 Feb 2020 at 17:13, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Feb 13, 2020 at 8:32 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > On Thu, Feb 13, 2020 at 10:30 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > At Mon, 10 Feb 2020 14:32:44 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n> > > > Agree that ObjectIsUserObject(oid) is easier to read than oid >=\n> > > > FirstNormalObject. I would have not bothered, for example, if it was\n> > > > something like oid >= FirstUserObjectId to begin with.\n> > >\n> > > Aside from the naming, I'm not sure it's sensible to use\n> > > FirstNormalObjectId since I don't see a clear definition or required\n> > > characteristics for \"user created objects\" is. 
If we did CREATE\n> > > TABLE, FUNCTION or maybe any objects during single-user mode before\n> > > the first object is created during normal multiuser operation, the\n> > > \"user-created(or not?)\" object has an OID less than\n> > > FirstNormalObjectId. If such objects are the \"user created object\", we\n> > > need FirstUserObjectId defferent from FirstNormalObjectId.\n> >\n> > Interesting observation. Connecting to database in --single mode,\n> > whether done using initdb or directly, is always considered\n> > \"bootstrapping\", so the OIDs from the bootstrapping range are\n> > consumed.\n> >\n> > $ postgres --single -D pgdata postgres\n> >\n> > PostgreSQL stand-alone backend 13devel\n> > backend> create table a (a int);\n> > backend> select 'a'::regclass::oid;\n> > 1: oid (typeid = 26, len = 4, typmod = -1, byval = t)\n> > ----\n> > 1: oid = \"14168\" (typeid = 26, len = 4, typmod = -1, byval = t)\n> >\n> > Here, FirstBootstrapObjectId < 14168 < FirstNormalObjectId\n>\n> FTR it's also possible to get the same result using binary mode and\n> binary_upgrade_set_next_XXX functions.\n>\n> > Maybe we could document that pg_is_user_object() and its internal\n> > counterpart returns true only for objects that are created during\n> > \"normal\" multi-user database operation.\n>\n> +1\n\nAgreed.\n\nAttached updated version patch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 26 Feb 2020 16:47:56 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Wed, Feb 26, 2020 at 4:48 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> On Thu, 13 Feb 2020 at 17:13, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > On Thu, Feb 13, 2020 at 8:32 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > Maybe we could document that pg_is_user_object() 
and its internal\n> > > counterpart returns true only for objects that are created during\n> > > \"normal\" multi-user database operation.\n> >\n> > +1\n>\n> Agreed.\n>\n> Attached updated version patch.\n\nThanks for updating the patch. Some comments:\n\n+ <row>\n+ <entry><literal><function>pg_is_user_object(<parameter>oid</parameter>)</function></literal></entry>\n+ <entry><type>bool</type></entry>\n+ <entry>\n+ true if <parameter>oid</parameter> is the object which is\ncreated during\n+ normal multi-user database operation.\n+ </entry>\n+ </row>\n\nHow about clarifying the description further as follows:\n\n\"true for objects created while database is operating in normal\nmulti-user mode, as opposed to single-user mode (see <xref\nlinkend=\"app-postgres\"/>).\"\n\nTerm \"multi-user operation\" is not mentioned elsewhere in the\ndocumentation, so better to clarify what it means.\n\nAlso, maybe a minor nitpick, but how about adding the new function's\nrow at the end of the table (Table 9.72) instead of in the middle?\n\nOther than that, patch looks to be in pretty good shape.\n\nThanks,\nAmit\n\n\n", "msg_date": "Fri, 28 Feb 2020 14:28:26 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Wed, Feb 26, 2020 at 1:18 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 13 Feb 2020 at 17:13, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Thu, Feb 13, 2020 at 8:32 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > >\n> > > On Thu, Feb 13, 2020 at 10:30 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > > At Mon, 10 Feb 2020 14:32:44 +0900, Amit Langote <amitlangote09@gmail.com> wrote in\n> > > > > Agree that ObjectIsUserObject(oid) is easier to read than oid >=\n> > > > > FirstNormalObject. 
I would have not bothered, for example, if it was\n> > > > > something like oid >= FirstUserObjectId to begin with.\n> > > >\n> > > > Aside from the naming, I'm not sure it's sensible to use\n> > > > FirstNormalObjectId since I don't see a clear definition or required\n> > > > characteristics for \"user created objects\" is. If we did CREATE\n> > > > TABLE, FUNCTION or maybe any objects during single-user mode before\n> > > > the first object is created during normal multiuser operation, the\n> > > > \"user-created(or not?)\" object has an OID less than\n> > > > FirstNormalObjectId. If such objects are the \"user created object\", we\n> > > > need FirstUserObjectId defferent from FirstNormalObjectId.\n> > >\n> > > Interesting observation. Connecting to database in --single mode,\n> > > whether done using initdb or directly, is always considered\n> > > \"bootstrapping\", so the OIDs from the bootstrapping range are\n> > > consumed.\n> > >\n> > > $ postgres --single -D pgdata postgres\n> > >\n> > > PostgreSQL stand-alone backend 13devel\n> > > backend> create table a (a int);\n> > > backend> select 'a'::regclass::oid;\n> > > 1: oid (typeid = 26, len = 4, typmod = -1, byval = t)\n> > > ----\n> > > 1: oid = \"14168\" (typeid = 26, len = 4, typmod = -1, byval = t)\n> > >\n> > > Here, FirstBootstrapObjectId < 14168 < FirstNormalObjectId\n> >\n> > FTR it's also possible to get the same result using binary mode and\n> > binary_upgrade_set_next_XXX functions.\n> >\n> > > Maybe we could document that pg_is_user_object() and its internal\n> > > counterpart returns true only for objects that are created during\n> > > \"normal\" multi-user database operation.\n> >\n> > +1\n>\n> Agreed.\n>\n> Attached updated version patch.\n>\n\nShould we add some check if object exists or not here:\n+Datum\n+pg_is_user_object(PG_FUNCTION_ARGS)\n+{\n+ Oid oid = PG_GETARG_OID(0);\n+\n+ PG_RETURN_BOOL(ObjectIsUserObject(oid));\n+}\n\nI was trying some scenarios where we pass an object which does 
not exist:\npostgres=# SELECT pg_is_user_object(0);\n pg_is_user_object\n-------------------\n f\n(1 row)\npostgres=# SELECT pg_is_user_object(222222);\n pg_is_user_object\n-------------------\n t\n(1 row)\nSELECT pg_is_user_object('pg_class1'::regclass);\nERROR: relation \"pg_class1\" does not exist\nLINE 1: SELECT pg_is_user_object('pg_class1'::regclass);\n ^\nI felt this behavior seems slightly inconsistent.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 3 Mar 2020 20:03:15 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Tue, 3 Mar 2020 at 23:33, vignesh C <vignesh21@gmail.com> wrote:\n>\n> Should we add some check if object exists or not here:\n> +Datum\n> +pg_is_user_object(PG_FUNCTION_ARGS)\n> +{\n> + Oid oid = PG_GETARG_OID(0);\n> +\n> + PG_RETURN_BOOL(ObjectIsUserObject(oid));\n> +}\n>\n> I was trying some scenarios where we pass an object which does not exist:\n> postgres=# SELECT pg_is_user_object(0);\n> pg_is_user_object\n> -------------------\n> f\n> (1 row)\n> postgres=# SELECT pg_is_user_object(222222);\n> pg_is_user_object\n> -------------------\n> t\n> (1 row)\n> SELECT pg_is_user_object('pg_class1'::regclass);\n> ERROR: relation \"pg_class1\" does not exist\n> LINE 1: SELECT pg_is_user_object('pg_class1'::regclass);\n> ^\n> I felt this behavior seems slightly inconsistent.\n> Thoughts?\n>\n\nHmm, I'm not sure we should add an existence check in that function. The main use\ncase would be passing an oid of a tuple of a system catalog to that\nfunction to check if the given object was created in multi-user\nmode. So I think this function can assume that the given object id\nexists. 
And if we want to do that check, we would end up checking\nwhether an object with that oid exists in all system catalogs, which would be\nvery costly, I think.\n\nI suspect perhaps the function name pg_is_user_object led to that\nconfusion. That name looks like it checks if the given 'object' was\ncreated in multi-user mode. So maybe we can improve it either by\nrenaming it to pg_is_user_object_id (or pg_is_user_oid?) or by leaving the\nname but describing this in the docs (based on Amit's suggestion in a previous\nmail):\n\n\"true for oids of objects assigned while the database is operating in\nnormal multi-user mode, as opposed to single-user mode (see\n<xref linkend=\"app-postgres\"/>).\"\n\nWhat do you think?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 4 Mar 2020 12:31:46 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Wed, Mar 4, 2020 at 9:02 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 3 Mar 2020 at 23:33, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Should we add some check if object exists or not here:\n> > +Datum\n> > +pg_is_user_object(PG_FUNCTION_ARGS)\n> > +{\n> > + Oid oid = PG_GETARG_OID(0);\n> > +\n> > + PG_RETURN_BOOL(ObjectIsUserObject(oid));\n> > +}\n> >\n> > I was trying some scenarios where we pass an object which does not exist:\n> > postgres=# SELECT pg_is_user_object(0);\n> > pg_is_user_object\n> > -------------------\n> > f\n> > (1 row)\n> > postgres=# SELECT pg_is_user_object(222222);\n> > pg_is_user_object\n> > -------------------\n> > t\n> > (1 row)\n> > SELECT pg_is_user_object('pg_class1'::regclass);\n> > ERROR: relation \"pg_class1\" does not exist\n> > LINE 1: SELECT pg_is_user_object('pg_class1'::regclass);\n> > ^\n> > I felt these behavior seems to be slightly inconsistent.\n> > Thoughts?\n> 
>\n>\n> Hmm I'm not sure we should existing check in that function. Main use\n> case would be passing an oid of a tuple of a system catalog to that\n> function to check if the given object was created while multi-user\n> mode. So I think this function can assume that the given object id\n> exists. And if we want to do that check, we will end up with checking\n> if the object having that oid in all system catalogs, which is very\n> high cost I think.\n>\n> I suspect perhaps the function name pg_is_user_object led that\n> confusion. That name looks like it checks if the given 'object' is\n> created while multi-user mode. So maybe we can improve it either by\n> renaming to pg_is_user_object_id (or pg_is_user_oid?) or leaving the\n> name but describing in the doc (based on Amit's suggestion in previous\n> mail):\n\nI liked pg_is_user_oid over pg_is_user_object_id but this liking may\nvary from person to person, so I'm still ok if you don't change the\nname. I'm fine about adding the information in the document unless\nsomeone else feels that this check is required in this function.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Mar 2020 11:58:13 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "\n\nOn 2020/02/05 20:26, Masahiko Sawada wrote:\n> Hi,\n> \n> User can create database objects such as functions into pg_catalog.\n> But if I'm not missing something, currently there is no\n> straightforward way to identify if the object is a user created object\n> or a system object which is created during initdb. If we can do that\n> user will be able to check if malicious functions are not created in\n> the database, which is important from the security perspective.\n\nThe function that you are proposing is really enough for this use case?\nWhat if malicious users directly change the oid of function\nto < FirstNormalObjectId? 
Or you're assuming that malicious users will\nnever log in as superuser and not be able to change the oid?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Wed, 4 Mar 2020 16:43:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Wed, 4 Mar 2020 at 16:43, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/02/05 20:26, Masahiko Sawada wrote:\n> > Hi,\n> >\n> > User can create database objects such as functions into pg_catalog.\n> > But if I'm not missing something, currently there is no\n> > straightforward way to identify if the object is a user created object\n> > or a system object which is created during initdb. If we can do that\n> > user will be able to check if malicious functions are not created in\n> > the database, which is important from the security perspective.\n>\n> The function that you are proposing is really enough for this use case?\n> What if malicious users directly change the oid of function\n> to < FirstNormalObjectId? Or you're assuming that malicious users will\n> never log in as superuser and not be able to change the oid?\n\nThat's a good point! I'm surprised that a user is allowed to update the\noid of a database object. In addition, surprisingly we can update it to\n0, which in turn leads to an assertion failure:\n\nTRAP: BadArgument(\"OidIsValid(relid)\", File: \"autovacuum.c\", Line: 2990)\n\nAs you pointed out, it's not enough as long as users can manually\nupdate the oid to < FirstNormalObjectId. 
But I wonder if we should rather\nforbid that.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 4 Mar 2020 17:05:25 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "\n\nOn 2020/03/04 17:05, Masahiko Sawada wrote:\n> On Wed, 4 Mar 2020 at 16:43, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/02/05 20:26, Masahiko Sawada wrote:\n>>> Hi,\n>>>\n>>> User can create database objects such as functions into pg_catalog.\n>>> But if I'm not missing something, currently there is no\n>>> straightforward way to identify if the object is a user created object\n>>> or a system object which is created during initdb. If we can do that\n>>> user will be able to check if malicious functions are not created in\n>>> the database, which is important from the security perspective.\n>>\n>> The function that you are proposing is really enough for this use case?\n>> What if malicious users directly change the oid of function\n>> to < FirstNormalObjectId? Or you're assuming that malicious users will\n>> never log in as superuser and not be able to change the oid?\n> \n> That's a good point! I'm surprised that user is allowed to update an\n> oid of database object. In addition, surprisingly we can update it to\n> 0, which in turn leads the assertion failure:\n\nSince non-superusers are not allowed to do that by default,\nthat's not so bad? 
That is, to avoid such unexpected change of oid,\nadmin just should prevent malicious users from logging in as superusers\nand not give the permission on system catalogs to such users.\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Wed, 4 Mar 2020 18:02:17 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Wed, 4 Mar 2020 at 18:02, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/04 17:05, Masahiko Sawada wrote:\n> > On Wed, 4 Mar 2020 at 16:43, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/02/05 20:26, Masahiko Sawada wrote:\n> >>> Hi,\n> >>>\n> >>> User can create database objects such as functions into pg_catalog.\n> >>> But if I'm not missing something, currently there is no\n> >>> straightforward way to identify if the object is a user created object\n> >>> or a system object which is created during initdb. If we can do that\n> >>> user will be able to check if malicious functions are not created in\n> >>> the database, which is important from the security perspective.\n> >>\n> >> The function that you are proposing is really enough for this use case?\n> >> What if malicious users directly change the oid of function\n> >> to < FirstNormalObjectId? Or you're assuming that malicious users will\n> >> never log in as superuser and not be able to change the oid?\n> >\n> > That's a good point! I'm surprised that user is allowed to update an\n> > oid of database object. In addition, surprisingly we can update it to\n> > 0, which in turn leads the assertion failure:\n>\n> Since non-superusers are not allowed to do that by default,\n> that's not so bad? 
That is, to avoid such unexpected change of oid,\n> admin just should prevent malicious users from logging in as superusers\n> and not give the permission on system catalogs to such users.\n>\n\nI think there are still insider threats. As long as we depend on\nsuperuser privilege to do some DBA work, a malicious DBA might be able\nto log in as superuser and modify the oid.\n\nThis behavior was introduced in PG12, where we made the oid column\na non-system column. A table having oid = 0 is shown in pg_class but we\ncannot drop it.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 4 Mar 2020 18:36:36 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "\n\nOn 2020/03/04 18:36, Masahiko Sawada wrote:\n> On Wed, 4 Mar 2020 at 18:02, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/04 17:05, Masahiko Sawada wrote:\n>>> On Wed, 4 Mar 2020 at 16:43, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2020/02/05 20:26, Masahiko Sawada wrote:\n>>>>> Hi,\n>>>>>\n>>>>> User can create database objects such as functions into pg_catalog.\n>>>>> But if I'm not missing something, currently there is no\n>>>>> straightforward way to identify if the object is a user created object\n>>>>> or a system object which is created during initdb. If we can do that\n>>>>> user will be able to check if malicious functions are not created in\n>>>>> the database, which is important from the security perspective.\n>>>>\n>>>> The function that you are proposing is really enough for this use case?\n>>>> What if malicious users directly change the oid of function\n>>>> to < FirstNormalObjectId? Or you're assuming that malicious users will\n>>>> never log in as superuser and not be able to change the oid?\n>>>\n>>> That's a good point! 
I'm surprised that user is allowed to update an\n>>> oid of database object. In addition, surprisingly we can update it to\n>>> 0, which in turn leads the assertion failure:\n>>\n>> Since non-superusers are not allowed to do that by default,\n>> that's not so bad? That is, to avoid such unexpected change of oid,\n>> admin just should prevent malicious users from logging in as superusers\n>> and not give the permission on system catalogs to such users.\n>>\n>\n> I think there is still insider threats. As long as we depend on\n> superuser privilege to do some DBA work, a malicious DBA might be able\n> to log in as superuser and modify oid.\n\nYes. But I'm sure that the DBA has already considered measures\nagainst such threats. Otherwise malicious users can do things\nfar more malicious than changing an oid.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Wed, 4 Mar 2020 18:57:00 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Wed, 4 Mar 2020 at 18:57, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/04 18:36, Masahiko Sawada wrote:\n> > On Wed, 4 Mar 2020 at 18:02, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/03/04 17:05, Masahiko Sawada wrote:\n> >>> On Wed, 4 Mar 2020 at 16:43, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>\n> >>>>\n> >>>>\n> >>>> On 2020/02/05 20:26, Masahiko Sawada wrote:\n> >>>>> Hi,\n> >>>>>\n> >>>>> User can create database objects such as functions into pg_catalog.\n> >>>>> But if I'm not missing something, currently there is no\n> >>>>> straightforward way to identify if the object is a user created object\n> >>>>> or a system object which is created during initdb. 
If we can do that\n> >>>>> user will be able to check if malicious functions are not created in\n> >>>>> the database, which is important from the security perspective.\n> >>>>\n> >>>> The function that you are proposing is really enough for this use case?\n> >>>> What if malicious users directly change the oid of function\n> >>>> to < FirstNormalObjectId? Or you're assuming that malicious users will\n> >>>> never log in as superuser and not be able to change the oid?\n> >>>\n> >>> That's a good point! I'm surprised that user is allowed to update an\n> >>> oid of database object. In addition, surprisingly we can update it to\n> >>> 0, which in turn leads the assertion failure:\n> >>\n> >> Since non-superusers are not allowed to do that by default,\n> >> that's not so bad? That is, to avoid such unexpected change of oid,\n> >> admin just should prevent malicious users from logging in as superusers\n> >> and not give the permission on system catalogs to such users.\n> >>\n> >\n> > I think there is still insider threats. As long as we depend on\n> > superuser privilege to do some DBA work, a malicious DBA might be able\n> > to log in as superuser and modify oid.\n>\n> Yes. But I'm sure that DBA has already considered the measures\n> againt such threads. Otherwise malicious users can do anything\n> more malicious rather than changing oid.\n\nAgreed. So that's not a serious problem in practice but we cannot say\nthe checking by pg_is_user_object() is totally enough for checking\nwhether malicious object exists or not. 
Is that right?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 4 Mar 2020 19:14:36 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Wed, 4 Mar 2020 at 15:28, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Mar 4, 2020 at 9:02 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 3 Mar 2020 at 23:33, vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Should we add some check if object exists or not here:\n> > > +Datum\n> > > +pg_is_user_object(PG_FUNCTION_ARGS)\n> > > +{\n> > > + Oid oid = PG_GETARG_OID(0);\n> > > +\n> > > + PG_RETURN_BOOL(ObjectIsUserObject(oid));\n> > > +}\n> > >\n> > > I was trying some scenarios where we pass an object which does not exist:\n> > > postgres=# SELECT pg_is_user_object(0);\n> > > pg_is_user_object\n> > > -------------------\n> > > f\n> > > (1 row)\n> > > postgres=# SELECT pg_is_user_object(222222);\n> > > pg_is_user_object\n> > > -------------------\n> > > t\n> > > (1 row)\n> > > SELECT pg_is_user_object('pg_class1'::regclass);\n> > > ERROR: relation \"pg_class1\" does not exist\n> > > LINE 1: SELECT pg_is_user_object('pg_class1'::regclass);\n> > > ^\n> > > I felt these behavior seems to be slightly inconsistent.\n> > > Thoughts?\n> > >\n> >\n> > Hmm I'm not sure we should existing check in that function. Main use\n> > case would be passing an oid of a tuple of a system catalog to that\n> > function to check if the given object was created while multi-user\n> > mode. So I think this function can assume that the given object id\n> > exists. 
And if we want to do that check, we will end up with checking\n> > if the object having that oid in all system catalogs, which is very\n> > high cost I think.\n> >\n> > I suspect perhaps the function name pg_is_user_object led that\n> > confusion. That name looks like it checks if the given 'object' is\n> > created while multi-user mode. So maybe we can improve it either by\n> > renaming to pg_is_user_object_id (or pg_is_user_oid?) or leaving the\n> > name but describing in the doc (based on Amit's suggestion in previous\n> > mail):\n>\n> I liked pg_is_user_oid over pg_is_user_object_id but this liking may\n> vary from person to person, so I'm still ok if you don't change the\n> name. I'm fine about adding the information in the document unless\n> someone else feels that this check is required in this function.\n>\n\nAttached updated patch that incorporated comments from Amit and Vignesh.\n\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 4 Mar 2020 20:06:48 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "\n\nOn 2020/03/04 19:14, Masahiko Sawada wrote:\n> On Wed, 4 Mar 2020 at 18:57, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/04 18:36, Masahiko Sawada wrote:\n>>> On Wed, 4 Mar 2020 at 18:02, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2020/03/04 17:05, Masahiko Sawada wrote:\n>>>>> On Wed, 4 Mar 2020 at 16:43, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On 2020/02/05 20:26, Masahiko Sawada wrote:\n>>>>>>> Hi,\n>>>>>>>\n>>>>>>> User can create database objects such as functions into pg_catalog.\n>>>>>>> But if I'm not missing something, currently there is no\n>>>>>>> straightforward way to identify if the object is a user created 
object\n>>>>>>> or a system object which is created during initdb. If we can do that\n>>>>>>> user will be able to check if malicious functions are not created in\n>>>>>>> the database, which is important from the security perspective.\n>>>>>>\n>>>>>> The function that you are proposing is really enough for this use case?\n>>>>>> What if malicious users directly change the oid of function\n>>>>>> to < FirstNormalObjectId? Or you're assuming that malicious users will\n>>>>>> never log in as superuser and not be able to change the oid?\n>>>>>\n>>>>> That's a good point! I'm surprised that user is allowed to update an\n>>>>> oid of database object. In addition, surprisingly we can update it to\n>>>>> 0, which in turn leads the assertion failure:\n>>>>\n>>>> Since non-superusers are not allowed to do that by default,\n>>>> that's not so bad? That is, to avoid such unexpected change of oid,\n>>>> admin just should prevent malicious users from logging in as\n>>>> superusers\n>>>> and not give the permission on system catalogs to such users.\n>>>>\n>>>\n>>> I think there is still insider threats. As long as we depend on\n>>> superuser privilege to do some DBA work, a malicious DBA might be able\n>>> to log in as superuser and modify oid.\n>>\n>> Yes. But I'm sure that DBA has already considered the measures\n>> againt such threads. Otherwise malicious users can do anything\n>> more malicious rather than changing oid.\n> Agreed. So that's not a serious problem in practice but we cannot say\n> the checking by pg_is_user_object() is totally enough for checking\n> whether malicious object exists or not. Is that right?\n\nYes.\n\nMy opinion is that, if malicious users are not allowed to log in\nas superusers and the admin gives no permission on the system\nschema/catalog to them, checking whether the object is defined\nunder pg_catalog schema or not is enough for your purpose.\nBecause they are also not allowed to create the object under\npg_catalog. 
pg_is_user_object() seems not necessary.\n\nOTOH, if you address the case where malicious users can create\nthe object under pg_catalog, of course, checking whether\nthe object is defined under pg_catalog schema or not is not enough\nfor the purpose. But pg_is_user_object() is also not enough\nbecause such users can change the oid.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Wed, 4 Mar 2020 21:07:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "At Wed, 4 Mar 2020 21:07:05 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> >>>>>> The function that you are proposing is really enough for this use\n> >>>>>> case?\n> >>>>>> What if malicious users directly change the oid of function\n> >>>>>> to < FirstNormalObjectId? Or you're assuming that malicious users will\n> >>>>>> never log in as superuser and not be able to change the oid?\n> >>>>>\n> >>>>> That's a good point! I'm surprised that user is allowed to update an\n> >>>>> oid of database object. In addition, surprisingly we can update it to\n> >>>>> 0, which in turn leads the assertion failure:\n> >>>>\n> >>>> Since non-superusers are not allowed to do that by default,\n> >>>> that's not so bad? That is, to avoid such unexpected change of oid,\n> >>>> admin just should prevent malicious users from logging in as\n> >>>> superusers\n> >>>> and not give the permission on system catalogs to such users.\n> >>>>\n> >>>\n> >>> I think there is still insider threats. As long as we depend on\n> >>> superuser privilege to do some DBA work, a malicious DBA might be able\n> >>> to log in as superuser and modify oid.\n> >>\n> >> Yes. But I'm sure that DBA has already considered the measures\n> >> againt such threads. 
Otherwise malicious users can do anything\n> >> more malicious rather than changing oid.\n> > Agreed. So that's not a serious problem in practice but we cannot say\n> > the checking by pg_is_user_object() is totally enough for checking\n> > whether malicious object exists or not. Is that right?\n> \n> Yes.\n> \n> My opinion is that, if malious users are not allowed to log in\n> as superusers and the admin give no permission on the system\n> schema/catalog to them, checking whether the object is defined\n> under pg_catalog schema or not is enough for your purpose.\n> Because they are also not allowed to create the object under\n> pg_catalog. pg_is_user_object() seems not necessary.\n> \n> OTOH, if you address the case where malicious users can create\n> the object under pg_catalog, of course, checking whether\n> the object is defined under pg_catalog schema or not is enough\n> for the purpose. But pg_is_user_object() is also not enough\n> because such users can change oid.\n\nThe discussion seems to assume the feature is related to some security\nmeasure. But I think I haven't seen the objective or use case for the\nfeature. I don't see how we should treat them according to the result\nfrom the \"user-defined objects detection\" feature.\n\nFor example, we could decide whether a function can be pushed down\nto the remote server in postgres_fdw. In this case, we need to ask \"is\nthe behavior of this function known to us?\", in short, \"is this\nfunction predefined?\". 
In this use case, we have no concern if the DBA\nhas added some functions as \"not user-defined\", since it's their own\nrisk.\n\nI can't come up with other use cases right now but, anyway, I think we need to\nclarify the scope of the feature.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 05 Mar 2020 12:32:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "\n\nOn 2020/03/05 12:32, Kyotaro Horiguchi wrote:\n> At Wed, 4 Mar 2020 21:07:05 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>>>>>>> The function that you are proposing is really enough for this use\n>>>>>>>> case?\n>>>>>>>> What if malicious users directly change the oid of function\n>>>>>>>> to < FirstNormalObjectId? Or you're assuming that malicious users will\n>>>>>>>> never log in as superuser and not be able to change the oid?\n>>>>>>>\n>>>>>>> That's a good point! I'm surprised that user is allowed to update an\n>>>>>>> oid of database object. In addition, surprisingly we can update it to\n>>>>>>> 0, which in turn leads the assertion failure:\n>>>>>>\n>>>>>> Since non-superusers are not allowed to do that by default,\n>>>>>> that's not so bad? That is, to avoid such unexpected change of oid,\n>>>>>> admin just should prevent malicious users from logging in as\n>>>>>> superusers\n>>>>>> and not give the permission on system catalogs to such users.\n>>>>>>\n>>>>>\n>>>>> I think there is still insider threats. As long as we depend on\n>>>>> superuser privilege to do some DBA work, a malicious DBA might be able\n>>>>> to log in as superuser and modify oid.\n>>>>\n>>>> Yes. But I'm sure that DBA has already considered the measures\n>>>> againt such threads. Otherwise malicious users can do anything\n>>>> more malicious rather than changing oid.\n>>> Agreed. 
So that's not a serious problem in practice but we cannot say\n>>> the checking by pg_is_user_object() is totally enough for checking\n>>> whether malicious object exists or not. Is that right?\n>>\n>> Yes.\n>>\n>> My opinion is that, if malious users are not allowed to log in\n>> as superusers and the admin give no permission on the system\n>> schema/catalog to them, checking whether the object is defined\n>> under pg_catalog schema or not is enough for your purpose.\n>> Because they are also not allowed to create the object under\n>> pg_catalog. pg_is_user_object() seems not necessary.\n>>\n>> OTOH, if you address the case where malicious users can create\n>> the object under pg_catalog, of course, checking whether\n>> the object is defined under pg_catalog schema or not is enough\n>> for the purpose. But pg_is_user_object() is also not enough\n>> because such users can change oid.\n> \n> The discussion seems assuming the feature is related to some security\n> measure. But I think I haven't seen the objective or use case for the\n> feature. I don't see how we should treat them according the result\n> from the \"user-defined objects detection\" feature.\n> \n> For example, we could decide a function whether to be pushed-out or\n> not to remote server on postgres_fdw. In this case, we need to ask \"is\n> the behavior of this function known to us?\", in short, \"is this\n> function is predefined?\". In this use case, we have no concern if DBA\n> have added some functions as \"not user-defined\", since it's their own\n> risk.\n> \n> I don't come up with another use cases but, anyway, I think we need to\n> clarify the scope of the feature.\n\nAgreed. Also we would need to consider whether the existing approach\n(e.g., checking whether the object is defined under pg_catalog or not,\nor seeing pg_stat_user_functions, _indexes, and _tables) is enough\nfor the use cases. 
If it is enough, the new function might not be necessary.\nIf not, we might also need to reconsider the definitions of\npg_stat_user_xxx after considering the function.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 5 Mar 2020 13:23:52 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Wed, Mar 04, 2020 at 06:57:00PM +0900, Fujii Masao wrote:\n> Yes. But I'm sure that DBA has already considered the measures\n> againt such threads. Otherwise malicious users can do anything\n> more malicious rather than changing oid.\n\nA superuser is by definition able to do anything on the system using\nthe rights of the OS user running the Postgres backend. One thing for\nexample is to take a base backup of the full instance, but you can do\nmuch more interesting things once you have such rights. So I don't\nquite get the line of arguments used on this thread regarding the\nrelation with somebody being malicious with superuser rights, and the\narguments about a superuser being able to freely manipulate the catalog's\ncontents.\n--\nMichael", "msg_date": "Thu, 5 Mar 2020 14:23:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Thu, 5 Mar 2020 at 13:23, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/05 12:32, Kyotaro Horiguchi wrote:\n> > At Wed, 4 Mar 2020 21:07:05 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> >>>>>>>> The function that you are proposing is really enough for this use\n> >>>>>>>> case?\n> >>>>>>>> What if malicious users directly change the oid of function\n> >>>>>>>> to < FirstNormalObjectId? 
Or you're assuming that malicious users will\n> >>>>>>>> never log in as superuser and not be able to change the oid?\n> >>>>>>>\n> >>>>>>> That's a good point! I'm surprised that user is allowed to update an\n> >>>>>>> oid of database object. In addition, surprisingly we can update it to\n> >>>>>>> 0, which in turn leads the assertion failure:\n> >>>>>>\n> >>>>>> Since non-superusers are not allowed to do that by default,\n> >>>>>> that's not so bad? That is, to avoid such unexpected change of oid,\n> >>>>>> admin just should prevent malicious users from logging in as\n> >>>>>> superusers\n> >>>>>> and not give the permission on system catalogs to such users.\n> >>>>>>\n> >>>>>\n> >>>>> I think there is still insider threats. As long as we depend on\n> >>>>> superuser privilege to do some DBA work, a malicious DBA might be able\n> >>>>> to log in as superuser and modify oid.\n> >>>>\n> >>>> Yes. But I'm sure that DBA has already considered the measures\n> >>>> againt such threads. Otherwise malicious users can do anything\n> >>>> more malicious rather than changing oid.\n> >>> Agreed. So that's not a serious problem in practice but we cannot say\n> >>> the checking by pg_is_user_object() is totally enough for checking\n> >>> whether malicious object exists or not. Is that right?\n> >>\n> >> Yes.\n> >>\n> >> My opinion is that, if malious users are not allowed to log in\n> >> as superusers and the admin give no permission on the system\n> >> schema/catalog to them, checking whether the object is defined\n> >> under pg_catalog schema or not is enough for your purpose.\n> >> Because they are also not allowed to create the object under\n> >> pg_catalog. pg_is_user_object() seems not necessary.\n> >>\n> >> OTOH, if you address the case where malicious users can create\n> >> the object under pg_catalog, of course, checking whether\n> >> the object is defined under pg_catalog schema or not is enough\n> >> for the purpose. 
But pg_is_user_object() is also not enough\n> >> because such users can change oid.\n> >\n> > The discussion seems assuming the feature is related to some security\n> > measure. But I think I haven't seen the objective or use case for the\n> > feature. I don't see how we should treat them according the result\n> > from the \"user-defined objects detection\" feature.\n> >\n> > For example, we could decide a function whether to be pushed-out or\n> > not to remote server on postgres_fdw. In this case, we need to ask \"is\n> > the behavior of this function known to us?\", in short, \"is this\n> > function is predefined?\". In this use case, we have no concern if DBA\n> > have added some functions as \"not user-defined\", since it's their own\n> > risk.\n> >\n> > I don't come up with another use cases but, anyway, I think we need to\n> > clarify the scope of the feature.\n>\n> Agreed. Also we would need to consider that the existing approach\n> (e.g., checking whether the object is defined under pg_catalog or not,\n> or seeing pg_stat_user_functions, _indexes, and _tables) is enough\n> for the use cases. If enough, new function might not be necessary.\n> If not enough, we might also need to reconsider the definitions of\n> pg_stat_user_xxx after considering the function.\n>\n\nOriginally the motivation of this feature is that while studying PCI\nDSS 2.2.5 I thought that a running PostgreSQL server is not able to\nprove that there is no malicious function in database. PCI DSS 2.2.5\nstates \"Remove all unnecessary functionality, such as scripts,\ndrivers, features, subsystems, file systems, and unnecessary web\nservers.\" If we want to clarify unnecessary or malicious functions we\ncan check public schema and user schema but once a function is created\non pg_proc we cannot distinguish whether it's a built-in (i.g. safe)\nfunction or not. 
I totally agree that if malicious someone logs in as\na superuser he/she can do anything more serious than changing catalog\ncontents but I wanted to have a way to prove soundness of running\ndatabase.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Mar 2020 15:21:49 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "At Thu, 5 Mar 2020 15:21:49 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> > > I don't come up with another use cases but, anyway, I think we need to\n> > > clarify the scope of the feature.\n> >\n> > Agreed. Also we would need to consider that the existing approach\n> > (e.g., checking whether the object is defined under pg_catalog or not,\n> > or seeing pg_stat_user_functions, _indexes, and _tables) is enough\n> > for the use cases. If enough, new function might not be necessary.\n> > If not enough, we might also need to reconsider the definitions of\n> > pg_stat_user_xxx after considering the function.\n> >\n> \n> Originally the motivation of this feature is that while studying PCI\n> DSS 2.2.5 I thought that a running PostgreSQL server is not able to\n> prove that there is no malicious function in database. PCI DSS 2.2.5\n> states \"Remove all unnecessary functionality, such as scripts,\n> drivers, features, subsystems, file systems, and unnecessary web\n> servers.\" If we want to clarify unnecessary or malicious functions we\n> can check public schema and user schema but once a function is created\n> on pg_proc we cannot distinguish whether it's a built-in (i.g. safe)\n> function or not. 
I totally agree that if malicious someone logs in as\n> a superuser he/she can do anything more serious than changing catalog\n> contents but I wanted to have a way to prove soundness of running\n> database.\n\nThanks for the elaboration. That doesn't seem to me to be the\nresponsibility of the PostgreSQL program. The same can be said of OSes.\n\nI think the section is not saying \"keep your system only with\ncomponents installed by default\", but \"remove all features unnecessary\nto your system, even those installed by default, as far as you can\".\nAnd whether a system needs a feature or not cannot be a matter\nfor PostgreSQL or OSes to decide.\n\nSo you need to remove some system-administrative functions, if you know\nthey are not required by your system, in order to conform to the\nrequirement. But they would be \"non-user-defined\" objects.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 05 Mar 2020 16:34:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Thu, 5 Mar 2020 at 16:36, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 5 Mar 2020 15:21:49 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > > > I don't come up with another use cases but, anyway, I think we need to\n> > > > clarify the scope of the feature.\n> > >\n> > > Agreed. Also we would need to consider that the existing approach\n> > > (e.g., checking whether the object is defined under pg_catalog or not,\n> > > or seeing pg_stat_user_functions, _indexes, and _tables) is enough\n> > > for the use cases. 
If enough, new function might not be necessary.\n> > > If not enough, we might also need to reconsider the definitions of\n> > > pg_stat_user_xxx after considering the function.\n> > >\n> >\n> > Originally the motivation of this feature is that while studying PCI\n> > DSS 2.2.5 I thought that a running PostgreSQL server is not able to\n> > prove that there is no malicious function in database. PCI DSS 2.2.5\n> > states \"Remove all unnecessary functionality, such as scripts,\n> > drivers, features, subsystems, file systems, and unnecessary web\n> > servers.\" If we want to clarify unnecessary or malicious functions we\n> > can check public schema and user schema but once a function is created\n> > on pg_proc we cannot distinguish whether it's a built-in (i.g. safe)\n> > function or not. I totally agree that if malicious someone logs in as\n> > a superuser he/she can do anything more serious than changing catalog\n> > contents but I wanted to have a way to prove soundness of running\n> > database.\n>\n> Thanks for the elaboration. It doesn't seem to me as the\n> resposibility of PostgreSQL program. The same can be said to OSes.\n>\n> I think the section is not saying that \"keep you system only with\n> defaultly installed components\", but \"remove all features unncecessary\n> to your system even if it is defaultly installed as far as you can\".\n\nAgreed.\n\n> And whether A system is needing a feature or not cannot be the matter\n> of PostgreSQL or OSes.\n>\n> So you need to remove some system-admistrative functions if you know\n> it is not required by your system in order to comform the\n> requirement. But they would be \"non-user-defined\" objects.\n\nI think normally users don't want to remove built-in functions because\nthey think these functions are trusted and it's hard to restore them\nwhen they want later. 
So I thought user want to search functions that\nis unnecessary but not a built-in function.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Mar 2020 18:06:26 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "At Thu, 5 Mar 2020 18:06:26 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> On Thu, 5 Mar 2020 at 16:36, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 5 Mar 2020 15:21:49 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > > > > I don't come up with another use cases but, anyway, I think we need to\n> > > > > clarify the scope of the feature.\n> > > >\n> > > > Agreed. Also we would need to consider that the existing approach\n> > > > (e.g., checking whether the object is defined under pg_catalog or not,\n> > > > or seeing pg_stat_user_functions, _indexes, and _tables) is enough\n> > > > for the use cases. If enough, new function might not be necessary.\n> > > > If not enough, we might also need to reconsider the definitions of\n> > > > pg_stat_user_xxx after considering the function.\n> > > >\n> > >\n> > > Originally the motivation of this feature is that while studying PCI\n> > > DSS 2.2.5 I thought that a running PostgreSQL server is not able to\n> > > prove that there is no malicious function in database. PCI DSS 2.2.5\n> > > states \"Remove all unnecessary functionality, such as scripts,\n> > > drivers, features, subsystems, file systems, and unnecessary web\n> > > servers.\" If we want to clarify unnecessary or malicious functions we\n> > > can check public schema and user schema but once a function is created\n> > > on pg_proc we cannot distinguish whether it's a built-in (i.g. safe)\n> > > function or not. 
I totally agree that if malicious someone logs in as\n> > > a superuser he/she can do anything more serious than changing catalog\n> > > contents but I wanted to have a way to prove soundness of running\n> > > database.\n> >\n> > Thanks for the elaboration. It doesn't seem to me as the\n> > resposibility of PostgreSQL program. The same can be said to OSes.\n> >\n> > I think the section is not saying that \"keep you system only with\n> > defaultly installed components\", but \"remove all features unncecessary\n> > to your system even if it is defaultly installed as far as you can\".\n> \n> Agreed.\n> \n> > And whether A system is needing a feature or not cannot be the matter\n> > of PostgreSQL or OSes.\n> >\n> > So you need to remove some system-admistrative functions if you know\n> > it is not required by your system in order to comform the\n> > requirement. But they would be \"non-user-defined\" objects.\n> \n> I think normally users don't want to remove built-in functions because\n> they think these functions are trusted and it's hard to restore them\n> when they want later. So I thought user want to search functions that\n> is unnecessary but not a built-in function.\n\nI'm not sure those who wants to introduce PCI-DSS are under a normal\nsitautation, though:p\n\nThat seems beside the point. pg_read_file is known to be usable for\ndrawing out database files. If you leave the function alone, the\nsecurity officer (designer?) have to consider the possibility that\nsomeone draws out files in the database system using the function and\nhave to plan the action for the threat. 
In that context,\nbuilt-in-or-not distinction is useless.\n\nIn the first place, if you assume that someone may install malicious\nfunctions in the server after beginning operation, distinction by OID\ndoesn't work at all because who can illegally install a malicious\nfunction also be able to modify its OID with quite low odds.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 05 Mar 2020 18:37:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "On Thu, 5 Mar 2020 at 18:39, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 5 Mar 2020 18:06:26 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > On Thu, 5 Mar 2020 at 16:36, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Thu, 5 Mar 2020 15:21:49 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > > > > > I don't come up with another use cases but, anyway, I think we need to\n> > > > > > clarify the scope of the feature.\n> > > > >\n> > > > > Agreed. Also we would need to consider that the existing approach\n> > > > > (e.g., checking whether the object is defined under pg_catalog or not,\n> > > > > or seeing pg_stat_user_functions, _indexes, and _tables) is enough\n> > > > > for the use cases. If enough, new function might not be necessary.\n> > > > > If not enough, we might also need to reconsider the definitions of\n> > > > > pg_stat_user_xxx after considering the function.\n> > > > >\n> > > >\n> > > > Originally the motivation of this feature is that while studying PCI\n> > > > DSS 2.2.5 I thought that a running PostgreSQL server is not able to\n> > > > prove that there is no malicious function in database. 
PCI DSS 2.2.5\n> > > > states \"Remove all unnecessary functionality, such as scripts,\n> > > > drivers, features, subsystems, file systems, and unnecessary web\n> > > > servers.\" If we want to clarify unnecessary or malicious functions we\n> > > > can check public schema and user schema but once a function is created\n> > > > on pg_proc we cannot distinguish whether it's a built-in (i.g. safe)\n> > > > function or not. I totally agree that if malicious someone logs in as\n> > > > a superuser he/she can do anything more serious than changing catalog\n> > > > contents but I wanted to have a way to prove soundness of running\n> > > > database.\n> > >\n> > > Thanks for the elaboration. It doesn't seem to me as the\n> > > resposibility of PostgreSQL program. The same can be said to OSes.\n> > >\n> > > I think the section is not saying that \"keep you system only with\n> > > defaultly installed components\", but \"remove all features unncecessary\n> > > to your system even if it is defaultly installed as far as you can\".\n> >\n> > Agreed.\n> >\n> > > And whether A system is needing a feature or not cannot be the matter\n> > > of PostgreSQL or OSes.\n> > >\n> > > So you need to remove some system-admistrative functions if you know\n> > > it is not required by your system in order to comform the\n> > > requirement. But they would be \"non-user-defined\" objects.\n> >\n> > I think normally users don't want to remove built-in functions because\n> > they think these functions are trusted and it's hard to restore them\n> > when they want later. So I thought user want to search functions that\n> > is unnecessary but not a built-in function.\n>\n> I'm not sure those who wants to introduce PCI-DSS are under a normal\n> sitautation, though:p\n>\n> That seems beside the point. pg_read_file is known to be usable for\n> drawing out database files. If you leave the function alone, the\n> security officer (designer?) 
have to consider the possibility that\n> someone draws out files in the database system using the function and\n> have to plan the action for the threat. In that context,\n> built-in-or-not distinction is useless.\n\nSo how do you check if unnecessary, malicious or unauthorized function\nexists in database after that, for example when periodical security\ncheck? Functions defined after initdb must be checked under user's\nresponsibility but since normally there are many built-in functions in\npg_proc the check in pg_proc could be cumbersome. So the idea of this\nfeature is to make that check easier by marking built-in functions.\n\n>\n> In the first place, if you assume that someone may install malicious\n> functions in the server after beginning operation, distinction by OID\n> doesn't work at all because who can illegally install a malicious\n> function also be able to modify its OID with quite low odds.\n\nYes, that's what Fujii-san also pointed out. It's better to find a way\nto distinct functions while not relying on OID.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 8 Mar 2020 11:55:06 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "At Sun, 8 Mar 2020 11:55:06 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> On Thu, 5 Mar 2020 at 18:39, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 5 Mar 2020 18:06:26 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > > On Thu, 5 Mar 2020 at 16:36, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > > I think normally users don't want to remove built-in functions because\n> > > they think these functions are trusted and it's hard to restore them\n> > > when they want later. 
So I thought user want to search functions that\n> > > is unnecessary but not a built-in function.\n> >\n> > I'm not sure those who wants to introduce PCI-DSS are under a normal\n> > sitautation, though:p\n> >\n> > That seems beside the point. pg_read_file is known to be usable for\n> > drawing out database files. If you leave the function alone, the\n> > security officer (designer?) have to consider the possibility that\n> > someone draws out files in the database system using the function and\n> > have to plan the action for the threat. In that context,\n> > built-in-or-not distinction is useless.\n> \n> So how do you check if unnecessary, malicious or unauthorized function\n> exists in database after that, for example when periodical security\n> check? Functions defined after initdb must be checked under user's\n> responsibility but since normally there are many built-in functions in\n> pg_proc the check in pg_proc could be cumbersome. So the idea of this\n> feature is to make that check easier by marking built-in functions.\n\nI think there's no easy way to accomplish it. If PostgreSQL\ndocumentation says that \"Yeah, the function tells if using the\nfunction or feature complies the request of PCI-DSS 2.2.5!\" and it\ntells safe for all built-in functions, it is an outright lie even if\nthe server is just after initdb'ed.\n\nSparating from PCI-DSS and we document it just as \"the function tells\nif the function is built-in or not\", it's true. (But I'm not sure\nabout its usage..)\n\nI might be misunderstanding about the operation steps in your mind.\n\n> >\n> > In the first place, if you assume that someone may install malicious\n> > functions in the server after beginning operation, distinction by OID\n> > doesn't work at all because who can illegally install a malicious\n> > function also be able to modify its OID with quite low odds.\n> \n> Yes, that's what Fujii-san also pointed out. 
It's better to find a way\n> to distinct functions while not relying on OID.\n\nAnd it is out-of-scope of PCI-DSS 2.2.5. It mentions design or\nsystem-building time.\n\nApart from PCI-DSS, if you are concerning operation-time threats. If\nonce someone malicious could install a function to the server, I think\nthat kind of feature with any criteria no longer work as a\ncountermeasure for further threats. Maybe something like tripwire\nwould work. That is, maybe a kind of checksum over system catalogs.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 09 Mar 2020 18:44:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "> On 4 Mar 2020, at 12:06, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n\n> Attached updated patch that incorporated comments from Amit and Vignesh.\n\nThis patch fails to compile due to an Oid collision in pg_proc.dat. Please\nsubmit a new version with an Oid from the recommended range for new patches:\n8000-9999. See the below documentation page for more information on this.\n\n https://www.postgresql.org/docs/devel/system-catalog-initial-data.html\n\nI'm marking the entry Waiting on Author in the meantime.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 1 Jul 2020 14:15:54 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" }, { "msg_contents": "> On 1 Jul 2020, at 14:15, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 4 Mar 2020, at 12:06, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n> \n>> Attached updated patch that incorporated comments from Amit and Vignesh.\n> \n> This patch fails to compile due to an Oid collision in pg_proc.dat. Please\n> submit a new version with an Oid from the recommended range for new patches:\n> 8000-9999. 
See the below documentation page for more information on this.\n> \n> https://www.postgresql.org/docs/devel/system-catalog-initial-data.html\n> \n> I'm marking the entry Waiting on Author in the meantime.\n\nAs no new patch has been presented, and the thread contains doubts over the\nproposed functionality, I'm marking this returned with feedback.\n\ncheers ./daniel\n\n", "msg_date": "Thu, 30 Jul 2020 23:52:04 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Identifying user-created objects" } ]
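For readers wanting to try the "existing approach" referred to in this thread (checking object OIDs and schemas, rather than a dedicated pg_is_user_object() function), a rough sketch in SQL could look like the following. It relies on FirstNormalObjectId (16384, defined in src/include/access/transam.h), below which OIDs are reserved for objects created at initdb time; as discussed above, this check is not tamper-proof, since a superuser can alter catalog contents, including OIDs:

```sql
-- List functions that were not created at initdb time, i.e. candidates
-- for the "unnecessary or unauthorized functions" review discussed above.
-- 16384 is FirstNormalObjectId; OIDs below it are reserved for system use.
SELECT p.oid, n.nspname, p.proname
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.oid >= 16384
ORDER BY n.nspname, p.proname;
```

On a freshly initialized database this should return no rows; anything it does return was created after initdb and can be reviewed, which is essentially the check pg_is_user_object() was meant to package up.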
[ { "msg_contents": "Hello pgsql-hackers,\n\nSubmitting a patch that would enable gathering of per-statement WAL\ngeneration statistics, similar to how it is done for buffer usage.\nCollected are the number of records added to WAL and the number of WAL\nbytes written.\n\nThe data collected was found valuable for analyzing update-heavy load,\nwith WAL generation being the bottleneck.\n\nThe usage data is collected at a low level, after compression is done on\nthe WAL record. Data is then exposed via pg_stat_statements, and could also be\nused in EXPLAIN ANALYZE if needed. Instrumentation is similar to that\nused for buffer stats. I didn't dare to unify both usage metric sets\ninto a single struct, nor rework the way both are passed to parallel\nworkers.\n\nPerformance impact is (supposed to be) very low, essentially adding\ntwo int operations and a memory access on WAL record insert. Additional\neffort is needed to allocate a shmem chunk for parallel workers. Parallel worker\nshmem usage is increased to fit in a struct of two longs.\n\nPatch is separated in two parts: core changes and pg_stat_statements\nadditions. Essentially the extension has its schema updated to allow\ntwo more fields, docs updated to reflect the change. 
Patch is prepared\nagainst master branch.\n\nPlease provide your comments and/or code findings.", "msg_date": "Wed, 5 Feb 2020 16:35:59 +0300", "msg_from": "Kirill Bychik <kirill.bychik@gmail.com>", "msg_from_op": true, "msg_subject": "WAL usage calculation patch" }, { "msg_contents": "On Wed, 5 Feb 2020 at 21:36, Kirill Bychik <kirill.bychik@gmail.com> wrote:\n>\n> Hello pgsql-hackers,\n>\n> Submitting a patch that would enable gathering of per-statement WAL\n> generation statistics, similar to how it is done for buffer usage.\n> Collected is the number of records added to WAL and number of WAL\n> bytes written.\n>\n> The data collected was found valuable to analyze update-heavy load,\n> with WAL generation being the bottleneck.\n>\n> The usage data is collected at low level, after compression is done on\n> WAL record. Data is then exposed via pg_stat_statements, could also be\n> used in EXPLAIN ANALYZE if needed. Instrumentation is alike to the one\n> used for buffer stats. I didn't dare to unify both usage metric sets\n> into single struct, nor rework the way both are passed to parallel\n> workers.\n>\n> Performance impact is (supposed to be) very low, essentially adding\n> two int operations and memory access on WAL record insert. Additional\n> efforts to allocate shmem chunk for parallel workers. Parallel workers\n> shmem usage is increased to fir in a struct of two longs.\n>\n> Patch is separated in two parts: core changes and pg_stat_statements\n> additions. Essentially the extension has its schema updated to allow\n> two more fields, docs updated to reflect the change. Patch is prepared\n> against master branch.\n>\n> Please provide your comments and/or code findings.\n\nI like the concept, I'm a big fan of anything that affordably improves\nvisibility into Pg's I/O and activity.\n\nTo date I've been relying on tools like systemtap to do this sort of\nthing. 
But that's a bit specialised, and Pg currently lacks useful\ninstrumentation for it so it can be a pain to match up activity by\nparallel workers and that sort of thing. (I aim to find time to submit\na patch for that.)\n\nI haven't yet reviewed the patch.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n", "msg_date": "Mon, 10 Feb 2020 15:20:32 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Feb 10, 2020 at 8:20 PM Craig Ringer <craig@2ndquadrant.com> wrote:\n> On Wed, 5 Feb 2020 at 21:36, Kirill Bychik <kirill.bychik@gmail.com> wrote:\n> > Patch is separated in two parts: core changes and pg_stat_statements\n> > additions. Essentially the extension has its schema updated to allow\n> > two more fields, docs updated to reflect the change. Patch is prepared\n> > against master branch.\n> >\n> > Please provide your comments and/or code findings.\n>\n> I like the concept, I'm a big fan of anything that affordably improves\n> visibility into Pg's I/O and activity.\n\n+1\n\n> To date I've been relying on tools like systemtap to do this sort of\n> thing. But that's a bit specialised, and Pg currently lacks useful\n> instrumentation for it so it can be a pain to match up activity by\n> parallel workers and that sort of thing. (I aim to find time to submit\n> a patch for that.)\n\n(I'm interested in seeing your conference talk about that! I did a\nbunch of stuff with static probes to measure PHJ behaviour around\nbarrier waits and so on but it was hard to figure out what stuff like\nthat to put in the actual tree, it was all a bit\nuse-once-to-test-a-theory-and-then-throw-away.)\n\nKirill, I noticed that you included a regression test that is failing. 
Can\nthis possibly be stable across machines or even on the same machine?\nDoes it still pass for you or did something change on the master\nbranch to add a new WAL record since you posted the patch?\n\nquery | calls | rows | wal_write_bytes | wal_write_records\n -------------------------------------------+-------+------+-----------------+-------------------\n- CREATE INDEX test_b ON test(b) | 1 | 0 | 1673 |\n 16\n- DROP FUNCTION IF EXISTS PLUS_ONE(INTEGER) | 1 | 0 | 56 |\n 1\n+ CREATE INDEX test_b ON test(b) | 1 | 0 | 1755 |\n 17\n+ DROP FUNCTION IF EXISTS PLUS_ONE(INTEGER) | 1 | 0 | 0 |\n 0\n\n\n", "msg_date": "Tue, 18 Feb 2020 16:23:14 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "вт, 18 февр. 2020 г. в 06:23, Thomas Munro <thomas.munro@gmail.com>:\n> On Mon, Feb 10, 2020 at 8:20 PM Craig Ringer <craig@2ndquadrant.com> wrote:\n> > On Wed, 5 Feb 2020 at 21:36, Kirill Bychik <kirill.bychik@gmail.com> wrote:\n> > > Patch is separated in two parts: core changes and pg_stat_statements\n> > > additions. Essentially the extension has its schema updated to allow\n> > > two more fields, docs updated to reflect the change. Patch is prepared\n> > > against master branch.\n> > >\n> > > Please provide your comments and/or code findings.\n> >\n> > I like the concept, I'm a big fan of anything that affordably improves\n> > visibility into Pg's I/O and activity.\n>\n> +1\n>\n> > To date I've been relying on tools like systemtap to do this sort of\n> > thing. But that's a bit specialised, and Pg currently lacks useful\n> > instrumentation for it so it can be a pain to match up activity by\n> > parallel workers and that sort of thing. (I aim to find time to submit\n> > a patch for that.)\n>\n> (I'm interested in seeing your conference talk about that! 
I did a\n> bunch of stuff with static probes to measure PHJ behaviour around\n> barrier waits and so on but it was hard to figure out what stuff like\n> that to put in the actual tree, it was all a bit\n> use-once-to-test-a-theory-and-then-throw-away.)\n>\n> Kirill, I noticed that you included a regression test that is failing. Can\n> this possibly be stable across machines or even on the same machine?\n> Does it still pass for you or did something change on the master\n> branch to add a new WAL record since you posted the patch?\n\nThank you for testing the patch and running extension checks. I assume\nthe patch applies without problems.\n\nAs for the regr test, it apparently requires some rework. I didn't pay\nattention enough to make sure the data I check is actually meaningful\nand isolated enough to be repeatable.\n\nPlease consider the extension part of the patch as WIP, I'll resubmit\nthe patch once I get a stable and meanngful test up. Thanks for\nfinding it!\n\n> query | calls | rows | wal_write_bytes | wal_write_records\n> -------------------------------------------+-------+------+-----------------+-------------------\n> - CREATE INDEX test_b ON test(b) | 1 | 0 | 1673 |\n> 16\n> - DROP FUNCTION IF EXISTS PLUS_ONE(INTEGER) | 1 | 0 | 56 |\n> 1\n> + CREATE INDEX test_b ON test(b) | 1 | 0 | 1755 |\n> 17\n> + DROP FUNCTION IF EXISTS PLUS_ONE(INTEGER) | 1 | 0 | 0 |\n> 0\n\n\n", "msg_date": "Wed, 19 Feb 2020 10:27:50 +0300", "msg_from": "Kirill Bychik <kirill.bychik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "> вт, 18 февр. 2020 г. в 06:23, Thomas Munro <thomas.munro@gmail.com>:\n> > On Mon, Feb 10, 2020 at 8:20 PM Craig Ringer <craig@2ndquadrant.com> wrote:\n> > > On Wed, 5 Feb 2020 at 21:36, Kirill Bychik <kirill.bychik@gmail.com> wrote:\n> > > > Patch is separated in two parts: core changes and pg_stat_statements\n> > > > additions. 
Essentially the extension has its schema updated to allow\n> > > > two more fields, docs updated to reflect the change. Patch is prepared\n> > > > against master branch.\n> > > >\n> > > > Please provide your comments and/or code findings.\n> > >\n> > > I like the concept, I'm a big fan of anything that affordably improves\n> > > visibility into Pg's I/O and activity.\n> >\n> > +1\n> >\n> > > To date I've been relying on tools like systemtap to do this sort of\n> > > thing. But that's a bit specialised, and Pg currently lacks useful\n> > > instrumentation for it so it can be a pain to match up activity by\n> > > parallel workers and that sort of thing. (I aim to find time to submit\n> > > a patch for that.)\n> >\n> > (I'm interested in seeing your conference talk about that! I did a\n> > bunch of stuff with static probes to measure PHJ behaviour around\n> > barrier waits and so on but it was hard to figure out what stuff like\n> > that to put in the actual tree, it was all a bit\n> > use-once-to-test-a-theory-and-then-throw-away.)\n> >\n> > Kirill, I noticed that you included a regression test that is failing. Can\n> > this possibly be stable across machines or even on the same machine?\n> > Does it still pass for you or did something change on the master\n> > branch to add a new WAL record since you posted the patch?\n>\n> Thank you for testing the patch and running extension checks. I assume\n> the patch applies without problems.\n>\n> As for the regr test, it apparently requires some rework. I didn't pay\n> attention enough to make sure the data I check is actually meaningful\n> and isolated enough to be repeatable.\n>\n> Please consider the extension part of the patch as WIP, I'll resubmit\n> the patch once I get a stable and meanngful test up. Thanks for\n> finding it!\n>\n\nI have reworked the extension regression test to be more isolated.\nApparently, something merged into master branch shifted my numbers.\n\nPFA the new patch. 
Core part didn't change a bit, the extension part\nhas regression test SQL and expected log changed.\n\nLooking forward to new comments.", "msg_date": "Thu, 20 Feb 2020 18:56:27 +0300", "msg_from": "Kirill Bychik <kirill.bychik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Feb 20, 2020 at 06:56:27PM +0300, Kirill Bychik wrote:\n> > Tue, 18 Feb 2020 at 06:23, Thomas Munro <thomas.munro@gmail.com>:\n> > > On Mon, Feb 10, 2020 at 8:20 PM Craig Ringer <craig@2ndquadrant.com> wrote:\n> > > > On Wed, 5 Feb 2020 at 21:36, Kirill Bychik <kirill.bychik@gmail.com> wrote:\n> > > > > Patch is separated in two parts: core changes and pg_stat_statements\n> > > > > additions. Essentially the extension has its schema updated to allow\n> > > > > two more fields, docs updated to reflect the change. Patch is prepared\n> > > > > against master branch.\n> > > > >\n> > > > > Please provide your comments and/or code findings.\n> > > >\n> > > > I like the concept, I'm a big fan of anything that affordably improves\n> > > > visibility into Pg's I/O and activity.\n> > >\n> > > +1\n\nHuge +1 too.\n\n> > Thank you for testing the patch and running extension checks. I assume\n> > the patch applies without problems.\n> >\n> > As for the regression test, it apparently requires some rework. I didn't pay\n> > enough attention to make sure the data I check is actually meaningful\n> > and isolated enough to be repeatable.\n> >\n> > Please consider the extension part of the patch as WIP, I'll resubmit\n> > the patch once I get a stable and meaningful test up. Thanks for\n> > finding it!\n> >\n>\n> I have reworked the extension regression test to be more isolated.\n> Apparently, something merged into the master branch shifted my numbers.\n>\n> PFA the new patch.
Core part didn't change a bit, the extension part\n> has regression test SQL and expected log changed.\n\nI'm quite worried about the stability of those counters for regression tests.\nWouldn't a checkpoint happening during the test change them?\n\nWhile at it, did you consider adding a full-page image counter in the WalUsage?\nThat's something I'd really like to have and it doesn't seem hard to integrate.\n\nAnother point is that this patch won't help to see autovacuum activity.\nAs an example, I did a quick test to store the information in pgstat, sending\nthe data in the PG_FINALLY part of vacuum():\n\nrjuju=# create table t1(id integer, val text);\nCREATE TABLE\nrjuju=# insert into t1 select i, 'val ' || i from generate_series(1, 100000) i;\nINSERT 0 100000\nrjuju=# vacuum t1;\nVACUUM\nrjuju=# select datname, vac_wal_records, vac_wal_bytes, autovac_wal_records, autovac_wal_bytes\nfrom pg_stat_database where datname = 'rjuju';\n datname | vac_wal_records | vac_wal_bytes | autovac_wal_records | autovac_wal_bytes\n---------+-----------------+---------------+---------------------+-------------------\n rjuju | 547 | 65201 | 0 | 0\n(1 row)\n\nrjuju=# delete from t1 where id % 2 = 0;\nDELETE 50000\nrjuju=# select pg_sleep(60);\n pg_sleep\n----------\n\n(1 row)\n\nrjuju=# select datname, vac_wal_records, vac_wal_bytes, autovac_wal_records, autovac_wal_bytes\nfrom pg_stat_database where datname = 'rjuju';\n datname | vac_wal_records | vac_wal_bytes | autovac_wal_records | autovac_wal_bytes\n---------+-----------------+---------------+---------------------+-------------------\n rjuju | 547 | 65201 | 1631 | 323193\n(1 row)\n\nThat seems like useful data (especially since I recently had to dig into a\nproblematic WAL consumption issue that was due to some autovacuum activity),\nbut that may seem strange to only account for (auto)vacuum activity, rather\nthan globally, grouping per RmgrId or CommandTag for instance. We could then\nsee the complete WAL usage per-database.
What do you think?\n\nSome minor points I noticed:\n\n- the extension patch doesn't apply anymore, I guess since 70a7732007bc4689\n\n #define PARALLEL_KEY_JIT_INSTRUMENTATION UINT64CONST(0xE000000000000009)\n+#define PARALLEL_KEY_WAL_USAGE UINT64CONST(0xE000000000000010)\n\nShouldn't it be 0xA rather than 0x10?\n\n- it would be better to add a version number to the patches, so we're sure\n which one we're talking about.\n\n\n", "msg_date": "Wed, 4 Mar 2020 17:02:25 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Wed, Mar 04, 2020 at 05:02:25PM +0100, Julien Rouhaud wrote:\n> I'm quite worried about the stability of those counters for regression tests.\n> Wouldn't a checkpoint happening during the test change them?\n\nYep. One way to go through that would be to test if this output is\nnon-zero, though I suspect at a quick glance that this won't be entirely\nreliable either.\n\n> While at it, did you consider adding a full-page image counter in the WalUsage?\n> That's something I'd really like to have and it doesn't seem hard to integrate.\n\nFWIW, one reason here is that we recently had some benchmark work done\ninternally where this would have been helpful in studying some spiky\nWAL load patterns.\n--\nMichael", "msg_date": "Thu, 5 Mar 2020 15:35:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "> I'm quite worried about the stability of those counters for regression tests.\n> Wouldn't a checkpoint happening during the test change them?\n\nAgree, stability of the test could be an issue: even a shift in the write\nformat or compression method, or adding compatible changes, could break\nsuch a test. Frankly speaking, the numbers expected are not actually\ncalculated, my logic was rather well described by \"these numbers\nshould be non-zero for real tables\".
I believe the test can be\nmodified to check that numbers are above zero, both for bytes written\nand for records stored.\n\nHaving a checkpoint in the middle of the test can be almost 100%\ncountered by triggering one before the test. I'll add a checkpoint\ncall to the test scenario, if no objections here.\n\n> While at it, did you consider adding a full-page image counter in the WalUsage?\n> That's something I'd really like to have and it doesn't seem hard to integrate.\n\nWell, not sure I understand you 100%, being new to Postgres dev. Do\nyou want a separate counter for pages written whenever doPageWrites is\ntrue? I can do that, if needed. Please confirm.\n\n> Another point is that this patch won't help to see autovacuum activity.\n> As an example, I did a quick te.....\n> ...LONG QUOTE...\n> but that may seem strange to only account for (auto)vacuum activity, rather\n> than globally, grouping per RmgrId or CommandTag for instance. We could then\n> see the complete WAL usage per-database. What do you think?\n\nI wanted to keep the patch small and simple, and fit to practical\nneeds. This patch is supposed to provide tuning assistance, catching\nan IO-heavy query in a commit-bound situation.\nTotal WAL usage per DB can be assessed rather easily using other means.\nLet's get this change into the codebase and then work on connecting\nWAL usage to (auto)vacuum stats.\n\n>\n> Some minor points I noticed:\n>\n> - the extension patch doesn't apply anymore, I guess since 70a7732007bc4689\n\nWill fix, thank you.\n\n>\n> #define PARALLEL_KEY_JIT_INSTRUMENTATION UINT64CONST(0xE000000000000009)\n> +#define PARALLEL_KEY_WAL_USAGE UINT64CONST(0xE000000000000010)\n>\n> Shouldn't it be 0xA rather than 0x10?\n\nOww, my bad, this is embarrassing!
Will fix, thank you.\n\n> - it would be better to add a version number to the patches, so we're sure\n> which one we're talking about.\n\nNoted, thank you.\n\nPlease comment on the proposed changes, I will cook up a new version\nonce all are agreed upon.\n\n\n", "msg_date": "Thu, 5 Mar 2020 22:55:34 +0300", "msg_from": "Kirill Bychik <kirill.bychik@gmail.com>", "msg_from_op": true, "msg_subject": "Fwd: WAL usage calculation patch" }, { "msg_contents": "On Thu, Mar 5, 2020 at 8:55 PM Kirill Bychik <kirill.bychik@gmail.com> wrote:\n>\n> > While at it, did you consider adding a full-page image counter in the WalUsage?\n> > That's something I'd really like to have and it doesn't seem hard to integrate.\n>\n> Well, not sure I understand you 100%, being new to Postgres dev. Do\n> you want a separate counter for pages written whenever doPageWrites is\n> true? I can do that, if needed. Please confirm.\n\nYes, I meant a separate 3rd counter for the number of full page images\nwritten. However, after a quick look I think that a FPI should be\ndetected with (doPageWrites && fpw_lsn != InvalidXLogRecPtr && fpw_lsn\n<= RedoRecPtr).\n\n> > Another point is that this patch won't help to see autovacuum activity.\n> > As an example, I did a quick te.....\n> > ...LONG QUOTE...\n> > but that may seem strange to only account for (auto)vacuum activity, rather\n> > than globally, grouping per RmgrId or CommandTag for instance. We could then\n> > see the complete WAL usage per-database. What do you think?\n>\n> I wanted to keep the patch small and simple, and fit to practical\n> needs.
This patch is supposed to provide tuning assistance, catching\n> an IO-heavy query in a commit-bound situation.\n> Total WAL usage per DB can be assessed rather easily using other means.\n> Let's get this change into the codebase and then work on connecting\n> WAL usage to (auto)vacuum stats.\n\nI agree that having a view of the full activity is a way bigger scope,\nso it could be done later (and at this point in pg14), but I'm still\nhoping that we can get insight into other backend WAL activity, such as\nautovacuum, in pg13.\n\n\n", "msg_date": "Fri, 6 Mar 2020 18:14:37 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "Fri, 6 Mar 2020 at 20:14, Julien Rouhaud <rjuju123@gmail.com>:\n>\n> On Thu, Mar 5, 2020 at 8:55 PM Kirill Bychik <kirill.bychik@gmail.com> wrote:\n> >\n> > > While at it, did you consider adding a full-page image counter in the WalUsage?\n> > > That's something I'd really like to have and it doesn't seem hard to integrate.\n> >\n> > Well, not sure I understand you 100%, being new to Postgres dev. Do\n> > you want a separate counter for pages written whenever doPageWrites is\n> > true? I can do that, if needed. Please confirm.\n>\n> Yes, I meant a separate 3rd counter for the number of full page images\n> written. However, after a quick look I think that a FPI should be\n> detected with (doPageWrites && fpw_lsn != InvalidXLogRecPtr && fpw_lsn\n> <= RedoRecPtr).\n\nThis seems easy, will implement once I get some spare time.\n\n> > > Another point is that this patch won't help to see autovacuum activity.\n> > > As an example, I did a quick te.....\n> > > ...LONG QUOTE...\n> > > but that may seem strange to only account for (auto)vacuum activity, rather\n> > > than globally, grouping per RmgrId or CommandTag for instance. We could then\n> > > see the complete WAL usage per-database.
What do you think?\n> >\n> > I wanted to keep the patch small and simple, and fit to practical\n> > needs. This patch is supposed to provide tuning assistance, catching\n> > an IO-heavy query in a commit-bound situation.\n> > Total WAL usage per DB can be assessed rather easily using other means.\n> > Let's get this change into the codebase and then work on connecting\n> > WAL usage to (auto)vacuum stats.\n>\n> I agree that having a view of the full activity is a way bigger scope,\n> so it could be done later (and at this point in pg14), but I'm still\n> hoping that we can get insight into other backend WAL activity, such as\n> autovacuum, in pg13.\n\nHow do you think this information should be exposed? Via pg_stat_statements?\n\nAnyways, I believe this change could be bigger than FPI. I propose to\nplan a separate patch for it, or even add it to the TODO after the\ncore patch of wal usage is merged.\n\nPlease expect a new patch version next week, with FPI counters added.\n\n\n", "msg_date": "Fri, 6 Mar 2020 20:59:31 +0300", "msg_from": "Kirill Bychik <kirill.bychik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Fri, Mar 6, 2020 at 6:59 PM Kirill Bychik <kirill.bychik@gmail.com> wrote:\n>\n> Fri, 6 Mar 2020 at 20:14, Julien Rouhaud <rjuju123@gmail.com>:\n> >\n> > On Thu, Mar 5, 2020 at 8:55 PM Kirill Bychik <kirill.bychik@gmail.com> wrote:\n> > > I wanted to keep the patch small and simple, and fit to practical\n> > > needs.
This patch is supposed to provide tuning assistance, catching\n> > > an IO-heavy query in a commit-bound situation.\n> > > Total WAL usage per DB can be assessed rather easily using other means.\n> > > Let's get this change into the codebase and then work on connecting\n> > > WAL usage to (auto)vacuum stats.\n> >\n> > I agree that having a view of the full activity is a way bigger scope,\n> > so it could be done later (and at this point in pg14), but I'm still\n> > hoping that we can get insight into other backend WAL activity, such as\n> > autovacuum, in pg13.\n>\n> How do you think this information should be exposed? Via pg_stat_statements?\n\nThat's unlikely, since autovacuum won't trigger any hook. I was\nthinking of some new view for pgstats, similarly to the example I\nshowed previously. The implementation is straightforward, although\npg_stat_database is maybe not the best choice here.\n\n> Anyways, I believe this change could be bigger than FPI. I propose to\n> plan a separate patch for it, or even add it to the TODO after the\n> core patch of wal usage is merged.\n\nJust in case, if the problem is a lack of time, I'd be happy to help\non that if needed. Otherwise, I'll definitely not try to block any\nprogress for the feature as proposed.\n\n> Please expect a new patch version next week, with FPI counters added.\n\nThanks!\n\n\n", "msg_date": "Fri, 6 Mar 2020 20:19:17 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "> > > On Thu, Mar 5, 2020 at 8:55 PM Kirill Bychik <kirill.bychik@gmail.com> wrote:\n> > > > I wanted to keep the patch small and simple, and fit to practical\n> > > > needs.
This patch is supposed to provide tuning assistance, catching\n> > > > an IO-heavy query in a commit-bound situation.\n> > > > Total WAL usage per DB can be assessed rather easily using other means.\n> > > > Let's get this change into the codebase and then work on connecting\n> > > > WAL usage to (auto)vacuum stats.\n> > >\n> > > I agree that having a view of the full activity is a way bigger scope,\n> > > so it could be done later (and at this point in pg14), but I'm still\n> > > hoping that we can get insight into other backend WAL activity, such as\n> > > autovacuum, in pg13.\n> >\n> > How do you think this information should be exposed? Via pg_stat_statements?\n>\n> That's unlikely, since autovacuum won't trigger any hook. I was\n> thinking of some new view for pgstats, similarly to the example I\n> showed previously. The implementation is straightforward, although\n> pg_stat_database is maybe not the best choice here.\n\nAfter extensive thinking and some code diving, I did not manage to\ncome up with a sane idea on how to expose data about autovacuum WAL\nusage. Must be the flu.\n\n> > Anyways, I believe this change could be bigger than FPI. I propose to\n> > plan a separate patch for it, or even add it to the TODO after the\n> > core patch of wal usage is merged.\n>\n> Just in case, if the problem is a lack of time, I'd be happy to help\n> on that if needed. Otherwise, I'll definitely not try to block any\n> progress for the feature as proposed.\n\nPlease feel free to work on any extension of this patch idea. I lack\nboth time and knowledge to do it all by myself.\n\n> > Please expect a new patch version next week, with FPI counters added.\n\nPlease find attached patch version 003, with FP writes and minor\ncorrections. Hope I use attachment versioning as expected in this\ngroup :)\n\nThe test has been reworked, and I believe it should be stable now, the\npart which checks that WAL is written and there is a correlation between\naffected rows and WAL records.
I still have no idea how to test\nfull-page writes against regular updates, it seems very unstable.\nPlease share ideas if any.\n\nThanks!", "msg_date": "Sun, 15 Mar 2020 21:52:18 +0300", "msg_from": "Kirill Bychik <kirill.bychik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sun, Mar 15, 2020 at 09:52:18PM +0300, Kirill Bychik wrote:\n> > > > On Thu, Mar 5, 2020 at 8:55 PM Kirill Bychik <kirill.bychik@gmail.com> wrote:\n> After extensive thinking and some code diving, I did not manage to\n> come up with a sane idea on how to expose data about autovacuum WAL\n> usage. Must be the flu.\n>\n> > > Anyways, I believe this change could be bigger than FPI. I propose to\n> > > plan a separate patch for it, or even add it to the TODO after the\n> > > core patch of wal usage is merged.\n> >\n> > Just in case, if the problem is a lack of time, I'd be happy to help\n> > on that if needed. Otherwise, I'll definitely not try to block any\n> > progress for the feature as proposed.\n>\n> Please feel free to work on any extension of this patch idea. I lack\n> both time and knowledge to do it all by myself.\n\n\nI'm adding a 3rd patch on top of yours to expose the new WAL counters in\npg_stat_database, for vacuum and autovacuum. I'm not really enthusiastic with\nthis approach but I didn't find better, and maybe this will raise some better\nideas. The only sure thing is that we're not going to add a bunch of new\nfields in pg_stat_all_tables anyway.\n\nWe can also drop this 3rd patch entirely if no one's happy about it without\nimpacting the first two.\n\n\n> > > > Please expect a new patch version next week, with FPI counters added.\n>\n> Please find attached patch version 003, with FP writes and minor\n> corrections.
Hope I use attachment versioning as expected in this\n> group :)\n\n\nThanks!\n\n\n> The test has been reworked, and I believe it should be stable now, the\n> part which checks that WAL is written and there is a correlation between\n> affected rows and WAL records. I still have no idea how to test\n> full-page writes against regular updates, it seems very unstable.\n> Please share ideas if any.\n\n\nI just reviewed the patches, and it globally looks good to me. The way to\ndetect full page images looks sensible, but I'm really not familiar with that\ncode so additional review would be useful.\n\nI noticed that the new wal_write_fp_records field in pg_stat_statements wasn't\nused in the test. Since I have to add all the patches to make the cfbot happy,\nI slightly adapted the tests to reference the fp column too. There was also a\nminor issue in the documentation, as wal_records and wal_bytes were copy/pasted\ntwice while wal_write_fp_records wasn't documented, so I also changed it.\n\nLet me know if you're ok with those changes.", "msg_date": "Tue, 17 Mar 2020 16:31:36 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "> > Please feel free to work on any extension of this patch idea. I lack\n> > both time and knowledge to do it all by myself.\n>\n>\n> I'm adding a 3rd patch on top of yours to expose the new WAL counters in\n> pg_stat_database, for vacuum and autovacuum. I'm not really enthusiastic with\n> this approach but I didn't find better, and maybe this will raise some better\n> ideas.
The only sure thing is that we're not going to add a bunch of new\n> fields in pg_stat_all_tables anyway.\n>\n> We can also drop this 3rd patch entirely if no one's happy about it without\n> impacting the first two.\n\nNo objections about the 3rd patch on my side, unless we miss the CF completely.\n\nAs for the code, I believe:\n+ walusage.wal_records = pgWalUsage.wal_records -\n+ walusage_start.wal_records;\n+ walusage.wal_fp_records = pgWalUsage.wal_fp_records -\n+ walusage_start.wal_fp_records;\n+ walusage.wal_bytes = pgWalUsage.wal_bytes - walusage_start.wal_bytes;\n\nCould be done much simpler via the utility:\nWalUsageAccumDiff(walusage, pgWalUsage, walusage_start);\n\nOn a side note, I agree the API to the buf/wal usage is far from perfect.\n\n> > The test has been reworked, and I believe it should be stable now, the\n> > part which checks that WAL is written and there is a correlation between\n> > affected rows and WAL records. I still have no idea how to test\n> > full-page writes against regular updates, it seems very unstable.\n> > Please share ideas if any.\n>\n>\n> I just reviewed the patches, and it globally looks good to me. The way to\n> detect full page images looks sensible, but I'm really not familiar with that\n> code so additional review would be useful.\n>\n> I noticed that the new wal_write_fp_records field in pg_stat_statements wasn't\n> used in the test. Since I have to add all the patches to make the cfbot happy,\n> I slightly adapted the tests to reference the fp column too. There was also a\n> minor issue in the documentation, as wal_records and wal_bytes were copy/pasted\n> twice while wal_write_fp_records wasn't documented, so I also changed it.\n>\n> Let me know if you're ok with those changes.\n\nSorry for not getting wal_fp_usage into the docs, my fault.\n\nAs for the tests, please get somebody else to review this.
I strongly\nbelieve checking full page writes here could be a source of\ninstability.\n\n\n", "msg_date": "Tue, 17 Mar 2020 22:27:05 +0300", "msg_from": "Kirill Bychik <kirill.bychik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, Mar 17, 2020 at 10:27:05PM +0300, Kirill Bychik wrote:\n> > > Please feel free to work on any extension of this patch idea. I lack\n> > > both time and knowledge to do it all by myself.\n> >\n> > I'm adding a 3rd patch on top of yours to expose the new WAL counters in\n> > pg_stat_database, for vacuum and autovacuum. I'm not really enthusiastic with\n> > this approach but I didn't find better, and maybe this will raise some better\n> > ideas. The only sure thing is that we're not going to add a bunch of new\n> > fields in pg_stat_all_tables anyway.\n> >\n> > We can also drop this 3rd patch entirely if no one's happy about it without\n> > impacting the first two.\n>\n> No objections about the 3rd patch on my side, unless we miss the CF completely.\n>\n> As for the code, I believe:\n> + walusage.wal_records = pgWalUsage.wal_records -\n> + walusage_start.wal_records;\n> + walusage.wal_fp_records = pgWalUsage.wal_fp_records -\n> + walusage_start.wal_fp_records;\n> + walusage.wal_bytes = pgWalUsage.wal_bytes - walusage_start.wal_bytes;\n>\n> Could be done much simpler via the utility:\n> WalUsageAccumDiff(walusage, pgWalUsage, walusage_start);\n\n\nIndeed, but this function is private to instrument.c.
AFAICT\npg_stat_statements is already duplicating similar code for buffers rather than\nhaving BufferUsageAccumDiff being exported, so I chose the same approach.\n\nI'd be in favor of exporting both functions though.\n\n\n> On a side note, I agree the API to the buf/wal usage is far from perfect.\n\n\nYes clearly.\n\n\n> > > The test has been reworked, and I believe it should be stable now, the\n> > > part which checks that WAL is written and there is a correlation between\n> > > affected rows and WAL records. I still have no idea how to test\n> > > full-page writes against regular updates, it seems very unstable.\n> > > Please share ideas if any.\n> >\n> >\n> > I just reviewed the patches, and it globally looks good to me. The way to\n> > detect full page images looks sensible, but I'm really not familiar with that\n> > code so additional review would be useful.\n> >\n> > I noticed that the new wal_write_fp_records field in pg_stat_statements wasn't\n> > used in the test. Since I have to add all the patches to make the cfbot happy,\n> > I slightly adapted the tests to reference the fp column too. There was also a\n> > minor issue in the documentation, as wal_records and wal_bytes were copy/pasted\n> > twice while wal_write_fp_records wasn't documented, so I also changed it.\n> >\n> > Let me know if you're ok with those changes.\n>\n> Sorry for not getting wal_fp_usage into the docs, my fault.\n>\n> As for the tests, please get somebody else to review this. I strongly\n> believe checking full page writes here could be a source of\n> instability.\n\n\nI'm also a little bit dubious about it. The initial checkpoint should make\nthings stable (of course unless full_page_writes is disabled), and Cfbot also\nseems happy about it.
At least keeping it for the temporary tables test\nshouldn't be a problem.\n\n\n", "msg_date": "Tue, 17 Mar 2020 21:32:22 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "> > > > Please feel free to work on any extension of this patch idea. I lack\n> > > > both time and knowledge to do it all by myself.\n> > >\n> > > I'm adding a 3rd patch on top of yours to expose the new WAL counters in\n> > > pg_stat_database, for vacuum and autovacuum. I'm not really enthusiastic with\n> > > this approach but I didn't find better, and maybe this will raise some better\n> > > ideas. The only sure thing is that we're not going to add a bunch of new\n> > > fields in pg_stat_all_tables anyway.\n> > >\n> > > We can also drop this 3rd patch entirely if no one's happy about it without\n> > > impacting the first two.\n> >\n> > No objections about the 3rd patch on my side, unless we miss the CF completely.\n> >\n> > As for the code, I believe:\n> > + walusage.wal_records = pgWalUsage.wal_records -\n> > + walusage_start.wal_records;\n> > + walusage.wal_fp_records = pgWalUsage.wal_fp_records -\n> > + walusage_start.wal_fp_records;\n> > + walusage.wal_bytes = pgWalUsage.wal_bytes - walusage_start.wal_bytes;\n> >\n> > Could be done much simpler via the utility:\n> > WalUsageAccumDiff(walusage, pgWalUsage, walusage_start);\n>\n>\n> Indeed, but this function is private to instrument.c. AFAICT\n> pg_stat_statements is already duplicating similar code for buffers rather than\n> having BufferUsageAccumDiff being exported, so I chose the same approach.\n>\n> I'd be in favor of exporting both functions though.\n> > On a side note, I agree the API to the buf/wal usage is far from perfect.\n>\n>\n> Yes clearly.\n\nThere is a higher-level Instrumentation API that can be used with\nthe INSTRUMENT_WAL flag to collect the wal usage information.
I believe\nthe instrumentation is widely used in the executor code, so it should\nnot be a problem to collect instrumentation information on the autovacuum\nworker level.\n\nJust a recommendation/chat, though. I am happy with the way the data\nis collected now. If you commit this variant, please add a TODO to\nrework wal usage to the common instr API.\n\n> > > > The test has been reworked, and I believe it should be stable now, the\n> > > > part which checks that WAL is written and there is a correlation between\n> > > > affected rows and WAL records. I still have no idea how to test\n> > > > full-page writes against regular updates, it seems very unstable.\n> > > > Please share ideas if any.\n> > >\n> > >\n> > > I just reviewed the patches, and it globally looks good to me. The way to\n> > > detect full page images looks sensible, but I'm really not familiar with that\n> > > code so additional review would be useful.\n> > >\n> > > I noticed that the new wal_write_fp_records field in pg_stat_statements wasn't\n> > > used in the test. Since I have to add all the patches to make the cfbot happy,\n> > > I slightly adapted the tests to reference the fp column too. There was also a\n> > > minor issue in the documentation, as wal_records and wal_bytes were copy/pasted\n> > > twice while wal_write_fp_records wasn't documented, so I also changed it.\n> > >\n> > > Let me know if you're ok with those changes.\n> >\n> > Sorry for not getting wal_fp_usage into the docs, my fault.\n> >\n> > As for the tests, please get somebody else to review this. I strongly\n> > believe checking full page writes here could be a source of\n> > instability.\n>\n>\n> I'm also a little bit dubious about it. The initial checkpoint should make\n> things stable (of course unless full_page_writes is disabled), and Cfbot also\n> seems happy about it.
At least keeping it for the temporary tables test\n> shouldn't be a problem.\n\nTemp tables should show zero FPI WAL records, true :)\n\nI have no objections to the patch.\n\n\n", "msg_date": "Wed, 18 Mar 2020 09:02:58 +0300", "msg_from": "Kirill Bychik <kirill.bychik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Wed, Mar 18, 2020 at 09:02:58AM +0300, Kirill Bychik wrote:\n>\n> There is a higher-level Instrumentation API that can be used with\n> the INSTRUMENT_WAL flag to collect the wal usage information. I believe\n> the instrumentation is widely used in the executor code, so it should\n> not be a problem to collect instrumentation information on the autovacuum\n> worker level.\n>\n> Just a recommendation/chat, though. I am happy with the way the data\n> is collected now. If you commit this variant, please add a TODO to\n> rework wal usage to the common instr API.\n\n\nThe instrumentation is somewhat intended to be used with executor nodes, not\nbackend commands. I don't see a real technical reason that would prevent that,\nbut I prefer to keep things as-is for now, as it sounds less controversial.\nThis is for the 3rd patch, which may not even be considered for this CF anyway.\n\n\n> > > As for the tests, please get somebody else to review this. I strongly\n> > > believe checking full page writes here could be a source of\n> > > instability.\n> >\n> >\n> > I'm also a little bit dubious about it. The initial checkpoint should make\n> > things stable (of course unless full_page_writes is disabled), and Cfbot also\n> > seems happy about it. At least keeping it for the temporary tables test\n> > shouldn't be a problem.\n>\n> Temp tables should show zero FPI WAL records, true :)\n>\n> I have no objections to the patch.\n\n\nI'm attaching a v5 with fp records only for temp tables, so there's no risk of\ninstability.
As I previously said I'm fine with your two patches, so unless\nyou have objections on the fpi test for temp tables or the documentation\nchanges, I believe those should be ready for committer.", "msg_date": "Wed, 18 Mar 2020 18:19:16 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "> > There is a higher-level Instrumentation API that can be used with\n> > the INSTRUMENT_WAL flag to collect the wal usage information. I believe\n> > the instrumentation is widely used in the executor code, so it should\n> > not be a problem to collect instrumentation information on the autovacuum\n> > worker level.\n> >\n> > Just a recommendation/chat, though. I am happy with the way the data\n> > is collected now. If you commit this variant, please add a TODO to\n> > rework wal usage to the common instr API.\n>\n>\n> The instrumentation is somewhat intended to be used with executor nodes, not\n> backend commands. I don't see a real technical reason that would prevent that,\n> but I prefer to keep things as-is for now, as it sounds less controversial.\n> This is for the 3rd patch, which may not even be considered for this CF anyway.\n>\n>\n> > > > As for the tests, please get somebody else to review this. I strongly\n> > > > believe checking full page writes here could be a source of\n> > > > instability.\n> > >\n> > >\n> > > I'm also a little bit dubious about it. The initial checkpoint should make\n> > > things stable (of course unless full_page_writes is disabled), and Cfbot also\n> > > seems happy about it. At least keeping it for the temporary tables test\n> > > shouldn't be a problem.\n> >\n> > Temp tables should show zero FPI WAL records, true :)\n> >\n> > I have no objections to the patch.\n>\n>\n> I'm attaching a v5 with fp records only for temp tables, so there's no risk of\n> instability.
As I previously said I'm fine with your two patches, so unless\n> you have objections on the fpi test for temp tables or the documentation\n> changes, I believe those should be ready for committer.\n\nNo objections on my side either. Thank you for your review, time and efforts!\n\n\n", "msg_date": "Wed, 18 Mar 2020 20:48:17 +0300", "msg_from": "Kirill Bychik <kirill.bychik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Wed, Mar 18, 2020 at 08:48:17PM +0300, Kirill Bychik wrote:\n> > I'm attaching a v5 with fp records only for temp tables, so there's no risk of\n> > instability. As I previously said I'm fine with your two patches, so unless\n> > you have objections on the fpi test for temp tables or the documentation\n> > changes, I believe those should be ready for committer.\n>\n> No objections on my side either. Thank you for your review, time and efforts!\n\n\nGreat, thanks also for the patches and efforts! I'll mark the entry as RFC.\n\n\n", "msg_date": "Wed, 18 Mar 2020 20:24:00 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "\n\nOn 2020/03/19 2:19, Julien Rouhaud wrote:\n> On Wed, Mar 18, 2020 at 09:02:58AM +0300, Kirill Bychik wrote:\n>>\n>> There is a higher-level Instrumentation API that can be used with\n>> INSTRUMENT_WAL flag to collect the wal usage information. I believe\n>> the instrumentation is widely used in the executor code, so it should\n>> not be a problem to colelct instrumentation information on autovacuum\n>> worker level.\n>>\n>> Just a recommendation/chat, though. I am happy with the way the data\n>> is collected now. If you commit this variant, please add a TODO to\n>> rework wal usage to common instr API.\n> \n> \n> The instrumentation is somewhat intended to be used with executor nodes, not\n> backend commands. 
I don't see real technical reason that would prevent that,\n> but I prefer to keep things as-is for now, as it sound less controversial.\n> This is for the 3rd patch, which may not even be considered for this CF anyway.\n> \n> \n>>>> As for the tests, please get somebody else to review this. I strongly\n>>>> believe checking full page writes here could be a source of\n>>>> instability.\n>>>\n>>>\n>>> I'm also a little bit dubious about it. The initial checkpoint should make\n>>> things stable (of course unless full_page_writes is disabled), and Cfbot also\n>>> seems happy about it. At least keeping it for the temporary tables test\n>>> shouldn't be a problem.\n>>\n>> Temp tables should show zero FPI WAL records, true :)\n>>\n>> I have no objections to the patch.\n> \n> \n> I'm attaching a v5 with fp records only for temp tables, so there's no risk of\n> instability. As I previously said I'm fine with your two patches, so unless\n> you have objections on the fpi test for temp tables or the documentation\n> changes, I believe those should be ready for committer.\n\nYou added the columns into pg_stat_database, but seem to forget to\nupdate the document for pg_stat_database.\n\nIs it really reasonable to add the columns for vacuum's WAL usage into\npg_stat_database? I'm not sure how much the information about\nthe amount of WAL generated by vacuum per database is useful.\nIsn't it better to make VACUUM VERBOSE and autovacuum log include\nthat information, instead, to see how much each vacuum activity\ngenerates the WAL? 
Sorry if this discussion has already been done\nupthread.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 19 Mar 2020 21:03:02 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Mar 19, 2020 at 09:03:02PM +0900, Fujii Masao wrote:\n> \n> On 2020/03/19 2:19, Julien Rouhaud wrote:\n> > \n> > I'm attaching a v5 with fp records only for temp tables, so there's no risk of\n> > instability. As I previously said I'm fine with your two patches, so unless\n> > you have objections on the fpi test for temp tables or the documentation\n> > changes, I believe those should be ready for committer.\n> \n> You added the columns into pg_stat_database, but seem to forget to\n> update the document for pg_stat_database.\n\nAh right, I totally missed that when I tried to clean up the original POC.\n\n> Is it really reasonable to add the columns for vacuum's WAL usage into\n> pg_stat_database? I'm not sure how much the information about\n> the amount of WAL generated by vacuum per database is useful.\n\nThe amount per database isn't really useful, but I didn't had a better idea on\nhow to expose (auto)vacuum WAL usage until this:\n\n> Isn't it better to make VACUUM VERBOSE and autovacuum log include\n> that information, instead, to see how much each vacuum activity\n> generates the WAL? Sorry if this discussion has already been done\n> upthread.\n\nThat's a way better idea! I'm attaching the full patchset with the 3rd patch\nto use this approach instead. 
There's a bit of duplicate code for computing the\nWalUsage, as I didn't find a better way to avoid that without exposing\nWalUsageAccumDiff().\n\nAutovacuum log sample:\n\n2020-03-19 15:49:05.708 CET [5843] LOG: automatic vacuum of table \"rjuju.public.t1\": index scans: 0\n\tpages: 0 removed, 2213 remain, 0 skipped due to pins, 0 skipped frozen\n\ttuples: 250000 removed, 250000 remain, 0 are dead but not yet removable, oldest xmin: 502\n\tbuffer usage: 4448 hits, 4 misses, 4 dirtied\n\tavg read rate: 0.160 MB/s, avg write rate: 0.160 MB/s\n\tsystem usage: CPU: user: 0.13 s, system: 0.00 s, elapsed: 0.19 s\n\tWAL usage: 6643 records, 4 full page records, 1402679 bytes\n\nVACUUM log sample:\n\n# vacuum VERBOSE t1;\nINFO: vacuuming \"public.t1\"\nINFO: \"t1\": removed 50000 row versions in 443 pages\nINFO: \"t1\": found 50000 removable, 0 nonremovable row versions in 443 out of 443 pages\nDETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 512\nThere were 50000 unused item identifiers.\nSkipped 0 pages due to buffer pins, 0 frozen pages.\n0 pages are entirely empty.\n1332 WAL records, 4 WAL full page records, 306901 WAL bytes\nCPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s.\nINFO: \"t1\": truncated 443 to 0 pages\nDETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\nINFO: vacuuming \"pg_toast.pg_toast_16385\"\nINFO: index \"pg_toast_16385_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\nINFO: \"pg_toast_16385\": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages\nDETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 513\nThere were 0 unused item identifiers.\nSkipped 0 pages due to buffer pins, 0 frozen pages.\n0 pages are entirely empty.\n0 WAL records, 0 WAL full page records, 0 WAL bytes\nCPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\nVACUUM\n\nNote that 
the 3rd patch is an addition on top of Kirill's original patch, as\nthis is information that would have been greatly helpful to investigate in some\nperformance issues I had to investigate recently. I'd be happy to have it land\ninto v13, but if that's controversial or too late I'm happy to postpone it to\nv14 if the infrastructure added in Kirill's patches can make it to v13.", "msg_date": "Thu, 19 Mar 2020 16:31:38 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "> > > I'm attaching a v5 with fp records only for temp tables, so there's no risk of\n> > > instability. As I previously said I'm fine with your two patches, so unless\n> > > you have objections on the fpi test for temp tables or the documentation\n> > > changes, I believe those should be ready for committer.\n> >\n> > You added the columns into pg_stat_database, but seem to forget to\n> > update the document for pg_stat_database.\n>\n> Ah right, I totally missed that when I tried to clean up the original POC.\n>\n> > Is it really reasonable to add the columns for vacuum's WAL usage into\n> > pg_stat_database? I'm not sure how much the information about\n> > the amount of WAL generated by vacuum per database is useful.\n>\n> The amount per database isn't really useful, but I didn't had a better idea on\n> how to expose (auto)vacuum WAL usage until this:\n>\n> > Isn't it better to make VACUUM VERBOSE and autovacuum log include\n> > that information, instead, to see how much each vacuum activity\n> > generates the WAL? Sorry if this discussion has already been done\n> > upthread.\n>\n> That's a way better idea! I'm attaching the full patchset with the 3rd patch\n> to use this approach instead. 
There's a bit a duplicate code for computing the\n> WalUsage, as I didn't find a better way to avoid that without exposing\n> WalUsageAccumDiff().\n>\n> Autovacuum log sample:\n>\n> 2020-03-19 15:49:05.708 CET [5843] LOG: automatic vacuum of table \"rjuju.public.t1\": index scans: 0\n> pages: 0 removed, 2213 remain, 0 skipped due to pins, 0 skipped frozen\n> tuples: 250000 removed, 250000 remain, 0 are dead but not yet removable, oldest xmin: 502\n> buffer usage: 4448 hits, 4 misses, 4 dirtied\n> avg read rate: 0.160 MB/s, avg write rate: 0.160 MB/s\n> system usage: CPU: user: 0.13 s, system: 0.00 s, elapsed: 0.19 s\n> WAL usage: 6643 records, 4 full page records, 1402679 bytes\n>\n> VACUUM log sample:\n>\n> # vacuum VERBOSE t1;\n> INFO: vacuuming \"public.t1\"\n> INFO: \"t1\": removed 50000 row versions in 443 pages\n> INFO: \"t1\": found 50000 removable, 0 nonremovable row versions in 443 out of 443 pages\n> DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 512\n> There were 50000 unused item identifiers.\n> Skipped 0 pages due to buffer pins, 0 frozen pages.\n> 0 pages are entirely empty.\n> 1332 WAL records, 4 WAL full page records, 306901 WAL bytes\n> CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s.\n> INFO: \"t1\": truncated 443 to 0 pages\n> DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n> INFO: vacuuming \"pg_toast.pg_toast_16385\"\n> INFO: index \"pg_toast_16385_index\" now contains 0 row versions in 1 pages\n> DETAIL: 0 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n> INFO: \"pg_toast_16385\": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages\n> DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 513\n> There were 0 unused item identifiers.\n> Skipped 0 pages due to buffer pins, 0 frozen pages.\n> 0 pages are entirely empty.\n> 0 WAL records, 0 WAL full page records, 0 WAL bytes\n> CPU: user: 
0.00 s, system: 0.00 s, elapsed: 0.00 s.\n> VACUUM\n>\n> Note that the 3rd patch is an addition on top of Kirill's original patch, as\n> this is information that would have been greatly helpful to investigate in some\n> performance issues I had to investigate recently. I'd be happy to have it land\n> into v13, but if that's controversial or too late I'm happy to postpone it to\n> v14 if the infrastructure added in Kirill's patches can make it to v13.\n\nDear all, can we please focus on getting the core patch committed?\nGiven the uncertainty regarding autovacuum stats, can we please get\nparts 1 and 2 into the codebase, and think about exposing autovacuum\nstats later?\n\n\n", "msg_date": "Mon, 23 Mar 2020 01:32:07 +0300", "msg_from": "Kirill Bychik <kirill.bychik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "\n\nOn 2020/03/23 7:32, Kirill Bychik wrote:\n>>>>> I'm attaching a v5 with fp records only for temp tables, so there's no risk of\n>>>>> instability. As I previously said I'm fine with your two patches, so unless\n>>>>> you have objections on the fpi test for temp tables or the documentation\n>>>>> changes, I believe those should be ready for committer.\n>>>>\n>>>> You added the columns into pg_stat_database, but seem to forget to\n>>>> update the document for pg_stat_database.\n>>>\n>>> Ah right, I totally missed that when I tried to clean up the original POC.\n>>>\n>>>> Is it really reasonable to add the columns for vacuum's WAL usage into\n>>>> pg_stat_database? I'm not sure how much the information about\n>>>> the amount of WAL generated by vacuum per database is useful.\n>>>\n>>> The amount per database isn't really useful, but I didn't had a better idea on\n>>> how to expose (auto)vacuum WAL usage until this:\n>>>\n>>>> Isn't it better to make VACUUM VERBOSE and autovacuum log include\n>>>> that information, instead, to see how much each vacuum activity\n>>>> generates the WAL? 
Sorry if this discussion has already been done\n>>> upthread.\n>>\n>> That's a way better idea! I'm attaching the full patchset with the 3rd patch\n>> to use this approach instead. There's a bit a duplicate code for computing the\n>> WalUsage, as I didn't find a better way to avoid that without exposing\n>> WalUsageAccumDiff().\n>>\n>> Autovacuum log sample:\n>>\n>> 2020-03-19 15:49:05.708 CET [5843] LOG: automatic vacuum of table \"rjuju.public.t1\": index scans: 0\n>> pages: 0 removed, 2213 remain, 0 skipped due to pins, 0 skipped frozen\n>> tuples: 250000 removed, 250000 remain, 0 are dead but not yet removable, oldest xmin: 502\n>> buffer usage: 4448 hits, 4 misses, 4 dirtied\n>> avg read rate: 0.160 MB/s, avg write rate: 0.160 MB/s\n>> system usage: CPU: user: 0.13 s, system: 0.00 s, elapsed: 0.19 s\n>> WAL usage: 6643 records, 4 full page records, 1402679 bytes\n>>\n>> VACUUM log sample:\n>>\n>> # vacuum VERBOSE t1;\n>> INFO: vacuuming \"public.t1\"\n>> INFO: \"t1\": removed 50000 row versions in 443 pages\n>> INFO: \"t1\": found 50000 removable, 0 nonremovable row versions in 443 out of 443 pages\n>> DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 512\n>> There were 50000 unused item identifiers.\n>> Skipped 0 pages due to buffer pins, 0 frozen pages.\n>> 0 pages are entirely empty.\n>> 1332 WAL records, 4 WAL full page records, 306901 WAL bytes\n>> CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s.\n>> INFO: \"t1\": truncated 443 to 0 pages\n>> DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n>> INFO: vacuuming \"pg_toast.pg_toast_16385\"\n>> INFO: index \"pg_toast_16385_index\" now contains 0 row versions in 1 pages\n>> DETAIL: 0 index row versions were removed.\n>> 0 index pages have been deleted, 0 are currently reusable.\n>> CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n>> INFO: \"pg_toast_16385\": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages\n>> DETAIL: 0 dead row versions cannot be removed yet, 
oldest xmin: 513\n>> There were 0 unused item identifiers.\n>> Skipped 0 pages due to buffer pins, 0 frozen pages.\n>> 0 pages are entirely empty.\n>> 0 WAL records, 0 WAL full page records, 0 WAL bytes\n>> CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n>> VACUUM\n>>\n>> Note that the 3rd patch is an addition on top of Kirill's original patch, as\n>> this is information that would have been greatly helpful to investigate in some\n>> performance issues I had to investigate recently. I'd be happy to have it land\n>> into v13, but if that's controversial or too late I'm happy to postpone it to\n>> v14 if the infrastructure added in Kirill's patches can make it to v13.\n> \n> Dear all, can we please focus on getting the core patch committed?\n> Given the uncertainity regarding autovacuum stats, can we please get\n> parts 1 and 2 into the codebase, and think about exposing autovacuum\n> stats later?\n\nHere are the comments for 0001 patch.\n\n+\t\t\t/*\n+\t\t\t * Report a full page image constructed for the WAL record\n+\t\t\t */\n+\t\t\tpgWalUsage.wal_fp_records++;\n\nIsn't it better to use \"fpw\" or \"fpi\" for the variable name rather than\n\"fp\" here? In other places, \"fpw\" and \"fpi\" are used for full page\nwrites/image.\n\nISTM that this counter could be incorrect if XLogInsertRecord() determines to\ncalculate again whether FPI is necessary or not. No? IOW, this issue could\nhappen if XLogInsert() calls XLogRecordAssemble() multiple times in\nits do-while loop. Isn't this problematic?\n\n+\tlong\t\twal_bytes;\t\t/* size of wal records produced */\n\nIsn't it safer to use uint64 (i.e., XLogRecPtr) as the type of this variable\nrather than long?\n\n+\tshm_toc_insert(pcxt->toc, PARALLEL_KEY_WAL_USAGE, bufusage_space);\n\nbufusage_space should be walusage_space here?\n\n/*\n * Finish parallel execution. 
We wait for parallel workers to finish, and\n * accumulate their buffer usage.\n */\n\nThere are some comments mentioning buffer usage, in execParallel.c.\nFor example, the top comment for ExecParallelFinish(), as the above.\nThese should be updated.\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Mon, 23 Mar 2020 21:01:04 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "\n\nOn 2020/03/23 21:01, Fujii Masao wrote:\n> \n> \n> On 2020/03/23 7:32, Kirill Bychik wrote:\n>>>>> I'm attaching a v5 with fp records only for temp tables, so there's no risk of\n>>>>> instability.  As I previously said I'm fine with your two patches, so unless\n>>>>> you have objections on the fpi test for temp tables or the documentation\n>>>>> changes, I believe those should be ready for committer.\n>>>>\n>>>> You added the columns into pg_stat_database, but seem to forget to\n>>>> update the document for pg_stat_database.\n>>>\n>>> Ah right, I totally missed that when I tried to clean up the original POC.\n>>>\n>>>> Is it really reasonable to add the columns for vacuum's WAL usage into\n>>>> pg_stat_database? I'm not sure how much the information about\n>>>> the amount of WAL generated by vacuum per database is useful.\n>>>\n>>> The amount per database isn't really useful, but I didn't had a better idea on\n>>> how to expose (auto)vacuum WAL usage until this:\n>>>\n>>>> Isn't it better to make VACUUM VERBOSE and autovacuum log include\n>>>> that information, instead, to see how much each vacuum activity\n>>>> generates the WAL? Sorry if this discussion has already been done\n>>>> upthread.\n>>>\n>>> That's a way better idea!  I'm attaching the full patchset with the 3rd patch\n>>> to use this approach instead.  
There's a bit a duplicate code for computing the\n>>> WalUsage, as I didn't find a better way to avoid that without exposing\n>>> WalUsageAccumDiff().\n>>>\n>>> Autovacuum log sample:\n>>>\n>>> 2020-03-19 15:49:05.708 CET [5843] LOG:  automatic vacuum of table \"rjuju.public.t1\": index scans: 0\n>>>          pages: 0 removed, 2213 remain, 0 skipped due to pins, 0 skipped frozen\n>>>          tuples: 250000 removed, 250000 remain, 0 are dead but not yet removable, oldest xmin: 502\n>>>          buffer usage: 4448 hits, 4 misses, 4 dirtied\n>>>          avg read rate: 0.160 MB/s, avg write rate: 0.160 MB/s\n>>>          system usage: CPU: user: 0.13 s, system: 0.00 s, elapsed: 0.19 s\n>>>          WAL usage: 6643 records, 4 full page records, 1402679 bytes\n>>>\n>>> VACUUM log sample:\n>>>\n>>> # vacuum VERBOSE t1;\n>>> INFO:  vacuuming \"public.t1\"\n>>> INFO:  \"t1\": removed 50000 row versions in 443 pages\n>>> INFO:  \"t1\": found 50000 removable, 0 nonremovable row versions in 443 out of 443 pages\n>>> DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 512\n>>> There were 50000 unused item identifiers.\n>>> Skipped 0 pages due to buffer pins, 0 frozen pages.\n>>> 0 pages are entirely empty.\n>>> 1332 WAL records, 4 WAL full page records, 306901 WAL bytes\n>>> CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s.\n>>> INFO:  \"t1\": truncated 443 to 0 pages\n>>> DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n>>> INFO:  vacuuming \"pg_toast.pg_toast_16385\"\n>>> INFO:  index \"pg_toast_16385_index\" now contains 0 row versions in 1 pages\n>>> DETAIL:  0 index row versions were removed.\n>>> 0 index pages have been deleted, 0 are currently reusable.\n>>> CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n>>> INFO:  \"pg_toast_16385\": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages\n>>> DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 513\n>>> There were 0 unused item identifiers.\n>>> Skipped 0 pages 
due to buffer pins, 0 frozen pages.\n>>> 0 pages are entirely empty.\n>>> 0 WAL records, 0 WAL full page records, 0 WAL bytes\n>>> CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n>>> VACUUM\n>>>\n>>> Note that the 3rd patch is an addition on top of Kirill's original patch, as\n>>> this is information that would have been greatly helpful to investigate in some\n>>> performance issues I had to investigate recently.  I'd be happy to have it land\n>>> into v13, but if that's controversial or too late I'm happy to postpone it to\n>>> v14 if the infrastructure added in Kirill's patches can make it to v13.\n>>\n>> Dear all, can we please focus on getting the core patch committed?\n>> Given the uncertainity regarding autovacuum stats, can we please get\n>> parts 1 and 2 into the codebase, and think about exposing autovacuum\n>> stats later?\n> \n> Here are the comments for 0001 patch.\n> \n> +            /*\n> +             * Report a full page image constructed for the WAL record\n> +             */\n> +            pgWalUsage.wal_fp_records++;\n> \n> Isn't it better to use \"fpw\" or \"fpi\" for the variable name rather than\n> \"fp\" here? In other places, \"fpw\" and \"fpi\" are used for full page\n> writes/image.\n> \n> ISTM that this counter could be incorrect if XLogInsertRecord() determines to\n> calculate again whether FPI is necessary or not. No? IOW, this issue could\n> happen if XLogInsert() calls  XLogRecordAssemble() multiple times in\n> its do-while loop. Isn't this problematic?\n> \n> +    long        wal_bytes;        /* size of wal records produced */\n> \n> Isn't it safer to use uint64 (i.e., XLogRecPtr) as the type of this variable\n> rather than long?\n> \n> +    shm_toc_insert(pcxt->toc, PARALLEL_KEY_WAL_USAGE, bufusage_space);\n> \n> bufusage_space should be walusage_space here?\n> \n> /*\n>  * Finish parallel execution.  
We wait for parallel workers to finish, and\n>  * accumulate their buffer usage.\n>  */\n> \n> There are some comments mentioning buffer usage, in execParallel.c.\n> For example, the top comment for ExecParallelFinish(), as the above.\n> These should be updated.\n\nHere are the comments for 0002 patch.\n\n+ OUT wal_write_bytes int8,\n+ OUT wal_write_records int8,\n+ OUT wal_write_fp_records int8\n\nIsn't \"write\" part in the column names confusing because it's WAL\n*generated* (not written) by the statement?\n\n+RETURNS SETOF record\n+AS 'MODULE_PATHNAME', 'pg_stat_statements_1_4'\n+LANGUAGE C STRICT VOLATILE;\n\nPARALLEL SAFE should be specified?\n\n+/* contrib/pg_stat_statements/pg_stat_statements--1.7--1.8.sql */\n\nISTM it's good timing to have also pg_stat_statements--1.8.sql since\nthe definition of pg_stat_statements() is changed. Thought?\n\n+-- CHECKPOINT before WAL tests to ensure test stability\n+CHECKPOINT;\n\nIs this true? I thought you added this because the number of FPI\nshould be larger than zero in the subsequent test. No? But there\nseems no such test. I'm not excited about adding the test checking\nthe number of FPI because it looks fragile, though...\n\n+UPDATE pgss_test SET b = '333' WHERE a = 3 \\;\n+UPDATE pgss_test SET b = '444' WHERE a = 4 ;\n\nCould you tell me why several queries need to be run to test\nthe WAL usage? 
Isn't running a few queries enough for the test purpose?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Mon, 23 Mar 2020 23:24:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Mar 23, 2020 at 3:24 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/03/23 21:01, Fujii Masao wrote:\n> >\n> >\n> > On 2020/03/23 7:32, Kirill Bychik wrote:\n> >>>>> I'm attaching a v5 with fp records only for temp tables, so there's no risk of\n> >>>>> instability. As I previously said I'm fine with your two patches, so unless\n> >>>>> you have objections on the fpi test for temp tables or the documentation\n> >>>>> changes, I believe those should be ready for committer.\n> >>>>\n> >>>> You added the columns into pg_stat_database, but seem to forget to\n> >>>> update the document for pg_stat_database.\n> >>>\n> >>> Ah right, I totally missed that when I tried to clean up the original POC.\n> >>>\n> >>>> Is it really reasonable to add the columns for vacuum's WAL usage into\n> >>>> pg_stat_database? I'm not sure how much the information about\n> >>>> the amount of WAL generated by vacuum per database is useful.\n> >>>\n> >>> The amount per database isn't really useful, but I didn't had a better idea on\n> >>> how to expose (auto)vacuum WAL usage until this:\n> >>>\n> >>>> Isn't it better to make VACUUM VERBOSE and autovacuum log include\n> >>>> that information, instead, to see how much each vacuum activity\n> >>>> generates the WAL? Sorry if this discussion has already been done\n> >>>> upthread.\n> >>>\n> >>> That's a way better idea! I'm attaching the full patchset with the 3rd patch\n> >>> to use this approach instead. 
There's a bit a duplicate code for computing the\n> >>> WalUsage, as I didn't find a better way to avoid that without exposing\n> >>> WalUsageAccumDiff().\n> >>>\n> >>> Autovacuum log sample:\n> >>>\n> >>> 2020-03-19 15:49:05.708 CET [5843] LOG: automatic vacuum of table \"rjuju.public.t1\": index scans: 0\n> >>> pages: 0 removed, 2213 remain, 0 skipped due to pins, 0 skipped frozen\n> >>> tuples: 250000 removed, 250000 remain, 0 are dead but not yet removable, oldest xmin: 502\n> >>> buffer usage: 4448 hits, 4 misses, 4 dirtied\n> >>> avg read rate: 0.160 MB/s, avg write rate: 0.160 MB/s\n> >>> system usage: CPU: user: 0.13 s, system: 0.00 s, elapsed: 0.19 s\n> >>> WAL usage: 6643 records, 4 full page records, 1402679 bytes\n> >>>\n> >>> VACUUM log sample:\n> >>>\n> >>> # vacuum VERBOSE t1;\n> >>> INFO: vacuuming \"public.t1\"\n> >>> INFO: \"t1\": removed 50000 row versions in 443 pages\n> >>> INFO: \"t1\": found 50000 removable, 0 nonremovable row versions in 443 out of 443 pages\n> >>> DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 512\n> >>> There were 50000 unused item identifiers.\n> >>> Skipped 0 pages due to buffer pins, 0 frozen pages.\n> >>> 0 pages are entirely empty.\n> >>> 1332 WAL records, 4 WAL full page records, 306901 WAL bytes\n> >>> CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s.\n> >>> INFO: \"t1\": truncated 443 to 0 pages\n> >>> DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n> >>> INFO: vacuuming \"pg_toast.pg_toast_16385\"\n> >>> INFO: index \"pg_toast_16385_index\" now contains 0 row versions in 1 pages\n> >>> DETAIL: 0 index row versions were removed.\n> >>> 0 index pages have been deleted, 0 are currently reusable.\n> >>> CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n> >>> INFO: \"pg_toast_16385\": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages\n> >>> DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 513\n> >>> There were 0 unused item identifiers.\n> >>> Skipped 0 
pages due to buffer pins, 0 frozen pages.\n> >>> 0 pages are entirely empty.\n> >>> 0 WAL records, 0 WAL full page records, 0 WAL bytes\n> >>> CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n> >>> VACUUM\n> >>>\n> >>> Note that the 3rd patch is an addition on top of Kirill's original patch, as\n> >>> this is information that would have been greatly helpful to investigate in some\n> >>> performance issues I had to investigate recently. I'd be happy to have it land\n> >>> into v13, but if that's controversial or too late I'm happy to postpone it to\n> >>> v14 if the infrastructure added in Kirill's patches can make it to v13.\n> >>\n> >> Dear all, can we please focus on getting the core patch committed?\n> >> Given the uncertainity regarding autovacuum stats, can we please get\n> >> parts 1 and 2 into the codebase, and think about exposing autovacuum\n> >> stats later?\n> >\n> > Here are the comments for 0001 patch.\n> >\n> > + /*\n> > + * Report a full page image constructed for the WAL record\n> > + */\n> > + pgWalUsage.wal_fp_records++;\n> >\n> > Isn't it better to use \"fpw\" or \"fpi\" for the variable name rather than\n> > \"fp\" here? In other places, \"fpw\" and \"fpi\" are used for full page\n> > writes/image.\n> >\n> > ISTM that this counter could be incorrect if XLogInsertRecord() determines to\n> > calculate again whether FPI is necessary or not. No? IOW, this issue could\n> > happen if XLogInsert() calls XLogRecordAssemble() multiple times in\n> > its do-while loop. Isn't this problematic?\n> >\n> > + long wal_bytes; /* size of wal records produced */\n> >\n> > Isn't it safer to use uint64 (i.e., XLogRecPtr) as the type of this variable\n> > rather than long?\n> >\n> > + shm_toc_insert(pcxt->toc, PARALLEL_KEY_WAL_USAGE, bufusage_space);\n> >\n> > bufusage_space should be walusage_space here?\n> >\n> > /*\n> > * Finish parallel execution. 
We wait for parallel workers to finish, and\n> > * accumulate their buffer usage.\n> > */\n> >\n> > There are some comments mentioning buffer usage, in execParallel.c.\n> > For example, the top comment for ExecParallelFinish(), as the above.\n> > These should be updated.\n>\n> Here are the comments for 0002 patch.\n>\n> + OUT wal_write_bytes int8,\n> + OUT wal_write_records int8,\n> + OUT wal_write_fp_records int8\n>\n> Isn't \"write\" part in the column names confusing because it's WAL\n> *generated* (not written) by the statement?\n>\n> +RETURNS SETOF record\n> +AS 'MODULE_PATHNAME', 'pg_stat_statements_1_4'\n> +LANGUAGE C STRICT VOLATILE;\n>\n> PARALLEL SAFE should be specified?\n>\n> +/* contrib/pg_stat_statements/pg_stat_statements--1.7--1.8.sql */\n>\n> ISTM it's good timing to have also pg_stat_statements--1.8.sql since\n> the definition of pg_stat_statements() is changed. Thought?\n>\n> +-- CHECKPOINT before WAL tests to ensure test stability\n> +CHECKPOINT;\n>\n> Is this true? I thought you added this because the number of FPI\n> should be larger than zero in the subsequent test. No? But there\n> seems no such test. I'm not excited about adding the test checking\n> the number of FPI because it looks fragile, though...\n>\n> +UPDATE pgss_test SET b = '333' WHERE a = 3 \\;\n> +UPDATE pgss_test SET b = '444' WHERE a = 4 ;\n>\n> Could you tell me why several queries need to be run to test\n> the WAL usage? Isn't running a few queries enough for the test purpose?\n\nFTR I marked the commitfest entry as waiting on author.\n\nKirill, do you think you'll have time to address Fujii-san's review\nshortly? The end of the commitfest is approaching quite fast :(\n\n\n", "msg_date": "Fri, 27 Mar 2020 09:51:59 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "> > >>>>> I'm attaching a v5 with fp records only for temp tables, so there's no risk of\n> > >>>>> instability. 
As I previously said I'm fine with your two patches, so unless\n> > >>>>> you have objections on the fpi test for temp tables or the documentation\n> > >>>>> changes, I believe those should be ready for committer.\n> > >>>>\n> > >>>> You added the columns into pg_stat_database, but seem to forget to\n> > >>>> update the document for pg_stat_database.\n> > >>>\n> > >>> Ah right, I totally missed that when I tried to clean up the original POC.\n> > >>>\n> > >>>> Is it really reasonable to add the columns for vacuum's WAL usage into\n> > >>>> pg_stat_database? I'm not sure how much the information about\n> > >>>> the amount of WAL generated by vacuum per database is useful.\n> > >>>\n> > >>> The amount per database isn't really useful, but I didn't had a better idea on\n> > >>> how to expose (auto)vacuum WAL usage until this:\n> > >>>\n> > >>>> Isn't it better to make VACUUM VERBOSE and autovacuum log include\n> > >>>> that information, instead, to see how much each vacuum activity\n> > >>>> generates the WAL? Sorry if this discussion has already been done\n> > >>>> upthread.\n> > >>>\n> > >>> That's a way better idea! I'm attaching the full patchset with the 3rd patch\n> > >>> to use this approach instead. 
There's a bit a duplicate code for computing the\n> > >>> WalUsage, as I didn't find a better way to avoid that without exposing\n> > >>> WalUsageAccumDiff().\n> > >>>\n> > >>> Autovacuum log sample:\n> > >>>\n> > >>> 2020-03-19 15:49:05.708 CET [5843] LOG: automatic vacuum of table \"rjuju.public.t1\": index scans: 0\n> > >>> pages: 0 removed, 2213 remain, 0 skipped due to pins, 0 skipped frozen\n> > >>> tuples: 250000 removed, 250000 remain, 0 are dead but not yet removable, oldest xmin: 502\n> > >>> buffer usage: 4448 hits, 4 misses, 4 dirtied\n> > >>> avg read rate: 0.160 MB/s, avg write rate: 0.160 MB/s\n> > >>> system usage: CPU: user: 0.13 s, system: 0.00 s, elapsed: 0.19 s\n> > >>> WAL usage: 6643 records, 4 full page records, 1402679 bytes\n> > >>>\n> > >>> VACUUM log sample:\n> > >>>\n> > >>> # vacuum VERBOSE t1;\n> > >>> INFO: vacuuming \"public.t1\"\n> > >>> INFO: \"t1\": removed 50000 row versions in 443 pages\n> > >>> INFO: \"t1\": found 50000 removable, 0 nonremovable row versions in 443 out of 443 pages\n> > >>> DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 512\n> > >>> There were 50000 unused item identifiers.\n> > >>> Skipped 0 pages due to buffer pins, 0 frozen pages.\n> > >>> 0 pages are entirely empty.\n> > >>> 1332 WAL records, 4 WAL full page records, 306901 WAL bytes\n> > >>> CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s.\n> > >>> INFO: \"t1\": truncated 443 to 0 pages\n> > >>> DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n> > >>> INFO: vacuuming \"pg_toast.pg_toast_16385\"\n> > >>> INFO: index \"pg_toast_16385_index\" now contains 0 row versions in 1 pages\n> > >>> DETAIL: 0 index row versions were removed.\n> > >>> 0 index pages have been deleted, 0 are currently reusable.\n> > >>> CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n> > >>> INFO: \"pg_toast_16385\": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages\n> > >>> DETAIL: 0 dead row versions cannot be removed yet, oldest 
xmin: 513\n> > >>> There were 0 unused item identifiers.\n> > >>> Skipped 0 pages due to buffer pins, 0 frozen pages.\n> > >>> 0 pages are entirely empty.\n> > >>> 0 WAL records, 0 WAL full page records, 0 WAL bytes\n> > >>> CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n> > >>> VACUUM\n> > >>>\n> > >>> Note that the 3rd patch is an addition on top of Kirill's original patch, as\n> > >>> this is information that would have been greatly helpful to investigate in some\n> > >>> performance issues I had to investigate recently. I'd be happy to have it land\n> > >>> into v13, but if that's controversial or too late I'm happy to postpone it to\n> > >>> v14 if the infrastructure added in Kirill's patches can make it to v13.\n> > >>\n> > >> Dear all, can we please focus on getting the core patch committed?\n> > >> Given the uncertainity regarding autovacuum stats, can we please get\n> > >> parts 1 and 2 into the codebase, and think about exposing autovacuum\n> > >> stats later?\n> > >\n> > > Here are the comments for 0001 patch.\n> > >\n> > > + /*\n> > > + * Report a full page image constructed for the WAL record\n> > > + */\n> > > + pgWalUsage.wal_fp_records++;\n> > >\n> > > Isn't it better to use \"fpw\" or \"fpi\" for the variable name rather than\n> > > \"fp\" here? In other places, \"fpw\" and \"fpi\" are used for full page\n> > > writes/image.\n> > >\n> > > ISTM that this counter could be incorrect if XLogInsertRecord() determines to\n> > > calculate again whether FPI is necessary or not. No? IOW, this issue could\n> > > happen if XLogInsert() calls XLogRecordAssemble() multiple times in\n> > > its do-while loop. 
Isn't this problematic?\n> > >\n> > > + long wal_bytes; /* size of wal records produced */\n> > >\n> > > Isn't it safer to use uint64 (i.e., XLogRecPtr) as the type of this variable\n> > > rather than long?\n> > >\n> > > + shm_toc_insert(pcxt->toc, PARALLEL_KEY_WAL_USAGE, bufusage_space);\n> > >\n> > > bufusage_space should be walusage_space here?\n> > >\n> > > /*\n> > > * Finish parallel execution. We wait for parallel workers to finish, and\n> > > * accumulate their buffer usage.\n> > > */\n> > >\n> > > There are some comments mentioning buffer usage, in execParallel.c.\n> > > For example, the top comment for ExecParallelFinish(), as the above.\n> > > These should be updated.\n> >\n> > Here are the comments for 0002 patch.\n> >\n> > + OUT wal_write_bytes int8,\n> > + OUT wal_write_records int8,\n> > + OUT wal_write_fp_records int8\n> >\n> > Isn't \"write\" part in the column names confusing because it's WAL\n> > *generated* (not written) by the statement?\n> >\n> > +RETURNS SETOF record\n> > +AS 'MODULE_PATHNAME', 'pg_stat_statements_1_4'\n> > +LANGUAGE C STRICT VOLATILE;\n> >\n> > PARALLEL SAFE should be specified?\n> >\n> > +/* contrib/pg_stat_statements/pg_stat_statements--1.7--1.8.sql */\n> >\n> > ISTM it's good timing to have also pg_stat_statements--1.8.sql since\n> > the definition of pg_stat_statements() is changed. Thought?\n> >\n> > +-- CHECKPOINT before WAL tests to ensure test stability\n> > +CHECKPOINT;\n> >\n> > Is this true? I thought you added this because the number of FPI\n> > should be larger than zero in the subsequent test. No? But there\n> > seems no such test. I'm not excited about adding the test checking\n> > the number of FPI because it looks fragile, though...\n> >\n> > +UPDATE pgss_test SET b = '333' WHERE a = 3 \\;\n> > +UPDATE pgss_test SET b = '444' WHERE a = 4 ;\n> >\n> > Could you tell me why several queries need to be run to test\n> > the WAL usage? 
Isn't running a few queries enough for the test purpose?\n>\n> FTR I marked the commitfest entry as waiting on author.\n>\n> Kirill do you think you'll have time to address Fuji-san's review\n> shortly? The end of the commitfest is approaching quite fast :(\n\nAll these are really valuable objections. Unfortunately, I won't be\nable to get all sorted out soon, due to total lack of time. I would be\nvery glad if somebody could step in for this patch.\n\n\n", "msg_date": "Fri, 27 Mar 2020 22:21:48 +0300", "msg_from": "Kirill Bychik <kirill.bychik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Fri, Mar 27, 2020 at 8:21 PM Kirill Bychik <kirill.bychik@gmail.com> wrote:\n>\n> > > >>>>> I'm attaching a v5 with fp records only for temp tables, so there's no risk of\n> > > >>>>> instability. As I previously said I'm fine with your two patches, so unless\n> > > >>>>> you have objections on the fpi test for temp tables or the documentation\n> > > >>>>> changes, I believe those should be ready for committer.\n> > > >>>>\n> > > >>>> You added the columns into pg_stat_database, but seem to forget to\n> > > >>>> update the document for pg_stat_database.\n> > > >>>\n> > > >>> Ah right, I totally missed that when I tried to clean up the original POC.\n> > > >>>\n> > > >>>> Is it really reasonable to add the columns for vacuum's WAL usage into\n> > > >>>> pg_stat_database? I'm not sure how much the information about\n> > > >>>> the amount of WAL generated by vacuum per database is useful.\n> > > >>>\n> > > >>> The amount per database isn't really useful, but I didn't have a better idea on\n> > > >>> how to expose (auto)vacuum WAL usage until this:\n> > > >>>\n> > > >>>> Isn't it better to make VACUUM VERBOSE and autovacuum log include\n> > > >>>> that information, instead, to see how much each vacuum activity\n> > > >>>> generates the WAL? 
Sorry if this discussion has already been done\n> > > >>>> upthread.\n> > > >>>\n> > > >>> That's a way better idea! I'm attaching the full patchset with the 3rd patch\n> > > >>> to use this approach instead. There's a bit a duplicate code for computing the\n> > > >>> WalUsage, as I didn't find a better way to avoid that without exposing\n> > > >>> WalUsageAccumDiff().\n> > > >>>\n> > > >>> Autovacuum log sample:\n> > > >>>\n> > > >>> 2020-03-19 15:49:05.708 CET [5843] LOG: automatic vacuum of table \"rjuju.public.t1\": index scans: 0\n> > > >>> pages: 0 removed, 2213 remain, 0 skipped due to pins, 0 skipped frozen\n> > > >>> tuples: 250000 removed, 250000 remain, 0 are dead but not yet removable, oldest xmin: 502\n> > > >>> buffer usage: 4448 hits, 4 misses, 4 dirtied\n> > > >>> avg read rate: 0.160 MB/s, avg write rate: 0.160 MB/s\n> > > >>> system usage: CPU: user: 0.13 s, system: 0.00 s, elapsed: 0.19 s\n> > > >>> WAL usage: 6643 records, 4 full page records, 1402679 bytes\n> > > >>>\n> > > >>> VACUUM log sample:\n> > > >>>\n> > > >>> # vacuum VERBOSE t1;\n> > > >>> INFO: vacuuming \"public.t1\"\n> > > >>> INFO: \"t1\": removed 50000 row versions in 443 pages\n> > > >>> INFO: \"t1\": found 50000 removable, 0 nonremovable row versions in 443 out of 443 pages\n> > > >>> DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 512\n> > > >>> There were 50000 unused item identifiers.\n> > > >>> Skipped 0 pages due to buffer pins, 0 frozen pages.\n> > > >>> 0 pages are entirely empty.\n> > > >>> 1332 WAL records, 4 WAL full page records, 306901 WAL bytes\n> > > >>> CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s.\n> > > >>> INFO: \"t1\": truncated 443 to 0 pages\n> > > >>> DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n> > > >>> INFO: vacuuming \"pg_toast.pg_toast_16385\"\n> > > >>> INFO: index \"pg_toast_16385_index\" now contains 0 row versions in 1 pages\n> > > >>> DETAIL: 0 index row versions were removed.\n> > > >>> 0 index pages have 
been deleted, 0 are currently reusable.\n> > > >>> CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n> > > >>> INFO: \"pg_toast_16385\": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages\n> > > >>> DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 513\n> > > >>> There were 0 unused item identifiers.\n> > > >>> Skipped 0 pages due to buffer pins, 0 frozen pages.\n> > > >>> 0 pages are entirely empty.\n> > > >>> 0 WAL records, 0 WAL full page records, 0 WAL bytes\n> > > >>> CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n> > > >>> VACUUM\n> > > >>>\n> > > >>> Note that the 3rd patch is an addition on top of Kirill's original patch, as\n> > > >>> this is information that would have been greatly helpful to investigate in some\n> > > >>> performance issues I had to investigate recently. I'd be happy to have it land\n> > > >>> into v13, but if that's controversial or too late I'm happy to postpone it to\n> > > >>> v14 if the infrastructure added in Kirill's patches can make it to v13.\n> > > >>\n> > > >> Dear all, can we please focus on getting the core patch committed?\n> > > >> Given the uncertainity regarding autovacuum stats, can we please get\n> > > >> parts 1 and 2 into the codebase, and think about exposing autovacuum\n> > > >> stats later?\n> > > >\n> > > > Here are the comments for 0001 patch.\n> > > >\n> > > > + /*\n> > > > + * Report a full page image constructed for the WAL record\n> > > > + */\n> > > > + pgWalUsage.wal_fp_records++;\n> > > >\n> > > > Isn't it better to use \"fpw\" or \"fpi\" for the variable name rather than\n> > > > \"fp\" here? In other places, \"fpw\" and \"fpi\" are used for full page\n> > > > writes/image.\n> > > >\n> > > > ISTM that this counter could be incorrect if XLogInsertRecord() determines to\n> > > > calculate again whether FPI is necessary or not. No? IOW, this issue could\n> > > > happen if XLogInsert() calls XLogRecordAssemble() multiple times in\n> > > > its do-while loop. 
Isn't this problematic?\n> > > >\n> > > > + long wal_bytes; /* size of wal records produced */\n> > > >\n> > > > Isn't it safer to use uint64 (i.e., XLogRecPtr) as the type of this variable\n> > > > rather than long?\n> > > >\n> > > > + shm_toc_insert(pcxt->toc, PARALLEL_KEY_WAL_USAGE, bufusage_space);\n> > > >\n> > > > bufusage_space should be walusage_space here?\n> > > >\n> > > > /*\n> > > > * Finish parallel execution. We wait for parallel workers to finish, and\n> > > > * accumulate their buffer usage.\n> > > > */\n> > > >\n> > > > There are some comments mentioning buffer usage, in execParallel.c.\n> > > > For example, the top comment for ExecParallelFinish(), as the above.\n> > > > These should be updated.\n> > >\n> > > Here are the comments for 0002 patch.\n> > >\n> > > + OUT wal_write_bytes int8,\n> > > + OUT wal_write_records int8,\n> > > + OUT wal_write_fp_records int8\n> > >\n> > > Isn't \"write\" part in the column names confusing because it's WAL\n> > > *generated* (not written) by the statement?\n> > >\n> > > +RETURNS SETOF record\n> > > +AS 'MODULE_PATHNAME', 'pg_stat_statements_1_4'\n> > > +LANGUAGE C STRICT VOLATILE;\n> > >\n> > > PARALLEL SAFE should be specified?\n> > >\n> > > +/* contrib/pg_stat_statements/pg_stat_statements--1.7--1.8.sql */\n> > >\n> > > ISTM it's good timing to have also pg_stat_statements--1.8.sql since\n> > > the definition of pg_stat_statements() is changed. Thought?\n> > >\n> > > +-- CHECKPOINT before WAL tests to ensure test stability\n> > > +CHECKPOINT;\n> > >\n> > > Is this true? I thought you added this because the number of FPI\n> > > should be larger than zero in the subsequent test. No? But there\n> > > seems no such test. 
I'm not excited about adding the test checking\n> > > the number of FPI because it looks fragile, though...\n> > >\n> > > +UPDATE pgss_test SET b = '333' WHERE a = 3 \\;\n> > > +UPDATE pgss_test SET b = '444' WHERE a = 4 ;\n> > >\n> > > Could you tell me why several queries need to be run to test\n> > > the WAL usage? Isn't running a few query enough for the test purpase?\n> >\n> > FTR I marked the commitfest entry as waiting on author.\n> >\n> > Kirill do you think you'll have time to address Fuji-san's review\n> > shortly? The end of the commitfest is approaching quite fast :(\n>\n> All these are really valuable objections. Unfortunately, I won't be\n> able to get all sorted out soon, due to total lack of time. I would be\n> very glad if somebody could step in for this patch.\n\nI'll try to do that tomorrow!\n\n\n", "msg_date": "Fri, 27 Mar 2020 20:24:33 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sat, Mar 28, 2020 at 12:54 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Mar 27, 2020 at 8:21 PM Kirill Bychik <kirill.bychik@gmail.com> wrote:\n> >\n> >\n> > All these are really valuable objections. Unfortunately, I won't be\n> > able to get all sorted out soon, due to total lack of time. I would be\n> > very glad if somebody could step in for this patch.\n>\n> I'll try to do that tomorrow!\n>\n\nI see some basic problems with the patch. The way it tries to compute\nWAL usage for parallel stuff doesn't seem right to me. Can you share\nor point me to any test done where we have computed WAL for parallel\noperations like Parallel Vacuum or Parallel Create Index? Basically,\nI don't know changes done in ExecInitParallelPlan and friends allow us\nto compute WAL for parallel operations. Those will primarily cover\nparallel queries that won't write WAL. 
How have you tested those\nchanges?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 28 Mar 2020 16:14:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sat, Mar 28, 2020 at 04:14:04PM +0530, Amit Kapila wrote:\n> \n> I see some basic problems with the patch. The way it tries to compute\n> WAL usage for parallel stuff doesn't seem right to me. Can you share\n> or point me to any test done where we have computed WAL for parallel\n> operations like Parallel Vacuum or Parallel Create Index?\n\nAh, that's indeed a good point and AFAICT WAL records from parallel utility\nworkers won't be accounted for. That being said, I think that an argument\ncould be made that proper infrastructure should have been added in the original\nparallel utility patches, as pg_stat_statement is already broken wrt. buffer\nusage in parallel utility, unless I'm missing something.\n\n> Basically,\n> I don't know changes done in ExecInitParallelPlan and friends allow us\n> to compute WAL for parallel operations. Those will primarily cover\n> parallel queries that won't write WAL. How have you tested those\n> changes?\n\nI didn't test those, and I'm not even sure how to properly and reliably test\nthat. Do you have any advice on how to achieve that?\n\nHowever the patch is mimicking the buffer instrumentation that already exists,\nand the approach also looks correct to me. Do you have a reason to believe\nthat the approach that works for buffer usage wouldn't work for WAL records? 
(I\nof course agree that this should be tested anyway)\n\n\n", "msg_date": "Sat, 28 Mar 2020 14:38:27 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sat, Mar 28, 2020 at 02:38:27PM +0100, Julien Rouhaud wrote:\n> On Sat, Mar 28, 2020 at 04:14:04PM +0530, Amit Kapila wrote:\n> > \n> > I see some basic problems with the patch. The way it tries to compute\n> > WAL usage for parallel stuff doesn't seem right to me. Can you share\n> > or point me to any test done where we have computed WAL for parallel\n> > operations like Parallel Vacuum or Parallel Create Index?\n> \n> Ah, that's indeed a good point and AFAICT WAL records from parallel utility\n> workers won't be accounted for. That being said, I think that an argument\n> could be made that proper infrastructure should have been added in the original\n> parallel utility patches, as pg_stat_statement is already broken wrt. buffer\n> usage in parallel utility, unless I'm missing something.\n\nJust to be sure I did a quick test with pg_stat_statements behavior using\nparallel/non-parallel CREATE INDEX and VACUUM, and unsurprisingly buffer usage\ndoesn't reflect parallel workers' activity.\n\nI added an open for that, and adding Robert in Cc as 9da0cc352 is the first\ncommit adding parallel maintenance.\n\n\n", "msg_date": "Sat, 28 Mar 2020 16:17:21 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Sat, Mar 28, 2020 at 8:47 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Mar 28, 2020 at 02:38:27PM +0100, Julien Rouhaud wrote:\n> > On Sat, Mar 28, 2020 at 04:14:04PM +0530, Amit Kapila wrote:\n> > >\n> > > I see some basic problems with the patch. The way it tries to compute\n> > > WAL usage for parallel stuff doesn't seem right to me. 
Can you share\n> > > or point me to any test done where we have computed WAL for parallel\n> > > operations like Parallel Vacuum or Parallel Create Index?\n> >\n> > Ah, that's indeed a good point and AFAICT WAL records from parallel utility\n> > workers won't be accounted for. That being said, I think that an argument\n> > could be made that proper infrastructure should have been added in the original\n> > parallel utility patches, as pg_stat_statement is already broken wrt. buffer\n> > usage in parallel utility, unless I'm missing something.\n>\n> Just to be sure I did a quick test with pg_stat_statements behavior using\n> parallel/non-parallel CREATE INDEX and VACUUM, and unsurprisingly buffer usage\n> doesn't reflect parallel workers' activity.\n>\n\nSawada-San would like to investigate this? If not, I will look into\nthis next week.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 29 Mar 2020 10:53:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Sat, Mar 28, 2020 at 7:08 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Mar 28, 2020 at 04:14:04PM +0530, Amit Kapila wrote:\n> >\n> > Basically,\n> > I don't know changes done in ExecInitParallelPlan and friends allow us\n> > to compute WAL for parallel operations. Those will primarily cover\n> > parallel queries that won't write WAL. How you have tested those\n> > changes?\n>\n> I didn't tested those, and I'm not even sure how to properly and reliably test\n> that. Do you have any advice on how to achieve that?\n>\n> However the patch is mimicking the buffer instrumentation that already exists,\n> and the approach also looks correct to me. Do you have a reason to believe\n> that the approach that works for buffer usage wouldn't work for WAL records? 
(I\n> of course agree that this should be tested anyway)\n>\n\nThe buffer usage infrastructure is for read-only queries (for ex. for\nstats like blks_hit, blks_read). As far as I can think, there is no\neasy way to test the WAL usage via that API. It might or might not be\nrequired in the future depending on whether we decide to use the same\ninfrastructure for parallel writes. I think for now we should remove\nthat part of changes and rather think how to get that for parallel\noperations that can write WAL. For ex. we might need to do something\nsimilar to what this patch has done in begin_parallel_vacuum and\nend_parallel_vacuum. Would you like to attempt that?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 29 Mar 2020 11:03:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sun, 29 Mar 2020 at 14:23, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Mar 28, 2020 at 8:47 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Sat, Mar 28, 2020 at 02:38:27PM +0100, Julien Rouhaud wrote:\n> > > On Sat, Mar 28, 2020 at 04:14:04PM +0530, Amit Kapila wrote:\n> > > >\n> > > > I see some basic problems with the patch. The way it tries to compute\n> > > > WAL usage for parallel stuff doesn't seem right to me. Can you share\n> > > > or point me to any test done where we have computed WAL for parallel\n> > > > operations like Parallel Vacuum or Parallel Create Index?\n> > >\n> > > Ah, that's indeed a good point and AFAICT WAL records from parallel utility\n> > > workers won't be accounted for. That being said, I think that an argument\n> > > could be made that proper infrastructure should have been added in the original\n> > > parallel utility patches, as pg_stat_statement is already broken wrt. 
buffer\n> > > usage in parallel utility, unless I'm missing something.\n> >\n> > Just to be sure I did a quick test with pg_stat_statements behavior using\n> > parallel/non-parallel CREATE INDEX and VACUUM, and unsurprisingly buffer usage\n> > doesn't reflect parallel workers' activity.\n> >\n>\n> Sawada-San would like to investigate this? If not, I will look into\n> this next week.\n\nSure, I'll investigate this issue today.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 29 Mar 2020 15:19:34 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Sun, 29 Mar 2020 at 15:19, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Sun, 29 Mar 2020 at 14:23, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Mar 28, 2020 at 8:47 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > On Sat, Mar 28, 2020 at 02:38:27PM +0100, Julien Rouhaud wrote:\n> > > > On Sat, Mar 28, 2020 at 04:14:04PM +0530, Amit Kapila wrote:\n> > > > >\n> > > > > I see some basic problems with the patch. The way it tries to compute\n> > > > > WAL usage for parallel stuff doesn't seem right to me. Can you share\n> > > > > or point me to any test done where we have computed WAL for parallel\n> > > > > operations like Parallel Vacuum or Parallel Create Index?\n> > > >\n> > > > Ah, that's indeed a good point and AFAICT WAL records from parallel utility\n> > > > workers won't be accounted for. That being said, I think that an argument\n> > > > could be made that proper infrastructure should have been added in the original\n> > > > parallel utility patches, as pg_stat_statement is already broken wrt. 
buffer\n> > > > usage in parallel utility, unless I'm missing something.\n> > >\n> > > Just to be sure I did a quick test with pg_stat_statements behavior using\n> > > parallel/non-parallel CREATE INDEX and VACUUM, and unsurprisingly buffer usage\n> > > doesn't reflect parallel workers' activity.\n> > >\n> >\n> > Sawada-San would like to investigate this? If not, I will look into\n> > this next week.\n>\n> Sure, I'll investigate this issue today.\n>\n\nI've run vacuum with/without parallel workers on the table having 5\nindexes. The vacuum reads all blocks of table and indexes.\n\n* VACUUM command with no parallel workers\n=# select total_time, shared_blks_hit, shared_blks_read,\nshared_blks_hit + shared_blks_read as total_read_blks,\nshared_blks_dirtied, shared_blks_written from pg_stat_statements where\nquery ~ 'vacuum';\n\n total_time | shared_blks_hit | shared_blks_read | total_read_blks |\nshared_blks_dirtied | shared_blks_written\n--------------+-----------------+------------------+-----------------+---------------------+---------------------\n 19857.217207 | 45238 | 226944 | 272182 |\n 225943 | 225894\n(1 row)\n\n* VACUUM command with 4 parallel workers\n=# select total_time, shared_blks_hit, shared_blks_read,\nshared_blks_hit + shared_blks_read as total_read_blks,\nshared_blks_dirtied, shared_blks_written from pg_stat_statements where\nquery ~ 'vacuum';\n\n total_time | shared_blks_hit | shared_blks_read | total_read_blks |\nshared_blks_dirtied | shared_blks_written\n-------------+-----------------+------------------+-----------------+---------------------+---------------------\n 6932.117365 | 45205 | 73079 | 118284 |\n 72403 | 72365\n(1 row)\n\nThe total number of blocks of table and indexes are about 182243\nblocks. 
As Julien reported, obviously the total number of read blocks\nduring parallel vacuum is much less than single process vacuum's\nresult.\n\nParallel create index has the same issue but it doesn't exist in\nparallel queries for SELECTs.\n\nI think we need to change parallel maintenance commands so that they\nreport buffer usage like what ParallelQueryMain() does; prepare to\ntrack buffer usage during query execution by\nInstrStartParallelQuery(), and report it by InstrEndParallelQuery()\nafter parallel maintenance command. To report buffer usage of parallel\nmaintenance command correctly, I'm thinking that we can (1) change\nparallel create index and parallel vacuum so that they prepare\ngathering buffer usage, or (2) have a common entry point for parallel\nmaintenance commands that is responsible for gathering buffer usage\nand calling the entry functions for individual maintenance command.\nI'll investigate it more in depth.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 29 Mar 2020 16:51:49 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Sun, Mar 29, 2020 at 11:03:50AM +0530, Amit Kapila wrote:\n> On Sat, Mar 28, 2020 at 7:08 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Sat, Mar 28, 2020 at 04:14:04PM +0530, Amit Kapila wrote:\n> > >\n> > > Basically,\n> > > I don't know changes done in ExecInitParallelPlan and friends allow us\n> > > to compute WAL for parallel operations. Those will primarily cover\n> > > parallel queries that won't write WAL. How you have tested those\n> > > changes?\n> >\n> > I didn't tested those, and I'm not even sure how to properly and reliably test\n> > that. 
Do you have any advice on how to achieve that?\n> >\n> > However the patch is mimicking the buffer instrumentation that already exists,\n> > and the approach also looks correct to me. Do you have a reason to believe\n> > that the approach that works for buffer usage wouldn't work for WAL records? (I\n> > of course agree that this should be tested anyway)\n> >\n> \n> The buffer usage infrastructure is for read-only queries (for ex. for\n> stats like blks_hit, blks_read). As far as I can think, there is no\n> easy way to test the WAL usage via that API. It might or might not be\n> required in the future depending on whether we decide to use the same\n> infrastructure for parallel writes.\n\nI'm not sure that I get your point. I'm assuming that you meant\nparallel-read-only queries, but surely buffer usage infrastructure for\nparallel query relies on the same approach as non-parallel one (each node\ncomputes the process-local pgBufferUsage diff) and sums all of that at the end\nof the parallel query execution. I also don't see how whether the query is\nread-only or not is relevant here as far as instrumentation is concerned,\nespecially since read-only query can definitely do writes and increase the\ncount of dirtied buffers, like a write query would. For instance a hint\nbit change can be done in a parallel query AFAIK, and this can generate WAL\nrecords in wal_log_hints is enabled, so that's probably one way to test it.\n\nI now think that not adding support for WAL buffers in EXPLAIN output in the\ninitial patch scope was a mistake, as this is probably the best way to test the\nWAL counters for parallel queries. This shouldn't be hard to add though, and I\ncan work on it quickly if there's still a chance to get this feature included\nin pg13.\n\n> I think for now we should remove\n> that part of changes and rather think how to get that for parallel\n> operations that can write WAL. For ex. 
we might need to do something\n> similar to what this patch has done in begin_parallel_vacuum and\n> end_parallel_vacuum. Would you like to attempt that?\n\nDo you mean removing WAL buffers instrumentation from parallel query\ninfrastructure?\n\nFor parallel utility that can do writes it's probably better to keep the\ndiscussion in the other part of the thread. I tried to think a little bit\nabout that, but for now I don't have a better idea than adding something\nsimilar to intrumentation for utility command to have a general infrastructure,\nas building a workaround for specific utility looks like the wrong approach.\nBut this would require quite import changes in utility handling, which is maybe\nnot a good idea a couple of week before the feature freeze, and that is\ndefinitely not backpatchable so that won't fix the issue for parallel index\nbuild that exists since pg11.\n\n\n", "msg_date": "Sun, 29 Mar 2020 09:55:49 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sun, Mar 29, 2020 at 9:52 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Sun, 29 Mar 2020 at 15:19, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Sun, 29 Mar 2020 at 14:23, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sat, Mar 28, 2020 at 8:47 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > >\n> > > > On Sat, Mar 28, 2020 at 02:38:27PM +0100, Julien Rouhaud wrote:\n> > > > > On Sat, Mar 28, 2020 at 04:14:04PM +0530, Amit Kapila wrote:\n> > > > > >\n> > > > > > I see some basic problems with the patch. The way it tries to compute\n> > > > > > WAL usage for parallel stuff doesn't seem right to me. 
Can you share\n> > > > > > or point me to any test done where we have computed WAL for parallel\n> > > > > > operations like Parallel Vacuum or Parallel Create Index?\n> > > > >\n> > > > > Ah, that's indeed a good point and AFAICT WAL records from parallel utility\n> > > > > workers won't be accounted for. That being said, I think that an argument\n> > > > > could be made that proper infrastructure should have been added in the original\n> > > > > parallel utility patches, as pg_stat_statement is already broken wrt. buffer\n> > > > > usage in parallel utility, unless I'm missing something.\n> > > >\n> > > > Just to be sure I did a quick test with pg_stat_statements behavior using\n> > > > parallel/non-parallel CREATE INDEX and VACUUM, and unsurprisingly buffer usage\n> > > > doesn't reflect parallel workers' activity.\n> > > >\n> > >\n> > > Sawada-San would like to investigate this? If not, I will look into\n> > > this next week.\n> >\n> > Sure, I'll investigate this issue today.\n\nThanks for looking at it!\n\n> I've run vacuum with/without parallel workers on the table having 5\n> indexes. 
The vacuum reads all blocks of table and indexes.\n>\n> * VACUUM command with no parallel workers\n> =# select total_time, shared_blks_hit, shared_blks_read,\n> shared_blks_hit + shared_blks_read as total_read_blks,\n> shared_blks_dirtied, shared_blks_written from pg_stat_statements where\n> query ~ 'vacuum';\n>\n> total_time | shared_blks_hit | shared_blks_read | total_read_blks |\n> shared_blks_dirtied | shared_blks_written\n> --------------+-----------------+------------------+-----------------+---------------------+---------------------\n> 19857.217207 | 45238 | 226944 | 272182 |\n> 225943 | 225894\n> (1 row)\n>\n> * VACUUM command with 4 parallel workers\n> =# select total_time, shared_blks_hit, shared_blks_read,\n> shared_blks_hit + shared_blks_read as total_read_blks,\n> shared_blks_dirtied, shared_blks_written from pg_stat_statements where\n> query ~ 'vacuum';\n>\n> total_time | shared_blks_hit | shared_blks_read | total_read_blks |\n> shared_blks_dirtied | shared_blks_written\n> -------------+-----------------+------------------+-----------------+---------------------+---------------------\n> 6932.117365 | 45205 | 73079 | 118284 |\n> 72403 | 72365\n> (1 row)\n>\n> The total number of blocks of table and indexes are about 182243\n> blocks. As Julien reported, obviously the total number of read blocks\n> during parallel vacuum is much less than single process vacuum's\n> result.\n>\n> Parallel create index has the same issue but it doesn't exist in\n> parallel queries for SELECTs.\n>\n> I think we need to change parallel maintenance commands so that they\n> report buffer usage like what ParallelQueryMain() does; prepare to\n> track buffer usage during query execution by\n> InstrStartParallelQuery(), and report it by InstrEndParallelQuery()\n> after parallel maintenance command. 
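The instrumentation pattern described above (snapshot the process-local counters on entry, publish the diff on exit) can be sketched outside PostgreSQL. This is an illustrative model only; the `BufferUsage` fields, `worker_main` and `fake_index_vacuum` are invented names standing in for the executor API, not the real one:

```python
from dataclasses import dataclass

@dataclass
class BufferUsage:
    shared_blks_hit: int = 0
    shared_blks_read: int = 0

    def add(self, other):
        self.shared_blks_hit += other.shared_blks_hit
        self.shared_blks_read += other.shared_blks_read

    def diff(self, since):
        # counters only ever grow, so the diff is the activity since snapshot
        return BufferUsage(self.shared_blks_hit - since.shared_blks_hit,
                           self.shared_blks_read - since.shared_blks_read)

# process-global counters, playing the role of pgBufferUsage
pg_buffer_usage = BufferUsage()

def worker_main(shared_slot, do_work):
    # snapshot on entry, publish the diff on exit; this mirrors the
    # InstrStartParallelQuery()/InstrEndParallelQuery() shape
    start = BufferUsage(pg_buffer_usage.shared_blks_hit,
                        pg_buffer_usage.shared_blks_read)
    do_work()
    shared_slot.add(pg_buffer_usage.diff(start))

# simulate one worker that touches 10 blocks, 7 of them already cached
slot = BufferUsage()

def fake_index_vacuum():
    pg_buffer_usage.shared_blks_hit += 7
    pg_buffer_usage.shared_blks_read += 3

worker_main(slot, fake_index_vacuum)
```

In the real code the slot would live in dynamic shared memory, so the leader can read it after the worker exits.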
To report buffer usage of parallel\n> maintenance command correctly, I'm thinking that we can (1) change\n> parallel create index and parallel vacuum so that they prepare\n> gathering buffer usage, or (2) have a common entry point for parallel\n> maintenance commands that is responsible for gathering buffer usage\n> and calling the entry functions for individual maintenance command.\n> I'll investigate it more in depth.\n\nAs I just mentioned, (2) seems like a better design as it's quite\nlikely that the number of parallel-aware utilities will probably\ncontinue to increase. One problem also is that parallel CREATE INDEX\nhas been introduced in pg11, so (2) probably won't be packpatchable\n(and (1) seems problematic too).\n\n\n", "msg_date": "Sun, 29 Mar 2020 10:13:58 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Sun, Mar 29, 2020 at 1:44 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sun, Mar 29, 2020 at 9:52 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > I've run vacuum with/without parallel workers on the table having 5\n> > indexes. 
The vacuum reads all blocks of table and indexes.\n> >\n> > * VACUUM command with no parallel workers\n> > =# select total_time, shared_blks_hit, shared_blks_read,\n> > shared_blks_hit + shared_blks_read as total_read_blks,\n> > shared_blks_dirtied, shared_blks_written from pg_stat_statements where\n> > query ~ 'vacuum';\n> >\n> > total_time | shared_blks_hit | shared_blks_read | total_read_blks |\n> > shared_blks_dirtied | shared_blks_written\n> > --------------+-----------------+------------------+-----------------+---------------------+---------------------\n> > 19857.217207 | 45238 | 226944 | 272182 |\n> > 225943 | 225894\n> > (1 row)\n> >\n> > * VACUUM command with 4 parallel workers\n> > =# select total_time, shared_blks_hit, shared_blks_read,\n> > shared_blks_hit + shared_blks_read as total_read_blks,\n> > shared_blks_dirtied, shared_blks_written from pg_stat_statements where\n> > query ~ 'vacuum';\n> >\n> > total_time | shared_blks_hit | shared_blks_read | total_read_blks |\n> > shared_blks_dirtied | shared_blks_written\n> > -------------+-----------------+------------------+-----------------+---------------------+---------------------\n> > 6932.117365 | 45205 | 73079 | 118284 |\n> > 72403 | 72365\n> > (1 row)\n> >\n> > The total number of blocks of table and indexes are about 182243\n> > blocks. As Julien reported, obviously the total number of read blocks\n> > during parallel vacuum is much less than single process vacuum's\n> > result.\n> >\n> > Parallel create index has the same issue but it doesn't exist in\n> > parallel queries for SELECTs.\n> >\n> > I think we need to change parallel maintenance commands so that they\n> > report buffer usage like what ParallelQueryMain() does; prepare to\n> > track buffer usage during query execution by\n> > InstrStartParallelQuery(), and report it by InstrEndParallelQuery()\n> > after parallel maintenance command. 
To report buffer usage of parallel\n> > maintenance command correctly, I'm thinking that we can (1) change\n> > parallel create index and parallel vacuum so that they prepare\n> > gathering buffer usage, or (2) have a common entry point for parallel\n> > maintenance commands that is responsible for gathering buffer usage\n> > and calling the entry functions for individual maintenance command.\n> > I'll investigate it more in depth.\n>\n> As I just mentioned, (2) seems like a better design as it's quite\n> likely that the number of parallel-aware utilities will probably\n> continue to increase. One problem also is that parallel CREATE INDEX\n> has been introduced in pg11, so (2) probably won't be packpatchable\n> (and (1) seems problematic too).\n>\n\nI am not sure if we can decide at this stage whether it is\nback-patchable or not. Let's first see the patch and if it turns out\nto be complex, then we can try to do some straight-forward fix for\nback-branches. In general, I don't see why the fix here should be\ncomplex?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 29 Mar 2020 16:45:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Sun, Mar 29, 2020 at 1:26 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> I'm not sure that I get your point. I'm assuming that you meant\n> parallel-read-only queries, but surely buffer usage infrastructure for\n> parallel query relies on the same approach as non-parallel one (each node\n> computes the process-local pgBufferUsage diff) and sums all of that at the end\n> of the parallel query execution. 
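The undercount being discussed can be pictured with toy numbers: each backend tracks a process-local diff, and the reported total is only right if the leader folds the workers' diffs in. The split below is made up, chosen only so that the full total matches the roughly 182243 blocks measured upthread:

```python
# Toy model of the undercount: pg_stat_statements only sees the correct
# total if the leader gathers the workers' process-local diffs.
def reported_read_blks(leader_diff, worker_diffs, gather_workers):
    total = leader_diff
    if gather_workers:
        total += sum(worker_diffs)
    return total

worker_diffs = [36000, 36500, 37000, 36500]   # 4 parallel workers (made up)
leader_diff = 36243                           # leader's own share (made up)

# with the bug the workers' activity is simply lost
buggy = reported_read_blks(leader_diff, worker_diffs, gather_workers=False)
fixed = reported_read_blks(leader_diff, worker_diffs, gather_workers=True)
```

The "buggy" figure is what the thread observed for parallel vacuum: only the leader's share of the reads shows up.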
I also don't see how whether the query is\n> read-only or not is relevant here as far as instrumentation is concerned,\n> especially since read-only query can definitely do writes and increase the\n> count of dirtied buffers, like a write query would. For instance a hint\n> bit change can be done in a parallel query AFAIK, and this can generate WAL\n> records in wal_log_hints is enabled, so that's probably one way to test it.\n>\n\nYeah, that way we can test it. Can you try that?\n\n> I now think that not adding support for WAL buffers in EXPLAIN output in the\n> initial patch scope was a mistake, as this is probably the best way to test the\n> WAL counters for parallel queries. This shouldn't be hard to add though, and I\n> can work on it quickly if there's still a chance to get this feature included\n> in pg13.\n>\n\nI am not sure we will add it in Explain or not (maybe we need inputs\nfrom others in this regard), but if it helps in testing this part of\nthe patch, then it is a good idea to write a patch for it. You might\nwant to keep it separate from the main patch as we might not commit\nit.\n\n> > I think for now we should remove\n> > that part of changes and rather think how to get that for parallel\n> > operations that can write WAL. For ex. we might need to do something\n> > similar to what this patch has done in begin_parallel_vacuum and\n> > end_parallel_vacuum. 
Would you like to attempt that?\n>\n> Do you mean removing WAL buffers instrumentation from parallel query\n> infrastructure?\n>\n\nYes, I meant that but now I realize we need those and your proposed\nway of testing it can help us in validating those changes.\n\n> For parallel utility that can do writes it's probably better to keep the\n> discussion in the other part of the thread.\n>\n\nSure, I am fine with that but I am not sure if it is a good idea to\ncommit this patch without having a way to compute WAL utilization for\nthose commands.\n\n I tried to think a little bit\n> about that, but for now I don't have a better idea than adding something\n> similar to intrumentation for utility command to have a general infrastructure,\n> as building a workaround for specific utility looks like the wrong approach.\n>\n\nI don't know what exactly you have in mind as I don't see why it\nshould be too complex. Let's wait for a patch from Sawada-San on\nbuffer usage stuff and in the meantime, we can work on other parts of\nthis patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 29 Mar 2020 17:12:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sun, 29 Mar 2020 at 20:15, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Mar 29, 2020 at 1:44 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Sun, Mar 29, 2020 at 9:52 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > I've run vacuum with/without parallel workers on the table having 5\n> > > indexes. 
The vacuum reads all blocks of table and indexes.\n> > >\n> > > * VACUUM command with no parallel workers\n> > > =# select total_time, shared_blks_hit, shared_blks_read,\n> > > shared_blks_hit + shared_blks_read as total_read_blks,\n> > > shared_blks_dirtied, shared_blks_written from pg_stat_statements where\n> > > query ~ 'vacuum';\n> > >\n> > > total_time | shared_blks_hit | shared_blks_read | total_read_blks |\n> > > shared_blks_dirtied | shared_blks_written\n> > > --------------+-----------------+------------------+-----------------+---------------------+---------------------\n> > > 19857.217207 | 45238 | 226944 | 272182 |\n> > > 225943 | 225894\n> > > (1 row)\n> > >\n> > > * VACUUM command with 4 parallel workers\n> > > =# select total_time, shared_blks_hit, shared_blks_read,\n> > > shared_blks_hit + shared_blks_read as total_read_blks,\n> > > shared_blks_dirtied, shared_blks_written from pg_stat_statements where\n> > > query ~ 'vacuum';\n> > >\n> > > total_time | shared_blks_hit | shared_blks_read | total_read_blks |\n> > > shared_blks_dirtied | shared_blks_written\n> > > -------------+-----------------+------------------+-----------------+---------------------+---------------------\n> > > 6932.117365 | 45205 | 73079 | 118284 |\n> > > 72403 | 72365\n> > > (1 row)\n> > >\n> > > The total number of blocks of table and indexes are about 182243\n> > > blocks. As Julien reported, obviously the total number of read blocks\n> > > during parallel vacuum is much less than single process vacuum's\n> > > result.\n> > >\n> > > Parallel create index has the same issue but it doesn't exist in\n> > > parallel queries for SELECTs.\n> > >\n> > > I think we need to change parallel maintenance commands so that they\n> > > report buffer usage like what ParallelQueryMain() does; prepare to\n> > > track buffer usage during query execution by\n> > > InstrStartParallelQuery(), and report it by InstrEndParallelQuery()\n> > > after parallel maintenance command. 
To report buffer usage of parallel\n> > > maintenance command correctly, I'm thinking that we can (1) change\n> > > parallel create index and parallel vacuum so that they prepare\n> > > gathering buffer usage, or (2) have a common entry point for parallel\n> > > maintenance commands that is responsible for gathering buffer usage\n> > > and calling the entry functions for individual maintenance command.\n> > > I'll investigate it more in depth.\n> >\n> > As I just mentioned, (2) seems like a better design as it's quite\n> > likely that the number of parallel-aware utilities will probably\n> > continue to increase. One problem also is that parallel CREATE INDEX\n> > has been introduced in pg11, so (2) probably won't be backpatchable\n> > (and (1) seems problematic too).\n> >\n>\n> I am not sure if we can decide at this stage whether it is\n> back-patchable or not. Let's first see the patch and if it turns out\n> to be complex, then we can try to do some straight-forward fix for\n> back-branches.\n\nAgreed.\n\n> In general, I don't see why the fix here should be\n> complex?\n\nYeah, particularly the approach (1) will not be complex. I'll write a\npatch tomorrow.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 29 Mar 2020 20:44:24 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Mon, Mar 23, 2020 at 11:24:50PM +0900, Fujii Masao wrote:\n> \n> > Here are the comments for 0001 patch.\n> > \n> > +            /*\n> > +             * Report a full page image constructed for the WAL record\n> > +             */\n> > +            pgWalUsage.wal_fp_records++;\n> > \n> > Isn't it better to use \"fpw\" or \"fpi\" for the variable name rather than\n> > \"fp\" here? 
In other places, \"fpw\" and \"fpi\" are used for full page\n> > writes/image.\n\nAgreed, I went with fpw.\n\n> > ISTM that this counter could be incorrect if XLogInsertRecord() determines to\n> > calculate again whether FPI is necessary or not. No? IOW, this issue could\n> > happen if XLogInsert() calls XLogRecordAssemble() multiple times in\n> > its do-while loop. Isn't this problematic?\n\nYes probably. I also see while adding support for EXPLAIN/auto_explain that\nthe previous approach was incrementing both records and fpw_records, while it\nshould be only one of those for each record. I fixed this using the approach I\npreviously mentioned in [1] which seems to work just fine.\n\n> > +    long        wal_bytes;        /* size of wal records produced */\n> > \n> > Isn't it safer to use uint64 (i.e., XLogRecPtr) as the type of this variable\n> > rather than long?\n\nYes indeed. I switched to uint64, and modified everything accordingly (and\nchanged pgss to output numeric as there's no other way to handle unsigned int8)\n\n> > +    shm_toc_insert(pcxt->toc, PARALLEL_KEY_WAL_USAGE, bufusage_space);\n> > \n> > bufusage_space should be walusage_space here?\n\nGood catch, fixed.\n\n> > /*\n> >  * Finish parallel execution.  We wait for parallel workers to finish, and\n> >  * accumulate their buffer usage.\n> >  */\n> > \n> > There are some comments mentioning buffer usage, in execParallel.c.\n> > For example, the top comment for ExecParallelFinish(), as the above.\n> > These should be updated.\n\nI went through the whole file and quickly checked the other places, and I think I\nfixed all the required comments.\n\n> Here are the comments for 0002 patch.\n> \n> + OUT wal_write_bytes int8,\n> + OUT wal_write_records int8,\n> + OUT wal_write_fp_records int8\n> \n> Isn't \"write\" part in the column names confusing because it's WAL\n> *generated* (not written) by the statement?\n\nAgreed, I simply dropped the \"_write\" part everywhere.\n\n> +RETURNS SETOF record\n> +AS 
'MODULE_PATHNAME', 'pg_stat_statements_1_4'\n> +LANGUAGE C STRICT VOLATILE;\n> \n> PARALLEL SAFE should be specified?\n\nIndeed, fixed.\n\n> +/* contrib/pg_stat_statements/pg_stat_statements--1.7--1.8.sql */\n> \n> ISTM it's good timing to have also pg_stat_statements--1.8.sql since\n> the definition of pg_stat_statements() is changed. Thought?\n\nAs mentioned in the other pgss thread, I think the general agreement is to never\nprovide full scripts anymore, so I didn't change that.\n\n> +-- CHECKPOINT before WAL tests to ensure test stability\n> +CHECKPOINT;\n> \n> Is this true? I thought you added this because the number of FPI\n> should be larger than zero in the subsequent test. No? But there\n> seems no such test. I'm not excited about adding the test checking\n> the number of FPI because it looks fragile, though...\n\nIt should ensure a FPW for each new block touched, but yes that's quite fragile.\n\nSince I fixed the record / FPW record counters, I saw that this was actually\nalready broken as there was a mix of FPW and non-FPW, so I dropped the\ncheckpoint and just tested (wal_record + wal_fpw_record) instead.\n\n> +UPDATE pgss_test SET b = '333' WHERE a = 3 \\;\n> +UPDATE pgss_test SET b = '444' WHERE a = 4 ;\n> \n> Could you tell me why several queries need to be run to test\n> the WAL usage? Isn't running a few queries enough for the test purpose?\n\nAs far as I can see it's used to test multiple scenarios (single command /\nmultiple commands in or outside an explicit transaction). It shouldn't add a lot\nof overhead and since some commands are issued with \"\\;\" it's also testing\nproper query string isolation when a multi-command query string is provided,\nwhich doesn't seem like a bad idea. 
I didn't changed that but I'm not opposed\nto remove some of the updates if needed.\n\nAlso, to answer Amit Kapila's comments about WAL records and parallel query, I\nadded support for both EXPLAIN and auto_explain (tab completion and\ndocumentation are also updated), and using a simple table with an index, with\nforced parallelism and no leader participation and concurrent update on the\nsame table, I could test that WAL usage is working as expected:\n\nrjuju=# explain (analyze, wal, verbose) select * from t1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Gather (cost=0.00..8805.05 rows=100010 width=14) (actual time=8.695..47.592 rows=100010 loops=1)\n Output: id, val\n Workers Planned: 2\n Workers Launched: 2\n WAL: records=204 bytes=86198\n -> Parallel Seq Scan on public.t1 (cost=0.00..8805.05 rows=50005 width=14) (actual time=0.056..29.112 rows=50005 loops\n Output: id, val\n WAL: records=204 bytes=86198\n Worker 0: actual time=0.060..28.995 rows=49593 loops=1\n WAL: records=105 bytes=44222\n Worker 1: actual time=0.052..29.230 rows=50417 loops=1\n WAL: records=99 bytes=41976\n Planning Time: 0.038 ms\n Execution Time: 53.957 ms\n(14 rows)\n\nand the same query when nothing end up being modified:\n\nrjuju=# explain (analyze, wal, verbose) select * from t1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Gather (cost=0.00..8805.05 rows=100010 width=14) (actual time=9.413..48.187 rows=100010 loops=1)\n Output: id, val\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Seq Scan on public.t1 (cost=0.00..8805.05 rows=50005 width=14) (actual time=0.033..24.697 rows=50005 loops\n Output: id, val\n Worker 0: actual time=0.028..24.786 rows=50447 loops=1\n Worker 1: actual time=0.038..24.609 rows=49563 loops=1\n Planning Time: 0.282 ms\n Execution Time: 55.643 ms\n(10 rows)\n\nSo it 
seems to me that WAL usage infrastructure for parallel query is working\njust fine. I added the EXPLAIN/auto_explain in a separate commit just in case.\n\n[1] https://www.postgresql.org/message-id/CAOBaU_aECK1Z7Nn+x=MhvEwrJzK8wyPsPtWAafjqtZN1fYjEmg@mail.gmail.com", "msg_date": "Sun, 29 Mar 2020 14:19:44 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "Hi Amit,\n\nSorry I just noticed your mail.\n\nOn Sun, Mar 29, 2020 at 05:12:16PM +0530, Amit Kapila wrote:\n> On Sun, Mar 29, 2020 at 1:26 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > I'm not sure that I get your point. I'm assuming that you meant\n> > parallel-read-only queries, but surely buffer usage infrastructure for\n> > parallel query relies on the same approach as non-parallel one (each node\n> > computes the process-local pgBufferUsage diff) and sums all of that at the end\n> > of the parallel query execution. I also don't see how whether the query is\n> > read-only or not is relevant here as far as instrumentation is concerned,\n> > especially since read-only query can definitely do writes and increase the\n> > count of dirtied buffers, like a write query would. For instance a hint\n> > bit change can be done in a parallel query AFAIK, and this can generate WAL\n> > records in wal_log_hints is enabled, so that's probably one way to test it.\n> >\n> \n> Yeah, that way we can test it. Can you try that?\n> \n> > I now think that not adding support for WAL buffers in EXPLAIN output in the\n> > initial patch scope was a mistake, as this is probably the best way to test the\n> > WAL counters for parallel queries. 
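One cheap sanity check on output like the verbose EXPLAIN shown earlier in the thread is that the per-worker WAL rows must sum to the gather node's total. The numbers below are copied from that EXPLAIN; the leader contributes nothing because leader participation was disabled for the test:

```python
# per-worker WAL counters as printed in the verbose EXPLAIN upthread;
# with no leader participation the leader's own contribution is zero
workers = [
    {"records": 105, "bytes": 44222},   # Worker 0
    {"records": 99, "bytes": 41976},    # Worker 1
]
leader = {"records": 0, "bytes": 0}

gather_records = leader["records"] + sum(w["records"] for w in workers)
gather_bytes = leader["bytes"] + sum(w["bytes"] for w in workers)
```

Both sums match the `WAL: records=204 bytes=86198` line reported at the Gather node, which is what "working as expected" means here.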
This shouldn't be hard to add though, and I\n> > can work on it quickly if there's still a chance to get this feature included\n> > in pg13.\n> >\n> \n> I am not sure we will add it in Explain or not (maybe we need inputs\n> from others in this regard), but if it helps in testing this part of\n> the patch, then it is a good idea to write a patch for it. You might\n> want to keep it separate from the main patch as we might not commit\n> it.\n\nAs I just wrote in [1] that's exactly what I did. Using parallel query and\nconcurrent update on a table I could see that WAL usage for parallel query\nseems to be working as one could expect.\n\n> Sure, I am fine with that but I am not sure if it is a good idea to\n> commit this patch without having a way to compute WAL utilization for\n> those commands.\n\nI'm generally fine with waiting for a fix for the existing issue to be\ncommitted. But as the feature freeze is approaching, I hope that it won't mean\npostponing this feature to v14 because a related 2yo bug has just been\ndiscovered, as it would seem a bit unfair.\n\n\n", "msg_date": "Sun, 29 Mar 2020 14:31:41 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sun, 29 Mar 2020 at 20:44, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Sun, 29 Mar 2020 at 20:15, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sun, Mar 29, 2020 at 1:44 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > On Sun, Mar 29, 2020 at 9:52 AM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > I've run vacuum with/without parallel workers on the table having 5\n> > > > indexes. 
The vacuum reads all blocks of table and indexes.\n> > > >\n> > > > * VACUUM command with no parallel workers\n> > > > =# select total_time, shared_blks_hit, shared_blks_read,\n> > > > shared_blks_hit + shared_blks_read as total_read_blks,\n> > > > shared_blks_dirtied, shared_blks_written from pg_stat_statements where\n> > > > query ~ 'vacuum';\n> > > >\n> > > > total_time | shared_blks_hit | shared_blks_read | total_read_blks |\n> > > > shared_blks_dirtied | shared_blks_written\n> > > > --------------+-----------------+------------------+-----------------+---------------------+---------------------\n> > > > 19857.217207 | 45238 | 226944 | 272182 |\n> > > > 225943 | 225894\n> > > > (1 row)\n> > > >\n> > > > * VACUUM command with 4 parallel workers\n> > > > =# select total_time, shared_blks_hit, shared_blks_read,\n> > > > shared_blks_hit + shared_blks_read as total_read_blks,\n> > > > shared_blks_dirtied, shared_blks_written from pg_stat_statements where\n> > > > query ~ 'vacuum';\n> > > >\n> > > > total_time | shared_blks_hit | shared_blks_read | total_read_blks |\n> > > > shared_blks_dirtied | shared_blks_written\n> > > > -------------+-----------------+------------------+-----------------+---------------------+---------------------\n> > > > 6932.117365 | 45205 | 73079 | 118284 |\n> > > > 72403 | 72365\n> > > > (1 row)\n> > > >\n> > > > The total number of blocks of table and indexes are about 182243\n> > > > blocks. 
As Julien reported, obviously the total number of read blocks\n> > > > during parallel vacuum is much less than single process vacuum's\n> > > > result.\n> > > >\n> > > > Parallel create index has the same issue but it doesn't exist in\n> > > > parallel queries for SELECTs.\n> > > >\n> > > > I think we need to change parallel maintenance commands so that they\n> > > > report buffer usage like what ParallelQueryMain() does; prepare to\n> > > > track buffer usage during query execution by\n> > > > InstrStartParallelQuery(), and report it by InstrEndParallelQuery()\n> > > > after parallel maintenance command. To report buffer usage of parallel\n> > > > maintenance command correctly, I'm thinking that we can (1) change\n> > > > parallel create index and parallel vacuum so that they prepare\n> > > > gathering buffer usage, or (2) have a common entry point for parallel\n> > > > maintenance commands that is responsible for gathering buffer usage\n> > > > and calling the entry functions for individual maintenance command.\n> > > > I'll investigate it more in depth.\n> > >\n> > > As I just mentioned, (2) seems like a better design as it's quite\n> > > likely that the number of parallel-aware utilities will probably\n> > > continue to increase. One problem also is that parallel CREATE INDEX\n> > > has been introduced in pg11, so (2) probably won't be packpatchable\n> > > (and (1) seems problematic too).\n> > >\n> >\n> > I am not sure if we can decide at this stage whether it is\n> > back-patchable or not. Let's first see the patch and if it turns out\n> > to be complex, then we can try to do some straight-forward fix for\n> > back-branches.\n>\n> Agreed.\n>\n> > In general, I don't see why the fix here should be\n> > complex?\n>\n> Yeah, particularly the approach (1) will not be complex. I'll write a\n> patch tomorrow.\n>\n\nI've attached two patches fixing this issue for parallel index\ncreation and parallel vacuum. 
These approaches take the same approach;\nwe allocate DSM to share buffer usage and the leader gathers them,\ndescribed as approach (1) above. I think this is a straightforward\napproach for this issue. We can create a common entry point for\nparallel maintenance command that is responsible for gathering buffer\nusage as well as sharing query text etc. But it will accompany\nrelatively big change and it might be overkill at this stage. We can\ndiscuss that and it will become an item for PG14.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 30 Mar 2020 15:46:34 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Mon, 30 Mar 2020 at 15:46, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Sun, 29 Mar 2020 at 20:44, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Sun, 29 Mar 2020 at 20:15, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sun, Mar 29, 2020 at 1:44 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > >\n> > > > On Sun, Mar 29, 2020 at 9:52 AM Masahiko Sawada\n> > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > >\n> > > > > I've run vacuum with/without parallel workers on the table having 5\n> > > > > indexes. 
The vacuum reads all blocks of table and indexes.\n> > > > >\n> > > > > * VACUUM command with no parallel workers\n> > > > > =# select total_time, shared_blks_hit, shared_blks_read,\n> > > > > shared_blks_hit + shared_blks_read as total_read_blks,\n> > > > > shared_blks_dirtied, shared_blks_written from pg_stat_statements where\n> > > > > query ~ 'vacuum';\n> > > > >\n> > > > > total_time | shared_blks_hit | shared_blks_read | total_read_blks |\n> > > > > shared_blks_dirtied | shared_blks_written\n> > > > > --------------+-----------------+------------------+-----------------+---------------------+---------------------\n> > > > > 19857.217207 | 45238 | 226944 | 272182 |\n> > > > > 225943 | 225894\n> > > > > (1 row)\n> > > > >\n> > > > > * VACUUM command with 4 parallel workers\n> > > > > =# select total_time, shared_blks_hit, shared_blks_read,\n> > > > > shared_blks_hit + shared_blks_read as total_read_blks,\n> > > > > shared_blks_dirtied, shared_blks_written from pg_stat_statements where\n> > > > > query ~ 'vacuum';\n> > > > >\n> > > > > total_time | shared_blks_hit | shared_blks_read | total_read_blks |\n> > > > > shared_blks_dirtied | shared_blks_written\n> > > > > -------------+-----------------+------------------+-----------------+---------------------+---------------------\n> > > > > 6932.117365 | 45205 | 73079 | 118284 |\n> > > > > 72403 | 72365\n> > > > > (1 row)\n> > > > >\n> > > > > The total number of blocks of table and indexes are about 182243\n> > > > > blocks. 
As Julien reported, obviously the total number of read blocks\n> > > > > during parallel vacuum is much less than single process vacuum's\n> > > > > result.\n> > > > >\n> > > > > Parallel create index has the same issue but it doesn't exist in\n> > > > > parallel queries for SELECTs.\n> > > > >\n> > > > > I think we need to change parallel maintenance commands so that they\n> > > > > report buffer usage like what ParallelQueryMain() does; prepare to\n> > > > > track buffer usage during query execution by\n> > > > > InstrStartParallelQuery(), and report it by InstrEndParallelQuery()\n> > > > > after parallel maintenance command. To report buffer usage of parallel\n> > > > > maintenance command correctly, I'm thinking that we can (1) change\n> > > > > parallel create index and parallel vacuum so that they prepare\n> > > > > gathering buffer usage, or (2) have a common entry point for parallel\n> > > > > maintenance commands that is responsible for gathering buffer usage\n> > > > > and calling the entry functions for individual maintenance command.\n> > > > > I'll investigate it more in depth.\n> > > >\n> > > > As I just mentioned, (2) seems like a better design as it's quite\n> > > > likely that the number of parallel-aware utilities will probably\n> > > > continue to increase. One problem also is that parallel CREATE INDEX\n> > > > has been introduced in pg11, so (2) probably won't be packpatchable\n> > > > (and (1) seems problematic too).\n> > > >\n> > >\n> > > I am not sure if we can decide at this stage whether it is\n> > > back-patchable or not. Let's first see the patch and if it turns out\n> > > to be complex, then we can try to do some straight-forward fix for\n> > > back-branches.\n> >\n> > Agreed.\n> >\n> > > In general, I don't see why the fix here should be\n> > > complex?\n> >\n> > Yeah, particularly the approach (1) will not be complex. 
I'll write a\n> > patch tomorrow.\n> >\n>\n> I've attached two patches fixing this issue for parallel index\n> creation and parallel vacuum. These approaches take the same approach;\n> we allocate DSM to share buffer usage and the leader gathers them,\n> described as approach (1) above. I think this is a straightforward\n> approach for this issue. We can create a common entry point for\n> parallel maintenance command that is responsible for gathering buffer\n> usage as well as sharing query text etc. But it will accompany\n> relatively big change and it might be overkill at this stage. We can\n> discuss that and it will become an item for PG14.\n>\n\nThe patch for vacuum conflicts with recent changes in vacuum. So I've\nattached rebased one.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 30 Mar 2020 16:01:18 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Mon, Mar 30, 2020 at 04:01:18PM +0900, Masahiko Sawada wrote:\n> On Mon, 30 Mar 2020 at 15:46, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Sun, 29 Mar 2020 at 20:44, Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > > > > I think we need to change parallel maintenance commands so that they\n> > > > > > report buffer usage like what ParallelQueryMain() does; prepare to\n> > > > > > track buffer usage during query execution by\n> > > > > > InstrStartParallelQuery(), and report it by InstrEndParallelQuery()\n> > > > > > after parallel maintenance command. 
To report buffer usage of parallel\n> > > > > > maintenance command correctly, I'm thinking that we can (1) change\n> > > > > > parallel create index and parallel vacuum so that they prepare\n> > > > > > gathering buffer usage, or (2) have a common entry point for parallel\n> > > > > > maintenance commands that is responsible for gathering buffer usage\n> > > > > > and calling the entry functions for individual maintenance command.\n> > > > > > I'll investigate it more in depth.\n> > > > >\n> > > [...]\n> >\n> > I've attached two patches fixing this issue for parallel index\n> > creation and parallel vacuum. These approaches take the same approach;\n> > we allocate DSM to share buffer usage and the leader gathers them,\n> > described as approach (1) above. I think this is a straightforward\n> > approach for this issue. We can create a common entry point for\n> > parallel maintenance command that is responsible for gathering buffer\n> > usage as well as sharing query text etc. But it will accompany\n> > relatively big change and it might be overkill at this stage. We can\n> > discuss that and it will become an item for PG14.\n> >\n> \n> The patch for vacuum conflicts with recent changes in vacuum. So I've\n> attached rebased one.\n\nThanks Sawada-san!\n\nJust minor nitpicking:\n\n+ int i;\n\n Assert(!IsParallelWorker());\n Assert(ParallelVacuumIsActive(lps));\n@@ -2166,6 +2172,13 @@ lazy_parallel_vacuum_indexes(Relation *Irel, IndexBulkDeleteResult **stats,\n /* Wait for all vacuum workers to finish */\n WaitForParallelWorkersToFinish(lps->pcxt);\n\n+ /*\n+ * Next, accumulate buffer usage. (This must wait for the workers to\n+ * finish, or we might get incomplete data.)\n+ */\n+ for (i = 0; i < nworkers; i++)\n+ InstrAccumParallelQuery(&lps->buffer_usage[i]);\n\nWe now allow declaring a variable in those loops, so it may be better to avoid\ndeclaring i outside the for scope?\n\nOther than that both patches look good to me and a good fit for backpatching. 
I\nalso did some testing on VACUUM and CREATE INDEX and it works as expected.\n\n\n", "msg_date": "Mon, 30 Mar 2020 10:00:05 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Sun, Mar 29, 2020 at 5:49 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n\n@@ -1249,6 +1250,16 @@ XLogInsertRecord(XLogRecData *rdata,\n ProcLastRecPtr = StartPos;\n XactLastRecEnd = EndPos;\n\n+ /* Provide WAL update data to the instrumentation */\n+ if (inserted)\n+ {\n+ pgWalUsage.wal_bytes += rechdr->xl_tot_len;\n+ if (doPageWrites && fpw_lsn <= RedoRecPtr)\n+ pgWalUsage.wal_fpw_records++;\n+ else\n+ pgWalUsage.wal_records++;\n+ }\n+\n\nI think the above code has multiple problems. (a) fpw_lsn can be\nInvalidXLogRecPtr and still there could be full-page image (for ex.\nwhen REGBUF_FORCE_IMAGE flag for buffer is set). (b) There could be\nmultiple FPW records while inserting a record; consider when there are\nmultiple registered buffers. I think the right place to figure this\nout is XLogRecordAssemble. (c) There are cases when we also attach the\nrecord data even when we decide to write FPW (cf. REGBUF_KEEP_DATA),\nso we might want to increment wal_fpw_records and wal_records for such\ncases.\n\nI think the right place to compute this information is\nXLogRecordAssemble even though we update it at the place where you\nhave it in the patch. You can probably compute that in local\nvariables and then transfer to pgWalUsage in XLogInsertRecord. 
I am\nfine if you can think of some other way but the current patch doesn't\nseem correct to me.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Mar 2020 15:52:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Mar 30, 2020 at 03:52:38PM +0530, Amit Kapila wrote:\n> On Sun, Mar 29, 2020 at 5:49 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> \n> @@ -1249,6 +1250,16 @@ XLogInsertRecord(XLogRecData *rdata,\n> ProcLastRecPtr = StartPos;\n> XactLastRecEnd = EndPos;\n> \n> + /* Provide WAL update data to the instrumentation */\n> + if (inserted)\n> + {\n> + pgWalUsage.wal_bytes += rechdr->xl_tot_len;\n> + if (doPageWrites && fpw_lsn <= RedoRecPtr)\n> + pgWalUsage.wal_fpw_records++;\n> + else\n> + pgWalUsage.wal_records++;\n> + }\n> +\n> \n> I think the above code has multiple problems. (a) fpw_lsn can be\n> InvalidXLogRecPtr and still there could be full-page image (for ex.\n> when REGBUF_FORCE_IMAGE flag for buffer is set). (b) There could be\n> multiple FPW records while inserting a record; consider when there are\n> multiple registered buffers. I think the right place to figure this\n> out is XLogRecordAssemble. (c) There are cases when we also attach the\n> record data even when we decide to write FPW (cf. REGBUF_KEEP_DATA),\n> so we might want to increment wal_fpw_records and wal_records for such\n> cases.\n> \n> I think the right place to compute this information is\n> XLogRecordAssemble even though we update it at the place where you\n> have it in the patch. You can probably compute that in local\n> variables and then transfer to pgWalUsage in XLogInsertRecord. I am\n> fine if you can think of some other way but the current patch doesn't\n> seem correct to me.\n\nMy previous approach was indeed totally broken. 
v8 attached which hopefully\nwill be ok.", "msg_date": "Mon, 30 Mar 2020 14:43:56 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Mar 30, 2020 at 12:31 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> The patch for vacuum conflicts with recent changes in vacuum. So I've\n> attached rebased one.\n>\n\n+ /*\n+ * Next, accumulate buffer usage. (This must wait for the workers to\n+ * finish, or we might get incomplete data.)\n+ */\n+ for (i = 0; i < nworkers; i++)\n+ InstrAccumParallelQuery(&lps->buffer_usage[i]);\n+\n\nThis should be done for launched workers aka\nlps->pcxt->nworkers_launched. I think a similar problem exists in\ncreate index related patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 31 Mar 2020 09:27:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Tue, 31 Mar 2020 at 12:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Mar 30, 2020 at 12:31 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > The patch for vacuum conflicts with recent changes in vacuum. So I've\n> > attached rebased one.\n> >\n>\n> + /*\n> + * Next, accumulate buffer usage. (This must wait for the workers to\n> + * finish, or we might get incomplete data.)\n> + */\n> + for (i = 0; i < nworkers; i++)\n> + InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> +\n>\n> This should be done for launched workers aka\n> lps->pcxt->nworkers_launched. I think a similar problem exists in\n> create index related patch.\n\nYou're right. 
Fixed in the new patches.\n\nOn Mon, 30 Mar 2020 at 17:00, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Just minor nitpicking:\n>\n> + int i;\n>\n> Assert(!IsParallelWorker());\n> Assert(ParallelVacuumIsActive(lps));\n> @@ -2166,6 +2172,13 @@ lazy_parallel_vacuum_indexes(Relation *Irel, IndexBulkDeleteResult **stats,\n> /* Wait for all vacuum workers to finish */\n> WaitForParallelWorkersToFinish(lps->pcxt);\n>\n> + /*\n> + * Next, accumulate buffer usage. (This must wait for the workers to\n> + * finish, or we might get incomplete data.)\n> + */\n> + for (i = 0; i < nworkers; i++)\n> + InstrAccumParallelQuery(&lps->buffer_usage[i]);\n>\n> We now allow declaring a variable in those loops, so it may be better to avoid\n> declaring i outside the for scope?\n\nWe can do that but I was not sure if it's good since other codes\naround there don't use that. So I'd like to leave it for committers.\nIt's a trivial change.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 31 Mar 2020 14:13:34 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Tue, Mar 31, 2020 at 10:44 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 31 Mar 2020 at 12:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Mar 30, 2020 at 12:31 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > The patch for vacuum conflicts with recent changes in vacuum. So I've\n> > > attached rebased one.\n> > >\n> >\n> > + /*\n> > + * Next, accumulate buffer usage. 
(This must wait for the workers to\n> > + * finish, or we might get incomplete data.)\n> > + */\n> > + for (i = 0; i < nworkers; i++)\n> > + InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> > +\n> >\n> > This should be done for launched workers aka\n> > lps->pcxt->nworkers_launched. I think a similar problem exists in\n> > create index related patch.\n>\n> You're right. Fixed in the new patches.\n>\n> On Mon, 30 Mar 2020 at 17:00, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > Just minor nitpicking:\n> >\n> > + int i;\n> >\n> > Assert(!IsParallelWorker());\n> > Assert(ParallelVacuumIsActive(lps));\n> > @@ -2166,6 +2172,13 @@ lazy_parallel_vacuum_indexes(Relation *Irel, IndexBulkDeleteResult **stats,\n> > /* Wait for all vacuum workers to finish */\n> > WaitForParallelWorkersToFinish(lps->pcxt);\n> >\n> > + /*\n> > + * Next, accumulate buffer usage. (This must wait for the workers to\n> > + * finish, or we might get incomplete data.)\n> > + */\n> > + for (i = 0; i < nworkers; i++)\n> > + InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> >\n> > We now allow declaring a variable in those loops, so it may be better to avoid\n> > declaring i outside the for scope?\n>\n> We can do that but I was not sure if it's good since other codes\n> around there don't use that. 
So I'd like to leave it for committers.\n> It's a trivial change.\n\nI have reviewed the patch and the patch looks fine to me.\n\nOne minor comment\n/+ /* Points to buffer usage are in DSM */\n+ BufferUsage *buffer_usage;\n+\n/buffer usage are in DSM / buffer usage area in DSM\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 31 Mar 2020 12:20:44 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Mon, Mar 30, 2020 at 6:14 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Mar 30, 2020 at 03:52:38PM +0530, Amit Kapila wrote:\n> >\n> > I think the right place to compute this information is\n> > XLogRecordAssemble even though we update it at the place where you\n> > have it in the patch. You can probably compute that in local\n> > variables and then transfer to pgWalUsage in XLogInsertRecord. I am\n> > fine if you can think of some other way but the current patch doesn't\n> > seem correct to me.\n>\n> My previous approach was indeed totally broken. v8 attached which hopefully\n> will be ok.\n>\n\nThis is better. Few more comments:\n1. The point (c) from my previous email doesn't seem to be fixed\nproperly. Basically, the record data is only attached with FPW in\nsome particular cases like where REGBUF_KEEP_DATA is set, but the\npatch assumes it is always set.\n\n2.\n+ /* Report a full page imsage constructed for the WAL record */\n+ *num_fpw += 1;\n\nTypo. /imsage/image\n\n3. 
We need to enhance the patch to cover WAL usage for parallel\nvacuum and parallel create index based on Sawada-San's latest patch[1]\nwhich fixed the case for buffer usage.\n\n[1] - https://www.postgresql.org/message-id/CA%2Bfd4k5L4yVoWz0smymmqB4_SMHd2tyJExUgA_ACsL7k00B5XQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 31 Mar 2020 12:23:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, Mar 31, 2020 at 8:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Mar 30, 2020 at 6:14 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Mon, Mar 30, 2020 at 03:52:38PM +0530, Amit Kapila wrote:\n> > >\n> > > I think the right place to compute this information is\n> > > XLogRecordAssemble even though we update it at the place where you\n> > > have it in the patch. You can probably compute that in local\n> > > variables and then transfer to pgWalUsage in XLogInsertRecord. I am\n> > > fine if you can think of some other way but the current patch doesn't\n> > > seem correct to me.\n> >\n> > My previous approach was indeed totally broken. v8 attached which hopefully\n> > will be ok.\n> >\n>\n> This is better. Few more comments:\n> 1. The point (c) from my previous email doesn't seem to be fixed\n> properly. Basically, the record data is only attached with FPW in\n> some particular cases like where REGBUF_KEEP_DATA is set, but the\n> patch assumes it is always set.\n\nAs I mentioned multiple times already, I'm really not familiar with\nthe WAL code, so I'll be happy to be proven wrong but my reading is\nthat in XLogRecordAssemble(), there are 2 different things being done:\n\n- a FPW is optionally added, iif include_image is true, which doesn't\ntake into account REGBUF_KEEP_DATA. 
Looking at that part of the code\nI don't see any sign of the recorded FPW being skipped or discarded if\nREGBUF_KEEP_DATA is not set, and useful variables such as total_len\nare modified\n- then data is also optionally added, iif needs_data is set.\n\nIIUC a FPW can be added even if the WAL record doesn't contain data.\nSo the behavior looks ok to me, as what seems to be useful is to\ndistinguish 9KB WAL for 1 record of 9KB from 9KB of WAL for 1KB record\nand 1 FPW.\n\nWhat am I missing here?\n\n> 2.\n> + /* Report a full page imsage constructed for the WAL record */\n> + *num_fpw += 1;\n>\n> Typo. /imsage/image\n\nOops yes, will fix.\n\n> 3. We need to enhance the patch to cover WAL usage for parallel\n> vacuum and parallel create index based on Sawada-San's latest patch[1]\n> which fixed the case for buffer usage.\n\nI'm sorry but I'm not following. Do you mean adding regression tests\nfor that case?\n\n\n", "msg_date": "Tue, 31 Mar 2020 11:09:04 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, Mar 31, 2020 at 12:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Mar 30, 2020 at 6:14 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Mon, Mar 30, 2020 at 03:52:38PM +0530, Amit Kapila wrote:\n> > >\n> > > I think the right place to compute this information is\n> > > XLogRecordAssemble even though we update it at the place where you\n> > > have it in the patch. You can probably compute that in local\n> > > variables and then transfer to pgWalUsage in XLogInsertRecord. I am\n> > > fine if you can think of some other way but the current patch doesn't\n> > > seem correct to me.\n> >\n> > My previous approach was indeed totally broken. v8 attached which hopefully\n> > will be ok.\n> >\n>\n> This is better. Few more comments:\n> 1. The point (c) from my previous email doesn't seem to be fixed\n> properly. 
Basically, the record data is only attached with FPW in\n> some particular cases like where REGBUF_KEEP_DATA is set, but the\n> patch assumes it is always set.\n>\n> 2.\n> + /* Report a full page imsage constructed for the WAL record */\n> + *num_fpw += 1;\n>\n> Typo. /imsage/image\n>\n> 3. We need to enhance the patch to cover WAL usage for parallel\n> vacuum and parallel create index based on Sawada-San's latest patch[1]\n> which fixed the case for buffer usage.\n\nI have started reviewing this patch and I have some comments/questions.\n\n1.\n@@ -22,6 +22,10 @@ static BufferUsage save_pgBufferUsage;\n\n static void BufferUsageAdd(BufferUsage *dst, const BufferUsage *add);\n\n+WalUsage pgWalUsage;\n+static WalUsage save_pgWalUsage;\n+\n+static void WalUsageAdd(WalUsage *dst, WalUsage *add);\n\nBetter we move all variable declaration first along with other\nvariables and then function declaration along with other function\ndeclaration. That is the convention we follow.\n\n2.\n {\n bool need_buffers = (instrument_options & INSTRUMENT_BUFFERS) != 0;\n+ bool need_wal = (instrument_options & INSTRUMENT_WAL) != 0;\n\nI think you need to run pgindent, we should give only one space\nbetween the variable name and '='.\nso we need to change like below\n\nbool need_wal = (instrument_options & INSTRUMENT_WAL) != 0;\n\n3.\n+typedef struct WalUsage\n+{\n+ long wal_records; /* # of WAL records produced */\n+ long wal_fpw_records; /* # of full page write WAL records\n+ * produced */\n\nIMHO, the name wal_fpw_records is bit confusing, First I thought it\nis counting the number of wal records which actually has FPW, then\nafter seeing code, I realized that it is actually counting total FPW.\nShouldn't we rename it to just wal_fpw? or wal_num_fpw or\nwal_fpw_count?\n\n\n4. Currently, we are combining all full-page write\nforce/normal/consistency checks in one category. 
I am not sure\nwhether it will be good information to know how many are force_fpw and\nhow many are normal_fpw?\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 31 Mar 2020 14:51:23 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, Mar 31, 2020 at 2:51 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> 4. Currently, we are combining all full-page write\n> force/normal/consistency checks in one category. I am not sure\n> whether it will be good information to know how many are force_fpw and\n> how many are normal_fpw?\n>\n\nWe can do it if we want but I am not sure how useful it will be. I\nthink we can always enhance this information if people really need\nthis and have a clear use-case in mind.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 31 Mar 2020 15:01:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, Mar 31, 2020 at 2:39 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Mar 31, 2020 at 8:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Mar 30, 2020 at 6:14 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > On Mon, Mar 30, 2020 at 03:52:38PM +0530, Amit Kapila wrote:\n> > > >\n> > > > I think the right place to compute this information is\n> > > > XLogRecordAssemble even though we update it at the place where you\n> > > > have it in the patch. You can probably compute that in local\n> > > > variables and then transfer to pgWalUsage in XLogInsertRecord. I am\n> > > > fine if you can think of some other way but the current patch doesn't\n> > > > seem correct to me.\n> > >\n> > > My previous approach was indeed totally broken. 
v8 attached which hopefully\n> > > will be ok.\n> > >\n> >\n> > This is better. Few more comments:\n> > 1. The point (c) from my previous email doesn't seem to be fixed\n> > properly. Basically, the record data is only attached with FPW in\n> > some particular cases like where REGBUF_KEEP_DATA is set, but the\n> > patch assumes it is always set.\n>\n> As I mentioned multiple times already, I'm really not familiar with\n> the WAL code, so I'll be happy to be proven wrong but my reading is\n> that in XLogRecordAssemble(), there are 2 different things being done:\n>\n> - a FPW is optionally added, iif include_image is true, which doesn't\n> take into account REGBUF_KEEP_DATA. Looking at that part of the code\n> I don't see any sign of the recorded FPW being skipped or discarded if\n> REGBUF_KEEP_DATA is not set, and useful variables such as total_len\n> are modified\n> - then data is also optionally added, iif needs_data is set.\n>\n> IIUC a FPW can be added even if the WAL record doesn't contain data.\n> So the behavior look ok to me, as what seems to be useful it to\n> distinguish 9KB WAL for 1 record of 9KB from 9KB or WAL for 1KB record\n> and 1 FPW.\n>\n\nIt is possible that both of us are having different meanings for below\ntwo variables:\n+typedef struct WalUsage\n+{\n+ long wal_records; /* # of WAL records produced */\n+ long wal_fpw_records; /* # of full page write WAL records\n+ * produced */\n\n\nLet me clarify my understanding. Say if the record is just an FPI\n(ex. XLOG_FPI) and doesn't contain any data then do we want to add one\nto each of wal_fpw_records and wal_records? My understanding was in\nsuch a case we will just increment wal_fpw_records.\n\n>\n> > 3. We need to enhance the patch to cover WAL usage for parallel\n> > vacuum and parallel create index based on Sawada-San's latest patch[1]\n> > which fixed the case for buffer usage.\n>\n> I'm sorry but I'm not following. Do you mean adding regression tests\n> for that case?\n>\n\nNo. 
I mean to say we should implement WAL usage calculation for those\ntwo parallel commands. AFAICS, your patch doesn't cover those two\ncommands.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 31 Mar 2020 15:46:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Mar 30, 2020 at 6:14 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n@@ -448,6 +449,7 @@ XLogInsert(RmgrId rmid, uint8 info)\n bool doPageWrites;\n XLogRecPtr fpw_lsn;\n XLogRecData *rdt;\n+ int num_fpw = 0;\n\n /*\n * Get values needed to decide whether to do full-page writes. Since\n@@ -457,9 +459,9 @@ XLogInsert(RmgrId rmid, uint8 info)\n GetFullPageWriteInfo(&RedoRecPtr, &doPageWrites);\n\n rdt = XLogRecordAssemble(rmid, info, RedoRecPtr, doPageWrites,\n- &fpw_lsn);\n+ &fpw_lsn, &num_fpw);\n\n- EndPos = XLogInsertRecord(rdt, fpw_lsn, curinsert_flags);\n+ EndPos = XLogInsertRecord(rdt, fpw_lsn, curinsert_flags, num_fpw);\n } while (EndPos == InvalidXLogRecPtr);\n\nI think there are some issues in the num_fpw calculation. For some\ncases, we have to return from XLogInsert without inserting a record.\nBasically, we've to recompute/reassemble the same record. In those\ncases, num_fpw should be reset. 
Thoughts?\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 31 Mar 2020 15:50:48 +0530", "msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, Mar 31, 2020 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I have started reviewing this patch and I have some comments/questions.\n\nThanks a lot!\n\n>\n> 1.\n> @@ -22,6 +22,10 @@ static BufferUsage save_pgBufferUsage;\n>\n> static void BufferUsageAdd(BufferUsage *dst, const BufferUsage *add);\n>\n> +WalUsage pgWalUsage;\n> +static WalUsage save_pgWalUsage;\n> +\n> +static void WalUsageAdd(WalUsage *dst, WalUsage *add);\n>\n> Better we move all variable declaration first along with other\n> variables and then function declaration along with other function\n> declaration. That is the convention we follow.\n\nAgreed, fixed.\n\n> 2.\n> {\n> bool need_buffers = (instrument_options & INSTRUMENT_BUFFERS) != 0;\n> + bool need_wal = (instrument_options & INSTRUMENT_WAL) != 0;\n>\n> I think you need to run pgindent, we should give only one space\n> between the variable name and '='.\n> so we need to change like below\n>\n> bool need_wal = (instrument_options & INSTRUMENT_WAL) != 0;\n\nDone.\n\n> 3.\n> +typedef struct WalUsage\n> +{\n> + long wal_records; /* # of WAL records produced */\n> + long wal_fpw_records; /* # of full page write WAL records\n> + * produced */\n>\n> IMHO, the name wal_fpw_records is bit confusing, First I thought it\n> is counting the number of wal records which actually has FPW, then\n> after seeing code, I realized that it is actually counting total FPW.\n> Shouldn't we rename it to just wal_fpw? or wal_num_fpw or\n> wal_fpw_count?\n\nYes I agree, the name was too confusing. I went with wal_num_fpw. I\nalso used the same for pg_stat_statements. Other fields are usually\nnamed with a trailing \"s\" but wal_fpws just seems too weird. 
I can\nchange it if consistency is preferred here.\n\n> 4. Currently, we are combining all full-page write\n> force/normal/consistency checks in one category. I am not sure\n> whether it will be good information to know how many are force_fpw and\n> how many are normal_fpw?\n\nI agree with Amit's POV. For now a single counter seems like enough\nto diagnose many behaviors.\n\nI'll keep answering following mails before sending an updated patchset.\n\n\n", "msg_date": "Tue, 31 Mar 2020 15:51:58 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, Mar 31, 2020 at 12:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> It is possible that both of us are having different meanings for below\n> two variables:\n> +typedef struct WalUsage\n> +{\n> + long wal_records; /* # of WAL records produced */\n> + long wal_fpw_records; /* # of full page write WAL records\n> + * produced */\n>\n>\n> Let me clarify my understanding. Say if the record is just an FPI\n> (ex. XLOG_FPI) and doesn't contain any data then do we want to add one\n> to each of wal_fpw_records and wal_records? My understanding was in\n> such a case we will just increment wal_fpw_records.\n\nYes, as Dilip just pointed out the misunderstanding is due to this\npoor name. Indeed, in such case what I want is both counters to be\nincremented. What I want is wal_records to reflect the total number\nof records generated regardless of any content, and wal_num_fpw the\nnumber of full page images, as it seems to make the most sense, and\nthe easiest way to estimate the ratio of data due to FPW.\n\n> > > 3. We need to enhance the patch to cover WAL usage for parallel\n> > > vacuum and parallel create index based on Sawada-San's latest patch[1]\n> > > which fixed the case for buffer usage.\n> >\n> > I'm sorry but I'm not following. Do you mean adding regression tests\n> > for that case?\n> >\n>\n> No. 
I mean to say we should implement WAL usage calculation for those\n> two parallel commands. AFAICS, your patch doesn't cover those two\n> commands.\n\nOh I see. I just assumed that Sawada-san's patch would be committed\nfirst and I'd then rebase the patchset on top of the newly added\ninfrastructure to also handle WAL counters, to avoid any conflict on\nthat bugfix while this new feature is being discussed. I'll rebase\nthe patchset against those patches then.\n\n\n", "msg_date": "Tue, 31 Mar 2020 16:01:16 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, Mar 31, 2020 at 12:20 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Mar 31, 2020 at 10:44 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 31 Mar 2020 at 12:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Mar 30, 2020 at 12:31 PM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > The patch for vacuum conflicts with recent changes in vacuum. So I've\n> > > > attached rebased one.\n> > > >\n> > >\n> > > + /*\n> > > + * Next, accumulate buffer usage. (This must wait for the workers to\n> > > + * finish, or we might get incomplete data.)\n> > > + */\n> > > + for (i = 0; i < nworkers; i++)\n> > > + InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> > > +\n> > >\n> > > This should be done for launched workers aka\n> > > lps->pcxt->nworkers_launched. I think a similar problem exists in\n> > > create index related patch.\n> >\n> > You're right. 
Fixed in the new patches.\n> >\n> > On Mon, 30 Mar 2020 at 17:00, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > Just minor nitpicking:\n> > >\n> > > + int i;\n> > >\n> > > Assert(!IsParallelWorker());\n> > > Assert(ParallelVacuumIsActive(lps));\n> > > @@ -2166,6 +2172,13 @@ lazy_parallel_vacuum_indexes(Relation *Irel, IndexBulkDeleteResult **stats,\n> > > /* Wait for all vacuum workers to finish */\n> > > WaitForParallelWorkersToFinish(lps->pcxt);\n> > >\n> > > + /*\n> > > + * Next, accumulate buffer usage. (This must wait for the workers to\n> > > + * finish, or we might get incomplete data.)\n> > > + */\n> > > + for (i = 0; i < nworkers; i++)\n> > > + InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> > >\n> > > We now allow declaring a variable in those loops, so it may be better to avoid\n> > > declaring i outside the for scope?\n> >\n> > We can do that but I was not sure if it's good since other codes\n> > around there don't use that. So I'd like to leave it for committers.\n> > It's a trivial change.\n>\n> I have reviewed the patch and the patch looks fine to me.\n>\n> One minor comment\n> /+ /* Points to buffer usage are in DSM */\n> + BufferUsage *buffer_usage;\n> +\n> /buffer usage are in DSM / buffer usage area in DSM\n>\n\nWhile testing I have found one issue. Basically, during a parallel\nvacuum, it was showing more number of\nshared_blk_hits+shared_blks_read. 
After some investigation, I found\nthat during the cleanup phase nworkers is -1, and because of this we\ndidn't try to launch worker but \"lps->pcxt->nworkers_launched\" had the\nold launched worker count and shared memory also had old buffer read\ndata which was never updated as we did not try to launch the worker.\n\ndiff --git a/src/backend/access/heap/vacuumlazy.c\nb/src/backend/access/heap/vacuumlazy.c\nindex b97b678..5dfaf4d 100644\n--- a/src/backend/access/heap/vacuumlazy.c\n+++ b/src/backend/access/heap/vacuumlazy.c\n@@ -2150,7 +2150,8 @@ lazy_parallel_vacuum_indexes(Relation *Irel,\nIndexBulkDeleteResult **stats,\n * Next, accumulate buffer usage. (This must wait for the workers to\n * finish, or we might get incomplete data.)\n */\n- for (i = 0; i < lps->pcxt->nworkers_launched; i++)\n+ nworkers = Min(nworkers, lps->pcxt->nworkers_launched);\n+ for (i = 0; i < nworkers; i++)\n InstrAccumParallelQuery(&lps->buffer_usage[i]);\n\nIt worked after the above fix.\n\n\n", "msg_date": "Tue, 31 Mar 2020 19:32:35 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Tue, Mar 31, 2020 at 12:21 PM Kuntal Ghosh\n<kuntalghosh.2007@gmail.com> wrote:\n>\n> On Mon, Mar 30, 2020 at 6:14 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> @@ -448,6 +449,7 @@ XLogInsert(RmgrId rmid, uint8 info)\n> bool doPageWrites;\n> XLogRecPtr fpw_lsn;\n> XLogRecData *rdt;\n> + int num_fpw = 0;\n>\n> /*\n> * Get values needed to decide whether to do full-page writes. 
Since\n> @@ -457,9 +459,9 @@ XLogInsert(RmgrId rmid, uint8 info)\n> GetFullPageWriteInfo(&RedoRecPtr, &doPageWrites);\n>\n> rdt = XLogRecordAssemble(rmid, info, RedoRecPtr, doPageWrites,\n> - &fpw_lsn);\n> + &fpw_lsn, &num_fpw);\n>\n> - EndPos = XLogInsertRecord(rdt, fpw_lsn, curinsert_flags);\n> + EndPos = XLogInsertRecord(rdt, fpw_lsn, curinsert_flags, num_fpw);\n> } while (EndPos == InvalidXLogRecPtr);\n>\n> I think there are some issues in the num_fpw calculation. For some\n> cases, we have to return from XLogInsert without inserting a record.\n> Basically, we've to recompute/reassemble the same record. In those\n> cases, num_fpw should be reset. Thoughts?\n\nMmm, yes but since that's the same record being recomputed from the\nsame RedoRecPtr, doesn't it mean that we need to reset the counter?\nOtherwise we would count the same FPW multiple times.\n\n\n", "msg_date": "Tue, 31 Mar 2020 16:08:55 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, Mar 31, 2020 at 7:39 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Mar 31, 2020 at 12:21 PM Kuntal Ghosh\n> <kuntalghosh.2007@gmail.com> wrote:\n> >\n> > On Mon, Mar 30, 2020 at 6:14 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > @@ -448,6 +449,7 @@ XLogInsert(RmgrId rmid, uint8 info)\n> > bool doPageWrites;\n> > XLogRecPtr fpw_lsn;\n> > XLogRecData *rdt;\n> > + int num_fpw = 0;\n> >\n> > /*\n> > * Get values needed to decide whether to do full-page writes. 
Since\n> > @@ -457,9 +459,9 @@ XLogInsert(RmgrId rmid, uint8 info)\n> > GetFullPageWriteInfo(&RedoRecPtr, &doPageWrites);\n> >\n> > rdt = XLogRecordAssemble(rmid, info, RedoRecPtr, doPageWrites,\n> > - &fpw_lsn);\n> > + &fpw_lsn, &num_fpw);\n> >\n> > - EndPos = XLogInsertRecord(rdt, fpw_lsn, curinsert_flags);\n> > + EndPos = XLogInsertRecord(rdt, fpw_lsn, curinsert_flags, num_fpw);\n> > } while (EndPos == InvalidXLogRecPtr);\n> >\n> > I think there are some issues in the num_fpw calculation. For some\n> > cases, we have to return from XLogInsert without inserting a record.\n> > Basically, we've to recompute/reassemble the same record. In those\n> > cases, num_fpw should be reset. Thoughts?\n>\n> Mmm, yes but since that's the same record is being recomputed from the\n> same RedoRecPtr, doesn't it mean that we need to reset the counter?\n> Otherwise we would count the same FPW multiple times.\n\nYes. That was my point as well. I missed the part that you're already\nresetting the same inside the do-while loop before calling\nXLogRecordAssemble. Sorry for the noise.\n\n\n", "msg_date": "Tue, 31 Mar 2020 19:52:49 +0530", "msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, Mar 31, 2020 at 7:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> While testing I have found one issue. Basically, during a parallel\n> vacuum, it was showing more number of\n> shared_blk_hits+shared_blks_read. 
After, some investigation, I found\n> that during the cleanup phase nworkers are -1, and because of this we\n> didn't try to launch worker but \"lps->pcxt->nworkers_launched\" had the\n> old launched worker count and shared memory also had old buffer read\n> data which was never updated as we did not try to launch the worker.\n>\n> diff --git a/src/backend/access/heap/vacuumlazy.c\n> b/src/backend/access/heap/vacuumlazy.c\n> index b97b678..5dfaf4d 100644\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> @@ -2150,7 +2150,8 @@ lazy_parallel_vacuum_indexes(Relation *Irel,\n> IndexBulkDeleteResult **stats,\n> * Next, accumulate buffer usage. (This must wait for the workers to\n> * finish, or we might get incomplete data.)\n> */\n> - for (i = 0; i < lps->pcxt->nworkers_launched; i++)\n> + nworkers = Min(nworkers, lps->pcxt->nworkers_launched);\n> + for (i = 0; i < nworkers; i++)\n> InstrAccumParallelQuery(&lps->buffer_usage[i]);\n>\n> It worked after the above fix.\n>\n\nGood catch. I think we should not even call\nWaitForParallelWorkersToFinish for such a case. So, I guess the fix\ncould be,\n\nif (workers > 0)\n{\nWaitForParallelWorkersToFinish();\nfor (i = 0; i < lps->pcxt->nworkers_launched; i++)\n InstrAccumParallelQuery(&lps->buffer_usage[i]);\n}\n\nor something along those lines.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Apr 2020 08:16:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Wed, Apr 1, 2020 at 8:16 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 31, 2020 at 7:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > While testing I have found one issue. 
Basically, during a parallel\n> > vacuum, it was showing more number of\n> > shared_blk_hits+shared_blks_read. After, some investigation, I found\n> > that during the cleanup phase nworkers are -1, and because of this we\n> > didn't try to launch worker but \"lps->pcxt->nworkers_launched\" had the\n> > old launched worker count and shared memory also had old buffer read\n> > data which was never updated as we did not try to launch the worker.\n> >\n> > diff --git a/src/backend/access/heap/vacuumlazy.c\n> > b/src/backend/access/heap/vacuumlazy.c\n> > index b97b678..5dfaf4d 100644\n> > --- a/src/backend/access/heap/vacuumlazy.c\n> > +++ b/src/backend/access/heap/vacuumlazy.c\n> > @@ -2150,7 +2150,8 @@ lazy_parallel_vacuum_indexes(Relation *Irel,\n> > IndexBulkDeleteResult **stats,\n> > * Next, accumulate buffer usage. (This must wait for the workers to\n> > * finish, or we might get incomplete data.)\n> > */\n> > - for (i = 0; i < lps->pcxt->nworkers_launched; i++)\n> > + nworkers = Min(nworkers, lps->pcxt->nworkers_launched);\n> > + for (i = 0; i < nworkers; i++)\n> > InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> >\n> > It worked after the above fix.\n> >\n>\n> Good catch. I think we should not even call\n> WaitForParallelWorkersToFinish for such a case. 
So, I guess the fix\n> could be,\n>\n> if (workers > 0)\n> {\n> WaitForParallelWorkersToFinish();\n> for (i = 0; i < lps->pcxt->nworkers_launched; i++)\n> InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> }\n>\n> or something along those lines.\n\nHmm, Right!\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Apr 2020 08:20:16 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Wed, 1 Apr 2020 at 11:46, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 31, 2020 at 7:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > While testing I have found one issue. Basically, during a parallel\n> > vacuum, it was showing more number of\n> > shared_blk_hits+shared_blks_read. After, some investigation, I found\n> > that during the cleanup phase nworkers are -1, and because of this we\n> > didn't try to launch worker but \"lps->pcxt->nworkers_launched\" had the\n> > old launched worker count and shared memory also had old buffer read\n> > data which was never updated as we did not try to launch the worker.\n> >\n> > diff --git a/src/backend/access/heap/vacuumlazy.c\n> > b/src/backend/access/heap/vacuumlazy.c\n> > index b97b678..5dfaf4d 100644\n> > --- a/src/backend/access/heap/vacuumlazy.c\n> > +++ b/src/backend/access/heap/vacuumlazy.c\n> > @@ -2150,7 +2150,8 @@ lazy_parallel_vacuum_indexes(Relation *Irel,\n> > IndexBulkDeleteResult **stats,\n> > * Next, accumulate buffer usage. (This must wait for the workers to\n> > * finish, or we might get incomplete data.)\n> > */\n> > - for (i = 0; i < lps->pcxt->nworkers_launched; i++)\n> > + nworkers = Min(nworkers, lps->pcxt->nworkers_launched);\n> > + for (i = 0; i < nworkers; i++)\n> > InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> >\n> > It worked after the above fix.\n> >\n>\n> Good catch. 
I think we should not even call\n> WaitForParallelWorkersToFinish for such a case. So, I guess the fix\n> could be,\n>\n> if (workers > 0)\n> {\n> WaitForParallelWorkersToFinish();\n> for (i = 0; i < lps->pcxt->nworkers_launched; i++)\n> InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> }\n>\n\nAgreed. I've attached the updated patch.\n\nThank you for testing, Dilip!\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 1 Apr 2020 11:56:20 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Wed, Apr 1, 2020 at 8:26 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 1 Apr 2020 at 11:46, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Mar 31, 2020 at 7:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > While testing I have found one issue. Basically, during a parallel\n> > > vacuum, it was showing more number of\n> > > shared_blk_hits+shared_blks_read. After, some investigation, I found\n> > > that during the cleanup phase nworkers are -1, and because of this we\n> > > didn't try to launch worker but \"lps->pcxt->nworkers_launched\" had the\n> > > old launched worker count and shared memory also had old buffer read\n> > > data which was never updated as we did not try to launch the worker.\n> > >\n> > > diff --git a/src/backend/access/heap/vacuumlazy.c\n> > > b/src/backend/access/heap/vacuumlazy.c\n> > > index b97b678..5dfaf4d 100644\n> > > --- a/src/backend/access/heap/vacuumlazy.c\n> > > +++ b/src/backend/access/heap/vacuumlazy.c\n> > > @@ -2150,7 +2150,8 @@ lazy_parallel_vacuum_indexes(Relation *Irel,\n> > > IndexBulkDeleteResult **stats,\n> > > * Next, accumulate buffer usage. 
(This must wait for the workers to\n> > > * finish, or we might get incomplete data.)\n> > > */\n> > > - for (i = 0; i < lps->pcxt->nworkers_launched; i++)\n> > > + nworkers = Min(nworkers, lps->pcxt->nworkers_launched);\n> > > + for (i = 0; i < nworkers; i++)\n> > > InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> > >\n> > > It worked after the above fix.\n> > >\n> >\n> > Good catch. I think we should not even call\n> > WaitForParallelWorkersToFinish for such a case. So, I guess the fix\n> > could be,\n> >\n> > if (workers > 0)\n> > {\n> > WaitForParallelWorkersToFinish();\n> > for (i = 0; i < lps->pcxt->nworkers_launched; i++)\n> > InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> > }\n> >\n>\n> Agreed. I've attached the updated patch.\n>\n> Thank you for testing, Dilip!\n\nThanks! One hunk is failing on the latest head. And, I have rebased\nthe patch for my testing so posting the same. I have done some more\ntesting to test multi-pass vacuum.\n\npostgres[114321]=# show maintenance_work_mem ;\n maintenance_work_mem\n----------------------\n 1MB\n(1 row)\n\n--Test case\nselect pg_stat_statements_reset();\ndrop table test;\nCREATE TABLE test (a int, b int);\nCREATE INDEX idx1 on test(a);\nCREATE INDEX idx2 on test(b);\nINSERT INTO test SELECT i, i FROM GENERATE_SERIES(1,2000000) as i;\nDELETE FROM test where a%2=0;\nVACUUM (PARALLEL n) test;\nselect query, total_time, shared_blks_hit, shared_blks_read,\nshared_blks_hit + shared_blks_read as total_read_blks,\nshared_blks_dirtied, shared_blks_written from pg_stat_statements where\nquery like 'VACUUM%';\n\n query | total_time | shared_blks_hit |\nshared_blks_read | total_read_blks | shared_blks_dirtied |\nshared_blks_written\n--------------------------+-------------+-----------------+------------------+-----------------+---------------------+---------------------\n VACUUM (PARALLEL 0) test | 5964.282408 | 92447 |\n 6 | 92453 | 19789 | 0\n\n\n query | total_time | shared_blks_hit |\nshared_blks_read | 
total_read_blks | shared_blks_dirtied |\nshared_blks_written\n--------------------------+--------------------+-----------------+------------------+-----------------+---------------------+---------------------\n VACUUM (PARALLEL 1) test | 3957.7658810000003 | 92447 |\n 6 | 92453 | 19789 |\n 0\n(1 row)\n\nSo I am getting correct results with the multi-pass vacuum.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 1 Apr 2020 08:51:04 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Wed, Apr 1, 2020 at 8:51 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > Agreed. I've attached the updated patch.\n> >\n> > Thank you for testing, Dilip!\n>\n> Thanks! One hunk is failing on the latest head. And, I have rebased\n> the patch for my testing so posting the same. I have done some more\n> testing to test multi-pass vacuum.\n>\n\nThe patch looks good to me. I have done a few minor modifications (a)\nmoved the declaration of variable closer to where it is used, (b)\nchanged a comment, (c) ran pgindent. 
I have also done some additional\ntesting with more number of indexes and found that vacuum and parallel\nvacuum used the same number of total_read_blks and that is what is\nexpected here.\n\nLet me know what you think of the attached?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 1 Apr 2020 12:01:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Wed, Apr 1, 2020 at 8:51 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Apr 1, 2020 at 8:26 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Wed, 1 Apr 2020 at 11:46, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Mar 31, 2020 at 7:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > While testing I have found one issue. Basically, during a parallel\n> > > > vacuum, it was showing more number of\n> > > > shared_blk_hits+shared_blks_read. After, some investigation, I found\n> > > > that during the cleanup phase nworkers are -1, and because of this we\n> > > > didn't try to launch worker but \"lps->pcxt->nworkers_launched\" had the\n> > > > old launched worker count and shared memory also had old buffer read\n> > > > data which was never updated as we did not try to launch the worker.\n> > > >\n> > > > diff --git a/src/backend/access/heap/vacuumlazy.c\n> > > > b/src/backend/access/heap/vacuumlazy.c\n> > > > index b97b678..5dfaf4d 100644\n> > > > --- a/src/backend/access/heap/vacuumlazy.c\n> > > > +++ b/src/backend/access/heap/vacuumlazy.c\n> > > > @@ -2150,7 +2150,8 @@ lazy_parallel_vacuum_indexes(Relation *Irel,\n> > > > IndexBulkDeleteResult **stats,\n> > > > * Next, accumulate buffer usage. 
(This must wait for the workers to\n> > > > * finish, or we might get incomplete data.)\n> > > > */\n> > > > - for (i = 0; i < lps->pcxt->nworkers_launched; i++)\n> > > > + nworkers = Min(nworkers, lps->pcxt->nworkers_launched);\n> > > > + for (i = 0; i < nworkers; i++)\n> > > > InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> > > >\n> > > > It worked after the above fix.\n> > > >\n> > >\n> > > Good catch. I think we should not even call\n> > > WaitForParallelWorkersToFinish for such a case. So, I guess the fix\n> > > could be,\n> > >\n> > > if (workers > 0)\n> > > {\n> > > WaitForParallelWorkersToFinish();\n> > > for (i = 0; i < lps->pcxt->nworkers_launched; i++)\n> > > InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> > > }\n> > >\n> >\n> > Agreed. I've attached the updated patch.\n> >\n> > Thank you for testing, Dilip!\n>\n> Thanks! One hunk is failing on the latest head. And, I have rebased\n> the patch for my testing so posting the same. I have done some more\n> testing to test multi-pass vacuum.\n>\n> postgres[114321]=# show maintenance_work_mem ;\n> maintenance_work_mem\n> ----------------------\n> 1MB\n> (1 row)\n>\n> --Test case\n> select pg_stat_statements_reset();\n> drop table test;\n> CREATE TABLE test (a int, b int);\n> CREATE INDEX idx1 on test(a);\n> CREATE INDEX idx2 on test(b);\n> INSERT INTO test SELECT i, i FROM GENERATE_SERIES(1,2000000) as i;\n> DELETE FROM test where a%2=0;\n> VACUUM (PARALLEL n) test;\n> select query, total_time, shared_blks_hit, shared_blks_read,\n> shared_blks_hit + shared_blks_read as total_read_blks,\n> shared_blks_dirtied, shared_blks_written from pg_stat_statements where\n> query like 'VACUUM%';\n>\n> query | total_time | shared_blks_hit |\n> shared_blks_read | total_read_blks | shared_blks_dirtied |\n> shared_blks_written\n> --------------------------+-------------+-----------------+------------------+-----------------+---------------------+---------------------\n> VACUUM (PARALLEL 0) test | 5964.282408 | 92447 
|\n> 6 | 92453 | 19789 | 0\n>\n>\n> query | total_time | shared_blks_hit |\n> shared_blks_read | total_read_blks | shared_blks_dirtied |\n> shared_blks_written\n> --------------------------+--------------------+-----------------+------------------+-----------------+---------------------+---------------------\n> VACUUM (PARALLEL 1) test | 3957.7658810000003 | 92447 |\n> 6 | 92453 | 19789 |\n> 0\n> (1 row)\n>\n> So I am getting correct results with the multi-pass vacuum.\n\nI have done some testing for the parallel \"create index\".\n\npostgres[99536]=# show maintenance_work_mem ;\n maintenance_work_mem\n----------------------\n 1MB\n(1 row)\n\nCREATE TABLE test (a int, b int);\nINSERT INTO test SELECT i, i FROM GENERATE_SERIES(1,2000000) as i;\nCREATE INDEX idx1 on test(a);\nselect query, total_time, shared_blks_hit, shared_blks_read,\nshared_blks_hit + shared_blks_read as total_read_blks,\nshared_blks_dirtied, shared_blks_written from pg_stat_statements where\nquery like 'CREATE INDEX%';\n\n\nSET max_parallel_maintenance_workers TO 0;\n query | total_time | shared_blks_hit |\nshared_blks_read | total_read_blks | shared_blks_dirtied |\nshared_blks_written\n------------------------------+--------------------+-----------------+------------------+-----------------+---------------------+---------------------\n CREATE INDEX idx1 on test(a) | 1947.4959979999999 | 8947 |\n 11 | 8958 | 5 |\n 0\n\nSET max_parallel_maintenance_workers TO 2;\n\n query | total_time | shared_blks_hit |\nshared_blks_read | total_read_blks | shared_blks_dirtied |\nshared_blks_written\n------------------------------+--------------------+-----------------+------------------+-----------------+---------------------+---------------------\n CREATE INDEX idx1 on test(a) | 1942.1426040000001 | 8960 |\n 14 | 8974 | 5 |\n 0\n(1 row)\n\nI have noticed that the total_read_blks, with the parallel, create\nindex is more compared to non-parallel one. I have created a fresh\ndatabase before each run. 
I am not much aware of the internal code of\nparallel create index so I am not sure whether it is expected to\nread extra blocks with parallel create index. I guess maybe\nbecause multiple workers are inserting into the btree they might need\nto visit some btree nodes multiple times while traversing the tree\ndown. But it's better if someone who has more knowledge of this code\ncan confirm this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Apr 2020 12:41:38 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "So here's a v9, rebased on top of the latest versions of Sawada-san's bug fixes\n(Amit's v6 for vacuum and Sawada-san's v2 for create index), with all\npreviously mentioned changes.\n\nNote that I'm only attaching those patches for convenience and to make sure\nthat cfbot is happy.", "msg_date": "Wed, 1 Apr 2020 10:01:52 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Wed, Apr 1, 2020 at 1:32 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> So here's a v9, rebased on top of the latest versions of Sawada-san's bug fixes\n> (Amit's v6 for vacuum and Sawada-san's v2 for create index), with all\n> previously mentioned changes.\n>\n\nFew other comments:\nv9-0003-Add-infrastructure-to-track-WAL-usage\n1.\n static void BufferUsageAdd(BufferUsage *dst, const BufferUsage *add);\n-\n+static void WalUsageAdd(WalUsage *dst, WalUsage *add);\n\nLooks like a spurious line removal\n\n2.\n+ /* Report a full page imsage constructed for the WAL record */\n+ *num_fpw += 1;\n\nTypo. /imsage/image\n\n3. 
Doing some testing with and without parallelism to ensure WAL usage\ndata is correct would be great and if possible, share the results?\n\nv9-0005-Keep-track-of-WAL-usage-in-pg_stat_statements\n4.\n+-- SELECT usage data, check WAL usage is reported, wal_records equal\nrows count for INSERT/UPDATE/DELETE\n+SELECT query, calls, rows,\n+wal_bytes > 0 as wal_bytes_generated,\n+wal_records > 0 as wal_records_generated,\n+wal_records = rows as wal_records_as_rows\n+FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n+ query |\ncalls | rows | wal_bytes_generated | wal_records_generated |\nwal_records_as_rows\n+------------------------------------------------------------------+-------+------+---------------------+-----------------------+---------------------\n+ DELETE FROM pgss_test WHERE a > $1 |\n 1 | 1 | t | t | t\n+ DROP TABLE pgss_test |\n 1 | 0 | t | t | f\n+ INSERT INTO pgss_test (a, b) VALUES ($1, $2), ($3, $4), ($5, $6) |\n 1 | 3 | t | t | t\n+ INSERT INTO pgss_test VALUES(generate_series($1, $2), $3) |\n 1 | 10 | t | t | t\n+ SELECT * FROM pgss_test ORDER BY a |\n 1 | 12 | f | f | f\n+ SELECT * FROM pgss_test WHERE a > $1 ORDER BY a |\n 2 | 4 | f | f | f\n+ SELECT * FROM pgss_test WHERE a IN ($1, $2, $3, $4, $5) |\n 1 | 8 | f | f | f\n+ SELECT pg_stat_statements_reset() |\n 1 | 1 | f | f | f\n+ SET pg_stat_statements.track_utility = FALSE |\n 1 | 0 | f | f | t\n+ UPDATE pgss_test SET b = $1 WHERE a = $2 |\n 6 | 6 | t | t | t\n+ UPDATE pgss_test SET b = $1 WHERE a > $2 |\n 1 | 3 | t | t | t\n+(11 rows)\n+\n\nI am not sure if the above tests make much sense as they are just\ntesting that if WAL is generated for these commands. I understand it\nis not easy to make these tests reliable but in that case, we can\nthink of some simple tests. It seems to me that the difficulty is due\nto full_page_writes as that depends on the checkpoint. Can we make\nfull_page_writes = off for these tests and check some simple\nInsert/Update/Delete cases? 
Alternatively, if you can present the\nreason why that is unstable or are tricky to write, then we can simply\nget rid of these tests because I don't see tests for BufferUsage. Let\nnot write tests for the sake of writing it unless they can detect bugs\nin the future or are meaningfully covering the new code added.\n\n5.\n-SELECT query, calls, rows FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n- query | calls | rows\n------------------------------------+-------+------\n- SELECT $1::TEXT | 1 | 1\n- SELECT PLUS_ONE($1) | 2 | 2\n- SELECT PLUS_TWO($1) | 2 | 2\n- SELECT pg_stat_statements_reset() | 1 | 1\n+SELECT query, calls, rows, wal_bytes, wal_records FROM\npg_stat_statements ORDER BY query COLLATE \"C\";\n+ query | calls | rows | wal_bytes | wal_records\n+-----------------------------------+-------+------+-----------+-------------\n+ SELECT $1::TEXT | 1 | 1 | 0 | 0\n+ SELECT PLUS_ONE($1) | 2 | 2 | 0 | 0\n+ SELECT PLUS_TWO($1) | 2 | 2 | 0 | 0\n+ SELECT pg_stat_statements_reset() | 1 | 1 | 0 | 0\n (4 rows)\n\nAgain, I am not sure if these modifications make much sense?\n\n6.\n static void pgss_shmem_startup(void);\n@@ -313,6 +318,7 @@ static void pgss_store(const char *query, uint64 queryId,\n int query_location, int query_len,\n double total_time, uint64 rows,\n const BufferUsage *bufusage,\n+ const WalUsage* walusage,\n pgssJumbleState *jstate);\n\nThe alignment for walusage doesn't seem to be correct. Running\npgindent will fix this.\n\n7.\n+ values[i++] = Int64GetDatumFast(tmp.wal_records);\n+ values[i++] = UInt64GetDatum(tmp.wal_num_fpw);\n\nWhy are they different? 
I think we should use the same *GetDatum API\n(probably Int64GetDatumFast) for these.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Apr 2020 16:29:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Wed, Apr 1, 2020 at 4:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> v9-0005-Keep-track-of-WAL-usage-in-pg_stat_statements\n>\n\nOne more comment related to this patch.\n+\n+ snprintf(buf, sizeof buf, UINT64_FORMAT, tmp.wal_bytes);\n+\n+ /* Convert to numeric. */\n+ wal_bytes = DirectFunctionCall3(numeric_in,\n+ CStringGetDatum(buf),\n+ ObjectIdGetDatum(0),\n+ Int32GetDatum(-1));\n+\n+ values[i++] = wal_bytes;\n\nI see that other places that display uint64 values use BIGINT datatype\nin SQL, so why can't we do the same here? See the usage of queryid in\npg_stat_statements or internal_pages, *_pages exposed via\npgstatindex.c.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Apr 2020 17:00:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Wed, Apr 1, 2020 at 12:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 1, 2020 at 8:51 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > > Agreed. I've attached the updated patch.\n> > >\n> > > Thank you for testing, Dilip!\n> >\n> > Thanks! One hunk is failing on the latest head. And, I have rebased\n> > the patch for my testing so posting the same. I have done some more\n> > testing to test multi-pass vacuum.\n> >\n>\n> The patch looks good to me. I have done a few minor modifications (a)\n> moved the declaration of variable closer to where it is used, (b)\n> changed a comment, (c) ran pgindent. 
I have also done some additional\n> testing with more number of indexes and found that vacuum and parallel\n> vacuum used the same number of total_read_blks and that is what is\n> expected here.\n>\n> Let me know what you think of the attached?\n\nThe patch looks fine to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Apr 2020 17:55:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Wed, Apr 1, 2020 at 5:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 1, 2020 at 4:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > v9-0005-Keep-track-of-WAL-usage-in-pg_stat_statements\n> >\n>\n> One more comment related to this patch.\n> +\n> + snprintf(buf, sizeof buf, UINT64_FORMAT, tmp.wal_bytes);\n> +\n> + /* Convert to numeric. */\n> + wal_bytes = DirectFunctionCall3(numeric_in,\n> + CStringGetDatum(buf),\n> + ObjectIdGetDatum(0),\n> + Int32GetDatum(-1));\n> +\n> + values[i++] = wal_bytes;\n>\n> I see that other places that display uint64 values use BIGINT datatype\n> in SQL, so why can't we do the same here? 
See the usage of queryid in\n> pg_stat_statements or internal_pages, *_pages exposed via\n> pgstatindex.c.\n\nI have reviewed 0003 and 0004, I have a few comments.\nv9-0003-Add-infrastructure-to-track-WAL-usage\n\n1.\n /* Points to buffer usage area in DSM */\n BufferUsage *buffer_usage;\n+ /* Points to WAL usage area in DSM */\n+ WalUsage *wal_usage;\n\nBetter to give one blank line between the previous statement/variable\ndeclaration and the next comment line.\n\n /* Points to buffer usage area in DSM */\n BufferUsage *buffer_usage;\n---------Empty line here--------------------\n+ /* Points to WAL usage area in DSM */\n+ WalUsage *wal_usage;\n\n2.\n@@ -2154,7 +2157,7 @@ lazy_parallel_vacuum_indexes(Relation *Irel,\nIndexBulkDeleteResult **stats,\n WaitForParallelWorkersToFinish(lps->pcxt);\n\n for (i = 0; i < lps->pcxt->nworkers_launched; i++)\n- InstrAccumParallelQuery(&lps->buffer_usage[i]);\n+ InstrAccumParallelQuery(&lps->buffer_usage[i], &lps->wal_usage[i]);\n }\n\nThe existing comment above this loop, which just mentions the buffer\nusage, not the wal usage so I guess we need to change that.\n\" /*\n* Next, accumulate buffer usage. 
(This must wait for the workers to\n* finish, or we might get incomplete data.)\n*/\"\n\n\nv9-0004-Add-option-to-report-WAL-usage-in-EXPLAIN-and-aut\n\n3.\n+ if (usage->wal_num_fpw > 0)\n+ appendStringInfo(es->str, \" full page records=%ld\",\n+ usage->wal_num_fpw);\n+ if (usage->wal_bytes > 0)\n+ appendStringInfo(es->str, \" bytes=\" UINT64_FORMAT,\n+ usage->wal_bytes);\n\nShall we change to 'full page writes' or 'full page image' instead of\nfull page records?\n\nApart from this, I have some testing to see the wal_usage with the\nparallel vacuum and the results look fine.\n\npostgres[104248]=# CREATE TABLE test (a int, b int);\nCREATE TABLE\npostgres[104248]=# INSERT INTO test SELECT i, i FROM\nGENERATE_SERIES(1,2000000) as i;\nINSERT 0 2000000\npostgres[104248]=# CREATE INDEX idx1 on test(a);\nCREATE INDEX\npostgres[104248]=# VACUUM (PARALLEL 1) test;\nVACUUM\npostgres[104248]=# select query , wal_bytes, wal_records, wal_num_fpw\nfrom pg_stat_statements where query like 'VACUUM%';\n query | wal_bytes | wal_records | wal_num_fpw\n--------------------------+-----------+-------------+-------------\n VACUUM (PARALLEL 1) test | 72814331 | 8857 | 8855\n\n\n\npostgres[106479]=# CREATE TABLE test (a int, b int);\nCREATE TABLE\npostgres[106479]=# INSERT INTO test SELECT i, i FROM\nGENERATE_SERIES(1,2000000) as i;\nINSERT 0 2000000\npostgres[106479]=# CREATE INDEX idx1 on test(a);\nCREATE INDEX\npostgres[106479]=# VACUUM (PARALLEL 0) test;\nVACUUM\npostgres[106479]=# select query , wal_bytes, wal_records, wal_num_fpw\nfrom pg_stat_statements where query like 'VACUUM%';\n query | wal_bytes | wal_records | wal_num_fpw\n--------------------------+-----------+-------------+-------------\n VACUUM (PARALLEL 0) test | 72814331 | 8857 | 8855\n\nBy tomorrow, I will try to finish reviewing 0005 and 0006.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Apr 2020 19:20:31 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", 
"msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "Hi,\n\nI'm replying here to all reviews that have been sent, thanks a lot!\n\nOn Wed, Apr 01, 2020 at 04:29:16PM +0530, Amit Kapila wrote:\n> On Wed, Apr 1, 2020 at 1:32 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > So here's a v9, rebased on top of the latest versions of Sawada-san's bug fixes\n> > (Amit's v6 for vacuum and Sawada-san's v2 for create index), with all\n> > previously mentioned changes.\n> >\n> \n> Few other comments:\n> v9-0003-Add-infrastructure-to-track-WAL-usage\n> 1.\n> static void BufferUsageAdd(BufferUsage *dst, const BufferUsage *add);\n> -\n> +static void WalUsageAdd(WalUsage *dst, WalUsage *add);\n> \n> Looks like a spurious line removal\n\n\nFixed.\n\n\n> 2.\n> + /* Report a full page imsage constructed for the WAL record */\n> + *num_fpw += 1;\n> \n> Typo. /imsage/image\n\n\nAh sorry I thought I fixed it previously, fixed.\n\n\n> 3. Doing some testing with and without parallelism to ensure WAL usage\n> data is correct would be great and if possible, share the results?\n\n\nI just saw that Dilip did some testing, but just in case here are some\nadditional tests\n\n- vacuum, after a truncate, loading 1M rows and a \"UPDATE t1 SET id = id\"\n\n=# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%vacuum%';\n query | calls | wal_bytes | wal_records | wal_num_fpw\n------------------------+-------+-----------+-------------+-------------\n vacuum (parallel 3) t1 | 1 | 20098962 | 34104 | 2\n vacuum (parallel 0) t1 | 1 | 20098962 | 34104 | 2\n(2 rows)\n\n- create index, overload t1's parallel_workers, using the 1M lines just\n vacuumed:\n\n=# alter table t1 set (parallel_workers = 2);\nALTER TABLE\n\n=# create index t1_parallel_2 on t1(id);\nCREATE INDEX\n\n=# alter table t1 set (parallel_workers = 0);\nALTER TABLE\n\n=# create index t1_parallel_0 on t1(id);\nCREATE INDEX\n\n=# select query, calls, 
wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%create index%';\n query | calls | wal_bytes | wal_records | wal_num_fpw\n--------------------------------------+-------+-----------+-------------+-------------\n create index t1_parallel_0 on t1(id) | 1 | 20355540 | 2762 | 2745\n create index t1_parallel_2 on t1(id) | 1 | 20406811 | 2762 | 2758\n(2 rows)\n\nIt all looks good to me.\n\n\n> v9-0005-Keep-track-of-WAL-usage-in-pg_stat_statements\n> 4.\n> +-- SELECT usage data, check WAL usage is reported, wal_records equal\n> rows count for INSERT/UPDATE/DELETE\n> +SELECT query, calls, rows,\n> +wal_bytes > 0 as wal_bytes_generated,\n> +wal_records > 0 as wal_records_generated,\n> +wal_records = rows as wal_records_as_rows\n> +FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n> + query |\n> calls | rows | wal_bytes_generated | wal_records_generated |\n> wal_records_as_rows\n> +------------------------------------------------------------------+-------+------+---------------------+-----------------------+---------------------\n> + DELETE FROM pgss_test WHERE a > $1 |\n> 1 | 1 | t | t | t\n> + DROP TABLE pgss_test |\n> 1 | 0 | t | t | f\n> + INSERT INTO pgss_test (a, b) VALUES ($1, $2), ($3, $4), ($5, $6) |\n> 1 | 3 | t | t | t\n> + INSERT INTO pgss_test VALUES(generate_series($1, $2), $3) |\n> 1 | 10 | t | t | t\n> + SELECT * FROM pgss_test ORDER BY a |\n> 1 | 12 | f | f | f\n> + SELECT * FROM pgss_test WHERE a > $1 ORDER BY a |\n> 2 | 4 | f | f | f\n> + SELECT * FROM pgss_test WHERE a IN ($1, $2, $3, $4, $5) |\n> 1 | 8 | f | f | f\n> + SELECT pg_stat_statements_reset() |\n> 1 | 1 | f | f | f\n> + SET pg_stat_statements.track_utility = FALSE |\n> 1 | 0 | f | f | t\n> + UPDATE pgss_test SET b = $1 WHERE a = $2 |\n> 6 | 6 | t | t | t\n> + UPDATE pgss_test SET b = $1 WHERE a > $2 |\n> 1 | 3 | t | t | t\n> +(11 rows)\n> +\n> \n> I am not sure if the above tests make much sense as they are just\n> testing that if WAL is generated for these 
commands. I understand it\n> is not easy to make these tests reliable but in that case, we can\n> think of some simple tests. It seems to me that the difficulty is due\n> to full_page_writes as that depends on the checkpoint. Can we make\n> full_page_writes = off for these tests and check some simple\n> Insert/Update/Delete cases? Alternatively, if you can present the\n> reason why that is unstable or are tricky to write, then we can simply\n> get rid of these tests because I don't see tests for BufferUsage. Let\n> not write tests for the sake of writing it unless they can detect bugs\n> in the future or are meaningfully covering the new code added.\n\n\nI don't think that we can have any hope of a stable amount of WAL bytes\ngenerated, so testing a positive number looks sensible to me. Then testing\nthat each 1-line-write query generates a WAL record also looks sensible, so I\nkept this. I realized that Kirill used an existing set of queries that were\npreviously added to validate the multi-query commands behavior, so there's no\nneed to have all of them again. I just kept one of each (insert, update,\ndelete, select) to make sure that we do record WAL activity there, but I don't\nthink that more can really be done. 
I still think that this is better than\nnothing, but if you disagree feel free to drop those tests.\n\n\n> 5.\n> -SELECT query, calls, rows FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n> - query | calls | rows\n> ------------------------------------+-------+------\n> - SELECT $1::TEXT | 1 | 1\n> - SELECT PLUS_ONE($1) | 2 | 2\n> - SELECT PLUS_TWO($1) | 2 | 2\n> - SELECT pg_stat_statements_reset() | 1 | 1\n> +SELECT query, calls, rows, wal_bytes, wal_records FROM\n> pg_stat_statements ORDER BY query COLLATE \"C\";\n> + query | calls | rows | wal_bytes | wal_records\n> +-----------------------------------+-------+------+-----------+-------------\n> + SELECT $1::TEXT | 1 | 1 | 0 | 0\n> + SELECT PLUS_ONE($1) | 2 | 2 | 0 | 0\n> + SELECT PLUS_TWO($1) | 2 | 2 | 0 | 0\n> + SELECT pg_stat_statements_reset() | 1 | 1 | 0 | 0\n> (4 rows)\n> \n> Again, I am not sure if these modifications make much sense?\n\n\nThose are queries that were previously executed. As those are read-only queries\nthat are pretty much guaranteed to not cause any WAL activity, I don't see how\nit hurts to test at the same time that that's what we indeed record with\npg_stat_statements, just to be safe. Once again, feel free to drop the extra\nwal_* columns from the output if you disagree.\n\n\n> 6.\n> static void pgss_shmem_startup(void);\n> @@ -313,6 +318,7 @@ static void pgss_store(const char *query, uint64 queryId,\n> int query_location, int query_len,\n> double total_time, uint64 rows,\n> const BufferUsage *bufusage,\n> + const WalUsage* walusage,\n> pgssJumbleState *jstate);\n> \n> The alignment for walusage doesn't seem to be correct. Running\n> pgindent will fix this.\n\n\nIndeed, fixed.\n\n> 7.\n> + values[i++] = Int64GetDatumFast(tmp.wal_records);\n> + values[i++] = UInt64GetDatum(tmp.wal_num_fpw);\n> \n> Why are they different? 
I think we should use the same *GetDatum API\n> (probably Int64GetDatumFast) for these.\n\n\nOops, that's a mistake from when I was working on the wal_bytes output, fixed.\n\n> > v9-0005-Keep-track-of-WAL-usage-in-pg_stat_statements\n> >\n>\n> One more comment related to this patch.\n> +\n> + snprintf(buf, sizeof buf, UINT64_FORMAT, tmp.wal_bytes);\n> +\n> + /* Convert to numeric. */\n> + wal_bytes = DirectFunctionCall3(numeric_in,\n> + CStringGetDatum(buf),\n> + ObjectIdGetDatum(0),\n> + Int32GetDatum(-1));\n> +\n> + values[i++] = wal_bytes;\n>\n> I see that other places that display uint64 values use BIGINT datatype\n> in SQL, so why can't we do the same here? See the usage of queryid in\n> pg_stat_statements or internal_pages, *_pages exposed via\n> pgstatindex.c.\n\n\nThat's because it's harmless to report a signed number for a hash (at least\ncompared to the overhead of having it unsigned), while it's certainly not\nwanted to report a negative amount of WAL bytes generated if it goes beyond\nthe bigint limit. 
See the usage of pg_lsn_mi in pg_lsn.c for instance.\n\nOn Wed, Apr 01, 2020 at 07:20:31PM +0530, Dilip Kumar wrote:\n>\n> I have reviewed 0003 and 0004, I have a few comments.\n> v9-0003-Add-infrastructure-to-track-WAL-usage\n>\n> 1.\n> /* Points to buffer usage area in DSM */\n> BufferUsage *buffer_usage;\n> + /* Points to WAL usage area in DSM */\n> + WalUsage *wal_usage;\n>\n> Better to give one blank line between the previous statement/variable\n> declaration and the next comment line.\n\n\nFixed.\n\n\n> 2.\n> @@ -2154,7 +2157,7 @@ lazy_parallel_vacuum_indexes(Relation *Irel,\n> IndexBulkDeleteResult **stats,\n> WaitForParallelWorkersToFinish(lps->pcxt);\n>\n> for (i = 0; i < lps->pcxt->nworkers_launched; i++)\n> - InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> + InstrAccumParallelQuery(&lps->buffer_usage[i], &lps->wal_usage[i]);\n> }\n>\n> The existing comment above this loop, which just mentions the buffer\n> usage, not the wal usage so I guess we need to change that.\n\n\nAh indeed, I thought I caught all the comments but missed this one. Fixed.\n\n\n> v9-0004-Add-option-to-report-WAL-usage-in-EXPLAIN-and-aut\n>\n> 3.\n> + if (usage->wal_num_fpw > 0)\n> + appendStringInfo(es->str, \" full page records=%ld\",\n> + usage->wal_num_fpw);\n> + if (usage->wal_bytes > 0)\n> + appendStringInfo(es->str, \" bytes=\" UINT64_FORMAT,\n> + usage->wal_bytes);\n>\n> Shall we change to 'full page writes' or 'full page image' instead of\n> full page records?\n\n\nIndeed, I changed it in the (auto)vacuum output but missed this one. 
Fixed.\n\n\n> Apart from this, I have some testing to see the wal_usage with the\n> parallel vacuum and the results look fine.\n>\n> postgres[104248]=# CREATE TABLE test (a int, b int);\n> CREATE TABLE\n> postgres[104248]=# INSERT INTO test SELECT i, i FROM\n> GENERATE_SERIES(1,2000000) as i;\n> INSERT 0 2000000\n> postgres[104248]=# CREATE INDEX idx1 on test(a);\n> CREATE INDEX\n> postgres[104248]=# VACUUM (PARALLEL 1) test;\n> VACUUM\n> postgres[104248]=# select query , wal_bytes, wal_records, wal_num_fpw\n> from pg_stat_statements where query like 'VACUUM%';\n> query | wal_bytes | wal_records | wal_num_fpw\n> --------------------------+-----------+-------------+-------------\n> VACUUM (PARALLEL 1) test | 72814331 | 8857 | 8855\n>\n>\n>\n> postgres[106479]=# CREATE TABLE test (a int, b int);\n> CREATE TABLE\n> postgres[106479]=# INSERT INTO test SELECT i, i FROM\n> GENERATE_SERIES(1,2000000) as i;\n> INSERT 0 2000000\n> postgres[106479]=# CREATE INDEX idx1 on test(a);\n> CREATE INDEX\n> postgres[106479]=# VACUUM (PARALLEL 0) test;\n> VACUUM\n> postgres[106479]=# select query , wal_bytes, wal_records, wal_num_fpw\n> from pg_stat_statements where query like 'VACUUM%';\n> query | wal_bytes | wal_records | wal_num_fpw\n> --------------------------+-----------+-------------+-------------\n> VACUUM (PARALLEL 0) test | 72814331 | 8857 | 8855\n\n\nThanks! 
I did some similar testing, with also seq/parallel index creation and\ngot similar results.\n\n\n> By tomorrow, I will try to finish reviewing 0005 and 0006.\n\nThanks!", "msg_date": "Wed, 1 Apr 2020 16:29:54 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "Adding Peter G.\n\nOn Wed, Apr 1, 2020 at 12:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I have done some testing for the parallel \"create index\".\n>\n> postgres[99536]=# show maintenance_work_mem ;\n> maintenance_work_mem\n> ----------------------\n> 1MB\n> (1 row)\n>\n> CREATE TABLE test (a int, b int);\n> INSERT INTO test SELECT i, i FROM GENERATE_SERIES(1,2000000) as i;\n> CREATE INDEX idx1 on test(a);\n> select query, total_time, shared_blks_hit, shared_blks_read,\n> shared_blks_hit + shared_blks_read as total_read_blks,\n> shared_blks_dirtied, shared_blks_written from pg_stat_statements where\n> query like 'CREATE INDEX%';\n>\n>\n> SET max_parallel_maintenance_workers TO 0;\n> query | total_time | shared_blks_hit |\n> shared_blks_read | total_read_blks | shared_blks_dirtied |\n> shared_blks_written\n> ------------------------------+--------------------+-----------------+------------------+-----------------+---------------------+---------------------\n> CREATE INDEX idx1 on test(a) | 1947.4959979999999 | 8947 |\n> 11 | 8958 | 5 |\n> 0\n>\n> SET max_parallel_maintenance_workers TO 2;\n>\n> query | total_time | shared_blks_hit |\n> shared_blks_read | total_read_blks | shared_blks_dirtied |\n> shared_blks_written\n> ------------------------------+--------------------+-----------------+------------------+-----------------+---------------------+---------------------\n> CREATE INDEX idx1 on test(a) | 1942.1426040000001 | 8960 |\n> 14 | 8974 | 5 |\n> 0\n> (1 row)\n>\n> I have noticed that the total_read_blks, with the parallel, create\n> index is more compared to non-parallel one. 
I have created a fresh\n> database before each run. I am not much aware of the internal code of\n> parallel create an index so I am not sure whether it is expected to\n> read extra blocks with the parallel create an index. I guess maybe\n> because multiple workers are inserting int the btree they might need\n> to visit some btree nodes multiple times while traversing the tree\n> down. But, it's better if someone who have more idea with this code\n> can confirm this.\n>\n\nPeter, Is this behavior expected?\n\nLet me summarize the situation so that it would be easier for Peter to\ncomment. Julien has noticed that parallel vacuum and parallel create\nindex doesn't seem to report correct values for buffer usage stats.\nSawada-San wrote a patch to fix the problem for both the cases. We\nexpect that 'total_read_blks' as reported in pg_stat_statements should\ngive the same value for parallel and non-parallel operations. We see\nthat is true for parallel vacuum and previously we have the same\nobservation for the parallel query. Now, for parallel create index\nthis doesn't seem to be true as test results by Dilip show that. We\nhave two possibilities here (a) there is some bug in Sawada-San's\npatch or (b) this is expected behavior for parallel create index.\nWhat do you think?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Apr 2020 08:22:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Wed, Apr 1, 2020 at 7:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Peter, Is this behavior expected?\n>\n> Let me summarize the situation so that it would be easier for Peter to\n> comment. 
Julien has noticed that parallel vacuum and parallel create\n> index doesn't seem to report correct values for buffer usage stats.\n> Sawada-San wrote a patch to fix the problem for both the cases. We\n> expect that 'total_read_blks' as reported in pg_stat_statements should\n> give the same value for parallel and non-parallel operations. We see\n> that is true for parallel vacuum and previously we have the same\n> observation for the parallel query. Now, for parallel create index\n> this doesn't seem to be true as test results by Dilip show that. We\n> have two possibilities here (a) there is some bug in Sawada-San's\n> patch or (b) this is expected behavior for parallel create index.\n> What do you think?\n\nnbtree CREATE INDEX doesn't even go through the buffer manager. The\ndifference that Dilip showed is probably due to extra catalog accesses\nin the two parallel workers -- pg_amproc lookups, and the like. Those\nare rather small differences, overall.\n\nCan Dilip demonstrate the the \"extra\" buffer accesses are\nproportionate to the number of workers launched in some constant,\npredictable way?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 1 Apr 2020 20:04:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Thu, Apr 2, 2020 at 8:34 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Apr 1, 2020 at 7:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Peter, Is this behavior expected?\n> >\n> > Let me summarize the situation so that it would be easier for Peter to\n> > comment. Julien has noticed that parallel vacuum and parallel create\n> > index doesn't seem to report correct values for buffer usage stats.\n> > Sawada-San wrote a patch to fix the problem for both the cases. 
We\n> > expect that 'total_read_blks' as reported in pg_stat_statements should\n> > give the same value for parallel and non-parallel operations. We see\n> > that is true for parallel vacuum and previously we have the same\n> > observation for the parallel query. Now, for parallel create index\n> > this doesn't seem to be true as test results by Dilip show that. We\n> > have two possibilities here (a) there is some bug in Sawada-San's\n> > patch or (b) this is expected behavior for parallel create index.\n> > What do you think?\n>\n> nbtree CREATE INDEX doesn't even go through the buffer manager.\n\nThanks for clarifying. So IIUC, it will not go through the buffer\nmanager for the index pages, but for the heap pages, it will still go\nthrough the buffer manager.\n\n> The\n> difference that Dilip showed is probably due to extra catalog accesses\n> in the two parallel workers -- pg_amproc lookups, and the like. Those\n> are rather small differences, overall.\n\n> Can Dilip demonstrate the the \"extra\" buffer accesses are\n> proportionate to the number of workers launched in some constant,\n> predictable way?\n\nOkay, I will test this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Apr 2020 09:13:05 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Thu, Apr 2, 2020 at 9:13 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Apr 2, 2020 at 8:34 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Wed, Apr 1, 2020 at 7:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Peter, Is this behavior expected?\n> > >\n> > > Let me summarize the situation so that it would be easier for Peter to\n> > > comment. 
Julien has noticed that parallel vacuum and parallel create\n> > > index doesn't seem to report correct values for buffer usage stats.\n> > > Sawada-San wrote a patch to fix the problem for both the cases. We\n> > > expect that 'total_read_blks' as reported in pg_stat_statements should\n> > > give the same value for parallel and non-parallel operations. We see\n> > > that is true for parallel vacuum and previously we have the same\n> > > observation for the parallel query. Now, for parallel create index\n> > > this doesn't seem to be true as test results by Dilip show that. We\n> > > have two possibilities here (a) there is some bug in Sawada-San's\n> > > patch or (b) this is expected behavior for parallel create index.\n> > > What do you think?\n> >\n> > nbtree CREATE INDEX doesn't even go through the buffer manager.\n>\n> Thanks for clarifying. So IIUC, it will not go through the buffer\n> manager for the index pages, but for the heap pages, it will still go\n> through the buffer manager.\n>\n> > The\n> > difference that Dilip showed is probably due to extra catalog accesses\n> > in the two parallel workers -- pg_amproc lookups, and the like. 
Those\n> > are rather small differences, overall.\n>\n> > Can Dilip demonstrate the the \"extra\" buffer accesses are\n> > proportionate to the number of workers launched in some constant,\n> > predictable way?\n>\n> Okay, I will test this.\n\n0-worker\n query | total_time | shared_blks_hit |\nshared_blks_read | total_read_blks | shared_blks_dirtied |\nshared_blks_written\n------------------------------+-------------+-----------------+------------------+-----------------+---------------------+---------------------\n CREATE INDEX idx1 on test(a) | 1228.895057 | 8947 |\n 11 | 8971 | 5 |\n0\n\n1-worker\n query | total_time | shared_blks_hit |\nshared_blks_read | total_read_blks | shared_blks_dirtied |\nshared_blks_written\n------------------------------+-------------+-----------------+------------------+-----------------+---------------------+---------------------\n CREATE INDEX idx1 on test(a) | 1006.157231 | 8962 |\n 12 | 8974 | 5 |\n0\n\n2-workers\n query | total_time | shared_blks_hit |\nshared_blks_read | total_read_blks | shared_blks_dirtied |\nshared_blks_written\n------------------------------+------------+-----------------+------------------+-----------------+---------------------+---------------------\n CREATE INDEX idx1 on test(a) | 949.44663 | 8965 |\n 12 | 8977 | 5 | 0\n\n3-workers\n query | total_time | shared_blks_hit |\nshared_blks_read | total_read_blks | shared_blks_dirtied |\nshared_blks_written\n------------------------------+-------------+-----------------+------------------+-----------------+---------------------+---------------------\n CREATE INDEX idx1 on test(a) | 1037.297196 | 8968 |\n 12 | 8980 | 5 |\n0\n\n4-workers\n query | total_time | shared_blks_hit |\nshared_blks_read | total_read_blks | shared_blks_dirtied |\nshared_blks_written\n------------------------------+------------+-----------------+------------------+-----------------+---------------------+---------------------\n CREATE INDEX idx1 on test(a) | 889.332782 | 8971 |\n 12 | 8983 
| 6 | 0\n\nYou are right, it is increasing with some constant factor.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Apr 2020 10:13:31 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Wed, Apr 1, 2020 at 8:00 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, Apr 01, 2020 at 04:29:16PM +0530, Amit Kapila wrote:\n> > 3. Doing some testing with and without parallelism to ensure WAL usage\n> > data is correct would be great and if possible, share the results?\n>\n>\n> I just saw that Dilip did some testing, but just in case here is some\n> additional one\n>\n> - vacuum, after a truncate, loading 1M row and a \"UPDATE t1 SET id = id\"\n>\n> =# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%vacuum%';\n> query | calls | wal_bytes | wal_records | wal_num_fpw\n> ------------------------+-------+-----------+-------------+-------------\n> vacuum (parallel 3) t1 | 1 | 20098962 | 34104 | 2\n> vacuum (parallel 0) t1 | 1 | 20098962 | 34104 | 2\n> (2 rows)\n>\n> - create index, overload t1's parallel_workers, using the 1M line just\n> vacuumed:\n>\n> =# alter table t1 set (parallel_workers = 2);\n> ALTER TABLE\n>\n> =# create index t1_parallel_2 on t1(id);\n> CREATE INDEX\n>\n> =# alter table t1 set (parallel_workers = 0);\n> ALTER TABLE\n>\n> =# create index t1_parallel_0 on t1(id);\n> CREATE INDEX\n>\n> =# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%create index%';\n> query | calls | wal_bytes | wal_records | wal_num_fpw\n> --------------------------------------+-------+-----------+-------------+-------------\n> create index t1_parallel_0 on t1(id) | 1 | 20355540 | 2762 | 2745\n> create index t1_parallel_2 on t1(id) | 1 | 20406811 | 2762 | 
2758\n> (2 rows)\n>\n> It all looks good to me.\n>\n\nHere the wal_num_fpw and wal_bytes are different between parallel and\nnon-parallel versions. Is it due to checkpoint or something else? We\ncan probably rule out checkpoint by increasing checkpoint_timeout and\nother checkpoint related parameters.\n\n>\n> > 5.\n> > -SELECT query, calls, rows FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n> > - query | calls | rows\n> > ------------------------------------+-------+------\n> > - SELECT $1::TEXT | 1 | 1\n> > - SELECT PLUS_ONE($1) | 2 | 2\n> > - SELECT PLUS_TWO($1) | 2 | 2\n> > - SELECT pg_stat_statements_reset() | 1 | 1\n> > +SELECT query, calls, rows, wal_bytes, wal_records FROM\n> > pg_stat_statements ORDER BY query COLLATE \"C\";\n> > + query | calls | rows | wal_bytes | wal_records\n> > +-----------------------------------+-------+------+-----------+-------------\n> > + SELECT $1::TEXT | 1 | 1 | 0 | 0\n> > + SELECT PLUS_ONE($1) | 2 | 2 | 0 | 0\n> > + SELECT PLUS_TWO($1) | 2 | 2 | 0 | 0\n> > + SELECT pg_stat_statements_reset() | 1 | 1 | 0 | 0\n> > (4 rows)\n> >\n> > Again, I am not sure if these modifications make much sense?\n>\n>\n> Those are queries that were previously executed. As those are read-only query,\n> that are pretty much guaranteed to not cause any WAL activity, I don't see how\n> it hurts to test at the same time that that's we indeed record with\n> pg_stat_statements, just to be safe.\n>\n\nOn a similar theory, one could have checked bufferusage stats as well.\nThe statements are using some expressions so I don't see any value in\nchecking all usage data for such statements.\n\n> Once again, feel free to drop the extra\n> wal_* columns from the output if you disagree.\n>\n\nRight now, that particular patch is not getting applied (probably due\nto recent commit 17e0328224). 
Can you rebase it?\n\n>\n>\n> > v9-0004-Add-option-to-report-WAL-usage-in-EXPLAIN-and-aut\n> >\n> > 3.\n> > + if (usage->wal_num_fpw > 0)\n> > + appendStringInfo(es->str, \" full page records=%ld\",\n> > + usage->wal_num_fpw);\n> > + if (usage->wal_bytes > 0)\n> > + appendStringInfo(es->str, \" bytes=\" UINT64_FORMAT,\n> > + usage->wal_bytes);\n> >\n> > Shall we change to 'full page writes' or 'full page image' instead of\n> > full page records?\n>\n>\n> Indeed, I changed it in the (auto)vacuum output but missed this one. Fixed.\n>\n\nI don't see this change in the patch.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Apr 2020 11:07:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 2, 2020 at 11:07 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 1, 2020 at 8:00 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n\nAlso, I forgot to mention that let's not base this on buffer usage\npatch for create index\n(v10-0002-Allow-parallel-index-creation-to-accumulate-buff) because as\nper recent discussion I am not sure about its usefulness. 
I think we\ncan proceed with this patch without\nv10-0002-Allow-parallel-index-creation-to-accumulate-buff as well.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Apr 2020 11:22:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Wed, Apr 1, 2020 at 8:00 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> I'm replying here to all reviews that have been sent, thanks a lot!\n>\n> On Wed, Apr 01, 2020 at 04:29:16PM +0530, Amit Kapila wrote:\n> > On Wed, Apr 1, 2020 at 1:32 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > So here's a v9, rebased on top of the latest versions of Sawada-san's bug fixes\n> > > (Amit's v6 for vacuum and Sawada-san's v2 for create index), with all\n> > > previously mentionned changes.\n> > >\n> >\n> > Few other comments:\n> > v9-0003-Add-infrastructure-to-track-WAL-usage\n> > 1.\n> > static void BufferUsageAdd(BufferUsage *dst, const BufferUsage *add);\n> > -\n> > +static void WalUsageAdd(WalUsage *dst, WalUsage *add);\n> >\n> > Looks like a spurious line removal\n>\n>\n> Fixed.\n>\n>\n> > 2.\n> > + /* Report a full page imsage constructed for the WAL record */\n> > + *num_fpw += 1;\n> >\n> > Typo. /imsage/image\n>\n>\n> Ah sorry I though I fixed it previously, fixed.\n>\n>\n> > 3. 
Doing some testing with and without parallelism to ensure WAL usage\n> > data is correct would be great and if possible, share the results?\n>\n>\n> I just saw that Dilip did some testing, but just in case here is some\n> additional one\n>\n> - vacuum, after a truncate, loading 1M row and a \"UPDATE t1 SET id = id\"\n>\n> =# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%vacuum%';\n> query | calls | wal_bytes | wal_records | wal_num_fpw\n> ------------------------+-------+-----------+-------------+-------------\n> vacuum (parallel 3) t1 | 1 | 20098962 | 34104 | 2\n> vacuum (parallel 0) t1 | 1 | 20098962 | 34104 | 2\n> (2 rows)\n>\n> - create index, overload t1's parallel_workers, using the 1M line just\n> vacuumed:\n>\n> =# alter table t1 set (parallel_workers = 2);\n> ALTER TABLE\n>\n> =# create index t1_parallel_2 on t1(id);\n> CREATE INDEX\n>\n> =# alter table t1 set (parallel_workers = 0);\n> ALTER TABLE\n>\n> =# create index t1_parallel_0 on t1(id);\n> CREATE INDEX\n>\n> =# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%create index%';\n> query | calls | wal_bytes | wal_records | wal_num_fpw\n> --------------------------------------+-------+-----------+-------------+-------------\n> create index t1_parallel_0 on t1(id) | 1 | 20355540 | 2762 | 2745\n> create index t1_parallel_2 on t1(id) | 1 | 20406811 | 2762 | 2758\n> (2 rows)\n>\n> It all looks good to me.\n>\n>\n> > v9-0005-Keep-track-of-WAL-usage-in-pg_stat_statements\n> > 4.\n> > +-- SELECT usage data, check WAL usage is reported, wal_records equal\n> > rows count for INSERT/UPDATE/DELETE\n> > +SELECT query, calls, rows,\n> > +wal_bytes > 0 as wal_bytes_generated,\n> > +wal_records > 0 as wal_records_generated,\n> > +wal_records = rows as wal_records_as_rows\n> > +FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n> > + query |\n> > calls | rows | wal_bytes_generated | wal_records_generated |\n> 
> wal_records_as_rows\n> > +------------------------------------------------------------------+-------+------+---------------------+-----------------------+---------------------\n> > + DELETE FROM pgss_test WHERE a > $1 |\n> > 1 | 1 | t | t | t\n> > + DROP TABLE pgss_test |\n> > 1 | 0 | t | t | f\n> > + INSERT INTO pgss_test (a, b) VALUES ($1, $2), ($3, $4), ($5, $6) |\n> > 1 | 3 | t | t | t\n> > + INSERT INTO pgss_test VALUES(generate_series($1, $2), $3) |\n> > 1 | 10 | t | t | t\n> > + SELECT * FROM pgss_test ORDER BY a |\n> > 1 | 12 | f | f | f\n> > + SELECT * FROM pgss_test WHERE a > $1 ORDER BY a |\n> > 2 | 4 | f | f | f\n> > + SELECT * FROM pgss_test WHERE a IN ($1, $2, $3, $4, $5) |\n> > 1 | 8 | f | f | f\n> > + SELECT pg_stat_statements_reset() |\n> > 1 | 1 | f | f | f\n> > + SET pg_stat_statements.track_utility = FALSE |\n> > 1 | 0 | f | f | t\n> > + UPDATE pgss_test SET b = $1 WHERE a = $2 |\n> > 6 | 6 | t | t | t\n> > + UPDATE pgss_test SET b = $1 WHERE a > $2 |\n> > 1 | 3 | t | t | t\n> > +(11 rows)\n> > +\n> >\n> > I am not sure if the above tests make much sense as they are just\n> > testing that if WAL is generated for these commands. I understand it\n> > is not easy to make these tests reliable but in that case, we can\n> > think of some simple tests. It seems to me that the difficulty is due\n> > to full_page_writes as that depends on the checkpoint. Can we make\n> > full_page_writes = off for these tests and check some simple\n> > Insert/Update/Delete cases? Alternatively, if you can present the\n> > reason why that is unstable or are tricky to write, then we can simply\n> > get rid of these tests because I don't see tests for BufferUsage. Let\n> > not write tests for the sake of writing it unless they can detect bugs\n> > in the future or are meaningfully covering the new code added.\n>\n>\n> I don't think that we can have any hope in a stable amount of WAL bytes\n> generated, so testing a positive number looks sensible to me. 
Then testing\n> that each 1-line-write query generates a WAL record also looks sensible, so I\n> kept this. I realized that Kirill used an existing set of queries that were\n> previously added to validate the multi queries commands behavior, so there's no\n> need to have all of them again. I just kept one of each (insert, update,\n> delete, select) to make sure that we do record WAL activity there, but I don't\n> think that more can really be done. I still think that this is better than\n> nothing, but if you disagree feel free to drop those tests.\n>\n>\n> > 5.\n> > -SELECT query, calls, rows FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n> > - query | calls | rows\n> > ------------------------------------+-------+------\n> > - SELECT $1::TEXT | 1 | 1\n> > - SELECT PLUS_ONE($1) | 2 | 2\n> > - SELECT PLUS_TWO($1) | 2 | 2\n> > - SELECT pg_stat_statements_reset() | 1 | 1\n> > +SELECT query, calls, rows, wal_bytes, wal_records FROM\n> > pg_stat_statements ORDER BY query COLLATE \"C\";\n> > + query | calls | rows | wal_bytes | wal_records\n> > +-----------------------------------+-------+------+-----------+-------------\n> > + SELECT $1::TEXT | 1 | 1 | 0 | 0\n> > + SELECT PLUS_ONE($1) | 2 | 2 | 0 | 0\n> > + SELECT PLUS_TWO($1) | 2 | 2 | 0 | 0\n> > + SELECT pg_stat_statements_reset() | 1 | 1 | 0 | 0\n> > (4 rows)\n> >\n> > Again, I am not sure if these modifications make much sense?\n>\n>\n> Those are queries that were previously executed. As those are read-only query,\n> that are pretty much guaranteed to not cause any WAL activity, I don't see how\n> it hurts to test at the same time that that's we indeed record with\n> pg_stat_statements, just to be safe. 
Once again, feel free to drop the extra\n> wal_* columns from the output if you disagree.\n>\n>\n> > 6.\n> > static void pgss_shmem_startup(void);\n> > @@ -313,6 +318,7 @@ static void pgss_store(const char *query, uint64 queryId,\n> > int query_location, int query_len,\n> > double total_time, uint64 rows,\n> > const BufferUsage *bufusage,\n> > + const WalUsage* walusage,\n> > pgssJumbleState *jstate);\n> >\n> > The alignment for walusage doesn't seem to be correct. Running\n> > pgindent will fix this.\n>\n>\n> Indeed, fixed.\n>\n> > 7.\n> > + values[i++] = Int64GetDatumFast(tmp.wal_records);\n> > + values[i++] = UInt64GetDatum(tmp.wal_num_fpw);\n> >\n> > Why are they different? I think we should use the same *GetDatum API\n> > (probably Int64GetDatumFast) for these.\n>\n>\n> Oops, that's a mistake from when I was working on the wal_bytes output, fixed.\n>\n> > > v9-0005-Keep-track-of-WAL-usage-in-pg_stat_statements\n> > >\n> >\n> > One more comment related to this patch.\n> > +\n> > + snprintf(buf, sizeof buf, UINT64_FORMAT, tmp.wal_bytes);\n> > +\n> > + /* Convert to numeric. */\n> > + wal_bytes = DirectFunctionCall3(numeric_in,\n> > + CStringGetDatum(buf),\n> > + ObjectIdGetDatum(0),\n> > + Int32GetDatum(-1));\n> > +\n> > + values[i++] = wal_bytes;\n> >\n> > I see that other places that display uint64 values use BIGINT datatype\n> > in SQL, so why can't we do the same here? See the usage of queryid in\n> > pg_stat_statements or internal_pages, *_pages exposed via\n> > pgstatindex.c.\n>\n>\n> That's because it's harmless to report a signed number for a hash (at least\n> comapred to the overhead of having it unsigned), while that's certainly not\n> wanted to report a negative amount of WAL bytes generated if it goes beyond\n> bigint limit. 
See the usage of pg_lsn_mi in pg_lsn.c for instance.\n>\n> On Wed, Apr 01, 2020 at 07:20:31PM +0530, Dilip Kumar wrote:\n> >\n> > I have reviewed 0003 and 0004, I have a few comments.\n> > v9-0003-Add-infrastructure-to-track-WAL-usage\n> >\n> > 1.\n> > /* Points to buffer usage area in DSM */\n> > BufferUsage *buffer_usage;\n> > + /* Points to WAL usage area in DSM */\n> > + WalUsage *wal_usage;\n> >\n> > Better to give one blank line between the previous statement/variable\n> > declaration and the next comment line.\n>\n>\n> Fixed.\n>\n>\n> > 2.\n> > @@ -2154,7 +2157,7 @@ lazy_parallel_vacuum_indexes(Relation *Irel,\n> > IndexBulkDeleteResult **stats,\n> > WaitForParallelWorkersToFinish(lps->pcxt);\n> >\n> > for (i = 0; i < lps->pcxt->nworkers_launched; i++)\n> > - InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> > + InstrAccumParallelQuery(&lps->buffer_usage[i], &lps->wal_usage[i]);\n> > }\n> >\n> > The existing comment above this loop, which just mentions the buffer\n> > usage, not the wal usage so I guess we need to change that.\n>\n>\n> Ah indeed, I thought I caught all the comments but missed this one. Fixed.\n>\n>\n> > v9-0004-Add-option-to-report-WAL-usage-in-EXPLAIN-and-aut\n> >\n> > 3.\n> > + if (usage->wal_num_fpw > 0)\n> > + appendStringInfo(es->str, \" full page records=%ld\",\n> > + usage->wal_num_fpw);\n> > + if (usage->wal_bytes > 0)\n> > + appendStringInfo(es->str, \" bytes=\" UINT64_FORMAT,\n> > + usage->wal_bytes);\n> >\n> > Shall we change to 'full page writes' or 'full page image' instead of\n> > full page records?\n>\n>\n> Indeed, I changed it in the (auto)vacuum output but missed this one. 
Fixed.\n>\n>\n> > Apart from this, I have some testing to see the wal_usage with the\n> > parallel vacuum and the results look fine.\n> >\n> > postgres[104248]=# CREATE TABLE test (a int, b int);\n> > CREATE TABLE\n> > postgres[104248]=# INSERT INTO test SELECT i, i FROM\n> > GENERATE_SERIES(1,2000000) as i;\n> > INSERT 0 2000000\n> > postgres[104248]=# CREATE INDEX idx1 on test(a);\n> > CREATE INDEX\n> > postgres[104248]=# VACUUM (PARALLEL 1) test;\n> > VACUUM\n> > postgres[104248]=# select query , wal_bytes, wal_records, wal_num_fpw\n> > from pg_stat_statements where query like 'VACUUM%';\n> > query | wal_bytes | wal_records | wal_num_fpw\n> > --------------------------+-----------+-------------+-------------\n> > VACUUM (PARALLEL 1) test | 72814331 | 8857 | 8855\n> >\n> >\n> >\n> > postgres[106479]=# CREATE TABLE test (a int, b int);\n> > CREATE TABLE\n> > postgres[106479]=# INSERT INTO test SELECT i, i FROM\n> > GENERATE_SERIES(1,2000000) as i;\n> > INSERT 0 2000000\n> > postgres[106479]=# CREATE INDEX idx1 on test(a);\n> > CREATE INDEX\n> > postgres[106479]=# VACUUM (PARALLEL 0) test;\n> > VACUUM\n> > postgres[106479]=# select query , wal_bytes, wal_records, wal_num_fpw\n> > from pg_stat_statements where query like 'VACUUM%';\n> > query | wal_bytes | wal_records | wal_num_fpw\n> > --------------------------+-----------+-------------+-------------\n> > VACUUM (PARALLEL 0) test | 72814331 | 8857 | 8855\n>\n>\n> Thanks! 
I did some similar testing, with also seq/parallel index creation and\n> got similar results.\n>\n>\n> > By tomorrow, I will try to finish reviewing 0005 and 0006.\n\nI have reviewed these patches and I have a few cosmetic comments.\nv10-0005-Keep-track-of-WAL-usage-in-pg_stat_statements\n\n1.\n+ uint64 wal_bytes; /* total amount of wal bytes written */\n+ int64 wal_records; /* # of wal records written */\n+ int64 wal_num_fpw; /* # of full page wal records written */\n\n\n/s/# of full page wal records written / /* # of WAL full page image produced */\n\n2.\n static void pgss_ProcessUtility(PlannedStmt *pstmt, const char *queryString,\n ProcessUtilityContext context, ParamListInfo params,\n QueryEnvironment *queryEnv,\n- DestReceiver *dest, QueryCompletion *qc);\n+ DestReceiver *dest, QueryCompletion * qc);\n\nUseless hunk.\n\n3.\n\nv10-0006-Expose-WAL-usage-counters-in-verbose-auto-vacuum\n\n@@ -3105,7 +3105,7 @@ show_wal_usage(ExplainState *es, const WalUsage *usage)\n {\n ExplainPropertyInteger(\"WAL records\", NULL,\n usage->wal_records, es);\n- ExplainPropertyInteger(\"WAL full page records\", NULL,\n+ ExplainPropertyInteger(\"WAL full page writes\", NULL,\n usage->wal_num_fpw, es);\nJust noticed that in 0004 you have first added \"WAL full page\nrecords\", which is later corrected to \"WAL full page writes\" in 0006.\nI think we better keep this proper in 0004 itself and avoid this hunk\nin 0006, otherwise, it creates confusion while reviewing.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Apr 2020 12:04:32 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 02, 2020 at 11:07:29AM +0530, Amit Kapila wrote:\n> On Wed, Apr 1, 2020 at 8:00 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Wed, Apr 01, 2020 at 04:29:16PM +0530, Amit Kapila wrote:\n> > > 3. 
Doing some testing with and without parallelism to ensure WAL usage\n> > > data is correct would be great and if possible, share the results?\n> >\n> >\n> > I just saw that Dilip did some testing, but just in case here is some\n> > additional one\n> >\n> > - vacuum, after a truncate, loading 1M row and a \"UPDATE t1 SET id = id\"\n> >\n> > =# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%vacuum%';\n> > query | calls | wal_bytes | wal_records | wal_num_fpw\n> > ------------------------+-------+-----------+-------------+-------------\n> > vacuum (parallel 3) t1 | 1 | 20098962 | 34104 | 2\n> > vacuum (parallel 0) t1 | 1 | 20098962 | 34104 | 2\n> > (2 rows)\n> >\n> > - create index, overload t1's parallel_workers, using the 1M line just\n> > vacuumed:\n> >\n> > =# alter table t1 set (parallel_workers = 2);\n> > ALTER TABLE\n> >\n> > =# create index t1_parallel_2 on t1(id);\n> > CREATE INDEX\n> >\n> > =# alter table t1 set (parallel_workers = 0);\n> > ALTER TABLE\n> >\n> > =# create index t1_parallel_0 on t1(id);\n> > CREATE INDEX\n> >\n> > =# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%create index%';\n> > query | calls | wal_bytes | wal_records | wal_num_fpw\n> > --------------------------------------+-------+-----------+-------------+-------------\n> > create index t1_parallel_0 on t1(id) | 1 | 20355540 | 2762 | 2745\n> > create index t1_parallel_2 on t1(id) | 1 | 20406811 | 2762 | 2758\n> > (2 rows)\n> >\n> > It all looks good to me.\n> >\n> \n> Here the wal_num_fpw and wal_bytes are different between parallel and\n> non-parallel versions. Is it due to checkpoint or something else? We\n> can probably rule out checkpoint by increasing checkpoint_timeout and\n> other checkpoint related parameters.\n\nI think this is because I did a checkpoint after the VACUUM tests, so the 1st\nCREATE INDEX (with parallelism) induced some FPW on the catalog blocks. 
I\ndidn't try to investigate more since:\n\nOn Thu, Apr 02, 2020 at 11:22:16AM +0530, Amit Kapila wrote:\n>\n> Also, I forgot to mention that let's not base this on buffer usage\n> patch for create index\n> (v10-0002-Allow-parallel-index-creation-to-accumulate-buff) because as\n> per recent discussion I am not sure about its usefulness. I think we\n> can proceed with this patch without\n> v10-0002-Allow-parallel-index-creation-to-accumulate-buff as well.\n\n\nWhich is done in attached v11.\n\n\n> > > 5.\n> > > -SELECT query, calls, rows FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n> > > - query | calls | rows\n> > > ------------------------------------+-------+------\n> > > - SELECT $1::TEXT | 1 | 1\n> > > - SELECT PLUS_ONE($1) | 2 | 2\n> > > - SELECT PLUS_TWO($1) | 2 | 2\n> > > - SELECT pg_stat_statements_reset() | 1 | 1\n> > > +SELECT query, calls, rows, wal_bytes, wal_records FROM\n> > > pg_stat_statements ORDER BY query COLLATE \"C\";\n> > > + query | calls | rows | wal_bytes | wal_records\n> > > +-----------------------------------+-------+------+-----------+-------------\n> > > + SELECT $1::TEXT | 1 | 1 | 0 | 0\n> > > + SELECT PLUS_ONE($1) | 2 | 2 | 0 | 0\n> > > + SELECT PLUS_TWO($1) | 2 | 2 | 0 | 0\n> > > + SELECT pg_stat_statements_reset() | 1 | 1 | 0 | 0\n> > > (4 rows)\n> > >\n> > > Again, I am not sure if these modifications make much sense?\n> >\n> >\n> > Those are queries that were previously executed. 
As those are read-only queries,\n> > that are pretty much guaranteed to not cause any WAL activity, I don't see how\n> > it hurts to test at the same time that that's indeed what we record with\n> > pg_stat_statements, just to be safe.\n> >\n> \n> On a similar theory, one could have checked bufferusage stats as well.\n> The statements are using some expressions so I don't see any value in\n> checking all usage data for such statements.\n\n\nDropped.\n\n\n> Right now, that particular patch is not getting applied (probably due\n> to recent commit 17e0328224). Can you rebase it?\n\n\nDone.\n\n\n> > > v9-0004-Add-option-to-report-WAL-usage-in-EXPLAIN-and-aut\n> > >\n> > > 3.\n> > > + if (usage->wal_num_fpw > 0)\n> > > + appendStringInfo(es->str, \" full page records=%ld\",\n> > > + usage->wal_num_fpw);\n> > > + if (usage->wal_bytes > 0)\n> > > + appendStringInfo(es->str, \" bytes=\" UINT64_FORMAT,\n> > > + usage->wal_bytes);\n> > >\n> > > Shall we change to 'full page writes' or 'full page image' instead of\n> > > full page records?\n> >\n> >\n> > Indeed, I changed it in the (auto)vacuum output but missed this one. Fixed.\n> >\n> \n> I don't see this change in the patch.\n\n\nYes, as Dilip reported I fixed up the wrong commit, sorry about that. 
This\nversion should now be ok.\n\n\nOn Thu, Apr 02, 2020 at 12:04:32PM +0530, Dilip Kumar wrote:\n> On Wed, Apr 1, 2020 at 8:00 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > > By tomorrow, I will try to finish reviewing 0005 and 0006.\n>\n> I have reviewed these patches and I have a few cosmetic comments.\n> v10-0005-Keep-track-of-WAL-usage-in-pg_stat_statements\n>\n> 1.\n> + uint64 wal_bytes; /* total amount of wal bytes written */\n> + int64 wal_records; /* # of wal records written */\n> + int64 wal_num_fpw; /* # of full page wal records written */\n>\n>\n> /s/# of full page wal records written / /* # of WAL full page image produced */\n\n\nDone, I also consistently s/wal/WAL/.\n\n>\n> 2.\n> static void pgss_ProcessUtility(PlannedStmt *pstmt, const char *queryString,\n> ProcessUtilityContext context, ParamListInfo params,\n> QueryEnvironment *queryEnv,\n> - DestReceiver *dest, QueryCompletion *qc);\n> + DestReceiver *dest, QueryCompletion * qc);\n>\n> Useless hunk.\n\n\nOops, a leftover from a pgindent run, as QueryCompletion isn't in the typedefs yet. I\nthought I discarded all the useless hunks but missed this one. Thanks, fixed.\n\n\n>\n> 3.\n>\n> v10-0006-Expose-WAL-usage-counters-in-verbose-auto-vacuum\n>\n> @@ -3105,7 +3105,7 @@ show_wal_usage(ExplainState *es, const WalUsage *usage)\n> {\n> ExplainPropertyInteger(\"WAL records\", NULL,\n> usage->wal_records, es);\n> - ExplainPropertyInteger(\"WAL full page records\", NULL,\n> + ExplainPropertyInteger(\"WAL full page writes\", NULL,\n> usage->wal_num_fpw, es);\n> Just noticed that in 0004 you have first added \"WAL full page\n> records\", which is later corrected to \"WAL full page writes\" in 0006.\n> I think we better keep this proper in 0004 itself and avoid this hunk\n> in 0006, otherwise, it creates confusion while reviewing.\n\n\nOh, I didn't realize that I fixed up the wrong commit. 
Fixed.\n\n\nI also adapted the documentation that mentioned full page records instead of\nfull page images, and integrated Justin's comment:\n\n> In 0003:\n> + /* Provide WAL update data to the instrumentation */\n> Remove \"data\" ??\n\nso changed to \"Report WAL traffic to the instrumentation.\"\n\nI didn't change the (auto)vacuum output yet (except fixing the s/full page\nrecords/full page writes/ that I previously missed), as it's not clear what the\nconsensus is yet. I'll take care of that as soon as we reach a consensus.", "msg_date": "Thu, 2 Apr 2020 10:30:35 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 2, 2020 at 2:00 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Apr 02, 2020 at 11:07:29AM +0530, Amit Kapila wrote:\n> > On Wed, Apr 1, 2020 at 8:00 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > On Wed, Apr 01, 2020 at 04:29:16PM +0530, Amit Kapila wrote:\n> > > > 3. 
Doing some testing with and without parallelism to ensure WAL usage\n> > > > data is correct would be great and if possible, share the results?\n> > >\n> > >\n> > > I just saw that Dilip did some testing, but just in case here is some\n> > > additional one\n> > >\n> > > - vacuum, after a truncate, loading 1M row and a \"UPDATE t1 SET id = id\"\n> > >\n> > > =# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%vacuum%';\n> > > query | calls | wal_bytes | wal_records | wal_num_fpw\n> > > ------------------------+-------+-----------+-------------+-------------\n> > > vacuum (parallel 3) t1 | 1 | 20098962 | 34104 | 2\n> > > vacuum (parallel 0) t1 | 1 | 20098962 | 34104 | 2\n> > > (2 rows)\n> > >\n> > > - create index, overload t1's parallel_workers, using the 1M line just\n> > > vacuumed:\n> > >\n> > > =# alter table t1 set (parallel_workers = 2);\n> > > ALTER TABLE\n> > >\n> > > =# create index t1_parallel_2 on t1(id);\n> > > CREATE INDEX\n> > >\n> > > =# alter table t1 set (parallel_workers = 0);\n> > > ALTER TABLE\n> > >\n> > > =# create index t1_parallel_0 on t1(id);\n> > > CREATE INDEX\n> > >\n> > > =# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%create index%';\n> > > query | calls | wal_bytes | wal_records | wal_num_fpw\n> > > --------------------------------------+-------+-----------+-------------+-------------\n> > > create index t1_parallel_0 on t1(id) | 1 | 20355540 | 2762 | 2745\n> > > create index t1_parallel_2 on t1(id) | 1 | 20406811 | 2762 | 2758\n> > > (2 rows)\n> > >\n> > > It all looks good to me.\n> > >\n> >\n> > Here the wal_num_fpw and wal_bytes are different between parallel and\n> > non-parallel versions. Is it due to checkpoint or something else? 
We\n> > can probably rule out checkpoint by increasing checkpoint_timeout and\n> > other checkpoint related parameters.\n>\n> I think this is because I did a checkpoint after the VACUUM tests, so the 1st\n> CREATE INDEX (with parallelism) induced some FPW on the catalog blocks. I\n> didn't try to investigate more since:\n>\n\nWe need to do this.\n\n> On Thu, Apr 02, 2020 at 11:22:16AM +0530, Amit Kapila wrote:\n> >\n> > Also, I forgot to mention that let's not base this on buffer usage\n> > patch for create index\n> > (v10-0002-Allow-parallel-index-creation-to-accumulate-buff) because as\n> > per recent discussion I am not sure about its usefulness. I think we\n> > can proceed with this patch without\n> > v10-0002-Allow-parallel-index-creation-to-accumulate-buff as well.\n>\n>\n> Which is done in attached v11.\n>\n\nHmm, I haven't suggested removing the WAL usage from the parallel\ncreate index. I just told not to use the infrastructure of another\npatch. We bypass the buffer manager but do write WAL. See\n_bt_blwritepage->log_newpage. So we need to accumulate WAL usage even\nif we decide not to do anything about BufferUsage which means we need\nto investigate the above inconsistency in wal_num_fpw and wal_bytes\nbetween parallel and non-parallel version.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Apr 2020 14:32:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 02, 2020 at 02:32:07PM +0530, Amit Kapila wrote:\n> On Thu, Apr 2, 2020 at 2:00 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Thu, Apr 02, 2020 at 11:07:29AM +0530, Amit Kapila wrote:\n> > > On Wed, Apr 1, 2020 at 8:00 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > >\n> > > > On Wed, Apr 01, 2020 at 04:29:16PM +0530, Amit Kapila wrote:\n> > > > > 3. 
Doing some testing with and without parallelism to ensure WAL usage\n> > > > > data is correct would be great and if possible, share the results?\n> > > >\n> > > >\n> > > > I just saw that Dilip did some testing, but just in case here is some\n> > > > additional one\n> > > >\n> > > > - vacuum, after a truncate, loading 1M row and a \"UPDATE t1 SET id = id\"\n> > > >\n> > > > =# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%vacuum%';\n> > > > query | calls | wal_bytes | wal_records | wal_num_fpw\n> > > > ------------------------+-------+-----------+-------------+-------------\n> > > > vacuum (parallel 3) t1 | 1 | 20098962 | 34104 | 2\n> > > > vacuum (parallel 0) t1 | 1 | 20098962 | 34104 | 2\n> > > > (2 rows)\n> > > >\n> > > > - create index, overload t1's parallel_workers, using the 1M line just\n> > > > vacuumed:\n> > > >\n> > > > =# alter table t1 set (parallel_workers = 2);\n> > > > ALTER TABLE\n> > > >\n> > > > =# create index t1_parallel_2 on t1(id);\n> > > > CREATE INDEX\n> > > >\n> > > > =# alter table t1 set (parallel_workers = 0);\n> > > > ALTER TABLE\n> > > >\n> > > > =# create index t1_parallel_0 on t1(id);\n> > > > CREATE INDEX\n> > > >\n> > > > =# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%create index%';\n> > > > query | calls | wal_bytes | wal_records | wal_num_fpw\n> > > > --------------------------------------+-------+-----------+-------------+-------------\n> > > > create index t1_parallel_0 on t1(id) | 1 | 20355540 | 2762 | 2745\n> > > > create index t1_parallel_2 on t1(id) | 1 | 20406811 | 2762 | 2758\n> > > > (2 rows)\n> > > >\n> > > > It all looks good to me.\n> > > >\n> > >\n> > > Here the wal_num_fpw and wal_bytes are different between parallel and\n> > > non-parallel versions. Is it due to checkpoint or something else? 
We\n> > > can probably rule out checkpoint by increasing checkpoint_timeout and\n> > > other checkpoint related parameters.\n> >\n> > I think this is because I did a checkpoint after the VACUUM tests, so the 1st\n> > CREATE INDEX (with parallelism) induced some FPW on the catalog blocks. I\n> > didn't try to investigate more since:\n> >\n> \n> We need to do this.\n> \n> > On Thu, Apr 02, 2020 at 11:22:16AM +0530, Amit Kapila wrote:\n> > >\n> > > Also, I forgot to mention that let's not base this on buffer usage\n> > > patch for create index\n> > > (v10-0002-Allow-parallel-index-creation-to-accumulate-buff) because as\n> > > per recent discussion I am not sure about its usefulness. I think we\n> > > can proceed with this patch without\n> > > v10-0002-Allow-parallel-index-creation-to-accumulate-buff as well.\n> >\n> >\n> > Which is done in attached v11.\n> >\n> \n> Hmm, I haven't suggested removing the WAL usage from the parallel\n> create index. I just told not to use the infrastructure of another\n> patch. We bypass the buffer manager but do write WAL. See\n> _bt_blwritepage->log_newpage. 
So we need to accumulate WAL usage even\n> if we decide not to do anything about BufferUsage which means we need\n> to investigate the above inconsistency in wal_num_fpw and wal_bytes\n> between parallel and non-parallel version.\n\n\nOh, I thought that you wanted to wait on that part, as we'll probably change\nthe parallel create index to report buffer access eventually.\n\nv12 attached with an adaptation of Sawada-san's original patch but only dealing\nwith WAL activity.\n\nI did some more experiment, ensuring as much stability as possible:\n\n=# create table t1(id integer);\nCREATE TABLE\n=# insert into t1 select * from generate_series(1, 1000000);\nINSERT 0 1000000\n=# select * from pg_stat_statements_reset() ;\n pg_stat_statements_reset\n--------------------------\n\n(1 row)\n\n=# alter table t1 set (parallel_workers = 0);\nALTER TABLE\n=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\n=# create index t1_idx_parallel_0 ON t1(id);\nCREATE INDEX\n\n=# alter table t1 set (parallel_workers = 1);\nALTER TABLE\n=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\n=# create index t1_idx_parallel_1 ON t1(id);\nCREATE INDEX\n\n=# alter table t1 set (parallel_workers = 2);\nALTER TABLE\n=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\n=# create index t1_idx_parallel_2 ON t1(id);\nCREATE INDEX\n\n=# alter table t1 set (parallel_workers = 3);\nALTER TABLE\n=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\n=# create index t1_idx_parallel_3 ON t1(id);\nCREATE INDEX\n\n=# alter table t1 set (parallel_workers = 4);\nALTER TABLE\n=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\n=# create index t1_idx_parallel_4 ON t1(id);\nCREATE INDEX\n\n=# alter table t1 set (parallel_workers = 5);\nALTER TABLE\n=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\n=# create index t1_idx_parallel_5 ON t1(id);\nCREATE INDEX\n\n=# alter table t1 set (parallel_workers = 6);\nALTER TABLE\n=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\n=# create index t1_idx_parallel_6 ON t1(id);\nCREATE INDEX\n\n=# alter table t1 set (parallel_workers = 
7);\nALTER TABLE\n=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\n=# create index t1_idx_parallel_7 ON t1(id);\nCREATE INDEX\n\n=# alter table t1 set (parallel_workers = 8);\nALTER TABLE\n=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\n=# create index t1_idx_parallel_8 ON t1(id);\nCREATE INDEX\n\n=# alter table t1 set (parallel_workers = 0);\nALTER TABLE\n=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\n=# create index t1_idx_parallel_0_bis ON t1(id);\nCREATE INDEX\n=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\n=# create index t1_idx_parallel_0_ter ON t1(id);\nCREATE INDEX\n\n=# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%create index%';\n query | calls | wal_bytes | wal_records | wal_num_fpw\n----------------------------------------------+-------+-----------+-------------+-------------\n create index t1_idx_parallel_0 ON t1(id) | 1 | 20389743 | 2762 | 2758\n create index t1_idx_parallel_0_bis ON t1(id) | 1 | 20394391 | 2762 | 2758\n create index t1_idx_parallel_0_ter ON t1(id) | 1 | 20395155 | 2762 | 2758\n create index t1_idx_parallel_1 ON t1(id) | 1 | 20388335 | 2762 | 2758\n create index t1_idx_parallel_2 ON t1(id) | 1 | 20389091 | 2762 | 2758\n create index t1_idx_parallel_3 ON t1(id) | 1 | 20389847 | 2762 | 2758\n create index t1_idx_parallel_4 ON t1(id) | 1 | 20390603 | 2762 | 2758\n create index t1_idx_parallel_5 ON t1(id) | 1 | 20391359 | 2762 | 2758\n create index t1_idx_parallel_6 ON t1(id) | 1 | 20392115 | 2762 | 2758\n create index t1_idx_parallel_7 ON t1(id) | 1 | 20392871 | 2762 | 2758\n create index t1_idx_parallel_8 ON t1(id) | 1 | 20393627 | 2762 | 2758\n(11 rows)\n\n=# select relname, pg_relation_size(oid) from pg_class where relname like '%t1_id%';\n relname | pg_relation_size\n-----------------------+------------------\n t1_idx_parallel_0 | 22487040\n t1_idx_parallel_0_bis | 22487040\n t1_idx_parallel_0_ter | 22487040\n t1_idx_parallel_2 | 22487040\n t1_idx_parallel_1 | 22487040\n t1_idx_parallel_4 | 
22487040\n t1_idx_parallel_3 | 22487040\n t1_idx_parallel_5 | 22487040\n t1_idx_parallel_6 | 22487040\n t1_idx_parallel_7 | 22487040\n t1_idx_parallel_8 | 22487040\n(9 rows)\n\n\nSo while the number of WAL records and full page images stay constant, we can\nsee some small fluctuations in the total amount of generated WAL data, even for\nmultiple execution of the sequential create index. I'm wondering if the\nfluctuations are due to some other internal details or if the WalUsage support\nis just completely broken (although I don't see any obvious issue ATM).", "msg_date": "Thu, 2 Apr 2020 14:48:18 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 2, 2020 at 6:18 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> =# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%create index%';\n> query | calls | wal_bytes | wal_records | wal_num_fpw\n> ----------------------------------------------+-------+-----------+-------------+-------------\n> create index t1_idx_parallel_0 ON t1(id) | 1 | 20389743 | 2762 | 2758\n> create index t1_idx_parallel_0_bis ON t1(id) | 1 | 20394391 | 2762 | 2758\n> create index t1_idx_parallel_0_ter ON t1(id) | 1 | 20395155 | 2762 | 2758\n> create index t1_idx_parallel_1 ON t1(id) | 1 | 20388335 | 2762 | 2758\n> create index t1_idx_parallel_2 ON t1(id) | 1 | 20389091 | 2762 | 2758\n> create index t1_idx_parallel_3 ON t1(id) | 1 | 20389847 | 2762 | 2758\n> create index t1_idx_parallel_4 ON t1(id) | 1 | 20390603 | 2762 | 2758\n> create index t1_idx_parallel_5 ON t1(id) | 1 | 20391359 | 2762 | 2758\n> create index t1_idx_parallel_6 ON t1(id) | 1 | 20392115 | 2762 | 2758\n> create index t1_idx_parallel_7 ON t1(id) | 1 | 20392871 | 2762 | 2758\n> create index t1_idx_parallel_8 ON t1(id) | 1 | 20393627 | 2762 | 2758\n> (11 rows)\n>\n> =# select relname, pg_relation_size(oid) from pg_class 
where relname like '%t1_id%';\n> relname | pg_relation_size\n> -----------------------+------------------\n> t1_idx_parallel_0 | 22487040\n> t1_idx_parallel_0_bis | 22487040\n> t1_idx_parallel_0_ter | 22487040\n> t1_idx_parallel_2 | 22487040\n> t1_idx_parallel_1 | 22487040\n> t1_idx_parallel_4 | 22487040\n> t1_idx_parallel_3 | 22487040\n> t1_idx_parallel_5 | 22487040\n> t1_idx_parallel_6 | 22487040\n> t1_idx_parallel_7 | 22487040\n> t1_idx_parallel_8 | 22487040\n> (9 rows)\n>\n>\n> So while the number of WAL records and full page images stay constant, we can\n> see some small fluctuations in the total amount of generated WAL data, even for\n> multiple execution of the sequential create index. I'm wondering if the\n> fluctuations are due to some other internal details or if the WalUsage support\n> is just completely broken (although I don't see any obvious issue ATM).\n>\n\nI think we need to know the reason for this. Can you try with small\nsize indexes and see if the problem is reproducible? If it is, then it\nwill be easier to debug the same.\n\nFew other minor comments\n------------------------------------\npg_stat_statements patch\n1.\n+--\n+-- CRUD: INSERT SELECT UPDATE DELETE on test non-temp table to\nvalidate WAL generation metrics\n+--\n\nThe word 'non-temp' in the above comment appears out of place. 
We\ndon't need to specify it.\n\n2.\n+-- SELECT usage data, check WAL usage is reported, wal_records equal\nrows count for INSERT/UPDATE/DELETE\n+SELECT query, calls, rows,\n+wal_bytes > 0 as wal_bytes_generated,\n+wal_records > 0 as wal_records_generated,\n+wal_records = rows as wal_records_as_rows\n+FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n\nThe comment doesn't seem to match what we are doing in the statement.\nI think we can simplify it to something like \"check WAL is generated\nfor above statements:\n\n3.\n@@ -185,6 +185,9 @@ typedef struct Counters\n int64 local_blks_written; /* # of local disk blocks written */\n int64 temp_blks_read; /* # of temp blocks read */\n int64 temp_blks_written; /* # of temp blocks written */\n+ uint64 wal_bytes; /* total amount of WAL bytes generated */\n+ int64 wal_records; /* # of WAL records generated */\n+ int64 wal_num_fpw; /* # of WAL full page image generated */\n double blk_read_time; /* time spent reading, in msec */\n double blk_write_time; /* time spent writing, in msec */\n double usage; /* usage factor */\n\nIt is better to keep wal_bytes should be after wal_num_fpw as it is in\nthe main patch. Also, consider changing at other places in this\npatch. I think we should add these new fields after blk_write_time or\nat the end after usage.\n\n4.\n/* # of WAL full page image generated */\nCan we change it to \"/* # of WAL full page image records generated */\"?\n\nIf you agree, then a similar comment exists in\nv11-0001-Add-infrastructure-to-track-WAL-usage, consider changing that\nas well.\n\n\nv11-0002-Add-option-to-report-WAL-usage-in-EXPLAIN-and-au\n5.\nSpecifically, include the\n+ number of records, full page images and bytes generated.\n\nHow about making the above slightly clear? 
\"Specifically, include the\nnumber of records, number of full page image records and amount of WAL\nbytes generated.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Apr 2020 18:40:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 2, 2020 at 6:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 2, 2020 at 6:18 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > =# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%create index%';\n> > query | calls | wal_bytes | wal_records | wal_num_fpw\n> > ----------------------------------------------+-------+-----------+-------------+-------------\n> > create index t1_idx_parallel_0 ON t1(id) | 1 | 20389743 | 2762 | 2758\n> > create index t1_idx_parallel_0_bis ON t1(id) | 1 | 20394391 | 2762 | 2758\n> > create index t1_idx_parallel_0_ter ON t1(id) | 1 | 20395155 | 2762 | 2758\n> > create index t1_idx_parallel_1 ON t1(id) | 1 | 20388335 | 2762 | 2758\n> > create index t1_idx_parallel_2 ON t1(id) | 1 | 20389091 | 2762 | 2758\n> > create index t1_idx_parallel_3 ON t1(id) | 1 | 20389847 | 2762 | 2758\n> > create index t1_idx_parallel_4 ON t1(id) | 1 | 20390603 | 2762 | 2758\n> > create index t1_idx_parallel_5 ON t1(id) | 1 | 20391359 | 2762 | 2758\n> > create index t1_idx_parallel_6 ON t1(id) | 1 | 20392115 | 2762 | 2758\n> > create index t1_idx_parallel_7 ON t1(id) | 1 | 20392871 | 2762 | 2758\n> > create index t1_idx_parallel_8 ON t1(id) | 1 | 20393627 | 2762 | 2758\n> > (11 rows)\n> >\n> > =# select relname, pg_relation_size(oid) from pg_class where relname like '%t1_id%';\n> > relname | pg_relation_size\n> > -----------------------+------------------\n> > t1_idx_parallel_0 | 22487040\n> > t1_idx_parallel_0_bis | 22487040\n> > t1_idx_parallel_0_ter | 22487040\n> > t1_idx_parallel_2 | 
22487040\n> > t1_idx_parallel_1 | 22487040\n> > t1_idx_parallel_4 | 22487040\n> > t1_idx_parallel_3 | 22487040\n> > t1_idx_parallel_5 | 22487040\n> > t1_idx_parallel_6 | 22487040\n> > t1_idx_parallel_7 | 22487040\n> > t1_idx_parallel_8 | 22487040\n> > (9 rows)\n> >\n> >\n> > So while the number of WAL records and full page images stay constant, we can\n> > see some small fluctuations in the total amount of generated WAL data, even for\n> > multiple execution of the sequential create index. I'm wondering if the\n> > fluctuations are due to some other internal details or if the WalUsage support\n> > is just completely broken (although I don't see any obvious issue ATM).\n> >\n>\n> I think we need to know the reason for this. Can you try with small\n> size indexes and see if the problem is reproducible? If it is, then it\n> will be easier to debug the same.\n>\n> Few other minor comments\n> ------------------------------------\n> pg_stat_statements patch\n> 1.\n> +--\n> +-- CRUD: INSERT SELECT UPDATE DELETE on test non-temp table to\n> validate WAL generation metrics\n> +--\n>\n> The word 'non-temp' in the above comment appears out of place. 
We\n> don't need to specify it.\n>\n> 2.\n> +-- SELECT usage data, check WAL usage is reported, wal_records equal\n> rows count for INSERT/UPDATE/DELETE\n> +SELECT query, calls, rows,\n> +wal_bytes > 0 as wal_bytes_generated,\n> +wal_records > 0 as wal_records_generated,\n> +wal_records = rows as wal_records_as_rows\n> +FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n>\n> The comment doesn't seem to match what we are doing in the statement.\n> I think we can simplify it to something like \"check WAL is generated\n> for above statements:\n>\n> 3.\n> @@ -185,6 +185,9 @@ typedef struct Counters\n> int64 local_blks_written; /* # of local disk blocks written */\n> int64 temp_blks_read; /* # of temp blocks read */\n> int64 temp_blks_written; /* # of temp blocks written */\n> + uint64 wal_bytes; /* total amount of WAL bytes generated */\n> + int64 wal_records; /* # of WAL records generated */\n> + int64 wal_num_fpw; /* # of WAL full page image generated */\n> double blk_read_time; /* time spent reading, in msec */\n> double blk_write_time; /* time spent writing, in msec */\n> double usage; /* usage factor */\n>\n> It is better to keep wal_bytes should be after wal_num_fpw as it is in\n> the main patch. Also, consider changing at other places in this\n> patch. I think we should add these new fields after blk_write_time or\n> at the end after usage.\n>\n> 4.\n> /* # of WAL full page image generated */\n> Can we change it to \"/* # of WAL full page image records generated */\"?\n\nIMHO, \"# of WAL full-page image records\" seems like the number of wal\nrecord which contains the full-page image. 
But, actually, this is the\ntotal number of the full-page images, not the number of records that\nhave a full-page image.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 2 Apr 2020 20:06:38 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 02, 2020 at 06:40:51PM +0530, Amit Kapila wrote:\n> On Thu, Apr 2, 2020 at 6:18 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > =# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%create index%';\n> > query | calls | wal_bytes | wal_records | wal_num_fpw\n> > ----------------------------------------------+-------+-----------+-------------+-------------\n> > create index t1_idx_parallel_0 ON t1(id) | 1 | 20389743 | 2762 | 2758\n> > create index t1_idx_parallel_0_bis ON t1(id) | 1 | 20394391 | 2762 | 2758\n> > create index t1_idx_parallel_0_ter ON t1(id) | 1 | 20395155 | 2762 | 2758\n> > create index t1_idx_parallel_1 ON t1(id) | 1 | 20388335 | 2762 | 2758\n> > create index t1_idx_parallel_2 ON t1(id) | 1 | 20389091 | 2762 | 2758\n> > create index t1_idx_parallel_3 ON t1(id) | 1 | 20389847 | 2762 | 2758\n> > create index t1_idx_parallel_4 ON t1(id) | 1 | 20390603 | 2762 | 2758\n> > create index t1_idx_parallel_5 ON t1(id) | 1 | 20391359 | 2762 | 2758\n> > create index t1_idx_parallel_6 ON t1(id) | 1 | 20392115 | 2762 | 2758\n> > create index t1_idx_parallel_7 ON t1(id) | 1 | 20392871 | 2762 | 2758\n> > create index t1_idx_parallel_8 ON t1(id) | 1 | 20393627 | 2762 | 2758\n> > (11 rows)\n> >\n> > =# select relname, pg_relation_size(oid) from pg_class where relname like '%t1_id%';\n> > relname | pg_relation_size\n> > -----------------------+------------------\n> > t1_idx_parallel_0 | 22487040\n> > t1_idx_parallel_0_bis | 22487040\n> > t1_idx_parallel_0_ter | 22487040\n> > t1_idx_parallel_2 | 22487040\n> > 
t1_idx_parallel_1 | 22487040\n> > t1_idx_parallel_4 | 22487040\n> > t1_idx_parallel_3 | 22487040\n> > t1_idx_parallel_5 | 22487040\n> > t1_idx_parallel_6 | 22487040\n> > t1_idx_parallel_7 | 22487040\n> > t1_idx_parallel_8 | 22487040\n> > (9 rows)\n> >\n> >\n> > So while the number of WAL records and full page images stay constant, we can\n> > see some small fluctuations in the total amount of generated WAL data, even for\n> > multiple execution of the sequential create index. I'm wondering if the\n> > fluctuations are due to some other internal details or if the WalUsage support\n> > is just completely broken (although I don't see any obvious issue ATM).\n> >\n> \n> I think we need to know the reason for this. Can you try with small\n> size indexes and see if the problem is reproducible? If it is, then it\n> will be easier to debug the same.\n\n\nI did some quick testing using the attached shell script:\n\n- one a 1k line base number of lines, scales 1 10 100 1000 (suffix _s)\n- parallel workers from 0 to 8 (suffix _w)\n- each index created twice (suffix _pa and _pb)\n- with a vacuum;checkpoint;pg_switch_wal executed each time\n\nI get the following results:\n\n query | wal_bytes | wal_records | wal_num_fpw \n--------------------------------------------+-----------+-------------+-------------\n CREATE INDEX t1_idx_s001_pa_w0 ON t1 (id) | 61871 | 22 | 18\n CREATE INDEX t1_idx_s001_pa_w1 ON t1 (id) | 62394 | 21 | 18\n CREATE INDEX t1_idx_s001_pa_w2 ON t1 (id) | 63150 | 21 | 18\n CREATE INDEX t1_idx_s001_pa_w3 ON t1 (id) | 63906 | 21 | 18\n CREATE INDEX t1_idx_s001_pa_w4 ON t1 (id) | 64662 | 21 | 18\n CREATE INDEX t1_idx_s001_pa_w5 ON t1 (id) | 65418 | 21 | 18\n CREATE INDEX t1_idx_s001_pa_w6 ON t1 (id) | 65450 | 21 | 18\n CREATE INDEX t1_idx_s001_pa_w7 ON t1 (id) | 66206 | 21 | 18\n CREATE INDEX t1_idx_s001_pa_w8 ON t1 (id) | 66962 | 21 | 18\n CREATE INDEX t1_idx_s001_pb_w0 ON t1 (id) | 67718 | 21 | 18\n CREATE INDEX t1_idx_s001_pb_w1 ON t1 (id) | 68474 | 21 | 18\n 
CREATE INDEX t1_idx_s001_pb_w2 ON t1 (id) | 68418 | 21 | 18\n CREATE INDEX t1_idx_s001_pb_w3 ON t1 (id) | 69174 | 21 | 18\n CREATE INDEX t1_idx_s001_pb_w4 ON t1 (id) | 69930 | 21 | 18\n CREATE INDEX t1_idx_s001_pb_w5 ON t1 (id) | 70686 | 21 | 18\n CREATE INDEX t1_idx_s001_pb_w6 ON t1 (id) | 71442 | 21 | 18\n CREATE INDEX t1_idx_s001_pb_w7 ON t1 (id) | 64922 | 21 | 18\n CREATE INDEX t1_idx_s001_pb_w8 ON t1 (id) | 65682 | 21 | 18\n CREATE INDEX t1_idx_s010_pa_w0 ON t1 (id) | 250460 | 47 | 44\n CREATE INDEX t1_idx_s010_pa_w1 ON t1 (id) | 251216 | 47 | 44\n CREATE INDEX t1_idx_s010_pa_w2 ON t1 (id) | 251972 | 47 | 44\n CREATE INDEX t1_idx_s010_pa_w3 ON t1 (id) | 252728 | 47 | 44\n CREATE INDEX t1_idx_s010_pa_w4 ON t1 (id) | 253484 | 47 | 44\n CREATE INDEX t1_idx_s010_pa_w5 ON t1 (id) | 254240 | 47 | 44\n CREATE INDEX t1_idx_s010_pa_w6 ON t1 (id) | 253552 | 47 | 44\n CREATE INDEX t1_idx_s010_pa_w7 ON t1 (id) | 254308 | 47 | 44\n CREATE INDEX t1_idx_s010_pa_w8 ON t1 (id) | 255064 | 47 | 44\n CREATE INDEX t1_idx_s010_pb_w0 ON t1 (id) | 255820 | 47 | 44\n CREATE INDEX t1_idx_s010_pb_w1 ON t1 (id) | 256576 | 47 | 44\n CREATE INDEX t1_idx_s010_pb_w2 ON t1 (id) | 257332 | 47 | 44\n CREATE INDEX t1_idx_s010_pb_w3 ON t1 (id) | 258088 | 47 | 44\n CREATE INDEX t1_idx_s010_pb_w4 ON t1 (id) | 258844 | 47 | 44\n CREATE INDEX t1_idx_s010_pb_w5 ON t1 (id) | 259600 | 47 | 44\n CREATE INDEX t1_idx_s010_pb_w6 ON t1 (id) | 260356 | 47 | 44\n CREATE INDEX t1_idx_s010_pb_w7 ON t1 (id) | 260012 | 47 | 44\n CREATE INDEX t1_idx_s010_pb_w8 ON t1 (id) | 260768 | 47 | 44\n CREATE INDEX t1_idx_s1000_pa_w0 ON t1 (id) | 20400595 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pa_w1 ON t1 (id) | 20401351 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pa_w2 ON t1 (id) | 20402107 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pa_w3 ON t1 (id) | 20402863 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pa_w4 ON t1 (id) | 20403619 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pa_w5 ON t1 (id) | 20404375 | 2762 | 2759\n CREATE 
INDEX t1_idx_s1000_pa_w6 ON t1 (id) | 20403687 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pa_w7 ON t1 (id) | 20404443 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pa_w8 ON t1 (id) | 20405199 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pb_w0 ON t1 (id) | 20405955 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pb_w1 ON t1 (id) | 20406711 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pb_w2 ON t1 (id) | 20407467 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pb_w3 ON t1 (id) | 20408223 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pb_w4 ON t1 (id) | 20408979 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pb_w5 ON t1 (id) | 20409735 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pb_w6 ON t1 (id) | 20410491 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pb_w7 ON t1 (id) | 20410147 | 2762 | 2759\n CREATE INDEX t1_idx_s1000_pb_w8 ON t1 (id) | 20410903 | 2762 | 2759\n CREATE INDEX t1_idx_s100_pa_w0 ON t1 (id) | 2082194 | 293 | 290\n CREATE INDEX t1_idx_s100_pa_w1 ON t1 (id) | 2082950 | 293 | 290\n CREATE INDEX t1_idx_s100_pa_w2 ON t1 (id) | 2083706 | 293 | 290\n CREATE INDEX t1_idx_s100_pa_w3 ON t1 (id) | 2084462 | 293 | 290\n CREATE INDEX t1_idx_s100_pa_w4 ON t1 (id) | 2085218 | 293 | 290\n CREATE INDEX t1_idx_s100_pa_w5 ON t1 (id) | 2085974 | 293 | 290\n CREATE INDEX t1_idx_s100_pa_w6 ON t1 (id) | 2085286 | 293 | 290\n CREATE INDEX t1_idx_s100_pa_w7 ON t1 (id) | 2086042 | 293 | 290\n CREATE INDEX t1_idx_s100_pa_w8 ON t1 (id) | 2086798 | 293 | 290\n CREATE INDEX t1_idx_s100_pb_w0 ON t1 (id) | 2087554 | 293 | 290\n CREATE INDEX t1_idx_s100_pb_w1 ON t1 (id) | 2088310 | 293 | 290\n CREATE INDEX t1_idx_s100_pb_w2 ON t1 (id) | 2089066 | 293 | 290\n CREATE INDEX t1_idx_s100_pb_w3 ON t1 (id) | 2089822 | 293 | 290\n CREATE INDEX t1_idx_s100_pb_w4 ON t1 (id) | 2090578 | 293 | 290\n CREATE INDEX t1_idx_s100_pb_w5 ON t1 (id) | 2091334 | 293 | 290\n CREATE INDEX t1_idx_s100_pb_w6 ON t1 (id) | 2092090 | 293 | 290\n CREATE INDEX t1_idx_s100_pb_w7 ON t1 (id) | 2091746 | 293 | 290\n CREATE INDEX t1_idx_s100_pb_w8 ON t1 (id) | 
2092502 | 293 | 290\n(72 rows)\n\nThe fluctuations exist for all scales, but doesn't seem to depend on the input\nsize.\n\n\nJust to be sure I tried to measure the amount of WAL for various INSERT size\nusing roughly the same approach, and results are stable:\n\n query | wal_bytes | wal_records | wal_num_fpw\n-----------------------------------------------------+-----------+-------------+-------------\n INSERT INTO t_001_a SELECT generate_series($1, $2) | 59000 | 1000 | 0\n INSERT INTO t_001_b SELECT generate_series($1, $2) | 59000 | 1000 | 0\n INSERT INTO t_010_a SELECT generate_series($1, $2) | 590000 | 10000 | 0\n INSERT INTO t_010_b SELECT generate_series($1, $2) | 590000 | 10000 | 0\n INSERT INTO t_1000_a SELECT generate_series($1, $2) | 59000000 | 1000000 | 0\n INSERT INTO t_1000_b SELECT generate_series($1, $2) | 59000000 | 1000000 | 0\n INSERT INTO t_100_a SELECT generate_series($1, $2) | 5900000 | 100000 | 0\n INSERT INTO t_100_b SELECT generate_series($1, $2) | 5900000 | 100000 | 0\n(8 rows)\n\n\nAt this point I tend to think that this is somehow due to btbuild specific\nbehavior, or somewhere nearby.\n\n\n> Few other minor comments\n> ------------------------------------\n> pg_stat_statements patch\n> 1.\n> +--\n> +-- CRUD: INSERT SELECT UPDATE DELETE on test non-temp table to\n> validate WAL generation metrics\n> +--\n> \n> The word 'non-temp' in the above comment appears out of place. 
We\n> don't need to specify it.\n\n\nFixed.\n\n\n> 2.\n> +-- SELECT usage data, check WAL usage is reported, wal_records equal\n> rows count for INSERT/UPDATE/DELETE\n> +SELECT query, calls, rows,\n> +wal_bytes > 0 as wal_bytes_generated,\n> +wal_records > 0 as wal_records_generated,\n> +wal_records = rows as wal_records_as_rows\n> +FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n> \n> The comment doesn't seem to match what we are doing in the statement.\n> I think we can simplify it to something like \"check WAL is generated\n> for above statements:\n\n\nDone.\n\n\n> 3.\n> @@ -185,6 +185,9 @@ typedef struct Counters\n> int64 local_blks_written; /* # of local disk blocks written */\n> int64 temp_blks_read; /* # of temp blocks read */\n> int64 temp_blks_written; /* # of temp blocks written */\n> + uint64 wal_bytes; /* total amount of WAL bytes generated */\n> + int64 wal_records; /* # of WAL records generated */\n> + int64 wal_num_fpw; /* # of WAL full page image generated */\n> double blk_read_time; /* time spent reading, in msec */\n> double blk_write_time; /* time spent writing, in msec */\n> double usage; /* usage factor */\n> \n> It is better to keep wal_bytes should be after wal_num_fpw as it is in\n> the main patch. Also, consider changing at other places in this\n> patch. I think we should add these new fields after blk_write_time or\n> at the end after usage.\n\n\nDone.\n\n\n> 4.\n> /* # of WAL full page image generated */\n> Can we change it to \"/* # of WAL full page image records generated */\"?\n> \n> If you agree, then a similar comment exists in\n> v11-0001-Add-infrastructure-to-track-WAL-usage, consider changing that\n> as well.\n\n\nAgreed, and fixed in both place.\n\n\n> v11-0002-Add-option-to-report-WAL-usage-in-EXPLAIN-and-au\n> 5.\n> Specifically, include the\n> + number of records, full page images and bytes generated.\n> \n> How about making the above slightly clear? 
\"Specifically, include the\n> number of records, number of full page image records and amount of WAL\n> bytes generated.\n\n\nThanks, that's clearer. Done", "msg_date": "Thu, 2 Apr 2020 16:44:38 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 2, 2020 at 6:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 2, 2020 at 6:18 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > =# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%create index%';\n> > query | calls | wal_bytes | wal_records | wal_num_fpw\n> > ----------------------------------------------+-------+-----------+-------------+-------------\n> > create index t1_idx_parallel_0 ON t1(id) | 1 | 20389743 | 2762 | 2758\n> > create index t1_idx_parallel_0_bis ON t1(id) | 1 | 20394391 | 2762 | 2758\n> > create index t1_idx_parallel_0_ter ON t1(id) | 1 | 20395155 | 2762 | 2758\n> > create index t1_idx_parallel_1 ON t1(id) | 1 | 20388335 | 2762 | 2758\n> > create index t1_idx_parallel_2 ON t1(id) | 1 | 20389091 | 2762 | 2758\n> > create index t1_idx_parallel_3 ON t1(id) | 1 | 20389847 | 2762 | 2758\n> > create index t1_idx_parallel_4 ON t1(id) | 1 | 20390603 | 2762 | 2758\n> > create index t1_idx_parallel_5 ON t1(id) | 1 | 20391359 | 2762 | 2758\n> > create index t1_idx_parallel_6 ON t1(id) | 1 | 20392115 | 2762 | 2758\n> > create index t1_idx_parallel_7 ON t1(id) | 1 | 20392871 | 2762 | 2758\n> > create index t1_idx_parallel_8 ON t1(id) | 1 | 20393627 | 2762 | 2758\n> > (11 rows)\n> >\n> > =# select relname, pg_relation_size(oid) from pg_class where relname like '%t1_id%';\n> > relname | pg_relation_size\n> > -----------------------+------------------\n> > t1_idx_parallel_0 | 22487040\n> > t1_idx_parallel_0_bis | 22487040\n> > t1_idx_parallel_0_ter | 22487040\n> > t1_idx_parallel_2 | 22487040\n> > t1_idx_parallel_1 | 22487040\n> > 
t1_idx_parallel_4     | 22487040\n> > t1_idx_parallel_3     | 22487040\n> > t1_idx_parallel_5     | 22487040\n> > t1_idx_parallel_6     | 22487040\n> > t1_idx_parallel_7     | 22487040\n> > t1_idx_parallel_8     | 22487040\n> > (9 rows)\n> >\n> >\n> > So while the number of WAL records and full page images stay constant, we can\n> > see some small fluctuations in the total amount of generated WAL data, even for\n> > multiple execution of the sequential create index. I'm wondering if the\n> > fluctuations are due to some other internal details or if the WalUsage support\n> > is just completely broken (although I don't see any obvious issue ATM).\n> >\n>\n> I think we need to know the reason for this. Can you try with small\n> size indexes and see if the problem is reproducible? If it is, then it\n> will be easier to debug the same.\n\nI have done some testing to see where this extra WAL size is coming\nfrom. First I tried creating a new db before every run, and then the size\nis consistent. But then, on the same server, I tried it as Julien showed\nin his experiment, and I am getting a few extra WAL bytes from the next\ncreate index onwards. And the waldump (attached in the mail) shows\nthat it is a pg_class insert WAL record. 
I still have to check that why we need\nto write an extra wal size.\n\ncreate extension pg_stat_statements;\ndrop table t1;\ncreate table t1(id integer);\ninsert into t1 select * from generate_series(1, 10);\nalter table t1 set (parallel_workers = 0);\nvacuum;checkpoint;\nselect * from pg_stat_statements_reset() ;\ncreate index t1_idx_parallel_0 ON t1(id);\nselect query, calls, wal_bytes, wal_records, wal_num_fpw from\npg_stat_statements where query ilike '%create index%';;\n query\n | calls | wal_bytes | wal_records | wal_num_fpw\n----------------------------------------------------------------------------------+-------+-----------+-------------+-------------\n create index t1_idx_parallel_0 ON t1(id)\n | 1 | 49320 | 23 | 15\n\n\ndrop table t1;\ncreate table t1(id integer);\ninsert into t1 select * from generate_series(1, 10);\n--select * from pg_stat_statements_reset() ;\nalter table t1 set (parallel_workers = 0);\nvacuum;checkpoint;\ncreate index t1_idx_parallel_1 ON t1(id);\n\nselect query, calls, wal_bytes, wal_records, wal_num_fpw from\npg_stat_statements where query ilike '%create index%';;\npostgres[110383]=# select query, calls, wal_bytes, wal_records,\nwal_num_fpw from pg_stat_statements;\n query\n | calls | wal_bytes | wal_records | wal_num_fpw\n----------------------------------------------------------------------------------+-------+-----------+-------------+-------------\n create index t1_idx_parallel_1 ON t1(id)\n | 1 | 50040 | 23 | 15\n\nwal_bytes diff = 50040-49320 = 720\n\nBelow, WAL record is causing the 720 bytes difference, all other WALs\nare of the same size.\nt1_idx_parallel_0:\nrmgr: Heap len (rec/tot): 54/ 7498, tx: 489, lsn:\n0/0167B9B0, prev 0/0167B970, desc: INSERT off 30 flags 0x01, blkref\n#0: rel 1663/13580/1249\n\nt1_idx_parallel_1:\nrmgr: Heap len (rec/tot): 54/ 8218, tx: 494, lsn:\n0/016B84F8, prev 0/016B84B8, desc: INSERT off 30 flags 0x01, blkref\n#0: rel 1663/13580/1249\n\nwal diff: 8218 - 7498 = 720\n\n\n-- \nRegards,\nDilip 
Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 2 Apr 2020 21:28:13 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 2, 2020 at 8:06 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Apr 2, 2020 at 6:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > 4.\n> > /* # of WAL full page image generated */\n> > Can we change it to \"/* # of WAL full page image records generated */\"?\n>\n> IMHO, \"# of WAL full-page image records\" seems like the number of wal\n> record which contains the full-page image.\n>\n\nI think this resembles what you have written here.\n\n> But, actually, this is the\n> total number of the full-page images, not the number of records that\n> have a full-page image.\n>\n\nWe count this when forming WAL records. As per my understanding, all\nthree counters are about WAL records. This counter tells how many\nrecords have full page images and one of the purposes of having this\ncounter is to check what percentage of records contain full page\nimage.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 06:37:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "Hello.\n\nThe v13 patch seems failing to apply on the master.\n\nAt Fri, 3 Apr 2020 06:37:21 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Thu, Apr 2, 2020 at 8:06 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, Apr 2, 2020 at 6:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > 4.\n> > > /* # of WAL full page image generated */\n> > > Can we change it to \"/* # of WAL full page image records generated */\"?\n> >\n> > IMHO, \"# of WAL full-page image records\" seems like the number of wal\n> > record which contains the full-page 
image.\n> >\n> \n> I think this resembles what you have written here.\n> \n> > But, actually, this is the\n> > total number of the full-page images, not the number of records that\n> > have a full-page image.\n> >\n> \n> We count this when forming WAL records. As per my understanding, all\n> three counters are about WAL records. This counter tells how many\n> records have full page images and one of the purposes of having this\n> counter is to check what percentage of records contain full page\n> image.\n\nAside from which is desirable or useful, acutually XLogRecordAssemble\nin v13-0001 counts the number of attached images then XLogInsertRecord\nsums up the number of images in pgWalUsage.wal_num_fpw.\n\nFWIW, it seems to me that the main concern here is the source of WAL\nsize. If it is the case I think that the number of full page image is\nmore useful.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 03 Apr 2020 10:45:35 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Fri, Apr 3, 2020 at 6:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 2, 2020 at 8:06 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, Apr 2, 2020 at 6:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > 4.\n> > > /* # of WAL full page image generated */\n> > > Can we change it to \"/* # of WAL full page image records generated */\"?\n> >\n> > IMHO, \"# of WAL full-page image records\" seems like the number of wal\n> > record which contains the full-page image.\n> >\n>\n> I think this resembles what you have written here.\n>\n> > But, actually, this is the\n> > total number of the full-page images, not the number of records that\n> > have a full-page image.\n> >\n>\n> We count this when forming WAL records. As per my understanding, all\n> three counters are about WAL records. 
This counter tells how many\n> records have full page images and one of the purposes of having this\n> counter is to check what percentage of records contain full page\n> image.\n>\n\nHow about if say \"# of full-page writes generated\" or \"# of WAL\nfull-page writes generated\"? I think now I understand your concern\nbecause we want to display it as full page writes and the comment\ndoesn't seem to indicate the same.\n\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 08:13:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Fri, Apr 3, 2020 at 8:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 3, 2020 at 6:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Apr 2, 2020 at 8:06 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Thu, Apr 2, 2020 at 6:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > 4.\n> > > > /* # of WAL full page image generated */\n> > > > Can we change it to \"/* # of WAL full page image records generated */\"?\n> > >\n> > > IMHO, \"# of WAL full-page image records\" seems like the number of wal\n> > > record which contains the full-page image.\n> > >\n> >\n> > I think this resembles what you have written here.\n> >\n> > > But, actually, this is the\n> > > total number of the full-page images, not the number of records that\n> > > have a full-page image.\n> > >\n> >\n> > We count this when forming WAL records. As per my understanding, all\n> > three counters are about WAL records. This counter tells how many\n> > records have full page images and one of the purposes of having this\n> > counter is to check what percentage of records contain full page\n> > image.\n> >\n>\n> How about if say \"# of full-page writes generated\" or \"# of WAL\n> full-page writes generated\"? 
I think now I understand your concern\n> because we want to display it as full page writes and the comment\n> doesn't seem to indicate the same.\n\nEither of these seem good to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 08:24:38 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Fri, Apr 3, 2020 at 7:15 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Hello.\n>\n> The v13 patch seems failing to apply on the master.\n>\n\nIt is probably due to recent commit ed7a509571. I have briefly\nstudied that and I think we should make this patch account for plan\ntime WAL usage if any similar to what got committed for buffer usage.\nThe reason is that there is a possibility that during planning we\nmight write a WAL due to hint bits.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 08:33:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 2, 2020 at 9:28 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Apr 2, 2020 at 6:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Apr 2, 2020 at 6:18 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > =# select query, calls, wal_bytes, wal_records, wal_num_fpw from pg_stat_statements where query ilike '%create index%';\n> > > query | calls | wal_bytes | wal_records | wal_num_fpw\n> > > ----------------------------------------------+-------+-----------+-------------+-------------\n> > > create index t1_idx_parallel_0 ON t1(id) | 1 | 20389743 | 2762 | 2758\n> > > create index t1_idx_parallel_0_bis ON t1(id) | 1 | 20394391 | 2762 | 2758\n> > > create index t1_idx_parallel_0_ter ON t1(id) | 1 | 20395155 | 2762 | 2758\n> > > create index 
t1_idx_parallel_1 ON t1(id) | 1 | 20388335 | 2762 | 2758\n> > > create index t1_idx_parallel_2 ON t1(id) | 1 | 20389091 | 2762 | 2758\n> > > create index t1_idx_parallel_3 ON t1(id) | 1 | 20389847 | 2762 | 2758\n> > > create index t1_idx_parallel_4 ON t1(id) | 1 | 20390603 | 2762 | 2758\n> > > create index t1_idx_parallel_5 ON t1(id) | 1 | 20391359 | 2762 | 2758\n> > > create index t1_idx_parallel_6 ON t1(id) | 1 | 20392115 | 2762 | 2758\n> > > create index t1_idx_parallel_7 ON t1(id) | 1 | 20392871 | 2762 | 2758\n> > > create index t1_idx_parallel_8 ON t1(id) | 1 | 20393627 | 2762 | 2758\n> > > (11 rows)\n> > >\n> > > =# select relname, pg_relation_size(oid) from pg_class where relname like '%t1_id%';\n> > > relname | pg_relation_size\n> > > -----------------------+------------------\n> > > t1_idx_parallel_0 | 22487040\n> > > t1_idx_parallel_0_bis | 22487040\n> > > t1_idx_parallel_0_ter | 22487040\n> > > t1_idx_parallel_2 | 22487040\n> > > t1_idx_parallel_1 | 22487040\n> > > t1_idx_parallel_4 | 22487040\n> > > t1_idx_parallel_3 | 22487040\n> > > t1_idx_parallel_5 | 22487040\n> > > t1_idx_parallel_6 | 22487040\n> > > t1_idx_parallel_7 | 22487040\n> > > t1_idx_parallel_8 | 22487040\n> > > (9 rows)\n> > >\n> > >\n> > > So while the number of WAL records and full page images stay constant, we can\n> > > see some small fluctuations in the total amount of generated WAL data, even for\n> > > multiple execution of the sequential create index. I'm wondering if the\n> > > fluctuations are due to some other internal details or if the WalUsage support\n> > > is just completely broken (although I don't see any obvious issue ATM).\n> > >\n> >\n> > I think we need to know the reason for this. Can you try with small\n> > size indexes and see if the problem is reproducible? If it is, then it\n> > will be easier to debug the same.\n>\n> I have done some testing to see where these extra WAL size is coming\n> from. 
First I tried to create new db before every run then the size\n> is consistent. But, then on the same server, I tired as Julien showed\n> in his experiment then I am getting few extra wal bytes from next\n> create index onwards. And, the waldump(attached in the mail) shows\n> that is pg_class insert wal. I still have to check that why we need\n> to write an extra wal size.\n>\n> create extension pg_stat_statements;\n> drop table t1;\n> create table t1(id integer);\n> insert into t1 select * from generate_series(1, 10);\n> alter table t1 set (parallel_workers = 0);\n> vacuum;checkpoint;\n> select * from pg_stat_statements_reset() ;\n> create index t1_idx_parallel_0 ON t1(id);\n> select query, calls, wal_bytes, wal_records, wal_num_fpw from\n> pg_stat_statements where query ilike '%create index%';;\n> query\n> | calls | wal_bytes | wal_records | wal_num_fpw\n> ----------------------------------------------------------------------------------+-------+-----------+-------------+-------------\n> create index t1_idx_parallel_0 ON t1(id)\n> | 1 | 49320 | 23 | 15\n>\n>\n> drop table t1;\n> create table t1(id integer);\n> insert into t1 select * from generate_series(1, 10);\n> --select * from pg_stat_statements_reset() ;\n> alter table t1 set (parallel_workers = 0);\n> vacuum;checkpoint;\n> create index t1_idx_parallel_1 ON t1(id);\n>\n> select query, calls, wal_bytes, wal_records, wal_num_fpw from\n> pg_stat_statements where query ilike '%create index%';;\n> postgres[110383]=# select query, calls, wal_bytes, wal_records,\n> wal_num_fpw from pg_stat_statements;\n> query\n> | calls | wal_bytes | wal_records | wal_num_fpw\n> ----------------------------------------------------------------------------------+-------+-----------+-------------+-------------\n> create index t1_idx_parallel_1 ON t1(id)\n> | 1 | 50040 | 23 | 15\n>\n> wal_bytes diff = 50040-49320 = 720\n>\n> Below, WAL record is causing the 720 bytes difference, all other WALs\n> are of the same size.\n> 
t1_idx_parallel_0:\n> rmgr: Heap len (rec/tot): 54/ 7498, tx: 489, lsn:\n> 0/0167B9B0, prev 0/0167B970, desc: INSERT off 30 flags 0x01, blkref\n> #0: rel 1663/13580/1249\n>\n> t1_idx_parallel_1:\n> rmgr: Heap len (rec/tot): 54/ 8218, tx: 494, lsn:\n> 0/016B84F8, prev 0/016B84B8, desc: INSERT off 30 flags 0x01, blkref\n> #0: rel 1663/13580/1249\n>\n> wal diff: 8218 - 7498 = 720\n\nI think now I got the reason. Basically, both of these records are\nstoring the FPW, and FPW size can vary based on the hole size on the\npage. If the hole size is smaller, the image length will be larger:\nimage_len = BLCKSZ - hole_size. So in subsequent records, the image size\nis bigger. You can refer to the code below in\nXLogRecordAssemble:\n{\n....\nbimg.length = BLCKSZ - cbimg.hole_length;\n\nif (cbimg.hole_length == 0)\n{\n....\n}\nelse\n{\n    /* must skip the hole */\n    rdt_datas_last->data = page;\n    rdt_datas_last->len = bimg.hole_offset;\n\n    rdt_datas_last->next = &regbuf->bkp_rdatas[1];\n    rdt_datas_last = rdt_datas_last->next;\n\n    rdt_datas_last->data =\n        page + (bimg.hole_offset + cbimg.hole_length);\n    rdt_datas_last->len =\n        BLCKSZ - (bimg.hole_offset + cbimg.hole_length);\n}\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 08:54:50 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Fri, Apr 3, 2020 at 8:55 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I think now I got the reason. Basically, both of these records are\n> storing the FPW, and FPW size can vary based on the hole size on the\n> page. If the hole size is smaller, the image length will be larger:\n> image_len = BLCKSZ - hole_size. 
So in subsequent records, the image size\n> is bigger.\n>\n\nThis means if we always re-create the database or maybe keep\nfull_page_writes off, then we should get consistent WAL usage data\nfor all tests.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 09:02:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Fri, Apr 3, 2020 at 9:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 3, 2020 at 8:55 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I think now I got the reason. Basically, both of these records are\n> > storing the FPW, and FPW size can vary based on the hole size on the\n> > page. If the hole size is smaller, the image length will be larger:\n> > image_len = BLCKSZ - hole_size. So in subsequent records, the image size\n> > is bigger.\n> >\n>\n> This means if we always re-create the database or maybe keep\n> full_page_writes off, then we should get consistent WAL usage data\n> for all tests.\n\nWith a new database, it is always the same. But with full-page writes,\nI could see that one of the create index runs is writing extra WAL, and if we\nchange the order then the new create index at that place will write\nextra WAL. 
I guess that could be due to a non-in place update in some\nof the system tables.\n\npostgres[58554]=# create extension pg_stat_statements;\nCREATE EXTENSION\npostgres[58554]=#\npostgres[58554]=# create table t1(id integer);\nCREATE TABLE\npostgres[58554]=# insert into t1 select * from generate_series(1, 1000000);\nINSERT 0 1000000\npostgres[58554]=# select * from pg_stat_statements_reset() ;\n pg_stat_statements_reset\n--------------------------\n\n(1 row)\n\npostgres[58554]=#\npostgres[58554]=# alter table t1 set (parallel_workers = 0);\nALTER TABLE\npostgres[58554]=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\npostgres[58554]=# create index t1_idx_parallel_0 ON t1(id);\nCREATE INDEX\npostgres[58554]=#\npostgres[58554]=# alter table t1 set (parallel_workers = 1);\nALTER TABLE\npostgres[58554]=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\npostgres[58554]=# create index t1_idx_parallel_1 ON t1(id);\nCREATE INDEX\npostgres[58554]=#\npostgres[58554]=# alter table t1 set (parallel_workers = 2);\nALTER TABLE\npostgres[58554]=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\npostgres[58554]=# create index t1_idx_parallel_2 ON t1(id);\nCREATE INDEX\npostgres[58554]=#\npostgres[58554]=# alter table t1 set (parallel_workers = 3);\nALTER TABLE\npostgres[58554]=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\npostgres[58554]=# create index t1_idx_parallel_3 ON t1(id);\nCREATE INDEX\npostgres[58554]=#\npostgres[58554]=# alter table t1 set (parallel_workers = 4);\nALTER TABLE\npostgres[58554]=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\npostgres[58554]=# create index t1_idx_parallel_4 ON t1(id);\nCREATE INDEX\npostgres[58554]=#\npostgres[58554]=# alter table t1 set (parallel_workers = 5);\nALTER TABLE\npostgres[58554]=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\npostgres[58554]=# create index t1_idx_parallel_5 ON t1(id);\nCREATE INDEX\npostgres[58554]=#\npostgres[58554]=# alter table t1 set (parallel_workers = 6);\nALTER TABLE\npostgres[58554]=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\npostgres[58554]=# 
create index t1_idx_parallel_6 ON t1(id);\nCREATE INDEX\npostgres[58554]=#\npostgres[58554]=# alter table t1 set (parallel_workers = 7);\nALTER TABLE\npostgres[58554]=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\npostgres[58554]=# create index t1_idx_parallel_7 ON t1(id);\nCREATE INDEX\npostgres[58554]=#\npostgres[58554]=# alter table t1 set (parallel_workers = 8);\nALTER TABLE\npostgres[58554]=# vacuum;checkpoint;\nVACUUM\nCHECKPOINT\npostgres[58554]=# create index t1_idx_parallel_8 ON t1(id);\nCREATE INDEX\npostgres[58554]=#\npostgres[58554]=# select query, calls, wal_bytes, wal_records,\nwal_num_fpw from pg_stat_statements where query ilike '%create\nindex%';\n query | calls | wal_bytes |\nwal_records | wal_num_fpw\n------------------------------------------+-------+-----------+-------------+-------------\n create index t1_idx_parallel_0 ON t1(id) | 1 | 20355953 |\n2766 | 2745\n create index t1_idx_parallel_1 ON t1(id) | 1 | 20355953 |\n2766 | 2745\n create index t1_idx_parallel_3 ON t1(id) | 1 | 20355953 |\n2766 | 2745\n create index t1_idx_parallel_2 ON t1(id) | 1 | 20355953 |\n2766 | 2745\n create index t1_idx_parallel_4 ON t1(id) | 1 | 20355953 |\n2766 | 2745\n create index t1_idx_parallel_8 ON t1(id) | 1 | 20355953 |\n2766 | 2745\n create index t1_idx_parallel_6 ON t1(id) | 1 | 20355953 |\n2766 | 2745\n create index t1_idx_parallel_7 ON t1(id) | 1 | 20355953 |\n2766 | 2745\n create index t1_idx_parallel_5 ON t1(id) | 1 | 20359585 |\n2767 | 2745\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 09:17:54 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Fri, Apr 3, 2020 at 9:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Apr 3, 2020 at 9:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Apr 3, 2020 at 8:55 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > I 
think now I got the reason. Basically, both of these records are\n> > > storing the FPW, and FPW size can vary based on the hole size on the\n> > > page. If hole size is smaller the image length will be more, the\n> > > image_len= BLCKSZ-hole_size. So in subsequent records, the image size\n> > > is bigger.\n> > >\n> >\n> > This means if we always re-create the database or maybe keep\n> > full_page_writes to off, then we should get consistent WAL usage data\n> > for all tests.\n>\n> With a new database, it is always the same. But, with full-page write,\n> I could see one of the create index is writing extra wal and if we\n> change the order then the new create index at that place will write\n> extra wal. I guess that could be due to a non-in-place update in some\n> of the system tables.\n\nI have analyzed the WAL and there could be multiple reasons for the\nsame. With small data, I have noticed that while inserting in the\nsystem index there was a Page Split and that created extra WAL.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 09:35:13 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Fri, Apr 3, 2020 at 9:35 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Apr 3, 2020 at 9:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Fri, Apr 3, 2020 at 9:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Apr 3, 2020 at 8:55 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > I 

So in subsequent records, the image size\n> > > > is bigger.\n> > > >\n> > >\n> > > This means if we always re-create the database or may be keep\n> > > full_page_writes to off, then we should get consistent WAL usage data\n> > > for all tests.\n> >\n> > With new database, it is always the same. But, with full-page write,\n> > I could see one of the create index is writing extra wal and if we\n> > change the older then the new create index at that place will write\n> > extra wal. I guess that could be due to a non-in place update in some\n> > of the system tables.\n>\n> I have analyzed the WAL and there could be multiple reasons for the\n> same. With small data, I have noticed that while inserting in the\n> system index there was a Page Split and that created extra WAL.\n>\n\nThanks for the investigation. I think it is clear that we can't\nexpect the same WAL size even if we repeat the same operation unless\nit is a fresh database.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 09:40:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Fri, Apr 3, 2020 at 9:40 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 3, 2020 at 9:35 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I have analyzed the WAL and there could be multiple reasons for the\n> > same. With small data, I have noticed that while inserting in the\n> > system index there was a Page Split and that created extra WAL.\n> >\n>\n> Thanks for the investigation. I think it is clear that we can't\n> expect the same WAL size even if we repeat the same operation unless\n> it is a fresh database.\n>\n\nAttached find the latest patches. I have modified based on our\ndiscussion on user interface thread [1], ran pgindent on all patches,\nslightly modified one comment based on Dilip's input and added commit\nmessages. 
I think the patches are in good shape. I would like to\ncommit the first patch in this series tomorrow unless I see more\ncomments or any other objections. The patch-2 might need to be\nrebased if the other related patch [2] got committed first and we\nmight need to tweak a bit based on the input from other thread [1]\nwhere we are discussing user interface for it.\n\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2Bo1Vj4Rso09pKOaKhY8QWTA0gWwCL3TGCi1rCLBBf-QQ%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/E1jKC4J-0007R3-Bo%40gemulon.postgresql.org\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 3 Apr 2020 19:36:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Fri, Apr 3, 2020 at 7:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 3, 2020 at 9:40 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Apr 3, 2020 at 9:35 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > I have analyzed the WAL and there could be multiple reasons for the\n> > > same. With small data, I have noticed that while inserting in the\n> > > system index there was a Page Split and that created extra WAL.\n> > >\n> >\n> > Thanks for the investigation. I think it is clear that we can't\n> > expect the same WAL size even if we repeat the same operation unless\n> > it is a fresh database.\n> >\n>\n> Attached find the latest patches. I have modified based on our\n> discussion on user interface thread [1], ran pgindent on all patches,\n> slightly modified one comment based on Dilip's input and added commit\n> messages. I think the patches are in good shape. 
I would like to\n> commit the first patch in this series tomorrow unless I see more\n> comments or any other objections.\n>\n\nPushed.\n\n> The patch-2 might need to be\n> rebased if the other related patch [2] got committed first and we\n> might need to tweak a bit based on the input from other thread [1]\n> where we are discussing user interface for it.\n>\n\nThe primary question for patch-2 is whether we want to include WAL\nusage information for the planning phase as we did for BUFFERS in\nrecent commit ce77abe63c (Include information on buffer usage during\nplanning phase, in EXPLAIN output, take two.). Initially, I thought\nit might be a good idea to do the same for WAL but after reading the\nthread that leads to commit, I am not sure if there is any pressing\nneed to include WAL information for the planning phase. Because\nduring planning we might not write much WAL (with the exception of WAL\ndue to setting of hint-bits) so users might not care much. What do\nyou think?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 4 Apr 2020 10:38:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sat, Apr 04, 2020 at 10:38:14AM +0530, Amit Kapila wrote:\n> On Fri, Apr 3, 2020 at 7:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Apr 3, 2020 at 9:40 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Apr 3, 2020 at 9:35 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > I have analyzed the WAL and there could be multiple reasons for the\n> > > > same. With small data, I have noticed that while inserting in the\n> > > > system index there was a Page Split and that created extra WAL.\n> > > >\n> > >\n> > > Thanks for the investigation. 
I think it is clear that we can't\n> > > expect the same WAL size even if we repeat the same operation unless\n> > > it is a fresh database.\n> > >\n> >\n> > Attached find the latest patches. I have modified based on our\n> > discussion on user interface thread [1], ran pgindent on all patches,\n> > slightly modified one comment based on Dilip's input and added commit\n> > messages. I think the patches are in good shape. I would like to\n> > commit the first patch in this series tomorrow unless I see more\n> > comments or any other objections.\n> >\n> \n> Pushed.\n\n\nThanks!\n\n\n> > The patch-2 might need to be\n> > rebased if the other related patch [2] got committed first and we\n> > might need to tweak a bit based on the input from other thread [1]\n> > where we are discussing user interface for it.\n> >\n> \n> The primary question for patch-2 is whether we want to include WAL\n> usage information for the planning phase as we did for BUFFERS in\n> recent commit ce77abe63c (Include information on buffer usage during\n> planning phase, in EXPLAIN output, take two.). Initially, I thought\n> it might be a good idea to do the same for WAL but after reading the\n> thread that leads to commit, I am not sure if there is any pressing\n> need to include WAL information for the planning phase. Because\n> during planning we might not write much WAL (with the exception of WAL\n> due to setting of hint-bits) so users might not care much. What do\n> you think?\n\n\nI agree that WAL activity during planning shouldn't be very frequent, but it\nmight still be worthwhile to add it. I'm wondering how stable the normalized\nWAL information would be in some regression tests, as the counters are only\nshowed if non zero. 
Maybe it'd be better to remove them from the output, same\nas the buffers?\n\n\n", "msg_date": "Sat, 4 Apr 2020 08:03:03 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sat, Apr 4, 2020 at 11:33 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Apr 04, 2020 at 10:38:14AM +0530, Amit Kapila wrote:\n>\n> > > The patch-2 might need to be\n> > > rebased if the other related patch [2] got committed first and we\n> > > might need to tweak a bit based on the input from other thread [1]\n> > > where we are discussing user interface for it.\n> > >\n> >\n> > The primary question for patch-2 is whether we want to include WAL\n> > usage information for the planning phase as we did for BUFFERS in\n> > recent commit ce77abe63c (Include information on buffer usage during\n> > planning phase, in EXPLAIN output, take two.). Initially, I thought\n> > it might be a good idea to do the same for WAL but after reading the\n> > thread that leads to commit, I am not sure if there is any pressing\n> > need to include WAL information for the planning phase. Because\n> > during planning we might not write much WAL (with the exception of WAL\n> > due to setting of hint-bits) so users might not care much. What do\n> > you think?\n>\n>\n> I agree that WAL activity during planning shouldn't be very frequent, but it\n> might still be worthwhile to add it.\n>\n\nWe can add if we want but I am not able to convince myself for that.\nDo you have any use case in mind? I think in most of the cases\n(except for hint-bit WAL) it will be zero. If we are not sure of this\nwe can also discuss it separately in a new thread once this\npatch-series is committed and see if anybody else sees the value of it\nand if so adding the code should be easy.\n\n> I'm wondering how stable the normalized\n> WAL information would be in some regression tests, as the counters are only\n> showed if non zero. 
Maybe it'd be better to remove them from the output, same\n> as the buffers?\n>\n\nWhich regression tests are you referring to? pg_stat_statements? If\nso, why would it be unstable? It should always generate WAL although\nthe exact values may differ and we have already taken care of that in\nthe patch, no?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 4 Apr 2020 14:12:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sat, Apr 04, 2020 at 02:12:59PM +0530, Amit Kapila wrote:\n> On Sat, Apr 4, 2020 at 11:33 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Sat, Apr 04, 2020 at 10:38:14AM +0530, Amit Kapila wrote:\n> >\n> > > > The patch-2 might need to be\n> > > > rebased if the other related patch [2] got committed first and we\n> > > > might need to tweak a bit based on the input from other thread [1]\n> > > > where we are discussing user interface for it.\n> > > >\n> > >\n> > > The primary question for patch-2 is whether we want to include WAL\n> > > usage information for the planning phase as we did for BUFFERS in\n> > > recent commit ce77abe63c (Include information on buffer usage during\n> > > planning phase, in EXPLAIN output, take two.). Initially, I thought\n> > > it might be a good idea to do the same for WAL but after reading the\n> > > thread that leads to commit, I am not sure if there is any pressing\n> > > need to include WAL information for the planning phase. Because\n> > > during planning we might not write much WAL (with the exception of WAL\n> > > due to setting of hint-bits) so users might not care much. 
What do\n> > > you think?\n> >\n> >\n> > I agree that WAL activity during planning shouldn't be very frequent, but it\n> > might still be worthwhile to add it.\n> >\n> \n> We can add if we want but I am not able to convince myself for that.\n> Do you have any use case in mind? I think in most of the cases\n> (except for hint-bit WAL) it will be zero. If we are not sure of this\n> we can also discuss it separately in a new thread once this\n> patch-series is committed and see if anybody else sees the value of it\n> and if so adding the code should be easy.\n\n\nI'm mostly thinking of people trying to investigate possible slowdowns on a\nhot-standby replica with a primary without wal_log_hints. If they explicitly\nask for WAL information, we should provide them, even if it's quite unlikely to\nhappen.\n\n\n> \n> > I'm wondering how stable the normalized\n> > WAL information would be in some regression tests, as the counters are only\n> > showed if non zero. Maybe it'd be better to remove them from the output, same\n> > as the buffers?\n> >\n> \n> Which regression tests are you referring to? pg_stat_statements? If\n> so, why would it be unstable? It should always generate WAL although\n> the exact values may differ and we have already taken care of that in\n> the patch, no?\n\n\nI'm talking about a hypothetical new EXPLAIN (ANALYZE, WAL) regression test,\nwhich could be unstable for a similar reason to why the first attempt to add\nBUFFERS in the planning part of EXPLAIN was unstable. I thought that's why you\nwere hesitating to add it.\n\n\n", "msg_date": "Sat, 4 Apr 2020 10:54:23 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sat, Apr 4, 2020 at 2:24 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > We can add if we want but I am not able to convince myself for that.\n> > Do you have any use case in mind? 

I think in most of the cases\n> > (except for hint-bit WAL) it will be zero. If we are not sure of this\n> > we can also discuss it separately in a new thread once this\n> > patch-series is committed and see if anybody else sees the value of it\n> > and if so adding the code should be easy.\n>\n>\n> I'm mostly thinking of people trying to investigate possible slowdowns on a\n> hot-standby replica with a primary without wal_log_hints. If they explicitly\n> ask for WAL information, we should provide them, even if it's quite unlikely to\n> happen.\n>\n\nYeah, possible but I am not completely sure. I would like to hear the\nopinion of others if any before adding code for this. How about if we\nfirst commit pg_stat_statements and wait for this till Monday and if\nnobody responds we can commit the current patch but would start a new\nthread and try to get the opinion of others?\n\n>\n> >\n> > > I'm wondering how stable the normalized\n> > > WAL information would be in some regression tests, as the counters are only\n> > > showed if non zero. Maybe it'd be better to remove them from the output, same\n> > > as the buffers?\n> > >\n> >\n> > Which regression tests are you referring to? pg_stat_statements? If\n> > so, why would it be unstable? 
It should always generate WAL although\n> > the exact values may differ and we have already taken care of that in\n> > the patch, no?\n>\n>\n> I'm talking about a hypothetical new EXPLAIN (ALAYZE, WAL) regression test,\n> which could be unstable for similar reason to why the first attempt to add\n> BUFFERS in the planning part of EXPLAIN was unstable.\n>\n\noh, then leave it for now because I don't see much use of those as the\ncode path can anyway be hit by the tests added by pg_stat_statements\npatch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 4 Apr 2020 14:39:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sat, Apr 04, 2020 at 02:39:32PM +0530, Amit Kapila wrote:\n> On Sat, Apr 4, 2020 at 2:24 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > We can add if we want but I am not able to convince myself for that.\n> > > Do you have any use case in mind? I think in most of the cases\n> > > (except for hint-bit WAL) it will be zero. If we are not sure of this\n> > > we can also discuss it separately in a new thread once this\n> > > patch-series is committed and see if anybody else sees the value of it\n> > > and if so adding the code should be easy.\n> >\n> >\n> > I'm mostly thinking of people trying to investigate possible slowdowns on a\n> > hot-standby replica with a primary without wal_log_hints. If they explicitly\n> > ask for WAL information, we should provide them, even if it's quite unlikely to\n> > happen.\n> >\n> \n> Yeah, possible but I am not completely sure. I would like to hear the\n> opinion of others if any before adding code for this. 
How about if we\n> first commit pg_stat_statements and wait for this till Monday and if\n> nobody responds we can commit the current patch but would start a new\n> thread and try to get the opinion of others?\n\n\nI'm fine with it.\n\n\n> \n> >\n> > >\n> > > > I'm wondering how stable the normalized\n> > > > WAL information would be in some regression tests, as the counters are only\n> > > > showed if non zero. Maybe it'd be better to remove them from the output, same\n> > > > as the buffers?\n> > > >\n> > >\n> > > Which regression tests are you referring to? pg_stat_statements? If\n> > > so, why would it be unstable? It should always generate WAL although\n> > > the exact values may differ and we have already taken care of that in\n> > > the patch, no?\n> >\n> >\n> > I'm talking about a hypothetical new EXPLAIN (ALAYZE, WAL) regression test,\n> > which could be unstable for similar reason to why the first attempt to add\n> > BUFFERS in the planning part of EXPLAIN was unstable.\n> >\n> \n> oh, then leave it for now because I don't see much use of those as the\n> code path can anyway be hit by the tests added by pg_stat_statements\n> patch.\n> \n\n\nPerfect then!\n\n\n", "msg_date": "Sat, 4 Apr 2020 11:20:15 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sat, Apr 4, 2020 at 2:50 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sat, Apr 04, 2020 at 02:39:32PM +0530, Amit Kapila wrote:\n> > On Sat, Apr 4, 2020 at 2:24 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > >\n> > > > We can add if we want but I am not able to convince myself for that.\n> > > > Do you have any use case in mind? I think in most of the cases\n> > > > (except for hint-bit WAL) it will be zero. 
If we are not sure of this\n> > > > we can also discuss it separately in a new thread once this\n> > > > patch-series is committed and see if anybody else sees the value of it\n> > > > and if so adding the code should be easy.\n> > >\n> > >\n> > > I'm mostly thinking of people trying to investigate possible slowdowns on a\n> > > hot-standby replica with a primary without wal_log_hints. If they explicitly\n> > > ask for WAL information, we should provide them, even if it's quite unlikely to\n> > > happen.\n> > >\n> >\n> > Yeah, possible but I am not completely sure. I would like to hear the\n> > opinion of others if any before adding code for this. How about if we\n> > first commit pg_stat_statements and wait for this till Monday and if\n> > nobody responds we can commit the current patch but would start a new\n> > thread and try to get the opinion of others?\n>\n>\n> I'm fine with it.\n>\n\nI have pushed pg_stat_statements and Explain related patches. I am\nnow looking into (auto)vacuum patch and have few comments.\n\n@@ -614,6 +616,9 @@ heap_vacuum_rel(Relation onerel, VacuumParams *params,\n\n TimestampDifference(starttime, endtime, &secs, &usecs);\n\n+ memset(&walusage, 0, sizeof(WalUsage));\n+ WalUsageAccumDiff(&walusage, &pgWalUsage, &walusage_start);\n+\n read_rate = 0;\n write_rate = 0;\n if ((secs > 0) || (usecs > 0))\n@@ -666,7 +671,13 @@ heap_vacuum_rel(Relation onerel, VacuumParams *params,\n (long long) VacuumPageDirty);\n appendStringInfo(&buf, _(\"avg read rate: %.3f MB/s, avg write rate:\n%.3f MB/s\\n\"),\n read_rate, write_rate);\n- appendStringInfo(&buf, _(\"system usage: %s\"), pg_rusage_show(&ru0));\n+ appendStringInfo(&buf, _(\"system usage: %s\\n\"), pg_rusage_show(&ru0));\n+ appendStringInfo(&buf,\n+ _(\"WAL usage: %ld records, %ld full page writes, \"\n+ UINT64_FORMAT \" bytes\"),\n+ walusage.wal_records,\n+ walusage.wal_num_fpw,\n+ walusage.wal_bytes);\n\nHere, we are not displaying Buffers related data, so why do we think\nit is important 
to display WAL data? I see some point in displaying\nBuffers and WAL data in a vacuum (verbose), but I feel it is better to\nmake a case for both the statistics together rather than just\ndisplaying one and leaving other. I think the other change related to\nautovacuum stats seems okay to me.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Apr 2020 08:55:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, 31 Mar 2020 at 14:13, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 31 Mar 2020 at 12:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Mar 30, 2020 at 12:31 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > The patch for vacuum conflicts with recent changes in vacuum. So I've\n> > > attached rebased one.\n> > >\n> >\n> > + /*\n> > + * Next, accumulate buffer usage. (This must wait for the workers to\n> > + * finish, or we might get incomplete data.)\n> > + */\n> > + for (i = 0; i < nworkers; i++)\n> > + InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> > +\n> >\n> > This should be done for launched workers aka\n> > lps->pcxt->nworkers_launched. I think a similar problem exists in\n> > create index related patch.\n>\n> You're right. Fixed in the new patches.\n>\n> On Mon, 30 Mar 2020 at 17:00, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > Just minor nitpicking:\n> >\n> > + int i;\n> >\n> > Assert(!IsParallelWorker());\n> > Assert(ParallelVacuumIsActive(lps));\n> > @@ -2166,6 +2172,13 @@ lazy_parallel_vacuum_indexes(Relation *Irel, IndexBulkDeleteResult **stats,\n> > /* Wait for all vacuum workers to finish */\n> > WaitForParallelWorkersToFinish(lps->pcxt);\n> >\n> > + /*\n> > + * Next, accumulate buffer usage. 
(This must wait for the workers to\n> > + * finish, or we might get incomplete data.)\n> > + */\n> > + for (i = 0; i < nworkers; i++)\n> > + InstrAccumParallelQuery(&lps->buffer_usage[i]);\n> >\n> > We now allow declaring a variable in those loops, so it may be better to avoid\n> > declaring i outside the for scope?\n>\n> We can do that but I was not sure if it's good since other codes\n> around there don't use that. So I'd like to leave it for committers.\n> It's a trivial change.\n>\n\nI've updated the buffer usage patch for parallel index creation as the\nprevious patch conflicts with commit df3b181499b40.\n\nThis comment in commit df3b181499b40 seems the comment which had been\nreplaced by Amit with a better sentence when introducing buffer usage\nto parallel vacuum.\n\n+ /*\n+ * Estimate space for WalUsage -- PARALLEL_KEY_WAL_USAGE\n+ *\n+ * WalUsage during execution of maintenance command can be used by an\n+ * extension that reports the WAL usage, such as pg_stat_statements. We\n+ * have no way of knowing whether anyone's looking at pgWalUsage, so do it\n+ * unconditionally.\n+ */\n\nWould the following sentence in lazyvacuum.c be also better for\nparallel create index?\n\n * If there are no extensions loaded that care, we could skip this. 
We\n * have no way of knowing whether anyone's looking at pgBufferUsage or\n * pgWalUsage, so do it unconditionally.\n\nThe attached patch changes to the above comment and removed the code\nthat is used to un-support only buffer usage accumulation.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 6 Apr 2020 14:48:39 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Mon, Apr 6, 2020 at 11:19 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> The attached patch changes to the above comment and removed the code\n> that is used to un-support only buffer usage accumulation.\n>\n\nSo, IIUC, the purpose of this patch will be to count the buffer usage\ndue to the heap scan (in heapam_index_build_range_scan) we perform\nwhile parallel create index? Because the index creation itself won't\nuse buffer manager.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Apr 2020 12:46:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Mon, 6 Apr 2020 at 16:16, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 6, 2020 at 11:19 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > The attached patch changes to the above comment and removed the code\n> > that is used to un-support only buffer usage accumulation.\n> >\n>\n> So, IIUC, the purpose of this patch will be to count the buffer usage\n> due to the heap scan (in heapam_index_build_range_scan) we perform\n> while parallel create index? 
Because the index creation itself won't\n> use buffer manager.\n\nOops, I'd missed Peter's comment. Btree index doesn't use\nheapam_index_build_range_scan so it's not necessary. Sorry for the\nnoise.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 6 Apr 2020 16:24:53 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Mon, Apr 06, 2020 at 08:55:01AM +0530, Amit Kapila wrote:\n> On Sat, Apr 4, 2020 at 2:50 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> I have pushed pg_stat_statements and Explain related patches. I am\n> now looking into (auto)vacuum patch and have few comments.\n> \n\nThanks!\n\n> @@ -614,6 +616,9 @@ heap_vacuum_rel(Relation onerel, VacuumParams *params,\n> \n> TimestampDifference(starttime, endtime, &secs, &usecs);\n> \n> + memset(&walusage, 0, sizeof(WalUsage));\n> + WalUsageAccumDiff(&walusage, &pgWalUsage, &walusage_start);\n> +\n> read_rate = 0;\n> write_rate = 0;\n> if ((secs > 0) || (usecs > 0))\n> @@ -666,7 +671,13 @@ heap_vacuum_rel(Relation onerel, VacuumParams *params,\n> (long long) VacuumPageDirty);\n> appendStringInfo(&buf, _(\"avg read rate: %.3f MB/s, avg write rate:\n> %.3f MB/s\\n\"),\n> read_rate, write_rate);\n> - appendStringInfo(&buf, _(\"system usage: %s\"), pg_rusage_show(&ru0));\n> + appendStringInfo(&buf, _(\"system usage: %s\\n\"), pg_rusage_show(&ru0));\n> + appendStringInfo(&buf,\n> + _(\"WAL usage: %ld records, %ld full page writes, \"\n> + UINT64_FORMAT \" bytes\"),\n> + walusage.wal_records,\n> + walusage.wal_num_fpw,\n> + walusage.wal_bytes);\n> \n> Here, we are not displaying Buffers related data, so why do we think\n> it is important to display WAL data? 
I see some point in displaying\n> Buffers and WAL data in a vacuum (verbose), but I feel it is better to\n> make a case for both the statistics together rather than just\n> displaying one and leaving other. I think the other change related to\n> autovacuum stats seems okay to me.\n\nOne thing is that the amount of WAL, and more precisely FPW, is quite\nunpredictable wrt. vacuum and even more anti-wraparound vacuum, so this is IMHO\na very useful metric. That being said I totally agree with you that both\nshould be displayed. Should I send a patch to also expose it?\n\n\n", "msg_date": "Mon, 6 Apr 2020 10:23:07 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Apr 6, 2020 at 1:53 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Apr 06, 2020 at 08:55:01AM +0530, Amit Kapila wrote:\n> >\n> > Here, we are not displaying Buffers related data, so why do we think\n> > it is important to display WAL data? I see some point in displaying\n> > Buffers and WAL data in a vacuum (verbose), but I feel it is better to\n> > make a case for both the statistics together rather than just\n> > displaying one and leaving other. I think the other change related to\n> > autovacuum stats seems okay to me.\n>\n> One thing is that the amount of WAL, and more precisely FPW, is quite\n> unpredictable wrt. vacuum and even more anti-wraparound vacuum, so this is IMHO\n> a very useful metric.\n>\n\nI agree but we already have a way via pg_stat_statements to find it if\nthe metric is so useful.\n\n> That being said I totally agree with you that both\n> should be displayed. Should I send a patch to also expose it?\n>\n\nI think this should be a separate proposal. Let's not add things\nunless they are really essential. We can separately discuss of\nenhancing vacuum verbose for Buffer and WAL usage stats and see if\nothers also find that information useful. 
I think you can send a\npatch by removing the code I mentioned above if you agree. Thanks for\nworking on this.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Apr 2020 14:34:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Apr 6, 2020 at 12:55 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Mon, 6 Apr 2020 at 16:16, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 6, 2020 at 11:19 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > The attached patch changes to the above comment and removed the code\n> > > that is used to un-support only buffer usage accumulation.\n> > >\n> >\n> > So, IIUC, the purpose of this patch will be to count the buffer usage\n> > due to the heap scan (in heapam_index_build_range_scan) we perform\n> > while parallel create index? Because the index creation itself won't\n> > use buffer manager.\n>\n> Oops, I'd missed Peter's comment. Btree index doesn't use\n> heapam_index_build_range_scan so it's not necessary.\n>\n\nAFAIU, it uses heapam_index_build_range_scan but for writing to index,\nit doesn't use buffer manager. So, I guess probably we can accumulate\nBufferUsage stats for parallel create index. 
What I wanted to know is\nwhether the extra lookup for pg_amproc or any other catalog access via\nparallel workers is fine or we somehow want to eliminate that?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Apr 2020 14:51:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Mon, Apr 06, 2020 at 02:34:36PM +0530, Amit Kapila wrote:\n> On Mon, Apr 6, 2020 at 1:53 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Mon, Apr 06, 2020 at 08:55:01AM +0530, Amit Kapila wrote:\n> > >\n> > > Here, we are not displaying Buffers related data, so why do we think\n> > > it is important to display WAL data? I see some point in displaying\n> > > Buffers and WAL data in a vacuum (verbose), but I feel it is better to\n> > > make a case for both the statistics together rather than just\n> > > displaying one and leaving other. I think the other change related to\n> > > autovacuum stats seems okay to me.\n> >\n> > One thing is that the amount of WAL, and more precisely FPW, is quite\n> > unpredictable wrt. vacuum and even more anti-wraparound vacuum, so this is IMHO\n> > a very useful metric.\n> >\n> \n> I agree but we already have a way via pg_stat_statements to find it if\n> the metric is so useful.\n> \n\nAgreed.\n\n> \n> > That being said I totally agree with you that both\n> > should be displayed. Should I send a patch to also expose it?\n> >\n> \n> I think this should be a separate proposal. Let's not add things\n> unless they are really essential. We can separately discuss of\n> enhancing vacuum verbose for Buffer and WAL usage stats and see if\n> others also find that information useful. I think you can send a\n> patch by removing the code I mentioned above if you agree. Thanks for\n> working on this.\n\nThanks! 
v15 attached.", "msg_date": "Mon, 6 Apr 2020 11:33:34 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, 6 Apr 2020 at 00:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n> I have pushed pg_stat_statements and Explain related patches. I am\n> now looking into (auto)vacuum patch and have few comments.\n>\n> I wasn't paying much attention to this thread. May I suggest changing\nwal_num_fpw to wal_fpw? wal_records and wal_bytes does not have a prefix\n'num'. It seems inconsistent to me.\n\n\nRegards,\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Mon, 6 Apr 2020 at 00:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\nI have pushed pg_stat_statements and Explain related patches.  I am\nnow looking into (auto)vacuum patch and have few comments.\nI wasn't paying much attention to this thread. May I suggest changing wal_num_fpw to wal_fpw? wal_records and wal_bytes does not have a prefix 'num'. It seems inconsistent to me. Regards,-- Euler Taveira                 http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 6 Apr 2020 10:12:55 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Apr 06, 2020 at 10:12:55AM -0300, Euler Taveira wrote:\n> On Mon, 6 Apr 2020 at 00:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> >\n> > I have pushed pg_stat_statements and Explain related patches. I am\n> > now looking into (auto)vacuum patch and have few comments.\n> >\n> > I wasn't paying much attention to this thread. May I suggest changing\n> wal_num_fpw to wal_fpw? wal_records and wal_bytes does not have a prefix\n> 'num'. 
It seems inconsistent to me.\n> \n\nIf we want to be consistent shouldn't we rename it to wal_fpws? FTR I don't\nlike much either version.\n\n\n", "msg_date": "Mon, 6 Apr 2020 15:37:35 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, 6 Apr 2020 at 10:37, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Mon, Apr 06, 2020 at 10:12:55AM -0300, Euler Taveira wrote:\n> > On Mon, 6 Apr 2020 at 00:25, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > >\n> > > I have pushed pg_stat_statements and Explain related patches. I am\n> > > now looking into (auto)vacuum patch and have few comments.\n> > >\n> > > I wasn't paying much attention to this thread. May I suggest changing\n> > wal_num_fpw to wal_fpw? wal_records and wal_bytes does not have a prefix\n> > 'num'. It seems inconsistent to me.\n> >\n>\n> If we want to be consistent shouldn't we rename it to wal_fpws? FTR I\n> don't\n> like much either version.\n>\n\nSince FPW is an acronym, plural form reads better when you are using\nuppercase (such as FPWs or FPW's); thus, I prefer singular form because\nparameter names are lowercase. Function description will clarify that this\nis \"number of WAL full page writes\".\n\n\nRegards,\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Mon, 6 Apr 2020 at 10:37, Julien Rouhaud <rjuju123@gmail.com> wrote:On Mon, Apr 06, 2020 at 10:12:55AM -0300, Euler Taveira wrote:\n> On Mon, 6 Apr 2020 at 00:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> >\n> > I have pushed pg_stat_statements and Explain related patches.  I am\n> > now looking into (auto)vacuum patch and have few comments.\n> >\n> > I wasn't paying much attention to this thread. May I suggest changing\n> wal_num_fpw to wal_fpw? wal_records and wal_bytes does not have a prefix\n> 'num'. 
It seems inconsistent to me.\n> \n\nIf we want to be consistent shouldn't we rename it to wal_fpws?  FTR I don't\nlike much either version.\nSince FPW is an acronym, plural form reads better when you are using uppercase (such as FPWs or FPW's); thus, I prefer singular form because parameter names are lowercase. Function description will clarify that this is \"number of WAL full page writes\".Regards,-- Euler Taveira                 http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 6 Apr 2020 11:28:14 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "I noticed in some of the screenshots that were tweeted that for example in\n\n WAL: records=1 bytes=56\n\nthere are two spaces between pieces of data. This doesn't match the \nrest of the EXPLAIN output. Can that be adjusted?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 6 Apr 2020 17:01:30 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Apr 06, 2020 at 05:01:30PM +0200, Peter Eisentraut wrote:\n> I noticed in some of the screenshots that were tweeted that for example in\n> \n> WAL: records=1 bytes=56\n> \n> there are two spaces between pieces of data. This doesn't match the rest of\n> the EXPLAIN output. 
Can that be adjusted?\n\nWe talked about that here:\nhttps://www.postgresql.org/message-id/20200402054120.GC14618%40telsasoft.com\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 6 Apr 2020 11:31:09 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Apr 6, 2020 at 2:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> AFAIU, it uses heapam_index_build_range_scan but for writing to index,\n> it doesn't use buffer manager.\n\nRight. It doesn't need to use the buffer manager to write to the\nindex, unlike (say) GIN's CREATE INDEX.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 6 Apr 2020 10:40:38 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Mon, Apr 6, 2020 at 10:01 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Apr 06, 2020 at 05:01:30PM +0200, Peter Eisentraut wrote:\n> > I noticed in some of the screenshots that were tweeted that for example in\n> >\n> > WAL: records=1 bytes=56\n> >\n> > there are two spaces between pieces of data. This doesn't match the rest of\n> > the EXPLAIN output. Can that be adjusted?\n>\n> We talked about that here:\n> https://www.postgresql.org/message-id/20200402054120.GC14618%40telsasoft.com\n>\n\nYeah. Just to brief here, the main reason was that one of the fields\n(full page writes) already had a single space and then we had prior\ncases as mentioned in Justin's email [1] where we use two spaces which\nlead us to decide using two spaces in this case.\n\nNow, we can change back to one space as suggested by you but I am not\nsure if that is an improvement over what we have done. 
Let me know if\nyou think otherwise.\n\n\n[1] - https://www.postgresql.org/message-id/20200402054120.GC14618%40telsasoft.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Apr 2020 07:42:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Apr 6, 2020 at 7:58 PM Euler Taveira\n<euler.taveira@2ndquadrant.com> wrote:\n>\n> On Mon, 6 Apr 2020 at 10:37, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>\n>> On Mon, Apr 06, 2020 at 10:12:55AM -0300, Euler Taveira wrote:\n>> > On Mon, 6 Apr 2020 at 00:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >\n>> > >\n>> > > I have pushed pg_stat_statements and Explain related patches. I am\n>> > > now looking into (auto)vacuum patch and have few comments.\n>> > >\n>> > > I wasn't paying much attention to this thread. May I suggest changing\n>> > wal_num_fpw to wal_fpw? wal_records and wal_bytes does not have a prefix\n>> > 'num'. It seems inconsistent to me.\n>> >\n>>\n>> If we want to be consistent shouldn't we rename it to wal_fpws? FTR I don't\n>> like much either version.\n>\n>\n> Since FPW is an acronym, plural form reads better when you are using uppercase (such as FPWs or FPW's); thus, I prefer singular form because parameter names are lowercase. Function description will clarify that this is \"number of WAL full page writes\".\n>\n\nI like Euler's suggestion to change wal_num_fpw to wal_fpw. 
It is\nbetter if others who didn't like this name can also share their\nopinion now because changing the same thing multiple times is not a\ngood idea.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Apr 2020 08:05:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, 7 Apr 2020 at 02:40, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Apr 6, 2020 at 2:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > AFAIU, it uses heapam_index_build_range_scan but for writing to index,\n> > it doesn't use buffer manager.\n>\n> Right. It doesn't need to use the buffer manager to write to the\n> index, unlike (say) GIN's CREATE INDEX.\n\nHmm, after more thought and testing, it seems to me that parallel\nbtree index creation uses the buffer manager while scanning the table in\nparallel, i.e. in heapam_index_build_range_scan, which affects\nshared_blks_xxx in pg_stat_statements. I've run some parallel create index\ntests with the current HEAD and with the attached patch.
The table has\n44248 blocks.\n\nHEAD, no workers:\n\n-[ RECORD 1 ]-------+----------\ntotal_plan_time | 0\ntotal_plan_time | 0\nshared_blks_hit | 148\nshared_blks_read | 44281\ntotal_read_blks | 44429\nshared_blks_dirtied | 44261\nshared_blks_written | 24644\nwal_records | 71693\nwal_num_fpw | 71682\nwal_bytes | 566815038\n\nHEAD, 4 workers:\n\n-[ RECORD 1 ]-------+----------\ntotal_plan_time | 0\ntotal_plan_time | 0\nshared_blks_hit | 160\nshared_blks_read | 8892\ntotal_read_blks | 9052\nshared_blks_dirtied | 8871\nshared_blks_written | 5342\nwal_records | 71693\nwal_num_fpw | 71682\nwal_bytes | 566815038\n\nThe WAL usage statistics are good but the buffer usage statistics seem\nnot correct.\n\nPatched, no workers:\n\n-[ RECORD 1 ]-------+----------\ntotal_plan_time | 0\ntotal_plan_time | 0\nshared_blks_hit | 148\nshared_blks_read | 44281\ntotal_read_blks | 44429\nshared_blks_dirtied | 44261\nshared_blks_written | 24843\nwal_records | 71693\nwal_num_fpw | 71682\nwal_bytes | 566815038\n\nPatched, 4 workers:\n\n-[ RECORD 1 ]-------+----------\ntotal_plan_time | 0\ntotal_plan_time | 0\nshared_blks_hit | 172\nshared_blks_read | 44282\ntotal_read_blks | 44454\nshared_blks_dirtied | 44261\nshared_blks_written | 26968\nwal_records | 71693\nwal_num_fpw | 71682\nwal_bytes | 566815038\n\nBuffer usage statistics seem correct. The small differences would be\ncatalog lookups Peter mentioned.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 7 Apr 2020 16:59:39 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Tue, Apr 7, 2020 at 1:30 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> Buffer usage statistics seem correct. 
The small differences would be\n> catalog lookups Peter mentioned.\n>\n\nAgreed, but can you check which part of code does that lookup? I want\nto see if we can avoid that from buffer usage stats or at least write\na comment about it, otherwise, we might have to face this question\nagain and again.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Apr 2020 14:12:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Tue, Apr 7, 2020 at 4:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 6, 2020 at 7:58 PM Euler Taveira\n> <euler.taveira@2ndquadrant.com> wrote:\n> >\n> > On Mon, 6 Apr 2020 at 10:37, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >>\n> >> On Mon, Apr 06, 2020 at 10:12:55AM -0300, Euler Taveira wrote:\n> >> > On Mon, 6 Apr 2020 at 00:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> >\n> >> > >\n> >> > > I have pushed pg_stat_statements and Explain related patches. I am\n> >> > > now looking into (auto)vacuum patch and have few comments.\n> >> > >\n> >> > > I wasn't paying much attention to this thread. May I suggest changing\n> >> > wal_num_fpw to wal_fpw? wal_records and wal_bytes does not have a prefix\n> >> > 'num'. It seems inconsistent to me.\n> >> >\n> >>\n> >> If we want to be consistent shouldn't we rename it to wal_fpws? FTR I don't\n> >> like much either version.\n> >\n> >\n> > Since FPW is an acronym, plural form reads better when you are using uppercase (such as FPWs or FPW's); thus, I prefer singular form because parameter names are lowercase. Function description will clarify that this is \"number of WAL full page writes\".\n> >\n>\n> I like Euler's suggestion to change wal_num_fpw to wal_fpw. 
It is\n> better if others who didn't like this name can also share their\n> opinion now because changing multiple times the same thing is not a\n> good idea.\n\n+1\n\nAbout Justin and your comments on the other thread:\n\nOn Tue, Apr 7, 2020 at 4:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 6, 2020 at 10:04 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Thu, Apr 02, 2020 at 08:29:31AM +0200, Julien Rouhaud wrote:\n> > > > > \"full page records\" seems to be showing the number of full page\n> > > > > images, not the record having full page images.\n> > > >\n> > > > I am not sure what exactly is a difference but it is the records\n> > > > having full page images. Julien correct me if I am wrong.\n> >\n> > > Obviously previous complaints about the meaning and parsability of\n> > > \"full page writes\" should be addressed here for consistency.\n> >\n> > There's a couple places that say \"full page image records\" which I think is\n> > language you were trying to avoid. It's the number of pages, not the number of\n> > records, no ? I see explain and autovacuum say what I think is wanted, but\n> > these say the wrong thing? Find attached slightly larger patch.\n> >\n> > $ git grep 'image record'\n> > contrib/pg_stat_statements/pg_stat_statements.c: int64 wal_num_fpw; /* # of WAL full page image records generated */\n> > doc/src/sgml/ref/explain.sgml: number of records, number of full page image records and amount of WAL\n> >\n>\n> Few comments:\n> 1.\n> - int64 wal_num_fpw; /* # of WAL full page image records generated */\n> + int64 wal_num_fpw; /* # of WAL full page images generated */\n>\n> Let's change comment as \" /* # of WAL full page writes generated */\"\n> to be consistent with other places like instrument.h. Also, make a\n> similar change at other places if required.\n\nAgreed. That's pg_stat_statements.c and instrument.h. 
I'll send a\npatch once we reach consensus with the rest of the comments.\n\n> 2.\n> <entry>\n> - Total amount of WAL bytes generated by the statement\n> + Total number of WAL bytes generated by the statement\n> </entry>\n>\n> I feel the previous text was better as this field can give us the size\n> of WAL with which we can answer \"how much WAL data is generated by a\n> particular statement?\". Julien, do you have any thoughts on this?\n\nI also prefer \"amount\" as it feels more natural. I'm not a native\nenglish speaker though, so maybe I'm just biased.\n\n\n", "msg_date": "Tue, 7 Apr 2020 11:18:24 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, 7 Apr 2020 at 17:42, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 7, 2020 at 1:30 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > Buffer usage statistics seem correct. The small differences would be\n> > catalog lookups Peter mentioned.\n> >\n>\n> Agreed, but can you check which part of code does that lookup? 
I want\n> to see if we can avoid that from buffer usage stats or at least write\n> a comment about it, otherwise, we might have to face this question\n> again and again.\n\nOkay, I'll check it.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 7 Apr 2020 18:29:58 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On 2020-04-07 04:12, Amit Kapila wrote:\n> On Mon, Apr 6, 2020 at 10:01 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>\n>> On Mon, Apr 06, 2020 at 05:01:30PM +0200, Peter Eisentraut wrote:\n>>> I noticed in some of the screenshots that were tweeted that for example in\n>>>\n>>> WAL: records=1 bytes=56\n>>>\n>>> there are two spaces between pieces of data. This doesn't match the rest of\n>>> the EXPLAIN output. Can that be adjusted?\n>>\n>> We talked about that here:\n>> https://www.postgresql.org/message-id/20200402054120.GC14618%40telsasoft.com\n>>\n> \n> Yeah. 
Just to brief here, the main reason was that one of the fields\n> (full page writes) already had a single space and then we had prior\n> cases as mentioned in Justin's email [1] where we use two spaces which\n> led us to decide on using two spaces in this case.\n\nWe also have existing cases for the other way:\n\n actual time=0.050..0.052\n Buffers: shared hit=3 dirtied=1\n\nThe cases mentioned by Justin are not formatted in a key=value format, \nso it's not quite the same, but it also raises the question why they are \nnot.\n\nLet's figure out a way to consolidate this without making up a third format.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 7 Apr 2020 12:00:29 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, 7 Apr 2020 at 18:29, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 7 Apr 2020 at 17:42, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Apr 7, 2020 at 1:30 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > Buffer usage statistics seem correct. The small differences would be\n> > > catalog lookups Peter mentioned.\n> > >\n> >\n> > Agreed, but can you check which part of code does that lookup? I want\n> > to see if we can avoid that from buffer usage stats or at least write\n> > a comment about it, otherwise, we might have to face this question\n> > again and again.\n>\n> Okay, I'll check it.\n>\n\nI've checked the buffer usage differences during parallel btree index creation.\n\nTL;DR:\n\nDuring tuple sorting, individual parallel workers read blocks of\npg_amproc and pg_amproc_fam_proc_index to get the sort support\nfunction.
The call flow is like:\n\nParallelWorkerMain()\n _bt_parallel_scan_and_sort()\n tuplesort_begin_index_btree()\n PrepareSortSupportFromIndexRel()\n FinishSortSupportFunction()\n get_opfamily_proc()\n\nThe details are as follows.\n\nI populated the test table with the following script:\n\ncreate table test (c int) with (autovacuum_enabled = off, parallel_workers = 8);\ninsert into test select generate_series(1,10000000);\n\nand the create index DDL is:\n\ncreate index test_idx on test (c);\n\nBefore executing the test script, I've put code at the following 4\nplaces which checks the buffer usage at that point, and calculated the\ndifference between points: (a), (b) and (c). For example, (b) shows\nthe number of blocks read or hit while scanning the heap and\nbuilding the index.\n\n1. Before executing CREATE INDEX command (at pgss_ProcessUtility())\n(a)\n2. Before parallel create index (at _bt_begin_parallel())\n(b)\n3. After parallel create index, after accumulating workers' stats (at\n_bt_end_parallel())\n(c)\n4. After executing CREATE INDEX command (at pgss_ProcessUtility())\n\nAnd here are the results:\n\n2 workers:\n(a) hit: 107, read: 26\n(b) hit: 12(=6+3+3), read: 44248(=15538+14453+14527)\n(c) hit: 13, read: 2\ntotal hit: 132, read:44276\n\n4 workers:\n(a) hit: 107, read: 26\n(b) hit: 18(=6+3+3+3+3), read: 44248(=9368+8582+8544+9250+8504)\n(c) hit: 13, read: 2\ntotal hit: 138, read:44276\n\nThe table 'test' has 44276 blocks.\n\n From the above results, the total number of blocks read (44248\nblocks) during parallel index creation is stable and equals the\nnumber of blocks of the test table. And we can see that three extra\nblocks are read per worker. These three blocks are two for\npg_amproc_fam_proc_index and one for pg_amproc. That is, individual\nparallel workers access these relations to get the sort support\nfunction.
The full backtrace is:\n\n* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP\n * frame #0: 0x00007fff779c561a libsystem_kernel.dylib`__select + 10\n frame #1: 0x000000010cc9f90d postgres`pg_usleep(microsec=20000000)\nat pgsleep.c:56:10\n frame #2: 0x000000010ca5a668\npostgres`ReadBuffer_common(smgr=0x00007fe872848f70,\nrelpersistence='p', forkNum=MAIN_FORKNUM, blockNum=3, mode=RBM_NORMAL,\nstrategy=0x0000000000000000, hit=0x00007ffee363071b) at bufmgr.c:685:3\n frame #3: 0x000000010ca5a4b6\npostgres`ReadBufferExtended(reln=0x000000010d58f790,\nforkNum=MAIN_FORKNUM, blockNum=3, mode=RBM_NORMAL,\nstrategy=0x0000000000000000) at bufmgr.c:628:8\n frame #4: 0x000000010ca5a397\npostgres`ReadBuffer(reln=0x000000010d58f790, blockNum=3) at\nbufmgr.c:560:9\n frame #5: 0x000000010c67187e\npostgres`_bt_getbuf(rel=0x000000010d58f790, blkno=3, access=1) at\nnbtpage.c:792:9\n frame #6: 0x000000010c670507\npostgres`_bt_getroot(rel=0x000000010d58f790, access=1) at\nnbtpage.c:294:13\n frame #7: 0x000000010c679393\npostgres`_bt_search(rel=0x000000010d58f790, key=0x00007ffee36312d0,\nbufP=0x00007ffee3631bec, access=1, snapshot=0x00007fe8728388e0) at\nnbtsearch.c:107:10\n frame #8: 0x000000010c67b489\npostgres`_bt_first(scan=0x00007fe86f814998, dir=ForwardScanDirection)\nat nbtsearch.c:1355:10\n frame #9: 0x000000010c676869\npostgres`btgettuple(scan=0x00007fe86f814998, dir=ForwardScanDirection)\nat nbtree.c:253:10\n frame #10: 0x000000010c6656ad\npostgres`index_getnext_tid(scan=0x00007fe86f814998,\ndirection=ForwardScanDirection) at indexam.c:530:10\n frame #11: 0x000000010c66585b\npostgres`index_getnext_slot(scan=0x00007fe86f814998,\ndirection=ForwardScanDirection, slot=0x00007fe86f814880) at\nindexam.c:622:10\n frame #12: 0x000000010c663eac\npostgres`systable_getnext(sysscan=0x00007fe86f814828) at genam.c:454:7\n frame #13: 0x000000010cc0be41\npostgres`SearchCatCacheMiss(cache=0x00007fe872818e80, nkeys=4,\nhashValue=3052139574, hashIndex=6, v1=1976, v2=23, 
v3=23, v4=2) at\ncatcache.c:1368:9\n frame #14: 0x000000010cc0bced\npostgres`SearchCatCacheInternal(cache=0x00007fe872818e80, nkeys=4,\nv1=1976, v2=23, v3=23, v4=2) at catcache.c:1299:9\n frame #15: 0x000000010cc0baa8\npostgres`SearchCatCache4(cache=0x00007fe872818e80, v1=1976, v2=23,\nv3=23, v4=2) at catcache.c:1191:9\n frame #16: 0x000000010cc27c82 postgres`SearchSysCache4(cacheId=5,\nkey1=1976, key2=23, key3=23, key4=2) at syscache.c:1156:9\n frame #17: 0x000000010cc105dd\npostgres`get_opfamily_proc(opfamily=1976, lefttype=23, righttype=23,\nprocnum=2) at lsyscache.c:751:7\n frame #18: 0x000000010cc72e1d\npostgres`FinishSortSupportFunction(opfamily=1976, opcintype=23,\nssup=0x00007fe86f8147d0) at sortsupport.c:99:24\n frame #19: 0x000000010cc73100\npostgres`PrepareSortSupportFromIndexRel(indexRel=0x000000010d5ced48,\nstrategy=1, ssup=0x00007fe86f8147d0) at sortsupport.c:176:2\n frame #20: 0x000000010cc75463\npostgres`tuplesort_begin_index_btree(heapRel=0x000000010d5cf808,\nindexRel=0x000000010d5ced48, enforceUnique=false, workMem=21845,\ncoordinate=0x00007fe872839248, randomAccess=false) at\ntuplesort.c:1114:3\n frame #21: 0x000000010c681ffc\npostgres`_bt_parallel_scan_and_sort(btspool=0x00007fe872839738,\nbtspool2=0x0000000000000000, btshared=0x000000010d56c4c0,\nsharedsort=0x000000010d56c460, sharedsort2=0x0000000000000000,\nsortmem=21845, progress=false) at nbtsort.c:1941:23\n frame #22: 0x000000010c681eb2\npostgres`_bt_parallel_build_main(seg=0x00007fe87280a058,\ntoc=0x000000010d56c000) at nbtsort.c:1889:2\n frame #23: 0x000000010c6b7358\npostgres`ParallelWorkerMain(main_arg=1169089032) at parallel.c:1471:2\n frame #24: 0x000000010c9da86f postgres`StartBackgroundWorker at\nbgworker.c:813:2\n frame #25: 0x000000010c9efbc0\npostgres`do_start_bgworker(rw=0x00007fe86f419290) at\npostmaster.c:5852:4\n frame #26: 0x000000010c9eff9f postgres`maybe_start_bgworkers at\npostmaster.c:6078:9\n frame #27: 
0x000000010c9eee99\npostgres`sigusr1_handler(postgres_signal_arg=30) at\npostmaster.c:5247:3\n frame #28: 0x00007fff77a74b5d libsystem_platform.dylib`_sigtramp + 29\n frame #29: 0x00007fff779c561b libsystem_kernel.dylib`__select + 11\n frame #30: 0x000000010c9ea48c postgres`ServerLoop at postmaster.c:1691:13\n frame #31: 0x000000010c9e9e06 postgres`PostmasterMain(argc=5,\nargv=0x00007fe86f4036f0) at postmaster.c:1400:11\n frame #32: 0x000000010c8ee399 postgres`main(argc=<unavailable>,\nargv=<unavailable>) at main.c:210:3\n frame #33: 0x00007fff778893d5 libdyld.dylib`start + 1\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 7 Apr 2020 20:47:14 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Tue, Apr 7, 2020 at 12:00 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-04-07 04:12, Amit Kapila wrote:\n> > On Mon, Apr 6, 2020 at 10:01 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >>\n> >> On Mon, Apr 06, 2020 at 05:01:30PM +0200, Peter Eisentraut wrote:\n> >>> I noticed in some of the screenshots that were tweeted that for example in\n> >>>\n> >>> WAL: records=1 bytes=56\n> >>>\n> >>> there are two spaces between pieces of data. This doesn't match the rest of\n> >>> the EXPLAIN output. Can that be adjusted?\n> >>\n> >> We talked about that here:\n> >> https://www.postgresql.org/message-id/20200402054120.GC14618%40telsasoft.com\n> >>\n> >\n> > Yeah. 
Just to brief here, the main reason was that one of the fields\n> > (full page writes) already had a single space and then we had prior\n> > cases as mentioned in Justin's email [1] where we use two spaces which\n> > lead us to decide using two spaces in this case.\n>\n> We also have existing cases for the other way:\n>\n> actual time=0.050..0.052\n> Buffers: shared hit=3 dirtied=1\n>\n> The cases mentioned by Justin are not formatted in a key=value format,\n> so it's not quite the same, but it also raises the question why they are\n> not.\n>\n> Let's figure out a way to consolidate this without making up a third format.\n\nThe parsability problem Justin was mentioning is only due to \"full\npage writes\", so we could use \"full_page_writes\" or \"fpw\" instead and\nremove the extra spaces. There would be a small discrepancy with the\nverbose autovacuum log, but there are others differences already.\n\nI'd slightly in favor of \"fpw\" to be more concise. Would that be ok?\n\n\n", "msg_date": "Tue, 7 Apr 2020 16:23:47 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, Apr 07, 2020 at 12:00:29PM +0200, Peter Eisentraut wrote:\n> We also have existing cases for the other way:\n> \n> actual time=0.050..0.052\n> Buffers: shared hit=3 dirtied=1\n> \n> The cases mentioned by Justin are not formatted in a key=value format, so\n> it's not quite the same, but it also raises the question why they are not.\n> \n> Let's figure out a way to consolidate this without making up a third format.\n\nSo this re-raises my suggestion here to use colons, Title Case Field Names, and\n\"Size: ..kB\" rather than \"bytes=\":\n|https://www.postgresql.org/message-id/20200403054451.GN14618%40telsasoft.com\n\nAs I see it, the sort/hashjoin style is being used for cases with fields with\ndifferent units:\n\n Sort Method: quicksort Memory: 931kB\n Buckets: 1024 Batches: 1 Memory Usage: 
16kB\n\n..which is distinguished from the case where the units are the same, like\nbuffers (hit=Npages read=Npages dirtied=Npages written=Npages).\n\nNote, as of 1f39bce021, we have hashagg_disk, which looks like this:\n\ntemplate1=# explain analyze SELECT a, COUNT(1) FROM generate_series(1,99999) a GROUP BY 1 ORDER BY 1;\n...\n -> HashAggregate (cost=1499.99..1501.99 rows=200 width=12) (actual time=166.883..280.943 rows=99999 loops=1)\n Group Key: a\n Peak Memory Usage: 4913 kB\n Disk Usage: 1848 kB\n HashAgg Batches: 8\n\nIncremental sort adds yet another variation, which I've mentioned that thread.\nI'm hoping to come to some resolution here, first.\nhttps://www.postgresql.org/message-id/20200407042521.GH2228%40telsasoft.com\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 7 Apr 2020 17:50:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, Apr 7, 2020 at 3:30 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-04-07 04:12, Amit Kapila wrote:\n> > On Mon, Apr 6, 2020 at 10:01 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >>\n> >> On Mon, Apr 06, 2020 at 05:01:30PM +0200, Peter Eisentraut wrote:\n> >>> I noticed in some of the screenshots that were tweeted that for example in\n> >>>\n> >>> WAL: records=1 bytes=56\n> >>>\n> >>> there are two spaces between pieces of data. This doesn't match the rest of\n> >>> the EXPLAIN output. Can that be adjusted?\n> >>\n> >> We talked about that here:\n> >> https://www.postgresql.org/message-id/20200402054120.GC14618%40telsasoft.com\n> >>\n> >\n> > Yeah. 
Just to brief here, the main reason was that one of the fields\n> > (full page writes) already had a single space and then we had prior\n> > cases as mentioned in Justin's email [1] where we use two spaces which\n> > lead us to decide using two spaces in this case.\n>\n> We also have existing cases for the other way:\n>\n> actual time=0.050..0.052\n> Buffers: shared hit=3 dirtied=1\n>\n\nBuffers case is not the same because 'shared' is used for 'hit',\n'read', 'dirtied', etc. However, I think it is arguable.\n\n> The cases mentioned by Justin are not formatted in a key=value format,\n> so it's not quite the same, but it also raises the question why they are\n> not.\n>\n> Let's figure out a way to consolidate this without making up a third format.\n>\n\nSure, I think my intention is to keep the format of WAL stats as close\nto Buffers stats as possible because both depict I/O and users would\nprobably be interested to check/read both together. There is a point\nto keep things in a format so that it is easier for someone to parse\nbut I guess as these as fixed 'words', it shouldn't be difficult\neither way and we should give more weightage to consistency. Any\nsuggestions?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Apr 2020 08:36:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, Apr 7, 2020 at 5:17 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 7 Apr 2020 at 18:29, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 7 Apr 2020 at 17:42, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Apr 7, 2020 at 1:30 PM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > Buffer usage statistics seem correct. 
The small differences would be\n> > > > catalog lookups Peter mentioned.\n> > > >\n> > >\n> > > Agreed, but can you check which part of code does that lookup? I want\n> > > to see if we can avoid that from buffer usage stats or at least write\n> > > a comment about it, otherwise, we might have to face this question\n> > > again and again.\n> >\n> > Okay, I'll check it.\n> >\n>\n> I've checked the buffer usage differences when parallel btree index creation.\n>\n> TL;DR;\n>\n> During tuple sorting individual parallel workers read blocks of\n> pg_amproc and pg_amproc_fam_proc_index to get the sort support\n> function. The call flow is like:\n>\n> ParallelWorkerMain()\n> _bt_parallel_scan_and_sort()\n> tuplesort_begin_index_btree()\n> PrepareSortSupportFromIndexRel()\n> FinishSortSupportFunction()\n> get_opfamily_proc()\n>\n\nThanks for the investigation. I don't see we can do anything special\nabout this. In an ideal world, this should be done once and not for\neach worker but I guess it doesn't matter too much. I am not sure if\nit is worth adding a comment for this, what do you think?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Apr 2020 11:13:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Wed, 8 Apr 2020 at 14:44, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 7, 2020 at 5:17 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 7 Apr 2020 at 18:29, Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Tue, 7 Apr 2020 at 17:42, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Apr 7, 2020 at 1:30 PM Masahiko Sawada\n> > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > >\n> > > > > Buffer usage statistics seem correct. 
The small differences would be\n> > > > > catalog lookups Peter mentioned.\n> > > > >\n> > > >\n> > > > Agreed, but can you check which part of code does that lookup? I want\n> > > > to see if we can avoid that from buffer usage stats or at least write\n> > > > a comment about it, otherwise, we might have to face this question\n> > > > again and again.\n> > >\n> > > Okay, I'll check it.\n> > >\n> >\n> > I've checked the buffer usage differences when parallel btree index creation.\n> >\n> > TL;DR;\n> >\n> > During tuple sorting individual parallel workers read blocks of\n> > pg_amproc and pg_amproc_fam_proc_index to get the sort support\n> > function. The call flow is like:\n> >\n> > ParallelWorkerMain()\n> > _bt_parallel_scan_and_sort()\n> > tuplesort_begin_index_btree()\n> > PrepareSortSupportFromIndexRel()\n> > FinishSortSupportFunction()\n> > get_opfamily_proc()\n> >\n>\n> Thanks for the investigation. I don't see we can do anything special\n> about this. In an ideal world, this should be done once and not for\n> each worker but I guess it doesn't matter too much. I am not sure if\n> it is worth adding a comment for this, what do you think?\n>\n\nI agree with you. If the differences were considerably large probably\nwe would do something but I think we don't need to anything at this\ntime.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Apr 2020 15:23:00 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Wed, Apr 8, 2020 at 11:53 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 8 Apr 2020 at 14:44, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Thanks for the investigation. I don't see we can do anything special\n> > about this. 
In an ideal world, this should be done once and not for\n> > each worker but I guess it doesn't matter too much. I am not sure if\n> > it is worth adding a comment for this, what do you think?\n> >\n>\n> I agree with you. If the differences were considerably large probably\n> we would do something but I think we don't need to anything at this\n> time.\n>\n\nFair enough, can you once check this in back-branches as this needs to\nbe backpatched? I will do that once by myself as well.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Apr 2020 12:34:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Wed, Apr 8, 2020 at 8:23 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 8 Apr 2020 at 14:44, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Apr 7, 2020 at 5:17 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Tue, 7 Apr 2020 at 18:29, Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > On Tue, 7 Apr 2020 at 17:42, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Apr 7, 2020 at 1:30 PM Masahiko Sawada\n> > > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > > >\n> > > > > > Buffer usage statistics seem correct. The small differences would be\n> > > > > > catalog lookups Peter mentioned.\n> > > > > >\n> > > > >\n> > > > > Agreed, but can you check which part of code does that lookup? 
I want\n> > > > > to see if we can avoid that from buffer usage stats or at least write\n> > > > > a comment about it, otherwise, we might have to face this question\n> > > > > again and again.\n> > > >\n> > > > Okay, I'll check it.\n> > > >\n> > >\n> > > I've checked the buffer usage differences when parallel btree index creation.\n> > >\n> > > TL;DR;\n> > >\n> > > During tuple sorting individual parallel workers read blocks of\n> > > pg_amproc and pg_amproc_fam_proc_index to get the sort support\n> > > function. The call flow is like:\n> > >\n> > > ParallelWorkerMain()\n> > > _bt_parallel_scan_and_sort()\n> > > tuplesort_begin_index_btree()\n> > > PrepareSortSupportFromIndexRel()\n> > > FinishSortSupportFunction()\n> > > get_opfamily_proc()\n> > >\n> >\n> > Thanks for the investigation. I don't see we can do anything special\n> > about this. In an ideal world, this should be done once and not for\n> > each worker but I guess it doesn't matter too much. I am not sure if\n> > it is worth adding a comment for this, what do you think?\n> >\n>\n> I agree with you. If the differences were considerably large probably\n> we would do something but I think we don't need to anything at this\n> time.\n\n+1\n\n\n", "msg_date": "Wed, 8 Apr 2020 09:11:41 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Wed, 8 Apr 2020 at 16:04, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 8, 2020 at 11:53 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Wed, 8 Apr 2020 at 14:44, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Thanks for the investigation. I don't see we can do anything special\n> > > about this. In an ideal world, this should be done once and not for\n> > > each worker but I guess it doesn't matter too much. 
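The accumulation being fixed here can be pictured with a toy model (plain Python, not the actual C implementation; the names loosely mirror `BufferUsageAccumDiff()` and the figures loosely echo the parallel CREATE INDEX tests in this thread): each worker snapshots its counters at start, and the leader adds each worker's end-minus-start difference to its own totals.

```python
# Toy model of folding per-worker buffer usage into the leader's counters.
from dataclasses import dataclass

@dataclass
class BufferUsage:
    shared_blks_hit: int = 0
    shared_blks_read: int = 0

    def accum_diff(self, end: "BufferUsage", start: "BufferUsage") -> None:
        # Add (end - start) for each counter, in the spirit of
        # PostgreSQL's BufferUsageAccumDiff().
        self.shared_blks_hit += end.shared_blks_hit - start.shared_blks_hit
        self.shared_blks_read += end.shared_blks_read - start.shared_blks_read

# Leader's own activity only (roughly what pgss showed before the fix).
leader = BufferUsage(shared_blks_hit=100, shared_blks_read=8844)

# Two hypothetical workers, each with (start, end) counter snapshots.
workers = [
    (BufferUsage(0, 0), BufferUsage(10, 17720)),
    (BufferUsage(0, 0), BufferUsage(10, 17720)),
]
for start, end in workers:
    leader.accum_diff(end, start)

# With accumulation, the total approaches the serial figure (~44283 reads)
# instead of only the leader's ~8844.
assert leader.shared_blks_read == 8844 + 2 * 17720
```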
I am not sure if\n> > > it is worth adding a comment for this, what do you think?\n> > >\n> >\n> > I agree with you. If the differences were considerably large probably\n> > we would do something but I think we don't need to anything at this\n> > time.\n> >\n>\n> Fair enough, can you once check this in back-branches as this needs to\n> be backpatched? I will do that once by myself as well.\n\nI've done the same test with HEAD of both REL_12_STABLE and\nREL_11_STABLE. I think the patch needs to be backpatched to PG11 where\nparallel index creation was introduced. I've attached the patches\nfor PG12 and PG11 I used for this test for reference.\n\nHere are the results:\n\n* PG12\n\nWith no worker:\n-[ RECORD 1 ]-------+-------------\nshared_blks_hit | 119\nshared_blks_read | 44283\ntotal_read_blks | 44402\nshared_blks_dirtied | 44262\nshared_blks_written | 24925\n\nWith 4 workers:\n-[ RECORD 1 ]-------+------------\nshared_blks_hit | 128\nshared_blks_read | 8844\ntotal_read_blks | 8972\nshared_blks_dirtied | 8822\nshared_blks_written | 5393\n\nWith 4 workers after patching:\n-[ RECORD 1 ]-------+------------\nshared_blks_hit | 140\nshared_blks_read | 44284\ntotal_read_blks | 44424\nshared_blks_dirtied | 44262\nshared_blks_written | 26574\n\n* PG11\n\nWith no worker:\n-[ RECORD 1 ]-------+------------\nshared_blks_hit | 124\nshared_blks_read | 44284\ntotal_read_blks | 44408\nshared_blks_dirtied | 44263\nshared_blks_written | 24908\n\nWith 4 workers:\n-[ RECORD 1 ]-------+-------------\nshared_blks_hit | 132\nshared_blks_read | 8910\ntotal_read_blks | 9042\nshared_blks_dirtied | 8888\nshared_blks_written | 5370\n\nWith 4 workers after patched:\n-[ RECORD 1 ]-------+-------------\nshared_blks_hit | 144\nshared_blks_read | 44285\ntotal_read_blks | 44429\nshared_blks_dirtied | 44263\nshared_blks_written | 26861\n\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 8 Apr 
2020 17:19:19 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Wed, Apr 8, 2020 at 1:49 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 8 Apr 2020 at 16:04, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Apr 8, 2020 at 11:53 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Wed, 8 Apr 2020 at 14:44, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Thanks for the investigation. I don't see we can do anything special\n> > > > about this. In an ideal world, this should be done once and not for\n> > > > each worker but I guess it doesn't matter too much. I am not sure if\n> > > > it is worth adding a comment for this, what do you think?\n> > > >\n> > >\n> > > I agree with you. If the differences were considerably large probably\n> > > we would do something but I think we don't need to anything at this\n> > > time.\n> > >\n> >\n> > Fair enough, can you once check this in back-branches as this needs to\n> > be backpatched? I will do that once by myself as well.\n>\n> I've done the same test with HEAD of both REL_12_STABLE and\n> REL_11_STABLE. I think the patch needs to be backpatched to PG11 where\n> parallel index creation was introduced. 
I've attached the patches\n> for PG12 and PG11 I used for this test for reference.\n>\n\nThanks, I will once again verify and push this tomorrow if there are\nno other comments.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Apr 2020 14:35:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Tue, Apr 7, 2020 at 2:48 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Apr 7, 2020 at 4:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 6, 2020 at 7:58 PM Euler Taveira\n> > <euler.taveira@2ndquadrant.com> wrote:\n> > >\n> > > On Mon, 6 Apr 2020 at 10:37, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >>\n> > >> On Mon, Apr 06, 2020 at 10:12:55AM -0300, Euler Taveira wrote:\n> > >> > On Mon, 6 Apr 2020 at 00:25, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >> >\n> > >> > >\n> > >> > > I have pushed pg_stat_statements and Explain related patches. I am\n> > >> > > now looking into (auto)vacuum patch and have few comments.\n> > >> > >\n> > >> > > I wasn't paying much attention to this thread. May I suggest changing\n> > >> > wal_num_fpw to wal_fpw? wal_records and wal_bytes does not have a prefix\n> > >> > 'num'. It seems inconsistent to me.\n> > >> >\n> > >>\n> > >> If we want to be consistent shouldn't we rename it to wal_fpws? FTR I don't\n> > >> like much either version.\n> > >\n> > >\n> > > Since FPW is an acronym, plural form reads better when you are using uppercase (such as FPWs or FPW's); thus, I prefer singular form because parameter names are lowercase. Function description will clarify that this is \"number of WAL full page writes\".\n> > >\n> >\n> > I like Euler's suggestion to change wal_num_fpw to wal_fpw. 
It is\n> > better if others who didn't like this name can also share their\n> > opinion now because changing multiple times the same thing is not a\n> > good idea.\n>\n> +1\n>\n> About Justin and your comments on the other thread:\n>\n> On Tue, Apr 7, 2020 at 4:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 6, 2020 at 10:04 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > On Thu, Apr 02, 2020 at 08:29:31AM +0200, Julien Rouhaud wrote:\n> > > > > > \"full page records\" seems to be showing the number of full page\n> > > > > > images, not the record having full page images.\n> > > > >\n> > > > > I am not sure what exactly is a difference but it is the records\n> > > > > having full page images. Julien correct me if I am wrong.\n> > >\n> > > > Obviously previous complaints about the meaning and parsability of\n> > > > \"full page writes\" should be addressed here for consistency.\n> > >\n> > > There's a couple places that say \"full page image records\" which I think is\n> > > language you were trying to avoid. It's the number of pages, not the number of\n> > > records, no ? I see explain and autovacuum say what I think is wanted, but\n> > > these say the wrong thing? Find attached slightly larger patch.\n> > >\n> > > $ git grep 'image record'\n> > > contrib/pg_stat_statements/pg_stat_statements.c: int64 wal_num_fpw; /* # of WAL full page image records generated */\n> > > doc/src/sgml/ref/explain.sgml: number of records, number of full page image records and amount of WAL\n> > >\n> >\n> > Few comments:\n> > 1.\n> > - int64 wal_num_fpw; /* # of WAL full page image records generated */\n> > + int64 wal_num_fpw; /* # of WAL full page images generated */\n> >\n> > Let's change comment as \" /* # of WAL full page writes generated */\"\n> > to be consistent with other places like instrument.h. Also, make a\n> > similar change at other places if required.\n>\n> Agreed. That's pg_stat_statements.c and instrument.h. 
I'll send a\n> patch once we reach consensus with the rest of the comments.\n>\n\nWould you like to send a consolidated patch that includes Euler's\nsuggestion and Justin's patch (by making changes for points we\ndiscussed.)? I think we can keep the point related to number of\nspaces before each field open?\n\n> > 2.\n> > <entry>\n> > - Total amount of WAL bytes generated by the statement\n> > + Total number of WAL bytes generated by the statement\n> > </entry>\n> >\n> > I feel the previous text was better as this field can give us the size\n> > of WAL with which we can answer \"how much WAL data is generated by a\n> > particular statement?\". Julien, do you have any thoughts on this?\n>\n> I also prefer \"amount\" as it feels more natural.\n>\n\nAs we see no other opinion on this matter, we can use \"amount\" here.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Apr 2020 11:46:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Fri, Apr 10, 2020 at 8:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 7, 2020 at 2:48 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Tue, Apr 7, 2020 at 4:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 6, 2020 at 7:58 PM Euler Taveira\n> > > <euler.taveira@2ndquadrant.com> wrote:\n> > > Few comments:\n> > > 1.\n> > > - int64 wal_num_fpw; /* # of WAL full page image records generated */\n> > > + int64 wal_num_fpw; /* # of WAL full page images generated */\n> > >\n> > > Let's change comment as \" /* # of WAL full page writes generated */\"\n> > > to be consistent with other places like instrument.h. Also, make a\n> > > similar change at other places if required.\n> >\n> > Agreed. That's pg_stat_statements.c and instrument.h. 
I'll send a\n> > patch once we reach consensus with the rest of the comments.\n> >\n>\n> Would you like to send a consolidated patch that includes Euler's\n> suggestion and Justin's patch (by making changes for points we\n> discussed.)? I think we can keep the point related to number of\n> spaces before each field open?\n\nSure, I'll take care of that tomorrow!\n\n> > > 2.\n> > > <entry>\n> > > - Total amount of WAL bytes generated by the statement\n> > > + Total number of WAL bytes generated by the statement\n> > > </entry>\n> > >\n> > > I feel the previous text was better as this field can give us the size\n> > > of WAL with which we can answer \"how much WAL data is generated by a\n> > > particular statement?\". Julien, do you have any thoughts on this?\n> >\n> > I also prefer \"amount\" as it feels more natural.\n> >\n>\n> As we see no other opinion on this matter, we can use \"amount\" here.\n\nOk.\n\n\n", "msg_date": "Fri, 10 Apr 2020 21:37:54 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Fri, Apr 10, 2020 at 9:37 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Apr 10, 2020 at 8:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Would you like to send a consolidated patch that includes Euler's\n> > suggestion and Justin's patch (by making changes for points we\n> > discussed.)? I think we can keep the point related to number of\n> > spaces before each field open?\n>\n> Sure, I'll take care of that tomorrow!\n\nI tried to take into account all that have been discussed, but I have\nto admit that I'm absolutely not sure of what was actually decided\nhere. 
I went with those changes:\n\n- rename wal_num_fpw to wal_fpw for consistency, both in pgss view\nfield name but also everywhere in the code\n- change comments to consistently mention \"full page writes generated\"\n- changed pgss and explain documentation to mention \"full page images\ngenerated\", from Justin's patch on another thread\n- kept \"amount\" of WAL bytes\n- no change to the explain output as I have no idea what is the\nconsensus (one or two spaces, use semicolon or equal, show unit or\nnot)", "msg_date": "Sat, 11 Apr 2020 15:24:47 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sat, Mar 28, 2020 at 04:17:21PM +0100, Julien Rouhaud wrote:\n> On Sat, Mar 28, 2020 at 02:38:27PM +0100, Julien Rouhaud wrote:\n> > On Sat, Mar 28, 2020 at 04:14:04PM +0530, Amit Kapila wrote:\n> > > \n> > > I see some basic problems with the patch. The way it tries to compute\n> > > WAL usage for parallel stuff doesn't seem right to me. Can you share\n> > > or point me to any test done where we have computed WAL for parallel\n> > > operations like Parallel Vacuum or Parallel Create Index?\n> > \n> > Ah, that's indeed a good point and AFAICT WAL records from parallel utility\n> > workers won't be accounted for. That being said, I think that an argument\n> > could be made that proper infrastructure should have been added in the original\n> > parallel utility patches, as pg_stat_statement is already broken wrt. 
buffer\n> > usage in parallel utility, unless I'm missing something.\n> \n> Just to be sure I did a quick test with pg_stat_statements behavior using\n> parallel/non-parallel CREATE INDEX and VACUUM, and unsurprisingly buffer usage\n> doesn't reflect parallel workers' activity.\n> \n> I added an open for that, and adding Robert in Cc as 9da0cc352 is the first\n> commit adding parallel maintenance.\n\nI believe this is resolved for parallel vacuum in master and parallel create\nindex back to PG11.\n\nI marked this as closed.\nhttps://wiki.postgresql.org/index.php?title=PostgreSQL_13_Open_Items&diff=34802&oldid=34781\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 11 Apr 2020 17:33:19 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Sun, Apr 12, 2020 at 00:33, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Sat, Mar 28, 2020 at 04:17:21PM +0100, Julien Rouhaud wrote:\n> >\n> > Just to be sure I did a quick test with pg_stat_statements behavior using\n> > parallel/non-parallel CREATE INDEX and VACUUM, and unsurprisingly buffer\n> usage\n> > doesn't reflect parallel workers' activity.\n> >\n> > I added an open for that, and adding Robert in Cc as 9da0cc352 is the\n> first\n> > commit adding parallel maintenance.\n>\n> I believe this is resolved for parallel vacuum in master and parallel\n> create\n> index back to PG11.\n>\n\nindeed, I was about to take care of this too\n\n\n> I marked this as closed.\n>\n> https://wiki.postgresql.org/index.php?title=PostgreSQL_13_Open_Items&diff=34802&oldid=34781\n\n\nthanks a lot!\n", "msg_date": "Sun, 12 Apr 2020 12:53:58 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Sun, Apr 12, 2020 at 4:03 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sat, Mar 28, 2020 at 04:17:21PM +0100, Julien Rouhaud wrote:\n> > On Sat, Mar 28, 2020 at 02:38:27PM +0100, Julien Rouhaud wrote:\n> > > On Sat, Mar 28, 2020 at 04:14:04PM +0530, Amit Kapila wrote:\n> > > >\n> > > > I see some basic problems with the patch. The way it tries to compute\n> > > > WAL usage for parallel stuff doesn't seem right to me. Can you share\n> > > > or point me to any test done where we have computed WAL for parallel\n> > > > operations like Parallel Vacuum or Parallel Create Index?\n> > >\n> > > Ah, that's indeed a good point and AFAICT WAL records from parallel utility\n> > > workers won't be accounted for. That being said, I think that an argument\n> > > could be made that proper infrastructure should have been added in the original\n> > > parallel utility patches, as pg_stat_statement is already broken wrt. 
buffer\n> > > usage in parallel utility, unless I'm missing something.\n> >\n> > Just to be sure I did a quick test with pg_stat_statements behavior using\n> > parallel/non-parallel CREATE INDEX and VACUUM, and unsurprisingly buffer usage\n> > doesn't reflect parallel workers' activity.\n> >\n> > I added an open for that, and adding Robert in Cc as 9da0cc352 is the first\n> > commit adding parallel maintenance.\n>\n> I believe this is resolved for parallel vacuum in master and parallel create\n> index back to PG11.\n>\n> I marked this as closed.\n> https://wiki.postgresql.org/index.php?title=PostgreSQL_13_Open_Items&diff=34802&oldid=34781\n>\n\nOkay, thanks.\n\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Apr 2020 10:12:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements issue with parallel maintenance (Was Re: WAL\n usage calculation patch)" }, { "msg_contents": "On Sat, Apr 11, 2020 at 6:55 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Apr 10, 2020 at 9:37 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Fri, Apr 10, 2020 at 8:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Would you like to send a consolidated patch that includes Euler's\n> > > suggestion and Justin's patch (by making changes for points we\n> > > discussed.)? I think we can keep the point related to number of\n> > > spaces before each field open?\n> >\n> > Sure, I'll take care of that tomorrow!\n>\n> I tried to take into account all that have been discussed, but I have\n> to admit that I'm absolutely not sure of what was actually decided\n> here. 
I went with those changes:\n>\n> - rename wal_num_fpw to wal_fpw for consistency, both in pgss view\n> fiel name but also everywhere in the code\n> - change comments to consistently mention \"full page writes generated\"\n> - changed pgss and explain documentation to mention \"full page images\n> generated\", from Justin's patch on another thread\n>\n\nI think it is better to use \"full page writes\" to be consistent with\nother places.\n\n> - kept \"amount\" of WAL bytes\n>\n\nOkay, but I would like to make another change suggested by Justin\nwhich is to replace \"count\" with \"number\" at a few places.\n\nI have made the above two changes in the attached. Let me know what\nyou think about attached?\n\n> - no change to the explain output as I have no idea what is the\n> consensus (one or two spaces, use semicolon or equal, show unit or\n> not)\n>\n\nYeah, let's do this separately once we have consensus.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 13 Apr 2020 11:40:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Apr 13, 2020 at 8:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Apr 11, 2020 at 6:55 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Fri, Apr 10, 2020 at 9:37 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > I tried to take into account all that have been discussed, but I have\n> > to admit that I'm absolutely not sure of what was actually decided\n> > here. 
I went with those changes:\n> >\n> > - rename wal_num_fpw to wal_fpw for consistency, both in pgss view\n> > fiel name but also everywhere in the code\n> > - change comments to consistently mention \"full page writes generated\"\n> > - changed pgss and explain documentation to mention \"full page images\n> > generated\", from Justin's patch on another thread\n> >\n>\n> I think it is better to use \"full page writes\" to be consistent with\n> other places.\n>\n> > - kept \"amount\" of WAL bytes\n> >\n>\n> Okay, but I would like to make another change suggested by Justin\n> which is to replace \"count\" with \"number\" at a few places.\n\nAh sorry I missed this one. +1 it also sounds better.\n\n> I have made the above two changes in the attached. Let me know what\n> you think about attached?\n\nIt all looks good to me!\n\n> > - no change to the explain output as I have no idea what is the\n> > consensus (one or two spaces, use semicolon or equal, show unit or\n> > not)\n> >\n>\n> Yeah, let's do this separately once we have consensus.\n\nAgreed.\n\n\n", "msg_date": "Mon, 13 Apr 2020 09:40:14 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Apr 13, 2020 at 1:10 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Apr 13, 2020 at 8:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Apr 11, 2020 at 6:55 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > On Fri, Apr 10, 2020 at 9:37 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > >\n> > > I tried to take into account all that have been discussed, but I have\n> > > to admit that I'm absolutely not sure of what was actually decided\n> > > here. 
I went with those changes:\n> > >\n> > > - rename wal_num_fpw to wal_fpw for consistency, both in pgss view\n> > > fiel name but also everywhere in the code\n> > > - change comments to consistently mention \"full page writes generated\"\n> > > - changed pgss and explain documentation to mention \"full page images\n> > > generated\", from Justin's patch on another thread\n> > >\n> >\n> > I think it is better to use \"full page writes\" to be consistent with\n> > other places.\n> >\n> > > - kept \"amount\" of WAL bytes\n> > >\n> >\n> > Okay, but I would like to make another change suggested by Justin\n> > which is to replace \"count\" with \"number\" at a few places.\n>\n> Ah sorry I missed this one. +1 it also sounds better.\n>\n> > I have made the above two changes in the attached. Let me know what\n> > you think about attached?\n>\n> It all looks good to me!\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Apr 2020 17:16:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "Le lun. 13 avr. 2020 à 13:47, Amit Kapila <amit.kapila16@gmail.com> a\nécrit :\n\n> On Mon, Apr 13, 2020 at 1:10 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Mon, Apr 13, 2020 at 8:11 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > >\n> > > On Sat, Apr 11, 2020 at 6:55 PM Julien Rouhaud <rjuju123@gmail.com>\n> wrote:\n> > > >\n> > > > On Fri, Apr 10, 2020 at 9:37 PM Julien Rouhaud <rjuju123@gmail.com>\n> wrote:\n> > > > >\n> > > > I tried to take into account all that have been discussed, but I have\n> > > > to admit that I'm absolutely not sure of what was actually decided\n> > > > here. 
I went with those changes:\n> > > >\n> > > > - rename wal_num_fpw to wal_fpw for consistency, both in pgss view\n> > > > fiel name but also everywhere in the code\n> > > > - change comments to consistently mention \"full page writes\n> generated\"\n> > > > - changed pgss and explain documentation to mention \"full page images\n> > > > generated\", from Justin's patch on another thread\n> > > >\n> > >\n> > > I think it is better to use \"full page writes\" to be consistent with\n> > > other places.\n> > >\n> > > > - kept \"amount\" of WAL bytes\n> > > >\n> > >\n> > > Okay, but I would like to make another change suggested by Justin\n> > > which is to replace \"count\" with \"number\" at a few places.\n> >\n> > Ah sorry I missed this one. +1 it also sounds better.\n> >\n> > > I have made the above two changes in the attached. Let me know what\n> > > you think about attached?\n> >\n> > It all looks good to me!\n> >\n>\n> Pushed.\n>\n\nThanks a lot Amit!\n", "msg_date": "Mon, 13 Apr 2020 15:37:13 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Wed, Apr 8, 2020 at 8:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 7, 2020 at 3:30 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> >\n> > We also have existing cases for the other way:\n> >\n> > actual time=0.050..0.052\n> > Buffers: shared hit=3 dirtied=1\n> >\n>\n> Buffers case is not the same because 'shared' is used for 'hit',\n> 'read', 'dirtied', etc. 
However, I think it is arguable.\n>\n> > The cases mentioned by Justin are not formatted in a key=value format,\n> > so it's not quite the same, but it also raises the question why they are\n> > not.\n> >\n> > Let's figure out a way to consolidate this without making up a third format.\n> >\n>\n> Sure, I think my intention is to keep the format of WAL stats as close\n> to Buffers stats as possible because both depict I/O and users would\n> probably be interested to check/read both together. There is a point\n> to keep things in a format so that it is easier for someone to parse\n> but I guess as these as fixed 'words', it shouldn't be difficult\n> either way and we should give more weightage to consistency. Any\n> suggestions?\n>\n\nPeter E, others, any suggestions on how to move forward? I think here\nwe should follow the rule \"follow the style of nearby code\" which in\nthis case would be to have one space after each field as we would like\nit to be closer to the \"Buffers\" format. It would be good if we have\na unified format among all Explain stuff but we might not want to\nchange the existing things and even if we want to do that it might be\na broader/bigger change and we should do that as a PG14 change. What\ndo you think?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Apr 2020 09:27:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On 2020-04-14 05:57, Amit Kapila wrote:\n> Peter E, others, any suggestions on how to move forward? I think here\n> we should follow the rule \"follow the style of nearby code\" which in\n> this case would be to have one space after each field as we would like\n> it to be closer to the \"Buffers\" format. 
It would be good if we have\n> a unified format among all Explain stuff but we might not want to\n> change the existing things and even if we want to do that it might be\n> a broader/bigger change and we should do that as a PG14 change. What\n> do you think?\n\nIt looks like shortening to fpw= and using one space is the easiest way\nto solve this issue.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 17 Apr 2020 15:15:22 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Fri, Apr 17, 2020 at 6:45 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-04-14 05:57, Amit Kapila wrote:\n> > Peter E, others, any suggestions on how to move forward? I think here\n> > we should follow the rule \"follow the style of nearby code\" which in\n> > this case would be to have one space after each field as we would like\n> > it to be closer to the \"Buffers\" format. It would be good if we have\n> > a unified format among all Explain stuff but we might not want to\n> > change the existing things and even if we want to do that it might be\n> > a broader/bigger change and we should do that as a PG14 change. What\n> > do you think?\n>\n> If looks like shortening to fpw= and using one space is the easiest way\n> to solve this issue.\n>\n\nI am fine with this approach and will change accordingly. 
I will wait\nfor a few days (3-4 days) to see if someone shows up with either an\nobjection to this or with a better idea for the display of WAL usage\ninformation.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 18 Apr 2020 09:46:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sat, Apr 18, 2020 at 6:16 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 17, 2020 at 6:45 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> > On 2020-04-14 05:57, Amit Kapila wrote:\n> > > Peter E, others, any suggestions on how to move forward? I think here\n> > > we should follow the rule \"follow the style of nearby code\" which in\n> > > this case would be to have one space after each field as we would like\n> > > it to be closer to the \"Buffers\" format. It would be good if we have\n> > > a unified format among all Explain stuff but we might not want to\n> > > change the existing things and even if we want to do that it might be\n> > > a broader/bigger change and we should do that as a PG14 change. What\n> > > do you think?\n> >\n> > If looks like shortening to fpw= and using one space is the easiest way\n> > to solve this issue.\n> >\n>\n> I am fine with this approach and will change accordingly. I will wait\n> for a few days (3-4 days) to see if someone shows up with either an\n> objection to this or with a better idea for the display of WAL usage\n> information.\n\nThat was also my preferred alternative. PFA a patch for that. 
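For readers following the thread, a minimal sketch of the format being agreed on here; the table name wal_demo and every number shown are invented for illustration and will vary with the instance and checkpoint timing:\n\n```sql\n-- Sketch only: \"wal_demo\" and all reported numbers are invented.\nCREATE TABLE wal_demo (id integer);\nCHECKPOINT;  -- the next touch of each page then generates a full page write\nEXPLAIN (ANALYZE, WAL, COSTS OFF)\nINSERT INTO wal_demo SELECT generate_series(1, 1000);\n--  Insert on wal_demo (actual time=1.01..1.01 rows=0 loops=1)\n--  WAL: records=1000 fpw=5 bytes=65820\n```\n\nThe same counters surface in pg_stat_statements as wal_records, wal_fpw and wal_bytes.\n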
I also\nchanged to \"fpw\" for the non textual output for consistency.", "msg_date": "Sat, 18 Apr 2020 17:39:35 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Sat, Apr 18, 2020 at 05:39:35PM +0200, Julien Rouhaud wrote:\n> On Sat, Apr 18, 2020 at 6:16 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Apr 17, 2020 at 6:45 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> > > On 2020-04-14 05:57, Amit Kapila wrote:\n> > > > Peter E, others, any suggestions on how to move forward? I think here\n> > > > we should follow the rule \"follow the style of nearby code\" which in\n> > > > this case would be to have one space after each field as we would like\n> > > > it to be closer to the \"Buffers\" format. It would be good if we have\n> > > > a unified format among all Explain stuff but we might not want to\n> > > > change the existing things and even if we want to do that it might be\n> > > > a broader/bigger change and we should do that as a PG14 change. What\n> > > > do you think?\n> > >\n> > > If looks like shortening to fpw= and using one space is the easiest way\n> > > to solve this issue.\n> > >\n> >\n> > I am fine with this approach and will change accordingly. I will wait\n> > for a few days (3-4 days) to see if someone shows up with either an\n> > objection to this or with a better idea for the display of WAL usage\n> > information.\n> \n> That was also my preferred alternative. PFA a patch for that. I also\n> changed to \"fpw\" for the non textual output for consistency.\n\nShould capitalize at least the non-text one ? 
And maybe the text one for\nconsistency ?\n\n+ ExplainPropertyInteger(\"WAL fpw\", NULL, \n\nAnd add the acronym to the docs:\n\n$ git grep 'full page' '*/explain.sgml'\ndoc/src/sgml/ref/explain.sgml: number of records, number of full page writes and amount of WAL bytes\n\n\"..full page writes (FPW)..\"\n\nShould we also change vacuumlazy.c for consistency ?\n\n+ _(\"WAL usage: %ld records, %ld full page writes, \"\n+ UINT64_FORMAT \" bytes\"),\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 18 Apr 2020 15:41:05 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "Hi Justin,\n\nThanks for the review!\n\nOn Sat, Apr 18, 2020 at 10:41 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Should capitalize at least the non-text one ? And maybe the text one for\n> consistency ?\n>\n> + ExplainPropertyInteger(\"WAL fpw\", NULL,\n\nI think we should keep both versions consistent, whether lower or upper\ncase. The uppercase version is probably more correct, but it's a\nlittle bit weird to have it being the only upper case label in all\noutput, so I kept it lower case.\n\n> And add the acronym to the docs:\n>\n> $ git grep 'full page' '*/explain.sgml'\n> doc/src/sgml/ref/explain.sgml: number of records, number of full page writes and amount of WAL bytes\n>\n> \"..full page writes (FPW)..\"\n\nIndeed! 
Fixed (using lowercase to match current output).\n\n> Should we also change vacuumlazy.c for consistency ?\n>\n> + _(\"WAL usage: %ld records, %ld full page writes, \"\n> + UINT64_FORMAT \" bytes\"),\n\nI don't think this one should be changed, vacuumlazy output is already\nentirely different, and is way more verbose so keeping it as is makes\nsense to me.", "msg_date": "Sun, 19 Apr 2020 16:22:26 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "At Sun, 19 Apr 2020 16:22:26 +0200, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> Hi Justin,\n> \n> Thanks for the review!\n> \n> On Sat, Apr 18, 2020 at 10:41 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > Should capitalize at least the non-text one ? And maybe the text one for\n> > consistency ?\n> >\n> > + ExplainPropertyInteger(\"WAL fpw\", NULL,\n> \n> I think we should keep both version consistent, whether lower or upper\n> case. The uppercase version is probably more correct, but it's a\n> little bit weird to have it being the only upper case label in all\n> output, so I kept it lower case.\n\nOne space followed by an acronym looks perfect. I'd prefer capital\nletters but small letters also work well.\n\n> > And add the acronym to the docs:\n> >\n> > $ git grep 'full page' '*/explain.sgml'\n> > doc/src/sgml/ref/explain.sgml: number of records, number of full page writes and amount of WAL bytes\n> >\n> > \"..full page writes (FPW)..\"\n> \n> Indeed! 
Fixed (using lowercase to match current output).\n\nI searched through the documentation and AFAICS most of occurrences of\n\"full page\" are followed by \"image\" and full_page_writes is used only\nas the parameter name.\n\nI'm fine with fpw as the acronym, but \"fpw means the number of full\npage images\" looks odd..\n\n> > Should we also change vacuumlazy.c for consistency ?\n> >\n> > + _(\"WAL usage: %ld records, %ld full page writes, \"\n> > + UINT64_FORMAT \" bytes\"),\n> \n> I don't think this one should be changed, vacuumlazy output is already\n> entirely different, and is way more verbose so keeping it as is makes\n> sense to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 20 Apr 2020 16:46:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Apr 20, 2020 at 1:17 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Sun, 19 Apr 2020 16:22:26 +0200, Julien Rouhaud <rjuju123@gmail.com> wrote in\n> > Hi Justin,\n> >\n> > Thanks for the review!\n> >\n> > On Sat, Apr 18, 2020 at 10:41 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > Should capitalize at least the non-text one ? And maybe the text one for\n> > > consistency ?\n> > >\n> > > + ExplainPropertyInteger(\"WAL fpw\", NULL,\n> >\n> > I think we should keep both version consistent, whether lower or upper\n> > case. The uppercase version is probably more correct, but it's a\n> > little bit weird to have it being the only upper case label in all\n> > output, so I kept it lower case.\n\nI think we can keep upper-case for all non-text ones in case of WAL\nusage, something like WAL Records, WAL FPW, WAL Bytes. 
I'd prefer capital\n> letters but small-letters also works well.\n>\n> > > And add the acronym to the docs:\n> > >\n> > > $ git grep 'full page' '*/explain.sgml'\n> > > doc/src/sgml/ref/explain.sgml: number of records, number of full page writes and amount of WAL bytes\n> > >\n> > > \"..full page writes (FPW)..\"\n> >\n> > Indeed! Fixed (using lowercase to match current output).\n>\n> I searched through the documentation and AFAICS most of occurances of\n> \"full page\" are follwed by \"image\" and full_page_writes is used only\n> as the parameter name.\n>\n> I'm fine with fpw as the acronym, but \"fpw means the number of full\n> page images\" looks odd..\n>\n\nI don't understand this. Where are we using such a description of fpw?\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 22 Apr 2020 09:15:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Wed, Apr 22, 2020 at 09:15:08AM +0530, Amit Kapila wrote:\n> > > > And add the acronym to the docs:\n> > > >\n> > > > $ git grep 'full page' '*/explain.sgml'\n> > > > doc/src/sgml/ref/explain.sgml: number of records, number of full page writes and amount of WAL bytes\n> > > >\n> > > > \"..full page writes (FPW)..\"\n> > >\n> > > Indeed! Fixed (using lowercase to match current output).\n> >\n> > I searched through the documentation and AFAICS most of occurances of\n> > \"full page\" are follwed by \"image\" and full_page_writes is used only\n> > as the parameter name.\n> >\n> > I'm fine with fpw as the acronym, but \"fpw means the number of full\n> > page images\" looks odd..\n> >\n> \n> I don't understand this. 
Where are we using such a description of fpw?\n\nI suggested to add \" (FPW)\" to the new docs for \"explain(wal)\"\nBut, the documentation before this commit mostly refers to \"full page images\".\nSo the implication is that maybe we should use that language (and FPI acronym).\n\nThe only pre-existing use of \"full page writes\" seems to be here:\n$ git grep -iC2 'full page write' origin doc \norigin:doc/src/sgml/wal.sgml- Internal data structures such as <filename>pg_xact</filename>, <filename>pg_subtrans</filename>, <filename>pg_multixact</filename>,\norigin:doc/src/sgml/wal.sgml- <filename>pg_serial</filename>, <filename>pg_notify</filename>, <filename>pg_stat</filename>, <filename>pg_snapshots</filename> are not directly\norigin:doc/src/sgml/wal.sgml: checksummed, nor are pages protected by full page writes. However, where\n\nAnd we're not using either acronym.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 21 Apr 2020 22:55:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Wed, Apr 22, 2020 at 9:25 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Apr 22, 2020 at 09:15:08AM +0530, Amit Kapila wrote:\n> > > > > And add the acronym to the docs:\n> > > > >\n> > > > > $ git grep 'full page' '*/explain.sgml'\n> > > > > doc/src/sgml/ref/explain.sgml: number of records, number of full page writes and amount of WAL bytes\n> > > > >\n> > > > > \"..full page writes (FPW)..\"\n> > > >\n> > > > Indeed! Fixed (using lowercase to match current output).\n> > >\n> > > I searched through the documentation and AFAICS most of occurances of\n> > > \"full page\" are follwed by \"image\" and full_page_writes is used only\n> > > as the parameter name.\n> > >\n> > > I'm fine with fpw as the acronym, but \"fpw means the number of full\n> > > page images\" looks odd..\n> > >\n> >\n> > I don't understand this. 
Where are we using such a description of fpw?\n>\n> I suggested to add \" (FPW)\" to the new docs for \"explain(wal)\"\n> But, the documentation before this commit mostly refers to \"full page images\".\n> So the implication is that maybe we should use that language (and FPI acronym).\n>\n\nI am not sure if it matters that much. I think we can use \"full page\nwrites (FPW)\" in this case but we should be consistent wherever we\nrefer it in the WAL usage context and I think we already are, if not\nthen let's be consistent.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 22 Apr 2020 17:57:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Wed, Apr 22, 2020 at 9:15 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 20, 2020 at 1:17 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Sun, 19 Apr 2020 16:22:26 +0200, Julien Rouhaud <rjuju123@gmail.com> wrote in\n> > > Hi Justin,\n> > >\n> > > Thanks for the review!\n> > >\n> > > On Sat, Apr 18, 2020 at 10:41 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > > Should capitalize at least the non-text one ? And maybe the text one for\n> > > > consistency ?\n> > > >\n> > > > + ExplainPropertyInteger(\"WAL fpw\", NULL,\n> > >\n> > > I think we should keep both version consistent, whether lower or upper\n> > > case. The uppercase version is probably more correct, but it's a\n> > > little bit weird to have it being the only upper case label in all\n> > > output, so I kept it lower case.\n>\n> I think we can keep upper-case for all non-text ones in case of WAL\n> usage, something like WAL Records, WAL FPW, WAL Bytes. 
The buffer\n> usage seems to be following a similar convention.\n>\n\nThe attached patch changed the non-text display format as mentioned.\nLet me know if you have any comments?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 23 Apr 2020 10:50:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Wed, Apr 22, 2020 at 2:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 22, 2020 at 9:25 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Wed, Apr 22, 2020 at 09:15:08AM +0530, Amit Kapila wrote:\n> > > > > > And add the acronym to the docs:\n> > > > > >\n> > > > > > $ git grep 'full page' '*/explain.sgml'\n> > > > > > doc/src/sgml/ref/explain.sgml: number of records, number of full page writes and amount of WAL bytes\n> > > > > >\n> > > > > > \"..full page writes (FPW)..\"\n> > > > >\n> > > > > Indeed! Fixed (using lowercase to match current output).\n> > > >\n> > > > I searched through the documentation and AFAICS most of occurances of\n> > > > \"full page\" are follwed by \"image\" and full_page_writes is used only\n> > > > as the parameter name.\n> > > >\n> > > > I'm fine with fpw as the acronym, but \"fpw means the number of full\n> > > > page images\" looks odd..\n> > > >\n> > >\n> > > I don't understand this. Where are we using such a description of fpw?\n> >\n> > I suggested to add \" (FPW)\" to the new docs for \"explain(wal)\"\n> > But, the documentation before this commit mostly refers to \"full page images\".\n> > So the implication is that maybe we should use that language (and FPI acronym).\n> >\n>\n> I am not sure if it matters that much. 
I think we can use \"full page\n> writes (FPW)\" in this case but we should be consistent wherever we\n> refer it in the WAL usage context and I think we already are, if not\n> then let's be consistent.\n\nI agree that full page writes can be used in this case, but I'm\nwondering if that can be misleading for some reader which might e.g.\nconfuse with the full_page_writes GUC. And as Justin pointed out, the\ndocumentation for now usually mentions \"full page image(s)\" in such\ncases.\n\n\n", "msg_date": "Thu, 23 Apr 2020 07:31:47 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 23, 2020 at 7:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 22, 2020 at 9:15 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 20, 2020 at 1:17 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Sun, 19 Apr 2020 16:22:26 +0200, Julien Rouhaud <rjuju123@gmail.com> wrote in\n> > > > Hi Justin,\n> > > >\n> > > > Thanks for the review!\n> > > >\n> > > > On Sat, Apr 18, 2020 at 10:41 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > >\n> > > > > Should capitalize at least the non-text one ? And maybe the text one for\n> > > > > consistency ?\n> > > > >\n> > > > > + ExplainPropertyInteger(\"WAL fpw\", NULL,\n> > > >\n> > > > I think we should keep both version consistent, whether lower or upper\n> > > > case. The uppercase version is probably more correct, but it's a\n> > > > little bit weird to have it being the only upper case label in all\n> > > > output, so I kept it lower case.\n> >\n> > I think we can keep upper-case for all non-text ones in case of WAL\n> > usage, something like WAL Records, WAL FPW, WAL Bytes. 
The buffer\n> > usage seems to be following a similar convention.\n> >\n>\n> The attached patch changed the non-text display format as mentioned.\n> Let me know if you have any comments?\n\nAssuming that we're fine using full page write(s) / FPW rather than\nfull page image(s) / FPI (see previous mail), I'm fine with this\npatch.\n\n\n", "msg_date": "Thu, 23 Apr 2020 07:33:13 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "At Thu, 23 Apr 2020 07:33:13 +0200, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> > > > > I think we should keep both version consistent, whether lower or upper\n> > > > > case. The uppercase version is probably more correct, but it's a\n> > > > > little bit weird to have it being the only upper case label in all\n> > > > > output, so I kept it lower case.\n> > >\n> > > I think we can keep upper-case for all non-text ones in case of WAL\n> > > usage, something like WAL Records, WAL FPW, WAL Bytes. The buffer\n> > > usage seems to be following a similar convention.\n> > >\n> >\n> > The attached patch changed the non-text display format as mentioned.\n> > Let me know if you have any comments?\n> \n> Assuming that we're fine using full page write(s) / FPW rather than\n> full page image(s) / FPI (see previous mail), I'm fine with this\n> patch.\n\nFWIW, I like FPW, and the patch looks good to me. 
The index in the\ndocumentation has the entry for full_page_writes (having underscores)\nand it would work.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 23 Apr 2020 14:54:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On 2020-04-23 07:31, Julien Rouhaud wrote:\n> I agree that full page writes can be used in this case, but I'm\n> wondering if that can be misleading for some reader which might e.g.\n> confuse with the full_page_writes GUC. And as Justin pointed out, the\n> documentation for now usually mentions \"full page image(s)\" in such\n> cases.\n\nISTM that in the context of this patch, \"full-page image\" is correct. A \n\"full-page write\" is what you do to a table or index page when you are \nrecovering a full-page image. The internal symbol for the WAL record is \nXLOG_FPI and xlogdesc.c prints it as \"FPI\".\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 23 Apr 2020 08:46:50 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 23, 2020 at 12:16 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-04-23 07:31, Julien Rouhaud wrote:\n> > I agree that full page writes can be used in this case, but I'm\n> > wondering if that can be misleading for some reader which might e.g.\n> > confuse with the full_page_writes GUC. And as Justin pointed out, the\n> > documentation for now usually mentions \"full page image(s)\" in such\n> > cases.\n>\n> ISTM that in the context of this patch, \"full-page image\" is correct. 
A\n> \"full-page write\" is what you do to a table or index page when you are\n> recovering a full-page image.\n>\n\nSo what do we call when we log the page after it is touched after\ncheckpoint? I thought we call that as full-page write.\n\n> The internal symbol for the WAL record is\n> XLOG_FPI and xlogdesc.c prints it as \"FPI\".\n>\n\nThat is just one way/reason we log the page. There are others as\nwell. I thought here we are computing the number of full-page writes\nhappened in the system due to various reasons like (a) a page is\noperated upon first time after the checkpoint, (b) log the XLOG_FPI\nrecord, (c) Guc for WAL consistency checker is on, etc. If we see in\nXLogRecordAssemble where we decide to log this information, there is a\ncomment \" .... log a full-page write for the current block.\" and there\nwas an existing variable with 'fpw_lsn' which indicates to an extent\nthat what we are computing in this patch is full-page writes. But\nthere is a reference to full-page image as well. I think as\nfull_page_writes is an exposed variable that is well understood so\nexposing information with similar name via this patch doesn't sound\nillogical to me. Whatever we use here we need to be consistent all\nthroughout, even pg_stat_statements need to name exposed variable as\nwal_fpi instead of wal_fpw.\n\nTo me, full-page writes sound more appealing with other WAL usage\nvariables like records and bytes. I might be more used to this term as\n'fpw' that is why it occurred better to me. 
OTOH, if most of us think\nthat a full-page image is better suited here, I am fine with changing\nit at all places.\n\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Apr 2020 14:35:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 23, 2020 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 23, 2020 at 12:16 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>\n> > The internal symbol for the WAL record is\n> > XLOG_FPI and xlogdesc.c prints it as \"FPI\".\n> >\n>\n> That is just one way/reason we log the page. There are others as\n> well. I thought here we are computing the number of full-page writes\n> happened in the system due to various reasons like (a) a page is\n> operated upon first time after the checkpoint, (b) log the XLOG_FPI\n> record, (c) Guc for WAL consistency checker is on, etc. If we see in\n> XLogRecordAssemble where we decide to log this information, there is a\n> comment \" .... log a full-page write for the current block.\" and there\n> was an existing variable with 'fpw_lsn' which indicates to an extent\n> that what we are computing in this patch is full-page writes. But\n> there is a reference to full-page image as well. I think as\n> full_page_writes is an exposed variable that is well understood so\n> exposing information with similar name via this patch doesn't sound\n> illogical to me. Whatever we use here we need to be consistent all\n> throughout, even pg_stat_statements need to name exposed variable as\n> wal_fpi instead of wal_fpw.\n>\n> To me, full-page writes sound more appealing with other WAL usage\n> variables like records and bytes. I might be more used to this term as\n> 'fpw' that is why it occurred better to me. 
OTOH, if most of us think\n> that a full-page image is better suited here, I am fine with changing\n> it at all places.\n>\n\nJulien, Peter, others do you have any opinion here? I think it is\nbetter if we decide on one of FPW or FPI and make the changes at all\nplaces for this patch.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Apr 2020 08:35:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Apr 27, 2020 at 08:35:51AM +0530, Amit Kapila wrote:\n> On Thu, Apr 23, 2020 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> On Thu, Apr 23, 2020 at 12:16 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>>> The internal symbol for the WAL record is\n>>> XLOG_FPI and xlogdesc.c prints it as \"FPI\".\n> \n> Julien, Peter, others do you have any opinion here? I think it is\n> better if we decide on one of FPW or FPI and make the changes at all\n> places for this patch.\n\nIt seems to me that Peter is right here. 
A full-page write is the\naction to write a full-page image, so if you consider only a way to\ndefine the static data of a full-page and/or a quantity associated to\nit, we should talk about full-page images.\n--\nMichael", "msg_date": "Mon, 27 Apr 2020 15:11:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Apr 27, 2020 at 8:12 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Apr 27, 2020 at 08:35:51AM +0530, Amit Kapila wrote:\n> > On Thu, Apr 23, 2020 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> On Thu, Apr 23, 2020 at 12:16 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> >>> The internal symbol for the WAL record is\n> >>> XLOG_FPI and xlogdesc.c prints it as \"FPI\".\n> >\n> > Julien, Peter, others do you have any opinion here? I think it is\n> > better if we decide on one of FPW or FPI and make the changes at all\n> > places for this patch.\n>\n> It seems to me that Peter is right here. A full-page write is the\n> action to write a full-page image, so if you consider only a way to\n> define the static data of a full-page and/or a quantity associated to\n> it, we should talk about full-page images.\n\nI agree with that definition. 
I can send a cleanup patch if there's\nno objection.\n\n\n", "msg_date": "Mon, 27 Apr 2020 09:52:17 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, Apr 27, 2020 at 1:22 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Apr 27, 2020 at 8:12 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Mon, Apr 27, 2020 at 08:35:51AM +0530, Amit Kapila wrote:\n> > > On Thu, Apr 23, 2020 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >> On Thu, Apr 23, 2020 at 12:16 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> > >>> The internal symbol for the WAL record is\n> > >>> XLOG_FPI and xlogdesc.c prints it as \"FPI\".\n> > >\n> > > Julien, Peter, others do you have any opinion here? I think it is\n> > > better if we decide on one of FPW or FPI and make the changes at all\n> > > places for this patch.\n> >\n> > It seems to me that Peter is right here. A full-page write is the\n> > action to write a full-page image, so if you consider only a way to\n> > define the static data of a full-page and/or a quantity associated to\n> > it, we should talk about full-page images.\n>\n\nFair enough, if more people want full-page image terminology in this\ncontext then we can do that.\n\n> I agree with that definition. I can send a cleanup patch if there's\n> no objection.\n>\n\nOkay, feel free to send the patch. 
Thanks for taking the initiative\nto write a patch for this.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 28 Apr 2020 07:38:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, Apr 28, 2020 at 7:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 27, 2020 at 1:22 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n>\n> > I agree with that definition. I can send a cleanup patch if there's\n> > no objection.\n> >\n>\n> Okay, feel free to send the patch. Thanks for taking the initiative\n> to write a patch for this.\n>\n\nJulien, are you planning to write a cleanup patch for this open item?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 30 Apr 2020 08:35:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 30, 2020 at 5:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 28, 2020 at 7:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 27, 2020 at 1:22 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> >\n> > > I agree with that definition. I can send a cleanup patch if there's\n> > > no objection.\n> > >\n> >\n> > Okay, feel free to send the patch. 
Thanks for taking the initiative\n> > to write a patch for this.\n> >\n>\n> Julien, are you planning to write a cleanup patch for this open item?\n\nSorry Amit, I've been quite busy at work for the last couple of days.\nI'll take care of that this morning for sure!\n\n\n", "msg_date": "Thu, 30 Apr 2020 09:18:57 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 30, 2020 at 9:18 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Apr 30, 2020 at 5:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Apr 28, 2020 at 7:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 27, 2020 at 1:22 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > >\n> > >\n> > > > I agree with that definition. I can send a cleanup patch if there's\n> > > > no objection.\n> > > >\n> > >\n> > > Okay, feel free to send the patch. Thanks for taking the initiative\n> > > to write a patch for this.\n> > >\n> >\n> > Julien, are you planning to write a cleanup patch for this open item?\n>\n> Sorry Amit, I've been quite busy at work for the last couple of days.\n> I'll take care of that this morning for sure!\n\nHere's the patch. 
I included the content of\nv3-fix_explain_wal_output.patch you provided before, and tried to\nconsistently replace full page writes/fpw to full page images/fpi\neverywhere on top of it (so documentation, command output, variable\nnames and comments).", "msg_date": "Thu, 30 Apr 2020 10:48:46 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 30, 2020 at 2:19 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Apr 30, 2020 at 9:18 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Thu, Apr 30, 2020 at 5:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Julien, are you planning to write a cleanup patch for this open item?\n> >\n> > Sorry Amit, I've been quite busy at work for the last couple of days.\n> > I'll take care of that this morning for sure!\n>\n> Here's the patch.\n>\n\nThanks for the patch. I will look into it early next week.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 2 May 2020 17:19:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Thu, Apr 30, 2020 at 2:19 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Here's the patch. I included the content of\n> v3-fix_explain_wal_output.patch you provided before, and tried to\n> consistently replace full page writes/fpw to full page images/fpi\n> everywhere on top of it (so documentation, command output, variable\n> names and comments).\n>\n\nYour patch looks mostly good to me. 
I have made slight modifications\nwhich include changing the non-text format in show_wal_usage to use a\ncapital letter for the second word, which makes it similar to Buffer\nusage stats, and additionally, ran pgindent.\n\nLet me know what do you think of attached?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 4 May 2020 09:39:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, May 4, 2020 at 6:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 30, 2020 at 2:19 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > Here's the patch. I included the content of\n> > v3-fix_explain_wal_output.patch you provided before, and tried to\n> > consistently replace full page writes/fpw to full page images/fpi\n> > everywhere on top of it (so documentation, command output, variable\n> > names and comments).\n> >\n>\n> Your patch looks mostly good to me. I have made slight modifications\n> which include changing the non-text format in show_wal_usage to use a\n> capital letter for the second word, which makes it similar to Buffer\n> usage stats, and additionally, ran pgindent.\n>\n> Let me know what do you think of attached?\n\nThanks a lot Amit. It looks perfect to me!\n\n\n", "msg_date": "Mon, 4 May 2020 16:32:55 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Mon, May 4, 2020 at 8:03 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, May 4, 2020 at 6:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Apr 30, 2020 at 2:19 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > Here's the patch. 
I included the content of\n> > > v3-fix_explain_wal_output.patch you provided before, and tried to\n> > > consistently replace full page writes/fpw to full page images/fpi\n> > > everywhere on top of it (so documentation, command output, variable\n> > > names and comments).\n> > >\n> >\n> > Your patch looks mostly good to me. I have made slight modifications\n> > which include changing the non-text format in show_wal_usage to use a\n> > capital letter for the second word, which makes it similar to Buffer\n> > usage stats, and additionally, ran pgindent.\n> >\n> > Let me know what do you think of attached?\n>\n> Thanks a lot Amit. It looks perfect to me!\n>\n\nPushed.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 May 2020 16:14:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Tue, May 5, 2020 at 12:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 4, 2020 at 8:03 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Mon, May 4, 2020 at 6:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Apr 30, 2020 at 2:19 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > >\n> > > > Here's the patch. I included the content of\n> > > > v3-fix_explain_wal_output.patch you provided before, and tried to\n> > > > consistently replace full page writes/fpw to full page images/fpi\n> > > > everywhere on top of it (so documentation, command output, variable\n> > > > names and comments).\n> > > >\n> > >\n> > > Your patch looks mostly good to me. I have made slight modifications\n> > > which include changing the non-text format in show_wal_usage to use a\n> > > capital letter for the second word, which makes it similar to Buffer\n> > > usage stats, and additionally, ran pgindent.\n> > >\n> > > Let me know what do you think of attached?\n> >\n> > Thanks a lot Amit. 
It looks perfect to me!\n> >\n>\n> Pushed.\n\nThanks!\n\n\n", "msg_date": "Tue, 5 May 2020 20:48:58 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" }, { "msg_contents": "On Wed, May 6, 2020 at 12:19 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, May 5, 2020 at 12:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > >\n> > > > Your patch looks mostly good to me. I have made slight modifications\n> > > > which include changing the non-text format in show_wal_usage to use a\n> > > > capital letter for the second word, which makes it similar to Buffer\n> > > > usage stats, and additionally, ran pgindent.\n> > > >\n> > > > Let me know what do you think of attached?\n> > >\n> > > Thanks a lot Amit. It looks perfect to me!\n> > >\n> >\n> > Pushed.\n>\n> Thanks!\n>\n\nI have updated the open items page to reflect this commit [1].\n\n[1] - https://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 May 2020 08:30:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL usage calculation patch" } ]
[ { "msg_contents": "If the restore command claims to have succeeded, but fails to have created\na file with the right name (due to typos or escaping or quoting issues, for\nexample), no complaint is issued at the time, and it then fails later with\na relatively uninformative error message like \"could not locate required\ncheckpoint record\".\n\n if (rc == 0)\n {\n /*\n * command apparently succeeded, but let's make sure the file is\n * really there now and has the correct size.\n */\n if (stat(xlogpath, &stat_buf) == 0)\n {......\n }\n else\n {\n /* stat failed */\n if (errno != ENOENT)\n ereport(FATAL,\n (errcode_for_file_access(),\n errmsg(\"could not stat file \\\"%s\\\": %m\",\n xlogpath)));\n }\n\nI don't see why ENOENT is thought to deserve the silent treatment. It\nseems weird that success gets logged (\"restored log file \\\"%s\\\" from\narchive\"), but one particular type of unexpected failure does not.\n\nI've attached a patch which emits a LOG message for ENOENT. The exact\nwording doesn't matter to me, I'm sure someone can improve it.\nAlternatively, perhaps the message a few lines down, \"could not restore\nfile\", could get promoted from DEBUG2 to LOG when rc indicates success.\nBut I don't think we need any more messages which say \"Something went\nwrong: success\".\n\nThis is based on the question at\nhttps://stackoverflow.com/questions/60056597/couldnt-restore-postgres-v11-from-pg-basebackup.\nBut this isn' the first time I've seen similar confusion.\n\nCheers,\n\nJeff", "msg_date": "Wed, 5 Feb 2020 11:10:07 -0500", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": true, "msg_subject": "bad logging around broken restore_command" }, { "msg_contents": "On 2020/02/06 1:10, Jeff Janes wrote:\n> If the restore command claims to have succeeded, but fails to have created a file with the right name (due to typos or escaping or quoting issues, for example), no complaint is issued at the time, and it then fails later with a relatively 
uninformative error message like \"could not locate required checkpoint record\".\n> \n>     if (rc == 0)\n>     {\n>         /*\n>          * command apparently succeeded, but let's make sure the file is\n>          * really there now and has the correct size.\n>          */\n>         if (stat(xlogpath, &stat_buf) == 0)\n>         {......\n>         }\n>         else\n>         {\n>             /* stat failed */\n>             if (errno != ENOENT)\n>                 ereport(FATAL,\n>                         (errcode_for_file_access(),\n>                          errmsg(\"could not stat file \\\"%s\\\": %m\",\n>                                 xlogpath)));\n>         }\n> \n> I don't see why ENOENT is thought to deserve the silent treatment.  It seems weird that success gets logged (\"restored log file \\\"%s\\\" from archive\"), but one particular type of unexpected failure does not.\n\nAgreed.\n\n> I've attached a patch which emits a LOG message for ENOENT.\n\nIsn't it better to use \"could not stat file\" message even in ENOENT case?\nIf yes, something like message that you used in the patch should be\nlogged as DETAIL or HINT message. So, what about the attached patch?\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters", "msg_date": "Thu, 6 Feb 2020 23:23:42 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: bad logging around broken restore_command" }, { "msg_contents": "Hi Jeff,\n\nOn 2/6/20 9:23 AM, Fujii Masao wrote:\n> \n>> I've attached a patch which emits a LOG message for ENOENT.\n> \n> Isn't it better to use \"could not stat file\" message even in ENOENT case?\n> If yes, something like message that you used in the patch should be\n> logged as DETAIL or HINT message. 
So, what about the attached patch?\n\nWhat do you think of Fujii's changes?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Mon, 9 Mar 2020 08:47:18 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: bad logging around broken restore_command" }, { "msg_contents": "At Thu, 6 Feb 2020 23:23:42 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> On 2020/02/06 1:10, Jeff Janes wrote:\n> > If the restore command claims to have succeeded, but fails to have created\n> > a file with the right name (due to typos or escaping or quoting issues, for\n> > example), no complaint is issued at the time, and it then fails later with\n> > a relatively uninformative error message like \"could not locate required\n> > checkpoint record\".\n...\n> > I don't see why ENOENT is thought to deserve the silent treatment.  It\n> > seems weird that success gets logged (\"restored log file \\\"%s\\\" from\n> > archive\"), but one particular type of unexpected failure does not.\n> \n> Agreed.\n\nIn the first place it is not perfectly silent and that problem cannot\nhappen. In the ENOENT case, the function reports \"could not restore\nfile \\\"%s\\\" from archive: %s\", but with DEBUG2 then returns false, and\nthe callers treat the failure properly.\n\n> I've attached a patch which emits a LOG message for ENOENT.\n> \n> Isn't it better to use \"could not stat file\" message even in ENOENT\n> case?\n> If yes, something like message that you used in the patch should be\n> logged as DETAIL or HINT message. 
So, what about the attached patch?\n\nIf you want to see some log messages in the case, it is sufficient to\nraise the loglevel of the existing message from DEBUG2 to LOG.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 10 Mar 2020 11:47:42 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: bad logging around broken restore_command" }, { "msg_contents": "\n\nOn 2020/03/10 11:47, Kyotaro Horiguchi wrote:\n> At Thu, 6 Feb 2020 23:23:42 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> On 2020/02/06 1:10, Jeff Janes wrote:\n>>> If the restore command claims to have succeeded, but fails to have created\n>>> a file with the right name (due to typos or escaping or quoting issues, for\n>>> example), no complaint is issued at the time, and it then fails later with\n>>> a relatively uninformative error message like \"could not locate required\n>>> checkpoint record\".\n> ...\n>>> I don't see why ENOENT is thought to deserve the silent treatment.  It\n>>> seems weird that success gets logged (\"restored log file \\\"%s\\\" from\n>>> archive\"), but one particular type of unexpected failure does not.\n>>\n>> Agreed.\n> \n> In the first place it is not perfectly silent and that problem cannot\n> happen. In the ENOENT case, the function reports \"could not restore\n> file \\\"%s\\\" from archive: %s\", but with DEBUG2 then returns false, and\n> the callers treat the failure properly.\n\nYes.\n\n>> I've attached a patch which emits a LOG message for ENOENT.\n>>\n>> Isn't it better to use \"could not stat file\" message even in ENOENT\n>> case?\n>> If yes, something like message that you used in the patch should be\n>> logged as DETAIL or HINT message. 
So, what about the attached patch?\n> \n> If you want to see some log messages in the case, it is sufficient to\n> raise the loglevel of the existing message from DEBUG2 to LOG.\n\nIsn't it too noisy to log every time when we could not restore\nthe archived file? In archive recovery case, it's common to fail\nto restore archive files and try to replay WAL files in pg_wal.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Wed, 25 Mar 2020 14:03:14 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: bad logging around broken restore_command" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nI decided to add my review to this simple patch.\r\nI applied Fuji's patch and found it perfectly working with installcheck-world passed, code is clean.\r\nAs for the feature I agree with Jeff and Fuji that this ENOENT case is worth logging under LOG priority.\r\nI consider the last (Fuji's) patch is now ready to be committed. 
\r\n\r\nBest regards, \r\nPavel Borisov\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Thu, 19 Nov 2020 11:27:29 +0000", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: bad logging around broken restore_command" }, { "msg_contents": "\n\nOn 2020/11/19 20:27, Pavel Borisov wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n> \n> I decided to add my review to this simple patch.\n> I applied Fuji's patch and found it perfectly working with installcheck-world passed, code is clean.\n> As for the feature I agree with Jeff and Fuji that this ENOENT case is worth logging under LOG priority.\n> I consider the last (Fuji's) patch is now ready to be committed.\n\nThanks for the review! Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 20 Nov 2020 15:45:10 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: bad logging around broken restore_command" } ]
[ { "msg_contents": "The following documentation comment has been logged on the website:\n\nPage: https://www.postgresql.org/docs/11/dml-returning.html\nDescription:\n\nIn the docs explaining RETURNING\nhttps://www.postgresql.org/docs/11/dml-returning.html there is no mention of\nthe fact that a nested sub-select in the RETURNING statement executes on the\ntable as if the INSERT/UPDATE had not happened. \r\n\r\nI suppose maybe this might be obvious if you understand how SQL works but I\nthink it is nuanced enough that it is worth explaining here as it provides\nsome useful features for UPSERT queries. Example:\r\n\r\n```sql\r\ncreate table foo (x int primary key, y int);\r\n--=> CREATE TABLE\r\ninsert into foo (x, y) values (1, 1);\r\n--=> INSERT 0 1\r\nupdate foo set y = 2 where x = 1 returning (select y from foo where x = 1)\nas old_y;\r\n/* =>\r\n * old_y \r\n * -------\r\n * 1\r\n * (1 row)\r\n *\r\n * UPDATE 1\r\n */\r\nselect * from foo;\r\n/* =>\r\n * x | y \r\n * ---+---\r\n * 1 | 2\r\n * (1 row)\r\n */\r\n```", "msg_date": "Wed, 05 Feb 2020 16:32:45 +0000", "msg_from": "PG Doc comments form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "RETURNING does not explain evaluation context for subqueries" }, { "msg_contents": "On Wed, Feb 5, 2020 at 04:32:45PM +0000, PG Doc comments form wrote:\n> The following documentation comment has been logged on the website:\n> \n> Page: https://www.postgresql.org/docs/11/dml-returning.html\n> Description:\n> \n> In the docs explaining RETURNING\n> https://www.postgresql.org/docs/11/dml-returning.html there is no mention of\n> the fact that a nested sub-select in the RETURNING statement executes on the\n> table as if the INSERT/UPDATE had not happened. \n> \n> I suppose maybe this might be obvious if you understand how SQL works but I\n> think it is nuanced enough that it is worth explaining here as it provides\n> some useful features for UPSERT queries. 
Example:\n> \n> ```sql\n> create table foo (x int primary key, y int);\n> --=> CREATE TABLE\n> insert into foo (x, y) values (1, 1);\n> --=> INSERT 0 1\n> update foo set y = 2 where x = 1 returning (select y from foo where x = 1)\nas old_y;\n> /* =>\n> * old_y \n> * -------\n> * 1\n> * (1 row)\n> *\n> * UPDATE 1\n> */\n> select * from foo;\n> /* =>\n> * x | y \n> * ---+---\n> * 1 | 2\n> * (1 row)\n> */\n> ```\n\nSorry for the delay in replying. I am moving this thread to hackers\nbecause it isn't clearly a documentation issue. I did some research on\nthis and it is kind of confusing:\n\n\tCREATE TABLE foo (x INT PRIMARY KEY, y INT);\n\n\tINSERT INTO foo (x, y) VALUES (1, 1);\n\n\tUPDATE foo SET y = y + 1 WHERE x = 1 RETURNING y;\n\t y\n\t---\n\t 2\n\tSELECT y FROM foo;\n\t y\n\t---\n\t 2\n\t\n\tUPDATE foo SET y = y + 1 WHERE x = 1 RETURNING (y);\n\t y\n\t---\n\t 3\n\tSELECT y FROM foo;\n\t y\n\t---\n\t 3\n\t\n\tUPDATE foo SET y = y + 1 WHERE x = 1 RETURNING (SELECT y);\n\t y\n\t---\n\t 4\n\tSELECT y FROM foo;\n\t y\n\t---\n\t 4\n\t\n\tUPDATE foo SET y = y + 1 WHERE x = 1 RETURNING (SELECT y FROM foo);\n\t y\n\t---\n\t 4\n\tSELECT y FROM foo;\n\t y\n\t---\n\t 5\n\nSo, it is only when querying 'foo' that it uses the pre-UPDATE\nvisibility snapshot. So the 'y' in 'SELECT y' is the 'y' from the\nupdate, but the 'y' from 'SELECT y FROM foo' uses the snapshot from\nbefore the update. My guess is that we just didn't consider the rules\nfor what the 'y' references, and I bet if I dig into the code I can find\nout why this is happening.\n\nRETURNING for INSERT/UPDATE/DELETE isn't part of the SQL standard, so we\ndon't have much guidance there. It is as though the 'FROM foo' changes\nthe resolution of the 'y' because it is closer.\n\nI am unclear if this should be documented or changed, or neither.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Fri, 13 Mar 2020 21:41:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: RETURNING does not explain evaluation context for subqueries" } ]
[ { "msg_contents": "I sent earlier version of this a few times last year along with bunch of other\ndoc patches but it was never picked up. So maybe I'll try send one at a time\nin more digestible chunks.\nhttps://www.postgresql.org/message-id/flat/20190427025647.GD3925%40telsasoft.com#e1731c33455145eadc1158042cc411f9\n\n From cb5842724330dfcfc914f2e3effdbfe4843be565 Mon Sep 17 00:00:00 2001\nFrom: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Thu, 9 May 2019 21:13:55 -0500\nSubject: [PATCH] spelling and typos\n\n---\n doc/src/sgml/bloom.sgml | 2 +-\n doc/src/sgml/config.sgml | 2 +-\n doc/src/sgml/ref/alter_table.sgml | 2 +-\n doc/src/sgml/sources.sgml | 4 ++--\n src/backend/access/transam/README.parallel | 2 +-\n src/backend/storage/buffer/bufmgr.c | 2 +-\n src/backend/storage/sync/sync.c | 2 +-\n src/include/access/tableam.h | 2 +-\n 8 files changed, 9 insertions(+), 9 deletions(-)\n\ndiff --git a/doc/src/sgml/bloom.sgml b/doc/src/sgml/bloom.sgml\nindex 6eeadde..c341b65 100644\n--- a/doc/src/sgml/bloom.sgml\n+++ b/doc/src/sgml/bloom.sgml\n@@ -65,7 +65,7 @@\n <para>\n Number of bits generated for each index column. Each parameter's name\n refers to the number of the index column that it controls. The default\n- is <literal>2</literal> bits and maximum is <literal>4095</literal>. Parameters for\n+ is <literal>2</literal> bits and the maximum is <literal>4095</literal>. 
Parameters for\n index columns not actually used are ignored.\n </para>\n </listitem>\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex b2c89bd..102698b 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -4318,7 +4318,7 @@ ANY <replaceable class=\"parameter\">num_sync</replaceable> ( <replaceable class=\"\n except crash recovery.\n \n <varname>hot_standby_feedback</varname> will be delayed by use of this feature.\n- Combinining these settings could lead to bloat on the master, so should\n+ Combining these settings could lead to bloat on the master, so should\n be done only with care.\n \n <warning>\ndiff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml\nindex 5de3676..a22770c 100644\n--- a/doc/src/sgml/ref/alter_table.sgml\n+++ b/doc/src/sgml/ref/alter_table.sgml\n@@ -222,7 +222,7 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n \n <para>\n <literal>SET NOT NULL</literal> may only be applied to a column\n- providing none of the records in the table contain a\n+ provided none of the records in the table contain a\n <literal>NULL</literal> value for the column. Ordinarily this is\n checked during the <literal>ALTER TABLE</literal> by scanning the\n entire table; however, if a valid <literal>CHECK</literal> constraint is\ndiff --git a/doc/src/sgml/sources.sgml b/doc/src/sgml/sources.sgml\nindex 5831ec4..b5d28e7 100644\n--- a/doc/src/sgml/sources.sgml\n+++ b/doc/src/sgml/sources.sgml\n@@ -511,7 +511,7 @@ Hint: the addendum\n \n <para>\n There are functions in the backend that will double-quote their own output\n- at need (for example, <function>format_type_be()</function>). Do not put\n+ as needed (for example, <function>format_type_be()</function>). 
Do not put\n additional quotes around the output of such functions.\n </para>\n \n@@ -880,7 +880,7 @@ BETTER: unrecognized node type: 42\n practices.\n </para>\n <para>\n- Features from later revision of the C standard or compiler specific\n+ Features from later revisions of the C standard or compiler specific\n features can be used, if a fallback is provided.\n </para>\n <para>\ndiff --git a/src/backend/access/transam/README.parallel b/src/backend/access/transam/README.parallel\nindex 85e5840..99c588d 100644\n--- a/src/backend/access/transam/README.parallel\n+++ b/src/backend/access/transam/README.parallel\n@@ -169,7 +169,7 @@ differently because of them. Right now, we don't even allow that.\n At the end of a parallel operation, which can happen either because it\n completed successfully or because it was interrupted by an error, parallel\n workers associated with that operation exit. In the error case, transaction\n-abort processing in the parallel leader kills of any remaining workers, and\n+abort processing in the parallel leader kills off any remaining workers, and\n the parallel leader then waits for them to die. In the case of a successful\n parallel operation, the parallel leader does not send any signals, but must\n wait for workers to complete and exit of their own volition. In either\ndiff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\nindex aba3960..5880054 100644\n--- a/src/backend/storage/buffer/bufmgr.c\n+++ b/src/backend/storage/buffer/bufmgr.c\n@@ -4291,7 +4291,7 @@ ts_ckpt_progress_comparator(Datum a, Datum b, void *arg)\n *\n * *max_pending is a pointer instead of an immediate value, so the coalesce\n * limits can easily changed by the GUC mechanism, and so calling code does\n- * not have to check the current configuration. A value is 0 means that no\n+ * not have to check the current configuration. 
A value of 0 means that no\n * writeback control will be performed.\n */\n void\ndiff --git a/src/backend/storage/sync/sync.c b/src/backend/storage/sync/sync.c\nindex 9cb7c65..8282a47 100644\n--- a/src/backend/storage/sync/sync.c\n+++ b/src/backend/storage/sync/sync.c\n@@ -216,7 +216,7 @@ SyncPostCheckpoint(void)\n \n \t\t/*\n \t\t * As in ProcessSyncRequests, we don't want to stop absorbing fsync\n-\t\t * requests for along time when there are many deletions to be done.\n+\t\t * requests for a long time when there are many deletions to be done.\n \t\t * We can safely call AbsorbSyncRequests() at this point in the loop\n \t\t * (note it might try to delete list entries).\n \t\t */\ndiff --git a/src/include/access/tableam.h b/src/include/access/tableam.h\nindex 696451f..ba9f7b8 100644\n--- a/src/include/access/tableam.h\n+++ b/src/include/access/tableam.h\n@@ -1185,7 +1185,7 @@ table_tuple_complete_speculative(Relation rel, TupleTableSlot *slot,\n * operation. That's often faster than calling table_insert() in a loop,\n * because e.g. the AM can reduce WAL logging and page locking overhead.\n *\n- * Except for taking `nslots` tuples as input, as an array of TupleTableSlots\n+ * Except for taking `nslots` tuples as input, and an array of TupleTableSlots\n * in `slots`, the parameters for table_multi_insert() are the same as for\n * table_tuple_insert().\n *\n-- \n2.7.4", "msg_date": "Wed, 5 Feb 2020 20:14:32 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "typos in comments and user docs" }, { "msg_contents": "On Thu, Feb 6, 2020 at 7:44 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> I sent earlier version of this a few times last year along with bunch of other\n> doc patches but it was never picked up. So maybe I'll try send one at a time\n> in more digestible chunks.\n> https://www.postgresql.org/message-id/flat/20190427025647.GD3925%40telsasoft.com#e1731c33455145eadc1158042cc411f9\n>\n> From cb5842724330dfcfc914f2e3effdbfe4843be565 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Thu, 9 May 2019 21:13:55 -0500\n> Subject: [PATCH] spelling and typos\n>\n> ---\n> doc/src/sgml/bloom.sgml | 2 +-\n> doc/src/sgml/config.sgml | 2 +-\n> doc/src/sgml/ref/alter_table.sgml | 2 +-\n> doc/src/sgml/sources.sgml | 4 ++--\n> src/backend/access/transam/README.parallel | 2 +-\n> src/backend/storage/buffer/bufmgr.c | 2 +-\n> src/backend/storage/sync/sync.c | 2 +-\n> src/include/access/tableam.h | 2 +-\n> 8 files changed, 9 insertions(+), 9 deletions(-)\n>\n\nYour changes look fine to me on the first read. I will push this to\nHEAD unless there are any objections. If we want them in\nback-branches, we might want to probably segregate the changes based\non the branch until those apply.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Feb 2020 08:47:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typos in comments and user docs" }, { "msg_contents": "On Thu, Feb 06, 2020 at 08:47:14AM +0530, Amit Kapila wrote:\n> Your changes look fine to me on the first read. I will push this to\n> HEAD unless there are any objections. If we want them in\n> back-branches, we might want to probably segregate the changes based\n> on the branch until those apply.\n\n+1. It would be nice to back-patch the user-visible changes in the\ndocs.\n--\nMichael", "msg_date": "Thu, 6 Feb 2020 14:15:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: typos in comments and user docs" }, { "msg_contents": "On Thu, Feb 6, 2020 at 10:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Feb 06, 2020 at 08:47:14AM +0530, Amit Kapila wrote:\n> > Your changes look fine to me on the first read. I will push this to\n> > HEAD unless there are any objections. If we want them in\n> > back-branches, we might want to probably segregate the changes based\n> > on the branch until those apply.\n>\n> +1. It would be nice to back-patch the user-visible changes in the\n> docs.\n>\n\nFair enough, Justin, is it possible for you to segregate the changes\nthat can be backpatched?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Feb 2020 16:43:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typos in comments and user docs" }, { "msg_contents": "On Thu, Feb 06, 2020 at 04:43:18PM +0530, Amit Kapila wrote:\n> On Thu, Feb 6, 2020 at 10:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, Feb 06, 2020 at 08:47:14AM +0530, Amit Kapila wrote:\n> > > Your changes look fine to me on the first read. I will push this to\n> > > HEAD unless there are any objections. If we want them in\n> > > back-branches, we might want to probably segregate the changes based\n> > > on the branch until those apply.\n> >\n> > +1. It would be nice to back-patch the user-visible changes in the\n> > docs.\n> >\n> \n> Fair enough, Justin, is it possible for you to segregate the changes\n> that can be backpatched?\n\nLooks like the whole patch can be applied to master and v12 [0].\n\nMy original thread from last year was about docs added in v12, so bloom.sgml is\nthe only user-facing doc which can be backpatched. README.parallel and\nbufmgr.c changes could be backpatched but I agree it's not necessary.\n\nNote, the bloom typo seems to complete a change that was started here:\n\n|commit 31ff51adc855e3ffe8e3c20e479b8d1a4508feb8\n|Author: Alexander Korotkov <akorotkov@postgresql.org>\n|Date: Mon Oct 22 00:23:26 2018 +0300\n|\n| Fix some grammar errors in bloom.sgml\n| \n| Discussion: https://postgr.es/m/CAEepm%3D3sijpGr8tXdyz-7EJJZfhQHABPKEQ29gpnb7-XSy%2B%3D5A%40mail.gmail.com\n| Reported-by: Thomas Munro\n| Backpatch-through: 9.6\n\nJustin\n\n[0] modulo a fix for a typo which I introduced in another patch in this branch,\nwhich shouldn't have been in this patch; fixed in the attached.", "msg_date": "Thu, 6 Feb 2020 07:56:40 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: typos in comments and user docs" }, { "msg_contents": "On Thu, Feb 6, 2020 at 7:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Feb 06, 2020 at 04:43:18PM +0530, Amit Kapila wrote:\n> > On Thu, Feb 6, 2020 at 10:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Thu, Feb 06, 2020 at 08:47:14AM +0530, Amit Kapila wrote:\n> > > > Your changes look fine to me on the first read. I will push this to\n> > > > HEAD unless there are any objections. If we want them in\n> > > > back-branches, we might want to probably segregate the changes based\n> > > > on the branch until those apply.\n> > >\n> > > +1. It would be nice to back-patch the user-visible changes in the\n> > > docs.\n> > >\n> >\n> > Fair enough, Justin, is it possible for you to segregate the changes\n> > that can be backpatched?\n>\n> Looks like the whole patch can be applied to master and v12 [0].\n>\n\nIf we decide to backpatch, then why not try to backpatch as far as\npossible (till 9.5)? If so, then it would be better to separate\nchanges which can be backpatched till 9.5, if that is tedious, then\nmaybe we can just back-patch (in 12) bloom.sgml change as a separate\ncommit and rest commit it in HEAD only. What do you think?\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Feb 2020 08:33:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typos in comments and user docs" }, { "msg_contents": "On Fri, Feb 07, 2020 at 08:33:40AM +0530, Amit Kapila wrote:\n> On Thu, Feb 6, 2020 at 7:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Thu, Feb 06, 2020 at 04:43:18PM +0530, Amit Kapila wrote:\n> > > On Thu, Feb 6, 2020 at 10:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > >\n> > > > On Thu, Feb 06, 2020 at 08:47:14AM +0530, Amit Kapila wrote:\n> > > > > Your changes look fine to me on the first read. I will push this to\n> > > > > HEAD unless there are any objections. If we want them in\n> > > > > back-branches, we might want to probably segregate the changes based\n> > > > > on the branch until those apply.\n> > > >\n> > > > +1. It would be nice to back-patch the user-visible changes in the\n> > > > docs.\n> > > >\n> > >\n> > > Fair enough, Justin, is it possible for you to segregate the changes\n> > > that can be backpatched?\n> >\n> > Looks like the whole patch can be applied to master and v12 [0].\n> \n> If we decide to backpatch, then why not try to backpatch as far as\n> possible (till 9.5)? If so, then it would be better to separate\n> changes which can be backpatched till 9.5, if that is tedious, then\n> maybe we can just back-patch (in 12) bloom.sgml change as a separate\n> commit and rest commit it in HEAD only. What do you think?\n\nI don't think I was clear. My original doc review patches were limited to\nthis:\n\nOn Sat, Mar 30, 2019 at 05:43:33PM -0500, Justin Pryzby wrote:\n> I reviewed docs like this:\n> git log -p remotes/origin/REL_11_STABLE..HEAD -- doc\n\n\nSTABLE..REL_12_STABLE. So after a few minutes earlier today of cherry-pick, I\nconcluded that only bloom.sgml is applicable further back than v12. Probably,\nI either noticed that minor issue at the same time as nearby doc changes in\nv12(?), or maybe noticed that issue later, independently of doc review, but\nthen tacked it on to the previous commit, for lack of any better place.\n\nJustin\n\n\n", "msg_date": "Thu, 6 Feb 2020 21:11:26 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: typos in comments and user docs" }, { "msg_contents": "On Fri, Feb 7, 2020 at 8:41 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Feb 07, 2020 at 08:33:40AM +0530, Amit Kapila wrote:\n> > On Thu, Feb 6, 2020 at 7:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > On Thu, Feb 06, 2020 at 04:43:18PM +0530, Amit Kapila wrote:\n> > > > On Thu, Feb 6, 2020 at 10:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > > >\n> > > > > On Thu, Feb 06, 2020 at 08:47:14AM +0530, Amit Kapila wrote:\n> > > > > > Your changes look fine to me on the first read. I will push this to\n> > > > > > HEAD unless there are any objections. If we want them in\n> > > > > > back-branches, we might want to probably segregate the changes based\n> > > > > > on the branch until those apply.\n> > > > >\n> > > > > +1. It would be nice to back-patch the user-visible changes in the\n> > > > > docs.\n> > > > >\n> > > >\n> > > > Fair enough, Justin, is it possible for you to segregate the changes\n> > > > that can be backpatched?\n> > >\n> > > Looks like the whole patch can be applied to master and v12 [0].\n> >\n\nI tried your patch master and it failed to apply.\n(Stripping trailing CRs from patch; use --binary to disable.)\npatching file doc/src/sgml/bloom.sgml\n(Stripping trailing CRs from patch; use --binary to disable.)\npatching file doc/src/sgml/config.sgml\nHunk #1 FAILED at 4318.\n1 out of 1 hunk FAILED -- saving rejects to file doc/src/sgml/config.sgml.rej\n\n> > If we decide to backpatch, then why not try to backpatch as far as\n> > possible (till 9.5)? If so, then it would be better to separate\n> > changes which can be backpatched till 9.5, if that is tedious, then\n> > maybe we can just back-patch (in 12) bloom.sgml change as a separate\n> > commit and rest commit it in HEAD only. What do you think?\n>\n> I don't think I was clear. My original doc review patches were limited to\n> this:\n>\n> On Sat, Mar 30, 2019 at 05:43:33PM -0500, Justin Pryzby wrote:\n> > I reviewed docs like this:\n> > git log -p remotes/origin/REL_11_STABLE..HEAD -- doc\n>\n>\n> STABLE..REL_12_STABLE. So after a few minutes earlier today of cherry-pick, I\n> concluded that only bloom.sgml is applicable further back than v12. Probably,\n> I either noticed that minor issue at the same time as nearby doc changes in\n> v12(?), or maybe noticed that issue later, independently of doc review, but\n> then tacked it on to the previous commit, for lack of any better place.\n>\n\nI am still not 100% clear, it is better if you can prepare a separate\npatch which can be backpatched and the rest that we can apply to HEAD.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Feb 2020 09:26:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typos in comments and user docs" }, { "msg_contents": "On Fri, Feb 07, 2020 at 09:26:04AM +0530, Amit Kapila wrote:\n> On Fri, Feb 7, 2020 at 8:41 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Fri, Feb 07, 2020 at 08:33:40AM +0530, Amit Kapila wrote:\n> > > On Thu, Feb 6, 2020 at 7:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > > On Thu, Feb 06, 2020 at 04:43:18PM +0530, Amit Kapila wrote:\n> > > > > On Thu, Feb 6, 2020 at 10:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > > > >\n> > > > > > On Thu, Feb 06, 2020 at 08:47:14AM +0530, Amit Kapila wrote:\n> > > > > > > Your changes look fine to me on the first read. I will push this to\n> > > > > > > HEAD unless there are any objections. If we want them in\n> > > > > > > back-branches, we might want to probably segregate the changes based\n> > > > > > > on the branch until those apply.\n> > > > > >\n> > > > > > +1. It would be nice to back-patch the user-visible changes in the\n> > > > > > docs.\n> > > > > >\n> > > > >\n> > > > > Fair enough, Justin, is it possible for you to segregate the changes\n> > > > > that can be backpatched?\n> > > >\n> > > > Looks like the whole patch can be applied to master and v12 [0].\n> > >\n> \n> I tried your patch master and it failed to apply.\n> (Stripping trailing CRs from patch; use --binary to disable.)\n> patching file doc/src/sgml/bloom.sgml\n> (Stripping trailing CRs from patch; use --binary to disable.)\n> patching file doc/src/sgml/config.sgml\n> Hunk #1 FAILED at 4318.\n> 1 out of 1 hunk FAILED -- saving rejects to file doc/src/sgml/config.sgml.rej\n\nI think you applied the first patch, which I corrected here.\nhttps://www.postgresql.org/message-id/20200206135640.GG403%40telsasoft.com\n\nJust rechecked it works for master and v12.\n\n$ git checkout -b test2 origin/master\nBranch test2 set up to track remote branch master from origin.\nSwitched to a new branch 'test2'\n$ patch -p1 <0001-spelling-and-typos.patch\npatching file doc/src/sgml/bloom.sgml\npatching file doc/src/sgml/ref/alter_table.sgml\npatching file doc/src/sgml/sources.sgml\npatching file src/backend/access/transam/README.parallel\npatching file src/backend/storage/buffer/bufmgr.c\npatching file src/backend/storage/sync/sync.c\npatching file src/include/access/tableam.h\n\n$ patch -p1 <0001-spelling-and-typos.patch\npatching file doc/src/sgml/bloom.sgml\npatching file doc/src/sgml/ref/alter_table.sgml\nHunk #1 succeeded at 220 (offset -2 lines).\npatching file doc/src/sgml/sources.sgml\npatching file src/backend/access/transam/README.parallel\npatching file src/backend/storage/buffer/bufmgr.c\nHunk #1 succeeded at 4268 (offset -23 lines).\npatching file src/backend/storage/sync/sync.c\npatching file src/include/access/tableam.h\nHunk #1 succeeded at 1167 (offset -18 lines).\n\nThe bloom patch there works for v11.\nAttached now another version for v10-.", "msg_date": "Thu, 6 Feb 2020 22:17:06 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: typos in comments and user docs" }, { "msg_contents": "On Fri, Feb 7, 2020 at 9:47 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Feb 07, 2020 at 09:26:04AM +0530, Amit Kapila wrote:\n> > On Fri, Feb 7, 2020 at 8:41 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > On Fri, Feb 07, 2020 at 08:33:40AM +0530, Amit Kapila wrote:\n> > > > On Thu, Feb 6, 2020 at 7:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > >\n> > > > > On Thu, Feb 06, 2020 at 04:43:18PM +0530, Amit Kapila wrote:\n> > > > > > On Thu, Feb 6, 2020 at 10:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > > > > >\n> > > > > > > On Thu, Feb 06, 2020 at 08:47:14AM +0530, Amit Kapila wrote:\n> > > > > > > > Your changes look fine to me on the first read. I will push this to\n> > > > > > > > HEAD unless there are any objections. If we want them in\n> > > > > > > > back-branches, we might want to probably segregate the changes based\n> > > > > > > > on the branch until those apply.\n> > > > > > >\n> > > > > > > +1. It would be nice to back-patch the user-visible changes in the\n> > > > > > > docs.\n> > > > > > >\n> > > > > >\n> > > > > > Fair enough, Justin, is it possible for you to segregate the changes\n> > > > > > that can be backpatched?\n> > > > >\n> > > > > Looks like the whole patch can be applied to master and v12 [0].\n> > > >\n> >\n> > I tried your patch master and it failed to apply.\n> > (Stripping trailing CRs from patch; use --binary to disable.)\n> > patching file doc/src/sgml/bloom.sgml\n> > (Stripping trailing CRs from patch; use --binary to disable.)\n> > patching file doc/src/sgml/config.sgml\n> > Hunk #1 FAILED at 4318.\n> > 1 out of 1 hunk FAILED -- saving rejects to file doc/src/sgml/config.sgml.rej\n>\n> I think you applied the first patch, which I corrected here.\n> https://www.postgresql.org/message-id/20200206135640.GG403%40telsasoft.com\n>\n> Just rechecked it works for master and v12.\n>\n\nOkay, thanks. I have pushed user-facing changes (bloom.sgml and\nalter_table.sgml) to back branches till they apply and rest of the\nchanges in just HEAD.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 Feb 2020 09:36:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typos in comments and user docs" } ]
[ { "msg_contents": "Hi,\n\nThe ringbuffers we use for seqscans, vacuum, copy etc can cause very\ndrastic slowdowns (see e.g. [1]), an can cause some workloads to\npractically never end up utilizing shared buffers. ETL workloads\ne.g. regularly fight with that problem.\n\nWhile I think there's a number of improvements[2] we could make to the\nringbuffer logic, I think we should also just allow to make them\nconfigurable. I think that'll allow a decent number of systems perform\nbetter (especially on slightly bigger systems the the current\nringbuffers are *way* too small) , make the thresholds more discoverable\n(e.g. the NBuffers / 4 threshold is very confusing), and will make it\neasier to experiment with better default values.\n\nI think it would make sense to have seqscan_ringbuffer_threshold,\n{bulkread,bulkwrite,vacuum}_ringbuffer_size. I think they often sensibly\nare set in proportion of shared_buffers, so I suggest defining them as\nfloats, where negative values divide shared_buffers, whereas positive\nvalues are absolute sizes, and 0 disables the use of ringbuffers.\n\nI.e. to maintain the current defaults, seqscan_ringbuffer_threshold\nwould be -4.0, but could be also be set to an absolute 4GB (converted to\npages). Probably would want a GUC show function that displays\nproportional values in a nice way.\n\nWe probably should also just increase all the ringbuffer sizes by an\norder of magnitude or two, especially the one for VACUUM.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20190507201619.lnyg2nyhmpxcgeau%40alap3.anarazel.de\n\n[2] The two most important things imo:\n a) Don't evict buffers when falling off the ringbuffer as long as\n there unused buffers on the freelist. Possibly just set their\n usagecount to zero as long that is the case.\n b) The biggest performance pain comes from ringbuffers where it's\n likely that buffers are dirty (vacuum, copy), because doing so\n requires that the corresponding WAL be flushed. Which often ends\n up turning many individual buffer evictions into an fdatasync,\n slowing things down to a crawl. And the contention caused by that\n is a significant concurrency issue too. By doing writes, but not\n flushes, shortly after the insertion, we can reduce the cost\n significantly.\n\n\n", "msg_date": "Wed, 5 Feb 2020 20:00:26 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Make ringbuffer threshold and ringbuffer sizes configurable?" }, { "msg_contents": "From: Andres Freund <andres@anarazel.de>\n> While I think there's a number of improvements[2] we could make to the\n> ringbuffer logic, I think we should also just allow to make them\n> configurable. I think that'll allow a decent number of systems perform\n> better (especially on slightly bigger systems the the current\n> ringbuffers are *way* too small) , make the thresholds more discoverable\n> (e.g. the NBuffers / 4 threshold is very confusing), and will make it\n> easier to experiment with better default values.\n\n+1\nThe NBuffers / 4 logic sometimes caused unexpected behavior. IIRC, even when some batch or analytic processing needed to read large tables sequentially multiple times, the second and subsequent reads didn't get the benefit of caching. another example is that before pg_prewarm became available, I couldn't cache the entire table by running \"SELECT * from table\" before benchmarking performance.\n\n\n> I think it would make sense to have seqscan_ringbuffer_threshold,\n> {bulkread,bulkwrite,vacuum}_ringbuffer_size. I think they often sensibly\n> are set in proportion of shared_buffers, so I suggest defining them as\n> floats, where negative values divide shared_buffers, whereas positive\n> values are absolute sizes, and 0 disables the use of ringbuffers.\n> \n> I.e. to maintain the current defaults, seqscan_ringbuffer_threshold\n> would be -4.0, but could be also be set to an absolute 4GB (converted to\n> pages). Probably would want a GUC show function that displays\n> proportional values in a nice way.\n\nI think per-table reloption is necessary as well as or instead of GUC, because the need for caching depends on the table (see below for Oracle's manual.)\n\nI'm afraid it would be confusing for a user-settable parameter to have different units (percent and size). I think just the positive percentage would suffice.\n\n\nCREATE TABLE\nhttps://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/CREATE-TABLE.html#GUID-F9CE0CC3-13AE-4744-A43C-EAC7A71AAAB6\n--------------------------------------------------\nCACHE | NOCACHE | CACHE READS\n\nUse these clauses to indicate how Oracle Database should store blocks in the buffer cache. For LOB storage, you can specify CACHE, NOCACHE, or CACHE READS. For other types of storage, you can specify only CACHE or NOCACHE. \n\nThe behavior of CACHE and NOCACHE described in this section does not apply when Oracle Database chooses to use direct reads or to perform table scans using parallel query. \n\nCACHE\n\nFor data that is accessed frequently, this clause indicates that the blocks retrieved for this table are placed at the most recently used end of the least recently used (LRU) list in the buffer cache when a full table scan is performed. This attribute is useful for small lookup tables.\n\nNOCACHE\n\nFor data that is not accessed frequently, this clause indicates that the blocks retrieved for this table are placed at the least recently used end of the LRU list in the buffer cache when a full table scan is performed. NOCACHE is the default for LOB storage. \n\nCACHE READS\n\nCACHE READS applies only to LOB storage. It specifies that LOB values are brought into the buffer cache only during read operations but not during write operations. \n--------------------------------------------------\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Thu, 6 Feb 2020 05:12:11 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Make ringbuffer threshold and ringbuffer sizes configurable?" }, { "msg_contents": "Hi,\n\nOn 2020-02-06 05:12:11 +0000, tsunakawa.takay@fujitsu.com wrote:\n> I think per-table reloption is necessary as well as or instead of GUC, because the need for caching depends on the table (see below for Oracle's manual.)\n\nI'm inclined to not do that initially. It's going to be controversial\nenough to add the GUCs.\n\n\n> I'm afraid it would be confusing for a user-settable parameter to have\n> different units (percent and size). I think just the positive\n> percentage would suffice.\n\nIDK, I feel like there's good reasons to use either. But I'd gladly take\njust percent if that's the general concensus, rather than not getting\nthe improvement at all.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Feb 2020 21:19:09 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Make ringbuffer threshold and ringbuffer sizes configurable?" }, { "msg_contents": "On Wed, 2020-02-05 at 20:00 -0800, Andres Freund wrote:\n> The ringbuffers we use for seqscans, vacuum, copy etc can cause very\n> drastic slowdowns (see e.g. [1]), an can cause some workloads to\n> practically never end up utilizing shared buffers. ETL workloads\n> e.g. regularly fight with that problem.\n> \n> I think it would make sense to have seqscan_ringbuffer_threshold,\n> {bulkread,bulkwrite,vacuum}_ringbuffer_size. I think they often sensibly\n> are set in proportion of shared_buffers, so I suggest defining them as\n> floats, where negative values divide shared_buffers, whereas positive\n> values are absolute sizes, and 0 disables the use of ringbuffers.\n\nSounds reasonable.\n\nI feel that it should be as few GUCs as possible, so I think that\nhaving one per type of operation might be too granular.\n\nThis should of course also be a storage parameter that can be\nset per table.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Thu, 06 Feb 2020 07:18:00 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Make ringbuffer threshold and ringbuffer sizes configurable?" }, { "msg_contents": "Hi,\n\nOn 2020-02-06 07:18:00 +0100, Laurenz Albe wrote:\n> On Wed, 2020-02-05 at 20:00 -0800, Andres Freund wrote:\n> > The ringbuffers we use for seqscans, vacuum, copy etc can cause very\n> > drastic slowdowns (see e.g. [1]), an can cause some workloads to\n> > practically never end up utilizing shared buffers. ETL workloads\n> > e.g. regularly fight with that problem.\n> > \n> > I think it would make sense to have seqscan_ringbuffer_threshold,\n> > {bulkread,bulkwrite,vacuum}_ringbuffer_size. I think they often sensibly\n> > are set in proportion of shared_buffers, so I suggest defining them as\n> > floats, where negative values divide shared_buffers, whereas positive\n> > values are absolute sizes, and 0 disables the use of ringbuffers.\n> \n> Sounds reasonable.\n\n> I feel that it should be as few GUCs as possible, so I think that\n> having one per type of operation might be too granular.\n\nThey already are set to different sizes, so I don't really see how\nthat's something doable. The relevant bits of code are:\n\nBufferAccessStrategy\nGetAccessStrategy(BufferAccessStrategyType btype)\n{\n\tBufferAccessStrategy strategy;\n\tint\t\t\tring_size;\n\n\t/*\n\t * Select ring size to use. See buffer/README for rationales.\n\t *\n\t * Note: if you change the ring size for BAS_BULKREAD, see also\n\t * SYNC_SCAN_REPORT_INTERVAL in access/heap/syncscan.c.\n\t */\n\tswitch (btype)\n\t{\n\t\tcase BAS_NORMAL:\n\t\t\t/* if someone asks for NORMAL, just give 'em a \"default\" object */\n\t\t\treturn NULL;\n\n\t\tcase BAS_BULKREAD:\n\t\t\tring_size = 256 * 1024 / BLCKSZ;\n\t\t\tbreak;\n\t\tcase BAS_BULKWRITE:\n\t\t\tring_size = 16 * 1024 * 1024 / BLCKSZ;\n\t\t\tbreak;\n\t\tcase BAS_VACUUM:\n\t\t\tring_size = 256 * 1024 / BLCKSZ;\n\t\t\tbreak;\n\nand\n\n\n\t/*\n\t * If the table is large relative to NBuffers, use a bulk-read access\n\t * strategy and enable synchronized scanning (see syncscan.c). Although\n\t * the thresholds for these features could be different, we make them the\n\t * same so that there are only two behaviors to tune rather than four.\n\t * (However, some callers need to be able to disable one or both of these\n\t * behaviors, independently of the size of the table; also there is a GUC\n\t * variable that can disable synchronized scanning.)\n\t *\n\t * Note that table_block_parallelscan_initialize has a very similar test;\n\t * if you change this, consider changing that one, too.\n\t */\n\tif (!RelationUsesLocalBuffers(scan->rs_base.rs_rd) &&\n\t\tscan->rs_nblocks > NBuffers / 4)\n\t{\n\t\tallow_strat = (scan->rs_base.rs_flags & SO_ALLOW_STRAT) != 0;\n\t\tallow_sync = (scan->rs_base.rs_flags & SO_ALLOW_SYNC) != 0;\n\t}\n\telse\n\t\tallow_strat = allow_sync = false;\n\n\n> This should of course also be a storage parameter that can be\n> set per table.\n\nI honestly don't quite get that. What precisely is this addressing? I\nmean fine, I can add that, but it's sufficiently more complicated than\nthe GUCs, and I don't really forsee that being particularly useful to\ntune on a per table basis. A lot of the reason behind having the\nringbuffers is about managing the whole system impact, rather than\nmaking individual table fast/slow.\n\n- Andres\n\n\n", "msg_date": "Thu, 6 Feb 2020 09:54:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Make ringbuffer threshold and ringbuffer sizes configurable?" }, { "msg_contents": "On Wed, Feb 5, 2020 at 11:00 PM Andres Freund <andres@anarazel.de> wrote:\n> I.e. to maintain the current defaults, seqscan_ringbuffer_threshold\n> would be -4.0, but could be also be set to an absolute 4GB (converted to\n> pages). Probably would want a GUC show function that displays\n> proportional values in a nice way.\n\nI think this is kind of awkward given that our GUC system attributes\nunits to everything. It'd sort of be nicer to have two separate GUCs,\none measured as a multiple and the other measured in bytes, but maybe\nthat's just exchanging one form of confusion for another.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 6 Feb 2020 13:15:04 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make ringbuffer threshold and ringbuffer sizes configurable?" }, { "msg_contents": "Hi,\n\nOn 2020-02-06 13:15:04 -0500, Robert Haas wrote:\n> On Wed, Feb 5, 2020 at 11:00 PM Andres Freund <andres@anarazel.de> wrote:\n> > I.e. to maintain the current defaults, seqscan_ringbuffer_threshold\n> > would be -4.0, but could be also be set to an absolute 4GB (converted to\n> > pages). Probably would want a GUC show function that displays\n> > proportional values in a nice way.\n> \n> I think this is kind of awkward given that our GUC system attributes\n> units to everything.\n\nI admit it's awkward. I think we possibly could still just make the size\ndisplayed in bytes in either case, reducing that issue a *bit*?\n\n\n> It'd sort of be nicer to have two separate GUCs,\n> one measured as a multiple and the other measured in bytes, but maybe\n> that's just exchanging one form of confusion for another.\n\nWe don't really have a good way to deal with GUCs where setting one\nprecludes the other, especially when those GUCs should be changable at\nruntime :(.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 Feb 2020 10:52:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Make ringbuffer threshold and ringbuffer sizes configurable?" }, { "msg_contents": "On Thu, Feb 6, 2020 at 1:52 PM Andres Freund <andres@anarazel.de> wrote:\n> I admit it's awkward. I think we possibly could still just make the size\n> displayed in bytes in either case, reducing that issue a *bit*?\n\nThat seems like it makes it even more confusing, honestly.\n\n> > It'd sort of be nicer to have two separate GUCs,\n> > one measured as a multiple and the other measured in bytes, but maybe\n> > that's just exchanging one form of confusion for another.\n>\n> We don't really have a good way to deal with GUCs where setting one\n> precludes the other, especially when those GUCs should be changable at\n> runtime :(.\n\nIt can work if one of the GUCs is king, and the other one takes effect\nonly the first one is set to some value that means \"ignore me\". We\nhave a number of examples of that, e.g. autovacuum_work_mem,\nautovacuum_vacuum_cost_limit.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 6 Feb 2020 14:03:58 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make ringbuffer threshold and ringbuffer sizes configurable?" }, { "msg_contents": "On Wed, Feb 05, 2020 at 08:00:26PM -0800, Andres Freund wrote:\n> I think it would make sense to have seqscan_ringbuffer_threshold,\n> {bulkread,bulkwrite,vacuum}_ringbuffer_size.\n\nI suggest the possibility of somehow forcing a ringbuffer for nonbulk writes\nfor the current session.\n\nIn our use-case, we have loader processes INSERTing data using prepared\nstatements, UPSERT, and/or multiple VALUES(),() lists. Some of that data will\nbe accessed in the near future (15min-24hr) but some parts (large parts, even)\nmay never be accessed. I imagine most of the buffer pages never get\nusagecount > 0 before being evicted.\n\nI think it'd still be desirable to make the backend do write() its own dirty\nbuffers to the OS, rather than leaving behind large numbers of dirty buffers\nfor another backend to deal with, since that *could* be a customer facing\nreport. I'd prefer the report run 10% faster due to rarely hitting dirty\nbuffer (by avoiding the need to write out lots of someone elses data), than the\nloaders to run 25% slower, due to constantly writing to the OS.\n\nThe speed of loaders is not something our customers would be concerned with.\nIt's okay if they are slower than they might be. They need to keep up with\nincoming data, but it'd rarely matter if we load a 15min interval of data in\n5min instead of in 4.\n\nWe would use copy if we could, to get ring buffer during writes. But cannot\ndue to UPSERT (and maybe other reasons). \n\nI have considered the possibility of loading data into a separate instance with\nsmall (or in any case separate) shared_buffers and then tranferring its data to\na customer-facing report instance using pg_restore (COPY)...but the overhead to\nmaintain that would be significant for us (me).\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 19 Feb 2020 11:37:42 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Make ringbuffer threshold and ringbuffer sizes configurable?" } ]
[ { "msg_contents": "Hi,\n\nI am testing performance both PG12 and PG11.\nI found the case of performance degradation in PG12.\n\nAmit Langote help me to analyze and to create patch.\nThanks!\n\n* environment\n\nCentOS Linux release 7.6.1810 (Core)\npostgresql 12.1\npostgresql 11.6\n\n* postgresql.conf\n\nshared_buffers = 2048MB\nmax_parallel_workers_per_gather = 0\nwork_mem = '64MB'\njit = off\n\n* test case\n\nCREATE TABLE realtest(a real, b real, c real, d real, e real);\nINSERT INTO realtest SELECT i,i,i,i,i FROM generate_series(0,10000000) AS i;\n\nEXPLAIN (ANALYZE on, VERBOSE on, BUFFERS on)\n select (2 * a) , (2 * b) , (2 * c), (2 * d), (2 * e)\n from realtest;\n\n* result\n\n PG12.1 5878.389 ms\n PG11.6 4533.554 ms\n\n** PostgreSQL 12.1\n\npgbench=# EXPLAIN (ANALYZE on, VERBOSE on, BUFFERS on)\n select (2 * a) , (2 * b) , (2 * c), (2 * d), (2 * e)\n from realtest;\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on public.realtest (cost=0.00..288697.59 rows=10000115 width=40)\n(actual time=0.040..5195.328 rows=10000001 loops=1)\n Output: ('2'::double precision * a), ('2'::double precision * b),\n('2'::double precision * c), ('2'::double precision * d), ('2'::double\nprecision * e)\n Buffers: shared hit=63695\n Planning Time: 0.051 ms\n Execution Time: 5878.389 ms\n(5 行)\n\nSamples: 6K of event 'cpu-clock', Event count (approx.): 1577750000\nOverhead Command Shared Object Symbol\n 25.48% postgres postgres [.] ExecInterpExpr\n★18.65% postgres libc-2.17.so [.] __isinf\n 14.36% postgres postgres [.] float84mul\n 8.54% postgres [vdso] [.] __vdso_clock_gettime\n 4.02% postgres postgres [.] ExecScan\n 3.69% postgres postgres [.] tts_buffer_heap_getsomeattrs\n 2.63% postgres libc-2.17.so [.] __clock_gettime\n 2.55% postgres postgres [.] HeapTupleSatisfiesVisibility\n 2.00% postgres postgres [.] 
heapgettup_pagemode\n\n** PostgreSQL 11.6\n\npgbench=# EXPLAIN (ANALYZE on, VERBOSE on, BUFFERS on)\n select (2 * a) , (2 * b) , (2 * c), (2 * d), (2 * e)\n from realtest;\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on public.realtest (cost=0.00..288697.59 rows=10000115 width=40)\n(actual time=0.012..3845.480 rows=10000001 loops=1)\n Output: ('2'::double precision * a), ('2'::double precision * b),\n('2'::double precision * c), ('2'::double precision * d), ('2'::double\nprecision * e)\n Buffers: shared hit=63695\n Planning Time: 0.033 ms\n Execution Time: 4533.554 ms\n(5 rows)\n\nSamples: 4K of event 'cpu-clock', Event count (approx.): 1192000000\nOverhead Command Shared Object Symbol\n 32.30% postgres postgres [.] ExecInterpExpr\n 14.95% postgres postgres [.] float84mul\n 10.57% postgres [vdso] [.] __vdso_clock_gettime\n★ 6.84% postgres libc-2.17.so [.] __isinf\n 3.96% postgres postgres [.] ExecScan\n 3.50% postgres libc-2.17.so [.] __clock_gettime\n 3.31% postgres postgres [.] heap_getnext\n 3.08% postgres postgres [.] HeapTupleSatisfiesMVCC\n 2.77% postgres postgres [.] slot_deform_tuple\n 2.37% postgres postgres [.] ExecProcNodeInstr\n 2.08% postgres postgres [.] 
standard_ExecutorRun\n\n* cause\n\nObviously, even in common cases where no overflow occurs,\nyou can tell that PG 12 is performing isinf() 3 times on every call of\nfloat8_mul(), once for each of val1, val2, and result, whereas PG 11\nperforms it only once, for result.\n\nThat's because check_float8_val() (in PG 12) is a function\nwhose arguments must be evaluated before\nit is called (it is inline, but that's irrelevant),\nwhereas CHECKFLOATVAL() (in PG11) is a macro\nwhose arguments are only substituted into its body.\n\nBy the way, this change of the float8mul() implementation is\nmostly due to the following commit in the PG 12 development cycle:\ncommit 6bf0bc842bd75877e31727eb559c6a69e237f831\n\nEspecially the following diff:\n\n@@ -894,13 +746,8 @@ float8mul(PG_FUNCTION_ARGS) {\n float8 arg1 = PG_GETARG_FLOAT8(0);\n float8 arg2 = PG_GETARG_FLOAT8(1);\n- float8 result;\n-\n- result = arg1 * arg2;\n\n- CHECKFLOATVAL(result, isinf(arg1) || isinf(arg2),\n- arg1 == 0 || arg2 == 0);\n- PG_RETURN_FLOAT8(result);\n+ PG_RETURN_FLOAT8(float8_mul(arg1, arg2));\n }\n\n* patch\n\nThis patch uses the macro that was used by PG 11.\nI tried the attached patch, which can be applied to the PG 12 source, and\nperformed a benchmark:\n\n PG12.1 5878.389 ms\n PG11.6 4533.554 ms\n\n PG12.1 + Patch 4679.162 ms\n\n** PostgreSQL 12.1 + Patch\n\npostgres=# EXPLAIN (ANALYZE on, VERBOSE on, BUFFERS on)\n select (2 * a) , (2 * b) , (2 * c), (2 * d), (2 * e)\n from realtest;\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on public.realtest (cost=0.00..307328.38 rows=10828150 width=40)\n(actual time=0.012..4009.012 rows=10000001 loops=1)\n Output: ('2'::double precision * a), ('2'::double precision * b),\n('2'::double precision * c), ('2'::double precision * d), ('2'::double\nprecision * e)\n Buffers: shared hit=63695\n Planning Time: 0.038 ms\n Execution Time: 4679.162 ms\n(5 
rows)\n\nSamples: 5K of event 'cpu-clock', Event count (approx.): 1376750000\nOverhead Command Shared Object Symbol\n 31.43% postgres postgres [.] ExecInterpExpr\n 14.24% postgres postgres [.] float84mul\n 10.40% postgres [vdso] [.] __vdso_clock_gettime\n★ 5.41% postgres libc-2.17.so [.] __isinf\n 4.63% postgres postgres [.] tts_buffer_heap_getsomeattrs\n 4.03% postgres postgres [.] ExecScan\n 3.54% postgres libc-2.17.so [.] __clock_gettime\n 3.12% postgres postgres [.] HeapTupleSatisfiesVisibility\n 2.36% postgres postgres [.] heap_getnextslot\n 2.16% postgres postgres [.] heapgettup_pagemode\n 2.09% postgres postgres [.] standard_ExecutorRun\n 2.07% postgres postgres [.] SeqNext\n 2.03% postgres postgres [.] ExecProcNodeInstr\n 2.03% postgres postgres [.] tts_virtual_clear\n\nPG 12 is still slower compared to PG 11, but the __isinf() situation is\nbetter with the patch.\n\nBest Regards,\nKeisuke Kuroda", "msg_date": "Thu, 6 Feb 2020 14:25:03 +0900", "msg_from": "keisuke kuroda <keisuke.kuroda.3862@gmail.com>", "msg_from_op": true, "msg_subject": "In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Hi,\n\nOn 2020-02-06 14:25:03 +0900, keisuke kuroda wrote:\n> That's because check_float8_val() (in PG 12) is a function\n> whose arguments must be evaluated before\n> it is called (it is inline, but that's irrelevant),\n> whereas CHECKFLOATVAL() (in PG11) is a macro\n> whose arguments are only substituted into its body.\n\nHm - it's not that clear to me that it is irrelevant that the function\ngets inlined. The compiler should know that isinf is side-effect free,\nand that it doesn't have to evaluate before necessary.\n\nNormally isinf is implemented by a compiler intrinsic within the system\nheaders. But not in your profile:\n> ★ 5.41% postgres libc-2.17.so [.] 
__isinf\n\nI checked, and I don't see any references to isinf from within float.c\n(looking at the disassembly - there's some debug strings containing the\nword, but that's it).\n\nWhat compiler & compiler version on what kind of architecture is this?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Feb 2020 21:55:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Hi,\n\nOn Thu, Feb 6, 2020 at 2:55 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-02-06 14:25:03 +0900, keisuke kuroda wrote:\n> > That's because check_float8_val() (in PG 12) is a function\n> > whose arguments must be evaluated before\n> > it is called (it is inline, but that's irrelevant),\n> > whereas CHECKFLOATVAL() (in PG11) is a macro\n> > whose arguments are only substituted into its body.\n>\n> Hm - it's not that clear to me that it is irrelevant that the function\n> gets inlined. The compiler should know that isinf is side-effect free,\n> and that it doesn't have to evaluate before necessary.\n>\n> Normally isinf is implemented by a compiler intrisic within the system\n> headers. But not in your profile:\n> > ★ 5.41% postgres libc-2.17.so [.] __isinf\n>\n> I checked, and I don't see any references to isinf from within float.c\n> (looking at the disassembly - there's some debug strings containing the\n> word, but that's it).\n>\n> What compiler & compiler version on what kind of architecture is this?\n\nAs Kuroda-san mentioned, I also checked the behavior that he reports.\nThe compiler I used is an ancient one (CentOS 7 default):\n\n$ gcc --version\ngcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39)\n\nCompiler dependent behavior of inlining might be relevant here, but\nthere is one more thing to consider. 
The if () condition in\ncheck_float8_val (PG 12) and CHECKFLOATVAL (PG 11) is calculated\ndifferently, causing isinf() to be called more times in PG 12:\n\nstatic inline void\ncheck_float8_val(const float8 val, const bool inf_is_valid,\n const bool zero_is_valid)\n{\n if (!inf_is_valid && unlikely(isinf(val)))\n ereport(ERROR,\n (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n errmsg(\"value out of range: overflow\")));\n\n#define CHECKFLOATVAL(val, inf_is_valid, zero_is_valid) \\\ndo { \\\n if (isinf(val) && !(inf_is_valid)) \\\n ereport(ERROR, \\\n (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE), \\\n errmsg(\"value out of range: overflow\"))); \\\n\ncalled thusly:\n\n check_float8_val(result, isinf(val1) || isinf(val2),\n val1 == 0.0 || val2 == 0.0);\n\nand\n\n CHECKFLOATVAL(result, isinf(arg1) || isinf(arg2),\n arg1 == 0 || arg2 == 0);\n\nfrom float8_mul() and float8mul() in PG 12 and PG 11, respectively.\n\nYou may notice that the if () condition is reversed, so while PG 12\ncalculates isinf(arg1) || isinf(arg2) first and isinf(result) only if\nthe first is false, which it is in most cases, PG 11 calculates\nisinf(result) first, followed by isinf(arg1) || isinf(arg2) if the\nformer is true. I don't understand why such reversal was necessary,\nbut it appears to be the main factor behind this slowdown. So, even\nif PG 12's check_float8_val() is perfectly inlined, this slowdown\ncouldn't be helped.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 6 Feb 2020 16:05:02 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "So it appears to me that what commit 6bf0bc842 did in this area was\nnot just wrong, but disastrously so. 
Before that, we had a macro that\nevaluated isinf(val) before it evaluated the inf_is_valid condition.\nNow we have check_float[48]_val which do it the other way around.\nThat would be okay if the inf_is_valid condition were cheap to\nevaluate, but in common code paths it's actually twice as expensive\nas isinf().\n\nAndres seems to be of the opinion that the compiler should be willing\nto ignore the semantic requirements of the C standard in order\nto rearrange the code back into the cheaper order. That sounds like\nwishful thinking to me ... even if it actually works on his compiler,\nit certainly isn't going to work for everyone.\n\nThe patch looks unduly invasive to me, but I think that it might be\nright that we should go back to a macro-based implementation, because\notherwise we don't have a good way to be certain that the function\nparameter won't get evaluated first. (Another reason to do so is\nso that the file/line numbers generated for the error reports go back\nto being at least a little bit useful.) We could use local variables\nwithin the macro to avoid double evals, if anyone thinks that's\nactually important --- I don't.\n\nI think the current code is probably also misusing unlikely(),\nand that the right way would be more like\n\n\tif (unlikely(isinf(val) && !(inf_is_valid)))\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Feb 2020 11:03:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "On Thu, Feb 6, 2020 at 11:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So it appears to me that what commit 6bf0bc842 did in this area was\n> not just wrong, but disastrously so. 
Before that, we had a macro that\n> evaluated isinf(val) before it evaluated the inf_is_valid condition.\n> Now we have check_float[48]_val which do it the other way around.\n> That would be okay if the inf_is_valid condition were cheap to\n> evaluate, but in common code paths it's actually twice as expensive\n> as isinf().\n\nWell, if the previous coding was a deliberate attempt to dodge this\nperformance issue, the evidence seems to be well-concealed. Neither\nthe comments for that macro nor the related commit messages make any\nmention of it. When subtle things like this are performance-critical,\ngood comments are pretty critical, too.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 6 Feb 2020 13:31:07 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Hi,\n\nOn 2020-02-06 11:03:51 -0500, Tom Lane wrote:\n> Andres seems to be of the opinion that the compiler should be willing\n> to ignore the semantic requirements of the C standard in order\n> to rearrange the code back into the cheaper order. That sounds like\n> wishful thinking to me ... even if it actually works on his compiler,\n> it certainly isn't going to work for everyone.\n\nSorry, but, uh, what are you talking about? Please tell me which single\nstandards violation I'm advocating for?\n\nI was asking about the inlining bit because the first email of the topic\nexplained that as the problem, which I don't believe can be the full\nexplanation - and it turns out it isn't. As Amit Langote's followup\nemail explained, there's the whole issue of the order of checks being\ninverted - which is clearly bad. 
And wholly unrelated to inlining.\n\nAnd I asked about __isinf() being used because there are issues with\naccidentally ending up with the non-intrinsic version of isinf() when\nnot using gcc, due to badly written standard library headers.\n\n\n> The patch looks unduly invasive to me, but I think that it might be\n> right that we should go back to a macro-based implementation, because\n> otherwise we don't have a good way to be certain that the function\n> parameter won't get evaluated first.\n\nI'd first like to see some actual evidence of this being a problem,\nrather than just the order of the checks.\n\n\n> (Another reason to do so is so that the file/line numbers generated\n> for the error reports go back to being at least a little bit useful.)\n> We could use local variables within the macro to avoid double evals,\n> if anyone thinks that's actually important --- I don't.\n\nI don't think that's necessarily a good idea. In fact, I think we should\nprobably do the exact opposite, and move the error messages further out\nof line. All these otherwise very small functions having their own\nereports makes them much bigger. 
Our low code density, and the resulting\nrate of itlb misses, is pretty significant cost (cf [1]).\n\nmaster:\n text\t data\t bss\t dec\t hex\tfilename\n 36124\t 44\t 65\t 36233\t 8d89\tfloat.o\nerror messages moved out of line:\n text\t data\t bss\t dec\t hex\tfilename\n 32883\t 44\t 65\t 32992\t 80e0\tfloat.o\n\nTaking int4pl as an example - solely because it is simpler assembly to\nlook at - we get:\n\nmaster:\n 0x00000000004ac190 <+0>:\tmov 0x30(%rdi),%rax\n 0x00000000004ac194 <+4>:\tadd 0x20(%rdi),%eax\n 0x00000000004ac197 <+7>:\tjo 0x4ac19c <int4pl+12>\n 0x00000000004ac199 <+9>:\tcltq\n 0x00000000004ac19b <+11>:\tretq\n 0x00000000004ac19c <+12>:\tpush %rbp\n 0x00000000004ac19d <+13>:\tlea 0x1a02c4(%rip),%rsi # 0x64c468\n 0x00000000004ac1a4 <+20>:\txor %r8d,%r8d\n 0x00000000004ac1a7 <+23>:\tlea 0x265da1(%rip),%rcx # 0x711f4f <__func__.26823>\n 0x00000000004ac1ae <+30>:\tmov $0x30b,%edx\n 0x00000000004ac1b3 <+35>:\tmov $0x14,%edi\n 0x00000000004ac1b8 <+40>:\tcallq 0x586060 <errstart>\n 0x00000000004ac1bd <+45>:\tlea 0x147e0e(%rip),%rdi # 0x5f3fd2\n 0x00000000004ac1c4 <+52>:\txor %eax,%eax\n 0x00000000004ac1c6 <+54>:\tcallq 0x5896a0 <errmsg>\n 0x00000000004ac1cb <+59>:\tmov $0x3000082,%edi\n 0x00000000004ac1d0 <+64>:\tmov %eax,%ebp\n 0x00000000004ac1d2 <+66>:\tcallq 0x589540 <errcode>\n 0x00000000004ac1d7 <+71>:\tmov %eax,%edi\n 0x00000000004ac1d9 <+73>:\tmov %ebp,%esi\n 0x00000000004ac1db <+75>:\txor %eax,%eax\n 0x00000000004ac1dd <+77>:\tcallq 0x588fb0 <errfinish>\n\nout-of-line error:\n 0x00000000004b04e0 <+0>:\tmov 0x30(%rdi),%rax\n 0x00000000004b04e4 <+4>:\tadd 0x20(%rdi),%eax\n 0x00000000004b04e7 <+7>:\tjo 0x4b04ec <int4pl+12>\n 0x00000000004b04e9 <+9>:\tcltq\n 0x00000000004b04eb <+11>:\tretq\n 0x00000000004b04ec <+12>:\tpush %rax\n 0x00000000004b04ed <+13>:\tcallq 0x115e17 <out_of_range_err>\n\nWith the out-of-line error, we can fit multiple of these functions into one\ncache line. 
With the inline error, not even one.\n\nGreetings,\n\nAndres Freund\n\n[1] https://twitter.com/AndresFreundTec/status/1214305610172289024\n\n\n", "msg_date": "Thu, 6 Feb 2020 10:48:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Hi,\n\nI have been testing with newer compiler (clang-7)\nand result is a bit different at least with clang-7.\nCompiling PG 12.1 (even without patch) with clang-7\nresults in __isinf() no longer being a bottleneck,\nthat is, you don't see it in profiler at all.\n\nSo, there is no issue for people who use the modern clang toolchain,\nbut maybe that's not everyone.\nSo there would still be some interest in doing something about this.\n\n* clang\n\nbash-4.2$ which clang\n/opt/rh/llvm-toolset-7.0/root/usr/bin/clang\n\nbash-4.2$ clang -v\nclang version 7.0.1 (tags/RELEASE_701/final)\nTarget: x86_64-unknown-linux-gnu\nThread model: posix\nInstalledDir: /opt/rh/llvm-toolset-7.0/root/usr/bin\nFound candidate GCC installation:\n/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7\nFound candidate GCC installation:\n/opt/rh/devtoolset-8/root/usr/lib/gcc/x86_64-redhat-linux/8\nFound candidate GCC installation: /usr/lib/gcc/x86_64-redhat-linux/4.8.2\nFound candidate GCC installation: /usr/lib/gcc/x86_64-redhat-linux/4.8.5\nSelected GCC installation:\n/opt/rh/devtoolset-8/root/usr/lib/gcc/x86_64-redhat-linux/8\nCandidate multilib: .;@m64\nCandidate multilib: 32;@m32\nSelected multilib: .;@m64\n\n** pg_config\n\n---\nCONFIGURE = '--prefix=/var/lib/pgsql/pgsql/12.1'\n'CC=/opt/rh/llvm-toolset-7.0/root/usr/bin/clang'\n'PKG_CONFIG_PATH=/opt/rh/llvm-toolset-7.0/root/usr/lib64/pkgconfig'\nCC = /opt/rh/llvm-toolset-7.0/root/usr/bin/clang\n---\n\n* result(PostgreSQL 12.1 (even without patch))\n\npostgres=# EXPLAIN (ANALYZE on, VERBOSE on, BUFFERS on)\n select (2 * a) , (2 * b) , (2 * c), (2 * d), (2 * e)\n from 
realtest;\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Seq Scan on public.realtest (cost=0.00..288697.59 rows=10000115 width=40)\n(actual time=0.012..3878.284 rows=10000001 loops=1)\n Output: ('2'::double precision * a), ('2'::double precision * b),\n('2'::double precision * c), ('2'::double precision * d), ('2'::double\nprecision * e)\n Buffers: shared hit=63695\n Planning Time: 0.038 ms\n Execution Time: 4533.767 ms\n(5 rows)\n\nSamples: 5K of event 'cpu-clock', Event count (approx.): 1275000000\nOverhead Command Shared Object Symbol\n 33.92% postgres postgres [.] ExecInterpExpr\n 13.27% postgres postgres [.] float84mul\n 10.86% postgres [vdso] [.] __vdso_clock_gettime\n 5.49% postgres postgres [.] tts_buffer_heap_getsomeattrs\n 3.96% postgres postgres [.] ExecScan\n 3.25% postgres libc-2.17.so [.] __clock_gettime\n 3.16% postgres postgres [.] heap_getnextslot\n 2.41% postgres postgres [.] tts_virtual_clear\n 2.39% postgres postgres [.] SeqNext\n 2.22% postgres postgres [.] InstrStopNode\n\nBest Regards,\nKeisuke Kuroda\n\nOn Fri, Feb 7, 2020 at 3:48 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-02-06 11:03:51 -0500, Tom Lane wrote:\n> > Andres seems to be of the opinion that the compiler should be willing\n> > to ignore the semantic requirements of the C standard in order\n> > to rearrange the code back into the cheaper order. That sounds like\n> > wishful thinking to me ... even if it actually works on his compiler,\n> > it certainly isn't going to work for everyone.\n>\n> Sorry, but, uh, what are you talking about? Please tell me which single\n> standards violation I'm advocating for?\n>\n> I was asking about the inlining bit because the first email of the topic\n> explained that as the problem, which I don't believe can be the full\n> explanation - and it turns out it isn't. 
As Amit Langote's followup\n> email explained, there's the whole issue of the order of checks being\n> inverted - which is clearly bad. And wholly unrelated to inlining.\n>\n> And I asked about __isinf() being used because there are issues with\n> accidentally ending up with the non-intrinsic version of isinf() when\n> not using gcc, due to badly written standard library headers.\n>\n>\n> > The patch looks unduly invasive to me, but I think that it might be\n> > right that we should go back to a macro-based implementation, because\n> > otherwise we don't have a good way to be certain that the function\n> > parameter won't get evaluated first.\n>\n> I'd first like to see some actual evidence of this being a problem,\n> rather than just the order of the checks.\n>\n>\n> > (Another reason to do so is so that the file/line numbers generated\n> > for the error reports go back to being at least a little bit useful.)\n> > We could use local variables within the macro to avoid double evals,\n> > if anyone thinks that's actually important --- I don't.\n>\n> I don't think that's necessarily a good idea. In fact, I think we should\n> probably do the exact opposite, and move the error messages further out\n> of line. All these otherwise very small functions having their own\n> ereports makes them much bigger. 
Our low code density, and the resulting\n> rate of itlb misses, is pretty significant cost (cf [1]).\n>\n> master:\n> text data bss dec hex filename\n> 36124 44 65 36233 8d89 float.o\n> error messages moved out of line:\n> text data bss dec hex filename\n> 32883 44 65 32992 80e0 float.o\n>\n> Taking int4pl as an example - solely because it is simpler assembly to\n> look at - we get:\n>\n> master:\n> 0x00000000004ac190 <+0>: mov 0x30(%rdi),%rax\n> 0x00000000004ac194 <+4>: add 0x20(%rdi),%eax\n> 0x00000000004ac197 <+7>: jo 0x4ac19c <int4pl+12>\n> 0x00000000004ac199 <+9>: cltq\n> 0x00000000004ac19b <+11>: retq\n> 0x00000000004ac19c <+12>: push %rbp\n> 0x00000000004ac19d <+13>: lea 0x1a02c4(%rip),%rsi #\n> 0x64c468\n> 0x00000000004ac1a4 <+20>: xor %r8d,%r8d\n> 0x00000000004ac1a7 <+23>: lea 0x265da1(%rip),%rcx #\n> 0x711f4f <__func__.26823>\n> 0x00000000004ac1ae <+30>: mov $0x30b,%edx\n> 0x00000000004ac1b3 <+35>: mov $0x14,%edi\n> 0x00000000004ac1b8 <+40>: callq 0x586060 <errstart>\n> 0x00000000004ac1bd <+45>: lea 0x147e0e(%rip),%rdi #\n> 0x5f3fd2\n> 0x00000000004ac1c4 <+52>: xor %eax,%eax\n> 0x00000000004ac1c6 <+54>: callq 0x5896a0 <errmsg>\n> 0x00000000004ac1cb <+59>: mov $0x3000082,%edi\n> 0x00000000004ac1d0 <+64>: mov %eax,%ebp\n> 0x00000000004ac1d2 <+66>: callq 0x589540 <errcode>\n> 0x00000000004ac1d7 <+71>: mov %eax,%edi\n> 0x00000000004ac1d9 <+73>: mov %ebp,%esi\n> 0x00000000004ac1db <+75>: xor %eax,%eax\n> 0x00000000004ac1dd <+77>: callq 0x588fb0 <errfinish>\n>\n> out-of-line error:\n> 0x00000000004b04e0 <+0>: mov 0x30(%rdi),%rax\n> 0x00000000004b04e4 <+4>: add 0x20(%rdi),%eax\n> 0x00000000004b04e7 <+7>: jo 0x4b04ec <int4pl+12>\n> 0x00000000004b04e9 <+9>: cltq\n> 0x00000000004b04eb <+11>: retq\n> 0x00000000004b04ec <+12>: push %rax\n> 0x00000000004b04ed <+13>: callq 0x115e17 <out_of_range_err>\n>\n> With the out-of-line error, we can fit multiple of these functions into one\n> cache line. 
With the inline error, not even one.\n>\n> Greetings,\n>\n> Andres Freund\n>\n> [1] https://twitter.com/AndresFreundTec/status/1214305610172289024\n>\n\nHi,I have been testing with newer compiler (clang-7) and result is a bit different at least with clang-7. Compiling PG 12.1 (even without patch) with clang-7 results in __isinf() no longer being a bottleneck,that is, you don't see it in profiler at all.So, there is no issue for people who use the modern clang toolchain,but maybe that's not everyone.So there would still be some interest in doing something about this.* clangbash-4.2$ which clang/opt/rh/llvm-toolset-7.0/root/usr/bin/clangbash-4.2$ clang -vclang version 7.0.1 (tags/RELEASE_701/final)Target: x86_64-unknown-linux-gnuThread model: posixInstalledDir: /opt/rh/llvm-toolset-7.0/root/usr/binFound candidate GCC installation: /opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7Found candidate GCC installation: /opt/rh/devtoolset-8/root/usr/lib/gcc/x86_64-redhat-linux/8Found candidate GCC installation: /usr/lib/gcc/x86_64-redhat-linux/4.8.2Found candidate GCC installation: /usr/lib/gcc/x86_64-redhat-linux/4.8.5Selected GCC installation: /opt/rh/devtoolset-8/root/usr/lib/gcc/x86_64-redhat-linux/8Candidate multilib: .;@m64Candidate multilib: 32;@m32Selected multilib: .;@m64** pg_config---CONFIGURE = '--prefix=/var/lib/pgsql/pgsql/12.1' 'CC=/opt/rh/llvm-toolset-7.0/root/usr/bin/clang' 'PKG_CONFIG_PATH=/opt/rh/llvm-toolset-7.0/root/usr/lib64/pkgconfig'CC = /opt/rh/llvm-toolset-7.0/root/usr/bin/clang---* result(PostgreSQL 12.1 (even without patch))postgres=# EXPLAIN (ANALYZE on, VERBOSE on, BUFFERS on) select (2 * a) , (2 * b) , (2 * c), (2 * d),  (2 * e) from realtest;                                                                        QUERY PLAN----------------------------------------------------------------------------------------------------------------------- Seq Scan on public.realtest  (cost=0.00..288697.59 rows=10000115 width=40) (actual 
time=0.012..3878.284 rows=10000001 loops=1)   Output: ('2'::double precision * a), ('2'::double precision * b), ('2'::double precision * c), ('2'::double precision * d), ('2'::double precision * e)   Buffers: shared hit=63695 Planning Time: 0.038 ms Execution Time: 4533.767 ms(5 rows)Samples: 5K of event 'cpu-clock', Event count (approx.): 1275000000Overhead  Command   Shared Object      Symbol  33.92%  postgres  postgres           [.] ExecInterpExpr  13.27%  postgres  postgres           [.] float84mul  10.86%  postgres  [vdso]             [.] __vdso_clock_gettime   5.49%  postgres  postgres           [.] tts_buffer_heap_getsomeattrs   3.96%  postgres  postgres           [.] ExecScan   3.25%  postgres  libc-2.17.so       [.] __clock_gettime   3.16%  postgres  postgres           [.] heap_getnextslot   2.41%  postgres  postgres           [.] tts_virtual_clear   2.39%  postgres  postgres           [.] SeqNext   2.22%  postgres  postgres           [.] InstrStopNodeBest Regards,Keisuke Kuroda2020年2月7日(金) 3:48 Andres Freund <andres@anarazel.de>:Hi,\n\nOn 2020-02-06 11:03:51 -0500, Tom Lane wrote:\n> Andres seems to be of the opinion that the compiler should be willing\n> to ignore the semantic requirements of the C standard in order\n> to rearrange the code back into the cheaper order.  That sounds like\n> wishful thinking to me ... even if it actually works on his compiler,\n> it certainly isn't going to work for everyone.\n\nSorry, but, uh, what are you talking about?  Please tell me which single\nstandards violation I'm advocating for?\n\nI was asking about the inlining bit because the first email of the topic\nexplained that as the problem, which I don't believe can be the full\nexplanation - and it turns out it isn't. As Amit Langote's followup\nemail explained, there's the whole issue of the order of checks being\ninverted - which is clearly bad. 
And wholly unrelated to inlining.\n\nAnd I asked about __isinf() being used because there are issues with\naccidentally ending up with the non-intrinsic version of isinf() when\nnot using gcc, due to badly written standard library headers.\n\n\n> The patch looks unduly invasive to me, but I think that it might be\n> right that we should go back to a macro-based implementation, because\n> otherwise we don't have a good way to be certain that the function\n> parameter won't get evaluated first.\n\nI'd first like to see some actual evidence of this being a problem,\nrather than just the order of the checks.\n\n\n> (Another reason to do so is so that the file/line numbers generated\n> for the error reports go back to being at least a little bit useful.)\n> We could use local variables within the macro to avoid double evals,\n> if anyone thinks that's actually important --- I don't.\n\nI don't think that's necessarily a good idea. In fact, I think we should\nprobably do the exact opposite, and move the error messages further out\nof line. All these otherwise very small functions having their own\nereports makes them much bigger. 
Our low code density, and the resulting\nrate of itlb misses, is pretty significant cost (cf [1]).\n\nmaster:\n   text    data     bss     dec     hex filename\n  36124      44      65   36233    8d89 float.o\nerror messages moved out of line:\n   text    data     bss     dec     hex filename\n  32883      44      65   32992    80e0 float.o\n\nTaking int4pl as an example - solely because it is simpler assembly to\nlook at - we get:\n\nmaster:\n   0x00000000004ac190 <+0>:     mov    0x30(%rdi),%rax\n   0x00000000004ac194 <+4>:     add    0x20(%rdi),%eax\n   0x00000000004ac197 <+7>:     jo     0x4ac19c <int4pl+12>\n   0x00000000004ac199 <+9>:     cltq\n   0x00000000004ac19b <+11>:    retq\n   0x00000000004ac19c <+12>:    push   %rbp\n   0x00000000004ac19d <+13>:    lea    0x1a02c4(%rip),%rsi        # 0x64c468\n   0x00000000004ac1a4 <+20>:    xor    %r8d,%r8d\n   0x00000000004ac1a7 <+23>:    lea    0x265da1(%rip),%rcx        # 0x711f4f <__func__.26823>\n   0x00000000004ac1ae <+30>:    mov    $0x30b,%edx\n   0x00000000004ac1b3 <+35>:    mov    $0x14,%edi\n   0x00000000004ac1b8 <+40>:    callq  0x586060 <errstart>\n   0x00000000004ac1bd <+45>:    lea    0x147e0e(%rip),%rdi        # 0x5f3fd2\n   0x00000000004ac1c4 <+52>:    xor    %eax,%eax\n   0x00000000004ac1c6 <+54>:    callq  0x5896a0 <errmsg>\n   0x00000000004ac1cb <+59>:    mov    $0x3000082,%edi\n   0x00000000004ac1d0 <+64>:    mov    %eax,%ebp\n   0x00000000004ac1d2 <+66>:    callq  0x589540 <errcode>\n   0x00000000004ac1d7 <+71>:    mov    %eax,%edi\n   0x00000000004ac1d9 <+73>:    mov    %ebp,%esi\n   0x00000000004ac1db <+75>:    xor    %eax,%eax\n   0x00000000004ac1dd <+77>:    callq  0x588fb0 <errfinish>\n\nout-of-line error:\n   0x00000000004b04e0 <+0>:     mov    0x30(%rdi),%rax\n   0x00000000004b04e4 <+4>:     add    0x20(%rdi),%eax\n   0x00000000004b04e7 <+7>:     jo     0x4b04ec <int4pl+12>\n   0x00000000004b04e9 <+9>:     cltq\n   0x00000000004b04eb <+11>:    retq\n   0x00000000004b04ec <+12>:    push   
%rax\n   0x00000000004b04ed <+13>:    callq  0x115e17 <out_of_range_err>\n\nWith the out-of-line error, we can fit multiple of these functions into one\ncache line. With the inline error, not even one.\n\nGreetings,\n\nAndres Freund\n\n[1] https://twitter.com/AndresFreundTec/status/1214305610172289024", "msg_date": "Fri, 7 Feb 2020 16:42:30 +0900", "msg_from": "keisuke kuroda <keisuke.kuroda.3862@gmail.com>", "msg_from_op": true, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Hi, \n\nOn February 6, 2020 11:42:30 PM PST, keisuke kuroda <keisuke.kuroda.3862@gmail.com> wrote:\n>Hi,\n>\n>I have been testing with newer compiler (clang-7)\n>and result is a bit different at least with clang-7.\n>Compiling PG 12.1 (even without patch) with clang-7\n>results in __isinf() no longer being a bottleneck,\n>that is, you don't see it in profiler at all.\n\nI don't think that's necessarily the right conclusion. What's quite possibly happening is that you do not see the external isinf function anymore, because it is implemented as an intrinsic, but that there still are more computations being done. Due to the changed order of the isinf checks. 
You'd have to compare with 11 using the same compiler.\n\nAndres\n\n\n>* result(PostgreSQL 12.1 (even without patch))\n>\n>postgres=# EXPLAIN (ANALYZE on, VERBOSE on, BUFFERS on)\n> select (2 * a) , (2 * b) , (2 * c), (2 * d), (2 * e)\n> from realtest;\n>\n>QUERY PLAN\n>-----------------------------------------------------------------------------------------------------------------------\n>Seq Scan on public.realtest (cost=0.00..288697.59 rows=10000115\n>width=40)\n>(actual time=0.012..3878.284 rows=10000001 loops=1)\n> Output: ('2'::double precision * a), ('2'::double precision * b),\n>('2'::double precision * c), ('2'::double precision * d), ('2'::double\n>precision * e)\n> Buffers: shared hit=63695\n> Planning Time: 0.038 ms\n> Execution Time: 4533.767 ms\n>(5 rows)\n>\n>Samples: 5K of event 'cpu-clock', Event count (approx.): 1275000000\n>Overhead Command Shared Object Symbol\n> 33.92% postgres postgres [.] ExecInterpExpr\n> 13.27% postgres postgres [.] float84mul\n> 10.86% postgres [vdso] [.] __vdso_clock_gettime\n> 5.49% postgres postgres [.] tts_buffer_heap_getsomeattrs\n> 3.96% postgres postgres [.] ExecScan\n> 3.25% postgres libc-2.17.so [.] __clock_gettime\n> 3.16% postgres postgres [.] heap_getnextslot\n> 2.41% postgres postgres [.] tts_virtual_clear\n> 2.39% postgres postgres [.] SeqNext\n> 2.22% postgres postgres [.] InstrStopNode\n>\n>Best Regards,\n>Keisuke Kuroda\n>\n>On Fri, Feb 7, 2020 at 3:48 AM Andres Freund <andres@anarazel.de> wrote:\n>\n>> Hi,\n>>\n>> On 2020-02-06 11:03:51 -0500, Tom Lane wrote:\n>> > Andres seems to be of the opinion that the compiler should be\n>willing\n>> > to ignore the semantic requirements of the C standard in order\n>> > to rearrange the code back into the cheaper order. That sounds\n>like\n>> > wishful thinking to me ... even if it actually works on his\n>compiler,\n>> > it certainly isn't going to work for everyone.\n>>\n>> Sorry, but, uh, what are you talking about? 
Please tell me which\n>single\n>> standards violation I'm advocating for?\n>>\n>> I was asking about the inlining bit because the first email of the\n>topic\n>> explained that as the problem, which I don't believe can be the full\n>> explanation - and it turns out it isn't. As Amit Langote's followup\n>> email explained, there's the whole issue of the order of checks being\n>> inverted - which is clearly bad. And wholly unrelated to inlining.\n>>\n>> And I asked about __isinf() being used because there are issues with\n>> accidentally ending up with the non-intrinsic version of isinf() when\n>> not using gcc, due to badly written standard library headers.\n>>\n>>\n>> > The patch looks unduly invasive to me, but I think that it might be\n>> > right that we should go back to a macro-based implementation,\n>because\n>> > otherwise we don't have a good way to be certain that the function\n>> > parameter won't get evaluated first.\n>>\n>> I'd first like to see some actual evidence of this being a problem,\n>> rather than just the order of the checks.\n>>\n>>\n>> > (Another reason to do so is so that the file/line numbers generated\n>> > for the error reports go back to being at least a little bit\n>useful.)\n>> > We could use local variables within the macro to avoid double\n>evals,\n>> > if anyone thinks that's actually important --- I don't.\n>>\n>> I don't think that's necessarily a good idea. In fact, I think we\n>should\n>> probably do the exact opposite, and move the error messages further\n>out\n>> of line. All these otherwise very small functions having their own\n>> ereports makes them much bigger. 
Our low code density, and the\n>resulting\n>> rate of itlb misses, is pretty significant cost (cf [1]).\n>>\n>> master:\n>> text data bss dec hex filename\n>> 36124 44 65 36233 8d89 float.o\n>> error messages moved out of line:\n>> text data bss dec hex filename\n>> 32883 44 65 32992 80e0 float.o\n>>\n>> Taking int4pl as an example - solely because it is simpler assembly\n>to\n>> look at - we get:\n>>\n>> master:\n>> 0x00000000004ac190 <+0>: mov 0x30(%rdi),%rax\n>> 0x00000000004ac194 <+4>: add 0x20(%rdi),%eax\n>> 0x00000000004ac197 <+7>: jo 0x4ac19c <int4pl+12>\n>> 0x00000000004ac199 <+9>: cltq\n>> 0x00000000004ac19b <+11>: retq\n>> 0x00000000004ac19c <+12>: push %rbp\n>> 0x00000000004ac19d <+13>: lea 0x1a02c4(%rip),%rsi #\n>> 0x64c468\n>> 0x00000000004ac1a4 <+20>: xor %r8d,%r8d\n>> 0x00000000004ac1a7 <+23>: lea 0x265da1(%rip),%rcx #\n>> 0x711f4f <__func__.26823>\n>> 0x00000000004ac1ae <+30>: mov $0x30b,%edx\n>> 0x00000000004ac1b3 <+35>: mov $0x14,%edi\n>> 0x00000000004ac1b8 <+40>: callq 0x586060 <errstart>\n>> 0x00000000004ac1bd <+45>: lea 0x147e0e(%rip),%rdi #\n>> 0x5f3fd2\n>> 0x00000000004ac1c4 <+52>: xor %eax,%eax\n>> 0x00000000004ac1c6 <+54>: callq 0x5896a0 <errmsg>\n>> 0x00000000004ac1cb <+59>: mov $0x3000082,%edi\n>> 0x00000000004ac1d0 <+64>: mov %eax,%ebp\n>> 0x00000000004ac1d2 <+66>: callq 0x589540 <errcode>\n>> 0x00000000004ac1d7 <+71>: mov %eax,%edi\n>> 0x00000000004ac1d9 <+73>: mov %ebp,%esi\n>> 0x00000000004ac1db <+75>: xor %eax,%eax\n>> 0x00000000004ac1dd <+77>: callq 0x588fb0 <errfinish>\n>>\n>> out-of-line error:\n>> 0x00000000004b04e0 <+0>: mov 0x30(%rdi),%rax\n>> 0x00000000004b04e4 <+4>: add 0x20(%rdi),%eax\n>> 0x00000000004b04e7 <+7>: jo 0x4b04ec <int4pl+12>\n>> 0x00000000004b04e9 <+9>: cltq\n>> 0x00000000004b04eb <+11>: retq\n>> 0x00000000004b04ec <+12>: push %rax\n>> 0x00000000004b04ed <+13>: callq 0x115e17 <out_of_range_err>\n>>\n>> With the out-of-line error, we can fit multiple of these functions\n>into one\n>> cache line. 
With the inline error, not even one.\n>>\n>> Greetings,\n>>\n>> Andres Freund\n>>\n>> [1] https://twitter.com/AndresFreundTec/status/1214305610172289024\n>>\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 06 Feb 2020 23:53:57 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "On Fri, Feb 7, 2020 at 4:54 PM Andres Freund <andres@anarazel.de> wrote:\n> On February 6, 2020 11:42:30 PM PST, keisuke kuroda <keisuke.kuroda.3862@gmail.com> wrote:\n> >Hi,\n> >\n> >I have been testing with newer compiler (clang-7)\n> >and result is a bit different at least with clang-7.\n> >Compiling PG 12.1 (even without patch) with clang-7\n> >results in __isinf() no longer being a bottleneck,\n> >that is, you don't see it in profiler at all.\n>\n> I don't think that's necessarily the right conclusion. What's quite possibly happening is that you do not see the external isinf function anymore, because it is implemented as an intrinsic, but that there still are more computations being done. Due to the changed order of the isinf checks. You'd have to compare with 11 using the same compiler.\n\nI did some tests using two relatively recent compilers: gcc 8 and\nclang-7 and here are the results:\n\nSetup:\n\ncreate table realtest (a real, b real, c real, d real, e real);\ninsert into realtest select i, i, i, i, i from generate_series(1, 1000000) i;\n\nTest query:\n\n/tmp/query.sql\nselect avg(2*dsqrt(a)), avg(2*dsqrt(b)), avg(2*dsqrt(c)),\navg(2*dsqrt(d)), avg(2*dsqrt(e)) from realtest;\n\npgbench -n -T 60 -f /tmp/query.sql\n\nLatency and profiling results:\n\ngcc 8 (gcc (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3))\n====\n\n11.6\n\nlatency average = 463.968 ms\n\n 40.62% postgres postgres [.] ExecInterpExpr\n 9.74% postgres postgres [.] float8_accum\n 6.12% postgres libc-2.17.so [.] 
__isinf\n 5.96% postgres postgres [.] float8mul\n 5.33% postgres postgres [.] dsqrt\n 3.90% postgres postgres [.] ftod\n 3.53% postgres postgres [.] Float8GetDatum\n 2.34% postgres postgres [.] DatumGetFloat8\n 2.15% postgres postgres [.] AggCheckCallContext\n 2.03% postgres postgres [.] slot_deform_tuple\n 1.95% postgres libm-2.17.so [.] __sqrt\n 1.19% postgres postgres [.] check_float8_array\n\nHEAD\n\nlatency average = 549.071 ms\n\n 31.74% postgres postgres [.] ExecInterpExpr\n 11.02% postgres libc-2.17.so [.] __isinf\n 10.58% postgres postgres [.] float8_accum\n 4.84% postgres postgres [.] check_float8_val\n 4.66% postgres postgres [.] dsqrt\n 3.91% postgres postgres [.] float8mul\n 3.56% postgres postgres [.] ftod\n 3.26% postgres postgres [.] Float8GetDatum\n 2.91% postgres postgres [.] float8_mul\n 2.30% postgres postgres [.] DatumGetFloat8\n 2.19% postgres postgres [.] slot_deform_heap_tuple\n 1.81% postgres postgres [.] AggCheckCallContext\n 1.31% postgres libm-2.17.so [.] __sqrt\n 1.25% postgres postgres [.] check_float8_array\n\nHEAD + patch\n\nlatency average = 546.624 ms\n\n 33.51% postgres postgres [.] ExecInterpExpr\n 10.35% postgres postgres [.] float8_accum\n 10.06% postgres libc-2.17.so [.] __isinf\n 4.58% postgres postgres [.] dsqrt\n 4.14% postgres postgres [.] check_float8_val\n 4.03% postgres postgres [.] ftod\n 3.54% postgres postgres [.] float8mul\n 2.96% postgres postgres [.] Float8GetDatum\n 2.38% postgres postgres [.] slot_deform_heap_tuple\n 2.23% postgres postgres [.] DatumGetFloat8\n 2.09% postgres postgres [.] float8_mul\n 1.88% postgres postgres [.] AggCheckCallContext\n 1.65% postgres libm-2.17.so [.] __sqrt\n 1.22% postgres postgres [.] check_float8_array\n\n\nclang-7 (clang version 7.0.1 (tags/RELEASE_701/final))\n=====\n\n11.6\n\nlatency average = 419.014 ms\n\n 47.57% postgres postgres [.] ExecInterpExpr\n 7.99% postgres postgres [.] float8_accum\n 5.96% postgres postgres [.] dsqrt\n 4.88% postgres postgres [.] 
float8mul\n 4.23% postgres postgres [.] ftod\n 3.30% postgres postgres [.] slot_deform_tuple\n 3.19% postgres postgres [.] DatumGetFloat8\n 1.92% postgres libm-2.17.so [.] __sqrt\n 1.72% postgres postgres [.] check_float8_array\n\nHEAD\n\nlatency average = 452.958 ms\n\n 40.55% postgres postgres [.] ExecInterpExpr\n 10.61% postgres postgres [.] float8_accum\n 4.58% postgres postgres [.] dsqrt\n 3.59% postgres postgres [.] slot_deform_heap_tuple\n 3.54% postgres postgres [.] check_float8_val\n 3.48% postgres postgres [.] ftod\n 3.42% postgres postgres [.] float8mul\n 3.22% postgres postgres [.] DatumGetFloat8\n 2.69% postgres postgres [.] Float8GetDatum\n 2.46% postgres postgres [.] float8_mul\n 2.29% postgres libm-2.17.so [.] __sqrt\n 1.47% postgres postgres [.] check_float8_array\n\nHEAD + patch\n\nlatency average = 452.533 ms\n\n 41.05% postgres postgres [.] ExecInterpExpr\n 10.15% postgres postgres [.] float8_accum\n 5.62% postgres postgres [.] dsqrt\n 3.86% postgres postgres [.] check_float8_val\n 3.27% postgres postgres [.] float8mul\n 3.09% postgres postgres [.] slot_deform_heap_tuple\n 2.91% postgres postgres [.] ftod\n 2.88% postgres postgres [.] DatumGetFloat8\n 2.62% postgres postgres [.] float8_mul\n 2.03% postgres libm-2.17.so [.] __sqrt\n 2.00% postgres postgres [.] 
check_float8_array\n\nThe patch mentioned above is this:\n\ndiff --git a/src/include/utils/float.h b/src/include/utils/float.h\nindex e2c5dc0f57..dc97d19293 100644\n--- a/src/include/utils/float.h\n+++ b/src/include/utils/float.h\n@@ -136,12 +136,12 @@ static inline void\n check_float4_val(const float4 val, const bool inf_is_valid,\n const bool zero_is_valid)\n {\n- if (!inf_is_valid && unlikely(isinf(val)))\n+ if (unlikely(isinf(val)) && !inf_is_valid)\n ereport(ERROR,\n (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n errmsg(\"value out of range: overflow\")));\n\n- if (!zero_is_valid && unlikely(val == 0.0))\n+ if (unlikely(val == 0.0) && !zero_is_valid)\n ereport(ERROR,\n (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n errmsg(\"value out of range: underflow\")));\n@@ -151,12 +151,12 @@ static inline void\n check_float8_val(const float8 val, const bool inf_is_valid,\n const bool zero_is_valid)\n {\n- if (!inf_is_valid && unlikely(isinf(val)))\n+ if (unlikely(isinf(val)) && !inf_is_valid)\n ereport(ERROR,\n (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n errmsg(\"value out of range: overflow\")));\n\n- if (!zero_is_valid && unlikely(val == 0.0))\n+ if (unlikely(val == 0.0) && !zero_is_valid)\n ereport(ERROR,\n (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n errmsg(\"value out of range: underflow\")));\n\nSo, the patch appears to do very little here. I can only conclude that\nthe check_float{8|4}_val() (PG 12) is slower than CHECKFLOATVAL() (PG\n11) due to arguments being evaluated first. 
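The mechanics behind that conclusion can be shown in isolation: C evaluates every argument of a function call (inline or not) before entering it, while a macro's && can short-circuit. Here is a self-contained sketch of the difference (all names are invented for illustration, and fputs() stands in for PostgreSQL's ereport(ERROR)):

```c
#include <math.h>
#include <stdio.h>

static int isinf_calls = 0;

/* isinf() wrapper that counts how often the inputs get classified */
static int counted_isinf(double v)
{
    isinf_calls++;
    return isinf(v) != 0;
}

/* Function-style check: C evaluates every argument before the call,
 * so the inf_is_valid/zero_is_valid expressions run unconditionally. */
static void check_val_fn(double val, int inf_is_valid, int zero_is_valid)
{
    if (!inf_is_valid && isinf(val))
        fputs("value out of range: overflow\n", stderr);
    if (!zero_is_valid && val == 0.0)
        fputs("value out of range: underflow\n", stderr);
}

/* Macro-style check: && short-circuits, so the same expressions are
 * only evaluated when val actually looks out of range. */
#define CHECK_VAL(val, inf_is_valid, zero_is_valid) \
    do { \
        if (isinf(val) && !(inf_is_valid)) \
            fputs("value out of range: overflow\n", stderr); \
        if ((val) == 0.0 && !(zero_is_valid)) \
            fputs("value out of range: underflow\n", stderr); \
    } while (0)

double mul_fn(double a, double b)
{
    double r = a * b;

    check_val_fn(r, counted_isinf(a) || counted_isinf(b),
                 a == 0.0 || b == 0.0);
    return r;
}

double mul_macro(double a, double b)
{
    double r = a * b;

    CHECK_VAL(r, counted_isinf(a) || counted_isinf(b),
              a == 0.0 || b == 0.0);
    return r;
}
```

Counting the isinf() evaluations on the inputs makes the cost visible: the function form pays for isinf(a) || isinf(b) on every multiplication, while the macro form never reaches those expressions unless the result is actually infinite.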
It's entirely possible\nthough that the patch shown above is not enough.\n\nThanks,\nAmit\n\n\n", "msg_date": "Fri, 7 Feb 2020 17:17:21 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Fwiw, also tried the patch that Kuroda-san had posted yesterday.\n\nOn Fri, Feb 7, 2020 at 5:17 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Latency and profiling results:\n>\n> gcc 8 (gcc (GCC) 8.3.1 20190311 (Red Hat 8.3.1-3))\n> ====\n>\n> 11.6\n>\n> latency average = 463.968 ms\n>\n> 40.62% postgres postgres [.] ExecInterpExpr\n> 9.74% postgres postgres [.] float8_accum\n> 6.12% postgres libc-2.17.so [.] __isinf\n> 5.96% postgres postgres [.] float8mul\n> 5.33% postgres postgres [.] dsqrt\n> 3.90% postgres postgres [.] ftod\n> 3.53% postgres postgres [.] Float8GetDatum\n> 2.34% postgres postgres [.] DatumGetFloat8\n> 2.15% postgres postgres [.] AggCheckCallContext\n> 2.03% postgres postgres [.] slot_deform_tuple\n> 1.95% postgres libm-2.17.so [.] __sqrt\n> 1.19% postgres postgres [.] check_float8_array\n>\n> HEAD\n>\n> latency average = 549.071 ms\n>\n> 31.74% postgres postgres [.] ExecInterpExpr\n> 11.02% postgres libc-2.17.so [.] __isinf\n> 10.58% postgres postgres [.] float8_accum\n> 4.84% postgres postgres [.] check_float8_val\n> 4.66% postgres postgres [.] dsqrt\n> 3.91% postgres postgres [.] float8mul\n> 3.56% postgres postgres [.] ftod\n> 3.26% postgres postgres [.] Float8GetDatum\n> 2.91% postgres postgres [.] float8_mul\n> 2.30% postgres postgres [.] DatumGetFloat8\n> 2.19% postgres postgres [.] slot_deform_heap_tuple\n> 1.81% postgres postgres [.] AggCheckCallContext\n> 1.31% postgres libm-2.17.so [.] __sqrt\n> 1.25% postgres postgres [.] check_float8_array\n>\n> HEAD + patch\n>\n> latency average = 546.624 ms\n>\n> 33.51% postgres postgres [.] ExecInterpExpr\n> 10.35% postgres postgres [.] 
float8_accum\n> 10.06% postgres libc-2.17.so [.] __isinf\n> 4.58% postgres postgres [.] dsqrt\n> 4.14% postgres postgres [.] check_float8_val\n> 4.03% postgres postgres [.] ftod\n> 3.54% postgres postgres [.] float8mul\n> 2.96% postgres postgres [.] Float8GetDatum\n> 2.38% postgres postgres [.] slot_deform_heap_tuple\n> 2.23% postgres postgres [.] DatumGetFloat8\n> 2.09% postgres postgres [.] float8_mul\n> 1.88% postgres postgres [.] AggCheckCallContext\n> 1.65% postgres libm-2.17.so [.] __sqrt\n> 1.22% postgres postgres [.] check_float8_array\n\nHEAD + Kuroda-san's patch (compiled with gcc 8)\n\nlatency average = 484.604 ms\n\n 37.41% postgres postgres [.] ExecInterpExpr\n 10.83% postgres postgres [.] float8_accum\n 5.62% postgres postgres [.] dsqrt\n 4.23% postgres libc-2.17.so [.] __isinf\n 4.05% postgres postgres [.] float8mul\n 3.85% postgres postgres [.] ftod\n 3.18% postgres postgres [.] Float8GetDatum\n 2.81% postgres postgres [.] slot_deform_heap_tuple\n 2.63% postgres postgres [.] DatumGetFloat8\n 2.46% postgres postgres [.] float8_mul\n 1.91% postgres libm-2.17.so [.] __sqrt\n\n> clang-7 (clang version 7.0.1 (tags/RELEASE_701/final))\n> =====\n>\n> 11.6\n>\n> latency average = 419.014 ms\n>\n> 47.57% postgres postgres [.] ExecInterpExpr\n> 7.99% postgres postgres [.] float8_accum\n> 5.96% postgres postgres [.] dsqrt\n> 4.88% postgres postgres [.] float8mul\n> 4.23% postgres postgres [.] ftod\n> 3.30% postgres postgres [.] slot_deform_tuple\n> 3.19% postgres postgres [.] DatumGetFloat8\n> 1.92% postgres libm-2.17.so [.] __sqrt\n> 1.72% postgres postgres [.] check_float8_array\n>\n> HEAD\n>\n> latency average = 452.958 ms\n>\n> 40.55% postgres postgres [.] ExecInterpExpr\n> 10.61% postgres postgres [.] float8_accum\n> 4.58% postgres postgres [.] dsqrt\n> 3.59% postgres postgres [.] slot_deform_heap_tuple\n> 3.54% postgres postgres [.] check_float8_val\n> 3.48% postgres postgres [.] ftod\n> 3.42% postgres postgres [.] 
float8mul\n> 3.22% postgres postgres [.] DatumGetFloat8\n> 2.69% postgres postgres [.] Float8GetDatum\n> 2.46% postgres postgres [.] float8_mul\n> 2.29% postgres libm-2.17.so [.] __sqrt\n> 1.47% postgres postgres [.] check_float8_array\n>\n> HEAD + patch\n>\n> latency average = 452.533 ms\n>\n> 41.05% postgres postgres [.] ExecInterpExpr\n> 10.15% postgres postgres [.] float8_accum\n> 5.62% postgres postgres [.] dsqrt\n> 3.86% postgres postgres [.] check_float8_val\n> 3.27% postgres postgres [.] float8mul\n> 3.09% postgres postgres [.] slot_deform_heap_tuple\n> 2.91% postgres postgres [.] ftod\n> 2.88% postgres postgres [.] DatumGetFloat8\n> 2.62% postgres postgres [.] float8_mul\n> 2.03% postgres libm-2.17.so [.] __sqrt\n> 2.00% postgres postgres [.] check_float8_array\n\nHEAD + Kuroda-san's patch (compiled with clang-7)\n\nlatency average = 435.454 ms\n\n 43.02% postgres postgres [.] ExecInterpExpr\n 10.86% postgres postgres [.] float8_accum\n 3.97% postgres postgres [.] dsqrt\n 3.97% postgres postgres [.] float8mul\n 3.51% postgres postgres [.] ftod\n 3.42% postgres postgres [.] slot_deform_heap_tuple\n 3.36% postgres postgres [.] DatumGetFloat8\n 1.97% postgres libm-2.17.so [.] __sqrt\n 1.97% postgres postgres [.] check_float8_array\n 1.88% postgres postgres [.] 
float8_mul\n\nNeedless to say, that one makes a visible difference, although still\nslower compared to PG 11.\n\nThanks,\nAmit\n\n\n", "msg_date": "Fri, 7 Feb 2020 17:54:07 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "> Fwiw, also tried the patch that Kuroda-san had posted yesterday.\n\nI run the same test case too:\n\nclang version 7.0.0:\n\nHEAD 2548.119 ms\nwith patch 2320.974 ms\n\nclang version 8.0.0:\n\nHEAD 2431.766 ms\nwith patch 2419.439 ms\n\nclang version 9.0.0:\n\nHEAD 2477.493 ms\nwith patch 2365.509 ms\n\ngcc version 7.4.0:\n\nHEAD 2451.261 ms\nwith patch 2343.393 ms\n\ngcc version 8.3.0:\n\nHEAD 2540.626 ms\nwith patch 2299.653 ms\n\n\n", "msg_date": "Fri, 7 Feb 2020 14:30:54 +0000", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "> > The patch looks unduly invasive to me, but I think that it might be\n> > right that we should go back to a macro-based implementation, because\n> > otherwise we don't have a good way to be certain that the function\n> > parameter won't get evaluated first.\n>\n> I'd first like to see some actual evidence of this being a problem,\n> rather than just the order of the checks.\n\nThere seem to be enough evidence of this being the problem. We are\nbetter off going back to the macro-based implementation. I polished\nKeisuke Kuroda's patch commenting about the performance issue, removed\nthe check_float*_val() functions completely, and added unlikely() as\nTom Lane suggested. It is attached. 
I confirmed with different\ncompilers that the macro, and unlikely() makes this noticeably faster.", "msg_date": "Fri, 7 Feb 2020 14:42:39 +0000", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Moin,\n\nOn 2020-02-07 15:42, Emre Hasegeli wrote:\n>> > The patch looks unduly invasive to me, but I think that it might be\n>> > right that we should go back to a macro-based implementation, because\n>> > otherwise we don't have a good way to be certain that the function\n>> > parameter won't get evaluated first.\n>> \n>> I'd first like to see some actual evidence of this being a problem,\n>> rather than just the order of the checks.\n> \n> There seem to be enough evidence of this being the problem. We are\n> better off going back to the macro-based implementation. I polished\n> Keisuke Kuroda's patch commenting about the performance issue, removed\n> the check_float*_val() functions completely, and added unlikely() as\n> Tom Lane suggested. It is attached. I confirmed with different\n> compilers that the macro, and unlikely() makes this noticeably faster.\n\nHm, the diff has the macro tests as:\n\n +\tif (unlikely(isinf(val) && !(inf_is_valid)))\n ...\n + if (unlikely((val) == 0.0 && !(zero_is_valid)))\n\nBut the comment does not explain that this test has to be in that\norder, or the compiler will for non-constant arguments evalute\nthe (now) right-side first. E.g. if I understand this correctly:\n\n + if (!(zero_is_valid) && unlikely((val) == 0.0)\n\nwould have the same problem of evaluating \"zero_is_valid\" (which\nmight be an isinf(arg1) || isinf(arg2)) first and so be the same thing\nwe try to avoid with the macro? Maybe adding this bit of info to the\ncomment makes it clearer?\n\nAlso, a few places use the macro as:\n\n +\tCHECKFLOATVAL(result, true, true);\n\nwhich evaluates to a complete NOP in both cases. 
IMHO this could be\nreplaced with a comment like:\n\n +\t// No CHECKFLOATVAL() needed, as both inf and 0.0 are valid\n\n(or something along the lines of \"no error can occur\"), as otherwise\nCHECKFLOATVAL() implies to the casual reader that there are some checks\ndone, while in reality no real checks are done at all (and hopefully\nthe compiler optimizes everything away, which might not be true for\ndebug builds).\n\n-- \nBest regards,\n\nTels", "msg_date": "Fri, 07 Feb 2020 18:55:01 +0100", "msg_from": "Tels <nospam-pg-abuse@bloodgate.com>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Hi,\n\nOn 2020-02-07 17:17:21 +0900, Amit Langote wrote:\n> I did some tests using two relatively recent compilers: gcc 8 and\n> clang-7 and here are the results:\n\nHm, these very much look like they've been done in an unoptimized build?\n\n> 40.62% postgres postgres [.] ExecInterpExpr\n> 9.74% postgres postgres [.] float8_accum\n> 6.12% postgres libc-2.17.so [.] __isinf\n> 5.96% postgres postgres [.] float8mul\n> 5.33% postgres postgres [.] dsqrt\n> 3.90% postgres postgres [.] ftod\n> 3.53% postgres postgres [.] Float8GetDatum\n> 2.34% postgres postgres [.] DatumGetFloat8\n> 2.15% postgres postgres [.] AggCheckCallContext\n> 2.03% postgres postgres [.] slot_deform_tuple\n> 1.95% postgres libm-2.17.so [.] __sqrt\n> 1.19% postgres postgres [.] check_float8_array\n\n> HEAD\n> \n> latency average = 549.071 ms\n> \n> 31.74% postgres postgres [.] ExecInterpExpr\n> 11.02% postgres libc-2.17.so [.] __isinf\n> 10.58% postgres postgres [.] float8_accum\n> 4.84% postgres postgres [.] check_float8_val\n> 4.66% postgres postgres [.] dsqrt\n> 3.91% postgres postgres [.] float8mul\n> 3.56% postgres postgres [.] ftod\n> 3.26% postgres postgres [.] Float8GetDatum\n> 2.91% postgres postgres [.] float8_mul\n> 2.30% postgres postgres [.] DatumGetFloat8\n> 2.19% postgres postgres [.] 
slot_deform_heap_tuple\n> 1.81% postgres postgres [.] AggCheckCallContext\n> 1.31% postgres libm-2.17.so [.] __sqrt\n> 1.25% postgres postgres [.] check_float8_array\n\nBecause DatumGetFloat8, Float8GetDatum, etc aren't functions that\nnormally stay separate.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Feb 2020 10:13:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "On Sat, Feb 8, 2020 at 3:13 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-02-07 17:17:21 +0900, Amit Langote wrote:\n> > I did some tests using two relatively recent compilers: gcc 8 and\n> > clang-7 and here are the results:\n>\n> Hm, these very much look like they've been done in an unoptimized build?\n>\n> > 40.62% postgres postgres [.] ExecInterpExpr\n> > 9.74% postgres postgres [.] float8_accum\n> > 6.12% postgres libc-2.17.so [.] __isinf\n> > 5.96% postgres postgres [.] float8mul\n> > 5.33% postgres postgres [.] dsqrt\n> > 3.90% postgres postgres [.] ftod\n> > 3.53% postgres postgres [.] Float8GetDatum\n> > 2.34% postgres postgres [.] DatumGetFloat8\n> > 2.15% postgres postgres [.] AggCheckCallContext\n> > 2.03% postgres postgres [.] slot_deform_tuple\n> > 1.95% postgres libm-2.17.so [.] __sqrt\n> > 1.19% postgres postgres [.] check_float8_array\n>\n> > HEAD\n> >\n> > latency average = 549.071 ms\n> >\n> > 31.74% postgres postgres [.] ExecInterpExpr\n> > 11.02% postgres libc-2.17.so [.] __isinf\n> > 10.58% postgres postgres [.] float8_accum\n> > 4.84% postgres postgres [.] check_float8_val\n> > 4.66% postgres postgres [.] dsqrt\n> > 3.91% postgres postgres [.] float8mul\n> > 3.56% postgres postgres [.] ftod\n> > 3.26% postgres postgres [.] Float8GetDatum\n> > 2.91% postgres postgres [.] float8_mul\n> > 2.30% postgres postgres [.] DatumGetFloat8\n> > 2.19% postgres postgres [.] slot_deform_heap_tuple\n> > 1.81% postgres postgres [.] 
AggCheckCallContext\n> > 1.31% postgres libm-2.17.so [.] __sqrt\n> > 1.25% postgres postgres [.] check_float8_array\n>\n> Because DatumGetFloat8, Float8GetDatum, etc aren't functions that\n> normally stay separate.\n\nOkay, fair.\n\nHere are numbers after compiling with -O3:\n\ngcc 8\n=====\n\nHEAD\n\nlatency average = 350.187 ms\n\n 34.67% postgres postgres [.] ExecInterpExpr\n 20.94% postgres libc-2.17.so [.] __isinf\n 10.74% postgres postgres [.] float8_accum\n 8.22% postgres postgres [.] dsqrt\n 6.63% postgres postgres [.] float8mul\n 3.45% postgres postgres [.] ftod\n 2.32% postgres postgres [.] tts_buffer_heap_getsomeattrs\n\nHEAD + reverse-if-condition patch\n\nlatency average = 346.710 ms\n\n 34.48% postgres postgres [.] ExecInterpExpr\n 21.00% postgres libc-2.17.so [.] __isinf\n 12.26% postgres postgres [.] float8_accum\n 8.31% postgres postgres [.] dsqrt\n 6.32% postgres postgres [.] float8mul\n 3.23% postgres postgres [.] ftod\n 2.25% postgres postgres [.] tts_buffer_heap_getsomeattrs\n\nHEAD + revert-to-macro patch\n\nlatency average = 297.493 ms\n\n 39.25% postgres postgres [.] ExecInterpExpr\n 14.44% postgres postgres [.] float8_accum\n 11.02% postgres libc-2.17.so [.] __isinf\n 8.21% postgres postgres [.] dsqrt\n 5.55% postgres postgres [.] float8mul\n 4.15% postgres postgres [.] ftod\n 2.78% postgres postgres [.] tts_buffer_heap_getsomeattrs\n\n11.6\n\nlatency average = 290.301 ms\n\n 42.78% postgres postgres [.] ExecInterpExpr\n 12.27% postgres postgres [.] float8_accum\n 12.12% postgres libc-2.17.so [.] __isinf\n 8.96% postgres postgres [.] dsqrt\n 5.77% postgres postgres [.] float8mul\n 3.94% postgres postgres [.] ftod\n 2.61% postgres postgres [.] AggCheckCallContext\n\n\nclang-7\n=======\n\nHEAD\n\nlatency average = 246.278 ms\n\n 44.47% postgres postgres [.] ExecInterpExpr\n 14.56% postgres postgres [.] float8_accum\n 7.25% postgres postgres [.] float8mul\n 7.22% postgres postgres [.] dsqrt\n 5.40% postgres postgres [.] 
ftod\n 4.09% postgres postgres [.] tts_buffer_heap_getsomeattrs\n 2.20% postgres postgres [.] check_float8_val\n\nHEAD + reverse-if-condition patch\n\nlatency average = 240.212 ms\n\n 45.49% postgres postgres [.] ExecInterpExpr\n 13.69% postgres postgres [.] float8_accum\n 8.32% postgres postgres [.] dsqrt\n 5.28% postgres postgres [.] ftod\n 5.19% postgres postgres [.] float8mul\n 3.68% postgres postgres [.] tts_buffer_heap_getsomeattrs\n 2.90% postgres postgres [.] float8_mul\n\nHEAD + revert-to-macro patch\n\nlatency average = 240.620 ms\n\n 44.04% postgres postgres [.] ExecInterpExpr\n 13.72% postgres postgres [.] float8_accum\n 9.26% postgres postgres [.] dsqrt\n 5.30% postgres postgres [.] ftod\n 4.66% postgres postgres [.] float8mul\n 3.53% postgres postgres [.] tts_buffer_heap_getsomeattrs\n 3.39% postgres postgres [.] float8_mul\n\n11.6\n\nlatency average = 237.045 ms\n\n 46.85% postgres postgres [.] ExecInterpExpr\n 11.39% postgres postgres [.] float8_accum\n 8.02% postgres postgres [.] dsqrt\n 7.29% postgres postgres [.] slot_deform_tuple\n 6.04% postgres postgres [.] float8mul\n 5.49% postgres postgres [.] ftod\n\nPG 12 is worse than PG 11 when compiled with gcc.\n\nThanks,\nAmit\n\n\n", "msg_date": "Mon, 10 Feb 2020 14:10:24 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "On Fri, Feb 7, 2020 at 11:43 PM Emre Hasegeli <emre@hasegeli.com> wrote:\n> > > The patch looks unduly invasive to me, but I think that it might be\n> > > right that we should go back to a macro-based implementation, because\n> > > otherwise we don't have a good way to be certain that the function\n> > > parameter won't get evaluated first.\n> >\n> > I'd first like to see some actual evidence of this being a problem,\n> > rather than just the order of the checks.\n>\n> There seem to be enough evidence of this being the problem. 
We are\n> better off going back to the macro-based implementation. I polished\n> Keisuke Kuroda's patch commenting about the performance issue, removed\n> the check_float*_val() functions completely, and added unlikely() as\n> Tom Lane suggested. It is attached. I confirmed with different\n> compilers that the macro, and unlikely() makes this noticeably faster.\n\nThanks for updating the patch.\n\nShould we update the same macro in contrib/btree_gist/btree_utils_num.h too?\n\nRegards,\nAmit\n\n\n", "msg_date": "Mon, 10 Feb 2020 16:33:13 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "> But the comment does not explain that this test has to be in that\n> order, or the compiler will for non-constant arguments evalute\n> the (now) right-side first. E.g. if I understand this correctly:\n>\n> + if (!(zero_is_valid) && unlikely((val) == 0.0)\n>\n> would have the same problem of evaluating \"zero_is_valid\" (which\n> might be an isinf(arg1) || isinf(arg2)) first and so be the same thing\n> we try to avoid with the macro? Maybe adding this bit of info to the\n> comment makes it clearer?\n\nAdded.\n\n> Also, a few places use the macro as:\n>\n> + CHECKFLOATVAL(result, true, true);\n>\n> which evaluates to a complete NOP in both cases. IMHO this could be\n> replaced with a comment like:\n>\n> + // No CHECKFLOATVAL() needed, as both inf and 0.0 are valid\n>\n> (or something along the lines of \"no error can occur\"), as otherwise\n> CHECKFLOATVAL() implies to the casual reader that there are some checks\n> done, while in reality no real checks are done at all (and hopefully\n> the compiler optimizes everything away, which might not be true for\n> debug builds).\n\nI don't know why those trigonometric functions don't check for\noverflow/underflow like all the rest of float.c. 
I'll submit another\npatch to make them error when overflow/underflow.\n\nThe new version is attached.", "msg_date": "Wed, 12 Feb 2020 11:54:13 +0000", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "> Should we update the same macro in contrib/btree_gist/btree_utils_num.h too?\n\nI posted another version incorporating this.\n\n\n", "msg_date": "Wed, 12 Feb 2020 11:56:09 +0000", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Hi,\n\nOn 2020-02-12 11:54:13 +0000, Emre Hasegeli wrote:\n> From fb5052b869255ef9465b1de92e84b2fb66dd6eb3 Mon Sep 17 00:00:00 2001\n> From: Emre Hasegeli <emre@hasegeli.com>\n> Date: Fri, 7 Feb 2020 10:27:25 +0000\n> Subject: [PATCH] Bring back CHECKFLOATVAL() macro\n> \n> The inline functions added by 6bf0bc842b caused the conditions of\n> overflow/underflow checks to be evaluated when no overflow/underflow\n> happen. This slowed down floating point operations. This commit brings\n> back the macro that was in use before 6bf0bc842b to fix the performance\n> regression.\n\nWait, no. Didn't we get to the point that we figured out that the\nprimary issue is the reversal of the order of what is checked is the\nprimary problem, rather than the macro/inline piece?\n\nNor do I see how it's going to be ok to just rename the function in a\nstable branch.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 Feb 2020 09:21:57 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "> Wait, no. 
Didn't we get to the point that we figured out that the\n> primary issue is the reversal of the order of what is checked is the\n> primary problem, rather than the macro/inline piece?\n\nReversal of the order makes little or no difference. The\nmacro/inline change causes the real slowdown at least on GCC.\n\n> Nor do I see how it's going to be ok to just rename the function in a\n> stable branch.\n\nI'll post another version to keep them around.\n\n\n", "msg_date": "Wed, 12 Feb 2020 17:49:14 +0000", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "On 2020-02-12 17:49:14 +0000, Emre Hasegeli wrote:\n> > Nor do I see how it's going to be ok to just rename the function in a\n> > stable branch.\n> \n> I'll post another version to keep them around.\n\nI'd just rename the macro to the name of the inline function. No need to\nhave a verbose change in all callsites just to update the name imo.\n\n\n", "msg_date": "Wed, 12 Feb 2020 09:59:13 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I'd just rename the macro to the name of the inline function. No need to\n> have a verbose change in all callsites just to update the name imo.\n\n+1, that's what I had in mind too. 
That does suggest though that we\nought to make sure the macro has single-eval behavior, so that you\ndon't need to know it's a macro.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Feb 2020 13:15:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Hi,\n\nOn 2020-02-12 13:15:22 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I'd just rename the macro to the name of the inline function. No need to\n> > have a verbose change in all callsites just to update the name imo.\n>\n> +1, that's what I had in mind too. That does suggest though that we\n> ought to make sure the macro has single-eval behavior, so that you\n> don't need to know it's a macro.\n\nWe'd have to store 'val' in a local variable for that I think. Not the\nprettiest, but also not a problem.\n\n\nI do wonder if we're just punching ourselves in the face with the\nsignature of these checks. Part of the problem here really comes from\nusing the same function to handle a number of different checks.\n\nI mean something like dtof's\n\tcheck_float4_val((float4) num, isinf(num), num == 0);\nwhere the num == 0 is solely to satisfy the check function is a bit\nstupid.\n\nAnd the reason we have these isinf(arg1) || isinf(arg2) parameters is\nalso largely because we force the same function to be used in cases\nwhere we have two inputs, rather than just one.\n\nFor most places it'd probably end up being easier to read and to\noptimize if we just wrote them as\n\nif (unlikely(isinf(result)) && !isinf(arg))\n float_overflow_error();\n\nand when needed added a\n\nelse if (unlikely(result == 0) && arg1 != 0.0)\n float_underflow_error();\n\nthe verbose piece really is the error, not the error check. 
Sure, there\nare more complicated cases like\n\nif (unlikely(isinf(result)) && (!isinf(arg1) || !isinf(arg2)))\n\nbut that's still not very complicated.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 Feb 2020 10:37:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I do wonder if we're just punching ourselves in the face with the\n> signature of these checks. Part of the problem here really comes from\n> using the same function to handle a number of different checks.\n\nYeah, I've thought that too. It's *far* from clear that this thing\nis a win at all, other than your point about the number of copies of\nthe ereport call. It's bulky, it's hard to optimize, and I have\nnever thought it was more readable than the direct tests it replaced.\n\n> For most places it'd probably end up being easier to read and to\n> optimize if we just wrote them as\n> if (unlikely(isinf(result)) && !isinf(arg))\n> float_overflow_error();\n> and when needed added a\n> else if (unlikely(result == 0) && arg1 != 0.0)\n> float_underflow_error();\n\n+1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Feb 2020 14:18:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Hi,\n\nOn 2020-02-12 14:18:30 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I do wonder if we're just punching ourselves in the face with the\n> > signature of these checks. Part of the problem here really comes from\n> > using the same function to handle a number of different checks.\n> \n> Yeah, I've thought that too. It's *far* from clear that this thing\n> is a win at all, other than your point about the number of copies of\n> the ereport call. 
It's bulky, it's hard to optimize, and I have\n> never thought it was more readable than the direct tests it replaced.\n> \n> > For most places it'd probably end up being easier to read and to\n> > optimize if we just wrote them as\n> > if (unlikely(isinf(result)) && !isinf(arg))\n> > float_overflow_error();\n> > and when needed added a\n> > else if (unlikely(result == 0) && arg1 != 0.0)\n> > float_underflow_error();\n> \n> +1\n\nCool. Emre, any chance you could write a patch along those lines?\n\nI'm inclined that we should backpatch that, and just leave the inline\nfunction (without in core callers) in place in 12?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 Feb 2020 11:32:44 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I'm inclined that we should backpatch that, and just leave the inline\n> function (without in core callers) in place in 12?\n\nYeah, we can't remove the inline function in 12. But we don't have\nto use it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Feb 2020 14:50:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "> > > For most places it'd probably end up being easier to read and to\n> > > optimize if we just wrote them as\n> > > if (unlikely(isinf(result)) && !isinf(arg))\n> > > float_overflow_error();\n> > > and when needed added a\n> > > else if (unlikely(result == 0) && arg1 != 0.0)\n> > > float_underflow_error();\n> >\n> > +1\n>\n> Cool. Emre, any chance you could write a patch along those lines?\n\nYes, I am happy to do. 
It makes more sense to me too.\n\n\n", "msg_date": "Wed, 12 Feb 2020 19:52:57 +0000", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "> > > > For most places it'd probably end up being easier to read and to\n> > > > optimize if we just wrote them as\n> > > > if (unlikely(isinf(result)) && !isinf(arg))\n> > > > float_overflow_error();\n> > > > and when needed added a\n> > > > else if (unlikely(result == 0) && arg1 != 0.0)\n> > > > float_underflow_error();\n> > >\n> > > +1\n> >\n> > Cool. Emre, any chance you could write a patch along those lines?\n>\n> Yes, I am happy to do. It makes more sense to me too.\n\nHow about the one attached?", "msg_date": "Thu, 13 Feb 2020 16:25:25 +0000", "msg_from": "Emre Hasegeli <emre@hasegeli.com>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Emre Hasegeli <emre@hasegeli.com> writes:\n>>> Cool. Emre, any chance you could write a patch along those lines?\n\n> How about the one attached?\n\nI see some minor things I don't like here, eg float_*flow_error()\nneed some documentation as to why they exist. But I'll review,\nfix those things up and then push.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Feb 2020 11:30:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Hi, \n\nOn February 13, 2020 8:30:45 AM PST, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Emre Hasegeli <emre@hasegeli.com> writes:\n>>>> Cool. Emre, any chance you could write a patch along those lines?\n>\n>> How about the one attached?\n>\n>I see some minor things I don't like here, eg float_*flow_error()\n>need some documentation as to why they exist. 
But I'll review,\n>fix those things up and then push.\n\nWould be good to mark them noreturn too.\n\nWonder if it's useful to add the\"cold\" marker to pg. Not as part of this patch, but for functions like these.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 13 Feb 2020 08:41:38 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Hi,\n\nOn 2020-02-13 16:25:25 +0000, Emre Hasegeli wrote:\n> And also this commit is changing the usage of unlikely() to cover\n> the whole condition. Using it only for the result is not semantically\n> correct. It is more than likely for the result to be infinite when\n> the input is, or it to be 0 when the input is.\n\nI'm not really convinced by this fwiw.\n\nComparing\n\n if (unlikely(isinf(result) && !isinf(num)))\n float_overflow_error();\n\nwith\n\n if (unlikely(isinf(result)) && !isinf(num))\n float_overflow_error();\n\nI don't think it's clear that we want the former. What we want to\nexpress is that it's unlikely that the result is infinite, and that the\ncompiler should optimize for that. Since there's a jump involved between\nthe check for isinf(result) and the one for !isinf(num), we want the\ncompiler to implement this so the non-overflow path follows the first\ncheck, and the rest of the check is later.\n\n\n\n> +void float_overflow_error()\n> +{\n\nTom's probably on this, but it should be (void).\n\n\n> @@ -2846,23 +2909,21 @@ float8_accum(PG_FUNCTION_ARGS)\n> \n> \t\t/*\n> \t\t * Overflow check. We only report an overflow error when finite\n> \t\t * inputs lead to infinite results. 
Note also that Sxx should be NaN\n> \t\t * if any of the inputs are infinite, so we intentionally prevent Sxx\n> \t\t * from becoming infinite.\n> \t\t */\n> \t\tif (isinf(Sx) || isinf(Sxx))\n> \t\t{\n> \t\t\tif (!isinf(transvalues[1]) && !isinf(newval))\n> -\t\t\t\tereport(ERROR,\n> -\t\t\t\t\t\t(errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> -\t\t\t\t\t\t errmsg(\"value out of range: overflow\")));\n> +\t\t\t\tfloat_overflow_error();\n> \n> \t\t\tSxx = get_float8_nan();\n> \t\t}\n> \t}\n\nProbably worth unifying the use of unlikely around isinf here and in\nthe following functions.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 13 Feb 2020 09:23:40 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On February 13, 2020 8:30:45 AM PST, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I see some minor things I don't like here, eg float_*flow_error()\n>> need some documentation as to why they exist. But I'll review,\n>> fix those things up and then push.\n\n> Would be good to mark them noreturn too.\n\nYeah, that was one of the things I didn't like ;-). Also the lack\nof pg_noinline.\n\n> Wonder if it's useful to add the\"cold\" marker to pg. Not as part of this patch, but for functions like these.\n\nI'm only seeing about a 1.5kB reduction in the backend size from\nthis patch, which kinda surprises me, but it says that we're\nnot winning all that much from just having one copy of the ereport\ncalls. ­
So I don't think that \"cold\" is going to add much.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Feb 2020 12:42:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-02-13 16:25:25 +0000, Emre Hasegeli wrote:\n>> And also this commit is changing the usage of unlikely() to cover\n>> the whole condition. Using it only for the result is not semantically\n>> correct. It is more than likely for the result to be infinite when\n>> the input is, or it to be 0 when the input is.\n\n> I'm not really convinced by this fwiw.\n\n> Comparing\n\n> if (unlikely(isinf(result) && !isinf(num)))\n> float_overflow_error();\n\n> with\n\n> if (unlikely(isinf(result)) && !isinf(num))\n> float_overflow_error();\n\n> I don't think it's clear that we want the former. What we want to\n> express is that it's unlikely that the result is infinite, and that the\n> compiler should optimize for that. Since there's a jump involved between\n> the check for isinf(result) and the one for !isinf(num), we want the\n> compiler to implement this so the non-overflow path follows the first\n> check, and the rest of the check is later.\n\nYeah, I was wondering about that. I'll change it as you suggest.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Feb 2020 12:43:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "... and pushed. One other change I made beyond those suggested\nwas to push the zero-divide ereport's out-of-line as well.\n\nI did not do anything about adding unlikely() calls around the\nunrelated isinf tests in float.c. 
That seemed to me to be a separate\nmatter, and I'm not quite convinced it'd be a win anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Feb 2020 13:40:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Hi,\n\nOn 2020-02-13 13:40:43 -0500, Tom Lane wrote:\n> ... and pushed. One other change I made beyond those suggested\n> was to push the zero-divide ereport's out-of-line as well.\n\nThanks!\n\n\n> I did not do anything about adding unlikely() calls around the\n> unrelated isinf tests in float.c. That seemed to me to be a separate\n> matter, and I'm not quite convinced it'd be a win anyway.\n\nI was mostly going for consistency...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 13 Feb 2020 10:47:10 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "On Fri, Feb 14, 2020 at 3:47 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-02-13 13:40:43 -0500, Tom Lane wrote:\n> > ... and pushed. One other change I made beyond those suggested\n> > was to push the zero-divide ereport's out-of-line as well.\n>\n> Thanks!\n\nThank you all.\n\nI repeated some of the tests I did earlier and things look good.\n\ngcc-8\n=====\n\nHEAD\n\nlatency average = 296.842 ms\n\n 42.05% postgres postgres [.] ExecInterpExpr\n 15.14% postgres postgres [.] float8_accum\n 9.32% postgres libc-2.17.so [.] __isinf\n 7.32% postgres postgres [.] dsqrt\n 5.67% postgres postgres [.] float8mul\n 4.20% postgres postgres [.] ftod\n\n11.7\n\nlatency average = 289.439 ms\n\n 41.52% postgres postgres [.] ExecInterpExpr\n 13.59% postgres libc-2.17.so [.] __isinf\n 10.98% postgres postgres [.] float8_accum\n 8.26% postgres postgres [.] dsqrt\n 6.17% postgres postgres [.] float8mul\n 3.65% postgres postgres [.] 
ftod\n\nclang-7\n=======\n\nHEAD\n\nlatency average = 233.735 ms\n\n 43.84% postgres postgres [.] ExecInterpExpr\n 15.17% postgres postgres [.] float8_accum\n 8.25% postgres postgres [.] dsqrt\n 7.35% postgres postgres [.] float8mul\n 5.84% postgres postgres [.] ftod\n 3.78% postgres postgres [.] tts_buffer_heap_getsomeattrs\n\n11.7\n\nlatency average = 221.009 ms\n\n 49.55% postgres postgres [.] ExecInterpExpr\n 12.05% postgres postgres [.] float8_accum\n 8.97% postgres postgres [.] dsqrt\n 6.72% postgres postgres [.] float8mul\n 5.62% postgres postgres [.] ftod\n 2.18% postgres postgres [.] slot_deform_tuple\n\nHEAD and PG 11 are now comparable even when built with gcc.\n\nRegards,\nAmit\n\n\n", "msg_date": "Fri, 14 Feb 2020 13:29:08 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" }, { "msg_contents": "Thank you very much everyone.\n\nImprovement was confirmed even if PG12_STABLE was built with gcc 4.8.5.\n\n* PG_12_STABLE\n* gcc 4.8.5\n\npostgres=# EXPLAIN (ANALYZE on, VERBOSE on, BUFFERS on)\n select (2 * a) , (2 * b) , (2 * c), (2 * d), (2 * e)\n from realtest;\n\nQUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------\n Seq Scan on public.realtest (cost=0.00..288692.14 rows=9999873 width=40)\n(actual time=0.012..4118.432 rows=10000001 loops=1)\n Output: ('2'::double precision * a), ('2'::double precision * b),\n('2'::double precision * c), ('2'::double precision * d), ('2'::double\nprecision * e)\n Buffers: shared hit=63695\n Planning Time: 0.034 ms\n Execution Time: 4811.957 ms\n(5 rows)\n\n 32.03% postgres postgres [.] ExecInterpExpr\n 12.28% postgres postgres [.] float84mul\n 9.62% postgres [vdso] [.] __vdso_clock_gettime\n 6.45% postgres libc-2.17.so [.] __isinf\n 5.15% postgres postgres [.] tts_buffer_heap_getsomeattrs\n 3.83% postgres postgres [.] 
ExecScan\n\nBest Regards,\nKeisuke Kuroda\n\n2020年2月14日(金) 13:29 Amit Langote <amitlangote09@gmail.com>:\n\n> On Fri, Feb 14, 2020 at 3:47 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2020-02-13 13:40:43 -0500, Tom Lane wrote:\n> > > ... and pushed. One other change I made beyond those suggested\n> > > was to push the zero-divide ereport's out-of-line as well.\n> >\n> > Thanks!\n>\n> Thank you all.\n>\n> I repeated some of the tests I did earlier and things look good.\n>\n> gcc-8\n> =====\n>\n> HEAD\n>\n> latency average = 296.842 ms\n>\n> 42.05% postgres postgres [.] ExecInterpExpr\n> 15.14% postgres postgres [.] float8_accum\n> 9.32% postgres libc-2.17.so [.] __isinf\n> 7.32% postgres postgres [.] dsqrt\n> 5.67% postgres postgres [.] float8mul\n> 4.20% postgres postgres [.] ftod\n>\n> 11.7\n>\n> latency average = 289.439 ms\n>\n> 41.52% postgres postgres [.] ExecInterpExpr\n> 13.59% postgres libc-2.17.so [.] __isinf\n> 10.98% postgres postgres [.] float8_accum\n> 8.26% postgres postgres [.] dsqrt\n> 6.17% postgres postgres [.] float8mul\n> 3.65% postgres postgres [.] ftod\n>\n> clang-7\n> =======\n>\n> HEAD\n>\n> latency average = 233.735 ms\n>\n> 43.84% postgres postgres [.] ExecInterpExpr\n> 15.17% postgres postgres [.] float8_accum\n> 8.25% postgres postgres [.] dsqrt\n> 7.35% postgres postgres [.] float8mul\n> 5.84% postgres postgres [.] ftod\n> 3.78% postgres postgres [.] tts_buffer_heap_getsomeattrs\n>\n> 11.7\n>\n> latency average = 221.009 ms\n>\n> 49.55% postgres postgres [.] ExecInterpExpr\n> 12.05% postgres postgres [.] float8_accum\n> 8.97% postgres postgres [.] dsqrt\n> 6.72% postgres postgres [.] float8mul\n> 5.62% postgres postgres [.] ftod\n> 2.18% postgres postgres [.] 
slot_deform_tuple\n>\n> HEAD and PG 11 are now comparable even when built with gcc.\n>\n> Regards,\n> Amit\n>\n", "msg_date": "Fri, 14 Feb 2020 15:42:16 +0900", "msg_from": "keisuke kuroda <keisuke.kuroda.3862@gmail.com>", "msg_from_op": true, "msg_subject": "Re: In PG12, query with float calculations is slower than PG11" } ]
[ { "msg_contents": "Hi,\n\nAttached fixes $subject.\n\nThanks,\nAmit", "msg_date": "Thu, 6 Feb 2020 15:11:01 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "typo in set_rel_consider_parallel()" }, { "msg_contents": "On Thu, Feb 6, 2020 at 11:41 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi,\n>\n> Attached fixes $subject.\n>\n\nLGTM. I will push this later today.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Feb 2020 12:10:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typo in set_rel_consider_parallel()" }, { "msg_contents": "On Thu, Feb 6, 2020 at 12:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Feb 6, 2020 at 11:41 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Attached fixes $subject.\n> >\n>\n> LGTM. I will push this later today.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Feb 2020 16:41:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typo in set_rel_consider_parallel()" }, { "msg_contents": "On Thu, Feb 6, 2020 at 20:11 Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Feb 6, 2020 at 12:10 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > On Thu, Feb 6, 2020 at 11:41 AM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > >\n> > > Hi,\n> > >\n> > > Attached fixes $subject.\n> > >\n> >\n> > LGTM. I will push this later today.\n> >\n>\n> Pushed.\n\n\nThanks Amit.\n\nRegards,\nAmit\n\n>\n", "msg_date": "Thu, 6 Feb 2020 20:33:00 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: typo in set_rel_consider_parallel()" } ]
[ { "msg_contents": "Buildfarm runs have triggered the assertion at the end of\nSyncRepGetSyncStandbysPriority():\n\n sysname │ snapshot │ branch │ bfurl\n──────────┼─────────────────────┼───────────────┼──────────────────────────────────────────────────────────────────────────────────────────────\n hoverfly │ 2019-11-22 12:15:08 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2019-11-22%2012%3A15%3A08\n hoverfly │ 2019-11-07 17:19:12 │ REL9_6_STABLE │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2019-11-07%2017%3A19%3A12\n nightjar │ 2019-08-13 23:04:41 │ REL_10_STABLE │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=nightjar&dt=2019-08-13%2023%3A04%3A41\n skink │ 2018-11-28 21:03:35 │ HEAD │ http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2018-11-28%2021%3A03%3A35\n\nOn my development system, this delay injection reproduces the failure:\n\n--- a/src/backend/replication/syncrep.c\n+++ b/src/backend/replication/syncrep.c\n@@ -399,6 +399,8 @@ SyncRepInitConfig(void)\n {\n int priority;\n \n+ pg_usleep(100 * 1000);\n\nSyncRepInitConfig() is the function responsible for updating, after SIGHUP,\nthe sync_standby_priority values that SyncRepGetSyncStandbysPriority()\nconsults. The assertion holds if each walsender's sync_standby_priority (in\nshared memory) accounts for the latest synchronous_standby_names GUC value.\nThat ceases to hold for brief moments after a SIGHUP that changes the\nsynchronous_standby_names GUC value.\n\nI think the way to fix this is to nominate one process to update all\nsync_standby_priority values after SIGHUP. That process should acquire\nSyncRepLock once per ProcessConfigFile(), not once per walsender. If\nwalsender startup occurs at roughly the same time as a SIGHUP, the new\nwalsender should avoid computing sync_standby_priority based on a GUC value\ndifferent from the one used for the older walsenders.\n\nWould anyone like to fix this? 
I could add it to my queue, but it would wait\na year or more.\n\nThanks,\nnm\n\n\n", "msg_date": "Wed, 5 Feb 2020 23:45:52 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "SyncRepGetSyncStandbysPriority() vs. SIGHUP" }, { "msg_contents": "At Wed, 5 Feb 2020 23:45:52 -0800, Noah Misch <noah@leadboat.com> wrote in \n> Buildfarm runs have triggered the assertion at the end of\n> SyncRepGetSyncStandbysPriority():\n..\n> On my development system, this delay injection reproduces the failure:\n> \n> --- a/src/backend/replication/syncrep.c\n> +++ b/src/backend/replication/syncrep.c\n> @@ -399,6 +399,8 @@ SyncRepInitConfig(void)\n> {\n> int priority;\n> \n> + pg_usleep(100 * 1000);\n\nThough I couldn't see that myself, it actually can happen.\n\nThat happens if:\n\n all potentially-sync standbys have lower priority than the number of\n syncrep list members.\n\n the number of sync standbys is short.\n\n the number of the potentially-sync standbys is enough.\n\nIf all of the above are true, the while (priority <= lowest_priority)\ndoesn't loop and goes into the assertion.\n\n> SyncRepInitConfig() is the function responsible for updating, after SIGHUP,\n> the sync_standby_priority values that SyncRepGetSyncStandbysPriority()\n> consults. The assertion holds if each walsender's sync_standby_priority (in\n> shared memory) accounts for the latest synchronous_standby_names GUC value.\n> That ceases to hold for brief moments after a SIGHUP that changes the\n> synchronous_standby_names GUC value.\n\nAgreed.\n\n> I think the way to fix this is to nominate one process to update all\n> sync_standby_priority values after SIGHUP. That process should acquire\n> SyncRepLock once per ProcessConfigFile(), not once per walsender. ­
If\n> walsender startup occurs at roughly the same time as a SIGHUP, the new\n> walsender should avoid computing sync_standby_priority based on a GUC value\n> different from the one used for the older walsenders.\n\nIf we update the priority of all walsenders at once, the other\nwalsenders may calculate the required LSN using the old configuration with\nthe new priority. I'm not sure of the probability of this happening, but\nit causes a similar situation.\n\nThe priority calculation happens for every standby reply. So if\nthere're some standbys with wrong priority, they will catch up at\nreceiving the next standby reply. Thus just treating walsenders with\nout-of-range priority as non-sync standbys can \"fix\" it, I believe.\npg_stat_get_wal_senders() reveals such inconsistent state but I don't\nthink it's worth addressing.\n\n> Would anyone like to fix this? I could add it to my queue, but it would wait\n> a year or more.\n\nThe attached does that. Isn't it enough?\n\n# The more significant problem is I haven't succeeded in reproducing the\n# problem..\n\nregards.\n\n--\nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 07 Feb 2020 12:52:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SyncRepGetSyncStandbysPriority() vs. SIGHUP" }, { "msg_contents": "On Wed, Feb 05, 2020 at 11:45:52PM -0800, Noah Misch wrote:\n> Would anyone like to fix this? I could add it to my queue, but it would wait\n> a year or more.\n\nCommit f332241 fixed this.\n\n\n", "msg_date": "Tue, 26 May 2020 00:43:34 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: SyncRepGetSyncStandbysPriority() vs. SIGHUP" } ]
[ { "msg_contents": "During logical decoding, we send replication_origin and\nreplication_origin_lsn when we decode commit. In pgoutput_begin_txn,\nwe send values for these two, but they are never used on the subscriber side.\nThough we have provided a function (logicalrep_read_origin) to read\nthese two values, it is not used anywhere in the code.\n\nI think this is primarily for external application usage, but it is\nnot very clear how they will use it. As far as I understand, the\nvalue of origin can be used to avoid loops in bi-directional\nreplication, and origin_lsn can be used to track how far the subscriber\nhas received changes. I am not sure about this and particularly how\norigin_lsn can be used in external applications.\n\nThis has come up in the discussion of the \"logical streaming of large\nin-progress transactions\" [1]. Basically, we are not sure when to send\nthese values during streaming as we don't have a clear idea of their usage.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAFiTN-skHvSWDHV66qpzMfnHH6AvsE2YAjvh4Kt613E8ZD8WoQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Feb 2020 14:40:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "replication_origin and replication_origin_lsn usage on subscriber" }, { "msg_contents": "On Thu, Feb 6, 2020 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> During logical decoding, we send replication_origin and\n> replication_origin_lsn when we decode commit. ­
In pgoutput_begin_txn,\n> we send values for these two but never used on the subscriber side.\n> Though we have provided a function (logicalrep_read_origin) to read\n> these two values but that is not used in code anywhere.\n>\n\nFor the purpose of decoding in-progress transactions, I think we can\nsend replication_origin in the first 'start' message as it is present\nwith each WAL record, however replication_origin_lsn is only logged at\ncommit time, so we can't send it before commit. The\nreplication_origin_lsn is set by pg_replication_origin_xact_setup()\nbut it is not clear how and when that function can be used. Do we\nreally need replication_origin_lsn before we decode the commit record?\n\nNote: I have added a few more people who I could see are working in a\nsimilar area, to get some response.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Jul 2020 16:40:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: replication_origin and replication_origin_lsn usage on subscriber" }, { "msg_contents": "On 09/07/2020 13:10, Amit Kapila wrote:\n> On Thu, Feb 6, 2020 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> During logical decoding, we send replication_origin and\n>> replication_origin_lsn when we decode commit. ­
\nHowever that's relatively useless without also having some kind of \nconflict detection which would be another huge pile of code and I \nexpected we would end up not getting logical replication in PG10 at all \nif I tried to push conflict detection as well :)\n\n> \n> For the purpose of decoding in-progress transactions, I think we can\n> send replication_origin in the first 'start' message as it is present\n> with each WAL record, however replication_origin_lsn is only logged at\n> commit time, so can't send it before commit. The\n> replication_origin_lsn is set by pg_replication_origin_xact_setup()\n> but it is not clear how and when that function can be used. Do we\n> really need replication_origin_lsn before we decode the commit record?\n> \n\nThat's the SQL interface, C interface does not require that and I don't \nthink we need to do that. The existing apply code sets the \nreplorigin_session_origin_lsn only when processing commit message IIRC.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n", "msg_date": "Thu, 9 Jul 2020 13:45:58 +0200", "msg_from": "Petr Jelinek <petr@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: replication_origin and replication_origin_lsn usage on subscriber" }, { "msg_contents": "On Thu, Jul 9, 2020 at 5:16 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n>\n> On 09/07/2020 13:10, Amit Kapila wrote:\n> > On Thu, Feb 6, 2020 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> During logical decoding, we send replication_origin and\n> >> replication_origin_lsn when we decode commit. 
In pgoutput_begin_txn,\n> >> we send values for these two but never used on the subscriber side.\n> >> Though we have provided a function (logicalrep_read_origin) to read\n> >> these two values but that is not used in code anywhere.\n> >>\n>\n> We don't use the origin message anywhere really because we don't support\n> origin forwarding in the built-in replication yet. That part I left out\n> intentionally in the original PG10 patchset as it's mostly useful for\n> circular replication detection when you want to replicate both ways.\n> However that's relatively useless without also having some kind of\n> conflict detection which would be another huge pile of code and I\n> expected we would end up not getting logical replication in PG10 at all\n> if I tried to push conflict detection as well :)\n>\n\nFair enough. However, without tests and more documentation about this\nconcept, it is likely that future development might break it. It is\ngood that you and others who know this part well are there to respond\nbut still, the more documentation and tests would be preferred.\n\n> >\n> > For the purpose of decoding in-progress transactions, I think we can\n> > send replication_origin in the first 'start' message as it is present\n> > with each WAL record, however replication_origin_lsn is only logged at\n> > commit time, so can't send it before commit. The\n> > replication_origin_lsn is set by pg_replication_origin_xact_setup()\n> > but it is not clear how and when that function can be used. 
Do we\n> > really need replication_origin_lsn before we decode the commit record?\n> >\n>\n> That's the SQL interface, C interface does not require that and I don't\n> think we need to do that.\n>\n\nI think when you are saying SQL interface, you referred to\npg_replication_origin_xact_setup() but I am not sure which C interface\nyou are referring to in the above sentence?\n\n> The existing apply code sets the\n> replorigin_session_origin_lsn only when processing commit message IIRC.\n>\n\nThat's correct. However, we do send it via 'begin' callback which\nwon't be possible with the streaming of in-progress transactions. Do\nwe need to send this origin related information (origin, origin_lsn)\nwhile streaming of in-progress transactions? If so, when? As far as\nI can see, the origin_id can be sent with the first 'start' message.\nThe origin_lsn and origin_commit can be sent with the last 'start' of\nstreaming commit if we want but not sure if that is of use. If we\nneed to send origin_lsn earlier than that then we need to record it\nwith other WAL records (other than Commit WAL record).\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Jul 2020 18:04:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: replication_origin and replication_origin_lsn usage on subscriber" }, { "msg_contents": "Hi,\n\nOn 09/07/2020 14:34, Amit Kapila wrote:\n> On Thu, Jul 9, 2020 at 5:16 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n>>\n>> On 09/07/2020 13:10, Amit Kapila wrote:\n>>> On Thu, Feb 6, 2020 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>>\n>>>> During logical decoding, we send replication_origin and\n>>>> replication_origin_lsn when we decode commit. 
In pgoutput_begin_txn,\n>>>> we send values for these two but never used on the subscriber side.\n>>>> Though we have provided a function (logicalrep_read_origin) to read\n>>>> these two values but that is not used in code anywhere.\n>>>>\n>>\n>> We don't use the origin message anywhere really because we don't support\n>> origin forwarding in the built-in replication yet. That part I left out\n>> intentionally in the original PG10 patchset as it's mostly useful for\n>> circular replication detection when you want to replicate both ways.\n>> However that's relatively useless without also having some kind of\n>> conflict detection which would be another huge pile of code and I\n>> expected we would end up not getting logical replication in PG10 at all\n>> if I tried to push conflict detection as well :)\n>>\n> \n> Fair enough. However, without tests and more documentation about this\n> concept, it is likely that future development might break it. It is\n> good that you and others who know this part well are there to respond\n> but still, the more documentation and tests would be preferred.\n> \n\nHonestly that part didn't even need to be committed given it's unused. \nProtocol supports versioning so it could have been added at later time.\n\n>>>\n>>> For the purpose of decoding in-progress transactions, I think we can\n>>> send replication_origin in the first 'start' message as it is present\n>>> with each WAL record, however replication_origin_lsn is only logged at\n>>> commit time, so can't send it before commit. The\n>>> replication_origin_lsn is set by pg_replication_origin_xact_setup()\n>>> but it is not clear how and when that function can be used. 
Do we\n>>> really need replication_origin_lsn before we decode the commit record?\n>>>\n>>\n>> That's the SQL interface, C interface does not require that and I don't\n>> think we need to do that.\n>>\n> \n> I think when you are saying SQL interface, you referred to\n> pg_replication_origin_xact_setup() but I am not sure which C interface\n> you are referring to in the above sentence?\n> \n\nAll the stuff pg_replication_origin_xact_setup does internally.\n\n>> The existing apply code sets the\n>> replorigin_session_origin_lsn only when processing commit message IIRC.\n>>\n> \n> That's correct. However, we do send it via 'begin' callback which\n> won't be possible with the streaming of in-progress transactions. Do\n> we need to send this origin related information (origin, origin_lsn)\n> while streaming of in-progress transactions? If so, when? As far as\n> I can see, the origin_id can be sent with the first 'start' message.\n> The origin_lsn and origin_commit can be sent with the last 'start' of\n> streaming commit if we want but not sure if that is of use. 
If we\n> need to send origin_lsn earlier than that then we need to record it\n> with other WAL records (other than Commit WAL record).\n> \n\nIf we were to support the origin forwarding, then strictly speaking we \nneed everything only at commit time from correctness perspective, but \nideally origin_id would be best sent with first message as it can be \nused to filter out changes at decoding stage rather than while we \nprocess the commit so having it set early improves performance of decoding.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n", "msg_date": "Thu, 9 Jul 2020 14:44:56 +0200", "msg_from": "Petr Jelinek <petr@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: replication_origin and replication_origin_lsn usage on subscriber" }, { "msg_contents": "On Thu, Jul 9, 2020 at 6:14 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n>\n> Hi,\n>\n> On 09/07/2020 14:34, Amit Kapila wrote:\n> > On Thu, Jul 9, 2020 at 5:16 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n> >>\n> >> On 09/07/2020 13:10, Amit Kapila wrote:\n> >>> On Thu, Feb 6, 2020 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>>>\n> >>>> During logical decoding, we send replication_origin and\n> >>>> replication_origin_lsn when we decode commit. In pgoutput_begin_txn,\n> >>>> we send values for these two but never used on the subscriber side.\n> >>>> Though we have provided a function (logicalrep_read_origin) to read\n> >>>> these two values but that is not used in code anywhere.\n> >>>>\n> >>\n> >> We don't use the origin message anywhere really because we don't support\n> >> origin forwarding in the built-in replication yet. 
That part I left out\n> >> intentionally in the original PG10 patchset as it's mostly useful for\n> >> circular replication detection when you want to replicate both ways.\n> >> However that's relatively useless without also having some kind of\n> >> conflict detection which would be another huge pile of code and I\n> >> expected we would end up not getting logical replication in PG10 at all\n> >> if I tried to push conflict detection as well :)\n> >>\n> >\n> > Fair enough. However, without tests and more documentation about this\n> > concept, it is likely that future development might break it. It is\n> > good that you and others who know this part well are there to respond\n> > but still, the more documentation and tests would be preferred.\n> >\n>\n> Honestly that part didn't even need to be committed given it's unused.\n> Protocol supports versioning so it could have been added at later time.\n>\n> >>>\n> >>> For the purpose of decoding in-progress transactions, I think we can\n> >>> send replication_origin in the first 'start' message as it is present\n> >>> with each WAL record, however replication_origin_lsn is only logged at\n> >>> commit time, so can't send it before commit. The\n> >>> replication_origin_lsn is set by pg_replication_origin_xact_setup()\n> >>> but it is not clear how and when that function can be used. Do we\n> >>> really need replication_origin_lsn before we decode the commit record?\n> >>>\n> >>\n> >> That's the SQL interface, C interface does not require that and I don't\n> >> think we need to do that.\n> >>\n> >\n> > I think when you are saying SQL interface, you referred to\n> > pg_replication_origin_xact_setup() but I am not sure which C interface\n> > you are referring to in the above sentence?\n> >\n>\n> All the stuff pg_replication_origin_xact_setup does internally.\n>\n> >> The existing apply code sets the\n> >> replorigin_session_origin_lsn only when processing commit message IIRC.\n> >>\n> >\n> > That's correct. 
However, we do send it via 'begin' callback which\n> > won't be possible with the streaming of in-progress transactions. Do\n> > we need to send this origin related information (origin, origin_lsn)\n> > while streaming of in-progress transactions? If so, when? As far as\n> > I can see, the origin_id can be sent with the first 'start' message.\n> > The origin_lsn and origin_commit can be sent with the last 'start' of\n> > streaming commit if we want but not sure if that is of use. If we\n> > need to send origin_lsn earlier than that then we need to record it\n> > with other WAL records (other than Commit WAL record).\n> >\n>\n> If we were to support the origin forwarding, then strictly speaking we\n> need everything only at commit time from correctness perspective,\n>\n\nOkay. Anyway streaming mode is optional, so in such cases, we can keep it 'off'\n\n> but\n> ideally origin_id would be best sent with first message as it can be\n> used to filter out changes at decoding stage rather than while we\n> process the commit so having it set early improves performance of decoding.\n>\n\nYeah, makes sense. 
So, we will just send origin_id (with first\nstreaming start message) and leave others.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Jul 2020 18:54:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: replication_origin and replication_origin_lsn usage on subscriber" }, { "msg_contents": "On Thu, Jul 9, 2020 at 6:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 9, 2020 at 6:14 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n> >\n> > Hi,\n> >\n> > On 09/07/2020 14:34, Amit Kapila wrote:\n> > > On Thu, Jul 9, 2020 at 5:16 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n> > >>\n> > >> On 09/07/2020 13:10, Amit Kapila wrote:\n> > >>> On Thu, Feb 6, 2020 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >>>>\n> > >>>> During logical decoding, we send replication_origin and\n> > >>>> replication_origin_lsn when we decode commit. In pgoutput_begin_txn,\n> > >>>> we send values for these two but never used on the subscriber side.\n> > >>>> Though we have provided a function (logicalrep_read_origin) to read\n> > >>>> these two values but that is not used in code anywhere.\n> > >>>>\n> > >>\n> > >> We don't use the origin message anywhere really because we don't support\n> > >> origin forwarding in the built-in replication yet. That part I left out\n> > >> intentionally in the original PG10 patchset as it's mostly useful for\n> > >> circular replication detection when you want to replicate both ways.\n> > >> However that's relatively useless without also having some kind of\n> > >> conflict detection which would be another huge pile of code and I\n> > >> expected we would end up not getting logical replication in PG10 at all\n> > >> if I tried to push conflict detection as well :)\n> > >>\n> > >\n> > > Fair enough. 
However, without tests and more documentation about this\n> > > concept, it is likely that future development might break it. It is\n> > > good that you and others who know this part well are there to respond\n> > > but still, the more documentation and tests would be preferred.\n> > >\n> >\n> > Honestly that part didn't even need to be committed given it's unused.\n> > Protocol supports versioning so it could have been added at later time.\n> >\n> > >>>\n> > >>> For the purpose of decoding in-progress transactions, I think we can\n> > >>> send replication_origin in the first 'start' message as it is present\n> > >>> with each WAL record, however replication_origin_lsn is only logged at\n> > >>> commit time, so can't send it before commit. The\n> > >>> replication_origin_lsn is set by pg_replication_origin_xact_setup()\n> > >>> but it is not clear how and when that function can be used. Do we\n> > >>> really need replication_origin_lsn before we decode the commit record?\n> > >>>\n> > >>\n> > >> That's the SQL interface, C interface does not require that and I don't\n> > >> think we need to do that.\n> > >>\n> > >\n> > > I think when you are saying SQL interface, you referred to\n> > > pg_replication_origin_xact_setup() but I am not sure which C interface\n> > > you are referring to in the above sentence?\n> > >\n> >\n> > All the stuff pg_replication_origin_xact_setup does internally.\n> >\n> > >> The existing apply code sets the\n> > >> replorigin_session_origin_lsn only when processing commit message IIRC.\n> > >>\n> > >\n> > > That's correct. However, we do send it via 'begin' callback which\n> > > won't be possible with the streaming of in-progress transactions. Do\n> > > we need to send this origin related information (origin, origin_lsn)\n> > > while streaming of in-progress transactions? If so, when? 
As far as\n> > > I can see, the origin_id can be sent with the first 'start' message.\n> > > The origin_lsn and origin_commit can be sent with the last 'start' of\n> > > streaming commit if we want but not sure if that is of use. If we\n> > > need to send origin_lsn earlier than that then we need to record it\n> > > with other WAL records (other than Commit WAL record).\n> > >\n> >\n> > If we were to support the origin forwarding, then strictly speaking we\n> > need everything only at commit time from correctness perspective,\n> >\n>\n> Okay. Anyway streaming mode is optional, so in such cases, we can keep it 'off'\n>\n> > but\n> > ideally origin_id would be best sent with first message as it can be\n> > used to filter out changes at decoding stage rather than while we\n> > process the commit so having it set early improves performance of decoding.\n> >\n>\n> Yeah, makes sense. So, we will just send origin_id (with first\n> streaming start message) and leave others.\n\nSo IIUC, currently we are sending the latest origin_id which is set\nduring the commit time. So in our case, while we start streaming we\nwill send the origin_id of the latest change in the current stream\nright? 
I think we will always have to remember the latest origin id\nin top-level ReorderBufferTXN as well.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Jul 2020 11:00:14 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: replication_origin and replication_origin_lsn usage on subscriber" }, { "msg_contents": "On Tue, Jul 14, 2020 at 11:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Jul 9, 2020 at 6:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jul 9, 2020 at 6:14 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n> > >\n> > >\n> > > If we were to support the origin forwarding, then strictly speaking we\n> > > need everything only at commit time from correctness perspective,\n> > >\n> >\n> > Okay. Anyway streaming mode is optional, so in such cases, we can keep it 'off'\n> >\n> > > but\n> > > ideally origin_id would be best sent with first message as it can be\n> > > used to filter out changes at decoding stage rather than while we\n> > > process the commit so having it set early improves performance of decoding.\n> > >\n> >\n> > Yeah, makes sense. So, we will just send origin_id (with first\n> > streaming start message) and leave others.\n>\n> So IIUC, currently we are sending the latest origin_id which is set\n> during the commit time. 
So in our case, while we start streaming we\n> will send the origin_id of the latest change in the current stream\n> right?\n>\n\nIt has to be sent only once with the first start message not with\nconsecutive start messages.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Jul 2020 11:14:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: replication_origin and replication_origin_lsn usage on subscriber" }, { "msg_contents": "On Tue, Jul 14, 2020 at 11:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 14, 2020 at 11:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, Jul 9, 2020 at 6:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Jul 9, 2020 at 6:14 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n> > > >\n> > > >\n> > > > If we were to support the origin forwarding, then strictly speaking we\n> > > > need everything only at commit time from correctness perspective,\n> > > >\n> > >\n> > > Okay. Anyway streaming mode is optional, so in such cases, we can keep it 'off'\n> > >\n> > > > but\n> > > > ideally origin_id would be best sent with first message as it can be\n> > > > used to filter out changes at decoding stage rather than while we\n> > > > process the commit so having it set early improves performance of decoding.\n> > > >\n> > >\n> > > Yeah, makes sense. So, we will just send origin_id (with first\n> > > streaming start message) and leave others.\n> >\n> > So IIUC, currently we are sending the latest origin_id which is set\n> > during the commit time. So in our case, while we start streaming we\n> > will send the origin_id of the latest change in the current stream\n> > right?\n> >\n>\n> It has to be sent only once with the first start message not with\n> consecutive start messages.\n\nOkay, so do you mean to say that with the first start message we send\nthe origin_id of the latest change? 
because during the transaction\nlifetime, the origin id can be changed. Currently, we send the\norigin_id of the latest WAL i.e. origin id of the commit. so I think\nit will be on a similar line if with every stream_start we send the\norigin_id of the latest change in that stream.\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Jul 2020 12:04:56 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: replication_origin and replication_origin_lsn usage on subscriber" }, { "msg_contents": "On Tue, Jul 14, 2020 at 12:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Jul 14, 2020 at 11:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jul 14, 2020 at 11:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Thu, Jul 9, 2020 at 6:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Jul 9, 2020 at 6:14 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n> > > > >\n> > > > >\n> > > > > If we were to support the origin forwarding, then strictly speaking we\n> > > > > need everything only at commit time from correctness perspective,\n> > > > >\n> > > >\n> > > > Okay. Anyway streaming mode is optional, so in such cases, we can keep it 'off'\n> > > >\n> > > > > but\n> > > > > ideally origin_id would be best sent with first message as it can be\n> > > > > used to filter out changes at decoding stage rather than while we\n> > > > > process the commit so having it set early improves performance of decoding.\n> > > > >\n> > > >\n> > > > Yeah, makes sense. So, we will just send origin_id (with first\n> > > > streaming start message) and leave others.\n> > >\n> > > So IIUC, currently we are sending the latest origin_id which is set\n> > > during the commit time. 
So in our case, while we start streaming we\n> > > will send the origin_id of the latest change in the current stream\n> > > right?\n> > >\n> >\n> > It has to be sent only once with the first start message not with\n> > consecutive start messages.\n>\n> Okay, so do you mean to say that with the first start message we send\n> the origin_id of the latest change?\n>\n\nYes.\n\n> because during the transaction\n> lifetime, the origin id can be changed.\n>\n\nYeah, it could be changed but if we have to send again apart from with\nthe first message then it should be sent with each message. So, I\nthink it is better to just send it once during the transaction as we\ndo it now (send with begin message).\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Jul 2020 13:59:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: replication_origin and replication_origin_lsn usage on subscriber" }, { "msg_contents": "Hi,\n\nOn 14/07/2020 10:29, Amit Kapila wrote:\n> On Tue, Jul 14, 2020 at 12:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>\n>> On Tue, Jul 14, 2020 at 11:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>\n>>> On Tue, Jul 14, 2020 at 11:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>>>\n>>>> On Thu, Jul 9, 2020 at 6:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>>>\n>>>>> On Thu, Jul 9, 2020 at 6:14 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n>>>>>>\n>>>>>>\n>>>>>> If we were to support the origin forwarding, then strictly speaking we\n>>>>>> need everything only at commit time from correctness perspective,\n>>>>>>\n>>>>>\n>>>>> Okay. 
Anyway streaming mode is optional, so in such cases, we can keep it 'off'\n>>>>>\n>>>>>> but\n>>>>>> ideally origin_id would be best sent with first message as it can be\n>>>>>> used to filter out changes at decoding stage rather than while we\n>>>>>> process the commit so having it set early improves performance of decoding.\n>>>>>>\n>>>>>\n>>>>> Yeah, makes sense. So, we will just send origin_id (with first\n>>>>> streaming start message) and leave others.\n>>>>\n>>>> So IIUC, currently we are sending the latest origin_id which is set\n>>>> during the commit time. So in our case, while we start streaming we\n>>>> will send the origin_id of the latest change in the current stream\n>>>> right?\n>>>>\n>>>\n>>> It has to be sent only once with the first start message not with\n>>> consecutive start messages.\n>>\n>> Okay, so do you mean to say that with the first start message we send\n>> the origin_id of the latest change?\n>>\n> \n> Yes.\n> \n>> because during the transaction\n>> lifetime, the origin id can be changed.\n>>\n> \n> Yeah, it could be changed but if we have to send again apart from with\n> the first message then it should be sent with each message. 
So, I\n> think it is better to just send it once during the transaction as we\n> do it now (send with begin message).\n> \n> \n\nI am not sure if I can follow the discussion here very well, but if I \nunderstand correctly I'd like to clarify two things:\n- origin id does not change mid transaction as you can only have one per xid\n- until we have origin forwarding feature, the origin id is always same \nfor a given subscription\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n", "msg_date": "Tue, 14 Jul 2020 11:17:43 +0200", "msg_from": "Petr Jelinek <petr@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: replication_origin and replication_origin_lsn usage on subscriber" }, { "msg_contents": "On Tue, Jul 14, 2020 at 2:47 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n>\n> Hi,\n>\n> On 14/07/2020 10:29, Amit Kapila wrote:\n> > On Tue, Jul 14, 2020 at 12:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >>\n> >> On Tue, Jul 14, 2020 at 11:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>>\n> >>> On Tue, Jul 14, 2020 at 11:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >>>>\n> >>>> On Thu, Jul 9, 2020 at 6:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>>>>\n> >>>>> On Thu, Jul 9, 2020 at 6:14 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n> >>>>>>\n> >>>>>>\n> >>>>>> If we were to support the origin forwarding, then strictly speaking we\n> >>>>>> need everything only at commit time from correctness perspective,\n> >>>>>>\n> >>>>>\n> >>>>> Okay. Anyway streaming mode is optional, so in such cases, we can keep it 'off'\n> >>>>>\n> >>>>>> but\n> >>>>>> ideally origin_id would be best sent with first message as it can be\n> >>>>>> used to filter out changes at decoding stage rather than while we\n> >>>>>> process the commit so having it set early improves performance of decoding.\n> >>>>>>\n> >>>>>\n> >>>>> Yeah, makes sense. 
So, we will just send origin_id (with first\n> >>>>> streaming start message) and leave others.\n> >>>>\n> >>>> So IIUC, currently we are sending the latest origin_id which is set\n> >>>> during the commit time. So in our case, while we start streaming we\n> >>>> will send the origin_id of the latest change in the current stream\n> >>>> right?\n> >>>>\n> >>>\n> >>> It has to be sent only once with the first start message not with\n> >>> consecutive start messages.\n> >>\n> >> Okay, so do you mean to say that with the first start message we send\n> >> the origin_id of the latest change?\n> >>\n> >\n> > Yes.\n> >\n> >> because during the transaction\n> >> lifetime, the origin id can be changed.\n> >>\n> >\n> > Yeah, it could be changed but if we have to send again apart from with\n> > the first message then it should be sent with each message. So, I\n> > think it is better to just send it once during the transaction as we\n> > do it now (send with begin message).\n> >\n> >\n>\n> I am not sure if I can follow the discussion here very well, but if I\n> understand correctly I'd like to clarify two things:\n> - origin id does not change mid transaction as you can only have one per xid\n\nActually, I was talking about if someone changes the session origin\nthen which origin id we should send? currently, we send data only\nduring the commit so we take the origin id from the commit wal and\nsend the same. 
In the below example, I am inserting 2 records in a\ntransaction and each of them has different origin id.\n\nbegin;\nselect pg_replication_origin_session_setup('o1');\ninsert into t values(1, 'test');\nselect pg_replication_origin_session_reset();\nselect pg_replication_origin_session_setup('o2'); --> Origin ID changed\ninsert into t values(2, 'test');\ncommit;\n\n> - until we have origin forwarding feature, the origin id is always same\n> for a given subscription\n\nok\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Jul 2020 15:06:57 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: replication_origin and replication_origin_lsn usage on subscriber" }, { "msg_contents": "On Tue, Jul 14, 2020 at 2:47 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n>\n> Hi,\n>\n> On 14/07/2020 10:29, Amit Kapila wrote:\n> > On Tue, Jul 14, 2020 at 12:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >>\n> >> On Tue, Jul 14, 2020 at 11:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>>\n> >>> On Tue, Jul 14, 2020 at 11:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >>>>\n> >>>> On Thu, Jul 9, 2020 at 6:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>>>>\n> >>>>> On Thu, Jul 9, 2020 at 6:14 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n> >>>>>>\n> >>>>>>\n> >>>>>> If we were to support the origin forwarding, then strictly speaking we\n> >>>>>> need everything only at commit time from correctness perspective,\n> >>>>>>\n> >>>>>\n> >>>>> Okay. Anyway streaming mode is optional, so in such cases, we can keep it 'off'\n> >>>>>\n> >>>>>> but\n> >>>>>> ideally origin_id would be best sent with first message as it can be\n> >>>>>> used to filter out changes at decoding stage rather than while we\n> >>>>>> process the commit so having it set early improves performance of decoding.\n> >>>>>>\n> >>>>>\n> >>>>> Yeah, makes sense. 
So, we will just send origin_id (with first\n> >>>>> streaming start message) and leave others.\n> >>>>\n> >>>> So IIUC, currently we are sending the latest origin_id which is set\n> >>>> during the commit time. So in our case, while we start streaming we\n> >>>> will send the origin_id of the latest change in the current stream\n> >>>> right?\n> >>>>\n> >>>\n> >>> It has to be sent only once with the first start message not with\n> >>> consecutive start messages.\n> >>\n> >> Okay, so do you mean to say that with the first start message we send\n> >> the origin_id of the latest change?\n> >>\n> >\n> > Yes.\n> >\n> >> because during the transaction\n> >> lifetime, the origin id can be changed.\n> >>\n> >\n> > Yeah, it could be changed but if we have to send again apart from with\n> > the first message then it should be sent with each message. So, I\n> > think it is better to just send it once during the transaction as we\n> > do it now (send with begin message).\n> >\n> >\n>\n> I am not sure if I can follow the discussion here very well, but if I\n> understand correctly I'd like to clarify two things:\n> - origin id does not change mid transaction as you can only have one per xid\n>\n\nAs shown by Dilip, I don't think currently we have any way to prevent\nthis from changing during the transaction.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Jul 2020 15:15:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: replication_origin and replication_origin_lsn usage on subscriber" }, { "msg_contents": "On 14/07/2020 11:36, Dilip Kumar wrote:\n> On Tue, Jul 14, 2020 at 2:47 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n>>\n>> Hi,\n>>\n>> On 14/07/2020 10:29, Amit Kapila wrote:\n>>> On Tue, Jul 14, 2020 at 12:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>>>\n>>>> On Tue, Jul 14, 2020 at 11:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>>>\n>>>>> 
On Tue, Jul 14, 2020 at 11:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>>>>>\n>>>>>> On Thu, Jul 9, 2020 at 6:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>>>>>\n>>>>>>> On Thu, Jul 9, 2020 at 6:14 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> If we were to support the origin forwarding, then strictly speaking we\n>>>>>>>> need everything only at commit time from correctness perspective,\n>>>>>>>>\n>>>>>>>\n>>>>>>> Okay. Anyway streaming mode is optional, so in such cases, we can keep it 'off'\n>>>>>>>\n>>>>>>>> but\n>>>>>>>> ideally origin_id would be best sent with first message as it can be\n>>>>>>>> used to filter out changes at decoding stage rather than while we\n>>>>>>>> process the commit so having it set early improves performance of decoding.\n>>>>>>>>\n>>>>>>>\n>>>>>>> Yeah, makes sense. So, we will just send origin_id (with first\n>>>>>>> streaming start message) and leave others.\n>>>>>>\n>>>>>> So IIUC, currently we are sending the latest origin_id which is set\n>>>>>> during the commit time. So in our case, while we start streaming we\n>>>>>> will send the origin_id of the latest change in the current stream\n>>>>>> right?\n>>>>>>\n>>>>>\n>>>>> It has to be sent only once with the first start message not with\n>>>>> consecutive start messages.\n>>>>\n>>>> Okay, so do you mean to say that with the first start message we send\n>>>> the origin_id of the latest change?\n>>>>\n>>>\n>>> Yes.\n>>>\n>>>> because during the transaction\n>>>> lifetime, the origin id can be changed.\n>>>>\n>>>\n>>> Yeah, it could be changed but if we have to send again apart from with\n>>> the first message then it should be sent with each message. 
So, I\n>>> think it is better to just send it once during the transaction as we\n>>> do it now (send with begin message).\n>>>\n>>>\n>>\n>> I am not sure if I can follow the discussion here very well, but if I\n>> understand correctly I'd like to clarify two things:\n>> - origin id does not change mid transaction as you can only have one per xid\n> \n> Actually, I was talking about if someone changes the session origin\n> then which origin id we should send? currently, we send data only\n> during the commit so we take the origin id from the commit wal and\n> send the same. In the below example, I am inserting 2 records in a\n> transaction and each of them has different origin id.\n> \n> begin;\n> select pg_replication_origin_session_setup('o1');\n> insert into t values(1, 'test');\n> select pg_replication_origin_session_reset();\n> select pg_replication_origin_session_setup('o2'); --> Origin ID changed\n> insert into t values(2, 'test');\n> commit;\n> \n\nCommit record and commit_ts record will both include only 'o2', while \nindividual DML WAL records will contain one or the other depending on \nwhen they were done.\n\nThe origin API is not really prepared for this situation \n(independently of streaming) because the origin lookup for all rows in \nthat transaction will return 'o2', but decoding will decode whatever is \nin the DML WAL record.\n\nOne can't even use this approach for sensible filtering as the ultimate \nfate of the whole transaction is decided by what's in the commit record since \nthe filter callback only provides origin id, not the record being processed, \nso the plugin can't differentiate. So it's hard to see how the above pattern \ncould be used for anything but breaking things. 
Not sure what Andres' \noriginal intention was with allowing this.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n", "msg_date": "Tue, 14 Jul 2020 12:07:36 +0200", "msg_from": "Petr Jelinek <petr@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: replication_origin and replication_origin_lsn usage on subscriber" }, { "msg_contents": "On Tue, Jul 14, 2020 at 3:37 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n>\n> On 14/07/2020 11:36, Dilip Kumar wrote:\n> > On Tue, Jul 14, 2020 at 2:47 PM Petr Jelinek <petr@2ndquadrant.com> wrote:\n> >>\n> >> I am not sure if I can follow the discussion here very well, but if I\n> >> understand correctly I'd like to clarify two things:\n> >> - origin id does not change mid transaction as you can only have one per xid\n> >\n> > Actually, I was talking about if someone changes the session origin\n> > then which origin id we should send? currently, we send data only\n> > during the commit so we take the origin id from the commit wal and\n> > send the same. 
In the below example, I am inserting 2 records in a\n> > transaction and each of them has different origin id.\n> >\n> > begin;\n> > select pg_replication_origin_session_setup('o1');\n> > insert into t values(1, 'test');\n> > select pg_replication_origin_session_reset();\n> > select pg_replication_origin_session_setup('o2'); --> Origin ID changed\n> > insert into t values(2, 'test');\n> > commit;\n> >\n>\n> Commit record and commit_ts record will both include only 'o2', while\n> individual DML WAL records will contain one or the other depending on\n> when they were done.\n>\n> The origin API is really not really prepared for this situation\n> (independently of streaming) because the origin lookup for all rows in\n> that transaction will return 'o2', but decoding will decode whatever is\n> in the DML WAL record.\n>\n> One can't even use this approach for sensible filtering as the ultimate\n> faith of whole transaction is decided by what's in commit record since\n> the filter callback only provides origin id, not record being processed\n> so plugin can't differentiate. So it's hard to see how the above pattern\n> could be used for anything but breaking things.\n>\n\nFair enough, I think we can proceed with the assumption that it won't\nchange during the transaction and send origin_id along with the very\nfirst *start* message during the streaming of in-progress\ntransactions.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Jul 2020 08:12:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: replication_origin and replication_origin_lsn usage on subscriber" } ]
[ { "msg_contents": "Hello,\n\nContinuing the work done in commits 0dc8ead4 and c24dcd0c, here are a\nfew more places where we could throw away some code by switching to\npg_pread() and pg_pwrite().", "msg_date": "Fri, 7 Feb 2020 12:38:27 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Getting rid of some more lseek() calls" }, { "msg_contents": "Hi,\n\nOn 2020-02-07 12:38:27 +1300, Thomas Munro wrote:\n> Continuing the work done in commits 0dc8ead4 and c24dcd0c, here are a\n> few more places where we could throw away some code by switching to\n> pg_pread() and pg_pwrite().\n\nNice.\n\n\n\n> From 5723976510f30385385628758d7118042c4e4bf6 Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Fri, 7 Feb 2020 12:04:43 +1300\n> Subject: [PATCH 1/3] Use pg_pread() and pg_pwrite() in slru.c.\n> \n> This avoids lseek() system calls at every SLRU I/O, as was\n> done for relation files in commit c24dcd0c.\n> ---\n> src/backend/access/transam/slru.c | 25 ++++---------------------\n> 1 file changed, 4 insertions(+), 21 deletions(-)\n> \n> diff --git a/src/backend/access/transam/slru.c b/src/backend/access/transam/slru.c\n> index d5b7a08f73..f9efb22311 100644\n> --- a/src/backend/access/transam/slru.c\n> +++ b/src/backend/access/transam/slru.c\n> @@ -646,7 +646,7 @@ SlruPhysicalReadPage(SlruCtl ctl, int pageno, int slotno)\n> \tSlruShared\tshared = ctl->shared;\n> \tint\t\t\tsegno = pageno / SLRU_PAGES_PER_SEGMENT;\n> \tint\t\t\trpageno = pageno % SLRU_PAGES_PER_SEGMENT;\n> -\tint\t\t\toffset = rpageno * BLCKSZ;\n> +\toff_t\t\toffset = rpageno * BLCKSZ;\n> \tchar\t\tpath[MAXPGPATH];\n> \tint\t\t\tfd;\n> \n> @@ -676,17 +676,9 @@ SlruPhysicalReadPage(SlruCtl ctl, int pageno, int slotno)\n> \t\treturn true;\n> \t}\n> \n> -\tif (lseek(fd, (off_t) offset, SEEK_SET) < 0)\n> -\t{\n> -\t\tslru_errcause = SLRU_SEEK_FAILED;\n> -\t\tslru_errno = errno;\n> -\t\tCloseTransientFile(fd);\n> -\t\treturn false;\n> 
-\t}\n> -\n> \terrno = 0;\n> \tpgstat_report_wait_start(WAIT_EVENT_SLRU_READ);\n> -\tif (read(fd, shared->page_buffer[slotno], BLCKSZ) != BLCKSZ)\n> +\tif (pg_pread(fd, shared->page_buffer[slotno], BLCKSZ, offset) != BLCKSZ)\n> \t{\n> \t\tpgstat_report_wait_end();\n> \t\tslru_errcause = SLRU_READ_FAILED;\n> @@ -726,7 +718,7 @@ SlruPhysicalWritePage(SlruCtl ctl, int pageno, int slotno, SlruFlush fdata)\n> \tSlruShared\tshared = ctl->shared;\n> \tint\t\t\tsegno = pageno / SLRU_PAGES_PER_SEGMENT;\n> \tint\t\t\trpageno = pageno % SLRU_PAGES_PER_SEGMENT;\n> -\tint\t\t\toffset = rpageno * BLCKSZ;\n> +\toff_t\t\toffset = rpageno * BLCKSZ;\n> \tchar\t\tpath[MAXPGPATH];\n> \tint\t\t\tfd = -1;\n> \n> @@ -836,18 +828,9 @@ SlruPhysicalWritePage(SlruCtl ctl, int pageno, int slotno, SlruFlush fdata)\n> \t\t}\n> \t}\n> \n> -\tif (lseek(fd, (off_t) offset, SEEK_SET) < 0)\n> -\t{\n> -\t\tslru_errcause = SLRU_SEEK_FAILED;\n> -\t\tslru_errno = errno;\n> -\t\tif (!fdata)\n> -\t\t\tCloseTransientFile(fd);\n> -\t\treturn false;\n> -\t}\n> -\n> \terrno = 0;\n> \tpgstat_report_wait_start(WAIT_EVENT_SLRU_WRITE);\n> -\tif (write(fd, shared->page_buffer[slotno], BLCKSZ) != BLCKSZ)\n> +\tif (pg_pwrite(fd, shared->page_buffer[slotno], BLCKSZ, offset) != BLCKSZ)\n> \t{\n> \t\tpgstat_report_wait_end();\n> \t\t/* if write didn't set errno, assume problem is no disk space */\n> -- \n> 2.23.0\n\nHm. This still leaves us with one source of SLRU_SEEK_FAILED. And that's\nreally just for getting the file size. Should we rename that?\n\nPerhaps we should just replace lseek(SEEK_END) with fstat()? Or at least\none wrapper function for getting the size? 
It seems ugly to change fd\npositions just for the purpose of getting file sizes (and also implies\nmore kernel level locking, I believe).\n\n\n> From 95d7187172f2ac6c08dc92f1043e1662b0dab4ac Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Fri, 7 Feb 2020 12:04:57 +1300\n> Subject: [PATCH 2/3] Use pg_pwrite() in rewriteheap.c.\n> \n> This removes an lseek() call.\n> ---\n> src/backend/access/heap/rewriteheap.c | 9 +--------\n> 1 file changed, 1 insertion(+), 8 deletions(-)\n> \n> diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c\n> index 5869922ff8..9c29bc0e0f 100644\n> --- a/src/backend/access/heap/rewriteheap.c\n> +++ b/src/backend/access/heap/rewriteheap.c\n> @@ -1156,13 +1156,6 @@ heap_xlog_logical_rewrite(XLogReaderState *r)\n> \t\t\t\t\t\tpath, (uint32) xlrec->offset)));\n> \tpgstat_report_wait_end();\n> \n> -\t/* now seek to the position we want to write our data to */\n> -\tif (lseek(fd, xlrec->offset, SEEK_SET) != xlrec->offset)\n> -\t\tereport(ERROR,\n> -\t\t\t\t(errcode_for_file_access(),\n> -\t\t\t\t errmsg(\"could not seek to end of file \\\"%s\\\": %m\",\n> -\t\t\t\t\t\tpath)));\n> -\n> \tdata = XLogRecGetData(r) + sizeof(*xlrec);\n> \n> \tlen = xlrec->num_mappings * sizeof(LogicalRewriteMappingData);\n> @@ -1170,7 +1163,7 @@ heap_xlog_logical_rewrite(XLogReaderState *r)\n> \t/* write out tail end of mapping file (again) */\n> \terrno = 0;\n> \tpgstat_report_wait_start(WAIT_EVENT_LOGICAL_REWRITE_MAPPING_WRITE);\n> -\tif (write(fd, data, len) != len)\n> +\tif (pg_pwrite(fd, data, len, xlrec->offset) != len)\n> \t{\n> \t\t/* if write didn't set errno, assume problem is no disk space */\n> \t\tif (errno == 0)\n\nlgtm.\n\n\n> From da6d712eeef2e3257d7fa672d95f2901bbe62887 Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Fri, 7 Feb 2020 12:05:12 +1300\n> Subject: [PATCH 3/3] Use pg_pwrite() in walreceiver.c.\n> \n> This gets rid of an lseek() call. 
While there was code to avoid\n> it in most cases, it's better to lose the call AND the global state\n> and code required to avoid it.\n> ---\n> src/backend/replication/walreceiver.c | 28 +++------------------------\n> 1 file changed, 3 insertions(+), 25 deletions(-)\n> \n> diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c\n> index a5e85d32f3..2ab15c3cbb 100644\n> --- a/src/backend/replication/walreceiver.c\n> +++ b/src/backend/replication/walreceiver.c\n> @@ -85,14 +85,13 @@ WalReceiverFunctionsType *WalReceiverFunctions = NULL;\n> #define NAPTIME_PER_CYCLE 100\t/* max sleep time between cycles (100ms) */\n> \n> /*\n> - * These variables are used similarly to openLogFile/SegNo/Off,\n> + * These variables are used similarly to openLogFile/SegNo,\n> * but for walreceiver to write the XLOG. recvFileTLI is the TimeLineID\n> * corresponding the filename of recvFile.\n> */\n> static int\trecvFile = -1;\n> static TimeLineID recvFileTLI = 0;\n> static XLogSegNo recvSegNo = 0;\n> -static uint32 recvOff = 0;\n> \n> /*\n> * Flags set by interrupt handlers of walreceiver for later service in the\n> @@ -945,7 +944,6 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr)\n> \t\t\tuse_existent = true;\n> \t\t\trecvFile = XLogFileInit(recvSegNo, &use_existent, true);\n> \t\t\trecvFileTLI = ThisTimeLineID;\n> -\t\t\trecvOff = 0;\n> \t\t}\n> \n> \t\t/* Calculate the start offset of the received logs */\n> @@ -956,29 +954,10 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr)\n> \t\telse\n> \t\t\tsegbytes = nbytes;\n> \n> -\t\t/* Need to seek in the file? 
*/\n> -\t\tif (recvOff != startoff)\n> -\t\t{\n> -\t\t\tif (lseek(recvFile, (off_t) startoff, SEEK_SET) < 0)\n> -\t\t\t{\n> -\t\t\t\tchar\t\txlogfname[MAXFNAMELEN];\n> -\t\t\t\tint\t\t\tsave_errno = errno;\n> -\n> -\t\t\t\tXLogFileName(xlogfname, recvFileTLI, recvSegNo, wal_segment_size);\n> -\t\t\t\terrno = save_errno;\n> -\t\t\t\tereport(PANIC,\n> -\t\t\t\t\t\t(errcode_for_file_access(),\n> -\t\t\t\t\t\t errmsg(\"could not seek in log segment %s to offset %u: %m\",\n> -\t\t\t\t\t\t\t\txlogfname, startoff)));\n> -\t\t\t}\n> -\n> -\t\t\trecvOff = startoff;\n> -\t\t}\n> -\n> \t\t/* OK to write the logs */\n> \t\terrno = 0;\n> \n> -\t\tbyteswritten = write(recvFile, buf, segbytes);\n> +\t\tbyteswritten = pg_pwrite(recvFile, buf, segbytes, (off_t) startoff);\n> \t\tif (byteswritten <= 0)\n> \t\t{\n> \t\t\tchar\t\txlogfname[MAXFNAMELEN];\n> @@ -995,13 +974,12 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr)\n> \t\t\t\t\t(errcode_for_file_access(),\n> \t\t\t\t\t errmsg(\"could not write to log segment %s \"\n> \t\t\t\t\t\t\t\"at offset %u, length %lu: %m\",\n> -\t\t\t\t\t\t\txlogfname, recvOff, (unsigned long) segbytes)));\n> +\t\t\t\t\t\t\txlogfname, startoff, (unsigned long) segbytes)));\n> \t\t}\n> \n> \t\t/* Update state for write */\n> \t\trecptr += byteswritten;\n> \n> -\t\trecvOff += byteswritten;\n> \t\tnbytes -= byteswritten;\n> \t\tbuf += byteswritten;\n\nlgtm.\n\n\nThere's still a few more lseek(SEEK_SET) calls in the backend after this\n(walsender, miscinit, pg_stat_statements). It'd imo make sense to just\ntry to get rid of all of them in one series this time round?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 Feb 2020 16:37:55 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Getting rid of some more lseek() calls" }, { "msg_contents": "On Fri, Feb 7, 2020 at 1:37 PM Andres Freund <andres@anarazel.de> wrote:\n> Hm. This still leaves us with one source of SLRU_SEEK_FAILED. 
And that's\n> really just for getting the file size. Should we rename that?\n>\n> Perhaps we should just replace lseek(SEEK_END) with fstat()? Or at least\n> one wrapper function for getting the size? It seems ugly to change fd\n> positions just for the purpose of getting file sizes (and also implies\n> more kernel level locking, I believe).\n\nlseek(SEEK_END) seems to be nearly twice as fast as fstat() if you\njust call it in a big loop, on Linux and FreeBSD (though I didn't\ninvestigate exactly why, mitigations etc, it certainly returns more\nstuff so there's that). I don't think that's a problem here (I mean,\nwe open and close the file every time so we can't be too concerned\nabout the overheads), so I'm in favour of creating a pg_fstat_size(fd)\nfunction on aesthetic grounds. Here's a patch like that; better names\nwelcome.\n\nFor the main offender, namely md.c via fd.c's FileSize(), I'd hold off\non changing that until we figure out how to cache the sizes[1].\n\n> There's still a few more lseek(SEEK_SET) calls in the backend after this\n> (walsender, miscinit, pg_stat_statements). It'd imo make sense to just\n> try to get rid of all of them in one series this time round?\n\nOk, I pushed changes for all the cases discussed except slru.c and\nwalsender.c, which depend on the bikeshed colour discussion about\nwhether we want pg_fstat_size(). 
See attached.\n\n[1] https://www.postgresql.org/message-id/flat/CAEepm%3D3SSw-Ty1DFcK%3D1rU-K6GSzYzfdD4d%2BZwapdN7dTa6%3DnQ%40mail.gmail.com", "msg_date": "Tue, 11 Feb 2020 18:04:09 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Getting rid of some more lseek() calls" }, { "msg_contents": "On Tue, Feb 11, 2020 at 06:04:09PM +1300, Thomas Munro wrote:\n> lseek(SEEK_END) seems to be nearly twice as fast as fstat() if you\n> just call it in a big loop, on Linux and FreeBSD (though I didn't\n> investigate exactly why, mitigations etc, it certainly returns more\n> stuff so there's that).\n\nInteresting. What of Windows? We've had for some time now problem\nwith fetching the size of files larger than 4GB (COPY, dumps..). I am\nwondering if we could not take advantage of that for those cases:\nhttps://www.postgresql.org/message-id/15858-9572469fd3b73263@postgresql.org\n--\nMichael", "msg_date": "Wed, 12 Feb 2020 14:42:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Getting rid of some more lseek() calls" }, { "msg_contents": "On Wed, Feb 12, 2020 at 6:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Feb 11, 2020 at 06:04:09PM +1300, Thomas Munro wrote:\n> > lseek(SEEK_END) seems to be nearly twice as fast as fstat() if you\n> > just call it in a big loop, on Linux and FreeBSD (though I didn't\n> > investigate exactly why, mitigations etc, it certainly returns more\n> > stuff so there's that).\n>\n> Interesting. What of Windows? We've had for some time now problem\n> with fetching the size of files larger than 4GB (COPY, dumps..). I am\n> wondering if we could not take advantage of that for those cases:\n> https://www.postgresql.org/message-id/15858-9572469fd3b73263@postgresql.org\n\nHmm. 
Well, on Unix we have to choose between \"tell me the size but\nalso change the position that I either don't care about or have to\nundo\", and \"tell me the size but also tell me all this other stuff I\ndon't care about\". Since Windows apparently has GetFileSizeEx(), why\nnot use that when that's exactly what you want? It apparently\nunderstands large files.\n\nhttps://docs.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-getfilesizeex\n\n\n", "msg_date": "Wed, 12 Feb 2020 20:08:06 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Getting rid of some more lseek() calls" }, { "msg_contents": "On 2020-Feb-12, Thomas Munro wrote:\n\n> Hmm. Well, on Unix we have to choose between \"tell me the size but\n> also change the position that I either don't care about or have to\n> undo\", and \"tell me the size but also tell me all this other stuff I\n> don't care about\". Since Windows apparently has GetFileSizeEx(), why\n> not use that when that's exactly what you want? It apparently\n> understands large files.\n\nI was already thinking that it might be better to make the new function\njust \"tell me the file size\" without leaking the details of *how* we do\nit, before reading about this Windows call. That reinforces it, IMO.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 12 Feb 2020 13:30:25 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Getting rid of some more lseek() calls" }, { "msg_contents": "On Thu, Feb 13, 2020 at 5:30 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Feb-12, Thomas Munro wrote:\n> > Hmm. 
Well, on Unix we have to choose between \"tell me the size but\n> > also change the position that I either don't care about or have to\n> > undo\", and \"tell me the size but also tell me all this other stuff I\n> > don't care about\". Since Windows apparently has GetFileSizeEx(), why\n> > not use that when that's exactly what you want? It apparently\n> > understands large files.\n>\n> I was already thinking that it might be better to make the new function\n> just \"tell me the file size\" without leaking the details of *how* we do\n> it, before reading about this Windows call. That reinforces it, IMO.\n\nOk, how about this?", "msg_date": "Thu, 13 Feb 2020 14:51:44 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Getting rid of some more lseek() calls" }, { "msg_contents": "On Thu, Feb 13, 2020 at 02:51:44PM +1300, Thomas Munro wrote:\n> Ok, how about this?\n\nAlvaro's point sounds sensible to me. I like the approach you are\ntaking in 0001. At least it avoids more issues with WIN32 and stat()\n(I hope to work on that at some point, we'll see..).\n\n+/*\n+ * pg_file_size --- return the size of a file\n+ */\n+int64\n+pg_file_size(int fd)\n+{\nThis routine has nothing really dependent on the backend. Would it\nmake sense to put it in a different place where it can be used by the\nfrontend? The function should include at least a comment about why we\nhave a special path for Windows, aka not falling into the trap of the\n4GB limit for stat().\n\nThe commit message of 0001 mentions pg_read(), and that should be\npg_pread().\n\nThere are two combinations of lseek/read that could be replaced: one\nin pg_receivewal.c:FindStreamingStart(), and one in\nSimpleXLogPageRead() for parsexlog.c as of pg_rewind.\n\nPatch 0002 looks good to me. This actually removes a confusion when\nfailing to seek the end of the file as the offset referenced to would\nbe 0. 
Patch 0003 is also a very good thing.\n--\nMichael", "msg_date": "Thu, 13 Feb 2020 16:14:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Getting rid of some more lseek() calls" }, { "msg_contents": "On 2020-Feb-13, Thomas Munro wrote:\n\n> On Thu, Feb 13, 2020 at 5:30 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > On 2020-Feb-12, Thomas Munro wrote:\n> > > Hmm. Well, on Unix we have to choose between \"tell me the size but\n> > > also change the position that I either don't care about or have to\n> > > undo\", and \"tell me the size but also tell me all this other stuff I\n> > > don't care about\". Since Windows apparently has GetFileSizeEx(), why\n> > > not use that when that's exactly what you want? It apparently\n> > > understands large files.\n> >\n> > I was already thinking that it might be better to make the new function\n> > just \"tell me the file size\" without leaking the details of *how* we do\n> > it, before reading about this Windows call. That reinforces it, IMO.\n> \n> Ok, how about this?\n\nSo, you said lseek() is faster than fstat, and we would only use the\nlatter because we want to avoid the file position jumping ahead, even\nthough it's slower. But if the next read/write is not going to care\nabout the file position because pread/pwrite, then why not just do one\nlseek() and not worry about the file position jumping ahead? Maybe the\nAPI could offer this as an option; caller can say \"I do care about file\nposition\" (a boolean flag) and then we use fstat; or they can say \"I\ndon't care\" and then we just do a single lseek(SEEK_END). 
Of course, in\nWindows we ignore the flag since we can do it in the fast way.\n\npg_file_size(int fd, bool careful)\n\nLet's have the function comment indicate that -1 is returned in case of\nfailure.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 13 Feb 2020 12:47:54 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Getting rid of some more lseek() calls" }, { "msg_contents": "On Fri, Feb 14, 2020 at 4:47 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> So, you said lseek() is faster than fstat, and we would only use the\n> latter because we want to avoid the file position jumping ahead, even\n> though it's slower. But if the next read/write is not going to care\n> about the file position because pread/pwrite, then why not just do one\n> lseek() and not worry about the file position jumping ahead? Maybe the\n> API could offer this as an option; caller can say \"I do care about file\n> position\" (a boolean flag) and then we use fstat; or they can say \"I\n> don't care\" and then we just do a single lseek(SEEK_END). Of course, in\n> Windows we ignore the flag since we can do it in the fast way.\n>\n> pg_file_size(int fd, bool careful)\n>\n> Let's have the function comment indicate that -1 is returned in case of\n> failure.\n\nReviving an old thread... yeah, we should probably figure out if we\nstill want a central fd size function and how it should look, but in\nthe meantime, we might as well have slru.c using pread/pwrite as\nalready agreed, so I went ahead and pushed that part.\n\n\n", "msg_date": "Sun, 2 Aug 2020 00:24:31 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Getting rid of some more lseek() calls" } ]
[ { "msg_contents": "Hello Postgres hackers,\r\nThe below problem occurs in Postgres versions 11, 10, and 9.6. However, it doesn’t occur since Postgres version 12, since the commit [6] to add basic infrastructure for 64-bit transaction IDs indirectly fixed it.\r\nProblem\r\nThe replica sends an incorrect epoch in its hot standby feedback to the master in the scenario outlined below, where a checkpoint is interleaved with the execution of 2 transactions at the master. The incorrect epoch in the feedback causes the master to ignore the “oldest Xmin” X sent by the replica. If a heap page prune[1] or vacuum were executed at the master immediately thereafter, they may use a newer “oldest Xmin” Y > X, and prematurely delete a tuple T such that X < t_xmax (T) < Y, which is still in use at the replica as part of a long running read query Q. Subsequently, when the replica replays the deletion of T as part of its WAL replay, it cancels the long running query Q causing unnecessary pain to customers.\r\n\r\nScenario\r\n The experimental setup is to start 2 write transactions and get their transaction ids, then create a checkpoint, and then have the oldest of the 2 transactions perform a write operation. The WAL replay of that write operation breaks the monotonically increasing property of “nextXid” tracked by the replica, and causes it to send a wrong epoch in its next feedback.\r\n\r\n 1. Set the following parameters on the replica and the master.\r\n * replica:\r\n\r\n i. set hot_standby = on and hot_standby_feedback = on to provide hot standby feedback.\r\n\r\n ii. set log_min_messages = DEBUG2 so that hot standby feedback[2] gets logged.\r\n\r\n * master:\r\n\r\n i. set checkpoint_timeout = 3600 (1 hour) to turn off automatic checkpoints during the experiment.\r\n\r\n ii. set autovacuum = off to avoid interference from the vacuum process.\r\n\r\n 2. 
Make sure no other read/write queries get executed on the replica and the master during this experiment, since they might change the nextXid on the replica. To achieve this, I set up a master and replica running Postgres 11 “locally” on my desktop, with the above parameter settings.\r\n 3. Start a psql client to the master, and do the following:\r\n\r\na. Create a test table, say “test_epoch_table”. Execute “CREATE TABLE test_epoch_table(id INT, description CHAR(200));”.\r\n\r\nb. Begin a transaction, say A, and get the current transaction ID. Execute “begin; select txid_current();”.\r\n\r\n 4. Start a second psql client to the master. Begin a transaction, say B, and get the current transaction ID. Execute “begin; select txid_current();”.\r\n 5. Start a third psql client to the master, and create a checkpoint. Execute “checkpoint;”.\r\n 6. From transaction A, insert a tuple into “test_epoch_table”. Execute “insert into test_epoch_table(id, description) values(1, 'one');”\r\n 7. Open the Postgres log file on the replica and look for the latest hot standby feedback. The log message will show that the replica sent an incorrect epoch of 1, instead of 0.\r\n\r\nAnalysis\r\nThe variable “ShmemVariableCache->nextXid” (or “nextXid” for short) should be monotonically increasing unless it wraps around to the next epoch. However, in the above sequence, this property is violated on the replica in the function “RecordKnownAssignedTransactionIds”[3], when the WAL replay for the insertion at step 6 is executed at the replica.\r\n\r\nFor example, before step 3b is executed, suppose the nextXid at the replica and master is 100, and that the variable “latestObservedXid”, which tracks the latest observed Xid at the replica, is 99. Now, suppose the Xid for transaction A is 100, and that for transaction B is 101. 
After step 4 is executed, the latestObservedXid at the replica still remains at 99, since the WAL replay for RUNNING_XACTS doesn’t advance latestObservedXid, once the replica reaches the STANDBY_SNAPSHOT_READY state. See [4] for reference. After step 5 is executed, the WAL replay of the checkpoint advances nextXid at the replica to 102, but doesn’t invoke RecordKnownAssignedTransactionIds, and hence latestObservedXid still remains 99. After step 6 is executed, the WAL replay of the insertion invokes RecordKnownAssignedTransactionIds with an input Xid of 100, advances latestObservedXid to 100, and sets nextXid at the replica to 101 (at line [3]) breaking its monotonicity. The hot standby feedback, which is generated immediately after the WAL replay of the insertion, invokes the function “GetNextXidAndEpoch”, which incorrectly calculates the epoch [5].\r\n\r\nVersions impacted by this problem: 11, 10, and 9.6\r\n\r\nAttached is the git diff of the patch for version 11, which is a 1-line change excluding the curly braces. 
Patch for versions 10 and 9.6 involve the same change.\r\n\r\nPlease let me know if you have any questions.\r\n\r\nThanks,\r\nEka Palamadai\r\nAmazon Web Services\r\n\r\n[1] https://github.com/postgres/postgres/blob/REL_11_STABLE/src/backend/access/heap/pruneheap.c#L74\r\n[2] https://github.com/postgres/postgres/blob/REL_11_STABLE/src/backend/replication/walreceiver.c#L1244\r\n[3] https://github.com/postgres/postgres/blob/REL_11_STABLE/src/backend/storage/ipc/procarray.c#L3259\r\n[4] https://github.com/postgres/postgres/blob/REL_11_STABLE/src/backend/storage/ipc/procarray.c#L693\r\n[5] https://github.com/postgres/postgres/blob/REL_11_STABLE/src/backend/access/transam/xlog.c#L8444\r\n[6] https://github.com/postgres/postgres/commit/2fc7af5e966043a412e8e69c135fae55a2db6d4f", "msg_date": "Thu, 6 Feb 2020 23:51:55 +0000", "msg_from": "\"Palamadai, Eka\" <ekanatha@amazon.com>", "msg_from_op": true, "msg_subject": "[PATCH] Replica sends an incorrect epoch in its hot standby feedback\n to the Master" }, { "msg_contents": "On Fri, Feb 7, 2020 at 1:03 PM Palamadai, Eka <ekanatha@amazon.com> wrote:\n> The below problem occurs in Postgres versions 11, 10, and 9.6. However, it doesn’t occur since Postgres version 12, since the commit [6] to add basic infrastructure for 64-bit transaction IDs indirectly fixed it.\n\nI'm happy that that stuff is already fixing bugs we didn't know we\nhad, but, yeah, it looks like it really only fixed it incidentally by\nmoving all the duplicated \"assign if higher\" code into a function, not\nthrough the magical power of 64 bit xids.\n\n> The replica sends an incorrect epoch in its hot standby feedback to the master in the scenario outlined below, where a checkpoint is interleaved with the execution of 2 transactions at the master. The incorrect epoch in the feedback causes the master to ignore the “oldest Xmin” X sent by the replica. 
If a heap page prune[1] or vacuum were executed at the master immediately thereafter, they may use a newer “oldest Xmin” Y > X, and prematurely delete a tuple T such that X < t_xmax (T) < Y, which is still in use at the replica as part of a long running read query Q. Subsequently, when the replica replays the deletion of T as part of its WAL replay, it cancels the long running query Q causing unnecessary pain to customers.\n\nOuch. Thanks for this analysis!\n\n> The variable “ShmemVariableCache->nextXid” (or “nextXid” for short) should be monotonically increasing unless it wraps around to the next epoch. However, in the above sequence, this property is violated on the replica in the function “RecordKnownAssignedTransactionIds”[3], when the WAL replay for the insertion at step 6 is executed at the replica.\n\nI haven't tried your repro or studied this closely yet, but yes, that\nassignment to nextXid does indeed look pretty fishy. Other similar\ncode elsewhere always does a check like in your patch, before\nclobbering nextXid.\n\n\n", "msg_date": "Wed, 12 Feb 2020 17:27:27 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Replica sends an incorrect epoch in its hot standby\n feedback to the Master" }, { "msg_contents": "Thanks a lot for the feedback. Please let me know if you have any further comments. Meanwhile, I have also added this patch to \"Commitfest 2020-03\" at https://commitfest.postgresql.org/27/2464.\r\n\r\nThanks,\r\nEka Palamadai\r\nAmazon Web Services\r\n\r\nOn 2/11/20, 11:28 PM, \"Thomas Munro\" <thomas.munro@gmail.com> wrote:\r\n\r\n On Fri, Feb 7, 2020 at 1:03 PM Palamadai, Eka <ekanatha@amazon.com> wrote:\r\n > The below problem occurs in Postgres versions 11, 10, and 9.6.
However, it doesn’t occur since Postgres version 12, since the commit [6] to add basic infrastructure for 64-bit transaction IDs indirectly fixed it.\r\n \r\n I'm happy that that stuff is already fixing bugs we didn't know we\r\n had, but, yeah, it looks like it really only fixed it incidentally by\r\n moving all the duplicated \"assign if higher\" code into a function, not\r\n through the magical power of 64 bit xids.\r\n \r\n > The replica sends an incorrect epoch in its hot standby feedback to the master in the scenario outlined below, where a checkpoint is interleaved with the execution of 2 transactions at the master. The incorrect epoch in the feedback causes the master to ignore the “oldest Xmin” X sent by the replica. If a heap page prune[1] or vacuum were executed at the master immediately thereafter, they may use a newer “oldest Xmin” Y > X, and prematurely delete a tuple T such that X < t_xmax (T) < Y, which is still in use at the replica as part of a long running read query Q. Subsequently, when the replica replays the deletion of T as part of its WAL replay, it cancels the long running query Q causing unnecessary pain to customers.\r\n \r\n Ouch. Thanks for this analysis!\r\n \r\n > The variable “ShmemVariableCache->nextXid” (or “nextXid” for short) should be monotonically increasing unless it wraps around to the next epoch. However, in the above sequence, this property is violated on the replica in the function “RecordKnownAssignedTransactionIds”[3], when the WAL replay for the insertion at step 6 is executed at the replica.\r\n \r\n I haven't tried your repro or studied this closely yet, but yes, that\r\n assignment to nextXid does indeed look pretty fishy.
Other similar\r\n code elsewhere always does a check like in your patch, before\r\n clobbering nextXid.\r\n \r\n\r\n", "msg_date": "Fri, 21 Feb 2020 17:10:49 +0000", "msg_from": "\"Palamadai, Eka\" <ekanatha@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Replica sends an incorrect epoch in its hot standby\n feedback to the Master" }, { "msg_contents": "On Fri, Feb 21, 2020 at 8:15 PM Palamadai, Eka <ekanatha@amazon.com> wrote:\n\nPlease, do not top post on this list.\n\nThanks a lot for the feedback. Please let me know if you have any further\n> comments. Meanwhile, I have also added this patch to \"Commitfest 2020-03\"\n> at https://commitfest.postgresql.org/27/2464.\n>\n\nApparently, there are a couple of duplicate entries in the commitfest as:\nhttps://commitfest.postgresql.org/27/2463/ and\nhttps://commitfest.postgresql.org/27/2462/\n\nCould close those as \"withdrawn\"?\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Mon, 2 Mar 2020 16:33:10 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Replica sends an incorrect epoch in its hot standby\n feedback to the Master" }, { "msg_contents": "On 3/2/20 10:33 AM, Juan José Santamaría Flecha wrote:\n> On Fri, Feb 21, 2020 at 8:15 PM Palamadai, Eka <ekanatha@amazon.com \n> <mailto:ekanatha@amazon.com>> wrote:\n> \n> Please, do not top post on this list.\n> \n> Thanks a lot for the feedback. Please let me know if you have any\n> further comments. Meanwhile, I have also added this patch to\n> \"Commitfest 2020-03\" at https://commitfest.postgresql.org/27/2464.\n> \n> \n> Apparently, there are a couple of duplicate entries in the commitfest \n> as: https://commitfest.postgresql.org/27/2463/ and \n> https://commitfest.postgresql.org/27/2462/\n> \n> Could close those as \"withdrawn\"?\n\nI have marked the duplicate entries 2462 and 2463 as withdrawn.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 3 Mar 2020 08:02:51 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Replica sends an incorrect epoch in its hot standby\n feedback to the Master" }, { "msg_contents": "On Sat, Feb 22, 2020 at 6:10 AM Palamadai, Eka <ekanatha@amazon.com> wrote:\n> Thanks a lot for the feedback. Please let me know if you have any further comments. Meanwhile, I have also added this patch to \"Commitfest 2020-03\" at https://commitfest.postgresql.org/27/2464.\n\nThanks for the excellent reproducer for this obscure bug.
You said\nthe problem exists in 9.6-11, but I'm also able to reproduce it in\n9.5. That's the oldest supported release, but it probably goes back\nfurther. I confirmed that this patch fixes the immediate problem.\nI've attached a version of your patch with a commit message, to see if\nanyone has more feedback on this.", "msg_date": "Wed, 11 Mar 2020 19:47:06 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Replica sends an incorrect epoch in its hot standby\n feedback to the Master" }, { "msg_contents": "On Wed, Mar 11, 2020 at 7:47 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Feb 22, 2020 at 6:10 AM Palamadai, Eka <ekanatha@amazon.com> wrote:\n> > Thanks a lot for the feedback. Please let me know if you have any further comments. Meanwhile, I have also added this patch to \"Commitfest 2020-03\" at https://commitfest.postgresql.org/27/2464.\n>\n> Thanks for the excellent reproducer for this obscure bug. You said\n> the problem exists in 9.6-11, but I'm also able to reproduce it in\n> 9.5. That's the oldest supported release, but it probably goes back\n> further. I confirmed that this patch fixes the immediate problem.\n> I've attached a version of your patch with a commit message, to see if\n> anyone has more feedback on this.\n\nPushed.\n\n\n", "msg_date": "Thu, 12 Mar 2020 18:21:26 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Replica sends an incorrect epoch in its hot standby\n feedback to the Master" } ]
[ { "msg_contents": "Hello,\n\nHere's a rebase of a refactoring patch that got lost behind a filing\ncabinet on another thread even though there seemed to be some\nagreement that we probably want something like this[1]. It introduces\na new type SegmentNumber, instead of using BlockNumber to represent\nsegment numbers.\n\n[1] https://www.postgresql.org/message-id/flat/", "msg_date": "Fri, 7 Feb 2020 13:00:00 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "typedef SegmentNumber" }, { "msg_contents": "On Fri, Feb 7, 2020 at 1:00 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> [1] https://www.postgresql.org/message-id/flat/\n\nThat should be:\n\n[1] https://www.postgresql.org/message-id/flat/20190429130321.GA14886%40alvherre.pgsql#7e4ed274b6552d6c5e18a069579321c9\n\n\n", "msg_date": "Fri, 7 Feb 2020 13:01:09 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: typedef SegmentNumber" }, { "msg_contents": "On Fri, Feb 07, 2020 at 01:00:00PM +1300, Thomas Munro wrote:\n> Hello,\n>\n> Here's a rebase of a refactoring patch that got lost behind a filing\n> cabinet on another thread even though there seemed to be some\n> agreement that we probably want something like this[1]. It introduces\n> a new type SegmentNumber, instead of using BlockNumber to represent\n> segment numbers.\n\n+1, and looks good to me!\n\n\n", "msg_date": "Fri, 7 Feb 2020 12:40:33 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typedef SegmentNumber" } ]
[ { "msg_contents": "pgxs.mk assumes that if $(EXTENSION) is set, a file\n$(EXTENSION).control must exist in the $(srcdir).\n\nExtensions that need to support multiple Pg versions, multiple\nvariants of the extension, etc may need to template their extension\ncontrol file. PGXS's assumption prevents those extensions from\nsupporting read-only source trees for true vpath builds.\n\nA workaround is to ignore the EXTENSION field in PGXS and leave it\nunset, then set MODULEDIR to the value you would've set EXTENSION to\nand install your control file with DATA_built . But that's more than a\ntad ugly.\n\nThe attached patch fixes this by having PGXS resolve\n$(EXTENSION).control along the VPATH.\n\nBefore:\n\n/usr/bin/install: cannot stat\n'/the/srcdir/path/the_extension.control': No such file or directory\nmake: *** [/the/postgres/path/lib/postgresql/pgxs/src/makefiles/pgxs.mk:229:\ninstall] Error 1\n\nAfter: no error, extension control file is found in builddir.\n\nThere's no reference to $(EXTENSION) outside pgxs.mk so this shouldn't\nhave any wider consequences.\n\nThe extension is responsible for the build rule for the control file,\nlike in DATA_built etc.\n\nPlease backpatch this build fix.\n\nI could supply an alternative patch that follows PGXS's existing\nconvention of using a _built suffix, allowing the extension to specify\neither EXTENSION or EXTENSION_built. For backward compat we'd have to\nallow both to be set so long as they have the same value. Personally I\ndislike this pattern and prefer to just resolve it in normal Make\nfashion without caring if it's a built file or not, especially for the\nEXTENSION var, so I'd prefer the first variant.\n\nFrankly I'd rather we got rid of all the $(VAR) and $(VAR_built) stuff\nentirely and just let make do proper vpath resolution. But I'm sure\nit's that way for a reason...\n\nI have a few other cleanup/fixup patches in the pipe for PGXS and\nMakefile.global but I have to tidy them up a bit first.
One to\neliminate undefined variables use, another to allow vpath directives\nto be used instead of the big VPATH variable hammer. Keep an eye out.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Fri, 7 Feb 2020 11:14:57 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "[PATCH] Support built control file in PGXS VPATH builds" }, { "msg_contents": "On 2020-02-07 04:14, Craig Ringer wrote:\n> The attached patch fixes this by having PGXS resolve\n> $(EXTENSION).control along the VPATH.\n\nSimpler patch:\n\ndiff --git a/src/makefiles/pgxs.mk b/src/makefiles/pgxs.mk\nindex
271e7eaba8..1cd750eecd 100644\n--- a/src/makefiles/pgxs.mk\n+++ b/src/makefiles/pgxs.mk\n@@ -229,7 +229,7 @@ endif # MODULE_big\n \n install: all installdirs\nifneq (,$(EXTENSION))\n-\t$(INSTALL_DATA) $(addprefix $(srcdir)/, $(addsuffix .control, $(EXTENSION))) '$(DESTDIR)$(datadir)/extension/'\n+\t$(INSTALL_DATA) $(call vpathsearch,$(addsuffix .control, $(EXTENSION))) '$(DESTDIR)$(datadir)/extension/'\nendif # EXTENSION\nifneq (,$(DATA)$(DATA_built))\n\t$(INSTALL_DATA) $(addprefix $(srcdir)/, $(DATA)) $(DATA_built) '$(DESTDIR)$(datadir)/$(datamoduledir)/'\n\nDoes that work for you?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 9 Mar 2020 10:27:46 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support built control file in PGXS VPATH builds" }, { "msg_contents": "On Mon, 9 Mar 2020, 17:27 Peter Eisentraut, <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-02-07 04:14, Craig Ringer wrote:\n> > The attached patch fixes this by having PGXS resolve\n> > $(EXTENSION).control along the VPATH.\n>\n> Simpler patch:\n>\n> diff --git a/src/makefiles/pgxs.mk b/src/makefiles/pgxs.mk\n> index 271e7eaba8..1cd750eecd 100644\n> --- a/src/makefiles/pgxs.mk\n> +++ b/src/makefiles/pgxs.mk\n> @@ -229,7 +229,7 @@ endif # MODULE_big\n>\n> install: all installdirs\n> ifneq (,$(EXTENSION))\n> -       $(INSTALL_DATA) $(addprefix $(srcdir)/, $(addsuffix .control,\n> $(EXTENSION))) '$(DESTDIR)$(datadir)/extension/'\n> +       $(INSTALL_DATA) $(call vpathsearch,$(addsuffix .control,\n> $(EXTENSION))) '$(DESTDIR)$(datadir)/extension/'\n> endif # EXTENSION\n> ifneq (,$(DATA)$(DATA_built))\n>         $(INSTALL_DATA) $(addprefix $(srcdir)/, $(DATA)) $(DATA_built)\n> '$(DESTDIR)$(datadir)/$(datamoduledir)/'\n>\n> Does that work for you?\n>\n\nIt wouldn't be my preference because it relies on the VPATH variable.\nAFAICS the extension cannot use finer grained vpath directives for this.\nAnd if anything relies on VPATH it must be set so you can't really benefit\nfrom vpath directives for anything else.\n\nCurrently it's possible to build extensions by unsetting VPATH after\nincluding pgxs.mk and defining vpath directives only the things you want to\nsearch for. This is immensely useful.
Since I don't think make offers\nits own vpath directive aware search function there's no convenient way to\nget a make var with the resolved path in it before the target is invoked.\n\nSo really I think we should be letting make resolve the targets for us\nusing automatic variables like $< $^ and $@ with the target search logic.\n\nBTW it's definitely rather frustrating that make doesn't appear to have a\n$(vpathsearch foo) or $(vpathlookup foo) or whatever of its own. Seems very\nsilly to not have that when there are vpath directives.\n\nOn Mon, 9 Mar 2020, 17:27 Peter Eisentraut, <peter.eisentraut@2ndquadrant.com> wrote:On 2020-02-07 04:14, Craig Ringer wrote:\n> The attached patch fixes this by having PGXS resolve\n> $(EXTENSION).control along the VPATH.\n\nSimpler patch:\n\ndiff --git a/src/makefiles/pgxs.mk b/src/makefiles/pgxs.mk\nindex 271e7eaba8..1cd750eecd 100644\n--- a/src/makefiles/pgxs.mk\n+++ b/src/makefiles/pgxs.mk\n@@ -229,7 +229,7 @@ endif # MODULE_big\n\n install: all installdirs\n ifneq (,$(EXTENSION))\n-       $(INSTALL_DATA) $(addprefix $(srcdir)/, $(addsuffix .control, $(EXTENSION))) '$(DESTDIR)$(datadir)/extension/'\n+       $(INSTALL_DATA) $(call vpathsearch,$(addsuffix .control, $(EXTENSION))) '$(DESTDIR)$(datadir)/extension/'\n endif # EXTENSION\n ifneq (,$(DATA)$(DATA_built))\n        $(INSTALL_DATA) $(addprefix $(srcdir)/, $(DATA)) $(DATA_built) '$(DESTDIR)$(datadir)/$(datamoduledir)/'\n\nDoes that work for you?It wouldn't be my preference because it relies on the VPATH variable. AFAICS the extension  cannot use finer grained vpath directives for this. And if anything relies on VPATH it must be set so you can't really benefit from vpath directives for anything else.Currently it's possible to build extensions by unsetting VPATH after including pgxs.mk and defining vpath directives only the things you want to search for. This is immensely useful. 
You can prevent make from looking for build products in the srcdir so you don't get issues with stale files if someone does a vpath build from a dirty worktree that has files left in it from a previous in tree build. Lots of other things too.So while your patch would work it would definitely not be my preference. Frankly I'd rather be moving on the other direction - doing away with all this DATA vs DATA_BUILT mess entirely and switch everything to using make vpath directives + automatic variable path resolution.Our vpathsearch function is IMO a bit of a hack we shouldn't need to use at all. The only time it's necessary is when we absolutely need to get the vpath resolved path into a Make variable. Since I don't think make offers its own vpath directive aware search function there's no convenient way to get a make var with the resolved path in it before the target is invoked.So really I think we should be letting make resolve the targets for us using automatic variables like $< $^ and $@ with the target search logic.BTW it's definitely rather frustrating that make doesn't appear to have a $(vpathsearch foo) or $(vpathlookup foo) or whatever of its own. 
Seems very silly to not have that when there are vpath directives.", "msg_date": "Mon, 30 Mar 2020 11:50:23 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support built control file in PGXS VPATH builds" }, { "msg_contents": "On Mon, 30 Mar 2020 at 11:50, Craig Ringer <craig@2ndquadrant.com> wrote:\n\n>\n>\n> On Mon, 9 Mar 2020, 17:27 Peter Eisentraut, <\n> peter.eisentraut@2ndquadrant.com> wrote:\n>\n>> On 2020-02-07 04:14, Craig Ringer wrote:\n>> > The attached patch fixes this by having PGXS resolve\n>> > $(EXTENSION).control along the VPATH.\n>>\n>> Simpler patch:\n>>\n>> diff --git a/src/makefiles/pgxs.mk b/src/makefiles/pgxs.mk\n>> index 271e7eaba8..1cd750eecd 100644\n>> --- a/src/makefiles/pgxs.mk\n>> +++ b/src/makefiles/pgxs.mk\n>> @@ -229,7 +229,7 @@ endif # MODULE_big\n>>\n>> install: all installdirs\n>> ifneq (,$(EXTENSION))\n>> - $(INSTALL_DATA) $(addprefix $(srcdir)/, $(addsuffix .control,\n>> $(EXTENSION))) '$(DESTDIR)$(datadir)/extension/'\n>> + $(INSTALL_DATA) $(call vpathsearch,$(addsuffix .control,\n>> $(EXTENSION))) '$(DESTDIR)$(datadir)/extension/'\n>> endif # EXTENSION\n>> ifneq (,$(DATA)$(DATA_built))\n>> $(INSTALL_DATA) $(addprefix $(srcdir)/, $(DATA)) $(DATA_built)\n>> '$(DESTDIR)$(datadir)/$(datamoduledir)/'\n>>\n>> Does that work for you?\n>>\n>\n> It wouldn't be my preference because it relies on the VPATH variable.\n> AFAICS the extension cannot use finer grained vpath directives for this.\n> And if anything relies on VPATH it must be set so you can't really benefit\n> from vpath directives for anything else.\n>\n\n\nAny thoughts here? 
I'd like to get it merged if possible and I hope my\nexplanation of why I did it that way clears things up.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Thu, 9 Apr 2020 11:54:06 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support built control file in PGXS VPATH builds" }, { "msg_contents": "> On 9 Apr 2020, at 05:54, Craig Ringer <craig@2ndquadrant.com> wrote:\n\n> Any thoughts here? I'd like to get it merged if possible and I hope my explanation of why I did it that way clears things up.\n\nAccording to the CFBot patch tester, this fails the test_extensions and\ntest_extdepend test suites. I've marked the patch Waiting on Author.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 1 Jul 2020 13:34:18 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support built control file in PGXS VPATH builds" }, { "msg_contents": "> On 1 Jul 2020, at 13:34, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 9 Apr 2020, at 05:54, Craig Ringer <craig@2ndquadrant.com> wrote:\n> \n>> Any thoughts here? I'd like to get it merged if possible and I hope my explanation of why I did it that way clears things up.\n> \n> According to the CFBot patch tester, this fails the test_extensions and\n> test_extdepend test suites. I've marked the patch Waiting on Author.\n\nWith the thread stalled and the tests still failing, I've marked this patch\nReturned with Feedback. Feel free to create a new entry when there is a new\nversion of the patch.\n\ncheers ./daniel\n\n", "msg_date": "Fri, 31 Jul 2020 21:10:23 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support built control file in PGXS VPATH builds" } ]
[ { "msg_contents": "Hello Hackers,\n\nWhile working on some issue in logical decoding, I found some\ninconsistencies in the comment for defining max_cached_tuplebufs in\nreorderbuffer.c. It only exists till PG10 because after that the\ndefinition got removed by the generational memory allocator patch. The\nvariable is defined as follows in reorderbuffer.c:\nstatic const Size max_cached_tuplebufs = 4096 * 2; /* ~8MB */\n\nAnd it gets compared with rb->nr_cached_tuplebufs in\nReorderBufferReturnTupleBuf as follows:\nif (tuple->alloc_tuple_size == MaxHeapTupleSize &&\n rb->nr_cached_tuplebufs < max_cached_tuplebufs)\n\n {\n rb->nr_cached_tuplebufs++;\n}\n\nSo, what this variable actually tracks is 4096 * 2 times\nMaxHeapTupleSize amount of memory which is approximately 64MB. I've\nattached a patch to modify the comment.\n\nBut, I'm not sure whether the intention was to keep 8MB cache only. In\nthat case, I can come up with another patch.\n\nThoughts?\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 7 Feb 2020 14:49:14 +0530", "msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>", "msg_from_op": true, "msg_subject": "Fix comment for max_cached_tuplebufs definition" }, { "msg_contents": "On Fri, Feb 07, 2020 at 02:49:14PM +0530, Kuntal Ghosh wrote:\n> Hello Hackers,\n> \n> While working on some issue in logical decoding, I found some\n> inconsistencies in the comment for defining max_cached_tuplebufs in\n> reorderbuffer.c. It only exists till PG10 because after that the\n> definition got removed by the generational memory allocator patch.
The\n> variable is defined as follows in reorderbuffer.c:\n> static const Size max_cached_tuplebufs = 4096 * 2; /* ~8MB */\n> \n> And it gets compared with rb->nr_cached_tuplebufs in\n> ReorderBufferReturnTupleBuf as follows:\n> if (tuple->alloc_tuple_size == MaxHeapTupleSize &&\n> rb->nr_cached_tuplebufs < max_cached_tuplebufs)\n> \n> {\n> rb->nr_cached_tuplebufs++;\n> }\n> \n> So, what this variable actually tracks is 4096 * 2 times\n> MaxHeapTupleSize amount of memory which is approximately 64MB. I've\n> attached a patch to modify the comment.\n> \n> But, I'm not sure whether the intention was to keep 8MB cache only. In\n> that case, I can come up with another patch.\n\nYes, I see you are correct, since each tuplebuf is MaxHeapTupleSize. \nPatch applied from PG 9.5 to PG 10. Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Sat, 14 Mar 2020 17:38:24 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Fix comment for max_cached_tuplebufs definition" } ]
[ { "msg_contents": "Hello\n\nPatch ba79cb5 (for full discussion see [1]) introduces a feature to log \nbind parameter values on error,\nwhich greatly helps to reproduce errors artificially having only server \nlog -- thanks everyone who\nreviewed and improved it!\n\nHowever, it cuts the values, as some of the reviewers were worried log \ncould fill up too quickly.\nThis applies both to the new case of logging parameter values and to the \nexisting ones due to\nlog_min_duration_statement or log_statement.\n\nThis is a backwards-incompatible change, and also ruins the idea of \nreproducible errors -- sorry Tom\nI failed to second this idea [2] in time, before the change was pushed.\n\nI personally don't think that we necessarily need to cut the values, we \ncan rely on the users\nbeing careful when using this feature -- in the same way we trusted them \nuse similarly dangerous\nlog_min_duration_statement and especially log_statement for ages. At \nleast it's better than having\nno option to disable it. Alvaro's opinion was different [3].
What do you \nthink\nof adding a parameter to limit max logged parameter length? See patch \nattached.\n\nBest, Alex\n\n[1] https://postgr.es/m/0146a67b-a22a-0519-9082-bc29756b93a2@imap.cc\n[2] https://postgr.es/m/11425.1575927321%40sss.pgh.pa.us\n[3] https://postgr.es/m/20191209200531.GA19848@alvherre.pgsql", "msg_date": "Fri, 7 Feb 2020 13:56:52 +0000", "msg_from": "Alexey Bashtanov <bashtanov@imap.cc>", "msg_from_op": true, "msg_subject": "control max length of parameter values logged" }, { "msg_contents": "Alexey Bashtanov <bashtanov@imap.cc> writes:\n> I personally don't think that we necessarily need to cut the values, we \n> can rely on the users\n> being careful when using this feature -- in the same way we trusted them \n> use similarly dangerous\n> log_min_duration_statement and especially log_statement for ages. At \n> least it's better than having\n> no option to disable it. Alvaro's opinion was different [3]. What do you \n> think\n> of adding a parameter to limit max logged parameter length? See patch \n> attached.\n\nThis patch is failing to build docs (per the cfbot) and it also fails\ncheck-world because you changed behavior tested by ba79cb5dc's test case.\nAttached is an update that hopefully will make the cfbot happy.\n\nI agree that something ought to be done here, but I'm not sure that\nthis is exactly what. It appears to me that there are three related\nbut distinct behaviors under discussion:\n\n1. Truncation of bind parameters returned to clients in error message\n detail fields.\n2. Truncation of bind parameters written to the server log in logged\n error messages.\n3. Truncation of bind parameters written to the server log in non-error\n statement logging actions (log_statement and variants).\n\nHistorically we haven't truncated any of these, I believe. As of\nba79cb5dc we forcibly truncate #1 and #2 at 64 bytes, but not #3.\nYour patch proposes to provide a SUSET GUC that controls the\ntruncation length for all 3.\n\nI think that the status quo as of ba79cb5dc is entirely unacceptable,\nbecause there is no recourse if you want to find out why a statement\nis failing and the reason is buried in some long parameter string.\nHowever, this patch doesn't really fix it, because it still seems\npretty annoying that you need to be superuser to adjust what gets\nsent back to the client. Maybe that isn't a problem in practice\n(since the client presumably has the original parameter value laying\nabout), but it seems conceptually wrong.\n\nOn the other hand, that line of reasoning leads to wanting two\nseparate GUCs (one for #1 and one for the other two), which seems\nlike overkill, plus it's going to be hard/expensive to have the\noutputs for #1 and #2 not be the same.\n\nI do agree that it seems weird and random (not to say 100% backward)\nthat error cases provide only truncated values but routine query\nlogging insists on emitting full untruncated values.
I should think\nthat the most common use-cases would want it the other way round.\n\nSo I feel like we'd better resolve these definitional questions about\nwhat behavior we actually want. I agree that ba79cb5dc is not\nterribly well thought out as it stands.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 10 Mar 2020 18:03:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "On 2020-Mar-10, Tom Lane wrote:\n\n> I agree that something ought to be done here, but I'm not sure that\n> this is exactly what. It appears to me that there are three related\n> but distinct behaviors under discussion:\n> \n> 1. Truncation of bind parameters returned to clients in error message\n> detail fields.\n> 2. Truncation of bind parameters written to the server log in logged\n> error messages.\n> 3. Truncation of bind parameters written to the server log in non-error\n> statement logging actions (log_statement and variants).\n> \n> Historically we haven't truncated any of these, I believe.
As of\n> ba79cb5dc we forcibly truncate #1 and #2 at 64 bytes, but not #3.\n> Your patch proposes to provide a SUSET GUC that controls the\n> truncation length for all 3.\n\nThe reason I didn't change the other uses was precisely that it was\nestablished behavior, but my opinion was that truncating them would\nbe better, now that we have code to handle doing so.\n\nMaybe it would make sense to always log complete parameters for error\ncases when that feature is enabled, and have the GUC only control the\nlengths logged for non-error cases?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 11 Mar 2020 18:17:32 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Maybe it would make sense to always log complete parameters for error\n> cases when that feature is enabled, and have the GUC only control the\n> lengths logged for non-error cases?\n\nI could get behind that. It's a bit different from the original\nidea here, but I think it's closer to being real-world-useful.\n\nAnother way to slice this up would be to have a USERSET GUC that\ncontrols truncation of parameter values in errors, and a separate\nSUSET GUC that controls it for the non-error statement logging\ncases.
I'm not sure how much that's actually worth, but if we\nfeel that truncation in error cases can be useful, that's how\nI'd vote to expose it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Mar 2020 17:56:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "On 2020-Mar-11, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Maybe it would make sense to always log complete parameters for error\n> > cases when that feature is enabled, and have the GUC only control the\n> > lengths logged for non-error cases?\n> \n> I could get behind that. It's a bit different from the original\n> idea here, but I think it's closer to being real-world-useful.\n> \n> Another way to slice this up would be to have a USERSET GUC that\n> controls truncation of parameter values in errors, and a separate\n> SUSET GUC that controls it for the non-error statement logging\n> cases. I'm not sure how much that's actually worth, but if we\n> feel that truncation in error cases can be useful, that's how\n> I'd vote to expose it.\n\nEither of these ideas work for me. I think I like the latter more,\nsince it allows to configure truncation in all cases. (I'm not really\nsure I understand why one of them must be SUSET.)\n\nThe reason I'm so hot about parameter truncation is that we've seen\ncases where customers' log files contain log lines many megabytes long\nbecause of gigantic parameters; UUID arrays with tens of thousands of\nentries, and such. 
Sometimes we see those in the normal \"statement\"\nline because $customer interpolates into the query literal; normally the\n\"solution\" is to move the params from interpolated into a parameter.\nBut if we log all parameters whole, that workaround no longer works, so\na way to clip is necessary.\n\nI agree that truncating the value that can be disabled while not\ntruncating the values that cannot be disabled, is a bit silly.\n\nI'm okay with the default being not to clip anything.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 12 Mar 2020 12:53:42 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Mar-11, Tom Lane wrote:\n>> Another way to slice this up would be to have a USERSET GUC that\n>> controls truncation of parameter values in errors, and a separate\n>> SUSET GUC that controls it for the non-error statement logging\n>> cases. I'm not sure how much that's actually worth, but if we\n>> feel that truncation in error cases can be useful, that's how\n>> I'd vote to expose it.\n\n> Either of these ideas work for me. I think I like the latter more,\n> since it allows to configure truncation in all cases. (I'm not really\n> sure I understand why one of them must be SUSET.)\n\nWe generally suppose that GUCs that control statement logging should be\nSUSET, so that unprivileged users don't get to hide their activity from\nthe log.
On the other hand, I think it's okay for error logging (as\nopposed to statement tracing) to be under user control, because the user\ncan simply avoid or trap an error if he doesn't want it to be logged.\n\n> The reason I'm so hot about parameter truncation is that we've seen\n> cases where customers' log files contain log lines many megabytes long\n> because of gigantic parameters; UUID arrays with tens of thousands of\n> entries, and such. Sometimes we see those in the normal \"statement\"\n> line because $customer interpolates into the query literal; normally the\n> \"solution\" is to move the params from interpolated into a parameter.\n> But if we log all parameters whole, that workaround no longer works, so\n> a way to clip is necessary.\n\nAgreed, it seems like there's a fairly compelling case for being\nable to clip.\n\n> I'm okay with the default being not to clip anything.\n\nAlso agreed. It's been like it is for a long time with not that\nmany complaints, so the case for changing the default behavior\nseems a bit weak.\n\nBarring other opinions, I think we have consensus here on what\nto do. Alexey, will you update your patch?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Mar 2020 12:01:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "On Thu, Mar 12, 2020 at 12:01:24PM -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > The reason I'm so hot about parameter truncation is that we've seen\n> > cases where customers' log files contain log lines many megabytes long\n> > because of gigantic parameters; UUID arrays with tens of thousands of\n> > entries, and such. 
Sometimes we see those in the normal \"statement\"\n> > line because $customer interpolates into the query literal; normally the\n> > \"solution\" is to move the params from interpolated into a parameter.\n> > But if we log all parameters whole, that workaround no longer works, so\n> > a way to clip is necessary.\n> \n> Agreed, it seems like there's a fairly compelling case for being\n> able to clip.\n> \n> > I'm okay with the default being not to clip anything.\n> \n> Also agreed. It's been like it is for a long time with not that\n> many complaints, so the case for changing the default behavior\n> seems a bit weak.\n> \n> Barring other opinions, I think we have consensus here on what\n> to do. Alexey, will you update your patch?\n\nI am sorry --- I am confused. Why are we truncating or allowing control\nof truncation of BIND parameter values, but have no such facility for\nqueries. Do we assume queries are shorter than BIND parameters, or is\nit just that it is easier to trim BIND parameters than values embedded\nin non-EXECUTE queries.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Sat, 14 Mar 2020 18:09:17 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I am sorry --- I am confused. Why are we truncating or allowing control\n> of truncation of BIND parameter values, but have no such facility for\n> queries. Do we assume queries are shorter than BIND parameters, or is\n> it just that it is easier to trim BIND parameters than values embedded\n> in non-EXECUTE queries.\n\nThe cases that Alvaro was worried about were enormous values supplied\nvia bind parameters. We haven't heard comparable complaints about\nthe statement text. 
Also, from a security standpoint, the contents\nof the statement text are way more critical than the contents of\nan out-of-line parameter; you can't do SQL injection from the latter.\nSo I think the audience for trimming would be a lot smaller for\nstatement-text trimming.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 14 Mar 2020 18:41:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "On 2020-Mar-14, Tom Lane wrote:\n\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I am sorry --- I am confused. Why are we truncating or allowing control\n> > of truncation of BIND parameter values, but have no such facility for\n> > queries. Do we assume queries are shorter than BIND parameters, or is\n> > it just that it is easier to trim BIND parameters than values embedded\n> > in non-EXECUTE queries.\n> \n> The cases that Alvaro was worried about were enormous values supplied\n> via bind parameters. We haven't heard comparable complaints about\n> the statement text.\n\nTo be more precise, I have seen cases of enormous statement text, but\nthose are fixed precisely by moving the bulk to parameters. So the\nability to trim the parameter is important. I've never seen a very\nlarge query without the bulk being parameterizable.\n\n> Also, from a security standpoint, the contents\n> of the statement text are way more critical than the contents of\n> an out-of-line parameter; you can't do SQL injection from the latter.\n\nThat's a good point too.\n\n> So I think the audience for trimming would be a lot smaller for\n> statement-text trimming.\n\nNod. (I think if we really wanted to trim queries, it would have to be\nsomething semantically sensible, not just trim whatever is at the end of\nthe statement literal. 
Say, only trim parts of the where clause that\nare of the form \"something op constant\", and rules like that, plus put\nplaceholders to show that they were there. This sounds like a lot of work to\nfigure out usefully ...)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 15 Mar 2020 20:48:33 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "Hi,\n\nBIND parameter truncation sounds good to me.\nLogs could be very hard to read due to\nthe very long parameters recorded.\n\nCurrently, the parameter is truncated keeping the left part.\ne.g. \"AAAA-BBBB-CCCC-DDDD-EEEE\" to \"AAAA-BBBB-CCCC...\"\n\nWouldn't keeping the right part also be useful?\ne.g. \"AAAA-BBBB-CCCC-DDDD-EEEE\" to \"...CCCC-DDDD-EEEE\"\n\nIt is sometimes nice to be able to check\nthe end of strings. For example, if there\nis a difference only at the end\nof many parameters.\n\nBest Regards,\nKeisuke Kuroda\n\n\n", "msg_date": "Wed, 18 Mar 2020 14:04:06 +0900", "msg_from": "keisuke kuroda <keisuke.kuroda.3862@gmail.com>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "On Sun, Mar 15, 2020 at 08:48:33PM -0300, Alvaro Herrera wrote:\n> On 2020-Mar-14, Tom Lane wrote:\n> \n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > I am sorry --- I am confused. Why are we truncating or allowing control\n> > > of truncation of BIND parameter values, but have no such facility for\n> > > queries. Do we assume queries are shorter than BIND parameters, or is\n> > > it just that it is easier to trim BIND parameters than values embedded\n> > > in non-EXECUTE queries.\n> > \n> > The cases that Alvaro was worried about were enormous values supplied\n> > via bind parameters.
We haven't heard comparable complaints about\n> > the statement text.\n> \n> To be more precise, I have seen cases of enormous statement text, but\n> those are fixed precisely by moving the bulk to parameters. So the\n> ability to trim the parameter is important. I've never seen a very\n> large query without the bulk being parameterizable.\n\nI don't claim our use is a common case or a good example but I'm going to offer\na data point.\n\nWe have very long query strings, even while using bind parameters.\nOur loader process uses upsert and prepared statements.\nSo we might run: INSERT INTO t (k1,k2,a,b,...) VALUES($1,$2,$3,$4)\nON CONFLICT(k1,k2) DO UPDATE SET a=excluded.a,b=excluded.b\n..which is fine, but we also have large number of columns - historically up to\n1600. If a query fails, the error might be a query string 2+ pages long.\n\nLooks like we have common cases (and executed many times) with:\n24k long message and 86k long param string\n70k long message and 10k long param string\n\nHaving full log on error is important, more to the client but also in the\nserver log. But it would be nice if we could reduce the server logs. Most of\nthe prepare string is of little value if there's no error: VALUES ($1,$2,$3,)\n(but prepared query is at least better than repeating the query string).\n\nRelated: a year ago, I wrote about the repetition of the \"PREPARE\" statement in\nquery logs.\nhttps://www.postgresql.org/message-id/20190208132953.GF29720@telsasoft.com\n\nUltimately I withdrew that patch and switched to log_statement_min_duration.\n\n> Nod. 
(I think if we really wanted to trim queries, it would have to be\n> something semantically sensible, not just trim whatever is at the end of\n> the statement literal.\n\nIf it were easy, I would truncate query strings to a few hundred bytes.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 18 Mar 2020 01:26:23 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "> Also agreed. It's been like it is for a long time with not that\n> many complaints, so the case for changing the default behavior\n> seems a bit weak.\n>\n> Barring other opinions, I think we have consensus here on what\n> to do. Alexey, will you update your patch?\n>\nSorry for the delay, please could you have a look?\n\nBest, Alex", "msg_date": "Wed, 1 Apr 2020 01:52:48 +0100", "msg_from": "Alexey Bashtanov <bashtanov@imap.cc>", "msg_from_op": true, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "Alexey Bashtanov <bashtanov@imap.cc> writes:\n> Sorry for the delay, please could you have a look?\n\nGot it, will look tomorrow. 
(I think it's important to get this\ndone for v13, before we ship this behavior.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Mar 2020 21:04:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "Hi,\n\nOn Wed, Apr 01, 2020 at 01:52:48AM +0100, Alexey Bashtanov wrote:\n> +++ b/doc/src/sgml/config.sgml\n> + <varlistentry id=\"guc-log-parameter-max-length\" xreflabel=\"log_parameter_max_length\">\n> + <term><varname>log_parameter_max_length</varname> (<type>integer</type>)\n> + <indexterm>\n> + <primary><varname>log_parameter_max_length</varname> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + If greater than zero, bind parameter values reported in non-error\n> + statement-logging messages are trimmed to no more than this many bytes.\n\nCan I suggest to say:\n\n\"Limit bind parameter values reported by non-error statement-logging messages\nto this many bytes\". 
Or,\n\n\"The maximum length of bind parameter values to log with non-error\nstatement-logging messages\".\n\n> --- a/src/backend/utils/misc/guc.c\n> +++ b/src/backend/utils/misc/guc.c\n> @@ -2855,6 +2857,28 @@ static struct config_int ConfigureNamesInt[] =\n> \t\tNULL, NULL, NULL\n> \t},\n> \n> +\t{\n> +\t\t{\"log_parameter_max_length\", PGC_SUSET, LOGGING_WHAT,\n> +\t\t\tgettext_noop(\"When logging statements, limit logged parameter values to first N bytes.\"),\n> +\t\t\tgettext_noop(\"Zero to print values in full.\"),\n\nCould you make zero a normal value and -1 the \"special\" value to disable\ntrimming ?\n\nSetting to zero will avoid displaying parameters at all, setting to -1 wil\ndisplay values in full.\n\nCheers,\n-- \nJustin\n\n\n", "msg_date": "Tue, 31 Mar 2020 23:36:59 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "Hi,\n>> + If greater than zero, bind parameter values reported in non-error\n>> + statement-logging messages are trimmed to no more than this many bytes.\n> Can I suggest to say:\n>\n> \"Limit bind parameter values reported by non-error statement-logging messages\n> to this many bytes\". Or,\n>\n> \"The maximum length of bind parameter values to log with non-error\n> statement-logging messages\".\nOkay I'll rephrase.\n> Could you make zero a normal value and -1 the \"special\" value to disable\n> trimming ?\n>\n> Setting to zero will avoid displaying parameters at all, setting to -1 wil\n> display values in full.\nI can, but then for the sake of consistency I'll have to do the same for \nlog_parameter_max_length.\nThen we'll end up with two ways to enable/disable parameter logging on \nerror:\nusing the primary boolean setting and setting length to 0.\nOne of them will require superuser privileges, the other one won't.\nDo you think it's okay? 
I have no objections but I'm a bit worried \nsomeone may find it clumsy.\n\nBest, Alex\n\n\n", "msg_date": "Wed, 1 Apr 2020 10:10:55 +0100", "msg_from": "Alexey Bashtanov <bashtanov@imap.cc>", "msg_from_op": true, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "On Wed, Apr 01, 2020 at 10:10:55AM +0100, Alexey Bashtanov wrote:\n> Hi,\n> > > + If greater than zero, bind parameter values reported in non-error\n> > > + statement-logging messages are trimmed to no more than this many bytes.\n> > Can I suggest to say:\n> > \n> > \"Limit bind parameter values reported by non-error statement-logging messages\n> > to this many bytes\". Or,\n> > \n> > \"The maximum length of bind parameter values to log with non-error\n> > statement-logging messages\".\n> Okay I'll rephrase.\n> > Could you make zero a normal value and -1 the \"special\" value to disable\n> > trimming ?\n> > \n> > Setting to zero will avoid displaying parameters at all, setting to -1 wil\n> > display values in full.\n> I can, but then for the sake of consistency I'll have to do the same for\n> log_parameter_max_length.\n> Then we'll end up with two ways to enable/disable parameter logging on\n> error:\n> using the primary boolean setting and setting length to 0.\n> One of them will require superuser privileges, the other one won't.\n\nI guess you're referring to log_parameters_on_error.\nDoes it have to be SUSET ?\n\nOr maybe log_parameters_on_error doesn't need to exist at all, and setting\nlog_parameter_max_length=0 can be used to disable parameter logging.\n\nI showed up late to this thread, so let's see what others think.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 1 Apr 2020 04:31:42 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Wed, Apr 01, 2020 at 10:10:55AM +0100, Alexey Bashtanov 
wrote:\n>>> Could you make zero a normal value and -1 the \"special\" value to disable\n>>> trimming ?\n\n>> I can, but then for the sake of consistency I'll have to do the same for\n>> log_parameter_max_length.\n>> Then we'll end up with two ways to enable/disable parameter logging on\n>> error:\n>> using the primary boolean setting and setting length to 0.\n>> One of them will require superuser privileges, the other one won't.\n\n> I guess you're referring to log_parameters_on_error.\n> Does it have to be SUSET ?\n> Or maybe log_parameters_on_error doesn't need to exist at all, and setting\n> log_parameter_max_length=0 can be used to disable parameter logging.\n> I showed up late to this thread, so let's see what others think.\n\nI think Justin's got a point: defining zero this way is weirdly\ninconsistent. -1, being clearly outside the domain of possible\nlength limits, makes more sense as a marker for \"don't trim\".\n\nAlexey's right that having a separate boolean flag is pointless, but\nI think we could just drop the boolean; we haven't shipped that yet.\nThe privilege argument seems irrelevant to me. We already decided\nthat the plan is (a) SUSET for non-error statement logging purposes and\n(b) USERSET for logging caused by errors, and that would have to apply\nto length limits as well as enable/disable ability. Otherwise a user\ncould pretty effectively disable logging by setting the length to 1.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Apr 2020 10:51:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "Hi,\n> The privilege argument seems irrelevant to me. We already decided\n> that the plan is (a) SUSET for non-error statement logging purposes and\n> (b) USERSET for logging caused by errors, and that would have to apply\n> to length limits as well as enable/disable ability. 
Otherwise a user\n> could pretty effectively disable logging by setting the length to 1.\nThe only privilege that user can gain if we drop the boolean is to \n*enable* logging parameters on error.\nThat gives user a little bit easier way to fill up the disk with logs, \nbut they anyway can do that if they want to.\nIf that's okay with everyone, please see the new version attached.\n\nBest, Alex", "msg_date": "Thu, 2 Apr 2020 01:29:04 +0100", "msg_from": "Alexey Bashtanov <bashtanov@imap.cc>", "msg_from_op": true, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "Thanks for updating the patch.\n\nOn Thu, Apr 02, 2020 at 01:29:04AM +0100, Alexey Bashtanov wrote:\n> + If greater than zero, bind parameter values reported in non-error\n> + statement-logging messages are trimmed to no more than this many bytes.\n> + If this value is specified without units, it is taken as bytes.\n> + Zero disables logging parameters with statements.\n> + <literal>-1</literal> (the default) makes parameters logged in full.\n\nsay: \"..causes parameters to be logged in full\".\n\nAlso..I just realized that this truncates *each* parameter, rather than\ntruncating the parameter list.\n\nSo say: \"\n|If greater than zero, each bind parameter value reported in a non-error\n|statement-logging messages is trimmed to no more than this number of bytes.\n\n> + Non-zero values add some overhead, as\n> + <productname>PostgreSQL</productname> will compute and store textual\n> + representations of parameter values in memory for all statements,\n> + even if they eventually do not get logged. 
\n\nsay: \"even if they are ultimately not logged\"\n\n> +++ b/src/backend/nodes/params.c\n> @@ -280,6 +280,7 @@ BuildParamLogString(ParamListInfo params, char **knownTextValues, int maxlen)\n> \t\t\t\toldCxt;\n> \tStringInfoData buf;\n> \n> +\tAssert(maxlen == -1 || maxlen > 0);\n\nYou can write: >= -1\n\n> -\t\t\t\t\tif (log_parameters_on_error)\n> +\t\t\t\t\tif (log_parameter_max_length_on_error != 0)\n\nI would write this as >= 0\n\n> +\t\t\t\t\t\tif (log_parameter_max_length_on_error > 0)\n> +\t\t\t\t\t\t{\n> + /*\n> + * We can trim the saved string, knowing that we\n> + * won't print all of it. But we must copy at\n> + * least two more full characters than\n> + * BuildParamLogString wants to use; otherwise it\n> + * might fail to include the trailing ellipsis.\n> + */\n> + knownTextValues[paramno] =\n> + pnstrdup(pstring,\n> + log_parameter_max_length_on_error\n> + + 2 * MAX_MULTIBYTE_CHAR_LEN);\n\nThe comment says we need at least 2 chars, but\nlog_parameter_max_length_on_error might be 1, so I think you can either add 64\nbyte fudge factor, like before, or do Max(log_parameter_max_length_on_error, 2).\n\n> +\t\t\t\t\t\t}\n> +\t\t\t\t\t\telse\n> +\t\t\t\t\t\t\tknownTextValues[paramno] = pstrdup(pstring);\n\nI suggest to handle the \"== -1\" case first and the > 0 case as \"else\".\n\nThanks,\n-- \nJustin\n\n\n", "msg_date": "Wed, 1 Apr 2020 20:33:44 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "Hi,\n\nPlease see the new version attached.\n>> + If greater than zero, bind parameter values reported in non-error\n>> + statement-logging messages are trimmed to no more than this many bytes.\n>> + If this value is specified without units, it is taken as bytes.\n>> + Zero disables logging parameters with statements.\n>> + <literal>-1</literal> (the default) makes parameters logged in full.\n> say: \"..causes parameters to be logged in 
full\".\n>\n> Also..I just realized that this truncates *each* parameter, rather than\n> truncating the parameter list.\n>\n> So say: \"\n> |If greater than zero, each bind parameter value reported in a non-error\n> |statement-logging messages is trimmed to no more than this number of bytes.\nokay\n\nI also changed \"trimmed to no more than this many bytes\" to \"trimmed to \nthis many bytes\".\nIt's not that pedantic any more but hopefully less awkward.\n\n>> + Non-zero values add some overhead, as\n>> + <productname>PostgreSQL</productname> will compute and store textual\n>> + representations of parameter values in memory for all statements,\n>> + even if they eventually do not get logged.\n> say: \"even if they are ultimately not logged\"\nokay\n\n>> +++ b/src/backend/nodes/params.c\n>> @@ -280,6 +280,7 @@ BuildParamLogString(ParamListInfo params, char **knownTextValues, int maxlen)\n>> \t\t\t\toldCxt;\n>> \tStringInfoData buf;\n>> \n>> +\tAssert(maxlen == -1 || maxlen > 0);\n> You can write: >= -1\nno, I'm specifically checking they don't pass 0\n\n\n>> -\t\t\t\t\tif (log_parameters_on_error)\n>> +\t\t\t\t\tif (log_parameter_max_length_on_error != 0)\n> I would write this as >= 0\nno, I'm specifically checking for both positives and -1\n\n>> +\t\t\t\t\t\tif (log_parameter_max_length_on_error > 0)\n>> +\t\t\t\t\t\t{\n>> + /*\n>> + * We can trim the saved string, knowing that we\n>> + * won't print all of it. 
But we must copy at\n>> + * least two more full characters than\n>> + * BuildParamLogString wants to use; otherwise it\n>> + * might fail to include the trailing ellipsis.\n>> + */\n>> + knownTextValues[paramno] =\n>> + pnstrdup(pstring,\n>> + log_parameter_max_length_on_error\n>> + + 2 * MAX_MULTIBYTE_CHAR_LEN);\n> The comment says we need at least 2 chars, but\n> log_parameter_max_length_on_error might be 1, so I think you can either add 64\n> byte fudge factor, like before, or do Max(log_parameter_max_length_on_error, 2).\nThat's the code I reused without deep analysis TBH.\nI think it's mostly for to allocate the space for the ellipsis in case \nit needs to be added,\nnot to copy any actual characters, that's why we add.\n\n>> +\t\t\t\t\t\t}\n>> +\t\t\t\t\t\telse\n>> +\t\t\t\t\t\t\tknownTextValues[paramno] = pstrdup(pstring);\n> I suggest to handle the \"== -1\" case first and the > 0 case as \"else\".\nGood, as long as I thought of this too, I'm changing that.\n\nBest, Alex", "msg_date": "Thu, 2 Apr 2020 10:35:03 +0100", "msg_from": "Alexey Bashtanov <bashtanov@imap.cc>", "msg_from_op": true, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "On 2020-Apr-02, Alexey Bashtanov wrote:\n\n\n> > > +\t\t\t\t\t\tif (log_parameter_max_length_on_error > 0)\n> > > +\t\t\t\t\t\t{\n> > > + /*\n> > > + * We can trim the saved string, knowing that we\n> > > + * won't print all of it. 
But we must copy at\n> > > + * least two more full characters than\n> > > + * BuildParamLogString wants to use; otherwise it\n> > > + * might fail to include the trailing ellipsis.\n> > > + */\n> > > + knownTextValues[paramno] =\n> > > + pnstrdup(pstring,\n> > > + log_parameter_max_length_on_error\n> > > + + 2 * MAX_MULTIBYTE_CHAR_LEN);\n> > The comment says we need at least 2 chars, but\n> > log_parameter_max_length_on_error might be 1, so I think you can either add 64\n> > byte fudge factor, like before, or do Max(log_parameter_max_length_on_error, 2).\n> That's the code I reused without deep analysis TBH.\n> I think it's mostly for to allocate the space for the ellipsis in case it\n> needs to be added,\n> not to copy any actual characters, that's why we add.\n\nMore or less. If you don't add these chars, mbcliplen doesn't think\nthere's a character there, so it ends up not adding the ellipsis. (I\ndon't remember why it has to be two chars rather than just one.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 2 Apr 2020 14:03:16 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> More or less. If you don't add these chars, mbcliplen doesn't think\n> there's a character there, so it ends up not adding the ellipsis. (I\n> don't remember why it has to be two chars rather than just one.)\n\nI think the idea is to be sure that there's a full multibyte character\nafter the truncation point; if the truncation point is within a multibyte\ncharacter, then you might have only a partial multibyte character after\nthat, which could cause problems.
Doing it this way, mbcliplen will\nnever look at the last possibly-truncated character.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Apr 2020 14:51:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "Alexey Bashtanov <bashtanov@imap.cc> writes:\n> Please see the new version attached.\n\nPushed with a bit of editorialization.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Apr 2020 15:06:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: control max length of parameter values logged" }, { "msg_contents": "\n> Pushed with a bit of editorialization.\nGreat, and thanks for the fixes!\n\nBest, Alex\n\n\n", "msg_date": "Thu, 2 Apr 2020 21:55:50 +0100", "msg_from": "Alexey Bashtanov <bashtanov@imap.cc>", "msg_from_op": true, "msg_subject": "Re: control max length of parameter values logged" } ]
[ { "msg_contents": "Just saw this in a PG 11.6 cluster starting a recovery:\n\n2020-02-07 10:45:40 EST FATAL: 42501: could not open file\n\"backup_label\": Permission denied\n2020-02-07 10:45:40 EST LOCATION: fsync_fname_ext, fd.c:3531\n\nThe label file was written with mode 0400 by a script that got\nthe contents from pg_stop_backup(boolean,boolean).\n\nBut during recovery, it is being poked at by fsync_fname_ext\nwhich wants to open it O_RDWR.\n\nI had assumed the label file would be treated as readonly\nduring recovery.\n\nIf the file needs to have 0600 permissions, should there be\na note in the nonexclusive-mode backup docs to say so?\n\nRegards,\n-Chap\n\n\n\n", "msg_date": "Fri, 7 Feb 2020 11:08:48 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Does recovery write to backup_label ?" }, { "msg_contents": "Hi,\n\nOn 2020-02-07 11:08:48 -0500, Chapman Flack wrote:\n> Just saw this in a PG 11.6 cluster starting a recovery:\n> \n> 2020-02-07 10:45:40 EST FATAL: 42501: could not open file\n> \"backup_label\": Permission denied\n> 2020-02-07 10:45:40 EST LOCATION: fsync_fname_ext, fd.c:3531\n\nWell, we generally assume that files in the data directory are writable,\nwith a very few exceptions. And we do need to be able rename\nbackup_label to backup_label.old. Which strictly speaking doesn't\nrequire write permissions on the file - but I assume that's what\ntriggers the issue here. There's IIRC platforms that don't like fsyncing\nfiles with a O_RDONLY fd, so if we want to rename safely (which entails\nfsyncing beforehand), we don't have much choice.\n\n\n> I had assumed the label file would be treated as readonly\n> during recovery.\n\nIt is IIRC documented that it does get renamed...\n\n> If the file needs to have 0600 permissions, should there be\n> a note in the nonexclusive-mode backup docs to say so?\n\nI'm not convinced that that's useful. The default is that everything\nneeds to be writable by postgres. 
The exceptions should be noted if\nanything, not the default.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Feb 2020 11:55:46 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Does recovery write to backup_label ?" }, { "msg_contents": "On 2/7/20 2:55 PM, Andres Freund wrote:\n\n>> If the file needs to have 0600 permissions, should there be\n>> a note in the nonexclusive-mode backup docs to say so?\n> \n> I'm not convinced that that's useful. The default is that everything\n> needs to be writable by postgres. The exceptions should be noted if\n> anything, not the default.\n\nCould this arguably be a special case, as most things in the datadir\nare put there by postgres, but the backup_label is now to be put there\n(and not even 'there' there, but added as a final step only to a\n'backup-copy-of-there' there) by the poor schmuck who reads the\nnon-exclusive backup docs as saying \"retrieve this content from\npg_stop_backup() and preserve in a file named backup_label\" and can't\nthink of any obvious reason to put write permission on a file\nthat preserves immutable history in a backup?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 7 Feb 2020 15:05:42 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: Does recovery write to backup_label ?" }, { "msg_contents": "On Sat, Feb 8, 2020 at 5:05 AM Chapman Flack <chap@anastigmatix.net> wrote:\n>\n> On 2/7/20 2:55 PM, Andres Freund wrote:\n>\n> >> If the file needs to have 0600 permissions, should there be\n> >> a note in the nonexclusive-mode backup docs to say so?\n> >\n> > I'm not convinced that that's useful. The default is that everything\n> > needs to be writable by postgres. 
The exceptions should be noted if\n> > anything, not the default.\n\n+1\n\n> Could this arguably be a special case, as most things in the datadir\n> are put there by postgres, but the backup_label is now to be put there\n> (and not even 'there' there, but added as a final step only to a\n> 'backup-copy-of-there' there) by the poor schmuck who reads the\n> non-exclusive backup docs as saying \"retrieve this content from\n> pg_stop_backup() and preserve in a file named backup_label\" and can't\n> think of any obvious reason to put write permission on a file\n> that preserves immutable history in a backup?\n\nI have no strong objection to add more note about permissions\nof the files that the users put in the data directory. If we really\ndo that, it'd be better to note about not only backup_label\nbut also other files like tablespace_map, recovery.signal,\npromotion trigger file, etc.\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Sat, 8 Feb 2020 12:06:06 +0900", "msg_from": "Fujii Masao <masao.fujii@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Does recovery write to backup_label ?" }, { "msg_contents": "\n\nOn 2/7/20 8:06 PM, Fujii Masao wrote:\n> On Sat, Feb 8, 2020 at 5:05 AM Chapman Flack <chap@anastigmatix.net> wrote:\n>>\n>> On 2/7/20 2:55 PM, Andres Freund wrote:\n>>\n>>>> If the file needs to have 0600 permissions, should there be\n>>>> a note in the nonexclusive-mode backup docs to say so?\n>>>\n>>> I'm not convinced that that's useful. The default is that everything\n>>> needs to be writable by postgres. 
The exceptions should be noted if\n>>> anything, not the default.\n> \n> +1\n\n+1.\n\nIn theory it would be more secure to only allow rename, but since \nPostgres can overwrite any other file in the cluster I don't see much \nvalue in making an exception for this file.\n\n>> Could this arguably be a special case, as most things in the datadir\n>> are put there by postgres, but the backup_label is now to be put there\n>> (and not even 'there' there, but added as a final step only to a\n>> 'backup-copy-of-there' there) by the poor schmuck who reads the\n>> non-exclusive backup docs as saying \"retrieve this content from\n>> pg_stop_backup() and preserve in a file named backup_label\" and can't\n>> think of any obvious reason to put write permission on a file\n>> that preserves immutable history in a backup?\n> \n> I have no strong objection to add more note about permissions\n> of the files that the users put in the data directory. If we really\n> do that, it'd be better to note about not only backup_label\n> but also other files like tablespace_map, recovery.signal,\n> promotion trigger file, etc.\n\nMore documentation seems like a good idea, especially since \nnon-exclusive backup requires the app to choose/set permissions.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Sun, 9 Feb 2020 21:53:47 -0700", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Does recovery write to backup_label ?" } ]
[ { "msg_contents": "Late last year, I did some work to make it possible to use simplehash\nin frontend code.[1] However, a hash table is not much good unless one\nalso has some hash functions that one can use to hash the keys that\nneed to be inserted into that hash table. I initially thought that\nsolving this problem was going to be pretty annoying, but when I\nlooked at it again today I found what I think is a pretty simple way\nto adapt things so that the hashing routines we use in the backend are\neasily available to frontend code.\n\nHere are some patches for that. It may make sense to combine some of\nthese in terms of actually getting this committed, but I have\nseparated them here so that it is, hopefully, easy to see what I did\nand why I did it. There are three basic problems which are solved by\nthe three preparatory patches:\n\n0001 - hashfn.c has a couple of routines that depend on bitmapsets,\nand bitmapset.c is currently backend-only. Fix by moving this code\nnear related code in bitmapset.c.\n\n0002 - some of the prototypes for functions in hashfn.c are in\nhsearch.h, mixed with the dynahash stuff, and others are in\nhashutils.c. Fix by making hashutils.h the one true header for\nhashfn.c.\n\n0003 - Some of hashfn.c's routines return Datum, but that's\nbackend-only. Fix by renaming the functions and changing the return\ntypes to uint32 and uint64, and add static inline wrappers with the\nold names that convert to Datum. 
Just changing the return types of the\nexisting functions seemed like it would've required a lot more code\nchurn, and also seems like it could cause silent breakage in the\nfuture.\n\nThanks,\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n[1] http://postgr.es/m/CA+Tgmob8oyh02NrZW=xCScB+5GyJ-jVowE3+TWTUmPF=FsGWTA@mail.gmail.com", "msg_date": "Fri, 7 Feb 2020 12:30:05 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "allow frontend use of the backend's core hashing functions" }, { "msg_contents": "Hi,\n\nI have spent some time reviewing the patches and overall it looks good to\nme.\n\nHowever, I have few cosmetic review comments for 0003 patch as below;\n\n1:\n+++ b/src/backend/utils/hash/hashfn.c\n@@ -16,15 +16,14 @@\n * It is expected that every bit of a hash function's 32-bit result is\n * as random as every other; failure to ensure this is likely to lead\n * to poor performance of hash tables. In most cases a hash\n- * function should use hash_any() or its variant hash_uint32().\n+ * function should use hash_bytes() or its variant hash_bytes_uint32(),\n+ * or the wrappers hash_any() and *hash_any_uint32* defined in hashfn.h.\n\nHere, indicated function name should be *hash_uint32*.\n\n2: I can see renamed functions are declared twice in hashutils.c. 
I think\nduplicate declarations after #endif can be removed,\n\n+extern uint32 hash_bytes(const unsigned char *k, int keylen);\n+extern uint64 hash_bytes_extended(const unsigned char *k,\n+ int keylen, uint64 seed);\n+extern uint32 hash_bytes_uint32(uint32 k);\n+extern uint64 hash_bytes_uint32_extended(uint32 k, uint64 seed);\n+\n+#ifndef FRONTEND\n..\nWrapper functions\n..\n+#endif\n+\n+extern uint32 hash_bytes(const unsigned char *k, int keylen);\n+extern uint64 hash_bytes_extended(const unsigned char *k,\n+ int keylen, uint64 seed);\n+extern uint32 hash_bytes_uint32(uint32 k);\n+extern uint64 hash_bytes_uint32_extended(uint32 k, uint64 seed);\n\n\n3: The first line of the commit message has one typo.\ndefiend => defined.\n\nOn Fri, Feb 7, 2020 at 11:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Late last year, I did some work to make it possible to use simplehash\n> in frontend code.[1] However, a hash table is not much good unless one\n> also has some hash functions that one can use to hash the keys that\n> need to be inserted into that hash table. I initially thought that\n> solving this problem was going to be pretty annoying, but when I\n> looked at it again today I found what I think is a pretty simple way\n> to adapt things so that the hashing routines we use in the backend are\n> easily available to frontend code.\n>\n> Here are some patches for that. It may make sense to combine some of\n> these in terms of actually getting this committed, but I have\n> separated them here so that it is, hopefully, easy to see what I did\n> and why I did it. There are three basic problems which are solved by\n> the three preparatory patches:\n>\n> 0001 - hashfn.c has a couple of routines that depend on bitmapsets,\n> and bitmapset.c is currently backend-only. 
Fix by moving this code\n> near related code in bitmapset.c.\n>\n> 0002 - some of the prototypes for functions in hashfn.c are in\n> hsearch.h, mixed with the dynahash stuff, and others are in\n> hashutils.c. Fix by making hashutils.h the one true header for\n> hashfn.c.\n>\n> 0003 - Some of hashfn.c's routines return Datum, but that's\n> backend-only. Fix by renaming the functions and changing the return\n> types to uint32 and uint64, and add static inline wrappers with the\n> old names that convert to Datum. Just changing the return types of the\n> existing functions seemed like it would've required a lot more code\n> churn, and also seems like it could cause silent breakage in the\n> future.\n>\n> Thanks,\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n> [1]\n> http://postgr.es/m/CA+Tgmob8oyh02NrZW=xCScB+5GyJ-jVowE3+TWTUmPF=FsGWTA@mail.gmail.com\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.\n\n", "msg_date": "Thu, 13 Feb 2020 17:14:23 +0530", "msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: allow frontend use of the backend's core hashing functions" }, {
"msg_contents": "> On Feb 13, 2020, at 3:44 AM, Suraj Kharage <suraj.kharage@enterprisedb.com> wrote:\n> \n> Hi,\n> \n> I have spent some time reviewing the patches and overall it looks good to me.\n> \n> However, I have few cosmetic review comments for 0003 patch as below;\n> \n> 1: \n> +++ b/src/backend/utils/hash/hashfn.c\n> @@ -16,15 +16,14 @@\n> *\t It is expected that every bit of a hash function's 32-bit result is\n> *\t as random as every other; failure to ensure this is likely to lead\n> *\t to poor performance of hash tables. In most cases a hash\n> - *\t function should use hash_any() or its variant hash_uint32().\n> + *\t function should use hash_bytes() or its variant hash_bytes_uint32(),\n> + *\t or the wrappers hash_any() and hash_any_uint32 defined in hashfn.h.\n> \n> Here, indicated function name should be hash_uint32.\n\n+1\n\n> 2: I can see renamed functions are declared twice in hashutils.c.
I think duplicate declarations after #endif can be removed,\n> \n> +extern uint32 hash_bytes(const unsigned char *k, int keylen);\n> +extern uint64 hash_bytes_extended(const unsigned char *k,\n> +\t int keylen, uint64 seed);\n> +extern uint32 hash_bytes_uint32(uint32 k);\n> +extern uint64 hash_bytes_uint32_extended(uint32 k, uint64 seed);\n> +\n> +#ifndef FRONTEND\n> ..\n> Wrapper functions\n> ..\n> +#endif\n> +\n> +extern uint32 hash_bytes(const unsigned char *k, int keylen);\n> +extern uint64 hash_bytes_extended(const unsigned char *k,\n> +\t int keylen, uint64 seed);\n> +extern uint32 hash_bytes_uint32(uint32 k);\n> +extern uint64 hash_bytes_uint32_extended(uint32 k, uint64 seed);\n\n+1\n\n> 3: The first line of the commit message has one typo.\n> defiend => defined.\n\n+1\n\nI have made these changes and rebased Robert’s patches but otherwise changed nothing. Here they are:\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 13 Feb 2020 08:26:49 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: allow frontend use of the backend's core hashing functions" }, { "msg_contents": "On Thu, Feb 13, 2020 at 11:26 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I have made these changes and rebased Robert’s patches but otherwise changed nothing. Here they are:\n\nThanks. Anyone else have comments? 
I think this is pretty\nstraightforward and unobjectionable work so I'm inclined to press\nforward with committing it fairly soon, but if someone feels\notherwise, please speak up.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 14 Feb 2020 10:33:04 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: allow frontend use of the backend's core hashing functions" }, { "msg_contents": "On Fri, Feb 14, 2020 at 10:33:04AM -0500, Robert Haas wrote:\n> On Thu, Feb 13, 2020 at 11:26 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > I have made these changes and rebased Robert’s patches but\n> > otherwise changed nothing. Here they are:\n> \n> Thanks. Anyone else have comments? I think this is pretty\n> straightforward and unobjectionable work so I'm inclined to press\n> forward with committing it fairly soon, but if someone feels\n> otherwise, please speak up.\n\nOne question. It might be possible to make these functions faster\nusing compiler intrinsics. Would those still be available to front-end\ncode?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Fri, 14 Feb 2020 17:15:23 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: allow frontend use of the backend's core hashing functions" }, { "msg_contents": "\n\n> On Feb 14, 2020, at 8:15 AM, David Fetter <david@fetter.org> wrote:\n> \n> On Fri, Feb 14, 2020 at 10:33:04AM -0500, Robert Haas wrote:\n>> On Thu, Feb 13, 2020 at 11:26 AM Mark Dilger\n>> <mark.dilger@enterprisedb.com> wrote:\n>>> I have made these changes and rebased Robert’s patches but\n>>> otherwise changed nothing. Here they are:\n>> \n>> Thanks. Anyone else have comments? 
I think this is pretty\n>> straightforward and unobjectionable work so I'm inclined to press\n>> forward with committing it fairly soon, but if someone feels\n>> otherwise, please speak up.\n> \n> One question. It might be possible to make these functions faster\n> using compiler intrinsics. Would those still be available to front-end\n> code?\n\nDo you have a specific proposal that would preserve on-disk compatibility?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 14 Feb 2020 08:16:47 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: allow frontend use of the backend's core hashing functions" }, { "msg_contents": "On Fri, Feb 14, 2020 at 11:15 AM David Fetter <david@fetter.org> wrote:\n> One question. It might be possible to make these functions faster\n> using compiler intrinsics. Would those still be available to front-end\n> code?\n\nWhy not? We use the same compiler for frontend code as we do for\nbackend code. Possibly you might need some more header-file\nrejiggering someplace, but I don't know of anything specific or see\nany reason why it would be particularly painful if we had to do\nsomething.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 14 Feb 2020 11:21:37 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: allow frontend use of the backend's core hashing functions" }, { "msg_contents": "On Fri, Feb 14, 2020 at 08:16:47AM -0800, Mark Dilger wrote:\n> > On Feb 14, 2020, at 8:15 AM, David Fetter <david@fetter.org> wrote:\n> > \n> > On Fri, Feb 14, 2020 at 10:33:04AM -0500, Robert Haas wrote:\n> >> On Thu, Feb 13, 2020 at 11:26 AM Mark Dilger\n> >> <mark.dilger@enterprisedb.com> wrote:\n> >>> I have made these changes and rebased Robert’s patches but\n> >>> otherwise changed nothing. 
Here they are:\n> >> \n> >> Thanks. Anyone else have comments? I think this is pretty\n> >> straightforward and unobjectionable work so I'm inclined to press\n> >> forward with committing it fairly soon, but if someone feels\n> >> otherwise, please speak up.\n> > \n> > One question. It might be possible to make these functions faster\n> > using compiler intrinsics. Would those still be available to front-end\n> > code?\n> \n> Do you have a specific proposal that would preserve on-disk compatibility?\n\nI hadn't planned on changing the representation, just cutting\ninstructions out of the calculation of same.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Fri, 14 Feb 2020 17:29:01 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: allow frontend use of the backend's core hashing functions" }, { "msg_contents": "\n\n> On Feb 14, 2020, at 8:29 AM, David Fetter <david@fetter.org> wrote:\n> \n> On Fri, Feb 14, 2020 at 08:16:47AM -0800, Mark Dilger wrote:\n>>> On Feb 14, 2020, at 8:15 AM, David Fetter <david@fetter.org> wrote:\n>>> \n>>> On Fri, Feb 14, 2020 at 10:33:04AM -0500, Robert Haas wrote:\n>>>> On Thu, Feb 13, 2020 at 11:26 AM Mark Dilger\n>>>> <mark.dilger@enterprisedb.com> wrote:\n>>>>> I have made these changes and rebased Robert’s patches but\n>>>>> otherwise changed nothing. Here they are:\n>>>> \n>>>> Thanks. Anyone else have comments? I think this is pretty\n>>>> straightforward and unobjectionable work so I'm inclined to press\n>>>> forward with committing it fairly soon, but if someone feels\n>>>> otherwise, please speak up.\n>>> \n>>> One question. It might be possible to make these functions faster\n>>> using compiler intrinsics. 
Would those still be available to front-end\n>>> code?\n>> \n>> Do you have a specific proposal that would preserve on-disk compatibility?\n> \n> I hadn't planned on changing the representation, just cutting\n> instructions out of the calculation of same.\n\nOk, I misunderstood.\n\nI thought the question was about using compiler intrinsics to implement an alltogether different hashing algorithm than the one currently in use and whether exposing the hash function to frontend code would lock down the algorithm in a way that would make it harder to change in the future. That lead me to the question of whether we had sufficient flexibility to entertain changing the hashing algorithm anyway.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 14 Feb 2020 08:37:40 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: allow frontend use of the backend's core hashing functions" }, { "msg_contents": "On Fri, Feb 14, 2020 at 9:03 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Feb 13, 2020 at 11:26 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > I have made these changes and rebased Robert’s patches but otherwise changed nothing. Here they are:\n>\n> Thanks. Anyone else have comments? I think this is pretty\n> straightforward and unobjectionable work so I'm inclined to press\n> forward with committing it fairly soon, but if someone feels\n> otherwise, please speak up.\n\nI've committed 0001 through 0003 as revised by Mark in accordance with\nthe comments from Suraj. Here's the last patch again with a tweak to\ntry not to break the Windows build, per some off-list advice I\nreceived on how not to break the Windows build. 
Barring complaints\nfrom the buildfarm or otherwise, I'll commit this one too.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 24 Feb 2020 17:32:29 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: allow frontend use of the backend's core hashing functions" }, { "msg_contents": "On Mon, Feb 24, 2020 at 5:32 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I've committed 0001 through 0003 as revised by Mark in accordance with\n> the comments from Suraj. Here's the last patch again with a tweak to\n> try not to break the Windows build, per some off-list advice I\n> received on how not to break the Windows build. Barring complaints\n> from the buildfarm or otherwise, I'll commit this one too.\n\nDone.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 27 Feb 2020 09:33:15 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: allow frontend use of the backend's core hashing functions" } ]
[ { "msg_contents": "See\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=388d4351f78dfa6082922074127d496e6f525033\n\n(Note: these cover through 9710d3d4a, but not the additional partitioning\nfix I see Alvaro just pushed.)\n\nPlease send comments before Sunday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Feb 2020 16:56:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Draft release notes are up for review" }, { "msg_contents": "> On 7 Feb 2020, at 22:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Please send comments before Sunday.\n\nMy name is misspelled master 7d0bcb047 s/Gustaffson/Gustafsson/. Nothing else\nstood out from skimming the diff.\n\ncheers ./daniel\n\n", "msg_date": "Fri, 7 Feb 2020 23:13:03 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Draft release notes are up for review" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> My name is misspelled master 7d0bcb047 s/Gustaffson/Gustafsson/. Nothing else\n> stood out from skimming the diff.\n\nAh, looks like I copied and pasted that from the commit log. Sorry,\nwill fix in next draft.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Feb 2020 17:18:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Draft release notes are up for review" } ]
[ { "msg_contents": "Hi,\nI am migrating my applications that use postgres client from msvc 2010\n(32bits) to msvc 2019 (32 bits).\nCompilation using msvc 2019 (64 bits), works very well.\nBut the build using msvc 2019 (32 bit) is not working.\nThe 32-bit Platform variable is set to x86, resulting in the first error.\n\n\"Project\" C: \\ dll \\ postgres \\ pgsql.sln \"on node 1 (default targets).\nC: \\ dll \\ postgres \\ pgsql.sln.metaproj: error MSB4126: The specified\nsolution configuration \"Release | x86\" is invalid. Plea\nif specify a valid solution configuration using the Configuration and\nPlatform properties (e.g. MSBuild.exe Solution.sl\nn / p: Configuration = Debug / p: Platform = \"Any CPU\") or leave those\nproperties blank to use the default solution configurati\non. [C: \\ dll \\ postgres \\ pgsql.sln]\nDone Building Project \"C: \\ dll \\ postgres \\ pgsql.sln\" (default targets) -\nFAILED. \"\n\nThis is because the Sub DeterminePlatform function of the Solution.pm\nprogram uses the following expression:\n\"my $ output =` cl /? 2> & 1`; \"\nThe result of msvc 2019 (32bits) is:\n\"Microsoft (R) C / C ++ Optimizing Compiler Version 19.24.28315 for x86\"\n\nBy setting the Platform variable manually to WIn32, the compilation process\ncontinues until it stops at:\n\"Generating configuration headers ...\"\n\nWhere the second error occurs:\nunused defines: HAVE_STRUCT_CMSGCRED\nUSE_NAMED_POSI ... 
etc ...\nALIGNOF_DOUBLE USE_DEV_URANDOM at C: \\ dll \\ postgres \\ src \\ tools \\ msvc\n/ Mkvcbuild.pm line 842.\n\nQuestion:\nWill Postgres continue to support 32-bit client?\n\nregards,\nRanier Vilela\n\n", "msg_date": "Fri, 7 Feb 2020 23:34:54 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Postgres 32 bits client compilation fail. Win32 bits client is\n supported?"
}, { "msg_contents": "On Sat, Feb 8, 2020 at 8:05 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Hi,\n> I am migrating my applications that use postgres client from msvc 2010 (32bits) to msvc 2019 (32 bits).\n> Compilation using msvc 2019 (64 bits), works very well.\n> But the build using msvc 2019 (32 bit) is not working.\n> The 32-bit Platform variable is set to x86, resulting in the first error.\n>\n> \"Project\" C: \\ dll \\ postgres \\ pgsql.sln \"on node 1 (default targets).\n> C: \\ dll \\ postgres \\ pgsql.sln.metaproj: error MSB4126: The specified solution configuration \"Release | x86\" is invalid. Plea\n> if specify a valid solution configuration using the Configuration and Platform properties (e.g. MSBuild.exe Solution.sl\n> n / p: Configuration = Debug / p: Platform = \"Any CPU\") or leave those properties blank to use the default solution configurati\n> on. [C: \\ dll \\ postgres \\ pgsql.sln]\n> Done Building Project \"C: \\ dll \\ postgres \\ pgsql.sln\" (default targets) - FAILED. \"\n>\n> This is because the Sub DeterminePlatform function of the Solution.pm program uses the following expression:\n> \"my $ output =` cl /? 2> & 1`; \"\n> The result of msvc 2019 (32bits) is:\n> \"Microsoft (R) C / C ++ Optimizing Compiler Version 19.24.28315 for x86\"\n>\n> By setting the Platform variable manually to WIn32, the compilation process continues until it stops at:\n> \"Generating configuration headers ...\"\n>\n> Where the second error occurs:\n> unused defines: HAVE_STRUCT_CMSGCRED\n> USE_NAMED_POSI ... 
etc ...\n> ALIGNOF_DOUBLE USE_DEV_URANDOM at C: \\ dll \\ postgres \\ src \\ tools \\ msvc / Mkvcbuild.pm line 842.\n>\n\nTry by removing src/include/pg_config.\n\n> Question:\n> Will Postgres continue to support 32-bit client?\n>\n\nI am not aware of any discussion related to stopping the support of\n32-bit client.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 9 Feb 2020 13:05:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres 32 bits client compilation fail. Win32 bits client is\n supported?" }, { "msg_contents": "On Sun, 9 Feb 2020 at 15:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Feb 8, 2020 at 8:05 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > Hi,\n> > I am migrating my applications that use postgres client from msvc 2010 (32bits) to msvc 2019 (32 bits).\n> > Compilation using msvc 2019 (64 bits), works very well.\n> > But the build using msvc 2019 (32 bit) is not working.\n> > The 32-bit Platform variable is set to x86, resulting in the first error.\n> >\n> > \"Project\" C: \\ dll \\ postgres \\ pgsql.sln \"on node 1 (default targets).\n> > C: \\ dll \\ postgres \\ pgsql.sln.metaproj: error MSB4126: The specified solution configuration \"Release | x86\" is invalid. Plea\n> > if specify a valid solution configuration using the Configuration and Platform properties (e.g. MSBuild.exe Solution.sl\n> > n / p: Configuration = Debug / p: Platform = \"Any CPU\") or leave those properties blank to use the default solution configurati\n> > on. [C: \\ dll \\ postgres \\ pgsql.sln]\n> > Done Building Project \"C: \\ dll \\ postgres \\ pgsql.sln\" (default targets) - FAILED. \"\n> >\n> > This is because the Sub DeterminePlatform function of the Solution.pm program uses the following expression:\n> > \"my $ output =` cl /? 
2> & 1`; \"\n> > The result of msvc 2019 (32bits) is:\n> > \"Microsoft (R) C / C ++ Optimizing Compiler Version 19.24.28315 for x86\"\n> >\n> > By setting the Platform variable manually to WIn32, the compilation process continues until it stops at:\n> > \"Generating configuration headers ...\"\n> >\n> > Where the second error occurs:\n> > unused defines: HAVE_STRUCT_CMSGCRED\n> > USE_NAMED_POSI ... etc ...\n> > ALIGNOF_DOUBLE USE_DEV_URANDOM at C: \\ dll \\ postgres \\ src \\ tools \\ msvc / Mkvcbuild.pm line 842.\n> >\n>\n> Try by removing src/include/pg_config.\n>\n> > Question:\n> > Will Postgres continue to support 32-bit client?\n> >\n>\n> I am not aware of any discussion related to stopping the support of\n> 32-bit client.\n\nBuildfarm member whelk reports for 32-bit Windows.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=whelk&br=HEAD\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=whelk&br=REL_12_STABLE\n\nIt says it uses Microsoft Visual C++ 2010 .\n\nI don't see any other members building for 32-bit. But it should work\nand as you say, nothing's been discussed about removing it.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n", "msg_date": "Mon, 10 Feb 2020 11:55:09 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Postgres 32 bits client compilation fail. Win32 bits client is\n supported?" }, { "msg_contents": "On Mon, Feb 10, 2020 at 11:55:09AM +0800, Craig Ringer wrote:\n> I don't see any other members building for 32-bit. But it should work\n> and as you say, nothing's been discussed about removing it.\n\nYes, it works normally AFAIK and there is no reason to remove this\nsupport either. 
My guess is that the repository was not cleaned up\nproperly when attempting the 32b compilation after a 64b compilation\nwas completed.\n--\nMichael", "msg_date": "Mon, 10 Feb 2020 15:38:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Postgres 32 bits client compilation fail. Win32 bits client is\n supported?" }, { "msg_contents": "Hi guys, thank you for the answers.\n\nOn Mon, Feb 10, 2020 at 3:38 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Mon, Feb 10, 2020 at 11:55:09AM +0800, Craig Ringer wrote:\n> > I don't see any other members building for 32-bit. But it should work\n> > and as you say, nothing's been discussed about removing it.\n>\n> Yes, it works normally AFAIK and there is no reason to remove this\n> support either. My guess is that the repository was not cleaned up\n> properly when attempting the 32b compilation after a 64b compilation\n> was completed.\n>\nI tried from a fresh source install.\n\nCraig, the buildfarm uses msvc 2013.\n\nAmit, your suggestion worked, thank you.\nI removed pg_config.h and compiled libpq.dll, but the tool reported\n 8 Warning(s)\n 55 Error(s)\n\nThe first error is:\n\"adminpack.obj : error LNK2019: unresolved external symbol _Int64GetDatum\nreferenced in function _pg_file_write [C:\\dll\\postgres\\adminpack.vcxproj]\n.\\Release\\adminpack\\adminpack.dll : fatal error LNK1120: 1 unresolved\nexternals [C:\\dll\\postgres\\adminpack.vcxproj]\nDone Building Project \"C:\\dll\\postgres\\adminpack.vcxproj\" (default targets)\n-- FAILED.\"\n\nUnfortunately, I will have to live with 32 bits clients for a long time yet.\nI still have customers using Windows XP yet ...\n\nbest regards,\nRanier Vilela\n\n", "msg_date": "Mon, 10 Feb 2020 09:13:35 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres 32 bits client compilation fail. Win32 bits client is\n supported?" }, { "msg_contents": "On Mon, 10 Feb 2020 at 20:14, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> \"adminpack.obj : error LNK2019: unresolved external symbol _Int64GetDatum referenced in function _pg_file_write [C:\\dll\\postgres\\adminpack.vcxproj]\n> .\\Release\\adminpack\\adminpack.dll : fatal error LNK1120: 1 unresolved externals [C:\\dll\\postgres\\adminpack.vcxproj]\n> Done Building Project \"C:\\dll\\postgres\\adminpack.vcxproj\" (default targets) -- FAILED.\"\n\nYou are almost certainly trying to build with a mismatched\nconfiguration vs toolchain. See \"postgres.h\" for the definition of\nInt64GetDatum. 
It's a macro if you're on a 64-bit arch where we can\npass 64-bit fields by-value efficiently; otherwise it's a function.\nYou're probably trying to link 32-bit extensions against a 64-bit\npostgres.\n\nClean everything. Completely. Set up a totally clean MSVC environment\nand ensure you have ONLY the 32-bit toolchain on the PATH, only 32-bit\nlibraries, etc. Then retry.\n\nRather than building via MSVC's user interface, use msbuild.exe with\nthe project files PostgreSQL generates for you.\n\nSee if that helps.\n\nI've seen many mangled setups when there are mixes of different MSVC\ntoolchain versions on a machine. I now maintain isolated VMs with\nexactly one MSVC version on each to address the amazing level of\nbreakage and incompatibility that MS's various toolchains seem to\ndeliver.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n", "msg_date": "Mon, 10 Feb 2020 21:53:03 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Postgres 32 bits client compilation fail. Win32 bits client is\n supported?" }, { "msg_contents": "On Mon, Feb 10, 2020 at 10:53 AM Craig Ringer <craig@2ndquadrant.com>\nwrote:\n\n> On Mon, 10 Feb 2020 at 20:14, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > \"adminpack.obj : error LNK2019: unresolved external symbol\n> _Int64GetDatum referenced in function _pg_file_write\n> [C:\\dll\\postgres\\adminpack.vcxproj]\n> > .\\Release\\adminpack\\adminpack.dll : fatal error LNK1120: 1 unresolved\n> externals [C:\\dll\\postgres\\adminpack.vcxproj]\n> > Done Building Project \"C:\\dll\\postgres\\adminpack.vcxproj\" (default\n> targets) -- FAILED.\"\n>\n> You are almost certainly trying to build with a mismatched\n> configuration vs toolchain. See \"postgres.h\" for the definition of\n> Int64GetDatum. 
It's a macro if you're on a 64-bit arch where we can\n> pass 64-bit fields by-value efficiently; otherwise it's a function.\n> You're probably trying to link 32-bit extensions against a 64-bit\n> postgres.\n>\n> Clean everything. Completely. Set up a totally clean MSVC environment\n> and ensure you have ONLY the 32-bit toolchain on the PATH, only 32-bit\n> libraries, etc. Then retry.\n>\n> Rather than building via MSVC's user interface, use msbuild.exe with\n> the project files PostgreSQL generates for you.\n>\n> See if that helps.\n>\n> I've seen many mangled setups when there are mixes of different MSVC\n> toolchain versions on a machine. I now maintain isolated VMs with\n> exactly one MSVC version on each to address the amazing level of\n> breakage and incompatibility that MS's various toolchains seem to\n> deliver.\n>\nYou know those times when you feel like you haven't done your job. This is\none of them.\nThanks Craig and Michael, my mistake, the 32 bits project build was done\ncomplete.\nThere was really a mixing problem between 32 and 64 bits compilations.\nAfter downloading the current full version of the sources again and\nstarting the 32 bits compilation, everything went well.\nIt's great to know that Postgres compiles and runs all regression tests in\n32 bits (all 196).\n\nbest regards,\nRanier Vilela\n\n", "msg_date": "Mon, 10 Feb 2020 22:07:22 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres 32 bits client compilation fail. Win32 bits client is\n supported?" }, { "msg_contents": "\nOn 2/10/20 7:13 AM, Ranier Vilela wrote:\n>\n>\n> Unfortunately, I will have to live with 32 bits clients for a long\n> time yet.\n> I still have customers using Windows XP yet ...\n>\n>\n\n\nAFAIK we don't support WinXP past Postgres Release 10 because of the\nlack of huge page support. 
That won't affect clients, but it does mean\nwe won't build or test later releases on XP.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 11 Feb 2020 16:08:39 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Postgres 32 bits client compilation fail. Win32 bits client is\n supported?" }, { "msg_contents": "On Tue, Feb 11, 2020 at 6:08 PM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n>\n> On 2/10/20 7:13 AM, Ranier Vilela wrote:\n> >\n> >\n> > Unfortunately, I will have to live with 32 bits clients for a long\n> > time yet.\n> > I still have customers using Windows XP yet ...\n> >\n> >\n>\n>\n> AFAIK we don't support WinXP past Postgres Release 10 because of the\n> lack of huge page support. That won't affect clients, but it does mean\n> we won't build or test later releases on XP.\n>\nOh yes of course, I understand.\nI support 32 bits clients that still use Windows XP, with version 9.6, for\nboth client and server.\nUnfortunately, the notorious Windows 7 32 bits is still in use.\nFor these I am migrating to version 12, both for client and server.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 11 Feb 2020 19:01:41 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres 32 bits client compilation fail. Win32 bits client is\n supported?" } ]
[ { "msg_contents": "Hi hackers, attached is a proof of concept patch adding a new base type\ncalled \"rational\" to represent fractions. It includes arithmetic,\nsimplification, conversion to/from float, finding intermediates with a\nstern-brocot tree, custom aggregates, and btree/hash indices.\n\nThe primary motivation was as a column type to support user-defined\nordering of rows (with the ability to dynamically rearrange rows). The\npostgres wiki has a page [0] about this using pairs of integers to\nrepresent fractions, but it's not particularly elegant.\n\nI wrote about options for implementing user-defined order in an article\n[1] and ended up creating a postgres extension, pg_rational [2], to\nprovide the new type. People have been using the extension, but told me\nthey wished they could use it on hosted platforms like Amazon RDS which\nhave a limited set of whitelisted extensions. Thus I'm submitting this\npatch to discuss getting the feature in core postgres.\n\nFor usage, see the included regression test. To see how it works for the\nuser-defined order use case see my article. 
I haven't included docs in\nthe patch since the interface may change with community feedback.\n\n0: https://wiki.postgresql.org/wiki/User-specified_ordering_with_fractions\n1: https://begriffs.com/posts/2018-03-20-user-defined-order.html\n2: https://github.com/begriffs/pg_rational\n\n-- \nJoe Nelson https://begriffs.com", "msg_date": "Fri, 7 Feb 2020 22:25:54 -0600", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": true, "msg_subject": "POC: rational number type (fractions)" }, { "msg_contents": "Hello,\nIt seems you are not the first to be interested in such a feature.\n\nThere was a similar extension used in \"incremental view maintenance\"\ntesting:\nhttps://github.com/nuko-yokohama/pg_fraction\n\nI haven't tried it myself.\n\nRegards\nPAscal \n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Sat, 8 Feb 2020 02:54:49 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "On Fri, 2020-02-07 at 22:25 -0600, Joe Nelson wrote:\n> Hi hackers, attached is a proof of concept patch adding a new base\n> type\n> called \"rational\" to represent fractions.\n\nHi!\n\n> The primary motivation was as a column type to support user-defined\n> ordering of rows (with the ability to dynamically rearrange rows).\n> The\n> postgres wiki has a page [0] about this using pairs of integers to\n> represent fractions, but it's not particularly elegant.\n\nSounds good.\n\n> I wrote about options for implementing user-defined order in an\n> article\n> [1] and ended up creating a postgres extension, pg_rational [2], to\n> provide the new type. People have been using the extension, but told\n> me\n> they wished they could use it on hosted platforms like Amazon RDS\n> which\n> have a limited set of whitelisted extensions. 
Thus I'm submitting\n> this\n> patch to discuss getting the feature in core postgres.\n\nThe decision between an extension and a core type is a tricky one. To\nput an extension in core, usually it's good to show either that it\nsatisfies something in the SQL standard, or that there is some specific\ntechnical advantage (like integration with the syntax or the type\nsystem).\n\nIntegrating it in core just to make it easier to use is a double-edged\nsword. It does make it easier in some environments; but it also removes\npressure to make those environments offer better support for the\nextension ecosystem, ultimately weakening extensions overall.\n\nIn this case I believe it could be a candidate for in-core, but it's\nborderline. The reasons it makes sense to me are:\n\n1. It seems that there's \"only one way to do it\". It would be good to\nvalidate that this really covers most of the use cases of rational\nnumbers, but if so, that makes it a better candidate for building it\ninto core. It would also be good to compare against other\nimplementations (perhaps in normal programming languages) to see if\nthere is anything interesting.\n\n2. I don't expect this will be much of a maintenance burden.\n\nKeep in mind that if you do want this to be in core, the data format\nhas to be very stable to maintain pg_upgrade compatibility.\n\n\nPatch comments:\n\n* Please include docs.\n\n* I'm worried about the use of int32. Does that cover all of the\nreasonable use cases of rational?\n\n* Shouldn't:\n\n /*\n * x = coalesce(lo, arg[0]) y = coalesce(hi, arg[1])\n */\n\n be: \n\n /*\n * x = coalesce(arg[0], lo) y = coalesce(arg[1], hi)\n */\n\n* More generally, what's the philosophy regarding NULL and rational?\nWhy are NULL arguments returning non-NULL answers?\n\n* Is rational_intermediate() well-defined, or can it just choose any\nrational between the two arguments?\n\n* Can you discuss how cross-type comparisons and conversions should be\nhandled (e.g. 
int8, numeric, float8)?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 11 Feb 2020 16:51:09 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "Jeff Davis wrote:\n> The decision between an extension and a core type is a tricky one. To\n> put an extension in core, usually it's good to show either that it\n> satisfies something in the SQL standard, or that there is some\n> specific technical advantage (like integration with the syntax or the\n> type system).\n\nI don't see any references to \"rational\" in the SQL standard (fifth ed,\n2016) and the only reference to \"fraction\" is for fractions of a second\nin datetime. Doesn't look like SQL includes rational numbers.\n\nAlso I don't believe I'm getting extra abilities by putting this in core vs\nusing an extension. Perhaps there's a syntax change that would make rationals\nmore pleasant to deal with than how they are in this patch, but I haven't\nimagined what it would be, and it's not backed by a standard.\n\n> Integrating it in core just to make it easier to use is a double-edged\n> sword. It does make it easier in some environments; but it also\n> removes pressure to make those environments offer better support for\n> the extension ecosystem, ultimately weakening extensions overall.\n\nMakes sense. We petitioned RDS to allow the pg_rational extension, [0]\nbut they didn't respond. Not sure how that process is supposed to work.\n\n0: https://github.com/begriffs/pg_rational/issues/7\n\n> In this case I believe it could be a candidate for in-core, but it's\n> borderline. The reasons it makes sense to me are:\n> \n> 1. It seems that there's \"only one way to do it\".\n\nThere may be more than one way to do it actually. For instance the choice\nbetween a fixed size type with limits on the fractions it can represent, vs one\nthat can grow to hold any fraction. 
I chose the first option, to make the type\nthe same size as float8. My reasoning was that there would be no space\noverhead for choosing rational over float.\n\nAlso there's the choice of whether to always store fractions in normal\nform (lowest terms). This patch allows fractions to be in non-normal\nform after arithmetic, and only normalizes as needed when an arithmetic\noperation would overflow. I wanted to cut down on how many times gcd is\ncalled. However this choice means that hash indices have to normalize\nbecause they hash the bit pattern, while btree indices can compare\nrationals without regard to normal form.\n\nThis patch represents each rational as a separate numerator and denominator. I\ndid some research today to see if there are any other ways, and it looks like\nthere's a technique from the 70s called \"quote notation.\" [1] It appears that\nquote notation makes addition and subtraction faster, but that the operations\ncan have less predictable performance in worst-case scenarios when doing\narbitrary precision. So there's more than one way to do it.\n\n1: https://en.wikipedia.org/wiki/Quote_notation\n\n> 2. I don't expect this will be much of a maintenance burden.\n\nTrue, rational numbers aren't going to change anytime soon.\n\n> Keep in mind that if you do want this to be in core, the data format\n> has to be very stable to maintain pg_upgrade compatibility.\n\nThe format is currently [int32 numerator][int32 denominator] packed together,\nwhere the denominator is made positive whenever possible (not possible when\nit's -INT_MAX). The quote notation alternative would arrange things very\ndifferently.\n\n> Patch comments:\n> \n> * Please include docs.\n\nOf course, if we determine the patch is desirable. The included tests\nshould help demonstrate how it works for the purposes of review.\n\n> * I'm worried about the use of int32. 
Does that cover all of the\n> reasonable use cases of rational?\n\nI could imagine having two types, a rational8 for the current\nimplementation, and an arbitrary precision rational. Perhaps...\n\n> * what's the philosophy regarding NULL and rational? Why are NULL\n> arguments returning non-NULL answers?\n\nThe rational_intermediate(x, y) function picks a fraction between x and\ny, and NULL was meant as a signal that one of the sides is unbounded.\nSo rational_intermediate(x, NULL) means find a fraction larger than x,\nand rational_intermediate(NULL, y) means find one smaller than y.\n\nThe use case is a query for a spot \"immediately after position 2:\"\n\nSELECT rational_intermediate(2, min(pos))\n FROM todos\n WHERE pos > 2;\n\nIf there are already todos positioned after 2 then it'll find a spot\nbetween 2 and the min. However if there are no todos after 2 then min()\nwill return NULL and we'll simply find a position *somewhere* after 2.\n\n> * Is rational_intermediate() well-defined, or can it just choose any\n> rational between the two arguments?\n\nIt chooses the mediant [2] of x and y in lowest terms by walking a\nStern-Brocot tree. I found that this keeps the terms of the fraction\nmuch smaller than taking the average of x and y. This was an advantage\nover calculating with floats because I don't know how to take the\nmediant of floats, and repeatedly taking the average of floats eats up\nprecision pretty quickly.\n\n2: https://en.wikipedia.org/wiki/Mediant_(mathematics)\n\n> * Can you discuss how cross-type comparisons and conversions should be\n> handled (e.g. int8, numeric, float8)?\n\nGood point, I don't have tests for that. Would implicit casts do the\ntrick? So '1/2'::rational < 1 would cast 1 to '1/1' and compare? I have\ncurrently included these casts: integer -> rational, float8 <->\nrational. 
Don't have one for numeric yet.\n\n> Regards,\n\nThank you for taking the time to raise those questions.\n\n-- \nJoe Nelson https://begriffs.com\n\n\n", "msg_date": "Fri, 21 Feb 2020 19:24:47 -0600", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": true, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "Joe Nelson wrote:\n> where the denominator is made positive whenever possible (not possible\n> when it's -INT_MAX).\n\n(I meant INT_MIN rather than -INT_MAX.)\n\nAnother more-than-one-way-to-do-it task is converting a float to a\nfraction. I translated John Kennedy's method [0] to C, but Github user\nadegert sent an alternative [1] that matches the way the CPython\nimplementation works.\n\n0: https://begriffs.com/pdf/dec2frac.pdf \n1: https://github.com/begriffs/pg_rational/pull/13\n\n\n", "msg_date": "Fri, 21 Feb 2020 20:24:23 -0600", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": true, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "On Fri, 2020-02-21 at 19:24 -0600, Joe Nelson wrote:\n> I could imagine having two types, a rational8 for the current\n> implementation, and an arbitrary precision rational. Perhaps...\n\nThe main thing I'm trying to avoid is a situation where we introduce\n\"rational\", but it only meets a subset of the use cases, and then we\nend up with another extension. I'm not saying that's happening here,\nbut it would be good to compare against other implementations (for\ninstance from normal programming languages) to see if we are missing\nsome use cases.\n\nIf you are using rational to track positions of items in an online to-\ndo list, and the user keeps swapping or rotating items in the list, is\nthat going to lead to overflow/underflow?\n\n> > * what's the philosophy regarding NULL and rational? 
Why are NULL\n> > arguments returning non-NULL answers?\n> \n> The rational_intermediate(x, y) function picks a fraction between x\n> and\n> y, and NULL was meant as a signal that one of the sides is unbounded.\n> So rational_intermediate(x, NULL) means find a fraction larger than\n> x,\n> and rational_intermediate(NULL, y) means find one smaller than y.\n\nWould \"x+1\" or \"y-1\" also work for that?\n\nI am a little worried about introducing a function that is not well-\ndefined. Would a midpoint function serve the purpose?\n\n> If there are already todos positioned after 2 then it'll find a spot\n> between 2 and the min. However if there are no todos after 2 then\n> min()\n> will return NULL and we'll simply find a position *somewhere* after\n> 2.\n\nInteresting. That could probably be solved with a COALESCE() around the\nMIN(), but your version is a little cleaner.\n\n> > * Is rational_intermediate() well-defined, or can it just choose\n> > any\n> > rational between the two arguments?\n> \n> It chooses the mediant [2] \n\nMaybe we should define it as the mediant then?\n\n> currently included these casts: integer -> rational, float8 <->\n> rational. Don't have one for numeric yet.\n\nA cast to numeric would make sense. 
What will you do in cases where the\ndomains don't quite match, are you rounding or truncating?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 22 Feb 2020 09:51:16 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "Hi Joe,\n\nOn 2/7/20 11:25 PM, Joe Nelson wrote:\n> Hi hackers, attached is a proof of concept patch adding a new base type\n> called \"rational\" to represent fractions.\n\nI have set the target version for this patch to PG14 because it is POC \nand this is the first CF it has appeared in.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Mon, 9 Mar 2020 09:00:34 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "On 2020-02-08 05:25, Joe Nelson wrote:\n> Hi hackers, attached is a proof of concept patch adding a new base type\n> called \"rational\" to represent fractions. It includes arithmetic,\n> simplification, conversion to/from float, finding intermediates with a\n> stern-brocot tree, custom aggregates, and btree/hash indices.\n\nThe numeric type already stores rational numbers. How is this \ndifferent? What's the use?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 18 May 2020 23:33:47 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "On 05/18/20 17:33, Peter Eisentraut wrote:\n> The numeric type already stores rational numbers. How is this different? \n> What's the use?\n\nSeems like numeric is a base-10000 representation. 
Will work ok for\na rational whose denominator factors into 2s and 5s.\n\nWon't ever quite represent, say, 1/3, no matter how big you let it get.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 18 May 2020 17:50:20 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 05/18/20 17:33, Peter Eisentraut wrote:\n>> The numeric type already stores rational numbers. How is this different? \n>> What's the use?\n\n> Won't ever quite represent, say, 1/3, no matter how big you let it get.\n\nThere surely are use-cases for true rational arithmetic, but I'm\ndubious that it belongs in core Postgres. I don't think that enough\nof our users would want it to justify expending core-project maintenance\neffort on it. So I'd be happier to see this as an out-of-core extension.\n\n(That'd also ease dealing with the prospect of having more than one\nvariant, as was mentioned upthread.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 May 2020 18:14:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "On Mon, May 18, 2020 at 6:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> There surely are use-cases for true rational arithmetic, but I'm\n> dubious that it belongs in core Postgres. I don't think that enough\n> of our users would want it to justify expending core-project maintenance\n> effort on it. So I'd be happier to see this as an out-of-core extension.\n\nAs is often the case, I'm a little more positive about including this\nthan Tom, but as is also often the case, I'm somewhat cautious, too.\nOn the one hand, I think it would be cool to have and people would\nlike it. 
But, on the other hand, I also think we'd only want it if\nwe're convinced that it's a really good implementation and that\nthere's not a competing design which is better, or even equally good.\nThose things don't seem too clear at this point, so I hope Jeff and\nJoe keep chatting about it ... and maybe some other people who are\nknowledgeable about this will chime in, too.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 21 May 2020 13:40:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "On Thu, May 21, 2020 at 01:40:10PM -0400, Robert Haas wrote:\n> On Mon, May 18, 2020 at 6:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > There surely are use-cases for true rational arithmetic, but I'm\n> > dubious that it belongs in core Postgres. I don't think that enough\n> > of our users would want it to justify expending core-project maintenance\n> > effort on it. So I'd be happier to see this as an out-of-core extension.\n> \n> As is often the case, I'm a little more positive about including this\n> than Tom, but as is also often the case, I'm somewhat cautious, too.\n> On the one hand, I think it would be cool to have and people would\n> like it. But, on the other hand, I also think we'd only want it if\n> we're convinced that it's a really good implementation and that\n> there's not a competing design which is better, or even equally good.\n\nI vote for keeping it out of core, mostly because writing rational numeric\ncode is so different from writing DBMS core code. (Many of our existing\ntypes, like numeric and the geometric types, have the same problem. Let's not\ninvite more of that.) 
The optimal reviewer pools won't have much overlap, so\npatches may sit awhile and/or settle for a cursory review.\n\nMore language standard libraries provide \"numeric\"-style big decimals[1] than\nprovide big rationals[2], suggesting we're in good company.\n\n[1] https://en.wikipedia.org/wiki/List_of_arbitrary-precision_arithmetic_software#Languages\n[2] https://en.wikipedia.org/wiki/Rational_data_type#Language_support\n\n\n", "msg_date": "Thu, 21 May 2020 22:53:32 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "\nOn 5/22/20 1:53 AM, Noah Misch wrote:\n> On Thu, May 21, 2020 at 01:40:10PM -0400, Robert Haas wrote:\n>> On Mon, May 18, 2020 at 6:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> There surely are use-cases for true rational arithmetic, but I'm\n>>> dubious that it belongs in core Postgres. I don't think that enough\n>>> of our users would want it to justify expending core-project maintenance\n>>> effort on it. So I'd be happier to see this as an out-of-core extension.\n>> As is often the case, I'm a little more positive about including this\n>> than Tom, but as is also often the case, I'm somewhat cautious, too.\n>> On the one hand, I think it would be cool to have and people would\n>> like it. But, On the other hand, I also think we'd only want it if\n>> we're convinced that it's a really good implementation and that\n>> there's not a competing design which is better, or even equally good.\n> I vote for keeping it out of core, mostly because writing rational numeric\n> code is so different from writing DBMS core code. (Many of our existing\n> types, like numeric and the geometric types, have the same problem. Let's not\n> invite more of that.) 
The optimal reviewer pools won't have much overlap, so\n> patches may sit awhile and/or settle for a cursory review.\n>\n> More language standard libraries provide \"numeric\"-style big decimals[1] than\n> provide big rationals[2], suggesting we're in good company.\n>\n> [1] https://en.wikipedia.org/wiki/List_of_arbitrary-precision_arithmetic_software#Languages\n> [2] https://en.wikipedia.org/wiki/Rational_data_type#Language_support\n>\n>\n\nI agree. Also the original rationale that people want to use it on RDS\nis pretty awful. We can't just add in every extension that some DBAAS\nprovider doesn't support.\n\n\nI think we mark this as rejected.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 1 Jul 2020 16:09:35 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "Greetings,\n\n* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:\n> On 5/22/20 1:53 AM, Noah Misch wrote:\n> > On Thu, May 21, 2020 at 01:40:10PM -0400, Robert Haas wrote:\n> >> On Mon, May 18, 2020 at 6:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> There surely are use-cases for true rational arithmetic, but I'm\n> >>> dubious that it belongs in core Postgres. I don't think that enough\n> >>> of our users would want it to justify expending core-project maintenance\n> >>> effort on it. So I'd be happier to see this as an out-of-core extension.\n> >> As is often the case, I'm a little more positive about including this\n> >> than Tom, but as is also often the case, I'm somewhat cautious, too.\n> >> On the one hand, I think it would be cool to have and people would\n> >> like it. 
But, On the other hand, I also think we'd only want it if\n> >> we're convinced that it's a really good implementation and that\n> >> there's not a competing design which is better, or even equally good.\n> > I vote for keeping it out of core, mostly because writing rational numeric\n> > code is so different from writing DBMS core code. (Many of our existing\n> > types, like numeric and the geometric types, have the same problem. Let's not\n> > invite more of that.) The optimal reviewer pools won't have much overlap, so\n> > patches may sit awhile and/or settle for a cursory review.\n> >\n> > More language standard libraries provide \"numeric\"-style big decimals[1] than\n> > provide big rationals[2], suggesting we're in good company.\n> >\n> > [1] https://en.wikipedia.org/wiki/List_of_arbitrary-precision_arithmetic_software#Languages\n> > [2] https://en.wikipedia.org/wiki/Rational_data_type#Language_support\n> \n> I agree. Also the original rationale that people want to use it on RDS\n> is pretty awful. We can't just add in every extension that some DBAAS\n> provider doesn't support.\n\nI disagree with this and instead lean more towards the side that Robert\nand Jeff were taking in that this would be a useful extension and\nsomething we should consider including in core. I disagree with Tom and\nNoah, specifically because, if we add this capability then I see our\npotential use-cases as increasing and therefore getting more individuals\ninterested in working with us- to potentially include new contributors\nand possibly committers.\n\n> I think we mark this as rejected.\n\nThe more we reject new things, the less appealing our community ends up\nbeing. 
I don't mean that to be an argument that we should accept\neverything, but a new capability that has relatively little impact on\nthe core code and is useful should be something we're leaning towards\naccepting rather than rejecting out of hand because it's not explicitly\ncalled out in the SQL standard or appeals to the masses.\n\nI'll further say that this is where we end up potentially losing\nnewcomers to writing their own code in python or other tools when, if we\nhad such support in core, they'd be able to accomplish what they want\nmore easily with PG.\n\nThanks,\n\nStephen", "msg_date": "Wed, 1 Jul 2020 16:32:46 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> I disagree with this and instead lean more towards the side that Robert\n> and Jeff were taking in that this would be a useful extension and\n> something we should consider including in core. I disagree with Tom and\n> Noah, specifically because, if we add this capability then I see our\n> potential use-cases as increasing and therefore getting more individuals\n> interested in working with us- to potentially include new contributors\n> and possibly committers.\n\nFWIW, I'm entirely in favor of having this available as an extension.\nBut I'm not in favor of it being in core. I'm afraid it will end up\nlike the geometric types, i.e. a backwater of not-very-good code that\ngets little love because it's not in line with the core competencies\nof a bunch of database geeks. If it's a separate project, then we\ncould hope to attract interest from people who know the subject matter\nbetter but would never dare touch the PG backend in general. 
There's\nalso the whole project-management issue that we have finite resources\nand so we can *not* afford to put every arguably-useful feature in core.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Jul 2020 18:40:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > I disagree with this and instead lean more towards the side that Robert\n> > and Jeff were taking in that this would be a useful extension and\n> > something we should consider including in core. I disagree with Tom and\n> > Noah, specifically because, if we add this capability then I see our\n> > potential use-cases as increasing and therefore getting more individuals\n> > interested in working with us- to potentially include new contributors\n> > and possibly committers.\n> \n> FWIW, I'm entirely in favor of having this available as an extension.\n> But I'm not in favor of it being in core. I'm afraid it will end up\n> like the geometric types, i.e. a backwater of not-very-good code that\n> gets little love because it's not in line with the core competencies\n> of a bunch of database geeks. If it's a separate project, then we\n> could hope to attract interest from people who know the subject matter\n> better but would never dare touch the PG backend in general. There's\n> also the whole project-management issue that we have finite resources\n> and so we can *not* afford to put every arguably-useful feature in core.\n\nThe issue that you highlight regarding geometric types is really that we\nsimply refuse to punt things from core, ever, and that's not a\nreasonable position to take for long-term sanity. 
On the flip side,\nit's ridiculously rare for an extension to have any kind of real\nlife as an independent project- yes, there's one big exception (PostGIS)\nbecause it's simply ridiculously useful, and a few other cases\nwhere one company/individual or another funds the work of a particular\nextension because they need it for whatever, but by and large,\nextensions outside of PG simply don't thrive as independent projects.\n\nThere's various potential reasons for that, from being hard to find, to\nbeing hard to install and work with, to the fact that we don't have a\ncentralized extension system (PGXN isn't really endorsed at all by\ncore... and I don't really think it should be), and our general\nextension management system isn't particularly great anyway.\n\nThe argument that we simply aren't able to ever extend the surface area\nof the database server beyond simple types because anything else\nrequires specialized knowledge that general database hackers don't have\nimplies that every committer/maintainer must be a general database\nhacker that knows all kinds of stuff about the innards of the kernel,\nbut that isn't the case, even among our committers today- different\nfolks have confidence working in different areas and that's an entirely\ngood thing that allows us to increase our pool of resources while not\nexpecting folks to be complete generalists. How do we build on that,\nwhile not ending up with code getting dumped on us and then left with us\nto maintain? 
That's the problem we need to solve, and perhaps other\nprojects can give us insight, but I certainly won't accept that we\nsimply must accept some glass ceiling regardless of how many people want\nto help us move forward- let's find a sensible way for them to help us\nmove forward while not increasing drag to the point that we fall out of\nthe sky.\n\nThanks,\n\nStephen", "msg_date": "Wed, 1 Jul 2020 19:00:07 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "\nOn 7/1/20 7:00 PM, Stephen Frost wrote:\n> Greetings,\n>\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Stephen Frost <sfrost@snowman.net> writes:\n>>> I disagree with this and instead lean more towards the side that Robert\n>>> and Jeff were taking in that this would be a useful extension and\n>>> something we should consider including in core. I disagree with Tom and\n>>> Noah, specifically because, if we add this capability then I see our\n>>> potential use-cases as increasing and therefore getting more individuals\n>>> interested in working with us- to potentially include new contributors\n>>> and possibly committers.\n>> FWIW, I'm entirely in favor of having this available as an extension.\n>> But I'm not in favor of it being in core. I'm afraid it will end up\n>> like the geometric types, i.e. a backwater of not-very-good code that\n>> gets little love because it's not in line with the core competencies\n>> of a bunch of database geeks. If it's a separate project, then we\n>> could hope to attract interest from people who know the subject matter\n>> better but would never dare touch the PG backend in general. 
There's\n>> also the whole project-management issue that we have finite resources\n>> and so we can *not* afford to put every arguably-useful feature in core.\n> The issue that you highlight regarding geometric types is really that we\n> simply refuse to punt things from core, ever, and that's not a\n> reasonable position to take for long-term sanity. On the flip side,\n> it's ridiculously rare for an extension to have any kind of real\n> life as an independent project- yes, there's one big exception (PostGIS)\n> because it's simply ridiculously useful, and a few other cases\n> where one company/individual or another funds the work of a particular\n> extension because they need it for whatever, but by and large,\n> extensions outside of PG simply don't thrive as independent projects.\n>\n> There's various potential reasons for that, from being hard to find, to\n> being hard to install and work with, to the fact that we don't have a\n> centralized extension system (PGXN isn't really endorsed at all by\n> core... and I don't really think it should be), and our general\n> extension management system isn't particularly great anyway.\n>\n\n\nThen these are things we should fix. But the right fix isn't including\nevery extension in the core code.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 2 Jul 2020 08:59:01 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 7/1/20 7:00 PM, Stephen Frost wrote:\n>> ... 
extensions outside of PG simply don't thrive as independent projects.\n>> \n>> There's various potential reasons for that, from being hard to find, to\n>> being hard to install and work with, to the fact that we don't have a\n>> centralized extension system (PGXN isn't really endorsed at all by\n>> core... and I don't really think it should be), and our general\n>> extension management system isn't particularly great anyway.\n\n> Then these are things we should fix. But the right fix isn't including\n> every extension in the core code.\n\nYeah. We *must not* simply give up on extensibility and decide that\nevery interesting feature has to be in core. I don't have any great\nideas about how we grow the wider Postgres development community and\ninfrastructure, but that certainly isn't the path to doing so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jul 2020 10:01:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > On 7/1/20 7:00 PM, Stephen Frost wrote:\n> >> ... extensions outside of PG simply don't thrive as independent projects.\n> >> \n> >> There's various potential reasons for that, from being hard to find, to\n> >> being hard to install and work with, to the fact that we don't have a\n> >> centralized extension system (PGXN isn't really endorsed at all by\n> >> core... and I don't really think it should be), and our general\n> >> extension management system isn't particularly great anyway.\n> \n> > Then these are things we should fix. But the right fix isn't including\n> > every extension in the core code.\n> \n> Yeah. We *must not* simply give up on extensibility and decide that\n> every interesting feature has to be in core. 
I don't have any great\n> ideas about how we grow the wider Postgres development community and\n> infrastructure, but that certainly isn't the path to doing so.\n\nI don't see where I was either proposing that we give up extensibility,\nor that we have to include every extension in the core code.\n\nThanks,\n\nStephen", "msg_date": "Thu, 2 Jul 2020 10:21:48 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "On 2020-07-02 16:21, Stephen Frost wrote:\n> I don't see where I was either proposing that we give up extensibility,\n> or that we have to include every extension in the core code.\n\nBy the way, I have an extension that adds unsigned integer types. I \nwould argue that that is more frequently requested than a fractions \ntype. I'm not in favor of adding either to core, but just saying that \nif we think this one should be added, we might be opening up the \nfloodgates a certain amount.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 3 Jul 2020 08:17:45 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "> > I think we mark this as rejected.\n\nStephen Frost wrote:\n> The more we reject new things, the less appealing our community ends\n> up being.\n\nFor what it's worth, I'm not disheartened if my rational patch is\nrejected. I can appreciate that postgres wants to avoid what might be\nfeature creep, especially if aspects of the implementation are arbitrary\nor subject to change later on.\n\nIt might be more productive for me to investigate other ways to\ncontribute, like SQL:2016 features/conformance. 
That would increase our\nharmony with other databases, rather than adding idiosyncrasies like a\nnew numeric type.\n\n\n", "msg_date": "Fri, 3 Jul 2020 01:33:17 -0500", "msg_from": "Joe Nelson <joe@begriffs.com>", "msg_from_op": true, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "\n\n\nOn 7/2/20 10:01 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 7/1/20 7:00 PM, Stephen Frost wrote:\n>>> ... extensions outside of PG simply don't thrive as independent projects.\n>>>\n>>> There's various potential reasons for that, from being hard to find, to\n>>> being hard to install and work with, to the fact that we don't have a\n>>> centralized extension system (PGXN isn't really endorsed at all by\n>>> core... and I don't really think it should be), and our general\n>>> extension management system isn't particularly great anyway.\n>> Then these are things we should fix. But the right fix isn't including\n>> every extension in the core code.\n> Yeah. We *must not* simply give up on extensibility and decide that\n> every interesting feature has to be in core. I don't have any great\n> ideas about how we grow the wider Postgres development community and\n> infrastructure, but that certainly isn't the path to doing so.\n>\n> \t\t\t\n\n\nI've been thinking about this a bit. Right now there isn't anything\noutside of core that seems to work well. PGXN was supposed to be our\nCPAN equivalent, but it doesn't seem to have worked out that way, it\nnever really got the traction. I'm thinking about something different,\nin effect a curated set of extensions, maintained separately from the\ncore. Probably the involvement of one or two committers would be good,\nbut the idea is that in general core developers wouldn't need to be\nconcerned about these. For want of a better name let's call it\npostgresql-extras. 
I would undertake to provide buildfarm support, and\npossibly we would provide packages to complement the PGDG yum and apt\nrepos. If people think that's a useful idea then those of us who are\nprepared to put in some effort on this can take the discussion offline\nand come back with a firmer proposal.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 3 Jul 2020 11:43:30 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 7/2/20 10:01 AM, Tom Lane wrote:\n>> Yeah. We *must not* simply give up on extensibility and decide that\n>> every interesting feature has to be in core. I don't have any great\n>> ideas about how we grow the wider Postgres development community and\n>> infrastructure, but that certainly isn't the path to doing so.\n\n> I've been thinking about this a bit. Right now there isn't anything\n> outside of core that seems to work well. PGXN was supposed to be our\n> CPAN equivalent, but it doesn't seem to have worked out that way, it\n> never really got the traction.\n\nYeah. Can we analyze why it hasn't done better? Can we improve it\nrather than starting something completely new?\n\n> I'm thinking about something different,\n> in effect a curated set of extensions, maintained separately from the\n> core. Probably the involvement of one or two committers would be good,\n> but the idea is that in general core developers wouldn't need to be\n> concerned about these. For want of a better name let's call it\n> postgresql-extras. I would undertake to provide buildfarm support, and\n> possibly we would provide packages to complement the PGDG yum and apt\n> repos. 
If people think that's a useful idea then those of us who are\n> prepared to put in some effort on this can take the discussion offline\n> and come back with a firmer proposal.\n\nMy only objection to this idea is that competing with PGXN might not\nbe a great thing. But then again, maybe it would be. Or maybe this\nis an intermediate tier between PGXN and core. Anyway, it certainly\nseems worth spending more thought on. I agree that we need to do\n*something* proactive rather than just hoping the extension community\ngets stronger by itself.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Jul 2020 12:18:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "On 2020-07-03 18:18, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 7/2/20 10:01 AM, Tom Lane wrote:\n>>> Yeah. We *must not* simply give up on extensibility and decide that\n>>> every interesting feature has to be in core. I don't have any great\n>>> ideas about how we grow the wider Postgres development community and\n>>> infrastructure, but that certainly isn't the path to doing so.\n> \n>> I've been thinking about this a bit. Right now there isn't anything\n>> outside of core that seems to work well. PGXN was supposed to be our\n>> CPAN equivalent, but it doesn't seem to have worked out that way, it\n>> never really got the traction.\n> \n> Yeah. Can we analyze why it hasn't done better? Can we improve it\n> rather than starting something completely new?\n\nThis should probably all be in a different thread. 
But yeah, I think a \nbit more analysis of the problem is needed before jumping into action.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 4 Jul 2020 15:29:45 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" }, { "msg_contents": "The discussion diverged somewhat to PGXN and extensions in general, but \nthe consensus seems to be that this should (continue to) be an extension \nrather than a core feature. I agree that as an extension this is pretty \ncool. I'll mark this as rejected in the commitfest app.\n\nLooking at the patch, it looks well-structured and commented.\n\n- Heikki\n\n\n", "msg_date": "Fri, 4 Sep 2020 11:12:51 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: POC: rational number type (fractions)" } ]
[ { "msg_contents": "Forking this thread\nhttps://www.postgresql.org/message-id/20181227132417.xe3oagawina7775b%40alvherre.pgsql\n\nOn Wed, Dec 26, 2018 at 01:09:39PM -0500, Robert Haas wrote:\n> ALTER TABLE already has a lot of logic that is oriented towards being\n> able to do multiple things at the same time. If we added CLUSTER,\n> VACUUM FULL, and REINDEX to that set, then you could, say, change a\n> data type, cluster, and change tablespaces all in a single SQL\n> command.\n\nOn Thu, Dec 27, 2018 at 10:24:17AM -0300, Alvaro Herrera wrote:\n> I think it would be valuable to have those ALTER TABLE variants that rewrite\n> the table do so using the cluster order, if there is one, instead of the heap\n> order, which is what it does today.\n\nThat's a neat idea.\n\nI haven't yet fit all of ALTERs processing logic in my head ... but there's an\nissue that ALTER (unlike CLUSTER) needs to deal with column type promotion, so\nthe indices may need to be dropped and recreated. The table rewrite happens\nAFTER dropping indices (and all other processing), but the clustered index\ncan't be scanned if it's just been dropped. I handled that by using a\ntuplesort, same as heapam_relation_copy_for_cluster.\n\nExperimental patch attached. 
With clustered ALTER:\n\ntemplate1=# DROP TABLE t; CREATE TABLE t AS SELECT generate_series(1,999)i; CREATE INDEX ON t(i DESC); ALTER TABLE t CLUSTER ON t_i_idx; ALTER TABLE t ALTER i TYPE bigint; SELECT * FROM t LIMIT 9;\nDROP TABLE\nSELECT 999\nCREATE INDEX\nALTER TABLE\nALTER TABLE\n i \n-----\n 999\n 998\n 997\n 996\n 995\n 994\n 993\n 992\n 991\n(9 rows)\n\n0001 patch is stolen from the nearby thread:\nhttps://www.postgresql.org/message-id/flat/20200207143935.GP403%40telsasoft.com\nIt doesn't make much sense for ALTER to use a clustered index when rewriting a\ntable, if it doesn't also go to the effort to preserve the cluster property when\nrebuilding its indices.\n\n0002 patch is included and not squished with 0003 to show the original\nimplementation using an index scan (by not dropping indices on the old table,\nand breaking various things), and the evolution to tuplesort.\n\nNote, this doesn't use clustered order when rewriting only due to tablespace\nchange. Alter currently does an AM specific block copy without looking at\ntuples. But I think it'd be possible to use tuplesort and copy if desired.", "msg_date": "Sat, 8 Feb 2020 09:04:53 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "ALTER TABLE rewrite to use clustered order" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Dec 27, 2018 at 10:24:17AM -0300, Alvaro Herrera wrote:\n> > I think it would be valuable to have those ALTER TABLE variants that rewrite\n> > the table do so using the cluster order, if there is one, instead of the heap\n> > order, which is what it does today.\n\n> That's a neat idea.\n\nTBH, I'm -1 on this. The current behavior of preserving physical order is\nperfectly sane, and it's faster than anything involving CLUSTER is going\nto be, and if you try to change that you are going to have enormous\nheadaches with the variants of ALTER TABLE that would change the semantics\nof the CLUSTER index columns. 
(Unless of course your theory is that you\ndon't actually care exactly what the finished order is, in which case why\nare we bothering?)\n\nThe proposed patch which *forces* it to be done like that, whether the\nuser wants it or not, seems particularly poorly thought out.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 08 Feb 2020 11:57:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE rewrite to use clustered order" } ]
[ { "msg_contents": "Hi,\n\nWhile jumping around partially decoded xacts questions [1], I've read\nthrough the copy replication slots code (9f06d79ef) and found a couple\nof issues.\n\n1) It seems quite reckless to me to dive into\nDecodingContextFindStartpoint without actual WAL reservation (donors\nslot restart_lsn is used, but it is not acquired). Why risking erroring\nout with WAL removal error if the function advances new slot position to\nupdated donors one in the end anyway?\n\n2) In the end, restart_lsn of new slot is set to updated donors\none. However, confirmed_flush field is not updated. This is just wrong\n-- we could start decoding too early and stream partially decoded\ntransaction.\n\nI'd probably avoid doing DecodingContextFindStartpoint at all. Its only\npurpose is to assemble consistent snapshot (and establish corresponding\n<restart_lsn, confirmed_flush_lsn> pair), but donor slot must have\nalready done that and we could use it as well. Was this considered?\n\n\n[1] https://www.postgresql.org/message-id/flat/AB5978B2-1772-4FEE-A245-74C91704ECB0%40amazon.com\n\n--\nArseny Sher\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sun, 09 Feb 2020 19:28:59 +0300", "msg_from": "Arseny Sher <a.sher@postgrespro.ru>", "msg_from_op": true, "msg_subject": "logical copy_replication_slot issues" }, { "msg_contents": "On Mon, 10 Feb 2020 at 01:29, Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n> Hi,\n>\n> While jumping around partially decoded xacts questions [1], I've read\n> through the copy replication slots code (9f06d79ef) and found a couple\n> of issues.\n>\n> 1) It seems quite reckless to me to dive into\n> DecodingContextFindStartpoint without actual WAL reservation (donors\n> slot restart_lsn is used, but it is not acquired). Why risking erroring\n> out with WAL removal error if the function advances new slot position to\n> updated donors one in the end anyway?\n\nGood catch. 
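To spell out the failing sequence, a rough sketch (the slot names here are invented, and test_decoding stands in for any output plugin):

```sql
-- Hypothetical reproduction outline.
SELECT pg_create_logical_replication_slot('src', 'test_decoding');

-- The copy reads src's restart_lsn but takes no WAL reservation of its own:
SELECT pg_copy_logical_replication_slot('src', 'dst');

-- If, concurrently, the source slot is advanced, e.g.
--   SELECT pg_replication_slot_advance('src', pg_current_wal_lsn());
-- and a checkpoint then recycles WAL older than the new restart_lsn,
-- the copy can fail while decoding from the stale position, because
-- the WAL it needs has already been removed.
```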
It's possible that DecodingContextFindStartpoint could\nfail when the restart_lsn of the source slot has been advanced and the\nrequired WAL has already been removed.\n\n>\n> 2) In the end, restart_lsn of new slot is set to updated donors\n> one. However, confirmed_flush field is not updated. This is just wrong\n> -- we could start decoding too early and stream partially decoded\n> transaction.\n\nI think you are right.\n\n>\n> I'd probably avoid doing DecodingContextFindStartpoint at all. Its only\n> purpose is to assemble consistent snapshot (and establish corresponding\n> <restart_lsn, confirmed_flush_lsn> pair), but donor slot must have\n> already done that and we could use it as well. Was this considered?\n\nSkipping doing DecodingContextFindStartpoint while creating a new\ndestination logical slot seems sensible to me.\n\nI've attached the draft patch fixing this issue but I'll continue\ninvestigating it more deeply.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 10 Feb 2020 14:01:44 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: logical copy_replication_slot issues" }, { "msg_contents": "\nMasahiko Sawada <masahiko.sawada@2ndquadrant.com> writes:\n\n> I've attached the draft patch fixing this issue but I'll continue\n> investigating it more deeply.\n\nThere also should be a check that source slot itself has consistent\nsnapshot (valid confirmed_flush) -- otherwise it might be possible to\ncreate not initialized slot which is probably not an error, but weird\nand somewhat meaningless. Paranoically, this ought to be checked in both\nsrc slot lookups.\n\nWith this patch it seems like the only thing\ncreate_logical_replication_slot does is ReplicationSlotCreate, which\nquestions the usefulness of this function. 
On the second look,\nCreateInitDecodingContext checks plugin sanity (ensures it exists), so\nprobably it's fine.\n\n\n-- cheers, arseny\n\n\n", "msg_date": "Mon, 10 Feb 2020 17:01:25 +0300", "msg_from": "Arseny Sher <a.sher@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: logical copy_replication_slot issues" }, { "msg_contents": "On Mon, 10 Feb 2020 at 23:01, Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n>\n> Masahiko Sawada <masahiko.sawada@2ndquadrant.com> writes:\n>\n> > I've attached the draft patch fixing this issue but I'll continue\n> > investigating it more deeply.\n>\n> There also should be a check that source slot itself has consistent\n> snapshot (valid confirmed_flush) -- otherwise it might be possible to\n> create not initialized slot which is probably not an error, but weird\n> and somewhat meaningless. Paranoically, this ought to be checked in both\n> src slot lookups.\n>\n> With this patch it seems like the only thing\n> create_logical_replication_slot does is ReplicationSlotCreate, which\n> questions the usefulness of this function. On the second look,\n> CreateInitDecodingContext checks plugin sanity (ensures it exists), so\n> probably it's fine.\n>\n\nThank you for reviewing this patch.\n\nI've attached the updated version patch that incorporated your\ncomments. I believe we're going in the right direction for fixing this\nbug. I'll register this item to the next commit fest so as not to\nforget.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 19 Feb 2020 16:59:40 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: logical copy_replication_slot issues" }, { "msg_contents": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com> writes:\n\n> I've attached the updated version patch that incorporated your\n> comments. 
I believe we're going in the right direction for fixing this\n> bug. I'll register this item to the next commit fest so as not to\n> forget.\n\nI've moved confirmed_flush check to the second lookup out of paranoic\nconsiderations (e.g. slot could have been recreated and creation hasn't\nfinished yet) and made some minor stylistic adjustments. It looks good\nto me now.\n\n\n\n\n\n-- cheers, arseny", "msg_date": "Wed, 04 Mar 2020 18:36:44 +0300", "msg_from": "Arseny Sher <a.sher@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: logical copy_replication_slot issues" }, { "msg_contents": "I wrote:\n\n> It looks good to me now.\n\nAfter lying for some time in my head it reminded me that\nCreateInitDecodingContext not only pegs the LSN, but also xmin, so\nattached makes a minor comment correction.\n\nWhile taking a look at the nearby code it seemed weird to me that\nGetOldestSafeDecodingTransactionId checks PGXACT->xid, not xmin. Don't\nwant to investigate this at the moment though, and not for this thread.\n\nAlso not for this thread, but I've noticed\npg_copy_logical_replication_slot doesn't allow to change plugin name\nwhich is an omission in my view. It would be useful and trivial to do.\n\n\n-- cheers, arseny", "msg_date": "Fri, 06 Mar 2020 14:02:53 +0300", "msg_from": "Arseny Sher <a.sher@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: logical copy_replication_slot issues" }, { "msg_contents": "On Fri, 6 Mar 2020 at 20:02, Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n> I wrote:\n>\n> > It looks good to me now.\n>\n> After lying for some time in my head it reminded me that\n> CreateInitDecodingContext not only pegs the LSN, but also xmin, so\n> attached makes a minor comment correction.\n>\n> While taking a look at the nearby code it seemed weird to me that\n> GetOldestSafeDecodingTransactionId checks PGXACT->xid, not xmin. 
Don't\n> want to investigate this at the moment though, and not for this thread.\n>\n> Also not for this thread, but I've noticed\n> pg_copy_logical_replication_slot doesn't allow to change plugin name\n> which is an omission in my view. It would be useful and trivial to do.\n>\n\nThank you for updating the patch. The patch looks basically good to me\nbut I have a few questions:\n\n /*\n- * Create logical decoding context, to build the initial snapshot.\n+ * Create logical decoding context to find start point or, if we don't\n+ * need it, to 1) bump slot's restart_lsn and xmin 2) check plugin sanity.\n */\n\nDo we need to numbering that despite not referring them?\n\n ctx = CreateInitDecodingContext(plugin, NIL,\n- false, /* do not build snapshot */\n+ false, /* do not build data snapshot */\n restart_lsn,\n logical_read_local_xlog_page, NULL, NULL,\n NULL);\n\nI'm not sure this change makes the comment better. Could you elaborate\non the motivation of this change?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 9 Mar 2020 14:47:06 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: logical copy_replication_slot issues" }, { "msg_contents": "\nMasahiko Sawada <masahiko.sawada@2ndquadrant.com> writes:\n\n> /*\n> - * Create logical decoding context, to build the initial snapshot.\n> + * Create logical decoding context to find start point or, if we don't\n> + * need it, to 1) bump slot's restart_lsn and xmin 2) check plugin sanity.\n> */\n>\n> Do we need to numbering that despite not referring them?\n\nNo, it just seemed clearer to me this way. 
I don't mind removing the\nnumbers if you feel this is better.\n\n> ctx = CreateInitDecodingContext(plugin, NIL,\n> - false, /* do not build snapshot */\n> + false, /* do not build data snapshot */\n> restart_lsn,\n> logical_read_local_xlog_page, NULL, NULL,\n> NULL);\n> I'm not sure this change makes the comment better. Could you elaborate\n> on the motivation of this change?\n\nWell, DecodingContextFindStartpoint always builds a snapshot allowing\nhistorical *catalog* lookups. This bool controls whether the snapshot\nshould additionally be suitable for looking at the actual data, this is\ne.g. used by initial data sync in the native logical replication.\n\n\n-- cheers, arseny\n\n\n", "msg_date": "Mon, 09 Mar 2020 15:46:39 +0300", "msg_from": "Arseny Sher <a.sher@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: logical copy_replication_slot issues" }, { "msg_contents": "On Mon, 9 Mar 2020 at 21:46, Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n>\n> Masahiko Sawada <masahiko.sawada@2ndquadrant.com> writes:\n>\n> > /*\n> > - * Create logical decoding context, to build the initial snapshot.\n> > + * Create logical decoding context to find start point or, if we don't\n> > + * need it, to 1) bump slot's restart_lsn and xmin 2) check plugin sanity.\n> > */\n> >\n> > Do we need to numbering that despite not referring them?\n>\n> No, it just seemed clearer to me this way. I don't mind removing the\n> numbers if you feel this is better.\n>\n\nOkay.\n\n> > ctx = CreateInitDecodingContext(plugin, NIL,\n> > - false, /* do not build snapshot */\n> > + false, /* do not build data snapshot */\n> > restart_lsn,\n> > logical_read_local_xlog_page, NULL, NULL,\n> > NULL);\n> > I'm not sure this change makes the comment better. Could you elaborate\n> > on the motivation of this change?\n>\n> Well, DecodingContextFindStartpoint always builds a snapshot allowing\n> historical *catalog* lookups. 
This bool controls whether the snapshot\n> should additionally be suitable for looking at the actual data, this is\n> e.g. used by initial data sync in the native logical replication.\n\nOkay.\n\nAnyway, since the patch looks good to me I've marked this patch as\n\"Ready for Committer\". I think we can defer these things to\ncommitters.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 10 Mar 2020 14:58:01 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: logical copy_replication_slot issues" }, { "msg_contents": "Thanks Arseny and Masahiko, I pushed this patch just now. I changed\nsome comments while at it, hopefully they are improvements.\n\nOn 2020-Mar-09, Masahiko Sawada wrote:\n\n> ctx = CreateInitDecodingContext(plugin, NIL,\n> - false, /* do not build snapshot */\n> + false, /* do not build data snapshot */\n> restart_lsn,\n> logical_read_local_xlog_page, NULL, NULL,\n> NULL);\n> \n> I'm not sure this change makes the comment better. Could you elaborate\n> on the motivation of this change?\n\nI addressed this issue by adding a comment in CreateInitDecodingContext\nto explain the parameter, and then reference that comment's terminology\nin this call. I think it ends up clearer overall -- not that this whole\narea is at all particularly clear.\n\nThanks again.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 17 Mar 2020 16:24:48 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: logical copy_replication_slot issues" }, { "msg_contents": "On Wed, 18 Mar 2020 at 04:24, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n>\n> Thanks Arseny and Masahiko, I pushed this patch just now.
I changed\n> some comments while at it, hopefully they are improvements.\n>\n> On 2020-Mar-09, Masahiko Sawada wrote:\n>\n> > ctx = CreateInitDecodingContext(plugin, NIL,\n> > - false, /* do not build snapshot */\n> > + false, /* do not build data\nsnapshot */\n> > restart_lsn,\n> > logical_read_local_xlog_page, NULL,\nNULL,\n> > NULL);\n> >\n> > I'm not sure this change makes the comment better. Could you elaborate\n> > on the motivation of this change?\n>\n> I addressed this issue by adding a comment in CreateInitDecodingContext\n> to explain the parameter, and then reference that comment's terminology\n> in this call. I think it ends up clearer overall -- not that this whole\n> area is at all particularly clear.\n>\n> Thanks again.\n>\n\nThank you for committing the patch! That changes look good to me.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\n\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\nOn Wed, 18 Mar 2020 at 04:24, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> Thanks Arseny and Masahiko, I pushed this patch just now.  I changed\n> some comments while at it, hopefully they are improvements.\n>\n> On 2020-Mar-09, Masahiko Sawada wrote:\n>\n> >     ctx = CreateInitDecodingContext(plugin, NIL,\n> > -                                   false,  /* do not build snapshot */\n> > +                                   false,  /* do not build data snapshot */\n> >                                     restart_lsn,\n> >                                     logical_read_local_xlog_page, NULL, NULL,\n> >                                     NULL);\n> >\n> > I'm not sure this change makes the comment better. 
Could you elaborate\n> > on the motivation of this change?\n>\n> I addressed this issue by adding a comment in CreateInitDecodingContext\n> to explain the parameter, and then reference that comment's terminology\n> in this call.  I think it ends up clearer overall -- not that this whole\n> area is at all particularly clear.\n>\n> Thanks again.\n>Thank you for committing the patch! That changes look good to me.Regards,\n\n-- \nMasahiko Sawada            http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n-- Masahiko Sawada            http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 19 Mar 2020 13:20:57 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: logical copy_replication_slot issues" } ]
[ { "msg_contents": "Hackers,\n\n     Musing some other date-related things I stumbled upon the thought \nthat naming the upcoming release PostgreSQL 20 might be preferable to \nthe current/expected \"PostgreSQL 13\".\n\n\nCons:\n\n  * Discontinuity in versions. 12 -> 20.  Now that we have the precedent \nof 9.6 -> 10 (for very good reasons, I think), this is probably a minor \nissue... Mostly the inconvenience of having to add tests for the skipped \nversions, I believe.\n\n     ¿any others that I don't know about?\n\nPros:\n\n  * Simplified supportability assessment:  PostgreSQL 20, released in \n2020, would be supported until the release of PostgreSQL 25 (late 2025 \nif release cadence is kept as today). Simple and straightforward.\n\n  * We avoid users skipping the release altogether due to superstition \nor analogous reasons ---might be a major issue in some cultures---. \nPostgres 13 would be certainly skipped in production in some \nenvironments that I know about o_0\n\n\nNothing really important, I guess. I think of it as a thought experiment \nmostly, but might spark some ultimately useful debate.\n\n\nThanks,\n\n     / J.L.\n\n\n\n\n", "msg_date": "Sun, 9 Feb 2020 19:28:49 +0100", "msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>", "msg_from_op": true, "msg_subject": "Just for fun: Postgres 20?" }, { "msg_contents": "On 09/02/2020 19:28, Jose Luis Tallon wrote:\n>  * Simplified supportability assessment:  PostgreSQL 20, released in\n> 2020, would be supported until the release of PostgreSQL 25 (late 2025\n> if release cadence is kept as today).
Simple and straightforward.\n\nHow would you handle multiple releases in the same calendar year (such\nas 9.5 and 9.6 were)?\n\n>  * We avoid users skipping the release altogether due to superstition or\n> analogous reasons ---might be a major issue in some cultures---.\n> Postgres 13 would be certainly skipped in production in some\n> environments that I know about o_0\n\nThat's not our problem.\n-- \nVik Fearing\n\n\n", "msg_date": "Sun, 9 Feb 2020 19:54:19 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "Jose Luis Tallon <jltallon@adv-solutions.net> writes:\n>     Musing some other date-related things I stumbled upon the thought \n> that naming the upcoming release PostgreSQL 20 might be preferrable to \n> the current/expected \"PostgreSQL 13\".\n\nSorry, but it's not April 1st yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 09 Feb 2020 14:25:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "From: Jose Luis Tallon <jltallon@adv-solutions.net>\r\n>     Musing some other date-related things I stumbled upon the thought\r\n> that naming the upcoming release PostgreSQL 20 might be preferrable to\r\n> the current/expected \"PostgreSQL 13\".\r\n\r\n+1\r\nUsers can easily know how old/new the release is that they are using.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Sun, 9 Feb 2020 23:44:44 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Just for fun: Postgres 20?" }, { "msg_contents": "And nobody is asking about all the \"missing\" versions like in a big red superstitious database.\n\n\n Am Montag, 10. 
Februar 2020, 00:45:02 MEZ hat tsunakawa.takay@fujitsu.com <tsunakawa.takay@fujitsu.com> Folgendes geschrieben: \n \n From: Jose Luis Tallon <jltallon@adv-solutions.net>\n>      Musing some other date-related things I stumbled upon the thought\n> that naming the upcoming release PostgreSQL 20 might be preferrable to\n> the current/expected \"PostgreSQL 13\".\n\n+1\nUsers can easily know how old/new the release is that they are using.\n\n\nRegards\nTakayuki Tsunakawa\n\n \n\nAnd nobody is asking about all the \"missing\" versions like in a big red superstitious database.\n\n\n\n Am Montag, 10. Februar 2020, 00:45:02 MEZ hat tsunakawa.takay@fujitsu.com <tsunakawa.takay@fujitsu.com> Folgendes geschrieben:\n \n\n\nFrom: Jose Luis Tallon <jltallon@adv-solutions.net>>      Musing some other date-related things I stumbled upon the thought> that naming the upcoming release PostgreSQL 20 might be preferrable to> the current/expected \"PostgreSQL 13\".+1Users can easily know how old/new the release is that they are using.RegardsTakayuki Tsunakawa", "msg_date": "Mon, 10 Feb 2020 11:16:28 +0000 (UTC)", "msg_from": "Wolfgang Wilhelm <wolfgang20121964@yahoo.de>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": ">\n>\n> From: Jose Luis Tallon <jltallon@adv-solutions.net>\n>\n> > Musing some other date-related things I stumbled upon the thought\n> > that naming the upcoming release PostgreSQL 20 might be preferrable to\n> > the current/expected \"PostgreSQL 13\".\n>\n> +1\n>\n> Users can easily know how old/new the release is that they are using.\n>\n>\nThere are multiple pros and cons to this idea. There is an argument since\nwe are on annual releases that 20 makes sense, and (14) would be 21 etc...\nHowever, there is a significant problem with that. 
Our annual releases are\na relatively new thing and I can definitely see a situation in the future\nwhere we move back to non-annual releases to a more conservative timeline.\nFurther, the jump of the number is going to be seen as a marketing ploy and\nif we are going to be doing marketing ploys, then we should have the new\nfeature set to back it up upon release.\n\nJD\n\n\nFrom: Jose Luis Tallon <jltallon@adv-solutions.net>>      Musing some other date-related things I stumbled upon the thought> that naming the upcoming release PostgreSQL 20 might be preferrable to> the current/expected \"PostgreSQL 13\".+1Users can easily know how old/new the release is that they are using.There are multiple pros and cons to this idea. There is an argument since we are on annual releases that 20 makes sense, and (14) would be 21 etc... However, there is a significant problem with that. Our annual releases are a relatively new thing and I can definitely see a situation in the future where we move back to non-annual releases to a more conservative timeline. Further, the jump of the number is going to be seen as a marketing ploy and if we are going to be doing marketing ploys, then we should have the new feature set to back it up upon release.JD", "msg_date": "Tue, 11 Feb 2020 08:55:27 -0800", "msg_from": "Joshua Drake <jd@commandprompt.com>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "This project already tried that: \nhttps://www.postgresql.org/docs/12/history.html#HISTORY-POSTGRES95 \n<https://www.postgresql.org/docs/12/history.html#HISTORY-POSTGRES95> \nDidn't last long... \n\n\n--\n Andreas Joseph Krogh", "msg_date": "Tue, 11 Feb 2020 18:03:17 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <andreas@visena.com>", "msg_from_op": false, "msg_subject": "Sv: Just for fun: Postgres 20?" 
}, { "msg_contents": "I'd rather have releases being made when the software is ready and not when\nthe calendar year mandates it.\nIt seems like a terrible idea.\n\nOn Tue, 11 Feb 2020 at 14:03, Andreas Joseph Krogh <andreas@visena.com>\nwrote:\n\n> This project already tried that:\n> https://www.postgresql.org/docs/12/history.html#HISTORY-POSTGRES95\n> Didn't last long...\n>\n> --\n> Andreas Joseph Krogh\n>\n\nI'd rather have releases being made when the software is ready and not when the calendar year mandates it. It seems like a terrible idea.On Tue, 11 Feb 2020 at 14:03, Andreas Joseph Krogh <andreas@visena.com> wrote:This project already tried that: https://www.postgresql.org/docs/12/history.html#HISTORY-POSTGRES95\nDidn't last long...\n \n\n--\nAndreas Joseph Krogh", "msg_date": "Tue, 11 Feb 2020 14:04:59 -0300", "msg_from": "marcelo zen <mzen@itapua.com.uy>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "marcelo zen escribió:\n> I'd rather have releases being made when the software is ready and not when\n> the calendar year mandates it.\n> It seems like a terrible idea.\n\nBut we do actually release on calendar year. While it seems not\nunreasonable that we might fail to ship in time, that would likely lead\nto one month, two months of delay. Four months? I don't think anybody\neven imagines such a long delay. It would be seen as utter,\nunacceptable failure of our release team.\n\nOthers have commented in this thread that the idea seems ridiculous, and\nI concur. But the reason is not what you say. The reason, I think, is\nthat for years we spent months each time debating what to name the next\nrelease; and only recently, in version 10, we decided to change our\nnumbering scheme so that these pointless discussions are gone for good.\nTo think that just three years after that we're going to waste months\nagain discussing the same topic ...?
Surely not.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 11 Feb 2020 20:07:41 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "On 2/12/20 12:07 AM, Alvaro Herrera wrote:\n> marcelo zen escribió:\n>> I'd rather have releases being made when the software is ready and not when\n>> the calendar year mandates it.\n>> It seems like a terrible idea.\n> \n> But we do actually release on calendar year. While it seems not\n> unreasonable that we might fail to ship in time, that would likely lead\n> to one month, two months of delay. Four months? I don't think anybody\n> even imagines such a long delay. It would be seen as utter,\n> unacceptable failure of our release team.\n\nIt has actually happened once: PostgreSQL 9.5 was released in 2016-01-07.\n\n> Others have commented in this thread that the idea seems ridiculous, and\n> I concur. But the reason is not what you say. The reason, I think, is\n> that for years we spent months each time debating what to name the next\n> release; and only recently, in version 10, we decided to change our\n> numbering scheme so that these pointless discussions are gone for good.\n> To think that just three years after that we're going to waste months\n> again discussing the same topic ...? Surely not.\n\nAgreed, and personally I do not see enough benefit from moving to 20.X \nor 2020.X for it to be worth re-opening this discussion. The bikeshed is \nalready painted.\n\nAndreas\n\n\n", "msg_date": "Wed, 12 Feb 2020 14:52:53 +0100", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "Andreas Karlsson <andreas@proxel.se> writes:\n> On 2/12/20 12:07 AM, Alvaro Herrera wrote:\n>> But we do actually release on calendar year.
While it seems not\n>> unreasonable that we might fail to ship in time, that would likely lead\n>> to one month, two months of delay. Four months? I don't think anybody\n>> even imagines such a long delay. It would be seen as utter,\n>> unacceptable failure of our release team.\n\n> It has actually happened once: PostgreSQL 9.5 was released in 2016-01-07.\n\nYeah; I don't think it's *that* unlikely for it to happen again. But\nmy own principal concern about this mirrors what somebody else already\npointed out: the one-major-release-per-year schedule is not engraved on\nany stone tablets. So I don't want to go to a release numbering system\nthat depends on us doing it that way for the rest of time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Feb 2020 09:46:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "Andreas Karlsson escribió:\n> On 2/12/20 12:07 AM, Alvaro Herrera wrote:\n> > marcelo zen escribió:\n> > > I'd rather have releases being made when the software is ready and not when\n> > > the calendar year mandates it.\n> > > It seems like a terrible idea.\n> > \n> > But we do actually release on calendar year. While it seems not\n> > unreasonable that we might fail to ship in time, that would likely lead\n> > to one month, two months of delay. Four months? I don't think anybody\n> > even imagines such a long delay. It would be seen as utter,\n> > unacceptable failure of our release team.\n> \n> It has actually happened once: PostgreSQL 9.5 was released in 2016-01-07.\n\nWe didn't have a formal release team back then :-) It started with 9.6.\nSome history: https://wiki.postgresql.org/wiki/RMT Anyway, I concede\nthat it's too recent history to say that this will never happen again.\n\nRetroactively we could still have named \"Postgres 15\" the one released\non January 2016.
It was clearly the development line made during 2015,\nit just got a little bit delayed.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 12 Feb 2020 13:22:58 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "On Wed, Feb 12, 2020 at 3:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> Yeah; I don't think it's *that* unlikely for it to happen again. But\n> my own principal concern about this mirrors what somebody else already\n> pointed out: the one-major-release-per-year schedule is not engraved on\n> any stone tablets. So I don't want to go to a release numbering system\n> that depends on us doing it that way for the rest of time.\n>\n>\nWe could use YYYY as version identifier, so people will not expect\ncorrelative numbering. SQL Server is being released every couple of years\nand they are using this naming schema. The problem would be releasing twice\nthe same year, but how likely would that be?\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Wed, Feb 12, 2020 at 3:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nYeah; I don't think it's *that* unlikely for it to happen again.  But\nmy own principal concern about this mirrors what somebody else already\npointed out: the one-major-release-per-year schedule is not engraved on\nany stone tablets.  So I don't want to go to a release numbering system\nthat depends on us doing it that way for the rest of time.We could use YYYY as version identifier, so people will not expect correlative numbering. SQL Server is being released every couple of years and they are using this naming schema.
The problem would be releasing twice the same year, but how likely would that be?Regards,Juan José Santamaría Flecha", "msg_date": "Wed, 12 Feb 2020 17:25:15 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "On Wed, 12 Feb 2020 at 08:28, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> marcelo zen escribió:\n> > I'd rather have releases being made when the software is ready and not\n> when\n> > the calendar year mandates it.\n> > It seems like a terrible idea.\n>\n> But we do actually release on calendar year. While it seems not\n> unreasonable that we might fail to ship in time, that would likely lead\n> to one month, two months of delay. Four months? I don't think anybody\n> even imagines such a long delay. It would be seen as utter,\n> unacceptable failure of our release team.\n>\n\nAll said, I think there's some merit to avoiding a PostgreSQL 13 release,\nbecause\nthere's enough superstition out there about the infamous \"number 13.\"\n\nPerhaps we could avert it by doing an \"April Fool's Postgres 13\" release?\n-- \nWhen confronted by a difficult problem, solve it by reducing it to the\nquestion, \"How would the Lone Ranger handle this?\"\n\nOn Wed, 12 Feb 2020 at 08:28, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:marcelo zen escribió:\n> I'd rather have releases being made when the software is ready and not when\n> the calendar year mandates it.\n> It seems like a terrible idea.\n\nBut we do actually release on calendar year.  While it seems not\nunreasonable that we might fail to ship in time, that would likely lead\nto one month, two months of delay.  Four months?  I don't think anybody\neven imagines such a long delay.  
It would be seen as utter,\nunacceptable failure of our release team.All said, I think there's some merit to avoiding a PostgreSQL 13 release, becausethere's enough superstition out there about the infamous \"number 13.\"Perhaps we could avert it by doing an \"April Fool's Postgres 13\" release?-- When confronted by a difficult problem, solve it by reducing it to thequestion, \"How would the Lone Ranger handle this?\"", "msg_date": "Wed, 12 Feb 2020 12:32:27 -0500", "msg_from": "Christopher Browne <cbbrowne@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "On Wed, 2020-02-12 at 12:32 -0500, Christopher Browne wrote:\n> All said, I think there's some merit to avoiding a PostgreSQL 13 release, because\n> there's enough superstition out there about the infamous \"number 13.\"\n\nIt would make me sad if the project kowtowed to superstition like Oracle did.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 12 Feb 2020 20:58:24 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "On Wed, 12 Feb 2020 at 14:58, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n> On Wed, 2020-02-12 at 12:32 -0500, Christopher Browne wrote:\n> > All said, I think there's some merit to avoiding a PostgreSQL 13\n> release, because\n> > there's enough superstition out there about the infamous \"number 13.\"\n>\n> It would make me sad if the project kowtowed to superstition like Oracle\n> did.\n>\n\nAgreed.
That being said, everybody knows you can't avoid the curse of 13 by\nre-numbering it - you simply have to avoid the version/floor/day/whatever\nafter 12.\n\nOn Wed, 12 Feb 2020 at 14:58, Laurenz Albe <laurenz.albe@cybertec.at> wrote:On Wed, 2020-02-12 at 12:32 -0500, Christopher Browne wrote:\n> All said, I think there's some merit to avoiding a PostgreSQL 13 release, because\n> there's enough superstition out there about the infamous \"number 13.\"\n\nIt would make me sad if the project kowtowed to superstition like Oracle did.\nAgreed. That being said, everybody knows you can't avoid the curse of 13 by re-numbering it - you simply have to avoid the version/floor/day/whatever after 12.", "msg_date": "Wed, 12 Feb 2020 15:02:53 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "On Wed, Feb 12, 2020 at 05:25:15PM +0100, Juan José Santamaría Flecha wrote:\n> On Wed, Feb 12, 2020 at 3:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> >\n> > Yeah; I don't think it's *that* unlikely for it to happen again. But\n> > my own principal concern about this mirrors what somebody else already\n> > pointed out: the one-major-release-per-year schedule is not engraved on\n> > any stone tablets. So I don't want to go to a release numbering system\n> > that depends on us doing it that way for the rest of time.\n> >\n> >\n> We could use YYYY as version identifier, so people will not expect\n> correlative numbering. SQL Server is being released every couple of years\n> and they are using this naming schema.
The problem would be releasing twice\n> the same year, but how likely would that be?\n\nWe've released more than one major version in a year before, so we\nhave a track record of that actually happening.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Wed, 12 Feb 2020 22:10:29 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "On 12/02/2020 21:10, David Fetter wrote:\n> On Wed, Feb 12, 2020 at 05:25:15PM +0100, Juan José Santamaría Flecha wrote:\n>> On Wed, Feb 12, 2020 at 3:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>>>\n>>> Yeah; I don't think it's *that* unlikely for it to happen again. But\n>>> my own principal concern about this mirrors what somebody else already\n>>> pointed out: the one-major-release-per-year schedule is not engraved on\n>>> any stone tablets. So I don't want to go to a release numbering system\n>>> that depends on us doing it that way for the rest of time.\n>>>\n>>>\n>> We could you use YYYY as version identifier, so people will not expect\n>> correlative numbering. SQL Server is being released every couple of years\n>> and they are using this naming shema. The problem would be releasing twice\n>> the same year, but how likely would that be?\n> \n> We've released more than one major version in a year before, so we\n> have a track record of that actually happening.\n\nBesides what everyone else has said, it's not that long since the\nnumbering scheme was changed for major versions. 
Changing it again so\nsoon would, IMHO, look confused at best.\n\nRay.\n\n-- \nRaymond O'Donnell // Galway // Ireland\nray@rodonnell.ie\n\n\n", "msg_date": "Wed, 12 Feb 2020 21:26:17 +0000", "msg_from": "Ray O'Donnell <ray@rodonnell.ie>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "Hi,\n\nOn Wed, Feb 12, 2020 at 02:52:53PM +0100, Andreas Karlsson wrote:\n> On 2/12/20 12:07 AM, Alvaro Herrera wrote:\n> > marcelo zen escribió:\n> > > I'd rather have releases being made when the software is ready and\n> > > not when the calendar year mandates it. It seems like a terrible\n> > > idea.\n> > \n> > But we do actually release on calendar year. While it seems not\n> > unreasonable that we might fail to ship in time, that would likely lead\n> > to one month, two months of delay. Four months? I don't think anybody\n> > even imagines such a long delay. It would be seen as utter,\n> > unacceptable failure of our release team.\n> \n> It has actually happened once: PostgreSQL 9.5 was released in 2016-01-07.\n\nIt was my understanding that this prompted us to form the release team,\nwhich has since done a great job of making sure that this does not\nhappen again.\n\nOf course, this does not mean it won't ever happen again. Even then,\nshipping PostgreSQL 23 at the beginning of 2024 wouldn't be a total\ndisaster in my opinion.\n\nThe fact that the community might want to re-think the major release\ncycle at some point and not be tied to yearly release numbers is the\nmost convincing argument against it.\n\nThat, and the PR-style \"sell-out\" it might be regarded as.\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr.
Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n", "msg_date": "Wed, 12 Feb 2020 22:38:19 +0100", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "On Wed, Feb 12, 2020 at 09:46:48AM -0500, Tom Lane wrote:\n> Yeah; I don't think it's *that* unlikely for it to happen again. But\n> my own principal concern about this mirrors what somebody else already\n> pointed out: the one-major-release-per-year schedule is not engraved on\n> any stone tablets. So I don't want to go to a release numbering system\n> that depends on us doing it that way for the rest of time.\n\nYeah, it is good to keep some flexibility here, so my take is that\nthere is little advantage in changing again the version numbering.\nNote that any change like that induces an extra cost for anybody\nmaintaining builds of Postgres or any upgrade logic where the decision\ndepends on the version number of the origin build and the target\nbuild.\n--\nMichael", "msg_date": "Thu, 13 Feb 2020 12:44:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "On Thu, Feb 13, 2020 at 2:14 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Feb 12, 2020 at 09:46:48AM -0500, Tom Lane wrote:\n> > Yeah; I don't think it's *that* unlikely for it to happen again. But\n> > my own principal concern about this mirrors what somebody else already\n> > pointed out: the one-major-release-per-year schedule is not engraved on\n> > any stone tablets.
So I don't want to go to a release numbering system\n> > that depends on us doing it that way for the rest of time.\n>\n> Yeah, it is good to keep some flexibility here, so my take is that\n> there is little advantage in changing again the version numbering.\n> Note that any change like that induces an extra cost for anybody\n> maintaining builds of Postgres or any upgrade logic where the decision\n> depends on the version number of the origin build and the target\n> build.\n\n+1\n\nI also object because 20 is *my* unlucky number ...\n\ncheers\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 14 Feb 2020 08:04:09 +1030", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> I also object because 20 is *my* unlucky number ...\n\nNot sure how serious Andrew is being here, but it does open up an\nimportant point: there are varying opinions on which numbers are unlucky.\nThe idea that 13 is unlucky is Western, and maybe even only common in\nEnglish-speaking countries. In Asia, numbers containing the digit 4\nare considered unlucky [1], and there are probably other rules in other\ncultures. If we establish a precedent that we'll skip release numbers\nfor non-technical reasons, I'm afraid we'll be right back in the mess\nwe sought to avoid, whereby nearly every year we had an argument about\nwhat the next release number would be. So let's not go there.\n\n\t\t\tregards, tom lane\n\n[1] https://en.wikipedia.org/wiki/Tetraphobia\n\n\n", "msg_date": "Fri, 14 Feb 2020 19:18:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" 
}, { "msg_contents": "On Fri, Feb 14, 2020 at 4:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Not sure how serious Andrew is being here, but it does open up an\n> important point: there are varying opinions on which numbers are unlucky.\n> The idea that 13 is unlucky is Western, and maybe even only common in\n> English-speaking countries.\n\nI would wager that this superstition is the main reason why Oracle 12c\nwas followed by Oracle 18c rather than Oracle 13c. I have no evidence\nfor this -- I take it on faith.\n\nI feel that I should take the proposal seriously for at least a\nmoment. The proposal doesn't affect anybody who isn't into numerology.\nAt the same time, it makes the superstitious people happy (leaving\naside the icosaphobes). Airlines do this with row numbers -- what's\nthe harm?\n\nThere is a real downside to this, though. It is a bad idea, even on\nits own terms. If we take the idea seriously, then it has every chance\nof being noticed and becoming a big distraction in all sorts of ways.\nThat might happen anyway, but I think it's less likely this way.\n\nISTM that the smart thing to do is to ignore it completely. Don't even\ntry to preempt a silly headline written by some tech journalist\nwiseacre.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 14 Feb 2020 17:03:13 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "On Thu, Feb 13, 2020 at 1:34 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> I also object because 20 is *my* unlucky number ...\n\nI don't think we're going to do this, so you don't have to worry on that score.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 14 Feb 2020 17:09:42 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" 
}, { "msg_contents": "On Fri, Feb 14, 2020 at 7:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > I also object because 20 is *my* unlucky number ...\n>\n> Not sure how serious Andrew is being here, but it does open up an\n> important point: there are varying opinions on which numbers are unlucky.\n> The idea that 13 is unlucky is Western, and maybe even only common in\n> English-speaking countries. In Asia, numbers containing the digit 4\n> are considered unlucky [1], and there are probably other rules in other\n> cultures. If we establish a precedent that we'll skip release numbers\n> for non-technical reasons, I'm afraid we'll be right back in the mess\n> we sought to avoid, whereby nearly every year we had an argument about\n> what the next release number would be. So let's not go there.\n>\n>\n\n\nYes, I was being flippant, in an attempt to make the exact point\nyou're making cogently but less pithily here.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 15 Feb 2020 17:40:02 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "On Wed, Feb 12, 2020 at 10:38:19PM +0100, Michael Banck wrote:\n> Hi,\n> \n> On Wed, Feb 12, 2020 at 02:52:53PM +0100, Andreas Karlsson wrote:\n> > On 2/12/20 12:07 AM, Alvaro Herrera wrote:\n> > > marcelo zen escribió:\n> > > > I'd rather have releases being made when the software is ready and\n> > > > not when the calendar year mandates it. It seems like a terrible\n> > > > idea.\n> > > \n> > > But we do actually release on calendar year. While it seems not\n> > > unreasonable that we might fail to ship in time, that would likely lead\n> > > to one month, two months of delay. Four months? I don't think anybody\n> > > even imagines such a long delay. 
It would be seen as utter,\n> > > unacceptable failure of our release team.\n> > \n> > It has actually happened once: PostgreSQL 9.5 was released in 2016-01-07.\n> \n> It was my understanding that this prompted us to form the release team,\n> which has since done a great job of making sure that this does not\n> happen again.\n\nFYI, the delay for 9.5 was because the compression method used for JSONB\nwas discovered to be sub-optimal in August/September. While a release\nteam might have gotten the release out before January, that isn't\ncertain.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 16 Mar 2020 17:08:58 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "On 15.02.2020 1:18, Tom Lane wrote:\n> The idea that 13 is unlucky is Western, and maybe even only common in \n> English-speaking countries. \n\nNumber 13 (especially Friday 13) is also considered unlucky in the Czech \nRepublic (central Europe, Slavic language).\n\n--\n\nJiří.\n\n\n\n", "msg_date": "Mon, 25 May 2020 11:05:09 +0200", "msg_from": "=?UTF-8?B?SmnFmcOtIEZlamZhcg==?= <jurafejfar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "Please don't take personal but when you open a discussion like that on number 13 then you are doing something very christian centric and forget the rest of the world. As there are more cultural spheres than the christian one on this planet can you please elaborate the next number which is acceptable (PostgreSQL) world wide? \nMay I assist you a little bit? The number 4 in japanese and chinese are spoken the same way as the word for death. 14 is spoken as ten-four. That'd a reason to skip PostgreSQL ten-death a.k.a. 14, too, isn't it? 
You don't want a PG death version, do you? By the way: In Japan or in jewish tradition 13 is a lucky number (see Freitag, der 13. – Wikipedia, sorry, german only). Why do you want to skip a lucky number? Do you prefer PostgreSQL ju-san because that's a lucky number instead of PostgreSQL 13 because that's a unlucky one?\n\n\n\n\n\n\n Am Montag, 25. Mai 2020, 11:04:53 MESZ hat Jiří Fejfar <jurafejfar@gmail.com> Folgendes geschrieben: \n \n On 15.02.2020 1:18, Tom Lane wrote:\n> The idea that 13 is unlucky is Western, and maybe even only common in \n> English-speaking countries. \n\nNumber 13 (especially Friday 13) is also considered unlucky In Czech \nrepublic (central Europe, Slavic language).\n\n--\n\nJiří.\n\n\n\n", "msg_date": "Mon, 25 May 2020 18:33:57 +0000 (UTC)", "msg_from": "Wolfgang Wilhelm <wolfgang20121964@yahoo.de>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, {
"msg_contents": "On Mon, May 25, 2020 at 11:05:09AM +0200, Jiří Fejfar wrote:\n> On 15.02.2020 1:18, Tom Lane wrote:\n> > The idea that 13 is unlucky is Western, and maybe even only common in\n> > English-speaking countries.\n> \n> Number 13 (especially Friday 13) is also considered unlucky In Czech\n> republic (central Europe, Slavic language).\n\nYeah, it is in a number of places, and we have discussed it, but we have\ndecided to stay with 13.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 25 May 2020 21:55:12 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "On 26.05.2020 3:55, Bruce Momjian wrote:\n> On Mon, May 25, 2020 at 11:05:09AM +0200, Jiří Fejfar wrote:\n>> On 15.02.2020 1:18, Tom Lane wrote:\n>>> The idea that 13 is unlucky is Western, and maybe even only common in\n>>> English-speaking countries.\n>> Number 13 (especially Friday 13) is also considered unlucky In Czech\n>> republic (central Europe, Slavic language).\n> Yeah, it is in a number of places, and we have discussed it, but we have\n> decided to stay with 13.\n\nI am definitely not against PG13 nor any other number. I just wanted to \nsay, in response to the part of original message from Tom Lane, that \nidea that 13 is unlucky is not only valid in English-speaking countries.\n\nIn fact I am trying to test if I am able to discuss something in mailing \nlist. 
I would like to discuss PG extensions related topics later, but I \nfeel I do not have enough experience with such communication using email \n(conversation threading, reply to the part of message only, bottom \nposting, formatting, recipients) although I am fascinated by what sort \nof complicated issues is possible to solve this way in community. I \nchose this thread because of its subject \"for fun...\" to start with \nsomething simpler than PG extensions. I should have used some emoji \nprobably...\n\n--\n\nJiří\n\n\n\n", "msg_date": "Thu, 28 May 2020 08:14:05 +0200", "msg_from": "=?UTF-8?B?SmnFmcOtIEZlamZhcg==?= <jurafejfar@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "On Wed, Feb 12, 2020 at 11:25 AM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n> On Wed, Feb 12, 2020 at 3:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah; I don't think it's *that* unlikely for it to happen again. But\n>> my own principal concern about this mirrors what somebody else already\n>> pointed out: the one-major-release-per-year schedule is not engraved on\n>> any stone tablets. So I don't want to go to a release numbering system\n>> that depends on us doing it that way for the rest of time.\n>\n> We could you use YYYY as version identifier, so people will not expect correlative numbering. SQL Server is being released every couple of years and they are using this naming shema. The problem would be releasing twice the same year, but how likely would that be?\n\nAs has already been pointed out, it could definitely happen, but we\ncould solve that by just using a longer version number, say, including\nthe month and, in case we ever do multiple major releases in the same\nmonth, also the day. In fact, we might as well take it one step\nfurther and use the same format for the release number that we use for\nCATALOG_VERSION_NO: YYYYMMDDN. 
So this fall, piggybacking on the\nsuccess of PostgreSQL 10, 11, and 12, we could look then release\nPostgreSQL 202009241 or so. As catversion.h wisely points out,\nthere's room to hope that we'll never commit 10 independent sets of\ncatalog changes on the same day, and I think we can also hope we'll\nnever do more than ten major releases on the same day. Admittedly,\nskipping the version number by 200 million or so might seem like an\noverreaction to the purported unluckiness of the number 13, but just\nthink how many OTHER unlucky numbers we'd also skip in the progress.\n\n/me runs away and hides.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 1 Jun 2020 15:11:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> As has already been pointed out, it could definitely happen, but we\n> could solve that by just using a longer version number, say, including\n> the month and, in case we ever do multiple major releases in the same\n> month, also the day. In fact, we might as well take it one step\n> further and use the same format for the release number that we use for\n> CATALOG_VERSION_NO: YYYYMMDDN. So this fall, piggybacking on the\n> success of PostgreSQL 10, 11, and 12, we could look then release\n> PostgreSQL 202009241 or so.\n\nBut then where do you put the minor number for maintenance releases?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jun 2020 15:20:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" 
}, { "msg_contents": "On Mon, Jun 1, 2020 at 3:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > As has already been pointed out, it could definitely happen, but we\n> > could solve that by just using a longer version number, say, including\n> > the month and, in case we ever do multiple major releases in the same\n> > month, also the day. In fact, we might as well take it one step\n> > further and use the same format for the release number that we use for\n> > CATALOG_VERSION_NO: YYYYMMDDN. So this fall, piggybacking on the\n> > success of PostgreSQL 10, 11, and 12, we could look then release\n> > PostgreSQL 202009241 or so.\n>\n> But then where do you put the minor number for maintenance releases?\n\nOh, well that's easy. The first maintenance release would just be 202009241.1.\n\nUnless, of course, we want to simplify things by using the same format\nfor both parts of the version number. Then, supposing the first\nmaintenance release follows the major release by a month or so, it\nwould be PostgreSQL 202009241.202010291 or something of this sort.\n\nIt's hard to agree on anything around here but I suspect we can come\nto near-unanimous agreement on the topic of how much merit this\nproposal has.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 2 Jun 2020 13:45:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" 
}, { "msg_contents": "On Tue, Jun 2, 2020 at 2:45 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Jun 1, 2020 at 3:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > As has already been pointed out, it could definitely happen, but we\n> > > could solve that by just using a longer version number, say, including\n> > > the month and, in case we ever do multiple major releases in the same\n> > > month, also the day. In fact, we might as well take it one step\n> > > further and use the same format for the release number that we use for\n> > > CATALOG_VERSION_NO: YYYYMMDDN. So this fall, piggybacking on the\n> > > success of PostgreSQL 10, 11, and 12, we could look then release\n> > > PostgreSQL 202009241 or so.\n> >\n> > But then where do you put the minor number for maintenance releases?\n>\n> Oh, well that's easy. The first maintenance release would just be\n> 202009241.1.\n>\n> Unless, of course, we want to simplify things by using the same format\n> for both parts of the version number. Then, supposing the first\n> maintenance release follows the major release by a month or so, it\n> would be PostgreSQL 202009241.202010291 or something of this sort.\n>\nSince there is a proposal to have a 64-bit transaction ID, we could rather\nhave a 64-bit random number which could solve all of these problems. 
:P\nAnd then if I ask my customer what Postgres version is he/she using, it\ncould be a postgres fun ride.\n\n>\n> It's hard to agree on anything around here but I suspect we can come\n> to near-unanimous agreement on the topic of how much merit this\n> proposal has.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n\n-- \nRegards,\nAvinash Vallarapu\n\nOn Tue, Jun 2, 2020 at 2:45 PM Robert Haas <robertmhaas@gmail.com> wrote:On Mon, Jun 1, 2020 at 3:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > As has already been pointed out, it could definitely happen, but we\n> > could solve that by just using a longer version number, say, including\n> > the month and, in case we ever do multiple major releases in the same\n> > month, also the day. In fact, we might as well take it one step\n> > further and use the same format for the release number that we use for\n> > CATALOG_VERSION_NO: YYYYMMDDN. So this fall, piggybacking on the\n> > success of PostgreSQL 10, 11, and 12, we could look then release\n> > PostgreSQL 202009241 or so.\n>\n> But then where do you put the minor number for maintenance releases?\n\nOh, well that's easy. The first maintenance release would just be 202009241.1.\n\nUnless, of course, we want to simplify things by using the same format\nfor both parts of the version number. Then, supposing the first\nmaintenance release follows the major release by a month or so, it\nwould be PostgreSQL 202009241.202010291 or something of this sort.Since there is a proposal to have a 64-bit transaction ID, we could rather have a 64-bit random number which could solve all of these problems. 
:P\nAnd then if I ask my customer what Postgres version is he/she using, it\ncould be a postgres fun ride.\n\n>\n> It's hard to agree on anything around here but I suspect we can come\n> to near-unanimous agreement on the topic of how much merit this\n> proposal has.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n\n-- \nRegards,\nAvinash Vallarapu\n\n", "msg_date": "Thu, 4 Jun 2020 12:23:03 -0300", "msg_from": "Avinash Kumar <avinash.vallarapu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Just for fun: Postgres 20?" } ]
[ { "msg_contents": "I believe the 2nd hunk should reset node->hashnulls, rather than reset\n->hashtable a 2nd time:\n\n@@ -505,7 +505,10 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext)\n if (nbuckets < 1)\n nbuckets = 1;\n \n- node->hashtable = BuildTupleHashTable(node->parent,\n+ if (node->hashtable)\n+ ResetTupleHashTable(node->hashtable);\n+ else\n+ node->hashtable = BuildTupleHashTableExt(node->parent,\n node->descRight,\n ncols,\n node->keyColIdx,\n...\n\n@@ -527,7 +531,11 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext)\n if (nbuckets < 1)\n nbuckets = 1;\n }\n- node->hashnulls = BuildTupleHashTable(node->parent,\n+\n+ if (node->hashnulls)\n+ ResetTupleHashTable(node->hashtable);\n+ else\n+ node->hashnulls = BuildTupleHashTableExt(node->parent,\n node->descRight,\n ncols,\n node->keyColIdx,\n\nAdded here:\n\ncommit 356687bd825e5ca7230d43c1bffe7a59ad2e77bd\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Sat Feb 9 00:35:57 2019 -0800\n\n Reset, not recreate, execGrouping.c style hashtables.\n\n\n", "msg_date": "Sun, 9 Feb 2020 21:25:47 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "subplan resets wrong hashtable" }, { "msg_contents": "Hi,\n\nOn 2020-02-09 21:25:47 -0600, Justin Pryzby wrote:\n> I believe the 2nd hunk should reset node->hashnulls, rather than reset\n> ->hashtable a 2nd time:\n> \n> @@ -505,7 +505,10 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext)\n> if (nbuckets < 1)\n> nbuckets = 1;\n> \n> - node->hashtable = BuildTupleHashTable(node->parent,\n> + if (node->hashtable)\n> + ResetTupleHashTable(node->hashtable);\n> + else\n> + node->hashtable = BuildTupleHashTableExt(node->parent,\n> node->descRight,\n> ncols,\n> node->keyColIdx,\n> ...\n> \n> @@ -527,7 +531,11 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext)\n> if (nbuckets < 1)\n> nbuckets = 1;\n> }\n> - node->hashnulls = BuildTupleHashTable(node->parent,\n> +\n> + if 
(node->hashnulls)\n> + ResetTupleHashTable(node->hashtable);\n> + else\n> + node->hashnulls = BuildTupleHashTableExt(node->parent,\n> node->descRight,\n> ncols,\n> node->keyColIdx,\n\nUgh, that indeed looks wrong. Did you check whether it can actively\ncause wrong query results? If so, did you do theoretically, or got to a\nquery returning wrong results?\n\n- Andres\n\n\n", "msg_date": "Sun, 9 Feb 2020 20:01:26 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: subplan resets wrong hashtable" }, { "msg_contents": "On Sun, Feb 09, 2020 at 08:01:26PM -0800, Andres Freund wrote:\n> Ugh, that indeed looks wrong. Did you check whether it can actively\n> cause wrong query results? If so, did you do theoretically, or got to a\n> query returning wrong results?\n\nNo, I only noticed while reading code.\n\nI tried briefly to find a plan that looked like what I thought might be broken,\nbut haven't found anything close.\n\nJustin\n\n\n", "msg_date": "Sun, 9 Feb 2020 22:05:08 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: subplan resets wrong hashtable" }, { "msg_contents": "On Sun, Feb 09, 2020 at 08:01:26PM -0800, Andres Freund wrote:\n> Ugh, that indeed looks wrong. Did you check whether it can actively\n> cause wrong query results? If so, did you do theoretically, or got to a\n> query returning wrong results?\n\nActually .. 
I can \"theoretically\" prove that there's no wrong results from that\npatch...since in that file it has no effect, the tested variables being zeroed\nfew lines earlier:\n\n @@ -499,51 +499,60 @@ buildSubPlanHash(SubPlanState *node, ExprContext *econtext)\n* node->hashtable = NULL;\n* node->hashnulls = NULL;\n node->havehashrows = false;\n node->havenullrows = false;\n \n nbuckets = (long) Min(planstate->plan->plan_rows, (double) LONG_MAX);\n if (nbuckets < 1)\n nbuckets = 1;\n \n - node->hashtable = BuildTupleHashTable(node->parent,\n - node->descRight,\n - ncols,\n - node->keyColIdx,\n - node->tab_eq_funcoids,\n - node->tab_hash_funcs,\n - nbuckets,\n - 0,\n - node->hashtablecxt,\n - node->hashtempcxt,\n - false);\n*+ if (node->hashtable)\n + ResetTupleHashTable(node->hashtable);\n + else\n + node->hashtable = BuildTupleHashTableExt(node->parent,\n \n ...\n*+ if (node->hashnulls)\n + ResetTupleHashTable(node->hashtable);\n + else\n + node->hashnulls = BuildTupleHashTableExt(node->parent,\n + node->descRight,\n\n\n\n\n", "msg_date": "Sun, 9 Feb 2020 22:53:08 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: subplan resets wrong hashtable" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sun, Feb 09, 2020 at 08:01:26PM -0800, Andres Freund wrote:\n>> Ugh, that indeed looks wrong. Did you check whether it can actively\n>> cause wrong query results? If so, did you do theoretically, or got to a\n>> query returning wrong results?\n\n> Actually .. I can \"theoretically\" prove that there's no wrong results from that\n> patch...since in that file it has no effect, the tested variables being zeroed\n> few lines earlier:\n\nRight. So the incorrect ResetTupleHashTable call is unreachable\n(and a look at the code coverage report confirms that). The whole\nthing obviously is a bit hasty and unreviewed, but it doesn't have\na live bug AFAICS ... 
or at least, if there's a bug, it's a memory\nleakage issue across repeat executions, not a crash hazard. I'm\nnot too clear on whether the context reset just above those pointer\nassignments will get rid of all traces of the old hash tables,\nbut it sort of looks like it might not anymore.\n\nAnyway, not going to hold up the releases for a fix for this.\nWe've lived with it for a year, so it can wait another quarter.\n\n\t\t\tregards, tom lane\n\n\n\n", "msg_date": "Mon, 10 Feb 2020 17:08:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: subplan resets wrong hashtable" }, { "msg_contents": "I wrote:\n> Right. So the incorrect ResetTupleHashTable call is unreachable\n> (and a look at the code coverage report confirms that). The whole\n> thing obviously is a bit hasty and unreviewed, but it doesn't have\n> a live bug AFAICS ... or at least, if there's a bug, it's a memory\n> leakage issue across repeat executions, not a crash hazard.\n\nFor the archives' sake: this *is* a memory leak, and we dealt with\nit at 58c47ccfff20b8c125903482725c1dbfd30beade.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 29 Feb 2020 14:05:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: subplan resets wrong hashtable" } ]
[ { "msg_contents": "Hi,\n\nI found that pg_basebackup -F plain -R *overwrites* postgresql.auto.conf\ntaken from the primary server with new primary_conninfo setting,\nwhile pg_basebackup -F tar -R just *appends* it into the file. I think that\nthis is a bug and pg_basebackup -F plain -R should *append* the setting.\nThought?\n\nI attached the patch to fix the bug. This patch should be back-patch to\nv12.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters", "msg_date": "Mon, 10 Feb 2020 16:58:56 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "pg_basebackup -F plain -R overwrites postgresql.auto.conf" }, { "msg_contents": "Hello\n\nSeems bug was introduced in caba97a9d9f4d4fa2531985fd12d3cd823da06f3 - in HEAD only\n\nIn REL_12_STABLE we have:\n\n\tbool\t\tis_recovery_guc_supported = true;\n\n\tif (PQserverVersion(conn) < MINIMUM_VERSION_FOR_RECOVERY_GUC)\n\t\tis_recovery_guc_supported = false;\n\n\tsnprintf(filename, MAXPGPATH, \"%s/%s\", basedir,\n\t\t\t is_recovery_guc_supported ? \"postgresql.auto.conf\" : \"recovery.conf\");\n\n\tcf = fopen(filename, is_recovery_guc_supported ? 
\"a\" : \"w\");\n\nIt looks correct: append mode for postgresql.auto.conf\n\nIn HEAD version is_recovery_guc_supported variable was replaced to inversed use_recovery_conf without change fopen mode.\n\nregards, Sergei\n\n\n", "msg_date": "Mon, 10 Feb 2020 11:23:49 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup -F plain -R overwrites postgresql.auto.conf" }, { "msg_contents": "\n\nOn 2020/02/10 17:23, Sergei Kornilov wrote:\n> Hello\n> \n> Seems bug was introduced in caba97a9d9f4d4fa2531985fd12d3cd823da06f3 - in HEAD only\n> \n> In REL_12_STABLE we have:\n> \n> \tbool\t\tis_recovery_guc_supported = true;\n> \n> \tif (PQserverVersion(conn) < MINIMUM_VERSION_FOR_RECOVERY_GUC)\n> \t\tis_recovery_guc_supported = false;\n> \n> \tsnprintf(filename, MAXPGPATH, \"%s/%s\", basedir,\n> \t\t\t is_recovery_guc_supported ? \"postgresql.auto.conf\" : \"recovery.conf\");\n> \n> \tcf = fopen(filename, is_recovery_guc_supported ? \"a\" : \"w\");\n> \n> It looks correct: append mode for postgresql.auto.conf\n> \n> In HEAD version is_recovery_guc_supported variable was replaced to inversed use_recovery_conf without change fopen mode.\n\nYes! 
Thanks for pointing out that!\nSo the patch needs to be applied only in master.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Mon, 10 Feb 2020 17:41:40 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: pg_basebackup -F plain -R overwrites postgresql.auto.conf" }, { "msg_contents": "On Mon, Feb 10, 2020 at 9:41 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/02/10 17:23, Sergei Kornilov wrote:\n> > Hello\n> >\n> > Seems bug was introduced in caba97a9d9f4d4fa2531985fd12d3cd823da06f3 - in HEAD only\n> >\n> > In REL_12_STABLE we have:\n> >\n> > bool is_recovery_guc_supported = true;\n> >\n> > if (PQserverVersion(conn) < MINIMUM_VERSION_FOR_RECOVERY_GUC)\n> > is_recovery_guc_supported = false;\n> >\n> > snprintf(filename, MAXPGPATH, \"%s/%s\", basedir,\n> > is_recovery_guc_supported ? \"postgresql.auto.conf\" : \"recovery.conf\");\n> >\n> > cf = fopen(filename, is_recovery_guc_supported ? \"a\" : \"w\");\n> >\n> > It looks correct: append mode for postgresql.auto.conf\n> >\n> > In HEAD version is_recovery_guc_supported variable was replaced to inversed use_recovery_conf without change fopen mode.\n>\n> Yes! Thanks for pointing out that!\n> So the patch needs to be applied only in master.\n\n+1. 
We should absolutely not be overwriting the auto conf.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 10 Feb 2020 13:35:29 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup -F plain -R overwrites postgresql.auto.conf" }, { "msg_contents": "On 2020-Feb-10, Fujii Masao wrote:\n\n> \n> \n> On 2020/02/10 17:23, Sergei Kornilov wrote:\n> > Hello\n> > \n> > Seems bug was introduced in caba97a9d9f4d4fa2531985fd12d3cd823da06f3 - in HEAD only\n> > \n> > In REL_12_STABLE we have:\n> > \n> > \tbool\t\tis_recovery_guc_supported = true;\n> > \n> > \tif (PQserverVersion(conn) < MINIMUM_VERSION_FOR_RECOVERY_GUC)\n> > \t\tis_recovery_guc_supported = false;\n> > \n> > \tsnprintf(filename, MAXPGPATH, \"%s/%s\", basedir,\n> > \t\t\t is_recovery_guc_supported ? \"postgresql.auto.conf\" : \"recovery.conf\");\n> > \n> > \tcf = fopen(filename, is_recovery_guc_supported ? \"a\" : \"w\");\n> > \n> > It looks correct: append mode for postgresql.auto.conf\n> > \n> > In HEAD version is_recovery_guc_supported variable was replaced to inversed use_recovery_conf without change fopen mode.\n> \n> Yes! Thanks for pointing out that!\n> So the patch needs to be applied only in master.\n\nYikes, thanks. Pushing in a minute.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 10 Feb 2020 12:28:05 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup -F plain -R overwrites postgresql.auto.conf" }, { "msg_contents": "On 2020-Feb-10, Alvaro Herrera wrote:\n\n> On 2020-Feb-10, Fujii Masao wrote:\n\n> > Yes! Thanks for pointing out that!\n> > So the patch needs to be applied only in master.\n> \n> Yikes, thanks. 
Pushing in a minute.\n\nActually, if you want to push it, be my guest.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 10 Feb 2020 12:28:43 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup -F plain -R overwrites postgresql.auto.conf" }, { "msg_contents": "\n\nOn 2020/02/11 0:28, Alvaro Herrera wrote:\n> On 2020-Feb-10, Alvaro Herrera wrote:\n> \n>> On 2020-Feb-10, Fujii Masao wrote:\n> \n>>> Yes! Thanks for pointing out that!\n>>> So the patch needs to be applied only in master.\n>>\n>> Yikes, thanks.  Pushing in a minute.\n> \n> Actually, if you want to push it, be my guest.\n\nYeah, I pushed the patch. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Wed, 12 Feb 2020 09:14:37 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: pg_basebackup -F plain -R overwrites postgresql.auto.conf" } ]
[ { "msg_contents": "Hi hacker,\n\nCurrently we only print block number and relation path when checksum check\nfails. See example below:\n\nERROR: invalid page in block 333571 of relation base/65959/656195\n\nDBA complains that she needs additional work to calculate which physical\nfile is broken, since one physical file can only contain `RELSEG_SIZE`\nnumber of blocks. For large tables, we need to use many physical files with\nadditional suffix, e.g. 656195.1, 656195.2 ...\n\nIs that a good idea to also print the physical file path in error message?\nLike below:\n\nERROR: invalid page in block 333571 of relation base/65959/656195, file\npath base/65959/656195.2\n\nPatch is attached.\n-- \nThanks\n\nHubert Zhang", "msg_date": "Mon, 10 Feb 2020 16:04:21 +0800", "msg_from": "Hubert Zhang <hzhang@pivotal.io>", "msg_from_op": true, "msg_subject": "Print physical file path when checksum check fails" }, { "msg_contents": "HHi,\n\nOn 2020-02-10 16:04:21 +0800, Hubert Zhang wrote:\n> Currently we only print block number and relation path when checksum check\n> fails. See example below:\n> \n> ERROR: invalid page in block 333571 of relation base/65959/656195\n\n> DBA complains that she needs additional work to calculate which physical\n> file is broken, since one physical file can only contain `RELSEG_SIZE`\n> number of blocks. For large tables, we need to use many physical files with\n> additional suffix, e.g. 656195.1, 656195.2 ...\n> \n> Is that a good idea to also print the physical file path in error message?\n> Like below:\n> \n> ERROR: invalid page in block 333571 of relation base/65959/656195, file\n> path base/65959/656195.2\n\nI think that'd be a nice improvement. But:\n\nI don't think the way you did it is right architecturally. The\nsegmenting is really something that lives within md.c, and we shouldn't\nfurther expose it outside of that. And e.g. 
the undo patchset uses files\nwith different segmentation - but still goes through bufmgr.c.\n\nI wonder if this partially signals that the checksum verification piece\nis architecturally done in the wrong place currently? It's imo not good\nthat every place doing an smgrread() needs to separately verify\nchecksums. OTOH, it doesn't really belong inside smgr.c.\n\n\nThis layering issue was also encountered in\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3eb77eba5a51780d5cf52cd66a9844cd4d26feb0\nso perhaps we should work to reuse the FileTag it introduces to\nrepresent segments, without hardcoding the specific segment size?\n\nRegards,\n\nAndres\n\n> @@ -912,17 +912,20 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> \t\t\t\t{\n> \t\t\t\t\tereport(WARNING,\n> \t\t\t\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> -\t\t\t\t\t\t\t errmsg(\"invalid page in block %u of relation %s; zeroing out page\",\n> +\t\t\t\t\t\t\t errmsg(\"invalid page in block %u of relation %s, \"\n> +\t\t\t\t\t\t\t\t\t\"file path %s; zeroing out page\",\n> \t\t\t\t\t\t\t\t\tblockNum,\n> -\t\t\t\t\t\t\t\t\trelpath(smgr->smgr_rnode, forkNum))));\n> +\t\t\t\t\t\t\t\t\trelpath(smgr->smgr_rnode, forkNum),\n> +\t\t\t\t\t\t\t\t\trelfilepath(smgr->smgr_rnode, forkNum, blockNum))));\n> \t\t\t\t\tMemSet((char *) bufBlock, 0, BLCKSZ);\n> \t\t\t\t}\n> \t\t\t\telse\n> \t\t\t\t\tereport(ERROR,\n> \t\t\t\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> -\t\t\t\t\t\t\t errmsg(\"invalid page in block %u of relation %s\",\n> +\t\t\t\t\t\t\t errmsg(\"invalid page in block %u of relation %s, file path %s\",\n> \t\t\t\t\t\t\t\t\tblockNum,\n> -\t\t\t\t\t\t\t\t\trelpath(smgr->smgr_rnode, forkNum))));\n> +\t\t\t\t\t\t\t\t\trelpath(smgr->smgr_rnode, forkNum),\n> +\t\t\t\t\t\t\t\t\trelfilepath(smgr->smgr_rnode, forkNum, blockNum))));\n> \t\t\t}\n> \t\t}\n> \t}\n> diff --git a/src/common/relpath.c b/src/common/relpath.c\n> index ad733d1363..8b39c4ac4f 100644\n> --- 
a/src/common/relpath.c\n> +++ b/src/common/relpath.c\n> @@ -208,3 +208,30 @@ GetRelationPath(Oid dbNode, Oid spcNode, Oid relNode,\n> \t}\n> \treturn path;\n> }\n> +\n> +/*\n> + * GetRelationFilePath - construct path to a relation's physical file\n> + * given its block number.\n> + */\n> +\tchar *\n> +GetRelationFilePath(Oid dbNode, Oid spcNode, Oid relNode,\n> +\t\t\t\t\tint backendId, ForkNumber forkNumber, BlockNumber blkno)\n> +{\n> +\tchar\t *path;\n> +\tchar\t *fullpath;\n> +\tBlockNumber\tsegno;\n> +\n> +\tpath = GetRelationPath(dbNode, spcNode, relNode, backendId, forkNumber);\n> +\n> +\tsegno = blkno / ((BlockNumber) RELSEG_SIZE);\n> +\n> +\tif (segno > 0)\n> +\t{\n> +\t\tfullpath = psprintf(\"%s.%u\", path, segno);\n> +\t\tpfree(path);\n> +\t}\n> +\telse\n> +\t\tfullpath = path;\n> +\n> +\treturn fullpath;\n> +}\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Feb 2020 13:30:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Print physical file path when checksum check fails" }, { "msg_contents": "Thanks Andres,\n\nOn Tue, Feb 11, 2020 at 5:30 AM Andres Freund <andres@anarazel.de> wrote:\n\n> HHi,\n>\n> On 2020-02-10 16:04:21 +0800, Hubert Zhang wrote:\n> > Currently we only print block number and relation path when checksum\n> check\n> > fails. See example below:\n> >\n> > ERROR: invalid page in block 333571 of relation base/65959/656195\n>\n> > DBA complains that she needs additional work to calculate which physical\n> > file is broken, since one physical file can only contain `RELSEG_SIZE`\n> > number of blocks. For large tables, we need to use many physical files\n> with\n> > additional suffix, e.g. 656195.1, 656195.2 ...\n> >\n> > Is that a good idea to also print the physical file path in error\n> message?\n> > Like below:\n> >\n> > ERROR: invalid page in block 333571 of relation base/65959/656195, file\n> > path base/65959/656195.2\n>\n> I think that'd be a nice improvement. 
But:\n>\n> I don't think the way you did it is right architecturally. The\n> segmenting is really something that lives within md.c, and we shouldn't\n> further expose it outside of that. And e.g. the undo patchset uses files\n> with different segmentation - but still goes through bufmgr.c.\n>\n> I wonder if this partially signals that the checksum verification piece\n> is architecturally done in the wrong place currently? It's imo not good\n> that every place doing an smgrread() needs to separately verify\n> checksums. OTOH, it doesn't really belong inside smgr.c.\n>\n>\n> This layering issue was also encountered in\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3eb77eba5a51780d5cf52cd66a9844cd4d26feb0\n> so perhaps we should work to reuse the FileTag it introduces to\n> represent segments, without hardcoding the specific segment size?\n>\n>\nI checked the FileTag commit. It calls `register_xxx_segment` inside md.c\nto store the sync request into a hashtable and used by checkpointer later.\n\nChecksum verify is simpler. We could move the `PageIsVerified` into md.c\n(mdread). But we can not elog error inside md.c because read buffer mode\nRBM_ZERO_ON_ERROR is at bugmgr.c level.\n\nOne idea is to change save the error message(or the FileTag) at (e.g. a\nstatic string in bufmgr.c) by calling `register_checksum_failure` inside\nmdread in md.c.\n\nAs for your concern about the need to do checksum verify after every\nsmgrread, we now move the checksum verify logic into md.c, but we still\nneed to check the checksum verify result after smgrread and reset buffer to\nzero if mode is RBM_ZERO_ON_ERROR.\n\nIf this idea is OK, I will submit the new PR.\n\nThanks\n\nHubert Zhang", "msg_date": "Wed, 12 Feb 2020 17:22:52 +0800", "msg_from": "Hubert Zhang <hzhang@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Print physical file path when checksum check fails" }, { "msg_contents": "On Wed, Feb 12, 2020 at 5:22 PM Hubert Zhang <hzhang@pivotal.io> wrote:\n\n> Thanks Andres,\n>\n> On Tue, Feb 11, 2020 at 5:30 AM Andres Freund <andres@anarazel.de> wrote:\n>\n>> HHi,\n>>\n>> On 2020-02-10 16:04:21 +0800, Hubert Zhang wrote:\n>> > Currently we only print block number and relation path when checksum\n>> check\n>> > fails. See example below:\n>> >\n>> > ERROR: invalid page in block 333571 of relation base/65959/656195\n>>\n>> > DBA complains that she needs additional work to calculate which physical\n>> > file is broken, since one physical file can only contain `RELSEG_SIZE`\n>> > number of blocks. For large tables, we need to use many physical files\n>> with\n>> > additional suffix, e.g. 656195.1, 656195.2 ...\n>> >\n>> > Is that a good idea to also print the physical file path in error\n>> message?\n>> > Like below:\n>> >\n>> > ERROR: invalid page in block 333571 of relation base/65959/656195, file\n>> > path base/65959/656195.2\n>>\n>> I think that'd be a nice improvement. But:\n>>\n>> I don't think the way you did it is right architecturally. The\n>> segmenting is really something that lives within md.c, and we shouldn't\n>> further expose it outside of that. And e.g. 
the undo patchset uses files\n>> with different segmentation - but still goes through bufmgr.c.\n>>\n>> I wonder if this partially signals that the checksum verification piece\n>> is architecturally done in the wrong place currently? It's imo not good\n>> that every place doing an smgrread() needs to separately verify\n>> checksums. OTOH, it doesn't really belong inside smgr.c.\n>>\n>>\n>> This layering issue was also encountered in\n>>\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3eb77eba5a51780d5cf52cd66a9844cd4d26feb0\n>> so perhaps we should work to reuse the FileTag it introduces to\n>> represent segments, without hardcoding the specific segment size?\n>>\n>>\n> I checked the FileTag commit. It calls `register_xxx_segment` inside md.c\n> to store the sync request into a hashtable and used by checkpointer later.\n>\n> Checksum verify is simpler. We could move the `PageIsVerified` into md.c\n> (mdread). But we can not elog error inside md.c because read buffer mode\n> RBM_ZERO_ON_ERROR is at bugmgr.c level.\n>\n> One idea is to change save the error message(or the FileTag) at (e.g. a\n> static string in bufmgr.c) by calling `register_checksum_failure` inside\n> mdread in md.c.\n>\n> As for your concern about the need to do checksum verify after every\n> smgrread, we now move the checksum verify logic into md.c, but we still\n> need to check the checksum verify result after smgrread and reset buffer to\n> zero if mode is RBM_ZERO_ON_ERROR.\n>\n> If this idea is OK, I will submit the new PR.\n>\n>\nBased on Andres's comments, here is the new patch for moving checksum\nverify logic into mdread() instead of call PageIsVerified in every\nsmgrread(). 
Also using FileTag to print the physical file name when\nchecksum verify failed, which handle segmenting inside md.c as well.\n\n-- \nThanks\n\nHubert Zhang", "msg_date": "Tue, 18 Feb 2020 09:27:39 +0800", "msg_from": "Hubert Zhang <hzhang@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Print physical file path when checksum check fails" }, { "msg_contents": "Hello. Thank you for the new patch.\n\nAt Tue, 18 Feb 2020 09:27:39 +0800, Hubert Zhang <hzhang@pivotal.io> wrote in \n> On Wed, Feb 12, 2020 at 5:22 PM Hubert Zhang <hzhang@pivotal.io> wrote:\n> \n> > Thanks Andres,\n> >\n> > On Tue, Feb 11, 2020 at 5:30 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> >> HHi,\n> >>\n> >> On 2020-02-10 16:04:21 +0800, Hubert Zhang wrote:\n> >> > Currently we only print block number and relation path when checksum\n> >> check\n> >> > fails. See example below:\n> >> >\n> >> > ERROR: invalid page in block 333571 of relation base/65959/656195\n> >>\n> >> > DBA complains that she needs additional work to calculate which physical\n> >> > file is broken, since one physical file can only contain `RELSEG_SIZE`\n> >> > number of blocks. For large tables, we need to use many physical files\n> >> with\n> >> > additional suffix, e.g. 656195.1, 656195.2 ...\n> >> >\n> >> > Is that a good idea to also print the physical file path in error\n> >> message?\n> >> > Like below:\n> >> >\n> >> > ERROR: invalid page in block 333571 of relation base/65959/656195, file\n> >> > path base/65959/656195.2\n> >>\n> >> I think that'd be a nice improvement. But:\n> >>\n> >> I don't think the way you did it is right architecturally. The\n> >> segmenting is really something that lives within md.c, and we shouldn't\n> >> further expose it outside of that. And e.g. 
the undo patchset uses files\n> >> with different segmentation - but still goes through bufmgr.c.\n> >>\n> >> I wonder if this partially signals that the checksum verification piece\n> >> is architecturally done in the wrong place currently? It's imo not good\n> >> that every place doing an smgrread() needs to separately verify\n> >> checksums. OTOH, it doesn't really belong inside smgr.c.\n> >>\n> >>\n> >> This layering issue was also encountered in\n> >>\n> >> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3eb77eba5a51780d5cf52cd66a9844cd4d26feb0\n> >> so perhaps we should work to reuse the FileTag it introduces to\n> >> represent segments, without hardcoding the specific segment size?\n> >>\n> >>\n> > I checked the FileTag commit. It calls `register_xxx_segment` inside md.c\n> > to store the sync request into a hashtable and used by checkpointer later.\n> >\n> > Checksum verify is simpler. We could move the `PageIsVerified` into md.c\n> > (mdread). But we can not elog error inside md.c because read buffer mode\n> > RBM_ZERO_ON_ERROR is at bugmgr.c level.\n> >\n> > One idea is to change save the error message(or the FileTag) at (e.g. a\n> > static string in bufmgr.c) by calling `register_checksum_failure` inside\n> > mdread in md.c.\n> >\n> > As for your concern about the need to do checksum verify after every\n> > smgrread, we now move the checksum verify logic into md.c, but we still\n> > need to check the checksum verify result after smgrread and reset buffer to\n> > zero if mode is RBM_ZERO_ON_ERROR.\n> >\n> > If this idea is OK, I will submit the new PR.\n> >\n> >\n> Based on Andres's comments, here is the new patch for moving checksum\n> verify logic into mdread() instead of call PageIsVerified in every\n> smgrread(). Also using FileTag to print the physical file name when\n> checksum verify failed, which handle segmenting inside md.c as well.\n\nThe patch doesn't address the first comment from Andres. 
It still\nexpose the notion of segment to the upper layer, but bufmgr (bufmgr.c\nand bufpage.c) or Realation (relpath.c) layer shouldn't know of\nsegment. So the two layers should ask smgr without using segment\nnumber for the real file name for the block.\n\nFor example, I think the following structure works. (Without moving\nchecksum verification.)\n\n======\nmd.c:\n char *mdfname(SmgrRelation reln, Forknumber forkNum, BlockNumber blocknum);\nsmgr.c:\n char *smgrfname(SMgrRelation reln, ForkNumber forkNum, BlockNumber Blocknum);\n\nbufmgr.c:\n ReadBuffer_common()\n {\n ..\n\t\t\t/* check for garbage data */\n\t\t\tif (!PageIsVerified((Page) bufBlock, blockNum))\n\t\t\t\tif (mode == RBM_ZERO_ON_ERROR || zero_damaged_pages)\n\t\t\t\t\tereport(WARNING,\n\t\t\t\t\t\t\t errmsg(\"invalid page in block %u in file %s; zeroing out page\",\n\t\t\t\t\t\t\t\t\tblockNum,\n\t\t\t\t\t\t\t\t\tsmgrfname(smgr, forkNum, blockNum))));\n\t\t\t\t\tMemSet((char *) bufBlock, 0, BLCKSZ);\n====\n\nHowever, the block number in the error messages looks odd as it is\nactually not the block number in the segment. We could convert\nBlockNum into relative block number or offset in the file but it would\nbe overkill. So something like this works?\n\n\"invalid page in block %u in relation %u, file \\\"%s\\\"\",\n blockNum, smgr->smgr_rnode.node.relNode, smgrfname()\n\n\nIf we also verify checksum in md layer, callback is overkill since the\nimmediate caller consumes the event immediately. We can signal the\nerror by somehow returning a file tag.\n\n======\nmd.c:\n FileTag *mdread(...) /* or void mdread(..., FileTag*)? 
*/\n {\n if (!VerfyPage())\n\t return ftag;\n....\n return NULL;\n }\n char *mdfname(FileTag *ftag);\nsmgr.c:\n FileTag *smgrread(...);\n char *smgrfname(FileTag *ftag);\n\nbufmgr.c:\n ReadBuffer_common()\n {\n FileTag *errtag;\n ..\n errftag = smgrread();\n\n\t\t\tif (errtag))\n\t\t\t\t/* page verification failed */\n\t\t\t\tif (mode == RBM_ZERO_ON_ERROR || zero_damaged_pages)\n\t\t\t\t\tereport(WARNING,\n\t\t\t\t\t\t\t errmsg(\"invalid page in block %u in file %s; zeroing out page\",\n\t\t\t\t\t\t\t\t\tblockNum, smgrfname(errftag)));\n\t\t\t\t\tMemSet((char *) bufBlock, 0, BLCKSZ);\n====\n\nBut it is uneasy that smgrread just returning a filetag signals\nchecksum error..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 19 Feb 2020 13:07:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Print physical file path when checksum check fails" }, { "msg_contents": "On Wed, Feb 19, 2020 at 01:07:36PM +0900, Kyotaro Horiguchi wrote:\n> If we also verify checksum in md layer, callback is overkill since the\n> immediate caller consumes the event immediately. We can signal the\n> error by somehow returning a file tag.\n\nFWIW, I am wondering if there is any need for a change here and\ncomplicate more the code. If you know the block number, the page size\nand the segment file size you can immediately guess where is the\ndamaged block. 
The first information is already part of the error\nmessage, and the two other ones are constants defined at\ncompile-time.\n--\nMichael", "msg_date": "Wed, 19 Feb 2020 13:28:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Print physical file path when checksum check fails" }, { "msg_contents": "At Wed, 19 Feb 2020 13:28:04 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Feb 19, 2020 at 01:07:36PM +0900, Kyotaro Horiguchi wrote:\n> > If we also verify checksum in md layer, callback is overkill since the\n> > immediate caller consumes the event immediately. We can signal the\n> > error by somehow returning a file tag.\n> \n> FWIW, I am wondering if there is any need for a change here and\n> complicate more the code. If you know the block number, the page size\n> and the segment file size you can immediately guess where is the\n> damaged block. The first information is already part of the error\n\nI have had support requests related to broken block several times, and\n(I think) most of *them* had hard time to locate the broken block or\neven broken file. I don't think it is useles at all, but I'm not sure\nit is worth the additional complexity.\n\n> damaged block. 
The first information is already part of the error\n> message, and the two other ones are constants defined at\n> compile-time.\n\nMay you have misread the snippet?\n\nWhat Hubert proposed is:\n\n \"invalid page in block %u of relation file %s; zeroing out page\",\n blkno, <filename>\n\nThe second format in my messages just before is:\n \"invalid page in block %u in relation %u, file \\\"%s\\\"\",\n blockNum, smgr->smgr_rnode.node.relNode, smgrfname()\n\nAll of them are not compile-time constant at all.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 19 Feb 2020 15:00:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Print physical file path when checksum check fails" }, { "msg_contents": "On Wed, Feb 19, 2020 at 03:00:54PM +0900, Kyotaro Horiguchi wrote:\n> I have had support requests related to broken block several times, and\n> (I think) most of *them* had hard time to locate the broken block or\n> even broken file. I don't think it is useles at all, but I'm not sure\n> it is worth the additional complexity.\n\nI handle stuff like that from time to time, and any reports usually\ngo down to people knowledgeable about PostgreSQL enough to know the\ndifference. My point is that in order to know where a broken block is\nphysically located on disk, you need to know four things:\n- The block number.\n- The physical location of the relation.\n- The size of the block.\n- The length of a file segment.\nThe first two items are printed in the error message, and you can\nguess easily the actual location (file, offset) with the two others.\n\nI am not necessarily against improving the error message here, but\nFWIW I think that we need to consider seriously if the code\ncomplications and the maintenance cost involved are really worth\nsaving from one simple calculation. 
Particularly, quickly reading\nthrough the patch, I am rather unhappy about the shape of the second\npatch which pushes down the segment number knowledge into relpath.c,\nand creates more complication around the handling of\nzero_damaged_pages and zero'ed pages.\n--\nMichael", "msg_date": "Wed, 19 Feb 2020 16:48:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Print physical file path when checksum check fails" }, { "msg_contents": "Hi,\n\nOn 2020-02-19 16:48:45 +0900, Michael Paquier wrote:\n> On Wed, Feb 19, 2020 at 03:00:54PM +0900, Kyotaro Horiguchi wrote:\n> > I have had support requests related to broken block several times, and\n> > (I think) most of *them* had hard time to locate the broken block or\n> > even broken file. I don't think it is useles at all, but I'm not sure\n> > it is worth the additional complexity.\n>\n> I handle stuff like that from time to time, and any reports usually\n> go down to people knowledgeable about PostgreSQL enough to know the\n> difference. My point is that in order to know where a broken block is\n> physically located on disk, you need to know four things:\n> - The block number.\n> - The physical location of the relation.\n> - The size of the block.\n> - The length of a file segment.\n> The first two items are printed in the error message, and you can\n> guess easily the actual location (file, offset) with the two others.\n\n> I am not necessarily against improving the error message here, but\n> FWIW I think that we need to consider seriously if the code\n> complications and the maintenance cost involved are really worth\n> saving from one simple calculation.\n\nI don't think it's that simple for most.\n\nAnd if we e.g. ever get the undo stuff merged, it'd get more\ncomplicated, because they segment entirely differently. 
Similar, if we\never manage to move SLRUs into the buffer pool and checksummed, it'd\nagain work differently.\n\nNor is it architecturally appealing to handle checksums in multiple\nplaces above the smgr layer: For one, that requires multiple places to\ncompute verify them. But also, as the way checksums are computed depends\non the page format etc, it'll likely change for things like undo/slru -\nwhich then again will require additional smarts if done above the smgr\nlayer.\n\n\n> Particularly, quickly reading through the patch, I am rather unhappy\n> about the shape of the second patch which pushes down the segment\n> number knowledge into relpath.c, and creates more complication around\n> the handling of zero_damaged_pages and zero'ed pages. -- Michael\n\nI do not like the SetZeroDamagedPageInChecksum stuff at all however.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 19 Feb 2020 19:36:40 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Print physical file path when checksum check fails" }, { "msg_contents": "Thanks,\n\nOn Thu, Feb 20, 2020 at 11:36 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-02-19 16:48:45 +0900, Michael Paquier wrote:\n> > On Wed, Feb 19, 2020 at 03:00:54PM +0900, Kyotaro Horiguchi wrote:\n> > > I have had support requests related to broken block several times, and\n> > > (I think) most of *them* had hard time to locate the broken block or\n> > > even broken file. I don't think it is useles at all, but I'm not sure\n> > > it is worth the additional complexity.\n> >\n> > I handle stuff like that from time to time, and any reports usually\n> > go down to people knowledgeable about PostgreSQL enough to know the\n> > difference. 
My point is that in order to know where a broken block is\n> > physically located on disk, you need to know four things:\n> > - The block number.\n> > - The physical location of the relation.\n> > - The size of the block.\n> > - The length of a file segment.\n> > The first two items are printed in the error message, and you can\n> > guess easily the actual location (file, offset) with the two others.\n>\n> > I am not necessarily against improving the error message here, but\n> > FWIW I think that we need to consider seriously if the code\n> > complications and the maintenance cost involved are really worth\n> > saving from one simple calculation.\n>\n> I don't think it's that simple for most.\n>\n> And if we e.g. ever get the undo stuff merged, it'd get more\n> complicated, because they segment entirely differently. Similar, if we\n> ever manage to move SLRUs into the buffer pool and checksummed, it'd\n> again work differently.\n>\n> Nor is it architecturally appealing to handle checksums in multiple\n> places above the smgr layer: For one, that requires multiple places to\n> compute verify them. But also, as the way checksums are computed depends\n> on the page format etc, it'll likely change for things like undo/slru -\n> which then again will require additional smarts if done above the smgr\n> layer.\n>\n\nSo considering undo staff, it's better to move checksum logic into md.c\nI will keep it in the new patch.\n\nOn 2020-02-19 16:48:45 +0900, Michael Paquier wrote:\n\n> Particularly, quickly reading through the patch, I am rather unhappy\n> > about the shape of the second patch which pushes down the segment\n> > number knowledge into relpath.c, and creates more complication around\n> > the handling of zero_damaged_pages and zero'ed pages. 
-- Michael\n>\n> I do not like the SetZeroDamagedPageInChecksum stuff at all however.\n>\n>\nI'm +1 on the first concern, and I will delete the new added function\n`GetRelationFilePath`\nin relpath.c and append the segno directly in error message inside function\n`VerifyPage`\n\nAs for SetZeroDamagedPageInChecksum, the reason why I introduced it is that\nwhen we are doing\nsmgrread() and one of the damaged page failed to pass the checksum check,\nwe could not directly error\nout, since the caller of smgrread() may tolerate this error and just zero\nall the damaged page plus a warning message.\nAlso, we could not just use GUC zero_damaged_pages to do this branch, since\nwe also have ReadBufferMode(i.e. RBM_ZERO_ON_ERROR) to control it.\n\nTo get rid of SetZeroDamagedPageInChecksum, one idea is to pass\nzero_damaged_page flag into smgrread(), something like below:\n==\n\nextern void smgrread(SMgrRelation reln, ForkNumber forknum,\n\nBlockNumber blocknum, char *buffer, int flag);\n\n===\n\n\nAny comments?\n\n\n\n-- \nThanks\n\nHubert Zhang", "msg_date": "Thu, 20 Feb 2020 14:33:28 +0800", "msg_from": "Hubert Zhang <hzhang@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Print physical file path when checksum check fails" }, { "msg_contents": "Thanks Kyotaro,\n\nOn Wed, Feb 19, 2020 at 2:02 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Wed, 19 Feb 2020 13:28:04 +0900, Michael Paquier <michael@paquier.xyz>\n> wrote in\n> > On Wed, Feb 19, 2020 at 01:07:36PM +0900, Kyotaro Horiguchi wrote:\n> > > If we also verify checksum in md layer, callback is overkill since the\n> > > immediate caller consumes the event immediately. We can signal the\n> > > error by somehow returning a file tag.\n> >\n> > FWIW, I am wondering if there is any need for a change here and\n> > complicate more the code. If you know the block number, the page size\n> > and the segment file size you can immediately guess where is the\n> > damaged block. 
The first information is already part of the error\n>\n> I have had support requests related to broken block several times, and\n> (I think) most of *them* had hard time to locate the broken block or\n> even broken file.  I don't think it is useles at all, but I'm not sure\n> it is worth the additional complexity.\n>\n> > damaged block.  The first information is already part of the error\n> > message, and the two other ones are constants defined at\n> > compile-time.\n>\n> May you have misread the snippet?\n>\n> What Hubert proposed is:\n>\n>  \"invalid page in block %u of relation file %s; zeroing out page\",\n>     blkno, <filename>\n>\n> The second format in my messages just before is:\n>   \"invalid page in block %u in relation %u, file \\\"%s\\\"\",\n>      blockNum, smgr->smgr_rnode.node.relNode, smgrfname()\n>\n> All of them are not compile-time constant at all.\n>\n>\nI like your error message, the block number is relation level not file\nlevel.\nI 'll change the error message to\n\"invalid page in block %u of relation %u, file %s\"\n\n\n-- \nThanks\n\nHubert Zhang", "msg_date": "Thu, 20 Feb 2020 14:39:44 +0800", "msg_from": "Hubert Zhang <hzhang@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Print physical file path when checksum check fails" }, { "msg_contents": "I have updated the patch based on the previous comments. Sorry for the late\npatch.\n\nI removed `SetZeroDamagedPageInChecksum` and add `zeroDamagePage` flag in\nsmgrread to determine whether we should zero damage page when an error\nhappens. It depends on the caller.\n\n`GetRelationFilePath` is removed as well. We print segno on the fly.\n\nOn Thu, Feb 20, 2020 at 2:33 PM Hubert Zhang <hzhang@pivotal.io> wrote:\n\n> Thanks,\n>\n> On Thu, Feb 20, 2020 at 11:36 AM Andres Freund <andres@anarazel.de> wrote:\n>\n>> Hi,\n>>\n>> On 2020-02-19 16:48:45 +0900, Michael Paquier wrote:\n>> > On Wed, Feb 19, 2020 at 03:00:54PM +0900, Kyotaro Horiguchi wrote:\n>> > > I have had support requests related to broken block several times, and\n>> > > (I think) most of *them* had hard time to locate the broken block or\n>> > > even broken file.  
I don't think it is useles at all, but I'm not sure\n>> > > it is worth the additional complexity.\n>> >\n>> > I handle stuff like that from time to time, and any reports usually\n>> > go down to people knowledgeable about PostgreSQL enough to know the\n>> > difference. My point is that in order to know where a broken block is\n>> > physically located on disk, you need to know four things:\n>> > - The block number.\n>> > - The physical location of the relation.\n>> > - The size of the block.\n>> > - The length of a file segment.\n>> > The first two items are printed in the error message, and you can\n>> > guess easily the actual location (file, offset) with the two others.\n>>\n>> > I am not necessarily against improving the error message here, but\n>> > FWIW I think that we need to consider seriously if the code\n>> > complications and the maintenance cost involved are really worth\n>> > saving from one simple calculation.\n>>\n>> I don't think it's that simple for most.\n>>\n>> And if we e.g. ever get the undo stuff merged, it'd get more\n>> complicated, because they segment entirely differently. Similar, if we\n>> ever manage to move SLRUs into the buffer pool and checksummed, it'd\n>> again work differently.\n>>\n>> Nor is it architecturally appealing to handle checksums in multiple\n>> places above the smgr layer: For one, that requires multiple places to\n>> compute verify them. 
But also, as the way checksums are computed depends\n>> on the page format etc, it'll likely change for things like undo/slru -\n>> which then again will require additional smarts if done above the smgr\n>> layer.\n>>\n>\n> So considering undo staff, it's better to move checksum logic into md.c\n> I will keep it in the new patch.\n>\n> On 2020-02-19 16:48:45 +0900, Michael Paquier wrote:\n>\n> > Particularly, quickly reading through the patch, I am rather unhappy\n>> > about the shape of the second patch which pushes down the segment\n>> > number knowledge into relpath.c, and creates more complication around\n>> > the handling of zero_damaged_pages and zero'ed pages. -- Michael\n>>\n>> I do not like the SetZeroDamagedPageInChecksum stuff at all however.\n>>\n>>\n> I'm +1 on the first concern, and I will delete the new added function\n> `GetRelationFilePath`\n> in relpath.c and append the segno directly in error message inside\n> function `VerifyPage`\n>\n> As for SetZeroDamagedPageInChecksum, the reason why I introduced it is\n> that when we are doing\n> smgrread() and one of the damaged page failed to pass the checksum check,\n> we could not directly error\n> out, since the caller of smgrread() may tolerate this error and just zero\n> all the damaged page plus a warning message.\n> Also, we could not just use GUC zero_damaged_pages to do this branch,\n> since we also have ReadBufferMode(i.e. 
RBM_ZERO_ON_ERROR) to control it.\n>\n> To get rid of SetZeroDamagedPageInChecksum, one idea is to pass\n> zero_damaged_page flag into smgrread(), something like below:\n> ==\n>\n> extern void smgrread(SMgrRelation reln, ForkNumber forknum,\n>\n> BlockNumber blocknum, char *buffer, int flag);\n>\n> ===\n>\n>\n> Any comments?\n>\n>\n>\n> --\n> Thanks\n>\n> Hubert Zhang\n>\n\n\n-- \nThanks\n\nHubert Zhang", "msg_date": "Thu, 19 Mar 2020 18:29:19 +0800", "msg_from": "Hubert Zhang <hzhang@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Print physical file path when checksum check fails" } ]
[ { "msg_contents": "Hello, hackers.\n\nYesterday I have noticed that in simple protocol mode snapshot is\ntaken twice - first time for parsing/analyze and later for execution.\n\nI was thinking it is a great idea to reuse the same snapshot. After\nsome time (not short) I was able to find this thread from 2011 with\nexactly same idea (of course after I already got few % of performance\nin POC):\n\nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoYqKRj9BozjB-%2BtLQgVkSvzPFWBEzRF4PM2xjPOsmFRdw%40mail.gmail.com\n\nAnd it was even merged: \"Take fewer snapshots\" (\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=d573e239f03506920938bf0be56c868d9c3416da\n)\n\nBut where is optimisation in the HEAD?\n\nIt is absent because it was reverted later in 2012 because of tricky reasons\n( https://www.postgresql.org/message-id/flat/5075D8DF.6050500%40fuzzy.cz\n) in commit \"Revert patch for taking fewer snapshots.\" (\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=532994299e2ff208a58376134fab75f5ae471e41\n)\n\nI think it is good idea to add few comments to code related to the\ntopic in order to save time for a next guy.\n\nComments-only patch attached.\n\nThanks,\nMichail.", "msg_date": "Mon, 10 Feb 2020 12:42:46 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Comments related to \"Take fewer snapshots\" and \"Revert patch\n for taking fewer snapshots\"" }, { "msg_contents": "On 2020-Feb-10, Michail Nikolaev wrote:\n\n> I think it is good idea to add few comments to code related to the\n> topic in order to save time for a next guy.\n\nApplied, thanks.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n", "msg_date": "Fri, 28 Feb 2020 13:28:44 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Comments related to \"Take fewer snapshots\" and \"Revert\n patch for taking fewer snapshots\"" } ]
[ { "msg_contents": "Hello,\n\nMaybe I'm wrong, but anychar_typmodin() of\nsrc/backend/utils/adt/varchar.c of PostgreSQL 12.1 does not pfree()s\nthe memory allocated by ArrayGetIntegerTypmods(). Probably, I'm\nmissing something. Could anybody please clarify on that?\n\nThanks!\n\n\n", "msg_date": "Mon, 10 Feb 2020 17:33:17 +0300", "msg_from": "Dmitry Igrishin <dmitigr@gmail.com>", "msg_from_op": true, "msg_subject": "Is it memory leak or not?" }, { "msg_contents": "Dmitry Igrishin <dmitigr@gmail.com> writes:\n> Maybe I'm wrong, but anychar_typmodin() of\n> src/backend/utils/adt/varchar.c of PostgreSQL 12.1 does not pfree()s\n> the memory allocated by ArrayGetIntegerTypmods(). Probably, I'm\n> missing something. Could anybody please clarify on that?\n\nIt is a leak, in the sense that the pointer is unreferenced once the\nfunction returns. But we don't care, either here or in the probably\nthousands of other similar cases, because we don't expect this function\nto be run in a long-lived memory context. The general philosophy in\nthe backend is that it's cheaper and far less error-prone to rely on\nmemory context cleanup to reclaim (small amounts of) memory than to\nrely on manual pfree calls. You can read more about that in\nsrc/backend/utils/mmgr/README.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Feb 2020 10:59:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is it memory leak or not?" }, { "msg_contents": "On Mon, 10 Feb 2020, 18:59 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n\n> Dmitry Igrishin <dmitigr@gmail.com> writes:\n> > Maybe I'm wrong, but anychar_typmodin() of\n> > src/backend/utils/adt/varchar.c of PostgreSQL 12.1 does not pfree()s\n> > the memory allocated by ArrayGetIntegerTypmods(). Probably, I'm\n> > missing something. Could anybody please clarify on that?\n>\n> It is a leak, in the sense that the pointer is unreferenced once the\n> function returns. 
But we don't care, either here or in the probably\n> thousands of other similar cases, because we don't expect this function\n> to be run in a long-lived memory context.  The general philosophy in\n> the backend is that it's cheaper and far less error-prone to rely on\n> memory context cleanup to reclaim (small amounts of) memory than to\n> rely on manual pfree calls.  You can read more about that in\n> src/backend/utils/mmgr/README.\n>\nI see. Thank you very much!", "msg_date": "Mon, 10 Feb 2020 19:18:19 +0300", "msg_from": "Dmitry Igrishin <dmitigr@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it memory leak or not?" } ]
[ { "msg_contents": "Hi,\n\nAttached is a draft for the 2020-02-13 press release. I would appreciate\nany review for accuracy, notable omissions, and the inevitable typos I\ntend to have on drafts (even though I do run it through a spell\nchecker). There were over 75 fixes, so paring the list down was a bit\ntricky, and I tried to focus on things that would have noticeable user\nimpact.\n\nAs noted in other threads, this is the EOL release for 9.4. In a\ndeparture from the past, I tried to give a bit of a \"tribute\" to 9.4 by\nlisting some of the major / impactful features that were introduced, the\nthought process being that we should celebrate the history of PostgreSQL\nand also look at how far we've come in 5 years. If we feel this does not\nmake sense, I'm happy to remove it.\n\nWhile I'll accept feedback up until time of release, please try to have\nit in no later than 2020-02-13 0:00 UTC :)\n\nThanks,\n\nJonathan", "msg_date": "Mon, 10 Feb 2020 11:46:23 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "2020-02-13 Press Release Draft" }, { "msg_contents": "On 2020-02-10 17:46, Jonathan S. Katz wrote:\n> Hi,\n> \n> Attached is a draft for the 2020-02-13 press release. I would \n> appreciate\n\nA small typo:\n\n'many of which have receive improvements'   should be\n'many of which have received improvements'\n\n\n\n", "msg_date": "Mon, 10 Feb 2020 17:55:17 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: 2020-02-13 Press Release Draft" }, { "msg_contents": "Typo in 9.4 retirement message:\n\ns/is it time to retire/it is time to retire/\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/", "msg_date": "Mon, 10 Feb 2020 12:03:27 -0500", "msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>", "msg_from_op": false, "msg_subject": "Re: 2020-02-13 Press Release Draft" }, { "msg_contents": "On 2/10/20 11:55 AM, Erik Rijkers wrote:\n> On 2020-02-10 17:46, Jonathan S. Katz wrote:\n>> Hi,\n>>\n>> Attached is a draft for the 2020-02-13 press release. I would appreciate\n> \n> A small typo:\n> \n> 'many of which have receive improvements'   should be\n> 'many of which have received improvements'\n\nFixed on my local copy. Thanks!\n\nJonathan", "msg_date": "Mon, 10 Feb 2020 12:07:18 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2020-02-13 Press Release Draft" }, { "msg_contents": "On 2/10/20 12:03 PM, Sehrope Sarkuni wrote:\n> Typo in 9.4 retirement message:\n> \n> s/is it time to retire/it is time to retire/\n\nHeh, we definitely don't want that to be a question :)\n\nFixed on my local copy, thanks!\n\nJonathan", "msg_date": "Mon, 10 Feb 2020 12:07:33 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2020-02-13 Press Release Draft" }, { "msg_contents": "On 2020-Feb-10, Jonathan S. Katz wrote:\n\n> * Several figures for GSSAPI support, including having libpq accept all\n> GSS-related connection parameters even if the GSSAPI code is not compiled in.\n\n\"figures\"?\n\n> If you had previously executed `TRUNCATE .. CASCADE` on a sub-partition of a\n> partitioned table, and the partitioned table has a foreign-key reference from\n> another table, you will have to run the `TRUNCATE` on the other table as well.\n> The issue that caused this is fixed in this release, but you will have to\n> perform this step to ensure all of your data is cleaned up.\n\nI'm unsure about the \"will\" in the \"you will have to run the TRUNCATE\".\nIf the table is truncated then reloading, then the truncation might be\noptional.  
I would change the \"will\" to \"may\".  At the same time I\nwonder if it would make sense to provide a query that would return any\nrows violating such constraints; if empty then there's no need to\ntruncate.  On the other hand, if not empty, perhaps we can suggest to\ndelete just those rows rather than truncating everything.  Perhaps we\ncan quote ri_triggers.c's RI_Initial_Check,\n\n    /*----------\n     * The query string built is:\n     *  SELECT fk.keycols FROM [ONLY] relname fk\n     *   LEFT OUTER JOIN [ONLY] pkrelname pk\n     *   ON (pk.pkkeycol1=fk.keycol1 [AND ...])\n     *   WHERE pk.pkkeycol1 IS NULL AND\n     * For MATCH SIMPLE:\n     *   (fk.keycol1 IS NOT NULL [AND ...])\n     * For MATCH FULL:\n     *   (fk.keycol1 IS NOT NULL [OR ...])\n     *\n     * We attach COLLATE clauses to the operators when comparing columns\n     * that have different collations.\n     *----------\n     */\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n", "msg_date": "Mon, 10 Feb 2020 14:23:57 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: 2020-02-13 Press Release Draft" }, { "msg_contents": "On 2/10/20 12:23 PM, Alvaro Herrera wrote:\n> On 2020-Feb-10, Jonathan S. Katz wrote:\n> \n>> * Several figures for GSSAPI support, including having libpq accept all\n>> GSS-related connection parameters even if the GSSAPI code is not compiled in.\n> \n> \"figures\"?\n\n\"figures\" I have a typo ;) Fixed to \"fixes\"\n\n> \n>> If you had previously executed `TRUNCATE .. 
CASCADE` on a sub-partition of a\n>> partitioned table, and the partitioned table has a foreign-key reference from\n>> another table, you will have to run the `TRUNCATE` on the other table as well.\n>> The issue that caused this is fixed in this release, but you will have to\n>> perform this step to ensure all of your data is cleaned up.\n> \n> I'm unsure about the \"will\" in the \"you will have to run the TRUNCATE\".\n> If the table is truncated then reloading, then the truncation might be\n> optional. I would change the \"will\" to \"may\". \n\nChanged. And...\n\n> At the same time I\n> wonder if it would make sense to provide a query that would return any\n> rows violating such constraints; if empty then there's no need to\n> truncate. On the other hand, if not empty, perhaps we can suggest to\n> delete just those rows rather than truncating everything. Perhaps we\n> can quote ri_triggers.c's RI_Initial_Check,\n> \n> /*----------\n> * The query string built is:\n> * SELECT fk.keycols FROM [ONLY] relname fk\n> * LEFT OUTER JOIN [ONLY] pkrelname pk\n> * ON (pk.pkkeycol1=fk.keycol1 [AND ...])\n> * WHERE pk.pkkeycol1 IS NULL AND\n> * For MATCH SIMPLE:\n> * (fk.keycol1 IS NOT NULL [AND ...])\n> * For MATCH FULL:\n> * (fk.keycol1 IS NOT NULL [OR ...])\n> *\n> * We attach COLLATE clauses to the operators when comparing columns\n> * that have different collations.\n> *----------\n> */\n\n...yeah, I like that approach, especially if it's \"may\" instead of\n\"will\" -- we should give our users the tools to determine if they have\nto do anything. Should we just give the base base?\n\n\tSELECT\n\t fk.keycol\n\tFROM\n\t relname fk\n\t LEFT OUTER JOIN pkrelname pk ON pk.pkkeycol = fk.keycol\n\tWHERE\n\t pk.pkkeycol IS NULL\n\t AND fk.keycol IS NOT NULL;\n\nRE TRUNCATE vs. 
DELETE, we should present the option (\"TRUNCATE\" is the\neasiest route, but you may opt to \"DELETE\" instead due to having\nreplaced the data)\n\nThanks,\n\nJonathan", "msg_date": "Mon, 10 Feb 2020 12:37:44 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2020-02-13 Press Release Draft" }, { "msg_contents": "Em 10/02/2020 13:46, Jonathan S. Katz escreveu:\n> Hi,\n>\n> Attached is a draft for the 2020-02-13 press release. I would appreciate\n> any review for accuracy, notable omissions, and the inevitable typos I\n> tend to have on drafts (even though I do run it through a spell\n> checker). There were over 75 fixes, so paring the list down was a bit\n> tricky, and I tried to focus on things that would have noticeable user\n> impact.\n>\n> As noted in other threads, this is the EOL release for 9.4. In a\n> departure from the past, I tried to give a bit of a \"tribute\" to 9.4 by\n> listing some of the major / impactful features that were introduced, the\n> thought process being that we should celebrate the history of PostgreSQL\n> and also look at how far we've come in 5 years. If we feel this does not\n> make sense, I'm happy to remove it.\n>\n> While I'll accept feedback up until time of release, please try to have\n> it in no later than 2020-02-13 0:00 UTC :)\n>\n> Thanks,\n>\n> Jonathan\n\ns/Several fix for query planner errors/Several fixes for query planner \nerrors/\n\n\n", "msg_date": "Mon, 10 Feb 2020 14:55:28 -0300", "msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <luisroberto@siscobra.com.br>", "msg_from_op": false, "msg_subject": "Re: 2020-02-13 Press Release Draft" }, { "msg_contents": "On 2/10/20 12:55 PM, Luís Roberto Weck wrote:\n> Em 10/02/2020 13:46, Jonathan S. Katz escreveu:\n>> Hi,\n>>\n>> Attached is a draft for the 2020-02-13 press release. 
I would appreciate\n>> any review for accuracy, notable omissions, and the inevitable typos I\n>> tend to have on drafts (even though I do run it through a spell\n>> checker). There were over 75 fixes, so paring the list down was a bit\n>> tricky, and I tried to focus on things that would have noticeable user\n>> impact.\n>>\n>> As noted in other threads, this is the EOL release for 9.4. In a\n>> departure from the past, I tried to give a bit of a \"tribute\" to 9.4 by\n>> listing some of the major / impactful features that were introduced, the\n>> thought process being that we should celebrate the history of PostgreSQL\n>> and also look at how far we've come in 5 years. If we feel this does not\n>> make sense, I'm happy to remove it.\n>>\n>> While I'll accept feedback up until time of release, please try to have\n>> it in no later than 2020-02-13 0:00 UTC :)\n>>\n>> Thanks,\n>>\n>> Jonathan\n> \n> s/Several fix for query planner errors/Several fixes for query planner\n> errors/\n\nThanks! Fixed applied.\n\nHere is the latest canonical copy. I also incorporated some of Alvaro's\nsuggestions, though when trying to add the query I found the explanation\nbecoming too long. Perhaps it might make an interesting blog post? ;)\n\nPlease let me know if there are any more suggestions/changes before the\nrelease (which is rapidly approaching).\n\nThanks!\n\nJonathan", "msg_date": "Thu, 13 Feb 2020 01:04:25 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2020-02-13 Press Release Draft" } ]
[ { "msg_contents": "Hi!\n\nIn Postgres Pro we have complaints about too large core dumps.  The\npossible way to reduce core dump size is to skip some information.\nFrequently shared buffers is most long memory segment in core dump.\nFor sure, contents of shared buffers is required for discovering many\nof bugs.  But short core dump without shared buffers might be still\nuseful.  If system appears to be not capable to capture full core\ndump, short core dump appears to be valuable option.\n\nAttached POC patch implements core_dump_no_shared_buffers GUC, which\ndoes madvise(MADV_DONTDUMP) for shared buffers.  Any thoughts?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 10 Feb 2020 22:07:13 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "POC: GUC option for skipping shared buffers in core dumps" }, { "msg_contents": "Hi,\n\nOn 2020-02-10 22:07:13 +0300, Alexander Korotkov wrote:\n> In Postgres Pro we have complaints about too large core dumps.\n\nI've seen those too, and I've had them myself. It's pretty frustrating\nif a core dump makes the machine unusable for half an hour while said\ncoredump is being written out...\n\n\n> The possible way to reduce core dump size is to skip some information.\n> Frequently shared buffers is most long memory segment in core dump.\n> For sure, contents of shared buffers is required for discovering many\n> of bugs.  But short core dump without shared buffers might be still\n> useful.  If system appears to be not capable to capture full core\n> dump, short core dump appears to be valuable option.\n\nIt's possibly interesting, in the interim at least, that enabling huge\npages on linux has the effect that pages aren't included in core dumps\nby default.\n\n\n> Attached POC patch implements core_dump_no_shared_buffers GUC, which\n> does madvise(MADV_DONTDUMP) for shared buffers. Any thoughts?\n\nHm.  
Not really convinced by this. The rest of shared memory is still\npretty large, and this can't be tuned at runtime.\n\nHave you considered postmaster (or even just the GUC processing in each\nprocess) adjusting /proc/self/coredump_filter instead?\n\n From the man page:\n\n The value in the file is a bit mask of memory mapping types (see mmap(2)). If a bit is set in the mask, then memory mappings of the corresponding\n type are dumped; otherwise they are not dumped. The bits in this file have the following meanings:\n\n bit 0 Dump anonymous private mappings.\n bit 1 Dump anonymous shared mappings.\n bit 2 Dump file-backed private mappings.\n bit 3 Dump file-backed shared mappings.\n bit 4 (since Linux 2.6.24)\n Dump ELF headers.\n bit 5 (since Linux 2.6.28)\n Dump private huge pages.\n bit 6 (since Linux 2.6.28)\n Dump shared huge pages.\n bit 7 (since Linux 4.4)\n Dump private DAX pages.\n bit 8 (since Linux 4.4)\n Dump shared DAX pages.\n\nYou can also incorporate this into the start script for postgres today.\n\n\n> +static Size ShmemPageSize = FPM_PAGE_SIZE;\n\nI am somewhat confused by the use of FPM_PAGE_SIZE? What does this have\nto do with any of this? 
Is it just because it's set to 4kb by default?\n\n\n> /*\n> diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\n> index 8228e1f3903..c578528b0bb 100644\n> --- a/src/backend/utils/misc/guc.c\n> +++ b/src/backend/utils/misc/guc.c\n> @@ -2037,6 +2037,19 @@ static struct config_bool ConfigureNamesBool[] =\n> \t\tNULL, NULL, NULL\n> \t},\n> \n> +#if HAVE_DECL_MADV_DONTDUMP\n> +\t{\n> +\t\t{\"core_dump_no_shared_buffers\", PGC_POSTMASTER, DEVELOPER_OPTIONS,\n> +\t\t\tgettext_noop(\"Exclude shared buffers from core dumps.\"),\n> +\t\t\tNULL,\n> +\t\t\tGUC_NOT_IN_SAMPLE\n> +\t\t},\n> +\t\t&core_dump_no_shared_buffers,\n> +\t\tfalse,\n> +\t\tNULL, NULL, NULL\n> +\t},\n> +#endif\n\nIMO it's better to have GUCs always present, but don't allow them to be\nenabled if prerequisites aren't fulfilled.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Feb 2020 11:56:59 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: POC: GUC option for skipping shared buffers in core dumps" }, { "msg_contents": "On 2020-Feb-10, Andres Freund wrote:\n\n> Have you considered postmaster (or even just the GUC processing in each\n> process) adjusting /proc/self/coredump_filter instead?\n> \n> From the man page:\n> \n> The value in the file is a bit mask of memory mapping types (see mmap(2)). If a bit is set in the mask, then memory mappings of the corresponding\n> type are dumped; otherwise they are not dumped. 
The bits in this file have the following meanings:\n> \n> bit 0 Dump anonymous private mappings.\n> bit 1 Dump anonymous shared mappings.\n> bit 2 Dump file-backed private mappings.\n> bit 3 Dump file-backed shared mappings.\n> bit 4 (since Linux 2.6.24)\n> Dump ELF headers.\n> bit 5 (since Linux 2.6.28)\n> Dump private huge pages.\n> bit 6 (since Linux 2.6.28)\n> Dump shared huge pages.\n> bit 7 (since Linux 4.4)\n> Dump private DAX pages.\n> bit 8 (since Linux 4.4)\n> Dump shared DAX pages.\n> \n> You can also incorporate this into the start script for postgres today.\n\nYeah. Maybe we should file bug reports against downstream packages to\ninclude a corefilter tweak.\n\nMy development helper script uses this\n\nrunpg_corefilter() {\n pid=$(head -1 $PGDATADIR/postmaster.pid)\n if [ ! -z \"$pid\" ]; then\n echo 0x01 > /proc/$pid/coredump_filter \n fi\n}\n\nI don't know how easy is it to teach systemd to do this on its service\nfiles.\n\nFWIW I've heard that some people like to have shmem in core files to\nimprove debuggability, but it's *very* infrequent. But maybe we should have\na way to disable the corefiltering.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 10 Feb 2020 17:31:47 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: POC: GUC option for skipping shared buffers in core dumps" }, { "msg_contents": "Hi,\n\nOn 2020-02-10 17:31:47 -0300, Alvaro Herrera wrote:\n> Yeah. Maybe we should file bug reports against downstream packages to\n> include a corefilter tweak.\n\nHm, I'm not sure that's a reasonable way to scale things. Nor am I\nreally sure that's the right granularity.\n\n\n> My development helper script uses this\n> \n> runpg_corefilter() {\n> pid=$(head -1 $PGDATADIR/postmaster.pid)\n> if [ ! 
-z \"$pid\" ]; then\n        echo 0x01 > /proc/$pid/coredump_filter \n    fi\n}\n\nI don't know how easy is it to teach systemd to do this on its service\nfiles.\n\nFWIW I've heard that some people like to have shmem in core files to\nimprove debuggability, but it's *very* infrequent.  But maybe we should have\na way to disable the corefiltering.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n", "msg_date": "Mon, 10 Feb 2020 17:31:47 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: POC: GUC option for skipping shared buffers in core dumps" }, { "msg_contents": "Hi,\n\nOn 2020-02-10 17:31:47 -0300, Alvaro Herrera wrote:\n> Yeah.  Maybe we should file bug reports against downstream packages to\n> include a corefilter tweak.\n\nHm, I'm not sure that's a reasonable way to scale things. Nor am I\nreally sure that's the right granularity.\n\n\n> My development helper script uses this\n> \n> runpg_corefilter() {\n>     pid=$(head -1 $PGDATADIR/postmaster.pid)\n>     if [ ! -z \"$pid\" ]; then\n>         echo 0x01 > /proc/$pid/coredump_filter \n>     fi\n> }\n> \n> I don't know how easy is it to teach systemd to do this on its service\n> files.\n\nWell, you could just make it part of the command that starts the\nserver. Not aware of anything else.\n\n\n> FWIW I've heard that some people like to have shmem in core files to\n> improve debuggability, but it's *very* infrequent.\n\nOh, I pretty regularly want that. If you're debugging anything that\nincludes locks, page accesses, etc, it's pretty hard to succeed without?\n\n\n> But maybe we should have a way to disable the corefiltering.\n\nThere should, imo. That's why I was wondering about making this a GUC\n(presumably suset).\n\nGreetings,\n\nAndres Freund\n", "msg_date": "Mon, 10 Feb 2020 13:03:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: POC: GUC option for skipping shared buffers in core dumps" }, { "msg_contents": "On 2020-Feb-10, Andres Freund wrote:\n\n> Hi,\n> \n> On 2020-02-10 17:31:47 -0300, Alvaro Herrera wrote:\n> > Yeah.  Maybe we should file bug reports against downstream packages to\n> > include a corefilter tweak.\n> \n> Hm, I'm not sure that's a reasonable way to scale things. Nor am I\n> really sure that's the right granularity.\n\nHah.  This argument boils down to saying our packagers suck :-)\n\n> > I don't know how easy is it to teach systemd to do this on its service\n> > files.\n> \n> Well, you could just make it part of the command that starts the\n> server. Not aware of anything else.\n\nI tried to do that, but couldn't figure out a clean way, because you\nhave to do it after the fact (not in the process itself).  Maybe it's\npossible to have pg_ctl do it once postmaster is running.\n\n> > FWIW I've heard that some people like to have shmem in core files to\n> > improve debuggability, but it's *very* infrequent.\n> \n> Oh, I pretty regularly want that.  
If you're debugging anything that\n> includes locks, page accesses, etc, it's pretty hard to succeed without?\n\nyyyyeah kinda, I guess -- I don't remember cases when I've wanted to do\nthat in production systems.\n\n> > But maybe we should have a way to disable the corefiltering.\n> \n> There should, imo. That's why I was wondering about making this a GUC\n> (presumably suset).\n\nNot really sure about suset ... AFAIR that means superuser can SET it;\nbut what you really care about is more like ALTER SYSTEM, which is\nSIGHUP unless I misremember.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 10 Feb 2020 18:21:30 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: POC: GUC option for skipping shared buffers in core dumps" }, { "msg_contents": "Hi,\n\nOn 2020-02-10 18:21:30 -0300, Alvaro Herrera wrote:\n> On 2020-Feb-10, Andres Freund wrote:\n> \n> > Hi,\n> > \n> > On 2020-02-10 17:31:47 -0300, Alvaro Herrera wrote:\n> > > Yeah. Maybe we should file bug reports against downstream packages to\n> > > include a corefilter tweak.\n> > \n> > Hm, I'm not sure that's a reasonable way to scale things. Nor am I\n> > really sure that's the right granularity.\n> \n> Hah. This argument boils down to saying our packagers suck :-)\n\nHm? I'd say it's a sign of respect to not have each of them do the same\nwork. Especially when they can't address it to the same degree core PG\ncan. So maybe I'm saying we shouldn't be lazy ;)\n\n\n> > > I don't know how easy is it to teach systemd to do this on its service\n> > > files.\n> > \n> > Well, you could just make it part of the command that starts the\n> > server. Not aware of anything else.\n> \n> I tried to do that, but couldn't figure out a clean way, because you\n> have to do it after the fact (not in the process itself). 
Maybe it's\n> possible to have pg_ctl do it once postmaster is running.\n\nShouldn't it be sufficient to just do it to /proc/self/coredump_filter?\nIt's inherited IIUC?\n\nYep:\n A child process created via fork(2) inherits its parent's coredump_filter value; the coredump_filter value is preserved across an execve(2).\n\n\n> > > But maybe we should have a way to disable the corefiltering.\n> > \n> > There should, imo. That's why I was wondering about making this a GUC\n> > (presumably suset).\n> \n> Not really sure about suset ... AFAIR that means superuser can SET it;\n> but what you really care about is more like ALTER SYSTEM, which is\n> SIGHUP unless I misremember.\n\nI really want both. Sometimes it's annoying to get followup coredumps by\nother processes, even if I just want to get a corefile from one process\ndoing something more specific. It seems nice to alter that session /\nuser to have large coredumps, but not the rest?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Feb 2020 13:35:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: POC: GUC option for skipping shared buffers in core dumps" }, { "msg_contents": "On Tue, 11 Feb 2020 at 03:07, Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n>\n> Hi!\n>\n> In Postgres Pro we have complaints about too large core dumps. The\n> possible way to reduce code dump size is to skip some information.\n> Frequently shared buffers is most long memory segment in core dump.\n> For sure, contents of shared buffers is required for discovering many\n> of bugs. But short core dump without shared buffers might be still\n> useful. If system appears to be not capable to capture full core\n> dump, short core dump appears to be valuable option.\n>\n> Attached POC patch implements core_dump_no_shared_buffers GUC, which\n> does madvise(MADV_DONTDUMP) for shared buffers. Any thoughts?\n\nI'd like this a lot. 
In fact I'd like it so much I kinda hope it'd be\nconsidered backpatchable because coredump_filter is much too crude and\ncoarse grained.\n\nNow that Pg has parallel query we all rely on shm_mq, DSM/DSA, etc.\nIt's increasingly problematic to have these areas left out of core\ndumps in order to avoid bloating them with shared_buffers contents.\nDoubly so if, like me, you work with extensions that make very heavy\nuse of shared memory areas for their own IPC.\n\nCurrently my options are \"dump all shmem including shared_buffers\" or\n\"dump no shmem\". But I usually want \"dump all shmem except\nshared_buffers\". It's tolerable to just dump s_b on a test system with\na small s_b, but if enabling coredumps to track down some\nhard-to-repro crash on a production system I really don't want 20GB\ncoredumps...\n\nPlease, please apply.\n\nPlease backpatch, if you can possibly stand to do so.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n", "msg_date": "Tue, 11 Feb 2020 11:36:08 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: POC: GUC option for skipping shared buffers in core dumps" }, { "msg_contents": "From: Craig Ringer <craig@2ndquadrant.com>\r\n> Currently my options are \"dump all shmem including shared_buffers\" or\r\n> \"dump no shmem\". But I usually want \"dump all shmem except\r\n> shared_buffers\". It's tolerable to just dump s_b on a test system with\r\n> a small s_b, but if enabling coredumps to track down some\r\n> hard-to-repro crash on a production system I really don't want 20GB\r\n> coredumps...\r\n\r\nWe have a simple implementation that allows to exclude shared memory. That's been working for years.\r\n\r\n[postgresql.conf]\r\ncore_directory = 'location of core dumps'\r\ncore_contents = '{none | minimum | full}'\r\n# none = doesn't dump core, minimum excludes shared memory, and full dumps all\r\n\r\nI can provide it. 
But it simply changes the current directory and detaches shared memory when postgres receives signals that dump core.\r\n\r\nI made this GUC because Windows also had to be dealt with.\r\n\r\n\r\nFrom: Andres Freund <andres@anarazel.de>\r\n> > Hah. This argument boils down to saying our packagers suck :-)\r\n> \r\n> Hm? I'd say it's a sign of respect to not have each of them do the same\r\n> work. Especially when they can't address it to the same degree core PG\r\n> can. So maybe I'm saying we shouldn't be lazy ;)\r\n\r\nMaybe we should add options to pg_ctl just like -c which is available now, so that OS packagers can easily use in their start scripts. Or, can they just use pg_ctl's -o to specify new GUC parameters?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Wed, 12 Feb 2020 00:55:50 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: POC: GUC option for skipping shared buffers in core dumps" }, { "msg_contents": "\n\nOn Wed, Feb 12, 2020, at 7:55 AM, tsunakawa.takay@fujitsu.com wrote:\n> From: Craig Ringer <craig@2ndquadrant.com>\n>> Currently my options are \"dump all shmem including shared_buffers\" or\n>> \"dump no shmem\". But I usually want \"dump all shmem except\n>> shared_buffers\". It's tolerable to just dump s_b on a test system with\n>> a small s_b, but if enabling coredumps to track down some\n>> hard-to-repro crash on a production system I really don't want 20GB\n>> coredumps...\n>\n> We have a simple implementation that allows to exclude shared memory. \n> That's been working for years.\n>\n> [postgresql.conf]\n> core_directory = 'location of core dumps'\n> core_contents = '{none | minimum | full}'\n> # none = doesn't dump core, minimum excludes shared memory, and full dumps all\n>\n> I can provide it. 
But it simply changes the current directory and \n> detaches shared memory when postgres receives signals that dump core.\n>\n> I made this GUC because Windows also had to be dealt with.\n\nIf it's still possible, share your patch here. I don't know what about the core, but during development, especially the bug-fixing process, it is really dull to wait for the core generation process every time, even if you debug a planner issue and are not interested in shared memory blocks ...\n\n-- \nRegards,\nAndrei Lepikhov\n\n\n", "msg_date": "Wed, 13 Sep 2023 10:48:43 +0700", "msg_from": "\"Lepikhov Andrei\" <lepikhov@fastmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: GUC option for skipping shared buffers in core dumps" }, { "msg_contents": "Hi,\n\nThe current approach could be better because we want to use it on \nWindows/MacOS and other systems. So, I've tried to develop another \nstrategy - detaching shmem and DSM blocks before executing the abort() \nroutine.\nAs I can see, it helps and reduces the size of the core file.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Mon, 25 Sep 2023 16:42:35 +0700", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: POC: GUC option for skipping shared buffers in core dumps" } ]
[ { "msg_contents": "Hi,\n\nI started this as a reply to\nhttps://www.postgresql.org/message-id/CAA4eK1JMgUfiTdAgr9k3nA4cdKvvruousKBg7FWTDNzQgBpOZA%40mail.gmail.com\nbut the email seemed to morph into a distinct topic that I thought it\nbest to separate out.\n\nWe've had a number of cases where heavyweight locks turned out to be\ntoo, well, heavy. And I don't mean cases where a fixed number of locks\ncan do the job. The last case of this is the above thread, where a\nseparate lock manager just for relation extension was implemented.\n\nMy problem with that patch is that it just seems like the wrong\ndirection architecturally to me. There's two main aspects to this:\n\n1) It basically builds another, more lightweight but less capable,\n   lock manager that can lock more objects than we can have distinct\n   locks for. It is faster because it uses *one* hashtable without\n   conflict handling, because it has fewer lock modes, and because it\n   doesn't support detecting deadlocks. And probably some other things.\n\n2) A lot of the contention around file extension comes from us doing\n   multiple expensive things under one lock (determining current\n   relation size, searching victim buffer, extending file), and in tiny\n   increments (growing a 1TB table by 8kb). This patch doesn't address\n   that at all.\n\nI'm only planning to address 1) in this thread, and write a separate\none about 2) as part of the above thread. So more on that later.\n\nWith regard to 1):\n\nTo me this seems to go in the direction of having multiple bespoke lock\nmanagers with slightly different feature sets, different debugging /\nlogging / monitoring support, with separate locking code each. That's\nbad for maintainability.\n\n\nI think one crucial piece of analysis that I've not seen fully done, is\nwhy a separate lock manager is actually faster. 
An early (quite\ndifferent) version of this patch yielded the following micro-benchmark:\n\nhttps://www.postgresql.org/message-id/CAD21AoD1NenjTmD%3D5ypOBo9%3DFRtAtWVxUcHqHxY3wNos_5Bb5w%40mail.gmail.com\nOn 2017-11-21 07:19:30 +0900, Masahiko Sawada wrote:\n> Also I've done a micro-benchmark; calling LockRelationForExtension and\n> UnlockRelationForExtension tightly in order to measure the number of\n> lock/unlock cycles per second. The result is,\n> PATCHED = 3.95892e+06 (cycles/sec)\n> HEAD = 1.15284e+06 (cycles/sec)\n> The patched is 3 times faster than current HEAD.\n\nbut I've not actually seen an analysis as to *why* precisely there's a\nthreefold difference in cost, and whether the problem could instead be\naddressed by making the difference much smaller.\n\nMy guess would be that the difference, to a very large degree, comes from\navoiding dynahash lookups. We currently have quite a few hashtable\nlookups:\n\n* LockRelationForExtension\n** LockAcquireExtended does HASH_ENTERs in LockMethodLocalHash\n** SetupLockInTable does HASH_ENTER_NULL in LockMethodLockHash\n** SetupLockInTable does HASH_ENTER_NULL in LockMethodProcLockHash\n* UnlockRelationForExtension\n** LockRelease does HASH_FIND in LockMethodLocalHash\n** CleanUpLock does HASH_REMOVE in LockMethodProcLockHash\n** CleanUpLock does HASH_REMOVE in LockMethodLockHash\n** RemoveLocalLock does HASH_REMOVE in LockMethodLocalHash\n\nit's pretty easy to believe that a lock mapping like:\n\n+\t\trelextlock = &RelExtLockArray[RelExtLockTargetTagToIndex(&tag)];\n\nwith\n\n+typedef struct RelExtLockTag\n+{\n+\tOid\t\t\tdbid;\t\t\t/* InvalidOid for a shared relation */\n+\tOid\t\t\trelid;\n+} RelExtLockTag;\n...\n+static inline uint32\n+RelExtLockTargetTagToIndex(RelExtLockTag *locktag)\n+{\n+\treturn tag_hash(locktag, sizeof(RelExtLockTag)) % N_RELEXTLOCK_ENTS;\n+}\n\nand then just waiting till that specific hash entry becomes free, is\ncheaper than *7* dynahash lookups. 
Especially if doing so doesn't\nrequire any mapping locks.\n\nbut it's not at all clear to me whether we cannot sometimes / always\navoid some of this overhead. Improving lock.c would be a much bigger\nwin, without building separate infrastructure.\n\nE.g. we could:\n\n- Have an extended LockAcquire API where the caller provides a stack\n  variable for caching information until the LockRelease call, avoiding\n  separate LockMethodLocalHash lookup for release.\n\n- Instead of always implementing HASH_REMOVE as a completely fresh\n  lookup in dynahash, we should provide an API for removing an object we\n  *know* is in the hashtable at a specific pointer. We're obviously\n  already relying on that address to stay constant, so we'd not lose\n  flexibility. With this separate API we can avoid the bucket lookup,\n  walking through each element in that bucket, and having to compare the\n  hash key every time (which is quite expensive for a full LOCKTAG).\n\n  For the biggest benefit, we'd have to make the bucket list doubly\n  linked, but even if we were to just go through from start, and just\n  compare entries by pointer, we'd win big.\n\n- We should try to figure out whether we can avoid needing the separate\n  lock table for PROCLOCKs. 
That'd avoid needing separate shared memory states in many\n cases.\n\n This might be too problematic, because extension locks don't have\n the same desired behaviour around group locking (and a small army of\n other issues).\n\n - We could keep a separate extension lock cached inside the relation\n lock. The first time a transaction needs to extend, it does the\n normal work, and after that stores the PROCLOCK in the LOCALLOCK (or\n something like that). Once extension is done, don't release the lock\n entirely, but instead just reduce the lock level to a new\n non-conflicting lock level\n\n Alternatively we could implement something very similar outside of\n lock.c, e.g. by caching the LOCALLOCK* (possibly identified by an\n integer or such) in RelationData. That'd require enough support\n from lock.c to be able to make that really cheap.\n\n\nThe second big difference between lock.c and the proposed relation\nextension lock is that it doesn't have a \"mapping\" lock. It does instead\nsolve the the mapping without a lock by having exactly one potential\nlocation for each lock, using atomic ops to manipulate the lock state,\nand deals with conflicts by waiting for the bucket to become free.\n\nI don't think it's realistic to not use locks to protect the lock\nmapping hashtable, nor does it seem likely we can make the manipulation\nof individual lock states entirely atomic. But it very well seems\npossible to reduce the likelihood of contention:\n\nWe could e.g. split the maintenance of the hashtable(s) from protecting\nthe state of individual locks. The locks protecting the hashtable would\njust be held long enough to change a \"pin count\" of each lock, and then\na per LOCK lwlock would protect each heavyweight lock's state. We'd not\nneed to lock the partition locks for quite a few cases, because there's\nmany locks in a loaded system that always have lockers. 
There'd be cost\nto that, needing more atomic ops in some cases, but I think it'd reduce\ncontention tremendously, and it'd open a lot more optimization\npotential. It seems plausible that we could even work, as a followup,\nto not needing the partition locks for some lock releases (e.g. when\nthere are other holders), and we might even be able to avoid it for\nacquisitions, by caching the LOCK inside LOCALLOCK, and re-identifying\nthe identity.\n\nThoughts?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Feb 2020 20:22:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Improve heavyweight locks instead of building new lock managers?" }, { "msg_contents": "Hi,\n\nSome of the discussions about improving the locking code, in particular\nthe group locking / deadlock detector discussion with Robert, again made\nme look at lock.c. While looking at how much work it'd be to a) remove\nthe PROCLOCK hashtable b) move more of the group locking logic into\nlock.c, rather than smarts in deadlock.c, I got sidetracked by all the\nverbose and hard to read SHM_QUEUE code.\n\nHere's a *draft* patch series for removing all use of SHM_QUEUE, and\nsubsequently removing SHM_QUEUE. There's a fair bit of polish needed,\nbut I do think it makes the code considerably easier to read\n(particularly for predicate.c). 
The diffstat is nice too:\n\n src/include/lib/ilist.h | 132 +++++++++++++++++----\n src/include/replication/walsender_private.h | 3 +-\n src/include/storage/lock.h | 10 +-\n src/include/storage/predicate_internals.h | 49 +++-----\n src/include/storage/proc.h | 18 +--\n src/include/storage/shmem.h | 22 ----\n src/backend/access/transam/twophase.c | 4 +-\n src/backend/lib/ilist.c | 8 +-\n src/backend/replication/syncrep.c | 89 ++++++--------\n src/backend/replication/walsender.c | 2 +-\n src/backend/storage/ipc/Makefile | 1 -\n src/backend/storage/ipc/shmqueue.c | 190 ------------------------------\n src/backend/storage/lmgr/deadlock.c | 76 +++++-------\n src/backend/storage/lmgr/lock.c | 129 ++++++++------------\n src/backend/storage/lmgr/predicate.c | 692 +++++++++++++++++++++++++++++++++++------------------------------------------------------------------------\n src/backend/storage/lmgr/proc.c | 197 +++++++++++++------------------\n 16 files changed, 569 insertions(+), 1053 deletions(-)\n\nI don't want to invest a lot of time into this if there's not some\nagreement that this is a good direction to go into. So I'd appreciate a\nfew cursory looks before spending more time.\n\nOverview:\n0001: Add additional prev/next & detached node functions to ilist.\n I think the additional prev/next accessors are nice. I am less\n convinced by the 'detached' stuff, but it's used by some SHM_QUEUE\n users. I don't want to make the plain dlist_delete reset the node's\n prev/next pointers, it's not needed in the vast majority of cases...\n\n0002: Use dlists instead of SHM_QUEUE for heavyweight locks.\n I mostly removed the odd reliance on PGPROC.links needing to be the\n first struct member - seems odd.\n\n I think we should rename PROC_QUEUE.links, elsewhere that's used for\n list membership nodes, so it's imo confusing/odd.\n\n0003: Use dlist for syncrep queue.\n This seems fairly simple to me.\n\n0004: Use dlists for predicate locking.\n Unfortunately pretty large. 
I think it's a huge improvement, but it's\n also subtle code. Wonder if there's something better to do here wrt\n OnConflict_CheckForSerializationFailure?\n\n0005: Remove now unused SHMQueue*.\n0006: Remove PROC_QUEUE.size.\n I'm not sure whether this is a a good idea. I was looking primarily at\n that because I thought it'd allow us to remove PROC_QUEUE as a whole\n if we wanted to. But as PROC_QUEUE.size doesn't really seem to buy us\n much, we should perhaps just do something roughly like in the patch?\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 19 Feb 2020 20:14:02 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improve heavyweight locks instead of building new lock managers?" }, { "msg_contents": "Hi Andres,\n\nOn Thu, Feb 20, 2020 at 1:14 PM Andres Freund <andres@anarazel.de> wrote:\n> Here's a *draft* patch series for removing all use of SHM_QUEUE, and\n> subsequently removing SHM_QUEUE. There's a fair bit of polish needed,\n> but I do think it makes the code considerably easier to read\n> (particularly for predicate.c). The diffstat is nice too:\n>\n> 0005: Remove now unused SHMQueue*.\n> 0006: Remove PROC_QUEUE.size.\n\nMaybe you're aware, but there still seem to be places using it. In\nLOCK_DEBUG build:\n\nlock.c: In function ‘LOCK_PRINT’:\nlock.c:320:20: error: ‘PROC_QUEUE’ {aka ‘const struct PROC_QUEUE’} has\nno member named ‘size’\n lock->waitProcs.size,\n\nlock.c: In function ‘DumpLocks’:\nlock.c:3906:2: error: unknown type name ‘SHM_QUEUE’; did you mean ‘SI_QUEUE’?\n\nFwiw, I for one, am all for removing specialized data structures when\nmore widely used data structures will do, especially if that\nspecialization is no longer relevant.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 20 Feb 2020 15:15:42 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve heavyweight locks instead of building new lock managers?" 
}, { "msg_contents": "On Thu, Feb 20, 2020 at 5:14 PM Andres Freund <andres@anarazel.de> wrote:\n> 16 files changed, 569 insertions(+), 1053 deletions(-)\n\nNice!\n\nSome comments on 0001, 0003, 0004:\n\n> Subject: [PATCH v1 1/6] Add additional prev/next & detached node functions to\n\n+extern void dlist_check(const dlist_head *head);\n+extern void slist_check(const slist_head *head);\n\nI approve of the incidental constification in this patch.\n\n+/*\n+ * Like dlist_delete(), but also sets next/prev to NULL to signal not being in\n+ * list.\n+ */\n+static inline void\n+dlist_delete_thoroughly(dlist_node *node)\n+{\n+ node->prev->next = node->next;\n+ node->next->prev = node->prev;\n+ node->next = NULL;\n+ node->prev = NULL;\n+}\n\nInstead of introducing this strange terminology, why not just have the\ncallers do ...\n\ndlist_delete(node);\ndlist_node_init(node);\n\n..., or perhaps supply dlist_delete_and_reinit(node) that does exactly\nthat? That is, reuse the code and terminology.\n\n+/*\n+ * Check if node is detached. A node is only detached if it either has been\n+ * initialized with dlist_init_node(), or deleted with\n+ * dlist_delete_thoroughly().\n+ */\n+static inline bool\n+dlist_node_is_detached(const dlist_node *node)\n+{\n+ Assert((node->next == NULL && node->prev == NULL) ||\n+ (node->next != NULL && node->prev != NULL));\n+\n+ return node->next == NULL;\n+}\n\nHow about dlist_node_is_linked()? I don't like introducing random new\nverbs when we already have 'linked' in various places, and these\nthings are, y'know, linked lists.\n\n> Subject: [PATCH v1 3/6] Use dlist for syncrep queue.\n\nLGTM.\n\n> Subject: [PATCH v1 4/6] Use dlists for predicate locking.\n\n+ dlist_foreach(iter, &unconstify(SERIALIZABLEXACT *, reader)->outConflicts)\n\nYuck... I suppose you could do this:\n\n- dlist_foreach(iter, &unconstify(SERIALIZABLEXACT *, reader)->outConflicts)\n+ dlist_foreach_const(iter, &reader->outConflicts)\n\n... 
given:\n\n+/* Variant for when you have a pointer to const dlist_head. */\n+#define dlist_foreach_const(iter, lhead) \\\n+ for (AssertVariableIsOfTypeMacro(iter, dlist_iter), \\\n+ AssertVariableIsOfTypeMacro(lhead, const dlist_head *), \\\n+ (iter).end = (dlist_node *) &(lhead)->head, \\\n+ (iter).cur = (iter).end->next ? (iter).end->next : (iter).end; \\\n+ (iter).cur != (iter).end; \\\n+ (iter).cur = (iter).cur->next)\n+\n\n... or find a way to make dlist_foreach() handle that itself, which\nseems pretty reasonable given its remit to traverse lists without\nmodifying them, though perhaps that would require a different iterator\ntype...\n\nOtherwise looks OK to me and passes various tests I threw at it.\n\n\n", "msg_date": "Fri, 21 Feb 2020 12:40:06 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve heavyweight locks instead of building new lock managers?" }, { "msg_contents": "Hi,\n\nOn 2020-02-21 12:40:06 +1300, Thomas Munro wrote:\n> On Thu, Feb 20, 2020 at 5:14 PM Andres Freund <andres@anarazel.de> wrote:\n> > 16 files changed, 569 insertions(+), 1053 deletions(-)\n> \n> Nice!\n\nThanks!\n\n\n> Some comments on 0001, 0003, 0004:\n> \n> > Subject: [PATCH v1 1/6] Add additional prev/next & detached node functions to\n> \n> +extern void dlist_check(const dlist_head *head);\n> +extern void slist_check(const slist_head *head);\n> \n> I approve of the incidental constification in this patch.\n\nIt was just necessary fallout :)\n\n\n> +/*\n> + * Like dlist_delete(), but also sets next/prev to NULL to signal not being in\n> + * list.\n> + */\n> +static inline void\n> +dlist_delete_thoroughly(dlist_node *node)\n> +{\n> + node->prev->next = node->next;\n> + node->next->prev = node->prev;\n> + node->next = NULL;\n> + node->prev = NULL;\n> +}\n> \n> Instead of introducing this strange terminology, why not just have the\n> callers do ...\n> \n> dlist_delete(node);\n> dlist_node_init(node);\n\nThere's quite a few 
callers in predicate.c - I actually did that first.\n\n\n> ..., or perhaps supply dlist_delete_and_reinit(node) that does exactly\n> that? That is, reuse the code and terminology.\n\nYea, that's might be better, but see paragraph below. I quite dislike\nadding any \"empty node\" state.\n\n\n> +/*\n> + * Check if node is detached. A node is only detached if it either has been\n> + * initialized with dlist_init_node(), or deleted with\n> + * dlist_delete_thoroughly().\n> + */\n> +static inline bool\n> +dlist_node_is_detached(const dlist_node *node)\n> +{\n> + Assert((node->next == NULL && node->prev == NULL) ||\n> + (node->next != NULL && node->prev != NULL));\n> +\n> + return node->next == NULL;\n> +}\n> \n> How about dlist_node_is_linked()? I don't like introducing random new\n> verbs when we already have 'linked' in various places, and these\n> things are, y'know, linked lists.\n\nWell, but that doesn't signal that you can't just delete and have\ndlist_node_is_linked() work. I *want* it to sound \"different\". We could\nof course make delete always do this, but I don't want to add that\noverhead unnecessarily.\n\n\n> > Subject: [PATCH v1 4/6] Use dlists for predicate locking.\n> \n> + dlist_foreach(iter, &unconstify(SERIALIZABLEXACT *, reader)->outConflicts)\n> \n> Yuck...\n\nIt doesn't seem *that* bad to me, at least signals properly what we're\ndoing, and only does so in one place.\n\n\n> I suppose you could do this:\n> \n> - dlist_foreach(iter, &unconstify(SERIALIZABLEXACT *, reader)->outConflicts)\n> + dlist_foreach_const(iter, &reader->outConflicts)\n\nWe'd need a different iterator type too, I think? Because the iterator\nitself can't be constant, but we'd want the elements themselves be\npointers to constants.\n\nconst just isn't a very granular thing in C :(.\n\n\n> ... given:\n> \n> +/* Variant for when you have a pointer to const dlist_head. 
*/\n> +#define dlist_foreach_const(iter, lhead) \\\n> + for (AssertVariableIsOfTypeMacro(iter, dlist_iter), \\\n> + AssertVariableIsOfTypeMacro(lhead, const dlist_head *), \\\n> + (iter).end = (dlist_node *) &(lhead)->head, \\\n> + (iter).cur = (iter).end->next ? (iter).end->next : (iter).end; \\\n> + (iter).cur != (iter).end; \\\n> + (iter).cur = (iter).cur->next)\n> +\n> \n> ... or find a way to make dlist_foreach() handle that itself, which\n> seems pretty reasonable given its remit to traverse lists without\n> modifying them, though perhaps that would require a different iterator\n> type...\n\nI was trying that first, but I don't easily see how we can do\nso. Iterating over a non-constant list with dlist_foreach obviously\nstill allows to to manipulate the list members. Thus dlist_iter.cur\ncan't be a 'pointer to const'. Whereas that's arguably what'd be needed\nfor a correct dlist_foreach() of a constant list?\n\nWe could just accept const pointers for dlist_foreach(), but then we'd\n*always* accept them, and we'd thus unconditionally have iter.cur as\nnon-const. Would that be better?\n\n\n> Otherwise looks OK to me and passes various tests I threw at it\n\nThanks!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Mar 2020 15:23:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Improve heavyweight locks instead of building new lock managers?" }, { "msg_contents": "On Mon, Feb 10, 2020 at 11:22 PM Andres Freund <andres@anarazel.de> wrote:\n> To me this seems to go in the direction of having multiple bespoke lock\n> managers with slightly different feature sets, different debugging /\n> logging / monitoring support, with separate locking code each. That's\n> bad for maintainability.\n\nI think it would help a lot if the lock manager were not such a\nmonolithic thing. 
For instance, suppose that instead of having the\ndeadlock detector be part of the lock manager, it's a separate thing\nthat integrates with the lock manager. Instead of only knowing about\nwaits for heavyweight locks, it knows about whatever you want to tell\nit about. Then, for instance, you could possibly tell it that process\nA is waiting for a cleanup lock while process B holds a pin, possibly\ndetecting a deadlock we can't notice today. There are likely other\ncases as well.\n\n> E.g. we could:\n>\n> - Have an extended LockAcquire API where the caller provides a stack\n> variable for caching information until the LockRelease call, avoiding\n> separate LockMethodLocalHash lookup for release.\n\nMaybe. Seems like it might make things a little clunky for the caller,\nthough. If you just kept a stack of locks acquired and checked the\nlock being released against the top of the stack, you'd probably catch\na large percentage of cases. Or, the caller could pass a flag\nindicating whether they intend to release the lock prior to the end of\nthe transaction (which could be cross-checked by assertions). Any that\nyou intend to release early go on a stack.\n\n> - Instead of always implementing HASH_REMOVE as a completely fresh\n> lookup in dynahash, we should provide an API for removing an object we\n> *know* is in the hashtable at a specific pointer. We're obviously\n> already relying on that address to stay constant, so we'd not lose\n> flexibility. With this separate API we can avoid the bucket lookup,\n> walking through each element in that bucket, and having to compare the\n> hash key every time (which is quite expensive for a full LOCKTAG).\n\nThat makes a lot of sense, although I wonder if we should go further\nand replace dynahash entirely.\n\n> - We should try to figure out whether we can avoid needing the separate\n> lock table for PROCLOCKs. 
As far as I can tell there's not a single\n> reason for it to be a hashtable, rather than something like\n> NUM_LOCK_PARTITIONS lists of free PROCLOCKs.\n\nI agree that the PROCLOCK thing seems ugly and inefficient. I'm not\nsure that a bunch of lists is the best answer, though it might be. One\ncase to consider is when a lock is initially acquired via the\nfast-path and then, because of a conflicting lock acquisition,\ntransferred by *some other process* to the main lock table. If this\noccurs, the original locker must be able to find its PROCLOCK. It\ndoesn't have to be crazy efficient because it shouldn't happen very\noften, but it shouldn't suck too much.\n\n> - Whenever we do a relation extension, we better already hold a normal\n> relation lock. We don't actually need to have an entirely distinct\n> type of lock for extensions, that theoretically could support N\n> extension locks for each relation. The fact that these are associated\n> could be utilized in different ways:\n>\n> - We could define relation extension as a separate lock level only\n> conflicting with itself (and perhaps treat AELs as including\n> it). That'd avoid needing separate shared memory states in many\n> cases.\n>\n> This might be too problematic, because extension locks don't have\n> the same desired behaviour around group locking (and a small army of\n> other issues).\n\nYeah, I don't think that's likely to work out very nicely.\n\n> - We could keep a separate extension lock cached inside the relation\n> lock. The first time a transaction needs to extend, it does the\n> normal work, and after that stores the PROCLOCK in the LOCALLOCK (or\n> something like that). Once extension is done, don't release the lock\n> entirely, but instead just reduce the lock level to a new\n> non-conflicting lock level\n>\n> Alternatively we could implement something very similar outside of\n> lock.c, e.g. by caching the LOCALLOCK* (possibly identified by an\n> integer or such) in RelationData. 
That'd require enough support\n> from lock.c to be able to make that really cheap.\n\nNot sure I quite see what you mean here.\n\n> We could e.g. split the maintenance of the hashtable(s) from protecting\n> the state of individual locks. The locks protecting the hashtable would\n> just be held long enough to change a \"pin count\" of each lock, and then\n> a per LOCK lwlock would protect each heavyweight lock's state. We'd not\n> need to lock the partition locks for quite a few cases, because there's\n> many locks in a loaded system that always have lockers. There'd be cost\n> to that, needing more atomic ops in some cases, but I think it'd reduce\n> contention tremendously, and it'd open a lot more optimization\n> potential. It seems plausible that we could even work, as a followup,\n> to not needing the partition locks for some lock releases (e.g. when\n> there are other holders), and we might even be able to avoid it for\n> acquisitions, by caching the LOCK inside LOCALLOCK, and re-identifying\n> the identity.\n\nI agree. This seems worth exploring. The idea of caching the probable\nlocation of the lock and re-pinning it to check whether it's the one\nyou expected seems like it would avoid false sharing in a lot of\npractical cases.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 10 Apr 2020 08:59:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve heavyweight locks instead of building new lock managers?" 
}, { "msg_contents": "On Wed, Feb 19, 2020 at 11:14 PM Andres Freund <andres@anarazel.de> wrote:\n> Here's a *draft* patch series for removing all use of SHM_QUEUE, and\n> subsequently removing SHM_QUEUE.\n\n+1 for that idea.\n\nBut color me skeptical of what Thomas described as the \"incidental\nconstification\".\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 10 Apr 2020 09:01:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve heavyweight locks instead of building new lock managers?" } ]
[ { "msg_contents": "Hi,\n\nI get the error `can not find file \"\"` when I hack the bgworkers, the\nroot cause is that a zero-initialized bgworker was registered. It's not\ninteresting, but I'd like to refine the error message a bit.\n\nThanks.\n\n-- \nAdam Lee", "msg_date": "Tue, 11 Feb 2020 14:03:27 +0800", "msg_from": "Adam Lee <ali@pivotal.io>", "msg_from_op": true, "msg_subject": "[PATCH] Sanity check BackgroundWorker's function entry" } ]
[ { "msg_contents": "Example:\n\ninitdb --help\n...\nReport bugs to <pgsql-bugs@lists.postgresql.org>.\nPostgreSQL home page: <https://www.postgresql.org/>\n\nI think this is useful. You see this nowadays in other packages as \nwell. See also \n<https://www.gnu.org/prep/standards/standards.html#g_t_002d_002dhelp> \nfor a reference.\n\nAutoconf already has a way to register the package home page and \npropagate it, so I used that. That also makes it easier to change it \n(see http: -> https:) or have third parties substitute their own contact \ninformation without destroying translations.\n\nWhile at it, I also did the same refactoring for the bug reporting \naddress (which was also recently changed, so this is a bit late, but who \nknows what the future holds).\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 11 Feb 2020 08:41:53 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Add PostgreSQL home page to --help output" }, { "msg_contents": "> On 11 Feb 2020, at 08:41, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n\n> Autoconf already has a way to register the package home page and propagate it, so I used that. 
That also makes it easier to change it (see http: -> https:) or have third parties substitute their own contact information without destroying translations.\n\n+1, this change has the side benefit of aiding postgres forks who otherwise\nhave to patch all occurrences to avoid getting reports on the wrong list.\n\n> While at it, I also did the same refactoring for the bug reporting address (which was also recently changed, so this is a bit late, but who knows what the future holds).\n\nPardon my weak autoconf-skills, what does the inverted brackets (]foo[ as\nopposed to [foo]) do in the below?\n\n-Please also contact <pgsql-bugs@lists.postgresql.org> to see about\n+Please also contact <]AC_PACKAGE_BUGREPORT[> to see about\n\ncheers ./daniel\n\n", "msg_date": "Tue, 11 Feb 2020 10:34:42 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Add PostgreSQL home page to --help output" }, { "msg_contents": "On 2020-02-11 10:34, Daniel Gustafsson wrote:\n> Pardon my weak autoconf-skills, what does the inverted brackets (]foo[ as\n> opposed to [foo]) do in the below?\n> \n> -Please also contact<pgsql-bugs@lists.postgresql.org> to see about\n> +Please also contact <]AC_PACKAGE_BUGREPORT[> to see about\n\nAC_PACKAGE_BUGREPORT is an Autoconf macro, set up by AC_INIT. The call \nabove is in the context of\n\nAC_MSG_ERROR([[ ... text ... ]])\n\nThe brackets are quote characters that prevent accidentally expanding a \ntoken in the text as a macro. 
So in order to get AC_PACKAGE_BUGREPORT \nexpanded, we need to undo one level of quoting.\n\nSee also \n<https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.69/html_node/M4-Quotation.html#M4-Quotation> \nfor more information.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 12 Feb 2020 11:54:10 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Add PostgreSQL home page to --help output" }, { "msg_contents": "> On 12 Feb 2020, at 11:54, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2020-02-11 10:34, Daniel Gustafsson wrote:\n>> Pardon my weak autoconf-skills, what does the inverted brackets (]foo[ as\n>> opposed to [foo]) do in the below?\n>> -Please also contact<pgsql-bugs@lists.postgresql.org> to see about\n>> +Please also contact <]AC_PACKAGE_BUGREPORT[> to see about\n> \n> AC_PACKAGE_BUGREPORT is an Autoconf macro, set up by AC_INIT. The call above is in the context of\n> \n> AC_MSG_ERROR([[ ... text ... ]])\n> \n> The brackets are quote characters that prevent accidentally expanding a token in the text as a macro. So in order to get AC_PACKAGE_BUGREPORT expanded, we need to undo one level of quoting.\n> \n> See also <https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.69/html_node/M4-Quotation.html#M4-Quotation> for more information.\n\nAha, that's what I was looking for in the docs but didn't find. Thanks for\nsharing!\n\ncheers ./daniel\n\n", "msg_date": "Wed, 12 Feb 2020 14:20:15 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Add PostgreSQL home page to --help output" }, { "msg_contents": "Sounds like a fine idea. But personally I would prefer it without the <>\naround the it, just a url on a line by itself. 
I think it would be clearer,\nlook cleaner, and be easier to select to copy/paste elsewhere.\n\nOn Tue., Feb. 11, 2020, 02:42 Peter Eisentraut, <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> Example:\n>\n> initdb --help\n> ...\n> Report bugs to <pgsql-bugs@lists.postgresql.org>.\n> PostgreSQL home page: <https://www.postgresql.org/>\n>\n> I think this is useful. You see this nowadays in other packages as\n> well. See also\n> <https://www.gnu.org/prep/standards/standards.html#g_t_002d_002dhelp>\n> for a reference.\n>\n> Autoconf already has a way to register the package home page and\n> propagate it, so I used that. That also makes it easier to change it\n> (see http: -> https:) or have third parties substitute their own contact\n> information without destroying translations.\n>\n> While at it, I also did the same refactoring for the bug reporting\n> address (which was also recently changed, so this is a bit late, but who\n> knows what the future holds).\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n
", "msg_date": "Thu, 13 Feb 2020 08:24:03 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Add PostgreSQL home page to --help output" }, { "msg_contents": "On 2020-02-13 14:24, Greg Stark wrote:\n> Sounds like a fine idea. But personally I would prefer it without the <> \n> around the it, just a url on a line by itself. I think it would be \n> clearer, look cleaner, and be easier to select to copy/paste elsewhere.\n\nI'm on the fence about this one, but I like the delimiters because it \nwould also work consistently if we put a URL into running text where it \nmight be immediately adjacent to other characters. So I was actually \ngoing for easier to copy/paste here, but perhaps in other environments \nit's not easier?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 20 Feb 2020 10:15:41 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Add PostgreSQL home page to --help output" }, { "msg_contents": "> On 20 Feb 2020, at 10:15, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2020-02-13 14:24, Greg Stark wrote:\n>> Sounds like a fine idea. But personally I would prefer it without the <> around the it, just a url on a line by itself. 
I think it would be clearer, look cleaner, and be easier to select to copy/paste elsewhere.\n> \n> I'm on the fence about this one, but I like the delimiters because it would also work consistently if we put a URL into running text where it might be immediately adjacent to other characters. So I was actually going for easier to copy/paste here, but perhaps in other environments it's not easier?\n\nFor URLs completely on their own, not using <> makes sense. Copy pasting <url>\ninto the location bar of Safari makes it load the url, but Firefox and Chrome\nturn it into a search engine query (no idea about Windows browsers).\n\nFor URLs in running text it's not uncommon to have <> around the URL for the\nvery reason you mention. Looking at --help and manpages from random open\nsource tools there seems to be roughly a 50/50 split on using <> or not.\n\ncheers ./daniel\n\n", "msg_date": "Thu, 20 Feb 2020 10:53:10 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Add PostgreSQL home page to --help output" }, { "msg_contents": "> On 20 Feb 2020, at 10:53, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 20 Feb 2020, at 10:15, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>> \n>> On 2020-02-13 14:24, Greg Stark wrote:\n>>> Sounds like a fine idea. But personally I would prefer it without the <> around the it, just a url on a line by itself. I think it would be clearer, look cleaner, and be easier to select to copy/paste elsewhere.\n>> \n>> I'm on the fence about this one, but I like the delimiters because it would also work consistently if we put a URL into running text where it might be immediately adjacent to other characters. So I was actually going for easier to copy/paste here, but perhaps in other environments it's not easier?\n> \n> For URLs completely on their own, not using <> makes sense. 
Copy pasting <url>\n> into the location bar of Safari makes it load the url, but Firefox and Chrome\n> turn it into a search engine query (no idea about Windows browsers).\n> \n> For URLs in running text it's not uncommon to have <> around the URL for the\n> very reason you mention. Looking at --help and manpages from random open\n> source tools there seems to be roughly a 50/50 split on using <> or not.\n\nRFC3986 discuss this in <https://tools.ietf.org/html/rfc3986#appendix-C>, with\nthe content mostly carried over from RFC2396 appendix E.\n\ncheers ./daniel\n\n", "msg_date": "Thu, 20 Feb 2020 12:09:25 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Add PostgreSQL home page to --help output" }, { "msg_contents": "On 2020-02-20 12:09, Daniel Gustafsson wrote:\n>> On 20 Feb 2020, at 10:53, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>\n>>> On 20 Feb 2020, at 10:15, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>>>\n>>> On 2020-02-13 14:24, Greg Stark wrote:\n>>>> Sounds like a fine idea. But personally I would prefer it without the <> around the it, just a url on a line by itself. I think it would be clearer, look cleaner, and be easier to select to copy/paste elsewhere.\n>>>\n>>> I'm on the fence about this one, but I like the delimiters because it would also work consistently if we put a URL into running text where it might be immediately adjacent to other characters. So I was actually going for easier to copy/paste here, but perhaps in other environments it's not easier?\n>>\n>> For URLs completely on their own, not using <> makes sense. Copy pasting <url>\n>> into the location bar of Safari makes it load the url, but Firefox and Chrome\n>> turn it into a search engine query (no idea about Windows browsers).\n>>\n>> For URLs in running text it's not uncommon to have <> around the URL for the\n>> very reason you mention. 
Looking at --help and manpages from random open\n>> source tools there seems to be roughly a 50/50 split on using <> or not.\n> \n> RFC3986 discuss this in <https://tools.ietf.org/html/rfc3986#appendix-C>, with\n> the content mostly carried over from RFC2396 appendix E.\n\nI think we weren't going to get any more insights here, so I have \ncommitted it as is.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 28 Feb 2020 14:02:17 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Add PostgreSQL home page to --help output" }, { "msg_contents": "On Fri, Feb 28, 2020 at 02:02:17PM +0100, Peter Eisentraut wrote:\n> On 2020-02-20 12:09, Daniel Gustafsson wrote:\n> > > On 20 Feb 2020, at 10:53, Daniel Gustafsson <daniel@yesql.se> wrote:\n> > > \n> > > > On 20 Feb 2020, at 10:15, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> > > > \n> > > > On 2020-02-13 14:24, Greg Stark wrote:\n> > > > > Sounds like a fine idea. But personally I would prefer it without the <> around the it, just a url on a line by itself. I think it would be clearer, look cleaner, and be easier to select to copy/paste elsewhere.\n> > > > \n> > > > I'm on the fence about this one, but I like the delimiters because it would also work consistently if we put a URL into running text where it might be immediately adjacent to other characters. So I was actually going for easier to copy/paste here, but perhaps in other environments it's not easier?\n> > > \n> > > For URLs completely on their own, not using <> makes sense. Copy pasting <url>\n> > > into the location bar of Safari makes it load the url, but Firefox and Chrome\n> > > turn it into a search engine query (no idea about Windows browsers).\n> > > \n> > > For URLs in running text it's not uncommon to have <> around the URL for the\n> > > very reason you mention. 
Looking at --help and manpages from random open\n> > > source tools there seems to be roughly a 50/50 split on using <> or not.\n> > \n> > RFC3986 discuss this in <https://tools.ietf.org/html/rfc3986#appendix-C>, with\n> > the content mostly carried over from RFC2396 appendix E.\n> \n> I think we weren't going to get any more insights here, so I have committed\n> it as is.\n\nSome new feedback. I find this output confusing since there is a colon\nbefore the <>:\n\n\tReport bugs to <pgsql-bugs@lists.postgresql.org>.\n\tPostgreSQL home page: <https://www.postgresql.org/>\n\nDoes this look better (no colon)?\n\n\tReport bugs to <pgsql-bugs@lists.postgresql.org>.\n\tPostgreSQL home page <https://www.postgresql.org/>\n\nor this (colon, no <>)?\n\n\tReport bugs to <pgsql-bugs@lists.postgresql.org>.\n\tPostgreSQL home page: https://www.postgresql.org/\n\nor maybe this?\n\n\tReport bugs: pgsql-bugs@lists.postgresql.org\n\tPostgreSQL home page: https://www.postgresql.org/\n\nor this?\n\n\tReport bugs <pgsql-bugs@lists.postgresql.org>\n\tPostgreSQL home page <https://www.postgresql.org/>\n\nI actually have never seen URLs in <>, only email addresses. I think\nusing <> for URLs and emails is confusing because they usually have\ndifferent actions, unless we want to add mailto:\n\n\tReport bugs <mailto:pgsql-bugs@lists.postgresql.org>\n\tPostgreSQL home page <https://www.postgresql.org/>\n\nor\n\n\tReport bugs mailto:pgsql-bugs@lists.postgresql.org\n\tPostgreSQL home page https://www.postgresql.org/\n\nI kind of prefer the last one since they can both be pasted directly into\na browser.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 16 Mar 2020 17:55:26 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add PostgreSQL home page to --help output" }, { "msg_contents": "On Mon, Mar 16, 2020 at 05:55:26PM -0400, Bruce Momjian wrote:\n> \tReport bugs mailto:pgsql-bugs@lists.postgresql.org\n> \tPostgreSQL home page https://www.postgresql.org/\n> \n> I kind of prefer the last one since the can both be pasted directly into\n> a browser.\n\nActually, I prefer:\n\n\tReport bugs mailto:pgsql-bugs@lists.postgresql.org\n\tPostgreSQL website https://www.postgresql.org/\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 16 Mar 2020 18:00:30 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add PostgreSQL home page to --help output" }, { "msg_contents": "On 2020-Mar-16, Bruce Momjian wrote:\n\n> On Mon, Mar 16, 2020 at 05:55:26PM -0400, Bruce Momjian wrote:\n> > \tReport bugs mailto:pgsql-bugs@lists.postgresql.org\n> > \tPostgreSQL home page https://www.postgresql.org/\n> > \n> > I kind of prefer the last one since the can both be pasted directly into\n> > a browser.\n> \n> Actually, I prefer:\n> \n> \tReport bugs mailto:pgsql-bugs@lists.postgresql.org\n> \tPostgreSQL website https://www.postgresql.org/\n\nHmm, pasting mailto into the browser address bar doesn't work for me ...\nit just goes to the lists.postgresql.org website (Brave) or sits there\ndoing nothing (Firefox). 
I was excited there for a minute.\n\nIf we're talking personal preference, I like the current output.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 16 Mar 2020 21:10:25 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add PostgreSQL home page to --help output" }, { "msg_contents": "On Mon, Mar 16, 2020 at 09:10:25PM -0300, Alvaro Herrera wrote:\n> On 2020-Mar-16, Bruce Momjian wrote:\n>> Actually, I prefer:\n>> \n>> \tReport bugs mailto:pgsql-bugs@lists.postgresql.org\n>> \tPostgreSQL website https://www.postgresql.org/\n> \n> Hmm, pasting mailto into the browser address bar doesn't work for me ...\n> it just goes to the lists.postgresql.org website (Brave) or sits there\n> doing nothing (Firefox). I was excited there for a minute.\n\nPasting \"mailto:pgsql-bugs@lists.postgresql.org\" to Firefox 74.0 pops\nup for me a window asking to choose an application able to send an\nemail. 
For example, with mutt, this would begin generating an email\nsent to the address pasted.\n\n> If we're talking personal preference, I like the current output.\n\nNo strong opinion about one or the other.\n--\nMichael", "msg_date": "Tue, 17 Mar 2020 11:17:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add PostgreSQL home page to --help output" }, { "msg_contents": "On Mon, Mar 16, 2020 at 09:10:25PM -0300, Alvaro Herrera wrote:\n> On 2020-Mar-16, Bruce Momjian wrote:\n> \n> > On Mon, Mar 16, 2020 at 05:55:26PM -0400, Bruce Momjian wrote:\n> > > \tReport bugs mailto:pgsql-bugs@lists.postgresql.org\n> > > \tPostgreSQL home page https://www.postgresql.org/\n> > > \n> > > I kind of prefer the last one since the can both be pasted directly into\n> > > a browser.\n> > \n> > Actually, I prefer:\n> > \n> > \tReport bugs mailto:pgsql-bugs@lists.postgresql.org\n> > \tPostgreSQL website https://www.postgresql.org/\n> \n> Hmm, pasting mailto into the browser address bar doesn't work for me ...\n> it just goes to the lists.postgresql.org website (Brave) or sits there\n> doing nothing (Firefox). I was excited there for a minute.\n> \n> If we're talking personal preference, I like the current output.\n\nWell, in Firefox it knows to use Thunderbird to send email because under\nFirefox's Preferences/General/Applications, 'mailto' is set to \"Use\nThunderbird\", though it can be set to other applications. If no one\nlikes my changes, I guess we will just stick with what we have.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Thu, 19 Mar 2020 17:32:49 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Add PostgreSQL home page to --help output" }, { "msg_contents": "> On 19 Mar 2020, at 22:32, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> bOn Mon, Mar 16, 2020 at 09:10:25PM -0300, Alvaro Herrera wrote:\n>> On 2020-Mar-16, Bruce Momjian wrote:\n>> \n>>> On Mon, Mar 16, 2020 at 05:55:26PM -0400, Bruce Momjian wrote:\n>>>> \tReport bugs mailto:pgsql-bugs@lists.postgresql.org\n>>>> \tPostgreSQL home page https://www.postgresql.org/\n>>>> \n>>>> I kind of prefer the last one since the can both be pasted directly into\n>>>> a browser.\n>>> \n>>> Actually, I prefer:\n>>> \n>>> \tReport bugs mailto:pgsql-bugs@lists.postgresql.org\n>>> \tPostgreSQL website https://www.postgresql.org/\n>> \n>> Hmm, pasting mailto into the browser address bar doesn't work for me ...\n>> it just goes to the lists.postgresql.org website (Brave) or sits there\n>> doing nothing (Firefox). I was excited there for a minute.\n>> \n>> If we're talking personal preference, I like the current output.\n> \n> Well, in Firefox it knows to use Thunderbird to send email because under\n> Firefox's Preferences/General/Applications, 'mailto' is set to \"Use\n> Thunderbird\", though it can be set to other applications. If no one\n> likes my changes, I guess we will just stick with what we have.\n\nI don't think mailto: URLs is a battle we can win, pasting it into Safari for\nexample yields this error message:\n\n \"This website has been blocked from automatically composing an email.\"\n\nIt also assumes that users will paste the bugreport email into something that\nparses URLs and not straight into the \"To:\" field of their email client. 
I'm\nnot sure that assumption holds.\n\ncheers ./daniel\n\n\n", "msg_date": "Thu, 19 Mar 2020 23:09:56 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Add PostgreSQL home page to --help output" } ]
[ { "msg_contents": "Hi\nIn the Oracle world we use the product \"golden gate\" to execute transactions from a source database (Oracle, Mysql) to a PostgreSQL instance.\nThis allows 2 Oracle and PostgreSQL databases to be updated at the same time in real time.\nI would like to know if there is an equivalent open-source product.\n\nThanks in advance\n\nBest Regards\nDidier ROS\nEDF\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. 
From what I can see, this\n> routine is used now in the backend for pg_basebackup to rename\n> temporary history files or partial WAL segments.\n\ndurable_rename() calls fsync_fname(), so it would be covered by this \nchange. The other file access calls in there can be handled by normal \nerror handling, I think. Is there any specific scenario you have in mind?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 13 Feb 2020 10:02:31 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: client-side fsync() error handling" }, { "msg_contents": "On Thu, Feb 13, 2020 at 10:02:31AM +0100, Peter Eisentraut wrote:\n> On 2020-02-12 06:28, Michael Paquier wrote:\n>> Now, don't we need to care about durable_rename() and make the\n>> panic-like failure an optional choice? From what I can see, this\n>> routine is used now in the backend for pg_basebackup to rename\n>> temporary history files or partial WAL segments.\n> \n> durable_rename() calls fsync_fname(), so it would be covered by this change.\n> The other file access calls in there can be handled by normal error\n> handling, I think. Is there any specific scenario you have in mind?\n\nThe old file flush is handled by your patch, but not the new one if\nit exists, and it seems to me that we should handle failures\nconsistently to reason easier about it, actually as the top of the\nfunction says :)\n\nAnother point that we could consider is if fsync_fname() should have\nan option to not trigger an immediate exit when facing a failure. The\nbackend has that option thanks to fsync_fname_ext() with its elevel\nargument. Your choice to default to a failure is fine for most cases\nbecause that's what we want. 
However, I am questioning if this change\nwould be surprising for some client applications or not, and if we\nshould have the option to choose one behavior or the other.\n--\nMichael", "msg_date": "Thu, 13 Feb 2020 20:52:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: client-side fsync() error handling" }, { "msg_contents": "On 2020-02-13 12:52, Michael Paquier wrote:\n>> durable_rename() calls fsync_fname(), so it would be covered by this change.\n>> The other file access calls in there can be handled by normal error\n>> handling, I think. Is there any specific scenario you have in mind?\n> \n> The old file flush is handled by your patch, but not the new one if\n> it exists, and it seems to me that we should handle failures\n> consistently to reason easier about it, actually as the top of the\n> function says :)\n\nOK, added in new patch.\n\n> Another point that we could consider is if fsync_fname() should have\n> an option to not trigger an immediate exit when facing a failure. The\n> backend has that option thanks to fsync_fname_ext() with its elevel\n> argument. Your choice to default to a failure is fine for most cases\n> because that's what we want. However, I am questioning if this change\n> would be surprising for some client applications or not, and if we\n> should have the option to choose one behavior or the other.\n\nThe option in the backend is between panicking and retrying. The old \nbehavior was to always retry but we have learned that that usually \ndoesn't work.\n\nThe frontends do neither right now, or at least the error handling is \nvery inconsistent and inscrutable. 
It would be possible in theory to \nadd a retry option, but that would be a very different patch, and given \nwhat we have learned about fsync(), it probably wouldn't be widely useful.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 20 Feb 2020 10:10:11 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: client-side fsync() error handling" }, { "msg_contents": "On Thu, Feb 20, 2020 at 10:10:11AM +0100, Peter Eisentraut wrote:\n> OK, added in new patch.\n\nThanks, that looks good.\n\n> The frontends do neither right now, or at least the error handling is very\n> inconsistent and inscrutable. It would be possible in theory to add a retry\n> option, but that would be a very different patch, and given what we have\n> learned about fsync(), it probably wouldn't be widely useful.\n\nPerhaps. Let's have this discussion later if there are drawbacks\nabout changing things the way your patch does. If we don't do that,\nwe'll never know about it either and this patch makes things safer.\n--\nMichael", "msg_date": "Fri, 21 Feb 2020 13:18:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: client-side fsync() error handling" }, { "msg_contents": "On 2020-02-21 05:18, Michael Paquier wrote:\n> On Thu, Feb 20, 2020 at 10:10:11AM +0100, Peter Eisentraut wrote:\n>> OK, added in new patch.\n> \n> Thanks, that looks good.\n\ncommitted\n\n>> The frontends do neither right now, or at least the error handling is very\n>> inconsistent and inscrutable. It would be possible in theory to add a retry\n>> option, but that would be a very different patch, and given what we have\n>> learned about fsync(), it probably wouldn't be widely useful.\n> \n> Perhaps. Let's have this discussion later if there are drawbacks\n> about changing things the way your patch does. 
If we don't do that,\n> we'll never know about it either and this patch makes things safer.\n\nyup\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 24 Feb 2020 17:03:07 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: client-side fsync() error handling" }, { "msg_contents": "On Mon, Feb 24, 2020 at 05:03:07PM +0100, Peter Eisentraut wrote:\n> committed\n\nThanks!\n--\nMichael", "msg_date": "Tue, 25 Feb 2020 11:33:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: client-side fsync() error handling" } ]
[ { "msg_contents": "Hi\nIn the Oracle world we use the product \"golden gate\" to execute transactions from a source database (Oracle, Mysql) to a PostgreSQL instance.\nThis allows 2 Oracle and PostgreSQL databases to be updated at the same time in real time.\nI would like to know if there is an equivalent open-source product.\n\nThanks in advance\n\nBest Regards\nDidier ROS\nEDF\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. 
If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.", "msg_date": "Tue, 11 Feb 2020 10:23:30 +0000", "msg_from": "ROS Didier <didier.ros@edf.fr>", "msg_from_op": true, "msg_subject": "open-source equivalent of golden-gate" }, { "msg_contents": "вт, 11 февр. 2020 г. в 12:23, ROS Didier <didier.ros@edf.fr>:\n\n> In the Oracle world we use the product \"golden gate\" to execute\n> transactions from a source database (Oracle, Mysql) to a PostgreSQL\n> instance.\n>\n> This allows 2 Oracle and PostgreSQL databases to be updated at the same\n> time in real time.\n>\n> I would like to know if there is an equivalent open-source product.\n>\n\nThere is a SQL/MED standard exactly for this:\nhttps://wiki.postgresql.org/wiki/SQL/MED\n\nImplemented in PostgreSQL as Foreign Data Wrappers:\nhttps://wiki.postgresql.org/wiki/Fdw\nYou need to do the following:\n1. Add wrapper via\nhttps://www.postgresql.org/docs/current/sql-createextension.html\n2. Create remote source via\nhttps://www.postgresql.org/docs/current/sql-createserver.html\n3. Create foreign table via\nhttps://www.postgresql.org/docs/current/sql-createforeigntable.html\n\nNote, that PostgreSQL provides only infrastructure, wrappers for different\nremote systems are not supported by the PostgreSQL community,\nexcept for postgres_fdw and csv_fdw provided by the project.\n\n\n-- \nVictor Yegorov", "msg_date": "Tue, 11 Feb 2020 14:51:56 +0200", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: open-source equivalent of golden-gate" }, { "msg_contents": "On 02/11/20 07:51, Victor Yegorov wrote:\n> вт, 11 февр. 2020 г. 
в 12:23, ROS Didier <didier.ros@edf.fr>:\n> \n>> In the Oracle world we use the product \"golden gate\" to execute\n>> transactions from a source database (Oracle, Mysql) to a PostgreSQL\n>> instance.\n> \n> Note, that PostgreSQL provides only infrastructure, wrappers for different\n> remote systems are not supported by the PostgreSQL community,\n> except for postgres_fdw and csv_fdw provided by the project.\n\nI read the question as perhaps concerning the other direction, whether\nthere might be an open source foreign data wrapper installable in Oracle\nfor talking to PostgreSQL (which might, I suppose, also have a name like\n\"postgres_fdw\", which helps explain the number of times I've rewritten\nthis sentence trying to make it unambiguous).\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 11 Feb 2020 08:53:31 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: open-source equivalent of golden-gate" }, { "msg_contents": "ROS Didier schrieb am 11.02.2020 um 11:23:\n> In the Oracle world we use the product \"golden gate\" to execute\n> transactions from a source database (Oracle, Mysql) to a PostgreSQL\n> instance.\n>\n> This allows 2 Oracle and PostgreSQL databases to be updated at the\n> same time in real time.\n>\n> I would like to know if there is an equivalent open-source product.\n>\n> Thanks in advance\n>\n> Best Regards\n> Didier ROS\n\nThe closest solutions to golden gate are probably\n\n* https://debezium.io/\n* https://www.symmetricds.org/\n\nThomas\n\n\n", "msg_date": "Tue, 11 Feb 2020 15:23:05 +0100", "msg_from": "Thomas Kellerer <shammat@gmx.net>", "msg_from_op": false, "msg_subject": "Re: open-source equivalent of golden-gate" }, { "msg_contents": "From: Chapman Flack <chap@anastigmatix.net>\r\n> I read the question as perhaps concerning the other direction, whether\r\n> there might be an open source foreign data wrapper installable in Oracle\r\n> for talking to PostgreSQL (which might, I suppose, also 
have a name like\r\n> \"postgres_fdw\", which helps explain the number of times I've rewritten\r\n> this sentence trying to make it unambiguous).\r\n\r\nOracle Database Gateway for ODBC can be used:\r\n\r\n\r\nOracle Database Gateway for PostgreSQL - ORACLE-HELP\r\nhttp://oracle-help.com/oracle-database/oracle-database-gateway-postgresql/\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Tue, 11 Feb 2020 23:56:31 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: open-source equivalent of golden-gate" } ]
[ { "msg_contents": "This patch allow to use custom postgres launcher for tests (tap&regress)\nby setting environment variable PGLAUNCHER.\n\nOther known methods (like: https://wiki.postgresql.org/wiki/Valgrind)\nrequires\nto perform installation, build system modifications, executable replacement\netc...\n\nAnd proposed way is simpler and more flexible.\n\n\n** Use-case: run checks under Valgrind\n\n\n- prepare launcher\n\n echo 'exec valgrind postgres \"$@\"' > /tmp/pgvalgrind\n chmod +x /tmp/pgvalgrind\n\n- execute regress tests under Valgrind\n\n PGLAUNCHER=/tmp/pgvalgrind TESTS=gin make check-tests\n\n- execute concrete tap-test under Valgrind\n\n PGLAUNCHER=/tmp/pgvalgrind PROVE_TESTS=t/001_stream_rep.pl make \\\n check -C src/test/recovery\n\n\n** Use-case: execute tests with different postgres versions\n\n\n- prepare multi-launcher\n\n cat <<EOF > /tmp/launcher\n cp -f `pwd`/src/backend/postgres.v* \\`pg_config --bindir\\`\n exec postgres.v\\$V \"\\$@\"\n EOF\n chmod +x /tmp/launcher\n\n- make some versions of postgres binary\n\n ./configure \"CFLAGS=...\" && make\n mv src/backend/postgres src/backend/postgres.v1\n\n ./configure \"CFLAGS=...\" && make\n mv src/backend/postgres src/backend/postgres.v2\n\n- run checks with different postgres binaries\n\n PGLAUNCHER=/tmp/launcher V=1 make check -C contrib/bloom\n PGLAUNCHER=/tmp/launcher V=2 make check -C contrib/bloom", "msg_date": "Tue, 11 Feb 2020 13:53:32 +0300", "msg_from": "Ivan Taranov <i.taranov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "custom postgres launcher for tests" }, { "msg_contents": "On Tue, 11 Feb 2020 at 19:33, Ivan Taranov <i.taranov@postgrespro.ru> wrote:\n>\n> This patch allow to use custom postgres launcher for tests (tap&regress)\n> by setting environment variable PGLAUNCHER.\n\nI thought I saw a related patch to this that proposed to add a pg_ctl\nargument. Was that you too? 
I can't find it at the moment.\n\nIn any case I *very definitely* want the ability to wrap the\n'postgres' executable we launch via some wrapper command. I currently\ndo this with a horrid shellscript hack that uses next-on-path lookups\nand a wrapper 'postgres' executable. But because of how\nfind_other_exec works this is rather far from optimal so I'm a fan of\na cleaner, built-in alternative.\n\nI was initially going to object to using an env-var. But if someone\ncontrols your environment they can already LD_PRELOAD you or PATH-hack\ninto doing whatever they want, so it's not a security concern. And\nyou're right that getting a pg_ctl command line or the like into all\nthe places where we'd want it deep inside TAP tests etc is just too\nmuch hassle otherwise.\n\nI haven't reviewed the code yet, but for the idea a strong +1000 or so.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n", "msg_date": "Fri, 21 Feb 2020 09:49:16 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: custom postgres launcher for tests" }, { "msg_contents": "On Fri, Feb 21, 2020 at 4:49 AM Craig Ringer <craig@2ndquadrant.com> wrote:\n\n> I thought I saw a related patch to this that proposed to add a pg_ctl\n> argument. Was that you too? 
I can't find it at the moment.\n\nThis very simple two-line patch for src/test/perl/PostgresNode.pm code,\nit set `pg_ctl -p <path>` argument, and one-line patch for\nsrc/test/regress/pg_regress.c it spawn postgres-launcher directly.\n\nThis routine usable only for tap tests with used\nPostgresNode::get_new_node/start/restart, and for regress tests.\n\nPerhaps the name TEST_PGLAUNCHER is more correct for this env-var.\n\n>into doing whatever they want, so it's not a security concern\n\n>I currently do this with a horrid shellscript hack that uses next-on-path\n>lookups and a wrapper 'postgres' executable\n\nIts not security problem, because this kit only for developer, commonly - for\niteratively build and run concrete tests.\n\nFor more complexy replacing need patch for pg_ctl, or postgres wrapper, or\nreplacing postgres bin and other ways...\n\nThanks for the response!\n\n\n", "msg_date": "Fri, 21 Feb 2020 11:04:09 +0300", "msg_from": "\"Ivan N. Taranov\" <i.taranov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: custom postgres launcher for tests" }, { "msg_contents": "On Fri, 21 Feb 2020 at 17:05, Ivan N. Taranov <i.taranov@postgrespro.ru> wrote:\n>\n> On Fri, Feb 21, 2020 at 4:49 AM Craig Ringer <craig@2ndquadrant.com> wrote:\n>\n> > I thought I saw a related patch to this that proposed to add a pg_ctl\n> > argument. Was that you too? I can't find it at the moment.\n\nI've had it on my TODO forever but I don't think it was me who posted\na patch. I honestly can't even remember. 
Too much going on at once.\n\n> This routine usable only for tap tests with used\n> PostgresNode::get_new_node/start/restart, and for regress tests.\n>\n> Perhaps the name TEST_PGLAUNCHER is more correct for this env-var.\n>\n> >into doing whatever they want, so it's not a security concern\n>\n> >I currently do this with a horrid shellscript hack that uses next-on-path\n> >lookups and a wrapper 'postgres' executable\n>\n> Its not security problem, because this kit only for developer, commonly - for\n> iteratively build and run concrete tests.\n>\n> For more complexy replacing need patch for pg_ctl, or postgres wrapper, or\n> replacing postgres bin and other ways...\n\nIf we support a wrapper we should support it for all pg_ctl usage IMO.\nEven if it's intended just for testing. Because the scope of \"testing\"\nextends very far outside \"in-core TAP and pg_regress tests\". Testing\nneeds include extensions running their own tests under valgrind or\nsimilar tools, tests simulating clustered environments using ansible\nor other automation tools, and more.\n\nSo I'd rather stick with your original PGLAUNCHER proposal. I think\nall the tools we care about already invoke postgres via pg_ctl, and\nany that don't should probably be taught to.\n\n(I wish pg_ctl had a --no-daemon, --foreground or --no-detach mode\nthough, to help with this.)\n\nFor the sake of others with similar needs I attach my current\nwrapper/launcher script. To use it you have to:\n\nmkdir pglauncher\ncd pglauncher\ncp $the_script postgres\nchmod a+x postgres\nln -s postgres initdb\nln -s postgres pg_ctl\ncd ..\n\nThen ensure the bin directory for your target postgres is first on the\nPATH and run with something like:\n\nPOSTGRES_SRC=/path/to/your/srcdir\n\nPATH=${PWD}/pglauncher/:$PATH \\\n VG_LOG=\"$(mktemp -p . 
-d valgrind-log-XXXXXX)/%n-%p.log\"\n VG_ARGS=\"--verbose --leak-check=full\n--show-leak-kinds=definite,possible --track-origins=yes\n--read-var-info=yes --show-error-list=yes --num-callers=30\n--malloc-fill=8f --free-fill=9f\n--suppressions=$POSTGRES_SRC/src/tools/valgrind.supp\n--gen-suppressions=all\" \\\n pg_ctl blah blah\n\n(BTW, we should install \"valgrind.supp\" when we install the optional\nbits and pieces like the src/test/perl modules, pg_regress, and so on,\nso it's available for extensions that run their own valgrind scans.)\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Wed, 11 Mar 2020 14:41:30 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: custom postgres launcher for tests" }, { "msg_contents": "> If we support a wrapper we should support it for all pg_ctl usage IMO.\n\nAs i understand it - you propose to patch pg_ctl.c & regress.c instead of\nPostgresNode.pm & regress.c?\n\nThis is a deeper invasion to pg_ctl. There will be a conflict between the\nenvironment variable and the pg_ctl -p parameter, and possibly induced\nbugs.\n\nI suggest microscopic changes for iterative recompilation/test/debug\nwithout installation.\n\n> For the sake of others with similar needs I attach my current\n> wrapper/launcher script. To use it you have to:\n\nIMHO, the method what you proposed (wrapper/launcher) - is more suitable for\ncomplex testing.\n\nI agree that the my proposed way is incomplete.\n\n\n", "msg_date": "Tue, 17 Mar 2020 10:38:29 +0300", "msg_from": "\"Ivan N. Taranov\" <i.taranov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: custom postgres launcher for tests" } ]
[ { "msg_contents": "Hi Postgres Developers,\n\nWe are currently integrating LSM-tree based storage engine RocksDB into\nPostgres. I am wondering is there any function that serialize data types in\nmemory-comparable format, similar to MySQL and MariaDB. With that kind of\nfunction, we can directly store the serialized format in the storage engine\nand compare them in the storage engine level instead of deserializing data\nand comparing in the upper level. I know PostgreSQL is towards supporting\npluggble storage engine, so I think this feature would be particular useful.\n\nBest,\nShichao\n\nHi Postgres Developers,We are currently integrating LSM-tree based storage engine RocksDB into Postgres. I am wondering is there any function that serialize data types in memory-comparable format, similar to MySQL and MariaDB. With that kind of function, we can directly store the serialized format in the storage engine and compare them in the storage engine level instead of deserializing data and comparing in the upper level. I know PostgreSQL is towards supporting pluggble storage engine, so I think this feature would be particular useful.Best,Shichao", "msg_date": "Tue, 11 Feb 2020 14:52:52 -0500", "msg_from": "Shichao Jin <jsc0218@gmail.com>", "msg_from_op": true, "msg_subject": "Memory-comparable Serialization of Data Types" }, { "msg_contents": "On Tue, Feb 11, 2020 at 11:53 AM Shichao Jin <jsc0218@gmail.com> wrote:\n> We are currently integrating LSM-tree based storage engine RocksDB into Postgres. I am wondering is there any function that serialize data types in memory-comparable format, similar to MySQL and MariaDB. 
With that kind of function, we can directly store the serialized format in the storage engine and compare them in the storage engine level instead of deserializing data and comparing in the upper level.\n\nDo you mean a format that can perform Index comparisons using a\nmemcmp() rather than per-datatype comparison code?\n\n\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 11 Feb 2020 12:00:51 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Memory-comparable Serialization of Data Types" }, { "msg_contents": "Yes, this is exactly what I mean.\n\nOn Tue, 11 Feb 2020 at 15:01, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Tue, Feb 11, 2020 at 11:53 AM Shichao Jin <jsc0218@gmail.com> wrote:\n> > We are currently integrating LSM-tree based storage engine RocksDB into\n> Postgres. I am wondering is there any function that serialize data types in\n> memory-comparable format, similar to MySQL and MariaDB. With that kind of\n> function, we can directly store the serialized format in the storage engine\n> and compare them in the storage engine level instead of deserializing data\n> and comparing in the upper level.\n>\n> Do you mean a format that can perform Index comparisons using a\n> memcmp() rather than per-datatype comparison code?\n>\n>\n>\n> --\n> Peter Geoghegan\n>\n", "msg_date": "Tue, 11 Feb 2020 15:19:07 -0500", "msg_from": "Shichao Jin <jsc0218@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory-comparable Serialization of Data Types" }, { "msg_contents": "On Tue, Feb 11, 2020 at 12:19 PM Shichao Jin <jsc0218@gmail.com> wrote:\n> Yes, this is exactly what I mean.\n\nPostgreSQL doesn't have this capability. It might make sense to have\nit for some specific data structures, such as tuples on internal\nB-Tree pages -- these merely guide index scans, so there some\ninformation loss may be acceptable compared to the native/base\nrepresentation. However, that would only be faster because memcmp() is\ngenerally faster than the underlying datatype's native comparator. Not\nbecause comparisons have to take place in \"the upper levels\". There is\nsome indirection/overhead involved in using SQL-callable operators,\nbut not that much.\n\nNote that such a representation has to lose information in at least\nsome cases. For example, case-insensitive collations would have to\nlose information about the original case used (or store the original\nalongside the conditioned binary string). Note also that a \"one pass\"\nrepresentation that we can just memcmp() will have to be significantly\nlarger in some cases, especially when collatable text is used. 
A\nstrxfrm() blob is typically about 3.3x larger than the original string\nIIRC.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 11 Feb 2020 13:14:59 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Memory-comparable Serialization of Data Types" }, { "msg_contents": "On 2020-Feb-11, Peter Geoghegan wrote:\n\n> On Tue, Feb 11, 2020 at 12:19 PM Shichao Jin <jsc0218@gmail.com> wrote:\n> > Yes, this is exactly what I mean.\n> \n> PostgreSQL doesn't have this capability. It might make sense to have\n> it for some specific data structures,\n\nI think adding that would be too much of a burden, both for the project\nitself as for third-party type definitions; I think we'd rather rely on\ncalling the BTORDER_PROC btree support function for the type.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 11 Feb 2020 18:40:38 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Memory-comparable Serialization of Data Types" }, { "msg_contents": "On Tue, Feb 11, 2020 at 1:40 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I think adding that would be too much of a burden, both for the project\n> itself as for third-party type definitions; I think we'd rather rely on\n> calling the BTORDER_PROC btree support function for the type.\n\nAn operator class would still need to provide a BTORDER_PROC. What I\ndescribe would be an optional capability. This is something that I\nhave referred to as key normalization in the past:\n\nhttps://wiki.postgresql.org/wiki/Key_normalization\n\nI think that it would only make sense as an enabler of multiple\noptimizations -- not just the memcmp()/strcmp() thing. 
A common\nstrcmp()'able binary string format can be used in many different ways.\nNote that this has nothing to do with changing the representation used\nby the vast majority of all tuples -- just the pivot tuples, which are\nmostly located in internal pages. They only make up less than 1% of\nall pages in almost all cases.\n\nI intend to prototype this technique within the next year. It's\npossible that it isn't worth the trouble, but there is only one way to\nfind out. I might just work on the \"abbreviated keys in internal\npages\" thing, for example. Though you really need some kind of prefix\ncompression to make that effective.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 11 Feb 2020 14:16:31 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Memory-comparable Serialization of Data Types" }, { "msg_contents": "Thank you for both your feedback. Yes, as indicated by Peter, we indeed use\nthat technique in comparison in index, and now we will try passing\ncomparator to the storage engine according to Alvaro's suggestion.\n\nBest,\nShichao\n\n\n\n\nOn Tue, 11 Feb 2020 at 17:16, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Tue, Feb 11, 2020 at 1:40 PM Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> > I think adding that would be too much of a burden, both for the project\n> > itself as for third-party type definitions; I think we'd rather rely on\n> > calling the BTORDER_PROC btree support function for the type.\n>\n> An operator class would still need to provide a BTORDER_PROC. What I\n> describe would be an optional capability. This is something that I\n> have referred to as key normalization in the past:\n>\n> https://wiki.postgresql.org/wiki/Key_normalization\n>\n> I think that it would only make sense as an enabler of multiple\n> optimizations -- not just the memcmp()/strcmp() thing. 
A common\n> strcmp()'able binary string format can be used in many different ways.\n> Note that this has nothing to do with changing the representation used\n> by the vast majority of all tuples -- just the pivot tuples, which are\n> mostly located in internal pages. They only make up less than 1% of\n> all pages in almost all cases.\n>\n> I intend to prototype this technique within the next year. It's\n> possible that it isn't worth the trouble, but there is only one way to\n> find out. I might just work on the \"abbreviated keys in internal\n> pages\" thing, for example. Though you really need some kind of prefix\n> compression to make that effective.\n>\n> --\n> Peter Geoghegan\n>\n\n\n-- \nShichao Jin\nPhD Student at University of Waterloo, Canada\ne-mail: jsc0218@gmail.com\nhomepage: http://sites.google.com/site/csshichaojin/", "msg_date": "Wed, 12 Feb 2020 10:41:26 -0500", "msg_from": "Shichao Jin <jsc0218@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory-comparable Serialization of Data Types" } ]
[ { "msg_contents": "Hi,\n\nI noticed this when tightening up some races for [1] I noticed that the\nway speculative locks are displayed in pg_locks is completely bogus. As\npg_locks has no branch specific to speculative locks, the object etc\npath is used:\n\t\t\tcase LOCKTAG_OBJECT:\n\t\t\tcase LOCKTAG_USERLOCK:\n\t\t\tcase LOCKTAG_ADVISORY:\n\t\t\tdefault:\t\t\t/* treat unknown locktags like OBJECT */\n\t\t\t\tvalues[1] = ObjectIdGetDatum(instance->locktag.locktag_field1);\n\t\t\t\tvalues[7] = ObjectIdGetDatum(instance->locktag.locktag_field2);\n\t\t\t\tvalues[8] = ObjectIdGetDatum(instance->locktag.locktag_field3);\n\t\t\t\tvalues[9] = Int16GetDatum(instance->locktag.locktag_field4);\n\t\t\t\tnulls[2] = true;\n\t\t\t\tnulls[3] = true;\n\t\t\t\tnulls[4] = true;\n\t\t\t\tnulls[5] = true;\n\t\t\t\tnulls[6] = true;\n\t\t\t\tbreak;\n\nbut speculative locks are defined like:\n\n/*\n * ID info for a speculative insert is TRANSACTION info +\n * its speculative insert counter.\n */\n#define SET_LOCKTAG_SPECULATIVE_INSERTION(locktag,xid,token) \\\n\t((locktag).locktag_field1 = (xid), \\\n\t (locktag).locktag_field2 = (token),\t\t\\\n\t (locktag).locktag_field3 = 0, \\\n\t (locktag).locktag_field4 = 0, \\\n\t (locktag).locktag_type = LOCKTAG_SPECULATIVE_TOKEN, \\\n\t (locktag).locktag_lockmethodid = DEFAULT_LOCKMETHOD)\n\nwhich means that currently a speculative lock's xid is displayed as the\ndatabase, the token as the classid, and that objid and objsubid are 0\ninstead of NULL.\n\nDoesn't seem great.\n\nIt's trivial to put the xid in the correct place, but it's less obvious\nwhat to do with the token? For master we should probably add a column,\nbut what about the back branches? Ignore it? 
Put it in classid or such?\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/CAAKRu_ZRmxy_OEryfY3G8Zp01ouhgw59_-_Cm8n7LzRH5BAvng%40mail.gmail.com\n\n\n", "msg_date": "Tue, 11 Feb 2020 12:03:05 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "pg_locks display of speculative locks is bogus" }, { "msg_contents": "On Tue, Feb 11, 2020 at 12:03 PM Andres Freund <andres@anarazel.de> wrote:\n> Doesn't seem great.\n>\n> It's trivial to put the xid in the correct place, but it's less obvious\n> what to do with the token? For master we should probably add a column,\n> but what about the back branches? Ignore it? Put it in classid or such?\n\nMy vote goes to doing nothing about the token on the back branches.\nJust prevent bogus pg_locks output.\n\nNobody cares about the specifics of the token value -- though perhaps\nyou foresee a need to have it for testing purposes. I suppose that\nadding a column to pg_locks on the master branch is the easy way of\nresolving the situation, even if we don't really expect anyone to use\nit.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 11 Feb 2020 12:24:50 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg_locks display of speculative locks is bogus" }, { "msg_contents": "Hi,\n\nOn 2020-02-11 12:24:50 -0800, Peter Geoghegan wrote:\n> On Tue, Feb 11, 2020 at 12:03 PM Andres Freund <andres@anarazel.de> wrote:\n> > Doesn't seem great.\n> >\n> > It's trivial to put the xid in the correct place, but it's less obvious\n> > what to do with the token? For master we should probably add a column,\n> > but what about the back branches? Ignore it? Put it in classid or such?\n> \n> My vote goes to doing nothing about the token on the back branches.\n> Just prevent bogus pg_locks output.\n> \n> Nobody cares about the specifics of the token value -- though perhaps\n> you foresee a need to have it for testing purposes. 
I suppose that\n> adding a column to pg_locks on the master branch is the easy way of\n> resolving the situation, even if we don't really expect anyone to use\n> it.\n\nYou can't really analyze what is waiting for what without seeing it -\nthe prime purpose of pg_locks. So I don't agree with the sentiment that\nnobody cares about the token.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 11 Feb 2020 12:46:38 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pg_locks display of speculative locks is bogus" } ]
[ { "msg_contents": "There is a current discussion off-list about what should happen when a\nCOMMIT is issued for a transaction that cannot be committed for whatever\nreason. PostgreSQL returns ROLLBACK as command tag but otherwise succeeds.\n\nHere is an excerpt of Section 17.7 <commit statement> that I feel is\nrelevant:\n\n<>\n6) Case:\n\na) If any enforced constraint is not satisfied, then any changes to\nSQL-data or schemas that were made by the current SQL-transaction are\ncanceled and an exception condition is raised: transaction rollback —\nintegrity constraint violation.\n\nb) If any other error preventing commitment of the SQL-transaction has\noccurred, then any changes to SQL-data or schemas that were made by the\ncurrent SQL-transaction are canceled and an exception condition is\nraised: transaction rollback with an implementation-defined subclass value.\n\nc) Otherwise, any changes to SQL-data or schemas that were made by the\ncurrent SQL-transaction are eligible to be perceived by all concurrent\nand subsequent SQL-transactions.\n</>\n\nIt seems like this:\n\npostgres=# \\set VERBOSITY verbose\npostgres=# begin;\nBEGIN\npostgres=*# error;\nERROR: 42601: syntax error at or near \"error\"\nLINE 1: error;\n ^\nLOCATION: scanner_yyerror, scan.l:1150\npostgres=!# commit;\nROLLBACK\n\nshould actually produce something like this:\n\npostgres=!# commit;\nERROR: 40P00: transaction cannot be committed\nDETAIL: First error was \"42601: syntax error at or near \"error\"\"\n\nIs this reading correct?\nIf so, is this something we should fix?\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 11 Feb 2020 22:44:50 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Error on failed COMMIT" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> There is a current discussion off-list about what should happen when a\n> COMMIT is issued for a transaction that cannot be committed for whatever\n> reason. 
PostgreSQL returns ROLLBACK as command tag but otherwise succeeds.\n\n> It seems like [ trying to commit a failed transaction ]\n> should actually produce something like this:\n\n> postgres=!# commit;\n> ERROR: 40P00: transaction cannot be committed\n> DETAIL: First error was \"42601: syntax error at or near \"error\"\"\n\nSo I assume you're imagining that that would leave us still in\ntransaction-aborted state, and the session is basically dead in\nthe water until the user thinks to issue ROLLBACK instead?\n\n> Is this reading correct?\n\nProbably it is, according to the letter of the SQL spec, but I'm\nafraid that changing this behavior now would provoke lots of hate\nand few compliments. An application that's doing the spec-compliant\nthing and issuing ROLLBACK isn't going to be affected, but apps that\nare relying on the existing behavior are going to be badly broken.\n\nA related problem is what happens if you're in a perfectly-fine\ntransaction and the commit itself fails, e.g.,\n\nregression=# create table tt (f1 int primary key deferrable initially deferred);\nCREATE TABLE\nregression=# begin;\nBEGIN\nregression=# insert into tt values (1);\nINSERT 0 1\nregression=# insert into tt values (1);\nINSERT 0 1\nregression=# commit;\nERROR: duplicate key value violates unique constraint \"tt_pkey\"\nDETAIL: Key (f1)=(1) already exists.\n\nAt this point PG considers that you're out of the transaction:\n\nregression=# rollback;\nWARNING: there is no transaction in progress\nROLLBACK\n\nbut I bet the spec doesn't. So if we change that, again we break\napplications that work today. Meanwhile, an app that is doing it\nthe spec-compliant way will issue a ROLLBACK that we consider\nuseless, so currently that draws an ignorable WARNING and all is\nwell. So here also, the prospects for making more people happy\nthan we make unhappy seem pretty grim. 
(Maybe there's a case\nfor downgrading the WARNING to NOTICE, though?)\n\n(Don't even *think* of suggesting that having a GUC to change\nthis behavior would be appropriate. The long-ago fiasco around\nautocommit showed us the hazards of letting GUCs affect such\nfundamental behavior.)\n\nSpeaking of autocommit, I wonder how that would interact with\nthis...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 Feb 2020 17:35:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 11/02/2020 23:35, Tom Lane wrote:\n> Vik Fearing <vik@postgresfriends.org> writes:\n>> There is a current discussion off-list about what should happen when a\n>> COMMIT is issued for a transaction that cannot be committed for whatever\n>> reason. PostgreSQL returns ROLLBACK as command tag but otherwise succeeds.\n> \n>> It seems like [ trying to commit a failed transaction ]\n>> should actually produce something like this:\n> \n>> postgres=!# commit;\n>> ERROR: 40P00: transaction cannot be committed\n>> DETAIL: First error was \"42601: syntax error at or near \"error\"\"\n> \n> So I assume you're imagining that that would leave us still in\n> transaction-aborted state, and the session is basically dead in\n> the water until the user thinks to issue ROLLBACK instead?\n\nActually, I was imagining that it would end the transaction as it does\ntoday, just with an error code.\n\nThis is backed up by General Rule 9 which says \"The current\nSQL-transaction is terminated.\"\n\n>> Is this reading correct?\n> \n> Probably it is, according to the letter of the SQL spec, but I'm\n> afraid that changing this behavior now would provoke lots of hate\n> and few compliments. An application that's doing the spec-compliant\n> thing and issuing ROLLBACK isn't going to be affected, but apps that\n> are relying on the existing behavior are going to be badly broken.\n\nI figured that was likely. 
I'm hoping to at least get a documentation\npatch out of this thread, though.\n\n> A related problem is what happens if you're in a perfectly-fine\n> transaction and the commit itself fails, e.g.,\n> \n> regression=# create table tt (f1 int primary key deferrable initially deferred);\n> CREATE TABLE\n> regression=# begin;\n> BEGIN\n> regression=# insert into tt values (1);\n> INSERT 0 1\n> regression=# insert into tt values (1);\n> INSERT 0 1\n> regression=# commit;\n> ERROR:  duplicate key value violates unique constraint \"tt_pkey\"\n> DETAIL:  Key (f1)=(1) already exists.\n> \n> At this point PG considers that you're out of the transaction:\n> \n> regression=# rollback;\n> WARNING:  there is no transaction in progress\n> ROLLBACK\n> \n> but I bet the spec doesn't. So if we change that, again we break\n> applications that work today.\n\nI would argue that this example is entirely compliant and consistent\nwith my original question (except that it gives a class 23 instead of a\nclass 40).\n\n> Meanwhile, an app that is doing it\n> the spec-compliant way will issue a ROLLBACK that we consider\n> useless, so currently that draws an ignorable WARNING and all is\n> well. So here also, the prospects for making more people happy\n> than we make unhappy seem pretty grim.\n\nI'm not entirely sure what should happen with a free-range ROLLBACK. (I\n*think* it says it should error with \"2D000 invalid transaction\ntermination\" but it's a little confusing to me.)\n\n> (Maybe there's a case for downgrading the WARNING to NOTICE, though?)\n\nMaybe. But I think its match (a double START TRANSACTION) should remain\na warning if we do change this.\n\n> (Don't even *think* of suggesting that having a GUC to change\n> this behavior would be appropriate. 
The long-ago fiasco around\n> autocommit showed us the hazards of letting GUCs affect such\n> fundamental behavior.)\n\nThat thought never crossed my mind.\n\n> Speaking of autocommit, I wonder how that would interact with\n> this...\n\nI don't see how it would be any different.\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 12 Feb 2020 00:19:24 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 11/02/2020 23:35, Tom Lane wrote:\n>> So I assume you're imagining that that would leave us still in\n>> transaction-aborted state, and the session is basically dead in\n>> the water until the user thinks to issue ROLLBACK instead?\n\n> Actually, I was imagining that it would end the transaction as it does\n> today, just with an error code.\n> This is backed up by General Rule 9 which says \"The current\n> SQL-transaction is terminated.\"\n\nHm ... that would be sensible, but I'm not entirely convinced. There\nare several preceding rules that say that an exception condition is\nraised, and normally you can stop reading at that point; nothing else\nis going to happen. If COMMIT acts specially in this respect, they\nought to say so.\n\nIn any case, while this interpretation might change the calculus a bit,\nI think we still end up concluding that altering this behavior has more\ndownside than upside.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 Feb 2020 18:27:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Tue, 11 Feb 2020 at 17:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Vik Fearing <vik@postgresfriends.org> writes:\n> > There is a current discussion off-list about what should happen when a\n> > COMMIT is issued for a transaction that cannot be committed for whatever\n> > reason. 
PostgreSQL returns ROLLBACK as command tag but otherwise\n> succeeds.\n>\n> > It seems like [ trying to commit a failed transaction ]\n> > should actually produce something like this:\n>\n> > postgres=!# commit;\n> > ERROR: 40P00: transaction cannot be committed\n> > DETAIL: First error was \"42601: syntax error at or near \"error\"\"\n>\n> So I assume you're imagining that that would leave us still in\n> transaction-aborted state, and the session is basically dead in\n> the water until the user thinks to issue ROLLBACK instead?\n>\n> > Is this reading correct?\n>\n> Probably it is, according to the letter of the SQL spec, but I'm\n> afraid that changing this behavior now would provoke lots of hate\n> and few compliments. An application that's doing the spec-compliant\n> thing and issuing ROLLBACK isn't going to be affected, but apps that\n> are relying on the existing behavior are going to be badly broken.\n>\n> A related problem is what happens if you're in a perfectly-fine\n> transaction and the commit itself fails, e.g.,\n>\n> regression=# create table tt (f1 int primary key deferrable initially\n> deferred);\n> CREATE TABLE\n> regression=# begin;\n> BEGIN\n> regression=# insert into tt values (1);\n> INSERT 0 1\n> regression=# insert into tt values (1);\n> INSERT 0 1\n> regression=# commit;\n> ERROR: duplicate key value violates unique constraint \"tt_pkey\"\n> DETAIL: Key (f1)=(1) already exists.\n>\n> At this point PG considers that you're out of the transaction:\n>\n> regression=# rollback;\n> WARNING: there is no transaction in progress\n> ROLLBACK\n>\n\ninteresting as if you do a commit after violating a not null it simply does\na rollback\nwith no warning whatsoever\n\nbegin;\nBEGIN\ntest=# insert into hasnull(i) values (null);\nERROR: null value in column \"i\" violates not-null constraint\nDETAIL: Failing row contains (null).\ntest=# commit;\nROLLBACK\n\n\n>\n> but I bet the spec doesn't. 
So if we change that, again we break\napplications that work today.  Meanwhile, an app that is doing it\n> the spec-compliant way will issue a ROLLBACK that we consider\n> useless, so currently that draws an ignorable WARNING and all is\n> well.  So here also, the prospects for making more people happy\n> than we make unhappy seem pretty grim.  (Maybe there's a case\n> for downgrading the WARNING to NOTICE, though?)\n>\n> Actually the bug reporter was looking for an upgrade from a warning to an\nERROR\n\nI realize we are unlikely to change the behaviour however it would be\nuseful if we\ndid the same thing for all cases, and document this behaviour. We actually\nhave places where\nwe document where we don't adhere to the spec.\n\nDave\n\n", "msg_date": "Tue, 11 Feb 2020 19:23:18 -0500", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Am 12.02.2020 um 00:27 schrieb Tom Lane:\n> Vik Fearing <vik@postgresfriends.org> writes:\n>> Actually, I was imagining that it would end the transaction as it does\n>> today, just with an error code.\n>> This is backed up by General Rule 9 which says \"The current\n>> SQL-transaction is terminated.\"\n> Hm ... that would be sensible, but I'm not entirely convinced.  There\n> are several preceding rules that say that an exception condition is\n> raised, and normally you can stop reading at that point; nothing else\n> is going to happen.  If COMMIT acts specially in this respect, they\n> ought to say so.\n>\n> In any case, while this interpretation might change the calculus a bit,\n> I think we still end up concluding that altering this behavior has more\n> downside than upside.\n\nLet me illustrate this issue from an application (framework) developer's \nperspective:\n\nWhen an application interacts with a database, it must be clearly \npossible to determine, whether a commit actually succeeded (and made all \nchanges persistent), or the commit failed for any reason (and all of the \nchanges have been rolled back). If a commit succeeds, an application \nmust be allowed to assume that all changes it made in the preceeding \ntransaction are made persistent and it is valid to update its internal \nstate (e.g. caches) to the values updated in the transaction. This must \nbe possible, even if the transaction is constructed collaboratively by \nmultipe independent layers of the application (e.g. a framework and an \napplication layer). 
Unfortunately, this seems not to be possible with \nthe current implementation - at least not with default settings:\n\nAssume the application is written in Java and sees Postgres through the \nJDBC driver:\n\ncomposeTransaction() {\n    Connection con = getConnection(); // implicitly \"begin\"\n    try {\n       insertFrameworkLevelState(con);\n       insertApplicationLevelState(con);\n       con.commit();\n       publishNewState();\n    } catch (Throwable ex) {\n       con.rollback();\n    }\n}\n\nWith the current implementation, it is possible, that the control flow \nreaches \"publishNewState()\" without the changes done in \n\"insertFrameworkLevelState()\" have been made persistent - without the \nframework-level code (which is everything except \n\"insertApplicationLevelState()\") being able to detect the problem (e.g. \nif \"insertApplicationLevelState()\" tries add a null into a non-null \ncolumn catching the exception or any other application-level error that \nis not properly handled through safepoints).\n\n From a framework's perspective, this behavior is absolutely \nunacceptable. Here, the framework-level code sees a database that \ncommits successfully but does not make its changes persistent. 
\nTherefore, I don't think that altering this behavior has more downside \nthan upside.\n\nBest regards\n\nBernhard\n\n\n\n", "msg_date": "Thu, 13 Feb 2020 08:38:18 +0100", "msg_from": "\"Haumacher, Bernhard\" <haui@haumacher.de>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Thu, Feb 13, 2020 at 2:38 AM Haumacher, Bernhard <haui@haumacher.de> wrote:\n> Assume the application is written in Java and sees Postgres through the\n> JDBC driver:\n>\n> composeTransaction() {\n> Connection con = getConnection(); // implicitly \"begin\"\n> try {\n> insertFrameworkLevelState(con);\n> insertApplicationLevelState(con);\n> con.commit();\n> publishNewState();\n> } catch (Throwable ex) {\n> con.rollback();\n> }\n> }\n>\n> With the current implementation, it is possible, that the control flow\n> reaches \"publishNewState()\" without the changes done in\n> \"insertFrameworkLevelState()\" have been made persistent - without the\n> framework-level code (which is everything except\n> \"insertApplicationLevelState()\") being able to detect the problem (e.g.\n> if \"insertApplicationLevelState()\" tries add a null into a non-null\n> column catching the exception or any other application-level error that\n> is not properly handled through safepoints).\n>\n> From a framework's perspective, this behavior is absolutely\n> unacceptable. Here, the framework-level code sees a database that\n> commits successfully but does not make its changes persistent.\n> Therefore, I don't think that altering this behavior has more downside\n> than upside.\n\nI am not sure that this example really proves anything. If\ninsertFrameworkLevelState(con), insertApplicationLevelState(con), and\ncon.commit() throw exceptions, or if they return a status code and you\ncheck it and throw an exception if it's not what you expect, then it\nwill work. 
If database errors that occur during those operations are\nignored, then you've got a problem, but it does not seem necessary to\nchange the behavior of the database to fix that problem.\n\nI think one of the larger issues in this area is that people have\nscript that go:\n\nBEGIN;\n-- do stuff\nCOMMIT;\nBEGIN;\n-- do more stuff\nCOMMIT;\n\n...and they run these scripts by piping them into psql. Now, if the\nCOMMIT leaves the session in a transaction state, this is going to\nhave pretty random behavior. Like your example, this can be fixed by\nhaving proper error checking in the application, but that does require\nthat your application is something more than a psql script.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 14 Feb 2020 12:36:41 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Fri, 14 Feb 2020 at 12:37, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Feb 13, 2020 at 2:38 AM Haumacher, Bernhard <haui@haumacher.de>\n> wrote:\n> > Assume the application is written in Java and sees Postgres through the\n> > JDBC driver:\n> >\n> > composeTransaction() {\n> > Connection con = getConnection(); // implicitly \"begin\"\n> > try {\n> > insertFrameworkLevelState(con);\n> > insertApplicationLevelState(con);\n> > con.commit();\n> > publishNewState();\n> > } catch (Throwable ex) {\n> > con.rollback();\n> > }\n> > }\n> >\n> > With the current implementation, it is possible, that the control flow\n> > reaches \"publishNewState()\" without the changes done in\n> > \"insertFrameworkLevelState()\" have been made persistent - without the\n> > framework-level code (which is everything except\n> > \"insertApplicationLevelState()\") being able to detect the problem (e.g.\n> > if \"insertApplicationLevelState()\" tries add a null into a non-null\n> > column catching the exception or any other 
application-level error that\n> > is not properly handled through safepoints).\n> >\n> > From a framework's perspective, this behavior is absolutely\n> > unacceptable. Here, the framework-level code sees a database that\n> > commits successfully but does not make its changes persistent.\n> > Therefore, I don't think that altering this behavior has more downside\n> > than upside.\n>\n> I am not sure that this example really proves anything. If\n> insertFrameworkLevelState(con), insertApplicationLevelState(con), and\n> con.commit() throw exceptions, or if they return a status code and you\n> check it and throw an exception if it's not what you expect, then it\n> will work.\n\n\nThing is that con.commit() DOESN'T return a status code, nor does it throw\nan exception as we silently ROLLBACK here.\n\nAs noted up thread it's somewhat worse as depending on how the transaction\nfailed we seem to do different things\n\nIn Tom's example we do issue a warning and say there is no transaction\nrunning. 
I would guess we silently rolled back earlier.\nIn my example we don't issue the warning we just roll back.\n\nI do agree with Tom that changing this behaviour at this point would make\nthings worse for more people than it would help so I am not advocating\nthrowing an error here.\n\nI would however advocate for consistently doing the same thing with failed\ntransactions\n\nDave Cramer\n\nwww.postgres.rocks\n\n", "msg_date": "Fri, 14 Feb 2020 13:04:07 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Fri, Feb 14, 2020 at 1:04 PM Dave Cramer <davecramer@postgres.rocks> wrote:\n> Thing is that con.commit() DOESN'T return a status code, nor does it throw an exception as we silently ROLLBACK here.\n\nWhy not? There's nothing keeping the driver from doing either of those\nthings, is there? I mean, if using libpq, you can use PQcmdStatus() to\nget the command tag, and find out whether it's COMMIT or ROLLBACK. 
If\nyou're implementing the wire protocol directly, you can do something\nsimilar.\n\nhttps://www.postgresql.org/docs/current/libpq-exec.html#LIBPQ-EXEC-NONSELECT\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 14 Feb 2020 13:29:19 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Fri, 14 Feb 2020 at 13:29, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Feb 14, 2020 at 1:04 PM Dave Cramer <davecramer@postgres.rocks>\n> wrote:\n> > Thing is that con.commit() DOESN'T return a status code, nor does it\n> throw an exception as we silently ROLLBACK here.\n>\n> Why not? There's nothing keeping the driver from doing either of those\n> things, is there? I mean, if using libpq, you can use PQcmdStatus() to\n> get the command tag, and find out whether it's COMMIT or ROLLBACK. If\n> you're implementing the wire protocol directly, you can do something\n> similar.\n>\n>\n> https://www.postgresql.org/docs/current/libpq-exec.html#LIBPQ-EXEC-NONSELECT\n\n\nWell now you are asking the driver to re-interpret the results in a\ndifferent way than the server which is not what we tend to do.\n\nThe server throws an error we throw an error. We really aren't in the\nbusiness of re-interpreting the servers responses.\n\nDave Cramer\nwww.postgres.rocks\n\n", "msg_date": "Fri, 14 Feb 2020 14:08:20 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Fri, Feb 14, 2020 at 2:08 PM Dave Cramer <davecramer@postgres.rocks> wrote:\n> Well now you are asking the driver to re-interpret the results in a different way than the server which is not what we tend to do.\n>\n> The server throws an error we throw an error. We really aren't in the business of re-interpreting the servers responses.\n\nI don't really see a reason why the driver has to throw an exception\nif and only if there is an ERROR on the PostgreSQL side. But even if\nyou want to make that rule for some reason, it doesn't preclude\ncorrect behavior here. All you really need is to have con.commit()\nreturn some indication of what the command tag was, just as, say, psql\nwould do. If the server provides you with status information and you\nthrow it out instead of passing it along to the application, that's\nnot ideal.\n\nAnother thing that kinda puzzles me about this situation is that, as\nfar as I know, the only time COMMIT returns ROLLBACK is if the\ntransaction has already previously reported an ERROR. But if an ERROR\ngets turned into an exception, then, in the code snippet previously\nprovided, we'd never reach con.commit() in the first place.\n\nI'm not trying to deny that you might find some other server behavior\nmore convenient. You might. And, to Vik's original point, it might be\nmore compliant with the spec, too. 
But if an ERROR\n> gets turned into an exception, then, in the code snippet previously\n> provided, we'd never reach con.commit() in the first place.\n>\n\n The OP is building a framework where it's possible for the exception to be\nswallowed by the consumer of the framework.\n\n\n> I'm not trying to deny that you might find some other server behavior\n> more convenient. You might. And, to Vik's original point, it might be\n> more compliant with the spec, too. But since changing that would have\n> a pretty big blast radius at this stage, I think it's worth trying to\n> make things work as well as they can with the server behavior that we\n> already have. And I don't really see anything preventing the driver\n> from doing that technically. I don't understand the idea that the\n> driver is somehow not allowed to notice the command tag.\n>\n\nWe have the same blast radius.\nI have offered to make the behaviour requested dependent on a configuration\nparameter but apparently this is not sufficient.\n\n\n\nDave Cramer\nwww.postgres.rocks\n\n", "msg_date": "Fri, 14 Feb 2020 14:47:50 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Dave Cramer <davecramer@postgres.rocks> writes:\n> On Fri, 14 Feb 2020 at 14:37, Robert Haas <robertmhaas@gmail.com> wrote:\n>> I'm not trying to deny that you might find some other server behavior\n>> more convenient. You might. And, to Vik's original point, it might be\n>> more compliant with the spec, too. But since changing that would have\n>> a pretty big blast radius at this stage, I think it's worth trying to\n>> make things work as well as they can with the server behavior that we\n>> already have. 
And I don't really see anything preventing the driver\n>> from doing that technically. I don't understand the idea that the\n>> driver is somehow not allowed to notice the command tag.\n\n> We have the same blast radius.\n> I have offered to make the behaviour requested dependent on a configuration\n> parameter but apparently this is not sufficient.\n\nNope, that is absolutely not happening. We learned very painfully, back\naround 7.3 when we tried to put in autocommit on/off, that if server\nbehaviors like this are configurable then most client code has to be\nprepared to work with every possible setting. The argument that \"you can\njust set it to work the way you expect\" is a dangerous falsehood. I see\nno reason to think that a change like this wouldn't suffer the same sort\nof embarrassing and expensive failure that autocommit did.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Feb 2020 15:07:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Fri, 14 Feb 2020 at 15:07, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Dave Cramer <davecramer@postgres.rocks> writes:\n> > On Fri, 14 Feb 2020 at 14:37, Robert Haas <robertmhaas@gmail.com> wrote:\n> >> I'm not trying to deny that you might find some other server behavior\n> >> more convenient. You might. And, to Vik's original point, it might be\n> >> more compliant with the spec, too. But since changing that would have\n> >> a pretty big blast radius at this stage, I think it's worth trying to\n> >> make things work as well as they can with the server behavior that we\n> >> already have. And I don't really see anything preventing the driver\n> >> from doing that technically. 
I don't understand the idea that the\n> >> driver is somehow not allowed to notice the command tag.\n>\n> > We have the same blast radius.\n> > I have offered to make the behaviour requested dependent on a\n> configuration\n> > parameter but apparently this is not sufficient.\n>\n> Nope, that is absolutely not happening.\n\n\nI should have been more specific.\n\nI offered to make the behaviour in the JDBC driver dependent on a\nconfiguration parameter\n\nDave Cramer\nwww.postgres.rocks\n\n", "msg_date": "Fri, 14 Feb 2020 15:11:42 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 2020-Feb-14, Dave Cramer wrote:\n\n> I offered to make the behaviour in the JDBC driver dependent on a\n> configuration parameter\n\nDo you mean \"if con.commit() results in a rollback, then an exception is\nthrown, unless the parameter XYZ is set to PQR\"?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 14 Feb 2020 17:19:50 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Fri, 14 Feb 2020 at 15:19, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2020-Feb-14, Dave Cramer wrote:\n>\n> > I offered to make the behaviour in the JDBC driver dependent on a\n> > configuration parameter\n>\n> Do you mean \"if con.commit() results in a rollback, then an exception is\n> thrown, unless the parameter XYZ is set to PQR\"?\n>\n\n\nNo. JDBC has a number of connection parameters which change internal\nsemantics of various things.\n\nI was proposing to have a connection parameter which would be\nthrowExceptionOnFailedCommit (or something) which would do what it says.\n\nNone of this would touch the server. It would just change the driver\nsemantics.\n\n\nDave Cramer\nwww.postgres.rocks\n\n", "msg_date": "Fri, 14 Feb 2020 15:43:24 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 14.02.2020 at 20:36, Robert Haas wrote:\n> On Fri, Feb 14, 2020 at 2:08 PM Dave Cramer <davecramer@postgres.rocks> wrote:\n>> Well now you are asking the driver to re-interpret the results in a different way than the server which is not what we tend to do.\n>>\n>> The server throws an error we throw an error. We really aren't in the business of re-interpreting the servers responses.\n> I don't really see a reason why the driver has to throw an exception\n> if and only if there is an ERROR on the PostgreSQL side. But even if\n> you want to make that rule for some reason, it doesn't preclude\n> correct behavior here. All you really need is to have con.commit()\n> return some indication of what the command tag was, just as, say, psql\n> would do.\n\nI think, this would be an appropriate solution. PostgreSQL reports the \n\"unsuccessful\" commit through the \"ROLLBACK\" status code and the driver \ntranslates this into a Java SQLException, because this is the only way \nto communicate the \"non-successfullness\" from the void commit() method. 
\nSince the commit() was not successful, from the API point of view this \nis an error and it is fine to report this using an exception.\n\nOn 14.02.2020 at 21:07, Tom Lane wrote:\n> Dave Cramer <davecramer@postgres.rocks> writes:\n>> We have the same blast radius.\n>> I have offered to make the behaviour requested dependent on a configuration\n>> parameter but apparently this is not sufficient.\n> Nope, that is absolutely not happening. We learned very painfully, back\n> around 7.3 when we tried to put in autocommit on/off, that if server\n> behaviors like this are configurable then most client code has to be\n> prepared to work with every possible setting. The argument that \"you can\n> just set it to work the way you expect\" is a dangerous falsehood. I see\n> no reason to think that a change like this wouldn't suffer the same sort\n> of embarrassing and expensive failure that autocommit did.\n\nDoing this in a (non-default) driver setting is not ideal, because I \nexpect to be notified *by default* from a database (driver) if a commit \nwas not successful (and since the API is void, the only notification \npath is an exception). 
We already have a non-default option named \n\"autosafe\", which fixes the problem somehow.\n\nIf we really need both behaviors (\"silently ignore failed commits\" and \n\"notify about failed commits\") I would prefer adding a \nbackwards-compatible option \n\"silently-ignore-failed-commit-due-to-auto-rollback\" (since it is a \nreally absurd setting from my point of view, since consistency is at risk \nif this happens - the worst thing to expect from a database).\n\nRegards,  Bernhard\n\n\n\n", "msg_date": "Mon, 17 Feb 2020 19:01:38 +0100", "msg_from": "\"Haumacher, Bernhard\" <haui@haumacher.de>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Mon, 17 Feb 2020 at 13:02, Haumacher, Bernhard <haui@haumacher.de> wrote:\n\n> Am 14.02.2020 um 20:36 schrieb Robert Haas:\n> > On Fri, Feb 14, 2020 at 2:08 PM Dave Cramer <davecramer@postgres.rocks>\n> wrote:\n> >> Well now you are asking the driver to re-interpret the results in a\n> different way than the server which is not what we tend to do.\n> >>\n> >> The server throws an error we throw an error. We really aren't in the\n> business of re-interpreting the servers responses.\n> > I don't really see a reason why the driver has to throw an exception\n> > if and only if there is an ERROR on the PostgreSQL side. But even if\n> > you want to make that rule for some reason, it doesn't preclude\n> > correct behavior here. All you really need is to have con.commit()\n> > return some indication of what the command tag was, just as, say, psql\n> > would do.\n>\n> I think, this would be an appropriate solution. 
PostgreSQL reports the\n> \"unsuccessful\" commit through the \"ROLLBACK\" status code and the driver\n> translates this into a Java SQLException, because this is the only way\n> to communicate the \"non-successfullness\" from the void commit() method.\n> Since the commit() was not successful, from the API point of view this\n> is an error and it is fine to report this using an exception.\n>\n\nWell it doesn't always report the unsuccessful commit as a rollback\nsometimes it says\n\"there is no transaction\" depending on what happened in the transaction\n\nAlso when there is an error there is also a status provided by the backend.\nSince this is not an error to the backend there is no status that the\nexception can provide.\n\n>\n> Am 14.02.2020 um 21:07 schrieb Tom Lane:\n> > Dave Cramer <davecramer@postgres.rocks> writes:\n> >> We have the same blast radius.\n> >> I have offered to make the behaviour requested dependent on a\n> configuration\n> >> parameter but apparently this is not sufficient.\n> > Nope, that is absolutely not happening. We learned very painfully, back\n> > around 7.3 when we tried to put in autocommit on/off, that if server\n> > behaviors like this are configurable then most client code has to be\n> > prepared to work with every possible setting. The argument that \"you can\n> > just set it to work the way you expect\" is a dangerous falsehood. I see\n> > no reason to think that a change like this wouldn't suffer the same sort\n> > of embarrassing and expensive failure that autocommit did.\n>\n> Doing this in a (non-default) driver setting is not ideal, because I\n> expect do be notified *by default* from a database (driver) if a commit\n> was not successful (and since the API is void, the only notification\n> path is an exception). 
We already have a non-default option named\n> \"autosafe\", which fixes the problem somehow.\n>\n\nThe challenge with making this the default, is as Tom noted, many other\npeople don't expect this.\n\nI think the notion that every JDBC driver works exactly the same way for\nevery API call is a challenge.\nTake for instance SERIALIZABLE transaction isolation.\nOnly PostgreSQL actually implements it correctly. AFAIK Oracle SERIALIZABLE\nis actually REPEATABLE READ\n\nWhat many other frameworks do is have vendor specific behaviour.\nPerhaps writing a proxying driver might solve the problem?\n\n\n> If we really need both behaviors (\"silently ignore failed commits\" and\n> \"notify about failed commits\") I would prefer adding a\n> backwards-compatible option\n> \"silently-ignore-failed-commit-due-to-auto-rollback\" (since it is a\n> really aburd setting from my point of view, since consistency is at risk\n> if this happens - the worst thing to expect from a database).\n>\n\nThe error has been reported to the client. At this point the client is\nexpected to do a rollback.\n\nRegards,\nDave\n\nOn Mon, 17 Feb 2020 at 13:02, Haumacher, Bernhard <haui@haumacher.de> wrote:Am 14.02.2020 um 20:36 schrieb Robert Haas:\n> On Fri, Feb 14, 2020 at 2:08 PM Dave Cramer <davecramer@postgres.rocks> wrote:\n>> Well now you are asking the driver to re-interpret the results in a different way than the server which is not what we tend to do.\n>>\n>> The server throws an error we throw an error. We really aren't in the business of re-interpreting the servers responses.\n> I don't really see a reason why the driver has to throw an exception\n> if and only if there is an ERROR on the PostgreSQL side. But even if\n> you want to make that rule for some reason, it doesn't preclude\n> correct behavior here. All you really need is to have con.commit()\n> return some indication of what the command tag was, just as, say, psql\n> would do.\n\nI think, this would be an appropriate solution. 
PostgreSQL reports the \n\"unsuccessful\" commit through the \"ROLLBACK\" status code and the driver \ntranslates this into a Java SQLException, because this is the only way \nto communicate the \"non-successfullness\" from the void commit() method. \nSince the commit() was not successful, from the API point of view this \nis an error and it is fine to report this using an exception.Well it doesn't always report the unsuccessful commit as a rollback sometimes it says\"there is no transaction\" depending on what happened in the transactionAlso when there is an error there is also a status provided by the backend. Since this is not an error to the backend there is no status that the exception can provide.\n\nAm 14.02.2020 um 21:07 schrieb Tom Lane:\n> Dave Cramer <davecramer@postgres.rocks> writes:\n>> We have the same blast radius.\n>> I have offered to make the behaviour requested dependent on a configuration\n>> parameter but apparently this is not sufficient.\n> Nope, that is absolutely not happening.  We learned very painfully, back\n> around 7.3 when we tried to put in autocommit on/off, that if server\n> behaviors like this are configurable then most client code has to be\n> prepared to work with every possible setting.  The argument that \"you can\n> just set it to work the way you expect\" is a dangerous falsehood.  I see\n> no reason to think that a change like this wouldn't suffer the same sort\n> of embarrassing and expensive failure that autocommit did.\n\nDoing this in a (non-default) driver setting is not ideal, because I \nexpect do be notified *by default* from a database (driver) if a commit \nwas not successful (and since the API is void, the only notification \npath is an exception). We already have a non-default option named \n\"autosafe\", which fixes the problem somehow.The challenge with making this the default, is as Tom noted, many other people don't expect this. 
I think the notion that every JDBC driver works exactly the same way for every API call is a challenge. Take for instance SERIALIZABLE transaction isolation. Only PostgreSQL actually implements it correctly. AFAIK Oracle SERIALIZABLE is actually REPEATABLE READWhat many other frameworks do is have vendor specific behaviour. Perhaps writing a proxying driver might solve the problem?\n\nIf we really need both behaviors (\"silently ignore failed commits\" and \n\"notify about failed commits\") I would prefer adding a \nbackwards-compatible option \n\"silently-ignore-failed-commit-due-to-auto-rollback\" (since it is a \nreally aburd setting from my point of view, since consistency is at risk \nif this happens - the worst thing to expect from a database).The error has been reported to the client. At this point the client is expected to do a rollback. Regards,Dave", "msg_date": "Mon, 17 Feb 2020 17:12:04 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 2020-Feb-14, Dave Cramer wrote:\n\n> On Fri, 14 Feb 2020 at 15:19, Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> \n> > Do you mean \"if con.commit() results in a rollback, then an exception is\n> > thrown, unless the parameter XYZ is set to PQR\"?\n> \n> No. JDBC has a number of connection parameters which change internal\n> semantics of various things.\n> \n> I was proposing to have a connection parameter which would be\n> throwExceptionOnFailedCommit (or something) which would do what it says.\n> \n> None of this would touch the server. 
It would just change the driver\n> semantics.\n\nThat's exactly what I was saying.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 17 Feb 2020 20:30:04 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 17.02.2020 at 23:12, Dave Cramer wrote:\n> On Mon, 17 Feb 2020 at 13:02, Haumacher, Bernhard <haui@haumacher.de \n> <mailto:haui@haumacher.de>> wrote:\n>\n> ... would be an appropriate solution. PostgreSQL reports the\n> \"unsuccessful\" commit through the \"ROLLBACK\" status code and the\n> driver\n> translates this into a Java SQLException, because this is the only\n> way\n> to communicate the \"non-successfullness\" from the void commit()\n> method.\n> Since the commit() was not successful, from the API point of view\n> this\n> is an error and it is fine to report this using an exception.\n>\n>\n> Well it doesn't always report the unsuccessful commit as a rollback \n> sometimes it says\n> \"there is no transaction\" depending on what happened in the transaction\neven worse...\n>\n> Also when there is an error there is also a status provided by the \n> backend.\n> Since this is not an error to the backend there is no status that the \n> exception can provide.\nbe free to choose/define one...\n>\n> Doing this in a (non-default) driver setting is not ideal, because I\n> expect do be notified *by default* from a database (driver) if a\n> commit\n> was not successful (and since the API is void, the only notification\n> path is an exception). 
We already have a non-default option named\n> \"autosafe\", which fixes the problem somehow.\n>\n>\n> The challenge with making this the default, is as Tom noted, many \n> other people don't expect this.\n\nNobody expects a database reporting a successful commit, while it \ninternally rolled back.\n\nIf there is code out there depending on this bug, it is fair to provide \na backwards-compatible option to re-activate this unexpected behavior.\n\n> What many other frameworks do is have vendor specific behaviour.\n> Perhaps writing a proxying driver might solve the problem?\n\nThat's exactly what we do - extending our database abstraction layer to \nwork around database-specific interpretations of the JDBC API. \n\nBut of course, the abstraction layer is not able to reconstruct an error \nfrom a commit() call, that has been dropped by the driver. Of course, I \ncould try to insert another dummy entry into a dummy table immediately \nbefore each commit to get again the exception reporting that the \ntransaction is in rollback-only-mode... but this does not sound \nreasonable to me.\n\n> If we really need both behaviors (\"silently ignore failed commits\"\n> and\n> \"notify about failed commits\") I would prefer adding a\n> backwards-compatible option\n> \"silently-ignore-failed-commit-due-to-auto-rollback\" (since it is a\n> really aburd setting from my point of view, since consistency is\n> at risk\n> if this happens - the worst thing to expect from a database).\n>\n>\n> The error has been reported to the client. At this point the client is \n> expected to do a rollback.\n\nAs I explained, there is not \"the client\" but there are several software \nlayers - and the error only has been reported to some of these layers \nthat may decide not to communicate the problem down the road. 
Therefore, \nthe final commit() must report the problem again.\n\nBest regard, Bernhard\n\n", "msg_date": "Thu, 20 Feb 2020 08:02:49 +0100", "msg_from": "\"Haumacher, Bernhard\" <haui@haumacher.de>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Fri, 14 Feb 2020 at 14:37, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Fri, Feb 14, 2020 at 2:08 PM Dave Cramer <davecramer@postgres.rocks>\n>> wrote:\n>> > Well now you are asking the driver to re-interpret the results in a\n>> different way than the server which is not what we tend to do.\n>> >\n>> > The server throws an error we throw an error. 
We really aren't in the\n>> business of re-interpreting the servers responses.\n>>\n>> I don't really see a reason why the driver has to throw an exception\n>> if and only if there is an ERROR on the PostgreSQL side. But even if\n>> you want to make that rule for some reason, it doesn't preclude\n>> correct behavior here. All you really need is to have con.commit()\n>> return some indication of what the command tag was, just as, say, psql\n>> would do. If the server provides you with status information and you\n>> throw it out instead of passing it along to the application, that's\n>> not ideal.\n>>\n>\n> Well con.commit() returns void :(\n>\n\nI'd like to second Dave on this, from the .NET perspective - actual client\naccess is done via standard drivers in almost all cases, and these drivers\ngenerally adhere to database API abstractions (JDBC for Java, ADO.NET for\n.NET, and so on). AFAIK, in almost all such abstractions, commit can either\ncomplete (implying success) or throw an exception - there is no third way\nto return a status code. It's true that a driver may expose NOTICE/WARNING\nmessages via some other channel (Npgsql emits .NET events for these), but\nthis is a separate message \"channel\" that is disconnected API-wise from the\ncommit; this makes the mechanism very \"undiscoverable\".\n\nIn other words, if we do agree that there are some legitimate cases where a\nprogram may end up executing commit on a failed transaction (e.g. because\nof a combination of framework and application code), and we think that a\nwell-written client should be aware of the failed transaction and behave in\nan exceptional way around a non-committing commit, then I think that's a\ngood case for a server-side change:\n\n - Asking drivers to do this at the client have the exact same breakage\n impact as the server change, since the user-visible behavior changes in the\n same way (the change is just shifted from server to driver). 
What's worse\n is that every driver now has to reimplement the same new logic, and we'd\n most probably end up with some drivers doing it in some languages, and\n others not doing it in others (so behavioral differences).\n - Asking end-users (i.e. application code) to do this seems even worse,\n as every user/application in the world now has to be made somehow aware of\n a somewhat obscure and very un-discoverable situation.\n\nShay\n", "msg_date": "Sun, 23 Feb 2020 07:40:58 +0200", "msg_from": "Shay Rojansky <roji@roji.org>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Sun, 23 Feb 2020 at 00:41, Shay Rojansky <roji@roji.org> wrote:\n\n>\n>\n> On Fri, 14 Feb 2020 at 14:37, Robert Haas <robertmhaas@gmail.com> wrote:\n>>\n>>> On Fri, Feb 14, 2020 at 2:08 PM Dave Cramer <davecramer@postgres.rocks>\n>>> wrote:\n>>> > Well now you are asking the driver to re-interpret the results in a\n>>> different way than the server which is not what we tend to do.\n>>> >\n>>> > The server throws an error we throw an error. 
We really aren't in the\n>>> business of re-interpreting the servers responses.\n>>>\n>>> I don't really see a reason why the driver has to throw an exception\n>>> if and only if there is an ERROR on the PostgreSQL side. But even if\n>>> you want to make that rule for some reason, it doesn't preclude\n>>> correct behavior here. All you really need is to have con.commit()\n>>> return some indication of what the command tag was, just as, say, psql\n>>> would do. If the server provides you with status information and you\n>>> throw it out instead of passing it along to the application, that's\n>>> not ideal.\n>>>\n>>\n>> Well con.commit() returns void :(\n>>\n>\n> I'd like to second Dave on this, from the .NET perspective - actual client\n> access is done via standard drivers in almost all cases, and these drivers\n> generally adhere to database API abstractions (JDBC for Java, ADO.NET for\n> .NET, and so on). AFAIK, in almost all such abstractions, commit can either\n> complete (implying success) or throw an exception - there is no third way\n> to return a status code. It's true that a driver may expose NOTICE/WARNING\n> messages via some other channel (Npgsql emits .NET events for these), but\n> this is a separate message \"channel\" that is disconnected API-wise from the\n> commit; this makes the mechanism very \"undiscoverable\".\n>\n> In other words, if we do agree that there are some legitimate cases where\n> a program may end up executing commit on a failed transaction (e.g. 
because\n> of a combination of framework and application code), and we think that a\n> well-written client should be aware of the failed transaction and behave in\n> an exceptional way around a non-committing commit, then I think that's a\n> good case for a server-side change:\n>\n> - Asking drivers to do this at the client have the exact same breakage\n> impact as the server change, since the user-visible behavior changes in the\n> same way (the change is just shifted from server to driver). What's worse\n> is that every driver now has to reimplement the same new logic, and we'd\n> most probably end up with some drivers doing it in some languages, and\n> others not doing it in others (so behavioral differences).\n> - Asking end-users (i.e. application code) to do this seems even\n> worse, as every user/application in the world now has to be made somehow\n> aware of a somewhat obscure and very un-discoverable situation.\n>\n> Shay\n>\n\nTo be fair this is Bernhard's position which, after thinking about this\nsome more, I am endorsing.\n\nSo we now have two of the largest client bases for PostgreSQL with known\nissues effectively losing data because they don't notice that the commit\nfailed.\nIt is very likely that this occurs with all clients but they just don't\nnotice it. 
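The failure mode Dave describes can be sketched as a self-contained simulation (all names below are invented for illustration; no real driver or server is involved, the stub only mimics the reported behavior that COMMIT on a failed transaction completes quietly with a ROLLBACK command tag):

```java
// Invented stub: mimics a connection whose commit() never throws and
// simply reports the server's command tag ("ROLLBACK" when the
// transaction had already failed).
class SketchConnection {
    private boolean txFailed = false;

    void exec(boolean fails) {
        if (fails) {
            txFailed = true; // server marks the transaction aborted
            throw new RuntimeException("ERROR: statement failed");
        }
    }

    String commit() {
        String tag = txFailed ? "ROLLBACK" : "COMMIT";
        txFailed = false; // COMMIT ends the transaction either way
        return tag;
    }
}

public class LostCommitDemo {
    public static void main(String[] args) {
        SketchConnection con = new SketchConnection();
        try {
            con.exec(true); // a statement inside the transaction fails...
        } catch (RuntimeException ex) {
            // ...but framework code swallows the exception here
        }
        // The application then commits and sees no error at all:
        System.out.println("commit returned tag: " + con.commit());
        // prints: commit returned tag: ROLLBACK
    }
}
```

The point of the sketch is only that nothing at the commit() call site signals failure unless someone inspects the command tag.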
That is what is particularly alarming about this problem: we are\nsilently ignoring an error.\n\nWhile we can certainly code around this in the client drivers I don't\nbelieve they should be responsible for fixing the failings of the server.\n\nI fail to see where doing the right thing and reporting an error where\nthere is one should be trumped by not breaking existing apps which by all\naccounts may be broken but just don't know it.\n\nDave\n\n", "msg_date": "Sun, 23 Feb 2020 06:16:23 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Shay> Asking drivers to do this at the client have the exact same breakage\nimpact as the server change, since the user-visible behavior changes in the\nsame way\n\n+1\n\nDave>While we can certainly code around this in the client drivers I don't\nbelieve they should be responsible for fixing the failings of the server.\n\nApplication developers expect that the database feels the same no matter\nwhich driver is used, so it would be better to avoid a case\nwhen half of the drivers create exceptions on non-committing-commit, and\nanother half silently loses data.\n\nVladimir\n\n", "msg_date": "Sun, 23 Feb 2020 15:14:25 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Sun, Feb 23, 2020 at 11:11 
AM Shay Rojansky <roji@roji.org> wrote:\n> I'd like to second Dave on this, from the .NET perspective - actual client access is done via standard drivers in almost all cases, and these drivers generally adhere to database API abstractions (JDBC for Java, ADO.NET for .NET, and so on). AFAIK, in almost all such abstractions, commit can either complete (implying success) or throw an exception - there is no third way to return a status code. It's true that a driver may expose NOTICE/WARNING messages via some other channel (Npgsql emits .NET events for these), but this is a separate message \"channel\" that is disconnected API-wise from the commit; this makes the mechanism very \"undiscoverable\".\n\nI'm still befuddled here. First, to repeat what I said before, the\nCOMMIT only returns a ROLLBACK command tag if there's been a previous\nERROR. So, if you haven't ignored the prior ERROR, you should be fine.\nSecond, there's nothing to keep the driver itself from translating\nROLLBACK into an exception, if that's more convenient for some\nparticular driver. Let's go back to Bernhard's example upthread:\n\ncomposeTransaction() {\n Connection con = getConnection(); // implicitly \"begin\"\n try {\n insertFrameworkLevelState(con);\n insertApplicationLevelState(con);\n con.commit();\n publishNewState();\n } catch (Throwable ex) {\n con.rollback();\n }\n}\n\nIf insertFrameworkLevelState() or insertApplicationLevelState()\nperform database operations that fail, then an exception should be\nthrown and we should end up at con.rollback(), unless there is an\ninternal catch block inside those functions that swallows the\nexception, or unless the JDBC driver ignores the error from the\nserver. If those things succeed, then COMMIT could still fail with an\nERROR but it shouldn't return ROLLBACK. But, for extra security,\ncon.commit() could be made to throw an exception if the command tag\nreturned by COMMIT is not COMMIT. 
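That "extra security" check could be as small as this (hypothetical helper name, not actual pgjdbc code; it assumes the driver reads the command tag from the server's CommandComplete message for the COMMIT statement):

```java
// Hypothetical helper (not from pgjdbc): a driver's commit() could call
// this on the command tag returned for the COMMIT statement and raise
// an exception when the server actually rolled back.
final class CommitTagCheck {
    static void verifyCommitTag(String commandTag) {
        if (!"COMMIT".equals(commandTag)) {
            throw new IllegalStateException(
                "COMMIT did not commit; server returned: " + commandTag);
        }
    }
}
```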
It sounds like Dave doesn't want to\ndo that, but it would solve this problem without requiring a server\nbehavior change.\n\nActually, an even better idea might be to make the driver error out\nwhen the transaction is known to be in a failed state when you enter\ncon.commit(). The server does return an indication after each command\nas to whether the session is in a transaction and whether that\ntransaction is in a failed state. That's how the %x escape sequence\njust added to the psql prompt works. So, suppose the JDBC driver\ntracked that state like libpq does. insertFrameworkLevelState() or\ninsertApplicationLevelState() throws an exception, which is internally\nswallowed. Then you reach con.commit(), and it says, nope, can't do\nthat, we're in a failed state, and so an exception is thrown. Then\nwhen we reach con.rollback() we're still inside a transaction, it gets\nrolled back, and everything works just as expected.\n\nOr, alternatively, the JDBC driver could keep track of the fact that\nit had thrown an exception ITSELF, without paying any attention to\nwhat the server told it, and if it saw con.commit() after raising an\nexception, it could raise another exception (or re-raise the same\none). That would also fix it.\n\n> Asking drivers to do this at the client have the exact same breakage impact as the server change, since the user-visible behavior changes in the same way (the change is just shifted from server to driver). 
What's worse is that every driver now has to reimplement the same new logic, and we'd most probably end up with some drivers doing it in some languages, and others not doing it in others (so behavioral differences).\n\nWell, it seems quite possible that there are drivers and applications\nthat don't have this issue; I've never had a problem with this\nbehavior, and I've been using PostgreSQL for something like two\ndecades, and I believe that the sketch above could be used to get the\ndesired behavior in current releases of PostgreSQL with no server code\nchange. If we did change the server behavior, it seems unlikely that\nevery driver would adjust their behavior to the new server behavior\nall at once and that they would all get it right while also all\npreserving backward compatibility with current releases in case a\nnewer driver is used with an older server. I don't think that's\nlikely. What would probably happen is that many drivers would ignore\nthe change, leaving applications to cope with the differences between\nserver versions, and some would change the driver behavior\ncategorically, breaking compatibility with older server versions, and\nsome would make mistakes in implementing support for the new behavior.\nAnd maybe we would also find that the new behavior isn't ideal for\neverybody any more than the current behavior is ideal for everybody.\n\nI am really struggling to see why this is anything but a bug in the\nJDBC driver. The problem is that the application doesn't know that the\ntransaction has failed, but the server has returned not one, not two,\nbut three indications of failure. First, it returned an error, which I\nguess the JDBC driver turns into an exception - but it does not,\nbefore throwing that exception, remember that the current transaction\nis failed. 
Second, it will thereafter report that the transaction is\nin a failed state, both immediately after the error and upon every\nsubsequent operation that does not get the server out of the\ntransaction. It sounds like the JDBC driver ignores this information.\nThird, the attempt at COMMIT will return a ROLLBACK command tag, which\nDave said that the driver does ignore. That's a lot of stuff that the\ndriver could do but isn't doing. So what this boils down to, from my\nperspective, is not that the driver behavior in the face of errors\ncan't be made correct with the existing semantics, but that the driver\nwould find it more convenient if PostgreSQL reported those errors in a\nsomewhat different way. I think that's a fair criticism, but I don't\nthink it's a sufficient reason to change the behavior.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 24 Feb 2020 07:01:09 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Sun, 23 Feb 2020 at 20:31, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sun, Feb 23, 2020 at 11:11 AM Shay Rojansky <roji@roji.org> wrote:\n> > I'd like to second Dave on this, from the .NET perspective - actual\n> client access is done via standard drivers in almost all cases, and these\n> drivers generally adhere to database API abstractions (JDBC for Java,\n> ADO.NET for .NET, and so on). AFAIK, in almost all such abstractions,\n> commit can either complete (implying success) or throw an exception - there\n> is no third way to return a status code. It's true that a driver may expose\n> NOTICE/WARNING messages via some other channel (Npgsql emits .NET events\n> for these), but this is a separate message \"channel\" that is disconnected\n> API-wise from the commit; this makes the mechanism very \"undiscoverable\".\n>\n> I'm still befuddled here. 
First, to repeat what I said before, the\n> COMMIT only returns a ROLLBACK command tag if there's been a previous\n> ERROR. So, if you haven't ignored the prior ERROR, you should be fine.\n> Second, there's nothing to keep the driver itself from translating\n> ROLLBACK into an exception, if that's more convenient for some\n> particular driver. Let's go back to Bernhard's example upthread:\n>\n> composeTransaction() {\n> Connection con = getConnection(); // implicitly \"begin\"\n> try {\n> insertFrameworkLevelState(con);\n> insertApplicationLevelState(con);\n> con.commit();\n> publishNewState();\n> } catch (Throwable ex) {\n> con.rollback();\n> }\n> }\n>\n> If insertFrameworkLevelState() or insertApplicationLevelState()\n> perform database operations that fail, then an exception should be\n> thrown and we should end up at con.rollback(), unless there is an\n> internal catch block inside those functions that swallows the\n> exception, or unless the JDBC driver ignores the error from the\n> server.\n\n\nThe driver does not ignore the error, but in Bernhard's case the framework\nis\nprocessing the exception and not re-throwing it.\n\n\n> If those things succeed, then COMMIT could still fail with an\n> ERROR but it shouldn't return ROLLBACK. But, for extra security,\n> con.commit() could be made to throw an exception if the command tag\n> returned by COMMIT is not COMMIT. It sounds like Dave doesn't want to\n> do that, but it would solve this problem without requiring a server\n> behavior change.\n>\n\nWell the driver really isn't in the business of changing the semantics of\nthe server.\n\n>\n> Actually, an even better idea might be to make the driver error out\n> when the transaction is known to be in a failed state when you enter\n> con.commit(). The server does return an indication after each command\n> as to whether the session is in a transaction and whether that\n> transaction is in a failed state. 
That's how the %x escape sequence\n> just added to the psql prompt works. So, suppose the JDBC driver\n> tracked that state like libpq does. insertFrameworkLevelState() or\n> insertApplicationLevelState() throws an exception, which is internally\n> swallowed. Then you reach con.commit(), and it says, nope, can't do\n> that, we're in a failed state, and so an exception is thrown. Then\n> when we reach con.rollback() we're still inside a transaction, it gets\n> rolled back, and everything works just as expected.\n>\n\nYes, we could do that.\n\n>\n> Or, alternatively, the JDBC driver could keep track of the fact that\n> it had thrown an exception ITSELF, without paying any attention to\n> what the server told it, and if it saw con.commit() after raising an\n> exception, it could raise another exception (or re-raise the same\n> one). That would also fix it.\n>\n\nWe could also do this.\n\n>\n> > Asking drivers to do this at the client have the exact same breakage\n> impact as the server change, since the user-visible behavior changes in the\n> same way (the change is just shifted from server to driver). What's worse\n> is that every driver now has to reimplement the same new logic, and we'd\n> most probably end up with some drivers doing it in some languages, and\n> others not doing it in others (so behavioral differences).\n>\n> Well, it seems quite possible that there are drivers and applications\n> that don't have this issue; I've never had a problem with this\n> behavior, and I've been using PostgreSQL for something like two\n> decades,\n\n\nI would argue that you really don't know if you had the problem or not\nsince an error is not thrown.\nThe client merrily goes along its way after issuing a commit and receiving\na rollback or possibly a warning\nsaying that it's not in a transaction. 
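A client that did want to check could lean on the transaction-status byte that the server attaches to every ReadyForQuery message ('I' idle, 'T' in transaction, 'E' failed transaction); a minimal driver-side sketch (invented names, not pgjdbc code):

```java
// Invented sketch (not pgjdbc code): remember the transaction-status
// byte from each ReadyForQuery message and refuse to send COMMIT while
// the server reports the transaction as failed ('E').
final class TxStateGuard {
    private char txStatus = 'I'; // 'I' idle, 'T' in tx, 'E' failed tx

    void onReadyForQuery(char status) {
        txStatus = status;
    }

    void beforeCommit() {
        if (txStatus == 'E') {
            throw new IllegalStateException(
                "cannot commit: current transaction is aborted");
        }
    }
}
```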
One would have to know that the\nserver\nhad this behaviour to check for it.\nClearly not everyone knows this as it's not documented as a violation of the\nSQL spec\n\n\n> and I believe that the sketch above could be used to get the\n> desired behavior in current releases of PostgreSQL with no server code\n> change. If we did change the server behavior, it seems unlikely that\n> every driver would adjust their behavior to the new server behavior\n> all at once and that they would all get it right while also all\n> preserving backward compatibility with current releases in case a\n> newer driver is used with an older server.\n\n\nActually I would be willing to bet that the JDBC driver would do just that\nwithout any code changes whatsoever.\ncommit throws an error. The driver sees the error and throws an exception.\nIf an older version of the server were used no error would be thrown it\nwould work as seen today. I would also be willing to bet other drivers would\nwork the same.\nAdditionally once the server behaviour was changed I'd be more than willing\nto have the driver emulate this behaviour for older versions.\n\n\n\n> I don't think that's\n> likely. What would probably happen is that many drivers would ignore\n> the change, leaving applications to cope with the differences between\n> server versions, and some would change the driver behavior\n> categorically, breaking compatibility with older server versions, and\n> some would make mistakes in implementing support for the new behavior.\n> And maybe we would also find that the new behavior isn't ideal for\n> everybody any more than the current behavior is ideal for everybody.\n>\n> I am really struggling to see why this is anything but a bug in the\n> JDBC driver.\n\nNot seeing how this is a driver error.\n\n> The problem is that the application doesn't know that the\n> transaction has failed, but the server has returned not one, not two,\n> but three indications of failure. 
First, it returned an error, which I\n> guess the JDBC driver turns into an exception - but it does not,\n>\nIt does throw the exception, however for whatever reason the client ignores\nthese.\nNot how I code but apparently there is an application for this\n\n> before throwing that exception, remember that the current transaction\n> is failed. Second, it will thereafter report that the transaction is\n> in a failed state, both immediately after the error and upon every\n> subsequent operation that does not get the server out of the\n> transaction. It sounds like the JDBC driver ignores this information.\n>\n\n\n> Third, the attempt at COMMIT will return a ROLLBACK command tag, which\n> Dave said that the driver does ignore. That's a lot of stuff that the\n> driver could do but isn't doing. So what this boils down to, from my\n> perspective, is not that the driver behavior in the face of errors\n> can't be made correct with the existing semantics, but that the driver\n> would find it more convenient if PostgreSQL reported those errors in a\n> somewhat different way. I think that's a fair criticism, but I don't\n> think it's a sufficient reason to change the behavior.\n>\n\nI think the fact that this is a violation of the SQL SPEC lends\nconsiderable credence to the argument for changing the behaviour.\nSince this can lead to losing a transaction I think there is even more\nreason to look at changing the behaviour.\n\nDave Cramer\nwww.postgres.rocks\n\n", "msg_date": "Sun, 23 Feb 2020 20:58:59 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 24/02/2020 02:31, Robert Haas wrote:\n> I am really struggling to see why this is anything but a bug in the\n> JDBC driver.\n\nI can follow your logic for it being a bug in the JDBC driver, but\n\"anything but\"? No, this is (also) an undocumented violation of SQL.\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 24 Feb 2020 09:01:26 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "> First, to repeat what I said before, the COMMIT only returns a ROLLBACK\ncommand tag if there's been a previous ERROR. So, if you haven't ignored\nthe prior ERROR, you should be fine. [...]\n> I am really struggling to see why this is anything but a bug in the JDBC\ndriver\n\nAs Dave wrote, the problem here isn't with the driver, but with framework\nor user-code which swallows the initial exception and allows code to\ncontinue to the commit. Npgsql (and I'm sure the JDBC driver too) does\nsurface PostgreSQL errors as exceptions, and internally tracks the\ntransaction status provided in the CommandComplete message. That means\nusers have the ability - but not the obligation - to know about failed\ntransactions, and some frameworks or user coding patterns could lead to a\ncommit being done on a failed transaction.\n\n> So, if you haven't ignored the prior ERROR, you should be fine. Second,\nthere's nothing to keep the driver itself from translating ROLLBACK into an\nexception, if that's more convenient for some particular driver. 
[...]\n\nThis is the main point here IMHO, and I don't think it's a question of\nconvenience, or of behavior that should vary across drivers.\n\nIf we think the current *user-visible* behavior is problematic (commit on\nfailed transaction completes without throwing), then the only remaining\nquestion is where this behavior should be fixed - at the server or at the\ndriver. As I wrote above, from the user's perspective it makes no\ndifference - the change would be identical (and just as breaking) either\nway. So while drivers *could* implement the new behavior, what advantages\nwould that have over doing it at the server? Some disadvantages do seem\nclear (repetition of the logic across each driver - leading to\ninconsistency across drivers, changing semantics at the driver by turning a\nnon-error into an exception...).\n\n> Well, it seems quite possible that there are drivers and applications\nthat don't have this issue; I've never had a problem with this behavior,\nand I've been using PostgreSQL for something like two decades [...]\n\nIf we are assuming that most user code is already written to avoid\ncommitting on failed transactions (by tracking transaction state etc.),\nthen making this change at the server wouldn't affect those applications;\nthe only applications affected would be those that do commit on failed\ntransactions today, and it could be argued that those are likely to be\nbroken today (since drivers today don't really expose the rollback in an\naccessible/discoverable way).\n\nShay\n\nOn Mon, Feb 24, 2020 at 3:31 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sun, Feb 23, 2020 at 11:11 AM Shay Rojansky <roji@roji.org> wrote:\n> > I'd like to second Dave on this, from the .NET perspective - actual\n> client access is done via standard drivers in almost all cases, and these\n> drivers generally adhere to database API abstractions (JDBC for Java,\n> ADO.NET for .NET, and so on). 
AFAIK, in almost all such abstractions,\n> commit can either complete (implying success) or throw an exception - there\n> is no third way to return a status code. It's true that a driver may expose\n> NOTICE/WARNING messages via some other channel (Npgsql emits .NET events\n> for these), but this is a separate message \"channel\" that is disconnected\n> API-wise from the commit; this makes the mechanism very \"undiscoverable\".\n>\n> I'm still befuddled here. First, to repeat what I said before, the\n> COMMIT only returns a ROLLBACK command tag if there's been a previous\n> ERROR. So, if you haven't ignored the prior ERROR, you should be fine.\n> Second, there's nothing to keep the driver itself from translating\n> ROLLBACK into an exception, if that's more convenient for some\n> particular driver. Let's go back to Bernhard's example upthread:\n>\n> composeTransaction() {\n> Connection con = getConnection(); // implicitly \"begin\"\n> try {\n> insertFrameworkLevelState(con);\n> insertApplicationLevelState(con);\n> con.commit();\n> publishNewState();\n> } catch (Throwable ex) {\n> con.rollback();\n> }\n> }\n>\n> If insertFrameworkLevelState() or insertApplicationLevelState()\n> perform database operations that fail, then an exception should be\n> thrown and we should end up at con.rollback(), unless there is an\n> internal catch block inside those functions that swallows the\n> exception, or unless the JDBC driver ignores the error from the\n> server. If those things succeed, then COMMIT could still fail with an\n> ERROR but it shouldn't return ROLLBACK. But, for extra security,\n> con.commit() could be made to throw an exception if the command tag\n> returned by COMMIT is not COMMIT. 
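Robert's "extra security" idea above (have the driver's commit() throw when the command tag the server returns for COMMIT is ROLLBACK) can be sketched as follows. This is a hypothetical, self-contained model: executeCommit() merely stands in for sending COMMIT over the wire and reading the CommandComplete tag; it is not real JDBC or libpq code.

```java
// Driver-side check of the COMMIT command tag, as suggested upthread.
// The wire protocol is stubbed out; only the tag-checking logic is real.
public class CommandTagCheck {
    // Stand-in for the server: a failed transaction makes the backend
    // answer a COMMIT statement with a ROLLBACK command tag.
    static String executeCommit(boolean txFailed) {
        return txFailed ? "ROLLBACK" : "COMMIT";
    }

    // Hypothetical driver commit() that surfaces the silent rollback
    // as an exception instead of ignoring the tag.
    static void commit(boolean txFailed) {
        String tag = executeCommit(txFailed);
        if (!"COMMIT".equals(tag)) {
            throw new IllegalStateException(
                "COMMIT did not commit; server returned command tag: " + tag);
        }
    }

    public static void main(String[] args) {
        commit(false);        // clean transaction: returns normally
        try {
            commit(true);     // failed transaction: throws
        } catch (IllegalStateException ex) {
            System.out.println("caught: " + ex.getMessage());
        }
    }
}
```

Under today's server behavior this turns the otherwise-silent ROLLBACK into a driver-side exception, with no server change required.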
It sounds like Dave doesn't want to\n> do that, but it would solve this problem without requiring a server\n> behavior change.\n>\n> Actually, an even better idea might be to make the driver error out\n> when the transaction is known to be in a failed state when you enter\n> con.commit(). The server does return an indication after each command\n> as to whether the session is in a transaction and whether that\n> transaction is in a failed state. That's how the %x escape sequence\n> just added to the psql prompt works. So, suppose the JDBC driver\n> tracked that state like libpq does. insertFrameworkLevelState() or\n> insertApplicationLevelState() throws an exception, which is internally\n> swallowed. Then you reach con.commit(), and it says, nope, can't do\n> that, we're in a failed state, and so an exception is thrown. Then\n> when we reach con.rollback() we're still inside a transaction, it gets\n> rolled back, and everything works just as expected.\n>\n> Or, alternatively, the JDBC driver could keep track of the fact that\n> it had thrown an exception ITSELF, without paying any attention to\n> what the server told it, and if it saw con.commit() after raising an\n> exception, it could raise another exception (or re-raise the same\n> one). That would also fix it.\n>\n> > Asking drivers to do this at the client have the exact same breakage\n> impact as the server change, since the user-visible behavior changes in the\n> same way (the change is just shifted from server to driver). 
What's worse\n> is that every driver now has to reimplement the same new logic, and we'd\n> most probably end up with some drivers doing it in some languages, and\n> others not doing it in others (so behavioral differences).\n>\n> Well, it seems quite possible that there are drivers and applications\n> that don't have this issue; I've never had a problem with this\n> behavior, and I've been using PostgreSQL for something like two\n> decades, and I believe that the sketch above could be used to get the\n> desired behavior in current releases of PostgreSQL with no server code\n> change. If we did change the server behavior, it seems unlikely that\n> every driver would adjust their behavior to the new server behavior\n> all at once and that they would all get it right while also all\n> preserving backward compatibility with current releases in case a\n> newer driver is used with an older server. I don't think that's\n> likely. What would probably happen is that many drivers would ignore\n> the change, leaving applications to cope with the differences between\n> server versions, and some would change the driver behavior\n> categorically, breaking compatibility with older server versions, and\n> some would make mistakes in implementing support for the new behavior.\n> And maybe we would also find that the new behavior isn't ideal for\n> everybody any more than the current behavior is ideal for everybody.\n>\n> I am really struggling to see why this is anything but a bug in the\n> JDBC driver. The problem is that the application doesn't know that the\n> transaction has failed, but the server has returned not one, not two,\n> but three indications of failure. First, it returned an error, which I\n> guess the JDBC driver turns into an exception - but it does not,\n> before throwing that exception, remember that the current transaction\n> is failed. 
Second, it will thereafter report that the transaction is\n> in a failed state, both immediately after the error and upon every\n> subsequent operation that does not get the server out of the\n> transaction. It sounds like the JDBC driver ignores this information.\n> Third, the attempt at COMMIT will return a ROLLBACK command tag, which\n> Dave said that the driver does ignore. That's a lot of stuff that the\n> driver could do but isn't doing. So what this boils down to, from my\n> perspective, is not that the driver behavior in the face of errors\n> can't be made correct with the existing semantics, but that the driver\n> would find it more convenient if PostgreSQL reported those errors in a\n> somewhat different way. I think that's a fair criticism, but I don't\n> think it's a sufficient reason to change the behavior.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n", "msg_date": "Mon, 24 Feb 2020 10:26:38 +0200", "msg_from": "Shay Rojansky <roji@roji.org>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Mon, Feb 24, 2020 at 7:29 AM Dave Cramer <davecramer@postgres.rocks> wrote:\n> Well the driver really isn't in the business of changing the semantics of the server.\n\nI mean, I just can't agree with that way of characterizing it. It\nseems clear enough that the driver not only should not change the\nsemantics of the server, but that it cannot. It can, however, decide\nwhich of the things that the server might do (or that the application\nconnected to it might do) ought to result in it throwing an exception.\nAnd a slightly different set of decisions here would produce the\ndesired behavior instead of behavior which is not desired.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 24 Feb 2020 17:55:40 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Mon, Feb 24, 2020 at 1:31 PM Vik Fearing <vik@postgresfriends.org> wrote:\n> On 24/02/2020 02:31, Robert Haas wrote:\n> > I am really struggling to see why this is anything but a bug in the\n> > JDBC driver.\n>\n> I can follow your logic for it being a bug in the JDBC driver, but\n> \"anything but\"? No, this is (also) an undocumented violation of SQL.\n\nWell, that's a fair point. I withdraw my previous statement. Instead,\nI wish to argue that:\n\n1. This problem can definitely be fixed in any given driver without\nchanging the behavior of the server.\n\n2. 
It would be better to fix the driver than the server because this\nbehavior is very old and there are probably many applications (and\nperhaps some drivers) that rely on it, and changing the server would\nbreak them.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 24 Feb 2020 17:59:39 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Mon, Feb 24, 2020 at 1:56 PM Shay Rojansky <roji@roji.org> wrote:\n> As Dave wrote, the problem here isn't with the driver, but with framework or user-code which swallows the initial exception and allows code to continue to the commit. Npgsql (and I'm sure the JDBC driver too) does surface PostgreSQL errors as exceptions, and internally tracks the transaction status provided in the CommandComplete message. That means users have the ability - but not the obligation - to know about failed transactions, and some frameworks or user coding patterns could lead to a commit being done on a failed transaction.\n\nAgreed. All of that can be fixed in the driver, though.\n\n> If we think the current *user-visible* behavior is problematic (commit on failed transaction completes without throwing), then the only remaining question is where this behavior should be fixed - at the server or at the driver. As I wrote above, from the user's perspective it makes no difference - the change would be identical (and just as breaking) either way. So while drivers *could* implement the new behavior, what advantages would that have over doing it at the server? 
Some disadvantages do seem clear (repetition of the logic across each driver - leading to inconsistency across drivers, changing semantics at the driver by turning a non-error into an exception...).\n\nThe advantage is that it doesn't cause a compatibility break.\n\n> > Well, it seems quite possible that there are drivers and applications that don't have this issue; I've never had a problem with this behavior, and I've been using PostgreSQL for something like two decades [...]\n>\n> If we are assuming that most user code is already written to avoid committing on failed transactions (by tracking transaction state etc.), then making this change at the server wouldn't affect those applications; the only applications affected would be those that do commit on failed transactions today, and it could be argued that those are likely to be broken today (since drivers today don't really expose the rollback in an accessible/discoverable way).\n\nlibpq exposes it just fine, so I think you're overgeneralizing here.\n\nAs I said upthread, I think one of the things that would be pretty\nbadly broken by this is psql -f something.sql, where something.sql\ncontains a series of blocks of the form \"begin; something; something;\nsomething; commit;\". Right now whichever transactions succeed get\ncommitted. With the proposed change, if one transaction block fails,\nit'll merge with all of the following blocks. 
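Robert's psql -f concern can be made concrete with a toy model. Everything here is invented for illustration (no real psql or server is involved): it runs a script of begin/statement/commit tokens twice, once under today's semantics (COMMIT on a failed transaction issues ROLLBACK and ends the transaction) and once under the assumption that an erroring COMMIT would also leave the transaction open, which is the reading under which later blocks merge into the failed one.

```java
// Toy model of a script made of begin;...;commit; blocks, comparing
// today's COMMIT semantics with a hypothetical error-and-stay-open COMMIT.
public class ScriptModel {
    enum Tx { NONE, OK, FAILED }

    // Returns how many blocks' COMMITs actually committed.
    static int run(String[] script, boolean commitErrorsKeepTxOpen) {
        Tx tx = Tx.NONE;
        int committed = 0;
        for (String s : script) {
            switch (s) {
                case "begin":
                    if (tx == Tx.NONE) tx = Tx.OK;  // else: WARNING, no-op
                    break;
                case "commit":
                    if (tx == Tx.OK) { committed++; tx = Tx.NONE; }
                    else if (commitErrorsKeepTxOpen) { /* ERROR; tx stays FAILED */ }
                    else tx = Tx.NONE;              // today: ROLLBACK tag, tx ends
                    break;
                case "bad":
                    if (tx != Tx.NONE) tx = Tx.FAILED;  // ERROR poisons the tx
                    break;
                default:
                    // ordinary statement; in a FAILED tx it would just error
                    break;
            }
        }
        return committed;
    }

    public static void main(String[] args) {
        String[] script = {"begin", "bad", "commit", "begin", "stmt", "commit"};
        System.out.println(run(script, false)); // today: second block commits
        System.out.println(run(script, true));  // stay-open variant: nothing commits
    }
}
```

In the first run the failed block ends at its COMMIT, so the second block commits; in the second run the open failed transaction swallows every following block, which is the "merging" Robert describes.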
You may think that\nnobody is doing this sort of thing, but I think people are, and that\nthey will come after us with pitchforks if we break it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 24 Feb 2020 18:04:28 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Mon, 24 Feb 2020 at 07:25, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Feb 24, 2020 at 7:29 AM Dave Cramer <davecramer@postgres.rocks>\n> wrote:\n> > Well the driver really isn't in the business of changing the semantics\n> of the server.\n>\n> I mean, I just can't agree with that way of characterizing it. It\n> seems clear enough that the driver not only should not change the\n> semantics of the server, but that it cannot. It can, however, decide\n> which of the things that the server might do (or that the application\n> connected to it might do) ought to result in it throwing an exception.\n> And a slightly different set of decisions here would produce the\n> desired behavior instead of behavior which is not desired.\n>\n> --\n>\n\nFair enough. What I meant to say was that the driver isn't in the business\nof providing different semantics than the server provides.\n\n\n> Dave Cramer\n> http://www.postgres.rocks\n>\n", "msg_date": "Mon, 24 Feb 2020 08:09:52 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Mon, 24 Feb 2020 at 07:34, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Feb 24, 2020 at 1:56 PM Shay Rojansky <roji@roji.org> wrote:\n> > As Dave wrote, the problem here isn't with the driver, but with\n> framework or user-code which swallows the initial exception and allows code\n> to continue to the commit. Npgsql (and I'm sure the JDBC driver too) does\n> surface PostgreSQL errors as exceptions, and internally tracks the\n> transaction status provided in the CommandComplete message. That means\n> users have the ability - but not the obligation - to know about failed\n> transactions, and some frameworks or user coding patterns could lead to a\n> commit being done on a failed transaction.\n>\n> Agreed. All of that can be fixed in the driver, though.\n>\n\nOf course it can but we really don't want our users getting one experience\nwith driver A and a different experience with driver B.\n\n>\n> > If we think the current *user-visible* behavior is problematic (commit\n> on failed transaction completes without throwing), then the only remaining\n> question is where this behavior should be fixed - at the server or at the\n> driver. As I wrote above, from the user's perspective it makes no\n> difference - the change would be identical (and just as breaking) either\n> way. 
So while drivers *could* implement the new behavior, what advantages\n> would that have over doing it at the server? Some disadvantages do seem\n> clear (repetition of the logic across each driver - leading to\n> inconsistency across drivers, changing semantics at the driver by turning a\n> non-error into an exception...).\n>\n> The advantage is that it doesn't cause a compatibility break.\n>\n\nSure it does. Any existing code that was relying on the existing semantics\nwould be incompatible.\n\n\n>\n> > > Well, it seems quite possible that there are drivers and applications\n> that don't have this issue; I've never had a problem with this behavior,\n> and I've been using PostgreSQL for something like two decades [...]\n> >\n> > If we are assuming that most user code is already written to avoid\n> committing on failed transactions (by tracking transaction state etc.),\n> then making this change at the server wouldn't affect those applications;\n> the only applications affected would be those that do commit on failed\n> transactions today, and it could be argued that those are likely to be\n> broken today (since drivers today don't really expose the rollback in an\n> accessible/discoverable way).\n>\n> libpq exposes it just fine, so I think you're overgeneralizing here.\n>\n> As I said upthread, I think one of the things that would be pretty\n> badly broken by this is psql -f something.sql, where something.sql\n> contains a series of blocks of the form \"begin; something; something;\n> something; commit;\". Right now whichever transactions succeed get\n> committed. With the proposed change, if one transaction block fails,\n> it'll merge with all of the following blocks.\n\n\nSo how does one figure out what failed and what succeeded ? I would think\nit would be pretty difficult in a large sql script to go back and figure\nout what needed to be repaired. 
Seems to me it would be much easier if\neverything failed.\n\n\n> You may think that\n> nobody is doing this sort of thing, but I think people are, and that\n> they will come after us with pitchforks if we break it.\n>\n\nSo the argument here is that we don't want to annoy some percentage of the\npopulation by doing the right thing ?\n\n\n\nDave Cramer\nwww.postgres.rocks\n", "msg_date": "Mon, 24 Feb 2020 08:16:11 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "If we did change the server behavior, it seems unlikely that\n> every driver would adjust their behavior to the new server behavior\n> all at once and that they would all get it right while also all\n> preserving backward compatibility with current releases in case a\n> newer driver is used with an older server. I don't think that's\n> likely. What would probably happen is that many drivers would ignore\n> the change, leaving applications to cope with the differences between\n> server versions, and some would change the driver behavior\n> categorically, breaking compatibility with older server versions, and\n> some would make mistakes in implementing support for the new behavior.\n> And maybe we would also find that the new behavior isn't ideal for\n> everybody any more than the current behavior is ideal for everybody.\n>\n\nTo test how the driver would currently react if the server did respond with\nan error I made a small change\n\ndiff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c\nindex 0a6f80963b..9405b0cfd9 100644\n--- a/src/backend/tcop/postgres.c\n+++ b/src/backend/tcop/postgres.c\n@@ -2666,8 +2666,7 @@ IsTransactionExitStmt(Node *parsetree)\n {\n TransactionStmt *stmt = (TransactionStmt *) parsetree;\n\n- if (stmt->kind == TRANS_STMT_COMMIT ||\n- stmt->kind == TRANS_STMT_PREPARE ||\n+ if (stmt->kind == TRANS_STMT_PREPARE ||\n stmt->kind == TRANS_STMT_ROLLBACK ||\n stmt->kind == TRANS_STMT_ROLLBACK_TO)\n return true;\n\nI have no idea how badly this breaks other 
things but it does throw an\nerror on commit if the transaction is in error.\nWith absolutely no changes to the driver this code does what I would expect\nand executes the conn.rollback()\n\ntry {\n conn.setAutoCommit(false);\n try {\n conn.createStatement().execute(\"insert into notnullable values (NULL)\");\n } catch (SQLException ex ) {\n ex.printStackTrace();\n //ignore this exception\n }\n conn.commit();\n} catch ( SQLException ex ) {\n ex.printStackTrace();\n conn.rollback();\n}\nconn.close();\n\nDave Cramer\n\nhttp://www.postgres.rocks", "msg_date": "Mon, 24 Feb 2020 08:21:48 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": ">> If we think the current *user-visible* behavior is problematic (commit\non failed transaction completes without throwing), then the only remaining\nquestion is where this behavior should be fixed - at the server or at the\ndriver. As I wrote above, from the user's perspective it makes no\ndifference - the change would be identical (and just as breaking) either\nway. So while drivers *could* implement the new behavior, what advantages\nwould that have over doing it at the server? Some disadvantages do seem\nclear (repetition of the logic across each driver - leading to\ninconsistency across drivers, changing semantics at the driver by turning a\nnon-error into an exception...).\n>\n> The advantage is that it doesn't cause a compatibility break.\n\nI think it's very important to expand the reasoning here from \"server and\nclient\" to \"server, drivers, users\". 
As I wrote above, changing this\nbehavior in a driver is just as much a compatibility break for any user of\nthat driver, as a server change; it's true that PostgreSQL would not be\n\"responsible\" or \"at fault\" but rather the driver writer, but as far as\nangry users go there's very little difference. A break is a break, whether\nit happens because of a PostgreSQL change, or because of a .NET/Java driver\nchange.\n\n> 2. It would be better to fix the driver than the server because this\nbehavior is very old and there are probably many applications (and perhaps\nsome drivers) that rely on it, and changing the server would break them.\n\nAs above, if Dave and I make this change in the JDBC driver and/or Npgsql,\nall applications relying on the previous behavior would be just as broken.\n\n>> If we are assuming that most user code is already written to avoid\ncommitting on failed transactions (by tracking transaction state etc.),\nthen making this change at the server wouldn't affect those applications;\nthe only applications affected would be those that do commit on failed\ntransactions today, and it could be argued that those are likely to be\nbroken today (since drivers today don't really expose the rollback in an\naccessible/discoverable way).\n>\n> libpq exposes it just fine, so I think you're overgeneralizing here.\n\nThe question is more whether typical user applications are actually\nchecking for rollback-on-commit, not whether they theoretically can. 
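[Editor's illustration of the point above — a toy simulation, not any real driver API: under today's server behavior, the only way application code can notice the silent rollback is to inspect the command tag the server returns for COMMIT, which most drivers' commit() methods never surface.]

```python
# Toy stand-in for the server's reply to a COMMIT message.  Today,
# PostgreSQL answers COMMIT in an aborted transaction with the command
# tag "ROLLBACK" rather than an error.
def server_commit(transaction_failed: bool) -> str:
    return "ROLLBACK" if transaction_failed else "COMMIT"

def commit_and_check(transaction_failed: bool) -> str:
    tag = server_commit(transaction_failed)
    if tag != "COMMIT":
        # This is the check applications would have to write explicitly;
        # a typical conn.commit() returns nothing, so the tag is lost.
        raise RuntimeError("transaction was rolled back, not committed")
    return tag

print(commit_and_check(False))   # -> COMMIT
try:
    commit_and_check(True)
except RuntimeError as e:
    print(e)                     # -> transaction was rolled back, not committed
```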
An\nexception is something you have to actively swallow to ignore; an\nadditional returned status saying \"hey, this didn't actually commit\" is\nextremely easy to ignore unless you've specifically been aware of the\nsituation.\n\nEven so, a quick look at psycopg and Ruby (in addition to JDBC and .NET),\ncommit APIs generally don't return anything - this is just how the API\nabstractions are, probably because across databases nothing like that is\nneeded (the expectation is for a non-throwing commit to imply that the\ncommit occurred).\n\nShay\n\nOn Mon, Feb 24, 2020 at 2:34 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Feb 24, 2020 at 1:56 PM Shay Rojansky <roji@roji.org> wrote:\n> > As Dave wrote, the problem here isn't with the driver, but with\n> framework or user-code which swallows the initial exception and allows code\n> to continue to the commit. Npgsql (and I'm sure the JDBC driver too) does\n> surface PostgreSQL errors as exceptions, and internally tracks the\n> transaction status provided in the CommandComplete message. That means\n> users have the ability - but not the obligation - to know about failed\n> transactions, and some frameworks or user coding patterns could lead to a\n> commit being done on a failed transaction.\n>\n> Agreed. All of that can be fixed in the driver, though.\n>\n> > If we think the current *user-visible* behavior is problematic (commit\n> on failed transaction completes without throwing), then the only remaining\n> question is where this behavior should be fixed - at the server or at the\n> driver. As I wrote above, from the user's perspective it makes no\n> difference - the change would be identical (and just as breaking) either\n> way. So while drivers *could* implement the new behavior, what advantages\n> would that have over doing it at the server? 
Some disadvantages do seem\n> clear (repetition of the logic across each\n> driver - leading to\n> inconsistency across drivers, changing semantics at the driver by turning a\n> non-error into an exception...).\n>\n> The advantage is that it doesn't cause a compatibility break.\n>\n> > > Well, it seems quite possible that there are drivers and applications\n> that don't have this issue; I've never had a problem with this behavior,\n> and I've been using PostgreSQL for something like two decades [...]\n> >\n> > If we are assuming that most user code is already written to avoid\n> committing on failed transactions (by tracking transaction state etc.),\n> then making this change at the server wouldn't affect those applications;\n> the only applications affected would be those that do commit on failed\n> transactions today, and it could be argued that those are likely to be\n> broken today (since drivers today don't really expose the rollback in an\n> accessible/discoverable way).\n>\n> libpq exposes it just fine, so I think you're overgeneralizing here.\n>\n> As I said upthread, I think one of the things that would be pretty\n> badly broken by this is psql -f something.sql, where something.sql\n> contains a series of blocks of the form \"begin; something; something;\n> something; commit;\". Right now whichever transactions succeed get\n> committed. With the proposed change, if one transaction block fails,\n> it'll merge with all of the following blocks. You may think that\n> nobody is doing this sort of thing, but I think people are, and that\n> they will come after us with pitchforks if we break it.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>", "msg_date": "Mon, 24 Feb 2020 15:24:19 +0200", "msg_from": "Shay Rojansky <roji@roji.org>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Mon, Feb 24, 2020 at 06:04:28PM +0530, Robert Haas wrote:\n> On Mon, Feb 24, 2020 at 1:56 PM Shay Rojansky <roji@roji.org> wrote:\n> > As Dave wrote, the problem here isn't with the driver, but with framework or user-code which swallows the initial exception and allows code to continue to the commit. Npgsql (and I'm sure the JDBC driver too) does surface PostgreSQL errors as exceptions, and internally tracks the transaction status provided in the CommandComplete message. That means users have the ability - but not the obligation - to know about failed transactions, and some frameworks or user coding patterns could lead to a commit being done on a failed transaction.\n> \n> Agreed. All of that can be fixed in the driver, though.\n> \n> > If we think the current *user-visible* behavior is problematic (commit on failed transaction completes without throwing), then the only remaining question is where this behavior should be fixed - at the server or at the driver. As I wrote above, from the user's perspective it makes no difference - the change would be identical (and just as breaking) either way. So while drivers *could* implement the new behavior, what advantages would that have over doing it at the server? 
Some disadvantages do seem clear (repetition of the logic across each driver - leading to inconsistency across drivers, changing semantics at the driver by turning a non-error into an exception...).\n> \n> The advantage is that it doesn't cause a compatibility break.\n> \n> > > Well, it seems quite possible that there are drivers and applications that don't have this issue; I've never had a problem with this behavior, and I've been using PostgreSQL for something like two decades [...]\n> >\n> > If we are assuming that most user code is already written to avoid committing on failed transactions (by tracking transaction state etc.), then making this change at the server wouldn't affect those applications; the only applications affected would be those that do commit on failed transactions today, and it could be argued that those are likely to be broken today (since drivers today don't really expose the rollback in an accessible/discoverable way).\n> \n> libpq exposes it just fine, so I think you're overgeneralizing here.\n> \n> As I said upthread, I think one of the things that would be pretty\n> badly broken by this is psql -f something.sql, where something.sql\n> contains a series of blocks of the form \"begin; something; something;\n> something; commit;\". Right now whichever transactions succeed get\n> committed. With the proposed change, if one transaction block fails,\n> it'll merge with all of the following blocks. You may think that\n> nobody is doing this sort of thing, but I think people are, and that\n> they will come after us with pitchforks if we break it.\n\nI'm doing it, and I don't know about pitchforks, but I do know about\nsuddenly needing to rewrite (and re-test, and re-integrate, and\nre-test some more) load-bearing code, and I'm not a fan of it.\n\nIf we'd done this from a clean sheet of paper, it would have been the\nright decision. 
We're not there, and haven't been for decades.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Mon, 24 Feb 2020 18:37:47 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 24/02/2020 18:37, David Fetter wrote:\n\n> If we'd done this from a clean sheet of paper, it would have been the\n> right decision. We're not there, and haven't been for decades.\n\nOTOH, it's never too late to do the right thing.\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 24 Feb 2020 18:40:16 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Mon, Feb 24, 2020 at 06:40:16PM +0100, Vik Fearing wrote:\n> On 24/02/2020 18:37, David Fetter wrote:\n> \n> > If we'd done this from a clean sheet of paper, it would have been the\n> > right decision. We're not there, and haven't been for decades.\n> \n> OTOH, it's never too late to do the right thing.\n\nSome right things take a lot of prep work in order to actually be\nright things. This is one of them. 
Defaulting to SERIALIZABLE\nisolation is another.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Mon, 24 Feb 2020 18:53:34 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Sun, Feb 23, 2020 at 7:59 PM Dave Cramer <davecramer@postgres.rocks> wrote:\n>\n> I think the fact that this is a violation of the SQL SPEC lends considerable credence to the argument for changing the behaviour.\n> Since this can lead to losing a transaction I think there is even more reason to look at changing the behaviour.\n\nThe assumption that COMMIT terminates the transaction is going to be\ndeeply embedded into many applications. It's just too convenient not\nto rely on. For example, I maintain a bash based deployment framework\nthat assembles large SQL files from bit and pieces and tacks a COMMIT\nat the end. It's not *that* much work to test for failure and add a\nrollback but it's the kind of surprise our users hate during the\nupgrade process.\n\nOver the years we've tightened the behavior of postgres to be inline\nwith the spec (example: Tom cleaned up the row-wise comparison\nbehavior in 8.2) but in other cases we had to punt (IS NULL/coalesce\ndisagreement over composites for example), identifier case sensitivity\netc. The point is, changing this stuff can be really painful and we\nhave to evaluate the benefits vs the risks.\n\nMy biggest sense of alarm with the proposed change is that it could\nleave applications in a state where the transaction is hanging there\nit could previously assume it had resolved; this could be catastrophic\nin impact in certain real world scenarios. 
Tom is right, a GUC is the\nequivalent of \"sweeping the problem under the rug\" (if you want\nexamples of the long term consequences of that vision read through\nthis: https://dev.mysql.com/doc/refman/8.0/en/timestamp-initialization.html).\n The value proposition of the change is however a little light\nrelative to the risks IMO.\n\nI do think we need to have a good page summarizing non-spec behaviors in\nthe documentation however.\n\nmerlin\n\n\n", "msg_date": "Mon, 24 Feb 2020 13:58:00 -0600", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Merlin>My biggest sense of alarm with the proposed change is that it could\nMerlin>leave applications in a state where the transaction is hanging there\n\nHow come?\nThe spec says commit ends the transaction.\nCan you please clarify where the proposed change leaves a hanging\ntransaction?\n\nJust in case, the proposed change is as follows:\n\npostgres=# begin;\nBEGIN\npostgres=# aslkdfasdf;\nERROR:  syntax error at or near \"aslkdfasdf\"\nLINE 1: aslkdfasdf;\n        ^\npostgres=# commit;\nROLLBACK   <-- this should be replaced with \"ERROR: can't commit the\ntransaction because ...\"\npostgres=# commit;\nWARNING:  there is no transaction in progress  <-- this should be as it is\ncurrently. Even if commit throws an error, the transaction should be\nterminated.\nCOMMIT\n\nNo-one on the thread suggests the transaction must hang forever.\nOf course, commit must terminate the transaction one way or another.\nThe proposed change is to surface the exception if the user tries to commit or\nprepare a transaction that can't be committed.\nNote: the reason does not matter much. 
If deferred constraint fails on\ncommit, then commit itself throws an error.\nMaking commit throw an error in case \"current transaction is aborted\" makes\nperfect sense.\n\nNote: the same thing happens with PREPARE TRANSACTION 'txname'.\nApparently it silently responds with ROLLBACK which is strange as well.\n\nVladimir", "msg_date": "Tue, 25 Feb 2020 01:06:23 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 12/02/2020 00:27, Tom Lane wrote:\n> Vik Fearing <vik@postgresfriends.org> writes:\n> > On 11/02/2020 23:35, Tom Lane wrote:\n> >> So I assume you're imagining that that would leave us still in\n> >> transaction-aborted state, and the session is basically dead in\n> >> the water until the user thinks to issue ROLLBACK instead?\n> \n> > Actually, I was imagining that it would end the transaction as it does\n> > today, just with an error code.\n> > This is backed up by General Rule 9 which says \"The current\n> > SQL-transaction is terminated.\"\n> \n> Hm ... that would be sensible, but I'm not entirely convinced. There\n> are several preceding rules that say that an exception condition is\n> raised, and normally you can stop reading at that point; nothing else\n> is going to happen. 
If COMMIT acts specially in this respect, they\n> ought to say so.\n\nReading some more, I believe they do say so.\n\nSQL:2016-2 Section 4.41 SQL-transactions:\n\n If an SQL-transaction is terminated by a <rollback statement> or\n unsuccessful execution of a <commit statement>, then all changes\n made to SQL-data or schemas by that SQL-transaction are canceled.\n\nThis to me says that an unsuccessful COMMIT still terminates the\ntransaction.\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 24 Feb 2020 23:37:45 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Mon, Feb 24, 2020 at 4:06 PM Vladimir Sitnikov\n<sitnikov.vladimir@gmail.com> wrote:\n>\n> Merlin>My biggest sense of alarm with the proposed change is that it could\n> Merlin>leave applications in a state where the transaction is hanging there\n>\n> How come?\n> The spec says commit ends the transaction.\n> Can you please clarify where the proposed change leaves a hanging transaction?\n>\n> Just in case, the proposed change is as follows:\n>\n> postgres=# begin;\n> BEGIN\n> postgres=# aslkdfasdf;\n> ERROR: syntax error at or near \"aslkdfasdf\"\n> LINE 1: aslkdfasdf;\n> ^\n> postgres=# commit;\n> ROLLBACK <-- this should be replaced with \"ERROR: can't commit the transaction because ...\"\n> postgres=# commit;\n> WARNING: there is no transaction in progress <-- this should be as it is currently. Even if commit throws an error, the transaction should be terminated.\n> COMMIT\n\nOk, you're right; I missed the point in that it's not nearly as bad as\nI thought you were suggesting (to treat commit as bad statement) but\nthe transaction would still terminate. 
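[Editor's illustration of the behavior conceded above — a hedged, simulated sketch with no real network protocol: every ReadyForQuery message in the v3 wire protocol carries a transaction-status byte, 'I' (idle), 'T' (in transaction), or 'E' (failed transaction). A driver that tracks this byte can raise client-side when COMMIT is issued in the 'E' state while still treating the transaction as terminated, with no server change at all.]

```python
class FailedTransactionError(Exception):
    pass

class MiniDriver:
    """Simulated driver that mimics tracking the ReadyForQuery status byte."""

    def __init__(self):
        self.tx_status = "I"  # 'I' idle, 'T' in transaction, 'E' failed

    def execute(self, sql: str, fails: bool = False):
        if self.tx_status == "I":
            self.tx_status = "T"   # first statement opens a transaction
        if fails:
            self.tx_status = "E"   # server reported an error; tx is aborted

    def commit(self):
        status = self.tx_status
        self.tx_status = "I"       # COMMIT always ends the transaction
        if status == "E":
            # The server would answer ROLLBACK; surface it as an error.
            raise FailedTransactionError(
                "commit of a failed transaction was rolled back")

d = MiniDriver()
d.execute("insert into t values (1)")
d.commit()                                       # succeeds quietly
d.execute("insert into t values (null)", fails=True)
try:
    d.commit()
except FailedTransactionError as e:
    print(e)              # -> commit of a failed transaction was rolled back
print(d.tx_status)        # -> I  (the session is usable again)
```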
Still, this is very sensitive\nstuff, do you think most common connection poolers would continue to\nwork after making this change?\n\nmerlin\n\n\n", "msg_date": "Mon, 24 Feb 2020 16:59:37 -0600", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Mon, 24 Feb 2020 at 17:59, Merlin Moncure <mmoncure@gmail.com> wrote:\n\n> On Mon, Feb 24, 2020 at 4:06 PM Vladimir Sitnikov\n> <sitnikov.vladimir@gmail.com> wrote:\n> >\n> > Merlin>My biggest sense of alarm with the proposed change is that it\n> could\n> > Merlin>leave applications in a state where the transaction is hanging\n> there\n> >\n> > How come?\n> > The spec says commit ends the transaction.\n> > Can you please clarify where the proposed change leaves a hanging\n> transaction?\n> >\n> > Just in case, the proposed change is as follows:\n> >\n> > postgres=# begin;\n> > BEGIN\n> > postgres=# aslkdfasdf;\n> > ERROR: syntax error at or near \"aslkdfasdf\"\n> > LINE 1: aslkdfasdf;\n> > ^\n> > postgres=# commit;\n> > ROLLBACK <-- this should be replaced with \"ERROR: can't commit the\n> transaction because ...\"\n> > postgres=# commit;\n> > WARNING: there is no transaction in progress <-- this should be as it\n> is currently. Even if commit throws an error, the transaction should be\n> terminated.\n> > COMMIT\n>\n> Ok, you're right; I missed the point in that it's not nearly as bad as\n> I thought you were suggesting (to treat commit as bad statement) but\n> the transaction would still terminate. Still, this is very sensitive\n> stuff, do you think most common connection poolers would continue to\n> work after making this change?\n>\n\nDon't see why not. 
All that happens is that an error message is emitted by\nthe server on commit instead of silently rolling back\n\n\nDave Cramer\nhttps://www.postgres.rocks", "msg_date": "Mon, 24 Feb 2020 18:22:21 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": ">do you think most common connection poolers would continue to\n>work after making this change?\n\nOf course, they should.\nThere are existing cases when commit responds with an error: deferrable\nconstraints.\n\nThere's nothing new except it is suggested to make the behavior of\ncommit/prepare failure (e.g. \"can't commit the transaction because...\")\nconsistent with other commit failures (e.g. deferred violation).\n\nVladimir", "msg_date": "Tue, 25 Feb 2020 08:58:43 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Mon, Feb 24, 2020 at 6:40 PM Dave Cramer <davecramer@postgres.rocks> wrote:\n> Fair enough. What I meant to say was that the driver isn't in the business of providing different semantics than the server provides.\n\nStill don't agree. The server doesn't make any decision about what\nsemantics the driver has to provide. The driver can do whatever it\nwants. If what it does makes users sad, then maybe it ought to do\nsomething different.\n\nNow, of course, it's also true that if what the server does makes\nusers sad, maybe the server should do something different. 
But I think\nyou're vastly underestimating the likely impact on other users and\ndrivers of making this change. That is a guess, and like any guess,\nmay be wrong. But it is still what I think.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 25 Feb 2020 12:15:39 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Robert>Now, of course, it's also true that if what the server does makes\nRobert>users sad, maybe the server should do something different\n\nThe server makes users sad as it reports the same end result (==\"commit\nfailed\") differently.\nSometimes the server produces ERROR, and sometimes the server produces \"OK,\nthe transaction was rolled back\".\n\nThe users do expect that commit might fail, and they don't really expect\nthat sometimes commit can be silently converted to a rollback.\n\nRobert>BEGIN;\nRobert>-- do stuff\nRobert>COMMIT;\nRobert>BEGIN;\nRobert>-- do more stuff\nRobert>COMMIT;\n\nRobert>...and they run these scripts by piping them into psql. 
Now, if the\nRobert>COMMIT leaves the session in a transaction state,\n\nNoone suggested that \"commit leaves the session in a transaction state\".\nOf course, every commit should terminate the transaction.\nHowever, if a commit fails (for any reason), it should produce the relevant\nERROR that explains what went wrong rather than silently doing a rollback.\n\nVladimir", "msg_date": "Tue, 25 Feb 2020 10:17:40 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Tue, Feb 25, 2020 at 12:47 PM Vladimir Sitnikov\n<sitnikov.vladimir@gmail.com> wrote:\n> Noone suggested that \"commit leaves the session in a transaction state\".\n> Of course, every commit should terminate the transaction.\n> However, if a commit fails (for any reason), it should produce the relevant ERROR that explains what went wrong rather than silently doing a rollback.\n\nOK, I guess I misinterpreted the proposal. 
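(To restate the clarified proposal as a transcript, reusing the syntax-error example from earlier in the thread; the proposed error wording is purely illustrative, not an agreed message:)

```sql
BEGIN;
aslkdfasdf;   -- ERROR: syntax error at or near "aslkdfasdf"
COMMIT;
-- today:    COMMIT reports no error; the command tag is ROLLBACK
-- proposed: COMMIT raises an ERROR (e.g. "transaction cannot be committed"),
--           and the transaction is terminated in both cases
```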
That would be much less\nproblematic -- any driver or application that can't handle ERROR in\nresponse to an attempted COMMIT would be broken already.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 25 Feb 2020 13:25:03 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Tom>I think we still end up concluding that altering this behavior has more\nTom>downside than upside.\n\nWhat is the downside?\n\nApplications, drivers, and poolers already expect that commit might produce\nan error and terminate the transaction at the same time.\n\n\"The data is successfully committed to the database if and only if commit\nreturns without error\".\n^^^ the above is way easier to reason about than \"user must check multiple\nunrelated outcomes to tell if the changes are committed or not\".\n\nVladimir", "msg_date": "Tue, 25 Feb 2020 11:42:47 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Tue, 2020-02-25 at 13:25 +0530, Robert Haas wrote:\n> On Tue, Feb 25, 2020 at 12:47 PM Vladimir Sitnikov\n> <sitnikov.vladimir@gmail.com> wrote:\n> > Noone suggested that \"commit leaves the session in a transaction state\".\n> > Of course, every commit should terminate the transaction.\n> > However, if a commit fails 
(for any reason), it should produce the relevant ERROR that explains what went wrong rather than silently doing a rollback.\n> \n> OK, I guess I misinterpreted the proposal. That would be much less\n> problematic -- any driver or application that can't handle ERROR in\n> response to an attempted COMMIT would be broken already.\n\nI agree with that.\n\nThere is always some chance that someone relies on COMMIT not\nthrowing an error when it rolls back, but I think that throwing an\nerror is actually less astonishing than *not* throwing one.\n\nSo, +1 for the proposal from me.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 25 Feb 2020 12:11:21 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Just one more data point: drivers do allow users to execute queries in a\nfree form.\nThat is the user might execute /*comment*/commit/*comment*/ as a free-form\nSQL, and they would expect that the resulting\n behaviour should be exactly the same as .commit() API call (==silent\nrollback is converted to an exception).\n\nThat is drivers can add extra logic into .commit() API implementation,\nhowever, turning free-form SQL into exceptions\nis hard to do consistently from the driver side.\nIt is not like \"check the response from .commit() result\".\nIt is more like \"don't forget to parse user-provided SQL and verify if it\nis semantically equivalent to commit\"\n\nPushing full SQL parser to the driver is not the best idea taking into the\naccount the extensibility the core has.\n\nVladimir", "msg_date": "Wed, 26 Feb 2020 21:23:25 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 24.02.2020 at 13:34, Robert Haas wrote:\n> As I said upthread, I think one of the things that would be pretty\n> badly broken by this is psql -f something.sql, where something.sql\n> contains a series of blocks of the form \"begin; something; something;\n> something; commit;\". Right now whichever transactions succeed get\n> committed. With the proposed change, if one transaction block fails,\n> it'll merge with all of the following blocks.\n\n\nNo, that's *not* true.\n\nThe only difference with the proposed change would be another error in \nthe logs for the commit following the block with the failed insert. \nNote: Nobody has suggested that the commit that returns with an error \nshould not end the transaction. Do just the same as with any other \ncommit error in response to a constraint violation!\n\n\nOn 24.02.2020 at 18:53, David Fetter wrote:\n> On Mon, Feb 24, 2020 at 06:40:16PM +0100, Vik Fearing wrote:\n>> On 24/02/2020 18:37, David Fetter wrote:\n>>> If we'd done this from a clean sheet of paper, it would have been the\n>>> right decision. We're not there, and haven't been for decades.\n>> OTOH, it's never too late to do the right thing.\n> Some right things take a lot of prep work in order to actually be\n> right things. This is one of them. 
Defaulting to SERIALIZABLE\n> isolation is another.\n\n\nHere the proposed change is really much, much less noticeable - please \nreport the error (again) instead of giving an incomprehensible status \ncode. Nothing else must be changed - the failing commit should do the \nrollback and end the transaction - but it should report this situation \nas an error!\n\nRegards Bernhard\n\n\n\n\n", "msg_date": "Wed, 26 Feb 2020 19:33:56 +0100", "msg_from": "\"Haumacher, Bernhard\" <haui@haumacher.de>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 25/02/2020 12:11, Laurenz Albe wrote:\n> On Tue, 2020-02-25 at 13:25 +0530, Robert Haas wrote:\n>> On Tue, Feb 25, 2020 at 12:47 PM Vladimir Sitnikov\n>> <sitnikov.vladimir@gmail.com> wrote:\n>>> Noone suggested that \"commit leaves the session in a transaction state\".\n>>> Of course, every commit should terminate the transaction.\n>>> However, if a commit fails (for any reason), it should produce the relevant ERROR that explains what went wrong rather than silently doing a rollback.\n>>\n>> OK, I guess I misinterpreted the proposal. That would be much less\n>> problematic -- any driver or application that can't handle ERROR in\n>> response to an attempted COMMIT would be broken already.\n> \n> I agree with that.\n> \n> There is always some chance that someone relies on COMMIT not\n> throwing an error when it rolls back, but I think that throwing an\n> error is actually less astonishing than *not* throwing one.\n> \n> So, +1 for the proposal from me.\n\nI started this thread for some discussion and hopefully a documentation\npatch. But now I have moved firmly into the +1 camp. 
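(A sketch of the inconsistency behind the +1, assuming a table t with a unique or primary key constraint; the same SQLSTATE 23505 surfaces in both cases:)

```sql
-- Immediate (default) constraint: the error arrives at INSERT,
-- and the later COMMIT reports ROLLBACK without an error.
BEGIN;
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (1);  -- ERROR (23505); the transaction is now aborted
COMMIT;                    -- no error; command tag is ROLLBACK

-- Deferred constraint: the same 23505 error arrives at COMMIT itself,
-- and the transaction likewise ends there.
```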
COMMIT should\nerror if it can't commit, and then terminate the (aborted) transaction.\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 26 Feb 2020 19:46:33 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Wed, 26 Feb 2020 at 13:46, Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 25/02/2020 12:11, Laurenz Albe wrote:\n> > On Tue, 2020-02-25 at 13:25 +0530, Robert Haas wrote:\n> >> On Tue, Feb 25, 2020 at 12:47 PM Vladimir Sitnikov\n> >> <sitnikov.vladimir@gmail.com> wrote:\n> >>> Noone suggested that \"commit leaves the session in a transaction\n> state\".\n> >>> Of course, every commit should terminate the transaction.\n> >>> However, if a commit fails (for any reason), it should produce the\n> relevant ERROR that explains what went wrong rather than silently doing a\n> rollback.\n> >>\n> >> OK, I guess I misinterpreted the proposal. That would be much less\n> >> problematic -- any driver or application that can't handle ERROR in\n> >> response to an attempted COMMIT would be broken already.\n> >\n> > I agree with that.\n> >\n> > There is always some chance that someone relies on COMMIT not\n> > throwing an error when it rolls back, but I think that throwing an\n> > error is actually less astonishing than *not* throwing one.\n> >\n> > So, +1 for the proposal from me.\n>\n> I started this thread for some discussion and hopefully a documentation\n> patch. But now I have moved firmly into the +1 camp. 
COMMIT should\n> error if it can't commit, and then terminate the (aborted) transaction.\n> --\n> Vik Fearing\n>\n\nOK, here is a patch that actually doesn't leave the transaction in a failed\nstate but emits the error and rolls back the transaction.\n\nThis is far from complete as it fails a number of tests and does not cover\nall of the possible paths.\nBut I'd like to know if this is strategy will be acceptable ?\nWhat it does is create another server error level that will emit the error\nand return as opposed to not returning.\nI honestly haven't given much thought to the error message. At this point I\njust want the nod as to how to do it.", "msg_date": "Wed, 26 Feb 2020 15:51:39 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Dave Cramer <davecramer@postgres.rocks> writes:\n> OK, here is a patch that actually doesn't leave the transaction in a failed\n> state but emits the error and rolls back the transaction.\n\n> This is far from complete as it fails a number of tests and does not cover\n> all of the possible paths.\n> But I'd like to know if this is strategy will be acceptable ?\n\nI really don't think that changing the server's behavior here is going to\nfly. The people who are unhappy that we changed it are going to vastly\noutnumber the people who are happy. Even the people who are happy are not\ngoing to find that their lives are improved all that much, because they'll\nstill have to deal with old servers with the old behavior for the\nforeseeable future.\n\nEven granting that a behavioral incompatibility is acceptable, I'm not\nsure how a client is supposed to be sure that this \"error\" means that a\nrollback happened, as opposed to real errors that prevented any state\nchange from occurring. (A trivial example of that is misspelling the\nCOMMIT command; which I'll grant is unlikely in practice. 
But there are\nless-trivial examples involving internal server malfunctions.) The only\nway to be sure you're out of the transaction is to check the transaction\nstate that's sent along with ReadyForQuery ... but if you need to do\nthat, it's not clear why we should change the server behavior at all.\n\nI also don't think that this scales to the case of subtransaction\ncommit/rollback. That should surely act the same, but your patch doesn't\nchange it.\n\nLastly, introducing a new client-visible message level seems right out.\nThat's a very fundamental protocol break, independently of all else.\nAnd if it's \"not really an error\", then how is this any more standards\ncompliant than before?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Feb 2020 16:22:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 26/02/2020 22:22, Tom Lane wrote:\n> Dave Cramer <davecramer@postgres.rocks> writes:\n>> OK, here is a patch that actually doesn't leave the transaction in a failed\n>> state but emits the error and rolls back the transaction.\n> \n>> This is far from complete as it fails a number of tests and does not cover\n>> all of the possible paths.\n>> But I'd like to know if this is strategy will be acceptable ?\n> \n> I really don't think that changing the server's behavior here is going to\n> fly. The people who are unhappy that we changed it are going to vastly\n> outnumber the people who are happy. 
Even the people who are happy are not\n> going to find that their lives are improved all that much, because they'll\n> still have to deal with old servers with the old behavior for the\n> foreseeable future.\n\nDealing with old servers for a while is something that everyone is used to.\n\n> Even granting that a behavioral incompatibility is acceptable, I'm not\n> sure how a client is supposed to be sure that this \"error\" means that a\n> rollback happened, as opposed to real errors that prevented any state\n> change from occurring.\n\nBecause the error is a Class 40 — Transaction Rollback.\n\nMy original example was:\n\npostgres=!# commit;\nERROR: 40P00: transaction cannot be committed\nDETAIL: First error was \"42601: syntax error at or near \"error\"\".\n\n\n> (A trivial example of that is misspelling the\n> COMMIT command; which I'll grant is unlikely in practice. But there are\n> less-trivial examples involving internal server malfunctions.)\n\nMisspelling the COMMIT command is likely a syntax error, which is Class\n42. Can you give one of those less-trivial examples please?\n\n> The only\n> way to be sure you're out of the transaction is to check the transaction\n> state that's sent along with ReadyForQuery ... but if you need to do\n> that, it's not clear why we should change the server behavior at all.\n\nHow does this differ from the deferred constraint violation example you\nprovided early on in the thread? That gave the error 23505 and\nterminated the transaction. If you run the same scenario with the\nprimary key immediate, you get the *exact same error* but the\ntransaction is *not* terminated!\n\nI won't go so far as to suggest we change all COMMIT errors to Class 40\n(as the spec says), but I'm thinking it very loudly.\n\n> I also don't think that this scales to the case of subtransaction\n> commit/rollback. 
That should surely act the same, but your patch doesn't\n> change it.\n\nHow does one commit a subtransaction?\n\n> Lastly, introducing a new client-visible message level seems right out.\n> That's a very fundamental protocol break, independently of all else.\n\nYeah, this seemed like a bad idea to me, too.\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 26 Feb 2020 22:57:19 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Wed, 26 Feb 2020 at 16:57, Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 26/02/2020 22:22, Tom Lane wrote:\n> > Dave Cramer <davecramer@postgres.rocks> writes:\n> >> OK, here is a patch that actually doesn't leave the transaction in a\n> failed\n> >> state but emits the error and rolls back the transaction.\n> >\n> >> This is far from complete as it fails a number of tests and does not\n> cover\n> >> all of the possible paths.\n> >> But I'd like to know if this is strategy will be acceptable ?\n> >\n> > I really don't think that changing the server's behavior here is going to\n> > fly. The people who are unhappy that we changed it are going to vastly\n> > outnumber the people who are happy.\n\n\nI'm not convinced of this. I doubt we actually have any real numbers?\n\nEven the people who are happy are not\n> > going to find that their lives are improved all that much, because\n> they'll\n> > still have to deal with old servers with the old behavior for the\n> > foreseeable future.\n>\n> Dealing with old servers for a while is something that everyone is used to.\n>\nClients can code around this as well for old servers. 
This is something\nthat is more palatable\nif the server defines this behaviour.\n\n>\n> > Even granting that a behavioral incompatibility is acceptable, I'm not\n> > sure how a client is supposed to be sure that this \"error\" means that a\n> > rollback happened, as opposed to real errors that prevented any state\n> > change from occurring.\n>\n> Because the error is a Class 40 — Transaction Rollback.\n>\n\nI think his point is that the error is emitted before we actually do the\nrollback and it could fail.\n\n\n>\n> My original example was:\n>\n> postgres=!# commit;\n> ERROR: 40P00: transaction cannot be committed\n> DETAIL: First error was \"42601: syntax error at or near \"error\"\".\n>\n>\n> > (A trivial example of that is misspelling the\n> > COMMIT command; which I'll grant is unlikely in practice. But there are\n> > less-trivial examples involving internal server malfunctions.)\n>\n> Misspelling the COMMIT command is likely a syntax error, which is Class\n> 42. Can you give one of those less-trivial examples please?\n>\n> > The only\n> > way to be sure you're out of the transaction is to check the transaction\n> > state that's sent along with ReadyForQuery ... but if you need to do\n> > that, it's not clear why we should change the server behavior at all.\n>\n\nI guess the error has to be sent after the rollback completes.\n\n>\n> How does this differ from the deferred constraint violation example you\n> provided early on in the thread? That gave the error 23505 and\n> terminated the transaction. If you run the same scenario with the\n> primary key immediate, you get the *exact same error* but the\n> transaction is *not* terminated!\n>\n> I won't go so far as to suggest we change all COMMIT errors to Class 40\n> (as the spec says), but I'm thinking it very loudly.\n>\n> > I also don't think that this scales to the case of subtransaction\n> > commit/rollback. 
That should surely act the same, but your patch doesn't\n> > change it.\n>\n> How does one commit a subtransaction?\n>\n> > Lastly, introducing a new client-visible message level seems right out.\n> > That's a very fundamental protocol break, independently of all else.\n>\n> Yeah, this seemed like a bad idea to me, too.\n>\n\nPretty sure I can code around this.\n\n-- \n> Vik Fearing\n>\n\nOn Wed, 26 Feb 2020 at 16:57, Vik Fearing <vik@postgresfriends.org> wrote:On 26/02/2020 22:22, Tom Lane wrote:\n> Dave Cramer <davecramer@postgres.rocks> writes:\n>> OK, here is a patch that actually doesn't leave the transaction in a failed\n>> state but emits the error and rolls back the transaction.\n> \n>> This is far from complete as it fails a number of tests  and does not cover\n>> all of the possible paths.\n>> But I'd like to know if this is strategy will be acceptable ?\n> \n> I really don't think that changing the server's behavior here is going to\n> fly.  The people who are unhappy that we changed it are going to vastly\n> outnumber the people who are happy.  I'm not convinced of this. I doubt we actually have any real numbers?  Even the people who are happy are not\n> going to find that their lives are improved all that much, because they'll\n> still have to deal with old servers with the old behavior for the\n> foreseeable future.\n\nDealing with old servers for a while is something that everyone is used to.Clients can code around this as well for old servers. This is something that is more palatable if the server defines this behaviour.   \n\n> Even granting that a behavioral incompatibility is acceptable, I'm not\n> sure how a client is supposed to be sure that this \"error\" means that a\n> rollback happened, as opposed to real errors that prevented any state\n> change from occurring.\n\nBecause the error is a Class 40 — Transaction Rollback.I think his point is that the error is emitted before we actually do the rollback and it could fail. 
\n\nMy original example was:\n\npostgres=!# commit;\nERROR:  40P00: transaction cannot be committed\nDETAIL:  First error was \"42601: syntax error at or near \"error\"\".\n\n\n> (A trivial example of that is misspelling the\n> COMMIT command; which I'll grant is unlikely in practice.  But there are\n> less-trivial examples involving internal server malfunctions.)\n\nMisspelling the COMMIT command is likely a syntax error, which is Class\n42.  Can you give one of those less-trivial examples please?\n\n> The only\n> way to be sure you're out of the transaction is to check the transaction\n> state that's sent along with ReadyForQuery ... but if you need to do\n> that, it's not clear why we should change the server behavior at all.I guess the error has to be sent after the rollback completes.\n\nHow does this differ from the deferred constraint violation example you\nprovided early on in the thread?  That gave the error 23505 and\nterminated the transaction.  If you run the same scenario with the\nprimary key immediate, you get the *exact same error* but the\ntransaction is *not* terminated!\n\nI won't go so far as to suggest we change all COMMIT errors to Class 40\n(as the spec says), but I'm thinking it very loudly.\n\n> I also don't think that this scales to the case of subtransaction\n> commit/rollback.  That should surely act the same, but your patch doesn't\n> change it.\n\nHow does one commit a subtransaction?\n\n> Lastly, introducing a new client-visible message level seems right out.\n> That's a very fundamental protocol break, independently of all else.\n\nYeah, this seemed like a bad idea to me, too. Pretty sure I can code around this.  
\n-- \nVik Fearing", "msg_date": "Wed, 26 Feb 2020 17:11:56 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Wed, Feb 26, 2020 at 11:53 PM Vladimir Sitnikov\n<sitnikov.vladimir@gmail.com> wrote:\n> Pushing full SQL parser to the driver is not the best idea taking into the account the extensibility the core has.\n\nThat wouldn't be necessary. You could just do strcmp() on the command tag.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 27 Feb 2020 09:34:34 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "But if the SQL is /*commit*/rollback, then the driver should not raise an\nexception. The exception should be only for the case when the client asks\nto commit and the database can't do that.\n\nThe resulting command tag alone is not enough.\n\nVladimir\n\nBut if the SQL is /*commit*/rollback, then the driver should not raise an exception. 
The exception should be only for the case when the client asks to commit and the database can't do that.The resulting command tag alone is not enough.Vladimir", "msg_date": "Thu, 27 Feb 2020 07:11:26 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Wed, 26 Feb 2020 at 16:57, Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 26/02/2020 22:22, Tom Lane wrote:\n> > Dave Cramer <davecramer@postgres.rocks> writes:\n> >> OK, here is a patch that actually doesn't leave the transaction in a\n> failed\n> >> state but emits the error and rolls back the transaction.\n> >\n> >> This is far from complete as it fails a number of tests and does not\n> cover\n> >> all of the possible paths.\n> >> But I'd like to know if this is strategy will be acceptable ?\n> >\n> > I really don't think that changing the server's behavior here is going to\n> > fly. The people who are unhappy that we changed it are going to vastly\n> > outnumber the people who are happy. Even the people who are happy are\n> not\n> > going to find that their lives are improved all that much, because\n> they'll\n> > still have to deal with old servers with the old behavior for the\n> > foreseeable future.\n>\n> Dealing with old servers for a while is something that everyone is used to.\n>\n> > Even granting that a behavioral incompatibility is acceptable, I'm not\n> > sure how a client is supposed to be sure that this \"error\" means that a\n> > rollback happened, as opposed to real errors that prevented any state\n> > change from occurring.\n>\n> Because the error is a Class 40 — Transaction Rollback.\n>\n> My original example was:\n>\n> postgres=!# commit;\n> ERROR: 40P00: transaction cannot be committed\n> DETAIL: First error was \"42601: syntax error at or near \"error\"\".\n>\n>\n> > (A trivial example of that is misspelling the\n> > COMMIT command; which I'll grant is unlikely in practice. 
But there are\n> > less-trivial examples involving internal server malfunctions.)\n>\n> Misspelling the COMMIT command is likely a syntax error, which is Class\n> 42. Can you give one of those less-trivial examples please?\n>\n> > The only\n> > way to be sure you're out of the transaction is to check the transaction\n> > state that's sent along with ReadyForQuery ... but if you need to do\n> > that, it's not clear why we should change the server behavior at all.\n>\n> How does this differ from the deferred constraint violation example you\n> provided early on in the thread? That gave the error 23505 and\n> terminated the transaction. If you run the same scenario with the\n> primary key immediate, you get the *exact same error* but the\n> transaction is *not* terminated!\n>\n> I won't go so far as to suggest we change all COMMIT errors to Class 40\n> (as the spec says), but I'm thinking it very loudly.\n>\n> > I also don't think that this scales to the case of subtransaction\n> > commit/rollback. 
That should surely act the same, but your patch doesn't\n> > change it.\n>\n> How does one commit a subtransaction?\n>\n> > Lastly, introducing a new client-visible message level seems right out.\n> > That's a very fundamental protocol break, independently of all else.\n>\n> Yeah, this seemed like a bad idea to me, too.\n>\n\nOk, here is a much less obtrusive solution thanks to Vladimir.\n\nFWIW, only 10 of 196 tests fail.\nDave Cramer\nwww.postgres.rocks\n\n>\n>", "msg_date": "Thu, 27 Feb 2020 07:44:14 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Thu, 27 Feb 2020 at 07:44, Dave Cramer <davecramer@postgres.rocks> wrote:\n\n>\n>\n>\n> On Wed, 26 Feb 2020 at 16:57, Vik Fearing <vik@postgresfriends.org> wrote:\n>\n>> On 26/02/2020 22:22, Tom Lane wrote:\n>> > Dave Cramer <davecramer@postgres.rocks> writes:\n>> >> OK, here is a patch that actually doesn't leave the transaction in a\n>> failed\n>> >> state but emits the error and rolls back the transaction.\n>> >\n>> >> This is far from complete as it fails a number of tests and does not\n>> cover\n>> >> all of the possible paths.\n>> >> But I'd like to know if this is strategy will be acceptable ?\n>> >\n>> > I really don't think that changing the server's behavior here is going\n>> to\n>> > fly. The people who are unhappy that we changed it are going to vastly\n>> > outnumber the people who are happy. 
Even the people who are happy are\n>> not\n>> > going to find that their lives are improved all that much, because\n>> they'll\n>> > still have to deal with old servers with the old behavior for the\n>> > foreseeable future.\n>>\n>> Dealing with old servers for a while is something that everyone is used\n>> to.\n>>\n>> > Even granting that a behavioral incompatibility is acceptable, I'm not\n>> > sure how a client is supposed to be sure that this \"error\" means that a\n>> > rollback happened, as opposed to real errors that prevented any state\n>> > change from occurring.\n>>\n>> Because the error is a Class 40 — Transaction Rollback.\n>>\n>> My original example was:\n>>\n>> postgres=!# commit;\n>> ERROR: 40P00: transaction cannot be committed\n>> DETAIL: First error was \"42601: syntax error at or near \"error\"\".\n>>\n>>\n>> > (A trivial example of that is misspelling the\n>> > COMMIT command; which I'll grant is unlikely in practice. But there are\n>> > less-trivial examples involving internal server malfunctions.)\n>>\n>> Misspelling the COMMIT command is likely a syntax error, which is Class\n>> 42. Can you give one of those less-trivial examples please?\n>>\n>> > The only\n>> > way to be sure you're out of the transaction is to check the transaction\n>> > state that's sent along with ReadyForQuery ... but if you need to do\n>> > that, it's not clear why we should change the server behavior at all.\n>>\n>> How does this differ from the deferred constraint violation example you\n>> provided early on in the thread? That gave the error 23505 and\n>> terminated the transaction. If you run the same scenario with the\n>> primary key immediate, you get the *exact same error* but the\n>> transaction is *not* terminated!\n>>\n>> I won't go so far as to suggest we change all COMMIT errors to Class 40\n>> (as the spec says), but I'm thinking it very loudly.\n>>\n>> > I also don't think that this scales to the case of subtransaction\n>> > commit/rollback. 
That should surely act the same, but your patch\n>> doesn't\n>> > change it.\n>>\n>> How does one commit a subtransaction?\n>>\n>> > Lastly, introducing a new client-visible message level seems right out.\n>> > That's a very fundamental protocol break, independently of all else.\n>>\n>> Yeah, this seemed like a bad idea to me, too.\n>>\n>\n> Ok, here is a much less obtrusive solution thanks to Vladimir.\n>\n\nStill had to mess with error levels since commit and chain needs the\nexisting context to succeed.\n\nAfter fixing up the tests only 1 still failing.\n\n\nDave Cramer\nhttp://www.postgres.rocks", "msg_date": "Thu, 27 Feb 2020 11:30:17 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Thu, 27 Feb 2020 at 08:30, Dave Cramer <davecramer@postgres.rocks> wrote:\n\n>\n>\n> On Thu, 27 Feb 2020 at 07:44, Dave Cramer <davecramer@postgres.rocks>\n> wrote:\n>\n>>\n>>\n>>\n>> On Wed, 26 Feb 2020 at 16:57, Vik Fearing <vik@postgresfriends.org>\n>> wrote:\n>>\n>>> On 26/02/2020 22:22, Tom Lane wrote:\n>>> > Dave Cramer <davecramer@postgres.rocks> writes:\n>>> >> OK, here is a patch that actually doesn't leave the transaction in a\n>>> failed\n>>> >> state but emits the error and rolls back the transaction.\n>>> >\n>>> >> This is far from complete as it fails a number of tests and does not\n>>> cover\n>>> >> all of the possible paths.\n>>> >> But I'd like to know if this is strategy will be acceptable ?\n>>> >\n>>> > I really don't think that changing the server's behavior here is going\n>>> to\n>>> > fly. The people who are unhappy that we changed it are going to vastly\n>>> > outnumber the people who are happy. 
Even the people who are happy are\n>>> not\n>>> > going to find that their lives are improved all that much, because\n>>> they'll\n>>> > still have to deal with old servers with the old behavior for the\n>>> > foreseeable future.\n>>>\n>>> Dealing with old servers for a while is something that everyone is used\n>>> to.\n>>>\n>>> > Even granting that a behavioral incompatibility is acceptable, I'm not\n>>> > sure how a client is supposed to be sure that this \"error\" means that a\n>>> > rollback happened, as opposed to real errors that prevented any state\n>>> > change from occurring.\n>>>\n>>> Because the error is a Class 40 — Transaction Rollback.\n>>>\n>>> My original example was:\n>>>\n>>> postgres=!# commit;\n>>> ERROR: 40P00: transaction cannot be committed\n>>> DETAIL: First error was \"42601: syntax error at or near \"error\"\".\n>>>\n>>>\n>>> > (A trivial example of that is misspelling the\n>>> > COMMIT command; which I'll grant is unlikely in practice. But there\n>>> are\n>>> > less-trivial examples involving internal server malfunctions.)\n>>>\n>>> Misspelling the COMMIT command is likely a syntax error, which is Class\n>>> 42. Can you give one of those less-trivial examples please?\n>>>\n>>> > The only\n>>> > way to be sure you're out of the transaction is to check the\n>>> transaction\n>>> > state that's sent along with ReadyForQuery ... but if you need to do\n>>> > that, it's not clear why we should change the server behavior at all.\n>>>\n>>> How does this differ from the deferred constraint violation example you\n>>> provided early on in the thread? That gave the error 23505 and\n>>> terminated the transaction. 
If you run the same scenario with the\n>>> primary key immediate, you get the *exact same error* but the\n>>> transaction is *not* terminated!\n>>>\n>>> I won't go so far as to suggest we change all COMMIT errors to Class 40\n>>> (as the spec says), but I'm thinking it very loudly.\n>>>\n>>> > I also don't think that this scales to the case of subtransaction\n>>> > commit/rollback. That should surely act the same, but your patch\n>>> doesn't\n>>> > change it.\n>>>\n>>> How does one commit a subtransaction?\n>>>\n>>> > Lastly, introducing a new client-visible message level seems right out.\n>>> > That's a very fundamental protocol break, independently of all else.\n>>>\n>>> Yeah, this seemed like a bad idea to me, too.\n>>>\n>>\n>> Ok, here is a much less obtrusive solution thanks to Vladimir.\n>>\n>\n> Still had to mess with error levels since commit and chain needs the\n> existing context to succeed.\n>\n> After fixing up the tests only 1 still failing.\n>\n\n\nThere have been some arguments that the client can fix this easily.\n\nTurns out it is not as easy as one might think.\n\nIf the client (in this case JDBC) uses conn.commit() then yes relatively\neasy as we know that commit is being executed.\n\nhowever if the client executes commit using direct SQL and possibly\nmultiplexes a number of commands we would have to parse the SQL to figure\nout what is being sent. This could include a column named commit_date or a\ncomment with commit embedded in it. It really doesn't make sense to have a\nfull fledged PostgreSQL SQL parser in every client. 
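For example, a simple textual check for the word "commit" cannot tell these statements apart (the table and column names here are only illustrative):

```sql
COMMIT;                            -- a real COMMIT
/*commit*/ROLLBACK;                -- contains "commit", but executes a ROLLBACK
SELECT commit_date FROM releases;  -- "commit" appears only inside an identifier
```

Only a real SQL parser can classify these reliably.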
This is something the\nserver does very well.\n\nThere has been another argument that we can simply check the transaction\nstate after we get the ReadyForQuery response, however this is set to IDLE\nafter the subsequent ROLLBACK so that doesn't work either.\n\nAdditionally in section 52.2.2 of the docs it states:\n\nA frontend must be prepared to accept ErrorResponse and NoticeResponse\nmessages whenever it is expecting any other type of message. See also\nSection 52.2.6 concerning messages that the backend might generate due to\noutside events.\n\nRecommended practice is to code frontends in a state-machine style that\nwill accept any message type at any time that it could make sense, rather\nthan wiring in assumptions about the exact sequence of messages.\n\nSeems to me that this behaviour is already documented?\n\nDave Cramer\nhttp://www.postgres.rocks\n\n\n\n>", "msg_date": "Fri, 6 Mar 2020 08:54:51 -0800", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" },
{ "msg_contents": "On Fri, Mar 6, 2020 at 11:55 AM Dave Cramer <davecramer@postgres.rocks> wrote:\n> There have been some arguments that the client can fix this easily.\n>\n> Turns out it is not as easy as one might think.\n>\n> If the client (in this case JDBC) uses conn.commit() then yes relatively easy as we know that commit is being executed.\n\nRight...\n\n> however if the client executes commit using direct SQL and possibly multiplexes a number of commands we would have to parse the SQL to figure out what is being sent. This could include a column named commit_date or a comment with commit embedded in it. It really doesn't make sense to have a full fledged PostgreSQL SQL parser in every client. This is something the server does very well.\n\nThat's true. If the command tag is either COMMIT or ROLLBACK then the\nstatement was either COMMIT or ROLLBACK, but Vladimir's example query\n/*commit*/rollback does seem like a pretty annoying case. 
I was\nassuming that the JDBC driver required use of con.commit() in the\ncases we care about, but perhaps that's not so.\n\n> There has been another argument that we can simply check the transaction state after we get the ReadyForQuery response, however this is set to IDLE after the subsequent ROLLBACK so that doesn't work either.\n\nI assumed you'd look at the *previous* ReadyForQuery message and see\nwhether it said \"in transaction\" ('T') or \"failed in transaction\"\n('E'). If the transaction was failed, then only rollback is possible,\nbut if it's not, then either commit or rollback is possible.\n\nBut I agree that if you don't know what command you sent, and have\nto deal with users who send things like /*commit*/rollback, then the\ncurrent reporting is not good enough. If the command tag for a commit\nthat got converted into a rollback were distinct from the command tag\nthat you get from a deliberate rollback, then it would be fine; say if\nwe sent ROLLBACK COMMIT for one and just ROLLBACK for the other, for\nexample. But that's not how it works.\n\nI think you can still fix the con.commit() case. But users issuing\nad-hoc SQL that may contain comments intended to snipe the driver\nseems like it does require a server-side change.\n\n> Additionally in section 52.2.2 of the docs it states:\n>\n> A frontend must be prepared to accept ErrorResponse and NoticeResponse messages whenever it is expecting any other type of message. 
See also Section 52.2.6 concerning messages that the backend might generate due to outside events.\n>\n> Recommended practice is to code frontends in a state-machine style that will accept any message type at any time that it could make sense, rather than wiring in assumptions about the exact sequence of messages.\n>\n> Seems to me that this behaviour is already documented?\n\nI don't understand what you're going for here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 6 Mar 2020 13:12:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Fri, Mar 6, 2020 at 01:12:10PM -0500, Robert Haas wrote:\n> On Fri, Mar 6, 2020 at 11:55 AM Dave Cramer <davecramer@postgres.rocks> wrote:\n> > There have been some arguments that the client can fix this easily.\n> >\n> > Turns out it is not as easy as one might think.\n> >\n> > If the client (in this case JDBC) uses conn.commit() then yes relatively easy as we know that commit is being executed.\n> \n> Right...\n> \n> > however if the client executes commit using direct SQL and possibly multiplexes a number of commands we would have to parse the SQL to figure out what is being sent. This could include a column named commit_date or a comment with commit embedded in it. It really doesn't make sense to have a full fledged PostgreSQL SQL parser in every client. This is something the server does very well.\n> \n> That's true. If the command tag is either COMMIT or ROLLBACK then the\n> statement was either COMMIT or ROLLBACK, but Vladimir's example query\n> /*commit*/rollback does seem like a pretty annoying case. 
I was\n> assuming that the JDBC driver required use of con.commit() in the\n> cases we care about, but perhaps that's not so.\n\nLet me try to summarize where I think we are on this topic.\n\nFirst, Vik reported that we don't follow the SQL spec when issuing a\nCOMMIT WORK in a failed transaction. We return success and issue the\nROLLBACK command tag, rather than erroring. In general, if we don't\nfollow the spec, we should either have a good reason, or the breakage to\nmatch the spec is too severe. (I am confused why this has not been\nreported before.)\n\nSecond, someone suggested that if COMMIT throws an error, that future\nstatements would be considered to be in the same transaction block until\nROLLBACK is issued. It was determined that this is not required, and\nthat the API should have COMMIT WORK on a failed transaction still exit\nthe transaction block. This behavior is much more friendly for SQL\nscripts piped into psql.\n\nThird, the idea that individual interfaces, e.g. JDBC, should throw an\nerror in this case while the server just changes the COMMIT return tag\nto ROLLBACK is confusing. People regularly test SQL commands in the\nserver before writing applications or while debugging, and a behavior\nmismatch would cause confusion.\n\nFourth, it is not clear how many applications would break if COMMIT\nstarted issuing an error rather than return success with a ROLLBACK tag.\nCertainly SQL scripts would be fine. They would have one additional\nerror in the script output, but if they had ON_ERROR_STOP enabled, they\nwould have exited before the commit. Applications that track statement\nerrors and issue rollbacks will be fine. So, we are left with\napplications that issue COMMIT and expect success after a transaction\nblock has failed. Do we know how other database systems handle this?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 17 Mar 2020 16:47:41 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Bruce, thanks for taking the time to summarize.\n\nBruce>Fourth, it is not clear how many applications would break if COMMIT\nBruce>started issuing an error rather than return success\n\nNone.\n\nBruce>applications that issue COMMIT and expect success after a transaction\nBruce>block has failed\n\nAn application must expect an exception from a COMMIT statement like any\nother SQL.\n\nWire protocol specification explicitly says implementations must expect\nerror messages at any time.\n\n---\n\nBruce>Do we know how other database systems handle this?\n\nOracle DB produces an error from COMMIT if transaction can't be committed\n(e.g. failure in the processing of \"on commit refresh materialized view\").\n\n---\n\nThe bug is \"deferred constraint violation\" and \"non-deferred constraint\nviolation\" end up with\n**different** behavior for COMMIT.\n\ndeferred violation produces an error while non-deferred violation produces\n\"silent rollback\".\n\nIn other words, there are already cases in PostgreSQL when commit produces\nan error. It is nothing new.\nThe new part is that PostgreSQL must not produce \"silent rollbacks\".\n\nBruce>First, Vik reported that we don't follow the SQL spec\n\n+1\n\nBruce>Second, someone suggested that if COMMIT throws an error, that future\nBruce>statements would be considered to be in the same transaction\n\nNo. Please disregard that. That is ill. COMMIT (and/or ROLLBACK) must\nterminate the transaction in any case.\nThe transaction must not exist after COMMIT finishes (successfully or not).\nThe same for PREPARE TRANSACTION. If it fails, then the transaction must be\nclear.\n\nA litmus test is \"deferred constraint violation\". 
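A sketch of that litmus test (the table name is illustrative; behavior as described for current PostgreSQL):

```sql
CREATE TABLE t (id int PRIMARY KEY DEFERRABLE INITIALLY DEFERRED);
BEGIN;
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (1);  -- accepted for now: the constraint check is deferred
COMMIT;
-- ERROR:  duplicate key value violates unique constraint "t_pkey"
-- COMMIT raises the error, and the transaction is terminated either way.
```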
It works Ok in the\ncurrent PostgreSQL.\nIf the database can't commit, it should respond with a clear error that\ndescribes the reason for the failure.\n\nBruce>Third, the idea that individual interfaces, e.g. JDBC, should throw\n\nIndividual interfaces should not deviate from server behavior much.\nThey should convert server-provided errors to the language-native format.\nThey should not invent their own rules to convert server messages to errors.\nThat would provide a uniform PostgreSQL experience for the end-users.\n\nNote: there are even multiple JDBC implementations for PostgreSQL, so\nslight differences in transaction handling\nis the very last \"feature\" people want from PostgreSQL database.\n\nVladimir", "msg_date": "Wed, 18 Mar 2020 00:22:22 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" },
{ "msg_contents": "On Tue, 17 Mar 2020 at 16:47, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Fri, Mar 6, 2020 at 01:12:10PM -0500, Robert Haas wrote:\n> > On Fri, Mar 6, 2020 at 11:55 AM Dave Cramer <davecramer@postgres.rocks>\n> wrote:\n> > > There have been some arguments that the client can fix this easily.\n> > >\n> > > Turns out it is not as easy as one might think.\n> > >\n> > > If the client (in this case JDBC) uses conn.commit() then yes\n> relatively easy as we know that commit is being executed.\n> 
>\n> > Right...\n> >\n> > > however if the client executes commit using direct SQL and possibly\n> multiplexes a number of commands we would have to parse the SQL to figure\n> out what is being sent. This could include a column named commit_date or a\n> comment with commit embedded in it. It really doesn't make sense to have a\n> full fledged PostgreSQL SQL parser in every client. This is something the\n> server does very well.\n> >\n> > That's true. If the command tag is either COMMIT or ROLLBACK then the\n> > statement was either COMMIT or ROLLBACK, but Vladimir's example query\n> > /*commit*/rollback does seem like a pretty annoying case. I was\n> > assuming that the JDBC driver required use of con.commit() in the\n> > cases we care about, but perhaps that's not so.\n>\n> Let me try to summarize where I think we are on this topic.\n>\n> First, Vik reported that we don't follow the SQL spec when issuing a\n> COMMIT WORK in a failed transaction. We return success and issue the\n> ROLLBACK command tag, rather than erroring. In general, if we don't\n> follow the spec, we should either have a good reason, or the breakage to\n> match the spec is too severe. (I am confused why this has not been\n> reported before.)\n>\n\nGood question.\n\n>\n> Second, someone suggested that if COMMIT throws an error, that future\n> statements would be considered to be in the same transaction block until\n> ROLLBACK is issued. It was determined that this is not required, and\n> that the API should have COMMIT WORK on a failed transaction still exit\n> the transaction block. This behavior is much more friendly for SQL\n> scripts piped into psql.\n>\n> This is correct. The patch I provided does exactly this.\nThe Rollback occurs. The transaction is finished, but an error message is\nsent\n\n\n> Third, the idea that individual interfaces, e.g. JDBC, should throw an\n> error in this case while the server just changes the COMMIT return tag\n> to ROLLBACK is confusing. 
People regularly test SQL commands in the\n> server before writing applications or while debugging, and a behavior\n> mismatch would cause confusion.\n>\n\nI'm not sure what you mean by this. The server would throw an error.\n\n>\n> Fourth, it is not clear how many applications would break if COMMIT\n> started issuing an error rather than return success a with ROLLBACK tag.\n> Certainly SQL scripts would be fine. They would have one additional\n> error in the script output, but if they had ON_ERROR_STOP enabled, they\n> would have existed before the commit. Applications that track statement\n> errors and issue rollbacks will be fine. So, we are left with\n> applications that issue COMMIT and expect success after a transaction\n> block has failed. Do we know how other database systems handle this?\n>\n\nWell I know pgjdbc handles my patch fine without any changes to the code\nAs I mentioned upthread 2 of the 3 go drivers already error if rollback is\nreturned. 1 of them does not.\n\nI suspect npgsql would be fine. Shay ?\n\n\nDave Cramer\nwww.postgres.rocks", "msg_date": "Tue, 17 Mar 2020 19:15:05 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" },
{ "msg_contents": "On Tue, Mar 17, 2020 at 07:15:05PM -0400, Dave Cramer wrote:\n> On Tue, 17 Mar 2020 at 16:47, Bruce Momjian <bruce@momjian.us> wrote:\n>     Third, the idea that individual interfaces, e.g. JDBC, should throw an\n>     error in this case while the server just changes the COMMIT return tag\n>     to ROLLBACK is confusing.  People regularly test SQL commands in the\n>     server before writing applications or while debugging, and a behavior\n>     mismatch would cause confusion.\n> \n> \n> I'm not sure what you mean by this. 
The server would throw an error.\n\nI am saying it is not wise to have interfaces behaving differently than\nthe server, for the reasons stated above.\n\n> Fourth, it is not clear how many applications would break if COMMIT\n> started issuing an error rather than return success with a ROLLBACK tag.\n> Certainly SQL scripts would be fine.  They would have one additional\n> error in the script output, but if they had ON_ERROR_STOP enabled, they\n> would have exited before the commit.  Applications that track statement\n> errors and issue rollbacks will be fine.  So, we are left with\n> applications that issue COMMIT and expect success after a transaction\n> block has failed.  Do we know how other database systems handle this?\n> \n> Well I know pgjdbc handles my patch fine without any changes to the code\n> As I mentioned upthread 2 of the 3 go drivers already error if rollback is\n> returned. 1 of them does not.\n\nGood point.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
The server would throw an error.\n>\n> I am saying it is not wise to have interfaces behaving differently than\n> the server, for the reasons stated above.\n>\n> Agreed and this is why I think it is important for the server to be\ndefining the behaviour instead of each interface deciding how to handle\nthis situation.\n\n\nDave Cramer\nwww.postgres.rocks\n\n>\n>\n\nOn Tue, 17 Mar 2020 at 19:23, Bruce Momjian <bruce@momjian.us> wrote:On Tue, Mar 17, 2020 at 07:15:05PM -0400, Dave Cramer wrote:\n> On Tue, 17 Mar 2020 at 16:47, Bruce Momjian <bruce@momjian.us> wrote:\n>     Third, the idea that individual interfaces, e.g. JDBC, should throw an\n>     error in this case while the server just changes the COMMIT return tag\n>     to ROLLBACK is confusing.  People regularly test SQL commands in the\n>     server before writing applications or while debugging, and a behavior\n>     mismatch would cause confusion.\n> \n> \n> I'm not sure what you mean by this. The server would throw an error. \n\nI am saying it is not wise to have interfaces behaving differently than\nthe server, for the reasons stated above.\nAgreed and this is why I think it is important for the server to be defining the behaviour instead of each interface deciding how to handle this situation.Dave Cramerwww.postgres.rocks", "msg_date": "Tue, 17 Mar 2020 19:32:00 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Tue, 17 Mar 2020 at 19:32, Dave Cramer <davecramer@postgres.rocks> wrote:\n\n>\n>\n> On Tue, 17 Mar 2020 at 19:23, Bruce Momjian <bruce@momjian.us> wrote:\n>\n>> On Tue, Mar 17, 2020 at 07:15:05PM -0400, Dave Cramer wrote:\n>> > On Tue, 17 Mar 2020 at 16:47, Bruce Momjian <bruce@momjian.us> wrote:\n>> > Third, the idea that individual interfaces, e.g. JDBC, should throw\n>> an\n>> > error in this case while the server just changes the COMMIT return\n>> tag\n>> > to ROLLBACK is confusing. 
People regularly test SQL commands in the\n>> > server before writing applications or while debugging, and a\n>> behavior\n>> > mismatch would cause confusion.\n>> >\n>> >\n>> > I'm not sure what you mean by this. The server would throw an error.\n>>\n>> I am saying it is not wise to have interfaces behaving differently than\n>> the server, for the reasons stated above.\n>>\n>> Agreed and this is why I think it is important for the server to be\n> defining the behaviour instead of each interface deciding how to handle\n> this situation.\n>\n>\n>\nSo it appears this is currently languishing as unresolved and feature\nfreeze is imminent.\n\nWhat has to be done to get a decision one way or another before feature\nfreeze.\n\nI have provided a patch that could be reviewed and at least be considered\nin the commitfest.\n\nPerhaps someone can review the patch and I can do whatever it takes to get\nit presentable ?\n\nDave Cramer\nwww.postgres.rocks\n", "msg_date": "Mon, 30 Mar 2020 12:05:03 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 3/30/20 6:05 PM, Dave Cramer wrote:\n> So it appears this is currently languishing as unresolved and feature\n> freeze is imminent.\n> \n> What has to be done to get a decision one way or another before feature\n> freeze.\n> \n> I have provided a patch that could be reviewed and at least be considered\n> in the commitfest.\n> \n> Perhaps someone can review the patch and I can do whatever it takes to get\n> it presentable ?\n\n\nI don't know enough about that part of the code to give a meaningful\nreview, but I will give my full support to the patch. (I hadn't\nexpressed an opinion either way yet.)\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 30 Mar 2020 18:32:16 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Apologies for not responding earlier, busy times.\n\n\nFourth, it is not clear how many applications would break if COMMIT\n>> started issuing an error rather than return success a with ROLLBACK tag.\n>> Certainly SQL scripts would be fine. 
They would have one additional\n>> error in the script output, but if they had ON_ERROR_STOP enabled, they\n>> would have existed before the commit. Applications that track statement\n>> errors and issue rollbacks will be fine. So, we are left with\n>> applications that issue COMMIT and expect success after a transaction\n>> block has failed. Do we know how other database systems handle this?\n>>\n>\n> Well I know pgjdbc handles my patch fine without any changes to the code\n> As I mentioned upthread 2 of the 3 go drivers already error if rollback is\n> returned. 1 of them does not.\n>\n> I suspect npgsql would be fine. Shay ?\n>\n\nNpgsql would be fine. In fact, Npgsql doesn't have any specific\nexpectations nor any specific logic around commit; it assumes errors may be\nreturned for any command (COMMIT or otherwise), and surfaces those errors\nas .NET exceptions. The transaction status is tracked via CommandComplete\nonly, and as mentioned several times, PostgreSQL can already error on\ncommit for various other reasons (e.g. deferred constraint checks). This\ndirection makes a lot of sense to me.\n", "msg_date": "Mon, 30 Mar 2020 23:06:39 +0200", "msg_from": "Shay Rojansky <roji@roji.org>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Thu, 16 Apr 2020 at 21:16, Shay Rojansky <roji@roji.org> wrote:\n> Npgsql would be fine. In fact, Npgsql doesn't have any specific expectations nor any specific logic around commit; it assumes errors may be returned for any command (COMMIT or otherwise), and surfaces those errors as .NET exceptions.\n\nHi all, I work on the pg8000 Python driver for Postgres and having\nread through the thread I'd like to echo Shay Rojansky's comment and\nsay that pg8000 would be able to handle the behaviour resulting from\nthe proposed patch and I support the change of a call to commit()\n*always* producing an error if it has failed. I can understand\npeople's reluctance in general to change server behaviour, but in this\ncase I think the good outweighs the bad. 
I think most people expected\nthe server to be behaving like this anyway.\n\nRegards,\n\nTony.\n\n\n", "msg_date": "Sat, 18 Apr 2020 16:27:55 +0100", "msg_from": "Tony Locke <tlocke@tlocke.org.uk>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Attached is the rebased patch for consideration.\n\nDave Cramer\nwww.postgres.rocks\n\n\n\n>\n>", "msg_date": "Tue, 4 Aug 2020 12:19:27 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "\nOn 8/4/20 12:19 PM, Dave Cramer wrote:\n> Attached is the rebased patch for consideration.\n>\n>\n\n\nIt's a bit sad this has been hanging around so long without attention.\n\n\nThe previous discussion seems to give the patch a clean bill of health\nfor most/all of the native drivers. Are there any implications for libpq\nbased drivers such as DBD::Pg and psycopg2? How about for ecpg?\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 30 Sep 2020 18:14:52 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Hi,\r\n\r\nthank you for your contribution.\r\n\r\nI did notice that the cfbot [1] is failing for this patch.\r\nPlease try to address the issue for the upcoming commitfest.\r\n\r\nCheers,\r\n//Georgios\r\n\r\n[1] http://cfbot.cputube.org/dave-cramer.html", "msg_date": "Fri, 30 Oct 2020 16:13:13 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Wed, 30 Sep 2020 at 18:14, Andrew Dunstan <andrew.dunstan@2ndquadrant.com>\nwrote:\n\n>\n> On 8/4/20 12:19 PM, Dave Cramer wrote:\n> > Attached is the rebased patch for consideration.\n> >\n> >\n>\n>\n> It's a bit sad 
this has been hanging around so long without attention.\n>\n>\n> The previous discussion seems to give the patch a clean bill of health\n> for most/all of the native drivers. Are there any implications for libpq\n> based drivers such as DBD::Pg and psycopg2? How about for ecpg?\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan https://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\nAttached is a rebased patch with fixes for the isolation tests\n\n\nDave Cramer\nwww.postgres.rocks", "msg_date": "Mon, 9 Nov 2020 16:26:59 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Hi,\r\n\r\nI noticed that this patch fails on the cfbot.\r\nFor this, I changed the status to: 'Waiting on Author'.\r\n\r\nCheers,\r\n//Georgios\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Tue, 10 Nov 2020 15:19:53 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Mon, 9 Nov 2020 at 16:26, Dave Cramer <davecramer@postgres.rocks> wrote:\n\n>\n>\n> On Wed, 30 Sep 2020 at 18:14, Andrew Dunstan <\n> andrew.dunstan@2ndquadrant.com> wrote:\n>\n>>\n>> On 8/4/20 12:19 PM, Dave Cramer wrote:\n>> > Attached is the rebased patch for consideration.\n>> >\n>> >\n>>\n>>\n>> It's a bit sad this has been hanging around so long without attention.\n>>\n>>\n>> The previous discussion seems to give the patch a clean bill of health\n>> for most/all of the native drivers. Are there any implications for libpq\n>> based drivers such as DBD::Pg and psycopg2? 
How about for ecpg?\n>>\n>>\n>> cheers\n>>\n>>\n>> andrew\n>>\n>>\n>> --\n>> Andrew Dunstan https://www.2ndQuadrant.com\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>\n>\n>\n> Attached is a rebased patch with fixes for the isolation tests\n>\n>\n\n>\n> Dave Cramer\n> www.postgres.rocks\n>", "msg_date": "Tue, 10 Nov 2020 11:53:20 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Hi,\r\n\r\nthis patch fails on the cfbot yet it has received an update during the current CF.\r\n\r\nI will move it to the next CF and mark it there as Waiting on Author.\r\n\r\nCheers,\r\nGeorgios\n\nThe new status of this patch is: Needs review\n", "msg_date": "Tue, 01 Dec 2020 09:48:08 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Hi Dave,\n\nOn Tue, Dec 1, 2020 at 6:49 PM Georgios Kokolatos\n<gkokolatos@protonmail.com> wrote:\n>\n> Hi,\n>\n> this patch fails on the cfbot yet it has received an update during the current CF.\n>\n> I will move it to the next CF and mark it there as Waiting on Author.\n>\n\nThis patch has not been updated for almost 2 months. 
According to\ncfbot test, the patch conflicts on only src/include/utils/elog.h.\nCould you submit the rebased patch?\n\nRegards,\n\n-- \nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 7 Jan 2021 23:26:58 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "I could if someone wants to commit to reviewing it.\nI've updated it a number of times but it seems nobody wants to review it.\n\nDave Cramer\nwww.postgres.rocks\n\n\nOn Thu, 7 Jan 2021 at 09:27, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> Hi Dave,\n>\n> On Tue, Dec 1, 2020 at 6:49 PM Georgios Kokolatos\n> <gkokolatos@protonmail.com> wrote:\n> >\n> > Hi,\n> >\n> > this patch fails on the cfbot yet it has received an update during the\n> current CF.\n> >\n> > I will move it to the next CF and mark it there as Waiting on Author.\n> >\n>\n> This patch has not been updated for almost 2 months. According to\n> cfbot test, the patch conflicts on only src/include/utils/elog.h.\n> Could you submit the rebased patch?\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EnterpriseDB: https://www.enterprisedb.com/\n>\n", "msg_date": "Thu, 7 Jan 2021 09:29:12 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Thu, Jan 7, 2021 at 11:29 PM Dave Cramer <davecramer@postgres.rocks> wrote:\n>\n> I could if someone wants to commit to reviewing it.\n> I've updated it a number of times but it seems nobody wants to review it.\n\nSince this has a long thread, how about summarizing what consensus we\nreached and what discussion we still need if any so that new reviewers\ncan easily catch up? I think people who want to start reviewing are\nlikely to search the patch marked as \"Needs Review\". So I think\ncontinuous updating and rebasing the patch would help get the patch\nreviewed also in terms of that.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 22 Jan 2021 23:27:35 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Rebased against head\n\nHere's my summary of the long thread above.\n\nThis change is in keeping with the SQL spec.\n\nThere is an argument (Tom) that says that this will annoy more people than\nit will please. I presume this is due to the fact that libpq behaviour will\nchange.\n\nAs the author of the JDBC driver, and I believe I speak for other driver\n(NPGSQL for one) authors as well that have implemented the protocol I would\nargue that the current behaviour is more annoying.\n\nWe currently have to keep state and determine if COMMIT actually failed or\nit ROLLED BACK. 
There are a number of async drivers that would also benefit\nfrom not having to keep state in the session.\n\nRegards,\n\nDave Cramer\nwww.postgres.rocks\n\n\nOn Tue, 10 Nov 2020 at 11:53, Dave Cramer <davecramer@postgres.rocks> wrote:\n\n>\n>\n> On Mon, 9 Nov 2020 at 16:26, Dave Cramer <davecramer@postgres.rocks>\n> wrote:\n>\n>>\n>>\n>> On Wed, 30 Sep 2020 at 18:14, Andrew Dunstan <\n>> andrew.dunstan@2ndquadrant.com> wrote:\n>>\n>>>\n>>> On 8/4/20 12:19 PM, Dave Cramer wrote:\n>>> > Attached is the rebased patch for consideration.\n>>> >\n>>> >\n>>>\n>>>\n>>> It's a bit sad this has been hanging around so long without attention.\n>>>\n>>>\n>>> The previous discussion seems to give the patch a clean bill of health\n>>> for most/all of the native drivers. Are there any implications for libpq\n>>> based drivers such as DBD::Pg and psycopg2? How about for ecpg?\n>>>\n>>>\n>>> cheers\n>>>\n>>>\n>>> andrew\n>>>\n>>>\n>>> --\n>>> Andrew Dunstan https://www.2ndQuadrant.com\n>>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>>\n>>\n>>\n>> Attached is a rebased patch with fixes for the isolation tests\n>>\n>>\n>\n>>\n>> Dave Cramer\n>> www.postgres.rocks\n>>\n>", "msg_date": "Mon, 25 Jan 2021 09:09:09 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "Apologies, I should have checked again to make sure the patch applied.\n\nThis one does and passes tests.\n\nDave Cramer\nwww.postgres.rocks\n\n\nOn Mon, 25 Jan 2021 at 09:09, Dave Cramer <davecramer@postgres.rocks> wrote:\n\n> Rebased against head\n>\n> Here's my summary of the long thread above.\n>\n> This change is in keeping with the SQL spec.\n>\n> There is an argument (Tom) that says that this will annoy more people than\n> it will please. 
I presume this is due to the fact that libpq behaviour will\n> change.\n>\n> As the author of the JDBC driver, and I believe I speak for other driver\n> (NPGSQL for one) authors as well that have implemented the protocol I would\n> argue that the current behaviour is more annoying.\n>\n> We currently have to keep state and determine if COMMIT actually failed or\n> it ROLLED BACK. There are a number of async drivers that would also benefit\n> from not having to keep state in the session.\n>\n> Regards,\n>\n> Dave Cramer\n> www.postgres.rocks\n>\n>\n> On Tue, 10 Nov 2020 at 11:53, Dave Cramer <davecramer@postgres.rocks>\n> wrote:\n>\n>>\n>>\n>> On Mon, 9 Nov 2020 at 16:26, Dave Cramer <davecramer@postgres.rocks>\n>> wrote:\n>>\n>>>\n>>>\n>>> On Wed, 30 Sep 2020 at 18:14, Andrew Dunstan <\n>>> andrew.dunstan@2ndquadrant.com> wrote:\n>>>\n>>>>\n>>>> On 8/4/20 12:19 PM, Dave Cramer wrote:\n>>>> > Attached is the rebased patch for consideration.\n>>>> >\n>>>> >\n>>>>\n>>>>\n>>>> It's a bit sad this has been hanging around so long without attention.\n>>>>\n>>>>\n>>>> The previous discussion seems to give the patch a clean bill of health\n>>>> for most/all of the native drivers. Are there any implications for libpq\n>>>> based drivers such as DBD::Pg and psycopg2? 
How about for ecpg?\n>>>>\n>>>>\n>>>> cheers\n>>>>\n>>>>\n>>>> andrew\n>>>>\n>>>>\n>>>> --\n>>>> Andrew Dunstan https://www.2ndQuadrant.com\n>>>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>>>\n>>>\n>>>\n>>> Attached is a rebased patch with fixes for the isolation tests\n>>>\n>>>\n>>\n>>>\n>>> Dave Cramer\n>>> www.postgres.rocks\n>>>\n>>", "msg_date": "Mon, 25 Jan 2021 11:29:10 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Mon, 2021-01-25 at 11:29 -0500, Dave Cramer wrote:\n> Rebased against head \n> \n> Here's my summary of the long thread above.\n> \n> This change is in keeping with the SQL spec.\n> \n> There is an argument (Tom) that says that this will annoy more people than it will please.\n> I presume this is due to the fact that libpq behaviour will change.\n> \n> As the author of the JDBC driver, and I believe I speak for other driver (NPGSQL for one)\n> authors as well that have implemented the protocol I would argue that the current behaviour\n> is more annoying.\n> \n> We currently have to keep state and determine if COMMIT actually failed or it ROLLED BACK.\n> There are a number of async drivers that would also benefit from not having to keep state\n> in the session.\n\nI think this change makes sense, but I think everybody agrees that it does as it\nmakes PostgreSQL more standard compliant.\n\nAbout the fear that it will break user's applications:\n\nI think that the breakage will be minimal. All that will change is that COMMIT of\nan aborted transaction raises an error.\n\nApplications that catch an error in a transaction and roll back will not\nbe affected. 
What will be affected are applications that do *not* check for\nerrors in statements in a transaction, but check for errors in the COMMIT.\nI think that doesn't happen often.\n\nI agree that some people will be hurt, but I don't think it will be a major problem.\n\nThe patch applies and passes regression tests.\n\nI wonder about the introduction of the new USER_ERROR level:\n\n #define WARNING_CLIENT_ONLY 20 /* Warnings to be sent to client as usual, but\n * never to the server log. */\n-#define ERROR 21 /* user error - abort transaction; return to\n+#define USER_ERROR 21\n+#define ERROR 22 /* user error - abort transaction; return to\n * known state */\n /* Save ERROR value in PGERROR so it can be restored when Win32 includes\n * modify it. We have to use a constant rather than ERROR because macros\n * are expanded only when referenced outside macros.\n */\n #ifdef WIN32\n-#define PGERROR 21\n+#define PGERROR 22\n #endif\n-#define FATAL 22 /* fatal error - abort process */\n-#define PANIC 23 /* take down the other backends with me */\n+#define FATAL 23 /* fatal error - abort process */\n+#define PANIC 24 /* take down the other backends with me */\n\nI see that without that, COMMIT AND CHAIN does not behave correctly,\nsince the respective regression tests fail.\n\nBut I don't understand why. 
I think that this needs some more comments to\nmake this clear.\n\nIs this new message level something we need to allow setting for\n\"client_min_messages\" and \"log_min_messages\"?\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 26 Jan 2021 11:05:53 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Tue, Jan 26, 2021 at 7:06 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Mon, 2021-01-25 at 11:29 -0500, Dave Cramer wrote:\n> > Rebased against head\n> >\n> > Here's my summary of the long thread above.\n> >\n> > This change is in keeping with the SQL spec.\n> >\n> > There is an argument (Tom) that says that this will annoy more people than it will please.\n> > I presume this is due to the fact that libpq behaviour will change.\n> >\n> > As the author of the JDBC driver, and I believe I speak for other driver (NPGSQL for one)\n> > authors as well that have implemented the protocol I would argue that the current behaviour\n> > is more annoying.\n> >\n> > We currently have to keep state and determine if COMMIT actually failed or it ROLLED BACK.\n> > There are a number of async drivers that would also benefit from not having to keep state\n> > in the session.\n>\n> I think this change makes sense, but I think everybody agrees that it does as it\n> makes PostgreSQL more standard compliant.\n>\n> About the fear that it will break user's applications:\n>\n> I think that the breakage will be minimal. All that will change is that COMMIT of\n> an aborted transaction raises an error.\n>\n> Applications that catch an error in a transaction and roll back will not\n> be affected. 
What will be affected are applications that do *not* check for\n> errors in statements in a transaction, but check for errors in the COMMIT.\n> I think that doesn't happen often.\n>\n> I agree that some people will be hurt, but I don't think it will be a major problem.\n>\n> The patch applies and passes regression tests.\n>\n> I wonder about the introduction of the new USER_ERROR level:\n>\n> #define WARNING_CLIENT_ONLY 20 /* Warnings to be sent to client as usual, but\n> * never to the server log. */\n> -#define ERROR 21 /* user error - abort transaction; return to\n> +#define USER_ERROR 21\n> +#define ERROR 22 /* user error - abort transaction; return to\n> * known state */\n> /* Save ERROR value in PGERROR so it can be restored when Win32 includes\n> * modify it. We have to use a constant rather than ERROR because macros\n> * are expanded only when referenced outside macros.\n> */\n> #ifdef WIN32\n> -#define PGERROR 21\n> +#define PGERROR 22\n> #endif\n> -#define FATAL 22 /* fatal error - abort process */\n> -#define PANIC 23 /* take down the other backends with me */\n> +#define FATAL 23 /* fatal error - abort process */\n> +#define PANIC 24 /* take down the other backends with me */\n>\n> I see that without that, COMMIT AND CHAIN does not behave correctly,\n> since the respective regression tests fail.\n>\n> But I don't understand why. I think that this needs some more comments to\n> make this clear.\n\nWhile testing the patch I realized that the client gets an\nacknowledgment of COMMIT command completed successfully from\nPostgreSQL server (i.g., PQgetResult() returns PGRES_COMMAND_OK) even\nif the server raises an USER_ERROR level error. I think the command\nshould be failed. 
Because otherwise, the drivers need to throw an\nexception by re-interpreting the results even in a case where the\ncommand is completed successfully.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 26 Jan 2021 20:58:59 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Tue, 26 Jan 2021 at 06:59, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> On Tue, Jan 26, 2021 at 7:06 PM Laurenz Albe <laurenz.albe@cybertec.at>\n> wrote:\n> >\n> > On Mon, 2021-01-25 at 11:29 -0500, Dave Cramer wrote:\n> > > Rebased against head\n> > >\n> > > Here's my summary of the long thread above.\n> > >\n> > > This change is in keeping with the SQL spec.\n> > >\n> > > There is an argument (Tom) that says that this will annoy more people\n> than it will please.\n> > > I presume this is due to the fact that libpq behaviour will change.\n> > >\n> > > As the author of the JDBC driver, and I believe I speak for other\n> driver (NPGSQL for one)\n> > > authors as well that have implemented the protocol I would argue that\n> the current behaviour\n> > > is more annoying.\n> > >\n> > > We currently have to keep state and determine if COMMIT actually\n> failed or it ROLLED BACK.\n> > > There are a number of async drivers that would also benefit from not\n> having to keep state\n> > > in the session.\n> >\n> > I think this change makes sense, but I think everybody agrees that it\n> does as it\n> > makes PostgreSQL more standard compliant.\n> >\n> > About the fear that it will break user's applications:\n> >\n> > I think that the breakage will be minimal. All that will change is that\n> COMMIT of\n> > an aborted transaction raises an error.\n> >\n> > Applications that catch an error in a transaction and roll back will not\n> > be affected. 
What will be affected are applications that do *not* check\n> for\n> > errors in statements in a transaction, but check for errors in the\n> COMMIT.\n> > I think that doesn't happen often.\n> >\n> > I agree that some people will be hurt, but I don't think it will be a\n> major problem.\n> >\n> > The patch applies and passes regression tests.\n> >\n> > I wonder about the introduction of the new USER_ERROR level:\n> >\n> > #define WARNING_CLIENT_ONLY 20 /* Warnings to be sent to client as\n> usual, but\n> > * never to the server log. */\n> > -#define ERROR 21 /* user error - abort transaction;\n> return to\n> > +#define USER_ERROR 21\n> > +#define ERROR 22 /* user error - abort transaction;\n> return to\n> > * known state */\n> > /* Save ERROR value in PGERROR so it can be restored when Win32 includes\n> > * modify it. We have to use a constant rather than ERROR because\n> macros\n> > * are expanded only when referenced outside macros.\n> > */\n> > #ifdef WIN32\n> > -#define PGERROR 21\n> > +#define PGERROR 22\n> > #endif\n> > -#define FATAL 22 /* fatal error - abort process */\n> > -#define PANIC 23 /* take down the other backends with me\n> */\n> > +#define FATAL 23 /* fatal error - abort process */\n> > +#define PANIC 24 /* take down the other backends with me\n> */\n> >\n> > I see that without that, COMMIT AND CHAIN does not behave correctly,\n> > since the respective regression tests fail.\n> >\n> > But I don't understand why. I think that this needs some more comments\n> to\n> > make this clear.\n>\n> While testing the patch I realized that the client gets an\n> acknowledgment of COMMIT command completed successfully from\n> PostgreSQL server (i.g., PQgetResult() returns PGRES_COMMAND_OK) even\n> if the server raises an USER_ERROR level error. I think the command\n> should be failed. 
Because otherwise, the drivers need to throw an\n> exception by re-interpreting the results even in a case where the\n> command is completed successfully.\n>\n> Regards,\n>\n\nInteresting. Thanks for looking at this. I'm curious what we return now\nwhen we return rollback instead\n\nDave\n\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n>\n", "msg_date": "Tue, 26 Jan 2021 08:43:41 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Tue, 26 Jan 2021 at 05:05, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n> On Mon, 2021-01-25 at 11:29 -0500, Dave Cramer wrote:\n> > Rebased against head\n> >\n> > Here's my summary of the long thread above.\n> >\n> > This change is in keeping with the SQL spec.\n> >\n> > There is an argument (Tom) that says that this will annoy more people\n> than it will please.\n> > I presume this is due to the fact that libpq behaviour will change.\n> >\n> > As the author of the JDBC driver, and I believe I speak for other driver\n> (NPGSQL for one)\n> > authors as well that have implemented the protocol I would argue that\n> the current behaviour\n> > is more annoying.\n> >\n> > We currently have to keep state and determine if COMMIT actually failed\n> or it ROLLED BACK.\n> > There are a number of async drivers that would also benefit from not\n> having to keep state\n> > in the session.\n>\n> I think this change makes sense, but I think everybody agrees that it\n> does as it\n> makes PostgreSQL more standard compliant.\n>\n> About the fear that it will break user's applications:\n>\n> I think that the breakage will be minimal. All that will change is that\n> COMMIT of\n> an aborted transaction raises an error.\n>\n> Applications that catch an error in a transaction and roll back will not\n> be affected. 
What will be affected are applications that do *not* check\n> for\n> errors in statements in a transaction, but check for errors in the COMMIT.\n> I think that doesn't happen often.\n>\n> I agree that some people will be hurt, but I don't think it will be a\n> major problem.\n>\n> The patch applies and passes regression tests.\n>\n> I wonder about the introduction of the new USER_ERROR level:\n>\n> #define WARNING_CLIENT_ONLY 20 /* Warnings to be sent to client as\n> usual, but\n> * never to the server log. */\n> -#define ERROR 21 /* user error - abort transaction; return\n> to\n> +#define USER_ERROR 21\n> +#define ERROR 22 /* user error - abort transaction; return\n> to\n> * known state */\n> /* Save ERROR value in PGERROR so it can be restored when Win32 includes\n> * modify it. We have to use a constant rather than ERROR because macros\n> * are expanded only when referenced outside macros.\n> */\n> #ifdef WIN32\n> -#define PGERROR 21\n> +#define PGERROR 22\n> #endif\n> -#define FATAL 22 /* fatal error - abort process */\n> -#define PANIC 23 /* take down the other backends with me */\n> +#define FATAL 23 /* fatal error - abort process */\n> +#define PANIC 24 /* take down the other backends with me */\n>\n> I see that without that, COMMIT AND CHAIN does not behave correctly,\n> since the respective regression tests fail.\n>\n> But I don't understand why. I think that this needs some more comments to\n> make this clear.\n>\n> First off thanks for reviewing.\n\nThe problem is that ereport does not return for any level equal to or above\nERROR. 
This code required it to return so that it could continue processing.\nSo after re-acquainting myself with the code I propose the diff below; we could use\n\"TRANSACTION_ERROR\" instead of \"USER_ERROR\".\nI'd like to comment more but I do not believe that elog.h is the place.\nSuggestions?\n\n\nindex 3c0e57621f..df79a2d6db 100644\n--- a/src/include/utils/elog.h\n+++ b/src/include/utils/elog.h\n@@ -42,17 +42,19 @@\n                                 * WARNING is for unexpected messages. */\n #define WARNING_CLIENT_ONLY    20      /* Warnings to be sent to client as usual, but\n                                 * never to the server log. */\n-#define ERROR          21                      /* user error - abort transaction; return to\n+#define USER_ERROR     21                      /* similar to ERROR, except we don't want to\n+                                               * exit the current context. */\n+#define ERROR          22                      /* user error - abort transaction; return to\n                                 * known state */\n /* Save ERROR value in PGERROR so it can be restored when Win32 includes\n  * modify it.  We have to use a constant rather than ERROR because macros\n  * are expanded only when referenced outside macros.\n  */\n #ifdef WIN32\n-#define PGERROR                21\n+#define PGERROR                22\n #endif\n-#define FATAL          22                      /* fatal error - abort process */\n-#define PANIC          23                      /* take down the other backends with me */\n+#define FATAL          23                      /* fatal error - abort process */\n+#define PANIC          24                      /* take down the other backends with me */\n\n\n> Is this new message level something we need to allow setting for\n> \"client_min_messages\" and \"log_min_messages\"?\n>\n\nGood question. 
I had not given that any thought.\n\n\nDave Cramer\nwww.postgres.rocks\n\n>\n>\n", "msg_date": "Tue, 26 Jan 2021 11:09:12 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Tue, 2021-01-26 at 11:09 -0500, Dave Cramer wrote:\n> On Tue, 26 Jan 2021 at 05:05, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> > I wonder about the introduction of the new USER_ERROR level:\n> > \n> >  #define WARNING_CLIENT_ONLY    20  /* Warnings to be sent to client as usual, but\n> >                                  * never to the server log. 
*/\n> > -#define ERROR 21 /* user error - abort transaction; return to\n> > +#define USER_ERROR 21\n> > +#define ERROR 22 /* user error - abort transaction; return to\n> > * known state */\n> > /* Save ERROR value in PGERROR so it can be restored when Win32 includes\n> > * modify it. We have to use a constant rather than ERROR because macros\n> > * are expanded only when referenced outside macros.\n> > */\n> > #ifdef WIN32\n> > -#define PGERROR 21\n> > +#define PGERROR 22\n> > #endif\n> > -#define FATAL 22 /* fatal error - abort process */\n> > -#define PANIC 23 /* take down the other backends with me */\n> > +#define FATAL 23 /* fatal error - abort process */\n> > +#define PANIC 24 /* take down the other backends with me */\n> > \n> > I see that without that, COMMIT AND CHAIN does not behave correctly,\n> > since the respective regression tests fail.\n> > \n> > But I don't understand why. I think that this needs some more comments to\n> > make this clear.\n> \n> First off thanks for reviewing.\n> \n> The problem is that ereport does not return for any level equal to or above ERROR.\n> This code required it to return so that it could continue processing\n\nOh, I see.\n\nAfter thinking some more about it, I think that COMMIT AND CHAIN would have\nto change behavior: if COMMIT throws an error (because the transaction was\naborted), no new transaction should be started. Everything else seems fishy:\nthe statement fails, but still starts a new transaction?\n\nI guess that's also at fault for the unexpected result status that\nMasahiko complained about in the other message.\n\nSo I think we should not introduce USER_ERROR at all. It is too much\nof a kluge: fail, but not really...\n\nI guess that is one example for the incompatibilities that Tom worried\nabout upthread. 
I am beginning to see his point better now.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 26 Jan 2021 18:20:41 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Tue, 26 Jan 2021 at 12:20, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n> On Tue, 2021-01-26 at 11:09 -0500, Dave Cramer wrote:\n> > On Tue, 26 Jan 2021 at 05:05, Laurenz Albe <laurenz.albe@cybertec.at>\n> wrote:\n> >\n> > > I wonder about the introduction of the new USER_ERROR level:\n> > >\n> > > #define WARNING_CLIENT_ONLY 20 /* Warnings to be sent to client\n> as usual, but\n> > > * never to the server log. */\n> > > -#define ERROR 21 /* user error - abort transaction;\n> return to\n> > > +#define USER_ERROR 21\n> > > +#define ERROR 22 /* user error - abort transaction;\n> return to\n> > > * known state */\n> > > /* Save ERROR value in PGERROR so it can be restored when Win32\n> includes\n> > > * modify it. We have to use a constant rather than ERROR because\n> macros\n> > > * are expanded only when referenced outside macros.\n> > > */\n> > > #ifdef WIN32\n> > > -#define PGERROR 21\n> > > +#define PGERROR 22\n> > > #endif\n> > > -#define FATAL 22 /* fatal error - abort process */\n> > > -#define PANIC 23 /* take down the other backends with\n> me */\n> > > +#define FATAL 23 /* fatal error - abort process */\n> > > +#define PANIC 24 /* take down the other backends with\n> me */\n> > >\n> > > I see that without that, COMMIT AND CHAIN does not behave correctly,\n> > > since the respective regression tests fail.\n> > >\n> > > But I don't understand why. 
I think that this needs some more\n> comments to\n> > > make this clear.\n> >\n> > First off thanks for reviewing.\n> >\n> > The problem is that ereport does not return for any level equal to or\n> above ERROR.\n> > This code required it to return so that it could continue processing\n>\n> Oh, I see.\n>\n> After thinking some more about it, I think that COMMIT AND CHAIN would have\n> to change behavior: if COMMIT throws an error (because the transaction was\n> aborted), no new transaction should be started. Everything else seems\n> fishy:\n> the statement fails, but still starts a new transaction?\n>\n> I guess that's also at fault for the unexpected result status that\n> Masahiko complained about in the other message.\n>\n\nI haven't had a look at the result status in libpq. For JDBC we don't see\nthat.\nWe throw an exception when we get this error report. This is very\nconsistent as the commit fails and we throw an exception\n\n\n> So I think we should not introduce USER_ERROR at all. It is too much\n> of a kluge: fail, but not really...\n>\n\nWhat we do now is actually worse as we do not get an error report and we\nsilently change commit to rollback.\nHow is this better ?\n\nDave\n\n\n>\n", "msg_date": "Tue, 26 Jan 2021 12:25:57 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 1/26/21 6:20 PM, Laurenz Albe wrote:\n> After thinking some more about it, I think that COMMIT AND CHAIN would have\n> to change behavior: if COMMIT throws an error (because the transaction was\n> aborted), no new transaction should be started. Everything else seems fishy:\n> the statement fails, but still starts a new transaction?\n\nThe standard is not clear (to me) on what exactly should happen here.\nIt says that if a <commit statement> is not successful then a <rollback\nstatement> is implied, but I don't see it say anything about whether the\nAND CHAIN should be propagated too.\n\nMy vote is that COMMIT AND CHAIN should become ROLLBACK AND NO CHAIN.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 26 Jan 2021 18:34:34 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 1/26/21 6:34 PM, Vik Fearing wrote:\n> On 1/26/21 6:20 PM, Laurenz Albe wrote:\n>> After thinking some more about it, I think that COMMIT AND CHAIN would have\n>> to change behavior: if COMMIT throws an error (because the transaction was\n>> aborted), no new transaction should be started. Everything else seems fishy:\n>> the statement fails, but still starts a new transaction?\n> \n> The standard is not clear (to me) on what exactly should happen here.\n> It says that if a <commit statement> is not successful then a <rollback\n> statement> is implied, but I don't see it say anything about whether the\n> AND CHAIN should be propagated too.\n> \n> My vote is that COMMIT AND CHAIN should become ROLLBACK AND NO CHAIN.\n\nHmm. 
On the other hand, that means if the client isn't paying\nattention, it'll start executing commands outside of a transaction which\nwill autocommit. It might be better for a new transaction to be chained\nand hopefully also fail because previous bits are missing.\n\nI will hastily change my vote to \"unsure\".\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 26 Jan 2021 18:37:47 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Tue, 2021-01-26 at 12:25 -0500, Dave Cramer wrote:\n> > After thinking some more about it, I think that COMMIT AND CHAIN would have\n> > to change behavior: if COMMIT throws an error (because the transaction was\n> > aborted), no new transaction should be started. Everything else seems fishy:\n> > the statement fails, but still starts a new transaction?\n> > \n> > I guess that's also at fault for the unexpected result status that\n> > Masahiko complained about in the other message.\n> \n> \n> I haven't had a look at the result status in libpq. For JDBC we don't see that. \n> We throw an exception when we get this error report. This is very consistent as the commit fails and we throw an exception\n> \n> > So I think we should not introduce USER_ERROR at all. 
It is too much\n> > of a kluge: fail, but not really...\n> \n> What we do now is actually worse as we do not get an error report and we silently change commit to rollback.\n> How is this better ?\n\nI see your point from the view of the JDBC driver.\n\nIt just feels hacky - somewhat similar to what you say\nabove: don't go through the normal transaction rollback steps,\nbut issue an error message.\n\nAt least we should fake it well...\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 26 Jan 2021 18:46:10 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Tue, 26 Jan 2021 at 12:46, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n> On Tue, 2021-01-26 at 12:25 -0500, Dave Cramer wrote:\n> > > After thinking some more about it, I think that COMMIT AND CHAIN would\n> have\n> > > to change behavior: if COMMIT throws an error (because the transaction\n> was\n> > > aborted), no new transaction should be started. Everything else seems\n> fishy:\n> > > the statement fails, but still starts a new transaction?\n> > >\n> > > I guess that's also at fault for the unexpected result status that\n> > > Masahiko complained about in the other message.\n> >\n> >\n> > I haven't had a look at the result status in libpq. For JDBC we don't\n> see that.\n> > We throw an exception when we get this error report. This is very\n> consistent as the commit fails and we throw an exception\n> >\n> > > So I think we should not introduce USER_ERROR at all. 
It is too much\n> > > of a kluge: fail, but not really...\n> >\n> > What we do now is actually worse as we do not get an error report and we\n> silently change commit to rollback.\n> > How is this better ?\n>\n> I see your point from the view of the JDBC driver.\n>\n> It just feels hacky - somewhat similar to what you say\n> above: don't go through the normal transaction rollback steps,\n> but issue an error message.\n>\n> At least we should fake it well...\n>\n\nOK, let me look into how we deal with COMMIT and CHAIN.\n\nI can see some real issues with this as Vik pointed out.\n\nDave\n", "msg_date": "Tue, 26 Jan 2021 13:02:06 -0500", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 1/26/21 1:02 PM, Dave Cramer wrote:\n> On Tue, 26 Jan 2021 at 12:46, Laurenz Albe <laurenz.albe@cybertec.at \n> <mailto:laurenz.albe@cybertec.at>> wrote:\n> \n>     I see your point from the view of the JDBC driver.\n> \n>     It just feels hacky - somewhat similar to what you say\n>     above: don't go through the normal transaction rollback steps,\n>     but issue an error message.\n> \n>     At least we should fake it well...\n> \n> OK, let me look into how we deal with COMMIT and CHAIN.\n> \n> I can see some real issues with this as Vik pointed out.\n\nTest are failing on the cfbot for this patch and it looks like a new \npatch is needed from Dave, at the least, so marking Waiting on Author.\n\nShould we be considering this patch Returned with Feedback instead?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Thu, 25 Mar 2021 12:04:44 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On Thu, 25 Mar 2021 at 12:04, David Steele <david@pgmasters.net> wrote:\n\n> On 1/26/21 1:02 PM, Dave Cramer wrote:\n> > On Tue, 26 Jan 2021 at 12:46, Laurenz Albe <laurenz.albe@cybertec.at\n> > <mailto:laurenz.albe@cybertec.at>> wrote:\n> >\n> >     I see your point from the view of the JDBC driver.\n> >\n> >     It just feels hacky - somewhat similar to what you say\n> >     above: don't go through the normal transaction rollback steps,\n> >     but issue an error message.\n> >\n> >     At least we should fake it well...\n> >\n> > OK, let me look into how we deal with COMMIT and CHAIN.\n> >\n> > I can see some real issues with this as Vik pointed out.\n>\n> Test are failing on the cfbot for this patch and it looks like a new\n> patch is needed from Dave, 
at the least, so marking Waiting on Author.\n>\n> Should we be considering this patch Returned with Feedback instead?\n>\n>\nNot sure, at this point the impetus for this is not getting a lot of\ntraction.\nHonestly I think the approach I took is too simple and I don't have the\ninclination at the moment to\nrewrite it.\n\n\nDave Cramer\nwww.postgres.rocks\n\n> Regards,\n> --\n> -David\n> david@pgmasters.net\n>\n", "msg_date": "Thu, 25 Mar 2021 15:07:17 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" }, { "msg_contents": "On 3/25/21 3:07 PM, Dave Cramer wrote:\n> On Thu, 25 Mar 2021 at 12:04, David Steele <david@pgmasters.net \n> <mailto:david@pgmasters.net>> wrote:\n> \n>     Test are failing on the cfbot for this patch and it looks like a new \n>     patch is needed from Dave, at the least, so 
marking Waiting on Author.\n> \n> Should we be considering this patch Returned with Feedback instead?\n> \n> Not sure, at this point the impetus for this is not getting a lot of \n> traction.\n> Honestly I think the approach I took is too simple and I don't have the \n> inclination at the moment to\n> rewrite it.\n\nIn that case Returned with Feedback is appropriate so I have done that.\n\nOf course, the conversation can continue on this thread or a new one and \nwhen you have a new patch you can create a new CF entry.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Thu, 25 Mar 2021 15:23:53 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Error on failed COMMIT" } ]
[ { "msg_contents": "Hi,\n\nIn commit 3eb77eba we made it possible for any subsystem that wants a\nfile to be flushed as part of the next checkpoint to ask the\ncheckpointer to do that, as previously only md.c could do.\n\nIn the past, foreground CLOG flush stalls were a problem, but then\ncommit 33aaa139 cranked up the number of buffers, and then commit\n5364b357 cranked it right up until the flushes mostly disappeared from\nsome benchmark workload but not so high that the resulting linear\nsearches through the buffer array destroyed the gains. I know there\nis interest in moving that stuff into regular shared buffers, so it\ncan be found via the buffer mapping system (and improve as that\nimproves), written back by the background writer (and improve as that\nimproves), managed with a proper replacement algorithm (and improve as\nthat improves), etc etc. That sounds like a great idea to me, but\nit's a big project.\n\nIn the meantime, one thing we could do is hand off the fsyncs, but I'm\nnot sure if it's still considered a real problem in the field given\nthe new parameters.\n\nAnyway, I had a patch for that, that I used while testing commit\n3eb77eba. 
While reading old threads about SLRU today I found that\nseveral people had wished for a thing exactly like that, so I dusted\nit off and rebased it.", "msg_date": "Wed, 12 Feb 2020 21:54:16 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Wed, Feb 12, 2020 at 9:54 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> In commit 3eb77eba we made it possible for any subsystem that wants a\n> file to be flushed as part of the next checkpoint to ask the\n> checkpointer to do that, as previously only md.c could do.\n\nHello,\n\nWhile working on recovery performance, I found my way back to this\nidea and rebased the patch.\n\nProblem statement:\n\nEvery time we have to write out a page of pg_commit_ts, pg_multixact\nor pg_xact due to cache pressure, we immediately call fsync(). This\nruns serially in the recovery process, and it's quite bad for\npg_commit_ts, because we need to dump out a page for every ~800\ntransactions (track_commit_timestamp is not enabled by default). 
If\nwe ask the checkpointer to do it, it collapses the 2048 fsync calls\nfor each SLRU segment into one, and the kernel can write out the data\nwith larger I/Os, maybe even ahead of time, and update the inode only\nonce.\n\nExperiment:\n\nRun crash recovery for 1 million pgbench transactions:\n\n  postgres -D pgdata \\\n    -c synchronous_commit=off \\\n    -c track_commit_timestamp=on \\\n    -c max_wal_size=10GB \\\n    -c checkpoint_timeout=60min\n\n  # in another shell\n  pgbench -i -s10 postgres\n  psql postgres -c checkpoint\n  pgbench -t1000000 -Mprepared postgres\n  killall -9 postgres\n\n  # save the crashed pgdata dir for repeated experiments\n  mv pgdata pgdata-save\n\n  # now run experiments like this and see how long recovery takes\n  rm -fr pgdata\n  cp -r pgdata-save pgdata\n  postgres -D pgdata\n\nWhat I see on a system that has around 2.5ms latency for fsync:\n\n  master:  16.83 seconds\n  patched:  4.00 seconds\n\nIt's harder to see it without commit timestamps enabled since we only\nneed to flush a pg_xact page every 32k transactions (and multixacts\nare more complicated to test), but you can still see the effect. With\n8x more transactions to make it clearer what's going on, I could\nmeasure a speedup of around 6% from this patch, which I suppose scales\nup fairly obviously with storage latency (every million transactions =\nat least 30 fsync stalls, so you can multiply that by your fsync\nlatency and work out how much time your recovery process will be\nasleep at the wheel instead of applying your records).\n\n From a syscall overhead point of view, it's a bit unfortunate that we\nopen and close SLRU segments every time we write, but it's probably\nnot really enough to complain about... except for the (small) risk of\nan inode dropping out of kernel caches in the time between closing it\nand the checkpointer opening it. 
Hmm.", "msg_date": "Tue, 4 Aug 2020 18:02:29 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Tue, Aug 4, 2020 at 6:02 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> ... speedup of around 6% ...\n\nI did some better testing. OS: Linux, storage: consumer SSD. I\nrepeatedly ran crash recovery on 3.3GB worth of WAL generated with 8M\npgbench transactions. I tested 3 different builds 7 times each and\nused \"ministat\" to compare the recovery times. It told me that:\n\n* Master is around 11% faster than last week before commit c5315f4f\n\"Cache smgrnblocks() results in recovery.\"\n* This patch gives a similar speedup, bringing the total to around 25%\nfaster than last week (the time is ~20% less, the WAL processing speed\nis ~1.25x).\n\nMy test fit in RAM and was all cached. With the patch, the recovery\nprocess used 100% of a single core the whole time and stayed on that\ncore and the variance is low, but in the other builds it hovered\naround 90% and hopped around as it kept getting rescheduled and the\nvariance was higher.\n\nOf course, SLRU fsyncs aren't the only I/O stalls in a real system;\namong others, there are also reads from faulting in referenced pages\nthat don't have full page images in the WAL. 
I'm working on that\nseparately, but that's a tad more complicated than this stuff.\n\nAdded to commit fest.\n\n=== ministat output showing recovery times in seconds ===\n\nx patched.dat\n+ master.dat\n* lastweek.dat\n+------------------------------------------------------------------------------+\n| * |\n| x + * |\n|x x xx + + ++ + + * **** |\n| |AM| |_____AM____| |_____A_M__||\n+------------------------------------------------------------------------------+\n N Min Max Median Avg Stddev\nx 7 38.655 39.406 39.218 39.134857 0.25188849\n+ 7 42.128 45.068 43.958 43.815286 0.91387758\nDifference at 95.0% confidence\n 4.68043 +/- 0.780722\n 11.9597% +/- 1.99495%\n (Student's t, pooled s = 0.670306)\n* 7 47.187 49.404 49.203 48.904286 0.76793483\nDifference at 95.0% confidence\n 9.76943 +/- 0.665613\n 24.9635% +/- 1.70082%\n (Student's t, pooled s = 0.571477)\n\n\n", "msg_date": "Wed, 5 Aug 2020 18:00:17 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Wed, Aug 5, 2020 at 2:01 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> * Master is around 11% faster than last week before commit c5315f4f\n> \"Cache smgrnblocks() results in recovery.\"\n> * This patch gives a similar speedup, bringing the total to around 25%\n> faster than last week (the time is ~20% less, the WAL processing speed\n> is ~1.25x).\n\nDang, that's pretty nice, especially for the relatively small amount\nof code that it seems to require.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Aug 2020 10:44:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Sat, Aug 8, 2020 at 2:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Aug 5, 2020 at 2:01 AM Thomas Munro 
<thomas.munro@gmail.com> wrote:\n> > * Master is around 11% faster than last week before commit c5315f4f\n> > \"Cache smgrnblocks() results in recovery.\"\n> > * This patch gives a similar speedup, bringing the total to around 25%\n> > faster than last week (the time is ~20% less, the WAL processing speed\n> > is ~1.25x).\n>\n> Dang, that's pretty nice, especially for the relatively small amount\n> of code that it seems to require.\n\nYeah, the combined effect of these two patches is better than I\nexpected. To be clear though, I was only measuring the time between\nthe \"redo starts at ...\" and \"redo done at ...\" messages, since I've\nbeen staring at the main recovery code, but there are also some more\nfsyncs before (SyncDataDirectory()) and after (RemoveOldXlogFiles())\nthat are unaffected. I think it's probably possible to do something\nabout those too, but that's another topic.\n\nI spotted a small problem: if the transaction ID wrap all the way\naround between checkpoints, then you might have cancelled requests for\na removed SLRU segment from the previous epoch, so we'd better\nuncancel them if we see that. That's a one line fix, done in the\nattached. I also adjusted the commit message to be a little clearer\n(this work deferment/collapsing scheme works in crash recovery too,\nnot just when there is a checkpointer process to hand the work to).", "msg_date": "Wed, 12 Aug 2020 18:06:40 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Wed, Aug 12, 2020 at 6:06 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> [patch]\n\nBitrot, rebased, no changes.\n\n> Yeah, the combined effect of these two patches is better than I\n> expected. 
To be clear though, I was only measuring the time between\n> the \"redo starts at ...\" and \"redo done at ...\" messages, since I've\n> been staring at the main recovery code, but there are also some more\n> fsyncs before (SyncDataDirectory()) and after (RemoveOldXlogFiles())\n> that are unaffected. I think it's probably possible to do something\n> about those too, but that's another topic.\n\n... and of course the end-of-recovery checkpoint; in my tests this\nwasn't materially changed since there isn't actually very much CLOG,\nit's just that we avoided syncing it block at a time and getting\nrescheduled. FWIW I put a very simple test here:\nhttps://github.com/macdice/redo-bench, YMMV.", "msg_date": "Thu, 13 Aug 2020 15:42:06 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Wed, Aug 12, 2020 at 6:06 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> [patch]\n\nHi Thomas / hackers,\n\nI just wanted to help testing this patch (defer SLRU fsyncs during recovery) and also faster compactify_tuples() patch [2] as both are related to the WAL recovery performance in which I'm interested in. This is my first message to this mailing group so please let me know also if I should adjust testing style or formatting.\n\nWith both of those patches applied:\nmake check -> Passes\nmake check-world -> Passes\nmake standbycheck (section \"Testing Hot Standby\" from docs) -> Passes\nThere wasn't a single segfault or postmaster crash during the tests.\nReview of the patches itself: I'm not qualified to review the PostgreSQL internals.\n\nI've used redo-bench scripts [1] by Thomas to measure the performance effect (this approach simplifies testing and excludes network jittering effects): 1st column is redo start->end timing, 2nd is redo end -> end of checkpointing timing before opening the DB for reads. 
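The "redo starts at" / "redo done at" interval can be pulled out of the server log mechanically instead of by eye. A rough Python sketch; the timestamp prefix below is an assumed log_line_prefix shape and may need adjusting for a real log:

```python
import re
from datetime import datetime

# Hypothetical log excerpt; real lines depend on log_line_prefix settings.
LOG = """\
2020-08-13 15:40:01.123 NZST [1234] LOG:  redo starts at 0/2000028
2020-08-13 15:40:40.456 NZST [1234] LOG:  redo done at 0/C8000110
"""

def redo_seconds(log_text):
    """Return seconds elapsed between 'redo starts' and 'redo done'."""
    times = {}
    for kind in ("redo starts", "redo done"):
        m = re.search(r"^(\S+ \S+) \S+ \[\d+\] LOG:  " + re.escape(kind) + r" at",
                      log_text, re.M)
        if not m:
            raise ValueError("missing '%s' line" % kind)
        times[kind] = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
    return (times["redo done"] - times["redo starts"]).total_seconds()

print(redo_seconds(LOG))  # 39.333
```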
I've conducted 2-3 separate tests that show benefits of those patches depending on the workload:\n- Handing SLRU sync work over to the checkpointer: in my testing it accelerates WAL recovery performance on slower / higher latency storage by ~20%\n- Faster sort in compactify_tuples(): in my testing it accelerates WAL recovery performance for HOT updates also by ~20%\n\nTEST1: workload profile test as per standard TPC-B pgbench -c8 -j8, with majority of records in WAL stream being Heap/HOT_UPDATE:\n\nxvda, master @ a3c66de6c5e1ee9dd41ce1454496568622fb7712 (24/08/2020) - baseline:\n72.991, 0.919\n73.688, 1.027\n72.228, 1.032\n\nxvda, same master with, v2-0001-Defer-flushing-of-SLRU-files.patch\n72.271, 0.857\n72.717, 0.748\n72.705, 0.81\n\nxvda, same master with, v2-0001-Defer-flushing-of-SLRU-files.patch, fsync=off\n72.49, 0.103\n74.069, 0.102\n73.368, 0.102\n\nNVMe, same master with, v2-0001-Defer-flushing-of-SLRU-files.patch\n70.312, 0.22\n70.615, 0.201\n69.739, 0.194\n\nNVMe, same master with, v2-0001-Defer-flushing-of-SLRU-files.patch, fsync=off\n69.686, 0.101\n70.601, 0.102\n70.042, 0.101\n\nAs Thomas stated, in the standard pgbench workload profile there is compactify_tuples()->pg_qsort() overhead visible on the recovery side. So this is where the 2nd patch helps:\n\nNVMe, same master with, v2-0001-Defer-flushing-of-SLRU-files.patch and compactify_tuples()->pg_qsort() performance improvement\n58.85, 0.296\n58.605, 0.269\n58.603, 0.277\n\nNVMe, same master with, v2-0001-Defer-flushing-of-SLRU-files.patch and compactify_tuples()->pg_qsort() performance improvement, fsync=off\n58.618, 0.103\n57.693, 0.101\n58.779, 0.111\n\nIn the final case the top profile is as follows, still related to the sorting but, as I understand it, in a more optimal way:\n\n 26.68% postgres postgres [.] 
qsort_itemoff\n ---qsort_itemoff\n |--14.17%--qsort_itemoff\n | |--10.96%--compactify_tuples\n | | PageRepairFragmentation\n | | heap2_redo\n | | StartupXLOG\n | --3.21%--qsort_itemoff\n | --3.10%--compactify_tuples\n | PageRepairFragmentation\n | heap2_redo\n | StartupXLOG\n --12.51%--compactify_tuples\n PageRepairFragmentation\n heap2_redo\n StartupXLOG\n\n 8.38% postgres libc-2.17.so [.] __memmove_ssse3_back\n ---__memmove_ssse3_back\n compactify_tuples\n PageRepairFragmentation\n heap2_redo\n StartupXLOG\n\n 6.51% postgres postgres [.] hash_search_with_hash_value\n ---hash_search_with_hash_value\n |--3.62%--smgropen\n | |--2.17%--XLogReadBufferExtended\n | | --1.76%--XLogReadBufferForRedoExtended\n | | --0.93%--heap_xlog_update\n | --1.45%--ReadBufferWithoutRelcache\n | XLogReadBufferExtended\n | --1.34%--XLogReadBufferForRedoExtended\n | --0.72%--heap_xlog_update\n --2.69%--BufTableLookup\n ReadBuffer_common\n ReadBufferWithoutRelcache\n XLogReadBufferExtended\n --2.48%--XLogReadBufferForRedoExtended\n |--1.34%--heap2_redo\n | StartupXLOG\n --0.83%--heap_xlog_update\n\n\nSo to sum, HOT update-like workload profile tends to be CPU bound on single process recovery side. 
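For readers unfamiliar with compactify_tuples(): it sorts the live item pointers by tuple offset so that tuples can be slid toward the end of the page without overwriting each other, leaving one contiguous hole of free space. A toy Python model of that idea (the real code works on 8kB heap pages; this only shows the shape of the algorithm):

```python
PAGE_SIZE = 128  # toy page, far smaller than PostgreSQL's 8kB

def compactify(page, line_pointers):
    """page: bytearray; line_pointers: list of (offset, length) live tuples.

    Moves tuples to the end of the page, highest offset first, and returns
    the updated line pointers.  Mirrors the idea, not the real implementation.
    """
    new_lp = {}
    upper = PAGE_SIZE
    # Sort by current offset, descending, so moves never clobber live data.
    for idx, (off, length) in sorted(enumerate(line_pointers),
                                     key=lambda e: e[1][0], reverse=True):
        upper -= length
        page[upper:upper + length] = page[off:off + length]
        new_lp[idx] = (upper, length)
    return [new_lp[i] for i in range(len(line_pointers))]

page = bytearray(PAGE_SIZE)
lps = [(40, 8), (80, 8), (112, 8)]              # three tuples with gaps
for i, (off, ln) in enumerate(lps):
    page[off:off + ln] = bytes([65 + i]) * ln   # b'A'*8, b'B'*8, b'C'*8

new_lps = compactify(page, lps)
print(new_lps)  # [(104, 8), (112, 8), (120, 8)] - contiguous at page end
```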
Even slow storage (like xvda) was not the bottleneck here as I've used already hot stuff from VFS cache.\n\nTEST2: The second suite of tests used append-only workload profile (the same amount of transactions, using pgbench -c8 -j8 -f insert_only.sql -n -t 1000000), however with simpler structure:\nCREATE TABLE t (c1 uuid NOT NULL, c2 uuid NOT NULL, c3 integer NOT NULL, c4 integer NOT NULL); \nCREATE INDEX i1 ON t USING btree (c1);\nCREATE INDEX i2 ON t USING btree (c2);\nCREATE INDEX i3 ON t USING btree (c3);\nCREATE EXTENSION \"uuid-ossp\"\n\nand customized script for pgbench, just this: BEGIN; INSERT INTO t (c1,c2,c3,c4) values ( uuid_generate_v4(), uuid_generate_v4(), round(5000*random()), 500); END;\n\nMajority of WAL records being Btree/INSERT_LEAF (60%), results are following: \n\nxvda, master @ a3c66de6c5e1ee9dd41ce1454496568622fb7712 (24/08/2020) - baseline:\n120.732, 5.275\n120.608, 5.902\n120.685, 5.872\n\nxvda, same master, with v2-0001-Defer-flushing-of-SLRU-files.patch\n99.061, 7.071\n99.472, 6.386\n98.036, 5.994\n\nxvda, same master, with v2-0001-Defer-flushing-of-SLRU-files.patch, fsync=off\n102.838, 0.136\n102.711, 0.099\n103.093, 0.0970001\n\nNVMe, master @ a3c66de6c5e1ee9dd41ce1454496568622fb7712 (24/08/2020) - baseline:\n96.46, 0.405\n96.121, 0.405\n95.951, 0.402\n\nNVMe, same master, with v2-0001-Defer-flushing-of-SLRU-files.patch\n94.124, 0.387\n96.61, 0.416\n94.624, 0.451\n\nNVMe, same master, with v2-0001-Defer-flushing-of-SLRU-files.patch, fsync=off\n95.401, 0.0969999\n95.028, 0.099\n94.632, 0.0970001\n\nSo apparently the v2-0001-Defer-flushing-of-SLRU-files helps in my case on higher latency storage.\n\nThe append-only bottleneck appears to be limited by syscalls/s due to small block size even with everything in FS cache (but not in shared buffers, please compare with TEST1 as there was no such bottleneck at all):\n\n 29.62% postgres [kernel.kallsyms] [k] copy_user_enhanced_fast_string\n ---copy_user_enhanced_fast_string\n 
|--17.98%--copyin\n[..]\n | __pwrite_nocancel\n | FileWrite\n | mdwrite\n | FlushBuffer\n | ReadBuffer_common\n | ReadBufferWithoutRelcache\n | XLogReadBufferExtended\n | XLogReadBufferForRedoExtended\n | --17.57%--btree_xlog_insert\n | btree_redo\n | StartupXLOG\n |\n --11.64%--copyout\n[..]\n __pread_nocancel\n --11.44%--FileRead\n mdread\n ReadBuffer_common\n ReadBufferWithoutRelcache\n XLogReadBufferExtended\n XLogReadBufferForRedoExtended\n --11.34%--btree_xlog_insert\n btree_redo\n StartupXLOG\n\n 5.62% postgres postgres [.] hash_search_with_hash_value\n ---hash_search_with_hash_value\n |--1.74%--smgropen\n[..]\n\nThe number of syscalls/s topped out at ~100k/s @ 8kB for each of read & write respectively (it's logical I/O as everything fits in VFS cache, nearly no physical I/O). It was also visible during the test that the startup/recovering process spent 60% of its time in %sys in such conditions. As there was no sorting visible in the profiler, I did not test the workload with the compactify_tuples()->pg_qsort() performance improvement here, although from basic runs it appears it did not introduce any degradation.\n\nTEST2b: Quite frankly, at first glance I did not understand why btree_xlog_insert()->ReadBuffer_common() would require a FlushBuffer() that would write, until I bumped shared buffers 128MB -> 24GB, as it might have been flushing dirty buffers which caused those pwrite()s - it's not evident as a direct call from ReadBuffer*() in the code (inlining? any idea how to monitor on a standby the cycling of dirty buffers during recovery when there is no bgwriter yet?). Still, I wanted to eliminate storage and the VFS cache as bottlenecks. 
This led me to results like below (with fsync=on, also defer patch, shared_buffers=24GB to eliminate VFS cache lookups):\n27.341, 0.858\n27.26, 0.869\n26.959, 0.86\n\nTurning on/off the defer SLRU patch and/or fsync doesn't seem to make any difference, so if anyone is curious, the next set of append-only bottlenecks looks like below:\n\n 14.69% postgres postgres [.] hash_search_with_hash_value\n ---hash_search_with_hash_value\n |--9.80%--BufTableLookup\n | ReadBuffer_common\n | ReadBufferWithoutRelcache\n | XLogReadBufferExtended\n | XLogReadBufferForRedoExtended\n | |--7.76%--btree_xlog_insert\n | | btree_redo\n | | StartupXLOG\n | --1.63%--heap_xlog_insert\n --4.90%--smgropen\n |--2.86%--ReadBufferWithoutRelcache\n | XLogReadBufferExtended\n | |--2.04%--XLogReadBufferForRedoExtended\n | | |--1.22%--btree_xlog_insert\n | | | btree_redo\n | | | StartupXLOG\n | | --0.82%--heap_xlog_insert\n | --0.82%--XLogRecordPageWithFreeSpace\n | heap_xlog_insert\n --2.04%--XLogReadBufferExtended\n XLogReadBufferForRedoExtended\n --1.63%--btree_xlog_insert\n btree_redo\n StartupXLOG\n\n 7.76% postgres libc-2.17.so [.] __memmove_ssse3_back\n ---__memmove_ssse3_back\n PageAddItemExtended\n btree_xlog_insert\n btree_redo\n StartupXLOG\n\nStill, the result seems to be nice, as it is 10^6 (trx) / 27s = ~34k TPS on the recovery side (~3000MB of WAL/~27s = ~111MB/s without FPW), provided there were no I/O overheads and my assumptions hold. 
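The FlushBuffer() calls seen earlier under ReadBuffer_common() were evictions: with a too-small pool, bringing in a new page first has to write out a dirty victim. A toy LRU model in Python (assumptions: recovery dirties every page it reads, and each dirty eviction counts as one flush) shows why the flush count collapses once the working set fits:

```python
from collections import OrderedDict

def replay(page_refs, pool_size):
    """Replay a stream of dirty page references through an LRU buffer pool.

    Returns how many dirty evictions ('flushes') happen along the way.
    """
    pool = OrderedDict()  # page -> dirty flag
    flushes = 0
    for page in page_refs:
        if page in pool:
            pool.move_to_end(page)
        else:
            if len(pool) >= pool_size:
                _, dirty = pool.popitem(last=False)  # evict least recently used
                if dirty:
                    flushes += 1  # stands in for a FlushBuffer()/pwrite()
            pool[page] = True  # recovery dirties the page it just read
    return flushes

# Append-only btree inserts touch an ever-growing set of leaf pages:
# 1000 distinct pages, 4 consecutive hits each.
refs = [i // 4 for i in range(4000)]
print(replay(refs, pool_size=16), replay(refs, pool_size=2000))  # 984 0
```

With the tiny pool almost every new page evicts a dirty one; with a pool larger than the page set, no flushes happen in the replay loop at all.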
For full picture, compare ratio generation of \"primary\" data with INSERTs I was able to get without any special tuning:\n- ~29k TPS of INSERTs-only/COMMITs with pgbench -c8\n- ~49k TPS of INSERTs-only/COMMITs with pgbench -c16\nso the WAL (single process) recovery code in 14master seems to have like 70% of performance of fairly low-end primary 16vCPU DB in above append-only conditions.\n\nSpecs: Linux 4.14 kernel, ext4 filesystems (data=ordered,noatime), 1s8c16t Xeon CPU E5-2686 v4 @ 2.30GHz, 128GB RAM, gcc 7.2.1, CFLAGS set by ./configure to \"-O2\", test on default/tiny shared_buffers until last test.\n\nxvda AKA slow storage: root file system, single thread tests:\n\tdd if=/dev/zero of=test bs=1M count=1000 oflag=direct ==> 157 MB/s\n\tdd if=/dev/zero of=test bs=8k count=10000 oflag=direct => 17.2 MB/s\n\nNVMe: striped VG consisting of 2x NVMes devices with much lower latency\n\tdd if=/dev/zero of=test bs=1M count=1000 oflag=direct ==> 1.9 GB/s or maybe even more\n\tdd if=/dev/zero of=test bs=8k count=10000 oflag=direct => 141 MB/s\n\n-Jakub Wartak.\n\n[1] - https://github.com/macdice/redo-bench/\n[2] - https://commitfest.postgresql.org/29/2687/\n\n\n", "msg_date": "Tue, 25 Aug 2020 09:16:03 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": false, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Tue, Aug 25, 2020 at 9:16 PM Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> I just wanted to help testing this patch (defer SLRU fsyncs during recovery) and also faster compactify_tuples() patch [2] as both are related to the WAL recovery performance in which I'm interested in. 
This is my first message to this mailing group so please let me know also if I should adjust testing style or formatting.\n\nHi Jakub,\n\nThanks very much for these results!\n\n> - Handing SLRU sync work over to the checkpointer: in my testing it accelerates WAL recovery performance on slower / higer latency storage by ~20%\n\nWow. Those fsyncs must have had fairly high latency (presumably due\nto queuing behind other write back activity).\n\n> - Faster sort in compactify_tuples(): in my testing it accelerates WAL recovery performance for HOT updates also by ~20%\n\nNice.\n\n> In the last final case the top profile is as follows related still to the sorting but as I understand in much optimal way:\n>\n> 26.68% postgres postgres [.] qsort_itemoff\n> ---qsort_itemoff\n> |--14.17%--qsort_itemoff\n> | |--10.96%--compactify_tuples\n> | | PageRepairFragmentation\n> | | heap2_redo\n> | | StartupXLOG\n> | --3.21%--qsort_itemoff\n> | --3.10%--compactify_tuples\n> | PageRepairFragmentation\n> | heap2_redo\n> | StartupXLOG\n> --12.51%--compactify_tuples\n> PageRepairFragmentation\n> heap2_redo\n> StartupXLOG\n\nI wonder if there is something higher level that could be done to\nreduce the amount of compaction work required in the first place, but\nin the meantime I'm very happy if we can improve the situation so much\nwith such a microscopic improvement that might eventually benefit\nother sorting stuff...\n\n> 8.38% postgres libc-2.17.so [.] 
__memmove_ssse3_back\n> ---__memmove_ssse3_back\n> compactify_tuples\n> PageRepairFragmentation\n> heap2_redo\n\nHmm, I wonder if this bit could go teensy bit faster by moving as many\nadjacent tuples as you can in one go rather than moving them one at a\ntime...\n\n> The append-only bottleneck appears to be limited by syscalls/s due to small block size even with everything in FS cache (but not in shared buffers, please compare with TEST1 as there was no such bottleneck at all):\n>\n> 29.62% postgres [kernel.kallsyms] [k] copy_user_enhanced_fast_string\n> ---copy_user_enhanced_fast_string\n> |--17.98%--copyin\n> [..]\n> | __pwrite_nocancel\n> | FileWrite\n> | mdwrite\n> | FlushBuffer\n> | ReadBuffer_common\n> | ReadBufferWithoutRelcache\n> | XLogReadBufferExtended\n> | XLogReadBufferForRedoExtended\n> | --17.57%--btree_xlog_insert\n\nTo move these writes out of recovery's way, we should probably just\nrun the bgwriter process during crash recovery. I'm going to look\ninto that.\n\nThe other thing is of course the checkpointer process, and our\nend-of-recovery checkpoint. I was going to suggest it should be\noptional and not done by the recovery process itself, which is why\nsome earlier numbers I shared didn't include the end-of-recovery\ncheckpoint, but then I realised it complicated the numbers for this\nlittle patch and, anyway, it'd be a good idea to open that can of\nworms separately...\n\n> | btree_redo\n> | StartupXLOG\n> |\n> --11.64%--copyout\n> [..]\n> __pread_nocancel\n> --11.44%--FileRead\n> mdread\n> ReadBuffer_common\n> ReadBufferWithoutRelcache\n> XLogReadBufferExtended\n> XLogReadBufferForRedoExtended\n\nFor these reads, the solution should be WAL prefetching, but the patch\nI shared for that (and will be updating soon) is just one piece of the\npuzzle, and as it stands it actually *increases* the number of\nsyscalls by adding some posix_fadvise() calls, so ... 
erm, for an\nall-in-kernel-cache-already workload like what you profiled there it\ncan only make things worse on that front. But... when combined with\nAndres's work-in-progress AIO stuff, a whole bunch of reads can be\nsubmitted with a single system call ahead of time and then the results\nare delivered directly into our buffer pool by kernel threads or\nhardware DMA, so we'll not only avoid going off CPU during recovery\nbut we'll also reduce the system call count.\n\n> Turning on/off the defer SLRU patch and/or fsync doesn't seem to make any difference, so if anyone is curious the next sets of append-only bottlenecks is like below:\n>\n> 14.69% postgres postgres [.] hash_search_with_hash_value\n> ---hash_search_with_hash_value\n> |--9.80%--BufTableLookup\n> | ReadBuffer_common\n> | ReadBufferWithoutRelcache\n> | XLogReadBufferExtended\n> | XLogReadBufferForRedoExtended\n\nHypothesis: Your 24GB buffer pool requires somewhere near 70MB of\nbuffer mapping table (huh, pg_shmem_allocations doesn't show that\ncorrectly), so it doesn't fit into any level of your memory cache\nhierarchy and it's super random access, so every buffer lookup is\ncosting you a ~60-100ns memory stall. Maybe?\n\nIf that's the reason for this showing up in your profile, I think I\ncould probably add a little cache line prefetch phase to the WAL\nprefetch patch to fix it. 
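The arithmetic behind that hypothesis is easy to sanity-check. Treat the bytes-per-entry figure below as an assumption (a BufferTag plus a buffer id is roughly that size, before dynahash overhead):

```python
def mapping_table_mb(shared_buffers_gb, block_size=8192, bytes_per_entry=24):
    """Rough size of the buffer mapping table for a given buffer pool.

    bytes_per_entry is an assumed payload size, ignoring hash table overhead.
    """
    n_buffers = shared_buffers_gb * 2**30 // block_size
    return n_buffers, n_buffers * bytes_per_entry / 2**20

n, mb = mapping_table_mb(24)
print(n, round(mb))  # 3145728 72
```

So a 24GB pool means ~3.1 million buffer mapping entries and on the order of 70MB of table, comfortably bigger than a 46080K L3 cache, and the lookups are effectively random.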
I've actually tried prefetching the buffer\nmapping cache lines before, without success, but never in recovery.\nI'll make a note to look into that.\n\n\n", "msg_date": "Wed, 26 Aug 2020 15:58:14 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "Hi,\n\nOn 2020-08-26 15:58:14 +1200, Thomas Munro wrote:\n> > --12.51%--compactify_tuples\n> > PageRepairFragmentation\n> > heap2_redo\n> > StartupXLOG\n> \n> I wonder if there is something higher level that could be done to\n> reduce the amount of compaction work required in the first place, but\n> in the meantime I'm very happy if we can improve the situation so much\n> with such a microscopic improvement that might eventually benefit\n> other sorting stuff...\n\nAnother approach could be to not perform any sorting during recovery,\ninstead including enough information in the WAL record to avoid doing a\nfull blown PageRepairFragmentation during recovery.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Aug 2020 22:20:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On 2020-Aug-25, Andres Freund wrote:\n\n> Hi,\n> \n> On 2020-08-26 15:58:14 +1200, Thomas Munro wrote:\n> > > --12.51%--compactify_tuples\n> > > PageRepairFragmentation\n> > > heap2_redo\n> > > StartupXLOG\n> > \n> > I wonder if there is something higher level that could be done to\n> > reduce the amount of compaction work required in the first place, but\n> > in the meantime I'm very happy if we can improve the situation so much\n> > with such a microscopic improvement that might eventually benefit\n> > other sorting stuff...\n> \n> Another approach could be to not perform any sorting during recovery,\n> instead including enough information in the WAL record to avoid doing a\n> full blown PageRepairFragmentation 
during recovery.\n\nHmm, including the sorted ItemId array in the WAL record might make\nsense to alleviate the qsort work ...\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 26 Aug 2020 14:09:50 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On 2020-Aug-25, Jakub Wartak wrote:\n\n> Turning on/off the defer SLRU patch and/or fsync doesn't seem to make\n> any difference, so if anyone is curious the next sets of append-only\n> bottlenecks is like below:\n> \n> 14.69% postgres postgres [.] hash_search_with_hash_value\n> ---hash_search_with_hash_value\n> |--9.80%--BufTableLookup\n> | ReadBuffer_common\n> | ReadBufferWithoutRelcache\n> | XLogReadBufferExtended\n> | XLogReadBufferForRedoExtended\n> | |--7.76%--btree_xlog_insert\n> | | btree_redo\n> | | StartupXLOG\n> | --1.63%--heap_xlog_insert\n> --4.90%--smgropen\n> |--2.86%--ReadBufferWithoutRelcache\n\nLooking at an earlier report of this problem I was thinking whether it'd\nmake sense to replace SMgrRelationHash with a simplehash table; I have a\nhalf-written patch for that, but I haven't completed that work.\nHowever, in the older profile things were looking different, as\nhash_search_with_hash_value was taking 35.25%, and smgropen was 33.74%\nof it. BufTableLookup was also there but only 1.51%. 
So I'm not so\nsure now that that'll pay off as clearly as I had hoped.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 26 Aug 2020 14:15:32 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Thu, Aug 27, 2020 at 6:15 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > --4.90%--smgropen\n> > |--2.86%--ReadBufferWithoutRelcache\n>\n> Looking at an earlier report of this problem I was thinking whether it'd\n> make sense to replace SMgrRelationHash with a simplehash table; I have a\n> half-written patch for that, but I haven't completed that work.\n> However, in the older profile things were looking different, as\n> hash_search_with_hash_value was taking 35.25%, and smgropen was 33.74%\n> of it. BufTableLookup was also there but only 1.51%. So I'm not so\n> sure now that that'll pay off as clearly as I had hoped.\n\nRight, my hypothesis requires an uncacheably large buffer mapping\ntable, and I think smgropen() needs a different explanation because\nit's not expected to be as large or as random, at least not with a\npgbench workload. I think the reasons for a profile with a smgropen()\nshowing up so high, and in particular higher than BufTableLookup(),\nmust be:\n\n1. We call smgropen() twice for every call to BufTableLookup(). Once\nin XLogReadBufferExtended(), and then again in\nReadBufferWithoutRelcache().\n2. We also call it for every block forced out of the buffer pool, and\nin recovery that has to be done by the recovery loop.\n3. 
We also call it for every block in the buffer pool during the\nend-of-recovery checkpoint.\n\nNot sure but the last two might perform worse due to proximity to\ninterleaving pwrite() system calls (just a thought, not investigated).\nIn any case, I'm going to propose we move those things out of the\nrecovery loop, in a new thread.\n\n\n", "msg_date": "Thu, 27 Aug 2020 14:04:51 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "Hi Thomas / hackers,\n\n>> The append-only bottleneck appears to be limited by syscalls/s due to small block size even with everything in FS cache (but not in shared buffers, please compare with TEST1 as there was no such bottleneck at all):\n>>\n>> 29.62% postgres [kernel.kallsyms] [k] copy_user_enhanced_fast_string\n>> ---copy_user_enhanced_fast_string\n>> |--17.98%--copyin\n>> [..]\n>> | __pwrite_nocancel\n>> | FileWrite\n>> | mdwrite\n>> | FlushBuffer\n>> | ReadBuffer_common\n>> | ReadBufferWithoutRelcache\n>> | XLogReadBufferExtended\n>> | XLogReadBufferForRedoExtended\n>> | --17.57%--btree_xlog_insert\n>\n> To move these writes out of recovery's way, we should probably just\n> run the bgwriter process during crash recovery. I'm going to look\n> into that.\n\nSounds awesome. Also as this thread is starting to derail the SLRU fsync topic - maybe we should change subject? However, to add some data to the separate bgwriter: on 14master (already with lseek() caching, SLRU fsyncs out of way, better sorting), I've measured the same configuration as last time with still the same append-only WAL workload on NVMe and compared with various shared_buffers settings (and buffers description sizing from pg_shmem_allocations which as You stated is wrongly reported(?) 
which I'm stating only for reference just in case):\n\nshared_buffers=128MB buffers_desc=1024kB 96.778, 0.438 [a]\nshared_buffers=256MB buffers_desc=2048kB 62.755, 0.577 [a]\nshared_buffers=512MB buffers_desc=4096kB 33.167, 0.62 [a]\nshared_buffers=1GB buffers_desc=8192kB 27.303, 0.929\nshared_buffers=4GB buffers_desc=32MB 27.185, 1.166\nshared_buffers=8GB buffers_desc=64MB 27.649, 1.088 \nshared_buffers=16GB buffers_desc=128MB 27.584, 1.201 [b]\nshared_buffers=32GB buffers_desc=256MB 32.314, 1.171 [b]\nshared_buffers=48GB buffers_desc=384 MB 31.95, 1.217\nshared_buffers=64GB buffers_desc=512 MB 31.276, 1.349\nshared_buffers=72GB buffers_desc=576 MB 31.925, 1.284\nshared_buffers=80GB buffers_desc=640 MB 31.809, 1.413\n\nThe amount of WAL to be replayed was ~2.8GB. To me it looks like that\na) flushing dirty buffers by StartupXLog might be a real problem but please read-on.\nb) there is very low impact by this L2/L3 hypothesis you mention (?), but it's not that big and it's not degrading linearly as one would expect (??) L1d:L1d:L2:L3 cache sizes on this machine are as follows on this machine: 32K/32K/256K/46080K. Maybe the DB size is too small.\n\nI've double-checked that in condition [a] (shared_buffers=128MB) there was a lot of FlushBuffer() invocations per second (perf stat -e probe_postgres:FlushBuffer -I 1000), e.g:\n# time counts unit events\n 1.000485217 79,494 probe_postgres:FlushBuffer\n 2.000861366 75,305 probe_postgres:FlushBuffer\n 3.001263163 79,662 probe_postgres:FlushBuffer\n 4.001650795 80,317 probe_postgres:FlushBuffer\n 5.002033273 79,951 probe_postgres:FlushBuffer\n 6.002418646 79,993 probe_postgres:FlushBuffer\nwhile at 1GB shared_buffers it sits nearly at zero all the time. So there is like 3x speed-up possible when StartupXLog wouldn't have to care too much about dirty buffers in the critical path (bgwriter would as you say?) at least in append-only scenarios. But ... 
I've checked some real systems (even older versions of PostgreSQL not doing that much of replication, and indeed it's e.g. happening 8-12k/s FlushBuffer's() and shared buffers are pretty huge (>100GB, 0.5TB RAM) but they are *system-wide* numbers, work is really performed by bgwriter not by startup/recovering as in this redo-bench case when DB is open for reads and replicating-- so it appears that this isn't optimization for hot standby case , but for the DB-closed startup recovery/restart/disaster/PITR scenario).\n\nAs for the 24GB shared_buffers scenario where dirty buffers are not at all a problem with given top profile (output trimmed), again as expected:\n\n 13.41% postgres postgres [.] hash_search_with_hash_value\n |--8.31%--BufTableLookup <- ReadBuffer_common <- ReadBufferWithoutRelcache\n --5.11%--smgropen \n |--2.77%--XLogReadBufferExtended\n --2.34%--ReadBufferWithoutRelcache\n 7.88% postgres postgres [.] MarkBufferDirty\n\nI've tried to get cache misses ratio via PMCs, apparently on EC2 they are (even on bigger) reporting as not-supported or zeros. However interestingly the workload has IPC of 1.40 (instruction bound) which to me is strange as I would expect BufTableLookup() to be actually heavy memory bound (?) Maybe I'll try on some different hardware one day. \n\n>> __pread_nocancel\n>> --11.44%--FileRead\n>> mdread\n>> ReadBuffer_common\n>> ReadBufferWithoutRelcache\n>> XLogReadBufferExtended\n>> XLogReadBufferForRedoExtended\n>\n> For these reads, the solution should be WAL prefetching,(..) But... 
when combined with Andres's work-in-progress AIO stuff (..)\n\nYes, I've heard a thing or two about those :) I hope I'll be able to deliver some measurements sooner or later of those two together (AIO+WALprefetch).\n\n-Jakub Wartak.\n\n", "msg_date": "Thu, 27 Aug 2020 08:48:33 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": false, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "Hi Alvaro, Thomas, hackers\n\n>> 14.69% postgres postgres [.] hash_search_with_hash_value\n>> ---hash_search_with_hash_value\n>> |--9.80%--BufTableLookup\n[..]\n>> --4.90%--smgropen\n>> |--2.86%--ReadBufferWithoutRelcache\n> Looking at an earlier report of this problem I was thinking whether it'd\n> make sense to replace SMgrRelationHash with a simplehash table; I have a\n> half-written patch for that, but I haven't completed that work.\n> However, in the older profile things were looking different, as\n> hash_search_with_hash_value was taking 35.25%, and smgropen was 33.74%\n> of it. BufTableLookup was also there but only 1.51%. So I'm not so\n> sure now that that'll pay off as clearly as I had hoped.\n\nYes, quite frankly my expectation was to see hash_search_with_hash_value()<-smgropen() outcome as 1st one, but in simplified redo-bench script it's not the case. 
The original scenario was much more complex with plenty of differences (in no particular order: TB-sized DB VS ~500GB RAM -> thousands of forks, multiple tables, huge btrees, multiple INSERTs with plenty of data in VALUES() thrown as one commit, real primary->hot-standby replication [not closed DB in recovery], sorted not random UUIDs) - I'm going to try to nail down these differences and maybe I'll manage to produce a more realistic \"pgbench reproducer\" (this may take some time though).\n\n-Jakub Wartak.\n\n", "msg_date": "Thu, 27 Aug 2020 10:13:46 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": false, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Thu, Aug 27, 2020 at 8:48 PM Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> I've tried to get cache misses ratio via PMCs, apparently on EC2 they are (even on bigger) reporting as not-supported or zeros.\n\nI heard some of the counters are only allowed on their dedicated instance types.\n\n> However interestingly the workload has IPC of 1.40 (instruction bound) which to me is strange as I would expect BufTableLookup() to be actually heavy memory bound (?) Maybe I'll try on some different hardware one day.\n\nHmm, OK now you've made me go and read a bunch of Brendan Gregg blogs\nand try some experiments of my own to get a feel for this number and\nwhat it might be telling us about the cache miss counters you can't\nsee. Since I know how to generate arbitrary cache miss workloads for\nquick experiments using hash joins of different sizes, I tried that\nand noticed that when LLC misses were at 76% (bad), IPC was at 1.69\nwhich is still higher than what you're seeing. When the hash table\nwas much smaller and LLC misses were down to 15% (much better), IPC\nwas at 2.83. 
I know Gregg said[1] \"An IPC < 1.0 likely means memory\nbound, and an IPC > 1.0 likely means instruction bound\", but that's\nnot what I'm seeing here, and in his comments section that is\ndisputed. So I'm not sure your IPC of 1.40 is evidence against the\nhypothesis on its own.\n\n[1] http://www.brendangregg.com/blog/2017-05-09/cpu-utilization-is-wrong.html\n\n\n", "msg_date": "Thu, 27 Aug 2020 23:10:57 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Thu, Aug 27, 2020 at 8:48 PM Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> >> 29.62% postgres [kernel.kallsyms] [k] copy_user_enhanced_fast_string\n> >> ---copy_user_enhanced_fast_string\n> >> |--17.98%--copyin\n> >> [..]\n> >> | __pwrite_nocancel\n> >> | FileWrite\n> >> | mdwrite\n> >> | FlushBuffer\n> >> | ReadBuffer_common\n> >> | ReadBufferWithoutRelcache\n> >> | XLogReadBufferExtended\n> >> | XLogReadBufferForRedoExtended\n> >> | --17.57%--btree_xlog_insert\n> >\n> > To move these writes out of recovery's way, we should probably just\n> > run the bgwriter process during crash recovery. I'm going to look\n> > into that.\n>\n> Sounds awesome.\n\nI wrote a quick and dirty experimental patch to try that. I can't see\nany benefit from it on pgbench with default shared buffers, but maybe\nit would do better with your append test due to locality, especially\nif you can figure out how to tune bgwriter to pace itself optimally.\nhttps://github.com/macdice/postgres/tree/bgwriter-in-crash-recovery\n\n\n", "msg_date": "Fri, 28 Aug 2020 17:45:41 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "Hi Thomas, hackers,\n\n>> > To move these writes out of recovery's way, we should probably just\n>> > run the bgwriter process during crash recovery. 
I'm going to look\n>> > into that.\n>>\n>> Sounds awesome.\n>\n>I wrote a quick and dirty experimental patch to try that. I can't see\n>any benefit from it on pgbench with default shared buffers, but maybe\n>it would do better with your append test due to locality, especially\n>if you can figure out how to tune bgwriter to pace itself optimally.\n>https://github.com/macdice/postgres/tree/bgwriter-in-crash-recovery\n\nOK, so I've quickly tested those two PoCs patches together, in the conditions like below:\n- similar append-only workload by pgbench (to eliminate other already known different WAL bottlenecks: e.g. sorting),\n- 4.3GB of WAL to be applied (mostly Btree/INSERT_LEAF)\n- on same system as last time (ext4 on NVMe, 1s8c16, 4.14 kernel) \n- 14master already with SLRU fsync to checkpointer/pg_qgsort patches applied\n\nTEST bgwriterPOC1:\n- in severe dirty memory conditions (artificially simulated via small s_b here) --> so for workloads with very high FlushBuffer activity in StartupXLOG\n- with fsync=off/fpw=off by default and on NVMe (e.g. 
scenario: I want to perform some PITR as fast as I can to see how production data looked like in the past, before some user deleted some data)\n\nbaseline s_b@128MB: 140.404, 0.123 (2nd small as there is small region to checkpoint)\n\n    22.49%  postgres  [kernel.kallsyms]   [k] copy_user_enhanced_fast_string\n            ---copy_user_enhanced_fast_string\n               |--14.72%--copyin\n               |          __pwrite_nocancel\n               |          FileWrite\n               |          mdwrite\n               |          FlushBuffer\n               |          ReadBuffer_common\n               |           --14.52%--btree_xlog_insert\n                --7.77%--copyout\n                          __pread_nocancel\n                           --7.57%--FileRead\n                                     mdread\n                                     ReadBuffer_common\n     6.13%  postgres  [kernel.kallsyms]   [k] do_syscall_64\n               |--1.64%--__pwrite_nocancel\n                --1.23%--__pread_nocancel\n     3.68%  postgres  postgres            [.] 
hash_search_with_hash_value\n            ---hash_search_with_hash_value\n               |--1.02%--smgropen\n\nAfter applying:\npatch -p1 < ../0001-Run-checkpointer-and-bgworker-in-crash-recovery.patch\npatch -p1 < ../0002-Optionally-don-t-wait-for-end-of-recovery-checkpoint.patch\n\n0001+0002 s_b@128MB: similar result to above\n0001+0002 s_b@128MB: 108.871, 0.114 , bgwriter_delay = 10ms/bgwriter_lru_maxpages = 1000\n0001+0002 s_b@128MB: 85.392, 0.103 , bgwriter_delay = 10ms/bgwriter_lru_maxpages = 50000 #~390MB max?\n\n    18.40%  postgres  [kernel.kallsyms]   [k] copy_user_enhanced_fast_string\n            ---copy_user_enhanced_fast_string\n               |--17.79%--copyout\n               |          __pread_nocancel\n               |          |--16.56%--FileRead\n               |          |          mdread\n               |          |          ReadBuffer_common\n                --0.61%--copyin // WOW\n                          __pwrite_nocancel\n                          FileWrite\n                          mdwrite\n                          FlushBuffer\n                          ReadBuffer_common\n     9.20%  postgres  postgres            [.] hash_search_with_hash_value\n            ---hash_search_with_hash_value\n               |--4.70%--smgropen\n\nof course there is another WOW moment during recovery (\"61.9%\")\n\nUSER        PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND\npostgres 120935  0.9  0.0 866052  3824 ?        Ss   09:47   0:00 postgres: checkpointer\npostgres 120936 61.9  0.0 865796  3824 ?        Rs   09:47   0:22 postgres: background writer\npostgres 120937 97.4  0.0 865940  5228 ?        Rs   09:47   0:36 postgres: startup recovering 000000010000000000000089\n\nspeedup of 1.647x when dirty memory is in way. When it's not:\n\nbaseline  s_b@24000MB: 39.199, 1.448 (2x patches off)\n0001+0002 s_b@24000MB: 39.383, 1.442 , bgwriter_delay = 10ms/bgwriter_lru_maxpages = 50000 #~390MB/s max, yay\n\nthere's no regression. 
I have only one comment about those 2 WIP patches, bgwriter_lru_maxpages should be maybe called standby_bgwriter_lru_maxpages in this scenario or even more preferred there shouldn't be a maximum set during closed DB recovery scenario.\n\nTEST bgwriterPOC2a to showcase the 2nd patch which opens the DB for read-write users before the final checkpoint finishes after redo recovery. The DBA may make the decision via this parameter end_of_recovery_checkpoint_wait=off.\n- on slow storage (xvda, fsync=on) and even with high memory:\n\ns_b@24000MB: 39.043, 15.639 -- even with WAL recovery being 100% CPU bound(mostly on hash_search_with_hash_value() for Buffers/__memmove_ssse3_back), it took additional 15s to perform checkpoint before DB was open for users (it had to write 269462 buffers =~ 2GB =~ 140MB/s which is close to the xvda device speed): the complete output looks in 14master looks similar to this:\n\n1598609928.620 startup 22543 LOG:  redo done at 1/12201C88\n1598609928.624 checkpointer 22541 LOG:  checkpoint starting: end-of-recovery immediate wait\n1598609944.908 checkpointer 22541 LOG:  checkpoint complete: wrote 269462 buffers (8.6%); 0 WAL file(s) added, 0 removed, 273 recycled; write=15.145 s, sync=0.138 s, total=16.285 s; sync files=11, longest=0.133 s, average=0.012 s; distance=4468855 kB, estimate=4468855 kB\n1598609944.912 postmaster 22538 LOG:  database system is ready to accept connections\n\ns_b@24000MB: 39.96, 0 , with end_of_recovery_checkpoint_wait = off, before DB is open 15s faster \n\n1598610331.556 startup 29499 LOG:  redo done at 1/12201C88\n1598610331.559 checkpointer 29497 LOG:  checkpoint starting: immediate force\n1598610331.562 postmaster 29473 LOG:  database system is ready to accept connections\n1598610347.202 checkpointer 29497 LOG:  checkpoint complete: wrote 269462 buffers (8.6%); 0 WAL file(s) added, 0 removed, 273 recycled; write=15.092 s, sync=0.149 s, total=15.643 s; sync files=12, longest=0.142 s, average=0.012 s; distance=4468855 
kB, estimate=4468855 kB\n\nI suppose a checkpoint for large shared_buffers (hundredths of GB) might take a lot of time and this 0002 patch bypasses that. I would find it quite useful in some scenarios (e.g. testing backups, PITR recoveries, opening DB from storage snapshots / storage replication, maybe with DWH-after-crash too).\n\nTEST bgwriterPOC2b: FYI, I was also testing the the hot_standby code path -- to test if it would reduce time of starting / opening a fresh standby for read-only queries, but this parameter doesn't seem to influence that in my tests. As I've learned it's apparently much more complex to reproduce what I'm after and involves a lot of reading about LogStandbySnapshot() / standby recovery points on my side.\n\nNow, back to smgropen() hash_search_by_values() reproducer...\n\n-Jakub Wartak.\n\n\n", "msg_date": "Fri, 28 Aug 2020 12:43:52 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": false, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Sat, Aug 29, 2020 at 12:43 AM Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\n> postgres 120935 0.9 0.0 866052 3824 ? Ss 09:47 0:00 postgres: checkpointer\n> postgres 120936 61.9 0.0 865796 3824 ? Rs 09:47 0:22 postgres: background writer\n> postgres 120937 97.4 0.0 865940 5228 ? Rs 09:47 0:36 postgres: startup recovering 000000010000000000000089\n>\n> speedup of 1.647x\n\nThanks for testing! That's better than I expected. 
I guess it wasn't\nquite so good with default bgwriter settings.\n\n> I have only one comment about those 2 WIP patches, bgwriter_lru_maxpages should be maybe called standby_bgwriter_lru_maxpages in this scenario or even more preferred there shouldn't be a maximum set during closed DB recovery scenario.\n\nI wish bgwriter could auto-tune itself better, so we wouldn't need to\ncontemplate adding more settings.\n\nAs for the second patch (\"Optionally, don't wait for end-of-recovery\ncheckpoint.\"), that also looked quite useful in your test scenario:\n\n> end_of_recovery_checkpoint_wait = off, before DB is open 15s faster\n\n> I suppose a checkpoint for large shared_buffers (hundredths of GB) might take a lot of time and this 0002 patch bypasses that. I would find it quite useful in some scenarios (e.g. testing backups, PITR recoveries, opening DB from storage snapshots / storage replication, maybe with DWH-after-crash too).\n\nI suppose a third option that you might want is no checkpoint at all\n(meaning leave it to regular checkpoint scheduling), like fast\npromotion. One thing missing from the patch is that we probably need\nto log an end-of-recovery *record*, like fast promotion does. I'm a\nlittle fuzzy on the timeline stuff. I wonder if any recovery experts\nwould like to weigh in on theoretical problems that might be lurking\nhere...\n\n\n", "msg_date": "Sat, 29 Aug 2020 08:12:55 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Sat, Aug 29, 2020 at 12:43 AM Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> ... %CPU ... COMMAND\n> ... 97.4 ... postgres: startup recovering 000000010000000000000089\n\nSo, what else is pushing this thing off CPU, anyway? 
For one thing, I\nguess it might be stalling while reading the WAL itself, because (1)\nwe only read it 8KB at a time, relying on kernel read-ahead, which\ntypically defaults to 128KB I/Os unless you cranked it up, but for\nexample we know that's not enough to saturate a sequential scan on\nNVME system, so maybe it hurts here too (2) we keep having to switch\nsegment files every 16MB. Increasing WAL segment size and kernel\nreadahead size presumably help with that, if indeed it is a problem,\nbut we could also experiment with a big POSIX_FADV_WILLNEED hint for a\nfuture segment every time we cross a boundary, and also maybe increase\nthe size of our reads.\n\n\n", "msg_date": "Sat, 29 Aug 2020 09:26:42 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "Hi Thomas, hackers,\n\n>> ... %CPU ... COMMAND\n>> ... 97.4 ... postgres: startup recovering 000000010000000000000089\n> So, what else is pushing this thing off CPU, anyway? For one thing, I\n> guess it might be stalling while reading the WAL itself, because (1)\n> we only read it 8KB at a time, relying on kernel read-ahead, which\n> typically defaults to 128KB I/Os unless you cranked it up, but for\n> example we know that's not enough to saturate a sequential scan on\n> NVME system, so maybe it hurts here too (2) we keep having to switch\n> segment files every 16MB. 
Increasing WAL segment size and kernel\n> readahead size presumably help with that, if indeed it is a problem,\n> but we could also experiment with a big POSIX_FADV_WILLNEED hint for a\n> future segment every time we cross a boundary, and also maybe increase\n> the size of our reads.\n\nAll of the above (1,2) would make sense and the effects IMHO are partially possible to achieve via ./configure compile options, but from previous correspondence [1], in this particular workload it looked like it was not WAL reading, but reading random DB blocks into shared buffers: in that case I suppose it was the price of too many syscalls to the OS/VFS cache itself, as the DB was small and fully cached there - so problem (3): copy_user_enhanced_fast_string <- 17.79%--copyout (!) <- __pread_nocancel <- 16.56%--FileRead / mdread / ReadBuffer_common (!). Without some micro-optimization or some form of vectorized [batching] I/O in recovery it's a dead end when it comes to small changes. Things that come to mind for enhancing recovery:\n- preadv() - works only for 1 fd, while the WAL stream might require reading a lot of random pages into s_b (many relations/fds; even btree inserts into a single relation might put data into many 1GB [default] forks). This would only micro-optimize INSERT (pk) SELECT nextval(seq) kinds of processing on the recovery side, I suppose. Of course, provided that StartupXLOG would work in a more batched way: (a) reading a lot of blocks from WAL at once (b) then issuing preadv() to get all the DB blocks into s_b going from the same rel/fd (c) applying WAL. Sounds like a major refactor just to save syscalls :(\n- mmap() - even more unrealistic\n- IO_URING - gives a lot of promise here I think, is it even planned to be shown for PgSQL14 cycle ? 
Or it's more like PgSQL15?\n\n-Jakub Wartak\n\n[1] - https://www.postgresql.org/message-id/VI1PR0701MB6960EEB838D53886D8A180E3F6520%40VI1PR0701MB6960.eurprd07.prod.outlook.com please see profile after \"0001+0002 s_b(at)128MB: 85.392\"\n\n", "msg_date": "Mon, 31 Aug 2020 08:49:54 +0000", "msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>", "msg_from_op": false, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Mon, Aug 31, 2020 at 8:50 PM Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> - IO_URING - gives a lot of promise here I think, is it even planned to be shown for PgSQL14 cycle ? Or it's more like PgSQL15?\n\nI can't answer that, but I've played around with the prototype quite a\nbit, and thought quite a lot about how to port it to systems without\nIO_URING, and I'm just as keen to see this happen as you are.\n\nIn the meantime, from the low-hanging-fruit department, here's a new\nversion of the SLRU-fsync-offload patch. The only changes are a\ntweaked commit message, and adoption of C99 designated initialisers\nfor the function table, so { [SYNC_HANDLER_CLOG] = ... } instead of\nrelying on humans to make the array order match the enum values. If\nthere are no comments or objections, I'm planning to commit this quite\nsoon.", "msg_date": "Sat, 19 Sep 2020 17:06:10 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Sat, Sep 19, 2020 at 5:06 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> In the meantime, from the low-hanging-fruit department, here's a new\n> version of the SLRU-fsync-offload patch. The only changes are a\n> tweaked commit message, and adoption of C99 designated initialisers\n> for the function table, so { [SYNC_HANDLER_CLOG] = ... } instead of\n> relying on humans to make the array order match the enum values. 
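To illustrate the idiom for anyone following along, here's a minimal standalone sketch -- made-up handler names and integer tags standing in for the real function-pointer table, not the actual code:\n\n```c\n#include <assert.h>\n\n/* Hypothetical stand-ins for the real sync handler enum and table. */\ntypedef enum SyncRequestHandler\n{\n	SYNC_HANDLER_CLOG,\n	SYNC_HANDLER_COMMIT_TS,\n	SYNC_HANDLER_MULTIXACT_OFFSET\n} SyncRequestHandler;\n\ntypedef struct SyncOps\n{\n	int			tag;			/* would be function pointers in reality */\n} SyncOps;\n\n/*\n * With C99 designated initialisers, each slot is pinned to its enum\n * value, so the entries may appear in any textual order and the array\n * cannot silently misalign when someone reorders the enum.\n */\nstatic const SyncOps syncsw[] = {\n	[SYNC_HANDLER_MULTIXACT_OFFSET] = {.tag = 300},\n	[SYNC_HANDLER_CLOG] = {.tag = 100},\n	[SYNC_HANDLER_COMMIT_TS] = {.tag = 200}\n};\n\nint\nmain(void)\n{\n	/* Indexing by enum value always lands on the matching entry. */\n	assert(syncsw[SYNC_HANDLER_CLOG].tag == 100);\n	assert(syncsw[SYNC_HANDLER_COMMIT_TS].tag == 200);\n	assert(syncsw[SYNC_HANDLER_MULTIXACT_OFFSET].tag == 300);\n	return 0;\n}\n```\n\nIndexing with the enum value makes the compiler, rather than code review, responsible for keeping the table and the enum in sync; any slots not mentioned are zero-initialised.\n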
If\n> there are no comments or objections, I'm planning to commit this quite\n> soon.\n\n... and CI told me that Windows didn't like my array syntax with the\nextra trailing comma. Here's one without.", "msg_date": "Sun, 20 Sep 2020 12:40:14 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Sun, Sep 20, 2020 at 12:40 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Sep 19, 2020 at 5:06 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > In the meantime, from the low-hanging-fruit department, here's a new\n> > version of the SLRU-fsync-offload patch. The only changes are a\n> > tweaked commit message, and adoption of C99 designated initialisers\n> > for the function table, so { [SYNC_HANDLER_CLOG] = ... } instead of\n> > relying on humans to make the array order match the enum values. If\n> > there are no comments or objections, I'm planning to commit this quite\n> > soon.\n>\n> ... and CI told me that Windows didn't like my array syntax with the\n> extra trailing comma. Here's one without.\n\nWhile scanning for comments and identifier names that needed updating,\nI realised that this patch changed the behaviour of the ShutdownXXX()\nfunctions, since they currently flush the SLRUs but are not followed\nby a checkpoint. I'm not entirely sure I understand the logic of\nthat, but it wasn't my intention to change it. So here's a version\nthat converts the existing fsync_fname() to fsync_fname_recurse() to\nfix that.\n\nStrangely, the fsync calls that ensure that directory entries are on\ndisk seemed to be missing from CheckPointMultixact(), so I added them.\nIsn't that a live bug?\n\nI decided it was a little too magical that CheckPointCLOG() etc\ndepended on the later call to CheckPointBuffers() to perform their\nfsyncs. 
I started writing comments about that, but then I realised\nthat the right thing to do was to hoist ProcessSyncRequests() out of\nthere into CheckPointGuts() and make it all more explicit.\n\nI also realised that it would be inconsistent to count SLRU sync\nactivity as buffer sync time, but not count SLRU write activity as\nbuffer write time, or count its buffers as written in the reported\nstatistics. In other words, SLRU buffers *are* buffers for checkpoint\nreporting purposes (or should at least be consistently in or out of\nthe stats, and with this patch they have to be in).\n\nDoes that make sense? Is there a problem I'm not seeing with\nreordering CheckPointGuts() as I have?\n\nOne comment change that seems worth highlighting is this code reached\nby VACUUM, which seems like a strict improvement (it wasn't flushing\nfor crash recovery):\n\n /*\n- * Flush out dirty data, so PhysicalPageExists can work correctly.\n- * SimpleLruFlush() is a pretty big hammer for that. Alternatively we\n- * could add an in-memory version of page exists, but\nfind_multixact_start\n- * is called infrequently, and it doesn't seem bad to flush buffers to\n- * disk before truncation.\n+ * Write out dirty data, so PhysicalPageExists can work correctly.\n */\n- SimpleLruFlush(MultiXactOffsetCtl, true);\n- SimpleLruFlush(MultiXactMemberCtl, true);\n+ SimpleLruWriteAll(MultiXactOffsetCtl, true);\n+ SimpleLruWriteAll(MultiXactMemberCtl, true);", "msg_date": "Mon, 21 Sep 2020 14:19:33 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Mon, Sep 21, 2020 at 2:19 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> While scanning for comments and identifier names that needed updating,\n> I realised that this patch changed the behaviour of the ShutdownXXX()\n> functions, since they currently flush the SLRUs but are not followed\n> by a checkpoint. 
I'm not entirely sure I understand the logic of\n> that, but it wasn't my intention to change it. So here's a version\n> that converts the existing fsync_fname() to fsync_fname_recurse() to\n\nBleugh, that was probably a bad idea, it's too expensive. But it\nforces me to ask the question: *why* do we need to call\nShutdown{CLOG,CommitTS,SUBTRANS, MultiXact}() after a creating a\nshutdown checkpoint? I wondered if this might date from before the\nWAL, but I see that the pattern was introduced when the CLOG was moved\nout of shared buffers into a proto-SLRU in ancient commit 2589735da08,\nbut even in that commit the preceding CreateCheckPoint() call included\na call to CheckPointCLOG().\n\n\n", "msg_date": "Tue, 22 Sep 2020 09:08:13 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Tue, Sep 22, 2020 at 9:08 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Sep 21, 2020 at 2:19 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > While scanning for comments and identifier names that needed updating,\n> > I realised that this patch changed the behaviour of the ShutdownXXX()\n> > functions, since they currently flush the SLRUs but are not followed\n> > by a checkpoint. I'm not entirely sure I understand the logic of\n> > that, but it wasn't my intention to change it. So here's a version\n> > that converts the existing fsync_fname() to fsync_fname_recurse() to\n>\n> Bleugh, that was probably a bad idea, it's too expensive. But it\n> forces me to ask the question: *why* do we need to call\n> Shutdown{CLOG,CommitTS,SUBTRANS, MultiXact}() after a creating a\n> shutdown checkpoint? 
I wondered if this might date from before the\n> WAL, but I see that the pattern was introduced when the CLOG was moved\n> out of shared buffers into a proto-SLRU in ancient commit 2589735da08,\n> but even in that commit the preceding CreateCheckPoint() call included\n> a call to CheckPointCLOG().\n\nI complained about the apparently missing multixact fsync in a new\nthread, because if I'm right about that it requires a back-patchable\nfix.\n\nAs for the ShutdownXXX() functions, I haven't yet come up with any\nreason for this code to exist. Emboldened by a colleague's inability\nto explain to me what that code is doing for us, here is a new version\nthat just rips it all out.", "msg_date": "Wed, 23 Sep 2020 13:56:16 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Wed, Sep 23, 2020 at 1:56 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> As for the ShutdownXXX() functions, I haven't yet come up with any\n> reason for this code to exist. 
Emboldened by a colleague's inability\n> to explain to me what that code is doing for us, here is a new version\n> that just rips it all out.\n\nRebased.\n\nTom, do you have any thoughts on ShutdownCLOG() etc?", "msg_date": "Fri, 25 Sep 2020 11:47:20 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Tom, do you have any thoughts on ShutdownCLOG() etc?\n\nHm, if we cannot reach that without first completing a shutdown checkpoint,\nit does seem a little pointless.\n\nIt'd likely be a good idea to add a comment to CheckPointCLOG et al\nexplaining that we expect $what-exactly to fsync the data we are writing\nbefore the checkpoint is considered done.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Sep 2020 20:05:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Fri, Sep 25, 2020 at 12:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Tom, do you have any thoughts on ShutdownCLOG() etc?\n>\n> Hm, if we cannot reach that without first completing a shutdown checkpoint,\n> it does seem a little pointless.\n\nThanks for the sanity check.\n\n> It'd likely be a good idea to add a comment to CheckPointCLOG et al\n> explaining that we expect $what-exactly to fsync the data we are writing\n> before the checkpoint is considered done.\n\nGood point. Done like this:\n\n+ /*\n+ * Write dirty CLOG pages to disk. This may result in sync\nrequests queued\n+ * for later handling by ProcessSyncRequests(), as part of the\ncheckpoint.\n+ */\n TRACE_POSTGRESQL_CLOG_CHECKPOINT_START(true);\n- SimpleLruFlush(XactCtl, true);\n+ SimpleLruWriteAll(XactCtl, true);\n TRACE_POSTGRESQL_CLOG_CHECKPOINT_DONE(true);\n\nHere's a new version. 
The final thing I'm contemplating before\npushing this is whether there may be hidden magical dependencies in\nthe order of operations in CheckPointGuts(), which I've changed\naround. Andres, any comments?", "msg_date": "Fri, 25 Sep 2020 12:53:37 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Fri, Sep 25, 2020 at 12:53 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here's a new version. The final thing I'm contemplating before\n> pushing this is whether there may be hidden magical dependencies in\n> the order of operations in CheckPointGuts(), which I've changed\n> around. Andres, any comments?\n\nI nagged Andres off-list and he opined that it might be better to\nreorder it a bit so that ProcessSyncRequests() comes after almost\neverything else, so that if we ever teach more things to offload their\nfsync work it'll be in the right order. I reordered it like that; now\nonly CheckPointTwoPhase() comes later, based on the comment that\naccompanies it. In any case, we can always reconsider the ordering of\nthis function in later commits as required. Pushed like that.\n\n\n", "msg_date": "Fri, 25 Sep 2020 19:09:36 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On 9/25/20 9:09 AM, Thomas Munro wrote:\n> On Fri, Sep 25, 2020 at 12:53 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Here's a new version. The final thing I'm contemplating before\n>> pushing this is whether there may be hidden magical dependencies in\n>> the order of operations in CheckPointGuts(), which I've changed\n>> around. 
Andres, any comments?\n> \n> I nagged Andres off-list and he opined that it might be better to\n> reorder it a bit so that ProcessSyncRequests() comes after almost\n> everything else, so that if we ever teach more things to offload their\n> fsync work it'll be in the right order. I reordered it like that; now\n> only CheckPointTwoPhase() comes later, based on the comment that\n> accompanies it. In any case, we can always reconsider the ordering of\n> this function in later commits as required. Pushed like that.\n> \n\nSeems this commit left behind a couple unnecessary prototypes in a bunch \nof header files. In particular, it removed these functions\n\n- ShutdownCLOG();\n- ShutdownCommitTs();\n- ShutdownSUBTRANS();\n- ShutdownMultiXact();\n\nbut we still have\n\n$ git grep ShutdownCLOG\nsrc/include/access/clog.h:extern void ShutdownCLOG(void);\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 3 Jan 2021 15:35:39 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" }, { "msg_contents": "On Mon, Jan 4, 2021 at 3:35 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Seems this commit left behind a couple unnecessary prototypes in a bunch\n> of header files. In particular, it removed these functions\n>\n> - ShutdownCLOG();\n> - ShutdownCommitTs();\n> - ShutdownSUBTRANS();\n> - ShutdownMultiXact();\n\nThanks. Fixed.\n\n\n", "msg_date": "Tue, 5 Jan 2021 11:44:29 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Handing off SLRU fsyncs to the checkpointer" } ]
[ { "msg_contents": "Continuing the discussion in [0], here is a patch that allows parameter \nreferences in the arguments of the EXECUTE command. The main purpose is \nsubmitting protocol-level parameters, but the added regression test case \nshows another way to exercise it.\n\nWhat's confusing is that the code already contains a reference that \nindicates that this should be possible:\n\n /* Evaluate parameters, if any */\n if (entry->plansource->num_params > 0)\n {\n /*\n * Need an EState to evaluate parameters; must not delete it \ntill end\n * of query, in case parameters are pass-by-reference. Note \nthat the\n * passed-in \"params\" could possibly be referenced in the parameter\n * expressions.\n */\n estate = CreateExecutorState();\n estate->es_param_list_info = params;\n paramLI = EvaluateParams(pstate, entry, stmt->params, estate);\n }\n\nI'm not sure what this is supposed to do without my patch on top of it. \nIf I remove the estate->es_param_list_info assignment, no tests fail \n(except the one I added). Either this is a leftover from previous \nvariants of this code (as discussed in [0]), or there is something I \nhaven't understood.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/6e7aa4a1-be6a-1a75-b1f9-83a678e5184a%402ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 12 Feb 2020 16:00:14 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Support external parameters in EXECUTE command" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Continuing the discussion in [0], here is a patch that allows parameter \n> references in the arguments of the EXECUTE command. 
The main purpose is \n> submitting protocol-level parameters, but the added regression test case \n> shows another way to exercise it.\n\nI spent a bit of time looking at this.\n\n> What's confusing is that the code already contains a reference that \n> indicates that this should be possible:\n> ...\n> I'm not sure what this is supposed to do without my patch on top of it. \n> If I remove the estate->es_param_list_info assignment, no tests fail \n> (except the one I added). Either this is a leftover from previous \n> variants of this code (as discussed in [0]), or there is something I \n> haven't understood.\n\nThat case is reachable with an example like this:\n\n create or replace function foo(int) returns int language plpgsql as $$\n declare\n x int := $1;\n y int;\n begin\n execute 'execute fool($1 + 11)' into y using x;\n return y;\n end\n $$;\n\n deallocate fool;\n prepare fool(int) as select $1 + 1;\n\n select foo(42);\n\nIn the existing code this draws \"ERROR: there is no parameter $1\".\nYour patch makes it work, which is an improvement.\n\nThere are a few things bothering me, nonetheless.\n\n1. The patch only implements part of the API for dynamic ParamListInfos,\nthat is it honors paramFetch but not parserSetup. This seems not to\nmatter for any of the existing code paths (although we may just be lacking\na test case to reveal it); but I feel that it's just a matter of time\nbefore somebody sets up a case where it would matter. We would have such\na problem today if plpgsql treated EXECUTE as a plain SQL command rather\nthan its own magic thing, because then \"EXECUTE prepared_stmt(plpgsql_var)\"\nwould require the parserSetup hook to be honored in order to resolve the\nvariable reference.\n\nIt's fairly simple to fix this in ExplainExecuteQuery, since that is\ncreating its own ParseState; it can just apply the plist's parserSetup\nto that pstate. I did that in the attached v2 so you can see what\nI'm talking about. 
However, it's a lot less clear what to do in\nExecuteQuery, which as it stands is re-using a passed-in ParseState;\nhow do we know that the parse hooks aren't already set up in that?\n(Or if they are, what do we do to merge their effects?)\n\n2. Actually that latter problem exists already in your patch, because\nit's cavalierly overwriting the passed-in ParseState's p_paramref_hook\nwithout regard for the possibility that that's set already. I added\nan Assert that it's not set, and we get through check-world that way,\nbut even to write the assertion is to think that there is surely\ngoing to be a code path that breaks it, soon if not already.\n\n3. Both of the above points seem closely related to the vague worry\nI had in the previous discussion about nested contexts all wanting\nto control the resolution of parameters. We'll get away with this,\nperhaps, as long as that situation never occurs; but once it does\nwe have issues.\n\n4. I'm inclined to feel that the reason we have these problems is\nthat this patch handles parameter resolution in the wrong place.\nIt would likely be better if parameter resolution were already\nset up in the ParseState passed to ExecuteQuery (and then we'd fix\nExplainExecuteQuery by likewise passing it a ParseState for the\nEXPLAIN EXECUTE). However, that approach would probably result in\nParams being available to any utility statement, and then we've\ngot issues for statements where expressions are only parsed and not\nimmediately executed: we have to define sane semantics for Param\nreferences in such contexts, and make sure they get honored.\n\n5. So that brings us back to the other point I made earlier, which\nis that I'm not happy with patching this locally in EXECUTE rather\nthan having a design that works across-the-board for utility\nstatements. 
You had expressed similar concerns a ways upthread:\n>> Come to think of it, it would probably also be useful if PREPARE did\n>> parameter processing, again in order to allow use with PQexecParams().\n\nI think it's possible that we could solve the semantics problem\nby defining the behavior for Params in utility statements as\n\"the parameter value is immediately substituted at parse time,\nproducing a Const node\". This doesn't change the behavior in EXECUTE,\nbecause the expression will straightaway be evaluated and produce the\ncorrect value. In situations like\n\nALTER TABLE foo ADD COLUMN newcol int DEFAULT ($1);\n\nyou would also get what seem sane semantics, ie the default is the\nvalue provided as a Param.\n\nI feel that perhaps a patch that does this wouldn't be tremendously\nmore code than you have here; but the param resolution hook would be\ninstalled at some different more-global place, and it would be code\nto generate a Const node not a Param.\n\nI attach a v2 with the trivial mods mentioned above, but just for\nillustration not because I think this is the way to go.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 09 Mar 2020 17:21:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support external parameters in EXECUTE command" } ]
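The overwrite hazard described in points 1–2 — a ParseState carries a single parameter-resolution hook slot, so a nested context that blindly installs its own hook clobbers the caller's — can be modeled with a toy sketch. This is illustrative Python, not PostgreSQL's C API; the names merely echo the ones in the discussion:

```python
# Toy model (not PostgreSQL source): a ParseState has one parameter hook,
# so a nested EXECUTE that installs its own hook would clobber the outer
# one -- the hazard the Assert in the v2 patch is meant to catch.

class ParseState:
    def __init__(self):
        self.p_paramref_hook = None

def resolve_param(pstate, n):
    if pstate.p_paramref_hook is None:
        raise LookupError("there is no parameter $%d" % n)
    return pstate.p_paramref_hook(n)

def install_hook(pstate, hook):
    # Refuse to overwrite an already-installed hook instead of silently
    # clobbering it, mirroring the Assert added in v2.
    assert pstate.p_paramref_hook is None, "parameter hook already set"
    pstate.p_paramref_hook = hook

outer = ParseState()
install_hook(outer, lambda n: {1: 42}[n])   # $1 bound by the caller
print(resolve_param(outer, 1))              # -> 42

try:
    install_hook(outer, lambda n: {1: 0}[n])  # nested context tries again
except AssertionError as e:
    print("refused:", e)                      # -> refused: parameter hook already set
```

The toy shows why merging (rather than replacing) hook state would be needed once nested contexts both want to resolve parameters.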
[ { "msg_contents": "Hi,\nIn postgresGetForeignJoinPaths(), I see\n\n /* Estimate costs for bare join relation */\n estimate_path_cost_size(root, joinrel, NIL, NIL, NULL,\n &rows, &width, &startup_cost, &total_cost);\n /* Now update this information in the joinrel */\n joinrel->rows = rows;\n joinrel->reltarget->width = width;\n\nThis code is good as well as bad.\n\nFor a join relation, we estimate the number of rows in\nset_joinrel_size_estimates() inside build_*_join_rel() and set the width of\nthe join when building the targetlist. For foreign join, the size estimates\nmay not be correct but width estimate should be. So updating the number of\nrows looks good since it would be better than what\nset_joinrel_size_etimates() might come up with but here are the problems\nwith this code\n1. The rows estimated by estimate_path_cost_size() are better only when\nuse_remote_estimates is true. So, we should be doing this only when\nuse_remote_estimate is true.\n2. This function gets called after local paths for the first pair for this\njoin have been added. So those paths are not being judged fairly and\nperhaps we might be throwing away better paths just because the local\nestimates with which they were created were very different from the remote\nestimates.\n\nA better way would be to get the estimates and setup fpinfo for a joinrel\nin build_join_rel() and later add paths similar to what we do for base\nrelations. 
That means we split the current hook GetForeignJoinPaths into\ntwo - one to get estimates and the other to setup fpinfo.\n\nComments?\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Wed, 12 Feb 2020 21:47:30 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Updating row and width estimates in postgres_fdw" }, { "msg_contents": "Hi Ashutosh,\n\nLong time no see!\n\nOn Thu, Feb 13, 2020 at 1:17 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> In postgresGetForeignJoinPaths(), I see\n>\n> /* Estimate costs for bare join relation */\n> estimate_path_cost_size(root, joinrel, NIL, NIL, NULL,\n> &rows, &width, &startup_cost, &total_cost);\n> /* Now update this information in the joinrel */\n> joinrel->rows = rows;\n> joinrel->reltarget->width = width;\n>\n> This code is good as well as bad.\n>\n> For a join relation, we estimate the number of rows in set_joinrel_size_estimates() inside build_*_join_rel() and set the width of the join when building the targetlist. For foreign join, the size estimates may not be correct but width estimate should be. So updating the number of rows looks good since it would be better than what set_joinrel_size_etimates() might come up with but here are the problems with this code\n> 1. The rows estimated by estimate_path_cost_size() are better only when use_remote_estimates is true. So, we should be doing this only when use_remote_estimate is true.\n\nI think it's actually harmless to do that even when\nuse_remote_estimate=false because in that case we get the rows\nestimate from joinrel->rows in estimate_path_cost_size() and return to\nthe caller the estimate as-is, IIRC.\n\n> 2. This function gets called after local paths for the first pair for this join have been added. 

So those paths are not being judged fairly and perhaps we might be throwing away better paths just because the local estimates with which they were created were very different from the remote estimates.\n\nYeah, but I'm not sure we really need to fix that because I think the\nremote-join path would usually win against any of local-join paths.\nCould you show me an example causing an issue?\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 19 Feb 2020 19:41:09 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Updating row and width estimates in postgres_fdw" } ]
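The ordering problem in point 2 of the first message — local join paths are costed against the planner's own row estimate before postgresGetForeignJoinPaths() overwrites joinrel->rows with the remote estimate — can be sketched with toy numbers. Illustrative Python only; the real set_joinrel_size_estimates() logic (outer joins, clamping, selectivity machinery) is far more involved:

```python
# Toy model of the ordering hazard: local paths are judged with the
# planner's own row estimate, and only later is joinrel->rows overwritten
# with the remote estimate.  This is NOT postgres_fdw code -- the estimate
# shape and all numbers are invented for illustration.

def local_join_rows(outer_rows, inner_rows, join_selectivity):
    # Rough shape of an inner-join size estimate: cross product scaled by
    # selectivity, clamped to at least one row.
    return max(1, round(outer_rows * inner_rows * join_selectivity))

joinrel_rows = local_join_rows(10_000, 11, 0.1)     # planner's local guess
local_paths = [("local hash join", joinrel_rows)]   # judged with this guess

remote_rows = 10_000          # later: estimate from remote EXPLAIN
joinrel_rows = remote_rows    # the late overwrite; local_paths keep stale rows
print(local_paths[0][1], joinrel_rows)  # -> 11000 10000
```

The mismatch between the rows the local paths were built with and the final joinrel rows is exactly why the message argues for fetching remote estimates before any local paths are added.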
[ { "msg_contents": "Hi all,\n\nToday digging into a customer issue about errors in pg_restore I realized\nthat pg_restore dispatch a worker to restore EventTrigger\nduring restore_toc_entries_parallel. IMHO EventTriggers should be restored\nduring the restore_toc_entries_postfork in serial mode.\n\nFor example this simple database schema:\n\nBEGIN;\n\nCREATE TABLE foo(c1 bigserial NOT NULL, c2 varchar(100) NOT NULL, PRIMARY\nKEY (c1));\nINSERT INTO foo (c2) SELECT 'Foo '||id FROM generate_series(0,10) id;\nCREATE INDEX foo_1 ON foo (c2);\n\nCREATE TABLE bar(c1 bigserial NOT NULL, c2 bigint REFERENCES public.foo, c3\nvarchar(100), PRIMARY KEY (c1));\nINSERT INTO bar (c2, c3) SELECT (random()*10)::bigint+1, 'Bar '||id FROM\ngenerate_series(1,10000) id;\nCREATE INDEX bar_1 ON bar (c2);\nCREATE INDEX bar_2 ON bar (c3);\n\nCREATE OR REPLACE FUNCTION f_test_ddl_trigger()\nRETURNS event_trigger AS\n$$\nDECLARE\n r RECORD;\nBEGIN\n FOR r IN\n SELECT objid, objsubid, schema_name, objid::regclass::text AS\ntable_name, command_tag, object_type, object_identity\n FROM pg_event_trigger_ddl_commands()\n LOOP\n RAISE INFO 'RUN EVENT TRIGGER %', r;\n END LOOP;\nEND;\n$$\nLANGUAGE plpgsql;\n\nCREATE EVENT TRIGGER test_ddl_trigger\nON ddl_command_end\nEXECUTE PROCEDURE f_test_ddl_trigger();\n\nCOMMIT;\n\nRunning the dump:\n$ bin/pg_dump -Fc -f /tmp/teste.dump fabrizio\n\nRestoring with one worker everything is ok:\nfabrizio@macanudo:~/pgsql\n$ bin/pg_restore -Fc -d fabrizio_restore_serial /tmp/teste.dump | grep 'RUN\nEVENT TRIGGER'\n\nRunning with more the one worker:\nfabrizio@macanudo:~/pgsql\n$ bin/pg_restore -Fc -j2 -d fabrizio_restore_parallel /tmp/teste.dump |\ngrep 'RUN EVENT TRIGGER'\npg_restore: INFO: RUN EVENT TRIGGER (16906,0,public,public.bar,\"ALTER\nTABLE\",table,public.bar)\n\nIn parallel mode it's firing the EventTrigger and it can't be happen.\nPoking around it I did some test with attached just to leave EventTriggers\nin pending_list to process it in 
restore_toc_entries_postfork and\neverything is ok, but my solution is very ugly, so maybe we need to invent\na new RestorePass to take care of it like RESTORE_PASS_ACL and\nRESTORE_PASS_REFRESH. I can provide a more polished patch if it'll be a\ngood way to do that.\n\nRegards,\n\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Wed, 12 Feb 2020 13:59:05 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Bug in pg_restore with EventTrigger in parallel mode" }, { "msg_contents": "On Wed, Feb 12, 2020 at 01:59:05PM -0300, Fabrízio de Royes Mello wrote:\n> In parallel mode it's firing the EventTrigger and it can't be happen.\n> Poking around it I did some test with attached just to leave EventTriggers\n> in pending_list to process it in restore_toc_entries_postfork and\n> everything is ok, but my solution is very ugly, so maybe we need to invent\n> a new RestorePass to take care of it like RESTORE_PASS_ACL and\n> RESTORE_PASS_REFRESH. 
I can provide a more polished patch if it'll be a\n> good way to do that.\n\nCould you add that as a bug fix to the next CF [1]?\n\n[1]: https://commitfest.postgresql.org/27/\n--\nMichael", "msg_date": "Thu, 13 Feb 2020 12:52:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Bug in pg_restore with EventTrigger in parallel mode" }, { "msg_contents": "On Thu, Feb 13, 2020 at 12:52 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Wed, Feb 12, 2020 at 01:59:05PM -0300, Fabrízio de Royes Mello wrote:\n> > In parallel mode it's firing the EventTrigger and it can't be happen.\n> > Poking around it I did some test with attached just to leave\n> EventTriggers\n> > in pending_list to process it in restore_toc_entries_postfork and\n> > everything is ok, but my solution is very ugly, so maybe we need to\n> invent\n> > a new RestorePass to take care of it like RESTORE_PASS_ACL and\n> > RESTORE_PASS_REFRESH. I can provide a more polished patch if it'll be a\n> > good way to do that.\n>\n> Could you add that as a bug fix to the next CF [1]?\n>\n> [1]: https://commitfest.postgresql.org/27/\n>\n>\nDone, thanks!\nhttps://commitfest.postgresql.org/27/2450/\n\nRegards,\n\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Thu, 13 Feb 2020 11:27:50 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Bug in pg_restore with EventTrigger in parallel mode" }, { "msg_contents": "On Wed, Feb 12, 2020 at 01:59:05PM -0300, Fabrízio de Royes Mello wrote:\n> In parallel mode it's firing the EventTrigger and it can't be happen.\n> Poking around it I did some test with attached just to leave EventTriggers\n> in pending_list to process it in restore_toc_entries_postfork and\n> everything is ok, but my solution is very ugly, so maybe we need to invent\n> a new RestorePass to take care of it like RESTORE_PASS_ACL and\n> RESTORE_PASS_REFRESH. I can provide a more polished patch if it'll be a\n> good way to do that.\n\nThat sounds right, as event triggers could interact with GRANT and\nREFRESH of matviews, so they should be logically last. Looking at the\nrecent commit history, this would be similar to 3eb9a5e as we don't\nreally have a way to treat event triggers as dependency-sortable\nobjects. What kind of errors did you see in this customer\nenvironment? Errors triggered by one or more event triggers blocking\nsome commands based on a tag match? 
\n--\nMichael", "msg_date": "Thu, 20 Feb 2020 16:52:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Bug in pg_restore with EventTrigger in parallel mode" }, { "msg_contents": "On Thu, Feb 20, 2020 at 4:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> That sounds right, as event triggers could interact with GRANT and\n> REFRESH of matviews, so they should be logically last. Looking at the\n> recent commit history, this would be similar to 3eb9a5e as we don't\n> really have a way to treat event triggers as dependency-sortable\n> objects.\n>\n\nIndeed... event triggers should be the last thing to be restored.\n\n> What kind of errors did you see in this customer\n> environment? Errors triggered by one or more event triggers blocking\n> some commands based on a tag match?\n>\n\nBy error I meant the weird behavior I described before that pg_restore\ncreate the event triggers in parallel mode and after that other objects are\ncreated then the event trigger is fired during the restore...\n\nHave a look at the new attached patch.\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Thu, 20 Feb 2020 15:36:01 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Bug in pg_restore with EventTrigger in parallel mode" }, { "msg_contents": "On Fri, Feb 21, 2020 at 12:06 AM Fabrízio de Royes Mello\n<fabriziomello@gmail.com> wrote:\n>\n>\n>\n> On Thu, Feb 20, 2020 at 4:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > That sounds right, as event triggers could interact with GRANT and\n> > REFRESH of matviews, so they should be logically last. 
Looking at the\n> > recent commit history, this would be similar to 3eb9a5e as we don't\n> > really have a way to treat event triggers as dependency-sortable\n> > objects.\n> >\n>\n> Indeed... event triggers should be the last thing to be restored.\n>\n> > What kind of errors did you see in this customer\n> > environment? Errors triggered by one or more event triggers blocking\n> > some commands based on a tag match?\n> >\n>\n> By error I meant the weird behavior I described before that pg_restore create the event triggers in parallel mode and after that other objects are created then the event trigger is fired during the restore...\n>\n> Have a look at the new attached patch.\n>\n\nThe test works fine with the patch.\n\nFew comments:\nThere is minor code alignment that need to be fixed:\ngit apply fix_pg_restore_parallel_with_event_trigger_v2.patch\nfix_pg_restore_parallel_with_event_trigger_v2.patch:11: trailing whitespace.\n * then ACLs, matview refresh items, then event triggers. We might be\nwarning: 1 line adds whitespace errors.\n\nI'm not sure if we can add a test for this, can you have a thought\nabout this to check if we can add a test.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Mar 2020 21:55:54 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in pg_restore with EventTrigger in parallel mode" }, { "msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> I'm not sure if we can add a test for this, can you have a thought\n> about this to check if we can add a test.\n\nYeah, I'm not quite sure if a test is worth the trouble or not.\n\nWe clearly do need to restore event triggers later than we do now, even\nwithout considering parallel restore: they should not be able to prevent\nus from executing other restore actions. 
This is just like the rule that\nwe don't restore DML triggers until after we've loaded data.\n\nHowever, I think that the existing code is correct to restore event\ntriggers before matview refreshes, not after as this patch would have us\ndo. The basic idea for matview refresh is that it should happen in the\nnormal running state of the database. If an event trigger interferes with\nthat, it would've done so in normal running as well.\n\nI'm also not terribly on board with loading more functionality onto the\nRestorePass mechanism. That's a crock that should go away someday,\nbecause it basically duplicates and overrides pg_dump's normal object\nsorting mechanism. So we don't want it doing more than it absolutely\nhas to. But in this case, I don't see any reason why we can't just\nrestore event triggers and matviews in the same post-ACL restore pass.\nIn a serial restore, that will make the event triggers come first\nbecause of the existing sort rules. In a parallel restore, it's possible\nthat they'd be intermixed, but that doesn't bother me. 
Again, if your\nevent triggers have side-effects on your matview refreshes, you're\ngoing to have some issues anyway.\n\nSo that leads me to the attached, which renames the \"RESTORE_PASS_REFRESH\"\nsymbol for clarity, and updates the pg_dump_sort.c code and comments\nto match what's really going on.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 07 Mar 2020 18:42:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in pg_restore with EventTrigger in parallel mode" }, { "msg_contents": "On Sat, Mar 7, 2020 at 8:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> vignesh C <vignesh21@gmail.com> writes:\n> > I'm not sure if we can add a test for this, can you have a thought\n> > about this to check if we can add a test.\n>\n> Yeah, I'm not quite sure if a test is worth the trouble or not.\n>\n> We clearly do need to restore event triggers later than we do now, even\n> without considering parallel restore: they should not be able to prevent\n> us from executing other restore actions. This is just like the rule that\n> we don't restore DML triggers until after we've loaded data.\n>\n\nOk.\n\n\n> However, I think that the existing code is correct to restore event\n> triggers before matview refreshes, not after as this patch would have us\n> do. The basic idea for matview refresh is that it should happen in the\n> normal running state of the database. If an event trigger interferes with\n> that, it would've done so in normal running as well.\n>\n\nI'm not totally sure if it's entirely correct.\n\nFor example if I write an EventTrigger to perform some kind of DDL auditing\nthen during the restore the \"refresh maview\" operation will be audited and\nIMHO it's wrong.\n\n\n> I'm also not terribly on board with loading more functionality onto the\n> RestorePass mechanism. That's a crock that should go away someday,\n> because it basically duplicates and overrides pg_dump's normal object\n> sorting mechanism. 
> So we don't want it doing more than it absolutely\n> has to. But in this case, I don't see any reason why we can't just\n> restore event triggers and matviews in the same post-ACL restore pass.\n\nTotally agree with it.\n\n\n> In a serial restore, that will make the event triggers come first\n> because of the existing sort rules. In a parallel restore, it's possible\n> that they'd be intermixed, but that doesn't bother me. Again, if your\n> event triggers have side-effects on your matview refreshes, you're\n> going to have some issues anyway.\n>\n\nIMHO EventTriggers can't be fired during pg_restore under any circumstances\nbecause can lead us to a different database state than the dump used.\n\n\n> So that leads me to the attached, which renames the \"RESTORE_PASS_REFRESH\"\n> symbol for clarity, and updates the pg_dump_sort.c code and comments\n> to match what's really going on.\n>\n\nOk.\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Mon, 9 Mar 2020 09:36:15 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Bug in pg_restore with EventTrigger in parallel mode" }, { "msg_contents": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com> writes:\n> On Sat, Mar 7, 2020 at 8:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> However, I think that the existing code is 
correct to restore event\n>> triggers before matview refreshes, not after as this patch would have us\n>> do. The basic idea for matview refresh is that it should happen in the\n>> normal running state of the database. If an event trigger interferes with\n>> that, it would've done so in normal running as well.\n\n> I'm not totally sure if it's entirely correct.\n\n> For example if I write an EventTrigger to perform some kind of DDL auditing\n> then during the restore the \"refresh maview\" operation will be audited and\n> IMHO it's wrong.\n\nThe big problem I've got with this line of reasoning is that not\neverything can be the last restore step. There was already an argument\nthat matviews should be refreshed last so they can see the final state\nof the catalogs, in case you have a matview over some catalog (and of\ncourse that applies to pg_event_trigger as much as any other catalog).\nAdmittedly, that seems like an unlikely use-case, but it demonstrates\nthat there are limits to how much we can guarantee about dump/restore\nproducing just the same state that prevailed before the dump.\n\nIn the case of event triggers, the obvious counterexample is that if\nyou restore ET A and then ET B, ET A might interfere with the attempt\nto restore ET B. (And we have no way to know whether restoring B\nbefore A would be better or worse.)\n\nSo on the whole I find \"restore matviews as if they'd been refreshed\nafter the restore\" to be a more trustworthy approach than the other\nway. At some level we have to trust that ETs aren't going to totally\nbollix the restore.\n\nWhich, TBH, makes me wonder about the validity of the original complaint\nin this thread. 
I don't mind delaying ET restore as long as we feasibly\ncan; but if you have an ET that is going to misbehave during restore,\nyou are in for pain, and it's hard to consider that that pain isn't\nself-inflicted.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Mar 2020 11:27:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in pg_restore with EventTrigger in parallel mode" }, { "msg_contents": "On Mon, Mar 9, 2020 at 12:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> In the case of event triggers, the obvious counterexample is that if\n> you restore ET A and then ET B, ET A might interfere with the attempt\n> to restore ET B. (And we have no way to know whether restoring B\n> before A would be better or worse.)\n>\n\nYeap... you're correct.\n\n\n> So on the whole I find \"restore matviews as if they'd been refreshed\n> after the restore\" to be a more trustworthy approach than the other\n> way. At some level we have to trust that ETs aren't going to totally\n> bollix the restore.\n>\n\nOk.\n\n> Which, TBH, makes me wonder about the validity of the original complaint\n> in this thread. I don't mind delaying ET restore as long as we feasibly\n> can; but if you have an ET that is going to misbehave during restore,\n> you are in for pain, and it's hard to consider that that pain isn't\n> self-inflicted.\n>\n\nThe proposed patch solve the original complain. 
I was just trying to\nunderstand completely what you pointed out before and I agree with you.\nThanks for the clear explanation.\n\nAbout the patch LGTM and IMHO we should back-patch it to all supported\nversions.\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Mon, 9 Mar 2020 13:56:31 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Bug in pg_restore with EventTrigger in parallel mode" }, { "msg_contents": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com> writes:\n> On Mon, Mar 9, 2020 at 12:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Which, TBH, makes me wonder about the validity of the original complaint\n>> in this thread. I don't mind delaying ET restore as long as we feasibly\n>> can; but if you have an ET that is going to misbehave during restore,\n>> you are in for pain, and it's hard to consider that that pain isn't\n>> self-inflicted.\n\n> The proposed patch solve the original complain. I was just trying to\n> understand completely what you pointed out before and I agree with you.\n> Thanks for the clear explanation.\n\nOK, thanks for confirming that this solves your issue in practice.\n\n> About the patch LGTM and IMHO we should back-patch it to all supported\n> versions.\n\nDone.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Mar 2020 14:59:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in pg_restore with EventTrigger in parallel mode" }, { "msg_contents": "On Mon, Mar 9, 2020 at 3:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> =?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com> writes:\n> > On Mon, Mar 9, 2020 at 12:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Which, TBH, makes me wonder about the validity of the original\ncomplaint\n> >> in this thread. 
I don't mind delaying ET restore as long as we\nfeasibly\n> >> can; but if you have an ET that is going to misbehave during restore,\n> >> you are in for pain, and it's hard to consider that that pain isn't\n> >> self-inflicted.\n>\n> > The proposed patch solve the original complain. I was just trying to\n> > understand completely what you pointed out before and I agree with you.\n> > Thanks for the clear explanation.\n>\n> OK, thanks for confirming that this solves your issue in practice.\n>\n> > About the patch LGTM and IMHO we should back-patch it to all supported\n> > versions.\n>\n> Done.\n>\n\nGreat, thanks!\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Mon, 9 Mar 2020 16:44:34 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Bug in pg_restore with EventTrigger in parallel mode" } ]
[ { "msg_contents": "Hi,\n\nWhen I saw pg_stat_activity.wait_event while pg_basebackup -X none\nis waiting for WAL archiving to finish, it was either NULL or\nCheckpointDone. I think this is confusing. What about introducing\nnew wait_event like WAIT_EVENT_BACKUP_WAIT_WAL_ARCHIVE\n(BackupWaitWalArchive) and reporting it during that period?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 13 Feb 2020 02:29:20 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Wait event that should be reported while waiting for WAL archiving to\n finish" }, { "msg_contents": "On Thu, Feb 13, 2020 at 02:29:20AM +0900, Fujii Masao wrote:\n> When I saw pg_stat_activity.wait_event while pg_basebackup -X none\n> is waiting for WAL archiving to finish, it was either NULL or\n> CheckpointDone. I think this is confusing. What about introducing\n> new wait_event like WAIT_EVENT_BACKUP_WAIT_WAL_ARCHIVE\n> (BackupWaitWalArchive) and reporting it during that period?\n\nSounds like a good idea to me. You need to be careful that this does\nnot overwrite more low-level wait event registration though, so that\ncould be more tricky than it looks at first sight.\n--\nMichael", "msg_date": "Thu, 13 Feb 2020 12:28:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Wait event that should be reported while waiting for WAL\n archiving to finish" }, { "msg_contents": "On 2020/02/13 12:28, Michael Paquier wrote:\n> On Thu, Feb 13, 2020 at 02:29:20AM +0900, Fujii Masao wrote:\n>> When I saw pg_stat_activity.wait_event while pg_basebackup -X none\n>> is waiting for WAL archiving to finish, it was either NULL or\n>> CheckpointDone. I think this is confusing. 
What about introducing\n>> new wait_event like WAIT_EVENT_BACKUP_WAIT_WAL_ARCHIVE\n>> (BackupWaitWalArchive) and reporting it during that period?\n> \n> Sounds like a good idea to me. You need to be careful that this does\n> not overwrite more low-level wait event registration though, so that\n> could be more tricky than it looks at first sight.\n\nThanks for the advise! Patch attached.\n\nI found that the wait events \"LogicalRewriteTruncate\" and\n\"GSSOpenServer\" are not documented. I'm thinking to add\nthem into doc separately if ok.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters", "msg_date": "Thu, 13 Feb 2020 15:35:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Wait event that should be reported while waiting for WAL\n archiving to finish" }, { "msg_contents": "On Thu, Feb 13, 2020 at 03:35:50PM +0900, Fujii Masao wrote:\n> I found that the wait events \"LogicalRewriteTruncate\" and\n> \"GSSOpenServer\" are not documented. I'm thinking to add\n> them into doc separately if ok.\n\nNice catch. The ordering of the entries is not respected either for\nGSSOpenServer in pgstat.h. The portion for the code and the docs can\nbe fixed in back-branches, but not the enum list in WaitEventClient or\nwe would have an ABI breakage. But this can be fixed on HEAD. Can\nyou take care of it? If you need help, please feel free to poke me. I\nthink that this should be fixed first, before adding the new event.\n\n> <entry><literal>SyncRep</literal></entry>\n> <entry>Waiting for confirmation from remote server during synchronous replication.</entry>\n> </row>\n> + <row>\n> + <entry><literal>BackupWaitWalArchive</literal></entry>\n> + <entry>Waiting for WAL files required for the backup to be successfully archived.</entry>\n> + </row>\n\nThe category IPC is adapted. 
You forgot to update the markup morerows\nfrom \"36\" to \"37\", causing the table of the wait events to have a\nweird format (the bottom should be incorrect).\n\n> +\t\tpgstat_report_wait_start(WAIT_EVENT_BACKUP_WAIT_WAL_ARCHIVE);\n> \t\twhile (XLogArchiveIsBusy(lastxlogfilename) ||\n> \t\t\t XLogArchiveIsBusy(histfilename))\n> \t\t{\n> @@ -11120,6 +11121,7 @@ do_pg_stop_backup(char *labelfile, bool waitforarchive, TimeLineID *stoptli_p)\n> \t\t\t\t\t\t\t\t \"but the database backup will not be usable without all the WAL segments.\")));\n> \t\t\t}\n> \t\t}\n> +\t\tpgstat_report_wait_end();\n\nOkay, that position is right.\n\n> @@ -3848,6 +3848,9 @@ pgstat_get_wait_ipc(WaitEventIPC w)\n> \t\tcase WAIT_EVENT_SYNC_REP:\n> \t\t\tevent_name = \"SyncRep\";\n> \t\t\tbreak;\n> +\t\tcase WAIT_EVENT_BACKUP_WAIT_WAL_ARCHIVE:\n> +\t\t\tevent_name = \"BackupWaitWalArchive\";\n> +\t\t\tbreak;\n> \t\t\t/* no default case, so that compiler will warn */\n> [...]\n> @@ -853,7 +853,8 @@ typedef enum\n> \tWAIT_EVENT_REPLICATION_ORIGIN_DROP,\n> \tWAIT_EVENT_REPLICATION_SLOT_DROP,\n> \tWAIT_EVENT_SAFE_SNAPSHOT,\n> -\tWAIT_EVENT_SYNC_REP\n> +\tWAIT_EVENT_SYNC_REP,\n> +\tWAIT_EVENT_BACKUP_WAIT_WAL_ARCHIVE\n> } WaitEventIPC;\n\nIt would be good to keep entries in alphabetical order in the header,\nthe code and in the docs (see the effort from 5ef037c), and your patch\nis missing that concept for all three places where it matters for this\nnew event.\n--\nMichael", "msg_date": "Thu, 13 Feb 2020 16:30:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Wait event that should be reported while waiting for WAL\n archiving to finish" }, { "msg_contents": "On 2020/02/13 16:30, Michael Paquier wrote:\n> On Thu, Feb 13, 2020 at 03:35:50PM +0900, Fujii Masao wrote:\n>> I found that the wait events \"LogicalRewriteTruncate\" and\n>> \"GSSOpenServer\" are not documented. I'm thinking to add\n>> them into doc separately if ok.\n> \n> Nice catch. 
The ordering of the entries is not respected either for\n> GSSOpenServer in pgstat.h. The portion for the code and the docs can\n> be fixed in back-branches, but not the enum list in WaitEventClient or\n> we would have an ABI breakage. But this can be fixed on HEAD. Can\n> you take care of it?\nYes. Patch attached.\n\nlogical_rewrite_truncate_v1.patch adds the description of\nLogicalRewriteTruncate into the doc. This needs to be\nback-patched to v10 where commit 249cf070e3 introduced\nLogicalRewriteTruncate event.\n\ngss_open_server_v1.patch adds the description of GSSOpenServer\ninto the doc and update the code in pgstat_get_wait_client().\nThis needs to be applied in v12 where commit b0b39f72b9 introduced\nGSSOpenServer event.\n\ngss_open_server_for_master_v1.patch does not only what the above\npatch does but also update wait event enum into alphabetical order.\nThis needs to be applied in the master.\n\n> \n>> <entry><literal>SyncRep</literal></entry>\n>> <entry>Waiting for confirmation from remote server during synchronous replication.</entry>\n>> </row>\n>> + <row>\n>> + <entry><literal>BackupWaitWalArchive</literal></entry>\n>> + <entry>Waiting for WAL files required for the backup to be successfully archived.</entry>\n>> + </row>\n> \n> The category IPC is adapted. You forgot to update the markup morerows\n> from \"36\" to \"37\", causing the table of the wait events to have a\n> weird format (the bottom should be incorrect).\n\nFixed. 
Thanks for the review!\n\n> \n>> +\t\tpgstat_report_wait_start(WAIT_EVENT_BACKUP_WAIT_WAL_ARCHIVE);\n>> \t\twhile (XLogArchiveIsBusy(lastxlogfilename) ||\n>> \t\t\t XLogArchiveIsBusy(histfilename))\n>> \t\t{\n>> @@ -11120,6 +11121,7 @@ do_pg_stop_backup(char *labelfile, bool waitforarchive, TimeLineID *stoptli_p)\n>> \t\t\t\t\t\t\t\t \"but the database backup will not be usable without all the WAL segments.\")));\n>> \t\t\t}\n>> \t\t}\n>> +\t\tpgstat_report_wait_end();\n> \n> Okay, that position is right.\n> \n>> @@ -3848,6 +3848,9 @@ pgstat_get_wait_ipc(WaitEventIPC w)\n>> \t\tcase WAIT_EVENT_SYNC_REP:\n>> \t\t\tevent_name = \"SyncRep\";\n>> \t\t\tbreak;\n>> +\t\tcase WAIT_EVENT_BACKUP_WAIT_WAL_ARCHIVE:\n>> +\t\t\tevent_name = \"BackupWaitWalArchive\";\n>> +\t\t\tbreak;\n>> \t\t\t/* no default case, so that compiler will warn */\n>> [...]\n>> @@ -853,7 +853,8 @@ typedef enum\n>> \tWAIT_EVENT_REPLICATION_ORIGIN_DROP,\n>> \tWAIT_EVENT_REPLICATION_SLOT_DROP,\n>> \tWAIT_EVENT_SAFE_SNAPSHOT,\n>> -\tWAIT_EVENT_SYNC_REP\n>> +\tWAIT_EVENT_SYNC_REP,\n>> +\tWAIT_EVENT_BACKUP_WAIT_WAL_ARCHIVE\n>> } WaitEventIPC;\n> \n> It would be good to keep entries in alphabetical order in the header,\n> the code and in the docs (see the effort from 5ef037c), and your patch\n> is missing that concept for all three places where it matters for this\n> new event.\n\nFixed. Patch attached.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters", "msg_date": "Fri, 14 Feb 2020 12:47:19 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Wait event that should be reported while waiting for WAL\n archiving to finish" }, { "msg_contents": "On Fri, Feb 14, 2020 at 12:47:19PM +0900, Fujii Masao wrote:\n> logical_rewrite_truncate_v1.patch adds the description of\n> LogicalRewriteTruncate into the doc. 
This needs to be\n> back-patched to v10 where commit 249cf070e3 introduced\n> LogicalRewriteTruncate event.\n\nIndeed. You just be careful about the number of fields for morerows,\nas that's not the same across branches.\n\n> gss_open_server_v1.patch adds the description of GSSOpenServer\n> into the doc and update the code in pgstat_get_wait_client().\n> This needs to be applied in v12 where commit b0b39f72b9 introduced\n> GSSOpenServer event.\n> \n> gss_open_server_for_master_v1.patch does not only what the above\n> patch does but also update wait event enum into alphabetical order.\n> This needs to be applied in the master.\n\nThanks for splitting things. All that looks correct to me.\n--\nMichael", "msg_date": "Fri, 14 Feb 2020 15:45:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Wait event that should be reported while waiting for WAL\n archiving to finish" }, { "msg_contents": "On Thu, Feb 13, 2020 at 10:47 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> Fixed. Thanks for the review!\n\nI think it would be safer to just report the wait event during\npg_usleep(1000000L) rather than putting those calls around the whole\nloop. It does not seem impossible that ereport() or\nCHECK_FOR_INTERRUPTS() could do something that reports a wait event\ninternally.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 14 Feb 2020 09:43:11 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wait event that should be reported while waiting for WAL\n archiving to finish" }, { "msg_contents": "\n\nOn 2020/02/14 15:45, Michael Paquier wrote:\n> On Fri, Feb 14, 2020 at 12:47:19PM +0900, Fujii Masao wrote:\n>> logical_rewrite_truncate_v1.patch adds the description of\n>> LogicalRewriteTruncate into the doc. 
This needs to be\n>> back-patched to v10 where commit 249cf070e3 introduced\n>> LogicalRewriteTruncate event.\n> \n> Indeed. You just be careful about the number of fields for morerows,\n> as that's not the same across branches.\n> \n>> gss_open_server_v1.patch adds the description of GSSOpenServer\n>> into the doc and update the code in pgstat_get_wait_client().\n>> This needs to be applied in v12 where commit b0b39f72b9 introduced\n>> GSSOpenServer event.\n>>\n>> gss_open_server_for_master_v1.patch does not only what the above\n>> patch does but also update wait event enum into alphabetical order.\n>> This needs to be applied in the master.\n> \n> Thanks for splitting things. All that looks correct to me.\n\nThanks for the review! Pushed the patches for\nLogicalRewriteTruncate and GSSOpenServer.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Mon, 17 Feb 2020 16:24:12 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Wait event that should be reported while waiting for WAL\n archiving to finish" }, { "msg_contents": "On 2020/02/14 23:43, Robert Haas wrote:\n> On Thu, Feb 13, 2020 at 10:47 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> Fixed. Thanks for the review!\n> \n> I think it would be safer to just report the wait event during\n> pg_usleep(1000000L) rather than putting those calls around the whole\n> loop. 
It does not seem impossible that ereport() or\n> CHECK_FOR_INTERRUPTS() could do something that reports a wait event\n> internally.\n\nOK, so I attached the updated version of the patch.\nThanks for the review!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters", "msg_date": "Mon, 17 Feb 2020 16:30:00 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Wait event that should be reported while waiting for WAL\n archiving to finish" }, { "msg_contents": "On Mon, Feb 17, 2020 at 04:30:00PM +0900, Fujii Masao wrote:\n> On 2020/02/14 23:43, Robert Haas wrote:\n>> On Thu, Feb 13, 2020 at 10:47 PM Fujii Masao\n>> <masao.fujii@oss.nttdata.com> wrote:\n>>> Fixed. Thanks for the review!\n>> \n>> I think it would be safer to just report the wait event during\n>> pg_usleep(1000000L) rather than putting those calls around the whole\n>> loop. It does not seem impossible that ereport() or\n>> CHECK_FOR_INTERRUPTS() could do something that reports a wait event\n>> internally.\n\nCHECK_FOR_INTERRUPTS() would reset the event wait state. Hm.. You\nmay be right about the WARNING and it would be better to not rely on\nthat. Do you remember the states which may be triggered?\n\n> OK, so I attached the updated version of the patch.\n> Thanks for the review!\n\nActually, I have some questions:\n1) Should a new wait event be added in recoveryPausesHere()? 
That\nwould be IMO useful.\n2) Perhaps those two points should be replaced with WaitLatch(), where\nwe would use the new wait events introduced?\n--\nMichael", "msg_date": "Mon, 17 Feb 2020 18:48:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Wait event that should be reported while waiting for WAL\n archiving to finish" }, { "msg_contents": "\n\nOn 2020/02/17 18:48, Michael Paquier wrote:\n> On Mon, Feb 17, 2020 at 04:30:00PM +0900, Fujii Masao wrote:\n>> On 2020/02/14 23:43, Robert Haas wrote:\n>>> On Thu, Feb 13, 2020 at 10:47 PM Fujii Masao\n>>> <masao.fujii@oss.nttdata.com> wrote:\n>>>> Fixed. Thanks for the review!\n>>>\n>>> I think it would be safer to just report the wait event during\n>>> pg_usleep(1000000L) rather than putting those calls around the whole\n>>> loop. It does not seem impossible that ereport() or\n>>> CHECK_FOR_INTERRUPTS() could do something that reports a wait event\n>>> internally.\n> \n> CHECK_FOR_INTERRUPTS() would reset the event wait state. Hm.. You\n> may be right about the WARNING and it would be better to not rely on\n> that. Do you remember the states which may be triggered?\n> \n>> OK, so I attached the updated version of the patch.\n>> Thanks for the review!\n> \n> Actually, I have some questions:\n> 1) Should a new wait event be added in recoveryPausesHere()? That\n> would be IMO useful.\n\nYes, it's useful, I think. But it's better to implement that\nas a separate patch.\n\n> 2) Perhaps those two points should be replaced with WaitLatch(), where\n> we would use the new wait events introduced?\n\nFor what? 
Maybe it should, but I'm not sure it's worth the trouble.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Mon, 17 Feb 2020 22:21:23 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Wait event that should be reported while waiting for WAL\n archiving to finish" }, { "msg_contents": "On Mon, Feb 17, 2020 at 10:21:23PM +0900, Fujii Masao wrote:\n> On 2020/02/17 18:48, Michael Paquier wrote:\n>> Actually, I have some questions:\n>> 1) Should a new wait event be added in recoveryPausesHere()? That\n>> would be IMO useful.\n> \n> Yes, it's useful, I think. But it's better to implement that\n> as a separate patch.\n\nNo problem for me.\n\n>> 2) Perhaps those two points should be replaced with WaitLatch(), where\n>> we would use the new wait events introduced?\n> \n> For what? Maybe it should, but I'm not sure it's worth the trouble.\n\nI don't have more to offer than signal handling consistency for both\nwithout relying on pg_usleep()'s behavior depending on the platform,\nand power consumption. For the recovery pause, the second argument\nmay not be worth carrying, but we never had this argument for the\narchiving wait, did we? For both, on top of it you don't need to\nworry about concurrent issues with the wait events attached around.\n--\nMichael", "msg_date": "Tue, 18 Feb 2020 12:39:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Wait event that should be reported while waiting for WAL\n archiving to finish" }, { "msg_contents": "On 2020/02/18 12:39, Michael Paquier wrote:\n> On Mon, Feb 17, 2020 at 10:21:23PM +0900, Fujii Masao wrote:\n>> On 2020/02/17 18:48, Michael Paquier wrote:\n>>> Actually, I have some questions:\n>>> 1) Should a new wait event be added in recoveryPausesHere()? That\n>>> would be IMO useful.\n>>\n>> Yes, it's useful, I think. 
But it's better to implement that\n>> as a separate patch.\n> \n> No problem for me.\n\nOn second thought, it's OK to add that event into the patch.\nAttached is the updated version of the patch. This patch adds\ntwo wait events for WAL archiving and recovery pause.\n\n\n>>> 2) Perhaps those two points should be replaced with WaitLatch(), where\n>>> we would use the new wait events introduced?\n>>\n>> For what? Maybe it should, but I'm not sure it's worth the trouble.\n> \n> I don't have more to offer than signal handling consistency for both\n> without relying on pg_usleep()'s behavior depending on the platform,\n> and power consumption. For the recovery pause, the second argument\n> may not be worth carrying, but we never had this argument for the\n> archiving wait, did we?\n\nI have no idea about this. But I wonder how much that change\nis helpful to reduce the power consumption because waiting\nfor WAL archive during the backup basically not so frequently\nhappens.\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters", "msg_date": "Wed, 26 Feb 2020 21:19:02 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Wait event that should be reported while waiting for WAL\n archiving to finish" }, { "msg_contents": "On Wed, Feb 26, 2020 at 9:19 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n> I have no idea about this. But I wonder how much that change\n> is helpful to reduce the power consumption because waiting\n> for WAL archive during the backup basically not so frequently\n> happens.\n>\n\n+1.\nAnd as far as I reviewed the patch,  I didn't find any problems.\n\nRegards,\n\n--\nAtsushi Torikoshi", "msg_date": "Thu, 19 Mar 2020 19:39:33 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wait event that should be reported while waiting for WAL\n archiving to finish" }, { "msg_contents": "\n\nOn 2020/03/19 19:39, Atsushi Torikoshi wrote:\n> \n> On Wed, Feb 26, 2020 at 9:19 PM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> \n>     I have no idea about this. 
But I wonder how much that change\n>>     is helpful to reduce the power consumption because waiting\n>>     for WAL archive during the backup basically not so frequently\n>>     happens.\n>>\n>>\n>> +1.\n>> And as far as I reviewed the patch,  I didn't find any problems.\n> \n> Thanks for the review!\n> Barring any objection, I will commit this patch.\n\nPushed! Thanks!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Tue, 24 Mar 2020 11:13:44 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Wait event that should be reported while waiting for WAL\n archiving to finish" } ]
[ { "msg_contents": "Forking this thread for two tangential patches which I think are more\nworthwhile than the original topic's patch.\nhttps://www.postgresql.org/message-id/20200207143935.GP403%40telsasoft.com\n\nIs there a better place to implement assertion from 0002 ?\n\nOn Fri, Feb 07, 2020 at 08:39:35AM -0600, Justin Pryzby wrote:\n> From 7eea0a17e495fe13379ffd589b551f2f145f5672 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Thu, 6 Feb 2020 21:48:13 -0600\n> Subject: [PATCH v1 1/3] Update comment obsolete since b9b8831a\n> \n> ---\n> src/backend/commands/cluster.c | 6 +++---\n> 1 file changed, 3 insertions(+), 3 deletions(-)\n> \n> diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c\n> index e9d7a7f..3adcbeb 100644\n> --- a/src/backend/commands/cluster.c\n> +++ b/src/backend/commands/cluster.c\n> @@ -1539,9 +1539,9 @@ get_tables_to_cluster(MemoryContext cluster_context)\n> \n> \t/*\n> \t * Get all indexes that have indisclustered set and are owned by\n> -\t * appropriate user. System relations or nailed-in relations cannot ever\n> -\t * have indisclustered set, because CLUSTER will refuse to set it when\n> -\t * called with one of them as argument.\n> +\t * appropriate user. 
Shared relations cannot ever have indisclustered\n> +\t * set, because CLUSTER will refuse to set it when called with one as\n> +\t * an argument.\n> \t */\n> \tindRelation = table_open(IndexRelationId, AccessShareLock);\n> \tScanKeyInit(&entry,\n> -- \n> 2.7.4\n> \n\n> From 4777be522a7aa8b8c77b13f765cbd02043438f2a Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Fri, 7 Feb 2020 08:12:50 -0600\n> Subject: [PATCH v1 2/3] Give developer a helpful kick in the pants if they\n> change natts in one place but not another\n> \n> ---\n> src/backend/bootstrap/bootstrap.c | 23 +++++++++++++++++++++++\n> 1 file changed, 23 insertions(+)\n> \n> diff --git a/src/backend/bootstrap/bootstrap.c b/src/backend/bootstrap/bootstrap.c\n> index bfc629c..d5e1888 100644\n> --- a/src/backend/bootstrap/bootstrap.c\n> +++ b/src/backend/bootstrap/bootstrap.c\n> @@ -25,7 +25,9 @@\n> #include \"access/xlog_internal.h\"\n> #include \"bootstrap/bootstrap.h\"\n> #include \"catalog/index.h\"\n> +#include \"catalog/pg_class.h\"\n> #include \"catalog/pg_collation.h\"\n> +#include \"catalog/pg_proc.h\"\n> #include \"catalog/pg_type.h\"\n> #include \"common/link-canary.h\"\n> #include \"libpq/pqsignal.h\"\n> @@ -49,6 +51,7 @@\n> #include \"utils/ps_status.h\"\n> #include \"utils/rel.h\"\n> #include \"utils/relmapper.h\"\n> +#include \"utils/syscache.h\"\n> \n> uint32\t\tbootstrap_data_checksum_version = 0;\t/* No checksum */\n> \n> @@ -602,6 +605,26 @@ boot_openrel(char *relname)\n> \tTableScanDesc scan;\n> \tHeapTuple\ttup;\n> \n> +\t/* Check that pg_class data is consistent now, rather than failing obscurely later */\n> +\tstruct { Oid oid; int natts; }\n> +\t\tchecknatts[] = {\n> +\t\t{RelationRelationId, Natts_pg_class,},\n> +\t\t{TypeRelationId, Natts_pg_type,},\n> +\t\t{AttributeRelationId, Natts_pg_attribute,},\n> +\t\t{ProcedureRelationId, Natts_pg_proc,},\n> +\t};\n> +\n> +\tfor (int i=0; i<lengthof(checknatts); ++i) {\n> +\t\tForm_pg_class\tclassForm;\n> 
+\t\tHeapTuple\ttuple;\n> +\t\ttuple = SearchSysCache1(RELOID, ObjectIdGetDatum(checknatts[i].oid));\n> +\t\tif (!HeapTupleIsValid(tuple))\n> +\t\t\telog(ERROR, \"cache lookup failed for relation %u\", checknatts[i].oid);\n> +\t\tclassForm = (Form_pg_class) GETSTRUCT(tuple);\n> +\t\tAssert(checknatts[i].natts == classForm->relnatts);\n> +\t\tReleaseSysCache(tuple);\n> +\t}\n> +\n> \tif (strlen(relname) >= NAMEDATALEN)\n> \t\trelname[NAMEDATALEN - 1] = '\\0';\n> \n> -- \n> 2.7.4\n>", "msg_date": "Wed, 12 Feb 2020 12:23:37 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "assert pg_class.relnatts is consistent" }, { "msg_contents": "On Thu, Feb 13, 2020 at 3:23 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Forking this thread for two tangential patches which I think are more\n> worthwhile than the original topic's patch.\n> https://www.postgresql.org/message-id/20200207143935.GP403%40telsasoft.com\n>\n> Is there a better place to implement assertion from 0002 ?\n\nI would think the answer to that would be related to the answer of why\nyou think we need this assert in the first place?\n\nI know I have made the mistake of not updating relnatts when I added\nrelispartition, etc. to pg_class, only to be bitten by it in the form\nof seemingly random errors/crashes. Is that why?\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 13 Feb 2020 16:51:01 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "On Thu, Feb 13, 2020 at 04:51:01PM +0900, Amit Langote wrote:\n> I would think the answer to that would be related to the answer of why\n> you think we need this assert in the first place?\n\nTaking this thread independently, and even after reading the thread\nmentioned upthread, I still don't quite understand why this change\ncould be a good thing and in which cases it actually helps. 
The code\nincludes no comments and the commit log says nothing either, so it is\nhard to follow what you are thinking here even if you are splitting\nthe effort across multiple thread. Please note that the style of the\ncode is not project-like, so you should try to indent it. And why\ndoes it matter to check this portion of the catalogs? Also, such\nchecks are not really needed in non-assert builds, if actually\nneeded.\n--\nMichael", "msg_date": "Thu, 13 Feb 2020 17:10:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "On Thu, Feb 13, 2020 at 04:51:01PM +0900, Amit Langote wrote:\n> On Thu, Feb 13, 2020 at 3:23 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Forking this thread for two tangential patches which I think are more\n> > worthwhile than the original topic's patch.\n> > https://www.postgresql.org/message-id/20200207143935.GP403%40telsasoft.com\n> >\n> > Is there a better place to implement assertion from 0002 ?\n> \n> I would think the answer to that would be related to the answer of why\n> you think we need this assert in the first place?\n> \n> I know I have made the mistake of not updating relnatts when I added\n> relispartition, etc. to pg_class, only to be bitten by it in the form\n> of seemingly random errors/crashes. Is that why?\n\nRight. If adding or removing a column from pg_class (or others) it's necessary\nnot only to add the column in the .h file, and update references like Anum_*,\nbut also to update that catalog's own pg_class.relnatts in pg_class.dat.\n\nOn the other thead, Alvaro agreed it might be worth experimenting with moving\n\"indisclustered\" from boolean in pg_index to an Oid in pg_class. There's not\nmany references to it, so I was able to make most of the necessary changes\nwithin an hour .. 
but spent some multiple of that tracing the crash in initdb,\nwhich I would prefer to have failed less obscurely.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 13 Feb 2020 02:11:45 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "On Thu, Feb 13, 2020 at 4:51 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Feb 13, 2020 at 3:23 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Forking this thread for two tangential patches which I think are more\n> > worthwhile than the original topic's patch.\n> > https://www.postgresql.org/message-id/20200207143935.GP403%40telsasoft.com\n> >\n> > Is there a better place to implement assertion from 0002 ?\n>\n> I would think the answer to that would be related to the answer of why\n> you think we need this assert in the first place?\n>\n> I know I have made the mistake of not updating relnatts when I added\n> relispartition, etc. to pg_class, only to be bitten by it in the form\n> of seemingly random errors/crashes. Is that why?\n\nSorry for not having read the patch properly.\n\n> + /* Check that pg_class data is consistent now, rather than failing obscurely later */\n\nThat seems to be it.\n\nThanks,\nAmit\n\n\n", "msg_date": "Thu, 13 Feb 2020 17:12:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Thu, Feb 13, 2020 at 4:51 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> I know I have made the mistake of not updating relnatts when I added\n>> relispartition, etc. to pg_class, only to be bitten by it in the form\n>> of seemingly random errors/crashes. 
Is that why?\n\n> Sorry for not having read the patch properly.\n>> + /* Check that pg_class data is consistent now, rather than failing obscurely later */\n> That seems to be it.\n\nI've been burnt by this too :-(. However, I think this patch is\ncompletely the wrong way to go about improving this. What we should\nbe doing, now that we have all that perl code generating postgres.bki,\nis eliminating the problem at the source. That is, drop the hand-coded\nrelnatts values from pg_class.dat altogether, and let the perl code fill\nit in --- compare the handling of pg_proc.pronargs for instance.\n\n(While we're at it, an awful lot of the bulk of pg_class.dat could be\nreplaced by BKI_DEFAULT() entries in pg_class.h, though I'm less sure\nthat that's much of an improvement. I think we intentionally didn't\nbother when we put in the BKI_DEFAULT support, reasoning that there\nwere too few pg_class.dat entries to bother.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Feb 2020 11:04:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "On Fri, Feb 14, 2020 at 1:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Thu, Feb 13, 2020 at 4:51 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >> I know I have made the mistake of not updating relnatts when I added\n> >> relispartition, etc. to pg_class, only to be bitten by it in the form\n> >> of seemingly random errors/crashes. Is that why?\n>\n> > Sorry for not having read the patch properly.\n> >> + /* Check that pg_class data is consistent now, rather than failing obscurely later */\n> > That seems to be it.\n>\n> I've been burnt by this too :-(. However, I think this patch is\n> completely the wrong way to go about improving this. 
What we should\n> be doing, now that we have all that perl code generating postgres.bki,\n> is eliminating the problem at the source. That is, drop the hand-coded\n> relnatts values from pg_class.dat altogether, and let the perl code fill\n> it in --- compare the handling of pg_proc.pronargs for instance.\n\nI can't write Perl myself (maybe Justin), but +1 to this idea.\n\nThanks,\nAmit\n\n\n", "msg_date": "Fri, 14 Feb 2020 14:58:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "On Fri, Feb 14, 2020 at 2:58 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Feb 14, 2020 at 1:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I've been burnt by this too :-(. However, I think this patch is\n> > completely the wrong way to go about improving this. What we should\n> > be doing, now that we have all that perl code generating postgres.bki,\n> > is eliminating the problem at the source. That is, drop the hand-coded\n> > relnatts values from pg_class.dat altogether, and let the perl code fill\n> > it in --- compare the handling of pg_proc.pronargs for instance.\n>\n> I can't write Perl myself (maybe Justin), but +1 to this idea.\n\nI tried and think it works but not sure if that's good Perl\nprogramming. See the attached.\n\nThanks,\nAmit", "msg_date": "Fri, 14 Feb 2020 18:00:05 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "On Fri, Feb 14, 2020 at 06:00:05PM +0900, Amit Langote wrote:\n> On Fri, Feb 14, 2020 at 2:58 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Fri, Feb 14, 2020 at 1:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > I've been burnt by this too :-(. However, I think this patch is\n> > > completely the wrong way to go about improving this. 
What we should\n> > > be doing, now that we have all that perl code generating postgres.bki,\n> > > is eliminating the problem at the source. That is, drop the hand-coded\n> > > relnatts values from pg_class.dat altogether, and let the perl code fill\n> > > it in --- compare the handling of pg_proc.pronargs for instance.\n> >\n> > I can't write Perl myself (maybe Justin), but +1 to this idea.\n> \n> I tried and think it works but not sure if that's good Perl\n> programming. See the attached.\n\nI quite like what you have here. Please note that this comment in\ngenbki.pl is incorrect regarding relnatts (the last part could just be\ndeleted):\n# Note: only bootstrap catalogs, ie those marked BKI_BOOTSTRAP, need to\n# have entries here. Be sure that the OIDs listed here match those given in\n# their CATALOG and BKI_ROWTYPE_OID macros, and that the relnatts values are\n# correct.\n--\nMichael", "msg_date": "Fri, 14 Feb 2020 18:47:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "On Fri, Feb 14, 2020 at 5:00 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> I tried and think it works but not sure if that's good Perl\n> programming. 
See the attached.\n\nHi Amit,\nI took this for a spin -- I just have a couple comments.\n\n+ elsif ($attname eq 'relnatts')\n+ {\n+ ;\n+ }\n\nWith your patch, I get this when running\nsrc/include/catalog/reformat_dat_file.pl:\n\nstrip_default_values: pg_class.relnatts undefined\n\nRather than adding this one-off case to AddDefaultValues and then\nanother special case to strip_default_values, maybe it would be better\nto just add a placeholder BKI_DEFAULT(0) to pg_class.h, with a comment\nthat it's just a placeholder.\n\n\n+ if ($catname eq \"pg_class\" && $attname eq \"relnatts\")\n+ {\n+ $bki_values{$attname} = $catalog_ncols{$bki_values{relname}};\n+ }\n+\n\nYou could avoid the name/attr checks if you do it while building the\npg_class lookup table, like this:\n\n foreach my $row (@{ $catalog_data{pg_class} })\n {\n $classoids{ $row->{relname} } = $row->{oid};\n+\n+ # Also fill in correct value for relnatts.\n+ $row->{relnatts} = $catalog_ncols{ $row->{relname} };\n }\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 14 Feb 2020 17:50:47 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "I wrote:\n> + elsif ($attname eq 'relnatts')\n> + {\n> + ;\n> + }\n>\n> With your patch, I get this when running\n> src/include/catalog/reformat_dat_file.pl:\n>\n> strip_default_values: pg_class.relnatts undefined\n>\n> Rather than adding this one-off case to AddDefaultValues and then\n> another special case to strip_default_values, maybe it would be better\n> to just add a placeholder BKI_DEFAULT(0) to pg_class.h, with a comment\n> that it's just a placeholder.\n\nOne possible objection to what I wrote above is that it adds a\ndifferent kind of special case, but in a sneaky way. Perhaps it would\nbe more principled to treat it the same as oid after all. 
If we do\nthat, it would help to add a comment that we can't treat relnatts like\npronangs, since we need more information than what's in each pg_class\nrow.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 14 Feb 2020 18:39:59 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "> pronangs, since we need more information than what's in each pg_class\n\nSigh, and of course I meant pg_proc.pronargs.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 14 Feb 2020 18:42:32 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "On Fri, Feb 14, 2020 at 6:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Feb 14, 2020 at 06:00:05PM +0900, Amit Langote wrote:\n> > On Fri, Feb 14, 2020 at 2:58 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Fri, Feb 14, 2020 at 1:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > I've been burnt by this too :-(. However, I think this patch is\n> > > > completely the wrong way to go about improving this. What we should\n> > > > be doing, now that we have all that perl code generating postgres.bki,\n> > > > is eliminating the problem at the source. That is, drop the hand-coded\n> > > > relnatts values from pg_class.dat altogether, and let the perl code fill\n> > > > it in --- compare the handling of pg_proc.pronargs for instance.\n> > >\n> > > I can't write Perl myself (maybe Justin), but +1 to this idea.\n> >\n> > I tried and think it works but not sure if that's good Perl\n> > programming. See the attached.\n>\n> I quite like what you have here. 
Please note that this comment in\n> genbki.pl is incorrect regarding relnatts (the last part could just be\n> deleted):\n> # Note: only bootstrap catalogs, ie those marked BKI_BOOTSTRAP, need to\n> # have entries here. Be sure that the OIDs listed here match those given in\n> # their CATALOG and BKI_ROWTYPE_OID macros, and that the relnatts values are\n> # correct.\n\nYou're right, although this comment is in pg_class.dat.\n\nThanks,\nAmit\n\n\n", "msg_date": "Fri, 14 Feb 2020 21:44:19 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "Hi John,\n\nOn Fri, Feb 14, 2020 at 6:50 PM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> On Fri, Feb 14, 2020 at 5:00 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > I tried and think it works but not sure if that's good Perl\n> > programming. See the attached.\n>\n> Hi Amit,\n> I took this for a spin -- I just have a couple comments.\n\nThanks for chiming in.\n\n> + elsif ($attname eq 'relnatts')\n> + {\n> + ;\n> + }\n>\n> With your patch, I get this when running\n> src/include/catalog/reformat_dat_file.pl:\n>\n> strip_default_values: pg_class.relnatts undefined\n\nI think I have fixed this in the attached.\n\n> + if ($catname eq \"pg_class\" && $attname eq \"relnatts\")\n> + {\n> + $bki_values{$attname} = $catalog_ncols{$bki_values{relname}};\n> + }\n> +\n>\n> You could avoid the name/attr checks if you do it while building the\n> pg_class lookup table, like this:\n>\n> foreach my $row (@{ $catalog_data{pg_class} })\n> {\n> $classoids{ $row->{relname} } = $row->{oid};\n> +\n> + # Also fill in correct value for relnatts.\n> + $row->{relnatts} = $catalog_ncols{ $row->{relname} };\n> }\n\nDid this too. 
Attached updated patch, which also addresses Michael's comment.\n\nI'm still trying to understand your comment about using placeholder\nBKI_DEFAULT...\n\nThanks,\nAmit", "msg_date": "Fri, 14 Feb 2020 23:22:05 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "I propose this more concise coding for AddDefaultValues,\n\n\t# Now fill in defaults, and note any columns that remain undefined.\n\tforeach my $column (@$schema)\n\t{\n\t\tmy $attname = $column->{name};\n\t\tmy $atttype = $column->{type};\n\n\t\t# Skip if a value already exists\n\t\tnext if defined $row->{$attname};\n\n\t\t# 'oid' and 'relnatts' are special cases. Ignore.\n\t\tnext if $attname eq 'oid';\n\t\tnext if $attname eq 'relnatts';\n\n\t\t# This column has a default value. Fill it in.\n\t\tif (defined $column->{default})\n\t\t{\n\t\t\t$row->{$attname} = $column->{default};\n\t\t\tnext;\n\t\t}\n\n\t\t# Failed to find a value.\n\t\tpush @missing_fields, $attname;\n\t}\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 14 Feb 2020 12:13:01 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "On 2020-Feb-14, John Naylor wrote:\n\n> One possible objection to what I wrote above is that it adds a\n> different kind of special case, but in a sneaky way. Perhaps it would\n> be more principled to treat it the same as oid after all. If we do\n> that, it would help to add a comment that we can't treat relnatts like\n> pronangs, since we need more information than what's in each pg_class\n> row.\n\nHow about something like this? 
(untested)\n\n\t\t# oids are a special case; ignore\n\t\tnext if $attname eq 'oid';\n\t\t# pg_class.relnatts is computed from pg_attribute rows; ignore\n\t\tnext if $catname eq 'pg_class' and $attname eq 'relnatts';\n\n\t\t# Raise error unless a value exists.\n\t\tdie \"strip_default_values: $catname.$attname undefined\\n\"\n\t\t if !defined $row->{$attname};\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 14 Feb 2020 12:29:26 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Feb-14, John Naylor wrote:\n>> One possible objection to what I wrote above is that it adds a\n>> different kind of special case, but in a sneaky way. Perhaps it would\n>> be more principled to treat it the same as oid after all. If we do\n>> that, it would help to add a comment that we can't treat relnatts like\n>> pronangs, since we need more information than what's in each pg_class\n>> row.\n\n> How about something like this? (untested)\n\nI think John's idea of setting a dummy BKI_DEFAULT value is better,\nas that means the only code that has to worry about this directly\nis the code that's actually filling in relnatts. 
As far as said\ncode goes, we don't need an additional global variable when we can\njust look in the $catalogs data structure; and I'm not a fan of\ncramming this into the OID-assignment logic just to save a loop.\nSo that leads me to the attached.\n\n(I agree with Alvaro's thought of shortening AddDefaultValues,\nbut didn't do that here.)\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 14 Feb 2020 14:00:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "I wrote:\n> So that leads me to the attached.\n> ...\n> (I agree with Alvaro's thought of shortening AddDefaultValues,\n> but didn't do that here.)\n\nPushed both of those. I also did something with the stale comment\nthat Justin referred to in the initial message (it wasn't really\ngood practice to try to deal with both things in one thread).\n\nI think we're done here, though maybe the difficulty of finding a clean\nway to get genbki.pl to do this suggests that AddDefaultValues needs\nto be redesigned. 
Not sure what that'd look like.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 15 Feb 2020 15:25:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "On Sun, Feb 16, 2020 at 5:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > So that leads me to the attached.\n> > ...\n> > (I agree with Alvaro's thought of shortening AddDefaultValues,\n> > but didn't do that here.)\n>\n> Pushed both of those.\n\nThank you.\n\nIt's amazing to see how simple bootstrapping has now become thanks to\nthe work you guys have done recently.\n\nThanks,\nAmit\n\n\n", "msg_date": "Mon, 17 Feb 2020 13:25:05 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: assert pg_class.relnatts is consistent" }, { "msg_contents": "On Mon, Feb 17, 2020 at 01:25:05PM +0900, Amit Langote wrote:\n> > Pushed both of those.\n> \n> Thank you.\n> \n> It's amazing to see how simple bootstrapping has now become thanks to\n> the work you guys have done recently.\n\nOn Fri, Feb 14, 2020 at 06:00:05PM +0900, Amit Langote wrote:\n> > I can't write Perl myself (maybe Justin), but +1 to this idea.\n> \n> I tried and think it works but not sure if that's good Perl\n> programming. See the attached.\n\nAnd thanks for picking up perl so I didn't have to remember what I ever knew.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 16 Feb 2020 22:44:14 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: assert pg_class.relnatts is consistent" } ]
[ { "msg_contents": "Hi,\nCoverity detected a dead code in the src / interfaces / libpq / fe-auth.c\nfile, to correct it, a simplification was made and the oom_error goto was\nremoved, since it is clearly redundant and its presence can be confusing.\n\nThe second part of the patch refers to the file src / interfaces / libpq /\nfe-exec.c.\nFirst, a correction was made to the return types of some functions that\nclearly return bool, but are defined as int.\n\nAccording to some functions, they do a basic check and if they fail, they\nreturn immediately, so it does not make sense to start communication and\nthen return.\nIt makes more sense to do the basic checks, only to start communicating\nwith the server afterwards.\n\nThese changes are passing the regression tests and are in use in libpq.dll,\nused in production by my customers.\n\nregards,\nRanier Vilela", "msg_date": "Wed, 12 Feb 2020 19:55:32 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] libpq improvements and fixes" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Coverity detected a dead code in the src / interfaces / libpq / fe-auth.c\n> file, to correct it, a simplification was made and the oom_error goto was\n> removed, since it is clearly redundant and its presence can be confusing.\n\nI'm kind of disinclined to let Coverity dictate our coding style here.\nWe've dismissed many hundreds of its reports as false positives, and\nthis seems like one that could get (probably already has gotten) the\nsame treatment. 
I also don't feel like duplicating error messages\nas you propose is an improvement.\n\nIf we did want to adjust the code in pg_SASL_init, my thought would\nbe to reduce not increase the code duplication, by making the error\nexits look like\n\n ...\n return STATUS_OK;\n\noom_error:\n printfPQExpBuffer(&conn->errorMessage,\n libpq_gettext(\"out of memory\\n\"));\n /* FALL THRU */\n\nerror:\n termPQExpBuffer(&mechanism_buf);\n if (initialresponse)\n free(initialresponse);\n return STATUS_ERROR;\n}\n\nIt's only marginally worth the trouble though.\n\n> First, a correction was made to the return types of some functions that\n> clearly return bool, but are defined as int.\n\nThis is ancient history that doesn't seem worth revisiting. There is\ncertainly exactly zero chance of us changing libpq's external API\nas you propose, because of the ensuing ABI breakage. Maybe we could\nchange the static functions, but I'm not very excited about it.\n\nI can't get excited about the other code rearrangements you're proposing\nhere either. They seem to make the code more intellectually complex for\nlittle benefit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Feb 2020 20:25:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] libpq improvements and fixes" }, { "msg_contents": "Em qua., 12 de fev. de 2020 às 22:25, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > Coverity detected a dead code in the src / interfaces / libpq / fe-auth.c\n> > file, to correct it, a simplification was made and the oom_error goto was\n> > removed, since it is clearly redundant and its presence can be confusing.\n>\n> I'm kind of disinclined to let Coverity dictate our coding style here.\n> We've dismissed many hundreds of its reports as false positives, and\n> this seems like one that could get (probably already has gotten) the\n> same treatment. 
I also don't feel like duplicating error messages\n> as you propose is an improvement.\n>\nIf you look closely at the code in the source, you will see that the style\nthere is:\nif (test)\n{\n msg;\n goto;\n}\nI just kept it, even if I duplicated the error message, the style was kept\nand in my opinion it is much more coherent and readable.\nBut your solution is also good, and yes, it is worth it, because even with\nsmall benefits, the change improves the code and prevents Coverity or\nanother tool from continuing to report false positives or not.\n\n\n> If we did want to adjust the code in pg_SASL_init, my thought would\n> be to reduce not increase the code duplication, by making the error\n> exits look like\n>\n> ...\n> return STATUS_OK;\n>\n> oom_error:\n> printfPQExpBuffer(&conn->errorMessage,\n> libpq_gettext(\"out of memory\\n\"));\n> /* FALL THRU */\n>\n> error:\n> termPQExpBuffer(&mechanism_buf);\n> if (initialresponse)\n> free(initialresponse);\n> return STATUS_ERROR;\n> }\n>\n> It's only marginally worth the trouble though.\n>\n> Sounds good to me.\n\n> First, a correction was made to the return types of some functions that\n> > clearly return bool, but are defined as int.\n>\n> This is ancient history that doesn't seem worth revisiting. There is\n> certainly exactly zero chance of us changing libpq's external API\n> as you propose, because of the ensuing ABI breakage. Maybe we could\n> change the static functions, but I'm not very excited about it.\n>\nVirtually no code will break for the change, since bool and int are\ninternally the same types.\nI believe that no code will have either adjusted to work with corrected\nfunctions, even if they use compiled libraries.\nAnd again, it is worth correcting at least the static ones, because the\ngoal here, too, is to improve readability.\n\nI can't get excited about the other code rearrangements you're proposing\n> here either. 
They seem to make the code more intellectually complex for\n> little benefit.\n>\nI cannot agree with you that these changes add complexity.\nIt was using the principle enshrined in programming that I proposed these\nchanges.\n\"Get out quick.\"\nFor 99% of calls to these functions, there won't be any changes, since all\nparameters are ok, tests will be done and the PQsendQueryStart function\nwill be called anyway.\nIf it were possible, it would be better to eliminate the basic tests, but\nthis is not possible, so better to do them first and get out of there soon,\nwithout doing anything else.\n\nregards,\nRanier Villela", "msg_date": "Thu, 13 Feb 2020 14:22:36 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] libpq improvements and fixes" }, { "msg_contents": "On Thu, Feb 13, 2020 at 02:22:36PM -0300, Ranier Vilela wrote:\n> I just kept it, even if I duplicated the error message, the style was kept\n> and in my opinion it is much more coherent and readable.\n> But your solution is also good, and yes, it is worth it, because even with\n> small benefits, the change improves the code and prevents Coverity or\n> another tool from continuing to report false positives or not.\n\nComplaints from static analyzers need to be taken with a pinch of\nsalt, and I agree with Tom here.\n\n> Virtually no code will break for the change, since bool and int are\n> internally the same types.\n> I believe that no code will have either adjusted to work with corrected\n> functions, even if they use compiled libraries.\n> And again, it is worth correcting at least the static ones, because the\n> goal here, too, is to improve readability.\n\nFWIW, looking at the patch from upthread, I think that it is not that\nwise to blindly break the error compatibility handling of all PQsend*\nroutines by switching the error handling of the connection to be after\nthe compatibility checks, and all the other changes don't justify a\nbreakage making back-patching more complicated nor do they improve\nreadability at great lengths.\n--\nMichael", "msg_date": "Fri, 14 Feb 2020 15:13:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] libpq improvements and fixes" }, { "msg_contents": "Em sex., 14 de fev. 
de 2020 às 03:13, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Thu, Feb 13, 2020 at 02:22:36PM -0300, Ranier Vilela wrote:\n> > I just kept it, even if I duplicated the error message, the style was\n> kept\n> > and in my opinion it is much more coherent and readable.\n> > But your solution is also good, and yes, it is worth it, because even\n> with\n> > small benefits, the change improves the code and prevents Coverity or\n> > another tool from continuing to report false positives or not.\n>\n> Complaints from static analyzers need to be taken with a pinch of\n> salt, and I agree with Tom here.\n>\nThat's right, I will try avoid sending patches that only satisfy static\nanalysis tools.\n\n\n> > Virtually no code will break for the change, since bool and int are\n> > internally the same types.\n> > I believe that no code will have either adjusted to work with corrected\n> > functions, even if they use compiled libraries.\n> > And again, it is worth correcting at least the static ones, because the\n> > goal here, too, is to improve readability.\n>\n> FWIW, looking at the patch from upthread, I think that it is not that\n> wise to blindly break the error compatibility handling of all PQsend*\n> routines by switching the error handling of the connection to be after\n> the compatibility checks, and all the other changes don't justify a\n> breakage making back-patching more complicated nor do they improve\n> readability at great lengths.\n>\n\nIt is difficult to understand what you consider to be improvement.\n\nAnother programming principle I follow is to remove anything static from\nloops that can be executed outside the loop.\nIn this specific case, from the loop modified in fe-exec, two branches were\nremoved, is this an improvement for you or not?\n\nSee patch attached.\n\nregards,\nRanier Vilela", "msg_date": "Fri, 14 Feb 2020 08:07:15 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] libpq 
improvements and fixes" } ]
[ { "msg_contents": "Attached is a demo patch that adds a placeholder %b for log_line_prefix \n(not in the default setting) that contains the backend type, the same \nthat you see in pg_stat_activity and in the ps status. I would have \nfound this occasionally useful when analyzing logs, especially if you \nhave a lot of background workers active. Thoughts?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 13 Feb 2020 09:56:38 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "backend type in log_line_prefix?" }, { "msg_contents": "On Thu, Feb 13, 2020 at 09:56:38AM +0100, Peter Eisentraut wrote:\n> Attached is a demo patch that adds a placeholder %b for log_line_prefix (not\n> in the default setting) that contains the backend type, the same that you\n> see in pg_stat_activity and in the ps status. I would have found this\n> occasionally useful when analyzing logs, especially if you have a lot of\n> background workers active. Thoughts?\n\n+1, I'd also have been happy to have it multiple times.\n\n\n", "msg_date": "Thu, 13 Feb 2020 10:14:28 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?" }, { "msg_contents": "\n\nOn 2020/02/13 18:14, Julien Rouhaud wrote:\n> On Thu, Feb 13, 2020 at 09:56:38AM +0100, Peter Eisentraut wrote:\n>> Attached is a demo patch that adds a placeholder %b for log_line_prefix (not\n>> in the default setting) that contains the backend type, the same that you\n>> see in pg_stat_activity and in the ps status. I would have found this\n>> occasionally useful when analyzing logs, especially if you have a lot of\n>> background workers active. 
Thoughts?\n\nIf we do this, backend type should be also included in csvlog?\n\nRegarding the patch, postgresql.conf.sample needs to be updated.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 13 Feb 2020 18:43:32 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?" }, { "msg_contents": "Hi,\n\nOn 2020-02-13 09:56:38 +0100, Peter Eisentraut wrote:\n> Attached is a demo patch that adds a placeholder %b for log_line_prefix (not\n> in the default setting) that contains the backend type, the same that you\n> see in pg_stat_activity and in the ps status. I would have found this\n> occasionally useful when analyzing logs, especially if you have a lot of\n> background workers active. Thoughts?\n\nI wished for this several times.\n\n\n> @@ -342,7 +342,7 @@ AuxiliaryProcessMain(int argc, char *argv[])\n> \t\t\t\tstatmsg = \"??? process\";\n> \t\t\t\tbreak;\n> \t\t}\n> -\t\tinit_ps_display(statmsg, \"\", \"\", \"\");\n> +\t\tinit_ps_display((backend_type_str = statmsg), \"\", \"\", \"\");\n> \t}\n\nBut I'm decidedly not a fan of this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 19 Feb 2020 19:41:35 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?" }, { "msg_contents": "On 2020-02-13 09:56, Peter Eisentraut wrote:\n> Attached is a demo patch that adds a placeholder %b for log_line_prefix\n> (not in the default setting) that contains the backend type, the same\n> that you see in pg_stat_activity and in the ps status. I would have\n> found this occasionally useful when analyzing logs, especially if you\n> have a lot of background workers active. Thoughts?\n\nAfter positive initial feedback, here is a more ambitious patch set. 
In \nparticular, I wanted to avoid having to specify the backend type (at \nleast) twice, once for the ps display and once for this new facility.\n\nI have added a new global variable MyBackendType that uses the existing \nBackendType enum that was previously only used by the stats collector. \nThen the ps display, the stats collector, the log_line_prefix, and other \nplaces can just refer to this to know \"who am I\". (There are more \nplaces like that, for example in the autovacuum system, so patch 0004 in \nparticular could be expanded in analogous ways.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 21 Feb 2020 10:09:38 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: backend type in log_line_prefix?" }, { "msg_contents": "Updated patch set because of conflicts.\n\nOn 2020-02-21 10:09, Peter Eisentraut wrote:\n> After positive initial feedback, here is a more ambitious patch set. In\n> particular, I wanted to avoid having to specify the backend type (at\n> least) twice, once for the ps display and once for this new facility.\n> \n> I have added a new global variable MyBackendType that uses the existing\n> BackendType enum that was previously only used by the stats collector.\n> Then the ps display, the stats collector, the log_line_prefix, and other\n> places can just refer to this to know \"who am I\". (There are more\n> places like that, for example in the autovacuum system, so patch 0004 in\n> particular could be expanded in analogous ways.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 7 Mar 2020 16:08:07 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: backend type in log_line_prefix?" 
}, { "msg_contents": "Hello,\n\nOn Sat, Mar 7, 2020 at 8:38 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> Updated patch set because of conflicts.\n>\nThank you for the patch. This feature is really helpful. Here are some\nminor comments:\n\nIn v3-0001-Refactor-ps_status.c-API.patch,\n\n- *\n- * For a walsender, the ps display is set in the following form:\n- *\n- * postgres: walsender <user> <host> <activity>\nThis part is still valid, right?\n\n+ init_ps_display(ps_data.data);\n+ pfree(ps_data.data);\n+\n+ set_ps_display(\"initializing\");\nAs per the existing behaviour, if update_process_title is true, we\ndisplay \"authentication\" as the initial string. On the other hand,\nthis patch, irrespective of the GUC variable, always displays\n\"initializing\" as the initial string and In PerformAuthentication, it\nsets the display as \"authentication\" Is this intended? Should we check\nthe GUC here as well?\n\nIn v3-0002-Unify-several-ways-to-tracking-backend-type.patch,\n+ * If fixed_part is NULL, a default will be obtained from BackendType.\ns/BackendType/MyBackendType?\n\nIn v3-0003-Add-backend-type-to-csvlog-and-optionally-log_lin.patch,\n+ <entry>Backend process type</entry>\nIn other places you've written \"backend type\".\n\nIn v3-0004-Remove-am_syslogger-global-variable.patch,\n+ * This is exported so that elog.c can call it when BackendType is B_LOGGER.\ns/BackendType/MyBackendType?\n\nDone some basic testing. Working as expected.\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 9 Mar 2020 20:50:39 +0530", "msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?" 
}, { "msg_contents": "On 2020-03-09 16:20, Kuntal Ghosh wrote:\n> In v3-0001-Refactor-ps_status.c-API.patch,\n> \n> - *\n> - * For a walsender, the ps display is set in the following form:\n> - *\n> - * postgres: walsender <user> <host> <activity>\n> This part is still valid, right?\n\nSure but I figured this comment was in the context of the explanation of \nhow the old API was being abused, so it's no longer necessary.\n\n> + init_ps_display(ps_data.data);\n> + pfree(ps_data.data);\n> +\n> + set_ps_display(\"initializing\");\n> As per the existing behaviour, if update_process_title is true, we\n> display \"authentication\" as the initial string. On the other hand,\n> this patch, irrespective of the GUC variable, always displays\n> \"initializing\" as the initial string and In PerformAuthentication, it\n> sets the display as \"authentication\" Is this intended? Should we check\n> the GUC here as well?\n\nset_ps_display() checks update_process_title itself and does nothing if \nit's off, so this should work okay.\n\n> In v3-0002-Unify-several-ways-to-tracking-backend-type.patch,\n> + * If fixed_part is NULL, a default will be obtained from BackendType.\n> s/BackendType/MyBackendType?\n\nyup\n\n> In v3-0003-Add-backend-type-to-csvlog-and-optionally-log_lin.patch,\n> + <entry>Backend process type</entry>\n> In other places you've written \"backend type\".\n\nok changed\n\n> \n> In v3-0004-Remove-am_syslogger-global-variable.patch,\n> + * This is exported so that elog.c can call it when BackendType is B_LOGGER.\n> s/BackendType/MyBackendType?\n\nok\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 10 Mar 2020 16:41:23 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: backend type in log_line_prefix?" 
}, { "msg_contents": "On Tue, Mar 10, 2020 at 4:41 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-03-09 16:20, Kuntal Ghosh wrote:\n> > In v3-0002-Unify-several-ways-to-tracking-backend-type.patch,\n\nIn pgstat_get_backend_desc(), the fallback \"unknown process type\"\ndescription shouldn't be required anymore.\n\nOther than that, it all looks good to me.\n\n\n", "msg_date": "Tue, 10 Mar 2020 17:38:30 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?" }, { "msg_contents": "On Tue, Mar 10, 2020 at 9:11 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-03-09 16:20, Kuntal Ghosh wrote:\n> > In v3-0001-Refactor-ps_status.c-API.patch,\n> > - * postgres: walsender <user> <host> <activity>\n> > This part is still valid, right?\n> Sure but I figured this comment was in the context of the explanation of\n> how the old API was being abused, so it's no longer necessary.\n>\nMakes sense.\n\n> set_ps_display() checks update_process_title itself and does nothing if\n> it's off, so this should work okay.\nRight.\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 10 Mar 2020 22:35:05 +0530", "msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?" 
}, { "msg_contents": "I like these patches; the first two are nice cleanup.\n\nMy only gripe is that pgstat_get_backend_desc() is not really a pgstat\nfunction; I think it should have a different name with a prototype in\nmiscadmin.h (next to the enum's new location, which I would put\nsomeplace near the \"pmod.h\" comment rather than where you put it;\nperhaps just above the AuxProcType definition), and implementation\nprobably in miscinit.c.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 10 Mar 2020 15:07:31 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?" }, { "msg_contents": "On Thu, Feb 13, 2020 at 06:43:32PM +0900, Fujii Masao wrote:\n> If we do this, backend type should be also included in csvlog?\n\n+1, I've been missing that\n\nNote, this patch seems to correspond to:\nb025f32e0b Add leader_pid to pg_stat_activity\n\nI had mentioned privately to Julien missing this info in CSV log.\n\nShould leader_pid be exposed instead (or in addition)? Or backend_type be a\npositive number giving the leader's PID if it's a parallel worker, or some\nspecial negative number like -BackendType to indicate a nonparallel worker.\nNULL for a B_BACKEND which is not a parallel worker.\n\nMy hope is to answer questions like these:\n\n. is query (ever? usually?) using parallel paths?\n. is query usefully using parallel paths?\n. what queries are my max_parallel_workers(_per_process) being used for ?\n. Are certain longrunning or frequently running queries which are using\n parallel paths using all max_parallel_workers and precluding other queries\n from using parallel query ?
Or, are semi-short queries sometimes precluding\n longrunning queries from using parallelism, when the long queries would\n better benefit ?\n\nI think this patch alone wouldn't provide that, and there'd need to either be a\nline logged for each worker. Maybe it'd log full query+details (ugh), or just\nlog \"parallel worker of pid...\". Or maybe there'd be a new column with which\nthe leader would log nworkers (workers planned vs workers launched - I would\n*not* want to get this out of autoexplain).\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 10 Mar 2020 14:01:43 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?" }, { "msg_contents": "On Tue, Mar 10, 2020 at 02:01:42PM -0500, Justin Pryzby wrote:\n> On Thu, Feb 13, 2020 at 06:43:32PM +0900, Fujii Masao wrote:\n> > If we do this, backend type should be also included in csvlog?\n> \n> +1, I've been missing that\n> \n> Note, this patch seems to correspond to:\n> b025f32e0b Add leader_pid to pg_stat_activity\n> \n> I had mentioned privately to Julien missing this info in CSV log.\n> \n> Should leader_pid be exposed instead (or in addition)? 
Or backend_type be a\n\nI looked more closely and played with the patch.\n\nCan I suggest:\n\n$ git diff\ndiff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c\nindex 3a6f7f9456..56e0a1437e 100644\n--- a/src/backend/utils/error/elog.c\n+++ b/src/backend/utils/error/elog.c\n@@ -2945,7 +2945,7 @@ write_csvlog(ErrorData *edata)\n if (MyProcPid == PostmasterPid)\n appendCSVLiteral(&buf, \"postmaster\");\n else if (MyBackendType == B_BG_WORKER)\n- appendCSVLiteral(&buf, MyBgworkerEntry->bgw_type);\n+ appendCSVLiteral(&buf, MyBgworkerEntry->bgw_name);\n else\n appendCSVLiteral(&buf, pgstat_get_backend_desc(MyBackendType));\n\n\nThen it logs the leader:\n|2020-03-11 13:16:05.596 CDT,,,16289,,5e692ae3.3fa1,1,,2020-03-11 13:16:03 CDT,4/3,0,LOG,00000,\"temporary file: path \"\"base/pgsql_tmp/pgsql_tmp16289.0\"\", size 4276224\",,,,,,\"explain analyze SELECT * FROM t a JOIN t b USING(i) WHERE i>999 GROUP BY 1;\",,,\"psql\",\"parallel worker for PID 16210\"\n\nIt'll be easy enough to extract the leader and join that ON leader=pid.\n\n> I think this patch alone wouldn't provide that, and there'd need to either be a\n> line logged for each worker. Maybe it'd log full query+details (ugh), or just\n> log \"parallel worker of pid...\". 
Or maybe there'd be a new column with which\n> the leader would log nworkers (workers planned vs workers launched - I would\n> *not* want to get this out of autoexplain).\n\nI'm still not sure how to do that, though.\nI see I can get what's needed at DEBUG1:\n\n|2020-03-11 13:50:58.304 CDT,,,16196,,5e692aa7.3f44,22,,2020-03-11 13:15:03 CDT,,0,DEBUG,00000,\"registering background worker \"\"parallel worker for PID 16210\"\"\",,,,,,,,,\"\",\"postmaster\"\n\nBut I don't think it's viable to run for very long with log_statement=all,\nlog_min_messages=DEBUG.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 11 Mar 2020 13:53:54 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?" }, { "msg_contents": "On 2020-03-11 19:53, Justin Pryzby wrote:\n> Can I suggest:\n> \n> $ git diff\n> diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c\n> index 3a6f7f9456..56e0a1437e 100644\n> --- a/src/backend/utils/error/elog.c\n> +++ b/src/backend/utils/error/elog.c\n> @@ -2945,7 +2945,7 @@ write_csvlog(ErrorData *edata)\n> if (MyProcPid == PostmasterPid)\n> appendCSVLiteral(&buf, \"postmaster\");\n> else if (MyBackendType == B_BG_WORKER)\n> - appendCSVLiteral(&buf, MyBgworkerEntry->bgw_type);\n> + appendCSVLiteral(&buf, MyBgworkerEntry->bgw_name);\n> else\n> appendCSVLiteral(&buf, pgstat_get_backend_desc(MyBackendType));\n\nThe difference is intentional. bgw_type is so that you can filter and \ngroup by type. The bgw_name could be totally different for each instance.\n\nHaving the bgw name available somehow would perhaps also be useful, but \nthen we should also do this in a consistent way for processes that are \nnot background workers, such as regular client backends or wal senders \nor autovacuum workers. 
Doing it just for background workers would \ncreate inconsistencies that the introduction of bgw_type some time ago \nsought to eliminate.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 13 Mar 2020 22:22:52 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: backend type in log_line_prefix?" }, { "msg_contents": "On 2020-03-10 19:07, Alvaro Herrera wrote:\n> I like these patches; the first two are nice cleanup.\n> \n> My only gripe is that pgstat_get_backend_desc() is not really a pgstat\n> function; I think it should have a different name with a prototype in\n> miscadmin.h (next to the enum's new location, which I would put\n> someplace near the \"pmod.h\" comment rather than where you put it;\n> perhaps just above the AuxProcType definition), and implementation\n> probably in miscinit.c.\n\nI have committed the refactoring patches with adjustments along these \nlines. The patch with the log_line_prefix and csvlog enhancements is \nstill under discussion.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 13 Mar 2020 22:24:02 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: backend type in log_line_prefix?" 
}, { "msg_contents": "On Fri, Feb 21, 2020 at 10:09:38AM +0100, Peter Eisentraut wrote:\n> From 75ac8ed0c47801712eb2aa300d9cb29767d2e121 Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <peter@eisentraut.org>\n> Date: Thu, 20 Feb 2020 18:16:39 +0100\n> Subject: [PATCH v2 3/4] Add backend type to csvlog and optionally log_line_prefix\n\n> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n> index c1128f89ec..206778b1c3 100644\n> --- a/doc/src/sgml/config.sgml\n> +++ b/doc/src/sgml/config.sgml\n> @@ -6470,6 +6470,11 @@ <title>What to Log</title>\n\n characters are copied straight to the log line. Some escapes are\n only recognized by session processes, and will be treated as empty by\n background processes such as the main server process. Status\n...\n <entry>Escape</entry>\n <entry>Effect</entry>\n <entry>Session only</entry>\n\n> <entry>Application name</entry>\n> <entry>yes</entry>\n> </row>\n> + <row>\n> + <entry><literal>%b</literal></entry>\n> + <entry>Backend process type</entry>\n> + <entry>yes</entry>\n\n=> should say \"no\", it's not blank for background processes:\n\n> +\n> +\t\t\t\t\tif (MyProcPid == PostmasterPid)\n> +\t\t\t\t\t\tbackend_type_str = \"postmaster\";\n> +\t\t\t\t\telse if (MyBackendType == B_BG_WORKER)\n> +\t\t\t\t\t\tbackend_type_str = MyBgworkerEntry->bgw_type;\n> +\t\t\t\t\telse\n> +\t\t\t\t\t\tbackend_type_str = pgstat_get_backend_desc(MyBackendType);\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 14 Mar 2020 10:49:49 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?" }, { "msg_contents": "On Fri, Mar 13, 2020 at 10:22:52PM +0100, Peter Eisentraut wrote:\n> >Can I suggest:\n> >\n> >- appendCSVLiteral(&buf, MyBgworkerEntry->bgw_type);\n> >+ appendCSVLiteral(&buf, MyBgworkerEntry->bgw_name);\n> \n> The difference is intentional. bgw_type is so that you can filter and group\n> by type. 
The bgw_name could be totally different for each instance.\n\nI found 5373bc2a0867048bb78f93aede54ac1309b5e227\n\nYour patch adds bgw_type, which is also in pg_stat_activity, so I agree it's\ngood to allow include it log_line_prefix and CSV.\n\nI suggest the CSV/log should also have the leader_pid, corresponding to\n| b025f32e0b Add leader_pid to pg_stat_activity\n\nWith the attached on top of your patch, CSV logs like:\n\n2020-03-14 22:09:39.395 CDT,\"pryzbyj\",\"template1\",17030,\"[local]\",5e6d9c69.4286,2,\"idle\",2020-03-14 22:09:29 CDT,3/23,0,LOG,00000,\"statement: explain analyze SELECT COUNT(1), a.a FROM t a JOIN t b ON a.a=b.a GROUP BY 2;\",,,,,,,,,\"psql\",\"client backend\",\n2020-03-14 22:09:43.094 CDT,,,17042,,5e6d9c73.4292,1,,2020-03-14 22:09:39 CDT,4/3,0,LOG,00000,\"temporary file: path \"\"base/pgsql_tmp/pgsql_tmp17042.0\"\", size 4694016\",,,,,,\"explain analyze SELECT COUNT(1), a.a FROM t a JOIN t b ON a.a=b.a GROUP BY 2;\",,,\"psql\",\"parallel worker\",17030\n2020-03-14 22:09:43.094 CDT,,,17043,,5e6d9c73.4293,1,,2020-03-14 22:09:39 CDT,5/3,0,LOG,00000,\"temporary file: path \"\"base/pgsql_tmp/pgsql_tmp17043.0\"\", size 4694016\",,,,,,\"explain analyze SELECT COUNT(1), a.a FROM t a JOIN t b ON a.a=b.a GROUP BY 2;\",,,\"psql\",\"parallel worker\",17030\n\nAs for my question \"what's using/trying/failing to use parallel workers\", I was\nable to look into that by parsing \"Workers Planned/Launched\" from autoexplain.\nIt's not a *good* way to do it, but I don't see how to do better and I don't\nsee any way this patch can improve that.\n\n-- \nJustin", "msg_date": "Sun, 15 Mar 2020 04:57:28 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?" 
}, { "msg_contents": "On 2020-03-13 22:24, Peter Eisentraut wrote:\n> On 2020-03-10 19:07, Alvaro Herrera wrote:\n>> I like these patches; the first two are nice cleanup.\n>>\n>> My only gripe is that pgstat_get_backend_desc() is not really a pgstat\n>> function; I think it should have a different name with a prototype in\n>> miscadmin.h (next to the enum's new location, which I would put\n>> someplace near the \"pmod.h\" comment rather than where you put it;\n>> perhaps just above the AuxProcType definition), and implementation\n>> probably in miscinit.c.\n> \n> I have committed the refactoring patches with adjustments along these\n> lines. The patch with the log_line_prefix and csvlog enhancements is\n> still under discussion.\n\nI have committed that last one also, after some corrections.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 15 Mar 2020 11:32:31 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: backend type in log_line_prefix?" }, { "msg_contents": "On 2020-03-15 10:57, Justin Pryzby wrote:\n> I suggest the CSV/log should also have the leader_pid, corresponding to\n> | b025f32e0b Add leader_pid to pg_stat_activity\n\nI haven't followed those developments. It sounds interesting, but I \nsuggest you start a new thread or continue in the thread that added \nleader_pid.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 15 Mar 2020 11:34:20 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: backend type in log_line_prefix?" 
}, { "msg_contents": "\n\nOn 2020/03/15 19:32, Peter Eisentraut wrote:\n> On 2020-03-13 22:24, Peter Eisentraut wrote:\n>> On 2020-03-10 19:07, Alvaro Herrera wrote:\n>>> I like these patches; the first two are nice cleanup.\n>>>\n>>> My only gripe is that pgstat_get_backend_desc() is not really a pgstat\n>>> function; I think it should have a different name with a prototype in\n>>> miscadmin.h (next to the enum's new location, which I would put\n>>> someplace near the \"pmod.h\" comment rather than where you put it;\n>>> perhaps just above the AuxProcType definition), and implementation\n>>> probably in miscinit.c.\n>>\n>> I have committed the refactoring patches with adjustments along these\n>> lines.  The patch with the log_line_prefix and csvlog enhancements is\n>> still under discussion.\n> \n> I have committed that last one also, after some corrections.\n\nThanks for adding this nice feature!\n\nI have one comment: you seem to need to update file-fdw.sgml so that\nthe pglog table in the doc includes the backend_type column.\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Mon, 16 Mar 2020 12:04:10 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?"
}, { "msg_contents": "> On 2020/03/15 19:32, Peter Eisentraut wrote:\n> > On 2020-03-13 22:24, Peter Eisentraut wrote:\n> >> On 2020-03-10 19:07, Alvaro Herrera wrote:\n> >>> I like these patches; the first two are nice cleanup.\n> >>>\n> >>> My only gripe is that pgstat_get_backend_desc() is not really a pgstat\n> >>> function; I think it should have a different name with a prototype in\n> >>> miscadmin.h (next to the enum's new location, which I would put\n> >>> someplace near the \"pmod.h\" comment rather than where you put it;\n> >>> perhaps just above the AuxProcType definition), and implementation\n> >>> probably in miscinit.c.\n> >>\n> >> I have committed the refactoring patches with adjustments along these\n> >> lines. The patch with the log_line_prefix and csvlog enhancements is\n> >> still under discussion.\n> >\n> > I have committed that last one also, after some corrections.\n\nSorry for being late to this thread, but was wondering if anyone had\ntaken a look at the Process Centralization patchset that I submitted\nto this CF:\nhttps://www.postgresql.org/message-id/CAMN686HgTVRJBAw6hqFE4Lj8bgPLQqfp1c-%2BWBGUtEmg6wPVhg%40mail.gmail.com\n\nThere's quite a bit of that code that is in the same vein as the\nMyBackendType changes proposed/merged in this thread.\n\nI think we could reduce a large portion of redundant code (including\nthe pgstat_get_backend_desc code) while also\ncentralizing/standardizing process startup. 
A few useful features\n(outside of code reduction) include the ability to identify backends\nprior to their Main functions, cleaner control of SubPostmasterMain\nlogic (including implicit handling of shmem timing considerations).\n\nIf others think it's worthwhile, I will work on rebasing those changes\non the changes proposed/merged in this thread (re: MyBackendType).\n\nThanks,\n-- \nMike Palmiotto\nhttps://crunchydata.com\n\n\n", "msg_date": "Tue, 17 Mar 2020 15:03:22 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?" }, { "msg_contents": "On Sun, Mar 15, 2020 at 7:32 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n>\n>\n> I have committed that last one also, after some corrections.\n>\n\nIMHO we should also update file_fdw documentation. See attached!\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Thu, 19 Mar 2020 13:37:17 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?" }, { "msg_contents": "On Thu, Mar 19, 2020 at 01:37:17PM -0300, Fabrízio de Royes Mello wrote:\n> \n> On Sun, Mar 15, 2020 at 7:32 AM Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> >\n> > I have committed that last one also, after some corrections.\n> >\n> \n> IMHO we should also update file_fdw documentation.
See attached!\n> \n> Regards,\n> \n> --\n> Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n> PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n> diff --git a/doc/src/sgml/file-fdw.sgml b/doc/src/sgml/file-fdw.sgml\n> index 28b61c8f2d..ed028e4ec9 100644\n> --- a/doc/src/sgml/file-fdw.sgml\n> +++ b/doc/src/sgml/file-fdw.sgml\n> @@ -261,7 +261,8 @@ CREATE FOREIGN TABLE pglog (\n> query text,\n> query_pos integer,\n> location text,\n> - application_name text\n> + application_name text,\n> + backend_type text\n> ) SERVER pglog\n> OPTIONS ( filename '/home/josh/data/log/pglog.csv', format 'csv' );\n> </programlisting>\n\nPatch applied to master, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 23 Mar 2020 18:38:53 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?"
}, { "msg_contents": "On Fri, Mar 27, 2020 at 04:30:07PM +0900, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> At Mon, 23 Mar 2020 18:38:53 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> > Patch applied to master, thanks.\n> \n> The patch (8e8a0becb3) named archiver process as just \"archiver\". On\n> the other hand the discussion in the thread [1] was going to name the\n> process as \"WAL/wal archiver\". As all other processes related to WAL\n> are named as walreceiver, walsender, walwriter, wouldn't we name the\n> process like \"wal archiver\"?\n> \n> [1]: https://www.postgresql.org/message-id/20200319195410.icib45bbgjwqb5zn@alap3.anarazel.de\n\nAgreed. I ended up moving \"wal\" as a separate word, since it looks\ncleaner; patch attached. Tools that look for the backend type in\npg_stat_activity would need to be adjusted; it would be an\nincompatibility. Maybe changing it would cause too much disruption.\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +", "msg_date": "Tue, 31 Mar 2020 21:55:48 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: backend type in log_line_prefix?" }, { "msg_contents": "On 2020-04-01 03:55, Bruce Momjian wrote:\n> Agreed. I ended up moving \"wal\" as a separate word, since it looks\n> cleaner; patch attached. Tools that look for the backend type in\n> pg_stat_activity would need to be adjusted; it would be an\n> incompatibility. Maybe changing it would cause too much disruption.\n\nYeah, it's probably not worth the change for that reason. There is no \nconfusion what the \"archiver\" is. Also, we have archive_mode, \narchive_command, etc. without a wal_ prefix. 
Let's leave it as is.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 1 Apr 2020 15:44:01 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: backend type in log_line_prefix?" } ]
[ { "msg_contents": "When making build system changes that risk breaking the MSVC build \nsystem, it's useful to be able to run the part of the MSVC build tools \nthat read the makefiles and produce the project files under a \nnot-Windows platform. This part does not really depend on anything \nparticular to Windows, so it's possible in principle. There are some \nminor dependencies on Windows, however, that need to be worked around. \nI have had some local hacks for that for a while, and I took a moment to \nclean them up and make them presentable, so here they are. Interested?\n\nTo test, apply the patch and run perl src/tools/msvc/mkvcbuild.pl .\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 13 Feb 2020 12:00:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "allow running parts of src/tools/msvc/ under not Windows" }, { "msg_contents": "On Thu, Feb 13, 2020 at 12:00:54PM +0100, Peter Eisentraut wrote:\n> When making build system changes that risk breaking the MSVC build system,\n> it's useful to be able to run the part of the MSVC build tools that read the\n> makefiles and produce the project files under a not-Windows platform. This\n> part does not really depend on anything particular to Windows, so it's\n> possible in principle. There are some minor dependencies on Windows,\n> however, that need to be worked around. I have had some local hacks for that\n> for a while, and I took a moment to clean them up and make them presentable,\n> so here they are. Interested?\n> \n> To test, apply the patch and run perl src/tools/msvc/mkvcbuild.pl .\n\n$ perl src/tools/msvc/mkvcbuild.pl .\nWarning: no config.pl found, using default.\nUnable to determine Visual Studio version: The nmake command wasn't\nfound. at /home/ioltas/git/postgres/src/tools/msvc/Mkvcbuild.pm line 92.\nIs that the expected result? 
\n--\nMichael", "msg_date": "Thu, 13 Feb 2020 21:04:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: allow running parts of src/tools/msvc/ under not Windows" }, { "msg_contents": "On 2020-02-13 13:04, Michael Paquier wrote:\n> On Thu, Feb 13, 2020 at 12:00:54PM +0100, Peter Eisentraut wrote:\n>> When making build system changes that risk breaking the MSVC build system,\n>> it's useful to be able to run the part of the MSVC build tools that read the\n>> makefiles and produce the project files under a not-Windows platform. This\n>> part does not really depend on anything particular to Windows, so it's\n>> possible in principle. There are some minor dependencies on Windows,\n>> however, that need to be worked around. I have had some local hacks for that\n>> for a while, and I took a moment to clean them up and make them presentable,\n>> so here they are. Interested?\n>>\n>> To test, apply the patch and run perl src/tools/msvc/mkvcbuild.pl .\n> \n> $ perl src/tools/msvc/mkvcbuild.pl .\n> Warning: no config.pl found, using default.\n> Unable to determine Visual Studio version: The nmake command wasn't\n> found. at /home/ioltas/git/postgres/src/tools/msvc/Mkvcbuild.pm line 92.\n> Is that the expected result?\n\nNo, I had apparently created my own fake \"nmake\" shell script some time \nago to work around that. 
Here is a new patch with that taken care of, too.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 13 Feb 2020 14:24:43 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: allow running parts of src/tools/msvc/ under not Windows" }, { "msg_contents": "On Thu, Feb 13, 2020 at 02:24:43PM +0100, Peter Eisentraut wrote:\n> On 2020-02-13 13:04, Michael Paquier wrote:\n> > On Thu, Feb 13, 2020 at 12:00:54PM +0100, Peter Eisentraut wrote:\n> > > When making build system changes that risk breaking the MSVC build system,\n> > > it's useful to be able to run the part of the MSVC build tools that read the\n> > > makefiles and produce the project files under a not-Windows platform. This\n> > > part does not really depend on anything particular to Windows, so it's\n> > > possible in principle. There are some minor dependencies on Windows,\n> > > however, that need to be worked around. I have had some local hacks for that\n> > > for a while, and I took a moment to clean them up and make them presentable,\n> > > so here they are. Interested?\n> > >\n> > > To test, apply the patch and run perl src/tools/msvc/mkvcbuild.pl .\n> >\n> > $ perl src/tools/msvc/mkvcbuild.pl .\n> > Warning: no config.pl found, using default.\n> > Unable to determine Visual Studio version: The nmake command wasn't\n> > found. at /home/ioltas/git/postgres/src/tools/msvc/Mkvcbuild.pm line 92.\n> > Is that the expected result?\n>\n> No, I had apparently created my own fake \"nmake\" shell script some time ago\n> to work around that. Here is a new patch with that taken care of, too.\n\nWith v2 I'm able to successfully run mkvcbuild.pl on linux and macos. 
I don't\nhave any knowledge on compiling with windows, so I can't really judge what it's\nbeen doing.\n\n\n", "msg_date": "Thu, 13 Feb 2020 14:40:32 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: allow running parts of src/tools/msvc/ under not Windows" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Thu, Feb 13, 2020 at 02:24:43PM +0100, Peter Eisentraut wrote:\n>>> When making build system changes that risk breaking the MSVC build system,\n>>> it's useful to be able to run the part of the MSVC build tools that read the\n>>> makefiles and produce the project files under a not-Windows platform.\n\n> With v2 I'm able to successfully run mkvcbuild.pl on linux and macos. I don't\n> have any knowledge on compiling with windows, so I can't really judge what it's\n> been doing.\n\nYeah, I'm wondering exactly how this helps. IME the typical sort of\nbreakage is \"the MSVC build doesn't know that file X needs to be\nincluded when building Y\". It seems like just building the project\nfiles will teach one nothing about that type of omission.\n\nI don't have any particular objection to the patch as given, it\njust doesn't sound helpful for me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Feb 2020 10:36:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allow running parts of src/tools/msvc/ under not Windows" }, { "msg_contents": "On 2020-02-13 16:36, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n>> On Thu, Feb 13, 2020 at 02:24:43PM +0100, Peter Eisentraut wrote:\n>>>> When making build system changes that risk breaking the MSVC build system,\n>>>> it's useful to be able to run the part of the MSVC build tools that read the\n>>>> makefiles and produce the project files under a not-Windows platform.\n> \n>> With v2 I'm able to successfully run mkvcbuild.pl on linux and macos. 
I don't\n>> have any knowledge on compiling with windows, so I can't really judge what it's\n>> been doing.\n> \n> Yeah, I'm wondering exactly how this helps. IME the typical sort of\n> breakage is \"the MSVC build doesn't know that file X needs to be\n> included when building Y\". It seems like just building the project\n> files will teach one nothing about that type of omission.\n\nThe main benefit is that if you make \"blind\" edits in the Perl files, \nyou can verify them easily, first by seeing that the Perl code runs, \nsecond, depending on the circumstances, by diffing the created project \nfiles. Another is that if you do some nontrivial surgery in makefiles, \nyou can check whether the Perl code can still process them. So the \nbenefit is mainly that you can iterate faster when working on build \nsystem related things. You still need to do a full test on Windows at \nthe conclusion, but then hopefully with a better chance of success.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 20 Feb 2020 09:14:37 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: allow running parts of src/tools/msvc/ under not Windows" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-02-13 16:36, Tom Lane wrote:\n>> Yeah, I'm wondering exactly how this helps. IME the typical sort of\n>> breakage is \"the MSVC build doesn't know that file X needs to be\n>> included when building Y\". It seems like just building the project\n>> files will teach one nothing about that type of omission.\n\n> The main benefit is that if you make \"blind\" edits in the Perl files, \n> you can verify them easily, first by seeing that the Perl code runs, \n> second, depending on the circumstances, by diffing the created project \n> files. 
Another is that if you do some nontrivial surgery in makefiles, \n> you can check whether the Perl code can still process them. So the \n> benefit is mainly that you can iterate faster when working on build \n> system related things. You still need to do a full test on Windows at \n> the conclusion, but then hopefully with a better chance of success.\n\nI see. No objection then.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Feb 2020 09:31:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allow running parts of src/tools/msvc/ under not Windows" }, { "msg_contents": "On Thu, Feb 20, 2020 at 09:31:32AM -0500, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> The main benefit is that if you make \"blind\" edits in the Perl files, \n>> you can verify them easily, first by seeing that the Perl code runs, \n>> second, depending on the circumstances, by diffing the created project \n>> files. Another is that if you do some nontrivial surgery in makefiles, \n>> you can check whether the Perl code can still process them. So the \n>> benefit is mainly that you can iterate faster when working on build \n>> system related things. You still need to do a full test on Windows at \n>> the conclusion, but then hopefully with a better chance of success.\n> \n> I see. 
No objection then.\n\nNone from here either, and the patch is working correctly.\n--\nMichael", "msg_date": "Fri, 21 Feb 2020 13:00:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: allow running parts of src/tools/msvc/ under not Windows" }, { "msg_contents": "On 2020-02-21 05:00, Michael Paquier wrote:\n> On Thu, Feb 20, 2020 at 09:31:32AM -0500, Tom Lane wrote:\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> The main benefit is that if you make \"blind\" edits in the Perl files,\n>>> you can verify them easily, first by seeing that the Perl code runs,\n>>> second, depending on the circumstances, by diffing the created project\n>>> files. Another is that if you do some nontrivial surgery in makefiles,\n>>> you can check whether the Perl code can still process them. So the\n>>> benefit is mainly that you can iterate faster when working on build\n>>> system related things. You still need to do a full test on Windows at\n>>> the conclusion, but then hopefully with a better chance of success.\n>>\n>> I see. 
No objection then.\n> \n> None from here either, and the patch is working correctly.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 21 Feb 2020 21:04:02 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: allow running parts of src/tools/msvc/ under not Windows" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> committed\n\ncrake says that this doesn't pass perlcritic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Feb 2020 15:25:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: allow running parts of src/tools/msvc/ under not Windows" }, { "msg_contents": "On 2020-02-21 21:25, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> committed\n> \n> crake says that this doesn't pass perlcritic.\n\nOK, fixed.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 21 Feb 2020 22:03:42 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: allow running parts of src/tools/msvc/ under not Windows" } ]
[ { "msg_contents": "Hello,\n\nCurrently the documentation says that one can put \"a list of table \nexpressions\"\nafter FROM in UPDATE or after USING in DELETE.\nHowever, \"table expression\" is defined as a complex of\nFROM, WHERE, GROUP BY and HAVING clauses [1].\nThe thing one can list in the FROM clause in a comma-separated manner\nis called a table reference [2].\nSELECT reference does not use this term but explains what they could be [3].\n\nPlease could someone have a look at the patch attached?\nIt's not just pedantry but rather based on a real-life example of \nsomeone reading and being not sure\nwhether e.g. joins can be used in there.\n\nBest, Alex\n\n[1] https://www.postgresql.org/docs/devel/queries-table-expressions.html\n[2] \nhttps://www.postgresql.org/docs/devel/queries-table-expressions.html#QUERIES-FROM\n[3] https://www.postgresql.org/docs/devel/sql-select.html#SQL-FROM", "msg_date": "Thu, 13 Feb 2020 11:13:32 +0000", "msg_from": "Alexey Bashtanov <bashtanov@imap.cc>", "msg_from_op": true, "msg_subject": "Small docs bugfix: make it clear what can be used in UPDATE FROM and\n DELETE USING" }, { "msg_contents": "On Thu, Feb 13, 2020 at 4:13 AM Alexey Bashtanov <bashtanov@imap.cc> wrote:\n\n> Hello,\n>\n> Currently the documentation says that one can put \"a list of table\n> expressions\"\n> after FROM in UPDATE or after USING in DELETE.\n> However, \"table expression\" is defined as a complex of\n> FROM, WHERE, GROUP BY and HAVING clauses [1].\n> The thing one can list in the FROM clause in a comma-separated manner\n> is called a table reference [2].\n> SELECT reference does not use this term but explains what they could be\n> [3].\n>\n> Please could someone have a look at the patch attached?\n> It's not just pedantry but rather based on a real-life example of\n> someone reading and being not sure\n> whether e.g. 
joins can be used in there.\n>\n> Best, Alex\n>\n> [1] https://www.postgresql.org/docs/devel/queries-table-expressions.html\n> [2]\n>\n> https://www.postgresql.org/docs/devel/queries-table-expressions.html#QUERIES-FROM\n> [3] https://www.postgresql.org/docs/devel/sql-select.html#SQL-FROM\n\n\nDrive-by comment - I'm on board with the idea but I do not believe this\npatch accomplishes the goal.\n\nIMO there is too much indirection happening and trying to get terms exactly\nright, so the user can find or remember them from elsewhere in the\ndocumentation, doesn't seem like the best solution. The material isn't\nthat extensive and since it is covered elsewhere a little bit more\nexplicitness in the DELETE and FROM documentation seems like a better path\nforward.\n\nDavid J.\n", "msg_date": "Thu, 13 Feb 2020 08:04:40 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small docs bugfix: make it clear what can be used in UPDATE FROM\n and DELETE USING" }, { "msg_contents": "On Thu, Feb 13, 2020 at 11:13:32AM +0000, Alexey Bashtanov wrote:\n> Hello,\n> \n> Currently the documentation says that one can put \"a list of table\n> expressions\"\n> after FROM in UPDATE or after USING in DELETE.\n> However, \"table expression\" is defined as a complex of\n> FROM, WHERE, GROUP BY and HAVING clauses [1].\n> The thing one can list in the FROM clause in a comma-separated manner\n> is called a table reference [2].\n> SELECT reference does not use this term but explains what they could be [3].\n> \n> Please could someone have a look at the patch attached?\n> It's not just pedantry but rather based on a real-life example of someone\n> reading and being not sure\n> whether e.g. 
joins can be used in there.\n\nThanks for doing this!\n\nSpeaking of examples, there should be more of them illustrating some\nof the cases you name.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Thu, 13 Feb 2020 16:11:23 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Small docs bugfix: make it clear what can be used in UPDATE FROM\n and DELETE USING" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Thu, Feb 13, 2020 at 4:13 AM Alexey Bashtanov <bashtanov@imap.cc> wrote:\n>> Please could someone have a look at the patch attached?\n>> It's not just pedantry but rather based on a real-life example of\n>> someone reading and being not sure\n>> whether e.g. joins can be used in there.\n\n> Drive-by comment - I'm on board with the idea but I do not believe this\n> patch accomplishes the goal.\n> IMO there is too much indirection happening and trying to get terms exactly\n> right, so the user can find or remember them from elsewhere in the\n> documentation, doesn't seem like the best solution. The material isn't\n> that extensive and since it is covered elsewhere a little bit more\n> explicitness in the DELETE and FROM documentation seems like a better path\n> forward.\n\nI see where you're coming from, but I do not think that repeating the\nwhole from_item syntax in UPDATE and DELETE is the best way forward.\nIn the first place, we'd inevitably forget to update those copies,\nand in the second, I'm not sure that the syntax is all that helpful\nwithout all the supporting text in the SELECT ref page --- which\nsurely we aren't going to duplicate.\n\nI think the real problem with the places Alexey is on about is that\nthey're too waffle-y. 
They use wording like \"similar to\", leaving\none wondering what discrepancies exist but are being papered over.\nIn point of fact, as a look into gram.y will show, what you can\nwrite after UPDATE ... FROM or DELETE ... USING is *exactly* the\nsame thing as what you can write after SELECT ... FROM. So what\nI'm in favor of here is:\n\n* Change the synopsis entries to look like \"FROM from_item [, ...]\"\nand \"USING from_item [, ...]\", so that they match the SELECT\nsynopsis exactly.\n\n* In the text, describe from_item as being exactly the same as\nit is in SELECT.\n\n(Compare the handling of with_query, which has pretty much the\nsame problem of being way too complex to document three times.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Feb 2020 11:26:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Small docs bugfix: make it clear what can be used in UPDATE FROM\n and DELETE USING" }, { "msg_contents": "On Thu, Feb 13, 2020 at 9:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Thu, Feb 13, 2020 at 4:13 AM Alexey Bashtanov <bashtanov@imap.cc>\n> wrote:\n> >> Please could someone have a look at the patch attached?\n> >> It's not just pedantry but rather based on a real-life example of\n> >> someone reading and being not sure\n> >> whether e.g. joins can be used in there.\n>\n> > Drive-by comment - I'm on board with the idea but I do not believe this\n> > patch accomplishes the goal.\n> > IMO there is too much indirection happening and trying to get terms\n> exactly\n> > right, so the user can find or remember them from elsewhere in the\n> > documentation, doesn't seem like the best solution. 
The material isn't\n> > that extensive and since it is covered elsewhere a little bit more\n> > explicitness in the DELETE and FROM documentation seems like a better\n> path\n> > forward.\n>\n> I see where you're coming from, but I do not think that repeating the\n> whole from_item syntax in UPDATE and DELETE is the best way forward.\n> In the first place, we'd inevitably forget to update those copies,\n> and in the second, I'm not sure that the syntax is all that helpful\n> without all the supporting text in the SELECT ref page --- which\n> surely we aren't going to duplicate.\n>\n> I think the real problem with the places Alexey is on about is that\n> they're too waffle-y. They use wording like \"similar to\", leaving\n> one wondering what discrepancies exist but are being papered over.\n> In point of fact, as a look into gram.y will show, what you can\n> write after UPDATE ... FROM or DELETE ... USING is *exactly* the\n> same thing as what you can write after SELECT ... FROM. So what\n> I'm in favor of here is:\n>\n> * Change the synopsis entries to look like \"FROM from_item [, ...]\"\n> and \"USING from_item [, ...]\", so that they match the SELECT\n> synopsis exactly.\n>\n> * In the text, describe from_item as being exactly the same as\n> it is in SELECT.\n>\n>\n+1\n\nI didn't want a wholesale repetition but the whole \"similar to\" piece is\nindeed my issue and this addresses it sufficiently.\n\nDavid J.\n", "msg_date": "Thu, 13 Feb 2020 09:47:59 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small docs bugfix: make it clear what can be used in UPDATE FROM\n and DELETE USING" }, { "msg_contents": "On Thu, Feb 13, 2020 at 11:26:45AM -0500, Tom Lane wrote:\n> I see where you're coming from, but I do not think that repeating the\n> whole from_item syntax in UPDATE and DELETE is the best way forward.\n> In the first place, we'd inevitably forget to update those copies,\n> and in the second, I'm not sure that the syntax is all that helpful\n> without all the supporting text in the SELECT ref page --- which\n> surely we aren't going to duplicate.\n> \n> I think the real problem with the places Alexey is on about is that\n> they're too waffle-y. They use wording like \"similar to\", leaving\n> one wondering what discrepancies exist but are being papered over.\n> In point of fact, as a look into gram.y will show, what you can\n> write after UPDATE ... FROM or DELETE ... USING is *exactly* the\n> same thing as what you can write after SELECT ... FROM. So what\n> I'm in favor of here is:\n> \n> * Change the synopsis entries to look like \"FROM from_item [, ...]\"\n> and \"USING from_item [, ...]\", so that they match the SELECT\n> synopsis exactly.\n> \n> * In the text, describe from_item as being exactly the same as\n> it is in SELECT.\n> \n> (Compare the handling of with_query, which has pretty much the\n> same problem of being way too complex to document three times.)\n\nI have implemented the ideas above in the attached patch. I have\nsynchronized the syntax to match SELECT, and synchronized the paragraphs\ndescribing the item.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +", "msg_date": "Tue, 17 Mar 2020 20:31:43 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Small docs bugfix: make it clear what can be used in UPDATE FROM\n and DELETE USING" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I have implemented the ideas above in the attached patch. I have\n> synchronized the syntax to match SELECT, and synchronized the paragraphs\n> describing the item.\n\nI think that the DELETE synopsis should look like\n\n [ USING <replaceable class=\"parameter\">from_item</replaceable> [, ...] ]\n\nso that there's not any question which part of the SELECT syntax we're\ntalking about. I also think that the running text in both cases should\nsay in exactly these words \"from_item means the same thing as it does\nin SELECT\"; the wording you propose still seems to be dancing around\nthe point, leaving readers perhaps not quite sure about what is meant.\n\nIn the DELETE case you could alternatively say \"using_item means the same\nthing as from_item does in SELECT\", but that doesn't really seem like an\nimprovement to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 17 Mar 2020 22:58:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Small docs bugfix: make it clear what can be used in UPDATE FROM\n and DELETE USING" }, { "msg_contents": "On Tue, Mar 17, 2020 at 10:58:54PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I have implemented the ideas above in the attached patch. I have\n> > synchronized the syntax to match SELECT, and synchronized the paragraphs\n> > describing the item.\n> \n> I think that the DELETE synopsis should look like\n> \n> [ USING <replaceable class=\"parameter\">from_item</replaceable> [, ...] ]\n> \n> so that there's not any question which part of the SELECT syntax we're\n> talking about. 
I also think that the running text in both cases should\n> say in exactly these words \"from_item means the same thing as it does\n> in SELECT\"; the wording you propose still seems to be dancing around\n> the point, leaving readers perhaps not quite sure about what is meant.\n> \n> In the DELETE case you could alternatively say \"using_item means the same\n> thing as from_item does in SELECT\", but that doesn't really seem like an\n> improvement to me.\n\nOK, updated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +", "msg_date": "Wed, 18 Mar 2020 12:24:45 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Small docs bugfix: make it clear what can be used in UPDATE FROM\n and DELETE USING" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> OK, updated patch attached.\n\nLGTM, thanks.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 18 Mar 2020 12:50:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Small docs bugfix: make it clear what can be used in UPDATE FROM\n and DELETE USING" }, { "msg_contents": "On Wednesday, March 18, 2020, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Bruce Momjian <bruce@momjian.us> writes:\n> > OK, updated patch attached.\n>\n> LGTM, thanks\n>\n\n+1\n\nDavid J.\n", "msg_date": "Wed, 18 Mar 2020 10:58:03 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small docs bugfix: make it clear what can be used in UPDATE FROM\n and DELETE USING" }, { "msg_contents": "On Wed, Mar 18, 2020 at 12:24:45PM -0400, Bruce Momjian wrote:\n> On Tue, Mar 17, 2020 at 10:58:54PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > I have implemented the ideas above in the attached patch. I have\n> > > synchronized the syntax to match SELECT, and synchronized the paragraphs\n> > > describing the item.\n> > \n> > I think that the DELETE synopsis should look like\n> > \n> > [ USING <replaceable class=\"parameter\">from_item</replaceable> [, ...] ]\n> > \n> > so that there's not any question which part of the SELECT syntax we're\n> > talking about. I also think that the running text in both cases should\n> > say in exactly these words \"from_item means the same thing as it does\n> > in SELECT\"; the wording you propose still seems to be dancing around\n> > the point, leaving readers perhaps not quite sure about what is meant.\n> > \n> > In the DELETE case you could alternatively say \"using_item means the same\n> > thing as from_item does in SELECT\", but that doesn't really seem like an\n> > improvement to me.\n> \n> OK, updated patch attached.\n\nPatch applied through 9.5.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 31 Mar 2020 16:32:07 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Small docs bugfix: make it clear what can be used in UPDATE FROM\n and DELETE USING" } ]
[ { "msg_contents": "I've noticed that convert_and_check_filename() is always passed false for the\n\"logAllowed\" argument - someone probably forgot to remove the argument when it\nwas decided that log files are no longer accepted. If the argument was removed,\nthe function would become a bit simpler, see the patch.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Thu, 13 Feb 2020 12:15:39 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Dead code in adminpack" }, { "msg_contents": "On Thu, Feb 13, 2020 at 12:14 PM Antonin Houska <ah@cybertec.at> wrote:\n>\n> I've noticed that convert_and_check_filename() is always passed false for the\n> \"logAllowed\" argument - someone probably forgot to remove the argument when it\n> was decided that log files are no longer accepted. If the argument was removed,\n> the function would become a bit simpler, see the patch.\n\nIndeed, and actually I don't see when this codepath was reachable.\n\nPatch LGTM.\n\n\n", "msg_date": "Thu, 13 Feb 2020 12:41:46 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Dead code in adminpack" }, { "msg_contents": "On Thu, Feb 13, 2020 at 12:15:39PM +0100, Antonin Houska wrote:\n> I've noticed that convert_and_check_filename() is always passed false for the\n> \"logAllowed\" argument - someone probably forgot to remove the argument when it\n> was decided that log files are no longer accepted. If the argument was removed,\n> the function would become a bit simpler, see the patch.\n\nThis routine was originally named absClusterPath(), but even at its\norigin point (fe59e56) this argument has never been used. 
So no\nobjections to clean up that.\n--\nMichael", "msg_date": "Thu, 13 Feb 2020 20:45:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Dead code in adminpack" }, { "msg_contents": "On Thu, Feb 13, 2020 at 12:41:46PM +0100, Julien Rouhaud wrote:\n> On Thu, Feb 13, 2020 at 12:14 PM Antonin Houska <ah@cybertec.at> wrote:\n>> I've noticed that convert_and_check_filename() is always passed false for the\n>> \"logAllowed\" argument - someone probably forgot to remove the argument when it\n>> was decided that log files are no longer accepted. If the argument was removed,\n>> the function would become a bit simpler, see the patch.\n> \n> Indeed, and actually I don't see when this codepath was reachable.\n> \n> Patch LGTM.\n\nThanks, applied.\n--\nMichael", "msg_date": "Fri, 14 Feb 2020 12:44:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Dead code in adminpack" } ]
[ { "msg_contents": "Hi,\n\nTRUNCATE command on the temporary tables of other sessions fails\nwith the following error. This behavior looks expected to me.\n\n ERROR: cannot truncate temporary tables of other sessions\n\nHowever I found that LOCK TABLE and DROP TABLE commands on\nthe temporary tables of other sessions are successfully processed,\nif users (like superusers) have enough access privileges on them.\nIs this a bug? ISTM that the similar check that TRUNCATE command\ndoes needs to be added even in LOCK TABLE and DROP TABLE cases.\n\nBTW, even SELECT has the same issue. Basically SELECT on\nthe temporary tables of other sessions fails with the following\nerror.\n\n ERROR: cannot access temporary tables of other sessions\n\nHowever if the temporary table is empty, SELECT doesn't reach\nthe above check, is successfully processed and the relation lock\nis taken. This lock can prevent the backend process that created\nthe temporary table from exiting even when the client that\nthe backend is connecting to quits. Seems it's problematic.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 13 Feb 2020 22:09:56 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "LOCK TABLE and DROP TABLE on temp tables of other sessions" }, { "msg_contents": "On Thu, Feb 13, 2020 at 6:40 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n> Hi,\n>\n> TRUNCATE command on the temporary tables of other sessions fails\n> with the following error. This behavior looks expected to me.\n>\n> ERROR: cannot truncate temporary tables of other sessions\n>\n> However I found that LOCK TABLE and DROP TABLE commands on\n> the temporary tables of other sessions are successfully processed,\n> if users (like superusers) have enough access privileges on them.\n> Is this a bug?
ISTM that the similar check that TRUNCATE command\n> does needs to be added even in LOCK TABLE and DROP TABLE cases.\n>\n\nThat looks odd. Other sessions are able to see temporary tables of a given\nsession because they are stored in the same catalog which is accessible to\nall the sessions. But ideally, a temporary table should be visible only to\nthe session which created it (GTT is an exception). So LOCK and DROP table\nshould not succeed.\n\nThinking from a different perspective, DROP TABLE being able to drop a\ntemporary table seems a good tool in case a temporary table is left behind\nby a finished session. But that doesn't seem like a good reason to have it\nand I don't see much use of LOCK TABLE there.\n\n\n>\n>\n> BTW, even SELECT has the same issue. Basically SELECT on\n> the temporary tables of other sessions fails with the following\n> error.\n>\n> ERROR: cannot access temporary tables of other sessions\n>\n> However if the temporary table is empty, SELECT doesn't reach\n> the above check, is successfully processed and the relation lock\n> is taken. This lock can prevent the backend process that created\n> the temporary table from exiting even when the client that\n> the backend is connecting to quits. Seems it's problematic.\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> NTT DATA CORPORATION\n> Advanced Platform Technology Group\n> Research and Development Headquarters\n>\n>\n>\n\n-- \n--\nBest Wishes,\nAshutosh Bapat\n\n", "msg_date": "Thu, 13 Feb 2020 21:05:01 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE and DROP TABLE on temp tables of other sessions" }, { "msg_contents": "On Thu, Feb 13, 2020 at 09:05:01PM +0530, Ashutosh Bapat wrote:\n> That looks odd. Other sessions are able to see temporary tables of a given\n> session because they are stored in the same catalog which is accessible to\n> all the sessions. But ideally, a temporary table should be visible only to\n> the session which created it (GTT is an exception).
So LOCK and DROP table\n> should not succeed.\n\nOne thing that we need to consider is if there are applications which\ntake advantage of LOCK allowed on temp relations from other backends\nor not. One downside is that if one backend takes a lock on a temp\ntable from a different session, then this other session would not\ncompletely shut down (still report the shutdown to the client),\nand would remain blocked during the temp schema cleanup until the\ntransaction of the session locking the temp relation commits. This\nblocks access to one connection slot, still we are talking about an\noperation where the owner of the temp schema wants to do the lock.\n\n> Thinking from a different perspective, DROP TABLE being able to drop a\n> temporary table seems a good tool in case a temporary table is left behind\n> by a finished session. But that doesn't seem like a good reason to have it\n> and I don't see much use of LOCK TABLE there.\n\nYep. Robert had actually this argument with DROP SCHEMA pg_temp not\nso long ago with me.\n--\nMichael", "msg_date": "Fri, 14 Feb 2020 15:05:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE and DROP TABLE on temp tables of other sessions" }, { "msg_contents": "On Fri, Feb 14, 2020 at 11:35 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Thu, Feb 13, 2020 at 09:05:01PM +0530, Ashutosh Bapat wrote:\n> > That looks odd. Other sessions are able to see temporary tables of a\n> given\n> > session because they are stored in the same catalog which is accessible\n> to\n> > all the sessions. But ideally, a temporary table should be visible only\n> to\n> > the session which created it (GTT is an exception). So LOCK and DROP\n> table\n> > should not succeed.\n>\n> One thing that we need to consider is if there are applications which\n> take advantage of LOCK allowed on temp relations from other backends\n> or not.
One downside is that if one backend takes a lock on a temp\n> table from a different session, then this other session would not\n> completely shut down (still report the shutdown to the client),\n> and would remain blocked during the temp schema cleanup until the\n> transaction of the session locking the temp relation commits. This\n> blocks access to one connection slot, still we are talking about an\n> operation where the owner of the temp schema wants to do the lock.\n>\n\nThat might be disastrous if happens by accident eating up most of the\navailable connection slots.\n\nWhatever the user wants to achieve using LOCK [temp] TABLE of other\nsession, I guess can be achieved by other means or can be shown to have\ndisastrous effect. So that kind of usage pattern would better be forced to\nchange.\n\n\n>\n> > Thinking from a different perspective, DROP TABLE being able to drop a\n> > temporary table seems a good tool in case a temporary table is left\n> behind\n> > by a finished session. But that doesn't seem like a good reason to have\n> it\n> > and I don't see much use of LOCK TABLE there.\n>\n> Yep. Robert had actually this argument with DROP SCHEMA pg_temp not\n> so long ago with me.\n>\n>\nDROP SCHEMA might be better for mass cleanup. That actually makes DROP\n[other session temp] TABLE useless.\n\n\n-- \n--\nBest Wishes,\nAshutosh Bapat\n\n", "msg_date": "Fri, 14 Feb 2020 17:59:34 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE and DROP TABLE on temp tables of other sessions" }, { "msg_contents": "On Fri, Feb 14, 2020 at 05:59:34PM +0530, Ashutosh Bapat wrote:\n> On Fri, Feb 14, 2020 at 11:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> One thing that we need to consider is if there are applications which\n>> take advantage of LOCK allowed on temp relations from other backends\n>> or not.
One downside is that if one backend takes a lock on a temp\n>> table from a different session, then this other session would not\n>> completely shut down (still report the shutdown to the client),\n>> and would remain blocked during the temp schema cleanup until the\n>> transaction of the session locking the temp relation commits. This\n>> blocks access to one connection slot, still we are talking about an\n>> operation where the owner of the temp schema wants to do the lock.\n> \n> That might be disastrous if happens by accident eating up most of the\n> available connection slots.\n\nWell, that would be an owner doing that.\n\n> Whatever the user wants to achieve using LOCK [temp] TABLE of other\n> session, I guess can be achieved by other means or can be shown to have\n> disastrous effect. So that kind of usage pattern would better be forced to\n> change.\n\nAnyway, don't take me wrong. I would be rather in favor of\nrestricting LOCK but that does not seem like something enough for a\nbackpatch. One recent example in this area I had to deal with is\nREINDEX on temp tables. We have some assumptions which involve lock\nupgrades (ShareUpdateExclusiveLock => AccessExclusiveLock between the\nmoment we take the relation lock using its RangeVar until the moment\nthe reindex is actually done), so being able to take a conflicting\nlock on the temp relation could cause reindex to deadlock.\n--\nMichael", "msg_date": "Tue, 18 Feb 2020 13:28:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE and DROP TABLE on temp tables of other sessions" } ]
[ { "msg_contents": "GCC reports various instances of\n\n warning: cast to pointer from integer of different size \n[-Wint-to-pointer-cast]\n warning: cast from pointer to integer of different size \n[-Wpointer-to-int-cast]\n\nand MSVC equivalently\n\n warning C4312: 'type cast': conversion from 'int' to 'void *' of \ngreater size\n warning C4311: 'type cast': pointer truncation from 'void *' to 'long'\n\nin ECPG test files. This is because void* and long are cast back and\nforth, but on 64-bit Windows, these have different sizes. Fix by\nusing intptr_t instead.\n\nThe code actually worked fine because the integer values in use are\nall small. So this is just to get the test code to compile warning-free.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 13 Feb 2020 15:16:57 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Fix compiler warnings on 64-bit Windows" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> GCC reports various instances of\n> warning: cast to pointer from integer of different size \n> [-Wint-to-pointer-cast]\n> warning: cast from pointer to integer of different size \n> [-Wpointer-to-int-cast]\n> in ECPG test files. This is because void* and long are cast back and\n> forth, but on 64-bit Windows, these have different sizes. Fix by\n> using intptr_t instead.\n\nHm. Silencing the warnings is a laudable goal, but I'm very dubious\nof allowing these test files to depend on pg_config.h.
That doesn't\ncorrespond to real-world ECPG usage, so it seems likely that it could\ncome back to bite us some day.\n\nAccording to C99 and POSIX, intptr_t should be provided by <stdint.h> ...\nnow that we're requiring C99, can we get away with just #include'ing\nthat directly in these test files?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Feb 2020 10:19:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix compiler warnings on 64-bit Windows" }, { "msg_contents": "On 2020-02-13 16:19, Tom Lane wrote:\n> According to C99 and POSIX, intptr_t should be provided by <stdint.h> ...\n> now that we're requiring C99, can we get away with just #include'ing\n> that directly in these test files?\n\nI think in the past we were worried about the C library not being fully \nC99. But the build farm indicates that even the trailing edge OS X and \nHP-UX members have it, so I'm content to require it. Then we should \nprobably remove the Autoconf tests altogether.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 14 Feb 2020 10:15:27 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Fix compiler warnings on 64-bit Windows" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-02-13 16:19, Tom Lane wrote:\n>> According to C99 and POSIX, intptr_t should be provided by <stdint.h> ...\n>> now that we're requiring C99, can we get away with just #include'ing\n>> that directly in these test files?\n\n> I think in the past we were worried about the C library not being fully \n> C99. But the build farm indicates that even the trailing edge OS X and \n> HP-UX members have it, so I'm content to require it. 
Then we should \n> probably remove the Autoconf tests altogether.\n\nYeah, I think that the C99 requirement has obsoleted a number of configure\ntests and related hackery in c.h. We just haven't got round to cleaning\nthat up yet.\n\nBTW: I'm still concerned about the possibility of the C library being\nless than C99. The model that was popular back then, and which still\nexists on e.g. gaur, was that you could install a C99 *compiler* on\na pre-C99 system, and the compiler would bring its own standard header\nfiles as necessary. While I don't have the machine booted up to check,\nI'm pretty sure that gaur's <stdint.h> is being supplied by the gcc\ninstallation not directly from /usr/include. On the other hand, that\ncompiler installation is still dependent on the vendor-supplied libc.\n\nSo the sorts of tests I think we can get away with removing have to do\nwith the presence of C99-required headers, macros, typedefs, etc.\nAnything that is checking the presence or behavior of code in libc,\nwe probably need to be more careful about.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Feb 2020 09:52:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix compiler warnings on 64-bit Windows" }, { "msg_contents": "On 2020-02-14 15:52, Tom Lane wrote:\n> Yeah, I think that the C99 requirement has obsoleted a number of configure\n> tests and related hackery in c.h. We just haven't got round to cleaning\n> that up yet.\n> \n> BTW: I'm still concerned about the possibility of the C library being\n> less than C99. The model that was popular back then, and which still\n> exists on e.g. gaur, was that you could install a C99 *compiler* on\n> a pre-C99 system, and the compiler would bring its own standard header\n> files as necessary. While I don't have the machine booted up to check,\n> I'm pretty sure that gaur's <stdint.h> is being supplied by the gcc\n> installation not directly from /usr/include. 
On the other hand, that\n> compiler installation is still dependent on the vendor-supplied libc.\n\nYeah, stdint.h belongs to the compiler, whereas intttypes.h belongs to \nthe C library. So if we require a C99 compiler we can get rid of all \ntests and workarounds for stdint.h missing. Patch attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 17 Feb 2020 09:44:20 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Fix compiler warnings on 64-bit Windows" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-02-14 15:52, Tom Lane wrote:\n>> BTW: I'm still concerned about the possibility of the C library being\n>> less than C99. The model that was popular back then, and which still\n>> exists on e.g. gaur, was that you could install a C99 *compiler* on\n>> a pre-C99 system, and the compiler would bring its own standard header\n>> files as necessary. While I don't have the machine booted up to check,\n>> I'm pretty sure that gaur's <stdint.h> is being supplied by the gcc\n>> installation not directly from /usr/include. On the other hand, that\n>> compiler installation is still dependent on the vendor-supplied libc.\n\n> Yeah, stdint.h belongs to the compiler, whereas intttypes.h belongs to \n> the C library. So if we require a C99 compiler we can get rid of all \n> tests and workarounds for stdint.h missing. Patch attached.\n\nI tried this on gaur's host, and got:\n\n$ make -s\nIn file included from ../../src/include/postgres_fe.h:25,\n from base64.c:18:\n../../src/include/c.h:67:20: stdint.h: No such file or directory\nmake[2]: *** [base64.o] Error 1\nmake[1]: *** [all-common-recurse] Error 2\nmake: *** [all-src-recurse] Error 2\n\nOoops. 
Poking around, it looks like this version of gcc has brought its\nown stdbool.h, but not stdint.h:\n\n$ ls /usr/include/std*\n/usr/include/std_space.h /usr/include/stdio.h\n/usr/include/stdarg.h /usr/include/stdlib.h\n/usr/include/stddef.h\n$ find /opt/gcc-3.4.6 -name 'std*.h'\n/opt/gcc-3.4.6/lib/gcc/hppa2.0-hp-hpux10.20/3.4.6/include/stdarg.h\n/opt/gcc-3.4.6/lib/gcc/hppa2.0-hp-hpux10.20/3.4.6/include/stdbool.h\n/opt/gcc-3.4.6/lib/gcc/hppa2.0-hp-hpux10.20/3.4.6/include/stddef.h\n/opt/gcc-3.4.6/lib/gcc/hppa2.0-hp-hpux10.20/3.4.6/include/stdio.h\n/opt/gcc-3.4.6/lib/gcc/hppa2.0-hp-hpux10.20/3.4.6/include/stdlib.h\n/opt/gcc-3.4.6/lib/gcc/hppa2.0-hp-hpux10.20/3.4.6/install-tools/include/stdarg.h\n/opt/gcc-3.4.6/lib/gcc/hppa2.0-hp-hpux10.20/3.4.6/install-tools/include/stdbool.h\n/opt/gcc-3.4.6/lib/gcc/hppa2.0-hp-hpux10.20/3.4.6/install-tools/include/stddef.h\n\nKind of annoying. Perhaps more recent gcc versions fixed that?\nAnyway, this seems like a bit of a blocker for this idea, at least\nunless I update or retire this buildfarm critter.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 Feb 2020 09:31:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix compiler warnings on 64-bit Windows" }, { "msg_contents": "I wrote:\n> Ooops. Poking around, it looks like this version of gcc has brought its\n> own stdbool.h, but not stdint.h:\n> ...\n> Kind of annoying. Perhaps more recent gcc versions fixed that?\n\nHere we go, in the gcc 4.5.x release notes:\n\n GCC now ensures that a C99-conforming <stdint.h> is present on most\n targets, and uses information about the types in this header to\n implement the Fortran bindings to those types. 
GCC does not ensure the\n presence of such a header, and does not implement the Fortran\n bindings, on the following targets: NetBSD, VxWorks, VMS, SymbianOS,\n WinCE, LynxOS, Netware, QNX, Interix, TPF.\n\n4.5 seems annoyingly recent for this purpose (barely 10 years old).\nAlso, I'd previously tried and failed to use 4.2.4 and 4.0.4 on that\nplatform --- they didn't seem to be able to cope with the old header\nfiles. (Now I wonder if the lack of stdint.h had something to do\nwith it... although those versions did build, they just were buggy.)\n\nAnyway, I'll have a go at updating gaur to use 4.5.x. There is a\nsane-looking stdint.h on my second-oldest dinosaur, prairiedog.\nDon't know about the situation on Windows, though. We might want\nto take a close look at NetBSD, too, based on the GCC notes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 Feb 2020 10:52:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix compiler warnings on 64-bit Windows" }, { "msg_contents": "On Mon, Feb 17, 2020 at 4:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> Anyway, I'll have a go at updating gaur to use 4.5.x. There is a\n> sane-looking stdint.h on my second-oldest dinosaur, prairiedog.\n> Don't know about the situation on Windows, though. We might want\n> to take a close look at NetBSD, too, based on the GCC notes.\n>\n>\nAs for Windows, stdint.h was included in VS2010, and currently Postgres\nsupports VS2013 to 2019.\n\nRegards\n\nOn Mon, Feb 17, 2020 at 4:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nAnyway, I'll have a go at updating gaur to use 4.5.x.  There is a\nsane-looking stdint.h on my second-oldest dinosaur, prairiedog.\nDon't know about the situation on Windows, though.  
We might want\nto take a close look at NetBSD, too, based on the GCC notes.As for Windows, stdint.h was included in VS2010, and currently Postgres supports VS2013 to 2019.Regards", "msg_date": "Mon, 17 Feb 2020 18:24:30 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix compiler warnings on 64-bit Windows" }, { "msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> On Mon, Feb 17, 2020 at 4:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Anyway, I'll have a go at updating gaur to use 4.5.x. There is a\n>> sane-looking stdint.h on my second-oldest dinosaur, prairiedog.\n>> Don't know about the situation on Windows, though. We might want\n>> to take a close look at NetBSD, too, based on the GCC notes.\n\n> As for Windows, stdint.h was included in VS2010, and currently Postgres\n> supports VS2013 to 2019.\n\nI've now updated gaur to gcc 4.5.4 (took a little more hair-pulling\nthan I would have wished). I confirm that 0001-Require-stdint.h.patch\nworks in that environment, so I think you can go ahead and push it.\n\nI think there is room for more extensive trimming of no-longer-useful\nconfigure checks, but I'll start a separate thread about that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Feb 2020 11:24:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix compiler warnings on 64-bit Windows" }, { "msg_contents": "On 2020-02-20 17:24, Tom Lane wrote:\n> =?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n>> On Mon, Feb 17, 2020 at 4:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Anyway, I'll have a go at updating gaur to use 4.5.x. There is a\n>>> sane-looking stdint.h on my second-oldest dinosaur, prairiedog.\n>>> Don't know about the situation on Windows, though. 
We might want\n>>> to take a close look at NetBSD, too, based on the GCC notes.\n> \n>> As for Windows, stdint.h was included in VS2010, and currently Postgres\n>> supports VS2013 to 2019.\n> \n> I've now updated gaur to gcc 4.5.4 (took a little more hair-pulling\n> than I would have wished). I confirm that 0001-Require-stdint.h.patch\n> works in that environment, so I think you can go ahead and push it.\n\nDone, and also the appropriately reworked Windows warnings patch from \nthe beginning of the thread.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 21 Feb 2020 20:10:02 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Fix compiler warnings on 64-bit Windows" } ]
[ { "msg_contents": "pg_regress' --load-language option has been unused in the core code since\nwe introduced extensions in 9.1; all callers now use --load-extension\ninstead. It's a fairly safe bet that it's never been used by any non-core\ncode, since it could only work for languages listed in pg_pltemplate.\nThe last possible reason to use it expired with commit 50fc694e4, which\nremoved pg_pltemplate and made parameterless CREATE LANGUAGE equivalent\nto CREATE EXTENSION. So I think we might as well kill it, as per the\nattached trivial patch. Any objections?\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 13 Feb 2020 15:56:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Retiring pg_regress' --load-language option" } ]
[ { "msg_contents": "pg_dump/restore fail to restore the ownership of an extension correctly:\nin practice it'll always end up owned by whoever runs the restore\nscript. We've sort of averted our eyes from that up to now, because\nit's not a big deal in a world where most extensions have to be\nsuperuser-owned anyway. But I think it's no longer acceptable in a\nworld with trusted extensions. So I started looking into fixing that.\n\nMeanwhile ...\n\npg_dump and pg_restore have a --role switch, which causes them\nto attempt to SET ROLE to the specified user name at startup.\n\nThey also have a --use-set-session-authorization switch, which causes\nthem to use SET SESSION AUTHORIZATION before CREATE, rather than\nALTER OWNER after CREATE, to set the ownership of restored objects.\nObviously, those commands will be issued per-object.\n\nNow, for pg_dump there's no real conflict because --role determines\nwhat we send to the source database server, not what is put into the\ndump output. But AFAICS, these two switches do not work together\nin pg_restore. We'll send SET ROLE at the start of the restore but\nit'll be immediately and permanently overridden by the first\nSET SESSION AUTHORIZATION. Moreover, because SetSessionAuthorization\ninspects the original (authenticated) user ID to decide if the command\nis allowed, the SET ROLE doesn't help pass that permission check\neven the first time.\n\nGiven the current behavior of SET ROLE and SET SESSION AUTHORIZATION,\nI don't actually see any way that we could get these features to\nplay together. SET SESSION AUTHORIZATION insists on the originally\nauthenticated user being a superuser, so that the documented point of\n--role (to allow you to start the restore from a not-superuser role)\nisn't going to work. 
I thought about starting to use SET ROLE for\nboth purposes, but it checks whether you have role privilege based\non the session userid, so that a previous SET ROLE doesn't get you\npast that check even if it was a successful SET ROLE to a superuser.\n\nThe quick-and-dirty answer is to disallow these switches from being\nused together in pg_restore, and I'm inclined to think maybe we should\ndo that in the back branches.\n\nBut ... the reason I noticed this is that I don't see any way to\nrestore extension ownership correctly unless we use the SET SESSION\nAUTHORIZATION technique. We don't have ALTER EXTENSION OWNER, and I'm\nafraid that we never can have it now that we've institutionalized the\nexpectation that not all objects within an extension need have the\nsame owner --- that means ALTER EXTENSION OWNER could not know which\ncontained objects to change the owner of. So while it might be an\nacceptable restriction that --role prevents use of\n--use-set-session-authorization, it's surely not acceptable that\n--role is unable to restore extensions correctly.\n\nThe outline of a fix that I'm considering is\n\n(1) In the backend, allow SET ROLE to succeed if either the session\nuserid or the current userid is a member of the desired role. This\nwould mean that, given the use-case for --role that you are logging\ninto an account that can \"SET ROLE postgres\", it'd work to do\n\n\tSET ROLE postgres;\n\tSET ROLE anybody;\n\t... create an object to be owned by anybody\n\tSET ROLE postgres;\n\tSET ROLE somebodyelse;\n\t... create an object to be owned by somebodyelse\n\tSET ROLE postgres;\n\t... lather rinse repeat\n\n(2) Adjust pg_dump/pg_restore so that instead of SET SESSION\nAUTHORIZATION, they use SET ROLE pairs as shown above to control\nobject ownership, when not using ALTER OWNER. I'm not sure whether\nto rename the --use-set-session-authorization switch ... it'd be\nmisleadingly named now, but there's backwards compatibility issues\nif we change it.
Or maybe keep it and invent a separate\n--use-set-role switch, though that opens the door for lots of\nconfusion.\n\n(3) Adjust pg_dump/pg_restore so that extension ownership is\nalways restored using SET ROLE, whether you gave that switch or not.\n\nHaving said that ... I can't find the discussion right now, but\nI recall Peter or Stephen complaining recently about how SET ROLE\nand SET SESSION AUTHORIZATION allow more than the SQL spec says\nthey should. Do we want to make successful restores dependent\non an even-looser definition of SET ROLE? If not, how might we\nhandle this problem without assuming non-SQL semantics?\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Feb 2020 17:55:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Extension ownership and misuse of SET ROLE/SET SESSION AUTHORIZATION" }, { "msg_contents": "> On 13 Feb 2020, at 23:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\nIs this being worked on for the 13 cycle such that it should be an open item?\n\n> Given the current behavior of SET ROLE and SET SESSION AUTHORIZATION,\n> I don't actually see any way that we could get these features to\n> play together. SET SESSION AUTHORIZATION insists on the originally\n> authenticated user being a superuser, so that the documented point of\n> --role (to allow you to start the restore from a not-superuser role)\n> isn't going to work. 
I thought about starting to use SET ROLE for\n> both purposes, but it checks whether you have role privilege based\n> on the session userid, so that a previous SET ROLE doesn't get you\n> past that check even if it was a successful SET ROLE to a superuser.\n> \n> The quick-and-dirty answer is to disallow these switches from being\n> used together in pg_restore, and I'm inclined to think maybe we should\n> do that in the back branches.\n\n..or should we do this for v13 and back-branches and leave fixing it for 14?\nConsidering the potential invasiveness of the fix I think the latter sounds\nrather appealing at this point in the cycle. Something like the attached\nshould be enough IIUC.\n\ncheers ./daniel", "msg_date": "Tue, 19 May 2020 17:11:53 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Extension ownership and misuse of SET ROLE/SET SESSION\n AUTHORIZATION" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 13 Feb 2020, at 23:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Given the current behavior of SET ROLE and SET SESSION AUTHORIZATION,\n>> I don't actually see any way that we could get these features to\n>> play together.\n\n> Is this being worked on for the 13 cycle such that it should be an open item?\n\nI didn't have it on my list, but yeah maybe we should add it to the\n\"pre-existing issues\" list.\n\n>> The quick-and-dirty answer is to disallow these switches from being\n>> used together in pg_restore, and I'm inclined to think maybe we should\n>> do that in the back branches.\n\n> ..or should we do this for v13 and back-branches and leave fixing it for 14?\n> Considering the potential invasiveness of the fix I think the latter sounds\n> rather appealing at this point in the cycle. 
Something like the attached\n> should be enough IIUC.\n\npg_dump and pg_dumpall also have that switch no?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 May 2020 11:34:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extension ownership and misuse of SET ROLE/SET SESSION\n AUTHORIZATION" }, { "msg_contents": "> On 19 May 2020, at 17:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 13 Feb 2020, at 23:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Given the current behavior of SET ROLE and SET SESSION AUTHORIZATION,\n>>> I don't actually see any way that we could get these features to\n>>> play together.\n> \n>> Is this being worked on for the 13 cycle such that it should be an open item?\n> \n> I didn't have it on my list, but yeah maybe we should add it to the\n> \"pre-existing issues\" list.\n> \n>>> The quick-and-dirty answer is to disallow these switches from being\n>>> used together in pg_restore, and I'm inclined to think maybe we should\n>>> do that in the back branches.\n> \n>> ..or should we do this for v13 and back-branches and leave fixing it for 14?\n>> Considering the potential invasiveness of the fix I think the latter sounds\n>> rather appealing at this point in the cycle. Something like the attached\n>> should be enough IIUC.\n> \n> pg_dump and pg_dumpall also have that switch no?\n\nThey do, but there the switches actually work as intended and the combination\nshould be allowed AFAICT. 
Since SET ROLE is sent to the source server and not\nthe output we can use for starting the dump without being a superuser.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 19 May 2020 23:07:28 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Extension ownership and misuse of SET ROLE/SET SESSION\n AUTHORIZATION" }, { "msg_contents": "On Thu, Feb 13, 2020 at 5:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> (1) In the backend, allow SET ROLE to succeed if either the session\n> userid or the current userid is a member of the desired role. This\n> would mean that, given the use-case for --role that you are logging\n> into an account that can \"SET ROLE postgres\", it'd work to do\n>\n> SET ROLE postgres;\n> SET ROLE anybody;\n> ... create an object to be owned by anybody\n> SET ROLE postgres;\n> SET ROLE somebodyelse;\n> ... create an object to be owned by somebodyelse\n> SET ROLE postgres;\n> ... lather rinse repeat\n\nHonestly, the fact that this would work where a direct SET ROLE would\nfail seems horribly counterintuitive.\n\n> Having said that ... I can't find the discussion right now, but\n> I recall Peter or Stephen complaining recently about how SET ROLE\n> and SET SESSION AUTHORIZATION allow more than the SQL spec says\n> they should. Do we want to make successful restores dependent\n> on an even-looser definition of SET ROLE? 
If not, how might we\n> handle this problem without assuming non-SQL semantics?\n\nI don't know how to solve this problem, but I think loosening the\nrequirements for 'SET ROLE' is something where a lot of caution is\nwarranted.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 20 May 2020 13:54:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extension ownership and misuse of SET ROLE/SET SESSION\n AUTHORIZATION" }, { "msg_contents": "> On 19 May 2020, at 23:07, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 19 May 2020, at 17:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> Daniel Gustafsson <daniel@yesql.se> writes:\n>>>> On 13 Feb 2020, at 23:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> Given the current behavior of SET ROLE and SET SESSION AUTHORIZATION,\n>>>> I don't actually see any way that we could get these features to\n>>>> play together.\n>> \n>>> Is this being worked on for the 13 cycle such that it should be an open item?\n>> \n>> I didn't have it on my list, but yeah maybe we should add it to the\n>> \"pre-existing issues\" list.\n\nThe recent work on pg_dump reminded me about this thread, AFAICT this was never\naddressed? Are you including it in the current line of work (if so, sorry for\nmissing it in the threads) or should I take a stab at it?\n\n>>>> The quick-and-dirty answer is to disallow these switches from being\n>>>> used together in pg_restore, and I'm inclined to think maybe we should\n>>>> do that in the back branches.\n>> \n>>> ..or should we do this for v13 and back-branches and leave fixing it for 14?\n>>> Considering the potential invasiveness of the fix I think the latter sounds\n>>> rather appealing at this point in the cycle. 
Something like the attached\n>>> should be enough IIUC.\n>> \n>> pg_dump and pg_dumpall also have that switch no?\n> \n> They do, but there the switches actually work as intended and the combination\n> should be allowed AFAICT. Since SET ROLE is sent to the source server and not\n> the output we can use for starting the dump without being a superuser.\n\nThis patch still seems relevant for back-branches, but starting at 14 this time.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 29 Oct 2021 16:39:19 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Extension ownership and misuse of SET ROLE/SET SESSION\n AUTHORIZATION" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 13 Feb 2020, at 23:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Given the current behavior of SET ROLE and SET SESSION AUTHORIZATION,\n>>> I don't actually see any way that we could get these features to\n>>> play together.\n\n> The recent work on pg_dump reminded me about this thread, AFAICT this was never\n> addressed? Are you including it in the current line of work (if so, sorry for\n> missing it in the threads) or should I take a stab at it?\n\nNo, I'm not working on this --- I'd kind of forgotten about it.\nPeople didn't seem to like the idea of loosening the requirements\nfor SET ROLE, but I'm not sure how to solve the extension-ownership\nproblem without it.\n\n> This patch still seems relevant for back-branches, but starting at 14 this time.\n\nI think the appropriate thing to do is stick your patch into all branches\nfor the moment. 
We can remove it again whenever we invent a fix for the\nproblem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Oct 2021 12:04:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extension ownership and misuse of SET ROLE/SET SESSION\n AUTHORIZATION" }, { "msg_contents": "> On 29 Oct 2021, at 18:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n\n>> This patch still seems relevant for back-branches, but starting at 14 this time.\n> \n> I think the appropriate thing to do is stick your patch into all branches\n> for the moment. We can remove it again whenever we invent a fix for the\n> problem.\n\nFair enough, I'll make that happen.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 29 Oct 2021 20:00:58 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Extension ownership and misuse of SET ROLE/SET SESSION\n AUTHORIZATION" }, { "msg_contents": "> On 29 Oct 2021, at 20:00, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 29 Oct 2021, at 18:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Daniel Gustafsson <daniel@yesql.se> writes:\n> \n>>> This patch still seems relevant for back-branches, but starting at 14 this time.\n>> \n>> I think the appropriate thing to do is stick your patch into all branches\n>> for the moment. We can remove it again whenever we invent a fix for the\n>> problem.\n> \n> Fair enough, I'll make that happen.\n\nI added a small note to the doc page and a simple test, unless objections I'll\napply the attached v2 all the way down.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 2 Nov 2021 11:57:18 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Extension ownership and misuse of SET ROLE/SET SESSION\n AUTHORIZATION" } ]
[ { "msg_contents": "Hi all,\n\ncreatedb has a couple of issues with its quoting. For example take\nthat, which can be confusing:\n$ createdb --lc-ctype=\"en_US.UTF-8';create table aa();select '1\" popo\ncreatedb: error: database creation failed: ERROR: CREATE DATABASE\ncannot run inside a transaction block\n\nThe root of the issue is that any values added by the command caller\nwith --lc-collate, --lc-ctype or --encoding are not quoted properly,\nand in all three cases it means that the quoting needs to be\nencoding-sensitive (Tom mentioned me directly that part). This proper\nquoting can be achieved using appendStringLiteralConn() from\nstring_utils.c, at the condition of taking the connection to the\nserver before building the CREATE DATABASE query.\n\nNote that for --encoding, this is less of a problem as there is some\nextra validation with pg_char_to_encoding(), but it seems better to me\nto be consistent.\n\nSo this gives the patch attached, where the error becomes:\nERROR: invalid locale name: \"en_US.UTF-8';create table aa();select '1\"\n\nAny opinions?\n--\nMichael", "msg_date": "Fri, 14 Feb 2020 13:10:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Quoting issues with createdb" }, { "msg_contents": "> On 14 Feb 2020, at 05:10, Michael Paquier <michael@paquier.xyz> wrote:\n\n> createdb has a couple of issues with its quoting. For example take\n> that, which can be confusing:\n> $ createdb --lc-ctype=\"en_US.UTF-8';create table aa();select '1\" popo\n> createdb: error: database creation failed: ERROR: CREATE DATABASE\n> cannot run inside a transaction block\n\nNice catch!\n\n> The root of the issue is that any values added by the command caller\n> with --lc-collate, --lc-ctype or --encoding are not quoted properly,\n> and in all three cases it means that the quoting needs to be\n> encoding-sensitive (Tom mentioned me directly that part). This proper\n> quoting can be achieved using appendStringLiteralConn() from\n> string_utils.c, at the condition of taking the connection to the\n> server before building the CREATE DATABASE query.\n\nMakes sense, it aligns it with other utils and passes all the tests. +1 on the\nfix.\n\n> Any opinions?\n\nI would've liked a negative test basically along the lines of your example\nabove. If we left a hole the size of this, it would be nice to catch it from\naccidentally happening again.\n\ndiff --git a/src/bin/scripts/t/020_createdb.pl b/src/bin/scripts/t/020_createdb.pl\nindex c0f6067a92..afd128deba 100644\n--- a/src/bin/scripts/t/020_createdb.pl\n+++ b/src/bin/scripts/t/020_createdb.pl\n@@ -3,7 +3,7 @@ use warnings;\n\n use PostgresNode;\n use TestLib;\n-use Test::More tests => 13;\n+use Test::More tests => 14;\n\n program_help_ok('createdb');\n program_version_ok('createdb');\n@@ -24,3 +24,6 @@ $node->issues_sql_like(\n\n $node->command_fails([ 'createdb', 'foobar1' ],\n 'fails if database already exists');\n+\n+$node->command_fails(['createdb', '-l', 'C\\';SELECT 1;' ],\n+ 'fails on incorrect locale');\n\ncheers ./daniel\n", "msg_date": "Thu, 27 Feb 2020 00:00:11 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Quoting issues with createdb" }, { "msg_contents": "On Thu, Feb 27, 2020 at 12:00:11AM +0100, Daniel Gustafsson wrote:\n> Makes sense, it aligns it with other utils and passes all the tests. +1 on the\n> fix.\n\nThanks for the review.\n\n> I would've liked a negative test basically along the lines of your example\n> above. If we left a hole the size of this, it would be nice to catch it from\n> accidentally happening again.\n\nNo arguments against that.\n\n> program_help_ok('createdb');\n> program_version_ok('createdb');\n> @@ -24,3 +24,6 @@ $node->issues_sql_like(\n> \n> $node->command_fails([ 'createdb', 'foobar1' ],\n> 'fails if database already exists');\n> +\n> +$node->command_fails(['createdb', '-l', 'C\\';SELECT 1;' ],\n> + 'fails on incorrect locale');\n\nOne problem with this way of testing things is that you don't check\nthe exact error message triggered, and this command fails with or\nwithout the patch, so you don't actually know if things are correctly\npatched up or not. What should be used instead is command_checks_all,\navailable down to 11 where we check for a failure and a match with the\nerror string generated. I have used that, and applied the patch down\nto 9.5.\n--\nMichael", "msg_date": "Thu, 27 Feb 2020 11:24:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Quoting issues with createdb" } ]
[ { "msg_contents": "This work is to parallelize the copy command and in particular \"Copy\n<table_name> from 'filename' Where <condition>;\" command.\n\nBefore going into how and what portion of 'copy command' processing we\ncan parallelize, let us see in brief what are the top-level operations\nwe perform while copying from the file into a table. We read the file\nin 64KB chunks, then find the line endings and process that data line\nby line, where each line corresponds to one tuple. We first form the\ntuple (in form of value/null array) from that line, check if it\nqualifies the where condition and if it qualifies, then perform\nconstraint check and few other checks and then finally store it in\nlocal tuple array. Once we reach 1000 tuples or consumed 64KB\n(whichever occurred first), we insert them together via\ntable_multi_insert API and then for each tuple insert into the\nindex(es) and execute after row triggers.\n\nSo if we see here we do a lot of work after reading each 64K chunk.\nWe can read the next chunk only after all the tuples are processed in\nthe previous chunk we read. This brings us an opportunity to\nparallelize each 64K chunk processing. I think we can do this in more\nthan one way.\n\nThe first idea is that we allocate each chunk to a worker and once the\nworker has finished processing the current chunk, it can start with\nthe next unprocessed chunk. Here, we need to see how to handle the\npartial tuples at the end or beginning of each chunk. We can read the\nchunks in dsa/dsm instead of in local buffer for processing.\nAlternatively, if we think that accessing shared memory can be costly\nwe can read the entire chunk in local memory, but copy the partial\ntuple at the beginning of a chunk (if any) to a dsa. We mainly need\npartial tuple in the shared memory area. The worker which has found\nthe initial part of the partial tuple will be responsible to process\nthe entire tuple. 
Now, to detect whether there is a partial tuple at\nthe beginning of a chunk, we always start reading one byte, prior to\nthe start of the current chunk and if that byte is not a terminating\nline byte, we know that it is a partial tuple. Now, while processing\nthe chunk, we will ignore this first line and start after the first\nterminating line.\n\nTo connect the partial tuple in two consecutive chunks, we need to\nhave another data structure (for the ease of reference in this email,\nI call it CTM (chunk-tuple-map)) in shared memory where we store some\nper-chunk information like the chunk-number, dsa location of that\nchunk and a variable which indicates whether we can free/reuse the\ncurrent entry. Whenever we encounter the partial tuple at the\nbeginning of a chunk we note down its chunk number, and dsa location\nin CTM. Next, whenever we encounter any partial tuple at the end of\nthe chunk, we search CTM for next chunk-number and read from\ncorresponding dsa location till we encounter terminating line byte.\nOnce we have read and processed this partial tuple, we can mark the\nentry as available for reuse. There are some loose ends here like how\nmany entries shall we allocate in this data structure. It depends on\nwhether we want to allow the worker to start reading the next chunk\nbefore the partial tuple of the previous chunk is processed. To keep\nit simple, we can allow the worker to process the next chunk only when\nthe partial tuple in the previous chunk is processed. This will allow\nus to keep the entries equal to a number of workers in CTM. 
I think\nwe can easily improve this if we want but I don't think it will matter\ntoo much as in most cases by the time we processed the tuples in that\nchunk, the partial tuple would have been consumed by the other worker.\n\nAnother approach that came up during an offlist discussion with Robert\nis that we have one dedicated worker for reading the chunks from file\nand it copies the complete tuples of one chunk in the shared memory\nand once that is done, a handover that chunks to another worker which\ncan process tuples in that area. We can imagine that the reader\nworker is responsible to form some sort of work queue that can be\nprocessed by the other workers. In this idea, we won't be able to get\nthe benefit of initial tokenization (forming tuple boundaries) via\nparallel workers and might need some additional memory processing as\nafter reader worker has handed the initial shared memory segment, we\nneed to somehow identify tuple boundaries and then process them.\n\nAnother thing we need to figure out is the how many workers to use for\nthe copy command. 
I think we can use it based on the file size which\nneeds some experiments or may be based on user input.\n\nI think we have two related problems to solve for this (a) relation\nextension lock (required for extending the relation) which won't\nconflict among workers due to group locking, we are working on a\nsolution for this in another thread [1], (b) Use of Page locks in Gin\nindexes, we can probably disallow parallelism if the table has Gin\nindex which is not a great thing but not bad either.\n\nTo be clear, this work is for PG14.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAD21AoCmT3cFQUN4aVvzy5chw7DuzXrJCbrjTU05B%2BSs%3DGn1LA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 Feb 2020 13:41:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Parallel copy" }, { "msg_contents": "On Fri, Feb 14, 2020 at 9:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> This work is to parallelize the copy command and in particular \"Copy\n> <table_name> from 'filename' Where <condition>;\" command.\n\nNice project, and a great stepping stone towards parallel DML.\n\n> The first idea is that we allocate each chunk to a worker and once the\n> worker has finished processing the current chunk, it can start with\n> the next unprocessed chunk. Here, we need to see how to handle the\n> partial tuples at the end or beginning of each chunk. We can read the\n> chunks in dsa/dsm instead of in local buffer for processing.\n> Alternatively, if we think that accessing shared memory can be costly\n> we can read the entire chunk in local memory, but copy the partial\n> tuple at the beginning of a chunk (if any) to a dsa. We mainly need\n> partial tuple in the shared memory area. The worker which has found\n> the initial part of the partial tuple will be responsible to process\n> the entire tuple. 
Now, to detect whether there is a partial tuple at\n> the beginning of a chunk, we always start reading one byte, prior to\n> the start of the current chunk and if that byte is not a terminating\n> line byte, we know that it is a partial tuple. Now, while processing\n> the chunk, we will ignore this first line and start after the first\n> terminating line.\n\nThat's quiet similar to the approach I took with a parallel file_fdw\npatch[1], which mostly consisted of parallelising the reading part of\ncopy.c, except that...\n\n> To connect the partial tuple in two consecutive chunks, we need to\n> have another data structure (for the ease of reference in this email,\n> I call it CTM (chunk-tuple-map)) in shared memory where we store some\n> per-chunk information like the chunk-number, dsa location of that\n> chunk and a variable which indicates whether we can free/reuse the\n> current entry. Whenever we encounter the partial tuple at the\n> beginning of a chunk we note down its chunk number, and dsa location\n> in CTM. Next, whenever we encounter any partial tuple at the end of\n> the chunk, we search CTM for next chunk-number and read from\n> corresponding dsa location till we encounter terminating line byte.\n> Once we have read and processed this partial tuple, we can mark the\n> entry as available for reuse. There are some loose ends here like how\n> many entries shall we allocate in this data structure. It depends on\n> whether we want to allow the worker to start reading the next chunk\n> before the partial tuple of the previous chunk is processed. To keep\n> it simple, we can allow the worker to process the next chunk only when\n> the partial tuple in the previous chunk is processed. This will allow\n> us to keep the entries equal to a number of workers in CTM. 
I think\n> we can easily improve this if we want but I don't think it will matter\n> too much as in most cases by the time we processed the tuples in that\n> chunk, the partial tuple would have been consumed by the other worker.\n\n... I didn't use a shm 'partial tuple' exchanging mechanism, I just\nhad each worker follow the final tuple in its chunk into the next\nchunk, and have each worker ignore the first tuple in chunk after\nchunk 0 because it knows someone else is looking after that. That\nmeans that there was some double reading going on near the boundaries,\nand considering how much I've been complaining about bogus extra\nsystem calls on this mailing list lately, yeah, your idea of doing a\nbit more coordination is a better idea. If you go this way, you might\nat least find the copy.c part of the patch I wrote useful as stand-in\nscaffolding code in the meantime while you prototype the parallel\nwriting side, if you don't already have something better for this?\n\n> Another approach that came up during an offlist discussion with Robert\n> is that we have one dedicated worker for reading the chunks from file\n> and it copies the complete tuples of one chunk in the shared memory\n> and once that is done, a handover that chunks to another worker which\n> can process tuples in that area. We can imagine that the reader\n> worker is responsible to form some sort of work queue that can be\n> processed by the other workers. In this idea, we won't be able to get\n> the benefit of initial tokenization (forming tuple boundaries) via\n> parallel workers and might need some additional memory processing as\n> after reader worker has handed the initial shared memory segment, we\n> need to somehow identify tuple boundaries and then process them.\n\nYeah, I have also wondered about something like this in a slightly\ndifferent context. 
For parallel query in general, I wondered if there\nshould be a Parallel Scatter node, that can be put on top of any\nparallel-safe plan, and it runs it in a worker process that just\npushes tuples into a single-producer multi-consumer shm queue, and\nthen other workers read from that whenever they need a tuple. Hmm,\nbut for COPY, I suppose you'd want to push the raw lines with minimal\nexamination, not tuples, into a shm queue, so I guess that's a bit\ndifferent.\n\n> Another thing we need to figure out is the how many workers to use for\n> the copy command. I think we can use it based on the file size which\n> needs some experiments or may be based on user input.\n\nIt seems like we don't even really have a general model for that sort\nof thing in the rest of the system yet, and I guess some kind of\nfairly dumb explicit system would make sense in the early days...\n\n> Thoughts?\n\nThis is cool.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGKZu8fpZo0W%3DPOmQEN46kXhLedzqqAnt5iJZy7tD0x6sw%40mail.gmail.com\n\n\n", "msg_date": "Fri, 14 Feb 2020 23:05:48 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Feb 14, 2020 at 3:36 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Feb 14, 2020 at 9:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > This work is to parallelize the copy command and in particular \"Copy\n> > <table_name> from 'filename' Where <condition>;\" command.\n>\n> Nice project, and a great stepping stone towards parallel DML.\n>\n\nThanks.\n\n> > The first idea is that we allocate each chunk to a worker and once the\n> > worker has finished processing the current chunk, it can start with\n> > the next unprocessed chunk. Here, we need to see how to handle the\n> > partial tuples at the end or beginning of each chunk. 
We can read the\n> > chunks in dsa/dsm instead of in local buffer for processing.\n> > Alternatively, if we think that accessing shared memory can be costly\n> > we can read the entire chunk in local memory, but copy the partial\n> > tuple at the beginning of a chunk (if any) to a dsa. We mainly need\n> > partial tuple in the shared memory area. The worker which has found\n> > the initial part of the partial tuple will be responsible to process\n> > the entire tuple. Now, to detect whether there is a partial tuple at\n> > the beginning of a chunk, we always start reading one byte, prior to\n> > the start of the current chunk and if that byte is not a terminating\n> > line byte, we know that it is a partial tuple. Now, while processing\n> > the chunk, we will ignore this first line and start after the first\n> > terminating line.\n>\n> That's quiet similar to the approach I took with a parallel file_fdw\n> patch[1], which mostly consisted of parallelising the reading part of\n> copy.c, except that...\n>\n> > To connect the partial tuple in two consecutive chunks, we need to\n> > have another data structure (for the ease of reference in this email,\n> > I call it CTM (chunk-tuple-map)) in shared memory where we store some\n> > per-chunk information like the chunk-number, dsa location of that\n> > chunk and a variable which indicates whether we can free/reuse the\n> > current entry. Whenever we encounter the partial tuple at the\n> > beginning of a chunk we note down its chunk number, and dsa location\n> > in CTM. Next, whenever we encounter any partial tuple at the end of\n> > the chunk, we search CTM for next chunk-number and read from\n> > corresponding dsa location till we encounter terminating line byte.\n> > Once we have read and processed this partial tuple, we can mark the\n> > entry as available for reuse. There are some loose ends here like how\n> > many entries shall we allocate in this data structure. 
It depends on\n> > whether we want to allow the worker to start reading the next chunk\n> > before the partial tuple of the previous chunk is processed. To keep\n> > it simple, we can allow the worker to process the next chunk only when\n> > the partial tuple in the previous chunk is processed. This will allow\n> > us to keep the entries equal to a number of workers in CTM. I think\n> > we can easily improve this if we want but I don't think it will matter\n> > too much as in most cases by the time we processed the tuples in that\n> > chunk, the partial tuple would have been consumed by the other worker.\n>\n> ... I didn't use a shm 'partial tuple' exchanging mechanism, I just\n> had each worker follow the final tuple in its chunk into the next\n> chunk, and have each worker ignore the first tuple in chunk after\n> chunk 0 because it knows someone else is looking after that. That\n> means that there was some double reading going on near the boundaries,\n>\n\nRight and especially if the part in the second chunk is bigger, then\nwe might need to read most of the second chunk.\n\n> and considering how much I've been complaining about bogus extra\n> system calls on this mailing list lately, yeah, your idea of doing a\n> bit more coordination is a better idea. If you go this way, you might\n> at least find the copy.c part of the patch I wrote useful as stand-in\n> scaffolding code in the meantime while you prototype the parallel\n> writing side, if you don't already have something better for this?\n>\n\nNo, I haven't started writing anything yet, but I have some ideas on\nhow to achieve this. 
I quickly skimmed through your patch and I think\nthat can be used as a starting point though if we decide to go with\naccumulating the partial tuple or all the data in shm, then the things\nmight differ.\n\n> > Another approach that came up during an offlist discussion with Robert\n> > is that we have one dedicated worker for reading the chunks from file\n> > and it copies the complete tuples of one chunk in the shared memory\n> > and once that is done, a handover that chunks to another worker which\n> > can process tuples in that area. We can imagine that the reader\n> > worker is responsible to form some sort of work queue that can be\n> > processed by the other workers. In this idea, we won't be able to get\n> > the benefit of initial tokenization (forming tuple boundaries) via\n> > parallel workers and might need some additional memory processing as\n> > after reader worker has handed the initial shared memory segment, we\n> > need to somehow identify tuple boundaries and then process them.\n>\n> Yeah, I have also wondered about something like this in a slightly\n> different context. For parallel query in general, I wondered if there\n> should be a Parallel Scatter node, that can be put on top of any\n> parallel-safe plan, and it runs it in a worker process that just\n> pushes tuples into a single-producer multi-consumer shm queue, and\n> then other workers read from that whenever they need a tuple.\n>\n\nThe idea sounds great but the past experience shows that shoving all\nthe tuples through queue might add a significant overhead. However, I\ndon't know how exactly you are planning to use it?\n\n> Hmm,\n> but for COPY, I suppose you'd want to push the raw lines with minimal\n> examination, not tuples, into a shm queue, so I guess that's a bit\n> different.\n>\n\nYeah.\n\n> > Another thing we need to figure out is the how many workers to use for\n> > the copy command. 
I think we can use it based on the file size which\n> > needs some experiments or may be based on user input.\n>\n> It seems like we don't even really have a general model for that sort\n> of thing in the rest of the system yet, and I guess some kind of\n> fairly dumb explicit system would make sense in the early days...\n>\n\nmakes sense.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 14 Feb 2020 17:26:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, 14 Feb 2020 at 11:57, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Feb 14, 2020 at 3:36 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> >\n> > On Fri, Feb 14, 2020 at 9:12 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n\n ...\n\n> > > Another approach that came up during an offlist discussion with Robert\n> > > is that we have one dedicated worker for reading the chunks from file\n> > > and it copies the complete tuples of one chunk in the shared memory\n> > > and once that is done, a handover that chunks to another worker which\n> > > can process tuples in that area. We can imagine that the reader\n> > > worker is responsible to form some sort of work queue that can be\n> > > processed by the other workers. In this idea, we won't be able to get\n> > > the benefit of initial tokenization (forming tuple boundaries) via\n> > > parallel workers and might need some additional memory processing as\n> > > after reader worker has handed the initial shared memory segment, we\n> > > need to somehow identify tuple boundaries and then process them.\n>\n\nParsing rows from the raw input (the work done by CopyReadLine()) in a\nsingle process would accommodate line returns in quoted fields. I don't\nthink there's a way of getting parallel workers to manage the\nin-quote/out-of-quote state required. 
A single worker could also process a\nstream without having to reread/rewind so it would be able to process input\nfrom STDIN or PROGRAM sources, making the improvements applicable to load\noperations done by third party tools and scripted \\copy in psql.\n\n\n> >\n\n...\n\n>\n> > > Another thing we need to figure out is the how many workers to use for\n> > > the copy command. I think we can use it based on the file size which\n> > > needs some experiments or may be based on user input.\n> >\n> > It seems like we don't even really have a general model for that sort\n> > of thing in the rest of the system yet, and I guess some kind of\n> > fairly dumb explicit system would make sense in the early days...\n> >\n>\n> makes sense.\n>\nThe ratio between chunking or line parsing\nprocesses and the parallel worker pool would vary with the width of the table, complexity of the data\nor file (dates, encoding conversions), complexity of constraints and\nacceptable impact of the load. Being able to control it through user input\nwould be great.\n\n--\nAlastair\n\n", "msg_date": "Fri, 14 Feb 2020 13:45:55 +0000", "msg_from": "Alastair Turner <minion@decodable.me>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Feb 14, 2020 at 7:16 PM Alastair Turner <minion@decodable.me> wrote:\n>\n> On Fri, 14 Feb 2020 at 11:57, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Fri, Feb 14, 2020 at 3:36 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> >\n>> > On Fri, Feb 14, 2020 at 9:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> ...\n>>\n>> > > Another approach that came up during an offlist discussion with Robert\n>> > > is that we have one dedicated worker for reading the chunks from file\n>> > > and it copies the complete tuples of one chunk in the shared memory\n>> > > and once that is done, a handover that chunks to another worker which\n>> > > can process tuples in that area. We can imagine that the reader\n>> > > worker is responsible to form some sort of work queue that can be\n>> > > processed by the other workers. In this idea, we won't be able to get\n>> > > the benefit of initial tokenization (forming tuple boundaries) via\n>> > > parallel workers and might need some additional memory processing as\n>> > > after reader worker has handed the initial shared memory segment, we\n>> > > need to somehow identify tuple boundaries and then process them.\n>\n>\n> Parsing rows from the raw input (the work done by CopyReadLine()) in a single process would accommodate line returns in quoted fields. 
I don't think there's a way of getting parallel workers to manage the in-quote/out-of-quote state required.\n>\n\nAFAIU, the whole of this in-quote/out-of-quote state is managed inside\nCopyReadLineText which will be done by each of the parallel workers,\nsomething on the lines of what Thomas did in his patch [1].\nBasically, we need to invent a mechanism to allocate chunks to\nindividual workers and then the whole processing will be done as we\nare doing now except for special handling for partial tuples which I\nhave explained in my previous email. Am I missing something here?\n\n>>\n>> >\n>\n> ...\n>>\n>>\n>> > > Another thing we need to figure out is the how many workers to use for\n>> > > the copy command. I think we can use it based on the file size which\n>> > > needs some experiments or may be based on user input.\n>> >\n>> > It seems like we don't even really have a general model for that sort\n>> > of thing in the rest of the system yet, and I guess some kind of\n>> > fairly dumb explicit system would make sense in the early days...\n>> >\n>>\n>> makes sense.\n>\n> The ratio between chunking or line parsing processes and the parallel worker pool would vary with the width of the table, complexity of the data or file (dates, encoding conversions), complexity of constraints and acceptable impact of the load. Being able to control it through user input would be great.\n>\n\nOkay, I think one simple way could be that we compute the number of\nworkers based on filesize (some experiments are required to determine\nthis) unless the user has given the input. 
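(Purely as an illustration of this kind of heuristic -- the 64 MB threshold, the cap behaviour, and the function name below are all invented, not from any patch:)

```python
# Illustrative only: derive a worker count from the input file size,
# unless the user supplied one, capping both at max_parallel_workers.
def choose_copy_workers(file_size_bytes, user_requested=None,
                        max_parallel_workers=8):
    if user_requested is not None:
        # Respect user input, with max_parallel_workers as the upper limit.
        return max(1, min(user_requested, max_parallel_workers))
    # Assumed rule of thumb: roughly one worker per 64 MB of input.
    by_size = file_size_bytes // (64 * 1024 * 1024) + 1
    return max(1, min(by_size, max_parallel_workers))

print(choose_copy_workers(10 * 1024 * 1024))      # small file -> 1
print(choose_copy_workers(1024 * 1024 * 1024))    # 1 GB -> capped at 8
print(choose_copy_workers(0, user_requested=32))  # user value, capped at 8
```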
If the user has provided\nthe input then we can use that with an upper limit to\nmax_parallel_workers.\n\n\n[1] - https://www.postgresql.org/message-id/CA%2BhUKGKZu8fpZo0W%3DPOmQEN46kXhLedzqqAnt5iJZy7tD0x6sw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 15 Feb 2020 10:25:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Sat, 15 Feb 2020 at 04:55, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Feb 14, 2020 at 7:16 PM Alastair Turner <minion@decodable.me> wrote:\n> >\n...\n> >\n> > Parsing rows from the raw input (the work done by CopyReadLine()) in a single process would accommodate line returns in quoted fields. I don't think there's a way of getting parallel workers to manage the in-quote/out-of-quote state required.\n> >\n>\n> AFAIU, the whole of this in-quote/out-of-quote state is manged inside\n> CopyReadLineText which will be done by each of the parallel workers,\n> something on the lines of what Thomas did in his patch [1].\n> Basically, we need to invent a mechanism to allocate chunks to\n> individual workers and then the whole processing will be done as we\n> are doing now except for special handling for partial tuples which I\n> have explained in my previous email. Am, I missing something here?\n>\nThe problem case that I see is the chunk boundary falling in the\nmiddle of a quoted field where\n - The quote opens in chunk 1\n - The quote closes in chunk 2\n - There is an EoL character between the start of chunk 2 and the closing quote\n\nWhen the worker processing chunk 2 starts, it believes itself to be in\nout-of-quote state, so only data between the start of the chunk and\nthe EoL is regarded as belonging to the partial line. 
From that point\non the parsing of the rest of the chunk goes off track.\n\nSome of the resulting errors can be avoided by, for instance,\nrequiring a quote to be preceded by a delimiter or EoL. That answer\nfails when fields end with EoL characters, which happens often enough\nin the wild.\n\nRecovering from an incorrect in-quote/out-of-quote state assumption at\nthe start of parsing a chunk just seems like a hole with no bottom. So\nit looks to me like it's best done in a single process which can keep\ntrack of that state reliably.\n\n--\nAastair\n\n\n", "msg_date": "Sat, 15 Feb 2020 10:38:07 +0000", "msg_from": "Alastair Turner <minion@decodable.me>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Sat, Feb 15, 2020 at 4:08 PM Alastair Turner <minion@decodable.me> wrote:\n>\n> On Sat, 15 Feb 2020 at 04:55, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Feb 14, 2020 at 7:16 PM Alastair Turner <minion@decodable.me> wrote:\n> > >\n> ...\n> > >\n> > > Parsing rows from the raw input (the work done by CopyReadLine()) in a single process would accommodate line returns in quoted fields. I don't think there's a way of getting parallel workers to manage the in-quote/out-of-quote state required.\n> > >\n> >\n> > AFAIU, the whole of this in-quote/out-of-quote state is manged inside\n> > CopyReadLineText which will be done by each of the parallel workers,\n> > something on the lines of what Thomas did in his patch [1].\n> > Basically, we need to invent a mechanism to allocate chunks to\n> > individual workers and then the whole processing will be done as we\n> > are doing now except for special handling for partial tuples which I\n> > have explained in my previous email. 
Am, I missing something here?\n> >\n> The problem case that I see is the chunk boundary falling in the\n> middle of a quoted field where\n> - The quote opens in chunk 1\n> - The quote closes in chunk 2\n> - There is an EoL character between the start of chunk 2 and the closing quote\n>\n> When the worker processing chunk 2 starts, it believes itself to be in\n> out-of-quote state, so only data between the start of the chunk and\n> the EoL is regarded as belonging to the partial line. From that point\n> on the parsing of the rest of the chunk goes off track.\n>\n> Some of the resulting errors can be avoided by, for instance,\n> requiring a quote to be preceded by a delimiter or EoL. That answer\n> fails when fields end with EoL characters, which happens often enough\n> in the wild.\n>\n> Recovering from an incorrect in-quote/out-of-quote state assumption at\n> the start of parsing a chunk just seems like a hole with no bottom. So\n> it looks to me like it's best done in a single process which can keep\n> track of that state reliably.\n>\n\nGood point and I agree with you that having a single process would\navoid any such stuff. 
However, I will think some more on it and if\nyou/anyone else gets some idea on how to deal with this in a\nmulti-worker system (where we can allow each worker to read and\nprocess the chunk) then feel free to share your thoughts.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 15 Feb 2020 18:02:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Sat, Feb 15, 2020 at 06:02:06PM +0530, Amit Kapila wrote:\n> On Sat, Feb 15, 2020 at 4:08 PM Alastair Turner <minion@decodable.me> wrote:\n> >\n> > On Sat, 15 Feb 2020 at 04:55, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Feb 14, 2020 at 7:16 PM Alastair Turner <minion@decodable.me> wrote:\n> > > >\n> > ...\n> > > >\n> > > > Parsing rows from the raw input (the work done by CopyReadLine()) in a single process would accommodate line returns in quoted fields. I don't think there's a way of getting parallel workers to manage the in-quote/out-of-quote state required.\n> > > >\n> > >\n> > > AFAIU, the whole of this in-quote/out-of-quote state is manged inside\n> > > CopyReadLineText which will be done by each of the parallel workers,\n> > > something on the lines of what Thomas did in his patch [1].\n> > > Basically, we need to invent a mechanism to allocate chunks to\n> > > individual workers and then the whole processing will be done as we\n> > > are doing now except for special handling for partial tuples which I\n> > > have explained in my previous email. 
Am, I missing something here?\n> > >\n> > The problem case that I see is the chunk boundary falling in the\n> > middle of a quoted field where\n> > - The quote opens in chunk 1\n> > - The quote closes in chunk 2\n> > - There is an EoL character between the start of chunk 2 and the closing quote\n> >\n> > When the worker processing chunk 2 starts, it believes itself to be in\n> > out-of-quote state, so only data between the start of the chunk and\n> > the EoL is regarded as belonging to the partial line. From that point\n> > on the parsing of the rest of the chunk goes off track.\n> >\n> > Some of the resulting errors can be avoided by, for instance,\n> > requiring a quote to be preceded by a delimiter or EoL. That answer\n> > fails when fields end with EoL characters, which happens often enough\n> > in the wild.\n> >\n> > Recovering from an incorrect in-quote/out-of-quote state assumption at\n> > the start of parsing a chunk just seems like a hole with no bottom. So\n> > it looks to me like it's best done in a single process which can keep\n> > track of that state reliably.\n> >\n> \n> Good point and I agree with you that having a single process would\n> avoid any such stuff. However, I will think some more on it and if\n> you/anyone else gets some idea on how to deal with this in a\n> multi-worker system (where we can allow each worker to read and\n> process the chunk) then feel free to share your thoughts.\n\nI see two pieces of this puzzle: an input format we control, and the\nones we don't.\n\nIn the former case, we could encode all fields with base85 (or\nsomething similar that reduces the input alphabet efficiently), then\nreserve bytes that denote delimiters of various types. 
ASCII has\nseparators for file, group, record, and unit that we could use as\ninspiration.\n\nI don't have anything to offer for free-form input other than to agree\nthat it looks like a hole with no bottom, and maybe we should just\nkeep that process serial, at least until someone finds a bottom.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Sat, 15 Feb 2020 18:51:05 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "\nOn 2/15/20 7:32 AM, Amit Kapila wrote:\n> On Sat, Feb 15, 2020 at 4:08 PM Alastair Turner <minion@decodable.me> wrote:\n>> On Sat, 15 Feb 2020 at 04:55, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>> On Fri, Feb 14, 2020 at 7:16 PM Alastair Turner <minion@decodable.me> wrote:\n>> ...\n>>>> Parsing rows from the raw input (the work done by CopyReadLine()) in a single process would accommodate line returns in quoted fields. I don't think there's a way of getting parallel workers to manage the in-quote/out-of-quote state required.\n>>>>\n>>> AFAIU, the whole of this in-quote/out-of-quote state is manged inside\n>>> CopyReadLineText which will be done by each of the parallel workers,\n>>> something on the lines of what Thomas did in his patch [1].\n>>> Basically, we need to invent a mechanism to allocate chunks to\n>>> individual workers and then the whole processing will be done as we\n>>> are doing now except for special handling for partial tuples which I\n>>> have explained in my previous email. 
Am, I missing something here?\n>>>\n>> The problem case that I see is the chunk boundary falling in the\n>> middle of a quoted field where\n>> - The quote opens in chunk 1\n>> - The quote closes in chunk 2\n>> - There is an EoL character between the start of chunk 2 and the closing quote\n>>\n>> When the worker processing chunk 2 starts, it believes itself to be in\n>> out-of-quote state, so only data between the start of the chunk and\n>> the EoL is regarded as belonging to the partial line. From that point\n>> on the parsing of the rest of the chunk goes off track.\n>>\n>> Some of the resulting errors can be avoided by, for instance,\n>> requiring a quote to be preceded by a delimiter or EoL. That answer\n>> fails when fields end with EoL characters, which happens often enough\n>> in the wild.\n>>\n>> Recovering from an incorrect in-quote/out-of-quote state assumption at\n>> the start of parsing a chunk just seems like a hole with no bottom. So\n>> it looks to me like it's best done in a single process which can keep\n>> track of that state reliably.\n>>\n> Good point and I agree with you that having a single process would\n> avoid any such stuff. However, I will think some more on it and if\n> you/anyone else gets some idea on how to deal with this in a\n> multi-worker system (where we can allow each worker to read and\n> process the chunk) then feel free to share your thoughts.\n>\n\n\nIIRC, in_quote only matters here in CSV mode (because CSV fields can\nhave embedded newlines). So why not just forbid parallel copy in CSV\nmode, at least for now? I guess it depends on the actual use case. 
If we\nexpect to be parallel loading humungous CSVs then that won't fly.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sun, 16 Feb 2020 01:51:37 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Sun, Feb 16, 2020 at 12:21 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> On 2/15/20 7:32 AM, Amit Kapila wrote:\n> > On Sat, Feb 15, 2020 at 4:08 PM Alastair Turner <minion@decodable.me> wrote:\n> >>>\n> >> The problem case that I see is the chunk boundary falling in the\n> >> middle of a quoted field where\n> >> - The quote opens in chunk 1\n> >> - The quote closes in chunk 2\n> >> - There is an EoL character between the start of chunk 2 and the closing quote\n> >>\n> >> When the worker processing chunk 2 starts, it believes itself to be in\n> >> out-of-quote state, so only data between the start of the chunk and\n> >> the EoL is regarded as belonging to the partial line. From that point\n> >> on the parsing of the rest of the chunk goes off track.\n> >>\n> >> Some of the resulting errors can be avoided by, for instance,\n> >> requiring a quote to be preceded by a delimiter or EoL. That answer\n> >> fails when fields end with EoL characters, which happens often enough\n> >> in the wild.\n> >>\n> >> Recovering from an incorrect in-quote/out-of-quote state assumption at\n> >> the start of parsing a chunk just seems like a hole with no bottom. So\n> >> it looks to me like it's best done in a single process which can keep\n> >> track of that state reliably.\n> >>\n> > Good point and I agree with you that having a single process would\n> > avoid any such stuff. 
However, I will think some more on it and if\n> > you/anyone else gets some idea on how to deal with this in a\n> > multi-worker system (where we can allow each worker to read and\n> > process the chunk) then feel free to share your thoughts.\n> >\n>\n>\n> IIRC, in_quote only matters here in CSV mode (because CSV fields can\n> have embedded newlines).\n>\n\nAFAIU, that is correct.\n\n> So why not just forbid parallel copy in CSV\n> mode, at least for now? I guess it depends on the actual use case. If we\n> expect to be parallel loading humungous CSVs then that won't fly.\n>\n\nI am not sure about this part. However, I guess we should at the very\nleast have some extendable solution that can deal with csv, otherwise,\nwe might end up re-designing everything if someday we want to deal\nwith CSV. One naive idea is that in csv mode, we can set up the\nthings slightly differently like the worker, won't start processing\nthe chunk unless the previous chunk is completely parsed. So each\nworker would first parse and tokenize the entire chunk and then start\nwriting it. So, this will make the reading/parsing part serialized,\nbut writes can still be parallel. Now, I don't know if it is a good\nidea to process in a different way for csv mode.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Feb 2020 16:49:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Sat, 15 Feb 2020 at 14:32, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Good point and I agree with you that having a single process would\n> avoid any such stuff. 
However, I will think some more on it and if\n> you/anyone else gets some idea on how to deal with this in a\n> multi-worker system (where we can allow each worker to read and\n> process the chunk) then feel free to share your thoughts.\n\nI think having a single process handle splitting the input into tuples makes\nmost sense. It's possible to parse csv at multiple GB/s rates [1], finding\ntuple boundaries is a subset of that task.\n\nMy first thought for a design would be to have two shared memory ring buffers,\none for data and one for tuple start positions. Reader process reads the CSV\ndata into the main buffer, finds tuple start locations in there and writes\nthose to the secondary buffer.\n\nWorker processes claim a chunk of tuple positions from the secondary buffer and\nupdate their \"keep this data around\" position with the first position. Then\nproceed to parse and insert the tuples, updating their position until they find\nthe end of the last tuple in the chunk.\n\nBuffer size, maximum and minimum chunk size could be tunable. Ideally the\nbuffers would be at least big enough to absorb one of the workers getting\nscheduled out for a timeslice, which could be up to tens of megabytes.\n\nRegards,\nAnts Aasma\n\n[1] https://github.com/geofflangdale/simdcsv/\n\n\n", "msg_date": "Mon, 17 Feb 2020 17:04:35 +0200", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "At Mon, 17 Feb 2020 16:49:22 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Sun, Feb 16, 2020 at 12:21 PM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n> > On 2/15/20 7:32 AM, Amit Kapila wrote:\n> > > On Sat, Feb 15, 2020 at 4:08 PM Alastair Turner <minion@decodable.me> wrote:\n> > So why not just forbid parallel copy in CSV\n> > mode, at least for now? I guess it depends on the actual use case. 
If we\n> > expect to be parallel loading humungous CSVs then that won't fly.\n> >\n> \n> I am not sure about this part. However, I guess we should at the very\n> least have some extendable solution that can deal with csv, otherwise,\n> we might end up re-designing everything if someday we want to deal\n> with CSV. One naive idea is that in csv mode, we can set up the\n> things slightly differently like the worker, won't start processing\n> the chunk unless the previous chunk is completely parsed. So each\n> worker would first parse and tokenize the entire chunk and then start\n> writing it. So, this will make the reading/parsing part serialized,\n> but writes can still be parallel. Now, I don't know if it is a good\n> idea to process in a different way for csv mode.\n\nIn an extreme case, if we didn't see a QUOTE in a chunk, we cannot\nknow the chunk is in a quoted section or not, until all the past\nchunks are parsed. After all we are forced to parse fully\nsequentially as far as we allow QUOTE.\n\nOn the other hand, if we allowed \"COPY t FROM f WITH (FORMAT CSV,\nQUOTE '')\" in order to signal that there's no quoted section in the\nfile then all chunks would be fully concurrently parsable.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 18 Feb 2020 10:57:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Feb 18, 2020 at 4:04 AM Ants Aasma <ants@cybertec.at> wrote:\n> On Sat, 15 Feb 2020 at 14:32, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Good point and I agree with you that having a single process would\n> > avoid any such stuff. 
However, I will think some more on it and if\n> > you/anyone else gets some idea on how to deal with this in a\n> > multi-worker system (where we can allow each worker to read and\n> > process the chunk) then feel free to share your thoughts.\n>\n> I think having a single process handle splitting the input into tuples makes\n> most sense. It's possible to parse csv at multiple GB/s rates [1], finding\n> tuple boundaries is a subset of that task.\n\nYeah, this is compelling. Even though it has to read the file\nserially, the real gains from parallel COPY should come from doing the\nreal work in parallel: data-type parsing, tuple forming, WHERE clause\nfiltering, partition routing, buffer management, insertion and\nassociated triggers, FKs and index maintenance.\n\nThe reason I used the other approach for the file_fdw patch is that I\nwas trying to make it look as much as possible like parallel\nsequential scan and not create an extra worker, because I didn't feel\nlike an FDW should be allowed to do that (what if executor nodes all\nover the query tree created worker processes willy-nilly?). Obviously\nit doesn't work correctly for embedded newlines, and even if you\ndecree that multi-line values aren't allowed in parallel COPY, the\nstuff about tuples crossing chunk boundaries is still a bit unpleasant\n(whether solved by double reading as I showed, or a bunch of tap\ndancing in shared memory) and creates overheads.\n\n> My first thought for a design would be to have two shared memory ring buffers,\n> one for data and one for tuple start positions. Reader process reads the CSV\n> data into the main buffer, finds tuple start locations in there and writes\n> those to the secondary buffer.\n>\n> Worker processes claim a chunk of tuple positions from the secondary buffer and\n> update their \"keep this data around\" position with the first position. 
Then\n> proceed to parse and insert the tuples, updating their position until they find\n> the end of the last tuple in the chunk.\n\n+1. That sort of two-queue scheme is exactly how I sketched out a\nmulti-consumer queue for a hypothetical Parallel Scatter node. It\nprobably gets a bit trickier when the payload has to be broken up into\nfragments to wrap around the \"data\" buffer N times.\n\n\n", "msg_date": "Tue, 18 Feb 2020 15:39:31 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, 18 Feb 2020 at 04:40, Thomas Munro <thomas.munro@gmail.com> wrote:\n> +1. That sort of two-queue scheme is exactly how I sketched out a\n> multi-consumer queue for a hypothetical Parallel Scatter node. It\n> probably gets a bit trickier when the payload has to be broken up into\n> fragments to wrap around the \"data\" buffer N times.\n\nAt least for copy it should be easy enough - it already has to handle reading\ndata block by block. If worker updates its position while doing so the reader\ncan wrap around the data buffer.\n\nThere will be no parallelism while one worker is buffering up a line larger\nthan the data buffer, but that doesn't seem like a major issue. Once the line is\nbuffered and begins inserting next worker can start buffering the next tuple.\n\nRegards,\nAnts Aasma\n\n\n", "msg_date": "Tue, 18 Feb 2020 09:13:45 +0200", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, Feb 17, 2020 at 8:34 PM Ants Aasma <ants@cybertec.at> wrote:\n>\n> On Sat, 15 Feb 2020 at 14:32, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Good point and I agree with you that having a single process would\n> > avoid any such stuff. 
However, I will think some more on it and if\n> > you/anyone else gets some idea on how to deal with this in a\n> > multi-worker system (where we can allow each worker to read and\n> > process the chunk) then feel free to share your thoughts.\n>\n> I think having a single process handle splitting the input into tuples makes\n> most sense. It's possible to parse csv at multiple GB/s rates [1], finding\n> tuple boundaries is a subset of that task.\n>\n> My first thought for a design would be to have two shared memory ring buffers,\n> one for data and one for tuple start positions. Reader process reads the CSV\n> data into the main buffer, finds tuple start locations in there and writes\n> those to the secondary buffer.\n>\n> Worker processes claim a chunk of tuple positions from the secondary buffer and\n> update their \"keep this data around\" position with the first position. Then\n> proceed to parse and insert the tuples, updating their position until they find\n> the end of the last tuple in the chunk.\n>\n\nThis is something similar to what I had also in mind for this idea. I\nhad thought of handing over complete chunk (64K or whatever we\ndecide). The one thing that slightly bothers me is that we will add\nsome additional overhead of copying to and from shared memory which\nwas earlier from local process memory. And, the tokenization (finding\nline boundaries) would be serial. 
I think that tokenization should be\na small part of the overall work we do during the copy operation, but\nwill do some measurements to ascertain the same.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Feb 2020 15:50:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Feb 18, 2020 at 7:28 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 17 Feb 2020 16:49:22 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Sun, Feb 16, 2020 at 12:21 PM Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com> wrote:\n> > > On 2/15/20 7:32 AM, Amit Kapila wrote:\n> > > > On Sat, Feb 15, 2020 at 4:08 PM Alastair Turner <minion@decodable.me> wrot> > So why not just forbid parallel copy in CSV\n> > > mode, at least for now? I guess it depends on the actual use case. If we\n> > > expect to be parallel loading humungous CSVs then that won't fly.\n> > >\n> >\n> > I am not sure about this part. However, I guess we should at the very\n> > least have some extendable solution that can deal with csv, otherwise,\n> > we might end up re-designing everything if someday we want to deal\n> > with CSV. One naive idea is that in csv mode, we can set up the\n> > things slightly differently like the worker, won't start processing\n> > the chunk unless the previous chunk is completely parsed. So each\n> > worker would first parse and tokenize the entire chunk and then start\n> > writing it. So, this will make the reading/parsing part serialized,\n> > but writes can still be parallel. Now, I don't know if it is a good\n> > idea to process in a different way for csv mode.\n>\n> In an extreme case, if we didn't see a QUOTE in a chunk, we cannot\n> know the chunk is in a quoted section or not, until all the past\n> chunks are parsed. 
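Kyotaro's point — that a chunk containing no QUOTE cannot even be tokenized until every earlier chunk is parsed — can be made concrete with a toy scanner (escape handling omitted for brevity; this is an illustration, not PostgreSQL's parser):

```python
def scan_chunk(chunk: bytes, in_quote: bool):
    """Return (record-end positions, quote state at end of chunk)."""
    ends = []
    for i, b in enumerate(chunk):
        if b == ord('"'):
            in_quote = not in_quote
        elif b == ord('\n') and not in_quote:
            ends.append(i)
    return ends, in_quote

# The same bytes tokenize differently depending on the initial state:
chunk = b'x",a\n"y\n'
outside = scan_chunk(chunk, in_quote=False)   # -> ([7], False)
inside = scan_chunk(chunk, in_quote=True)     # -> ([4], True)
```

Since the `in_quote` value at a chunk boundary depends on the entire prefix of the file, a worker handed an arbitrary chunk cannot find line boundaries without that one bit of upstream state — which is exactly the serial dependency being discussed.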
After all we are forced to parse fully\n> sequentially as far as we allow QUOTE.\n>\n\nRight, I think the benefits of this as compared to single reader idea\nwould be (a) we can save accessing shared memory for the most part of\nthe chunk (b) for non-csv mode, even the tokenization (finding line\nboundaries) would also be parallel. OTOH, doing processing\ndifferently for csv and non-csv mode might not be good.\n\n> On the other hand, if we allowed \"COPY t FROM f WITH (FORMAT CSV,\n> QUOTE '')\" in order to signal that there's no quoted section in the\n> file then all chunks would be fully concurrently parsable.\n>\n\nYeah, if we can provide such an option, we can probably make parallel\ncsv processing equivalent to non-csv. However, users might not like\nthis as I think in some cases it won't be easier for them to tell\nwhether the file has quoted fields or not. I am not very sure of this\npoint.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Feb 2020 15:59:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "At Tue, 18 Feb 2020 15:59:36 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Tue, Feb 18, 2020 at 7:28 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > In an extreme case, if we didn't see a QUOTE in a chunk, we cannot\n> > know the chunk is in a quoted section or not, until all the past\n> > chunks are parsed. After all we are forced to parse fully\n> > sequentially as far as we allow QUOTE.\n> >\n> \n> Right, I think the benefits of this as compared to single reader idea\n> would be (a) we can save accessing shared memory for the most part of\n> the chunk (b) for non-csv mode, even the tokenization (finding line\n> boundaries) would also be parallel. OTOH, doing processing\n> differently for csv and non-csv mode might not be good.\n\nAgreed. 
So I think it's a good point of compromize.\n\n> > On the other hand, if we allowed \"COPY t FROM f WITH (FORMAT CSV,\n> > QUOTE '')\" in order to signal that there's no quoted section in the\n> > file then all chunks would be fully concurrently parsable.\n> >\n> \n> Yeah, if we can provide such an option, we can probably make parallel\n> csv processing equivalent to non-csv. However, users might not like\n> this as I think in some cases it won't be easier for them to tell\n> whether the file has quoted fields or not. I am not very sure of this\n> point.\n\nI'm not sure how large portion of the usage contains quoted sections,\nso I'm not sure how it is useful..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 18 Feb 2020 19:59:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, 18 Feb 2020 at 12:20, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> This is something similar to what I had also in mind for this idea. I\n> had thought of handing over complete chunk (64K or whatever we\n> decide). The one thing that slightly bothers me is that we will add\n> some additional overhead of copying to and from shared memory which\n> was earlier from local process memory. And, the tokenization (finding\n> line boundaries) would be serial. I think that tokenization should be\n> a small part of the overall work we do during the copy operation, but\n> will do some measurements to ascertain the same.\n\nI don't think any extra copying is needed. The reader can directly\nfread()/pq_copymsgbytes() into shared memory, and the workers can run\nCopyReadLineText() inner loop directly off of the buffer in shared memory.\n\nFor serial performance of tokenization into lines, I really think a SIMD\nbased approach will be fast enough for quite some time. 
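For a sense of how branch-free tokenization works, here is a minimal SWAR (SIMD-within-a-register) sketch of the line-boundary search: it tests eight bytes per step for `'\n'` using only integer arithmetic, with no per-byte branching. This illustrates the technique only — it is not Ants's actual prototype.

```python
LO = int.from_bytes(b"\x7f" * 8, "little")
HI = int.from_bytes(b"\x80" * 8, "little")
NL = int.from_bytes(b"\n" * 8, "little")

def newline_positions(chunk: bytes):
    """Byte offsets of every newline in chunk, scanning 8 bytes per step."""
    out = []
    for off in range(0, len(chunk), 8):
        word = chunk[off:off + 8].ljust(8, b"\x00")   # zero pad never matches
        v = int.from_bytes(word, "little") ^ NL       # '\n' bytes become 0x00
        # Exact SWAR zero-byte test: bit 7 of a byte of t ends up clear
        # only when that byte of v was zero.
        t = ((v & LO) + LO) | v | LO
        m = ~t & HI                                   # 0x80 marks each '\n'
        while m:
            bit = (m & -m).bit_length() - 1           # lowest set bit
            out.append(off + bit // 8)
            m &= m - 1
    return out
```

A real implementation would use 16–64 byte vector registers and find-first-set instructions to turn the mask into offsets; the arbitrary-precision Python integers here only demonstrate the bit logic.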
I hacked up the code in\nthe simdcsv project to only tokenize on line endings and it was able to\ntokenize a CSV file with short lines at 8+ GB/s. There are going to be many\nother bottlenecks before this one starts limiting. Patch attached if you'd\nlike to try that out.\n\nRegards,\nAnts Aasma", "msg_date": "Tue, 18 Feb 2020 14:29:20 +0200", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Feb 18, 2020 at 5:59 PM Ants Aasma <ants@cybertec.at> wrote:\n>\n> On Tue, 18 Feb 2020 at 12:20, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > This is something similar to what I had also in mind for this idea. I\n> > had thought of handing over complete chunk (64K or whatever we\n> > decide). The one thing that slightly bothers me is that we will add\n> > some additional overhead of copying to and from shared memory which\n> > was earlier from local process memory. And, the tokenization (finding\n> > line boundaries) would be serial. I think that tokenization should be\n> > a small part of the overall work we do during the copy operation, but\n> > will do some measurements to ascertain the same.\n>\n> I don't think any extra copying is needed.\n>\n\nI am talking about access to shared memory instead of the process\nlocal memory. I understand that an extra copy won't be required.\n\n> The reader can directly\n> fread()/pq_copymsgbytes() into shared memory, and the workers can run\n> CopyReadLineText() inner loop directly off of the buffer in shared memory.\n>\n\nI am slightly confused here. 
AFAIU, the for(;;) loop in\nCopyReadLineText is about finding the line endings which we thought\nthat the reader process will do.\n\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Feb 2020 18:51:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Sun, Feb 16, 2020 at 12:51 AM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n>\n> IIRC, in_quote only matters here in CSV mode (because CSV fields can\n> have embedded newlines). So why not just forbid parallel copy in CSV\n> mode, at least for now? I guess it depends on the actual use case. If we\n> expect to be parallel loading humungous CSVs then that won't fly.\n\n\nLoading large CSV files is pretty common here. I hope this can be\nsupported.\n\n\n\nMIKE BLACKWELL\n\n\n* <Mike.Blackwell@rrd.com>*\n\n\n\n>\n\nOn Sun, Feb 16, 2020 at 12:51 AM Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\nIIRC, in_quote only matters here in CSV mode (because CSV fields can\nhave embedded newlines). So why not just forbid parallel copy in CSV\nmode, at least for now? I guess it depends on the actual use case. If we\nexpect to be parallel loading humungous CSVs then that won't fly.Loading large CSV files is pretty common here.  I hope this can be supported.MIKE BLACKWELL", "msg_date": "Tue, 18 Feb 2020 08:21:43 -0600", "msg_from": "Mike Blackwell <mike.blackwell@rrd.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, 18 Feb 2020 at 15:21, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Feb 18, 2020 at 5:59 PM Ants Aasma <ants@cybertec.at> wrote:\n> >\n> > On Tue, 18 Feb 2020 at 12:20, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > This is something similar to what I had also in mind for this idea. I\n> > > had thought of handing over complete chunk (64K or whatever we\n> > > decide). 
The one thing that slightly bothers me is that we will add\n> > > some additional overhead of copying to and from shared memory which\n> > > was earlier from local process memory. And, the tokenization (finding\n> > > line boundaries) would be serial. I think that tokenization should be\n> > > a small part of the overall work we do during the copy operation, but\n> > > will do some measurements to ascertain the same.\n> >\n> > I don't think any extra copying is needed.\n> >\n>\n> I am talking about access to shared memory instead of the process\n> local memory. I understand that an extra copy won't be required.\n>\n> > The reader can directly\n> > fread()/pq_copymsgbytes() into shared memory, and the workers can run\n> > CopyReadLineText() inner loop directly off of the buffer in shared memory.\n> >\n>\n> I am slightly confused here. AFAIU, the for(;;) loop in\n> CopyReadLineText is about finding the line endings which we thought\n> that the reader process will do.\n\nIndeed, I somehow misread the code while scanning over it. So CopyReadLineText\ncurrently copies data from cstate->raw_buf to the StringInfo in\ncstate->line_buf. In parallel mode it would copy it from the shared data buffer\nto local line_buf until it hits the line end found by the data reader. The\namount of copying done is still exactly the same as it is now.\n\nRegards,\nAnts Aasma\n\n\n", "msg_date": "Tue, 18 Feb 2020 16:38:13 +0200", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Feb 18, 2020 at 06:51:29PM +0530, Amit Kapila wrote:\n> On Tue, Feb 18, 2020 at 5:59 PM Ants Aasma <ants@cybertec.at> wrote:\n> >\n> > On Tue, 18 Feb 2020 at 12:20, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > This is something similar to what I had also in mind for this idea. I\n> > > had thought of handing over complete chunk (64K or whatever we\n> > > decide). 
The one thing that slightly bothers me is that we will add\n> > > some additional overhead of copying to and from shared memory which\n> > > was earlier from local process memory. And, the tokenization (finding\n> > > line boundaries) would be serial. I think that tokenization should be\n> > > a small part of the overall work we do during the copy operation, but\n> > > will do some measurements to ascertain the same.\n> >\n> > I don't think any extra copying is needed.\n> \n> I am talking about access to shared memory instead of the process\n> local memory. I understand that an extra copy won't be required.\n\nIsn't accessing shared memory from different pieces of execution what\nthreads were designed to do?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Tue, 18 Feb 2020 16:11:49 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Feb 18, 2020 at 8:41 PM David Fetter <david@fetter.org> wrote:\n>\n> On Tue, Feb 18, 2020 at 06:51:29PM +0530, Amit Kapila wrote:\n> > On Tue, Feb 18, 2020 at 5:59 PM Ants Aasma <ants@cybertec.at> wrote:\n> > >\n> > > On Tue, 18 Feb 2020 at 12:20, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > This is something similar to what I had also in mind for this idea. I\n> > > > had thought of handing over complete chunk (64K or whatever we\n> > > > decide). The one thing that slightly bothers me is that we will add\n> > > > some additional overhead of copying to and from shared memory which\n> > > > was earlier from local process memory. And, the tokenization (finding\n> > > > line boundaries) would be serial. 
I think that tokenization should be\n> > > > a small part of the overall work we do during the copy operation, but\n> > > > will do some measurements to ascertain the same.\n> > >\n> > > I don't think any extra copying is needed.\n> >\n> > I am talking about access to shared memory instead of the process\n> > local memory. I understand that an extra copy won't be required.\n>\n> Isn't accessing shared memory from different pieces of execution what\n> threads were designed to do?\n>\n\nSorry, but I don't understand what you mean by the above? We are\ngoing to use background workers (which are processes) for parallel\nworkers. In general, it might not make a big difference in accessing\nshared memory as compared to local memory especially because the cost\nof other stuff in the copy is relatively higher. But still, it is a\npoint to consider.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Feb 2020 09:38:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Feb 18, 2020 at 8:08 PM Ants Aasma <ants@cybertec.at> wrote:\n>\n> On Tue, 18 Feb 2020 at 15:21, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Feb 18, 2020 at 5:59 PM Ants Aasma <ants@cybertec.at> wrote:\n> > >\n> > > On Tue, 18 Feb 2020 at 12:20, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > This is something similar to what I had also in mind for this idea. I\n> > > > had thought of handing over complete chunk (64K or whatever we\n> > > > decide). The one thing that slightly bothers me is that we will add\n> > > > some additional overhead of copying to and from shared memory which\n> > > > was earlier from local process memory. And, the tokenization (finding\n> > > > line boundaries) would be serial. 
I think that tokenization should be\n> > > > a small part of the overall work we do during the copy operation, but\n> > > > will do some measurements to ascertain the same.\n> > >\n> > > I don't think any extra copying is needed.\n> > >\n> >\n> > I am talking about access to shared memory instead of the process\n> > local memory. I understand that an extra copy won't be required.\n> >\n> > > The reader can directly\n> > > fread()/pq_copymsgbytes() into shared memory, and the workers can run\n> > > CopyReadLineText() inner loop directly off of the buffer in shared memory.\n> > >\n> >\n> > I am slightly confused here. AFAIU, the for(;;) loop in\n> > CopyReadLineText is about finding the line endings which we thought\n> > that the reader process will do.\n>\n> Indeed, I somehow misread the code while scanning over it. So CopyReadLineText\n> currently copies data from cstate->raw_buf to the StringInfo in\n> cstate->line_buf. In parallel mode it would copy it from the shared data buffer\n> to local line_buf until it hits the line end found by the data reader. The\n> amount of copying done is still exactly the same as it is now.\n>\n\nYeah, on a broader level it will be something like that, but actual\ndetails might vary during implementation. BTW, have you given any\nthoughts on one other approach I have shared above [1]? 
We might not\ngo with that idea, but it is better to discuss different ideas and\nevaluate their pros and cons.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LyAyPCtBk4rkwomeT6%3DyTse5qWws-7i9EFwnUFZhvu5w%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Feb 2020 09:52:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Feb 18, 2020 at 7:51 PM Mike Blackwell <mike.blackwell@rrd.com>\nwrote:\n\n> On Sun, Feb 16, 2020 at 12:51 AM Andrew Dunstan <\n> andrew.dunstan@2ndquadrant.com> wrote:\n>\n>>\n>> IIRC, in_quote only matters here in CSV mode (because CSV fields can\n>> have embedded newlines). So why not just forbid parallel copy in CSV\n>> mode, at least for now? I guess it depends on the actual use case. If we\n>> expect to be parallel loading humungous CSVs then that won't fly.\n>\n>\n> Loading large CSV files is pretty common here. I hope this can be\n> supported.\n>\n>\nThank you for your inputs. It is important and valuable.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Tue, Feb 18, 2020 at 7:51 PM Mike Blackwell <mike.blackwell@rrd.com> wrote:On Sun, Feb 16, 2020 at 12:51 AM Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\nIIRC, in_quote only matters here in CSV mode (because CSV fields can\nhave embedded newlines). So why not just forbid parallel copy in CSV\nmode, at least for now? I guess it depends on the actual use case. If we\nexpect to be parallel loading humungous CSVs then that won't fly.Loading large CSV files is pretty common here.  I hope this can be supported.Thank you for your inputs.  
It is important and valuable.-- With Regards,Amit Kapila.EnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 19 Feb 2020 09:53:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, 19 Feb 2020 at 06:22, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Feb 18, 2020 at 8:08 PM Ants Aasma <ants@cybertec.at> wrote:\n> >\n> > On Tue, 18 Feb 2020 at 15:21, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Feb 18, 2020 at 5:59 PM Ants Aasma <ants@cybertec.at> wrote:\n> > > >\n> > > > On Tue, 18 Feb 2020 at 12:20, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > This is something similar to what I had also in mind for this idea. I\n> > > > > had thought of handing over complete chunk (64K or whatever we\n> > > > > decide). The one thing that slightly bothers me is that we will add\n> > > > > some additional overhead of copying to and from shared memory which\n> > > > > was earlier from local process memory. And, the tokenization (finding\n> > > > > line boundaries) would be serial. I think that tokenization should be\n> > > > > a small part of the overall work we do during the copy operation, but\n> > > > > will do some measurements to ascertain the same.\n> > > >\n> > > > I don't think any extra copying is needed.\n> > > >\n> > >\n> > > I am talking about access to shared memory instead of the process\n> > > local memory. I understand that an extra copy won't be required.\n> > >\n> > > > The reader can directly\n> > > > fread()/pq_copymsgbytes() into shared memory, and the workers can run\n> > > > CopyReadLineText() inner loop directly off of the buffer in shared memory.\n> > > >\n> > >\n> > > I am slightly confused here. AFAIU, the for(;;) loop in\n> > > CopyReadLineText is about finding the line endings which we thought\n> > > that the reader process will do.\n> >\n> > Indeed, I somehow misread the code while scanning over it. 
So CopyReadLineText\n> > currently copies data from cstate->raw_buf to the StringInfo in\n> > cstate->line_buf. In parallel mode it would copy it from the shared data buffer\n> > to local line_buf until it hits the line end found by the data reader. The\n> > amount of copying done is still exactly the same as it is now.\n> >\n>\n> Yeah, on a broader level it will be something like that, but actual\n> details might vary during implementation. BTW, have you given any\n> thoughts on one other approach I have shared above [1]? We might not\n> go with that idea, but it is better to discuss different ideas and\n> evaluate their pros and cons.\n>\n> [1] - https://www.postgresql.org/message-id/CAA4eK1LyAyPCtBk4rkwomeT6%3DyTse5qWws-7i9EFwnUFZhvu5w%40mail.gmail.com\n\nIt seems to be that at least for the general CSV case the tokenization to\ntuples is an inherently serial task. Adding thread synchronization to that path\nfor coordinating between multiple workers is only going to make it slower. It\nmay be possible to enforce limitations on the input (e.g. no quotes allowed) or\ndo some speculative tokenization (e.g. if we encounter quote before newline\nassume the chunk started in a quoted section) to make it possible to do the\ntokenization in parallel. But given that the simpler and more featured approach\nof handling it in a single reader process looks to be fast enough, I don't see\nthe point. 
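The "speculative tokenization" option Ants mentions — assume an initial quote state for each chunk and fix it up later — could be sketched like this. Each chunk is scanned under both possible starting states (the parallelisable part); a cheap serial pass then threads the true state through and keeps the matching result. Escape handling is omitted and all names are illustrative:

```python
def scan(chunk: bytes, in_quote: bool):
    ends = []
    for i, b in enumerate(chunk):
        if b == ord('"'):
            in_quote = not in_quote
        elif b == ord('\n') and not in_quote:
            ends.append(i)
    return ends, in_quote

def speculative_scan(chunk: bytes):
    # Both hypotheses; in a parallel copy each worker would do this
    # for its own chunk, independently of all other chunks.
    return {start: scan(chunk, start) for start in (False, True)}

def resolve(chunks):
    # Serial fix-up: pick the hypothesis matching the true incoming state.
    state, ends, base = False, [], 0
    for chunk in chunks:
        chunk_ends, state = speculative_scan(chunk)[state]
        ends.extend(base + e for e in chunk_ends)
        base += len(chunk)
    return ends
```

The fix-up touches only the per-chunk summaries, so the per-byte work parallelises at the cost of scanning every chunk twice — a reasonable trade only if tokenization turns out to be the bottleneck, which is exactly what the thread is debating.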
I rather think that the next big step would be to overlap reading\ninput and tokenization, hopefully by utilizing Andres's work on asyncio.\n\nRegards,\nAnts Aasma\n\n\n", "msg_date": "Wed, 19 Feb 2020 11:02:15 +0200", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Feb 19, 2020 at 11:02:15AM +0200, Ants Aasma wrote:\n>On Wed, 19 Feb 2020 at 06:22, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Tue, Feb 18, 2020 at 8:08 PM Ants Aasma <ants@cybertec.at> wrote:\n>> >\n>> > On Tue, 18 Feb 2020 at 15:21, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> > >\n>> > > On Tue, Feb 18, 2020 at 5:59 PM Ants Aasma <ants@cybertec.at> wrote:\n>> > > >\n>> > > > On Tue, 18 Feb 2020 at 12:20, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> > > > > This is something similar to what I had also in mind for this idea. I\n>> > > > > had thought of handing over complete chunk (64K or whatever we\n>> > > > > decide). The one thing that slightly bothers me is that we will add\n>> > > > > some additional overhead of copying to and from shared memory which\n>> > > > > was earlier from local process memory. And, the tokenization (finding\n>> > > > > line boundaries) would be serial. I think that tokenization should be\n>> > > > > a small part of the overall work we do during the copy operation, but\n>> > > > > will do some measurements to ascertain the same.\n>> > > >\n>> > > > I don't think any extra copying is needed.\n>> > > >\n>> > >\n>> > > I am talking about access to shared memory instead of the process\n>> > > local memory. I understand that an extra copy won't be required.\n>> > >\n>> > > > The reader can directly\n>> > > > fread()/pq_copymsgbytes() into shared memory, and the workers can run\n>> > > > CopyReadLineText() inner loop directly off of the buffer in shared memory.\n>> > > >\n>> > >\n>> > > I am slightly confused here. 
AFAIU, the for(;;) loop in\n>> > > CopyReadLineText is about finding the line endings which we thought\n>> > > that the reader process will do.\n>> >\n>> > Indeed, I somehow misread the code while scanning over it. So CopyReadLineText\n>> > currently copies data from cstate->raw_buf to the StringInfo in\n>> > cstate->line_buf. In parallel mode it would copy it from the shared data buffer\n>> > to local line_buf until it hits the line end found by the data reader. The\n>> > amount of copying done is still exactly the same as it is now.\n>> >\n>>\n>> Yeah, on a broader level it will be something like that, but actual\n>> details might vary during implementation. BTW, have you given any\n>> thoughts on one other approach I have shared above [1]? We might not\n>> go with that idea, but it is better to discuss different ideas and\n>> evaluate their pros and cons.\n>>\n>> [1] - https://www.postgresql.org/message-id/CAA4eK1LyAyPCtBk4rkwomeT6%3DyTse5qWws-7i9EFwnUFZhvu5w%40mail.gmail.com\n>\n>It seems to be that at least for the general CSV case the tokenization to\n>tuples is an inherently serial task. Adding thread synchronization to that path\n>for coordinating between multiple workers is only going to make it slower. It\n>may be possible to enforce limitations on the input (e.g. no quotes allowed) or\n>do some speculative tokenization (e.g. if we encounter quote before newline\n>assume the chunk started in a quoted section) to make it possible to do the\n>tokenization in parallel. But given that the simpler and more featured approach\n>of handling it in a single reader process looks to be fast enough, I don't see\n>the point. I rather think that the next big step would be to overlap reading\n>input and tokenization, hopefully by utilizing Andres's work on asyncio.\n>\n\nI generally agree with the impression that parsing CSV is tricky and\nunlikely to benefit from parallelism in general. There may be cases with\nrestrictions making it easier (e.g. 
restrictions on the format) but that\nmight be a bit too complex to start with.\n\nFor example, I had an idea to parallelise the planning by splitting it\ninto two phases:\n\n1) indexing\n\nSplits the CSV file into equally-sized chunks, make each worker to just\nscan through it's chunk and store positions of delimiters, quotes,\nnewlines etc. This is probably the most expensive part of the parsing\n(essentially go char by char), and we'd speed it up linearly.\n\n2) merge\n\nCombine the information from (1) in a single process, and actually parse\nthe CSV data - we would not have to inspect each character, because we'd\nknow positions of interesting chars, so this should be fast. We might\nhave to recheck some stuff (e.g. escaping) but it should still be much\nfaster.\n\nBut yes, this may be a bit complex and I'm not sure it's worth it.\n\nThe one piece of information I'm missing here is at least a very rough\nquantification of the individual steps of CSV processing - for example\nif parsing takes only 10% of the time, it's pretty pointless to start by\nparallelising this part and we should focus on the rest. If it's 50% it\nmight be a different story. Has anyone done any measurements?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 19 Feb 2020 11:38:45 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Feb 19, 2020 at 4:08 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> The one piece of information I'm missing here is at least a very rough\n> quantification of the individual steps of CSV processing - for example\n> if parsing takes only 10% of the time, it's pretty pointless to start by\n> parallelising this part and we should focus on the rest. 
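A minimal sketch of the two-phase idea above — workers index the positions of interesting characters in their chunks, and a single merge pass then resolves quoting and record boundaries by walking only those positions. This shows the shape of the algorithm, not a tuned implementation:

```python
INTERESTING = {ord('"'), ord(','), ord('\n')}

def index_chunk(chunk: bytes, offset: int):
    # Phase 1: embarrassingly parallel per chunk; scans every byte.
    return [(offset + i, b) for i, b in enumerate(chunk) if b in INTERESTING]

def merge(per_chunk_marks):
    # Phase 2: serial, but only visits the marked positions.
    in_quote, record_ends = False, []
    for marks in per_chunk_marks:          # chunks in file order
        for pos, b in marks:
            if b == ord('"'):
                in_quote = not in_quote
            elif b == ord('\n') and not in_quote:
                record_ends.append(pos)
    return record_ends
```

Whether this wins depends on the density of interesting characters relative to the data — which is one reason the measurement question raised in the thread matters.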
If it's 50% it\n> might be a different story.\n>\n\nRight, this is important information to know.\n\n> Has anyone done any measurements?\n>\n\nNot yet, but planning to work on it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Feb 2020 18:34:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Feb 14, 2020 at 01:41:54PM +0530, Amit Kapila wrote:\n> This work is to parallelize the copy command and in particular \"Copy\n> <table_name> from 'filename' Where <condition>;\" command.\n\nApropos of the initial parsing issue generally, there's an interesting\napproach taken here: https://github.com/robertdavidgraham/wc2\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Thu, 20 Feb 2020 00:41:58 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Feb 20, 2020 at 5:12 AM David Fetter <david@fetter.org> wrote:\n>\n> On Fri, Feb 14, 2020 at 01:41:54PM +0530, Amit Kapila wrote:\n> > This work is to parallelize the copy command and in particular \"Copy\n> > <table_name> from 'filename' Where <condition>;\" command.\n>\n> Apropos of the initial parsing issue generally, there's an interesting\n> approach taken here: https://github.com/robertdavidgraham/wc2\n>\n\nThanks for sharing. I might be missing something, but I can't figure\nout how this can help here. 
Does this in some way help to allow\nmultiple workers to read and tokenize the chunks?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Feb 2020 16:11:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Feb 20, 2020 at 04:11:39PM +0530, Amit Kapila wrote:\n>On Thu, Feb 20, 2020 at 5:12 AM David Fetter <david@fetter.org> wrote:\n>>\n>> On Fri, Feb 14, 2020 at 01:41:54PM +0530, Amit Kapila wrote:\n>> > This work is to parallelize the copy command and in particular \"Copy\n>> > <table_name> from 'filename' Where <condition>;\" command.\n>>\n>> Apropos of the initial parsing issue generally, there's an interesting\n>> approach taken here: https://github.com/robertdavidgraham/wc2\n>>\n>\n>Thanks for sharing. I might be missing something, but I can't figure\n>out how this can help here. Does this in some way help to allow\n>multiple workers to read and tokenize the chunks?\n>\n\nI think the wc2 is showing that maybe instead of parallelizing the\nparsing, we might instead try using a different tokenizer/parser and\nmake the implementation more efficient instead of just throwing more\nCPUs on it.\n\nI don't know if our code is similar to what wc does, maytbe parsing\ncsv is more complicated than what wc does.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 20 Feb 2020 14:36:02 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Feb 20, 2020 at 02:36:02PM +0100, Tomas Vondra wrote:\n> On Thu, Feb 20, 2020 at 04:11:39PM +0530, Amit Kapila wrote:\n> > On Thu, Feb 20, 2020 at 5:12 AM David Fetter <david@fetter.org> wrote:\n> > > \n> > > On Fri, Feb 14, 2020 at 01:41:54PM +0530, Amit Kapila wrote:\n> > > > This work 
is to parallelize the copy command and in particular \"Copy\n> > > > <table_name> from 'filename' Where <condition>;\" command.\n> > > \n> > > Apropos of the initial parsing issue generally, there's an interesting\n> > > approach taken here: https://github.com/robertdavidgraham/wc2\n> > > \n> > \n> > Thanks for sharing. I might be missing something, but I can't figure\n> > out how this can help here. Does this in some way help to allow\n> > multiple workers to read and tokenize the chunks?\n> \n> I think the wc2 is showing that maybe instead of parallelizing the\n> parsing, we might instead try using a different tokenizer/parser and\n> make the implementation more efficient instead of just throwing more\n> CPUs on it.\n\nThat was what I had in mind.\n\n> I don't know if our code is similar to what wc does, maytbe parsing\n> csv is more complicated than what wc does.\n\nCSV parsing differs from wc in that there are more states in the state\nmachine, but I don't see anything fundamentally different.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Thu, 20 Feb 2020 17:43:26 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, 20 Feb 2020 at 18:43, David Fetter <david@fetter.org> wrote:>\n> On Thu, Feb 20, 2020 at 02:36:02PM +0100, Tomas Vondra wrote:\n> > I think the wc2 is showing that maybe instead of parallelizing the\n> > parsing, we might instead try using a different tokenizer/parser and\n> > make the implementation more efficient instead of just throwing more\n> > CPUs on it.\n>\n> That was what I had in mind.\n>\n> > I don't know if our code is similar to what wc does, maytbe parsing\n> > csv is more complicated than what wc does.\n>\n> CSV parsing differs from wc in that there are more states in the 
state\n> machine, but I don't see anything fundamentally different.\n\nThe trouble with a state machine based approach is that the state\ntransitions form a dependency chain, which means that at best the\nprocessing rate will be 4-5 cycles per byte (L1 latency to fetch the\nnext state).\n\nI whipped together a quick prototype that uses SIMD and bitmap\nmanipulations to do the equivalent of CopyReadLineText() in csv mode\nincluding quotes and escape handling, this runs at 0.25-0.5 cycles per\nbyte.\n\nRegards,\nAnts Aasma", "msg_date": "Fri, 21 Feb 2020 14:54:31 +0200", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Feb 21, 2020 at 02:54:31PM +0200, Ants Aasma wrote:\n>On Thu, 20 Feb 2020 at 18:43, David Fetter <david@fetter.org> wrote:>\n>> On Thu, Feb 20, 2020 at 02:36:02PM +0100, Tomas Vondra wrote:\n>> > I think the wc2 is showing that maybe instead of parallelizing the\n>> > parsing, we might instead try using a different tokenizer/parser and\n>> > make the implementation more efficient instead of just throwing more\n>> > CPUs on it.\n>>\n>> That was what I had in mind.\n>>\n>> > I don't know if our code is similar to what wc does, maytbe parsing\n>> > csv is more complicated than what wc does.\n>>\n>> CSV parsing differs from wc in that there are more states in the state\n>> machine, but I don't see anything fundamentally different.\n>\n>The trouble with a state machine based approach is that the state\n>transitions form a dependency chain, which means that at best the\n>processing rate will be 4-5 cycles per byte (L1 latency to fetch the\n>next state).\n>\n>I whipped together a quick prototype that uses SIMD and bitmap\n>manipulations to do the equivalent of CopyReadLineText() in csv mode\n>including quotes and escape handling, this runs at 0.25-0.5 cycles per\n>byte.\n>\n\nInteresting. 
How does that compare to what we currently have?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 22 Feb 2020 01:28:02 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Feb 18, 2020 at 6:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I am talking about access to shared memory instead of the process\n> local memory. I understand that an extra copy won't be required.\n\nYou make it sound like there is some performance penalty for accessing\nshared memory, but I don't think that's true. It's true that\n*contended* access to shared memory can be slower, because if multiple\nprocesses are trying to access the same memory, and especially if\nmultiple processes are trying to write the same memory, then the cache\nlines have to be shared and that has a cost. However, I don't think\nthat would create any noticeable effect in this case. First, there's\npresumably only one writer process. Second, you wouldn't normally have\nmultiple readers working on the same part of the data at the same\ntime.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 24 Feb 2020 06:18:54 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nOn 2020-02-19 11:38:45 +0100, Tomas Vondra wrote:\n> I generally agree with the impression that parsing CSV is tricky and\n> unlikely to benefit from parallelism in general. There may be cases with\n> restrictions making it easier (e.g. 
restrictions on the format) but that\n> might be a bit too complex to start with.\n> \n> For example, I had an idea to parallelise the planning by splitting it\n> into two phases:\n\nFWIW, I think we ought to rewrite our COPY parsers before we go for\ncomplex schemes. They're way slower than a decent green-field\nCSV/... parser.\n\n\n> The one piece of information I'm missing here is at least a very rough\n> quantification of the individual steps of CSV processing - for example\n> if parsing takes only 10% of the time, it's pretty pointless to start by\n> parallelising this part and we should focus on the rest. If it's 50% it\n> might be a different story. Has anyone done any measurements?\n\nNot recently, but I'm pretty sure that I've observed CSV parsing to be\nway more than 10%.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 23 Feb 2020 17:09:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Sun, Feb 23, 2020 at 05:09:51PM -0800, Andres Freund wrote:\n>Hi,\n>\n>On 2020-02-19 11:38:45 +0100, Tomas Vondra wrote:\n>> I generally agree with the impression that parsing CSV is tricky and\n>> unlikely to benefit from parallelism in general. There may be cases with\n>> restrictions making it easier (e.g. restrictions on the format) but that\n>> might be a bit too complex to start with.\n>>\n>> For example, I had an idea to parallelise the planning by splitting it\n>> into two phases:\n>\n>FWIW, I think we ought to rewrite our COPY parsers before we go for\n>complex schemes. They're way slower than a decent green-field\n>CSV/... parser.\n>\n\nYep, that's quite possible.\n\n>\n>> The one piece of information I'm missing here is at least a very rough\n>> quantification of the individual steps of CSV processing - for example\n>> if parsing takes only 10% of the time, it's pretty pointless to start by\n>> parallelising this part and we should focus on the rest. 
If it's 50% it\n>> might be a different story. Has anyone done any measurements?\n>\n>Not recently, but I'm pretty sure that I've observed CSV parsing to be\n>way more than 10%.\n>\n\nPerhaps. I guess it'll depend on the CSV file (number of fields, ...),\nso I still think we need to do some measurements first. I'm willing to\ndo that, but (a) I doubt I'll have time for that until after 2020-03,\nand (b) it'd be good to agree on some set of typical CSV files.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 25 Feb 2020 17:00:51 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Feb 25, 2020 at 9:30 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sun, Feb 23, 2020 at 05:09:51PM -0800, Andres Freund wrote:\n> >Hi,\n> >\n> >> The one piece of information I'm missing here is at least a very rough\n> >> quantification of the individual steps of CSV processing - for example\n> >> if parsing takes only 10% of the time, it's pretty pointless to start by\n> >> parallelising this part and we should focus on the rest. If it's 50% it\n> >> might be a different story. Has anyone done any measurements?\n> >\n> >Not recently, but I'm pretty sure that I've observed CSV parsing to be\n> >way more than 10%.\n> >\n>\n> Perhaps. I guess it'll depend on the CSV file (number of fields, ...),\n> so I still think we need to do some measurements first.\n>\n\nAgreed.\n\n> I'm willing to\n> do that, but (a) I doubt I'll have time for that until after 2020-03,\n> and (b) it'd be good to agree on some set of typical CSV files.\n>\n\nRight, I don't know what is the best way to define that. I can think\nof the below tests.\n\n1. A table with 10 columns (with datatypes as integers, date, text).\nIt has one index (unique/primary). 
Load with 1 million rows (basically\nthe data should be probably 5-10 GB).\n2. A table with 10 columns (with datatypes as integers, date, text).\nIt has three indexes, one index can be (unique/primary). Load with 1\nmillion rows (basically the data should be probably 5-10 GB).\n3. A table with 10 columns (with datatypes as integers, date, text).\nIt has three indexes, one index can be (unique/primary). It has before\nand after trigeers. Load with 1 million rows (basically the data\nshould be probably 5-10 GB).\n4. A table with 10 columns (with datatypes as integers, date, text).\nIt has five or six indexes, one index can be (unique/primary). Load\nwith 1 million rows (basically the data should be probably 5-10 GB).\n\nAmong all these tests, we can check how much time did we spend in\nreading, parsing the csv files vs. rest of execution?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 Feb 2020 16:24:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, 26 Feb 2020 at 10:54, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Feb 25, 2020 at 9:30 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> >\n...\n> >\n> > Perhaps. I guess it'll depend on the CSV file (number of fields, ...),\n> > so I still think we need to do some measurements first.\n> >\n>\n> Agreed.\n>\n> > I'm willing to\n> > do that, but (a) I doubt I'll have time for that until after 2020-03,\n> > and (b) it'd be good to agree on some set of typical CSV files.\n> >\n>\n> Right, I don't know what is the best way to define that. I can think\n> of the below tests.\n>\n> 1. A table with 10 columns (with datatypes as integers, date, text).\n> It has one index (unique/primary). Load with 1 million rows (basically\n> the data should be probably 5-10 GB).\n> 2. 
A table with 10 columns (with datatypes as integers, date, text).\n> It has three indexes, one index can be (unique/primary). Load with 1\n> million rows (basically the data should be probably 5-10 GB).\n> 3. A table with 10 columns (with datatypes as integers, date, text).\n> It has three indexes, one index can be (unique/primary). It has before\n> and after trigeers. Load with 1 million rows (basically the data\n> should be probably 5-10 GB).\n> 4. A table with 10 columns (with datatypes as integers, date, text).\n> It has five or six indexes, one index can be (unique/primary). Load\n> with 1 million rows (basically the data should be probably 5-10 GB).\n>\n> Among all these tests, we can check how much time did we spend in\n> reading, parsing the csv files vs. rest of execution?\n\nThat's a good set of tests of what happens after the parse. Two\nsimpler test runs may provide useful baselines - no\nconstraints/indexes with all columns varchar and no\nconstraints/indexes with columns correctly typed.\n\nFor testing the impact of various parts of the parse process, my idea would be:\n - A base dataset with 10 columns including int, date and text. One\ntext field quoted and containing both delimiters and line terminators\n - A derivative to measure just line parsing - strip the quotes around\nthe text field and quote the whole row as one text field\n - A derivative to measure the impact of quoted fields - clean up the\ntext field so it doesn't require quoting\n - A derivative to measure the impact of row length - run ten rows\ntogether to make 100 column rows, but only a tenth as many rows\n\nIf that sounds reasonable, I'll try to knock up a generator.\n\n--\nAlastair\n\n\n", "msg_date": "Wed, 26 Feb 2020 15:16:11 +0000", "msg_from": "Alastair Turner <minion@decodable.me>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, 25 Feb 2020 at 18:00, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> Perhaps. 
I guess it'll depend on the CSV file (number of fields, ...),\n> so I still think we need to do some measurements first. I'm willing to\n> do that, but (a) I doubt I'll have time for that until after 2020-03,\n> and (b) it'd be good to agree on some set of typical CSV files.\n\nI agree that getting a nice varied dataset would be nice. Including\nthings like narrow integer only tables, strings with newlines and\nescapes in them, extremely wide rows.\n\nI tried to capture a quick profile just to see what it looks like.\nGrabbed a random open data set from the web, about 800MB of narrow\nrows CSV [1].\n\nScript:\nCREATE TABLE census (year int,age int,ethnic int,sex int,area text,count text);\nCOPY census FROM '.../Data8277.csv' WITH (FORMAT 'csv', HEADER true);\n\nProfile:\n# Samples: 59K of event 'cycles:u'\n# Event count (approx.): 57644269486\n#\n# Overhead Command Shared Object Symbol\n# ........ ........ ..................\n.......................................\n#\n 18.24% postgres postgres [.] CopyReadLine\n 9.23% postgres postgres [.] NextCopyFrom\n 8.87% postgres postgres [.] NextCopyFromRawFields\n 5.82% postgres postgres [.] pg_verify_mbstr_len\n 5.45% postgres postgres [.] pg_strtoint32\n 4.16% postgres postgres [.] heap_fill_tuple\n 4.03% postgres postgres [.] heap_compute_data_size\n 3.83% postgres postgres [.] CopyFrom\n 3.78% postgres postgres [.] AllocSetAlloc\n 3.53% postgres postgres [.] heap_form_tuple\n 2.96% postgres postgres [.] InputFunctionCall\n 2.89% postgres libc-2.30.so [.] __memmove_avx_unaligned_erms\n 1.82% postgres libc-2.30.so [.] __strlen_avx2\n 1.72% postgres postgres [.] AllocSetReset\n 1.72% postgres postgres [.] RelationPutHeapTuple\n 1.47% postgres postgres [.] heap_prepare_insert\n 1.31% postgres postgres [.] heap_multi_insert\n 1.25% postgres postgres [.] textin\n 1.24% postgres postgres [.] int4in\n 1.05% postgres postgres [.] tts_buffer_heap_clear\n 0.85% postgres postgres [.] pg_any_to_server\n 0.80% postgres postgres [.] 
pg_comp_crc32c_sse42\n 0.77% postgres postgres [.] cstring_to_text_with_len\n 0.69% postgres postgres [.] AllocSetFree\n 0.60% postgres postgres [.] appendBinaryStringInfo\n 0.55% postgres postgres [.] tts_buffer_heap_materialize.part.0\n 0.54% postgres postgres [.] palloc\n 0.54% postgres libc-2.30.so [.] __memmove_avx_unaligned\n 0.51% postgres postgres [.] palloc0\n 0.51% postgres postgres [.] pg_encoding_max_length\n 0.48% postgres postgres [.] enlargeStringInfo\n 0.47% postgres postgres [.] ExecStoreVirtualTuple\n 0.45% postgres postgres [.] PageAddItemExtended\n\nSo that confirms that the parsing is a huge chunk of overhead with\ncurrent splitting into lines being the largest portion. Amdahl's law\nsays that splitting into tuples needs to be made fast before\nparallelizing makes any sense.\n\nRegards,\nAnts Aasma\n\n[1] https://www3.stats.govt.nz/2018census/Age-sex-by-ethnic-group-grouped-total-responses-census-usually-resident-population-counts-2006-2013-2018-Censuses-RC-TA-SA2-DHB.zip\n\n\n", "msg_date": "Wed, 26 Feb 2020 17:17:33 +0200", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Feb 26, 2020 at 8:47 PM Ants Aasma <ants@cybertec.at> wrote:\n>\n> On Tue, 25 Feb 2020 at 18:00, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> > Perhaps. I guess it'll depend on the CSV file (number of fields, ...),\n> > so I still think we need to do some measurements first. I'm willing to\n> > do that, but (a) I doubt I'll have time for that until after 2020-03,\n> > and (b) it'd be good to agree on some set of typical CSV files.\n>\n> I agree that getting a nice varied dataset would be nice. 
Including\n> things like narrow integer only tables, strings with newlines and\n> escapes in them, extremely wide rows.\n>\n> I tried to capture a quick profile just to see what it looks like.\n> Grabbed a random open data set from the web, about 800MB of narrow\n> rows CSV [1].\n>\n> Script:\n> CREATE TABLE census (year int,age int,ethnic int,sex int,area text,count text);\n> COPY census FROM '.../Data8277.csv' WITH (FORMAT 'csv', HEADER true);\n>\n> Profile:\n> # Samples: 59K of event 'cycles:u'\n> # Event count (approx.): 57644269486\n> #\n> # Overhead Command Shared Object Symbol\n> # ........ ........ ..................\n> .......................................\n> #\n> 18.24% postgres postgres [.] CopyReadLine\n> 9.23% postgres postgres [.] NextCopyFrom\n> 8.87% postgres postgres [.] NextCopyFromRawFields\n> 5.82% postgres postgres [.] pg_verify_mbstr_len\n> 5.45% postgres postgres [.] pg_strtoint32\n> 4.16% postgres postgres [.] heap_fill_tuple\n> 4.03% postgres postgres [.] heap_compute_data_size\n> 3.83% postgres postgres [.] CopyFrom\n> 3.78% postgres postgres [.] AllocSetAlloc\n> 3.53% postgres postgres [.] heap_form_tuple\n> 2.96% postgres postgres [.] InputFunctionCall\n> 2.89% postgres libc-2.30.so [.] __memmove_avx_unaligned_erms\n> 1.82% postgres libc-2.30.so [.] __strlen_avx2\n> 1.72% postgres postgres [.] AllocSetReset\n> 1.72% postgres postgres [.] RelationPutHeapTuple\n> 1.47% postgres postgres [.] heap_prepare_insert\n> 1.31% postgres postgres [.] heap_multi_insert\n> 1.25% postgres postgres [.] textin\n> 1.24% postgres postgres [.] int4in\n> 1.05% postgres postgres [.] tts_buffer_heap_clear\n> 0.85% postgres postgres [.] pg_any_to_server\n> 0.80% postgres postgres [.] pg_comp_crc32c_sse42\n> 0.77% postgres postgres [.] cstring_to_text_with_len\n> 0.69% postgres postgres [.] AllocSetFree\n> 0.60% postgres postgres [.] appendBinaryStringInfo\n> 0.55% postgres postgres [.] tts_buffer_heap_materialize.part.0\n> 0.54% postgres postgres [.] 
palloc\n> 0.54% postgres libc-2.30.so [.] __memmove_avx_unaligned\n> 0.51% postgres postgres [.] palloc0\n> 0.51% postgres postgres [.] pg_encoding_max_length\n> 0.48% postgres postgres [.] enlargeStringInfo\n> 0.47% postgres postgres [.] ExecStoreVirtualTuple\n> 0.45% postgres postgres [.] PageAddItemExtended\n>\n> So that confirms that the parsing is a huge chunk of overhead with\n> current splitting into lines being the largest portion. Amdahl's law\n> says that splitting into tuples needs to be made fast before\n> parallelizing makes any sense.\n>\n\nI have ran very simple case on table with 2 indexes and I can see a\nlot of time is spent in index insertion. I agree that there is a good\namount of time spent in tokanizing but it is not very huge compared to\nindex insertion.\n\nI have expanded the time spent in the CopyFrom function from my perf\nreport and pasted here. We can see that a lot of time is spent in\nExecInsertIndexTuples(77%). I agree that we need to further evaluate\nthat out of which how much is I/O vs CPU operations. 
But, the point I\nwant to make is that it's not true for all the cases that parsing is\ntaking maximum amout of time.\n\n - 99.50% CopyFrom\n - 82.90% CopyMultiInsertInfoFlush\n - 82.85% CopyMultiInsertBufferFlush\n + 77.68% ExecInsertIndexTuples\n + 3.74% table_multi_insert\n + 0.89% ExecClearTuple\n - 12.54% NextCopyFrom\n - 7.70% NextCopyFromRawFields\n - 5.72% CopyReadLine\n 3.96% CopyReadLineText\n + 1.49% pg_any_to_server\n 1.86% CopyReadAttributesCSV\n + 3.68% InputFunctionCall\n + 2.11% ExecMaterializeSlot\n + 0.94% MemoryContextReset\n\nMy test:\n-- Prepare:\nCREATE TABLE t (a int, b int, c varchar);\ninsert into t select i,i, 'aaaaaaaaaaaaaaaaaaaaaaaa' from\ngenerate_series(1,10000000) as i;\ncopy t to '/home/dilipkumar/a.csv' WITH (FORMAT 'csv', HEADER true);\ntruncate table t;\ncreate index idx on t(a);\ncreate index idx1 on t(c);\n\n-- Test CopyFrom and measure with perf:\ncopy t from '/home/dilipkumar/a.csv' WITH (FORMAT 'csv', HEADER true);\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 29 Feb 2020 14:12:50 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Feb 26, 2020 at 4:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Feb 25, 2020 at 9:30 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> > On Sun, Feb 23, 2020 at 05:09:51PM -0800, Andres Freund wrote:\n> > >Hi,\n> > >\n> > >> The one piece of information I'm missing here is at least a very\nrough\n> > >> quantification of the individual steps of CSV processing - for\nexample\n> > >> if parsing takes only 10% of the time, it's pretty pointless to\nstart by\n> > >> parallelising this part and we should focus on the rest. If it's 50%\nit\n> > >> might be a different story. 
Has anyone done any measurements?\n> > >\n> > >Not recently, but I'm pretty sure that I've observed CSV parsing to be\n> > >way more than 10%.\n> > >\n> >\n> > Perhaps. I guess it'll depend on the CSV file (number of fields, ...),\n> > so I still think we need to do some measurements first.\n> >\n>\n> Agreed.\n>\n> > I'm willing to\n> > do that, but (a) I doubt I'll have time for that until after 2020-03,\n> > and (b) it'd be good to agree on some set of typical CSV files.\n> >\n>\n> Right, I don't know what is the best way to define that. I can think\n> of the below tests.\n>\n> 1. A table with 10 columns (with datatypes as integers, date, text).\n> It has one index (unique/primary). Load with 1 million rows (basically\n> the data should be probably 5-10 GB).\n> 2. A table with 10 columns (with datatypes as integers, date, text).\n> It has three indexes, one index can be (unique/primary). Load with 1\n> million rows (basically the data should be probably 5-10 GB).\n> 3. A table with 10 columns (with datatypes as integers, date, text).\n> It has three indexes, one index can be (unique/primary). It has before\n> and after trigeers. Load with 1 million rows (basically the data\n> should be probably 5-10 GB).\n> 4. A table with 10 columns (with datatypes as integers, date, text).\n> It has five or six indexes, one index can be (unique/primary). 
Load\n> with 1 million rows (basically the data should be probably 5-10 GB).\n>\n\nI have tried to capture the execution time taken for 3 scenarios which I\nfelt could give a fair idea:\nTest1 (Table with 3 indexes and 1 trigger)\nTest2 (Table with 2 indexes)\nTest3 (Table without indexes/triggers)\n\nI have captured the following details:\nFile Read time - time taken to read the file from CopyGetData function.\nRead line Time - time taken to read line from NextCopyFrom function(read\ntime & tokenise time excluded)\nTokenize Time - time taken to tokenize the contents from\nNextCopyFromRawFields function.\nData Execution Time - remaining execution time from the total time\n\nThe execution breakdown for the tests is given below (all times in seconds):\n\nTest     Total Time   File Read   Read line/Buffer Read   Tokenize   Data Execution\nTest1      1693.369       0.256                  34.173      5.578         1653.362\nTest2       736.096       0.288                  39.762      6.525          689.521\nTest3       112.06        0.266                  39.189      6.433           66.172\n\nSteps for the scenarios:\nTest1(Table with 3 indexes and 1 trigger):\nCREATE TABLE census2 (year int,age int,ethnic int,sex int,area text,count\ntext);\nCREATE TABLE census3(year int,age int,ethnic int,sex int,area text,count\ntext);\n\nCREATE INDEX idx1_census2 on census2(year);\nCREATE INDEX idx2_census2 on census2(age);\nCREATE INDEX idx3_census2 on census2(ethnic);\n\nCREATE or REPLACE FUNCTION census2_afterinsert()\nRETURNS TRIGGER\nAS $$\nBEGIN\n INSERT INTO census3 SELECT * FROM census2 limit 1;\n RETURN NEW;\nEND;\n$$\nLANGUAGE plpgsql;\n\nCREATE TRIGGER census2_trigger AFTER INSERT ON census2 FOR EACH ROW\nEXECUTE PROCEDURE census2_afterinsert();\nCOPY census2 FROM 'Data8277.csv' WITH (FORMAT 'csv', HEADER true);\n\nTest2 (Table with 2 indexes):\nCREATE TABLE census1 (year int,age int,ethnic int,sex int,area text,count\ntext);\nCREATE INDEX idx1_census1 on census1(year);\nCREATE INDEX idx2_census1 on census1(age);\nCOPY census1 FROM 'Data8277.csv' WITH (FORMAT 'csv', HEADER true);\n\nTest3 (Table 
without indexes/triggers):\nCREATE TABLE census (year int,age int,ethnic int,sex int,area text,count\ntext);\nCOPY census FROM 'Data8277.csv' WITH (FORMAT 'csv', HEADER true);\n\nNote: The Data8277.csv used was the same data that Ants aasma had used.\n\n From the above result we could infer that Read line will have to be done\nsequentially. Read line time takes about 2.01%, 5.40% and 34.97%of the\ntotal time. I felt we will be able to parallelise the remaining phases of\nthe copy. The performance improvement will vary based on the\nscenario(indexes/triggers), it will be proportionate to the number of\nindexes and triggers. Read line can also be parallelised in txt format(non\ncsv). I feel parallelising copy could give significant improvement in quite\nsome scenarios.\n\nFurther I'm planning to see how the execution will be for toast table. I'm\nalso planning to do test on RAM disk where I will configure the data on RAM\ndisk, so that we can further eliminate the I/O cost.\n\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 3 Mar 2020 11:25:13 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Feb 26, 2020 at 8:47 PM Ants Aasma <ants@cybertec.at> wrote:\n>\n> On Tue, 25 Feb 2020 at 18:00, Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n> > Perhaps. I guess it'll depend on the CSV file (number of fields, ...),\n> > so I still think we need to do some measurements first. I'm willing to\n> > do that, but (a) I doubt I'll have time for that until after 2020-03,\n> > and (b) it'd be good to agree on some set of typical CSV files.\n>\n> I agree that getting a nice varied dataset would be nice. 
Including\n> things like narrow integer only tables, strings with newlines and\n> escapes in them, extremely wide rows.\n>\n> I tried to capture a quick profile just to see what it looks like.\n> Grabbed a random open data set from the web, about 800MB of narrow\n> rows CSV [1].\n>\n> Script:\n> CREATE TABLE census (year int,age int,ethnic int,sex int,area text,count\ntext);\n> COPY census FROM '.../Data8277.csv' WITH (FORMAT 'csv', HEADER true);\n>\n> Profile:\n> # Samples: 59K of event 'cycles:u'\n> # Event count (approx.): 57644269486\n> #\n> # Overhead Command Shared Object Symbol\n> # ........ ........ ..................\n> .......................................\n> #\n> 18.24% postgres postgres [.] CopyReadLine\n> 9.23% postgres postgres [.] NextCopyFrom\n> 8.87% postgres postgres [.] NextCopyFromRawFields\n> 5.82% postgres postgres [.] pg_verify_mbstr_len\n> 5.45% postgres postgres [.] pg_strtoint32\n> 4.16% postgres postgres [.] heap_fill_tuple\n> 4.03% postgres postgres [.] heap_compute_data_size\n> 3.83% postgres postgres [.] CopyFrom\n> 3.78% postgres postgres [.] AllocSetAlloc\n> 3.53% postgres postgres [.] heap_form_tuple\n> 2.96% postgres postgres [.] InputFunctionCall\n> 2.89% postgres libc-2.30.so [.] __memmove_avx_unaligned_erms\n> 1.82% postgres libc-2.30.so [.] __strlen_avx2\n> 1.72% postgres postgres [.] AllocSetReset\n> 1.72% postgres postgres [.] RelationPutHeapTuple\n> 1.47% postgres postgres [.] heap_prepare_insert\n> 1.31% postgres postgres [.] heap_multi_insert\n> 1.25% postgres postgres [.] textin\n> 1.24% postgres postgres [.] int4in\n> 1.05% postgres postgres [.] tts_buffer_heap_clear\n> 0.85% postgres postgres [.] pg_any_to_server\n> 0.80% postgres postgres [.] pg_comp_crc32c_sse42\n> 0.77% postgres postgres [.] cstring_to_text_with_len\n> 0.69% postgres postgres [.] AllocSetFree\n> 0.60% postgres postgres [.] appendBinaryStringInfo\n> 0.55% postgres postgres [.]\ntts_buffer_heap_materialize.part.0\n> 0.54% postgres postgres [.] 
palloc\n>      0.54%  postgres  libc-2.30.so        [.] __memmove_avx_unaligned\n>      0.51%  postgres  postgres            [.] palloc0\n>      0.51%  postgres  postgres            [.] pg_encoding_max_length\n>      0.48%  postgres  postgres            [.] enlargeStringInfo\n>      0.47%  postgres  postgres            [.] ExecStoreVirtualTuple\n>      0.45%  postgres  postgres            [.] PageAddItemExtended\n>\n> So that confirms that the parsing is a huge chunk of overhead with\n> current splitting into lines being the largest portion. Amdahl's law\n> says that splitting into tuples needs to be made fast before\n> parallelizing makes any sense.\n>\n\nI had taken perf report with the same test data that you had used, I was\ngetting the following results:\n.....\n+   99.61%     0.00%  postgres  postgres            [.] PortalRun\n+   99.61%     0.00%  postgres  postgres            [.] PortalRunMulti\n+   99.61%     0.00%  postgres  postgres            [.] PortalRunUtility\n+   99.61%     0.00%  postgres  postgres            [.] ProcessUtility\n+   99.61%     0.00%  postgres  postgres            [.] standard_ProcessUtility\n+   99.61%     0.00%  postgres  postgres            [.] DoCopy\n+   99.30%     0.94%  postgres  postgres            [.] CopyFrom\n+   51.61%     7.76%  postgres  postgres            [.] NextCopyFrom\n+   23.66%     0.01%  postgres  postgres            [.] CopyMultiInsertInfoFlush\n+   23.61%     0.28%  postgres  postgres            [.] CopyMultiInsertBufferFlush\n+   21.99%     1.02%  postgres  postgres            [.] NextCopyFromRawFields\n+   19.79%     0.01%  postgres  postgres            [.] table_multi_insert\n+   19.32%     3.00%  postgres  postgres            [.] heap_multi_insert\n+   18.27%     2.44%  postgres  postgres            [.] InputFunctionCall\n+   15.19%     0.89%  postgres  postgres            [.] CopyReadLine\n+   13.05%     0.18%  postgres  postgres            [.] ExecMaterializeSlot\n+   13.00%     0.55%  postgres  postgres            [.] tts_buffer_heap_materialize\n+   12.31%     1.77%  postgres  postgres            [.] heap_form_tuple\n+   10.43%     0.45%  postgres  postgres            [.] int4in\n+   10.18%     8.92%  postgres  postgres            [.] CopyReadLineText\n......\n\nIn my results I observed execution table_multi_insert was nearly 20%. 
Also\nI felt that once we have made a few tuples from CopyReadLine, the parallel\nworkers should be able to start consuming and processing the data. We\nneed not wait for the complete tokenisation to be finished. Once a few tuples\nare tokenised, parallel workers should start consuming the data in parallel\nwhile tokenisation happens simultaneously. In this way, once the copy is\ndone in parallel, the total execution time should be the CopyReadLine time plus a delta\nprocessing time.\n\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 3 Mar 2020 11:44:05 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "I have got the execution breakdown for a few scenarios with a normal disk and a\nRAM disk.\n\nExecution breakup on normal disk (times in seconds):\nTest                        | Total Time | File Read Time | copyreadline Time | Remaining Exec Time | Read line %\nTest1(3 index + 1 trigger)  | 2099.017   | 0.311          | 10.256            | 2088.45             | 0.4886096682\nTest2(2 index)              | 657.994    | 0.303          | 10.171            | 647.52              | 1.545758776\nTest3(no index, no trigger) | 112.41     | 0.296          | 10.996            | 101.118             | 9.782047861\nTest4(toast)                | 360.028    | 1.43           | 46.556            | 312.042             | 12.93121646\n\nExecution breakup on RAM disk (times in seconds):\nTest                        | Total Time | File Read Time | copyreadline Time | Remaining Exec Time | Read line %\nTest1(3 index + 1 trigger)  | 1571.558   | 0.259          | 6.986             | 1564.313            | 0.4445270235\nTest2(2 index)              | 369.942    | 0.263          | 6.848             | 362.831             | 1.851100983\nTest3(no index, no trigger) | 54.077     | 0.239          | 6.805             | 47.033              | 12.58390813\nTest4(toast)                | 96.323     | 0.918          | 26.603            | 68.802              | 27.61853348\n\nSteps for the scenarios:\n\nTest1 (Table with 3 indexes and 1 trigger):\nCREATE TABLE census2 (year int, age int, ethnic int, sex int, area text, count text);\nCREATE TABLE census3 (year int, age int, ethnic int, sex int, area text, count text);\nCREATE INDEX idx1_census2 on census2(year);\nCREATE INDEX idx2_census2 on census2(age);\nCREATE INDEX idx3_census2 on census2(ethnic);\nCREATE OR REPLACE FUNCTION census2_afterinsert() RETURNS TRIGGER AS $$\nBEGIN\n  INSERT INTO census3 SELECT * FROM census2 LIMIT 1;\n  RETURN NEW;\nEND;\n$$ LANGUAGE plpgsql;\nCREATE TRIGGER census2_trigger AFTER INSERT ON census2 FOR EACH ROW EXECUTE PROCEDURE census2_afterinsert();\nCOPY census2 FROM 'Data8277.csv' WITH (FORMAT 'csv', HEADER true);\n\nTest2 (Table with 2 indexes):\nCREATE TABLE census1 (year int, age int, ethnic int, sex int, area text, count text);\nCREATE INDEX idx1_census1 on census1(year);\nCREATE INDEX idx2_census1 on census1(age);\nCOPY census1 FROM 'Data8277.csv' WITH (FORMAT 'csv', HEADER true);\n\nTest3 (Table without indexes/triggers):\nCREATE TABLE census (year int, age int, ethnic int, sex int, area text, count text);\nCOPY census FROM 'Data8277.csv' WITH (FORMAT 'csv', HEADER true);\n\nThe random open data set from the web, about 800MB of narrow rows CSV [1], was\nused in the above tests, the same data which Ants Aasma had used.\n\nTest4 (Toast table):\nCREATE TABLE indtoasttest(descr text, cnt int DEFAULT 0, f1 text, f2 text);\nALTER TABLE indtoasttest ALTER COLUMN f1 SET STORAGE EXTERNAL;\nALTER TABLE indtoasttest ALTER COLUMN f2 SET STORAGE EXTERNAL;\n-- inserted 262144 records\nCOPY indtoasttest TO '/mnt/magnetic/vignesh.c/postgres/toast_data3.csv' WITH (FORMAT 'csv', HEADER true);\nCREATE TABLE indtoasttest1(descr text, cnt int DEFAULT 0, f1 text, f2 text);\nALTER TABLE indtoasttest1 ALTER COLUMN f1 SET STORAGE EXTERNAL;\nALTER TABLE indtoasttest1 ALTER COLUMN f2 SET STORAGE EXTERNAL;\nCOPY indtoasttest1 FROM '/mnt/magnetic/vignesh.c/postgres/toast_data3.csv' WITH (FORMAT 'csv', HEADER true);\n\nWe could infer that the Read line time cannot be parallelized, mainly\nbecause if the data has a quote present we will not be able to differentiate\nwhether it is part of the previous record or of the current record. The rest\nof the execution time can be parallelized. 
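The quote ambiguity just described (a newline inside a quoted CSV field is not a record boundary) can be shown with a small, hypothetical sketch using Python's csv module; this is a language-neutral illustration, not PostgreSQL code:\n\n```python\nimport csv\nimport io\n\n# One CSV record with a quoted field containing a newline,\n# followed by a second, ordinary record.\ndata = 'a,\"b\\nc\",d\\ne,f,g\\n'\n\n# A naive scanner that splits on newlines over-counts the records,\n# because it cannot know that the first newline is inside quotes.\nnaive_record_count = data.count(\"\\n\")\n\n# A real CSV parser tracks the quote state and finds the true boundaries.\nreal_rows = list(csv.reader(io.StringIO(data)))\n\nassert naive_record_count == 3   # naive split sees three \"lines\"\nassert len(real_rows) == 2       # the actual record count is two\n```\n\nSince the quote state at any byte depends on all of the bytes before it, the line-splitting phase has to stay sequential, which is what the timings above isolate.\n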
The Read line time takes about 0.5%,\n1.5%, 9.8% and 12.9% of the total time. We could parallelize the remaining\nphases of the copy. The performance improvement will vary based on the\nscenario (indexes/triggers); it will be proportionate to the number of\nindexes and triggers. Read line can also be parallelized in txt format (non-csv).\nWe feel parallelizing copy could give a significant improvement in many\nscenarios.\n\nAttached, for reference, is the patch which was used to capture the execution time\nbreakup.\n\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Tue, Mar 3, 2020 at 11:44 AM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Wed, Feb 26, 2020 at 8:47 PM Ants Aasma <ants@cybertec.at> wrote:\n> >\n> > On Tue, 25 Feb 2020 at 18:00, Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> wrote:\n> > > Perhaps. I guess it'll depend on the CSV file (number of fields, ...),\n> > > so I still think we need to do some measurements first. I'm willing to\n> > > do that, but (a) I doubt I'll have time for that until after 2020-03,\n> > > and (b) it'd be good to agree on some set of typical CSV files.\n> >\n> > I agree that getting a nice varied dataset would be nice. Including\n> > things like narrow integer only tables, strings with newlines and\n> > escapes in them, extremely wide rows.\n> >\n> > I tried to capture a quick profile just to see what it looks like.\n> > Grabbed a random open data set from the web, about 800MB of narrow\n> > rows CSV [1].\n> >\n> > Script:\n> > CREATE TABLE census (year int,age int,ethnic int,sex int,area text,count\n> text);\n> > COPY census FROM '.../Data8277.csv' WITH (FORMAT 'csv', HEADER true);\n> >\n> > Profile:\n> > # Samples: 59K of event 'cycles:u'\n> > # Event count (approx.): 57644269486\n> > #\n> > # Overhead Command Shared Object Symbol\n> > # ........ ........ ..................\n> > .......................................\n> > #\n> > 18.24% postgres postgres [.] CopyReadLine\n> > 9.23% postgres postgres [.] 
NextCopyFrom\n> > 8.87% postgres postgres [.] NextCopyFromRawFields\n> > 5.82% postgres postgres [.] pg_verify_mbstr_len\n> > 5.45% postgres postgres [.] pg_strtoint32\n> > 4.16% postgres postgres [.] heap_fill_tuple\n> > 4.03% postgres postgres [.] heap_compute_data_size\n> > 3.83% postgres postgres [.] CopyFrom\n> > 3.78% postgres postgres [.] AllocSetAlloc\n> > 3.53% postgres postgres [.] heap_form_tuple\n> > 2.96% postgres postgres [.] InputFunctionCall\n> > 2.89% postgres libc-2.30.so [.]\n> __memmove_avx_unaligned_erms\n> > 1.82% postgres libc-2.30.so [.] __strlen_avx2\n> > 1.72% postgres postgres [.] AllocSetReset\n> > 1.72% postgres postgres [.] RelationPutHeapTuple\n> > 1.47% postgres postgres [.] heap_prepare_insert\n> > 1.31% postgres postgres [.] heap_multi_insert\n> > 1.25% postgres postgres [.] textin\n> > 1.24% postgres postgres [.] int4in\n> > 1.05% postgres postgres [.] tts_buffer_heap_clear\n> > 0.85% postgres postgres [.] pg_any_to_server\n> > 0.80% postgres postgres [.] pg_comp_crc32c_sse42\n> > 0.77% postgres postgres [.] cstring_to_text_with_len\n> > 0.69% postgres postgres [.] AllocSetFree\n> > 0.60% postgres postgres [.] appendBinaryStringInfo\n> > 0.55% postgres postgres [.]\n> tts_buffer_heap_materialize.part.0\n> > 0.54% postgres postgres [.] palloc\n> > 0.54% postgres libc-2.30.so [.] __memmove_avx_unaligned\n> > 0.51% postgres postgres [.] palloc0\n> > 0.51% postgres postgres [.] pg_encoding_max_length\n> > 0.48% postgres postgres [.] enlargeStringInfo\n> > 0.47% postgres postgres [.] ExecStoreVirtualTuple\n> > 0.45% postgres postgres [.] PageAddItemExtended\n> >\n> > So that confirms that the parsing is a huge chunk of overhead with\n> > current splitting into lines being the largest portion. 
Amdahl's law\n> > says that splitting into tuples needs to be made fast before\n> > parallelizing makes any sense.\n> >\n>\n> I had taken perf report with the same test data that you had used, I was\n> getting the following results:\n> .....\n> + 99.61% 0.00% postgres postgres [.] PortalRun\n> + 99.61% 0.00% postgres postgres [.] PortalRunMulti\n> + 99.61% 0.00% postgres postgres [.] PortalRunUtility\n> + 99.61% 0.00% postgres postgres [.] ProcessUtility\n> + 99.61% 0.00% postgres postgres [.]\n> standard_ProcessUtility\n> + 99.61% 0.00% postgres postgres [.] DoCopy\n> + 99.30% 0.94% postgres postgres [.] CopyFrom\n> + 51.61% 7.76% postgres postgres [.] NextCopyFrom\n> + 23.66% 0.01% postgres postgres [.]\n> CopyMultiInsertInfoFlush\n> + 23.61% 0.28% postgres postgres [.]\n> CopyMultiInsertBufferFlush\n> + 21.99% 1.02% postgres postgres [.]\n> NextCopyFromRawFields\n>\n>\n> *+ 19.79% 0.01% postgres postgres [.]\n> table_multi_insert+ 19.32% 3.00% postgres postgres [.]\n> heap_multi_insert*+ 18.27% 2.44% postgres postgres [.]\n> InputFunctionCall\n>\n> *+ 15.19% 0.89% postgres postgres [.] CopyReadLine*+\n> 13.05% 0.18% postgres postgres [.] ExecMaterializeSlot\n> + 13.00% 0.55% postgres postgres [.]\n> tts_buffer_heap_materialize\n> + 12.31% 1.77% postgres postgres [.] heap_form_tuple\n> + 10.43% 0.45% postgres postgres [.] int4in\n> + 10.18% 8.92% postgres postgres [.] CopyReadLineText\n> ......\n>\n> In my results I observed execution table_multi_insert was nearly 20%. Also\n> I felt like once we have made few tuples from CopyReadLine, the parallel\n> workers should be able to start consuming the data and process the data. We\n> need not wait for the complete tokenisation to be finished. Once few tuples\n> are tokenised parallel workers should start consuming the data parallelly\n> and tokenisation should happen simultaneously. 
In this way once the copy is\n> done parallelly total execution time should be CopyReadLine Time + delta\n> processing time.\n>\n> Thoughts?\n>\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>", "msg_date": "Thu, 12 Mar 2020 18:39:07 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Mar 12, 2020 at 6:39 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n\nExisting parallel copy code flow. Copy supports copy operation from\ncsv, txt & bin format file. For processing csv & text format, it will\nread 64kb chunk or lesser size if in case the file has lesser size\ncontents in the input file. Server will then read one tuple of data\nand do the processing of the tuple. If the above tuple that is\ngenerated was less than 64kb data, then the server will try to\ngenerate another tuple for processing from the remaining unprocessed\ndata. If it is not able to generate one tuple from the unprocessed\ndata it will do a further 64kb data read or lesser remaining size that\nis present in the file and send the tuple for processing. This process\nis repeated till the complete file is processed. For processing bin\nformat file the flow is slightly different. Server will read the\nnumber of columns that are present. Then read the column size data and\nthen read the actual column contents, repeat this for all the columns.\nServer will then process the tuple that is generated. This process is\nrepeated for all the remaining tuples in the bin file. The tuple\nprocessing flow is the same in all the formats. Currently all the\noperations happen sequentially. This project will help in\nparallelizing the copy operation.\n\nI'm planning to do the POC of parallel copy with the below design:\nProposed Syntax:\nCOPY table_name FROM ‘copy_file' WITH (FORMAT ‘format’, PARALLEL ‘workers’);\nUsers can specify the number of workers that must be used for copying\nthe data in parallel. 
Here ‘workers’ is the number of workers that\nmust be used for parallel copy operation apart from the leader. Leader\nis responsible for reading the data from the input file and generating\nthe work for the workers. Leader will start a transaction and share\nthis transaction with the workers. All workers will be using the same\ntransaction to insert the records. Leader will create a circular queue\nand share it across the workers. The circular queue will be present in\nDSM. Leader will be using a fixed size queue to share the contents\nbetween the leader and the workers. Currently we will have 100\nelements present in the queue. This will be created before the workers\nare started and shared with the workers. The data structures that are\nrequired by the parallel workers will be initialized by the leader,\nthe size required in dsm will be calculated and the necessary keys\nwill be loaded in the DSM. The specified number of workers will then\nbe launched. Leader will read the table data from the file and copy\nthe contents to the queue element by element. Each element in the\nqueue will have 64K size DSA. This DSA will be used to store tuple\ncontents from the file. The leader will try to copy as much content as\npossible within one 64K DSA queue element. We intend to store at least\none tuple in each queue element. There are some cases where the 64K\nspace may not be enough to store a single tuple. Mostly in cases where\nthe table has toast data present and the single tuple can be more than\n64K size. In these scenarios we will extend the DSA space accordingly.\nWe cannot change the size of the dsm once the workers are launched.\nWhereas in case of DSA we can free the dsa pointer and reallocate the\ndsa pointer based on the memory size required. This is the very reason\nfor choosing DSA over DSM for storing the data that must be inserted\ninto the relation. Leader will keep on loading the data into the queue\ntill the queue becomes full. 
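As a rough illustration of the fill/drain protocol just described (the queue size of 100 and the 25% threshold come from this proposal; the single-process toy model itself is a hypothetical sketch, not the patch's shared-memory code):\n\n```python\nfrom collections import deque\n\nclass CopyQueue:\n    \"\"\"Toy model of the proposed fixed-size ring of 64KB elements.\"\"\"\n\n    def __init__(self, size=100):\n        self.size = size\n        self.items = deque()\n\n    def full(self):\n        return len(self.items) == self.size\n\n    def free_fraction(self):\n        return 1.0 - len(self.items) / self.size\n\n    def put(self, chunk):\n        # Leader side: load one chunk of parsed input into the queue.\n        assert not self.full()\n        self.items.append(chunk)\n\n    def get(self):\n        # Worker side: consume the oldest chunk.\n        return self.items.popleft()\n\nq = CopyQueue(size=4)\nfor i in range(4):\n    q.put(f\"chunk-{i}\")\nassert q.full()                   # leader would now switch to the worker role\nq.get()                           # workers drain one element: 25% is free\nassert q.free_fraction() == 0.25  # leader would switch back to producing\n```\n\nIn the real design the put/get sides run in different processes over DSM, so the interesting part is exactly the role switch that the two asserts mark.\n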
The leader will transform its role into a\nworker either when the queue is full or when the complete file has been\nprocessed. Once the queue is full, the leader will switch its role to\nbecome a worker, and will continue to act as a worker till\n25% of the elements in the queue are consumed by the workers. Once\nthere is at least 25% space available in the queue, the leader, who was\nworking as a worker, will switch its role back to become the leader\nagain. The above process of filling the queue will be continued by the\nleader until the whole file is processed. The leader will wait until the\nrespective workers finish processing the queue elements. The copy from\nfunctionality is also used during initdb operations, where the\ncopy is intended to be performed in single mode, and the user can still\ncontinue running in non-parallel mode. In case of non-parallel mode,\nmemory allocation will happen using palloc instead of DSM/DSA, and most\nof the flow will be the same in both the parallel and non-parallel cases.\n\nWe had a couple of options for the way in which queue elements can be stored.\nOption 1: Each element (DSA chunk) will contain tuples such that each\ntuple will be preceded by the length of the tuple. So the tuples will\nbe arranged like (Length of tuple-1, tuple-1), (Length of tuple-2,\ntuple-2), .... Or Option 2: Each element (DSA chunk) will contain only\ntuples (tuple-1), (tuple-2), ..... And we will have a second\nring-buffer which contains a start-offset or length of each tuple. The\nold design used to generate one tuple of data and process tuple by\ntuple. In the new design, the server will generate multiple tuples of\ndata per queue element. The worker will then process the data tuple by\ntuple. As we are processing the data tuple by tuple, I felt both of\nthe options are almost the same. 
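For concreteness, Option 1's length-prefixed layout could look something like the following sketch (a hypothetical Python illustration; the 64KB chunk size is the proposal's, the helper names are made up for the example):\n\n```python\nimport struct\n\nCHUNK_SIZE = 65536  # one 64KB queue element, as in the proposal\n\ndef pack_chunk(tuples, chunk_size=CHUNK_SIZE):\n    \"\"\"Pack as many (length, tuple) pairs as fit into one chunk (Option 1).\"\"\"\n    buf = bytearray()\n    packed = []\n    for t in tuples:\n        rec = struct.pack(\"<I\", len(t)) + t\n        if len(buf) + len(rec) > chunk_size:\n            break  # this tuple goes into the next queue element\n        buf += rec\n        packed.append(t)\n    return bytes(buf), packed\n\ndef unpack_chunk(chunk):\n    \"\"\"Worker side: walk the chunk, reading each length prefix.\"\"\"\n    out, pos = [], 0\n    while pos < len(chunk):\n        (n,) = struct.unpack_from(\"<I\", chunk, pos)\n        out.append(chunk[pos + 4:pos + 4 + n])\n        pos += 4 + n\n    return out\n\nrows = [b\"1,foo\", b\"2,bar\", b\"3,baz\"]\nchunk, packed = pack_chunk(rows)\nassert packed == rows\nassert unpack_chunk(chunk) == rows\n```\n\nNote that a single tuple larger than the chunk size packs nothing at all here, which is exactly the over-64KB toast case above where the DSA space has to be extended instead.\n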
However, Design 1 was chosen over\nDesign 2 as we can save some space that would have been required by another\nvariable in each element of the queue.\n\nThe parallel workers will read the tuples from the queue and do the\nfollowing operations: a) where clause\nhandling, b) convert tuple to columns, c) add default null values for\nthe missing columns that are not present in that record, d) find the\npartition if it is a partitioned table, e) before row insert triggers and\nconstraints, f) insertion of the data. The rest of the flow is the same as\nthe existing code.\n\nEnhancements after the POC is done:\nInitially we plan to use the number of workers based on the worker\ncount the user has specified; later we will do some experiments and think\nof an approach to choose workers automatically after processing sample\ncontents from the file.\nInitially we plan to use 100 elements in the queue; later we will\nexperiment to find the right size for the queue once the basic patch\nis ready.\nInitially we plan to generate the transaction from the leader and\nshare it across to the workers; later we will change this in such a\nway that the first process that does an insert operation will\ngenerate the transaction and share it with the rest of them.\n\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Apr 2020 10:54:28 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, 7 Apr 2020 at 08:24, vignesh C <vignesh21@gmail.com> wrote:\n> Leader will create a circular queue\n> and share it across the workers. The circular queue will be present in\n> DSM. Leader will be using a fixed size queue to share the contents\n> between the leader and the workers. Currently we will have 100\n> elements present in the queue. This will be created before the workers\n> are started and shared with the workers. 
The data structures that are\n> required by the parallel workers will be initialized by the leader,\n> the size required in dsm will be calculated and the necessary keys\n> will be loaded in the DSM. The specified number of workers will then\n> be launched. Leader will read the table data from the file and copy\n> the contents to the queue element by element. Each element in the\n> queue will have 64K size DSA. This DSA will be used to store tuple\n> contents from the file. The leader will try to copy as much content as\n> possible within one 64K DSA queue element. We intend to store at least\n> one tuple in each queue element. There are some cases where the 64K\n> space may not be enough to store a single tuple. Mostly in cases where\n> the table has toast data present and the single tuple can be more than\n> 64K size. In these scenarios we will extend the DSA space accordingly.\n> We cannot change the size of the dsm once the workers are launched.\n> Whereas in case of DSA we can free the dsa pointer and reallocate the\n> dsa pointer based on the memory size required. This is the very reason\n> for choosing DSA over DSM for storing the data that must be inserted\n> into the relation.\n\nI think the element based approach and requirement that all tuples fit\ninto the queue makes things unnecessarily complex. The approach I\ndetailed earlier allows for tuples to be bigger than the buffer. In\nthat case a worker will claim the long tuple from the ring queue of\ntuple start positions, and starts copying it into its local line_buf.\nThis can wrap around the buffer multiple times until the next start\nposition shows up. 
At that point this worker can proceed with\ninserting the tuple and the next worker will claim the next tuple.\n\nThis way nothing needs to be resized, there is no risk of a file with\nhuge tuples running the system out of memory because each element will\nbe reallocated to be huge and the number of elements is not something\nthat has to be tuned.\n\n> We had a couple of options for the way in which queue elements can be stored.\n> Option 1: Each element (DSA chunk) will contain tuples such that each\n> tuple will be preceded by the length of the tuple. So the tuples will\n> be arranged like (Length of tuple-1, tuple-1), (Length of tuple-2,\n> tuple-2), .... Or Option 2: Each element (DSA chunk) will contain only\n> tuples (tuple-1), (tuple-2), ..... And we will have a second\n> ring-buffer which contains a start-offset or length of each tuple. The\n> old design used to generate one tuple of data and process tuple by\n> tuple. In the new design, the server will generate multiple tuples of\n> data per queue element. The worker will then process data tuple by\n> tuple. As we are processing the data tuple by tuple, I felt both of\n> the options are almost the same. However Design1 was chosen over\n> Design 2 as we can save up on some space that was required by another\n> variable in each element of the queue.\n\nWith option 1 it's not possible to read input data into shared memory\nand there needs to be an extra memcpy in the time critical sequential\nflow of the leader. With option 2 data could be read directly into the\nshared memory buffer. 
With future async io support, reading and\nlooking for tuple boundaries could be performed concurrently.\n\n\nRegards,\nAnts Aasma\nCybertec\n\n\n", "msg_date": "Tue, 7 Apr 2020 16:38:33 +0300", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Apr 7, 2020 at 7:08 PM Ants Aasma <ants@cybertec.at> wrote:\n>\n> On Tue, 7 Apr 2020 at 08:24, vignesh C <vignesh21@gmail.com> wrote:\n> > Leader will create a circular queue\n> > and share it across the workers. The circular queue will be present in\n> > DSM. Leader will be using a fixed size queue to share the contents\n> > between the leader and the workers. Currently we will have 100\n> > elements present in the queue. This will be created before the workers\n> > are started and shared with the workers. The data structures that are\n> > required by the parallel workers will be initialized by the leader,\n> > the size required in dsm will be calculated and the necessary keys\n> > will be loaded in the DSM. The specified number of workers will then\n> > be launched. Leader will read the table data from the file and copy\n> > the contents to the queue element by element. Each element in the\n> > queue will have 64K size DSA. This DSA will be used to store tuple\n> > contents from the file. The leader will try to copy as much content as\n> > possible within one 64K DSA queue element. We intend to store at least\n> > one tuple in each queue element. There are some cases where the 64K\n> > space may not be enough to store a single tuple. Mostly in cases where\n> > the table has toast data present and the single tuple can be more than\n> > 64K size. In these scenarios we will extend the DSA space accordingly.\n> > We cannot change the size of the dsm once the workers are launched.\n> > Whereas in case of DSA we can free the dsa pointer and reallocate the\n> > dsa pointer based on the memory size required. 
This is the very reason\n> > for choosing DSA over DSM for storing the data that must be inserted\n> > into the relation.\n>\n> I think the element based approach and requirement that all tuples fit\n> into the queue makes things unnecessarily complex. The approach I\n> detailed earlier allows for tuples to be bigger than the buffer. In\n> that case a worker will claim the long tuple from the ring queue of\n> tuple start positions, and starts copying it into its local line_buf.\n> This can wrap around the buffer multiple times until the next start\n> position shows up. At that point this worker can proceed with\n> inserting the tuple and the next worker will claim the next tuple.\n>\n\nIIUC, with the fixed size buffer, the parallelism might take a hit\nbecause till the worker copies the data from the shared buffer to its local\nbuffer the reader process won't be able to continue. I think\nsomewhat more leader-worker coordination is required with the\nfixed buffer size. However, as you pointed out, we can't allow the buffer to\nincrease to the max size possible for all tuples as that might require\na lot of memory. One idea could be that we allow it for the first such\ntuple, and then if any other element/chunk in the queue requires\nmore memory than the default 64KB, we will always fall back to using\nthe memory we have allocated for the first chunk. This will allow us to\nnot use more memory except for one tuple and won't hit parallelism\nmuch, as in many cases not all tuples will be so large.\n\nI think in the proposed approach a queue element is nothing but a way to\ndivide the work among workers based on size rather than based on the\nnumber of tuples. Say if we try to divide the work among workers\nbased on start offsets, it can be more tricky. 
Because it could lead\nto either a lot of contention if we choose, say, one offset\nper worker (basically copy the data for one tuple, process it and then\npick the next tuple) or probably an unequal division of work because some tuples can\nbe smaller and others can be bigger. I guess division based on size\nwould be a better idea. OTOH, I see the advantage of your approach as\nwell and I will think more on it.\n\n>\n> > We had a couple of options for the way in which queue elements can be stored.\n> > Option 1: Each element (DSA chunk) will contain tuples such that each\n> > tuple will be preceded by the length of the tuple. So the tuples will\n> > be arranged like (Length of tuple-1, tuple-1), (Length of tuple-2,\n> > tuple-2), .... Or Option 2: Each element (DSA chunk) will contain only\n> > tuples (tuple-1), (tuple-2), ..... And we will have a second\n> > ring-buffer which contains a start-offset or length of each tuple. The\n> > old design used to generate one tuple of data and process tuple by\n> > tuple. 
With future async io support, reading and\n> looking for tuple boundaries could be performed concurrently.\n>\n\nYeah, option-2 sounds better.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Apr 2020 16:42:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Apr 7, 2020 at 9:38 AM Ants Aasma <ants@cybertec.at> wrote:\n> I think the element based approach and requirement that all tuples fit\n> into the queue makes things unnecessarily complex. The approach I\n> detailed earlier allows for tuples to be bigger than the buffer. In\n> that case a worker will claim the long tuple from the ring queue of\n> tuple start positions, and starts copying it into its local line_buf.\n> This can wrap around the buffer multiple times until the next start\n> position shows up. At that point this worker can proceed with\n> inserting the tuple and the next worker will claim the next tuple.\n>\n> This way nothing needs to be resized, there is no risk of a file with\n> huge tuples running the system out of memory because each element will\n> be reallocated to be huge and the number of elements is not something\n> that has to be tuned.\n\n+1. This seems like the right way to do it.\n\n> > We had a couple of options for the way in which queue elements can be stored.\n> > Option 1: Each element (DSA chunk) will contain tuples such that each\n> > tuple will be preceded by the length of the tuple. So the tuples will\n> > be arranged like (Length of tuple-1, tuple-1), (Length of tuple-2,\n> > tuple-2), .... Or Option 2: Each element (DSA chunk) will contain only\n> > tuples (tuple-1), (tuple-2), ..... And we will have a second\n> > ring-buffer which contains a start-offset or length of each tuple. The\n> > old design used to generate one tuple of data and process tuple by\n> > tuple. 
In the new design, the server will generate multiple tuples of\n> > data per queue element. The worker will then process data tuple by\n> > tuple. As we are processing the data tuple by tuple, I felt both of\n> > the options are almost the same. However Design1 was chosen over\n> > Design 2 as we can save up on some space that was required by another\n> > variable in each element of the queue.\n>\n> With option 1 it's not possible to read input data into shared memory\n> and there needs to be an extra memcpy in the time critical sequential\n> flow of the leader. With option 2 data could be read directly into the\n> shared memory buffer. With future async io support, reading and\n> looking for tuple boundaries could be performed concurrently.\n\nBut option 2 still seems significantly worse than your proposal above, right?\n\nI really think we don't want a single worker in charge of finding\ntuple boundaries for everybody. That adds a lot of unnecessary\ninter-process communication and synchronization. Each process should\njust get the next tuple starting after where the last one ended, and\nthen advance the end pointer so that the next process can do the same\nthing. Vignesh's proposal involves having a leader process that has to\nswitch roles - he picks an arbitrary 25% threshold - and if it doesn't\nswitch roles at the right time, performance will be impacted. If the\nleader doesn't get scheduled in time to refill the queue before it\nruns completely empty, workers will have to wait. Ants's scheme avoids\nthat risk: whoever needs the next tuple reads the next line. 
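To make that leaderless claiming concrete, here is a minimal sketch (all names invented; Python threads stand in for backend processes and a lock stands in for the shared-memory synchronization, so this is only an illustration of the scheme, not the eventual C implementation):

```python
import threading

def parallel_copy_lines(data: bytes, n_workers: int = 4):
    """Leaderless splitting: whoever needs the next tuple takes the lock
    just long enough to find the next line ending and advance the shared
    position, then releases it and processes the claimed line."""
    pos = 0
    lock = threading.Lock()
    results = []

    def claim_next():
        nonlocal pos
        with lock:                       # serialize only the boundary search
            if pos >= len(data):
                return None
            end = data.find(b"\n", pos)
            if end == -1:
                end = len(data)
            start, pos = pos, end + 1    # advance the shared end pointer
            return data[start:end]

    def worker():
        while True:
            line = claim_next()
            if line is None:
                break
            with lock:
                results.append(line)     # stand-in for parsing + insert

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The only serialized step is the boundary scan; everything after a successful claim proceeds in parallel, and no process ever waits for a designated leader.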
There's\nno need to ever wait for the leader because there is no leader.\n\nI think it's worth enumerating some of the other ways that a project\nin this area can fail to achieve good speedups, so that we can try to\navoid those that are avoidable and be aware of the others:\n\n- If we're unable to supply data to the COPY process as fast as the\nworkers could load it, then speed will be limited at that point. We\nknow reading the file from disk is pretty fast compared to what a\nsingle process can do. I'm not sure we've tested what happens with a\nnetwork socket. It will depend on the network speed some, but it might\nbe useful to know how many MB/s we can pump through over a UNIX\nsocket.\n\n- The portion of the time that is used to split the lines is not\neasily parallelizable. That seems to be a fairly small percentage for\na reasonably wide table, but it looks significant (13-18%) for a\nnarrow table. Such cases will gain less performance and be limited to\na smaller number of workers. I think we also need to be careful about\nfiles whose lines are longer than the size of the buffer. If we're not\ncareful, we could get a significant performance drop-off in such\ncases. We should make sure to pick an algorithm that seems like it\nwill handle such cases without serious regressions and check that a\nfile composed entirely of such long lines is handled reasonably\nefficiently.\n\n- There could be index contention. Let's suppose that we can read data\nsuper fast and break it up into lines super fast. Maybe the file we're\nreading is fully RAM-cached and the lines are long. Now all of the\nbackends are inserting into the indexes at the same time, and they\nmight be trying to insert into the same pages. If so, lock contention\ncould become a factor that hinders performance.\n\n- There could also be similar contention on the heap. Say the tuples\nare narrow, and many backends are trying to insert tuples into the\nsame heap page at the same time. 
This would lead to many lock/unlock\ncycles. This could be avoided if the backends avoid targeting the same\nheap pages, but I'm not sure there's any reason to expect that they\nwould do so unless we make some special provision for it.\n\n- These problems could also arise with respect to TOAST table\ninsertions, either on the TOAST table itself or on its index. This\nwould only happen if the table contains a lot of toastable values, but\nthat could be the case: imagine a table with a bunch of columns each\nof which contains a long string that isn't very compressible.\n\n- What else? I bet the above list is not comprehensive.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Apr 2020 15:30:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, 8 Apr 2020 at 22:30, Robert Haas <robertmhaas@gmail.com> wrote:\n> - If we're unable to supply data to the COPY process as fast as the\n> workers could load it, then speed will be limited at that point. We\n> know reading the file from disk is pretty fast compared to what a\n> single process can do. I'm not sure we've tested what happens with a\n> network socket. It will depend on the network speed some, but it might\n> be useful to know how many MB/s we can pump through over a UNIX\n> socket.\n\nThis raises a good point. If at some point we want to minimize the\nnumber of memory copies then we might want to allow for RDMA to\ndirectly write incoming network traffic into a distributing ring\nbuffer, which would include the protocol level headers. But at this\npoint we are so far off from network reception becoming a bottleneck\nthat I don't think it's worth holding anything up for not allowing for\nzero copy transfers.\n\n> - The portion of the time that is used to split the lines is not\n> easily parallelizable. 
That seems to be a fairly small percentage for\n> a reasonably wide table, but it looks significant (13-18%) for a\n> narrow table. Such cases will gain less performance and be limited to\n> a smaller number of workers. I think we also need to be careful about\n> files whose lines are longer than the size of the buffer. If we're not\n> careful, we could get a significant performance drop-off in such\n> cases. We should make sure to pick an algorithm that seems like it\n> will handle such cases without serious regressions and check that a\n> file composed entirely of such long lines is handled reasonably\n> efficiently.\n\nI don't have a proof, but my gut feel tells me that it's fundamentally\nimpossible to ingest csv without a serial line-ending/comment\ntokenization pass. The current line splitting algorithm is terrible.\nI'm currently working with some scientific data where on ingestion\nCopyReadLineText() is about 25% on profiles. I prototyped a\nreplacement that can do ~8GB/s on narrow rows, more on wider ones.\n\nFor rows that are consistently wider than the input buffer I think\nparallelism will still give a win - the serial phase is just memcpy\nthrough a ringbuffer, after which a worker goes away to perform the\nactual insert, letting the next worker read the data. The memcpy is\nalready happening today, CopyReadLineText() copies the input buffer\ninto a StringInfo, so the only extra work is synchronization between\nleader and worker.\n\n> - There could be index contention. Let's suppose that we can read data\n> super fast and break it up into lines super fast. Maybe the file we're\n> reading is fully RAM-cached and the lines are long. Now all of the\n> backends are inserting into the indexes at the same time, and they\n> might be trying to insert into the same pages. 
If so, lock contention\n> could become a factor that hinders performance.\n\nDifferent data distribution strategies can have an effect on that.\nDealing out input data in larger or smaller chunks will have a\nconsiderable effect on contention, btree page splits and all kinds of\nthings. I think the common theme would be a push to increase chunk\nsize to reduce contention.\n\n> - There could also be similar contention on the heap. Say the tuples\n> are narrow, and many backends are trying to insert tuples into the\n> same heap page at the same time. This would lead to many lock/unlock\n> cycles. This could be avoided if the backends avoid targeting the same\n> heap pages, but I'm not sure there's any reason to expect that they\n> would do so unless we make some special provision for it.\n\nI thought there already was a provision for that. Am I mis-remembering?\n\n> - What else? I bet the above list is not comprehensive.\n\nI think parallel copy patch needs to concentrate on splitting input\ndata to workers. After that any performance issues would be basically\nthe same as a normal parallel insert workload. There may well be\nbottlenecks there, but those could be tackled independently.\n\nRegards,\nAnts Aasma\nCybertec\n\n\n", "msg_date": "Thu, 9 Apr 2020 01:24:59 +0300", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Apr 9, 2020 at 1:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Apr 7, 2020 at 9:38 AM Ants Aasma <ants@cybertec.at> wrote:\n> >\n> > With option 1 it's not possible to read input data into shared memory\n> > and there needs to be an extra memcpy in the time critical sequential\n> > flow of the leader. With option 2 data could be read directly into the\n> > shared memory buffer. 
With future async io support, reading and\n> > looking for tuple boundaries could be performed concurrently.\n>\n> But option 2 still seems significantly worse than your proposal above, right?\n>\n> I really think we don't want a single worker in charge of finding\n> tuple boundaries for everybody. That adds a lot of unnecessary\n> inter-process communication and synchronization. Each process should\n> just get the next tuple starting after where the last one ended, and\n> then advance the end pointer so that the next process can do the same\n> thing. Vignesh's proposal involves having a leader process that has to\n> switch roles - he picks an arbitrary 25% threshold - and if it doesn't\n> switch roles at the right time, performance will be impacted. If the\n> leader doesn't get scheduled in time to refill the queue before it\n> runs completely empty, workers will have to wait. Ants's scheme avoids\n> that risk: whoever needs the next tuple reads the next line. There's\n> no need to ever wait for the leader because there is no leader.\n>\n\nHmm, I think in his scheme also there is a single reader process. See\nthe email above [1] where he described how it should work. I think\nthe difference is in the division of work. AFAIU, in Ants's scheme, the\nworker needs to pick the work from the tuple_offset queue whereas in\nVignesh's scheme it will be based on the size (each worker will get\nprobably 64KB of work). I think in his scheme the main thing to find\nout is how many tuple offsets should be assigned to each worker in one\ngo so that we don't unnecessarily add contention for finding the work\nunit. I think we need to find the right balance between size and\nnumber of tuples. 
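For illustration, one possible shape of such a claim function (the thresholds and all names here are invented placeholders, not anything from a patch):

```python
def claim_chunk(tuple_sizes, start, max_tuples=500, max_bytes=64 * 1024):
    """Claim up to max_tuples offsets in one go, but stop early once the
    claimed tuples together reach max_bytes, so one worker does not walk
    away with a disproportionately large slice of the input."""
    claimed = 0
    total_bytes = 0
    while claimed < max_tuples and start + claimed < len(tuple_sizes):
        total_bytes += tuple_sizes[start + claimed]
        claimed += 1
        if total_bytes >= max_bytes:
            break
    return claimed
```

Narrow tuples hit the count cap and wide tuples hit the byte cap, so the per-claim synchronization cost stays roughly constant either way.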
I am trying to consider size here because larger\ntuples will probably require more time, as we need to allocate\nmore space for them and they probably require more processing time as\nwell. One way to achieve that could be that each worker tries to claim\n500 tuples (or some other threshold number), but if their size is\ngreater than 64K (or some other threshold size) then the worker will\ntry with a smaller number of tuples (such that the size of the chunk\nof tuples is less than the threshold size).\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Apr 2020 16:20:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Apr 9, 2020 at 4:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 9, 2020 at 1:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, Apr 7, 2020 at 9:38 AM Ants Aasma <ants@cybertec.at> wrote:\n> > >\n> > > With option 1 it's not possible to read input data into shared memory\n> > > and there needs to be an extra memcpy in the time critical sequential\n> > > flow of the leader. With option 2 data could be read directly into the\n> > > shared memory buffer. With future async io support, reading and\n> > > looking for tuple boundaries could be performed concurrently.\n> >\n> > But option 2 still seems significantly worse than your proposal above, right?\n> >\n> > I really think we don't want a single worker in charge of finding\n> > tuple boundaries for everybody. That adds a lot of unnecessary\n> > inter-process communication and synchronization. Each process should\n> > just get the next tuple starting after where the last one ended, and\n> > then advance the end pointer so that the next process can do the same\n> > thing. 
Vignesh's proposal involves having a leader process that has to\n> > switch roles - he picks an arbitrary 25% threshold - and if it doesn't\n> > switch roles at the right time, performance will be impacted. If the\n> > leader doesn't get scheduled in time to refill the queue before it\n> > runs completely empty, workers will have to wait. Ants's scheme avoids\n> > that risk: whoever needs the next tuple reads the next line. There's\n> > no need to ever wait for the leader because there is no leader.\n> >\n>\n> Hmm, I think in his scheme also there is a single reader process. See\n> the email above [1] where he described how it should work.\n>\n\noops, I forgot to specify the link to the email. See\nhttps://www.postgresql.org/message-id/CANwKhkO87A8gApobOz_o6c9P5auuEG1W2iCz0D5CfOeGgAnk3g%40mail.gmail.com\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Apr 2020 16:22:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Apr 9, 2020 at 3:55 AM Ants Aasma <ants@cybertec.at> wrote:\n>\n> On Wed, 8 Apr 2020 at 22:30, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> > - The portion of the time that is used to split the lines is not\n> > easily parallelizable. That seems to be a fairly small percentage for\n> > a reasonably wide table, but it looks significant (13-18%) for a\n> > narrow table. Such cases will gain less performance and be limited to\n> > a smaller number of workers. I think we also need to be careful about\n> > files whose lines are longer than the size of the buffer. If we're not\n> > careful, we could get a significant performance drop-off in such\n> > cases. 
We should make sure to pick an algorithm that seems like it\n> > will handle such cases without serious regressions and check that a\n> > file composed entirely of such long lines is handled reasonably\n> > efficiently.\n>\n> I don't have a proof, but my gut feel tells me that it's fundamentally\n> impossible to ingest csv without a serial line-ending/comment\n> tokenization pass.\n>\n\nI think even if we try to do it via multiple workers it might not be\nbetter. In such a scheme, every worker needs to update the end\nboundaries, and the next worker needs to keep checking whether the\nprevious one has updated the end pointer. I think this can add a\nsignificant synchronization effort for cases where tuples are only 100\nor so bytes, which will be a common case.\n\n> The current line splitting algorithm is terrible.\n> I'm currently working with some scientific data where on ingestion\n> CopyReadLineText() is about 25% on profiles. I prototyped a\n> replacement that can do ~8GB/s on narrow rows, more on wider ones.\n>\n\nGood to hear. I think that will be a good project on its own and that\nmight give a boost to parallel copy as with that we can further reduce\nthe non-parallelizable work unit.\n\n> For rows that are consistently wider than the input buffer I think\n> parallelism will still give a win - the serial phase is just memcpy\n> through a ringbuffer, after which a worker goes away to perform the\n> actual insert, letting the next worker read the data. The memcpy is\n> already happening today, CopyReadLineText() copies the input buffer\n> into a StringInfo, so the only extra work is synchronization between\n> leader and worker.\n>\n>\n> > - There could also be similar contention on the heap. Say the tuples\n> > are narrow, and many backends are trying to insert tuples into the\n> > same heap page at the same time. This would lead to many lock/unlock\n> > cycles. 
This could be avoided if the backends avoid targeting the same\n> > heap pages, but I'm not sure there's any reason to expect that they\n> > would do so unless we make some special provision for it.\n>\n> I thought there already was a provision for that. Am I mis-remembering?\n>\n\nThe copy uses heap_multi_insert to insert a batch of tuples, and I think\neach batch should ideally use a different page; mostly it will be a new\npage. So, I am not sure whether this will be a problem, or a problem\nserious enough to need some special handling. But if this turns out\nto be a problem, we definitely need some better way to deal with it.\n\n> > - What else? I bet the above list is not comprehensive.\n>\n> I think parallel copy patch needs to concentrate on splitting input\n> data to workers. After that any performance issues would be basically\n> the same as a normal parallel insert workload. There may well be\n> bottlenecks there, but those could be tackled independently.\n>\n\nI agree.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Apr 2020 16:31:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Apr 9, 2020 at 1:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Apr 7, 2020 at 9:38 AM Ants Aasma <ants@cybertec.at> wrote:\n> > I think the element based approach and requirement that all tuples fit\n> > into the queue makes things unnecessarily complex. The approach I\n> > detailed earlier allows for tuples to be bigger than the buffer. In\n> > that case a worker will claim the long tuple from the ring queue of\n> > tuple start positions, and starts copying it into its local line_buf.\n> > This can wrap around the buffer multiple times until the next start\n> > position shows up. 
At that point this worker can proceed with\n> > inserting the tuple and the next worker will claim the next tuple.\n> >\n> > This way nothing needs to be resized, there is no risk of a file with\n> > huge tuples running the system out of memory because each element will\n> > be reallocated to be huge and the number of elements is not something\n> > that has to be tuned.\n>\n> +1. This seems like the right way to do it.\n>\n> > > We had a couple of options for the way in which queue elements can be stored.\n> > > Option 1: Each element (DSA chunk) will contain tuples such that each\n> > > tuple will be preceded by the length of the tuple. So the tuples will\n> > > be arranged like (Length of tuple-1, tuple-1), (Length of tuple-2,\n> > > tuple-2), .... Or Option 2: Each element (DSA chunk) will contain only\n> > > tuples (tuple-1), (tuple-2), ..... And we will have a second\n> > > ring-buffer which contains a start-offset or length of each tuple. The\n> > > old design used to generate one tuple of data and process tuple by\n> > > tuple. In the new design, the server will generate multiple tuples of\n> > > data per queue element. The worker will then process data tuple by\n> > > tuple. As we are processing the data tuple by tuple, I felt both of\n> > > the options are almost the same. However Design1 was chosen over\n> > > Design 2 as we can save up on some space that was required by another\n> > > variable in each element of the queue.\n> >\n> > With option 1 it's not possible to read input data into shared memory\n> > and there needs to be an extra memcpy in the time critical sequential\n> > flow of the leader. With option 2 data could be read directly into the\n> > shared memory buffer. 
With future async io support, reading and\n> > looking for tuple boundaries could be performed concurrently.\n>\n> But option 2 still seems significantly worse than your proposal above, right?\n>\n> I really think we don't want a single worker in charge of finding\n> tuple boundaries for everybody. That adds a lot of unnecessary\n> inter-process communication and synchronization. Each process should\n> just get the next tuple starting after where the last one ended, and\n> then advance the end pointer so that the next process can do the same\n> thing. Vignesh's proposal involves having a leader process that has to\n> switch roles - he picks an arbitrary 25% threshold - and if it doesn't\n> switch roles at the right time, performance will be impacted. If the\n> leader doesn't get scheduled in time to refill the queue before it\n> runs completely empty, workers will have to wait. Ants's scheme avoids\n> that risk: whoever needs the next tuple reads the next line. There's\n> no need to ever wait for the leader because there is no leader.\n\nI agree that if the leader switches the role, then it is possible that\nsometimes the leader might not produce the work before the queue is\nempty. OTOH, the problem with the approach you are suggesting is that\nthe work will be generated on-demand, i.e. there is no specific\nprocess that is generating the data while workers are busy inserting\nthe data. So IMHO, if we have a specific leader process then there\nwill always be work available for all the workers. 
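A minimal sketch of such a dedicated-leader arrangement (Python threads standing in for processes and a bounded queue standing in for the shared ring; all names invented, so this is only an illustration of the idea):

```python
import queue
import threading

def run_with_leader(chunks, n_workers=3, queue_depth=8):
    """Dedicated leader: one thread keeps a bounded queue topped up while
    the workers only consume.  put() blocks while the queue is full, so
    the leader never races ahead by more than queue_depth chunks."""
    work = queue.Queue(maxsize=queue_depth)
    done = []
    done_lock = threading.Lock()

    def leader():
        for chunk in chunks:
            work.put(chunk)              # blocks when workers fall behind
        for _ in range(n_workers):
            work.put(None)               # one shutdown sentinel per worker

    def worker():
        while True:
            chunk = work.get()
            if chunk is None:
                break
            with done_lock:
                done.append(chunk)       # stand-in for parsing + insert

    threads = [threading.Thread(target=leader)]
    threads += [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done
```

The point where the leader blocks in put() is also the natural hook for any policy that lets it pitch in on the actual work instead of sitting idle.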
I agree that we\nneed to find the correct point when the leader will work as a worker.\nOne idea could be that when the queue is full and there is no space to\npush more work to the queue, the leader itself processes that work.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Apr 2020 17:19:06 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Apr 9, 2020 at 7:49 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I agree that if the leader switches the role, then it is possible that\n> sometimes the leader might not produce the work before the queue is\n> empty. OTOH, the problem with the approach you are suggesting is that\n> the work will be generated on-demand, i.e. there is no specific\n> process that is generating the data.\n\nI think you have a point. The way I think things could go wrong if we\ndon't have a leader is if it tends to happen that everyone wants new\nwork at the same time. In that case, everyone will wait at once,\nwhereas if there is a designated process that aggressively queues up\nwork, we could perhaps avoid that. Note that you really have to have\nthe case where everyone wants new work at the exact same moment,\nbecause otherwise they just all take turns finding work for\nthemselves, and everything is fine, because nobody's waiting for\nanybody else to do any work, so everyone is always making forward\nprogress.\n\nNow on the other hand, if we do have a leader, and for some reason\nit's slow in responding, everyone will have to wait. That could happen\neither because the leader also has other responsibilities, like\nreading data or helping with the main work when the queue is full, or\njust because the system is really busy and the leader doesn't get\nscheduled on-CPU for a while. 
I am inclined to think that's likely to\nbe a more serious problem.\n\nThe thing is, the problem of everyone needing new work at the same\ntime can't really keep on repeating. Say that everyone finishes\nprocessing their first chunk at the same time. Now everyone needs a\nsecond chunk, and in a leaderless system, they must take turns getting\nit. So they will go in some order. The ones who go later will\npresumably also finish later, so the end times for the second and\nfollowing chunks will be scattered. You shouldn't get repeated\npile-ups with everyone finishing at the same time, because each time\nit happens, it will force a little bit of waiting that will spread\nthings out. If they clump up again, that will happen again, but it\nshouldn't happen every time.\n\nBut in the case where there is a leader, I don't think there's any\nsimilar protection. Suppose we go with the design Vignesh proposes\nwhere the leader switches to processing chunks when the queue is more\nthan 75% full. If the leader has a \"hiccup\" where it gets swapped out\nor is busy with processing a chunk for a longer-than-normal time, all\nof the other processes have to wait for it. Now we can probably tune\nthis to some degree by adjusting the queue size and fullness\nthresholds, but the optimal values for those parameters might be quite\ndifferent on different systems, depending on load, I/O performance,\nCPU architecture, etc. If there's a system or configuration where the\nleader tends not to respond fast enough, it will probably just keep\nhappening, because nothing in the algorithm will tend to shake it out\nof that bad pattern.\n\nI'm not 100% certain that my analysis here is right, so it will be\ninteresting to hear from other people. However, as a general rule, I\nthink we want to minimize the amount of work that can only be done by\none process (the leader) and maximize the amount that can be done by\nany process with whichever one is available taking on the job. 
In the\ncase of COPY FROM STDIN, the reads from the network socket can only be\ndone by the one process connected to it. In the case of COPY from a\nfile, even that could be rotated around, if all processes open the\nfile individually and seek to the appropriate offset.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 9 Apr 2020 13:42:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nOn April 9, 2020 4:01:43 AM PDT, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>On Thu, Apr 9, 2020 at 3:55 AM Ants Aasma <ants@cybertec.at> wrote:\n>>\n>> On Wed, 8 Apr 2020 at 22:30, Robert Haas <robertmhaas@gmail.com>\n>wrote:\n>>\n>> > - The portion of the time that is used to split the lines is not\n>> > easily parallelizable. That seems to be a fairly small percentage\n>for\n>> > a reasonably wide table, but it looks significant (13-18%) for a\n>> > narrow table. Such cases will gain less performance and be limited\n>to\n>> > a smaller number of workers. I think we also need to be careful\n>about\n>> > files whose lines are longer than the size of the buffer. If we're\n>not\n>> > careful, we could get a significant performance drop-off in such\n>> > cases. We should make sure to pick an algorithm that seems like it\n>> > will handle such cases without serious regressions and check that a\n>> > file composed entirely of such long lines is handled reasonably\n>> > efficiently.\n>>\n>> I don't have a proof, but my gut feel tells me that it's\n>fundamentally\n>> impossible to ingest csv without a serial line-ending/comment\n>> tokenization pass.\n\nI can't quite see a way either. But even if it were, I have a hard time seeing parallelizing that path as the right thing.\n\n\n>I think even if we try to do it via multiple workers it might not be\n>better. 
In such a scheme, every worker needs to update the end\n>boundaries, and the next worker needs to keep checking whether the\n>previous one has updated the end pointer. I think this can add a\n>significant synchronization effort for cases where tuples are only 100\n>or so bytes, which will be a common case.\n\nIt seems like it'd also have terrible caching and instruction level parallelism behavior. By constantly switching the process that analyzes boundaries, the current data will have to be brought into l1/register, rather than staying there.\n\nI'm fairly certain that we do *not* want to distribute input data between processes on a single tuple basis. Probably not even below a few hundred kb. If there's any sort of natural clustering in the loaded data - extremely common, think timestamps - splitting on a granular basis will make indexing much more expensive. And have a lot more contention.\n\n\n>> The current line splitting algorithm is terrible.\n>> I'm currently working with some scientific data where on ingestion\n>> CopyReadLineText() is about 25% on profiles. I prototyped a\n>> replacement that can do ~8GB/s on narrow rows, more on wider ones.\n\nWe should really replace the entire copy parsing code. It's terrible.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 09 Apr 2020 11:55:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Apr 9, 2020 at 2:55 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm fairly certain that we do *not* want to distribute input data between processes on a single tuple basis. Probably not even below a few hundred kb. If there's any sort of natural clustering in the loaded data - extremely common, think timestamps - splitting on a granular basis will make indexing much more expensive. And have a lot more contention.\n\nThat's a fair point. 
I think the solution ought to be that once any\nprocess starts finding line endings, it continues until it's grabbed\nat least a certain amount of data for itself. Then it stops and lets\nsome other process grab a chunk of data.\n\nOr are you arguing that there should be only one process that's\nallowed to find line endings for the entire duration of the load?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 9 Apr 2020 15:29:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi, \n\nOn April 9, 2020 12:29:09 PM PDT, Robert Haas <robertmhaas@gmail.com> wrote:\n>On Thu, Apr 9, 2020 at 2:55 PM Andres Freund <andres@anarazel.de>\n>wrote:\n>> I'm fairly certain that we do *not* want to distribute input data\n>between processes on a single tuple basis. Probably not even below a\n>few hundred kb. If there's any sort of natural clustering in the loaded\n>data - extremely common, think timestamps - splitting on a granular\n>basis will make indexing much more expensive. And have a lot more\n>contention.\n>\n>That's a fair point. I think the solution ought to be that once any\n>process starts finding line endings, it continues until it's grabbed\n>at least a certain amount of data for itself. Then it stops and lets\n>some other process grab a chunk of data.\n>\n>Or are you arguing that there should be only one process that's\n>allowed to find line endings for the entire duration of the load?\n\nI've not yet read the whole thread. So I'm probably restating ideas.\n\nImo, yes, there should be only one process doing the chunking. For ilp, cache efficiency, but also because the leader is the only process with access to the network socket. It should load input data into one large buffer that's shared across processes. There should be a separate ringbuffer with tuple/partial tuple (for huge tuples) offsets. 
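A rough model of that shared layout (all names invented; blocking when the ring is full, wraparound reclamation of the buffer, and partial trailing lines are all elided, so this is only a sketch of the data structures, not of real shared-memory code):

```python
class CopyShared:
    """Rough model of the proposed shared state: one large input buffer
    plus a ring of tuple start/end offsets, filled by the single chunking
    process and drained in batches by workers."""
    def __init__(self, ring_size=8):
        self.buf = b""                  # would live in shared memory
        self.ring = [None] * ring_size  # (start, end) offset pairs
        self.head = 0                   # next slot the chunker writes
        self.tail = 0                   # next slot a worker claims

    def chunker_fill(self, data: bytes):
        """Leader: append raw input and publish one offset pair per line."""
        base = len(self.buf)
        self.buf += data
        pos = 0
        while True:
            end = data.find(b"\n", pos)
            if end == -1:
                break
            if self.head - self.tail == len(self.ring):
                break                   # ring full: real code would wait here
            self.ring[self.head % len(self.ring)] = (base + pos, base + end)
            self.head += 1
            pos = end + 1

    def worker_claim(self, max_batch=4):
        """Worker: grab a batch of offsets and materialize the tuples."""
        out = []
        while self.tail < self.head and len(out) < max_batch:
            start, end = self.ring[self.tail % len(self.ring)]
            self.tail += 1
            out.append(self.buf[start:end])
        return out
```

Here chunker_fill is the single serial pass that finds line endings, while worker_claim only touches offsets and slices of the shared buffer, so several workers could drain the ring in batches.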
Worker processes should grab large chunks of offsets from the offset ringbuffer. If the ringbuffer is not full, the worker chunks should be reduced in size. \n\nGiven that everything stalls if the leader doesn't accept further input data, as well as when there are no available split chunks, it doesn't seem like a good idea to have the leader do other work.\n\n\nI don't think optimizing/targeting copy from local files, where multiple processes could read, is useful. COPY STDIN is the only thing that practically matters.\n\nAndres\n\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 09 Apr 2020 13:00:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Apr 9, 2020 at 4:00 PM Andres Freund <andres@anarazel.de> wrote:\n> I've not yet read the whole thread. So I'm probably restating ideas.\n\nYeah, but that's OK.\n\n> Imo, yes, there should be only one process doing the chunking. For ilp, cache efficiency, but also because the leader is the only process with access to the network socket. It should load input data into one large buffer that's shared across processes. There should be a separate ringbuffer with tuple/partial tuple (for huge tuples) offsets. Worker processes should grab large chunks of offsets from the offset ringbuffer. If the ringbuffer is not full, the worker chunks should be reduced in size.\n\nMy concern here is that it's going to be hard to avoid processes going\nidle. If the leader does nothing at all once the ring buffer is full,\nit's wasting time that it could spend processing a chunk. But if it\npicks up a chunk, then it might not get around to refilling the buffer\nbefore other processes are idle with no work to do.\n\nStill, it might be the case that having the process that is reading\nthe data also find the line endings is so fast that it makes no sense\nto split those two tasks. 
After all, whoever just read the data must\nhave it in cache, and that helps a lot.\n\n> Given that everything stalls if the leader doesn't accept further input data, as well as when there are no available splitted chunks, it doesn't seem like a good idea to have the leader do other work.\n>\n> I don't think optimizing/targeting copy from local files, where multiple processes could read, is useful. COPY STDIN is the only thing that practically matters.\n\nYeah, I think Amit has been thinking primarily in terms of COPY from\nfiles, and I've been encouraging him to at least consider the STDIN\ncase. But I think you're right, and COPY FROM STDIN should be the\ndesign center for this feature.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 10 Apr 2020 07:40:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nOn 2020-04-10 07:40:06 -0400, Robert Haas wrote:\n> On Thu, Apr 9, 2020 at 4:00 PM Andres Freund <andres@anarazel.de> wrote:\n> > Imo, yes, there should be only one process doing the chunking. For ilp, cache efficiency, but also because the leader is the only process with access to the network socket. It should load input data into one large buffer that's shared across processes. There should be a separate ringbuffer with tuple/partial tuple (for huge tuples) offsets. Worker processes should grab large chunks of offsets from the offset ringbuffer. If the ringbuffer is not full, the worker chunks should be reduced in size.\n> \n> My concern here is that it's going to be hard to avoid processes going\n> idle. If the leader does nothing at all once the ring buffer is full,\n> it's wasting time that it could spend processing a chunk. 
But if it\n> picks up a chunk, then it might not get around to refilling the buffer\n> before other processes are idle with no work to do.\n\nAn idle process doesn't cost much. Processes that use CPU inefficiently\nhowever...\n\n\n> Still, it might be the case that having the process that is reading\n> the data also find the line endings is so fast that it makes no sense\n> to split those two tasks. After all, whoever just read the data must\n> have it in cache, and that helps a lot.\n\nYea. And if it's not fast enough to split lines, then we have a problem\nregardless of which process does the splitting.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Apr 2020 11:26:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Apr 10, 2020 at 2:26 PM Andres Freund <andres@anarazel.de> wrote:\n> > Still, it might be the case that having the process that is reading\n> > the data also find the line endings is so fast that it makes no sense\n> > to split those two tasks. After all, whoever just read the data must\n> > have it in cache, and that helps a lot.\n>\n> Yea. And if it's not fast enough to split lines, then we have a problem\n> regardless of which process does the splitting.\n\nStill, if the reader does the splitting, then you don't need as much\nIPC, right? 
The shared memory data structure is just a ring of bytes,\nand whoever reads from it is responsible for the rest.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Apr 2020 14:13:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nOn 2020-04-13 14:13:46 -0400, Robert Haas wrote:\n> On Fri, Apr 10, 2020 at 2:26 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Still, it might be the case that having the process that is reading\n> > > the data also find the line endings is so fast that it makes no sense\n> > > to split those two tasks. After all, whoever just read the data must\n> > > have it in cache, and that helps a lot.\n> >\n> > Yea. And if it's not fast enough to split lines, then we have a problem\n> > regardless of which process does the splitting.\n> \n> Still, if the reader does the splitting, then you don't need as much\n> IPC, right? The shared memory data structure is just a ring of bytes,\n> and whoever reads from it is responsible for the rest.\n\nI don't think so. If only one process does the splitting, the\nexclusively locked section is just popping off a bunch of offsets of the\nring. And that could fairly easily be done with atomic ops (since what\nwe need is basically a single producer multiple consumer queue, which\ncan be done lock free fairly easily ). Whereas in the case of each\nprocess doing the splitting, the exclusively locked part is splitting\nalong lines - which takes considerably longer than just popping off a\nfew offsets.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Apr 2020 13:16:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, Apr 13, 2020 at 4:16 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't think so. 
If only one process does the splitting, the\n> exclusively locked section is just popping off a bunch of offsets of the\n> ring. And that could fairly easily be done with atomic ops (since what\n> we need is basically a single producer multiple consumer queue, which\n> can be done lock free fairly easily ). Whereas in the case of each\n> process doing the splitting, the exclusively locked part is splitting\n> along lines - which takes considerably longer than just popping off a\n> few offsets.\n\nHmm, that does seem believable.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Apr 2020 16:20:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hello,\n\nI was going through some literatures on parsing CSV files in a fully\nparallelized way and found (from [1]) an interesting approach\nimplemented in the open-source project ParaText[2]. The algorithm\nfollows a two-phase approach: the first pass identifies the adjusted\nchunks in parallel by exploiting the simplicity of CSV formats and the\nsecond phase processes complete records within each adjusted chunk by\none of the available workers. Here is the sketch:\n\n1. Each worker scans a distinct fixed sized chunk of the CSV file and\ncollects the following three stats from the chunk:\na) number of quotes\nb) position of the first new line after even number of quotes\nc) position of the first new line after odd number of quotes\n2. 
Once stats from all the chunks are collected, the leader identifies\nthe adjusted chunk boundaries by iterating over the stats linearly:\n- For the k-th chunk, the leader adds up the number of quotes in the first k-1 chunks.\n- If the number is even, then the k-th chunk does not start in the\nmiddle of a quoted field, and the first newline after an even number\nof quotes (the second collected information) is the first record\ndelimiter in this chunk.\n- Otherwise, if the number is odd, the first newline after an odd\nnumber of quotes (the third collected information) is the first record\ndelimiter.\n- The end position of the adjusted chunk is obtained based on the\nstarting position of the next adjusted chunk.\n3. Once the boundaries of the chunks are determined (forming adjusted\nchunks), an individual worker may take up one adjusted chunk and process\nthe tuples independently.\n\nAlthough this approach parses the CSV in parallel, it requires two\nscans over the CSV file. So, given a system with spinning hard-disk and\nsmall RAM, as per my understanding, the algorithm will perform very\npoorly. But, if we use this algorithm to parse a CSV file on a\nmulti-core system with a large RAM, the performance might be improved\nsignificantly [1].\n\nHence, I was trying to think whether we can leverage this idea for\nimplementing parallel COPY in PG. We can design an algorithm similar\nto parallel hash-join where the workers pass through different phases.\n1. Phase 1 - Read fixed size chunks in parallel, store the chunks and\nthe small stats about each chunk in the shared memory. If the shared\nmemory is full, go to phase 2.\n2. Phase 2 - Allow a single worker to process the stats and decide the\nactual chunk boundaries so that no tuple spans across two different\nchunks. Go to phase 3.\n3. Phase 3 - Each worker picks one adjusted chunk, parses and processes\ntuples from the same. Once done with one chunk, it picks the next one\nand so on.\n4. 
If there are still some unread contents, go back to phase 1.\n\nWe can probably use separate workers for phase 1 and phase 3 so that\nthey can work concurrently.\n\nAdvantages:\n1. Each worker spends some significant time in each phase. Gets\nbenefit of the instruction cache - at least in phase 1.\n2. It also has the same advantage of parallel hash join - fast workers\nget to work more.\n3. We can extend this solution for reading data from STDIN. Of course,\nthe phase 1 and phase 2 must be performed by the leader process who\ncan read from the socket.\n\nDisadvantages:\n1. Surely doesn't work if we don't have enough shared memory.\n2. Probably, this approach is just impractical for PG due to certain\nlimitations.\n\nThoughts?\n\n[1] https://www.microsoft.com/en-us/research/uploads/prod/2019/04/chunker-sigmod19.pdf\n[2] ParaText. https://github.com/wiseio/paratext.\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Apr 2020 01:10:32 +0530", "msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, 14 Apr 2020 at 22:40, Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n> 1. Each worker scans a distinct fixed sized chunk of the CSV file and\n> collects the following three stats from the chunk:\n> a) number of quotes\n> b) position of the first new line after even number of quotes\n> c) position of the first new line after odd number of quotes\n> 2. 
Once stats from all the chunks are collected, the leader identifies\n> the adjusted chunk boundaries by iterating over the stats linearly:\n> - For the k-th chunk, the leader adds the number of quotes in k-1 chunks.\n> - If the number is even, then the k-th chunk does not start in the\n> middle of a quoted field, and the first newline after an even number\n> of quotes (the second collected information) is the first record\n> delimiter in this chunk.\n> - Otherwise, if the number is odd, the first newline after an odd\n> number of quotes (the third collected information) is the first record\n> delimiter.\n> - The end position of the adjusted chunk is obtained based on the\n> starting position of the next adjusted chunk.\n\nThe trouble is that, at least with current coding, the number of\nquotes in a chunk can depend on whether the chunk started in a quote\nor not. That's because escape characters only count inside quotes. See\nfor example the following csv:\n\nfoo,\\\"bar\nbaz\",\\\"xyz\"\n\nThis currently parses as one line and the number of parsed quotes\ndoesn't change if you add a quote in front.\n\nBut the general approach of doing the tokenization in parallel and\nthen a serial pass over the tokenization would still work. The quote\ncounting and new line finding just has to be done for both starting in\nquote and not starting in quote case.\n\nUsing phases doesn't look like the correct approach - the tokenization\ncan be prepared just in time for the serial pass and processing the\nchunk can proceed immediately after. This could all be done by having\nthe data in a single ringbuffer with a processing pipeline where one\nprocess does the reading, then workers grab tokenization chunks as\nthey become available, then one process handles determining the chunk\nboundaries, after which the chunks are processed.\n\nBut I still don't think this is something to worry about for the first\nversion. 
Just a better line splitting algorithm should go a looong way\nin feeding a large number of workers, even when inserting to an\nunindexed unlogged table. If we get the SIMD line splitting in, it\nwill be enough to overwhelm most I/O subsystems available today.\n\nRegards,\nAnts Aasma\n\n\n", "msg_date": "Wed, 15 Apr 2020 11:45:31 +0300", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, 13 Apr 2020 at 23:16, Andres Freund <andres@anarazel.de> wrote:\n> > Still, if the reader does the splitting, then you don't need as much\n> > IPC, right? The shared memory data structure is just a ring of bytes,\n> > and whoever reads from it is responsible for the rest.\n>\n> I don't think so. If only one process does the splitting, the\n> exclusively locked section is just popping off a bunch of offsets of the\n> ring. And that could fairly easily be done with atomic ops (since what\n> we need is basically a single producer multiple consumer queue, which\n> can be done lock free fairly easily ). Whereas in the case of each\n> process doing the splitting, the exclusively locked part is splitting\n> along lines - which takes considerably longer than just popping off a\n> few offsets.\n\nI see the benefit of having one process responsible for splitting as\nbeing able to run ahead of the workers to queue up work when many of\nthem need new data at the same time. I don't think the locking\nbenefits of a ring are important in this case. At current rather\nconservative chunk sizes we are looking at ~100k chunks per second at\nbest, normal locking should be perfectly adequate. And chunk size can\neasily be increased. I see the main value in it being simple.\n\nBut there is a point that having a layer of indirection instead of a\nlinear buffer allows for some workers to fall behind. 
Either because\nthe kernel scheduled them out for a time slice, or they need to do I/O\nor because inserting some tuple hit a unique conflict and needs to\nwait for a tx to complete or abort to resolve. With a ring buffer\nreading has to wait on the slowest worker reading its chunk. Having\nworkers copy the data to a local buffer as the first step would reduce\nthe probability of hitting any issues. But still, at GB/s rates,\nhiding a 10ms timeslice of delay would need 10's of megabytes of\nbuffer.\n\nFWIW. I think just increasing the buffer is good enough - the CPUs\nprocessing this workload are likely to have tens to hundreds of\nmegabytes of cache on board.\n\n\n", "msg_date": "Wed, 15 Apr 2020 12:05:47 +0300", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Apr 15, 2020 at 1:10 AM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n>\n> Hence, I was trying to think whether we can leverage this idea for\n> implementing parallel COPY in PG. We can design an algorithm similar\n> to parallel hash-join where the workers pass through different phases.\n> 1. Phase 1 - Read fixed size chunks in parallel, store the chunks and\n> the small stats about each chunk in the shared memory. If the shared\n> memory is full, go to phase 2.\n> 2. Phase 2 - Allow a single worker to process the stats and decide the\n> actual chunk boundaries so that no tuple spans across two different\n> chunks. Go to phase 3.\n>\n> 3. Phase 3 - Each worker picks one adjusted chunk, parse and process\n> tuples from the same. Once done with one chunk, it picks the next one\n> and so on.\n>\n> 4. If there are still some unread contents, go back to phase 1.\n>\n> We can probably use separate workers for phase 1 and phase 3 so that\n> they can work concurrently.\n>\n> Advantages:\n> 1. Each worker spends some significant time in each phase. Gets\n> benefit of the instruction cache - at least in phase 1.\n> 2. 
It also has the same advantage of parallel hash join - fast workers\n> get to work more.\n> 3. We can extend this solution for reading data from STDIN. Of course,\n> the phase 1 and phase 2 must be performed by the leader process who\n> can read from the socket.\n>\n> Disadvantages:\n> 1. Surely doesn't work if we don't have enough shared memory.\n> 2. Probably, this approach is just impractical for PG due to certain\n> limitations.\n>\n\nAs I understand this, it needs to parse the lines twice (second time\nin phase-3) and till the first two phases are over, we can't start the\ntuple processing work which is done in phase-3. So even if the\ntokenization is done a bit faster, we will lose some on processing\nthe tuples which might not be an overall win and in fact, it can be\nworse as compared to the single reader approach being discussed.\nNow, if the work done in tokenization is a major (or significant)\nportion of the copy then thinking of such a technique might be useful\nbut that is not the case as seen in the data shared above (the\ntokenize time is much less than the data processing time) in\nthis email.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Apr 2020 16:45:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Apr 15, 2020 at 7:15 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> As I understand this, it needs to parse the lines twice (second time\n> in phase-3) and till the first two phases are over, we can't start the\n> tuple processing work which is done in phase-3. 
So even if the\n> tokenization is done a bit faster but we will lose some on processing\n> the tuples which might not be an overall win and in fact, it can be\n> worse as compared to the single reader approach being discussed.\n> Now, if the work done in tokenization is a major (or significant)\n> portion of the copy then thinking of such a technique might be useful\n> but that is not the case as seen in the data shared above (the\n> tokenize time is very less as compared to data processing time) in\n> this email.\n\nIt seems to me that a good first step here might be to forget about\nparallelism for a minute and just write a patch to make the line\nsplitting as fast as possible.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 15 Apr 2020 10:12:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Apr 15, 2020 at 2:15 PM Ants Aasma <ants@cybertec.at> wrote:\n>\n> On Tue, 14 Apr 2020 at 22:40, Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n> > 1. Each worker scans a distinct fixed sized chunk of the CSV file and\n> > collects the following three stats from the chunk:\n> > a) number of quotes\n> > b) position of the first new line after even number of quotes\n> > c) position of the first new line after odd number of quotes\n> > 2. 
Once stats from all the chunks are collected, the leader identifies\n> > the adjusted chunk boundaries by iterating over the stats linearly:\n> > - For the k-th chunk, the leader adds the number of quotes in k-1 chunks.\n> > - If the number is even, then the k-th chunk does not start in the\n> > middle of a quoted field, and the first newline after an even number\n> > of quotes (the second collected information) is the first record\n> > delimiter in this chunk.\n> > - Otherwise, if the number is odd, the first newline after an odd\n> > number of quotes (the third collected information) is the first record\n> > delimiter.\n> > - The end position of the adjusted chunk is obtained based on the\n> > starting position of the next adjusted chunk.\n>\n> The trouble is that, at least with current coding, the number of\n> quotes in a chunk can depend on whether the chunk started in a quote\n> or not. That's because escape characters only count inside quotes. See\n> for example the following csv:\n>\n> foo,\\\"bar\n> baz\",\\\"xyz\"\n>\n> This currently parses as one line and the number of parsed quotes\n> doesn't change if you add a quote in front.\n>\n> But the general approach of doing the tokenization in parallel and\n> then a serial pass over the tokenization would still work. The quote\n> counting and new line finding just has to be done for both starting in\n> quote and not starting in quote case.\n>\nYeah, right.\n\n> Using phases doesn't look like the correct approach - the tokenization\n> can be prepared just in time for the serial pass and processing the\n> chunk can proceed immediately after. 
This could all be done by having\n> the data in a single ringbuffer with a processing pipeline where one\n> process does the reading, then workers grab tokenization chunks as\n> they become available, then one process handles determining the chunk\n> boundaries, after which the chunks are processed.\n>\nI was thinking from this point of view - the sooner we introduce\nparallelism in the process, the greater the benefits. Probably there\nisn't any way to avoid a single pass over the data (phase - 2 in the\nabove case) to tokenise the chunks. So yeah, if the reading and\ntokenisation phase doesn't take much time, parallelising the same will\njust be overkill. As pointed by Andres and you, using a lock-free\ncircular buffer implementation sounds the way to go forward. AFAIK, a\nFIFO circular queue with a CAS-based implementation suffers from two\nproblems - 1. (as pointed by you) slow workers may block producers. 2.\nSince it doesn't partition the queue among the workers, it does not\nachieve good locality and cache-friendliness, which limits scalability\non NUMA systems.\n\n> But I still don't think this is something to worry about for the first\n> version. Just a better line splitting algorithm should go a looong way\n> in feeding a large number of workers, even when inserting to an\n> unindexed unlogged table. If we get the SIMD line splitting in, it\n> will be enough to overwhelm most I/O subsystems available today.\n>\nYeah. Parsing text is a great use case for data parallelism which can\nbe achieved by SIMD instructions. Consider processing 8-bit ASCII\ncharacters in a 512-bit SIMD word. A lot of code and complexity from\nCopyReadLineText will surely go away. 
And further (I'm not sure in\nthis point), if we can use the schema of the table, perhaps JIT can\ngenerate machine code to efficient read of fields based on their\ntypes.\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Apr 2020 20:36:39 +0530", "msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On 2020-04-15 10:12:14 -0400, Robert Haas wrote:\n> On Wed, Apr 15, 2020 at 7:15 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > As I understand this, it needs to parse the lines twice (second time\n> > in phase-3) and till the first two phases are over, we can't start the\n> > tuple processing work which is done in phase-3. So even if the\n> > tokenization is done a bit faster but we will lose some on processing\n> > the tuples which might not be an overall win and in fact, it can be\n> > worse as compared to the single reader approach being discussed.\n> > Now, if the work done in tokenization is a major (or significant)\n> > portion of the copy then thinking of such a technique might be useful\n> > but that is not the case as seen in the data shared above (the\n> > tokenize time is very less as compared to data processing time) in\n> > this email.\n> \n> It seems to me that a good first step here might be to forget about\n> parallelism for a minute and just write a patch to make the line\n> splitting as fast as possible.\n\n+1\n\nCompared to all the rest of the efforts during COPY a fast \"split rows\"\nimplementation should not be a bottleneck anymore.\n\n\n", "msg_date": "Wed, 15 Apr 2020 10:09:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nOn 2020-04-15 20:36:39 +0530, Kuntal Ghosh wrote:\n> I was thinking from this point of view - the sooner we introduce\n> parallelism in the process, the greater the benefits.\n\nI 
don't really agree. Sure, that's true from a theoretical perspective,\nbut the incremental gains may be very small, and the cost in complexity\nvery high. If we can get single threaded splitting of rows to be >4GB/s,\nwhich should very well be attainable, the rest of the COPY work is going\nto dominate the time. We shouldn't add complexity to parallelize more\nof the line splitting, caring too much about scalable datastructures,\netc when the bottleneck after some straightforward optimization is\nusually still in the parallelized part.\n\nI'd expect that for now we'd likely hit scalability issues in other\nparts of the system first (e.g. extension locks, buffer mapping).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Apr 2020 10:15:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nOn 2020-04-15 12:05:47 +0300, Ants Aasma wrote:\n> I see the benefit of having one process responsible for splitting as\n> being able to run ahead of the workers to queue up work when many of\n> them need new data at the same time.\n\nYea, I agree.\n\n\n> I don't think the locking benefits of a ring are important in this\n> case. At current rather conservative chunk sizes we are looking at\n> ~100k chunks per second at best, normal locking should be perfectly\n> adequate. And chunk size can easily be increased. I see the main value\n> in it being simple.\n\nI think the locking benefits of not needing to hold a lock *while*\nsplitting (as we'd need in some proposal floated earlier) is likely to\nalready be beneficial. I don't think we need to worry about lock\nscalability protecting the queue of already split data, for now.\n\nI don't think we really want to have a much larger chunk size,\nbtw. 
Makes it more likely for the data handed to different workers to take an\nuneven amount of time to process.\n\n\n> But there is a point that having a layer of indirection instead of a\n> linear buffer allows for some workers to fall behind.\n\nYea. It'd probably make sense to read the input data into an array of\nevenly sized blocks, and have the datastructure (still think a\nringbuffer makes sense) of split boundaries point into those entries. If\nwe don't require the input blocks to be in-order in that array, we can\nreuse blocks therein that are fully processed, even if \"earlier\" data in\nthe input has not yet been fully processed.\n\n\n> With a ring buffer reading has to wait on the slowest worker reading\n> its chunk.\n\nTo be clear, I was only thinking of using a ringbuffer to indicate split\nboundaries. And that workers would just pop entries from it before they\nactually process the data (stored outside of the ringbuffer). Since the\nsplit boundaries will always be read in order by workers, and the\nentries will be tiny, there's no need to avoid copying out entries.\n\n\nSo basically what I was thinking we *eventually* may want (I'd forgo some\nof this initially) is something like:\n\nstruct InputBlock\n{\n uint32 unprocessed_chunk_parts;\n uint32 following_block;\n char data[INPUT_BLOCK_SIZE];\n};\n\n// array of input data, with > 2*nworkers entries\nInputBlock *input_blocks;\n\nstruct ChunkedInputBoundary\n{\n uint32 firstblock;\n uint32 startoff;\n};\n\nstruct ChunkedInputBoundaries\n{\n uint32 read_pos;\n uint32 write_end;\n ChunkedInputBoundary ring[RINGSIZE];\n};\n\nWhere the leader would read data into InputBlocks with\nunprocessed_chunk_parts == 0. Then it'd split the read input data into\nchunks (presumably with chunk size << input block size), putting\nidentified chunks into ChunkedInputBoundaries. For each\nChunkedInputBoundary it'd increment the unprocessed_chunk_parts of each\nInputBlock containing parts of the chunk. 
For chunks across >1\nInputBlocks each InputBlock's following_block would be set accordingly.\n\nWorkers would just pop an entry from the ringbuffer (making that entry\nreusable), and process the chunk. The underlying data would not be\ncopied out of the InputBlocks, but obviously readers would need to take\ncare to handle InputBlock boundaries. Whenever a chunk is fully read, or\nwhen crossing an InputBlock boundary, the InputBlock's\nunprocessed_chunk_parts would be decremented.\n\nRecycling of InputBlocks could probably just be an occasional linear\nsearch for buffers with unprocessed_chunk_parts == 0.\n\n\nSomething roughly like this should not be too complicated to\nimplement. Unless extremely unlucky (very wide input data spanning many\nInputBlocks) a straggling reader would not prevent global progress, it'd\njust prevent reuse of the InputBlocks with data for its chunk (normally\nthat'd be two InputBlocks, not more).\n\n\n> Having workers copy the data to a local buffer as the first\n> step would reduce the probability of hitting any issues. But still, at\n> GB/s rates, hiding a 10ms timeslice of delay would need 10's of\n> megabytes of buffer.\n\nYea. Given the likelihood of blocking on resources (reading in index\ndata, writing out dirty buffers for reclaim, row locks for uniqueness\nchecks, extension locks, ...), as well as non-uniform per-row costs\n(partial indexes, index splits, ...) I think we ought to try to cope\nwell with that. IMO/IME it'll be common to see stalls that are much\nlonger than 10ms for processes that do COPY, even when the system is not\noverloaded.\n\n\n> FWIW. 
I think just increasing the buffer is good enough - the CPUs\n> processing this workload are likely to have tens to hundreds of\n> megabytes of cache on board.\n\nIt'll not necessarily be a cache shared between leader / workers though,\nand some of the cache-cache transfers will be more expensive even within\na socket (between core complexes for AMD, multi chip processors for\nIntel).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Apr 2020 11:19:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Apr 15, 2020 at 10:45 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-04-15 20:36:39 +0530, Kuntal Ghosh wrote:\n> > I was thinking from this point of view - the sooner we introduce\n> > parallelism in the process, the greater the benefits.\n>\n> I don't really agree. Sure, that's true from a theoretical perspective,\n> but the incremental gains may be very small, and the cost in complexity\n> very high. If we can get single threaded splitting of rows to be >4GB/s,\n> which should very well be attainable, the rest of the COPY work is going\n> to dominate the time. We shouldn't add complexity to parallelize more\n> of the line splitting, caring too much about scalable datastructures,\n> etc when the bottleneck after some straightforward optimization is\n> usually still in the parallelized part.\n>\n> I'd expect that for now we'd likely hit scalability issues in other\n> parts of the system first (e.g. extension locks, buffer mapping).\n>\nGot your point. 
In this particular case, a single producer is fast\nenough (or probably we can make it fast enough) to generate enough\nchunks for multiple consumers so that they don't stay idle and wait\nfor work.\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 Apr 2020 02:00:46 +0530", "msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Apr 15, 2020 at 11:49 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> To be clear, I was only thinking of using a ringbuffer to indicate split\n> boundaries. And that workers would just pop entries from it before they\n> actually process the data (stored outside of the ringbuffer). Since the\n> split boundaries will always be read in order by workers, and the\n> entries will be tiny, there's no need to avoid copying out entries.\n>\n\nI think the binary mode processing will be slightly different because\nunlike text and csv format, the data is stored in Length, Value format\nfor each column and there are no line markers. I don't think there\nwill be a big difference but still, we need to somewhere keep the\ninformation what is the format of data in ring buffers. Basically, we\ncan copy the data in Length, Value format and once the writers know\nabout the format, they will parse the data in the appropriate format.\nWe currently also have a different way of parsing the binary format,\nsee NextCopyFrom. I think we need to be careful about avoiding\nduplicate work as much as possible.\n\nApart from this, we have analyzed the other cases as mentioned below\nwhere we need to decide whether we can allow parallelism for the copy\ncommand.\nCase-1:\nDo we want to enable parallelism for a copy when transition tables are\ninvolved? 
Basically, during the copy, we do capture tuples in\ntransition tables for certain cases like when after statement trigger\naccesses the same relation on which we have a trigger. See the\nexample below [1]. We decide this in function\nMakeTransitionCaptureState. For such cases, we collect minimal tuples\nin the tuple store after processing them so that later after statement\ntriggers can access them. Now, if we want to enable parallelism for\nsuch cases, we instead need to store and access tuples from shared\ntuple store (sharedtuplestore.c/sharedtuplestore.h). However, it\ndoesn't have the facility to store tuples in-memory, so we always need\nto store and access from a file which could be costly unless we also\nhave an additional way to store minimal tuples in shared memory till\nwork_memory and then in shared tuple store. It is possible to do all\nthis or part of this work to enable parallel copy for such cases but I\nam not sure if it is worth it. We can decide to not enable parallelism\nfor such cases and later allow if we see demand for the same and it\nwill also help us to not introduce additional work/complexity in the\nfirst version of the patch.\n\nCase-2:\nThe Single Insertion mode (CIM_SINGLE) is performed in various\nscenarios and whether we can allow parallelism for those depends on\ncase to case basis which is discussed below:\na. When there are BEFORE/INSTEAD OF triggers on the table. We don't\nallow multi-inserts in such cases because such triggers might query\nthe table we're inserting into and act differently if the tuples that\nhave already been processed and prepared for insertion are not there.\nNow, if we allow parallelism with such triggers the behavior would\ndepend on if the parallel worker has already inserted or not that\nparticular row. I guess such functions should ideally be marked as\nparallel-unsafe. So, in short in this case whether to allow\nparallelism or not depends upon the parallel-safety marking of this\nfunction.\nb. 
For partitioned tables, we can't support multi-inserts when there\nare any statement-level insert triggers. This is because as of now,\nwe expect that any before row insert and statement-level insert\ntriggers are on the same relation. Now, there is no harm in allowing\nparallelism for such cases but it depends upon whether we have the\ninfrastructure (basically allow tuples to be collected in shared tuple\nstore) to support statement-level insert triggers.\nc. For inserts into foreign tables. We can't allow parallelism in\nthis case because each worker needs to establish the FDW connection\nand operate in a separate transaction. Now unless we have a\ncapability to provide a two-phase commit protocol for \"Transactions\ninvolving multiple postgres foreign servers\" (which is being discussed\nin a separate thread [2]), we can't allow this.\nd. If there are volatile default expressions or the where clause\ncontains a volatile expression. Here, we can check whether the\nexpression is parallel-safe and, if so, allow parallelism.\n\nCase-3:\nIn copy command, for performing foreign key checks, we take KEY SHARE\nlock on primary key table rows which in turn will increment the command\ncounter and update the snapshot. Now, as we share the snapshots at\nthe beginning of the command, we can't allow it to be changed later.\nSo, unless we do something special for it, I think we can't allow\nparallelism in such cases.\n\nI couldn't think of many problems if we allow parallelism in such\ncases. One inconsistency, if we allow FK checks via workers, would be\nthat at the end of COPY the value of command_counter will not be what\nwe expect as we wouldn't have accounted for that from workers. Now,\nif COPY is being done in a transaction it will not assign the correct\nvalues to the next commands. 
Also, for executing deferred triggers,\nwe use transaction snapshot, so if anything is changed in snapshot via\nparallel workers, ideally it should have synced the changed snapshot\nin the worker.\n\nNow, the other concern could be that different workers can try to\nacquire KEY SHARE lock on the same tuples which they will be able to\nacquire due to group locking or otherwise but I don't see any problem\nwith it.\n\nI am not sure if it above leads to any user-visible problem but I\nmight be missing something here. I think if we can think of any real\nproblems we can try to design a better solution to address those.\n\nCase-4:\nFor Deferred Triggers, it seems we record CTIDs of tuples (via\nExecARInsertTriggers->AfterTriggerSaveEvent) and then execute deferred\ntriggers at transaction end using AfterTriggerFireDeferred or at end\nof the statement. The challenge to allow parallelism for such cases\nis we need to capture the CTID events in shared memory. For that, we\neither need to invent a new infrastructure for event capturing in\nshared memory which will be a huge task on its own. 
The other idea is\nto get CTIDs via shared memory and then add those to event queues via\nleader but I think in that case we need to ensure the order of CTIDs\n(basically it should be in the same order in which we have processed\nthem).\n\n[1] -\ncreate or replace function dump_insert() returns trigger language plpgsql as\n$$\n begin\n raise notice 'trigger = %, new table = %',\n TG_NAME,\n (select string_agg(new_table::text, ', ' order by a)\nfrom new_table);\n return null;\n end;\n$$;\n\ncreate table test (a int);\ncreate trigger trg1_test after insert on test referencing new table\nas new_table for each statement execute procedure dump_insert();\ncopy test (a) from stdin;\n1\n2\n3\n\\.\n\n[2] - https://www.postgresql.org/message-id/20191206.173215.1818665441859410805.horikyota.ntt%40gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 May 2020 17:42:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "I wonder why you're still looking at this instead of looking at just\nspeeding up the current code, especially the line splitting, per\nprevious discussion. And then coming back to study this issue more\nafter that's done.\n\nOn Mon, May 11, 2020 at 8:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Apart from this, we have analyzed the other cases as mentioned below\n> where we need to decide whether we can allow parallelism for the copy\n> command.\n> Case-1:\n> Do we want to enable parallelism for a copy when transition tables are\n> involved?\n\nI think it would be OK not to support this.\n\n> Case-2:\n> a. When there are BEFORE/INSTEAD OF triggers on the table.\n> b. For partitioned tables, we can't support multi-inserts when there\n> are any statement-level insert triggers.\n> c. For inserts into foreign tables.\n> d. 
If there are volatile default expressions or the where clause\n> contains a volatile expression. Here, we can check if the expression\n> is parallel-safe, then we can allow parallelism.\n\nThis all sounds fine.\n\n> Case-3:\n> In copy command, for performing foreign key checks, we take KEY SHARE\n> lock on primary key table rows which inturn will increment the command\n> counter and updates the snapshot. Now, as we share the snapshots at\n> the beginning of the command, we can't allow it to be changed later.\n> So, unless we do something special for it, I think we can't allow\n> parallelism in such cases.\n\nThis sounds like much more of a problem to me; it'd be a significant\nrestriction that would kick in routine cases where the user isn't\ndoing anything particularly exciting. The command counter presumably\nonly needs to be updated once per command, so maybe we could do that\nbefore we start parallelism. However, I think we would need to have\nsome kind of dynamic memory structure to which new combo CIDs can be\nadded by any member of the group, and then discovered by other members\nof the group later. At the end of the parallel operation, the leader\nmust discover any combo CIDs added by others to that table before\ndestroying it, even if it has no immediate use for the information. 
We\ncan't allow a situation where the group members have inconsistent\nnotions of which combo CIDs exist or what their mappings are, and if\nKEY SHARE locks are being taken, new combo CIDs could be created.\n\n> Case-4:\n> For Deferred Triggers, it seems we record CTIDs of tuples (via\n> ExecARInsertTriggers->AfterTriggerSaveEvent) and then execute deferred\n> triggers at transaction end using AfterTriggerFireDeferred or at end\n> of the statement.\n\nI think this could be left for the future.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 11 May 2020 14:21:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, May 11, 2020 at 11:52 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> I wonder why you're still looking at this instead of looking at just\n> speeding up the current code, especially the line splitting,\n>\n\nBecause the line splitting is just 1-2% of overall work in common\ncases. See the data shared by Vignesh for various workloads [1]. The\ntime it takes is in range of 0.5-12% approximately and for cases like\na table with few indexes, it is not more than 1-2%.\n\n[1] - https://www.postgresql.org/message-id/CALDaNm3r8cPsk0Vo_-6AXipTrVwd0o9U2S0nCmRdku1Dn-Tpqg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 May 2020 09:42:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, May 11, 2020 at 11:52 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> > Case-3:\n> > In copy command, for performing foreign key checks, we take KEY SHARE\n> > lock on primary key table rows which inturn will increment the command\n> > counter and updates the snapshot. 
Now, as we share the snapshots at\n> > the beginning of the command, we can't allow it to be changed later.\n> > So, unless we do something special for it, I think we can't allow\n> > parallelism in such cases.\n>\n> This sounds like much more of a problem to me; it'd be a significant\n> restriction that would kick in routine cases where the user isn't\n> doing anything particularly exciting. The command counter presumably\n> only needs to be updated once per command, so maybe we could do that\n> before we start parallelism. However, I think we would need to have\n> some kind of dynamic memory structure to which new combo CIDs can be\n> added by any member of the group, and then discovered by other members\n> of the group later. At the end of the parallel operation, the leader\n> must discover any combo CIDs added by others to that table before\n> destroying it, even if it has no immediate use for the information. We\n> can't allow a situation where the group members have inconsistent\n> notions of which combo CIDs exist or what their mappings are, and if\n> KEY SHARE locks are being taken, new combo CIDs could be created.\n>\n\nAFAIU, we don't generate combo CIDs for this case. See below code in\nheap_lock_tuple():\n\n/*\n* Store transaction information of xact locking the tuple.\n*\n* Note: Cmax is meaningless in this context, so don't set it; this avoids\n* possibly generating a useless combo CID. 
Moreover, if we're locking a\n* previously updated tuple, it's important to preserve the Cmax.\n*\n* Also reset the HOT UPDATE bit, but only if there's no update; otherwise\n* we would break the HOT chain.\n*/\ntuple->t_data->t_infomask &= ~HEAP_XMAX_BITS;\ntuple->t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;\ntuple->t_data->t_infomask |= new_infomask;\ntuple->t_data->t_infomask2 |= new_infomask2;\n\nI don't understand why we need to do something special for combo CIDs\nif they are not generated during this operation?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 May 2020 10:30:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, May 12, 2020 at 1:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I don't understand why we need to do something special for combo CIDs\n> if they are not generated during this operation?\n\nHmm. Well I guess if they're not being generated then we don't need to\ndo anything about them, but I still think we should try to work around\nhaving to disable parallelism for a table which is referenced by\nforeign keys.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 13 May 2020 15:09:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, May 14, 2020 at 12:39 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, May 12, 2020 at 1:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I don't understand why we need to do something special for combo CIDs\n> > if they are not generated during this operation?\n>\n> Hmm. 
Well I guess if they're not being generated then we don't need to\n> do anything about them, but I still think we should try to work around\n> having to disable parallelism for a table which is referenced by\n> foreign keys.\n>\n\nOkay, just to be clear, we want to allow parallelism for a table that\nhas foreign keys. Basically, a parallel copy should work while\nloading data into tables having FK references.\n\nTo support that, we need to consider a few things.\na. Currently, we increment the command counter each time we take a key\nshare lock on a tuple during trigger execution. I am really not sure\nif this is required during Copy command execution or we can just\nincrement it once for the copy. If we need to increment the command\ncounter just once for copy command then for the parallel copy we can\nensure that we do it just once at the end of the parallel copy but if\nnot then we might need some special handling.\n\nb. Another point is that after inserting rows we record CTIDs of the\ntuples in the event queue and then once all tuples are processed we\ncall FK trigger for each CTID. Now, with parallelism, the FK checks\nwill be processed once the worker processed one chunk. I don't see\nany problem with it but still, this will be a bit different from what\nwe do in serial case. Do you see any problem with this?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 May 2020 11:47:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, May 14, 2020 at 11:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 14, 2020 at 12:39 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, May 12, 2020 at 1:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > I don't understand why we need to do something special for combo CIDs\n> > > if they are not generated during this operation?\n> >\n> > Hmm. 
Well I guess if they're not being generated then we don't need to\n> > do anything about them, but I still think we should try to work around\n> > having to disable parallelism for a table which is referenced by\n> > foreign keys.\n> >\n>\n> Okay, just to be clear, we want to allow parallelism for a table that\n> has foreign keys. Basically, a parallel copy should work while\n> loading data into tables having FK references.\n>\n> To support that, we need to consider a few things.\n> a. Currently, we increment the command counter each time we take a key\n> share lock on a tuple during trigger execution. I am really not sure\n> if this is required during Copy command execution or we can just\n> increment it once for the copy. If we need to increment the command\n> counter just once for copy command then for the parallel copy we can\n> ensure that we do it just once at the end of the parallel copy but if\n> not then we might need some special handling.\n>\n> b. Another point is that after inserting rows we record CTIDs of the\n> tuples in the event queue and then once all tuples are processed we\n> call FK trigger for each CTID. Now, with parallelism, the FK checks\n> will be processed once the worker processed one chunk. I don't see\n> any problem with it but still, this will be a bit different from what\n> we do in serial case. Do you see any problem with this?\n\nIMHO, it should not be a problem because without parallelism also we\ntrigger the foreign key check when we detect EOF and end of data from\nSTDIN. And, with parallel workers also the worker will assume that it\nhas complete all the work and it can go for the foreign key check is\nonly after the leader receives EOF and end of data from STDIN.\n\nThe only difference is that each worker is not waiting for all the\ndata (from all workers) to get inserted before checking the\nconstraint. 
Moreover, we are not supporting external triggers with\nthe parallel copy, otherwise, we might have to worry that those\ntriggers could do something on the primary table before we check the\nconstraint. I am not sure if there are any other factors that I am\nmissing.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 May 2020 12:22:10 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, May 14, 2020 at 2:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> To support that, we need to consider a few things.\n> a. Currently, we increment the command counter each time we take a key\n> share lock on a tuple during trigger execution. I am really not sure\n> if this is required during Copy command execution or we can just\n> increment it once for the copy. If we need to increment the command\n> counter just once for copy command then for the parallel copy we can\n> ensure that we do it just once at the end of the parallel copy but if\n> not then we might need some special handling.\n\nMy sense is that it would be a lot more sensible to do it at the\n*beginning* of the parallel operation. Once we do it once, we\nshouldn't ever do it again; that's how it works now. Deferring it\nuntil later seems much more likely to break things.\n\n> b. Another point is that after inserting rows we record CTIDs of the\n> tuples in the event queue and then once all tuples are processed we\n> call FK trigger for each CTID. Now, with parallelism, the FK checks\n> will be processed once the worker processed one chunk. I don't see\n> any problem with it but still, this will be a bit different from what\n> we do in serial case. Do you see any problem with this?\n\nI think there could be some problems here. For instance, suppose that\nthere are two entries for different workers for the same CTID. 
If the\nleader were trying to do all the work, they'd be handled\nconsecutively. If they were from completely unrelated processes,\nlocking would serialize them. But group locking won't, so there you\nhave an issue, I think. Also, it's not ideal from a work-distribution\nperspective: one worker could finish early and be unable to help the\nothers.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 14 May 2020 16:20:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, May 15, 2020 at 1:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, May 14, 2020 at 2:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > To support that, we need to consider a few things.\n> > a. Currently, we increment the command counter each time we take a key\n> > share lock on a tuple during trigger execution. I am really not sure\n> > if this is required during Copy command execution or we can just\n> > increment it once for the copy. If we need to increment the command\n> > counter just once for copy command then for the parallel copy we can\n> > ensure that we do it just once at the end of the parallel copy but if\n> > not then we might need some special handling.\n>\n> My sense is that it would be a lot more sensible to do it at the\n> *beginning* of the parallel operation. Once we do it once, we\n> shouldn't ever do it again; that's how it works now. Deferring it\n> until later seems much more likely to break things.\n>\n\nAFAIU, we always increment the command counter after executing the\ncommand. Why do we want to do it differently here?\n\n> > b. Another point is that after inserting rows we record CTIDs of the\n> > tuples in the event queue and then once all tuples are processed we\n> > call FK trigger for each CTID. 
Now, with parallelism, the FK checks\n> > will be processed once the worker processed one chunk. I don't see\n> > any problem with it but still, this will be a bit different from what\n> > we do in serial case. Do you see any problem with this?\n>\n> I think there could be some problems here. For instance, suppose that\n> there are two entries for different workers for the same CTID.\n>\n\nFirst, let me clarify the CTID I have used in my email are for the\ntable in which insertion is happening which means FK table. So, in\nsuch a case, we can't have the same CTIDs queued for different\nworkers. Basically, we use CTID to fetch the row from FK table later\nand form a query to lock (in KEY SHARE mode) the corresponding tuple\nin PK table. Now, it is possible that two different workers try to\nlock the same row of PK table. I am not clear what problem group\nlocking can have in this case because these are non-conflicting locks.\nCan you please elaborate a bit more?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 15 May 2020 09:49:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, May 15, 2020 at 12:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > My sense is that it would be a lot more sensible to do it at the\n> > *beginning* of the parallel operation. Once we do it once, we\n> > shouldn't ever do it again; that's how it works now. Deferring it\n> > until later seems much more likely to break things.\n>\n> AFAIU, we always increment the command counter after executing the\n> command. Why do we want to do it differently here?\n\nHmm, now I'm starting to think that I'm confused about what is under\ndiscussion here. Which CommandCounterIncrement() are we talking about\nhere?\n\n> First, let me clarify the CTID I have used in my email are for the\n> table in which insertion is happening which means FK table. 
So, in\n> such a case, we can't have the same CTIDs queued for different\n> workers. Basically, we use CTID to fetch the row from FK table later\n> and form a query to lock (in KEY SHARE mode) the corresponding tuple\n> in PK table. Now, it is possible that two different workers try to\n> lock the same row of PK table. I am not clear what problem group\n> locking can have in this case because these are non-conflicting locks.\n> Can you please elaborate a bit more?\n\nI'm concerned about two workers trying to take the same lock at the\nsame time. If that's prevented by the buffer locking then I think it's\nOK, but if it's prevented by a heavyweight lock then it's not going to\nwork in this case.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 15 May 2020 09:19:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, May 15, 2020 at 6:49 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, May 15, 2020 at 12:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > My sense is that it would be a lot more sensible to do it at the\n> > > *beginning* of the parallel operation. Once we do it once, we\n> > > shouldn't ever do it again; that's how it works now. Deferring it\n> > > until later seems much more likely to break things.\n> >\n> > AFAIU, we always increment the command counter after executing the\n> > command. Why do we want to do it differently here?\n>\n> Hmm, now I'm starting to think that I'm confused about what is under\n> discussion here. Which CommandCounterIncrement() are we talking about\n> here?\n>\n\nThe one we do after executing a non-readonly command. 
Let me try to\nexplain by example:\n\nCREATE TABLE tab_fk_referenced_chk(refindex INTEGER PRIMARY KEY,\nheight real, weight real);\ninsert into tab_fk_referenced_chk values( 1, 1.1, 100);\nCREATE TABLE tab_fk_referencing_chk(index INTEGER REFERENCES\ntab_fk_referenced_chk(refindex), height real, weight real);\n\nCOPY tab_fk_referencing_chk(index,height,weight) FROM stdin WITH(\nDELIMITER ',');\n1,1.1,100\n1,2.1,200\n1,3.1,300\n\\.\n\nIn the above case, even though we are executing a single command from\nthe user perspective, but the currentCommandId will be four after the\ncommand. One increment will be for the copy command and the other\nthree increments are for locking tuple in PK table\n(tab_fk_referenced_chk) for three tuples in FK table\n(tab_fk_referencing_chk). Now, for parallel workers, it is\n(theoretically) possible that the three tuples are processed by three\ndifferent workers which don't get synced as of now. The question was\ndo we see any kind of problem with this and if so can we just sync it\nup at the end of parallelism.\n\n> > First, let me clarify the CTID I have used in my email are for the\n> > table in which insertion is happening which means FK table. So, in\n> > such a case, we can't have the same CTIDs queued for different\n> > workers. Basically, we use CTID to fetch the row from FK table later\n> > and form a query to lock (in KEY SHARE mode) the corresponding tuple\n> > in PK table. Now, it is possible that two different workers try to\n> > lock the same row of PK table. I am not clear what problem group\n> > locking can have in this case because these are non-conflicting locks.\n> > Can you please elaborate a bit more?\n>\n> I'm concerned about two workers trying to take the same lock at the\n> same time. 
If that's prevented by the buffer locking then I think it's\n> OK, but if it's prevented by a heavyweight lock then it's not going to\n> work in this case.\n>\n\nWe do take buffer lock in exclusive mode before trying to acquire KEY\nSHARE lock on the tuple, so the two workers shouldn't try to acquire\nat the same time. I think you are trying to see if in any case, two\nworkers try to acquire heavyweight lock like tuple lock or something\nlike that to perform the operation then it will create a problem\nbecause due to group locking it will allow such an operation where it\nshould not have been. But I don't think anything of that sort is\nfeasible in COPY operation and if it is then we probably need to\ncarefully block it or find some solution for it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 May 2020 10:18:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi.\n\nWe have made a patch on the lines that were discussed in the previous\nmails. We could achieve up to 9.87X performance improvement. 
The\nimprovement varies from case to case.\n\nExec time in seconds (speedup vs. 0 workers) for five scenarios:\n(A) copy from file, 2 indexes on integer columns, 1 index on text column\n(B) copy from stdin, 2 indexes on integer columns, 1 index on text column\n(C) copy from file, 1 gist index on text column\n(D) copy from file, 3 indexes on integer columns\n(E) copy from stdin, 3 indexes on integer columns\n\nWorkers  (A)               (B)              (C)              (D)             (E)\n0        1162.772(1X)      1176.035(1X)     827.669(1X)      216.171(1X)     217.376(1X)\n1        1110.288(1.05X)   1120.556(1.05X)  747.384(1.11X)   174.242(1.24X)  163.492(1.33X)\n2        635.249(1.83X)    668.18(1.76X)    435.673(1.9X)    133.829(1.61X)  126.516(1.72X)\n4        336.835(3.45X)    346.768(3.39X)   236.406(3.5X)    105.767(2.04X)  107.382(2.02X)\n8        188.577(6.17X)    194.491(6.04X)   148.962(5.56X)   100.708(2.15X)  107.72(2.01X)\n16       126.819(9.17X)    146.402(8.03X)   119.923(6.9X)    97.996(2.2X)    106.531(2.04X)\n20       *117.845(9.87X)*  149.203(7.88X)   138.741(5.96X)   97.94(2.21X)    107.5(2.02X)\n30       127.554(9.11X)    161.218(7.29X)   172.443(4.8X)    98.232(2.2X)    108.778(1.99X)\n\nPosting the initial patch to get feedback.\n\nDesign of the Parallel Copy: The backend to which the \"COPY FROM\" query is\nsubmitted acts as the leader, with the responsibility of reading data from the\nfile/stdin and launching at most n workers, as specified with the\nPARALLEL 'n' option in the \"COPY FROM\" query. The leader populates the\ncommon data required for the workers' execution in the DSM and shares it\nwith the workers. The leader then executes before statement triggers, if\nany exist. The leader populates DSM chunks, which include the start\noffset and chunk size; while populating the chunks it reads as many blocks\nas required into the DSM data blocks from the file. Each block is of 64K\nsize. The leader parses the data to identify a chunk; the existing logic\nfrom CopyReadLineText, which identifies the chunks, was used for this with\nsome changes. 
Leader checks if a free chunk is available to copy the\ninformation; if there is no free chunk, it waits till the required chunk is\nfreed up by the worker and then copies the identified chunk's information\n(offset & chunk size) into the DSM chunks. This process is repeated till\nthe complete file is processed. Simultaneously, the workers cache\nchunks (50 at a time) in local memory and release the chunks to the\nleader for further populating. Each worker processes the chunks that it\ncached and inserts them into the table. The leader waits till all the chunks\npopulated are processed by the workers and then exits.\n\nWe would like to include support of parallel copy for referential integrity\nconstraints and parallelizing copy from binary format files in the future.\nThe above mentioned tests were run with CSV format, a file size of 5.1GB & 10\nmillion records in the table. The postgres configuration and system\nconfiguration used are attached in config.txt.\nBharath, one of my colleagues, and I developed this patch. We would\nlike to thank Amit, Dilip, Robert, Andres, Ants, Kuntal, Alastair, Tomas,\nDavid, Thomas, Andrew & Kyotaro for their thoughts/discussions/suggestions.\n\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Mon, May 18, 2020 at 10:18 AM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Fri, May 15, 2020 at 6:49 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Fri, May 15, 2020 at 12:19 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > > > My sense is that it would be a lot more sensible to do it at the\n> > > > *beginning* of the parallel operation. Once we do it once, we\n> > > > shouldn't ever do it again; that's how it works now. Deferring it\n> > > > until later seems much more likely to break things.\n> > >\n> > > AFAIU, we always increment the\n> > > command counter after executing the\n> > > command. 
Why do we want to do it differently here?\n> >\n> > Hmm, now I'm starting to think that I'm confused about what is under\n> > discussion here. Which CommandCounterIncrement() are we talking about\n> > here?\n> >\n>\n> The one we do after executing a non-readonly command. Let me try to\n> explain by example:\n>\n> CREATE TABLE tab_fk_referenced_chk(refindex INTEGER PRIMARY KEY,\n> height real, weight real);\n> insert into tab_fk_referenced_chk values( 1, 1.1, 100);\n> CREATE TABLE tab_fk_referencing_chk(index INTEGER REFERENCES\n> tab_fk_referenced_chk(refindex), height real, weight real);\n>\n> COPY tab_fk_referencing_chk(index,height,weight) FROM stdin WITH(\n> DELIMITER ',');\n> 1,1.1,100\n> 1,2.1,200\n> 1,3.1,300\n> \\.\n>\n> In the above case, even though we are executing a single command from\n> the user perspective, but the currentCommandId will be four after the\n> command. One increment will be for the copy command and the other\n> three increments are for locking tuple in PK table\n> (tab_fk_referenced_chk) for three tuples in FK table\n> (tab_fk_referencing_chk). Now, for parallel workers, it is\n> (theoretically) possible that the three tuples are processed by three\n> different workers which don't get synced as of now. The question was\n> do we see any kind of problem with this and if so can we just sync it\n> up at the end of parallelism.\n>\n> > > First, let me clarify the CTID I have used in my email are for the\n> > > table in which insertion is happening which means FK table. So, in\n> > > such a case, we can't have the same CTIDs queued for different\n> > > workers. Basically, we use CTID to fetch the row from FK table later\n> > > and form a query to lock (in KEY SHARE mode) the corresponding tuple\n> > > in PK table. Now, it is possible that two different workers try to\n> > > lock the same row of PK table. 
I am not clear what problem group\n> > > locking can have in this case because these are non-conflicting locks.\n> > > Can you please elaborate a bit more?\n> >\n> > I'm concerned about two workers trying to take the same lock at the\n> > same time. If that's prevented by the buffer locking then I think it's\n> > OK, but if it's prevented by a heavyweight lock then it's not going to\n> > work in this case.\n> >\n>\n> We do take buffer lock in exclusive mode before trying to acquire KEY\n> SHARE lock on the tuple, so the two workers shouldn't try to acquire\n> at the same time. I think you are trying to see if in any case, two\n> workers try to acquire heavyweight lock like tuple lock or something\n> like that to perform the operation then it will create a problem\n> because due to group locking it will allow such an operation where it\n> should not have been. But I don't think anything of that sort is\n> feasible in COPY operation and if it is then we probably need to\n> carefully block it or find some solution for it.\n>\n> --\n> With Regards,\n> Amit Kapila.\n> EnterpriseDB: http://www.enterprisedb.com\n>", "msg_date": "Wed, 3 Jun 2020 15:53:24 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, May 18, 2020 at 12:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> In the above case, even though we are executing a single command from\n> the user perspective, but the currentCommandId will be four after the\n> command. One increment will be for the copy command and the other\n> three increments are for locking tuple in PK table\n> (tab_fk_referenced_chk) for three tuples in FK table\n> (tab_fk_referencing_chk). Now, for parallel workers, it is\n> (theoretically) possible that the three tuples are processed by three\n> different workers which don't get synced as of now. 
The question was\n> do we see any kind of problem with this and if so can we just sync it\n> up at the end of parallelism.\n\nI strongly disagree with the idea of \"just sync(ing) it up at the end\nof parallelism\". That seems like a completely unprincipled approach to\nthe problem. Either the command counter increment is important or it's\nnot. If it's not important, maybe we can arrange to skip it in the\nfirst place. If it is important, then it's probably not OK for each\nbackend to be doing it separately.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 3 Jun 2020 12:13:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nOn 2020-06-03 12:13:14 -0400, Robert Haas wrote:\n> On Mon, May 18, 2020 at 12:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > In the above case, even though we are executing a single command from\n> > the user perspective, but the currentCommandId will be four after the\n> > command. One increment will be for the copy command and the other\n> > three increments are for locking tuple in PK table\n> > (tab_fk_referenced_chk) for three tuples in FK table\n> > (tab_fk_referencing_chk). Now, for parallel workers, it is\n> > (theoretically) possible that the three tuples are processed by three\n> > different workers which don't get synced as of now. The question was\n> > do we see any kind of problem with this and if so can we just sync it\n> > up at the end of parallelism.\n\n> I strongly disagree with the idea of \"just sync(ing) it up at the end\n> of parallelism\". That seems like a completely unprincipled approach to\n> the problem. Either the command counter increment is important or it's\n> not. If it's not important, maybe we can arrange to skip it in the\n> first place. 
If it is important, then it's probably not OK for each\n> backend to be doing it separately.\n\nThat scares me too. These command counter increments definitely aren't\nunnecessary in the general case.\n\nEven in the example you share above, aren't we potentially going to\nactually lock rows multiple times from within the same transaction,\ninstead of once? If the command counter increments from within\nri_trigger.c aren't visible to other parallel workers/leader, we'll not\ncorrectly understand that a locked row is invisible to heap_lock_tuple,\nbecause we're not using a new enough snapshot (by dint of not having a\nnew enough cid).\n\nI've not dug through everything that'd potentially cause, but it seems\npretty clearly a no-go from here.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 3 Jun 2020 11:38:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nOn 2020-06-03 15:53:24 +0530, vignesh C wrote:\n> Workers/\n> Exec time (seconds) copy from file,\n> 2 indexes on integer columns\n> 1 index on text column copy from stdin,\n> 2 indexes on integer columns\n> 1 index on text column copy from file, 1 gist index on text column copy\n> from file,\n> 3 indexes on integer columns copy from stdin, 3 indexes on integer columns\n> 0 1162.772(1X) 1176.035(1X) 827.669(1X) 216.171(1X) 217.376(1X)\n> 1 1110.288(1.05X) 1120.556(1.05X) 747.384(1.11X) 174.242(1.24X) 163.492(1.33X)\n> 2 635.249(1.83X) 668.18(1.76X) 435.673(1.9X) 133.829(1.61X) 126.516(1.72X)\n> 4 336.835(3.45X) 346.768(3.39X) 236.406(3.5X) 105.767(2.04X) 107.382(2.02X)\n> 8 188.577(6.17X) 194.491(6.04X) 148.962(5.56X) 100.708(2.15X) 107.72(2.01X)\n> 16 126.819(9.17X) 146.402(8.03X) 119.923(6.9X) 97.996(2.2X) 106.531(2.04X)\n> 20 *117.845(9.87X)* 149.203(7.88X) 138.741(5.96X) 97.94(2.21X) 107.5(2.02)\n> 30 127.554(9.11X) 161.218(7.29X) 172.443(4.8X) 98.232(2.2X) 108.778(1.99X)\n\nHm. 
you don't explicitly mention that in your design, but given how\nsmall the benefits going from 0-1 workers is, I assume the leader\ndoesn't do any \"chunk processing\" on its own?\n\n\n> Design of the Parallel Copy: The backend, to which the \"COPY FROM\" query is\n> submitted acts as leader with the responsibility of reading data from the\n> file/stdin, launching at most n number of workers as specified with\n> PARALLEL 'n' option in the \"COPY FROM\" query. The leader populates the\n> common data required for the workers execution in the DSM and shares it\n> with the workers. The leader then executes before statement triggers if\n> there exists any. Leader populates DSM chunks which includes the start\n> offset and chunk size, while populating the chunks it reads as many blocks\n> as required into the DSM data blocks from the file. Each block is of 64K\n> size. The leader parses the data to identify a chunk, the existing logic\n> from CopyReadLineText which identifies the chunks with some changes was\n> used for this. Leader checks if a free chunk is available to copy the\n> information, if there is no free chunk it waits till the required chunk is\n> freed up by the worker and then copies the identified chunks information\n> (offset & chunk size) into the DSM chunks. This process is repeated till\n> the complete file is processed. Simultaneously, the workers cache the\n> chunks(50) locally into the local memory and release the chunks to the\n> leader for further populating. Each worker processes the chunk that it\n> cached and inserts it into the table. The leader waits till all the chunks\n> populated are processed by the workers and exits.\n\nWhy do we need the local copy of 50 chunks? Copying memory around is far\nfrom free. I don't see why it'd be better to add per-process caching,\nrather than making the DSM bigger? 
I can see some benefit in marking\nmultiple chunks as being processed with one lock acquisition, but I\ndon't think adding a memory copy is a good idea.\n\n\nThis patch *desperately* needs to be split up. It imo is close to\nunreviewable, due to a large amount of changes that just move code\naround without other functional changes being mixed in with the actual\nnew stuff.\n\n\n> /*\n> + * State of the chunk.\n> + */\n> +typedef enum ChunkState\n> +{\n> +\tCHUNK_INIT,\t\t\t\t\t/* initial state of chunk */\n> +\tCHUNK_LEADER_POPULATING,\t/* leader processing chunk */\n> +\tCHUNK_LEADER_POPULATED,\t\t/* leader completed populating chunk */\n> +\tCHUNK_WORKER_PROCESSING,\t/* worker processing chunk */\n> +\tCHUNK_WORKER_PROCESSED\t\t/* worker completed processing chunk */\n> +}ChunkState;\n> +\n> +#define RAW_BUF_SIZE 65536\t\t/* we palloc RAW_BUF_SIZE+1 bytes */\n> +\n> +#define DATA_BLOCK_SIZE RAW_BUF_SIZE\n> +#define RINGSIZE (10 * 1000)\n> +#define MAX_BLOCKS_COUNT 1000\n> +#define WORKER_CHUNK_COUNT 50\t/* should be mod of RINGSIZE */\n> +\n> +#define\tIsParallelCopy()\t\t(cstate->is_parallel)\n> +#define IsLeader()\t\t\t\t(cstate->pcdata->is_leader)\n> +#define IsHeaderLine()\t\t\t(cstate->header_line && cstate->cur_lineno == 1)\n> +\n> +/*\n> + * Copy data block information.\n> + */\n> +typedef struct CopyDataBlock\n> +{\n> +\t/* The number of unprocessed chunks in the current block. */\n> +\tpg_atomic_uint32 unprocessed_chunk_parts;\n> +\n> +\t/*\n> +\t * If the current chunk data is continued into another block,\n> +\t * following_block will have the position where the remaining data need to\n> +\t * be read.\n> +\t */\n> +\tuint32\tfollowing_block;\n> +\n> +\t/*\n> +\t * This flag will be set, when the leader finds out this block can be read\n> +\t * safely by the worker. 
This helps the worker to start processing the chunk\n> +\t * early where the chunk will be spread across many blocks and the worker\n> +\t * need not wait for the complete chunk to be processed.\n> +\t */\n> +\tbool curr_blk_completed;\n> +\tchar data[DATA_BLOCK_SIZE + 1]; /* data read from file */\n> +}CopyDataBlock;\n\nWhat's the + 1 here about?\n\n\n> +/*\n> + * Parallel copy line buffer information.\n> + */\n> +typedef struct ParallelCopyLineBuf\n> +{\n> +\tStringInfoData\t\tline_buf;\n> +\tuint64\t\t\t\tcur_lineno;\t/* line number for error messages */\n> +}ParallelCopyLineBuf;\n\nWhy do we need separate infrastructure for this? We shouldn't duplicate\ninfrastructure unnecessarily.\n\n\n\n> +/*\n> + * Common information that need to be copied to shared memory.\n> + */\n> +typedef struct CopyWorkerCommonData\n> +{\n\nWhy is parallel specific stuff here suddenly not named ParallelCopy*\nanymore? If you introduce a naming like that it imo should be used\nconsistently.\n\n> +\t/* low-level state data */\n> +\tCopyDest copy_dest;\t\t/* type of copy source/destination */\n> +\tint file_encoding;\t/* file or remote side's character encoding */\n> +\tbool need_transcoding;\t/* file encoding diff from server? */\n> +\tbool encoding_embeds_ascii;\t/* ASCII can be non-first byte? */\n> +\n> +\t/* parameters from the COPY command */\n> +\tbool csv_mode;\t\t/* Comma Separated Value format? */\n> +\tbool header_line;\t/* CSV header line? */\n> +\tint null_print_len; /* length of same */\n> +\tbool force_quote_all;\t/* FORCE_QUOTE *? */\n> +\tbool convert_selectively;\t/* do selective binary conversion? */\n> +\n> +\t/* Working state for COPY FROM */\n> +\tAttrNumber num_defaults;\n> +\tOid relid;\n> +}CopyWorkerCommonData;\n\nBut I actually think we shouldn't have this information in two different\nstructs. 
This should exist once, independent of using parallel /\nnon-parallel copy.\n\n\n> +/* List information */\n> +typedef struct ListInfo\n> +{\n> +\tint\tcount;\t\t/* count of attributes */\n> +\n> +\t/* string info in the form info followed by info1, info2... infon */\n> +\tchar info[1];\n> +} ListInfo;\n\nBased on these comments I have no idea what this could be for.\n\n\n> /*\n> - * This keeps the character read at the top of the loop in the buffer\n> - * even if there is more than one read-ahead.\n> + * This keeps the character read at the top of the loop in the buffer\n> + * even if there is more than one read-ahead.\n> + */\n> +#define IF_NEED_REFILL_AND_NOT_EOF_CONTINUE(extralen) \\\n> +if (1) \\\n> +{ \\\n> +\tif (copy_buff_state.raw_buf_ptr + (extralen) >= copy_buff_state.copy_buf_len && !hit_eof) \\\n> +\t{ \\\n> +\t\tif (IsParallelCopy()) \\\n> +\t\t{ \\\n> +\t\t\tcopy_buff_state.chunk_size = prev_chunk_size; /* update previous chunk size */ \\\n> +\t\t\tif (copy_buff_state.block_switched) \\\n> +\t\t\t{ \\\n> +\t\t\t\tpg_atomic_sub_fetch_u32(&copy_buff_state.data_blk_ptr->unprocessed_chunk_parts, 1); \\\n> +\t\t\t\tcopy_buff_state.copy_buf_len = prev_copy_buf_len; \\\n> +\t\t\t} \\\n> +\t\t} \\\n> +\t\tcopy_buff_state.raw_buf_ptr = prev_raw_ptr; /* undo fetch */ \\\n> +\t\tneed_data = true; \\\n> +\t\tcontinue; \\\n> +\t} \\\n> +} else ((void) 0)\n\nI think it's an absolutely clear no-go to add new branches to\nthese. They're *really* hot already, and this is going to sprinkle a\nsignificant amount of new instructions over a lot of places.\n\n\n> +/*\n> + * SET_RAWBUF_FOR_LOAD - Set raw_buf to the shared memory where the file data must\n> + * be read.\n> + */\n> +#define SET_RAWBUF_FOR_LOAD() \\\n> +{ \\\n> +\tShmCopyInfo\t*pcshared_info = cstate->pcdata->pcshared_info; \\\n> +\tuint32 cur_block_pos; \\\n> +\t/* \\\n> +\t * Mark the previous block as completed, worker can start copying this data. 
\\\n> +\t */ \\\n> +\tif (copy_buff_state.data_blk_ptr != copy_buff_state.curr_data_blk_ptr && \\\n> +\t\tcopy_buff_state.data_blk_ptr->curr_blk_completed == false) \\\n> +\t\tcopy_buff_state.data_blk_ptr->curr_blk_completed = true; \\\n> +\t\\\n> +\tcopy_buff_state.data_blk_ptr = copy_buff_state.curr_data_blk_ptr; \\\n> +\tcur_block_pos = WaitGetFreeCopyBlock(pcshared_info); \\\n> +\tcopy_buff_state.curr_data_blk_ptr = &pcshared_info->data_blocks[cur_block_pos]; \\\n> +\t\\\n> +\tif (!copy_buff_state.data_blk_ptr) \\\n> +\t{ \\\n> +\t\tcopy_buff_state.data_blk_ptr = copy_buff_state.curr_data_blk_ptr; \\\n> +\t\tchunk_first_block = cur_block_pos; \\\n> +\t} \\\n> +\telse if (need_data == false) \\\n> +\t\tcopy_buff_state.data_blk_ptr->following_block = cur_block_pos; \\\n> +\t\\\n> +\tcstate->raw_buf = copy_buff_state.curr_data_blk_ptr->data; \\\n> +\tcopy_buff_state.copy_raw_buf = cstate->raw_buf; \\\n> +}\n> +\n> +/*\n> + * END_CHUNK_PARALLEL_COPY - Update the chunk information in shared memory.\n> + */\n> +#define END_CHUNK_PARALLEL_COPY() \\\n> +{ \\\n> +\tif (!IsHeaderLine()) \\\n> +\t{ \\\n> +\t\tShmCopyInfo *pcshared_info = cstate->pcdata->pcshared_info; \\\n> +\t\tChunkBoundaries *chunkBoundaryPtr = &pcshared_info->chunk_boundaries; \\\n> +\t\tif (copy_buff_state.chunk_size) \\\n> +\t\t{ \\\n> +\t\t\tChunkBoundary *chunkInfo = &chunkBoundaryPtr->ring[chunk_pos]; \\\n> +\t\t\t/* \\\n> +\t\t\t * If raw_buf_ptr is zero, unprocessed_chunk_parts would have been \\\n> +\t\t\t * incremented in SEEK_COPY_BUFF_POS. This will happen if the whole \\\n> +\t\t\t * chunk finishes at the end of the current block. If the \\\n> +\t\t\t * new_line_size > raw_buf_ptr, then the new block has only new line \\\n> +\t\t\t * char content. The unprocessed count should not be increased in \\\n> +\t\t\t * this case. 
\\\n> +\t\t\t */ \\\n> +\t\t\tif (copy_buff_state.raw_buf_ptr != 0 && \\\n> +\t\t\t\tcopy_buff_state.raw_buf_ptr > new_line_size) \\\n> +\t\t\t\tpg_atomic_add_fetch_u32(&copy_buff_state.curr_data_blk_ptr->unprocessed_chunk_parts, 1); \\\n> +\t\t\t\\\n> +\t\t\t/* Update chunk size. */ \\\n> +\t\t\tpg_atomic_write_u32(&chunkInfo->chunk_size, copy_buff_state.chunk_size); \\\n> +\t\t\tpg_atomic_write_u32(&chunkInfo->chunk_state, CHUNK_LEADER_POPULATED); \\\n> +\t\t\telog(DEBUG1, \"[Leader] After adding - chunk position:%d, chunk_size:%d\", \\\n> +\t\t\t\t\t\tchunk_pos, copy_buff_state.chunk_size); \\\n> +\t\t\tpcshared_info->populated++; \\\n> +\t\t} \\\n> +\t\telse if (new_line_size) \\\n> +\t\t{ \\\n> +\t\t\t/* \\\n> +\t\t\t * This means only new line char, empty record should be \\\n> +\t\t\t * inserted. \\\n> +\t\t\t */ \\\n> +\t\t\tChunkBoundary *chunkInfo; \\\n> +\t\t\tchunk_pos = UpdateBlockInChunkInfo(cstate, -1, -1, 0, \\\n> +\t\t\t\t\t\t\t\t\t\t\t CHUNK_LEADER_POPULATED); \\\n> +\t\t\tchunkInfo = &chunkBoundaryPtr->ring[chunk_pos]; \\\n> +\t\t\telog(DEBUG1, \"[Leader] Added empty chunk with offset:%d, chunk position:%d, chunk size:%d\", \\\n> +\t\t\t\t\t\t chunkInfo->start_offset, chunk_pos, \\\n> +\t\t\t\t\t\t pg_atomic_read_u32(&chunkInfo->chunk_size)); \\\n> +\t\t\tpcshared_info->populated++; \\\n> +\t\t} \\\n> +\t}\\\n> +\t\\\n> +\t/*\\\n> +\t * All of the read data is processed, reset index & len. In the\\\n> +\t * subsequent read, we will get a new block and copy data in to the\\\n> +\t * new block.\\\n> +\t */\\\n> +\tif (copy_buff_state.raw_buf_ptr == copy_buff_state.copy_buf_len)\\\n> +\t{\\\n> +\t\tcstate->raw_buf_index = 0;\\\n> +\t\tcstate->raw_buf_len = 0;\\\n> +\t}\\\n> +\telse\\\n> +\t\tcstate->raw_buf_len = copy_buff_state.copy_buf_len;\\\n> +}\n\nWhy are these macros? 
They are way way way above a length where that\nmakes any sort of sense.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 3 Jun 2020 12:14:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Jun 4, 2020 at 12:09 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-06-03 12:13:14 -0400, Robert Haas wrote:\n> > On Mon, May 18, 2020 at 12:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > In the above case, even though we are executing a single command from\n> > > the user perspective, but the currentCommandId will be four after the\n> > > command. One increment will be for the copy command and the other\n> > > three increments are for locking tuple in PK table\n> > > (tab_fk_referenced_chk) for three tuples in FK table\n> > > (tab_fk_referencing_chk). Now, for parallel workers, it is\n> > > (theoretically) possible that the three tuples are processed by three\n> > > different workers which don't get synced as of now. The question was\n> > > do we see any kind of problem with this and if so can we just sync it\n> > > up at the end of parallelism.\n>\n> > I strongly disagree with the idea of \"just sync(ing) it up at the end\n> > of parallelism\". That seems like a completely unprincipled approach to\n> > the problem. Either the command counter increment is important or it's\n> > not. If it's not important, maybe we can arrange to skip it in the\n> > first place. If it is important, then it's probably not OK for each\n> > backend to be doing it separately.\n>\n> That scares me too. These command counter increments definitely aren't\n> unnecessary in the general case.\n>\n\nYeah, this is what we want to understand? Can you explain how they\nare useful here? 
AFAIU, heap_lock_tuple doesn't use commandid while\nstoring the transaction information of xact while locking the tuple.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Jun 2020 08:10:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nOn 2020-06-04 08:10:07 +0530, Amit Kapila wrote:\n> On Thu, Jun 4, 2020 at 12:09 AM Andres Freund <andres@anarazel.de> wrote:\n> > > I strongly disagree with the idea of \"just sync(ing) it up at the end\n> > > of parallelism\". That seems like a completely unprincipled approach to\n> > > the problem. Either the command counter increment is important or it's\n> > > not. If it's not important, maybe we can arrange to skip it in the\n> > > first place. If it is important, then it's probably not OK for each\n> > > backend to be doing it separately.\n> >\n> > That scares me too. These command counter increments definitely aren't\n> > unnecessary in the general case.\n> >\n> \n> Yeah, this is what we want to understand? Can you explain how they\n> are useful here? 
AFAIU, heap_lock_tuple doesn't use commandid while\n> storing the transaction information of xact while locking the tuple.\n\nBut the HeapTupleSatisfiesUpdate() call does use it?\n\nAnd even if that weren't an issue, I don't see how it's defensible to\njust randomly break the commandid coherency for parallel copy.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 3 Jun 2020 20:40:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Jun 4, 2020 at 9:10 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-06-04 08:10:07 +0530, Amit Kapila wrote:\n> > On Thu, Jun 4, 2020 at 12:09 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > I strongly disagree with the idea of \"just sync(ing) it up at the end\n> > > > of parallelism\". That seems like a completely unprincipled approach to\n> > > > the problem. Either the command counter increment is important or it's\n> > > > not. If it's not important, maybe we can arrange to skip it in the\n> > > > first place. If it is important, then it's probably not OK for each\n> > > > backend to be doing it separately.\n> > >\n> > > That scares me too. These command counter increments definitely aren't\n> > > unnecessary in the general case.\n> > >\n> >\n> > Yeah, this is what we want to understand? Can you explain how they\n> > are useful here? AFAIU, heap_lock_tuple doesn't use commandid while\n> > storing the transaction information of xact while locking the tuple.\n>\n> But the HeapTupleSatisfiesUpdate() call does use it?\n>\n\nIt won't use 'cid' for lockers or multi-lockers case (AFAICS, there\nis special-case handling for lockers/multi-lockers).
I think it is\nused for updates/deletes.\n\n> And even if that weren't an issue, I don't see how it's defensible to\n> just randomly break the the commandid coherency for parallel copy.\n>\n\nAt this stage, we are evaluating whether there is any need to\nincrement the command counter for foreign key checks or whether it is just\nhappening because we are using some common code to execute the\n\"Select ... For Key Share\" statement during these checks.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Jun 2020 10:21:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Jun 4, 2020 at 12:44 AM Andres Freund <andres@anarazel.de> wrote\n>\n>\n> Hm. you don't explicitly mention that in your design, but given how\n> small the benefits going from 0-1 workers is, I assume the leader\n> doesn't do any \"chunk processing\" on its own?\n>\n\nYes, you are right; the leader does not do any processing. The leader's\nwork is mainly to populate the shared memory with the offset\ninformation for each record.\n\n>\n>\n> > Design of the Parallel Copy: The backend, to which the \"COPY FROM\" query is\n> > submitted acts as leader with the responsibility of reading data from the\n> > file/stdin, launching at most n number of workers as specified with\n> > PARALLEL 'n' option in the \"COPY FROM\" query. The leader populates the\n> > common data required for the workers execution in the DSM and shares it\n> > with the workers. The leader then executes before statement triggers if\n> > there exists any. Leader populates DSM chunks which includes the start\n> > offset and chunk size, while populating the chunks it reads as many blocks\n> > as required into the DSM data blocks from the file. Each block is of 64K\n> > size.
The leader parses the data to identify a chunk, the existing logic\n> > from CopyReadLineText which identifies the chunks with some changes was\n> > used for this. Leader checks if a free chunk is available to copy the\n> > information, if there is no free chunk it waits till the required chunk is\n> > freed up by the worker and then copies the identified chunks information\n> > (offset & chunk size) into the DSM chunks. This process is repeated till\n> > the complete file is processed. Simultaneously, the workers cache the\n> > chunks(50) locally into the local memory and release the chunks to the\n> > leader for further populating. Each worker processes the chunk that it\n> > cached and inserts it into the table. The leader waits till all the chunks\n> > populated are processed by the workers and exits.\n>\n> Why do we need the local copy of 50 chunks? Copying memory around is far\n> from free. I don't see why it'd be better to add per-process caching,\n> rather than making the DSM bigger? I can see some benefit in marking\n> multiple chunks as being processed with one lock acquisition, but I\n> don't think adding a memory copy is a good idea.\n\nWe ran a performance test with a csv data file (5.1GB, 10 million tuples, 2\nindexes on integer columns); the results are given below. We\nnoticed that in some cases the performance is better if we copy the 50\nrecords locally and release the shared memory. We will get better\nbenefits as the number of workers increases.
Thoughts?\n------------------------------------------------------------------------------------------------\nWorkers | Exec time (With local copying | Exec time (Without copying,\n | 50 records & release the | processing record by record)\n | shared memory) |\n------------------------------------------------------------------------------------------------\n0 | 1162.772(1X) | 1152.684(1X)\n2 | 635.249(1.83X) | 647.894(1.78X)\n4 | 336.835(3.45X) | 335.534(3.43X)\n8 | 188.577(6.17 X) | 189.461(6.08X)\n16 | 126.819(9.17X) | 142.730(8.07X)\n20 | 117.845(9.87X) | 146.533(7.87X)\n30 | 127.554(9.11X) | 160.307(7.19X)\n\n> This patch *desperately* needs to be split up. It imo is close to\n> unreviewable, due to a large amount of changes that just move code\n> around without other functional changes being mixed in with the actual\n> new stuff.\n\nI have split the patch, the new split patches are attached.\n\n>\n>\n>\n> > /*\n> > + * State of the chunk.\n> > + */\n> > +typedef enum ChunkState\n> > +{\n> > + CHUNK_INIT, /* initial state of chunk */\n> > + CHUNK_LEADER_POPULATING, /* leader processing chunk */\n> > + CHUNK_LEADER_POPULATED, /* leader completed populating chunk */\n> > + CHUNK_WORKER_PROCESSING, /* worker processing chunk */\n> > + CHUNK_WORKER_PROCESSED /* worker completed processing chunk */\n> > +}ChunkState;\n> > +\n> > +#define RAW_BUF_SIZE 65536 /* we palloc RAW_BUF_SIZE+1 bytes */\n> > +\n> > +#define DATA_BLOCK_SIZE RAW_BUF_SIZE\n> > +#define RINGSIZE (10 * 1000)\n> > +#define MAX_BLOCKS_COUNT 1000\n> > +#define WORKER_CHUNK_COUNT 50 /* should be mod of RINGSIZE */\n> > +\n> > +#define IsParallelCopy() (cstate->is_parallel)\n> > +#define IsLeader() (cstate->pcdata->is_leader)\n> > +#define IsHeaderLine() (cstate->header_line && cstate->cur_lineno == 1)\n> > +\n> > +/*\n> > + * Copy data block information.\n> > + */\n> > +typedef struct CopyDataBlock\n> > +{\n> > + /* The number of unprocessed chunks in the current block. 
*/\n> > + pg_atomic_uint32 unprocessed_chunk_parts;\n> > +\n> > + /*\n> > + * If the current chunk data is continued into another block,\n> > + * following_block will have the position where the remaining data need to\n> > + * be read.\n> > + */\n> > + uint32 following_block;\n> > +\n> > + /*\n> > + * This flag will be set, when the leader finds out this block can be read\n> > + * safely by the worker. This helps the worker to start processing the chunk\n> > + * early where the chunk will be spread across many blocks and the worker\n> > + * need not wait for the complete chunk to be processed.\n> > + */\n> > + bool curr_blk_completed;\n> > + char data[DATA_BLOCK_SIZE + 1]; /* data read from file */\n> > +}CopyDataBlock;\n>\n> What's the + 1 here about?\n\nFixed this, removed +1. That is not needed.\n\n>\n>\n> > +/*\n> > + * Parallel copy line buffer information.\n> > + */\n> > +typedef struct ParallelCopyLineBuf\n> > +{\n> > + StringInfoData line_buf;\n> > + uint64 cur_lineno; /* line number for error messages */\n> > +}ParallelCopyLineBuf;\n>\n> Why do we need separate infrastructure for this? We shouldn't duplicate\n> infrastructure unnecessarily.\n>\n\nThis was required for copying the multiple records locally and\nreleasing the shared memory. I have not changed this, will decide on\nthis based on the decision taken for one of the previous comments.\n\n>\n>\n>\n> > +/*\n> > + * Common information that need to be copied to shared memory.\n> > + */\n> > +typedef struct CopyWorkerCommonData\n> > +{\n>\n> Why is parallel specific stuff here suddenly not named ParallelCopy*\n> anymore? If you introduce a naming like that it imo should be used\n> consistently.\n\nFixed, changed to maintain ParallelCopy in all structs.\n\n>\n> > + /* low-level state data */\n> > + CopyDest copy_dest; /* type of copy source/destination */\n> > + int file_encoding; /* file or remote side's character encoding */\n> > + bool need_transcoding; /* file encoding diff from server? 
*/\n> > + bool encoding_embeds_ascii; /* ASCII can be non-first byte? */\n> > +\n> > + /* parameters from the COPY command */\n> > + bool csv_mode; /* Comma Separated Value format? */\n> > + bool header_line; /* CSV header line? */\n> > + int null_print_len; /* length of same */\n> > + bool force_quote_all; /* FORCE_QUOTE *? */\n> > + bool convert_selectively; /* do selective binary conversion? */\n> > +\n> > + /* Working state for COPY FROM */\n> > + AttrNumber num_defaults;\n> > + Oid relid;\n> > +}CopyWorkerCommonData;\n>\n> But I actually think we shouldn't have this information in two different\n> structs. This should exist once, independent of using parallel /\n> non-parallel copy.\n>\n\nThis structure helps in storing the common data from CopyStateData\nthat are required by the workers. This information will then be\nallocated and stored into the DSM for the worker to retrieve and copy\nit to CopyStateData.\n\n>\n> > +/* List information */\n> > +typedef struct ListInfo\n> > +{\n> > + int count; /* count of attributes */\n> > +\n> > + /* string info in the form info followed by info1, info2... infon */\n> > + char info[1];\n> > +} ListInfo;\n>\n> Based on these comments I have no idea what this could be for.\n>\n\nHave added better comments for this. The following is added: This\nstructure will help in converting a List data type into the below\nstructure format with the count having the number of elements in the\nlist and the info having the List elements appended contiguously. 
This\nconverted structure will be allocated in shared memory and stored in\nDSM for the worker to retrieve and later convert it back to List data\ntype.\n\n>\n> > /*\n> > - * This keeps the character read at the top of the loop in the buffer\n> > - * even if there is more than one read-ahead.\n> > + * This keeps the character read at the top of the loop in the buffer\n> > + * even if there is more than one read-ahead.\n> > + */\n> > +#define IF_NEED_REFILL_AND_NOT_EOF_CONTINUE(extralen) \\\n> > +if (1) \\\n> > +{ \\\n> > + if (copy_buff_state.raw_buf_ptr + (extralen) >= copy_buff_state.copy_buf_len && !hit_eof) \\\n> > + { \\\n> > + if (IsParallelCopy()) \\\n> > + { \\\n> > + copy_buff_state.chunk_size = prev_chunk_size; /* update previous chunk size */ \\\n> > + if (copy_buff_state.block_switched) \\\n> > + { \\\n> > + pg_atomic_sub_fetch_u32(&copy_buff_state.data_blk_ptr->unprocessed_chunk_parts, 1); \\\n> > + copy_buff_state.copy_buf_len = prev_copy_buf_len; \\\n> > + } \\\n> > + } \\\n> > + copy_buff_state.raw_buf_ptr = prev_raw_ptr; /* undo fetch */ \\\n> > + need_data = true; \\\n> > + continue; \\\n> > + } \\\n> > +} else ((void) 0)\n>\n> I think it's an absolutely clear no-go to add new branches to\n> these. They're *really* hot already, and this is going to sprinkle a\n> significant amount of new instructions over a lot of places.\n>\n\nFixed, removed this.\n\n>\n>\n> > +/*\n> > + * SET_RAWBUF_FOR_LOAD - Set raw_buf to the shared memory where the file data must\n> > + * be read.\n> > + */\n> > +#define SET_RAWBUF_FOR_LOAD() \\\n> > +{ \\\n> > + ShmCopyInfo *pcshared_info = cstate->pcdata->pcshared_info; \\\n> > + uint32 cur_block_pos; \\\n> > + /* \\\n> > + * Mark the previous block as completed, worker can start copying this data. 
\\\n> > + */ \\\n> > + if (copy_buff_state.data_blk_ptr != copy_buff_state.curr_data_blk_ptr && \\\n> > + copy_buff_state.data_blk_ptr->curr_blk_completed == false) \\\n> > + copy_buff_state.data_blk_ptr->curr_blk_completed = true; \\\n> > + \\\n> > + copy_buff_state.data_blk_ptr = copy_buff_state.curr_data_blk_ptr; \\\n> > + cur_block_pos = WaitGetFreeCopyBlock(pcshared_info); \\\n> > + copy_buff_state.curr_data_blk_ptr = &pcshared_info->data_blocks[cur_block_pos]; \\\n> > + \\\n> > + if (!copy_buff_state.data_blk_ptr) \\\n> > + { \\\n> > + copy_buff_state.data_blk_ptr = copy_buff_state.curr_data_blk_ptr; \\\n> > + chunk_first_block = cur_block_pos; \\\n> > + } \\\n> > + else if (need_data == false) \\\n> > + copy_buff_state.data_blk_ptr->following_block = cur_block_pos; \\\n> > + \\\n> > + cstate->raw_buf = copy_buff_state.curr_data_blk_ptr->data; \\\n> > + copy_buff_state.copy_raw_buf = cstate->raw_buf; \\\n> > +}\n> > +\n> > +/*\n> > + * END_CHUNK_PARALLEL_COPY - Update the chunk information in shared memory.\n> > + */\n> > +#define END_CHUNK_PARALLEL_COPY() \\\n> > +{ \\\n> > + if (!IsHeaderLine()) \\\n> > + { \\\n> > + ShmCopyInfo *pcshared_info = cstate->pcdata->pcshared_info; \\\n> > + ChunkBoundaries *chunkBoundaryPtr = &pcshared_info->chunk_boundaries; \\\n> > + if (copy_buff_state.chunk_size) \\\n> > + { \\\n> > + ChunkBoundary *chunkInfo = &chunkBoundaryPtr->ring[chunk_pos]; \\\n> > + /* \\\n> > + * If raw_buf_ptr is zero, unprocessed_chunk_parts would have been \\\n> > + * incremented in SEEK_COPY_BUFF_POS. This will happen if the whole \\\n> > + * chunk finishes at the end of the current block. If the \\\n> > + * new_line_size > raw_buf_ptr, then the new block has only new line \\\n> > + * char content. The unprocessed count should not be increased in \\\n> > + * this case. 
\\\n> > + */ \\\n> > + if (copy_buff_state.raw_buf_ptr != 0 && \\\n> > + copy_buff_state.raw_buf_ptr > new_line_size) \\\n> > + pg_atomic_add_fetch_u32(&copy_buff_state.curr_data_blk_ptr->unprocessed_chunk_parts, 1); \\\n> > + \\\n> > + /* Update chunk size. */ \\\n> > + pg_atomic_write_u32(&chunkInfo->chunk_size, copy_buff_state.chunk_size); \\\n> > + pg_atomic_write_u32(&chunkInfo->chunk_state, CHUNK_LEADER_POPULATED); \\\n> > + elog(DEBUG1, \"[Leader] After adding - chunk position:%d, chunk_size:%d\", \\\n> > + chunk_pos, copy_buff_state.chunk_size); \\\n> > + pcshared_info->populated++; \\\n> > + } \\\n> > + else if (new_line_size) \\\n> > + { \\\n> > + /* \\\n> > + * This means only new line char, empty record should be \\\n> > + * inserted. \\\n> > + */ \\\n> > + ChunkBoundary *chunkInfo; \\\n> > + chunk_pos = UpdateBlockInChunkInfo(cstate, -1, -1, 0, \\\n> > + CHUNK_LEADER_POPULATED); \\\n> > + chunkInfo = &chunkBoundaryPtr->ring[chunk_pos]; \\\n> > + elog(DEBUG1, \"[Leader] Added empty chunk with offset:%d, chunk position:%d, chunk size:%d\", \\\n> > + chunkInfo->start_offset, chunk_pos, \\\n> > + pg_atomic_read_u32(&chunkInfo->chunk_size)); \\\n> > + pcshared_info->populated++; \\\n> > + } \\\n> > + }\\\n> > + \\\n> > + /*\\\n> > + * All of the read data is processed, reset index & len. In the\\\n> > + * subsequent read, we will get a new block and copy data in to the\\\n> > + * new block.\\\n> > + */\\\n> > + if (copy_buff_state.raw_buf_ptr == copy_buff_state.copy_buf_len)\\\n> > + {\\\n> > + cstate->raw_buf_index = 0;\\\n> > + cstate->raw_buf_len = 0;\\\n> > + }\\\n> > + else\\\n> > + cstate->raw_buf_len = copy_buff_state.copy_buf_len;\\\n> > +}\n>\n> Why are these macros? 
They are way way way above a length where that\n> makes any sort of sense.\n>\n\nConverted these macros to functions.\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 12 Jun 2020 11:00:59 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi All,\n\nI've spent a little bit of time going through the project discussion that\nhas happened in this email thread, and to start with I have a few questions\nwhich I would like to put here:\n\nQ1) Are we also planning to read the input data in parallel, or is it only\nabout performing the multi-insert operation in parallel? AFAIU, the data\nreading part will be done by the leader process alone, so no parallelism is\ninvolved there.\n\nQ2) How are we going to deal with partitioned tables? I mean, will there\nbe some worker process dedicated to each partition, or how is it? Further,\nthe challenge that I see in case of partitioned tables is that we would\nhave a single input file containing data to be inserted into multiple\ntables (aka partitions), unlike the normal case where all the tuples in the\ninput file would belong to the same table.\n\nQ3) In case of toast tables, there is a possibility of having a single\ntuple in the input file which could be of a very big size (probably in GB),\neventually resulting in a bigger file size. So, in this case, how are we\ngoing to decide the number of worker processes to be launched? I mean,\nalthough the file size is big, the number of tuples to be processed is\njust one or a few of them, so can we decide the number of worker\nprocesses to be launched based on the file size?\n\nQ4) Who is going to process constraints (preferably the deferred\nconstraints) that are supposed to be executed at COMMIT time? 
I mean, is\nit the leader process or the worker process, or in such cases won't we be\nchoosing parallelism at all?\n\nQ5) Do we have any risk of table bloat when the data is loaded in\nparallel? I am just asking this because in case of parallelism there would\nbe multiple processes performing bulk inserts into a table. There is a\nchance that the table file might get extended even if there is some free\nspace in the file being written to, because that space is locked by some\nother worker process, and hence that might result in the creation of a new\nblock for that table. Sorry if I am missing something here.\n\nPlease note that I haven't gone through all the emails in this thread, so\nthere is a possibility that I might have repeated a question that has\nalready been raised and answered here. If that is the case, I am sorry for\nthat, but it would be very helpful if someone could point out that thread\nso that I can go through it. Thank you.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Fri, Jun 12, 2020 at 11:01 AM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Thu, Jun 4, 2020 at 12:44 AM Andres Freund <andres@anarazel.de> wrote\n> >\n> >\n> > Hm. you don't explicitly mention that in your design, but given how\n> > small the benefits going from 0-1 workers is, I assume the leader\n> > doesn't do any \"chunk processing\" on its own?\n> >\n>\n> Yes you are right, the leader does not do any processing, Leader's\n> work is mainly to populate the shared memory with the offset\n> information for each record.\n>\n> >\n> >\n> > > Design of the Parallel Copy: The backend, to which the \"COPY FROM\"\n> query is\n> > > submitted acts as leader with the responsibility of reading data from\n> the\n> > > file/stdin, launching at most n number of workers as specified with\n> > > PARALLEL 'n' option in the \"COPY FROM\" query. 
The leader populates the\n> > > common data required for the workers execution in the DSM and shares it\n> > > with the workers. The leader then executes before statement triggers if\n> > > there exists any. Leader populates DSM chunks which includes the start\n> > > offset and chunk size, while populating the chunks it reads as many\n> blocks\n> > > as required into the DSM data blocks from the file. Each block is of\n> 64K\n> > > size. The leader parses the data to identify a chunk, the existing\n> logic\n> > > from CopyReadLineText which identifies the chunks with some changes was\n> > > used for this. Leader checks if a free chunk is available to copy the\n> > > information, if there is no free chunk it waits till the required\n> chunk is\n> > > freed up by the worker and then copies the identified chunks\n> information\n> > > (offset & chunk size) into the DSM chunks. This process is repeated\n> till\n> > > the complete file is processed. Simultaneously, the workers cache the\n> > > chunks(50) locally into the local memory and release the chunks to the\n> > > leader for further populating. Each worker processes the chunk that it\n> > > cached and inserts it into the table. The leader waits till all the\n> chunks\n> > > populated are processed by the workers and exits.\n> >\n> > Why do we need the local copy of 50 chunks? Copying memory around is far\n> > from free. I don't see why it'd be better to add per-process caching,\n> > rather than making the DSM bigger? I can see some benefit in marking\n> > multiple chunks as being processed with one lock acquisition, but I\n> > don't think adding a memory copy is a good idea.\n>\n> We had run performance with csv data file, 5.1GB, 10million tuples, 2\n> indexes on integer columns, results for the same are given below. We\n> noticed in some cases the performance is better if we copy the 50\n> records locally and release the shared memory. We will get better\n> benefits as the workers increase. 
Thoughts?\n>\n> ------------------------------------------------------------------------------------------------\n> Workers | Exec time (With local copying | Exec time (Without copying,\n> | 50 records & release the | processing record by\n> record)\n> | shared memory) |\n>\n> ------------------------------------------------------------------------------------------------\n> 0 | 1162.772(1X) | 1152.684(1X)\n> 2 | 635.249(1.83X) | 647.894(1.78X)\n> 4 | 336.835(3.45X) | 335.534(3.43X)\n> 8 | 188.577(6.17 X) | 189.461(6.08X)\n> 16 | 126.819(9.17X) | 142.730(8.07X)\n> 20 | 117.845(9.87X) | 146.533(7.87X)\n> 30 | 127.554(9.11X) | 160.307(7.19X)\n>\n> > This patch *desperately* needs to be split up. It imo is close to\n> > unreviewable, due to a large amount of changes that just move code\n> > around without other functional changes being mixed in with the actual\n> > new stuff.\n>\n> I have split the patch, the new split patches are attached.\n>\n> >\n> >\n> >\n> > > /*\n> > > + * State of the chunk.\n> > > + */\n> > > +typedef enum ChunkState\n> > > +{\n> > > + CHUNK_INIT, /* initial state\n> of chunk */\n> > > + CHUNK_LEADER_POPULATING, /* leader processing chunk */\n> > > + CHUNK_LEADER_POPULATED, /* leader completed populating\n> chunk */\n> > > + CHUNK_WORKER_PROCESSING, /* worker processing chunk */\n> > > + CHUNK_WORKER_PROCESSED /* worker completed processing\n> chunk */\n> > > +}ChunkState;\n> > > +\n> > > +#define RAW_BUF_SIZE 65536 /* we palloc RAW_BUF_SIZE+1\n> bytes */\n> > > +\n> > > +#define DATA_BLOCK_SIZE RAW_BUF_SIZE\n> > > +#define RINGSIZE (10 * 1000)\n> > > +#define MAX_BLOCKS_COUNT 1000\n> > > +#define WORKER_CHUNK_COUNT 50 /* should be mod of RINGSIZE */\n> > > +\n> > > +#define IsParallelCopy() (cstate->is_parallel)\n> > > +#define IsLeader()\n> (cstate->pcdata->is_leader)\n> > > +#define IsHeaderLine() (cstate->header_line &&\n> cstate->cur_lineno == 1)\n> > > +\n> > > +/*\n> > > + * Copy data block information.\n> > > + */\n> > > +typedef 
struct CopyDataBlock\n> > > +{\n> > > + /* The number of unprocessed chunks in the current block. */\n> > > + pg_atomic_uint32 unprocessed_chunk_parts;\n> > > +\n> > > + /*\n> > > + * If the current chunk data is continued into another block,\n> > > + * following_block will have the position where the remaining\n> data need to\n> > > + * be read.\n> > > + */\n> > > + uint32 following_block;\n> > > +\n> > > + /*\n> > > + * This flag will be set, when the leader finds out this block\n> can be read\n> > > + * safely by the worker. This helps the worker to start\n> processing the chunk\n> > > + * early where the chunk will be spread across many blocks and\n> the worker\n> > > + * need not wait for the complete chunk to be processed.\n> > > + */\n> > > + bool curr_blk_completed;\n> > > + char data[DATA_BLOCK_SIZE + 1]; /* data read from file */\n> > > +}CopyDataBlock;\n> >\n> > What's the + 1 here about?\n>\n> Fixed this, removed +1. That is not needed.\n>\n> >\n> >\n> > > +/*\n> > > + * Parallel copy line buffer information.\n> > > + */\n> > > +typedef struct ParallelCopyLineBuf\n> > > +{\n> > > + StringInfoData line_buf;\n> > > + uint64 cur_lineno; /* line number\n> for error messages */\n> > > +}ParallelCopyLineBuf;\n> >\n> > Why do we need separate infrastructure for this? We shouldn't duplicate\n> > infrastructure unnecessarily.\n> >\n>\n> This was required for copying the multiple records locally and\n> releasing the shared memory. I have not changed this, will decide on\n> this based on the decision taken for one of the previous comments.\n>\n> >\n> >\n> >\n> > > +/*\n> > > + * Common information that need to be copied to shared memory.\n> > > + */\n> > > +typedef struct CopyWorkerCommonData\n> > > +{\n> >\n> > Why is parallel specific stuff here suddenly not named ParallelCopy*\n> > anymore? 
If you introduce a naming like that it imo should be used\n> > consistently.\n>\n> Fixed, changed to maintain ParallelCopy in all structs.\n>\n> >\n> > > + /* low-level state data */\n> > > + CopyDest copy_dest; /* type of copy\n> source/destination */\n> > > + int file_encoding; /* file or remote side's\n> character encoding */\n> > > + bool need_transcoding; /* file encoding diff\n> from server? */\n> > > + bool encoding_embeds_ascii; /* ASCII can be\n> non-first byte? */\n> > > +\n> > > + /* parameters from the COPY command */\n> > > + bool csv_mode; /* Comma Separated Value\n> format? */\n> > > + bool header_line; /* CSV header line? */\n> > > + int null_print_len; /* length of same */\n> > > + bool force_quote_all; /* FORCE_QUOTE *? */\n> > > + bool convert_selectively; /* do selective\n> binary conversion? */\n> > > +\n> > > + /* Working state for COPY FROM */\n> > > + AttrNumber num_defaults;\n> > > + Oid relid;\n> > > +}CopyWorkerCommonData;\n> >\n> > But I actually think we shouldn't have this information in two different\n> > structs. This should exist once, independent of using parallel /\n> > non-parallel copy.\n> >\n>\n> This structure helps in storing the common data from CopyStateData\n> that are required by the workers. This information will then be\n> allocated and stored into the DSM for the worker to retrieve and copy\n> it to CopyStateData.\n>\n> >\n> > > +/* List information */\n> > > +typedef struct ListInfo\n> > > +{\n> > > + int count; /* count of attributes */\n> > > +\n> > > + /* string info in the form info followed by info1, info2...\n> infon */\n> > > + char info[1];\n> > > +} ListInfo;\n> >\n> > Based on these comments I have no idea what this could be for.\n> >\n>\n> Have added better comments for this. 
The following is added: This\n> structure will help in converting a List data type into the below\n> structure format with the count having the number of elements in the\n> list and the info having the List elements appended contiguously. This\n> converted structure will be allocated in shared memory and stored in\n> DSM for the worker to retrieve and later convert it back to List data\n> type.\n>\n> >\n> > > /*\n> > > - * This keeps the character read at the top of the loop in the buffer\n> > > - * even if there is more than one read-ahead.\n> > > + * This keeps the character read at the top of the loop in the buffer\n> > > + * even if there is more than one read-ahead.\n> > > + */\n> > > +#define IF_NEED_REFILL_AND_NOT_EOF_CONTINUE(extralen) \\\n> > > +if (1) \\\n> > > +{ \\\n> > > + if (copy_buff_state.raw_buf_ptr + (extralen) >=\n> copy_buff_state.copy_buf_len && !hit_eof) \\\n> > > + { \\\n> > > + if (IsParallelCopy()) \\\n> > > + { \\\n> > > + copy_buff_state.chunk_size = prev_chunk_size; /*\n> update previous chunk size */ \\\n> > > + if (copy_buff_state.block_switched) \\\n> > > + { \\\n> > > +\n> pg_atomic_sub_fetch_u32(&copy_buff_state.data_blk_ptr->unprocessed_chunk_parts,\n> 1); \\\n> > > + copy_buff_state.copy_buf_len =\n> prev_copy_buf_len; \\\n> > > + } \\\n> > > + } \\\n> > > + copy_buff_state.raw_buf_ptr = prev_raw_ptr; /* undo\n> fetch */ \\\n> > > + need_data = true; \\\n> > > + continue; \\\n> > > + } \\\n> > > +} else ((void) 0)\n> >\n> > I think it's an absolutely clear no-go to add new branches to\n> > these. 
They're *really* hot already, and this is going to sprinkle a\n> > significant amount of new instructions over a lot of places.\n> >\n>\n> Fixed, removed this.\n>\n> >\n> >\n> > > +/*\n> > > + * SET_RAWBUF_FOR_LOAD - Set raw_buf to the shared memory where the\n> file data must\n> > > + * be read.\n> > > + */\n> > > +#define SET_RAWBUF_FOR_LOAD() \\\n> > > +{ \\\n> > > + ShmCopyInfo *pcshared_info = cstate->pcdata->pcshared_info; \\\n> > > + uint32 cur_block_pos; \\\n> > > + /* \\\n> > > + * Mark the previous block as completed, worker can start\n> copying this data. \\\n> > > + */ \\\n> > > + if (copy_buff_state.data_blk_ptr !=\n> copy_buff_state.curr_data_blk_ptr && \\\n> > > + copy_buff_state.data_blk_ptr->curr_blk_completed ==\n> false) \\\n> > > + copy_buff_state.data_blk_ptr->curr_blk_completed = true;\n> \\\n> > > + \\\n> > > + copy_buff_state.data_blk_ptr =\n> copy_buff_state.curr_data_blk_ptr; \\\n> > > + cur_block_pos = WaitGetFreeCopyBlock(pcshared_info); \\\n> > > + copy_buff_state.curr_data_blk_ptr =\n> &pcshared_info->data_blocks[cur_block_pos]; \\\n> > > + \\\n> > > + if (!copy_buff_state.data_blk_ptr) \\\n> > > + { \\\n> > > + copy_buff_state.data_blk_ptr =\n> copy_buff_state.curr_data_blk_ptr; \\\n> > > + chunk_first_block = cur_block_pos; \\\n> > > + } \\\n> > > + else if (need_data == false) \\\n> > > + copy_buff_state.data_blk_ptr->following_block =\n> cur_block_pos; \\\n> > > + \\\n> > > + cstate->raw_buf = copy_buff_state.curr_data_blk_ptr->data; \\\n> > > + copy_buff_state.copy_raw_buf = cstate->raw_buf; \\\n> > > +}\n> > > +\n> > > +/*\n> > > + * END_CHUNK_PARALLEL_COPY - Update the chunk information in shared\n> memory.\n> > > + */\n> > > +#define END_CHUNK_PARALLEL_COPY() \\\n> > > +{ \\\n> > > + if (!IsHeaderLine()) \\\n> > > + { \\\n> > > + ShmCopyInfo *pcshared_info =\n> cstate->pcdata->pcshared_info; \\\n> > > + ChunkBoundaries *chunkBoundaryPtr =\n> &pcshared_info->chunk_boundaries; \\\n> > > + if (copy_buff_state.chunk_size) \\\n> > > 
+ { \\\n> > > + ChunkBoundary *chunkInfo =\n> &chunkBoundaryPtr->ring[chunk_pos]; \\\n> > > + /* \\\n> > > + * If raw_buf_ptr is zero,\n> unprocessed_chunk_parts would have been \\\n> > > + * incremented in SEEK_COPY_BUFF_POS. This will\n> happen if the whole \\\n> > > + * chunk finishes at the end of the current\n> block. If the \\\n> > > + * new_line_size > raw_buf_ptr, then the new\n> block has only new line \\\n> > > + * char content. The unprocessed count should\n> not be increased in \\\n> > > + * this case. \\\n> > > + */ \\\n> > > + if (copy_buff_state.raw_buf_ptr != 0 && \\\n> > > + copy_buff_state.raw_buf_ptr >\n> new_line_size) \\\n> > > +\n> pg_atomic_add_fetch_u32(&copy_buff_state.curr_data_blk_ptr->unprocessed_chunk_parts,\n> 1); \\\n> > > + \\\n> > > + /* Update chunk size. */ \\\n> > > + pg_atomic_write_u32(&chunkInfo->chunk_size,\n> copy_buff_state.chunk_size); \\\n> > > + pg_atomic_write_u32(&chunkInfo->chunk_state,\n> CHUNK_LEADER_POPULATED); \\\n> > > + elog(DEBUG1, \"[Leader] After adding - chunk\n> position:%d, chunk_size:%d\", \\\n> > > + chunk_pos,\n> copy_buff_state.chunk_size); \\\n> > > + pcshared_info->populated++; \\\n> > > + } \\\n> > > + else if (new_line_size) \\\n> > > + { \\\n> > > + /* \\\n> > > + * This means only new line char, empty record\n> should be \\\n> > > + * inserted. \\\n> > > + */ \\\n> > > + ChunkBoundary *chunkInfo; \\\n> > > + chunk_pos = UpdateBlockInChunkInfo(cstate, -1,\n> -1, 0, \\\n> > > +\n> CHUNK_LEADER_POPULATED); \\\n> > > + chunkInfo = &chunkBoundaryPtr->ring[chunk_pos]; \\\n> > > + elog(DEBUG1, \"[Leader] Added empty chunk with\n> offset:%d, chunk position:%d, chunk size:%d\", \\\n> > > +\n> chunkInfo->start_offset, chunk_pos, \\\n> > > +\n> pg_atomic_read_u32(&chunkInfo->chunk_size)); \\\n> > > + pcshared_info->populated++; \\\n> > > + } \\\n> > > + }\\\n> > > + \\\n> > > + /*\\\n> > > + * All of the read data is processed, reset index & len. 
In the\\\n> > + * subsequent read, we will get a new block and copy data in to\n> the\\\n> > + * new block.\\\n> > + */\\\n> > + if (copy_buff_state.raw_buf_ptr == copy_buff_state.copy_buf_len)\\\n> > + {\\\n> > + cstate->raw_buf_index = 0;\\\n> > + cstate->raw_buf_len = 0;\\\n> > + }\\\n> > + else\\\n> > + cstate->raw_buf_len = copy_buff_state.copy_buf_len;\\\n> > +}\n> >\n> > Why are these macros? They are way way way above a length where that\n> > makes any sort of sense.\n> >\n>\n> Converted these macros to functions.\n>\n>\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>
They're *really* hot already, and this is going to sprinkle a\n> significant amount of new instructions over a lot of places.\n>\n\nFixed, removed this.\n\n>\n>\n> > +/*\n> > + * SET_RAWBUF_FOR_LOAD - Set raw_buf to the shared memory where the file data must\n> > + * be read.\n> > + */\n> > +#define SET_RAWBUF_FOR_LOAD() \\\n> > +{ \\\n> > +     ShmCopyInfo     *pcshared_info = cstate->pcdata->pcshared_info; \\\n> > +     uint32 cur_block_pos; \\\n> > +     /* \\\n> > +      * Mark the previous block as completed, worker can start copying this data. \\\n> > +      */ \\\n> > +     if (copy_buff_state.data_blk_ptr != copy_buff_state.curr_data_blk_ptr && \\\n> > +             copy_buff_state.data_blk_ptr->curr_blk_completed == false) \\\n> > +             copy_buff_state.data_blk_ptr->curr_blk_completed = true; \\\n> > +     \\\n> > +     copy_buff_state.data_blk_ptr = copy_buff_state.curr_data_blk_ptr; \\\n> > +     cur_block_pos = WaitGetFreeCopyBlock(pcshared_info); \\\n> > +     copy_buff_state.curr_data_blk_ptr = &pcshared_info->data_blocks[cur_block_pos]; \\\n> > +     \\\n> > +     if (!copy_buff_state.data_blk_ptr) \\\n> > +     { \\\n> > +             copy_buff_state.data_blk_ptr = copy_buff_state.curr_data_blk_ptr; \\\n> > +             chunk_first_block = cur_block_pos; \\\n> > +     } \\\n> > +     else if (need_data == false) \\\n> > +             copy_buff_state.data_blk_ptr->following_block = cur_block_pos; \\\n> > +     \\\n> > +     cstate->raw_buf = copy_buff_state.curr_data_blk_ptr->data; \\\n> > +     copy_buff_state.copy_raw_buf = cstate->raw_buf; \\\n> > +}\n> > +\n> > +/*\n> > + * END_CHUNK_PARALLEL_COPY - Update the chunk information in shared memory.\n> > + */\n> > +#define END_CHUNK_PARALLEL_COPY() \\\n> > +{ \\\n> > +     if (!IsHeaderLine()) \\\n> > +     { \\\n> > +             ShmCopyInfo *pcshared_info = cstate->pcdata->pcshared_info; \\\n> > +             ChunkBoundaries *chunkBoundaryPtr = &pcshared_info->chunk_boundaries; \\\n> > +   
          if (copy_buff_state.chunk_size) \\\n> > +             { \\\n> > +                     ChunkBoundary *chunkInfo = &chunkBoundaryPtr->ring[chunk_pos]; \\\n> > +                     /* \\\n> > +                      * If raw_buf_ptr is zero, unprocessed_chunk_parts would have been \\\n> > +                      * incremented in SEEK_COPY_BUFF_POS. This will happen if the whole \\\n> > +                      * chunk finishes at the end of the current block. If the \\\n> > +                      * new_line_size > raw_buf_ptr, then the new block has only new line \\\n> > +                      * char content. The unprocessed count should not be increased in \\\n> > +                      * this case. \\\n> > +                      */ \\\n> > +                     if (copy_buff_state.raw_buf_ptr != 0 && \\\n> > +                             copy_buff_state.raw_buf_ptr > new_line_size) \\\n> > +                             pg_atomic_add_fetch_u32(&copy_buff_state.curr_data_blk_ptr->unprocessed_chunk_parts, 1); \\\n> > +                     \\\n> > +                     /* Update chunk size. */ \\\n> > +                     pg_atomic_write_u32(&chunkInfo->chunk_size, copy_buff_state.chunk_size); \\\n> > +                     pg_atomic_write_u32(&chunkInfo->chunk_state, CHUNK_LEADER_POPULATED); \\\n> > +                     elog(DEBUG1, \"[Leader] After adding - chunk position:%d, chunk_size:%d\", \\\n> > +                                             chunk_pos, copy_buff_state.chunk_size); \\\n> > +                     pcshared_info->populated++; \\\n> > +             } \\\n> > +             else if (new_line_size) \\\n> > +             { \\\n> > +                     /* \\\n> > +                      * This means only new line char, empty record should be \\\n> > +                      * inserted. 
\\\n> > +                      */ \\\n> > +                     ChunkBoundary *chunkInfo; \\\n> > +                     chunk_pos = UpdateBlockInChunkInfo(cstate, -1, -1, 0, \\\n> > +                                                                                        CHUNK_LEADER_POPULATED); \\\n> > +                     chunkInfo = &chunkBoundaryPtr->ring[chunk_pos]; \\\n> > +                     elog(DEBUG1, \"[Leader] Added empty chunk with offset:%d, chunk position:%d, chunk size:%d\", \\\n> > +                                              chunkInfo->start_offset, chunk_pos, \\\n> > +                                              pg_atomic_read_u32(&chunkInfo->chunk_size)); \\\n> > +                     pcshared_info->populated++; \\\n> > +             } \\\n> > +     }\\\n> > +     \\\n> > +     /*\\\n> > +      * All of the read data is processed, reset index & len. In the\\\n> > +      * subsequent read, we will get a new block and copy data in to the\\\n> > +      * new block.\\\n> > +      */\\\n> > +     if (copy_buff_state.raw_buf_ptr == copy_buff_state.copy_buf_len)\\\n> > +     {\\\n> > +             cstate->raw_buf_index = 0;\\\n> > +             cstate->raw_buf_len = 0;\\\n> > +     }\\\n> > +     else\\\n> > +             cstate->raw_buf_len = copy_buff_state.copy_buf_len;\\\n> > +}\n>\n> Why are these macros? 
They are way way way above a length where that\n> makes any sort of sense.\n>\n\nConverted these macros to functions.\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 12 Jun 2020 16:57:40 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Jun 12, 2020 at 4:57 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi All,\n>\n> I've spent little bit of time going through the project discussion that has happened in this email thread and to start with I have few questions which I would like to put here:\n>\n> Q1) Are we also planning to read the input data in parallel or is it only about performing the multi-insert operation in parallel? AFAIU, the data reading part will be done by the leader process alone so no parallelism is involved there.\n>\n\nYes, your understanding is correct.\n\n> Q2) How are we going to deal with the partitioned tables?\n>\n\nI haven't studied the patch but my understanding is that we will\nsupport parallel copy for partitioned tables with a few restrictions\nas explained in my earlier email [1]. See, Case-2 (b) in the email.\n\n> I mean will there be some worker process dedicated for each partition or how is it?\n\nNo, the split is just based on the input; otherwise each worker\nshould insert as we would have done without any workers.\n\n> Q3) Incase of toast tables, there is a possibility of having a single tuple in the input file which could be of a very big size (probably in GB) eventually resulting in a bigger file size. So, in this case, how are we going to decide the number of worker processes to be launched. 
I mean, although the file size is big, but the number of tuples to be processed is just one or few of them, so, can we decide the number of the worker processes to be launched based on the file size?\n>\n\nYeah, such situations would be tricky, so we should have an option for\nuser to specify the number of workers.\n\n> Q4) Who is going to process constraints (preferably the deferred constraint) that is supposed to be executed at the COMMIT time? I mean is it the leader process or the worker process or in such cases we won't be choosing the parallelism at all?\n>\n\nIn the first version, we won't do parallelism for this. Again, see\none of my earlier email [1] where I have explained this and other\ncases where we won't be supporting parallel copy.\n\n> Q5) Do we have any risk of table bloating when the data is loaded in parallel. I am just asking this because incase of parallelism there would be multiple processes performing bulk insert into a table. There is a chance that the table file might get extended even if there is some space into the file being written into, but that space is locked by some other worker process and hence that might result in a creation of a new block for that table. Sorry, if I am missing something here.\n>\n\nHmm, each worker will operate at page level, after first insertion,\nthe same worker will try to insert in the same page in which it has\ninserted last, so there shouldn't be such a problem.\n\n> Please note that I haven't gone through all the emails in this thread so there is a possibility that I might have repeated the question that has already been raised and answered here. If that is the case, I am sorry for that, but it would be very helpful if someone could point out that thread so that I can go through it. Thank you.\n>\n\nNo problem, I understand sometimes it is difficult to go through each\nand every email especially when the discussion is long. 
Anyway,\nthanks for showing the interest in the patch.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BANNEaMJCCXm4naweP5PLY6LhJMvGo_V7-Pnfbh6GsOA%40mail.gmail.com\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 13 Jun 2020 09:42:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nAttached is the patch supporting parallel copy for binary format files.\n\nThe performance improvement achieved with different workers is as shown\nbelow. Dataset used has 10 million tuples and is of 5.3GB size.\n\nTest case 1 (exec time in sec): copy from binary file, 2 indexes on\ninteger columns and 1 index on text column\nTest case 2 (exec time in sec): copy from binary file, 1 gist index on\ntext column\nTest case 3 (exec time in sec): copy from binary file, 3 indexes on\ninteger columns\n\nparallel workers   test case 1         test case 2         test case 3\n0                  1106.899(1X)        772.758(1X)         171.338(1X)\n1                  1094.165(1.01X)     757.365(1.02X)      163.018(1.05X)\n2                  618.397(1.79X)      428.304(1.8X)       117.508(1.46X)\n4                  320.511(3.45X)      231.938(3.33X)      80.297(2.13X)\n8                  172.462(6.42X)      150.212(5.14X)      *71.518(2.39X)*\n16                 110.460(10.02X)     *124.929(6.18X)*    91.308(1.88X)\n20                 *98.470(11.24X)*    137.313(5.63X)      95.289(1.79X)\n30                 109.229(10.13X)     173.54(4.45X)       95.799(1.78X)\n\nDesign followed for developing this patch:\n\nLeader reads data from the file into the DSM data blocks each of 64K size.\nIt also identifies each tuple data block id, start offset, end offset,\ntuple size and updates this information in the ring data structure. 
Workers\nparallely read the tuple information from the ring data structure, the\nactual tuple data from the data blocks and parallely insert the tuples into\nthe table.\n\nPlease note that this patch can be applied on the series of patches that\nwere posted previously[1] for parallel copy for csv/text files.\nThe correct order to apply all the patches is -\n0001-Copy-code-readjustment-to-support-parallel-copy.patch\n<https://www.postgresql.org/message-id/attachment/111463/0001-Copy-code-readjustment-to-support-parallel-copy.patch>\n0002-Framework-for-leader-worker-in-parallel-copy.patch\n<https://www.postgresql.org/message-id/attachment/111465/0002-Framework-for-leader-worker-in-parallel-copy.patch>\n0003-Allow-copy-from-command-to-process-data-from-file-ST.patch\n<https://www.postgresql.org/message-id/attachment/111464/0003-Allow-copy-from-command-to-process-data-from-file-ST.patch>\n0004-Documentation-for-parallel-copy.patch\n<https://www.postgresql.org/message-id/attachment/111466/0004-Documentation-for-parallel-copy.patch>\nand\n0005-Parallel-Copy-For-Binary-Format-Files.patch\n\nThe above tests were run with the configuration attached config.txt, which\nis the same used for performance tests of csv/text files posted earlier in\nthis mail chain.\n\nRequest the community to take this patch up for review along with the\nparallel copy for csv/text file patches and provide feedback.\n\n[1] -\nhttps://www.postgresql.org/message-id/CALDaNm3uyHpD9sKoFtB0EnMO8DLuD6H9pReFm%3Dtm%3D9ccEWuUVQ%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 15 Jun 2020 16:39:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Thanks Amit for the clarifications. 
Regarding partitioned table, one of the\nquestion was - if we are loading data into a partitioned table using COPY\ncommand, then the input file would contain tuples for different tables\n(partitions) unlike the normal table case where all the tuples in the input\nfile would belong to the same table. So, in such a case, how are we going\nto accumulate tuples into the DSM? I mean will the leader process check\nwhich tuple needs to be routed to which partition and accordingly\naccumulate them into the DSM. For e.g. let's say in the input data file we\nhave 10 tuples where the 1st tuple belongs to partition1, 2nd belongs to\npartition2 and likewise. So, in such cases, will the leader process\naccumulate all the tuples belonging to partition1 into one DSM and tuples\nbelonging to partition2 into some other DSM and assign them to the worker\nprocess or we have taken some other approach to handle this scenario?\n\nFurther, I haven't got much time to look into the links that you have\nshared in your previous response. Will have a look into those and will also\nslowly start looking into the patches as and when I get some time. Thank\nyou.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Sat, Jun 13, 2020 at 9:42 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Jun 12, 2020 at 4:57 PM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> >\n> > Hi All,\n> >\n> > I've spent little bit of time going through the project discussion that\n> has happened in this email thread and to start with I have few questions\n> which I would like to put here:\n> >\n> > Q1) Are we also planning to read the input data in parallel or is it\n> only about performing the multi-insert operation in parallel? 
AFAIU, the\n> data reading part will be done by the leader process alone so no\n> parallelism is involved there.\n> >\n>\n> Yes, your understanding is correct.\n>\n> > Q2) How are we going to deal with the partitioned tables?\n> >\n>\n> I haven't studied the patch but my understanding is that we will\n> support parallel copy for partitioned tables with a few restrictions\n> as explained in my earlier email [1]. See, Case-2 (b) in the email.\n>\n> > I mean will there be some worker process dedicated for each partition or\n> how is it?\n>\n> No, it the split is just based on the input, otherwise each worker\n> should insert as we would have done without any workers.\n>\n> > Q3) Incase of toast tables, there is a possibility of having a single\n> tuple in the input file which could be of a very big size (probably in GB)\n> eventually resulting in a bigger file size. So, in this case, how are we\n> going to decide the number of worker processes to be launched. I mean,\n> although the file size is big, but the number of tuples to be processed is\n> just one or few of them, so, can we decide the number of the worker\n> processes to be launched based on the file size?\n> >\n>\n> Yeah, such situations would be tricky, so we should have an option for\n> user to specify the number of workers.\n>\n> > Q4) Who is going to process constraints (preferably the deferred\n> constraint) that is supposed to be executed at the COMMIT time? I mean is\n> it the leader process or the worker process or in such cases we won't be\n> choosing the parallelism at all?\n> >\n>\n> In the first version, we won't do parallelism for this. Again, see\n> one of my earlier email [1] where I have explained this and other\n> cases where we won't be supporting parallel copy.\n>\n> > Q5) Do we have any risk of table bloating when the data is loaded in\n> parallel. I am just asking this because incase of parallelism there would\n> be multiple processes performing bulk insert into a table. 
There is a\n> chance that the table file might get extended even if there is some space\n> into the file being written into, but that space is locked by some other\n> worker process and hence that might result in a creation of a new block for\n> that table. Sorry, if I am missing something here.\n> >\n>\n> Hmm, each worker will operate at page level, after first insertion,\n> the same worker will try to insert in the same page in which it has\n> inserted last, so there shouldn't be such a problem.\n>\n> > Please note that I haven't gone through all the emails in this thread so\n> there is a possibility that I might have repeated the question that has\n> already been raised and answered here. If that is the case, I am sorry for\n> that, but it would be very helpful if someone could point out that thread\n> so that I can go through it. Thank you.\n> >\n>\n> No problem, I understand sometimes it is difficult to go through each\n> and every email especially when the discussion is long. Anyway,\n> thanks for showing the interest in the patch.\n>\n> [1] -\n> https://www.postgresql.org/message-id/CAA4eK1%2BANNEaMJCCXm4naweP5PLY6LhJMvGo_V7-Pnfbh6GsOA%40mail.gmail.com\n>\n>\n> --\n> With Regards,\n> Amit Kapila.\n> EnterpriseDB: http://www.enterprisedb.com\n>", "msg_date": "Mon, 15 Jun 2020 19:41:03 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, Jun 15, 2020 at 7:41 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Thanks Amit for the clarifications. Regarding partitioned table, one of the question was - if we are loading data into a partitioned table using COPY command, then the input file would contain tuples for different tables (partitions) unlike the normal table case where all the tuples in the input file would belong to the same table. So, in such a case, how are we going to accumulate tuples into the DSM? I mean will the leader process check which tuple needs to be routed to which partition and accordingly accumulate them into the DSM. For e.g. let's say in the input data file we have 10 tuples where the 1st tuple belongs to partition1, 2nd belongs to partition2 and likewise. So, in such cases, will the leader process accumulate all the tuples belonging to partition1 into one DSM and tuples belonging to partition2 into some other DSM and assign them to the worker process or we have taken some other approach to handle this scenario?\n>\n\nNo, all the tuples (for all partitions) will be accumulated in a\nsingle DSM and the workers/leader will route the tuple to an\nappropriate partition.\n\n> Further, I haven't got much time to look into the links that you have shared in your previous response. Will have a look into those and will also slowly start looking into the patches as and when I get some time. 
Thank you.\n>\n\nYeah, it will be good if you go through all the emails once because\nmost of the decisions (and design) in the patch is supposed to be\nbased on the discussion in this thread.\n\nNote - Please don't top post, try to give inline replies.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Jun 2020 15:21:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nI have included tests for parallel copy feature & few bugs that were\nidentified during testing have been fixed. Attached patches for the\nsame.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Tue, Jun 16, 2020 at 3:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jun 15, 2020 at 7:41 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Thanks Amit for the clarifications. Regarding partitioned table, one of the question was - if we are loading data into a partitioned table using COPY command, then the input file would contain tuples for different tables (partitions) unlike the normal table case where all the tuples in the input file would belong to the same table. So, in such a case, how are we going to accumulate tuples into the DSM? I mean will the leader process check which tuple needs to be routed to which partition and accordingly accumulate them into the DSM. For e.g. let's say in the input data file we have 10 tuples where the 1st tuple belongs to partition1, 2nd belongs to partition2 and likewise. 
So, in such cases, will the leader process accumulate all the tuples belonging to partition1 into one DSM and tuples belonging to partition2 into some other DSM and assign them to the worker process or we have taken some other approach to handle this scenario?\n> >\n>\n> No, all the tuples (for all partitions) will be accumulated in a\n> single DSM and the workers/leader will route the tuple to an\n> appropriate partition.\n>\n> > Further, I haven't got much time to look into the links that you have shared in your previous response. Will have a look into those and will also slowly start looking into the patches as and when I get some time. Thank you.\n> >\n>\n> Yeah, it will be good if you go through all the emails once because\n> most of the decisions (and design) in the patch is supposed to be\n> based on the discussion in this thread.\n>\n> Note - Please don't top post, try to give inline replies.\n>\n> --\n> With Regards,\n> Amit Kapila.\n> EnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 17 Jun 2020 09:40:09 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, Jun 15, 2020 at 4:39 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> The above tests were run with the configuration attached config.txt, which is the same used for performance tests of csv/text files posted earlier in this mail chain.\n>\n> Request the community to take this patch up for review along with the parallel copy for csv/text file patches and provide feedback.\n>\n\nI had reviewed the patch, few comments:\n+\n+ /*\n+ * Parallel copy for binary formatted files\n+ */\n+ ParallelCopyDataBlock *curr_data_block;\n+ ParallelCopyDataBlock *prev_data_block;\n+ uint32 curr_data_offset;\n+ uint32 curr_block_pos;\n+ ParallelCopyTupleInfo curr_tuple_start_info;\n+ ParallelCopyTupleInfo curr_tuple_end_info;\n } CopyStateData;\n\n The new members added should be present in 
ParallelCopyData\n\n+ if (cstate->curr_tuple_start_info.block_id ==\ncstate->curr_tuple_end_info.block_id)\n+ {\n+ elog(DEBUG1,\"LEADER - tuple lies in a single data block\");\n+\n+ line_size = cstate->curr_tuple_end_info.offset -\ncstate->curr_tuple_start_info.offset + 1;\n+\npg_atomic_add_fetch_u32(&pcshared_info->data_blocks[cstate->curr_tuple_start_info.block_id].unprocessed_line_parts,\n1);\n+ }\n+ else\n+ {\n+ uint32 following_block_id =\npcshared_info->data_blocks[cstate->curr_tuple_start_info.block_id].following_block;\n+\n+ elog(DEBUG1,\"LEADER - tuple is spread across data blocks\");\n+\n+ line_size = DATA_BLOCK_SIZE -\ncstate->curr_tuple_start_info.offset -\n+\npcshared_info->data_blocks[cstate->curr_tuple_start_info.block_id].skip_bytes;\n+\n+\npg_atomic_add_fetch_u32(&pcshared_info->data_blocks[cstate->curr_tuple_start_info.block_id].unprocessed_line_parts,\n1);\n+\n+ while (following_block_id !=\ncstate->curr_tuple_end_info.block_id)\n+ {\n+ line_size = line_size + DATA_BLOCK_SIZE -\npcshared_info->data_blocks[following_block_id].skip_bytes;\n+\n+\npg_atomic_add_fetch_u32(&pcshared_info->data_blocks[following_block_id].unprocessed_line_parts,\n1);\n+\n+ following_block_id =\npcshared_info->data_blocks[following_block_id].following_block;\n+\n+ if (following_block_id == -1)\n+ break;\n+ }\n+\n+ if (following_block_id != -1)\n+\npg_atomic_add_fetch_u32(&pcshared_info->data_blocks[following_block_id].unprocessed_line_parts,\n1);\n+\n+ line_size = line_size + cstate->curr_tuple_end_info.offset + 1;\n+ }\n\nline_size can be set as and when we process the tuple from\nCopyReadBinaryTupleLeader and this can be set at the end. 
That way the\nabove code can be removed.\n\n+\n+ /*\n+ * Parallel copy for binary formatted files\n+ */\n+ ParallelCopyDataBlock *curr_data_block;\n+ ParallelCopyDataBlock *prev_data_block;\n+ uint32 curr_data_offset;\n+ uint32 curr_block_pos;\n+ ParallelCopyTupleInfo curr_tuple_start_info;\n+ ParallelCopyTupleInfo curr_tuple_end_info;\n } CopyStateData;\n\ncurr_block_pos variable is present in ParallelCopyShmInfo, we could\nuse it and remove from here.\ncurr_data_offset, similar variable raw_buf_index is present in\nCopyStateData, we could use it and remove from here.\n\n+ if (cstate->curr_data_offset + sizeof(fld_count) >= (DATA_BLOCK_SIZE - 1))\n+ {\n+ ParallelCopyDataBlock *data_block = NULL;\n+ uint8 movebytes = 0;\n+\n+ block_pos = WaitGetFreeCopyBlock(pcshared_info);\n+\n+ movebytes = DATA_BLOCK_SIZE - cstate->curr_data_offset;\n+\n+ cstate->curr_data_block->skip_bytes = movebytes;\n+\n+ data_block = &pcshared_info->data_blocks[block_pos];\n+\n+ if (movebytes > 0)\n+ memmove(&data_block->data[0],\n&cstate->curr_data_block->data[cstate->curr_data_offset],\n+ movebytes);\n+\n+ elog(DEBUG1, \"LEADER - field count is spread across data blocks -\nmoved %d bytes from current block %u to %u block\",\n+ movebytes, cstate->curr_block_pos, block_pos);\n+\n+ readbytes = CopyGetData(cstate, &data_block->data[movebytes], 1,\n(DATA_BLOCK_SIZE - movebytes));\n+\n+ elog(DEBUG1, \"LEADER - bytes read from file after field count is\nmoved to next data block %d\", readbytes);\n+\n+ if (cstate->reached_eof)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n+ errmsg(\"unexpected EOF in COPY data\")));\n+\n+ cstate->curr_data_block = data_block;\n+ cstate->curr_data_offset = 0;\n+ cstate->curr_block_pos = block_pos;\n+ }\n\nThis code is duplicate in CopyReadBinaryTupleLeader &\nCopyReadBinaryAttributeLeader. 
We could factor this into a common function and reuse it.\n\n+/*\n+ * CopyReadBinaryAttributeWorker - leader identifies boundaries/offsets\n+ * for each attribute/column, it moves on to next data block if the\n+ * attribute/column is spread across data blocks.\n+ */\n+static pg_attribute_always_inline Datum\n+CopyReadBinaryAttributeWorker(CopyState cstate, int column_no,\n+                              FmgrInfo *flinfo, Oid typioparam, int32 typmod, bool *isnull)\n+{\n+     int32 fld_size;\n+     Datum result;\n\ncolumn_no is not used; it can be removed.\n\n+ if (fld_count == -1)\n+ {\n+     /*\n+      * Received EOF marker.  In a V3-protocol copy, wait for the\n+      * protocol-level EOF, and complain if it doesn't come immediately.\n+      * This ensures that we correctly handle CopyFail, if client chooses\n+      * to send that now.\n+      *\n+      * Note that we MUST NOT try to read more data in an old-protocol\n+      * copy, since there is no protocol-level EOF marker then.  We could\n+      * go either way for copy from file, but choose to throw error if\n+      * there's data after the EOF marker, for consistency with the\n+      * new-protocol case.\n+      */\n+     char dummy;\n+\n+     if (cstate->copy_dest != COPY_OLD_FE &&\n+         CopyGetData(cstate, &dummy, 1, 1) > 0)\n+         ereport(ERROR,\n+                 (errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n+                  errmsg(\"received copy data after EOF marker\")));\n+     return true;\n+ }\n+\n+ if (fld_count != attr_count)\n+     ereport(ERROR,\n+             (errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n+              errmsg(\"row field count is %d, expected %d\",\n+                     (int) fld_count, attr_count)));\n+\n+ cstate->curr_tuple_start_info.block_id = cstate->curr_block_pos;\n+ cstate->curr_tuple_start_info.offset = cstate->curr_data_offset;\n+ cstate->curr_data_offset = cstate->curr_data_offset + sizeof(fld_count);\n+ new_block_pos = cstate->curr_block_pos;\n+\n+ foreach(cur, cstate->attnumlist)\n+ {\n+     int attnum = lfirst_int(cur);\n+     int m = attnum - 1;\n+     Form_pg_attribute att = TupleDescAttr(tupDesc, m);\n\nThe above code is present in NextCopyFrom & CopyReadBinaryTupleLeader;\ncheck if we
can make a common function, or we could use NextCopyFrom as\nit is.\n\n+ memcpy(&fld_count, &cstate->curr_data_block->data[cstate->curr_data_offset], sizeof(fld_count));\n+ fld_count = (int16) pg_ntoh16(fld_count);\n+\n+ if (fld_count == -1)\n+ {\n+     return true;\n+ }\n\nShould this be an assert in the CopyReadBinaryTupleWorker function, as this\ncheck is already done in the leader?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Jun 2020 18:41:57 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nI just got some time to review the first patch in the list, i.e.\n0001-Copy-code-readjustment-to-support-parallel-copy.patch. As the patch\nname suggests, it is just trying to reshuffle the existing code for the COPY\ncommand here and there. There are no extra changes added in the patch as\nsuch, but I still do have some review comments, please have a look:\n\n1) Can you please add some comments atop the new function\nPopulateAttributes() describing its functionality in detail? Further, this\nnew function contains the code from BeginCopy() to set attribute-level\noptions used with COPY FROM such as FORCE_QUOTE, FORCE_NOT_NULL, FORCE_NULL\netc. in cstate, and along with that it also copies the code from BeginCopy()\nto set other info such as the client encoding type, encoding conversion etc.\nHence, I think it would be good to give it some better name, basically\nsomething that matches what it is actually doing.\n\n2) Again, the name of the new function CheckCopyFromValidity() doesn't\nlook good to me. From the function name it appears as if it does a sanity\ncheck of the entire COPY FROM command, but actually it is just doing the\nsanity check for the target relation specified with COPY FROM.
So, probably\nsomething like CheckTargetRelValidity would look more sensible, I think?\nTBH, I am not good at naming the functions so you can always ignore my\nsuggestions about function and variable names :)\n\n3) Any reason for not making CheckCopyFromValidity as a macro instead of a\nnew function. It is just doing the sanity check for the target relation.\n\n4) Earlier in CopyReadLine() function while trying to clear the EOL marker\nfrom cstate->line_buf.data (copied data), we were not checking if the line\nread by CopyReadLineText() function is a header line or not, but I can see\nthat your patch checks that before clearing the EOL marker. Any reason for\nthis extra check?\n\n5) I noticed the below spurious line removal in the patch.\n\n@@ -3839,7 +3953,6 @@ static bool\n CopyReadLine(CopyState cstate)\n {\n bool result;\n-\n\nPlease note that I haven't got a chance to look into other patches as of\nnow. I will do that whenever possible. Thank you.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Fri, Jun 12, 2020 at 11:01 AM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Thu, Jun 4, 2020 at 12:44 AM Andres Freund <andres@anarazel.de> wrote\n> >\n> >\n> > Hm. you don't explicitly mention that in your design, but given how\n> > small the benefits going from 0-1 workers is, I assume the leader\n> > doesn't do any \"chunk processing\" on its own?\n> >\n>\n> Yes you are right, the leader does not do any processing, Leader's\n> work is mainly to populate the shared memory with the offset\n> information for each record.\n>\n> >\n> >\n> > > Design of the Parallel Copy: The backend, to which the \"COPY FROM\"\n> query is\n> > > submitted acts as leader with the responsibility of reading data from\n> the\n> > > file/stdin, launching at most n number of workers as specified with\n> > > PARALLEL 'n' option in the \"COPY FROM\" query. 
The leader populates the\n> > > common data required for the workers execution in the DSM and shares it\n> > > with the workers. The leader then executes before statement triggers if\n> > > there exists any. Leader populates DSM chunks which includes the start\n> > > offset and chunk size, while populating the chunks it reads as many\n> blocks\n> > > as required into the DSM data blocks from the file. Each block is of\n> 64K\n> > > size. The leader parses the data to identify a chunk, the existing\n> logic\n> > > from CopyReadLineText which identifies the chunks with some changes was\n> > > used for this. Leader checks if a free chunk is available to copy the\n> > > information, if there is no free chunk it waits till the required\n> chunk is\n> > > freed up by the worker and then copies the identified chunks\n> information\n> > > (offset & chunk size) into the DSM chunks. This process is repeated\n> till\n> > > the complete file is processed. Simultaneously, the workers cache the\n> > > chunks(50) locally into the local memory and release the chunks to the\n> > > leader for further populating. Each worker processes the chunk that it\n> > > cached and inserts it into the table. The leader waits till all the\n> chunks\n> > > populated are processed by the workers and exits.\n> >\n> > Why do we need the local copy of 50 chunks? Copying memory around is far\n> > from free. I don't see why it'd be better to add per-process caching,\n> > rather than making the DSM bigger? I can see some benefit in marking\n> > multiple chunks as being processed with one lock acquisition, but I\n> > don't think adding a memory copy is a good idea.\n>\n> We had run performance with csv data file, 5.1GB, 10million tuples, 2\n> indexes on integer columns, results for the same are given below. We\n> noticed in some cases the performance is better if we copy the 50\n> records locally and release the shared memory. We will get better\n> benefits as the workers increase. 
Thoughts?\n>\n> ------------------------------------------------------------------------------------------------\n> Workers | Exec time (With local copying | Exec time (Without copying,\n> | 50 records & release the | processing record by\n> record)\n> | shared memory) |\n>\n> ------------------------------------------------------------------------------------------------\n> 0 | 1162.772(1X) | 1152.684(1X)\n> 2 | 635.249(1.83X) | 647.894(1.78X)\n> 4 | 336.835(3.45X) | 335.534(3.43X)\n> 8 | 188.577(6.17 X) | 189.461(6.08X)\n> 16 | 126.819(9.17X) | 142.730(8.07X)\n> 20 | 117.845(9.87X) | 146.533(7.87X)\n> 30 | 127.554(9.11X) | 160.307(7.19X)\n>\n> > This patch *desperately* needs to be split up. It imo is close to\n> > unreviewable, due to a large amount of changes that just move code\n> > around without other functional changes being mixed in with the actual\n> > new stuff.\n>\n> I have split the patch, the new split patches are attached.\n>\n> >\n> >\n> >\n> > > /*\n> > > + * State of the chunk.\n> > > + */\n> > > +typedef enum ChunkState\n> > > +{\n> > > + CHUNK_INIT, /* initial state\n> of chunk */\n> > > + CHUNK_LEADER_POPULATING, /* leader processing chunk */\n> > > + CHUNK_LEADER_POPULATED, /* leader completed populating\n> chunk */\n> > > + CHUNK_WORKER_PROCESSING, /* worker processing chunk */\n> > > + CHUNK_WORKER_PROCESSED /* worker completed processing\n> chunk */\n> > > +}ChunkState;\n> > > +\n> > > +#define RAW_BUF_SIZE 65536 /* we palloc RAW_BUF_SIZE+1\n> bytes */\n> > > +\n> > > +#define DATA_BLOCK_SIZE RAW_BUF_SIZE\n> > > +#define RINGSIZE (10 * 1000)\n> > > +#define MAX_BLOCKS_COUNT 1000\n> > > +#define WORKER_CHUNK_COUNT 50 /* should be mod of RINGSIZE */\n> > > +\n> > > +#define IsParallelCopy() (cstate->is_parallel)\n> > > +#define IsLeader()\n> (cstate->pcdata->is_leader)\n> > > +#define IsHeaderLine() (cstate->header_line &&\n> cstate->cur_lineno == 1)\n> > > +\n> > > +/*\n> > > + * Copy data block information.\n> > > + */\n> > > +typedef 
struct CopyDataBlock\n> > > +{\n> > > + /* The number of unprocessed chunks in the current block. */\n> > > + pg_atomic_uint32 unprocessed_chunk_parts;\n> > > +\n> > > + /*\n> > > + * If the current chunk data is continued into another block,\n> > > + * following_block will have the position where the remaining\n> data need to\n> > > + * be read.\n> > > + */\n> > > + uint32 following_block;\n> > > +\n> > > + /*\n> > > + * This flag will be set, when the leader finds out this block\n> can be read\n> > > + * safely by the worker. This helps the worker to start\n> processing the chunk\n> > > + * early where the chunk will be spread across many blocks and\n> the worker\n> > > + * need not wait for the complete chunk to be processed.\n> > > + */\n> > > + bool curr_blk_completed;\n> > > + char data[DATA_BLOCK_SIZE + 1]; /* data read from file */\n> > > +}CopyDataBlock;\n> >\n> > What's the + 1 here about?\n>\n> Fixed this, removed +1. That is not needed.\n>\n> >\n> >\n> > > +/*\n> > > + * Parallel copy line buffer information.\n> > > + */\n> > > +typedef struct ParallelCopyLineBuf\n> > > +{\n> > > + StringInfoData line_buf;\n> > > + uint64 cur_lineno; /* line number\n> for error messages */\n> > > +}ParallelCopyLineBuf;\n> >\n> > Why do we need separate infrastructure for this? We shouldn't duplicate\n> > infrastructure unnecessarily.\n> >\n>\n> This was required for copying the multiple records locally and\n> releasing the shared memory. I have not changed this, will decide on\n> this based on the decision taken for one of the previous comments.\n>\n> >\n> >\n> >\n> > > +/*\n> > > + * Common information that need to be copied to shared memory.\n> > > + */\n> > > +typedef struct CopyWorkerCommonData\n> > > +{\n> >\n> > Why is parallel specific stuff here suddenly not named ParallelCopy*\n> > anymore? 
If you introduce a naming like that it imo should be used\n> > consistently.\n>\n> Fixed, changed to maintain ParallelCopy in all structs.\n>\n> >\n> > > + /* low-level state data */\n> > > + CopyDest copy_dest; /* type of copy\n> source/destination */\n> > > + int file_encoding; /* file or remote side's\n> character encoding */\n> > > + bool need_transcoding; /* file encoding diff\n> from server? */\n> > > + bool encoding_embeds_ascii; /* ASCII can be\n> non-first byte? */\n> > > +\n> > > + /* parameters from the COPY command */\n> > > + bool csv_mode; /* Comma Separated Value\n> format? */\n> > > + bool header_line; /* CSV header line? */\n> > > + int null_print_len; /* length of same */\n> > > + bool force_quote_all; /* FORCE_QUOTE *? */\n> > > + bool convert_selectively; /* do selective\n> binary conversion? */\n> > > +\n> > > + /* Working state for COPY FROM */\n> > > + AttrNumber num_defaults;\n> > > + Oid relid;\n> > > +}CopyWorkerCommonData;\n> >\n> > But I actually think we shouldn't have this information in two different\n> > structs. This should exist once, independent of using parallel /\n> > non-parallel copy.\n> >\n>\n> This structure helps in storing the common data from CopyStateData\n> that are required by the workers. This information will then be\n> allocated and stored into the DSM for the worker to retrieve and copy\n> it to CopyStateData.\n>\n> >\n> > > +/* List information */\n> > > +typedef struct ListInfo\n> > > +{\n> > > + int count; /* count of attributes */\n> > > +\n> > > + /* string info in the form info followed by info1, info2...\n> infon */\n> > > + char info[1];\n> > > +} ListInfo;\n> >\n> > Based on these comments I have no idea what this could be for.\n> >\n>\n> Have added better comments for this. 
The following is added: This\n> structure will help in converting a List data type into the below\n> structure format with the count having the number of elements in the\n> list and the info having the List elements appended contiguously. This\n> converted structure will be allocated in shared memory and stored in\n> DSM for the worker to retrieve and later convert it back to List data\n> type.\n>\n> >\n> > > /*\n> > > - * This keeps the character read at the top of the loop in the buffer\n> > > - * even if there is more than one read-ahead.\n> > > + * This keeps the character read at the top of the loop in the buffer\n> > > + * even if there is more than one read-ahead.\n> > > + */\n> > > +#define IF_NEED_REFILL_AND_NOT_EOF_CONTINUE(extralen) \\\n> > > +if (1) \\\n> > > +{ \\\n> > > + if (copy_buff_state.raw_buf_ptr + (extralen) >=\n> copy_buff_state.copy_buf_len && !hit_eof) \\\n> > > + { \\\n> > > + if (IsParallelCopy()) \\\n> > > + { \\\n> > > + copy_buff_state.chunk_size = prev_chunk_size; /*\n> update previous chunk size */ \\\n> > > + if (copy_buff_state.block_switched) \\\n> > > + { \\\n> > > +\n> pg_atomic_sub_fetch_u32(&copy_buff_state.data_blk_ptr->unprocessed_chunk_parts,\n> 1); \\\n> > > + copy_buff_state.copy_buf_len =\n> prev_copy_buf_len; \\\n> > > + } \\\n> > > + } \\\n> > > + copy_buff_state.raw_buf_ptr = prev_raw_ptr; /* undo\n> fetch */ \\\n> > > + need_data = true; \\\n> > > + continue; \\\n> > > + } \\\n> > > +} else ((void) 0)\n> >\n> > I think it's an absolutely clear no-go to add new branches to\n> > these. 
They're *really* hot already, and this is going to sprinkle a\n> > significant amount of new instructions over a lot of places.\n> >\n>\n> Fixed, removed this.\n>\n> >\n> >\n> > > +/*\n> > > + * SET_RAWBUF_FOR_LOAD - Set raw_buf to the shared memory where the\n> file data must\n> > > + * be read.\n> > > + */\n> > > +#define SET_RAWBUF_FOR_LOAD() \\\n> > > +{ \\\n> > > + ShmCopyInfo *pcshared_info = cstate->pcdata->pcshared_info; \\\n> > > + uint32 cur_block_pos; \\\n> > > + /* \\\n> > > + * Mark the previous block as completed, worker can start\n> copying this data. \\\n> > > + */ \\\n> > > + if (copy_buff_state.data_blk_ptr !=\n> copy_buff_state.curr_data_blk_ptr && \\\n> > > + copy_buff_state.data_blk_ptr->curr_blk_completed ==\n> false) \\\n> > > + copy_buff_state.data_blk_ptr->curr_blk_completed = true;\n> \\\n> > > + \\\n> > > + copy_buff_state.data_blk_ptr =\n> copy_buff_state.curr_data_blk_ptr; \\\n> > > + cur_block_pos = WaitGetFreeCopyBlock(pcshared_info); \\\n> > > + copy_buff_state.curr_data_blk_ptr =\n> &pcshared_info->data_blocks[cur_block_pos]; \\\n> > > + \\\n> > > + if (!copy_buff_state.data_blk_ptr) \\\n> > > + { \\\n> > > + copy_buff_state.data_blk_ptr =\n> copy_buff_state.curr_data_blk_ptr; \\\n> > > + chunk_first_block = cur_block_pos; \\\n> > > + } \\\n> > > + else if (need_data == false) \\\n> > > + copy_buff_state.data_blk_ptr->following_block =\n> cur_block_pos; \\\n> > > + \\\n> > > + cstate->raw_buf = copy_buff_state.curr_data_blk_ptr->data; \\\n> > > + copy_buff_state.copy_raw_buf = cstate->raw_buf; \\\n> > > +}\n> > > +\n> > > +/*\n> > > + * END_CHUNK_PARALLEL_COPY - Update the chunk information in shared\n> memory.\n> > > + */\n> > > +#define END_CHUNK_PARALLEL_COPY() \\\n> > > +{ \\\n> > > + if (!IsHeaderLine()) \\\n> > > + { \\\n> > > + ShmCopyInfo *pcshared_info =\n> cstate->pcdata->pcshared_info; \\\n> > > + ChunkBoundaries *chunkBoundaryPtr =\n> &pcshared_info->chunk_boundaries; \\\n> > > + if (copy_buff_state.chunk_size) \\\n> > > 
+ { \\\n> > > + ChunkBoundary *chunkInfo =\n> &chunkBoundaryPtr->ring[chunk_pos]; \\\n> > > + /* \\\n> > > + * If raw_buf_ptr is zero,\n> unprocessed_chunk_parts would have been \\\n> > > + * incremented in SEEK_COPY_BUFF_POS. This will\n> happen if the whole \\\n> > > + * chunk finishes at the end of the current\n> block. If the \\\n> > > + * new_line_size > raw_buf_ptr, then the new\n> block has only new line \\\n> > > + * char content. The unprocessed count should\n> not be increased in \\\n> > > + * this case. \\\n> > > + */ \\\n> > > + if (copy_buff_state.raw_buf_ptr != 0 && \\\n> > > + copy_buff_state.raw_buf_ptr >\n> new_line_size) \\\n> > > +\n> pg_atomic_add_fetch_u32(&copy_buff_state.curr_data_blk_ptr->unprocessed_chunk_parts,\n> 1); \\\n> > > + \\\n> > > + /* Update chunk size. */ \\\n> > > + pg_atomic_write_u32(&chunkInfo->chunk_size,\n> copy_buff_state.chunk_size); \\\n> > > + pg_atomic_write_u32(&chunkInfo->chunk_state,\n> CHUNK_LEADER_POPULATED); \\\n> > > + elog(DEBUG1, \"[Leader] After adding - chunk\n> position:%d, chunk_size:%d\", \\\n> > > + chunk_pos,\n> copy_buff_state.chunk_size); \\\n> > > + pcshared_info->populated++; \\\n> > > + } \\\n> > > + else if (new_line_size) \\\n> > > + { \\\n> > > + /* \\\n> > > + * This means only new line char, empty record\n> should be \\\n> > > + * inserted. \\\n> > > + */ \\\n> > > + ChunkBoundary *chunkInfo; \\\n> > > + chunk_pos = UpdateBlockInChunkInfo(cstate, -1,\n> -1, 0, \\\n> > > +\n> CHUNK_LEADER_POPULATED); \\\n> > > + chunkInfo = &chunkBoundaryPtr->ring[chunk_pos]; \\\n> > > + elog(DEBUG1, \"[Leader] Added empty chunk with\n> offset:%d, chunk position:%d, chunk size:%d\", \\\n> > > +\n> chunkInfo->start_offset, chunk_pos, \\\n> > > +\n> pg_atomic_read_u32(&chunkInfo->chunk_size)); \\\n> > > + pcshared_info->populated++; \\\n> > > + } \\\n> > > + }\\\n> > > + \\\n> > > + /*\\\n> > > + * All of the read data is processed, reset index & len. 
In the\\n> > + * subsequent read, we will get a new block and copy data in to\nthe\\n> > + * new block.\\n> > + */\\n> > + if (copy_buff_state.raw_buf_ptr == copy_buff_state.copy_buf_len)\\n> > + {\\n> > + cstate->raw_buf_index = 0;\\n> > + cstate->raw_buf_len = 0;\\n> > + }\\n> > + else\\n> > + cstate->raw_buf_len = copy_buff_state.copy_buf_len;\\n> > +}\n> >\n> > Why are these macros? They are way way way above a length where that\n> > makes any sort of sense.\n> >\n>\n> Converted these macros to functions.\n>\n>\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>
          if (copy_buff_state.chunk_size) \\\n> > +             { \\\n> > +                     ChunkBoundary *chunkInfo = &chunkBoundaryPtr->ring[chunk_pos]; \\\n> > +                     /* \\\n> > +                      * If raw_buf_ptr is zero, unprocessed_chunk_parts would have been \\\n> > +                      * incremented in SEEK_COPY_BUFF_POS. This will happen if the whole \\\n> > +                      * chunk finishes at the end of the current block. If the \\\n> > +                      * new_line_size > raw_buf_ptr, then the new block has only new line \\\n> > +                      * char content. The unprocessed count should not be increased in \\\n> > +                      * this case. \\\n> > +                      */ \\\n> > +                     if (copy_buff_state.raw_buf_ptr != 0 && \\\n> > +                             copy_buff_state.raw_buf_ptr > new_line_size) \\\n> > +                             pg_atomic_add_fetch_u32(&copy_buff_state.curr_data_blk_ptr->unprocessed_chunk_parts, 1); \\\n> > +                     \\\n> > +                     /* Update chunk size. */ \\\n> > +                     pg_atomic_write_u32(&chunkInfo->chunk_size, copy_buff_state.chunk_size); \\\n> > +                     pg_atomic_write_u32(&chunkInfo->chunk_state, CHUNK_LEADER_POPULATED); \\\n> > +                     elog(DEBUG1, \"[Leader] After adding - chunk position:%d, chunk_size:%d\", \\\n> > +                                             chunk_pos, copy_buff_state.chunk_size); \\\n> > +                     pcshared_info->populated++; \\\n> > +             } \\\n> > +             else if (new_line_size) \\\n> > +             { \\\n> > +                     /* \\\n> > +                      * This means only new line char, empty record should be \\\n> > +                      * inserted. 
\\\n> > +                      */ \\\n> > +                     ChunkBoundary *chunkInfo; \\\n> > +                     chunk_pos = UpdateBlockInChunkInfo(cstate, -1, -1, 0, \\\n> > +                                                                                        CHUNK_LEADER_POPULATED); \\\n> > +                     chunkInfo = &chunkBoundaryPtr->ring[chunk_pos]; \\\n> > +                     elog(DEBUG1, \"[Leader] Added empty chunk with offset:%d, chunk position:%d, chunk size:%d\", \\\n> > +                                              chunkInfo->start_offset, chunk_pos, \\\n> > +                                              pg_atomic_read_u32(&chunkInfo->chunk_size)); \\\n> > +                     pcshared_info->populated++; \\\n> > +             } \\\n> > +     }\\\n> > +     \\\n> > +     /*\\\n> > +      * All of the read data is processed, reset index & len. In the\\\n> > +      * subsequent read, we will get a new block and copy data in to the\\\n> > +      * new block.\\\n> > +      */\\\n> > +     if (copy_buff_state.raw_buf_ptr == copy_buff_state.copy_buf_len)\\\n> > +     {\\\n> > +             cstate->raw_buf_index = 0;\\\n> > +             cstate->raw_buf_len = 0;\\\n> > +     }\\\n> > +     else\\\n> > +             cstate->raw_buf_len = copy_buff_state.copy_buf_len;\\\n> > +}\n>\n> Why are these macros? They are way way way above a length where that\n> makes any sort of sense.\n>\n\nConverted these macros to functions.\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 19 Jun 2020 17:41:12 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Thanks Ashutosh For your review, my comments are inline.\nOn Fri, Jun 19, 2020 at 5:41 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi,\n>\n> I just got some time to review the first patch in the list i.e. 0001-Copy-code-readjustment-to-support-parallel-copy.patch. 
As the patch name suggests, it is just trying to reshuffle the existing code for the COPY command here and there. There are no extra changes added in the patch as such, but still I do have some review comments, please have a look:\n>\n> 1) Can you please add some comments atop the new function PopulateAttributes() describing its functionality in detail. Further, this new function contains the code from BeginCopy() to set attribute level options used with COPY FROM such as FORCE_QUOTE, FORCE_NOT_NULL, FORCE_NULL etc. in cstate and along with that it also copies the code from BeginCopy() to set other infos such as client encoding type, encoding conversion etc. Hence, I think it would be good to give it some better name, basically something that matches with what actually it is doing.\n>\n\nThere is no new code added in this function; some part of the code from\nBeginCopy was made into a new function, as this part of the code will also\nbe required by the parallel copy workers before they start the\nactual copy operation. This code was made into a function to avoid\nduplication. Changed the function name to PopulateGlobalsForCopyFrom and\nadded a few comments.\n\n> 2) Again, the name for the new function CheckCopyFromValidity() doesn't look good to me. From the function name it appears as if it does the sanity check of the entire COPY FROM command, but actually it is just doing the sanity check for the target relation specified with COPY FROM. So, probably something like CheckTargetRelValidity would look more sensible, I think? TBH, I am not good at naming the functions so you can always ignore my suggestions about function and variable names :)\n>\n\nChanged as suggested.\n> 3) Any reason for not making CheckCopyFromValidity as a macro instead of a new function. 
It is just doing the sanity check for the target relation.\n>\n\nI felt there is a reasonable number of lines in the function and it is not\nin a performance-intensive path, so I preferred a function over a macro.\nYour thoughts?\n\n> 4) Earlier in CopyReadLine() function while trying to clear the EOL marker from cstate->line_buf.data (copied data), we were not checking if the line read by CopyReadLineText() function is a header line or not, but I can see that your patch checks that before clearing the EOL marker. Any reason for this extra check?\n>\n\nIf you see the caller of CopyReadLine, i.e. NextCopyFromRawFields, it does\nnothing for the header line; the server basically calls CopyReadLine\nagain, which is a kind of small optimization. Anyway, the server is not going\nto do anything with the header line, so I felt no need to clear the EOL marker\nfor header lines.\n/* on input just throw the header line away */\nif (cstate->cur_lineno == 0 && cstate->header_line)\n{\n    cstate->cur_lineno++;\n    if (CopyReadLine(cstate))\n        return false; /* done */\n}\n\ncstate->cur_lineno++;\n\n/* Actually read the line into memory here */\ndone = CopyReadLine(cstate);\nI think there is no need to make a fix for this. 
Your thoughts?\n\n> 5) I noticed the below spurious line removal in the patch.\n>\n> @@ -3839,7 +3953,6 @@ static bool\n> CopyReadLine(CopyState cstate)\n> {\n> bool result;\n> -\n>\n\nFixed.\nI have attached the patch for the same with the fixes.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 23 Jun 2020 08:07:48 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Jun 23, 2020 at 8:07 AM vignesh C <vignesh21@gmail.com> wrote:\n> I have attached the patch for the same with the fixes.\n\nThe patches were not applying on the head, attached the patches that can be\napplied on head.\nI have added a commitfest entry[1] for this feature.\n\n[1] - https://commitfest.postgresql.org/28/2610/\n\n\nOn Tue, Jun 23, 2020 at 8:07 AM vignesh C <vignesh21@gmail.com> wrote:\n\n> Thanks Ashutosh For your review, my comments are inline.\n> On Fri, Jun 19, 2020 at 5:41 PM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> >\n> > Hi,\n> >\n> > I just got some time to review the first patch in the list i.e.\n> 0001-Copy-code-readjustment-to-support-parallel-copy.patch. As the patch\n> name suggests, it is just trying to reshuffle the existing code for COPY\n> command here and there. There is no extra changes added in the patch as\n> such, but still I do have some review comments, please have a look:\n> >\n> > 1) Can you please add some comments atop the new function\n> PopulateAttributes() describing its functionality in detail. Further, this\n> new function contains the code from BeginCopy() to set attribute level\n> options used with COPY FROM such as FORCE_QUOTE, FORCE_NOT_NULL, FORCE_NULL\n> etc. 
in cstate and along with that it also copies the code from BeginCopy()\n> to set other infos such as client encoding type, encoding conversion etc.\n> Hence, I think it would be good to give it some better name, basically\n> something that matches with what actually it is doing.\n> >\n>\n> There is no new code added in this function, some part of code from\n> BeginCopy was made in to a new function as this part of code will also\n> be required for the parallel copy workers before the workers start the\n> actual copy operation. This code was made into a function to avoid\n> duplication. Changed the function name to PopulateGlobalsForCopyFrom &\n> added few comments.\n>\n> > 2) Again, the name for the new function CheckCopyFromValidity() doesn't\n> look good to me. From the function name it appears as if it does the sanity\n> check of the entire COPY FROM command, but actually it is just doing the\n> sanity check for the target relation specified with COPY FROM. So, probably\n> something like CheckTargetRelValidity would look more sensible, I think?\n> TBH, I am not good at naming the functions so you can always ignore my\n> suggestions about function and variable names :)\n> >\n>\n> Changed as suggested.\n> > 3) Any reason for not making CheckCopyFromValidity as a macro instead of\n> a new function. It is just doing the sanity check for the target relation.\n> >\n>\n> I felt there is reasonable number of lines in the function & it is not\n> in performance intensive path, so I preferred function over macro.\n> Your thoughts?\n>\n> > 4) Earlier in CopyReadLine() function while trying to clear the EOL\n> marker from cstate->line_buf.data (copied data), we were not checking if\n> the line read by CopyReadLineText() function is a header line or not, but I\n> can see that your patch checks that before clearing the EOL marker. Any\n> reason for this extra check?\n> >\n>\n> If you see the caller of CopyReadLine, i.e. 
NextCopyFromRawFields does\n> nothing for the header line, server basically calls CopyReadLine\n> again, it is a kind of small optimization. Anyway server is not going\n> to do anything with header line, I felt no need to clear EOL marker\n> for header lines.\n> /* on input just throw the header line away */\n> if (cstate->cur_lineno == 0 && cstate->header_line)\n> {\n> cstate->cur_lineno++;\n> if (CopyReadLine(cstate))\n> return false; /* done */\n> }\n>\n> cstate->cur_lineno++;\n>\n> /* Actually read the line into memory here */\n> done = CopyReadLine(cstate);\n> I think no need to make a fix for this. Your thoughts?\n>\n> > 5) I noticed the below spurious line removal in the patch.\n> >\n> > @@ -3839,7 +3953,6 @@ static bool\n> > CopyReadLine(CopyState cstate)\n> > {\n> > bool result;\n> > -\n> >\n>\n> Fixed.\n> I have attached the patch for the same with the fixes.\n> Thoughts?\n>\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>", "msg_date": "Tue, 23 Jun 2020 12:22:02 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nThanks Vignesh for reviewing parallel copy for binary format files\npatch. 
I tried to address the comments in the attached patch\n(0006-Parallel-Copy-For-Binary-Format-Files.patch).\n\nOn Thu, Jun 18, 2020 at 6:42 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Jun 15, 2020 at 4:39 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > The above tests were run with the configuration attached config.txt, which is the same used for performance tests of csv/text files posted earlier in this mail chain.\n> >\n> > Request the community to take this patch up for review along with the parallel copy for csv/text file patches and provide feedback.\n> >\n>\n> I had reviewed the patch, few comments:\n>\n> The new members added should be present in ParallelCopyData\n\nAdded to ParallelCopyData.\n\n>\n> line_size can be set as and when we process the tuple from\n> CopyReadBinaryTupleLeader and this can be set at the end. That way the\n> above code can be removed.\n>\n\ncurr_tuple_start_info and curr_tuple_end_info variables are now local\nvariables to CopyReadBinaryTupleLeader and the line size calculation\ncode is moved to CopyReadBinaryAttributeLeader.\n\n>\n> curr_block_pos variable is present in ParallelCopyShmInfo, we could\n> use it and remove from here.\n> curr_data_offset, similar variable raw_buf_index is present in\n> CopyStateData, we could use it and remove from here.\n>\n\nYes, making use of them now.\n\n>\n> This code is duplicate in CopyReadBinaryTupleLeader &\n> CopyReadBinaryAttributeLeader. 
We could make a function and re-use.\n>\n\nAdded a new function AdjustFieldInfo.\n\n>\n> column_no is not used, it can be removed\n>\n\nRemoved.\n\n>\n> The above code is present in NextCopyFrom & CopyReadBinaryTupleLeader,\n> check if we can make a common function or we could use NextCopyFrom as\n> it is.\n>\n\nAdded a macro CHECK_FIELD_COUNT.\n\n> + if (fld_count == -1)\n> + {\n> + return true;\n> + }\n>\n> Should this be an assert in CopyReadBinaryTupleWorker function as this\n> check is already done in the leader.\n>\n\nThis check in the leader signifies the end of the file. For the workers,\nEOF is when GetLinePosition() returns -1.\n line_pos = GetLinePosition(cstate);\n if (line_pos == -1)\n return true;\nIn case the if (fld_count == -1) is encountered in the worker, the worker\nshould just return true from CopyReadBinaryTupleWorker, marking EOF.\nHaving this as an assert doesn't serve the purpose, I feel.\n\nAlong with the review-comments-addressed\npatch (0006-Parallel-Copy-For-Binary-Format-Files.patch), also attaching\nall the other latest patches in the series (0001 to 0005) from [1]; the order\nof applying the patches is from 0001 to 0006.\n\n[1] https://www.postgresql.org/message-id/CALDaNm0H3N9gK7CMheoaXkO99g%3DuAPA93nSZXu0xDarPyPY6sg%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 24 Jun 2020 13:40:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nIt looks like the parsing of the newly introduced \"PARALLEL\" option for\nthe COPY FROM command has an issue (in\n0002-Framework-for-leader-worker-in-parallel-copy.patch):\nmentioning ....PARALLEL '4ar2eteid'); would pass with 4 workers, since\natoi() is being used for converting the string to an integer, which just\nreturns 4, ignoring the trailing characters.\n\nI used strtol(), added error checks, and introduced the error \"\nimproper use of argument to option 
\"parallel\"\" for the above cases.\n\n parallel '4ar2eteid');\nERROR: improper use of argument to option \"parallel\"\nLINE 5: parallel '1\\');\n\nAlong with the updated patch\n0002-Framework-for-leader-worker-in-parallel-copy.patch, also\nattaching all the latest patches from [1].\n\n[1] - https://www.postgresql.org/message-id/CALj2ACW94icER3WrWapon7JkcX8j0TGRue5ycWMTEvgA3X7fOg%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Tue, Jun 23, 2020 at 12:22 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Jun 23, 2020 at 8:07 AM vignesh C <vignesh21@gmail.com> wrote:\n> > I have attached the patch for the same with the fixes.\n>\n> The patches were not applying on the head, attached the patches that can be applied on head.\n> I have added a commitfest entry[1] for this feature.\n>\n> [1] - https://commitfest.postgresql.org/28/2610/\n>\n>\n> On Tue, Jun 23, 2020 at 8:07 AM vignesh C <vignesh21@gmail.com> wrote:\n>>\n>> Thanks Ashutosh For your review, my comments are inline.\n>> On Fri, Jun 19, 2020 at 5:41 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>> >\n>> > Hi,\n>> >\n>> > I just got some time to review the first patch in the list i.e. 0001-Copy-code-readjustment-to-support-parallel-copy.patch. As the patch name suggests, it is just trying to reshuffle the existing code for COPY command here and there. There is no extra changes added in the patch as such, but still I do have some review comments, please have a look:\n>> >\n>> > 1) Can you please add some comments atop the new function PopulateAttributes() describing its functionality in detail. Further, this new function contains the code from BeginCopy() to set attribute level options used with COPY FROM such as FORCE_QUOTE, FORCE_NOT_NULL, FORCE_NULL etc. in cstate and along with that it also copies the code from BeginCopy() to set other infos such as client encoding type, encoding conversion etc. 
Hence, I think it would be good to give it some better name, basically something that matches with what actually it is doing.\n>> >\n>>\n>> There is no new code added in this function, some part of code from\n>> BeginCopy was made in to a new function as this part of code will also\n>> be required for the parallel copy workers before the workers start the\n>> actual copy operation. This code was made into a function to avoid\n>> duplication. Changed the function name to PopulateGlobalsForCopyFrom &\n>> added few comments.\n>>\n>> > 2) Again, the name for the new function CheckCopyFromValidity() doesn't look good to me. From the function name it appears as if it does the sanity check of the entire COPY FROM command, but actually it is just doing the sanity check for the target relation specified with COPY FROM. So, probably something like CheckTargetRelValidity would look more sensible, I think? TBH, I am not good at naming the functions so you can always ignore my suggestions about function and variable names :)\n>> >\n>>\n>> Changed as suggested.\n>> > 3) Any reason for not making CheckCopyFromValidity as a macro instead of a new function. It is just doing the sanity check for the target relation.\n>> >\n>>\n>> I felt there is reasonable number of lines in the function & it is not\n>> in performance intensive path, so I preferred function over macro.\n>> Your thoughts?\n>>\n>> > 4) Earlier in CopyReadLine() function while trying to clear the EOL marker from cstate->line_buf.data (copied data), we were not checking if the line read by CopyReadLineText() function is a header line or not, but I can see that your patch checks that before clearing the EOL marker. Any reason for this extra check?\n>> >\n>>\n>> If you see the caller of CopyReadLine, i.e. NextCopyFromRawFields does\n>> nothing for the header line, server basically calls CopyReadLine\n>> again, it is a kind of small optimization. 
Anyway server is not going\n>> to do anything with header line, I felt no need to clear EOL marker\n>> for header lines.\n>> /* on input just throw the header line away */\n>> if (cstate->cur_lineno == 0 && cstate->header_line)\n>> {\n>> cstate->cur_lineno++;\n>> if (CopyReadLine(cstate))\n>> return false; /* done */\n>> }\n>>\n>> cstate->cur_lineno++;\n>>\n>> /* Actually read the line into memory here */\n>> done = CopyReadLine(cstate);\n>> I think no need to make a fix for this. Your thoughts?\n>>\n>> > 5) I noticed the below spurious line removal in the patch.\n>> >\n>> > @@ -3839,7 +3953,6 @@ static bool\n>> > CopyReadLine(CopyState cstate)\n>> > {\n>> > bool result;\n>> > -\n>> >\n>>\n>> Fixed.\n>> I have attached the patch for the same with the fixes.\n>> Thoughts?\n>>\n>> Regards,\n>> Vignesh\n>> EnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Jun 2020 14:16:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Jun 24, 2020 at 2:16 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> It looks like the parsing of newly introduced \"PARALLEL\" option for\n> COPY FROM command has an issue(in the\n> 0002-Framework-for-leader-worker-in-parallel-copy.patch),\n> Mentioning ....PARALLEL '4ar2eteid'); would pass with 4 workers since\n> atoi() is being used for converting string to integer which just\n> returns 4, ignoring other strings.\n>\n> I used strtol(), added error checks and introduced the error \"\n> improper use of argument to option \"parallel\"\" for the above cases.\n>\n> parallel '4ar2eteid');\n> ERROR: improper use of argument to option \"parallel\"\n> LINE 5: parallel '1\\');\n>\n> Along with the updated patch\n> 0002-Framework-for-leader-worker-in-parallel-copy.patch, also\n> attaching all the latest patches from [1].\n>\n> [1] - 
https://www.postgresql.org/message-id/CALj2ACW94icER3WrWapon7JkcX8j0TGRue5ycWMTEvgA3X7fOg%40mail.gmail.com\n>\n\nI'm sorry, I forgot to attach the patches. Here is the latest series\nof patches.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 25 Jun 2020 08:30:14 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nThe 0006 patch has some code cleanup and issue fixes found during internal testing.\n\nAttaching the latest patches herewith.\n\nThe order of applying the patches remains the same, i.e. 
from 0001 to 0006.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 26 Jun 2020 14:34:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nI have made a few changes in the 0003 & 0005 patches; there were a couple of\nbugs in the 0003 patch and some random test failures with the 0005 patch.\nAttached are new patches which include the fixes for the same.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n\nOn Fri, Jun 26, 2020 at 2:34 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> 0006 patch has some code clean up and issue fixes found during internal testing.\n>\n> Attaching the latest patches herewith.\n>\n> The order of applying the patches remains the same i.e. 
of\ncstate->pcdata->curr_data_block.\n\n+ if (cstate->raw_buf_index + sizeof(fld_count) >= (DATA_BLOCK_SIZE - 1))\n+ AdjustFieldInfo(cstate, 1);\n+\n+ memcpy(&fld_count,\n&cstate->pcdata->curr_data_block->data[cstate->raw_buf_index],\nsizeof(fld_count));\nShould this be like below, as the remaining size can fit in current block:\n if (cstate->raw_buf_index + sizeof(fld_count) >= DATA_BLOCK_SIZE)\n\n+ if ((cstate->raw_buf_index + sizeof(fld_size)) >= (DATA_BLOCK_SIZE - 1))\n+ {\n+ AdjustFieldInfo(cstate, 2);\n+ *new_block_pos = pcshared_info->cur_block_pos;\n+ }\nSame like above.\n\n+ movebytes = DATA_BLOCK_SIZE - cstate->raw_buf_index;\n+\n+ cstate->pcdata->curr_data_block->skip_bytes = movebytes;\n+\n+ data_block = &pcshared_info->data_blocks[block_pos];\n+\n+ if (movebytes > 0)\nInstead of the above check, we can have an assert check for movebytes.\n\n+ if (mode == 1)\n+ {\n+ cstate->pcdata->curr_data_block = data_block;\n+ cstate->raw_buf_index = 0;\n+ }\n+ else if(mode == 2)\n+ {\n+ ParallelCopyDataBlock *prev_data_block = NULL;\n+ prev_data_block = cstate->pcdata->curr_data_block;\n+ prev_data_block->following_block = block_pos;\n+ cstate->pcdata->curr_data_block = data_block;\n+\n+ if (prev_data_block->curr_blk_completed == false)\n+ prev_data_block->curr_blk_completed = true;\n+\n+ cstate->raw_buf_index = 0;\n+ }\n\nThis code is common for both, keep in common flow and remove if (mode == 1)\ncstate->pcdata->curr_data_block = data_block;\ncstate->raw_buf_index = 0;\n\n+#define CHECK_FIELD_COUNT \\\n+{\\\n+ if (fld_count == -1) \\\n+ { \\\n+ if (IsParallelCopy() && \\\n+ !IsLeader()) \\\n+ return true; \\\n+ else if (IsParallelCopy() && \\\n+ IsLeader()) \\\n+ { \\\n+ if\n(cstate->pcdata->curr_data_block->data[cstate->raw_buf_index +\nsizeof(fld_count)] != 0) \\\n+ ereport(ERROR, \\\n+\n(errcode(ERRCODE_BAD_COPY_FILE_FORMAT), \\\n+ errmsg(\"received copy\ndata after EOF marker\"))); \\\n+ return true; \\\n+ } \\\nWe only copy sizeof(fld_count), Shouldn't we 
check fld_count !=\ncstate->max_fields? Am I missing something here?\n\n+ if ((cstate->raw_buf_index + sizeof(fld_size)) >= (DATA_BLOCK_SIZE - 1))\n+ {\n+ AdjustFieldInfo(cstate, 2);\n+ *new_block_pos = pcshared_info->cur_block_pos;\n+ }\n+\n+ memcpy(&fld_size,\n&cstate->pcdata->curr_data_block->data[cstate->raw_buf_index],\nsizeof(fld_size));\n+\n+ cstate->raw_buf_index = cstate->raw_buf_index + sizeof(fld_size);\n+\n+ fld_size = (int32) pg_ntoh32(fld_size);\n+\n+ if (fld_size == 0)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n+ errmsg(\"unexpected EOF in COPY data\")));\n+\n+ if (fld_size < -1)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_BAD_COPY_FILE_FORMAT),\n+ errmsg(\"invalid field size\")));\n+\n+ if ((DATA_BLOCK_SIZE - cstate->raw_buf_index) >= fld_size)\n+ {\n+ cstate->raw_buf_index = cstate->raw_buf_index + fld_size;\n+ }\nWe can keep the check like cstate->raw_buf_index + fld_size < ..., for\nbetter readability and consistency.\n\n+static pg_attribute_always_inline void\n+CopyReadBinaryAttributeLeader(CopyState cstate, FmgrInfo *flinfo,\n+ Oid typioparam, int32 typmod, uint32 *new_block_pos,\n+ int m, ParallelCopyTupleInfo *tuple_start_info_ptr,\n+ ParallelCopyTupleInfo *tuple_end_info_ptr, uint32 *line_size)\nflinfo, typioparam & typmod is not used, we can remove the parameter.\n\n+static pg_attribute_always_inline void\n+CopyReadBinaryAttributeLeader(CopyState cstate, FmgrInfo *flinfo,\n+ Oid typioparam, int32 typmod, uint32 *new_block_pos,\n+ int m, ParallelCopyTupleInfo *tuple_start_info_ptr,\n+ ParallelCopyTupleInfo *tuple_end_info_ptr, uint32 *line_size)\nI felt this function need not be an inline function.\n\n+ /* binary format */\n+ /* for paralle copy leader, fill in the error\nThere are some typos, run spell check\n\n+ /* raw_buf_index should never cross data block size,\n+ * as the required number of data blocks would have\n+ * been obtained in the above while loop.\n+ */\nThere are few places, commenting style should be changed to 
postgres style\n\n+ if (cstate->pcdata->curr_data_block == NULL)\n+ {\n+ block_pos = WaitGetFreeCopyBlock(pcshared_info);\n+\n+ cstate->pcdata->curr_data_block =\n&pcshared_info->data_blocks[block_pos];\n+\n+ cstate->raw_buf_index = 0;\n+\n+ readbytes = CopyGetData(cstate,\n&cstate->pcdata->curr_data_block->data, 1, DATA_BLOCK_SIZE);\n+\n+ elog(DEBUG1, \"LEADER - bytes read from file %d\", readbytes);\n+\n+ if (cstate->reached_eof)\n+ return true;\n+ }\nThere are many empty lines, these are not required.\n\n\n+ if (cstate->raw_buf_index + sizeof(fld_count) >= (DATA_BLOCK_SIZE - 1))\n+ AdjustFieldInfo(cstate, 1);\n+\n+ memcpy(&fld_count,\n&cstate->pcdata->curr_data_block->data[cstate->raw_buf_index],\nsizeof(fld_count));\n+\n+ fld_count = (int16) pg_ntoh16(fld_count);\n+\n+ CHECK_FIELD_COUNT;\n+\n+ cstate->raw_buf_index = cstate->raw_buf_index + sizeof(fld_count);\n+ new_block_pos = pcshared_info->cur_block_pos;\nYou can run pg_indent once for the changes.\n\n+ if (mode == 1)\n+ {\n+ cstate->pcdata->curr_data_block = data_block;\n+ cstate->raw_buf_index = 0;\n+ }\n+ else if(mode == 2)\n+ {\nCould use macros for 1 & 2 for better readability.\n\n+ if (tuple_start_info_ptr->block_id ==\ntuple_end_info_ptr->block_id)\n+ {\n+ elog(DEBUG1,\"LEADER - tuple lies in a single\ndata block\");\n+\n+ *line_size = tuple_end_info_ptr->offset -\ntuple_start_info_ptr->offset + 1;\n+\npg_atomic_add_fetch_u32(&pcshared_info->data_blocks[tuple_start_info_ptr->block_id].unprocessed_line_parts,\n1);\n+ }\n+ else\n+ {\n+ uint32 following_block_id =\npcshared_info->data_blocks[tuple_start_info_ptr->block_id].following_block;\n+\n+ elog(DEBUG1,\"LEADER - tuple is spread across\ndata blocks\");\n+\n+ *line_size = DATA_BLOCK_SIZE -\ntuple_start_info_ptr->offset -\n+\npcshared_info->data_blocks[tuple_start_info_ptr->block_id].skip_bytes;\n+\n+\npg_atomic_add_fetch_u32(&pcshared_info->data_blocks[tuple_start_info_ptr->block_id].unprocessed_line_parts,\n1);\n+\n+ while (following_block_id 
!=\ntuple_end_info_ptr->block_id)\n+ {\n+ *line_size = *line_size +\nDATA_BLOCK_SIZE -\npcshared_info->data_blocks[following_block_id].skip_bytes;\n+\n+\npg_atomic_add_fetch_u32(&pcshared_info->data_blocks[following_block_id].unprocessed_line_parts,\n1);\n+\n+ following_block_id =\npcshared_info->data_blocks[following_block_id].following_block;\n+\n+ if (following_block_id == -1)\n+ break;\n+ }\n+\n+ if (following_block_id != -1)\n+\npg_atomic_add_fetch_u32(&pcshared_info->data_blocks[following_block_id].unprocessed_line_parts,\n1);\n+\n+ *line_size = *line_size +\ntuple_end_info_ptr->offset + 1;\n+ }\nWe could calculate the size as we parse and identify one record, if we\ndo that way this can be removed.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Jul 2020 11:20:51 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Thanks Vignesh for the review. Addressed the comments in 0006 patch.\n\n>\n> we can create a local variable and use in place of\n> cstate->pcdata->curr_data_block.\n\nDone.\n\n> + if (cstate->raw_buf_index + sizeof(fld_count) >= (DATA_BLOCK_SIZE - 1))\n> + AdjustFieldInfo(cstate, 1);\n> +\n> + memcpy(&fld_count,\n> &cstate->pcdata->curr_data_block->data[cstate->raw_buf_index],\n> sizeof(fld_count));\n> Should this be like below, as the remaining size can fit in current block:\n> if (cstate->raw_buf_index + sizeof(fld_count) >= DATA_BLOCK_SIZE)\n>\n> + if ((cstate->raw_buf_index + sizeof(fld_size)) >= (DATA_BLOCK_SIZE - 1))\n> + {\n> + AdjustFieldInfo(cstate, 2);\n> + *new_block_pos = pcshared_info->cur_block_pos;\n> + }\n> Same like above.\n\nYes you are right. 
Changed.\n\n>\n> + movebytes = DATA_BLOCK_SIZE - cstate->raw_buf_index;\n> +\n> + cstate->pcdata->curr_data_block->skip_bytes = movebytes;\n> +\n> + data_block = &pcshared_info->data_blocks[block_pos];\n> +\n> + if (movebytes > 0)\n> Instead of the above check, we can have an assert check for movebytes.\n\nNo, we can't use assert here. For the edge case where the current data\nblock is full to the size DATA_BLOCK_SIZE, then movebytes will be 0,\nbut we need to get a new data block. We avoid memmove by having\nmovebytes>0 check.\n\n> + if (mode == 1)\n> + {\n> + cstate->pcdata->curr_data_block = data_block;\n> + cstate->raw_buf_index = 0;\n> + }\n> + else if(mode == 2)\n> + {\n> + ParallelCopyDataBlock *prev_data_block = NULL;\n> + prev_data_block = cstate->pcdata->curr_data_block;\n> + prev_data_block->following_block = block_pos;\n> + cstate->pcdata->curr_data_block = data_block;\n> +\n> + if (prev_data_block->curr_blk_completed == false)\n> + prev_data_block->curr_blk_completed = true;\n> +\n> + cstate->raw_buf_index = 0;\n> + }\n>\n> This code is common for both, keep in common flow and remove if (mode == 1)\n> cstate->pcdata->curr_data_block = data_block;\n> cstate->raw_buf_index = 0;\n>\n\nDone.\n\n> +#define CHECK_FIELD_COUNT \\\n> +{\\\n> + if (fld_count == -1) \\\n> + { \\\n> + if (IsParallelCopy() && \\\n> + !IsLeader()) \\\n> + return true; \\\n> + else if (IsParallelCopy() && \\\n> + IsLeader()) \\\n> + { \\\n> + if\n> (cstate->pcdata->curr_data_block->data[cstate->raw_buf_index +\n> sizeof(fld_count)] != 0) \\\n> + ereport(ERROR, \\\n> +\n> (errcode(ERRCODE_BAD_COPY_FILE_FORMAT), \\\n> + errmsg(\"received copy\n> data after EOF marker\"))); \\\n> + return true; \\\n> + } \\\n> We only copy sizeof(fld_count), Shouldn't we check fld_count !=\n> cstate->max_fields? 
Am I missing something here?\n\nfld_count != cstate->max_fields check is done after the above checks.\n\n> + if ((DATA_BLOCK_SIZE - cstate->raw_buf_index) >= fld_size)\n> + {\n> + cstate->raw_buf_index = cstate->raw_buf_index + fld_size;\n> + }\n> We can keep the check like cstate->raw_buf_index + fld_size < ..., for\n> better readability and consistency.\n>\n\nI think this is okay. It gives a good meaning that available bytes in\nthe current data block is greater or equal to fld_size then, the tuple\nlies in the current data block.\n\n> +static pg_attribute_always_inline void\n> +CopyReadBinaryAttributeLeader(CopyState cstate, FmgrInfo *flinfo,\n> + Oid typioparam, int32 typmod, uint32 *new_block_pos,\n> + int m, ParallelCopyTupleInfo *tuple_start_info_ptr,\n> + ParallelCopyTupleInfo *tuple_end_info_ptr, uint32 *line_size)\n> flinfo, typioparam & typmod is not used, we can remove the parameter.\n>\n\nDone.\n\n> +static pg_attribute_always_inline void\n> +CopyReadBinaryAttributeLeader(CopyState cstate, FmgrInfo *flinfo,\n> + Oid typioparam, int32 typmod, uint32 *new_block_pos,\n> + int m, ParallelCopyTupleInfo *tuple_start_info_ptr,\n> + ParallelCopyTupleInfo *tuple_end_info_ptr, uint32 *line_size)\n> I felt this function need not be an inline function.\n\nYes. 
Changed.\n\n>\n> + /* binary format */\n> + /* for paralle copy leader, fill in the error\n> There are some typos, run spell check\n\nDone.\n\n>\n> + /* raw_buf_index should never cross data block size,\n> + * as the required number of data blocks would have\n> + * been obtained in the above while loop.\n> + */\n> There are few places, commenting style should be changed to postgres style\n\nChanged.\n\n>\n> + if (cstate->pcdata->curr_data_block == NULL)\n> + {\n> + block_pos = WaitGetFreeCopyBlock(pcshared_info);\n> +\n> + cstate->pcdata->curr_data_block =\n> &pcshared_info->data_blocks[block_pos];\n> +\n> + cstate->raw_buf_index = 0;\n> +\n> + readbytes = CopyGetData(cstate,\n> &cstate->pcdata->curr_data_block->data, 1, DATA_BLOCK_SIZE);\n> +\n> + elog(DEBUG1, \"LEADER - bytes read from file %d\", readbytes);\n> +\n> + if (cstate->reached_eof)\n> + return true;\n> + }\n> There are many empty lines, these are not required.\n>\n\nRemoved.\n\n>\n> +\n> + fld_count = (int16) pg_ntoh16(fld_count);\n> +\n> + CHECK_FIELD_COUNT;\n> +\n> + cstate->raw_buf_index = cstate->raw_buf_index + sizeof(fld_count);\n> + new_block_pos = pcshared_info->cur_block_pos;\n> You can run pg_indent once for the changes.\n>\n\nI ran pg_indent and observed that there are many places getting\nmodified by pg_indent. If we need to run pg_indet on copy.c for\nparallel copy alone, then first, we need to run on plane copy.c and\ntake those changes and then run for all parallel copy files. 
I think\nwe better run pg_indent, for all the parallel copy patches once and\nfor all, maybe just before we kind of finish up all the code reviews.\n\n> + if (mode == 1)\n> + {\n> + cstate->pcdata->curr_data_block = data_block;\n> + cstate->raw_buf_index = 0;\n> + }\n> + else if(mode == 2)\n> + {\n> Could use macros for 1 & 2 for better readability.\n\nDone.\n\n>\n> +\n> + if (following_block_id == -1)\n> + break;\n> + }\n> +\n> + if (following_block_id != -1)\n> +\n> pg_atomic_add_fetch_u32(&pcshared_info->data_blocks[following_block_id].unprocessed_line_parts,\n> 1);\n> +\n> + *line_size = *line_size +\n> tuple_end_info_ptr->offset + 1;\n> + }\n> We could calculate the size as we parse and identify one record, if we\n> do that way this can be removed.\n>\n\nDone.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 11 Jul 2020 12:25:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Sat, 11 Jul 2020 at 08:55, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Thanks Vignesh for the review. Addressed the comments in 0006 patch.\n>\n> >\n> > we can create a local variable and use in place of\n> > cstate->pcdata->curr_data_block.\n>\n> Done.\n>\n> > + if (cstate->raw_buf_index + sizeof(fld_count) >= (DATA_BLOCK_SIZE - 1))\n> > + AdjustFieldInfo(cstate, 1);\n> > +\n> > + memcpy(&fld_count,\n> > &cstate->pcdata->curr_data_block->data[cstate->raw_buf_index],\n> > sizeof(fld_count));\n> > Should this be like below, as the remaining size can fit in current block:\n> > if (cstate->raw_buf_index + sizeof(fld_count) >= DATA_BLOCK_SIZE)\n> >\n> > + if ((cstate->raw_buf_index + sizeof(fld_size)) >= (DATA_BLOCK_SIZE - 1))\n> > + {\n> > + AdjustFieldInfo(cstate, 2);\n> > + *new_block_pos = pcshared_info->cur_block_pos;\n> > + }\n> > Same like above.\n>\n> Yes you are right. 
Changed.\n>\n> >\n> > + movebytes = DATA_BLOCK_SIZE - cstate->raw_buf_index;\n> > +\n> > + cstate->pcdata->curr_data_block->skip_bytes = movebytes;\n> > +\n> > + data_block = &pcshared_info->data_blocks[block_pos];\n> > +\n> > + if (movebytes > 0)\n> > Instead of the above check, we can have an assert check for movebytes.\n>\n> No, we can't use assert here. For the edge case where the current data\n> block is full to the size DATA_BLOCK_SIZE, then movebytes will be 0,\n> but we need to get a new data block. We avoid memmove by having\n> movebytes>0 check.\n>\n> > + if (mode == 1)\n> > + {\n> > + cstate->pcdata->curr_data_block = data_block;\n> > + cstate->raw_buf_index = 0;\n> > + }\n> > + else if(mode == 2)\n> > + {\n> > + ParallelCopyDataBlock *prev_data_block = NULL;\n> > + prev_data_block = cstate->pcdata->curr_data_block;\n> > + prev_data_block->following_block = block_pos;\n> > + cstate->pcdata->curr_data_block = data_block;\n> > +\n> > + if (prev_data_block->curr_blk_completed == false)\n> > + prev_data_block->curr_blk_completed = true;\n> > +\n> > + cstate->raw_buf_index = 0;\n> > + }\n> >\n> > This code is common for both, keep in common flow and remove if (mode == 1)\n> > cstate->pcdata->curr_data_block = data_block;\n> > cstate->raw_buf_index = 0;\n> >\n>\n> Done.\n>\n> > +#define CHECK_FIELD_COUNT \\\n> > +{\\\n> > + if (fld_count == -1) \\\n> > + { \\\n> > + if (IsParallelCopy() && \\\n> > + !IsLeader()) \\\n> > + return true; \\\n> > + else if (IsParallelCopy() && \\\n> > + IsLeader()) \\\n> > + { \\\n> > + if\n> > (cstate->pcdata->curr_data_block->data[cstate->raw_buf_index +\n> > sizeof(fld_count)] != 0) \\\n> > + ereport(ERROR, \\\n> > +\n> > (errcode(ERRCODE_BAD_COPY_FILE_FORMAT), \\\n> > + errmsg(\"received copy\n> > data after EOF marker\"))); \\\n> > + return true; \\\n> > + } \\\n> > We only copy sizeof(fld_count), Shouldn't we check fld_count !=\n> > cstate->max_fields? 
Am I missing something here?\n>\n> fld_count != cstate->max_fields check is done after the above checks.\n>\n> > + if ((DATA_BLOCK_SIZE - cstate->raw_buf_index) >= fld_size)\n> > + {\n> > + cstate->raw_buf_index = cstate->raw_buf_index + fld_size;\n> > + }\n> > We can keep the check like cstate->raw_buf_index + fld_size < ..., for\n> > better readability and consistency.\n> >\n>\n> I think this is okay. It gives a good meaning that available bytes in\n> the current data block is greater or equal to fld_size then, the tuple\n> lies in the current data block.\n>\n> > +static pg_attribute_always_inline void\n> > +CopyReadBinaryAttributeLeader(CopyState cstate, FmgrInfo *flinfo,\n> > + Oid typioparam, int32 typmod, uint32 *new_block_pos,\n> > + int m, ParallelCopyTupleInfo *tuple_start_info_ptr,\n> > + ParallelCopyTupleInfo *tuple_end_info_ptr, uint32 *line_size)\n> > flinfo, typioparam & typmod is not used, we can remove the parameter.\n> >\n>\n> Done.\n>\n> > +static pg_attribute_always_inline void\n> > +CopyReadBinaryAttributeLeader(CopyState cstate, FmgrInfo *flinfo,\n> > + Oid typioparam, int32 typmod, uint32 *new_block_pos,\n> > + int m, ParallelCopyTupleInfo *tuple_start_info_ptr,\n> > + ParallelCopyTupleInfo *tuple_end_info_ptr, uint32 *line_size)\n> > I felt this function need not be an inline function.\n>\n> Yes. 
Changed.\n>\n> >\n> > + /* binary format */\n> > + /* for paralle copy leader, fill in the error\n> > There are some typos, run spell check\n>\n> Done.\n>\n> >\n> > + /* raw_buf_index should never cross data block size,\n> > + * as the required number of data blocks would have\n> > + * been obtained in the above while loop.\n> > + */\n> > There are few places, commenting style should be changed to postgres style\n>\n> Changed.\n>\n> >\n> > + if (cstate->pcdata->curr_data_block == NULL)\n> > + {\n> > + block_pos = WaitGetFreeCopyBlock(pcshared_info);\n> > +\n> > + cstate->pcdata->curr_data_block =\n> > &pcshared_info->data_blocks[block_pos];\n> > +\n> > + cstate->raw_buf_index = 0;\n> > +\n> > + readbytes = CopyGetData(cstate,\n> > &cstate->pcdata->curr_data_block->data, 1, DATA_BLOCK_SIZE);\n> > +\n> > + elog(DEBUG1, \"LEADER - bytes read from file %d\", readbytes);\n> > +\n> > + if (cstate->reached_eof)\n> > + return true;\n> > + }\n> > There are many empty lines, these are not required.\n> >\n>\n> Removed.\n>\n> >\n> > +\n> > + fld_count = (int16) pg_ntoh16(fld_count);\n> > +\n> > + CHECK_FIELD_COUNT;\n> > +\n> > + cstate->raw_buf_index = cstate->raw_buf_index + sizeof(fld_count);\n> > + new_block_pos = pcshared_info->cur_block_pos;\n> > You can run pg_indent once for the changes.\n> >\n>\n> I ran pg_indent and observed that there are many places getting\n> modified by pg_indent. If we need to run pg_indet on copy.c for\n> parallel copy alone, then first, we need to run on plane copy.c and\n> take those changes and then run for all parallel copy files. 
I think\n> we better run pg_indent, for all the parallel copy patches once and\n> for all, maybe just before we kind of finish up all the code reviews.\n>\n> > + if (mode == 1)\n> > + {\n> > + cstate->pcdata->curr_data_block = data_block;\n> > + cstate->raw_buf_index = 0;\n> > + }\n> > + else if(mode == 2)\n> > + {\n> > Could use macros for 1 & 2 for better readability.\n>\n> Done.\n>\n> >\n> > +\n> > + if (following_block_id == -1)\n> > + break;\n> > + }\n> > +\n> > + if (following_block_id != -1)\n> > +\n> > pg_atomic_add_fetch_u32(&pcshared_info->data_blocks[following_block_id].unprocessed_line_parts,\n> > 1);\n> > +\n> > + *line_size = *line_size +\n> > tuple_end_info_ptr->offset + 1;\n> > + }\n> > We could calculate the size as we parse and identify one record, if we\n> > do that way this can be removed.\n> >\n>\n> Done.\n\nHi Bharath,\n\nI was looking forward to review this patch-set but unfortunately it is\nshowing a reject in copy.c, and might need a rebase.\nI was applying on master over the commit-\ncd22d3cdb9bd9963c694c01a8c0232bbae3ddcfb.\n\n-- \nRegards,\nRafia Sabih\n\n\n", "msg_date": "Sun, 12 Jul 2020 11:57:29 +0200", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": ">\n> Hi Bharath,\n>\n> I was looking forward to review this patch-set but unfortunately it is\n> showing a reject in copy.c, and might need a rebase.\n> I was applying on master over the commit-\n> cd22d3cdb9bd9963c694c01a8c0232bbae3ddcfb.\n>\n\nThanks for showing interest. 
Please find the patch set rebased to\nlatest commit b1e48bbe64a411666bb1928b9741e112e267836d.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 12 Jul 2020 17:48:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Sun, Jul 12, 2020 at 5:48 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> >\n> > Hi Bharath,\n> >\n> > I was looking forward to review this patch-set but unfortunately it is\n> > showing a reject in copy.c, and might need a rebase.\n> > I was applying on master over the commit-\n> > cd22d3cdb9bd9963c694c01a8c0232bbae3ddcfb.\n> >\n>\n> Thanks for showing interest. Please find the patch set rebased to\n> latest commit b1e48bbe64a411666bb1928b9741e112e267836d.\n>\n\nFew comments:\n====================\n0001-Copy-code-readjustment-to-support-parallel-copy\n\nI am not sure converting the code to macros is a good idea, it makes\nthis code harder to read. Also, there are a few changes which I am\nnot sure are necessary.\n1.\n+/*\n+ * CLEAR_EOL_FROM_COPIED_DATA - Clear EOL from the copied data.\n+ */\n+#define CLEAR_EOL_FROM_COPIED_DATA(copy_line_data, copy_line_pos,\ncopy_line_size) \\\n+{ \\\n+ /* \\\n+ * If we didn't hit EOF, then we must have transferred the EOL marker \\\n+ * to line_buf along with the data. Get rid of it. 
\\\n+ */ \\\n+ switch (cstate->eol_type) \\\n+ { \\\n+ case EOL_NL: \\\n+ Assert(copy_line_size >= 1); \\\n+ Assert(copy_line_data[copy_line_pos - 1] == '\\n'); \\\n+ copy_line_data[copy_line_pos - 1] = '\\0'; \\\n+ copy_line_size--; \\\n+ break; \\\n+ case EOL_CR: \\\n+ Assert(copy_line_size >= 1); \\\n+ Assert(copy_line_data[copy_line_pos - 1] == '\\r'); \\\n+ copy_line_data[copy_line_pos - 1] = '\\0'; \\\n+ copy_line_size--; \\\n+ break; \\\n+ case EOL_CRNL: \\\n+ Assert(copy_line_size >= 2); \\\n+ Assert(copy_line_data[copy_line_pos - 2] == '\\r'); \\\n+ Assert(copy_line_data[copy_line_pos - 1] == '\\n'); \\\n+ copy_line_data[copy_line_pos - 2] = '\\0'; \\\n+ copy_line_size -= 2; \\\n+ break; \\\n+ case EOL_UNKNOWN: \\\n+ /* shouldn't get here */ \\\n+ Assert(false); \\\n+ break; \\\n+ } \\\n+}\n\nIn the original code, we are using only len and buffer, here we are\nusing position, length/size and buffer. Is it really required or can\nwe do with just len and buffer?\n\n2.\n+/*\n+ * INCREMENTPROCESSED - Increment the lines processed.\n+ */\n+#define INCREMENTPROCESSED(processed) \\\n+processed++;\n+\n+/*\n+ * GETPROCESSED - Get the lines processed.\n+ */\n+#define GETPROCESSED(processed) \\\n+return processed;\n+\n\nI don't like converting above to macros. 
I don't think converting\nsuch things to macros will buy us much.\n\n0002-Framework-for-leader-worker-in-parallel-copy\n3.\n /*\n+ * Copy data block information.\n+ */\n+typedef struct ParallelCopyDataBlock\n\nIt is better to add a few comments atop this data structure to explain\nhow it is used?\n\n4.\n+ * ParallelCopyLineBoundary is common data structure between leader & worker,\n+ * this is protected by the following sequence in the leader & worker.\n+ * Leader should operate in the following order:\n+ * 1) update first_block, start_offset & cur_lineno in any order.\n+ * 2) update line_size.\n+ * 3) update line_state.\n+ * Worker should operate in the following order:\n+ * 1) read line_size.\n+ * 2) only one worker should choose one line for processing, this is handled by\n+ * using pg_atomic_compare_exchange_u32, worker will change the sate to\n+ * LINE_WORKER_PROCESSING only if line_state is LINE_LEADER_POPULATED.\n+ * 3) read first_block, start_offset & cur_lineno in any order.\n+ */\n+typedef struct ParallelCopyLineBoundary\n\nHere, you have mentioned how workers and leader should operate to make\nsure access to the data is sane. However, you have not explained what\nis the problem if they don't do so and it is not apparent to me.\nAlso, it is not very clear what is the purpose of this data structure\nfrom comments.\n\n5.\n+/*\n+ * Circular queue used to store the line information.\n+ */\n+typedef struct ParallelCopyLineBoundaries\n+{\n+ /* Position for the leader to populate a line. 
*/\n+ uint32 leader_pos;\n\nI don't think the variable needs to be named as leader_pos, it is okay\nto name it is as 'pos' as the comment above it explains its usage.\n\n7.\n+#define DATA_BLOCK_SIZE RAW_BUF_SIZE\n+#define RINGSIZE (10 * 1000)\n+#define MAX_BLOCKS_COUNT 1000\n+#define WORKER_CHUNK_COUNT 50 /* should be mod of RINGSIZE */\n\nIt would be good if you can write a few comments to explain why you\nhave chosen these default values.\n\n8.\nParallelCopyCommonKeyData, shall we name this as\nSerializedParallelCopyState or something like that? For example, see\nSerializedSnapshotData which has been used to pass snapshot\ninformation to passed to workers.\n\n9.\n+CopyCommonInfoForWorker(CopyState cstate, ParallelCopyCommonKeyData\n*shared_cstate)\n\nIf you agree with point-8, then let's name this as\nSerializeParallelCopyState. See, if there is more usage of similar\ntypes in the patch then lets change those as well.\n\n10.\n+ * in the DSM. The specified number of workers will then be launched.\n+ *\n+ */\n+static ParallelContext*\n+BeginParallelCopy(int nworkers, CopyState cstate, List *attnamelist, Oid relid)\n\nNo need of an extra line with only '*' in the above multi-line comment.\n\n11.\nBeginParallelCopy(..)\n{\n..\n+ EstimateLineKeysStr(pcxt, cstate->null_print);\n+ EstimateLineKeysStr(pcxt, cstate->null_print_client);\n+ EstimateLineKeysStr(pcxt, cstate->delim);\n+ EstimateLineKeysStr(pcxt, cstate->quote);\n+ EstimateLineKeysStr(pcxt, cstate->escape);\n..\n}\n\nWhy do we need to do this separately for each variable of cstate?\nCan't we serialize it along with other members of\nSerializeParallelCopyState (a new name for ParallelCopyCommonKeyData)?\n\n12.\nBeginParallelCopy(..)\n{\n..\n+ LaunchParallelWorkers(pcxt);\n+ if (pcxt->nworkers_launched == 0)\n+ {\n+ EndParallelCopy(pcxt);\n+ elog(WARNING,\n+ \"No workers available, copy will be run in non-parallel mode\");\n..\n}\n\nI don't see the need to issue a WARNING if we are not able to launch\nworkers. 
We don't do that for other cases where we fail to launch\nworkers.\n\n13.\n+}\n+/*\n+ * ParallelCopyMain -\n..\n\n+}\n+/*\n+ * ParallelCopyLeader\n\nOne line space is required before starting a new function.\n\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Jul 2020 10:34:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Thanks for the comments Amit.\nOn Wed, Jul 15, 2020 at 10:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Few comments:\n> ====================\n> 0001-Copy-code-readjustment-to-support-parallel-copy\n>\n> I am not sure converting the code to macros is a good idea, it makes\n> this code harder to read. Also, there are a few changes which I am\n> not sure are necessary.\n> 1.\n> +/*\n> + * CLEAR_EOL_FROM_COPIED_DATA - Clear EOL from the copied data.\n> + */\n> +#define CLEAR_EOL_FROM_COPIED_DATA(copy_line_data, copy_line_pos,\n> copy_line_size) \\\n> +{ \\\n> + /* \\\n> + * If we didn't hit EOF, then we must have transferred the EOL marker \\\n> + * to line_buf along with the data. Get rid of it. 
\\\n> + */ \\\n> + switch (cstate->eol_type) \\\n> + { \\\n> + case EOL_NL: \\\n> + Assert(copy_line_size >= 1); \\\n> + Assert(copy_line_data[copy_line_pos - 1] == '\\n'); \\\n> + copy_line_data[copy_line_pos - 1] = '\\0'; \\\n> + copy_line_size--; \\\n> + break; \\\n> + case EOL_CR: \\\n> + Assert(copy_line_size >= 1); \\\n> + Assert(copy_line_data[copy_line_pos - 1] == '\\r'); \\\n> + copy_line_data[copy_line_pos - 1] = '\\0'; \\\n> + copy_line_size--; \\\n> + break; \\\n> + case EOL_CRNL: \\\n> + Assert(copy_line_size >= 2); \\\n> + Assert(copy_line_data[copy_line_pos - 2] == '\\r'); \\\n> + Assert(copy_line_data[copy_line_pos - 1] == '\\n'); \\\n> + copy_line_data[copy_line_pos - 2] = '\\0'; \\\n> + copy_line_size -= 2; \\\n> + break; \\\n> + case EOL_UNKNOWN: \\\n> + /* shouldn't get here */ \\\n> + Assert(false); \\\n> + break; \\\n> + } \\\n> +}\n>\n> In the original code, we are using only len and buffer, here we are\n> using position, length/size and buffer. Is it really required or can\n> we do with just len and buffer?\n>\n\nPosition is required so that we can have common code for parallel &\nnon-parallel copy, in case of parallel copy position & length will\ndiffer as they can spread across multiple data blocks. Retained the\nvariables as is.\nChanged the macro to function.\n\n> 2.\n> +/*\n> + * INCREMENTPROCESSED - Increment the lines processed.\n> + */\n> +#define INCREMENTPROCESSED(processed) \\\n> +processed++;\n> +\n> +/*\n> + * GETPROCESSED - Get the lines processed.\n> + */\n> +#define GETPROCESSED(processed) \\\n> +return processed;\n> +\n>\n> I don't like converting above to macros. 
I don't think converting\n> such things to macros will buy us much.\n>\n\nThis macro will be extended to in\n0003-Allow-copy-from-command-to-process-data-from-file.patch:\n+#define INCREMENTPROCESSED(processed) \\\n+{ \\\n+ if (!IsParallelCopy()) \\\n+ processed++; \\\n+ else \\\n+\npg_atomic_add_fetch_u64(&cstate->pcdata->pcshared_info->processed, 1);\n\\\n+}\n\nThis need to be made to macro so that it can handle both parallel copy\nand non parallel copy.\nRetaining this as macro, if you insist I can move the change to\n0003-Allow-copy-from-command-to-process-data-from-file.patch patch.\n\n\n> 0002-Framework-for-leader-worker-in-parallel-copy\n> 3.\n> /*\n> + * Copy data block information.\n> + */\n> +typedef struct ParallelCopyDataBlock\n>\n> It is better to add a few comments atop this data structure to explain\n> how it is used?\n>\n\nFixed.\n\n> 4.\n> + * ParallelCopyLineBoundary is common data structure between leader & worker,\n> + * this is protected by the following sequence in the leader & worker.\n> + * Leader should operate in the following order:\n> + * 1) update first_block, start_offset & cur_lineno in any order.\n> + * 2) update line_size.\n> + * 3) update line_state.\n> + * Worker should operate in the following order:\n> + * 1) read line_size.\n> + * 2) only one worker should choose one line for processing, this is handled by\n> + * using pg_atomic_compare_exchange_u32, worker will change the sate to\n> + * LINE_WORKER_PROCESSING only if line_state is LINE_LEADER_POPULATED.\n> + * 3) read first_block, start_offset & cur_lineno in any order.\n> + */\n> +typedef struct ParallelCopyLineBoundary\n>\n> Here, you have mentioned how workers and leader should operate to make\n> sure access to the data is sane. 
However, you have not explained what\n> is the problem if they don't do so and it is not apparent to me.\n> Also, it is not very clear what is the purpose of this data structure\n> from comments.\n>\n\nFixed\n\n> 5.\n> +/*\n> + * Circular queue used to store the line information.\n> + */\n> +typedef struct ParallelCopyLineBoundaries\n> +{\n> + /* Position for the leader to populate a line. */\n> + uint32 leader_pos;\n>\n> I don't think the variable needs to be named as leader_pos, it is okay\n> to name it is as 'pos' as the comment above it explains its usage.\n>\n\nFixed\n\n> 7.\n> +#define DATA_BLOCK_SIZE RAW_BUF_SIZE\n> +#define RINGSIZE (10 * 1000)\n> +#define MAX_BLOCKS_COUNT 1000\n> +#define WORKER_CHUNK_COUNT 50 /* should be mod of RINGSIZE */\n>\n> It would be good if you can write a few comments to explain why you\n> have chosen these default values.\n>\n\nFixed\n\n> 8.\n> ParallelCopyCommonKeyData, shall we name this as\n> SerializedParallelCopyState or something like that? For example, see\n> SerializedSnapshotData which has been used to pass snapshot\n> information to passed to workers.\n>\n\nRenamed as suggested\n\n> 9.\n> +CopyCommonInfoForWorker(CopyState cstate, ParallelCopyCommonKeyData\n> *shared_cstate)\n>\n> If you agree with point-8, then let's name this as\n> SerializeParallelCopyState. See, if there is more usage of similar\n> types in the patch then lets change those as well.\n>\n\nFixed\n\n> 10.\n> + * in the DSM. 
The specified number of workers will then be launched.\n> + *\n> + */\n> +static ParallelContext*\n> +BeginParallelCopy(int nworkers, CopyState cstate, List *attnamelist, Oid relid)\n>\n> No need of an extra line with only '*' in the above multi-line comment.\n>\n\nFixed\n\n> 11.\n> BeginParallelCopy(..)\n> {\n> ..\n> + EstimateLineKeysStr(pcxt, cstate->null_print);\n> + EstimateLineKeysStr(pcxt, cstate->null_print_client);\n> + EstimateLineKeysStr(pcxt, cstate->delim);\n> + EstimateLineKeysStr(pcxt, cstate->quote);\n> + EstimateLineKeysStr(pcxt, cstate->escape);\n> ..\n> }\n>\n> Why do we need to do this separately for each variable of cstate?\n> Can't we serialize it along with other members of\n> SerializeParallelCopyState (a new name for ParallelCopyCommonKeyData)?\n>\n\nThese are variable length string variables, I felt we will not be able\nto serialize along with other members and need to be serialized\nseparately.\n\n> 12.\n> BeginParallelCopy(..)\n> {\n> ..\n> + LaunchParallelWorkers(pcxt);\n> + if (pcxt->nworkers_launched == 0)\n> + {\n> + EndParallelCopy(pcxt);\n> + elog(WARNING,\n> + \"No workers available, copy will be run in non-parallel mode\");\n> ..\n> }\n>\n> I don't see the need to issue a WARNING if we are not able to launch\n> workers. 
We don't do that for other cases where we fail to launch\n> workers.\n>\n\nFixed\n\n> 13.\n> +}\n> +/*\n> + * ParallelCopyMain -\n> ..\n>\n> +}\n> +/*\n> + * ParallelCopyLeader\n>\n> One line space is required before starting a new function.\n>\n\nFixed\n\nPlease find the updated patch with the fixes included.\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 16 Jul 2020 22:43:51 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": ">\n> Please find the updated patch with the fixes included.\n>\n\nPatch 0003-Allow-copy-from-command-to-process-data-from-file-ST.patch\nhad few indentation issues, I have fixed and attached the patch for\nthe same.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 17 Jul 2020 14:09:28 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Some review comments (mostly) from the leader side code changes:\n\n1) Do we need a DSM key for the FORCE_QUOTE option? I think FORCE_QUOTE\noption is only used with COPY TO and not COPY FROM so not sure why you have\nadded it.\n\nPARALLEL_COPY_KEY_FORCE_QUOTE_LIST\n\n2) Should we be allocating the parallel copy data structure only when it is\nconfirmed that the parallel copy is allowed?\n\npcdata = (ParallelCopyData *) palloc0(sizeof(ParallelCopyData));\ncstate->pcdata = pcdata;\n\nOr, if you want it to be allocated before confirming if Parallel copy is\nallowed or not, then I think it would be good to allocate it in\n*cstate->copycontext* memory context so that when EndCopy is called towards\nthe end of the COPY FROM operation, the entire context itself gets deleted\nthereby freeing the memory space allocated for pcdata. 
In fact it would be\ngood to ensure that all the local memory allocated inside the ctstate\nstructure gets allocated in the *cstate->copycontext* memory context.\n\n3) Should we allow Parallel Copy when the insert method is\nCIM_MULTI_CONDITIONAL?\n\n+ /* Check if the insertion mode is single. */\n+ if (FindInsertMethod(cstate) == CIM_SINGLE)\n+ return false;\n\nI know we have added checks in CopyFrom() to ensure that if any trigger\n(before row or instead of) is found on any of partition being loaded with\ndata, then COPY FROM operation would fail, but does it mean that we are\nokay to perform parallel copy on partitioned table. Have we done some\nperformance testing with the partitioned table where the data in the input\nfile needs to be routed to the different partitions?\n\n4) There are lot of if-checks in IsParallelCopyAllowed function that are\nchecked in CopyFrom function as well which means in case of Parallel Copy\nthose checks will get executed multiple times (first by the leader and from\nsecond time onwards by each worker process). Is that required?\n\n5) Should the worker process be calling this function when the leader has\nalready called it once in ExecBeforeStmtTrigger()?\n\n/* Verify the named relation is a valid target for INSERT */\nCheckValidResultRel(resultRelInfo, CMD_INSERT);\n\n6) I think it would be good to re-write the comments atop\nParallelCopyLeader(). From the present comments it appears as if you were\ntrying to put the information pointwise but somehow you ended up putting in\na paragraph. The comments also have some typos like *line beaks* which\npossibly means line breaks. This is applicable for other comments as well\nwhere you\n\n7) Is the following checking equivalent to IsWorker()? 
If so, it would be\ngood to replace it with an IsWorker like macro to increase the readability.\n\n(IsParallelCopy() && !IsLeader())\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Fri, Jul 17, 2020 at 2:09 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> >\n> > Please find the updated patch with the fixes included.\n> >\n>\n> Patch 0003-Allow-copy-from-command-to-process-data-from-file-ST.patch\n> had few indentation issues, I have fixed and attached the patch for\n> the same.\n>\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>\n",
    "msg_date": "Fri, 17 Jul 2020 19:18:11 +0530",
    "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
    "msg_from_op": false,
    "msg_subject": "Re: Parallel copy"
  },
  {
    "msg_contents": "On Fri, Jul 17, 2020 at 2:09 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> >\n> > Please find the updated patch with the fixes included.\n> >\n>\n> Patch 0003-Allow-copy-from-command-to-process-data-from-file-ST.patch\n> had few indentation issues, I have fixed and attached the patch for\n> the same.\n>\n\n
One way is to use commands like \"git\nformat-patch -6 -v <version_of_patch_series>\" or you can add the\nversion number manually.\n\nReview comments:\n===================\n\n0001-Copy-code-readjustment-to-support-parallel-copy\n1.\n@@ -807,8 +835,11 @@ CopyLoadRawBuf(CopyState cstate)\n else\n nbytes = 0; /* no data need be saved */\n\n+ if (cstate->copy_dest == COPY_NEW_FE)\n+ minread = RAW_BUF_SIZE - nbytes;\n+\n inbytes = CopyGetData(cstate, cstate->raw_buf + nbytes,\n- 1, RAW_BUF_SIZE - nbytes);\n+ minread, RAW_BUF_SIZE - nbytes);\n\nNo comment to explain why this change is done?\n\n0002-Framework-for-leader-worker-in-parallel-copy\n2.\n+ * ParallelCopyLineBoundary is common data structure between leader & worker,\n+ * Leader process will be populating data block, data block offset &\nthe size of\n+ * the record in DSM for the workers to copy the data into the relation.\n+ * This is protected by the following sequence in the leader & worker. If they\n+ * don't follow this order the worker might process wrong line_size and leader\n+ * might populate the information which worker has not yet processed or in the\n+ * process of processing.\n+ * Leader should operate in the following order:\n+ * 1) check if line_size is -1, if not wait, it means worker is still\n+ * processing.\n+ * 2) set line_state to LINE_LEADER_POPULATING.\n+ * 3) update first_block, start_offset & cur_lineno in any order.\n+ * 4) update line_size.\n+ * 5) update line_state to LINE_LEADER_POPULATED.\n+ * Worker should operate in the following order:\n+ * 1) check line_state is LINE_LEADER_POPULATED, if not it means\nleader is still\n+ * populating the data.\n+ * 2) read line_size.\n+ * 3) only one worker should choose one line for processing, this is handled by\n+ * using pg_atomic_compare_exchange_u32, worker will change the sate to\n+ * LINE_WORKER_PROCESSING only if line_state is LINE_LEADER_POPULATED.\n+ * 4) read first_block, start_offset & cur_lineno in any order.\n+ * 5) process 
line_size data.\n+ * 6) update line_size to -1.\n+ */\n+typedef struct ParallelCopyLineBoundary\n\nAre we doing all this state management to avoid using locks while\nprocessing lines? If so, I think we can use either spinlock or LWLock\nto keep the main patch simple and then provide a later patch to make\nit lock-less. This will allow us to first focus on the main design of\nthe patch rather than trying to make this datastructure processing\nlock-less in the best possible way.\n\n3.\n+ /*\n+ * Actual lines inserted by worker (some records will be filtered based on\n+ * where condition).\n+ */\n+ pg_atomic_uint64 processed;\n+ pg_atomic_uint64 total_worker_processed; /* total processed records\nby the workers */\n\nThe difference between processed and total_worker_processed is not\nclear. Can we expand the comments a bit?\n\n4.\n+ * SerializeList - Insert a list into shared memory.\n+ */\n+static void\n+SerializeList(ParallelContext *pcxt, int key, List *inputlist,\n+ Size est_list_size)\n+{\n+ if (inputlist != NIL)\n+ {\n+ ParallelCopyKeyListInfo *sharedlistinfo = (ParallelCopyKeyListInfo\n*)shm_toc_allocate(pcxt->toc,\n+ est_list_size);\n+ CopyListSharedMemory(inputlist, est_list_size, sharedlistinfo);\n+ shm_toc_insert(pcxt->toc, key, sharedlistinfo);\n+ }\n+}\n\nWhy do we need to write a special mechanism (CopyListSharedMemory) to\nserialize a list. Why can't we use nodeToString? It should be able\nto take care of List datatype, see outNode which is called from\nnodeToString. 
Once you do that, I think you won't need even\nEstimateLineKeysList, strlen should work instead.\n\nCheck, if you have any similar special handling for other types that\ncan be dealt with nodeToString?\n\n5.\n+ MemSet(shared_info_ptr, 0, est_shared_info);\n+ shared_info_ptr->is_read_in_progress = true;\n+ shared_info_ptr->cur_block_pos = -1;\n+ shared_info_ptr->full_transaction_id = full_transaction_id;\n+ shared_info_ptr->mycid = GetCurrentCommandId(true);\n+ for (count = 0; count < RINGSIZE; count++)\n+ {\n+ ParallelCopyLineBoundary *lineInfo =\n&shared_info_ptr->line_boundaries.ring[count];\n+ pg_atomic_init_u32(&(lineInfo->line_size), -1);\n+ }\n+\n\nYou can move this initialization in a separate function.\n\n6.\nIn function BeginParallelCopy(), you need to keep a provision to\ncollect wal_usage and buf_usage stats. See _bt_begin_parallel for\nreference. Those will be required for pg_stat_statements.\n\n7.\nDeserializeString() -- it is better to name this function as RestoreString.\nParallelWorkerInitialization() -- it is better to name this function\nas InitializeParallelCopyInfo or something like that, the current name\nis quite confusing.\nParallelCopyLeader() -- how about ParallelCopyFrom? ParallelCopyLeader\ndoesn't sound good to me. You can suggest something else if you don't\nlike ParallelCopyFrom\n\n8.\n /*\n- * PopulateGlobalsForCopyFrom - Populates the common variables\nrequired for copy\n- * from operation. This is a helper function for BeginCopy function.\n+ * PopulateCatalogInformation - Populates the common variables\nrequired for copy\n+ * from operation. This is a helper function for BeginCopy &\n+ * ParallelWorkerInitialization function.\n */\n static void\n PopulateGlobalsForCopyFrom(CopyState cstate, TupleDesc tupDesc,\n- List *attnamelist)\n+ List *attnamelist)\n\nThe actual function name and the name in function header don't match.\nI also don't like this function name, how about\nPopulateCommonCstateInfo? 
Similarly how about changing\nPopulateCatalogInformation to PopulateCstateCatalogInfo?\n\n9.\n+static const struct\n+{\n+ char *fn_name;\n+ copy_data_source_cb fn_addr;\n+} InternalParallelCopyFuncPtrs[] =\n+\n+{\n+ {\n+ \"copy_read_data\", copy_read_data\n+ },\n+};\n\nThe function copy_read_data is present in\nsrc/backend/replication/logical/tablesync.c and seems to be used\nduring logical replication. Why do we want to expose this function as\npart of this patch?\n\n0003-Allow-copy-from-command-to-process-data-from-file-ST\n10.\nIn the commit message, you have written \"The leader does not\nparticipate in the insertion of data, leaders only responsibility will\nbe to identify the lines as fast as possible for the workers to do the\nactual copy operation. The leader waits till all the lines populated\nare processed by the workers and exits.\"\n\nI think you should also mention that we have chosen this design based\non the reason \"that everything stalls if the leader doesn't accept\nfurther input data, as well as when there are no available splitted\nchunks so it doesn't seem like a good idea to have the leader do other\nwork. 
This is backed by the performance data where we have seen that\nwith 1 worker there is just a 5-10% (or whatever percentage difference\nyou have seen) performance difference)\".\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Jul 2020 15:54:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Thanks for your comments Amit, i have worked on the comments, my thoughts\non the same are mentioned below.\n\nOn Tue, Jul 21, 2020 at 3:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 17, 2020 at 2:09 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > >\n> > > Please find the updated patch with the fixes included.\n> > >\n> >\n> > Patch 0003-Allow-copy-from-command-to-process-data-from-file-ST.patch\n> > had few indentation issues, I have fixed and attached the patch for\n> > the same.\n> >\n>\n> Ensure to use the version with each patch-series as that makes it\n> easier for the reviewer to verify the changes done in the latest\n> version of the patch. One way is to use commands like \"git\n> format-patch -6 -v <version_of_patch_series>\" or you can add the\n> version number manually.\n>\n\nTaken care.\n\n> Review comments:\n> ===================\n>\n> 0001-Copy-code-readjustment-to-support-parallel-copy\n> 1.\n> @@ -807,8 +835,11 @@ CopyLoadRawBuf(CopyState cstate)\n> else\n> nbytes = 0; /* no data need be saved */\n>\n> + if (cstate->copy_dest == COPY_NEW_FE)\n> + minread = RAW_BUF_SIZE - nbytes;\n> +\n> inbytes = CopyGetData(cstate, cstate->raw_buf + nbytes,\n> - 1, RAW_BUF_SIZE - nbytes);\n> + minread, RAW_BUF_SIZE - nbytes);\n>\n> No comment to explain why this change is done?\n>\n> 0002-Framework-for-leader-worker-in-parallel-copy\n\nCurrently CopyGetData copies a lesser amount of data to buffer even though\nspace is available in buffer because minread was passed as 1 to\nCopyGetData. 
Because of this there are frequent call to CopyGetData for\nfetching the data. In this case it will load only some data due to the\nbelow check:\nwhile (maxread > 0 && bytesread < minread && !cstate->reached_eof)\nAfter reading some data bytesread will be greater than minread which is\npassed as 1 and return with lesser amount of data, even though there is\nsome space.\nThis change is required for parallel copy feature as each time we get a new\nDSM data block which is of 64K size and copy the data. If we copy less data\ninto DSM data blocks we might end up consuming all the DSM data blocks. I\nfelt this issue can be fixed as part of HEAD. Have posted a separate thread\n[1] for this. I'm planning to remove that change once it gets committed.\nCan that go as a separate\npatch or should we include it here?\n[1] -\nhttps://www.postgresql.org/message-id/CALDaNm0v4CjmvSnftYnx_9pOS_dKRG%3DO3NnBgJsQmi0KipvLog%40mail.gmail.com\n\n> 2.\n> + * ParallelCopyLineBoundary is common data structure between leader &\nworker,\n> + * Leader process will be populating data block, data block offset &\n> the size of\n> + * the record in DSM for the workers to copy the data into the relation.\n> + * This is protected by the following sequence in the leader & worker.\nIf they\n> + * don't follow this order the worker might process wrong line_size and\nleader\n> + * might populate the information which worker has not yet processed or\nin the\n> + * process of processing.\n> + * Leader should operate in the following order:\n> + * 1) check if line_size is -1, if not wait, it means worker is still\n> + * processing.\n> + * 2) set line_state to LINE_LEADER_POPULATING.\n> + * 3) update first_block, start_offset & cur_lineno in any order.\n> + * 4) update line_size.\n> + * 5) update line_state to LINE_LEADER_POPULATED.\n> + * Worker should operate in the following order:\n> + * 1) check line_state is LINE_LEADER_POPULATED, if not it means\n> leader is still\n> + * populating the data.\n> + * 2) 
read line_size.\n> + * 3) only one worker should choose one line for processing, this is\nhandled by\n> + * using pg_atomic_compare_exchange_u32, worker will change the sate\nto\n> + * LINE_WORKER_PROCESSING only if line_state is LINE_LEADER_POPULATED.\n> + * 4) read first_block, start_offset & cur_lineno in any order.\n> + * 5) process line_size data.\n> + * 6) update line_size to -1.\n> + */\n> +typedef struct ParallelCopyLineBoundary\n>\n> Are we doing all this state management to avoid using locks while\n> processing lines? If so, I think we can use either spinlock or LWLock\n> to keep the main patch simple and then provide a later patch to make\n> it lock-less. This will allow us to first focus on the main design of\n> the patch rather than trying to make this datastructure processing\n> lock-less in the best possible way.\n>\n\nThe steps will be more or less same if we use spinlock too. step 1, step 3\n& step 4 will be common we have to use lock & unlock instead of step 2 &\nstep 5. I feel we can retain the current implementation.\n\n> 3.\n> + /*\n> + * Actual lines inserted by worker (some records will be filtered based\non\n> + * where condition).\n> + */\n> + pg_atomic_uint64 processed;\n> + pg_atomic_uint64 total_worker_processed; /* total processed records\n> by the workers */\n>\n> The difference between processed and total_worker_processed is not\n> clear. 
Can we expand the comments a bit?\n>\n\nFixed\n\n> 4.\n> + * SerializeList - Insert a list into shared memory.\n> + */\n> +static void\n> +SerializeList(ParallelContext *pcxt, int key, List *inputlist,\n> + Size est_list_size)\n> +{\n> + if (inputlist != NIL)\n> + {\n> + ParallelCopyKeyListInfo *sharedlistinfo = (ParallelCopyKeyListInfo\n> *)shm_toc_allocate(pcxt->toc,\n> + est_list_size);\n> + CopyListSharedMemory(inputlist, est_list_size, sharedlistinfo);\n> + shm_toc_insert(pcxt->toc, key, sharedlistinfo);\n> + }\n> +}\n>\n> Why do we need to write a special mechanism (CopyListSharedMemory) to\n> serialize a list. Why can't we use nodeToString? It should be able\n> to take care of List datatype, see outNode which is called from\n> nodeToString. Once you do that, I think you won't need even\n> EstimateLineKeysList, strlen should work instead.\n>\n> Check, if you have any similar special handling for other types that\n> can be dealt with nodeToString?\n>\n\nFixed\n\n> 5.\n> + MemSet(shared_info_ptr, 0, est_shared_info);\n> + shared_info_ptr->is_read_in_progress = true;\n> + shared_info_ptr->cur_block_pos = -1;\n> + shared_info_ptr->full_transaction_id = full_transaction_id;\n> + shared_info_ptr->mycid = GetCurrentCommandId(true);\n> + for (count = 0; count < RINGSIZE; count++)\n> + {\n> + ParallelCopyLineBoundary *lineInfo =\n> &shared_info_ptr->line_boundaries.ring[count];\n> + pg_atomic_init_u32(&(lineInfo->line_size), -1);\n> + }\n> +\n>\n> You can move this initialization in a separate function.\n>\n\nFixed\n\n> 6.\n> In function BeginParallelCopy(), you need to keep a provision to\n> collect wal_usage and buf_usage stats. See _bt_begin_parallel for\n> reference. 
Those will be required for pg_stat_statements.\n>\n\nFixed\n\n> 7.\n> DeserializeString() -- it is better to name this function as\nRestoreString.\n> ParallelWorkerInitialization() -- it is better to name this function\n> as InitializeParallelCopyInfo or something like that, the current name\n> is quite confusing.\n> ParallelCopyLeader() -- how about ParallelCopyFrom? ParallelCopyLeader\n> doesn't sound good to me. You can suggest something else if you don't\n> like ParallelCopyFrom\n>\n\nFixed\n\n> 8.\n> /*\n> - * PopulateGlobalsForCopyFrom - Populates the common variables\n> required for copy\n> - * from operation. This is a helper function for BeginCopy function.\n> + * PopulateCatalogInformation - Populates the common variables\n> required for copy\n> + * from operation. This is a helper function for BeginCopy &\n> + * ParallelWorkerInitialization function.\n> */\n> static void\n> PopulateGlobalsForCopyFrom(CopyState cstate, TupleDesc tupDesc,\n> - List *attnamelist)\n> + List *attnamelist)\n>\n> The actual function name and the name in function header don't match.\n> I also don't like this function name, how about\n> PopulateCommonCstateInfo? Similarly how about changing\n> PopulateCatalogInformation to PopulateCstateCatalogInfo?\n>\n\nFixed\n\n> 9.\n> +static const struct\n> +{\n> + char *fn_name;\n> + copy_data_source_cb fn_addr;\n> +} InternalParallelCopyFuncPtrs[] =\n> +\n> +{\n> + {\n> + \"copy_read_data\", copy_read_data\n> + },\n> +};\n>\n> The function copy_read_data is present in\n> src/backend/replication/logical/tablesync.c and seems to be used\n> during logical replication. Why do we want to expose this function as\n> part of this patch?\n>\n\nI was thinking we could include the framework to support parallelism for\nlogical replication too and can be enhanced when it is needed. 
Now I have\nremoved this as part of the new patch provided, that can be added whenever\nrequired.\n\n> 0003-Allow-copy-from-command-to-process-data-from-file-ST\n> 10.\n> In the commit message, you have written \"The leader does not\n> participate in the insertion of data, leaders only responsibility will\n> be to identify the lines as fast as possible for the workers to do the\n> actual copy operation. The leader waits till all the lines populated\n> are processed by the workers and exits.\"\n>\n> I think you should also mention that we have chosen this design based\n> on the reason \"that everything stalls if the leader doesn't accept\n> further input data, as well as when there are no available splitted\n> chunks so it doesn't seem like a good idea to have the leader do other\n> work. This is backed by the performance data where we have seen that\n> with 1 worker there is just a 5-10% (or whatever percentage difference\n> you have seen) performance difference)\".\n\nFixed.\nPlease find the new patch attached with the fixes.\nThoughts?\n",
    "msg_date": "Wed, 22 Jul 2020 19:47:57 +0530",
    "msg_from": "vignesh C <vignesh21@gmail.com>",
    "msg_from_op": false,
    "msg_subject": "Re: Parallel copy"
  },
  {
    "msg_contents": "Thanks for reviewing and providing the comments Ashutosh.\nPlease find my thoughts below:\n\nOn Fri, Jul 17, 2020 at 7:18 PM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n>\n> Some review comments (mostly) from the leader side code changes:\n>\n> 1) Do we need a DSM key for the FORCE_QUOTE option? I think FORCE_QUOTE\noption is only used with COPY TO and not COPY FROM so not sure why you have\nadded it.\n>\n> PARALLEL_COPY_KEY_FORCE_QUOTE_LIST\n>\n\nFixed\n\n> 2) Should we be allocating the parallel copy data structure only when it\nis confirmed that the parallel copy is allowed?\n>\n> pcdata = (ParallelCopyData *) palloc0(sizeof(ParallelCopyData));\n> cstate->pcdata = pcdata;\n>\n> Or, if you want it to be allocated before confirming if Parallel copy is\nallowed or not, then I think it would be good to allocate it in\n*cstate->copycontext* memory context so that when EndCopy is called towards\nthe end of the COPY FROM operation, the entire context itself gets deleted\nthereby freeing the memory space allocated for pcdata. 
In fact it would be\ngood to ensure that all the local memory allocated inside the ctstate\nstructure gets allocated in the *cstate->copycontext* memory context.\n>\n\nFixed\n\n> 3) Should we allow Parallel Copy when the insert method is\nCIM_MULTI_CONDITIONAL?\n>\n> + /* Check if the insertion mode is single. */\n> + if (FindInsertMethod(cstate) == CIM_SINGLE)\n> + return false;\n>\n> I know we have added checks in CopyFrom() to ensure that if any trigger\n(before row or instead of) is found on any of partition being loaded with\ndata, then COPY FROM operation would fail, but does it mean that we are\nokay to perform parallel copy on partitioned table. Have we done some\nperformance testing with the partitioned table where the data in the input\nfile needs to be routed to the different partitions?\n>\n\nPartition data is handled like what Amit had told in one of earlier mails\n[1]. My colleague Bharath has run performance test with partition table,\nhe will be sharing the results.\n\n> 4) There are lot of if-checks in IsParallelCopyAllowed function that are\nchecked in CopyFrom function as well which means in case of Parallel Copy\nthose checks will get executed multiple times (first by the leader and from\nsecond time onwards by each worker process). Is that required?\n>\n\nIt is called from BeginParallelCopy, This will be called only once. This\nchange is ok.\n\n> 5) Should the worker process be calling this function when the leader has\nalready called it once in ExecBeforeStmtTrigger()?\n>\n> /* Verify the named relation is a valid target for INSERT */\n> CheckValidResultRel(resultRelInfo, CMD_INSERT);\n>\n\nFixed.\n\n> 6) I think it would be good to re-write the comments atop\nParallelCopyLeader(). From the present comments it appears as if you were\ntrying to put the information pointwise but somehow you ended up putting in\na paragraph. The comments also have some typos like *line beaks* which\npossibly means line breaks. 
This is applicable for other comments as well
where you
>

Fixed.

> 7) Is the following checking equivalent to IsWorker()? If so, it would be
good to replace it with an IsWorker like macro to increase the readability.
>
> (IsParallelCopy() && !IsLeader())
>

Fixed.

These have been fixed and the new patch is attached as part of my previous
mail.
[1] -
https://www.postgresql.org/message-id/CAA4eK1LQPxULxw8JpucX0PwzQQRk%3Dq4jG32cU1us2%2B-mtzZUQg%40mail.gmail.com

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
",
"msg_date": "Wed, 22 Jul 2020 19:56:25 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel copy"
},
{
"msg_contents": "On Wed, Jul 22, 2020 at 7:56 PM vignesh C <vignesh21@gmail.com> wrote:
>
> Thanks for reviewing and providing the comments Ashutosh.
> Please find my thoughts below:
>
> On Fri, Jul 17, 2020 at 7:18 PM Ashutosh Sharma <ashu.coek88@gmail.com>
wrote:
> >
> > Some review comments (mostly) from the leader side code changes:
> >
> > 3) Should we allow Parallel Copy when the insert method is
CIM_MULTI_CONDITIONAL?
> >
> > +   /* Check if the insertion mode is single. */
> > +   if (FindInsertMethod(cstate) == CIM_SINGLE)
> > +       return false;
> >
> > I know we have added checks in CopyFrom() to ensure that if any trigger
(before row or instead of) is found on any of partition being loaded with
data, then COPY FROM operation would fail, but does it mean that we are
okay to perform parallel copy on partitioned table. Have we done some
performance testing with the partitioned table where the data in the input
file needs to be routed to the different partitions?
> >
>
> Partition data is handled like what Amit had told in one of earlier mails
[1]. 
My colleague Bharath has run performance test with partition table,
he will be sharing the results.
>

I ran tests for partitioned use cases - results are similar to that of non
partitioned cases[1].

parallel workers test case 1(exec time in sec): copy from csv file, 5.1GB,
10million tuples, 4 range partitions, 3 indexes on integer columns unique
data test case 2(exec time in sec): copy from csv file, 5.1GB, 10million
tuples, 4 range partitions, unique data
0 205.403(1X) 135(1X)
2 114.724(1.79X) 59.388(2.27X)
4 99.017(2.07X) 56.742(2.34X)
8 99.722(2.06X) 66.323(2.03X)
16 98.147(2.09X) 66.054(2.04X)
20 97.723(2.1X) 66.389(2.03X)
30 97.048(2.11X) 70.568(1.91X)

With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com
",
"msg_date": "Thu, 23 Jul 2020 08:50:51 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel copy"
},
{
"msg_contents": "On Thu, Jul 23, 2020 at 8:51 AM Bharath Rupireddy <
bharath.rupireddyforpostgres@gmail.com> wrote:

> On Wed, Jul 22, 2020 at 7:56 PM vignesh C <vignesh21@gmail.com> wrote:
> >
> > Thanks for reviewing and providing the comments Ashutosh.
> > Please find my thoughts below:
> >
> > On Fri, Jul 17, 2020 at 7:18 PM Ashutosh Sharma <ashu.coek88@gmail.com>
> wrote:
> > >
> > > Some review comments (mostly) from the leader side code changes:
> > >
> > > 3) Should we allow Parallel Copy when the insert method is
> CIM_MULTI_CONDITIONAL?
> > >
> > > + /* Check if the insertion mode is single. */
> > > + if (FindInsertMethod(cstate) == CIM_SINGLE)
> > > + return false;
> > >
> > > I know we have added checks in CopyFrom() to ensure that if any
> trigger (before row or instead of) is found on any of partition being
> loaded with data, then COPY FROM operation would fail, but does it mean
> that we are okay to perform parallel copy on partitioned table. 
Have we
> done some performance testing with the partitioned table where the data in
> the input file needs to be routed to the different partitions?
> > >
> >
> > Partition data is handled like what Amit had told in one of earlier
> mails [1]. My colleague Bharath has run performance test with partition
> table, he will be sharing the results.
> >
>
> I ran tests for partitioned use cases - results are similar to that of non
> partitioned cases[1].
>

I could see the gain up to 10-11 times for non-partitioned cases [1], can
we use similar test case here as well (with one of the indexes on text
column or having gist index) to see its impact?

[1] -
https://www.postgresql.org/message-id/CALj2ACVR4WE98Per1H7ajosW8vafN16548O2UV8bG3p4D3XnPg%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
",
"msg_date": "Thu, 23 Jul 2020 09:22:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel copy"
},
{
"msg_contents": "I think, when doing the performance testing for partitioned table, it would
be good to also mention about the distribution of data in the input file.
One possible data distribution could be that we have let's say 100 tuples
in the input file, and every consecutive tuple belongs to a different
partition.

On Thu, Jul 23, 2020 at 8:51 AM Bharath Rupireddy <
bharath.rupireddyforpostgres@gmail.com> wrote:

> On Wed, Jul 22, 2020 at 7:56 PM vignesh C <vignesh21@gmail.com> wrote:
> >
> > Thanks for reviewing and providing the comments Ashutosh.
> > Please find my thoughts below:
> >
> > On Fri, Jul 17, 2020 at 7:18 PM Ashutosh Sharma <ashu.coek88@gmail.com>
> wrote:
> > >
> > > Some review comments (mostly) from the leader side code changes:
> > >
> > > 3) Should we allow Parallel Copy when the insert method is
> CIM_MULTI_CONDITIONAL?
> > >
> > > + /* Check if the insertion mode is single. 
*/
> > > + if (FindInsertMethod(cstate) == CIM_SINGLE)
> > > + return false;
> > >
> > > I know we have added checks in CopyFrom() to ensure that if any
> trigger (before row or instead of) is found on any of partition being
> loaded with data, then COPY FROM operation would fail, but does it mean
> that we are okay to perform parallel copy on partitioned table. Have we
> done some performance testing with the partitioned table where the data in
> the input file needs to be routed to the different partitions?
> > >
> >
> > Partition data is handled like what Amit had told in one of earlier
> mails [1]. My colleague Bharath has run performance test with partition
> table, he will be sharing the results.
> >
>
> I ran tests for partitioned use cases - results are similar to that of non
> partitioned cases[1].
>
> parallel workers test case 1(exec time in sec): copy from csv file,
> 5.1GB, 10million tuples, 4 range partitions, 3 indexes on integer columns
> unique data test case 2(exec time in sec): copy from csv file, 5.1GB,
> 10million tuples, 4 range partitions, unique data
> 0 205.403(1X) 135(1X)
> 2 114.724(1.79X) 59.388(2.27X)
> 4 99.017(2.07X) 56.742(2.34X)
> 8 99.722(2.06X) 66.323(2.03X)
> 16 98.147(2.09X) 66.054(2.04X)
> 20 97.723(2.1X) 66.389(2.03X)
> 30 97.048(2.11X) 70.568(1.91X)
>
> With Regards,
> Bharath Rupireddy.
> EnterpriseDB: http://www.enterprisedb.com
>
",
"msg_date": "Thu, 23 Jul 2020 10:21:12 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel copy"
},
{
"msg_contents": "On Thu, Jul 23, 2020 at 9:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
>>
>> I ran tests for partitioned use cases - results are similar to that of
non partitioned cases[1].
>
>
> I could see the gain up to 10-11 times for non-partitioned cases [1], can
we use similar test case here as well (with one of the indexes on text
column or having gist index) to see its impact?
>
> [1] -
https://www.postgresql.org/message-id/CALj2ACVR4WE98Per1H7ajosW8vafN16548O2UV8bG3p4D3XnPg%40mail.gmail.com
>

Thanks Amit! 
Please find the results of detailed testing done for
partitioned use cases:

Range Partitions: consecutive rows go into the same partitions.
  test case 1: copy from csv file, 2 indexes on integer columns and
               1 index on text column, 4 range partitions
  test case 2: copy from csv file, 1 gist index on text column,
               4 range partitions
  test case 3: copy from csv file, 3 indexes on integer columns,
               4 range partitions

workers   test case 1 (sec)   test case 2 (sec)   test case 3 (sec)
0         1051.924 (1X)       785.052 (1X)        205.403 (1X)
2         589.576 (1.78X)     421.974 (1.86X)     114.724 (1.79X)
4         321.960 (3.27X)     230.997 (3.4X)      99.017 (2.07X)
8         199.245 (5.23X)     *156.132 (5.02X)*   99.722 (2.06X)
16        127.343 (8.26X)     173.696 (4.52X)     98.147 (2.09X)
20        *122.029 (8.62X)*   186.418 (4.21X)     97.723 (2.1X)
30        142.876 (7.36X)     214.598 (3.66X)     *97.048 (2.11X)*

On Thu, Jul 23, 2020 at 10:21 AM Ashutosh Sharma <ashu.coek88@gmail.com>
wrote:
>
> I think, when doing the performance testing for partitioned table, it
would be good to also mention about the distribution of data in the input
file. One possible data distribution could be that we have let's say 100
tuples in the input file, and every consecutive tuple belongs to a
different partition.
>

To address Ashutosh's point, I used hash partitioning. 
Hope this helps to
clear the doubt.

Hash Partitions: where there are high chances that consecutive rows may go
into different partitions.
  test case 1: copy from csv file, 2 indexes on integer columns and
               1 index on text column, 4 hash partitions
  test case 2: copy from csv file, 1 gist index on text column,
               4 hash partitions
  test case 3: copy from csv file, 3 indexes on integer columns,
               4 hash partitions

workers   test case 1 (sec)    test case 2 (sec)   test case 3 (sec)
0         1060.884 (1X)        812.283 (1X)        207.745 (1X)
2         572.542 (1.85X)      418.454 (1.94X)     107.850 (1.93X)
4         298.132 (3.56X)      227.367 (3.57X)     *83.895 (2.48X)*
8         169.449 (6.26X)      137.993 (5.89X)     85.411 (2.43X)
16        112.297 (9.45X)      95.167 (8.53X)      96.136 (2.16X)
20        *101.546 (10.45X)*   *90.552 (8.97X)*    97.066 (2.14X)
30        113.877 (9.32X)      127.17 (6.38X)      96.819 (2.14X)


With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com
",
"msg_date": "Thu, 23 Jul 2020 18:07:14 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel copy"
},
{
"msg_contents": "The patches were not applying because of the recent commits.
I have rebased the patch over head & attached.

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com

On Thu, Jul 23, 2020 at 6:07 PM Bharath Rupireddy <
bharath.rupireddyforpostgres@gmail.com> wrote:

> On Thu, Jul 23, 2020 at 9:22 AM Amit Kapila <amit.kapila16@gmail.com>
> wrote:
> >
> >>
> >> I ran tests for partitioned use cases - results are similar to that of
> non partitioned cases[1].
> >
> >
> > I could see the gain up to 10-11 times for non-partitioned cases [1],
> can we use similar test case here as well (with one of the indexes on text
> column or having gist index) to see its impact?
> >
> > [1] -
> https://www.postgresql.org/message-id/CALj2ACVR4WE98Per1H7ajosW8vafN16548O2UV8bG3p4D3XnPg%40mail.gmail.com
> >
>
> Thanks Amit! 
Please find the results of detailed testing done for\n> partitioned use cases:\n>\n> Range Partitions: consecutive rows go into the same partitions.\n> parallel workers test case 1(exec time in sec): copy from csv file, 2\n> indexes on integer columns and 1 index on text column, 4 range partitions test\n> case 2(exec time in sec): copy from csv file, 1 gist index on text column,\n> 4 range partitions test case 3(exec time in sec): copy from csv file, 3\n> indexes on integer columns, 4 range partitions\n> 0 1051.924(1X) 785.052(1X) 205.403(1X)\n> 2 589.576(1.78X) 421.974(1.86X) 114.724(1.79X)\n> 4 321.960(3.27X) 230.997(3.4X) 99.017(2.07X)\n> 8 199.245(5.23X) *156.132(5.02X)* 99.722(2.06X)\n> 16 127.343(8.26X) 173.696(4.52X) 98.147(2.09X)\n> 20 *122.029(8.62X)* 186.418(4.21X) 97.723(2.1X)\n> 30 142.876(7.36X) 214.598(3.66X) *97.048(2.11X)*\n>\n> On Thu, Jul 23, 2020 at 10:21 AM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> >\n> > I think, when doing the performance testing for partitioned table, it\n> would be good to also mention about the distribution of data in the input\n> file. One possible data distribution could be that we have let's say 100\n> tuples in the input file, and every consecutive tuple belongs to a\n> different partition.\n> >\n>\n> To address Ashutosh's point, I used hash partitioning. 
Hope this helps to\n> clear the doubt.\n>\n> Hash Partitions: where there are high chances that consecutive rows may go\n> into different partitions.\n> parallel workers test case 1(exec time in sec): copy from csv file, 2\n> indexes on integer columns and 1 index on text column, 4 hash partitions test\n> case 2(exec time in sec): copy from csv file, 1 gist index on text column,\n> 4 hash partitions test case 3(exec time in sec): copy from csv file, 3\n> indexes on integer columns, 4 hash partitions\n> 0 1060.884(1X) 812.283(1X) 207.745(1X)\n> 2 572.542(1.85X) 418.454(1.94X) 107.850(1.93X)\n> 4 298.132(3.56X) 227.367(3.57X) *83.895(2.48X)*\n> 8 169.449(6.26X) 137.993(5.89X) 85.411(2.43X)\n> 16 112.297(9.45X) 95.167(8.53X) 96.136(2.16X)\n> 20 *101.546(10.45X)* *90.552(8.97X)* 97.066(2.14X)\n> 30 113.877(9.32X) 127.17(6.38X) 96.819(2.14X)\n>\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>", "msg_date": "Sat, 1 Aug 2020 09:54:54 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Sat, Aug 1, 2020 at 9:55 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> The patches were not applying because of the recent commits.\n> I have rebased the patch over head & attached.\n>\nI rebased v2-0006-Parallel-Copy-For-Binary-Format-Files.patch.\n\nPutting together all the patches rebased on to the latest commit\nb8fdee7d0ca8bd2165d46fb1468f75571b706a01. 
Patches 0001 to 0005,
which are from the previous mail, are rebased by Vignesh, and patch
0006 is rebased by me.

Please consider this patch set for further review.


With Regards,
Bharath Rupireddy.
EnterpriseDB: http://www.enterprisedb.com
",
"msg_date": "Mon, 3 Aug 2020 12:33:48 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel copy"
},
{
"msg_contents": "On Mon, Aug 03, 2020 at 12:33:48PM +0530, Bharath Rupireddy wrote:
>On Sat, Aug 1, 2020 at 9:55 AM vignesh C <vignesh21@gmail.com> wrote:
>>
>> The patches were not applying because of the recent commits.
>> I have rebased the patch over head & attached.
>>
>I rebased v2-0006-Parallel-Copy-For-Binary-Format-Files.patch.
>
>Putting together all the patches rebased on to the latest commit
>b8fdee7d0ca8bd2165d46fb1468f75571b706a01. Patches 0001 to 0005,
>which are from the previous mail, are rebased by Vignesh, and patch
>0006 is rebased by me.
>
>Please consider this patch set for further review.
>

I'd suggest incrementing the version every time an updated version is
submitted, even if it's just a rebased version. 
It makes it clearer\nwhich version of the code is being discussed, etc.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 4 Aug 2020 18:21:44 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Aug 4, 2020 at 9:51 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Mon, Aug 03, 2020 at 12:33:48PM +0530, Bharath Rupireddy wrote:\n> >On Sat, Aug 1, 2020 at 9:55 AM vignesh C <vignesh21@gmail.com> wrote:\n> >>\n> >> The patches were not applying because of the recent commits.\n> >> I have rebased the patch over head & attached.\n> >>\n> >I rebased v2-0006-Parallel-Copy-For-Binary-Format-Files.patch.\n> >\n> >Putting together all the patches rebased on to the latest commit\n> >b8fdee7d0ca8bd2165d46fb1468f75571b706a01. Patches from 0001 to 0005\n> >are rebased by Vignesh, that are from the previous mail and the patch\n> >0006 is rebased by me.\n> >\n> >Please consider this patch set for further review.\n> >\n>\n> I'd suggest incrementing the version every time an updated version is\n> submitted, even if it's just a rebased version. 
It makes it clearer
> which version of the code is being discussed, etc.

Sure, we will take care of this when we are sending the next set of patches.

Regards,
Vignesh
EnterpriseDB: http://www.enterprisedb.com
",
"msg_date": "Thu, 6 Aug 2020 10:19:15 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel copy"
},
{
"msg_contents": "The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, failed

Hi,

I don't claim to yet understand all of the Postgres internals that this patch is updating and interacting with, so I'm still testing and debugging portions of this patch, but would like to give feedback on what I've noticed so far.
I have done some ad-hoc testing of the patch using parallel copies from text/csv/binary files and have not yet struck any execution problems other than some option validation and associated error messages on boundary cases.

One general question that I have: is there a user benefit (over the normal non-parallel COPY) to allowing "COPY ... FROM ... WITH (PARALLEL 1)"?


My following comments are broken down by patch:

(1) v2-0001-Copy-code-readjustment-to-support-parallel-copy.patch

(i) Whilst I can't entirely blame these patches for it (as they are following what is already there), I can't help noticing the use of numerous macros in src/backend/commands/copy.c which paste in multiple lines of code in various places.
It's getting a little out-of-hand.
Surely the majority of these would be best inline functions instead?
Perhaps hasn't been done because too many parameters need to be passed - thoughts?


(2) v2-0002-Framework-for-leader-worker-in-parallel-copy.patch

(i) minor point: there are some tabbing/spacing issues in this patch (and the other patches), affecting alignment.
e.g. mixed tabs/spaces and misalignment in PARALLEL_COPY_KEY_xxx definitions

(ii)

+/*
+ * Each worker will be allocated WORKER_CHUNK_COUNT of records from DSM data
+ * block to process to avoid lock contention. This value should be mode of
+ * RINGSIZE, as wrap around cases is currently not handled while selecting the
+ * WORKER_CHUNK_COUNT by the worker.
+ */
+#define WORKER_CHUNK_COUNT 50


"This value should be mode of RINGSIZE ..."

-> typo: mode (mod? should evenly divide into RINGSIZE?)


(iii)
+ * using pg_atomic_compare_exchange_u32, worker will change the sate to

-> typo: sate (should be "state")


(iv)

+						 errmsg("parallel option supported only for copy from"),

-> suggest change to:		errmsg("parallel option is supported only for COPY FROM"),

(v)

+			errno = 0; /* To distinguish success/failure after call */
+			val = strtol(str, &endptr, 10);
+
+			/* Check for various possible errors */
+			if ((errno == ERANGE && (val == LONG_MAX || val == LONG_MIN))
+				|| (errno != 0 && val == 0) ||
+				*endptr)
+				ereport(ERROR,
+						(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+						 errmsg("improper use of argument to option \"%s\"",
+								defel->defname),
+						 parser_errposition(pstate, defel->location)));
+
+			if (endptr == str)
+			 ereport(ERROR,
+						(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+						 errmsg("no digits were found in argument to option \"%s\"",
+								defel->defname),
+						 parser_errposition(pstate, defel->location)));
+
+			cstate->nworkers = (int) val;
+
+			if (cstate->nworkers <= 0)
+				ereport(ERROR,
+						(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+						 errmsg("argument to option \"%s\" must be a positive integer greater than zero",
+								defel->defname),
+						 parser_errposition(pstate, defel->location)));


I think this validation code needs to be improved, including the error messages (e.g. when can a "positive integer" NOT be greater than zero?)

There is some overlap in the "no digits were found" case between the two conditions above, depending, for example, if the argument is quoted.
Also, "improper use of argument to option" sounds a bit odd and vague to me.
Finally, not range checking before casting long to int can lead to allowing out-of-range int values like in the following case:

test=# copy mytable from '/myspace/test_pcopy/tmp.dat' (parallel '-2147483648');
ERROR: argument to option "parallel" must be a positive integer greater than zero
LINE 1: copy mytable from '/myspace/test_pcopy/tmp.dat' (parallel '-2...
                                                                  ^
BUT the following is allowed...

test=# copy mytable from '/myspace/test_pcopy/tmp.dat' (parallel '-2147483649');
COPY 1000000


I'd suggest to change the above validation code to do similar validation to that for the CREATE TABLE parallel_workers storage parameter (case RELOPT_TYPE_INT in reloptions.c). 
Like that code, wouldn't it be best to range-check the integer option value to be within a reasonable range, say 1 to 1024, with a corresponding errdetail message if possible?


(3) v2-0003-Allow-copy-from-command-to-process-data-from-file.patch

(i)

Patch comment says:

\"This feature allows the copy from to leverage multiple CPUs in order to copy
data from file/STDIN to a table. This adds a PARALLEL option to COPY FROM
command where the user can specify the number of workers that can be used
to perform the COPY FROM command. Specifying zero as number of workers will
disable parallelism.\"

BUT - the changes to ProcessCopyOptions() specified in \"v2-0002-Framework-for-leader-worker-in-parallel-copy.patch\" do not allow zero workers to be specified - you get an error in that case. Patch comment should be updated accordingly.

(ii)

#define GETPROCESSED(processed) \
-return processed;
+if (!IsParallelCopy()) \
+	return processed; \
+else \
+	return pg_atomic_read_u64(&cstate->pcdata->pcshared_info->processed);
+

I think GETPROCESSED would be better named \"RETURNPROCESSED\".

(iii)

The below comment seems out-of-date with the current code - is it referring to the loop embedded at the bottom of the current loop that the comment is within?

+		/*
+		 * There is a possibility that the above loop has come out because
+		 * data_blk_ptr->curr_blk_completed is set, but dataSize read might
+		 * be an old value, if data_blk_ptr->curr_blk_completed and the line is
+		 * completed, line_size will be set. Read the line_size again to be
+		 * sure if it is complete or partial block.
+		 */

(iv)

I may be wrong here, but in the following block of code, isn't there a window of opportunity (however small) in which the line_state might be updated (LINE_WORKER_PROCESSED) by another worker just AFTER pg_atomic_read_u32() returns the current line_state which is put into curr_line_state, such that a write_pos update might be missed? And then a race-condition exists for reading/setting line_size (since line_size gets atomically set after line_state is set)?
If I am wrong in thinking this synchronization might not be correct, maybe the comments could be improved here to explain how this code is safe in that respect.


+		/* Get the current line information. */
+		lineInfo = &pcshared_info->line_boundaries.ring[write_pos];
+		curr_line_state = pg_atomic_read_u32(&lineInfo->line_state);
+		if ((write_pos % WORKER_CHUNK_COUNT == 0) &&
+			(curr_line_state == LINE_WORKER_PROCESSED ||
+			 curr_line_state == LINE_WORKER_PROCESSING))
+		{
+			pcdata->worker_processed_pos = write_pos;
+			write_pos = (write_pos + WORKER_CHUNK_COUNT) % RINGSIZE;
+			continue;
+		}
+
+		/* Get the size of this line. */
+		dataSize = pg_atomic_read_u32(&lineInfo->line_size);
+
+		if (dataSize != 0) /* If not an empty line. */
+		{
+			/* Get the block information. */
+			data_blk_ptr = &pcshared_info->data_blocks[lineInfo->first_block];
+
+			if (!data_blk_ptr->curr_blk_completed && (dataSize == -1))
+			{
+				/* Wait till the current line or block is added. */
+				COPY_WAIT_TO_PROCESS()
+				continue;
+			}
+		}
+
+		/* Make sure that no worker has consumed this element. */
+		if (pg_atomic_compare_exchange_u32(&lineInfo->line_state,
+										   &line_state, LINE_WORKER_PROCESSING))
+			break;


(4) v2-0004-Documentation-for-parallel-copy.patch

(i) I think that it is necessary to mention the \"max_worker_processes\" option in the description of the COPY statement PARALLEL option.

For example, something like:

+ Perform <command>COPY FROM</command> in parallel using <replaceable
+ class=\"parameter\"> integer</replaceable> background workers. Please
+ note that it is not guaranteed that the number of parallel workers
+ specified in <replaceable class=\"parameter\">integer</replaceable> will
+ be used during execution. It is possible for a copy to run with fewer
+ workers than specified, or even with no workers at all (for example,
+ due to the setting of max_worker_processes). This option is allowed
+ only in <command>COPY FROM</command>.


(5) v2-0005-Tests-for-parallel-copy.patch

(i) None of the provided tests seem to test beyond \"PARALLEL 2\"


(6) v2-0006-Parallel-Copy-For-Binary-Format-Files.patch

(i) In the ParallelCopyFrom() function, \"cstate->raw_buf\" is pfree()d:

+	/* raw_buf is not used in parallel copy, instead data blocks are used.*/
+	pfree(cstate->raw_buf);


This comment doesn't seem to be entirely true.
At least for text/csv file COPY FROM, cstate->raw_buf is subsequently referenced in the SetRawBufForLoad() function, which is called by CopyReadLineText():

    cur_data_blk_ptr = (cstate->raw_buf) ?
&pcshared_info->data_blocks[cur_block_pos] : NULL;

So I think cstate->raw_buf should be set to NULL after being pfree()d, and the comment fixed/adjusted.


(ii) This patch adds some macros (involving parallel copy checks) AFTER the comment:

/* End parallel copy Macros */


Regards,
Greg Nancarrow
Fujitsu Australia",
 "msg_date": "Wed, 12 Aug 2020 03:39:12 +0000",
 "msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
 "msg_from_op": false,
 "msg_subject": "Re: Parallel copy"
  },
  {
    "msg_contents": "Thanks Greg for reviewing the patch. Please find my thoughts on your comments.

On Wed, Aug 12, 2020 at 9:10 AM Greg Nancarrow <gregn4422@gmail.com> wrote:
> I have done some ad-hoc testing of the patch using parallel copies from text/csv/binary files and have not yet struck any execution problems other than some option validation and associated error messages on boundary cases.
>
> One general question that I have: is there a user benefit (over the normal non-parallel COPY) to allowing \"COPY ... FROM ... WITH (PARALLEL 1)\"?
>

There will be a marginal improvement, as the worker only needs to process
the data and need not do the file reading; the file reading will have been
done by the main process. The real improvement can be seen from 2 workers
onwards.

>
> My following comments are broken down by patch:
>
> (1) v2-0001-Copy-code-readjustment-to-support-parallel-copy.patch
>
> (i) Whilst I can't entirely blame these patches for it (as they are following what is already there), I can't help noticing the use of numerous macros in src/backend/commands/copy.c which paste in multiple lines of code in various places.
> It's getting a little out-of-hand. Surely the majority of these would be best inline functions instead?
> Perhaps hasn't been done because too many parameters need to be passed - thoughts?
>

I felt they have used macros mainly because this is a tight loop and
having macros gives better performance.
I have added the macros
CLEAR_EOL_LINE, INCREMENTPROCESSED & GETPROCESSED as there will be a
slight difference between parallel copy & non-parallel copy for these. In
the remaining patches these macros will be extended to include parallel
copy logic. Instead of having checks in the core logic, I thought of
keeping them as macros so that the readability is good.

>
> (2) v2-0002-Framework-for-leader-worker-in-parallel-copy.patch
>
> (i) minor point: there are some tabbing/spacing issues in this patch (and the other patches), affecting alignment.
> e.g. mixed tabs/spaces and misalignment in PARALLEL_COPY_KEY_xxx definitions
>

Fixed

> (ii)
>
> +/*
> + * Each worker will be allocated WORKER_CHUNK_COUNT of records from DSM data
> + * block to process to avoid lock contention. This value should be mode of
> + * RINGSIZE, as wrap around cases is currently not handled while selecting the
> + * WORKER_CHUNK_COUNT by the worker.
> + */
> +#define WORKER_CHUNK_COUNT 50
>
>
> \"This value should be mode of RINGSIZE ...\"
>
> -> typo: mode (mod?
should evenly divide into RINGSIZE?)\n\nFixed, changed it to divisible by.\n\n> (iii)\n> + * using pg_atomic_compare_exchange_u32, worker will change the sate to\n>\n> ->typo: sate (should be \"state\")\n\nFixed\n\n> (iv)\n>\n> + errmsg(\"parallel option supported only for copy from\"),\n>\n> -> suggest change to: errmsg(\"parallel option is supported only for COPY FROM\"),\n>\n\nFixed\n\n> (v)\n>\n> + errno = 0; /* To distinguish success/failure after call */\n> + val = strtol(str, &endptr, 10);\n> +\n> + /* Check for various possible errors */\n> + if ((errno == ERANGE && (val == LONG_MAX || val == LONG_MIN))\n> + || (errno != 0 && val == 0) ||\n> + *endptr)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"improper use of argument to option \\\"%s\\\"\",\n> + defel->defname),\n> + parser_errposition(pstate, defel->location)));\n> +\n> + if (endptr == str)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"no digits were found in argument to option \\\"%s\\\"\",\n> + defel->defname),\n> + parser_errposition(pstate, defel->location)));\n> +\n> + cstate->nworkers = (int) val;\n> +\n> + if (cstate->nworkers <= 0)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"argument to option \\\"%s\\\" must be a positive integer greater than zero\",\n> + defel->defname),\n> + parser_errposition(pstate, defel->location)));\n>\n>\n> I think this validation code needs to be improved, including the error messages (e.g. 
when can a \"positive integer\" NOT be greater than zero?)\n>\n> There is some overlap in the \"no digits were found\" case between the two conditions above, depending, for example, if the argument is quoted.\n> Also, \"improper use of argument to option\" sounds a bit odd and vague to me.\n> Finally, not range checking before casting long to int can lead to allowing out-of-range int values like in the following case:\n>\n> test=# copy mytable from '/myspace/test_pcopy/tmp.dat' (parallel '-2147483648');\n> ERROR: argument to option \"parallel\" must be a positive integer greater than zero\n> LINE 1: copy mytable from '/myspace/test_pcopy/tmp.dat' (parallel '-2...\n> ^\n> BUT the following is allowed...\n>\n> test=# copy mytable from '/myspace/test_pcopy/tmp.dat' (parallel '-2147483649');\n> COPY 1000000\n>\n>\n> I'd suggest to change the above validation code to do similar validation to that for the CREATE TABLE parallel_workers storage parameter (case RELOPT_TYPE_INT in reloptions.c). Like that code, wouldn't it be best to range-check the integer option value to be within a reasonable range, say 1 to 1024, with a corresponding errdetail message if possible?\n>\n\nFixed, changed as suggested.\n\n> (3) v2-0003-Allow-copy-from-command-to-process-data-from-file.patch\n>\n> (i)\n>\n> Patch comment says:\n>\n> \"This feature allows the copy from to leverage multiple CPUs in order to copy\n> data from file/STDIN to a table. This adds a PARALLEL option to COPY FROM\n> command where the user can specify the number of workers that can be used\n> to perform the COPY FROM command. Specifying zero as number of workers will\n> disable parallelism.\"\n>\n> BUT - the changes to ProcessCopyOptions() specified in \"v2-0002-Framework-for-leader-worker-in-parallel-copy.patch\" do not allow zero workers to be specified - you get an error in that case. Patch comment should be updated accordingly.\n>\n\nRemoved \"Specifying zero as number of workers will disable\nparallelism\". 
The new valid range is 1 to 1024.

> (ii)
>
> #define GETPROCESSED(processed) \
> -return processed;
> +if (!IsParallelCopy()) \
> + return processed; \
> +else \
> + return pg_atomic_read_u64(&cstate->pcdata->pcshared_info->processed);
> +
>
> I think GETPROCESSED would be better named \"RETURNPROCESSED\".
>

Fixed.

> (iii)
>
> The below comment seems out-of-date with the current code - is it referring to the loop embedded at the bottom of the current loop that the comment is within?
>
> + /*
> + * There is a possibility that the above loop has come out because
> + * data_blk_ptr->curr_blk_completed is set, but dataSize read might
> + * be an old value, if data_blk_ptr->curr_blk_completed and the line is
> + * completed, line_size will be set. Read the line_size again to be
> + * sure if it is complete or partial block.
> + */
>

Updated; it is referring to the embedded loop at the bottom of the current loop.

> (iv)
>
> I may be wrong here, but in the following block of code, isn't there a window of opportunity (however small) in which the line_state might be updated (LINE_WORKER_PROCESSED) by another worker just AFTER pg_atomic_read_u32() returns the current line_state which is put into curr_line_state, such that a write_pos update might be missed? And then a race-condition exists for reading/setting line_size (since line_size gets atomically set after line_state is set)?
> If I am wrong in thinking this synchronization might not be correct, maybe the comments could be improved here to explain how this code is safe in that respect.
>
>
> + /* Get the current line information.
*/\n> + lineInfo = &pcshared_info->line_boundaries.ring[write_pos];\n> + curr_line_state = pg_atomic_read_u32(&lineInfo->line_state);\n> + if ((write_pos % WORKER_CHUNK_COUNT == 0) &&\n> + (curr_line_state == LINE_WORKER_PROCESSED ||\n> + curr_line_state == LINE_WORKER_PROCESSING))\n> + {\n> + pcdata->worker_processed_pos = write_pos;\n> + write_pos = (write_pos + WORKER_CHUNK_COUNT) % RINGSIZE;\n> + continue;\n> + }\n> +\n> + /* Get the size of this line. */\n> + dataSize = pg_atomic_read_u32(&lineInfo->line_size);\n> +\n> + if (dataSize != 0) /* If not an empty line. */\n> + {\n> + /* Get the block information. */\n> + data_blk_ptr = &pcshared_info->data_blocks[lineInfo->first_block];\n> +\n> + if (!data_blk_ptr->curr_blk_completed && (dataSize == -1))\n> + {\n> + /* Wait till the current line or block is added. */\n> + COPY_WAIT_TO_PROCESS()\n> + continue;\n> + }\n> + }\n> +\n> + /* Make sure that no worker has consumed this element. */\n> + if (pg_atomic_compare_exchange_u32(&lineInfo->line_state,\n> + &line_state, LINE_WORKER_PROCESSING))\n> + break;\n>\n\nThis is not possible because of pg_atomic_compare_exchange_u32, this\nwill succeed only for one of the workers whose line_state is\nLINE_LEADER_POPULATED, for other workers it will fail. This is\nexplained in detail above ParallelCopyLineBoundary.\n\n>\n> (4) v2-0004-Documentation-for-parallel-copy.patch\n>\n> (i) I think that it is necessary to mention the \"max_worker_processes\" option in the description of the COPY statement PARALLEL option.\n>\n> For example, something like:\n>\n> + Perform <command>COPY FROM</command> in parallel using <replaceable\n> + class=\"parameter\"> integer</replaceable> background workers. Please\n> + note that it is not guaranteed that the number of parallel workers\n> + specified in <replaceable class=\"parameter\">integer</replaceable> will\n> + be used during execution. 
It is possible for a copy to run with fewer\n> + workers than specified, or even with no workers at all (for example,\n> + due to the setting of max_worker_processes). This option is allowed\n> + only in <command>COPY FROM</command>.\n>\n\nFixed.\n\n> (5) v2-0005-Tests-for-parallel-copy.patch\n>\n> (i) None of the provided tests seem to test beyond \"PARALLEL 2\"\n>\n\nI intentionally ran with 1 parallel worker, because when you specify\nmore than 1 parallel worker the order of record insertion can vary &\nthere may be random failures.\n\n>\n> (6) v2-0006-Parallel-Copy-For-Binary-Format-Files.patch\n>\n> (i) In the ParallelCopyFrom() function, \"cstate->raw_buf\" is pfree()d:\n>\n> + /* raw_buf is not used in parallel copy, instead data blocks are used.*/\n> + pfree(cstate->raw_buf);\n>\n\nraw_buf is not used in parallel copy, instead raw_buf will be pointing\nto shared memory data blocks. This memory was allocated as part of\nBeginCopyFrom, uptil this point we cannot be 100% sure as copy can be\nperformed sequentially like in case max_worker_processes is not\navailable, if it switches to sequential mode raw_buf will be used\nwhile performing copy operation. At this place we can safely free this\nmemory that was allocated.\n\n> This comment doesn't seem to be entirely true.\n> At least for text/csv file COPY FROM, cstate->raw_buf is subsequently referenced in the SetRawBufForLoad() function, which is called by CopyReadLineText():\n>\n> cur_data_blk_ptr = (cstate->raw_buf) ? 
&pcshared_info->data_blocks[cur_block_pos] : NULL;\n>\n> So I think cstate->raw_buf should be set to NULL after being pfree()d, and the comment fixed/adjusted.\n>\n>\n> (ii) This patch adds some macros (involving parallel copy checks) AFTER the comment:\n>\n> /* End parallel copy Macros */\n\nFixed, moved the macros above the comment.\n\nI have attached new set of patches with the fixes.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 14 Aug 2020 21:18:13 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi Vignesh,\n\nSome further comments:\n\n(1) v3-0002-Framework-for-leader-worker-in-parallel-copy.patch\n\n+/*\n+ * Each worker will be allocated WORKER_CHUNK_COUNT of records from DSM data\n+ * block to process to avoid lock contention. This value should be divisible by\n+ * RINGSIZE, as wrap around cases is currently not handled while selecting the\n+ * WORKER_CHUNK_COUNT by the worker.\n+ */\n+#define WORKER_CHUNK_COUNT 50\n\n\n\"This value should be divisible by RINGSIZE\" is not a correct\nstatement (since obviously 50 is not divisible by 10000).\nIt should say something like \"This value should evenly divide into\nRINGSIZE\", or \"RINGSIZE should be a multiple of WORKER_CHUNK_COUNT\".\n\n\n(2) v3-0003-Allow-copy-from-command-to-process-data-from-file.patch\n\n(i)\n\n+ /*\n+ * If the data is present in current block\nlineInfo. line_size\n+ * will be updated. If the data is spread\nacross the blocks either\n\nSomehow a space has been put between \"lineinfo.\" and \"line_size\".\nIt should be: \"If the data is present in current block\nlineInfo.line_size will be updated\"\n\n(ii)\n\n>This is not possible because of pg_atomic_compare_exchange_u32, this\n>will succeed only for one of the workers whose line_state is\n>LINE_LEADER_POPULATED, for other workers it will fail. 
This is\n>explained in detail above ParallelCopyLineBoundary.\n\nYes, but prior to that call to pg_atomic_compare_exchange_u32(),\naren't you separately reading line_state and line_state, so that\nbetween those reads, it may have transitioned from leader to another\nworker, such that the read line state (\"cur_line_state\", being checked\nin the if block) may not actually match what is now in the line_state\nand/or the read line_size (\"dataSize\") doesn't actually correspond to\nthe read line state?\n\n(sorry, still not 100% convinced that the synchronization and checks\nare safe in all cases)\n\n(3) v3-0006-Parallel-Copy-For-Binary-Format-Files.patch\n\n>raw_buf is not used in parallel copy, instead raw_buf will be pointing\n>to shared memory data blocks. This memory was allocated as part of\n>BeginCopyFrom, uptil this point we cannot be 100% sure as copy can be\n>performed sequentially like in case max_worker_processes is not\n>available, if it switches to sequential mode raw_buf will be used\n>while performing copy operation. At this place we can safely free this\n>memory that was allocated\n\nSo the following code (which checks raw_buf, which still points to\nmemory that has been pfreed) is still valid?\n\n In the SetRawBufForLoad() function, which is called by CopyReadLineText():\n\n cur_data_blk_ptr = (cstate->raw_buf) ?\n&pcshared_info->data_blocks[cur_block_pos] : NULL;\n\nThe above code looks a bit dicey to me. 
I stepped over that line in\nthe debugger when I debugged an instance of Parallel Copy, so it\ndefinitely gets executed.\nIt makes me wonder what other code could possibly be checking raw_buf\nand using it in some way, when in fact what it points to has been\npfreed.\n\nAre you able to add the following line of code, or will it (somehow)\nbreak logic that you are relying on?\n\npfree(cstate->raw_buf);\ncstate->raw_buf = NULL; <=== I suggest that this line is added\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Mon, 17 Aug 2020 14:14:36 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Thanks Greg for reviewing the patch. Please find my thoughts for your comments.\n\nOn Mon, Aug 17, 2020 at 9:44 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> Some further comments:\n>\n> (1) v3-0002-Framework-for-leader-worker-in-parallel-copy.patch\n>\n> +/*\n> + * Each worker will be allocated WORKER_CHUNK_COUNT of records from DSM data\n> + * block to process to avoid lock contention. This value should be divisible by\n> + * RINGSIZE, as wrap around cases is currently not handled while selecting the\n> + * WORKER_CHUNK_COUNT by the worker.\n> + */\n> +#define WORKER_CHUNK_COUNT 50\n>\n>\n> \"This value should be divisible by RINGSIZE\" is not a correct\n> statement (since obviously 50 is not divisible by 10000).\n> It should say something like \"This value should evenly divide into\n> RINGSIZE\", or \"RINGSIZE should be a multiple of WORKER_CHUNK_COUNT\".\n>\n\nFixed. Changed it to RINGSIZE should be a multiple of WORKER_CHUNK_COUNT.\n\n> (2) v3-0003-Allow-copy-from-command-to-process-data-from-file.patch\n>\n> (i)\n>\n> + /*\n> + * If the data is present in current block\n> lineInfo. line_size\n> + * will be updated. 
If the data is spread
> across the blocks either
>
> Somehow a space has been put between \"lineinfo.\" and \"line_size\".
> It should be: \"If the data is present in current block
> lineInfo.line_size will be updated\"

Fixed, changed it to lineinfo->line_size.

>
> (ii)
>
> >This is not possible because of pg_atomic_compare_exchange_u32, this
> >will succeed only for one of the workers whose line_state is
> >LINE_LEADER_POPULATED, for other workers it will fail. This is
> >explained in detail above ParallelCopyLineBoundary.
>
> Yes, but prior to that call to pg_atomic_compare_exchange_u32(),
> aren't you separately reading line_state and line_state, so that
> between those reads, it may have transitioned from leader to another
> worker, such that the read line state (\"cur_line_state\", being checked
> in the if block) may not actually match what is now in the line_state
> and/or the read line_size (\"dataSize\") doesn't actually correspond to
> the read line state?
>
> (sorry, still not 100% convinced that the synchronization and checks
> are safe in all cases)
>

I think you are describing a problem that could happen in the
following case:
when we read curr_line_state, the value was LINE_WORKER_PROCESSED or
LINE_WORKER_PROCESSING. Then in some cases, if the leader is very fast
compared to the workers, the leader quickly populates one line and
sets the state to LINE_LEADER_POPULATED. The state is changed to
LINE_LEADER_POPULATED while we are checking curr_line_state.
I feel this will not be a problem because the leader will populate & wait
till some RING element is available to populate. In the meantime the
worker has seen that the state is LINE_WORKER_PROCESSED or
LINE_WORKER_PROCESSING (the previous state that it read), the worker has
identified that this chunk was processed by some other worker, and the
worker will move on and try to get the next available chunk & insert those
records.
It will keep continuing till it gets the next chunk to\nprocess. Eventually one of the workers will get this chunk and process\nit.\n\n> (3) v3-0006-Parallel-Copy-For-Binary-Format-Files.patch\n>\n> >raw_buf is not used in parallel copy, instead raw_buf will be pointing\n> >to shared memory data blocks. This memory was allocated as part of\n> >BeginCopyFrom, uptil this point we cannot be 100% sure as copy can be\n> >performed sequentially like in case max_worker_processes is not\n> >available, if it switches to sequential mode raw_buf will be used\n> >while performing copy operation. At this place we can safely free this\n> >memory that was allocated\n>\n> So the following code (which checks raw_buf, which still points to\n> memory that has been pfreed) is still valid?\n>\n> In the SetRawBufForLoad() function, which is called by CopyReadLineText():\n>\n> cur_data_blk_ptr = (cstate->raw_buf) ?\n> &pcshared_info->data_blocks[cur_block_pos] : NULL;\n>\n> The above code looks a bit dicey to me. I stepped over that line in\n> the debugger when I debugged an instance of Parallel Copy, so it\n> definitely gets executed.\n> It makes me wonder what other code could possibly be checking raw_buf\n> and using it in some way, when in fact what it points to has been\n> pfreed.\n>\n> Are you able to add the following line of code, or will it (somehow)\n> break logic that you are relying on?\n>\n> pfree(cstate->raw_buf);\n> cstate->raw_buf = NULL; <=== I suggest that this line is added\n>\n\nYou are right, I have debugged & verified it sets it to an invalid\nblock which is not expected. There are chances this would have caused\nsome corruption in some machines. The suggested fix is required, I\nhave fixed it. 
I have moved this change to\n0003-Allow-copy-from-command-to-process-data-from-file.patch as\n0006-Parallel-Copy-For-Binary-Format-Files is only for Binary format\nparallel copy & that change is common change for parallel copy.\n\nI have attached new set of patches with the fixes.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 19 Aug 2020 11:51:29 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "> I have attached new set of patches with the fixes.\n> Thoughts?\n\nHi Vignesh,\n\nI don't really have any further comments on the code, but would like\nto share some results of some Parallel Copy performance tests I ran\n(attached).\n\nThe tests loaded a 5GB CSV data file into a 100 column table (of\ndifferent data types). The following were varied as part of the test:\n- Number of workers (1 – 10)\n- No indexes / 4-indexes\n- Default settings / increased resources (shared_buffers,work_mem, etc.)\n\n(I did not do any partition-related tests as I believe those type of\ntests were previously performed)\n\nI built Postgres (latest OSS code) with the latest Parallel Copy patches (v4).\nThe test system was a 32-core Intel Xeon E5-4650 server with 378GB of RAM.\n\n\nI observed the following trends:\n- For the data file size used, Parallel Copy achieved best performance\nusing about 9 – 10 workers. Larger data files may benefit from using\nmore workers. However, I couldn’t really see any better performance,\nfor example, from using 16 workers on a 10GB CSV data file compared to\nusing 8 workers. Results may also vary depending on machine\ncharacteristics.\n- Parallel Copy with 1 worker ran slower than normal Copy in a couple\nof cases (I did question if allowing 1 worker was useful in my patch\nreview).\n- Typical load time improvement (load factor) for Parallel Copy was\nbetween 2x and 3x. 
Better load factors can be obtained by using larger\ndata files and/or more indexes.\n- Increasing Postgres resources made little or no difference to\nParallel Copy performance when the target table had no indexes.\nIncreasing Postgres resources improved Parallel Copy performance when\nthe target table had indexes.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Thu, 27 Aug 2020 12:33:27 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Aug 27, 2020 at 8:04 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> > I have attached new set of patches with the fixes.\n> > Thoughts?\n>\n> Hi Vignesh,\n>\n> I don't really have any further comments on the code, but would like\n> to share some results of some Parallel Copy performance tests I ran\n> (attached).\n>\n> The tests loaded a 5GB CSV data file into a 100 column table (of\n> different data types). The following were varied as part of the test:\n> - Number of workers (1 – 10)\n> - No indexes / 4-indexes\n> - Default settings / increased resources (shared_buffers,work_mem, etc.)\n>\n> (I did not do any partition-related tests as I believe those type of\n> tests were previously performed)\n>\n> I built Postgres (latest OSS code) with the latest Parallel Copy patches (v4).\n> The test system was a 32-core Intel Xeon E5-4650 server with 378GB of RAM.\n>\n>\n> I observed the following trends:\n> - For the data file size used, Parallel Copy achieved best performance\n> using about 9 – 10 workers. Larger data files may benefit from using\n> more workers. However, I couldn’t really see any better performance,\n> for example, from using 16 workers on a 10GB CSV data file compared to\n> using 8 workers. 
Results may also vary depending on machine\n> characteristics.\n> - Parallel Copy with 1 worker ran slower than normal Copy in a couple\n> of cases (I did question if allowing 1 worker was useful in my patch\n> review).\n\nI think the reason is that for 1 worker case there is not much\nparallelization as a leader doesn't perform the actual load work.\nVignesh, can you please once see if the results are reproducible at\nyour end, if so, we can once compare the perf profiles to see why in\nsome cases we get improvement and in other cases not. Based on that we\ncan decide whether to allow the 1 worker case or not.\n\n> - Typical load time improvement (load factor) for Parallel Copy was\n> between 2x and 3x. Better load factors can be obtained by using larger\n> data files and/or more indexes.\n>\n\nNice improvement and I think you are right that with larger load data\nwe will get even better improvement.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 27 Aug 2020 08:24:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Aug 27, 2020 at 8:24 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 27, 2020 at 8:04 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > > I have attached new set of patches with the fixes.\n> > > Thoughts?\n> >\n> > Hi Vignesh,\n> >\n> > I don't really have any further comments on the code, but would like\n> > to share some results of some Parallel Copy performance tests I ran\n> > (attached).\n> >\n> > The tests loaded a 5GB CSV data file into a 100 column table (of\n> > different data types). 
The following were varied as part of the test:\n> > - Number of workers (1 – 10)\n> > - No indexes / 4-indexes\n> > - Default settings / increased resources (shared_buffers,work_mem, etc.)\n> >\n> > (I did not do any partition-related tests as I believe those type of\n> > tests were previously performed)\n> >\n> > I built Postgres (latest OSS code) with the latest Parallel Copy patches (v4).\n> > The test system was a 32-core Intel Xeon E5-4650 server with 378GB of RAM.\n> >\n> >\n> > I observed the following trends:\n> > - For the data file size used, Parallel Copy achieved best performance\n> > using about 9 – 10 workers. Larger data files may benefit from using\n> > more workers. However, I couldn’t really see any better performance,\n> > for example, from using 16 workers on a 10GB CSV data file compared to\n> > using 8 workers. Results may also vary depending on machine\n> > characteristics.\n> > - Parallel Copy with 1 worker ran slower than normal Copy in a couple\n> > of cases (I did question if allowing 1 worker was useful in my patch\n> > review).\n>\n> I think the reason is that for 1 worker case there is not much\n> parallelization as a leader doesn't perform the actual load work.\n> Vignesh, can you please once see if the results are reproducible at\n> your end, if so, we can once compare the perf profiles to see why in\n> some cases we get improvement and in other cases not. 
Based on that we\n> can decide whether to allow the 1 worker case or not.\n>\n\nI will spend some time on this and update.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 27 Aug 2020 16:56:45 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Aug 27, 2020 at 4:56 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Aug 27, 2020 at 8:24 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Aug 27, 2020 at 8:04 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > >\n> > > > I have attached new set of patches with the fixes.\n> > > > Thoughts?\n> > >\n> > > Hi Vignesh,\n> > >\n> > > I don't really have any further comments on the code, but would like\n> > > to share some results of some Parallel Copy performance tests I ran\n> > > (attached).\n> > >\n> > > The tests loaded a 5GB CSV data file into a 100 column table (of\n> > > different data types). The following were varied as part of the test:\n> > > - Number of workers (1 – 10)\n> > > - No indexes / 4-indexes\n> > > - Default settings / increased resources (shared_buffers,work_mem, etc.)\n> > >\n> > > (I did not do any partition-related tests as I believe those type of\n> > > tests were previously performed)\n> > >\n> > > I built Postgres (latest OSS code) with the latest Parallel Copy patches (v4).\n> > > The test system was a 32-core Intel Xeon E5-4650 server with 378GB of RAM.\n> > >\n> > >\n> > > I observed the following trends:\n> > > - For the data file size used, Parallel Copy achieved best performance\n> > > using about 9 – 10 workers. Larger data files may benefit from using\n> > > more workers. However, I couldn’t really see any better performance,\n> > > for example, from using 16 workers on a 10GB CSV data file compared to\n> > > using 8 workers. 
Results may also vary depending on machine\n> > > characteristics.\n> > > - Parallel Copy with 1 worker ran slower than normal Copy in a couple\n> > > of cases (I did question if allowing 1 worker was useful in my patch\n> > > review).\n> >\n> > I think the reason is that for 1 worker case there is not much\n> > parallelization as a leader doesn't perform the actual load work.\n> > Vignesh, can you please once see if the results are reproducible at\n> > your end, if so, we can once compare the perf profiles to see why in\n> > some cases we get improvement and in other cases not. Based on that we\n> > can decide whether to allow the 1 worker case or not.\n> >\n>\n> I will spend some time on this and update.\n>\n\nThanks.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 27 Aug 2020 17:42:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Aug 27, 2020 at 8:04 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> - Parallel Copy with 1 worker ran slower than normal Copy in a couple\n> of cases (I did question if allowing 1 worker was useful in my patch\n> review).\n\nThanks Greg for your review & testing.\nI had executed various tests with 1GB, 2GB & 5GB with 100 columns without\nparallel mode & with 1 parallel worker. Test result for the same is as\ngiven below:\nTest Without parallel mode With 1 Parallel worker\n1GB csv file 100 columns\n(100 bytes data in each column) 62 seconds 47 seconds (1.32X)\n1GB csv file 100 columns\n(1000 bytes data in each column) 89 seconds 78 seconds (1.14X)\n2GB csv file 100 columns\n(1 byte data in each column) 277 seconds 256 seconds (1.08X)\n5GB csv file 100 columns\n(100 byte data in each column) 515 seconds 445 seconds (1.16X)\nI have run the tests multiple times and have noticed the similar execution\ntimes in all the runs for the above tests.\nIn the above results there is slight improvement with 1 worker. 
In my tests\nI did not observe the degradation for copy with 1 worker compared to the\nnon parallel copy. Can you share with me the script you used to generate\nthe data & the ddl of the table, so that it will help me check that\nscenario you faced the problem.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com
", "msg_date": "Mon, 31 Aug 2020 16:13:48 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi Vignesh,\n\n>Can you share with me the script you used to generate the data & the ddl of the table, so that it will help me check that >scenario you faced the >problem.\n\nUnfortunately I can't directly share it (considered company IP),\nthough having said that it's only doing something that is relatively\nsimple and unremarkable, so I'd expect it to be much like what you are\ncurrently doing. I can describe it in general.\n\nThe table being used contains 100 columns (as I pointed out earlier),\nwith the first column of \"bigserial\" type, and the others of different\ntypes like \"character varying(255)\", \"numeric\", \"date\" and \"time\nwithout timezone\". There's about 60 of the \"character varying(255)\"\noverall, with the other types interspersed.\n\nWhen testing with indexes, 4 b-tree indexes were used that each\nincluded the first column and then distinctly 9 other columns.\n\nA CSV record (row) template file was created with test data\n(corresponding to the table), and that was simply copied and appended\nover and over with a record prefix in order to create the test data\nfile.\nThe following shell-script basically does it (but very slowly). 
I was\nusing a small C program to do similar, a lot faster.\nIn my case, N=2550000 produced about a 5GB CSV file.\n\n  file_out=data.csv; N=2550000; for ((i = 1; i <= N; i++)); do echo -n \"$i,\" >> $file_out;\ncat sample_record.csv >> $file_out; done\n\nOne other thing I should mention is that between each test run, I\ncleared the OS page cache, as described here:\nhttps://linuxhint.com/clear_cache_linux/\nThat way, each COPY FROM is not taking advantage of any OS-cached data\nfrom a previous COPY FROM.\n\nIf your data is somehow significantly different and you want to (and\ncan) share your script, then I can try it in my environment.\n\n\nRegards,\nGreg\n\n\n", "msg_date": "Tue, 1 Sep 2020 20:09:11 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Sep 1, 2020 at 3:39 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> Hi Vignesh,\n>\n> >Can you share with me the script you used to generate the data & the ddl of the table, so that it will help me check that >scenario you faced the >problem.\n>\n> Unfortunately I can't directly share it (considered company IP),\n> though having said that it's only doing something that is relatively\n> simple and unremarkable, so I'd expect it to be much like what you are\n> currently doing. I can describe it in general.\n>\n> The table being used contains 100 columns (as I pointed out earlier),\n> with the first column of \"bigserial\" type, and the others of different\n> types like \"character varying(255)\", \"numeric\", \"date\" and \"time\n> without timezone\". 
There's about 60 of the \"character varying(255)\"\n> overall, with the other types interspersed.\n>\n> When testing with indexes, 4 b-tree indexes were used that each\n> included the first column and then distinctly 9 other columns.\n>\n> A CSV record (row) template file was created with test data\n> (corresponding to the table), and that was simply copied and appended\n> over and over with a record prefix in order to create the test data\n> file.\n> The following shell-script basically does it (but very slowly). I was\n> using a small C program to do similar, a lot faster.\n> In my case, N=2550000 produced about a 5GB CSV file.\n>\n> file_out=data.csv; for i in {1..N}; do echo -n \"$i,\" >> $file_out;\n> cat sample_record.csv >> $file_out; done\n>\n> One other thing I should mention is that between each test run, I\n> cleared the OS page cache, as described here:\n> https://linuxhint.com/clear_cache_linux/\n> That way, each COPY FROM is not taking advantage of any OS-cached data\n> from a previous COPY FROM.\n\nI will try with a similar test and check if I can reproduce.\n\n> If your data is somehow significantly different and you want to (and\n> can) share your script, then I can try it in my environment.\n\nI have attached the scripts that I used for the test results I\nmentioned in my previous mail. create.sql file has the table that I\nused, insert_data_gen.txt has the insert data generation scripts. I\nvaried the count in insert_data_gen to generate csv files of 1GB, 2GB\n& 5GB & varied the data to generate 1 char, 10 char & 100 char for\neach column for various testing. 
You can rename insert_data_gen.txt to\ninsert_data_gen.sh & generate the csv file.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 2 Sep 2020 11:09:55 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": ">On Wed, Sep 2, 2020 at 3:40 PM vignesh C <vignesh21@gmail.com> wrote:\n> I have attached the scripts that I used for the test results I\n> mentioned in my previous mail. create.sql file has the table that I\n> used, insert_data_gen.txt has the insert data generation scripts. I\n> varied the count in insert_data_gen to generate csv files of 1GB, 2GB\n> & 5GB & varied the data to generate 1 char, 10 char & 100 char for\n> each column for various testing. You can rename insert_data_gen.txt to\n> insert_data_gen.sh & generate the csv file.\n\n\nHi Vignesh,\n\nI used your script and table definition, multiplying the number of\nrecords to produce a 5GB and 9.5GB CSV file.\nI got the following results:\n\n\n(1) Postgres default settings, 5GB CSV (530000 rows):\n\nCopy Type Duration (s) Load factor\n===============================================\nNormal Copy 132.197 -\n\nParallel Copy\n(#workers)\n1 98.428 1.34\n2 52.753 2.51\n3 37.630 3.51\n4 33.554 3.94\n5 33.636 3.93\n6 33.821 3.91\n7 34.270 3.86\n8 34.465 3.84\n9 34.315 3.85\n10 33.543 3.94\n\n\n(2) Postgres increased resources, 5GB CSV (530000 rows):\n\nshared_buffers = 20% of RAM (total RAM = 376GB) = 76GB\nwork_mem = 10% of RAM = 38GB\nmaintenance_work_mem = 10% of RAM = 38GB\nmax_worker_processes = 16\nmax_parallel_workers = 16\ncheckpoint_timeout = 30min\nmax_wal_size=2GB\n\n\nCopy Type Duration (s) Load factor\n===============================================\nNormal Copy 131.835 -\n\nParallel Copy\n(#workers)\n1 98.301 1.34\n2 53.261 2.48\n3 37.868 3.48\n4 34.224 3.85\n5 33.831 3.90\n6 34.229 3.85\n7 34.512 3.82\n8 34.303 3.84\n9 34.690 3.80\n10 34.479 3.82\n\n\n\n(3) Postgres default settings, 
9.5GB CSV (1000000 rows):\n\nCopy Type Duration (s) Load factor\n===============================================\nNormal Copy 248.503 -\n\nParallel Copy\n(#workers)\n1 185.724 1.34\n2 99.832 2.49\n3 70.560 3.52\n4 63.328 3.92\n5 63.182 3.93\n6 64.108 3.88\n7 64.131 3.87\n8 64.350 3.86\n9 64.293 3.87\n10 63.818 3.89\n\n\n(4) Postgres increased resources, 9.5GB CSV (1000000 rows):\n\nshared_buffers = 20% of RAM (total RAM = 376GB) = 76GB\nwork_mem = 10% of RAM = 38GB\nmaintenance_work_mem = 10% of RAM = 38GB\nmax_worker_processes = 16\nmax_parallel_workers = 16\ncheckpoint_timeout = 30min\nmax_wal_size=2GB\n\n\nCopy Type Duration (s) Load factor\n===============================================\nNormal Copy 248.647 -\n\nParallel Copy\n(#workers)\n1 182.236 1.36\n2 92.814 2.68\n3 67.347 3.69\n4 63.839 3.89\n5 62.672 3.97\n6 63.873 3.89\n7 64.930 3.83\n8 63.885 3.89\n9 62.397 3.98\n10 64.477 3.86\n\n\n\nSo as you found, with this particular table definition and data, 1\nparallel worker always performs better than normal copy.\nThe different result obtained for this particular case seems to be\ncaused by the following factors:\n- different table definition (I used a variety of column types)\n- amount of data per row (I used less data per row, so more rows per\nsame size data file)\n\nAs I previously observed, if the target table has no indexes,\nincreasing resources beyond the default settings makes little\ndifference to the performance.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 3 Sep 2020 16:50:31 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Sep 1, 2020 at 3:39 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> Hi Vignesh,\n>\n> >Can you share with me the script you used to generate the data & the ddl of the table, so that it will help me check that >scenario you faced the >problem.\n>\n> Unfortunately I can't directly share it 
(considered company IP),\n> though having said that it's only doing something that is relatively\n> simple and unremarkable, so I'd expect it to be much like what you are\n> currently doing. I can describe it in general.\n>\n> The table being used contains 100 columns (as I pointed out earlier),\n> with the first column of \"bigserial\" type, and the others of different\n> types like \"character varying(255)\", \"numeric\", \"date\" and \"time\n> without timezone\". There's about 60 of the \"character varying(255)\"\n> overall, with the other types interspersed.\n>\n\nThanks Greg for executing & sharing the results.\nI tried with a similar test case that you suggested, I was not able to\nreproduce the degradation scenario.\nIf it is possible, can you run perf for the scenario with 1 worker &\nnon parallel mode & share the perf results, we will be able to find\nout which of the functions is consuming more time by doing a\ncomparison of the perf reports.\nSteps for running perf:\n1) get the postgres pid\n2) perf record -a -g -p <above pid>\n3) Run copy command\n4) Execute \"perf report -g\" once copy finishes.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Sep 2020 16:30:18 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Sep 11, 2020 at 3:49 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> I couldn't use the original machine from which I obtained the previous\n> results, but ended up using a 4-core CentOS7 VM, which showed a\n> similar pattern in the performance results for this test case.\n> I obtained the following results from loading a 2GB CSV file (1000000\n> rows, 4 indexes):\n>\n> Copy Type Duration (s) Load factor\n> ===============================================\n> Normal Copy 190.891 -\n>\n> Parallel Copy\n> (#workers)\n> 1 210.947 0.90\n>\nHi Greg,\n\nI tried to recreate the test case(attached) and I didn't find 
much\ndifference with the custom postgresql.config file.\nTest case: 250000 tuples, 4 indexes(composite indexes with 10\ncolumns), 3.7GB, 100 columns(as suggested by you and all the\nvarchar(255) columns are having 255 characters), exec time in sec.\n\nWith custom postgresql.conf[1], removed and recreated the data\ndirectory after every run(I couldn't perform the OS page cache flush\ndue to some reasons. So, chose this recreation of data dir way, for\ntesting purpose):\n HEAD: 129.547, 128.624, 128.890\n Patch: 0 workers - 130.213, 131.298, 130.555\n Patch: 1 worker - 127.757, 125.560, 128.275\n\nWith default postgresql.conf, removed and recreated the data directory\nafter every run:\n HEAD: 138.276, 150.472, 153.304\n Patch: 0 workers - 162.468, 149.423, 159.137\n Patch: 1 worker - 136.055, 144.250, 137.916\n\nFew questions:\n 1. Was the run performed with default postgresql.conf file? If not,\nwhat are the changed configurations?\n 2. Are the readings for normal copy(190.891sec, mentioned by you\nabove) taken on HEAD or with patch, 0 workers? How much is the runtime\nwith your test case on HEAD(Without patch) and 0 workers(With patch)?\n 3. Was the run performed on release build?\n 4. Were the readings taken on multiple runs(say 3 or 4 times)?\n\n[1] - Postgres configuration used for above testing:\nshared_buffers = 40GB\nmax_worker_processes = 32\nmax_parallel_maintenance_workers = 24\nmax_parallel_workers = 32\nsynchronous_commit = off\ncheckpoint_timeout = 1d\nmax_wal_size = 24GB\nmin_wal_size = 15GB\nautovacuum = off\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 15 Sep 2020 19:19:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi Bharath,\n\nOn Tue, Sep 15, 2020 at 11:49 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Few questions:\n> 1. 
Was the run performed with default postgresql.conf file? If not,\n> what are the changed configurations?\nYes, just default settings.\n\n> 2. Are the readings for normal copy(190.891sec, mentioned by you\n> above) taken on HEAD or with patch, 0 workers?\nWith patch\n\n>How much is the runtime\n> with your test case on HEAD(Without patch) and 0 workers(With patch)?\nTBH, I didn't test that. Looking at the changes, I wouldn't expect a\ndegradation of performance for normal copy (you have tested, right?).\n\n> 3. Was the run performed on release build?\nFor generating the perf data I sent (normal copy vs parallel copy with\n1 worker), I used a debug build (-g -O0), as that is needed for\ngenerating all the relevant perf data for Postgres code. Previously I\nran with a release build (-O2).\n\n> 4. Were the readings taken on multiple runs(say 3 or 4 times)?\nThe readings I sent were from just one run (not averaged), but I did\nrun the tests several times to verify the readings were representative\nof the pattern I was seeing.\n\n\nFortunately I have been given permission to share the exact table\ndefinition and data I used, so you can check the behaviour and timings\non your own test machine.\nPlease see the attachment.\nYou can create the table using the table.sql and index_4.sql\ndefinitions in the \"sql\" directory.\nThe data.csv file (to be loaded by COPY) can be created with the\nincluded \"dupdata\" tool in the \"input\" directory, which you need to\nbuild, then run, specifying a suitable number of records and path of\nthe template record (see README). 
Obviously the larger the number of\nrecords, the larger the file ...\nThe table can then be loaded using COPY with \"format csv\" (and\n\"parallel N\" if testing parallel copy).\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Wed, 16 Sep 2020 17:50:03 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi Vignesh,\n\nI've spent some time today looking at your new set of patches and I've\nsome thoughts and queries which I would like to put here:\n\nWhy are these not part of the shared cstate structure?\n\n SerializeString(pcxt, PARALLEL_COPY_KEY_NULL_PRINT, cstate->null_print);\n SerializeString(pcxt, PARALLEL_COPY_KEY_DELIM, cstate->delim);\n SerializeString(pcxt, PARALLEL_COPY_KEY_QUOTE, cstate->quote);\n SerializeString(pcxt, PARALLEL_COPY_KEY_ESCAPE, cstate->escape);\n\nI think in the refactoring patch we could replace all the cstate\nvariables that would be shared between the leader and workers with a\ncommon structure which would be used even for a serial copy. Thoughts?\n\n--\n\nHave you tested your patch when encoding conversion is needed? If so,\ncould you please point out the email that has the test results.\n\n--\n\nApart from above, I've noticed some cosmetic errors which I am sharing here:\n\n+#define IsParallelCopy() (cstate->is_parallel)\n+#define IsLeader() (cstate->pcdata->is_leader)\n\nThis doesn't look to be properly aligned.\n\n--\n\n+ shared_info_ptr = (ParallelCopyShmInfo *)\nshm_toc_allocate(pcxt->toc, sizeof(ParallelCopyShmInfo));\n+ PopulateParallelCopyShmInfo(shared_info_ptr, full_transaction_id);\n\n..\n\n+ /* Store shared build state, for which we reserved space. */\n+ shared_cstate = (SerializedParallelCopyState\n*)shm_toc_allocate(pcxt->toc, est_cstateshared);\n\nIn the first case, while typecasting you've added a space between the\ntypename and the function but that is missing in the second case. 
I\nthink it would be good if you could make it consistent.\n\nSame comment applies here as well:\n\n+ pg_atomic_uint32 line_state; /* line state */\n+ uint64 cur_lineno; /* line number for error messages */\n+}ParallelCopyLineBoundary;\n\n...\n\n+ CommandId mycid; /* command id */\n+ ParallelCopyLineBoundaries line_boundaries; /* line array */\n+} ParallelCopyShmInfo;\n\nThere is no space between the closing brace and the structure name in\nthe first case but it is in the second one. So, again this doesn't\nlook consistent.\n\nI could also find this type of inconsistency in comments. See below:\n\n+/* It can hold upto 10000 record information for worker to process. RINGSIZE\n+ * should be a multiple of WORKER_CHUNK_COUNT, as wrap around cases\nis currently\n+ * not handled while selecting the WORKER_CHUNK_COUNT by the worker. */\n+#define RINGSIZE (10 * 1000)\n\n...\n\n+/*\n+ * Each worker will be allocated WORKER_CHUNK_COUNT of records from DSM data\n+ * block to process to avoid lock contention. Read RINGSIZE comments before\n+ * changing this value.\n+ */\n+#define WORKER_CHUNK_COUNT 50\n\nYou may see these kinds of errors at other places as well if you scan\nthrough your patch.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Wed, Aug 19, 2020 at 11:51 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks Greg for reviewing the patch. Please find my thoughts for your comments.\n>\n> On Mon, Aug 17, 2020 at 9:44 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > Some further comments:\n> >\n> > (1) v3-0002-Framework-for-leader-worker-in-parallel-copy.patch\n> >\n> > +/*\n> > + * Each worker will be allocated WORKER_CHUNK_COUNT of records from DSM data\n> > + * block to process to avoid lock contention. 
This value should be divisible by\n> > + * RINGSIZE, as wrap around cases is currently not handled while selecting the\n> > + * WORKER_CHUNK_COUNT by the worker.\n> > + */\n> > +#define WORKER_CHUNK_COUNT 50\n> >\n> >\n> > \"This value should be divisible by RINGSIZE\" is not a correct\n> > statement (since obviously 50 is not divisible by 10000).\n> > It should say something like \"This value should evenly divide into\n> > RINGSIZE\", or \"RINGSIZE should be a multiple of WORKER_CHUNK_COUNT\".\n> >\n>\n> Fixed. Changed it to RINGSIZE should be a multiple of WORKER_CHUNK_COUNT.\n>\n> > (2) v3-0003-Allow-copy-from-command-to-process-data-from-file.patch\n> >\n> > (i)\n> >\n> > + /*\n> > + * If the data is present in current block\n> > lineInfo. line_size\n> > + * will be updated. If the data is spread\n> > across the blocks either\n> >\n> > Somehow a space has been put between \"lineinfo.\" and \"line_size\".\n> > It should be: \"If the data is present in current block\n> > lineInfo.line_size will be updated\"\n>\n> Fixed, changed it to lineinfo->line_size.\n>\n> >\n> > (ii)\n> >\n> > >This is not possible because of pg_atomic_compare_exchange_u32, this\n> > >will succeed only for one of the workers whose line_state is\n> > >LINE_LEADER_POPULATED, for other workers it will fail. 
This is\n> > >explained in detail above ParallelCopyLineBoundary.\n> >\n> > Yes, but prior to that call to pg_atomic_compare_exchange_u32(),\n> > aren't you separately reading line_state and line_state, so that\n> > between those reads, it may have transitioned from leader to another\n> > worker, such that the read line state (\"cur_line_state\", being checked\n> > in the if block) may not actually match what is now in the line_state\n> > and/or the read line_size (\"dataSize\") doesn't actually correspond to\n> > the read line state?\n> >\n> > (sorry, still not 100% convinced that the synchronization and checks\n> > are safe in all cases)\n> >\n>\n> I think that you are describing about the problem could happen in the\n> following case:\n> when we read curr_line_state, the value was LINE_WORKER_PROCESSED or\n> LINE_WORKER_PROCESSING. Then in some cases if the leader is very fast\n> compared to the workers then the leader quickly populates one line and\n> sets the state to LINE_LEADER_POPULATED. State is changed to\n> LINE_LEADER_POPULATED when we are checking the currr_line_state.\n> I feel this will not be a problem because, Leader will populate & wait\n> till some RING element is available to populate. In the meantime\n> worker has seen that state is LINE_WORKER_PROCESSED or\n> LINE_WORKER_PROCESSING(previous state that it read), worker has\n> identified that this chunk was processed by some other worker, worker\n> will move and try to get the next available chunk & insert those\n> records. It will keep continuing till it gets the next chunk to\n> process. Eventually one of the workers will get this chunk and process\n> it.\n>\n> > (3) v3-0006-Parallel-Copy-For-Binary-Format-Files.patch\n> >\n> > >raw_buf is not used in parallel copy, instead raw_buf will be pointing\n> > >to shared memory data blocks. 
This memory was allocated as part of\n> > >BeginCopyFrom, uptil this point we cannot be 100% sure as copy can be\n> > >performed sequentially like in case max_worker_processes is not\n> > >available, if it switches to sequential mode raw_buf will be used\n> > >while performing copy operation. At this place we can safely free this\n> > >memory that was allocated\n> >\n> > So the following code (which checks raw_buf, which still points to\n> > memory that has been pfreed) is still valid?\n> >\n> > In the SetRawBufForLoad() function, which is called by CopyReadLineText():\n> >\n> > cur_data_blk_ptr = (cstate->raw_buf) ?\n> > &pcshared_info->data_blocks[cur_block_pos] : NULL;\n> >\n> > The above code looks a bit dicey to me. I stepped over that line in\n> > the debugger when I debugged an instance of Parallel Copy, so it\n> > definitely gets executed.\n> > It makes me wonder what other code could possibly be checking raw_buf\n> > and using it in some way, when in fact what it points to has been\n> > pfreed.\n> >\n> > Are you able to add the following line of code, or will it (somehow)\n> > break logic that you are relying on?\n> >\n> > pfree(cstate->raw_buf);\n> > cstate->raw_buf = NULL; <=== I suggest that this line is added\n> >\n>\n> You are right, I have debugged & verified it sets it to an invalid\n> block which is not expected. There are chances this would have caused\n> some corruption in some machines. The suggested fix is required, I\n> have fixed it. 
I have moved this change to\n> 0003-Allow-copy-from-command-to-process-data-from-file.patch as\n> 0006-Parallel-Copy-For-Binary-Format-Files is only for Binary format\n> parallel copy & that change is common change for parallel copy.\n>\n> I have attached new set of patches with the fixes.\n> Thoughts?\n>\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Sep 2020 18:35:56 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Sep 16, 2020 at 1:20 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> Fortunately I have been given permission to share the exact table\n> definition and data I used, so you can check the behaviour and timings\n> on your own test machine.\n>\n\nThanks Greg for the script. I ran your test case and I didn't observe\nany increase in exec time with 1 worker, indeed, we have benefitted a\nfew seconds from 0 to 1 worker as expected.\n\nExecution time is in seconds. Each test case is executed 3 times on\nrelease build. Each time the data directory is recreated.\n\nCase 1: 1000000 rows, 2GB\nWith Patch, default configuration, 0 worker: 88.933, 92.261, 88.423\nWith Patch, default configuration, 1 worker: 73.825, 74.583, 72.678\n\nWith Patch, custom configuration, 0 worker: 76.191, 78.160, 78.822\nWith Patch, custom configuration, 1 worker: 61.289, 61.288, 60.573\n\nCase 2: 2550000 rows, 5GB\nWith Patch, default configuration, 0 worker: 246.031, 188.323, 216.683\nWith Patch, default configuration, 1 worker: 156.299, 153.293, 170.307\n\nWith Patch, custom configuration, 0 worker: 197.234, 195.866, 196.049\nWith Patch, custom configuration, 1 worker: 157.173, 158.287, 157.090\n\n[1] - Custom configuration is set up to ensure that no other processes\ninfluence the results. 
The postgresql.conf used:\nshared_buffers = 40GB\nsynchronous_commit = off\ncheckpoint_timeout = 1d\nmax_wal_size = 24GB\nmin_wal_size = 15GB\nautovacuum = off\nmax_worker_processes = 32\nmax_parallel_maintenance_workers = 24\nmax_parallel_workers = 32\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Sep 2020 11:06:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Thanks Ashutosh for your comments.\n\nOn Wed, Sep 16, 2020 at 6:36 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi Vignesh,\n>\n> I've spent some time today looking at your new set of patches and I've\n> some thoughts and queries which I would like to put here:\n>\n> Why are these not part of the shared cstate structure?\n>\n> SerializeString(pcxt, PARALLEL_COPY_KEY_NULL_PRINT, cstate->null_print);\n> SerializeString(pcxt, PARALLEL_COPY_KEY_DELIM, cstate->delim);\n> SerializeString(pcxt, PARALLEL_COPY_KEY_QUOTE, cstate->quote);\n> SerializeString(pcxt, PARALLEL_COPY_KEY_ESCAPE, cstate->escape);\n>\n\nI have used shared_cstate mainly to share the integer & bool data\ntypes from the leader to worker process. The above data types are of\nchar* data type, I will not be able to use it like how I could do it\nfor integer type. So I preferred to send these as separate keys to the\nworker. Thoughts?\n\n> I think in the refactoring patch we could replace all the cstate\n> variables that would be shared between the leader and workers with a\n> common structure which would be used even for a serial copy. Thoughts?\n>\n\nCurrently we are using shared_cstate only to share integer & bool data\ntypes from leader to worker. Once worker retrieves the shared data for\ninteger & bool data types, worker will copy it to cstate. 
I preferred\nthis way because only for integer & bool we retrieve to shared_cstate\n& copy it to cstate and for rest of the members any way we are\ndirectly copying back to cstate. Thoughts?\n\n> Have you tested your patch when encoding conversion is needed? If so,\n> could you please point out the email that has the test results.\n>\n\nWe have not yet done encoding testing, we will do and post the results\nseparately in the coming days.\n\n> Apart from above, I've noticed some cosmetic errors which I am sharing here:\n>\n> +#define IsParallelCopy() (cstate->is_parallel)\n> +#define IsLeader() (cstate->pcdata->is_leader)\n>\n> This doesn't look to be properly aligned.\n>\n\nFixed.\n\n> + shared_info_ptr = (ParallelCopyShmInfo *)\n> shm_toc_allocate(pcxt->toc, sizeof(ParallelCopyShmInfo));\n> + PopulateParallelCopyShmInfo(shared_info_ptr, full_transaction_id);\n>\n> ..\n>\n> + /* Store shared build state, for which we reserved space. */\n> + shared_cstate = (SerializedParallelCopyState\n> *)shm_toc_allocate(pcxt->toc, est_cstateshared);\n>\n> In the first case, while typecasting you've added a space between the\n> typename and the function but that is missing in the second case. I\n> think it would be good if you could make it consistent.\n>\n\nFixed\n\n> Same comment applies here as well:\n>\n> + pg_atomic_uint32 line_state; /* line state */\n> + uint64 cur_lineno; /* line number for error messages */\n> +}ParallelCopyLineBoundary;\n>\n> ...\n>\n> + CommandId mycid; /* command id */\n> + ParallelCopyLineBoundaries line_boundaries; /* line array */\n> +} ParallelCopyShmInfo;\n>\n> There is no space between the closing brace and the structure name in\n> the first case but it is in the second one. So, again this doesn't\n> look consistent.\n>\n\nFixed\n\n> I could also find this type of inconsistency in comments. See below:\n>\n> +/* It can hold upto 10000 record information for worker to process.
RINGSIZE\n> + * should be a multiple of WORKER_CHUNK_COUNT, as wrap around cases\nis currently\n> + * not handled while selecting the WORKER_CHUNK_COUNT by the worker. */\n> +#define RINGSIZE (10 * 1000)\n>\n> ...\n>\n> +/*\n> + * Each worker will be allocated WORKER_CHUNK_COUNT of records from DSM data\n> + * block to process to avoid lock contention. Read RINGSIZE comments before\n> + * changing this value.\n> + */\n> +#define WORKER_CHUNK_COUNT 50\n>\n> You may see these kinds of errors at other places as well if you scan\n> through your patch.\n\nFixed.\n\nPlease find the attached v5 patch which has the fixes for the same.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 22 Sep 2020 14:44:21 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Sep 17, 2020 at 11:06 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Sep 16, 2020 at 1:20 PM Greg Nancarrow <gregn4422@gmail.com>\nwrote:\n> >\n> > Fortunately I have been given permission to share the exact table\n> > definition and data I used, so you can check the behaviour and timings\n> > on your own test machine.\n> >\n>\n> Thanks Greg for the script. I ran your test case and I didn't observe\n> any increase in exec time with 1 worker, indeed, we have benefitted a\n> few seconds from 0 to 1 worker as expected.\n>\n> Execution time is in seconds. Each test case is executed 3 times on\n> release build.
Each time the data directory is recreated.\n>\n> Case 1: 1000000 rows, 2GB\n> With Patch, default configuration, 0 worker: 88.933, 92.261, 88.423\n> With Patch, default configuration, 1 worker: 73.825, 74.583, 72.678\n>\n> With Patch, custom configuration, 0 worker: 76.191, 78.160, 78.822\n> With Patch, custom configuration, 1 worker: 61.289, 61.288, 60.573\n>\n> Case 2: 2550000 rows, 5GB\n> With Patch, default configuration, 0 worker: 246.031, 188.323, 216.683\n> With Patch, default configuration, 1 worker: 156.299, 153.293, 170.307\n>\n> With Patch, custom configuration, 0 worker: 197.234, 195.866, 196.049\n> With Patch, custom configuration, 1 worker: 157.173, 158.287, 157.090\n>\n\nHi Greg,\n\nIf you still observe the issue in your testing environment, I'm attaching a\ntesting patch(that applies on top of the latest parallel copy patch set\ni.e. v5 1 to 6) to capture various timings such as total copy time in\nleader and worker, index and table insertion time, leader and worker\nwaiting time. These logs are shown in the server log file.\n\nFew things to follow before testing:\n1. Is the table being dropped/truncated after the test with 0 workers and\nbefore running with 1 worker? If not, then the index insertion time would\nincrease.[1](for me it is increasing by 10 sec). This is obvious because\nthe 1st time index will be created from bottom up manner(from leaves to\nroot), but for the 2nd time it has to search and insert at the proper\nleaves and inner B+Tree nodes.\n2. If possible, can you also run with custom postgresql.conf settings[2]\nalong with default? Just to ensure that other bg processes such as\ncheckpointer, autovacuum, bgwriter etc. don't affect our testcase. For\ninstance, with default postgresql.conf file, it looks like checkpointing[3]\nis happening frequently, could you please let us know if that happens at\nyour end?\n3. Could you please run the test case 3 times at least? Just to ensure the\nconsistency of the issue.\n4.
I ran the tests in a performance test system where no other user\nprocesses(except system processes) are running. Is it possible for you to\ndo the same?\n\nPlease capture and share the timing logs with us.\n\nHere's a snapshot of how the added timings show up in the logs: ( I\ncaptured this with your test case case 1: 1000000 rows, 2GB, custom\npostgresql.conf file settings[2]).\nwith 0 workers:\n2020-09-22 10:49:27.508 BST [163910] LOG: totaltableinsertiontime =\n24072.034 ms\n2020-09-22 10:49:27.508 BST [163910] LOG: totalindexinsertiontime = 60.682\nms\n2020-09-22 10:49:27.508 BST [163910] LOG: totalcopytime = 59664.594 ms\n\nwith 1 worker:\n2020-09-22 10:53:58.409 BST [163947] LOG: totalcopyworkerwaitingtime =\n59.815 ms\n2020-09-22 10:53:58.409 BST [163947] LOG: totaltableinsertiontime =\n23585.881 ms\n2020-09-22 10:53:58.409 BST [163947] LOG: totalindexinsertiontime = 30.946\nms\n2020-09-22 10:53:58.409 BST [163947] LOG: totalcopytimeworker = 47047.956\nms\n2020-09-22 10:53:58.429 BST [163946] LOG: totalcopyleaderwaitingtime =\n26746.744 ms\n2020-09-22 10:53:58.429 BST [163946] LOG: totalcopytime = 47150.002 ms\n\n[1]\n0 worker:\nLOG: totaltableinsertiontime = 25491.881 ms\nLOG: totalindexinsertiontime = 14136.104 ms\nLOG: totalcopytime = 75606.858 ms\ntable is not dropped and so are indexes\n1 worker:\nLOG: totalcopyworkerwaitingtime = 64.582 ms\nLOG: totaltableinsertiontime = 21360.875 ms\nLOG: totalindexinsertiontime = 24843.570 ms\nLOG: totalcopytimeworker = 69837.162 ms\nLOG: totalcopyleaderwaitingtime = 49548.441 ms\nLOG: totalcopytime = 69997.778 ms\n\n[2]\ncustom postgresql.conf configuration:\nshared_buffers = 40GB\nmax_worker_processes = 32\nmax_parallel_maintenance_workers = 24\nmax_parallel_workers = 32\nsynchronous_commit = off\ncheckpoint_timeout = 1d\nmax_wal_size = 24GB\nmin_wal_size = 15GB\nautovacuum = off\n\n[3]\nLOG: checkpoints are occurring too frequently (14 seconds apart)\nHINT: Consider increasing the configuration parameter
\"max_wal_size\".\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 22 Sep 2020 16:08:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi Bharath,\n\n> Few things to follow before testing:\n> 1. Is the table being dropped/truncated after the test with 0 workers and before running with 1 worker? If not, then the index insertion time would increase.[1](for me it is increasing by 10 sec). This is obvious because the 1st time index will be created from bottom up manner(from leaves to root), but for the 2nd time it has to search and insert at the proper leaves and inner B+Tree nodes.\n\nYes, it's being truncated before running each and every COPY.\n\n> 2. If possible, can you also run with custom postgresql.conf settings[2] along with default? Just to ensure that other bg processes such as checkpointer, autovacuum, bgwriter etc. don't affect our testcase. For instance, with default postgresql.conf file, it looks like checkpointing[3] is happening frequently, could you please let us know if that happens at your end?\n\nYes, have run with default and your custom settings. With default\nsettings, I can confirm that checkpointing is happening frequently\nwith the tests I've run here.\n\n> 3. Could you please run the test case 3 times at least? Just to ensure the consistency of the issue.\n\nYes, have run 4 times. Seems to be a performance hit (whether normal\ncopy or parallel-1 copy) on the first COPY run on a freshly created\ndatabase. After that, results are consistent.\n\n> 4. I ran the tests in a performance test system where no other user processes(except system processes) are running.
Is it possible for you to do the same?\n>\n> Please capture and share the timing logs with us.\n>\n\nYes, I have ensured the system is as idle as possible prior to testing.\n\nI have attached the test results obtained after building with your\nParallel Copy patch and testing patch applied (HEAD at\n733fa9aa51c526582f100aa0d375e0eb9a6bce8b).\n\nTest results show that Parallel COPY with 1 worker is performing\nbetter than normal COPY in the test scenarios run. There is a\nperformance hit (regardless of COPY type) on the very first COPY run\non a freshly-created database.\n\nI ran the test case 4 times, and also in reverse order, with truncate\nrun before each COPY (output and logs named xxxx_0_1 run normal COPY\nthen parallel COPY, and named xxxx_1_0 run parallel COPY and then\nnormal COPY).\n\nPlease refer to attached results.\n\nRegards,\nGreg", "msg_date": "Thu, 24 Sep 2020 12:56:09 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Thanks Greg for the testing.\n\nOn Thu, Sep 24, 2020 at 8:27 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> > 3. Could you please run the test case 3 times at least? Just to ensure\nthe consistency of the issue.\n>\n> Yes, have run 4 times. Seems to be a performance hit (whether normal\n> copy or parallel-1 copy) on the first COPY run on a freshly created\n> database. After that, results are consistent.\n>\n\n From the logs, I see that it is happening only with default\npostgresql.conf, and there's inconsistency in table insertion times,\nespecially from the 1st time to 2nd time. Also, the table insertion time\nvariation is more. This is expected with the default postgresql.conf,\nbecause of the background processes interference.
That's the reason we\nusually run with custom configuration to correctly measure the performance\ngain.\n\nbr_default_0_1.log:\n2020-09-23 22:32:36.944 JST [112616] LOG: totaltableinsertiontime =\n155068.244 ms\n2020-09-23 22:33:57.615 JST [11426] LOG: totaltableinsertiontime =\n42096.275 ms\n2020-09-23 22:37:39.192 JST [43097] LOG: totaltableinsertiontime =\n29135.262 ms\n2020-09-23 22:38:56.389 JST [54205] LOG: totaltableinsertiontime =\n38953.912 ms\n2020-09-23 22:40:27.573 JST [66485] LOG: totaltableinsertiontime =\n27895.326 ms\n2020-09-23 22:41:34.948 JST [77523] LOG: totaltableinsertiontime =\n28929.642 ms\n2020-09-23 22:43:18.938 JST [89857] LOG: totaltableinsertiontime =\n30625.015 ms\n2020-09-23 22:44:21.938 JST [101372] LOG: totaltableinsertiontime =\n24624.045 ms\n\nbr_default_1_0.log:\n2020-09-24 11:12:14.989 JST [56146] LOG: totaltableinsertiontime =\n192068.350 ms\n2020-09-24 11:13:38.228 JST [88455] LOG: totaltableinsertiontime =\n30999.942 ms\n2020-09-24 11:15:50.381 JST [108935] LOG: totaltableinsertiontime =\n31673.204 ms\n2020-09-24 11:17:14.260 JST [118541] LOG: totaltableinsertiontime =\n31367.027 ms\n2020-09-24 11:20:18.975 JST [17270] LOG: totaltableinsertiontime =\n26858.924 ms\n2020-09-24 11:22:17.822 JST [26852] LOG: totaltableinsertiontime =\n66531.442 ms\n2020-09-24 11:24:09.221 JST [47971] LOG: totaltableinsertiontime =\n38943.384 ms\n2020-09-24 11:25:30.955 JST [58849] LOG: totaltableinsertiontime =\n28286.634 ms\n\nbr_custom_0_1.log:\n2020-09-24 10:29:44.956 JST [110477] LOG: totaltableinsertiontime =\n20207.928 ms\n2020-09-24 10:30:49.570 JST [120568] LOG: totaltableinsertiontime =\n23360.006 ms\n2020-09-24 10:32:31.659 JST [2753] LOG: totaltableinsertiontime =\n19837.588 ms\n2020-09-24 10:35:49.245 JST [31118] LOG: totaltableinsertiontime =\n21759.253 ms\n2020-09-24 10:36:54.834 JST [41763] LOG: totaltableinsertiontime =\n23547.323 ms\n2020-09-24 10:38:53.507 JST [56779] LOG: totaltableinsertiontime =\n21543.984 ms\n2020-09-24
10:39:58.713 JST [67489] LOG: totaltableinsertiontime =\n25254.563 ms\n\nbr_custom_1_0.log:\n2020-09-24 10:49:03.242 JST [15308] LOG: totaltableinsertiontime =\n16541.201 ms\n2020-09-24 10:50:11.848 JST [23324] LOG: totaltableinsertiontime =\n15076.577 ms\n2020-09-24 10:51:24.497 JST [35394] LOG: totaltableinsertiontime =\n16400.777 ms\n2020-09-24 10:52:32.354 JST [42953] LOG: totaltableinsertiontime =\n15591.051 ms\n2020-09-24 10:54:30.327 JST [61136] LOG: totaltableinsertiontime =\n16700.954 ms\n2020-09-24 10:55:38.377 JST [68719] LOG: totaltableinsertiontime =\n15435.150 ms\n2020-09-24 10:57:08.927 JST [83335] LOG: totaltableinsertiontime =\n17133.251 ms\n2020-09-24 10:58:17.420 JST [90905] LOG: totaltableinsertiontime =\n15352.753 ms\n\n>\n> Test results show that Parallel COPY with 1 worker is performing\n> better than normal COPY in the test scenarios run.\n>\n\nGood to know :)\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com
", "msg_date": "Thu, 24 Sep 2020 12:34:41 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": ">\n> > Have you tested your patch when encoding conversion is needed? If so,\n> > could you please point out the email that has the test results.\n> >\n>\n> We have not yet done encoding testing, we will do and post the results\n> separately in the coming days.\n>\n\nHi Ashutosh,\n\nI ran the tests ensuring pg_server_to_any() gets called from copy.c. I\nspecified the encoding option of COPY command, with client and server\nencodings being UTF-8.\n\nTests are performed with custom postgresql.conf[1], 10million rows, 5.2GB\ndata.
The results are of the triplet form (exec time in sec, number of\nworkers, gain)\n\nUse case 1: 2 indexes on integer columns, 1 index on text column\n(1174.395, 0, 1X), (1127.792, 1, 1.04X), (644.260, 2, 1.82X), (341.284, 4,\n3.43X), (204.423, 8, 5.74X), (140.692, 16, 8.34X), (129.843, 20, 9.04X),\n(134.511, 30, 8.72X)\n\nUse case 2: 1 gist index on text column\n(811.412, 0, 1X), (772.203, 1, 1.05X), (437.364, 2, 1.85X), (263.575, 4,\n3.08X), (175.135, 8, 4.63X), (155.355, 16, 5.22X), (178.704, 20, 4.54X),\n(199.402, 30, 4.06)\n\nUse case 3: 3 indexes on integer columns\n(220.680, 0, 1X), (185.096, 1, 1.19X), (134.811, 2, 1.64X), (114.585, 4,\n1.92X), (107.707, 8, 2.05X), (101.253, 16, 2.18X), (100.749, 20, 2.19X),\n(100.656, 30, 2.19X)\n\nThe results are similar to our earlier runs[2].\n\n[1]\nshared_buffers = 40GB\nmax_worker_processes = 32\nmax_parallel_maintenance_workers = 24\nmax_parallel_workers = 32\nsynchronous_commit = off\ncheckpoint_timeout = 1d\nmax_wal_size = 24GB\nmin_wal_size = 15GB\nautovacuum = off\n\n[2]\nhttps://www.postgresql.org/message-id/CALDaNm13zK%3DJXfZWqZJsm3%2B2yagYDJc%3DeJBgE4i77-4PPNj7vw%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com
", "msg_date": "Thu, 24 Sep 2020 15:00:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Sep 24, 2020 at 3:00 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> >\n> > > Have you tested your patch when encoding conversion is needed? If so,\n> > > could you please point out the email that has the test results.\n> > >\n> >\n> > We have not yet done encoding testing, we will do and post the results\n> > separately in the coming days.\n> >\n>\n> Hi Ashutosh,\n>\n> I ran the tests ensuring pg_server_to_any() gets called from copy.c. I specified the encoding option of COPY command, with client and server encodings being UTF-8.\n>\n\nThanks Bharath for the testing.
The results look impressive.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Sep 2020 19:08:00 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Jul 22, 2020 at 7:48 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Jul 21, 2020 at 3:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > Review comments:\n> > ===================\n> >\n> > 0001-Copy-code-readjustment-to-support-parallel-copy\n> > 1.\n> > @@ -807,8 +835,11 @@ CopyLoadRawBuf(CopyState cstate)\n> > else\n> > nbytes = 0; /* no data need be saved */\n> >\n> > + if (cstate->copy_dest == COPY_NEW_FE)\n> > + minread = RAW_BUF_SIZE - nbytes;\n> > +\n> > inbytes = CopyGetData(cstate, cstate->raw_buf + nbytes,\n> > - 1, RAW_BUF_SIZE - nbytes);\n> > + minread, RAW_BUF_SIZE - nbytes);\n> >\n> > No comment to explain why this change is done?\n> >\n> > 0002-Framework-for-leader-worker-in-parallel-copy\n>\n> Currently CopyGetData copies a lesser amount of data to buffer even though space is available in buffer because minread was passed as 1 to CopyGetData. Because of this there are frequent call to CopyGetData for fetching the data. In this case it will load only some data due to the below check:\n> while (maxread > 0 && bytesread < minread && !cstate->reached_eof)\n> After reading some data bytesread will be greater than minread which is passed as 1 and return with lesser amount of data, even though there is some space.\n> This change is required for parallel copy feature as each time we get a new DSM data block which is of 64K size and copy the data. If we copy less data into DSM data blocks we might end up consuming all the DSM data blocks.\n>\n\nWhy can't we reuse the DSM block which has unfilled space?\n\n> I felt this issue can be fixed as part of HEAD. Have posted a separate thread [1] for this.
I'm planning to remove that change once it gets committed. Can that go as a separate\n> patch or should we include it here?\n> [1] - https://www.postgresql.org/message-id/CALDaNm0v4CjmvSnftYnx_9pOS_dKRG%3DO3NnBgJsQmi0KipvLog%40mail.gmail.com\n>\n\nI am convinced by the reason given by Kyotaro-San in that another\nthread [1] and performance data shown by Peter that this can't be an\nindependent improvement and rather in some cases it can do harm. Now,\nif you need it for a parallel-copy path then we can change it\nspecifically to the parallel-copy code path but I don't understand\nyour reason completely.\n\n> > 2.\n..\n> > + */\n> > +typedef struct ParallelCopyLineBoundary\n> >\n> > Are we doing all this state management to avoid using locks while\n> > processing lines? If so, I think we can use either spinlock or LWLock\n> > to keep the main patch simple and then provide a later patch to make\n> > it lock-less. This will allow us to first focus on the main design of\n> > the patch rather than trying to make this datastructure processing\n> > lock-less in the best possible way.\n> >\n>\n> The steps will be more or less same if we use spinlock too. step 1, step 3 & step 4 will be common we have to use lock & unlock instead of step 2 & step 5. I feel we can retain the current implementation.\n>\n\nI'll study this in detail and let you know my opinion on the same but\nin the meantime, I don't follow one part of this comment: \"If they\ndon't follow this order the worker might process wrong line_size and\nleader might populate the information which worker has not yet\nprocessed or in the process of processing.\"\n\nDo you want to say that leader might overwrite some information which\nworker hasn't read yet?
If so, it is not clear from the comment.\nAnother minor point about this comment:\n\n+ * ParallelCopyLineBoundary is common data structure between leader & worker,\n+ * Leader process will be populating data block, data block offset &\nthe size of\n\nI think there should be a full-stop after worker instead of a comma.\n\n>\n> > 6.\n> > In function BeginParallelCopy(), you need to keep a provision to\n> > collect wal_usage and buf_usage stats. See _bt_begin_parallel for\n> > reference. Those will be required for pg_stat_statements.\n> >\n>\n> Fixed\n>\n\nHow did you ensure that this is fixed? Have you tested it, if so\nplease share the test? I see a basic problem with your fix.\n\n+ /* Report WAL/buffer usage during parallel execution */\n+ bufferusage = shm_toc_lookup(toc, PARALLEL_COPY_BUFFER_USAGE, false);\n+ walusage = shm_toc_lookup(toc, PARALLEL_COPY_WAL_USAGE, false);\n+ InstrEndParallelQuery(&bufferusage[ParallelWorkerNumber],\n+ &walusage[ParallelWorkerNumber]);\n\nYou need to call InstrStartParallelQuery() before the actual operation\nstarts, without that stats won't be accurate? Also, after calling\nWaitForParallelWorkersToFinish(), you need to accumulate the stats\ncollected from workers which neither you have done nor is possible\nwith the current code in your patch because you haven't made any\nprovision to capture them in BeginParallelCopy.\n\nI suggest you look into lazy_parallel_vacuum_indexes() and\nbegin_parallel_vacuum() to understand how the buffer/wal usage stats\nare accumulated. Also, please test this functionality using\npg_stat_statements.\n\n>\n> > 0003-Allow-copy-from-command-to-process-data-from-file-ST\n> > 10.\n> > In the commit message, you have written \"The leader does not\n> > participate in the insertion of data, leaders only responsibility will\n> > be to identify the lines as fast as possible for the workers to do the\n> > actual copy operation.
The leader waits till all the lines populated\n> > are processed by the workers and exits.\"\n> >\n> > I think you should also mention that we have chosen this design based\n> > on the reason \"that everything stalls if the leader doesn't accept\n> > further input data, as well as when there are no available splitted\n> > chunks so it doesn't seem like a good idea to have the leader do other\n> > work. This is backed by the performance data where we have seen that\n> > with 1 worker there is just a 5-10% (or whatever percentage difference\n> > you have seen) performance difference)\".\n>\n> Fixed.\n>\n\nMake it a one-paragraph starting from \"The leader does not participate\nin the insertion of data .... just a 5-10% performance difference\".\nRight now both the parts look a bit disconnected.\n\nFew additional comments:\n======================\nv5-0001-Copy-code-readjustment-to-support-parallel-copy\n---------------------------------------------------------------------------------\n1.\n+/*\n+ * CLEAR_EOL_LINE - Wrapper for clearing EOL.\n+ */\n+#define CLEAR_EOL_LINE() \\\n+if (!result && !IsHeaderLine()) \\\n+ ClearEOLFromCopiedData(cstate, cstate->line_buf.data, \\\n+ cstate->line_buf.len, \\\n+ &cstate->line_buf.len) \\\n\nI don't like this macro. I think it is sufficient to move the common\ncode to be called from the parallel and non-parallel path in\nClearEOLFromCopiedData but I think the other checks can be done\nin-place.
I think having macros for such a thing makes code less\nreadable.\n\n2.\n-\n+static void PopulateCommonCstateInfo(CopyState cstate, TupleDesc tup_desc,\n+ List *attnamelist);\n\nSpurious line removal.\n\nv5-0002-Framework-for-leader-worker-in-parallel-copy\n---------------------------------------------------------------------------\n3.\n+ FullTransactionId full_transaction_id; /* xid for copy from statement */\n+ CommandId mycid; /* command id */\n+ ParallelCopyLineBoundaries line_boundaries; /* line array */\n+} ParallelCopyShmInfo;\n\nWe already serialize FullTransactionId and CommandId via\nInitializeParallelDSM->SerializeTransactionState. Can't we reuse it? I\nthink recently Parallel Insert patch has also done something for this\n[2] so you can refer that if you want.\n\nv5-0004-Documentation-for-parallel-copy\n-----------------------------------------------------------\n1. Perform <command>COPY FROM</command> in parallel using <replaceable\n+ class=\"parameter\"> integer</replaceable> background workers.\n\nNo need for space before integer.\n\n\n[1] - https://www.postgresql.org/message-id/20200911.155804.359271394064499501.horikyota.ntt%40gmail.com\n[2] - https://www.postgresql.org/message-id/CAJcOf-fn1nhEtaU91NvRuA3EbvbJGACMd4_c%2BUu3XU5VMv37Aw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 28 Sep 2020 12:19:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Sep 22, 2020 at 2:44 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks Ashutosh for your comments.\n>\n> On Wed, Sep 16, 2020 at 6:36 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Hi Vignesh,\n> >\n> > I've spent some time today looking at your new set of patches and I've\n> > some thoughts and queries which I would like to put here:\n> >\n> > Why are these not part of the shared cstate structure?\n> >\n> > SerializeString(pcxt, PARALLEL_COPY_KEY_NULL_PRINT,
cstate->null_print);\n> > SerializeString(pcxt, PARALLEL_COPY_KEY_DELIM, cstate->delim);\n> > SerializeString(pcxt, PARALLEL_COPY_KEY_QUOTE, cstate->quote);\n> > SerializeString(pcxt, PARALLEL_COPY_KEY_ESCAPE, cstate->escape);\n> >\n>\n> I have used shared_cstate mainly to share the integer & bool data\n> types from the leader to worker process. The above data types are of\n> char* data type, I will not be able to use it like how I could do it\n> for integer type. So I preferred to send these as separate keys to the\n> worker. Thoughts?\n>\n\nI think the way you have written will work but if we go with\nAshutosh's proposal it will look elegant and in the future, if we need\nto share more strings as part of cstate structure then that would be\neasier. You can probably refer to EstimateParamListSpace,\nSerializeParamList, and RestoreParamList to see how we can share\ndifferent types of data in one key.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 28 Sep 2020 15:01:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, Sep 28, 2020 at 3:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 22, 2020 at 2:44 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks Ashutosh for your comments.\n> >\n> > On Wed, Sep 16, 2020 at 6:36 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > >\n> > > Hi Vignesh,\n> > >\n> > > I've spent some time today looking at your new set of patches and I've\n> > > some thoughts and queries which I would like to put here:\n> > >\n> > > Why are these not part of the shared cstate structure?\n> > >\n> > > SerializeString(pcxt, PARALLEL_COPY_KEY_NULL_PRINT, cstate->null_print);\n> > > SerializeString(pcxt, PARALLEL_COPY_KEY_DELIM, cstate->delim);\n> > > SerializeString(pcxt, PARALLEL_COPY_KEY_QUOTE, cstate->quote);\n> > > SerializeString(pcxt, PARALLEL_COPY_KEY_ESCAPE, cstate->escape);\n> > >\n> >\n> > I have used
shared_cstate mainly to share the integer & bool data\n> > types from the leader to worker process. The above data types are of\n> > char* data type, I will not be able to use it like how I could do it\n> > for integer type. So I preferred to send these as separate keys to the\n> > worker. Thoughts?\n> >\n>\n> I think the way you have written will work but if we go with\n> Ashutosh's proposal it will look elegant and in the future, if we need\n> to share more strings as part of cstate structure then that would be\n> easier. You can probably refer to EstimateParamListSpace,\n> SerializeParamList, and RestoreParamList to see how we can share\n> different types of data in one key.\n>\n\nYeah. And in addition to that it will also reduce the number of DSM\nkeys that we need to maintain.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Sep 2020 18:36:47 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi Vignesh and Bharath,\n\nSeems like the Parallel Copy patch is treating RI_TRIGGER_PK as\nparallel-unsafe.\nCan you explain why this is?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Tue, 29 Sep 2020 19:45:49 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, Sep 28, 2020 at 12:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Few additional comments:\n> ======================\n\nSome more comments:\n\nv5-0002-Framework-for-leader-worker-in-parallel-copy\n===========================================\n1.\nThese values\n+ * help in handover of multiple records with significant size of data to be\n+ * processed by each of the workers to make sure there is no context\nswitch & the\n+ * work is fairly distributed among the workers.\n\nHow about writing it as: \"These values help in the handover
of\nmultiple records with the significant size of data to be processed by\neach of the workers. This also ensures there is no context switch and\nthe work is fairly distributed among the workers.\"\n\n2. Can we keep WORKER_CHUNK_COUNT, MAX_BLOCKS_COUNT, and RINGSIZE as\npower-of-two? Say WORKER_CHUNK_COUNT as 64, MAX_BLOCK_COUNT as 1024,\nand accordingly choose RINGSIZE. At many places, we do that way. I\nthink it can sometimes help in faster processing due to cache size\nrequirements and in this case, I don't see a reason why we can't\nchoose these values to be power-of-two. If you agree with this change\nthen also do some performance testing after this change?\n\n3.\n+ bool curr_blk_completed;\n+ char data[DATA_BLOCK_SIZE]; /* data read from file */\n+ uint8 skip_bytes;\n+} ParallelCopyDataBlock;\n\nIs there a reason to keep skip_bytes after data? Normally the variable\nsize data is at the end of the structure. Also, there is no comment\nexplaining the purpose of skip_bytes.\n\n4.\n+ * Copy data block information.\n+ * ParallelCopyDataBlock's will be created in DSM. Data read from file will be\n+ * copied in these DSM data blocks. The leader process identifies the records\n+ * and the record information will be shared to the workers. The workers will\n+ * insert the records into the table. There can be one or more number\nof records\n+ * in each of the data block based on the record size.\n+ */\n+typedef struct ParallelCopyDataBlock\n\nKeep one empty line after the description line like below. I also\nsuggested to do a minor tweak in the above sentence which is as\nfollows:\n\n* Copy data block information.\n*\n* These data blocks are created in DSM. Data read ...\n\nTry to follow a similar format in other comments as well.\n\n5. I think it is better to move parallelism related code to a new file\n(we can name it as copyParallel.c or something like that).\n\n6.
copy.c(1648,25): warning C4133: 'function': incompatible types -\nfrom 'ParallelCopyLineState *' to 'uint32 *'\nGetting above compilation warning on Windows.\n\nv5-0003-Allow-copy-from-command-to-process-data-from-file\n==================================================\n1.\n@@ -4294,7 +5047,7 @@ BeginCopyFrom(ParseState *pstate,\n * only in text mode.\n */\n initStringInfo(&cstate->attribute_buf);\n- cstate->raw_buf = (char *) palloc(RAW_BUF_SIZE + 1);\n+ cstate->raw_buf = (IsParallelCopy()) ? NULL : (char *)\npalloc(RAW_BUF_SIZE + 1);\n\nIs there any way IsParallelCopy can be true by this time? AFAICS, we do\nanything about parallelism only after this. If you want to save this\nallocation then we need to move this after we determine that\nparallelism can be used or not and accordingly the below code in the\npatch needs to be changed.\n\n * ParallelCopyFrom - parallel copy leader's functionality.\n *\n * Leader executes the before statement for before statement trigger, if before\n@@ -1110,8 +1547,302 @@ ParallelCopyFrom(CopyState cstate)\n ParallelCopyShmInfo *pcshared_info = cstate->pcdata->pcshared_info;\n ereport(DEBUG1, (errmsg(\"Running parallel copy leader\")));\n\n+ /* raw_buf is not used in parallel copy, instead data blocks are used.*/\n+ pfree(cstate->raw_buf);\n+ cstate->raw_buf = NULL;\n\nIs there anything else also the allocation of which depends on parallelism?\n\n2.\n+static pg_attribute_always_inline bool\n+IsParallelCopyAllowed(CopyState cstate)\n+{\n+ /* Parallel copy not allowed for frontend (2.0 protocol) & binary option. */\n+ if ((cstate->copy_dest == COPY_OLD_FE) || cstate->binary)\n+ return false;\n+\n+ /* Check if copy is into foreign table or temporary table. */\n+ if (cstate->rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE ||\n+ RelationUsesLocalBuffers(cstate->rel))\n+ return false;\n+\n+ /* Check if trigger function is parallel safe.
*/\n+ if (cstate->rel->trigdesc != NULL &&\n+ !IsTriggerFunctionParallelSafe(cstate->rel->trigdesc))\n+ return false;\n+\n+ /*\n+ * Check if there is after statement or instead of trigger or transition\n+ * table triggers.\n+ */\n+ if (cstate->rel->trigdesc != NULL &&\n+ (cstate->rel->trigdesc->trig_insert_after_statement ||\n+ cstate->rel->trigdesc->trig_insert_instead_row ||\n+ cstate->rel->trigdesc->trig_insert_new_table))\n+ return false;\n+\n+ /* Check if the volatile expressions are parallel safe, if present any. */\n+ if (!CheckExprParallelSafety(cstate))\n+ return false;\n+\n+ /* Check if the insertion mode is single. */\n+ if (FindInsertMethod(cstate) == CIM_SINGLE)\n+ return false;\n+\n+ return true;\n+}\n\nIn the comments, we should write why parallelism is not allowed for a\nparticular case. The cases where parallel-unsafe clause is involved\nare okay but it is not clear from comments why it is not allowed in\nother cases.\n\n3.\n+ ParallelCopyShmInfo *pcshared_info = cstate->pcdata->pcshared_info;\n+ ParallelCopyLineBoundary *lineInfo;\n+ uint32 line_first_block = pcshared_info->cur_block_pos;\n+ line_pos = UpdateBlockInLineInfo(cstate,\n+ line_first_block,\n+ cstate->raw_buf_index, -1,\n+ LINE_LEADER_POPULATING);\n+ lineInfo = &pcshared_info->line_boundaries.ring[line_pos];\n+ elog(DEBUG1, \"[Leader] Adding - block:%d, offset:%d, line position:%d\",\n+ line_first_block, lineInfo->start_offset, line_pos);\n\nCan we take all the code here inside function UpdateBlockInLineInfo? I\nsee that it is called from one other place but I guess most of the\nsurrounding code there can also be moved inside the function. Can we\nchange the name of the function to UpdateSharedLineInfo or something\nlike that and remove inline marking from this? I am not sure we want\nto inline such big functions. If it make difference in performance\nthen we can probably consider it.\n\n4.\nEndLineParallelCopy()\n{\n..\n+ /* Update line size. 
*/\n+ pg_atomic_write_u32(&lineInfo->line_size, line_size);\n+ pg_atomic_write_u32(&lineInfo->line_state, LINE_LEADER_POPULATED);\n+ elog(DEBUG1, \"[Leader] After adding - line position:%d, line_size:%d\",\n+ line_pos, line_size);\n..\n}\n\nCan we instead call UpdateSharedLineInfo (new function name for\nUpdateBlockInLineInfo) to do this and maybe see it only updates the\nrequired info? The idea is to centralize the code for updating\nSharedLineInfo.\n\n5.\n+static uint32\n+GetLinePosition(CopyState cstate)\n+{\n+ ParallelCopyData *pcdata = cstate->pcdata;\n+ ParallelCopyShmInfo *pcshared_info = pcdata->pcshared_info;\n+ uint32 previous_pos = pcdata->worker_processed_pos;\n+ uint32 write_pos = (previous_pos == -1) ? 0 : (previous_pos + 1) % RINGSIZE;\n\nIt seems to me that each worker has to hop through all the processed\nchunks before getting the chunk which it can process. This will work\nbut I think it is better if we have some shared counter which can tell\nus the next chunk to be processed and avoid all the unnecessary work\nof hopping to find the exact position.\n\nv5-0004-Documentation-for-parallel-copy\n-----------------------------------------\n1. Can you add one or two examples towards the end of the page where\nwe have examples for other Copy options?\n\n\nPlease run pgindent on all patches as that will make the code look better.\n\n From the testing perspective,\n1. Test by having something force_parallel_mode = regress which means\nthat all existing Copy tests in the regression will be executed via\nnew worker code. You can have this as a test-only patch for now and\nmake sure all existing tests passed with this.\n2. Do we have tests for toast tables? I think if you implement the\nprevious point some existing tests might cover it but I feel we should\nhave at least one or two tests for the same.\n3. 
Have we checked the code coverage of the newly added code with\nexisting tests?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 29 Sep 2020 18:30:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Sep 29, 2020 at 6:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 28, 2020 at 12:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Few additional comments:\n> > ======================\n>\n> Some more comments:\n>\n\nThanks Amit for the comments, I will work on the comments and provide\na patch in the next few days.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Sep 2020 20:14:23 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Sep 29, 2020 at 3:16 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> Hi Vignesh and Bharath,\n>\n> Seems like the Parallel Copy patch is regarding RI_TRIGGER_PK as\n> parallel-unsafe.\n> Can you explain why this is?\n>\n\nI don't think we need to restrict this case and even if there is some\nreason to do so then probably the same should be mentioned in the\ncomments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 1 Oct 2020 12:13:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hello Vignesh,\n\nI've done some basic benchmarking on the v4 version of the patches (but\nAFAIKC the v5 should perform about the same), and some initial review.\n\nFor the benchmarking, I used the lineitem table from TPC-H - for 75GB\ndata set, this largest table is about 64GB once loaded, with another\n54GB in 5 indexes. 
This is on a server with 32 cores, 64GB of RAM and\nNVME storage.\n\nThe COPY duration with varying number of workers (specified using the\nparallel COPY option) looks like this:\n\n workers duration\n ---------------------\n 0 1366\n 1 1255\n 2 704\n 3 526\n 4 434\n 5 385\n 6 347\n 7 322\n 8 327\n\nSo this seems to work pretty well - initially we get almost linear\nspeedup, then it slows down (likely due to contention for locks, I/O\netc.). Not bad.\n\nI've only done a quick review, but overall the patch looks in fairly\ngood shape.\n\n1) I don't quite understand why we need INCREMENTPROCESSED and\nRETURNPROCESSED, considering it just does ++ or return. It just\nobfuscates the code, I think.\n\n2) I find it somewhat strange that BeginParallelCopy can just decide not\nto do parallel copy after all. Why not make this decision in the\ncaller? Or maybe it's fine this way, not sure.\n\n3) AFAIK we don't modify typedefs.list in patches, so these changes\nshould be removed. \n\n4) IsTriggerFunctionParallelSafe actually checks all triggers, not just\none, so the comment needs minor rewording.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 3 Oct 2020 02:49:59 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Sat, Oct 3, 2020 at 6:20 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> Hello Vignesh,\n>\n> I've done some basic benchmarking on the v4 version of the patches (but\n> AFAIKC the v5 should perform about the same), and some initial review.\n>\n> For the benchmarking, I used the lineitem table from TPC-H - for 75GB\n> data set, this largest table is about 64GB once loaded, with another\n> 54GB in 5 indexes. 
This is on a server with 32 cores, 64GB of RAM and\n> NVME storage.\n>\n> The COPY duration with varying number of workers (specified using the\n> parallel COPY option) looks like this:\n>\n> workers duration\n> ---------------------\n> 0 1366\n> 1 1255\n> 2 704\n> 3 526\n> 4 434\n> 5 385\n> 6 347\n> 7 322\n> 8 327\n>\n> So this seems to work pretty well - initially we get almost linear\n> speedup, then it slows down (likely due to contention for locks, I/O\n> etc.). Not bad.\n>\n\n+1. These numbers (> 4x speed up) look good to me.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 3 Oct 2020 15:45:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, Sep 28, 2020 at 12:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 22, 2020 at 7:48 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Jul 21, 2020 at 3:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > > Review comments:\n> > > ===================\n> > >\n> > > 0001-Copy-code-readjustment-to-support-parallel-copy\n> > > 1.\n> > > @@ -807,8 +835,11 @@ CopyLoadRawBuf(CopyState cstate)\n> > > else\n> > > nbytes = 0; /* no data need be saved */\n> > >\n> > > + if (cstate->copy_dest == COPY_NEW_FE)\n> > > + minread = RAW_BUF_SIZE - nbytes;\n> > > +\n> > > inbytes = CopyGetData(cstate, cstate->raw_buf + nbytes,\n> > > - 1, RAW_BUF_SIZE - nbytes);\n> > > + minread, RAW_BUF_SIZE - nbytes);\n> > >\n> > > No comment to explain why this change is done?\n> > >\n> > > 0002-Framework-for-leader-worker-in-parallel-copy\n> >\n> > Currently CopyGetData copies a lesser amount of data to buffer even though space is available in buffer because minread was passed as 1 to CopyGetData. Because of this there are frequent call to CopyGetData for fetching the data. 
In this case it will load only some data due to the below check:\n> > while (maxread > 0 && bytesread < minread && !cstate->reached_eof)\n> > After reading some data bytesread will be greater than minread which is passed as 1 and return with lesser amount of data, even though there is some space.\n> > This change is required for parallel copy feature as each time we get a new DSM data block which is of 64K size and copy the data. If we copy less data into DSM data blocks we might end up consuming all the DSM data blocks.\n> >\n>\n> Why can't we reuse the DSM block which has unfilled space?\n>\n> > I felt this issue can be fixed as part of HEAD. Have posted a separate thread [1] for this. I'm planning to remove that change once it gets committed. Can that go as a separate\n> > patch or should we include it here?\n> > [1] - https://www.postgresql.org/message-id/CALDaNm0v4CjmvSnftYnx_9pOS_dKRG%3DO3NnBgJsQmi0KipvLog%40mail.gmail.com\n> >\n>\n> I am convinced by the reason given by Kyotaro-San in that another\n> thread [1] and performance data shown by Peter that this can't be an\n> independent improvement and rather in some cases it can do harm. Now,\n> if you need it for a parallel-copy path then we can change it\n> specifically to the parallel-copy code path but I don't understand\n> your reason completely.\n>\n\nWhenever we need data to be populated, we will get a new data block &\npass it to CopyGetData to populate the data. In case of file copy, the\nserver will completely fill the data block. We expect the data to be\nfilled completely. If data is available it will completely load the\ncomplete data block in case of file copy. There is no scenario where\neven if data is present a partial data block will be returned except\nfor EOF or no data available. But in case of STDIN data copy, even\nthough there is 8K data available in data block & 8K data available in\nSTDIN, CopyGetData will return as soon as libpq buffer data is more\nthan the minread. 
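[Editorial aside: the effect of the minread bound discussed above can be sketched in isolation. This is a minimal stand-alone model, not the actual copy.c code; fake_get_some() is a hypothetical stand-in for a source such as libpq that hands over only a few buffered bytes per call.]

```c
#include <assert.h>
#include <string.h>

/* Hypothetical source: delivers at most 3 bytes per call from a
 * 10-byte stream, mimicking a partially filled libpq buffer. */
static int
fake_get_some(char *dst, int want, int *pos)
{
	const char *src = "abcdefghij";
	int			avail = (int) strlen(src) - *pos;
	int			n = (avail < 3) ? avail : 3;

	if (n > want)
		n = want;
	memcpy(dst, src + *pos, n);
	*pos += n;
	return n;
}

/* Simplified shape of the loop under discussion: keep reading until
 * at least minread bytes have arrived or the source is exhausted. */
static int
copy_get_data(char *buf, int minread, int maxread, int *pos)
{
	int			bytesread = 0;

	while (maxread > 0 && bytesread < minread)
	{
		int			n = fake_get_some(buf + bytesread, maxread, pos);

		if (n == 0)
			break;				/* reached EOF */
		bytesread += n;
		maxread -= n;
	}
	return bytesread;
}
```

[With minread = 1 the first call returns after only 3 bytes even though 8 were requested and more data was available; that is why each new 64K DSM block would end up only partially filled in the STDIN case, whereas a minread equal to the remaining block space keeps the loop going until the block is full.]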
We will pass new data block every time to load data.\nEvery time we pass an 8K data block but CopyGetData loads a few bytes\nin the new data block & returns. I wanted to keep the same data\npopulation logic for both file copy & STDIN copy i.e copy full 8K data\nblocks & then the populated data can be required. There is an\nalternative solution I can have some special handling in case of STDIN\nwherein the existing data block can be passed with the index from\nwhere the data should be copied. Thoughts?\n\n> > > 2.\n> ..\n> > > + */\n> > > +typedef struct ParallelCopyLineBoundary\n> > >\n> > > Are we doing all this state management to avoid using locks while\n> > > processing lines? If so, I think we can use either spinlock or LWLock\n> > > to keep the main patch simple and then provide a later patch to make\n> > > it lock-less. This will allow us to first focus on the main design of\n> > > the patch rather than trying to make this datastructure processing\n> > > lock-less in the best possible way.\n> > >\n> >\n> > The steps will be more or less same if we use spinlock too. step 1, step 3 & step 4 will be common we have to use lock & unlock instead of step 2 & step 5. I feel we can retain the current implementation.\n> >\n>\n> I'll study this in detail and let you know my opinion on the same but\n> in the meantime, I don't follow one part of this comment: \"If they\n> don't follow this order the worker might process wrong line_size and\n> leader might populate the information which worker has not yet\n> processed or in the process of processing.\"\n>\n> Do you want to say that leader might overwrite some information which\n> worker hasn't read yet? If so, it is not clear from the comment.\n> Another minor point about this comment:\n>\n\nHere leader and worker must follow these steps to avoid any corruption\nor hang issue. 
Changed it to:\n * The leader & worker process access the shared line information by following\n * the below steps to avoid any data corruption or hang:\n\n> + * ParallelCopyLineBoundary is common data structure between leader & worker,\n> + * Leader process will be populating data block, data block offset &\n> the size of\n>\n> I think there should be a full-stop after worker instead of a comma.\n>\n\nChanged it.\n\n> >\n> > > 6.\n> > > In function BeginParallelCopy(), you need to keep a provision to\n> > > collect wal_usage and buf_usage stats. See _bt_begin_parallel for\n> > > reference. Those will be required for pg_stat_statements.\n> > >\n> >\n> > Fixed\n> >\n>\n> How did you ensure that this is fixed? Have you tested it, if so\n> please share the test? I see a basic problem with your fix.\n>\n> + /* Report WAL/buffer usage during parallel execution */\n> + bufferusage = shm_toc_lookup(toc, PARALLEL_COPY_BUFFER_USAGE, false);\n> + walusage = shm_toc_lookup(toc, PARALLEL_COPY_WAL_USAGE, false);\n> + InstrEndParallelQuery(&bufferusage[ParallelWorkerNumber],\n> + &walusage[ParallelWorkerNumber]);\n>\n> You need to call InstrStartParallelQuery() before the actual operation\n> starts, without that stats won't be accurate? Also, after calling\n> WaitForParallelWorkersToFinish(), you need to accumulate the stats\n> collected from workers which neither you have done nor is possible\n> with the current code in your patch because you haven't made any\n> provision to capture them in BeginParallelCopy.\n>\n> I suggest you look into lazy_parallel_vacuum_indexes() and\n> begin_parallel_vacuum() to understand how the buffer/wal usage stats\n> are accumulated. 
Also, please test this functionality using\n> pg_stat_statements.\n>\n\nMade changes accordingly.\nI have verified it using:\npostgres=# select * from pg_stat_statements where query like '%copy%';\n userid | dbid | queryid |\n query\n | plans | total_plan_time |\nmin_plan_time | max_plan_time | mean_plan_time | stddev_plan_time |\ncalls | total_exec_time | min_exec_time | max_exec_time |\nmean_exec_time | stddev_exec_time | rows | shared_blks_hi\nt | shared_blks_read | shared_blks_dirtied | shared_blks_written |\nlocal_blks_hit | local_blks_read | local_blks_dirtied |\nlocal_blks_written | temp_blks_read | temp_blks_written | blk_\nread_time | blk_write_time | wal_records | wal_fpi | wal_bytes\n--------+-------+----------------------+---------------------------------------------------------------------------------------------------------------------+-------+-----------------+-\n--------------+---------------+----------------+------------------+-------+-----------------+---------------+---------------+----------------+------------------+--------+---------------\n--+------------------+---------------------+---------------------+----------------+-----------------+--------------------+--------------------+----------------+-------------------+-----\n----------+----------------+-------------+---------+-----------\n 10 | 13743 | -6947756673093447609 | copy hw from\n'/home/vignesh/postgres/postgres/inst/bin/hw_175000.csv' with(format\ncsv, delimiter ',') | 0 | 0 |\n 0 | 0 | 0 | 0 |\n 1 | 265.195105 | 265.195105 | 265.195105 | 265.195105\n| 0 | 175000 | 191\n6 | 0 | 946 | 946 |\n 0 | 0 | 0 | 0\n| 0 | 0 |\n 0 | 0 | 1116 | 0 | 3587203\n 10 | 13743 | 8570215596364326047 | copy hw from\n'/home/vignesh/postgres/postgres/inst/bin/hw_175000.csv' with(format\ncsv, delimiter ',', parallel '2') | 0 | 0 |\n 0 | 0 | 0 | 0 |\n 1 | 35668.402482 | 35668.402482 | 35668.402482 | 35668.402482\n| 0 | 175000 | 310\n1 | 36 | 952 | 919 |\n 0 | 0 | 0 | 0\n| 0 | 0 |\n 0 | 0 | 1119 | 6 | 
3624405\n(2 rows)\n\n> >\n> > > 0003-Allow-copy-from-command-to-process-data-from-file-ST\n> > > 10.\n> > > In the commit message, you have written \"The leader does not\n> > > participate in the insertion of data, leaders only responsibility will\n> > > be to identify the lines as fast as possible for the workers to do the\n> > > actual copy operation. The leader waits till all the lines populated\n> > > are processed by the workers and exits.\"\n> > >\n> > > I think you should also mention that we have chosen this design based\n> > > on the reason \"that everything stalls if the leader doesn't accept\n> > > further input data, as well as when there are no available splitted\n> > > chunks so it doesn't seem like a good idea to have the leader do other\n> > > work. This is backed by the performance data where we have seen that\n> > > with 1 worker there is just a 5-10% (or whatever percentage difference\n> > > you have seen) performance difference)\".\n> >\n> > Fixed.\n> >\n>\n> Make it a one-paragraph starting from \"The leader does not participate\n> in the insertion of data .... just a 5-10% performance difference\".\n> Right now both the parts look a bit disconnected.\n>\n\nMade the contents starting from \"The leader does not\" in a paragraph.\n\n> Few additional comments:\n> ======================\n> v5-0001-Copy-code-readjustment-to-support-parallel-copy\n> ---------------------------------------------------------------------------------\n> 1.\n> +/*\n> + * CLEAR_EOL_LINE - Wrapper for clearing EOL.\n> + */\n> +#define CLEAR_EOL_LINE() \\\n> +if (!result && !IsHeaderLine()) \\\n> + ClearEOLFromCopiedData(cstate, cstate->line_buf.data, \\\n> + cstate->line_buf.len, \\\n> + &cstate->line_buf.len) \\\n>\n> I don't like this macro. I think it is sufficient to move the common\n> code to be called from the parallel and non-parallel path in\n> ClearEOLFromCopiedData but I think the other checks can be done\n> in-place. 
I think having macros for such a thing makes code less\n> readable.\n>\n\nI have removed the macro & called ClearEOLFromCopiedData directly\nwherever required.\n\n> 2.\n> -\n> +static void PopulateCommonCstateInfo(CopyState cstate, TupleDesc tup_desc,\n> + List *attnamelist);\n>\n> Spurious line removal.\n>\n\nI have modified it to keep it as it is.\n\n> v5-0002-Framework-for-leader-worker-in-parallel-copy\n> ---------------------------------------------------------------------------\n> 3.\n> + FullTransactionId full_transaction_id; /* xid for copy from statement */\n> + CommandId mycid; /* command id */\n> + ParallelCopyLineBoundaries line_boundaries; /* line array */\n> +} ParallelCopyShmInfo;\n>\n> We already serialize FullTransactionId and CommandId via\n> InitializeParallelDSM->SerializeTransactionState. Can't we reuse it? I\n> think recently Parallel Insert patch has also done something for this\n> [2] so you can refer that if you want.\n>\n\nChanged it to remove setting of command id & full transaction id.\nAdded a function SetCurrentCommandIdUsedForWorker to set\ncurrentCommandIdUsed to true & called GetCurrentCommandId by passing\n!IsParallelCopy().\n\n> v5-0004-Documentation-for-parallel-copy\n> -----------------------------------------------------------\n> 1. 
Perform <command>COPY FROM</command> in parallel using <replaceable\n> + class=\"parameter\"> integer</replaceable> background workers.\n>\n> No need for space before integer.\n>\n\nI have removed it.\n\nAttached v6 patch with the fixes.\n\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 8 Oct 2020 00:14:00 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Sep 29, 2020 at 3:16 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> Hi Vignesh and Bharath,\n>\n> Seems like the Parallel Copy patch is regarding RI_TRIGGER_PK as\n> parallel-unsafe.\n> Can you explain why this is?\n\nYes we don't need to restrict parallelism for RI_TRIGGER_PK cases as\nwe don't do any command counter increments while performing PK checks\nas opposed to RI_TRIGGER_FK/foreign key checks. We have modified this\nin the v6 patch set.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Oct 2020 00:18:42 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, Sep 28, 2020 at 3:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 22, 2020 at 2:44 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks Ashutosh for your comments.\n> >\n> > On Wed, Sep 16, 2020 at 6:36 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > >\n> > > Hi Vignesh,\n> > >\n> > > I've spent some time today looking at your new set of patches and I've\n> > > some thoughts and queries which I would like to put here:\n> > >\n> > > Why are these not part of the shared cstate structure?\n> > >\n> > > SerializeString(pcxt, PARALLEL_COPY_KEY_NULL_PRINT, cstate->null_print);\n> > > SerializeString(pcxt, PARALLEL_COPY_KEY_DELIM, cstate->delim);\n> > > SerializeString(pcxt, PARALLEL_COPY_KEY_QUOTE, cstate->quote);\n> > > SerializeString(pcxt, 
PARALLEL_COPY_KEY_ESCAPE, cstate->escape);\n> > >\n> >\n> > I have used shared_cstate mainly to share the integer & bool data\n> > types from the leader to worker process. The above data types are of\n> > char* data type, I will not be able to use it like how I could do it\n> > for integer type. So I preferred to send these as separate keys to the\n> > worker. Thoughts?\n> >\n>\n> I think the way you have written will work but if we go with\n> Ashutosh's proposal it will look elegant and in the future, if we need\n> to share more strings as part of cstate structure then that would be\n> easier. You can probably refer to EstimateParamListSpace,\n> SerializeParamList, and RestoreParamList to see how we can share\n> different types of data in one key.\n>\n\nThanks for the solution Amit, I have fixed this and handled it in the\nv6 patch shared in my previous mail.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Oct 2020 00:26:47 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Oct 8, 2020 at 5:44 AM vignesh C <vignesh21@gmail.com> wrote:\n\n> Attached v6 patch with the fixes.\n>\n\nHi Vignesh,\n\nI noticed a couple of issues when scanning the code in the following patch:\n\n v6-0003-Allow-copy-from-command-to-process-data-from-file.patch\n\nIn the following code, it will put a junk uint16 value into *destptr\n(and thus may well cause a crash) on a Big Endian architecture\n(Solaris Sparc, s390x, etc.):\nYou're storing a (uint16) string length in a uint32 and then pulling\nout the lower two bytes of the uint32 and copying them into the\nlocation pointed to by destptr.\n\n\nstatic void\n+CopyStringToSharedMemory(CopyState cstate, char *srcPtr, char *destptr,\n+ uint32 *copiedsize)\n+{\n+ uint32 len = srcPtr ? 
strlen(srcPtr) + 1 : 0;\n+\n+ memcpy(destptr, (uint16 *) &len, sizeof(uint16));\n+ *copiedsize += sizeof(uint16);\n+ if (len)\n+ {\n+ memcpy(destptr + sizeof(uint16), srcPtr, len);\n+ *copiedsize += len;\n+ }\n+}\n\nI suggest you change the code to:\n\n uint16 len = srcPtr ? (uint16)strlen(srcPtr) + 1 : 0;\n memcpy(destptr, &len, sizeof(uint16));\n\n[I assume string length here can't ever exceed (65535 - 1), right?]\n\nLooking a bit deeper into this, I'm wondering if in fact your\nEstimateStringSize() and EstimateNodeSize() functions should be using\nBUFFERALIGN() for EACH stored string/node (rather than just calling\nshm_toc_estimate_chunk() once at the end, after the length of packed\nstrings and nodes has been estimated), to ensure alignment of start of\neach string/node. Other Postgres code appears to be aligning each\nstored chunk using shm_toc_estimate_chunk(). See the definition of\nthat macro and its current usages.\n\nThen you could safely use:\n\n uint16 len = srcPtr ? (uint16)strlen(srcPtr) + 1 : 0;\n *(uint16 *)destptr = len;\n *copiedsize += sizeof(uint16);\n if (len)\n {\n memcpy(destptr + sizeof(uint16), srcPtr, len);\n *copiedsize += len;\n }\n\nand in the CopyStringFromSharedMemory() function, then could safely use:\n\n len = *(uint16 *)srcPtr;\n\nThe compiler may be smart enough to optimize-away the memcpy() in this\ncase anyway, but there are issues in doing this for architectures that\ntake a performance hit for unaligned access, or don't support\nunaligned access.\n\nAlso, in CopyXXXXFromSharedMemory() functions, you should use palloc()\ninstead of palloc0(), as you're filling the entire palloc'd buffer\nanyway, so no need to ask for additional MemSet() of all buffer bytes\nto 0 prior to memcpy().\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 8 Oct 2020 14:12:28 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, Sep 28, 
2020 at 6:37 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> On Mon, Sep 28, 2020 at 3:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Sep 22, 2020 at 2:44 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Thanks Ashutosh for your comments.\n> > >\n> > > On Wed, Sep 16, 2020 at 6:36 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > > >\n> > > > Hi Vignesh,\n> > > >\n> > > > I've spent some time today looking at your new set of patches and I've\n> > > > some thoughts and queries which I would like to put here:\n> > > >\n> > > > Why are these not part of the shared cstate structure?\n> > > >\n> > > > SerializeString(pcxt, PARALLEL_COPY_KEY_NULL_PRINT, cstate->null_print);\n> > > > SerializeString(pcxt, PARALLEL_COPY_KEY_DELIM, cstate->delim);\n> > > > SerializeString(pcxt, PARALLEL_COPY_KEY_QUOTE, cstate->quote);\n> > > > SerializeString(pcxt, PARALLEL_COPY_KEY_ESCAPE, cstate->escape);\n> > > >\n> > >\n> > > I have used shared_cstate mainly to share the integer & bool data\n> > > types from the leader to worker process. The above data types are of\n> > > char* data type, I will not be able to use it like how I could do it\n> > > for integer type. So I preferred to send these as separate keys to the\n> > > worker. Thoughts?\n> > >\n> >\n> > I think the way you have written will work but if we go with\n> > Ashutosh's proposal it will look elegant and in the future, if we need\n> > to share more strings as part of cstate structure then that would be\n> > easier. You can probably refer to EstimateParamListSpace,\n> > SerializeParamList, and RestoreParamList to see how we can share\n> > different types of data in one key.\n> >\n>\n> Yeah. 
And in addition to that it will also reduce the number of DSM\n> keys that we need to maintain.\n>\n\nThanks Ashutosh, This is handled as part of the v6 patch set.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Oct 2020 11:15:01 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Sep 29, 2020 at 6:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 28, 2020 at 12:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Few additional comments:\n> > ======================\n>\n> Some more comments:\n>\n> v5-0002-Framework-for-leader-worker-in-parallel-copy\n> ===========================================\n> 1.\n> These values\n> + * help in handover of multiple records with significant size of data to be\n> + * processed by each of the workers to make sure there is no context\n> switch & the\n> + * work is fairly distributed among the workers.\n>\n> How about writing it as: \"These values help in the handover of\n> multiple records with the significant size of data to be processed by\n> each of the workers. This also ensures there is no context switch and\n> the work is fairly distributed among the workers.\"\n\nChanged as suggested.\n\n>\n> 2. Can we keep WORKER_CHUNK_COUNT, MAX_BLOCKS_COUNT, and RINGSIZE as\n> power-of-two? Say WORKER_CHUNK_COUNT as 64, MAX_BLOCK_COUNT as 1024,\n> and accordingly choose RINGSIZE. At many places, we do that way. I\n> think it can sometimes help in faster processing due to cache size\n> requirements and in this case, I don't see a reason why we can't\n> choose these values to be power-of-two. If you agree with this change\n> then also do some performance testing after this change?\n>\n\nModified as suggested, Have checked few performance tests & verified\nthere is no degradation. 
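[Editorial aside: one reason power-of-two sizes pay off, beyond cache-size friendliness, is that every wrap-around on the ring can be computed with a bit-mask instead of an integer modulo. The constants below are illustrative assumptions for this sketch, not necessarily the values the patch finally uses.]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative power-of-two sizes (assumed for this sketch). */
#define WORKER_CHUNK_COUNT	64
#define MAX_BLOCKS_COUNT	1024
#define RINGSIZE			(16 * 1024)

/* With RINGSIZE a power of two, "& (RINGSIZE - 1)" is equivalent to
 * "% RINGSIZE" for unsigned positions, but avoids a division. */
static uint32_t
ring_advance(uint32_t pos)
{
	return (pos + 1) & (RINGSIZE - 1);
}
```

[Nothing here is the committed implementation; it only illustrates why keeping RINGSIZE and the chunk counts powers of two makes the ring-position arithmetic cheap.]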
We will post a performance run of this\nseparately in the coming days..\n\n> 3.\n> + bool curr_blk_completed;\n> + char data[DATA_BLOCK_SIZE]; /* data read from file */\n> + uint8 skip_bytes;\n> +} ParallelCopyDataBlock;\n>\n> Is there a reason to keep skip_bytes after data? Normally the variable\n> size data is at the end of the structure. Also, there is no comment\n> explaining the purpose of skip_bytes.\n>\n\nModified as suggested and added comments.\n\n> 4.\n> + * Copy data block information.\n> + * ParallelCopyDataBlock's will be created in DSM. Data read from file will be\n> + * copied in these DSM data blocks. The leader process identifies the records\n> + * and the record information will be shared to the workers. The workers will\n> + * insert the records into the table. There can be one or more number\n> of records\n> + * in each of the data block based on the record size.\n> + */\n> +typedef struct ParallelCopyDataBlock\n>\n> Keep one empty line after the description line like below. I also\n> suggested to do a minor tweak in the above sentence which is as\n> follows:\n>\n> * Copy data block information.\n> *\n> * These data blocks are created in DSM. Data read ...\n>\n> Try to follow a similar format in other comments as well.\n>\n\nModified as suggested.\n\n> 5. I think it is better to move parallelism related code to a new file\n> (we can name it as copyParallel.c or something like that).\n>\n\nModified, added copyparallel.c file to include copy parallelism\nfunctionality & copyparallel.c file & some of the function prototype &\ndata structure were moved to copy.h header file so that it can be\nshared between copy.c & copyparallel.c\n\n> 6. 
copy.c(1648,25): warning C4133: 'function': incompatible types -\n> from 'ParallelCopyLineState *' to 'uint32 *'\n> Getting above compilation warning on Windows.\n>\n\nModified the data type.\n\n> v5-0003-Allow-copy-from-command-to-process-data-from-file\n> ==================================================\n> 1.\n> @@ -4294,7 +5047,7 @@ BeginCopyFrom(ParseState *pstate,\n> * only in text mode.\n> */\n> initStringInfo(&cstate->attribute_buf);\n> - cstate->raw_buf = (char *) palloc(RAW_BUF_SIZE + 1);\n> + cstate->raw_buf = (IsParallelCopy()) ? NULL : (char *)\n> palloc(RAW_BUF_SIZE + 1);\n>\n> Is there anyway IsParallelCopy can be true by this time? AFAICS, we do\n> anything about parallelism after this. If you want to save this\n> allocation then we need to move this after we determine that\n> parallelism can be used or not and accordingly the below code in the\n> patch needs to be changed.\n>\n> * ParallelCopyFrom - parallel copy leader's functionality.\n> *\n> * Leader executes the before statement for before statement trigger, if before\n> @@ -1110,8 +1547,302 @@ ParallelCopyFrom(CopyState cstate)\n> ParallelCopyShmInfo *pcshared_info = cstate->pcdata->pcshared_info;\n> ereport(DEBUG1, (errmsg(\"Running parallel copy leader\")));\n>\n> + /* raw_buf is not used in parallel copy, instead data blocks are used.*/\n> + pfree(cstate->raw_buf);\n> + cstate->raw_buf = NULL;\n>\n\nRemoved the palloc change, raw_buf will be allocated both for parallel\nand non parallel copy. One other solution that I thought was to move\nthe memory allocation to CopyFrom, but this solution might affect fdw\nwhere they use BeginCopyFrom, NextCopyFrom & EndCopyFrom. 
So I have\nkept the allocation as in BeginCopyFrom & freeing for parallel copy in\nParallelCopyFrom.\n\n> Is there anything else also the allocation of which depends on parallelism?\n>\n\nI felt this is the only allocated memory that sequential copy requires\nand which is not required in parallel copy.\n\n> 2.\n> +static pg_attribute_always_inline bool\n> +IsParallelCopyAllowed(CopyState cstate)\n> +{\n> + /* Parallel copy not allowed for frontend (2.0 protocol) & binary option. */\n> + if ((cstate->copy_dest == COPY_OLD_FE) || cstate->binary)\n> + return false;\n> +\n> + /* Check if copy is into foreign table or temporary table. */\n> + if (cstate->rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE ||\n> + RelationUsesLocalBuffers(cstate->rel))\n> + return false;\n> +\n> + /* Check if trigger function is parallel safe. */\n> + if (cstate->rel->trigdesc != NULL &&\n> + !IsTriggerFunctionParallelSafe(cstate->rel->trigdesc))\n> + return false;\n> +\n> + /*\n> + * Check if there is after statement or instead of trigger or transition\n> + * table triggers.\n> + */\n> + if (cstate->rel->trigdesc != NULL &&\n> + (cstate->rel->trigdesc->trig_insert_after_statement ||\n> + cstate->rel->trigdesc->trig_insert_instead_row ||\n> + cstate->rel->trigdesc->trig_insert_new_table))\n> + return false;\n> +\n> + /* Check if the volatile expressions are parallel safe, if present any. */\n> + if (!CheckExprParallelSafety(cstate))\n> + return false;\n> +\n> + /* Check if the insertion mode is single. */\n> + if (FindInsertMethod(cstate) == CIM_SINGLE)\n> + return false;\n> +\n> + return true;\n> +}\n>\n> In the comments, we should write why parallelism is not allowed for a\n> particular case. 
The cases where parallel-unsafe clause is involved\n> are okay but it is not clear from comments why it is not allowed in\n> other cases.\n>\n\nAdded comments.\n\n> 3.\n> + ParallelCopyShmInfo *pcshared_info = cstate->pcdata->pcshared_info;\n> + ParallelCopyLineBoundary *lineInfo;\n> + uint32 line_first_block = pcshared_info->cur_block_pos;\n> + line_pos = UpdateBlockInLineInfo(cstate,\n> + line_first_block,\n> + cstate->raw_buf_index, -1,\n> + LINE_LEADER_POPULATING);\n> + lineInfo = &pcshared_info->line_boundaries.ring[line_pos];\n> + elog(DEBUG1, \"[Leader] Adding - block:%d, offset:%d, line position:%d\",\n> + line_first_block, lineInfo->start_offset, line_pos);\n>\n> Can we take all the code here inside function UpdateBlockInLineInfo? I\n> see that it is called from one other place but I guess most of the\n> surrounding code there can also be moved inside the function. Can we\n> change the name of the function to UpdateSharedLineInfo or something\n> like that and remove inline marking from this? I am not sure we want\n> to inline such big functions. If it make difference in performance\n> then we can probably consider it.\n>\n\nChanged as suggested.\n\n> 4.\n> EndLineParallelCopy()\n> {\n> ..\n> + /* Update line size. */\n> + pg_atomic_write_u32(&lineInfo->line_size, line_size);\n> + pg_atomic_write_u32(&lineInfo->line_state, LINE_LEADER_POPULATED);\n> + elog(DEBUG1, \"[Leader] After adding - line position:%d, line_size:%d\",\n> + line_pos, line_size);\n> ..\n> }\n>\n> Can we instead call UpdateSharedLineInfo (new function name for\n> UpdateBlockInLineInfo) to do this and maybe see it only updates the\n> required info? 
The idea is to centralize the code for updating\n> SharedLineInfo.\n>\n\nUpdated as suggested.\n\n> 5.\n> +static uint32\n> +GetLinePosition(CopyState cstate)\n> +{\n> + ParallelCopyData *pcdata = cstate->pcdata;\n> + ParallelCopyShmInfo *pcshared_info = pcdata->pcshared_info;\n> + uint32 previous_pos = pcdata->worker_processed_pos;\n> + uint32 write_pos = (previous_pos == -1) ? 0 : (previous_pos + 1) % RINGSIZE;\n>\n> It seems to me that each worker has to hop through all the processed\n> chunks before getting the chunk which it can process. This will work\n> but I think it is better if we have some shared counter which can tell\n> us the next chunk to be processed and avoid all the unnecessary work\n> of hopping to find the exact position.\n\nI had tried to have a spin lock & try to track this position instead\nof hopping through the processed chunks. But I did not get the earlier\nperformance results, there was slight degradation:\nUse case 2: 3 indexes on integer columns\nRun on earlier patches without spinlock:\n(220.680, 0, 1X), (185.096, 1, 1.19X), (134.811, 2, 1.64X), (114.585,\n4, 1.92X), (107.707, 8, 2.05X), (101.253, 16, 2.18X), (100.749, 20,\n2.19X), (100.656, 30, 2.19X)\nRun on latest v6 patches with spinlock:\n(216.059, 0, 1X), (177.639, 1, 1.22X), (145.213, 2, 1.49X), (126.370,\n4, 1.71X), (121.013, 8, 1.78X), (102.933, 16, 2.1X), (103.000, 20,\n2.1X), (100.308, 30, 2.15X)\nI have not included these changes as there was some performance\ndegradation. I will try to come with a different solution for this and\ndiscuss in the coming days. This point is not yet handled.\n\n\n> v5-0004-Documentation-for-parallel-copy\n> -----------------------------------------\n> 1. Can you add one or two examples towards the end of the page where\n> we have examples for other Copy options?\n>\n>\n> Please run pgindent on all patches as that will make the code look better.\n\nHave run pgindent on the latest patches.\n\n> From the testing perspective,\n> 1. 
Test by having something force_parallel_mode = regress which means\n> that all existing Copy tests in the regression will be executed via\n> new worker code. You can have this as a test-only patch for now and\n> make sure all existing tests passed with this.\n> 2. Do we have tests for toast tables? I think if you implement the\n> previous point some existing tests might cover it but I feel we should\n> have at least one or two tests for the same.\n> 3. Have we checked the code coverage of the newly added code with\n> existing tests?\n\nThese will be handled in the next few days.\n\nThese changes are present as part of the v6 patch set.\n\nI'm summarizing the pending open points so that I don't miss anything:\n1) Performance test on latest patch set.\n2) Testing points suggested.\n3) Support of parallel copy for COPY_OLD_FE.\n4) Worker has to hop through all the processed chunks before getting\nthe chunk which it can process.\n5) Handling of Tomas's comments.\n6) Handling of Greg's comments.\n\nWe plan to work on this & complete in the next few days.\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Oct 2020 11:15:15 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Oct 8, 2020 at 12:14 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Sep 28, 2020 at 12:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > I am convinced by the reason given by Kyotaro-San in that another\n> > thread [1] and performance data shown by Peter that this can't be an\n> > independent improvement and rather in some cases it can do harm. Now,\n> > if you need it for a parallel-copy path then we can change it\n> > specifically to the parallel-copy code path but I don't understand\n> > your reason completely.\n> >\n>\n> Whenever we need data to be populated, we will get a new data block &\n> pass it to CopyGetData to populate the data. 
In case of file copy, the\n> server will completely fill the data block. We expect the data to be\n> filled completely. If data is available it will completely load the\n> complete data block in case of file copy. There is no scenario where\n> even if data is present a partial data block will be returned except\n> for EOF or no data available. But in case of STDIN data copy, even\n> though there is 8K data available in data block & 8K data available in\n> STDIN, CopyGetData will return as soon as libpq buffer data is more\n> than the minread. We will pass new data block every time to load data.\n> Every time we pass an 8K data block but CopyGetData loads a few bytes\n> in the new data block & returns. I wanted to keep the same data\n> population logic for both file copy & STDIN copy i.e copy full 8K data\n> blocks & then the populated data can be required. There is an\n> alternative solution I can have some special handling in case of STDIN\n> wherein the existing data block can be passed with the index from\n> where the data should be copied. Thoughts?\n>\n\nWhat you are proposing as an alternative solution, isn't that what we\nare doing without the patch? IIUC, you require this because of your\ncorresponding changes to handle COPY_NEW_FE in CopyReadLine(), is that\nright? If so, what is the difficulty in making it behave similar to\nthe non-parallel case?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 9 Oct 2020 10:42:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Oct 8, 2020 at 12:14 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Sep 28, 2020 at 12:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > + */\n> > > > +typedef struct ParallelCopyLineBoundary\n> > > >\n> > > > Are we doing all this state management to avoid using locks while\n> > > > processing lines? 
If so, I think we can use either spinlock or LWLock\n> > > > to keep the main patch simple and then provide a later patch to make\n> > > > it lock-less. This will allow us to first focus on the main design of\n> > > > the patch rather than trying to make this datastructure processing\n> > > > lock-less in the best possible way.\n> > > >\n> > >\n> > > The steps will be more or less same if we use spinlock too. step 1, step 3 & step 4 will be common we have to use lock & unlock instead of step 2 & step 5. I feel we can retain the current implementation.\n> > >\n> >\n> > I'll study this in detail and let you know my opinion on the same but\n> > in the meantime, I don't follow one part of this comment: \"If they\n> > don't follow this order the worker might process wrong line_size and\n> > leader might populate the information which worker has not yet\n> > processed or in the process of processing.\"\n> >\n> > Do you want to say that leader might overwrite some information which\n> > worker hasn't read yet? If so, it is not clear from the comment.\n> > Another minor point about this comment:\n> >\n>\n> Here leader and worker must follow these steps to avoid any corruption\n> or hang issue. Changed it to:\n> * The leader & worker process access the shared line information by following\n> * the below steps to avoid any data corruption or hang:\n>\n\nActually, I wanted more on the lines why such corruption or hang can\nhappen? It might help reviewers to understand why you have followed\nsuch a sequence.\n\n> >\n> > How did you ensure that this is fixed? Have you tested it, if so\n> > please share the test? 
I see a basic problem with your fix.\n> >\n> > + /* Report WAL/buffer usage during parallel execution */\n> > + bufferusage = shm_toc_lookup(toc, PARALLEL_COPY_BUFFER_USAGE, false);\n> > + walusage = shm_toc_lookup(toc, PARALLEL_COPY_WAL_USAGE, false);\n> > + InstrEndParallelQuery(&bufferusage[ParallelWorkerNumber],\n> > + &walusage[ParallelWorkerNumber]);\n> >\n> > You need to call InstrStartParallelQuery() before the actual operation\n> > starts, without that stats won't be accurate? Also, after calling\n> > WaitForParallelWorkersToFinish(), you need to accumulate the stats\n> > collected from workers which neither you have done nor is possible\n> > with the current code in your patch because you haven't made any\n> > provision to capture them in BeginParallelCopy.\n> >\n> > I suggest you look into lazy_parallel_vacuum_indexes() and\n> > begin_parallel_vacuum() to understand how the buffer/wal usage stats\n> > are accumulated. Also, please test this functionality using\n> > pg_stat_statements.\n> >\n>\n> Made changes accordingly.\n> I have verified it using:\n> postgres=# select * from pg_stat_statements where query like '%copy%';\n> -[ RECORD 1 ]-------+----------------------------------------------------------------------------------------------------\n> userid              | 10\n> dbid                | 13743\n> queryid             | -6947756673093447609\n> query               | copy hw from '/home/vignesh/postgres/postgres/inst/bin/hw_175000.csv' with(format csv, delimiter ',')\n> calls               | 1\n> total_exec_time     | 265.195105\n> rows                | 175000\n> shared_blks_hit     | 1916\n> shared_blks_read    | 0\n> shared_blks_dirtied | 946\n> shared_blks_written | 946\n> wal_records         | 1116\n> wal_fpi             | 0\n> wal_bytes           | 3587203\n> -[ RECORD 2 ]-------+----------------------------------------------------------------------------------------------------\n> userid              | 10\n> dbid                | 13743\n> queryid             | 8570215596364326047\n> query               | copy hw from '/home/vignesh/postgres/postgres/inst/bin/hw_175000.csv' with(format csv, delimiter ',', parallel '2')\n> calls               | 1\n> total_exec_time     | 35668.402482\n> rows                | 175000\n> shared_blks_hit     | 3101\n> shared_blks_read    | 36\n> shared_blks_dirtied | 952\n> shared_blks_written | 919\n> wal_records         | 1119\n> wal_fpi             | 6\n> wal_bytes           | 3624405\n> (2 rows; min/max/mean exec_time equal total_exec_time since calls = 1; the\n> plans, plan-time, stddev, local/temp block, and blk I/O time fields were all\n> zero in both rows)\n>\n\nI am not able to properly parse the data but if I understand correctly, the WAL\ndata for the non-parallel (1116 | 0 | 3587203) and parallel (1119\n| 6 | 3624405) cases doesn't seem to be the same. Is that\nright? If so, why?
Please ensure that no checkpoint happens for both\ncases.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 9 Oct 2020 11:01:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Oct 8, 2020 at 8:43 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Thu, Oct 8, 2020 at 5:44 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> > Attached v6 patch with the fixes.\n> >\n>\n> Hi Vignesh,\n>\n> I noticed a couple of issues when scanning the code in the following patch:\n>\n> v6-0003-Allow-copy-from-command-to-process-data-from-file.patch\n>\n> In the following code, it will put a junk uint16 value into *destptr\n> (and thus may well cause a crash) on a Big Endian architecture\n> (Solaris Sparc, s390x, etc.):\n> You're storing a (uint16) string length in a uint32 and then pulling\n> out the lower two bytes of the uint32 and copying them into the\n> location pointed to by destptr.\n>\n>\n> static void\n> +CopyStringToSharedMemory(CopyState cstate, char *srcPtr, char *destptr,\n> + uint32 *copiedsize)\n> +{\n> + uint32 len = srcPtr ? strlen(srcPtr) + 1 : 0;\n> +\n> + memcpy(destptr, (uint16 *) &len, sizeof(uint16));\n> + *copiedsize += sizeof(uint16);\n> + if (len)\n> + {\n> + memcpy(destptr + sizeof(uint16), srcPtr, len);\n> + *copiedsize += len;\n> + }\n> +}\n>\n> I suggest you change the code to:\n>\n> uint16 len = srcPtr ? (uint16)strlen(srcPtr) + 1 : 0;\n> memcpy(destptr, &len, sizeof(uint16));\n>\n> [I assume string length here can't ever exceed (65535 - 1), right?]\n>\n\nYour suggestion makes sense to me if the assumption related to string\nlength is correct. 
If we can't ensure that then we need to probably\nuse four bytes uint32 to store the length.\n\n> Looking a bit deeper into this, I'm wondering if in fact your\n> EstimateStringSize() and EstimateNodeSize() functions should be using\n> BUFFERALIGN() for EACH stored string/node (rather than just calling\n> shm_toc_estimate_chunk() once at the end, after the length of packed\n> strings and nodes has been estimated), to ensure alignment of start of\n> each string/node. Other Postgres code appears to be aligning each\n> stored chunk using shm_toc_estimate_chunk(). See the definition of\n> that macro and its current usages.\n>\n\nI am not sure if this required for the purpose of correctness. AFAIU,\nwe do store/estimate multiple parameters in same way at other places,\nsee EstimateParamListSpace and SerializeParamList. Do you have\nsomething else in mind?\n\nWhile looking at the latest code, I observed below issue in patch\nv6-0003-Allow-copy-from-command-to-process-data-from-file:\n\n+ /* Estimate the size for shared information for PARALLEL_COPY_KEY_CSTATE */\n+ est_cstateshared = MAXALIGN(sizeof(SerializedParallelCopyState));\n+ shm_toc_estimate_chunk(&pcxt->estimator, est_cstateshared);\n+ shm_toc_estimate_keys(&pcxt->estimator, 1);\n+\n+ strsize = EstimateCstateSize(pcxt, cstate, attnamelist, &whereClauseStr,\n+ &rangeTableStr, &attnameListStr,\n+ &notnullListStr, &nullListStr,\n+ &convertListStr);\n\nHere, do we need to separately estimate the size of\nSerializedParallelCopyState when it is also done in\nEstimateCstateSize?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 9 Oct 2020 12:10:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Oct 9, 2020 at 5:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Looking a bit deeper into this, I'm wondering if in fact your\n> > EstimateStringSize() and EstimateNodeSize() functions should be using\n> > 
BUFFERALIGN() for EACH stored string/node (rather than just calling\n> > shm_toc_estimate_chunk() once at the end, after the length of packed\n> > strings and nodes has been estimated), to ensure alignment of start of\n> > each string/node. Other Postgres code appears to be aligning each\n> > stored chunk using shm_toc_estimate_chunk(). See the definition of\n> > that macro and its current usages.\n> >\n>\n> I am not sure if this required for the purpose of correctness. AFAIU,\n> we do store/estimate multiple parameters in same way at other places,\n> see EstimateParamListSpace and SerializeParamList. Do you have\n> something else in mind?\n>\n\nThe point I was trying to make is that potentially more efficient code\ncan be used if the individual strings/nodes are aligned, rather than\npacked (as they are now), but as you point out, there are already\ncases (e.g. SerializeParamList) where within the separately-aligned\nchunks the data is not aligned, so maybe not a big deal. Oh well,\nwithout alignment, that means use of memcpy() cannot really be avoided\nhere for serializing/de-serializing ints etc., let's hope the compiler\noptimizes it as best it can.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Fri, 9 Oct 2020 19:06:33 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Sep 29, 2020 at 6:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> From the testing perspective,\n> 1. Test by having something force_parallel_mode = regress which means\n> that all existing Copy tests in the regression will be executed via\n> new worker code. You can have this as a test-only patch for now and\n> make sure all existing tests passed with this.\n>\n\nI don't think all the existing copy test cases(except the new test cases\nadded in the parallel copy patch set) would run inside the parallel worker\nif force_parallel_mode is on. 
This is because, the parallelism will be\npicked up for parallel copy only if parallel option is specified unlike\nparallelism for select queries.\n\nAnyways, I ran with force_parallel_mode on and regress. All copy related\ntests and make check/make check-world ran fine.\n\n>\n> 2. Do we have tests for toast tables? I think if you implement the\n> previous point some existing tests might cover it but I feel we should\n> have at least one or two tests for the same.\n>\n\nToast table use case 1: 10000 tuples, 9.6GB data, 3 indexes 2 on integer\ncolumns, 1 on text column(not the toast column), csv file, each row is >\n1320KB:\n(222.767, 0, 1X), (134.171, 1, 1.66X), (93.749, 2, 2.38X), (93.672, 4,\n2.38X), (94.827, 8, 2.35X), (93.766, 16, 2.37X), (98.153, 20, 2.27X),\n(122.721, 30, 1.81X)\n\nToast table use case 2: 100000 tuples, 96GB data, 3 indexes 2 on integer\ncolumns, 1 on text column(not the toast column), csv file, each row is >\n1320KB:\n(2255.032, 0, 1X), (1358.628, 1, 1.66X), (901.170, 2, 2.5X), (912.743, 4,\n2.47X), (988.718, 8, 2.28X), (938.000, 16, 2.4X), (997.556, 20, 2.26X),\n(1000.586, 30, 2.25X)\n\nToast table use case3: 10000 tuples, 9.6GB, no indexes, binary file, each\nrow is > 1320KB:\n(136.983, 0, 1X), (136.418, 1, 1X), (81.896, 2, 1.66X), (62.929, 4, 2.16X),\n(52.311, 8, 2.6X), (40.032, 16, 3.49X), (44.097, 20, 3.09X), (62.310, 30,\n2.18X)\n\nIn the case of a Toast table, we could achieve upto 2.5X for csv files, and\n3.5X for binary files. We are analyzing this point and will post an update\non our findings soon.\n\nWhile testing for the Toast table case with a binary file, I discovered an\nissue with the earlier v6-0006-Parallel-Copy-For-Binary-Format-Files.patch\nfrom [1], I fixed it and added the updated v6-0006 patch here. Please note\nthat I'm also attaching the 1 to 5 patches from version 6 just for\ncompletion, that have no change from what Vignesh sent earlier in [1].\n\n>\n> 3. 
Have we checked the code coverage of the newly added code with\n> existing tests?\n>\n\nSo far, we manually ensured that most of the code parts are covered(see\nbelow list of test cases). But we are also planning to do the code coverage\nusing some tool in the coming days.\n\nApart from the above tests, I also captured performance measurement on the\nlatest v6 patch set.\n\nUse case 1: 10million rows, 5.2GB data,2 indexes on integer columns, 1\nindex on text column, csv file\n(1168.484, 0, 1X), (1116.442, 1, 1.05X), (641.272, 2, 1.82X), (338.963, 4,\n3.45X), (202.914, 8, 5.76X), (139.884, 16, 8.35X), (128.955, 20, 9.06X),\n(131.898, 30, 8.86X)\n\nUse case 2: 10million rows, 5.2GB data,2 indexes on integer columns, 1\nindex on text column, binary file\n(1097.83, 0, 1X), (1095.735, 1, 1.002X), (625.610, 2, 1.75X), (319.833, 4,\n3.43X), (186.908, 8, 5.87X), (132.115, 16, 8.31X), (128.854, 20, 8.52X),\n(134.965, 30, 8.13X)\n\nUse case 2: 10million rows, 5.2GB data, 3 indexes on integer columns, csv\nfile\n(218.227, 0, 1X), (182.815, 1, 1.19X), (135.500, 2, 1.61), (113.954, 4,\n1.91X), (106.243, 8, 2.05X), (101.222, 16, 2.15X), (100.378, 20, 2.17X),\n(100.351, 30, 2.17X)\n\nAll the above tests are performed on the latest v6 patch set (attached here\nin this thread) with custom postgresql.conf[1]. The results are of the\ntriplet form (exec time in sec, number of workers, gain)\n\nOverall, we have below test cases to cover the code and for performance\nmeasurements. We plan to run these tests whenever a new set of patches is\nposted.\n\n1. csv\n2. binary\n3. force parallel mode = regress\n4. toast data csv and binary\n5. foreign key check, before row, after row, before statement, after\nstatement, instead of triggers\n6. partition case\n7. foreign partitions and partitions having trigger cases\n8. where clause having parallel unsafe and safe expression, default\nparallel unsafe and safe expression\n9. 
temp, global, local, unlogged, inherited tables cases, foreign tables\n\n[1]\nhttps://www.postgresql.org/message-id/CALDaNm29DJKy0-vozs8eeBRf2u3rbvPdZHCocrd0VjoWHS7h5A%40mail.gmail.com\n[2]\nshared_buffers = 40GB\nmax_worker_processes = 32\nmax_parallel_maintenance_workers = 24\nmax_parallel_workers = 32\nsynchronous_commit = off\ncheckpoint_timeout = 1d\nmax_wal_size = 24GB\nmin_wal_size = 15GB\nautovacuum = off\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 9 Oct 2020 14:52:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Oct 9, 2020 at 2:52 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Sep 29, 2020 at 6:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > From the testing perspective,\n> > 1. Test by having something force_parallel_mode = regress which means\n> > that all existing Copy tests in the regression will be executed via\n> > new worker code. You can have this as a test-only patch for now and\n> > make sure all existing tests passed with this.\n> >\n>\n> I don't think all the existing copy test cases(except the new test cases added in the parallel copy patch set) would run inside the parallel worker if force_parallel_mode is on. This is because, the parallelism will be picked up for parallel copy only if parallel option is specified unlike parallelism for select queries.\n>\n\nSure, you need to change the code such that when force_parallel_mode =\n'regress' is specified then it always uses one worker. This is\nprimarily for testing purposes and will help during the development of\nthis patch as it will make all exiting Copy tests to use quite a good\nportion of the parallel infrastructure.\n\n>\n> All the above tests are performed on the latest v6 patch set (attached here in this thread) with custom postgresql.conf[1]. 
The results are of the triplet form (exec time in sec, number of workers, gain)\n>\n\nOkay, so I am assuming the performance is the same as we have seen\nwith the earlier versions of patches.\n\n> Overall, we have below test cases to cover the code and for performance measurements. We plan to run these tests whenever a new set of patches is posted.\n>\n> 1. csv\n> 2. binary\n\nDon't we need the tests for plain text files as well?\n\n> 3. force parallel mode = regress\n> 4. toast data csv and binary\n> 5. foreign key check, before row, after row, before statement, after statement, instead of triggers\n> 6. partition case\n> 7. foreign partitions and partitions having trigger cases\n> 8. where clause having parallel unsafe and safe expression, default parallel unsafe and safe expression\n> 9. temp, global, local, unlogged, inherited tables cases, foreign tables\n>\n\nSounds like good coverage. So, are you doing all this testing\nmanually? How are you maintaining these tests?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 9 Oct 2020 15:26:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Oct 9, 2020 at 3:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 9, 2020 at 2:52 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Tue, Sep 29, 2020 at 6:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > From the testing perspective,\n> > > 1. Test by having something force_parallel_mode = regress which means\n> > > that all existing Copy tests in the regression will be executed via\n> > > new worker code. You can have this as a test-only patch for now and\n> > > make sure all existing tests passed with this.\n> > >\n> >\n> > I don't think all the existing copy test cases(except the new test cases added in the parallel copy patch set) would run inside the parallel worker if force_parallel_mode is on. 
This is because, the parallelism will be picked up for parallel copy only if parallel option is specified unlike parallelism for select queries.\n> >\n>\n> Sure, you need to change the code such that when force_parallel_mode =\n> 'regress' is specified then it always uses one worker. This is\n> primarily for testing purposes and will help during the development of\n> this patch as it will make all exiting Copy tests to use quite a good\n> portion of the parallel infrastructure.\n>\n\nIIUC, firstly, I will set force_parallel_mode = FORCE_PARALLEL_REGRESS\nas default value in guc.c, and then adjust the parallelism related\ncode in copy.c such that it always picks 1 worker and spawns it. This\nway, all the existing copy test cases would be run in parallel worker.\nPlease let me know if this is okay. If yes, I will do this and update\nhere.\n\n>\n> > All the above tests are performed on the latest v6 patch set (attached here in this thread) with custom postgresql.conf[1]. The results are of the triplet form (exec time in sec, number of workers, gain)\n> >\n>\n> Okay, so I am assuming the performance is the same as we have seen\n> with the earlier versions of patches.\n>\n\nYes. Most recent run on v5 patch set [1]\n\n>\n> > Overall, we have below test cases to cover the code and for performance measurements. We plan to run these tests whenever a new set of patches is posted.\n> >\n> > 1. csv\n> > 2. binary\n>\n> Don't we need the tests for plain text files as well?\n>\n\nWill add one.\n\n>\n> > 3. force parallel mode = regress\n> > 4. toast data csv and binary\n> > 5. foreign key check, before row, after row, before statement, after statement, instead of triggers\n> > 6. partition case\n> > 7. foreign partitions and partitions having trigger cases\n> > 8. where clause having parallel unsafe and safe expression, default parallel unsafe and safe expression\n> > 9. temp, global, local, unlogged, inherited tables cases, foreign tables\n> >\n>\n> Sounds like good coverage. 
So, are you doing all this testing\n> manually? How are you maintaining these tests?\n>\n\nYes, running them manually. Few of the tests(1,2,4) require huge\ndatasets for performance measurements and other test cases are to\nensure we don't choose parallelism. We will try to add test cases that\nare not meant for performance, to the patch test.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACW%3Djm5ri%2B7rXiQaFT_c5h2rVS%3DcJOQVFR5R%2Bbowt3QDkw%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 9 Oct 2020 15:50:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Oct 9, 2020 at 3:50 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Oct 9, 2020 at 3:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Oct 9, 2020 at 2:52 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Tue, Sep 29, 2020 at 6:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > From the testing perspective,\n> > > > 1. Test by having something force_parallel_mode = regress which means\n> > > > that all existing Copy tests in the regression will be executed via\n> > > > new worker code. You can have this as a test-only patch for now and\n> > > > make sure all existing tests passed with this.\n> > > >\n> > >\n> > > I don't think all the existing copy test cases(except the new test cases added in the parallel copy patch set) would run inside the parallel worker if force_parallel_mode is on. This is because, the parallelism will be picked up for parallel copy only if parallel option is specified unlike parallelism for select queries.\n> > >\n> >\n> > Sure, you need to change the code such that when force_parallel_mode =\n> > 'regress' is specified then it always uses one worker. 
This is\n> > primarily for testing purposes and will help during the development of\n> > this patch as it will make all exiting Copy tests to use quite a good\n> > portion of the parallel infrastructure.\n> >\n>\n> IIUC, firstly, I will set force_parallel_mode = FORCE_PARALLEL_REGRESS\n> as default value in guc.c,\n>\n\nNo need to set this as the default value. You can change it in\npostgresql.conf before running tests.\n\n> and then adjust the parallelism related\n> code in copy.c such that it always picks 1 worker and spawns it. This\n> way, all the existing copy test cases would be run in parallel worker.\n> Please let me know if this is okay.\n>\n\nYeah, this sounds fine.\n\n> If yes, I will do this and update\n> here.\n>\n\nOkay, thanks, but ensure the difference in test execution before and\nafter your change. After your change, all the 'copy' tests should\ninvoke the worker to perform a copy.\n\n> >\n> > > All the above tests are performed on the latest v6 patch set (attached here in this thread) with custom postgresql.conf[1]. The results are of the triplet form (exec time in sec, number of workers, gain)\n> > >\n> >\n> > Okay, so I am assuming the performance is the same as we have seen\n> > with the earlier versions of patches.\n> >\n>\n> Yes. 
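As an aside, the worker-count behaviour discussed above (always exactly one worker under force_parallel_mode = regress, otherwise the requested degree) can be sketched as a toy function. The names and the cap below are illustrative assumptions for exposition, not code from the patch:

```c
#include <assert.h>

/* Illustrative stand-ins for the GUC values being discussed. */
typedef enum
{
    FORCE_PARALLEL_OFF,
    FORCE_PARALLEL_ON,
    FORCE_PARALLEL_REGRESS
} ForceParallelMode;

/*
 * Hypothetical helper: decide how many parallel copy workers to launch.
 * Under 'regress' we always use one worker so that every COPY exercises
 * the worker code path; otherwise the PARALLEL 'n' option is honored up
 * to an assumed cap.
 */
static int
copy_choose_nworkers(ForceParallelMode mode, int requested, int max_workers)
{
    if (mode == FORCE_PARALLEL_REGRESS)
        return 1;               /* always exercise the worker path */
    if (requested > max_workers)
        requested = max_workers;
    return requested;
}
```

The point of the sketch is only that the regress override happens before any cost-based decision, so even a COPY without a PARALLEL option runs through the worker infrastructure.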
Most recent run on v5 patch set [1]\n>\n\nOkay, good to know that.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 9 Oct 2020 16:27:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Oct 9, 2020 at 12:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> While looking at the latest code, I observed below issue in patch\n> v6-0003-Allow-copy-from-command-to-process-data-from-file:\n>\n> + /* Estimate the size for shared information for PARALLEL_COPY_KEY_CSTATE */\n> + est_cstateshared = MAXALIGN(sizeof(SerializedParallelCopyState));\n> + shm_toc_estimate_chunk(&pcxt->estimator, est_cstateshared);\n> + shm_toc_estimate_keys(&pcxt->estimator, 1);\n> +\n> + strsize = EstimateCstateSize(pcxt, cstate, attnamelist, &whereClauseStr,\n> + &rangeTableStr, &attnameListStr,\n> + &notnullListStr, &nullListStr,\n> + &convertListStr);\n>\n> Here, do we need to separately estimate the size of\n> SerializedParallelCopyState when it is also done in\n> EstimateCstateSize?\n\nThis is not required, this has been removed in the attached patches.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 14 Oct 2020 15:23:57 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "I did performance testing on v7 patch set[1] with custom\npostgresql.conf[2]. 
The results are of the triplet form (exec time in\nsec, number of workers, gain)\n\nUse case 1: 10million rows, 5.2GB data, 2 indexes on integer columns,\n1 index on text column, binary file\n(1104.898, 0, 1X), (1112.221, 1, 1X), (640.236, 2, 1.72X), (335.090,\n4, 3.3X), (200.492, 8, 5.51X), (131.448, 16, 8.4X), (121.832, 20,\n9.1X), (124.287, 30, 8.9X)\n\nUse case 2: 10million rows, 5.2GB data,2 indexes on integer columns, 1\nindex on text column, copy from stdin, csv format\n(1203.282, 0, 1X), (1135.517, 1, 1.06X), (655.140, 2, 1.84X),\n(343.688, 4, 3.5X), (203.742, 8, 5.9X), (144.793, 16, 8.31X),\n(133.339, 20, 9.02X), (136.672, 30, 8.8X)\n\nUse case 3: 10million rows, 5.2GB data,2 indexes on integer columns, 1\nindex on text column, text file\n(1165.991, 0, 1X), (1128.599, 1, 1.03X), (644.793, 2, 1.81X),\n(342.813, 4, 3.4X), (204.279, 8, 5.71X), (139.986, 16, 8.33X),\n(128.259, 20, 9.1X), (132.764, 30, 8.78X)\n\nAbove results are similar to the results with earlier versions of the patch set.\n\nOn Fri, Oct 9, 2020 at 3:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Sure, you need to change the code such that when force_parallel_mode =\n> 'regress' is specified then it always uses one worker. This is\n> primarily for testing purposes and will help during the development of\n> this patch as it will make all exiting Copy tests to use quite a good\n> portion of the parallel infrastructure.\n>\n\nI performed force_parallel_mode = regress testing and found 2 issues,\nthe fixes for the same are available in v7 patch set[1].\n\n>\n> > Overall, we have below test cases to cover the code and for performance measurements. We plan to run these tests whenever a new set of patches is posted.\n> >\n> > 1. csv\n> > 2. binary\n>\n> Don't we need the tests for plain text files as well?\n>\n\nI added a text use case and above mentioned are perf results on v7 patch set[1].\n\n>\n> > 3. force parallel mode = regress\n> > 4. toast data csv and binary\n> > 5. 
foreign key check, before row, after row, before statement, after statement, instead of triggers\n> > 6. partition case\n> > 7. foreign partitions and partitions having trigger cases\n> > 8. where clause having parallel unsafe and safe expression, default parallel unsafe and safe expression\n> > 9. temp, global, local, unlogged, inherited tables cases, foreign tables\n> >\n>\n> Sounds like good coverage. So, are you doing all this testing\n> manually? How are you maintaining these tests?\n>\n\nAll test cases listed above, except for the cases that are meant to\nmeasure perf gain with huge data, are present in v7-0005 patch in v7\npatch set[1].\n\n[1] https://www.postgresql.org/message-id/CALDaNm1n1xW43neXSGs%3Dc7zt-mj%2BJHHbubWBVDYT9NfCoF8TuQ%40mail.gmail.com\n\n[2]\nshared_buffers = 40GB\nmax_worker_processes = 32\nmax_parallel_maintenance_workers = 24\nmax_parallel_workers = 32\nsynchronous_commit = off\ncheckpoint_timeout = 1d\nmax_wal_size = 24GB\nmin_wal_size = 15GB\nautovacuum = off\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 14 Oct 2020 17:05:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Oct 9, 2020 at 10:42 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 8, 2020 at 12:14 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, Sep 28, 2020 at 12:19 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n> > >\n> > >\n> > > I am convinced by the reason given by Kyotaro-San in that another\n> > > thread [1] and performance data shown by Peter that this can't be an\n> > > independent improvement and rather in some cases it can do harm. 
Now,\n> > > if you need it for a parallel-copy path then we can change it\n> > > specifically to the parallel-copy code path but I don't understand\n> > > your reason completely.\n> > >\n> >\n> > Whenever we need data to be populated, we will get a new data block &\n> > pass it to CopyGetData to populate the data. In case of file copy, the\n> > server will completely fill the data block. We expect the data to be\n> > filled completely. If data is available it will completely load the\n> > complete data block in case of file copy. There is no scenario where\n> > even if data is present a partial data block will be returned except\n> > for EOF or no data available. But in case of STDIN data copy, even\n> > though there is 8K data available in data block & 8K data available in\n> > STDIN, CopyGetData will return as soon as libpq buffer data is more\n> > than the minread. We will pass new data block every time to load data.\n> > Every time we pass an 8K data block but CopyGetData loads a few bytes\n> > in the new data block & returns. I wanted to keep the same data\n> > population logic for both file copy & STDIN copy i.e copy full 8K data\n> > blocks & then the populated data can be required. There is an\n> > alternative solution I can have some special handling in case of STDIN\n> > wherein the existing data block can be passed with the index from\n> > where the data should be copied. Thoughts?\n> >\n>\n> What you are proposing as an alternative solution, isn't that what we\n> are doing without the patch? IIUC, you require this because of your\n> corresponding changes to handle COPY_NEW_FE in CopyReadLine(), is that\n> right? 
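As an aside, the minread behaviour described above can be modelled standalone (a toy, not the real CopyGetData(); the chunked source below stands in for a libpq buffer that only has a few bytes ready per call): with minread = 1 the reader returns after the first small chunk, whereas passing the remaining block size as minread keeps reading until the block is full or the source is drained.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define RAW_BUF_SIZE 64          /* shrunk from the real 64KB for illustration */

/* A toy data source that returns at most 8 bytes per call, like a
 * network buffer that only has a little data ready at a time. */
typedef struct
{
    const char *data;
    size_t      len;
    size_t      off;
} ToySource;

static size_t
toy_read(ToySource *src, char *dst, size_t maxread)
{
    size_t      n = src->len - src->off;

    if (n > 8)
        n = 8;
    if (n > maxread)
        n = maxread;
    memcpy(dst, src->data + src->off, n);
    src->off += n;
    return n;
}

/* Keep reading until at least minread bytes are buffered (or EOF),
 * mimicking how a minread argument is treated by the copy read loop. */
static size_t
fill_block(ToySource *src, char *buf, size_t minread, size_t maxread)
{
    size_t      total = 0;

    while (total < minread)
    {
        size_t      got = toy_read(src, buf + total, maxread - total);

        if (got == 0)
            break;              /* EOF: a partial block is possible here */
        total += got;
    }
    return total;
}
```

This mirrors the trade-off in the sub-thread: a large minread fills whole blocks the way file copy does, but it also blocks until that much data has arrived, which is why the per-call behaviour matters for STDIN.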
If so, what is the difficulty in making it behave similar to\n> the non-parallel case?\n>\n\nThe alternate solution is similar to how existing copy handles STDIN\ncopies, I have made changes in the v7 patch attached in [1] to have\nparallel copy handle STDIN data similar to non parallel copy, so the\noriginal comment on why this change is required has been removed from 001\npatch:\n> > + if (cstate->copy_dest == COPY_NEW_FE)\n> > + minread = RAW_BUF_SIZE - nbytes;\n> > +\n> > inbytes = CopyGetData(cstate, cstate->raw_buf + nbytes,\n> > - 1, RAW_BUF_SIZE - nbytes);\n> > + minread, RAW_BUF_SIZE - nbytes);\n> >\n> > No comment to explain why this change is done?\n\n[1]\nhttps://www.postgresql.org/message-id/CALDaNm1n1xW43neXSGs%3Dc7zt-mj%2BJHHbubWBVDYT9NfCoF8TuQ%40mail.gmail.com\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 14 Oct 2020 18:24:39 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Oct 9, 2020 at 11:01 AM Amit Kapila 
<amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 8, 2020 at 12:14 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, Sep 28, 2020 at 12:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > + */\n> > > > > +typedef struct ParallelCopyLineBoundary\n> > > > >\n> > > > > Are we doing all this state management to avoid using locks while\n> > > > > processing lines? If so, I think we can use either spinlock or LWLock\n> > > > > to keep the main patch simple and then provide a later patch to make\n> > > > > it lock-less. This will allow us to first focus on the main design of\n> > > > > the patch rather than trying to make this datastructure processing\n> > > > > lock-less in the best possible way.\n> > > > >\n> > > >\n> > > > The steps will be more or less same if we use spinlock too. step 1, step 3 & step 4 will be common we have to use lock & unlock instead of step 2 & step 5. I feel we can retain the current implementation.\n> > > >\n> > >\n> > > I'll study this in detail and let you know my opinion on the same but\n> > > in the meantime, I don't follow one part of this comment: \"If they\n> > > don't follow this order the worker might process wrong line_size and\n> > > leader might populate the information which worker has not yet\n> > > processed or in the process of processing.\"\n> > >\n> > > Do you want to say that leader might overwrite some information which\n> > > worker hasn't read yet? If so, it is not clear from the comment.\n> > > Another minor point about this comment:\n> > >\n> >\n> > Here leader and worker must follow these steps to avoid any corruption\n> > or hang issue. Changed it to:\n> > * The leader & worker process access the shared line information by following\n> > * the below steps to avoid any data corruption or hang:\n> >\n>\n> Actually, I wanted more on the lines why such corruption or hang can\n> happen? 
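For illustration, here is a minimal single-producer/single-consumer model of the handoff in question, using C11 atomics; this is a simplification for exposition, not the patch's actual ParallelCopyLineBoundary code. The writer publishes line_state with release ordering only after filling data and line_size, and the reader checks line_state with acquire ordering first; if that order were not followed, the reader could pick up a stale line_size.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <string.h>

typedef struct
{
    char        data[64];
    int         line_size;
    atomic_int  line_state;     /* 0 = not yet populated, 1 = ready */
} LineSlot;

static LineSlot slot;
static int      consumed_len;

/* Leader side: fill data and line_size BEFORE publishing line_state.
 * The release store guarantees that a reader observing state == 1 also
 * observes the data and size written before it. */
static void
publish_line(const char *line)
{
    int         len = (int) strlen(line);

    memcpy(slot.data, line, (size_t) len);
    slot.line_size = len;
    atomic_store_explicit(&slot.line_state, 1, memory_order_release);
}

/* Worker side: spin on line_state with acquire, then read size/data. */
static void *
worker(void *arg)
{
    (void) arg;
    while (atomic_load_explicit(&slot.line_state, memory_order_acquire) == 0)
        ;                       /* busy-wait; a real implementation bounds this */
    consumed_len = slot.line_size;
    return NULL;
}

static int
run_demo(void)
{
    pthread_t   th;

    pthread_create(&th, NULL, worker, NULL);
    publish_line("42,hello,world");
    pthread_join(th, NULL);
    return consumed_len;
}
```

With the store to line_state demoted to a relaxed store, the worker could legally observe state == 1 while line_size is still stale, which is exactly the corruption scenario being asked about.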
It might help reviewers to understand why you have followed\n> such a sequence.\n\nThere are 3 variables which the leader & worker are working on:\nline_size, line_state & data. The leader checks the line_state,\npopulates the data, and then updates the line_size and line_state.\nWorkers wait for the line_state to be updated; once it is updated, the\nworker reads the data based on the line_size. If they are not\nsynchronized, a wrong line_size might be read and the worker would\nprocess a wrong amount of data - anything can happen. This is the\nusual concurrency case with readers/writers. I felt that much detail\nneed not be mentioned.\n\n> > >\n> > > How did you ensure that this is fixed? Have you tested it, if so\n> > > please share the test? I see a basic problem with your fix.\n> > >\n> > > + /* Report WAL/buffer usage during parallel execution */\n> > > + bufferusage = shm_toc_lookup(toc, PARALLEL_COPY_BUFFER_USAGE, false);\n> > > + walusage = shm_toc_lookup(toc, PARALLEL_COPY_WAL_USAGE, false);\n> > > + InstrEndParallelQuery(&bufferusage[ParallelWorkerNumber],\n> > > + &walusage[ParallelWorkerNumber]);\n> > >\n> > > You need to call InstrStartParallelQuery() before the actual operation\n> > > starts, without that stats won't be accurate? 
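The instrumentation pattern being requested can be sketched generically (simplified stand-ins for the Instrumentation API, not the actual backend code): each worker snapshots its counters at start, records only its delta into a per-worker shared slot at exit, and the leader folds those slots into its own totals only after the workers have finished.

```c
#define MAX_WORKERS 8

/* Simplified stand-in for BufferUsage/WalUsage counters. */
typedef struct
{
    long        shared_blks_hit;
    long        wal_bytes;
} Usage;

/* Shared array: one slot per worker, filled at worker exit (analogous
 * to the per-worker toc entries being discussed). */
static Usage worker_usage[MAX_WORKERS];

/* Worker end: record only the delta since the worker started, i.e. the
 * role played by the start snapshot / end call pair. */
static void
worker_end(int worker_no, Usage start, Usage now)
{
    worker_usage[worker_no].shared_blks_hit =
        now.shared_blks_hit - start.shared_blks_hit;
    worker_usage[worker_no].wal_bytes = now.wal_bytes - start.wal_bytes;
}

/* Leader side: accumulate the per-worker deltas into its own totals,
 * which must happen only after all workers have finished. */
static Usage
leader_accumulate(Usage leader_total, int nworkers)
{
    for (int i = 0; i < nworkers; i++)
    {
        leader_total.shared_blks_hit += worker_usage[i].shared_blks_hit;
        leader_total.wal_bytes += worker_usage[i].wal_bytes;
    }
    return leader_total;
}
```

Skipping the start snapshot would make each worker report its absolute counters instead of its delta, which is the accuracy problem pointed out above.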
Also, after calling\n> > > WaitForParallelWorkersToFinish(), you need to accumulate the stats\n> > > collected from workers which neither you have done nor is possible\n> > > with the current code in your patch because you haven't made any\n> > > provision to capture them in BeginParallelCopy.\n> > >\n> > > I suggest you look into lazy_parallel_vacuum_indexes() and\n> > > begin_parallel_vacuum() to understand how the buffer/wal usage stats\n> > > are accumulated. Also, please test this functionality using\n> > > pg_stat_statements.\n> > >\n> >\n> > Made changes accordingly.\n> > I have verified it using:\n> > postgres=# select * from pg_stat_statements where query like '%copy%';\n> > userid | dbid | queryid |\n> > query\n> > | plans | total_plan_time |\n> > min_plan_time | max_plan_time | mean_plan_time | stddev_plan_time |\n> > calls | total_exec_time | min_exec_time | max_exec_time |\n> > mean_exec_time | stddev_exec_time | rows | shared_blks_hi\n> > t | shared_blks_read | shared_blks_dirtied | shared_blks_written |\n> > local_blks_hit | local_blks_read | local_blks_dirtied |\n> > local_blks_written | temp_blks_read | temp_blks_written | blk_\n> > read_time | blk_write_time | wal_records | wal_fpi | wal_bytes\n> > --------+-------+----------------------+---------------------------------------------------------------------------------------------------------------------+-------+-----------------+-\n> > --------------+---------------+----------------+------------------+-------+-----------------+---------------+---------------+----------------+------------------+--------+---------------\n> > --+------------------+---------------------+---------------------+----------------+-----------------+--------------------+--------------------+----------------+-------------------+-----\n> > ----------+----------------+-------------+---------+-----------\n> > 10 | 13743 | -6947756673093447609 | copy hw from\n> > '/home/vignesh/postgres/postgres/inst/bin/hw_175000.csv' 
with(format\n> > csv, delimiter ',') | 0 | 0 |\n> > 0 | 0 | 0 | 0 |\n> > 1 | 265.195105 | 265.195105 | 265.195105 | 265.195105\n> > | 0 | 175000 | 191\n> > 6 | 0 | 946 | 946 |\n> > 0 | 0 | 0 | 0\n> > | 0 | 0 |\n> > 0 | 0 | 1116 | 0 | 3587203\n> > 10 | 13743 | 8570215596364326047 | copy hw from\n> > '/home/vignesh/postgres/postgres/inst/bin/hw_175000.csv' with(format\n> > csv, delimiter ',', parallel '2') | 0 | 0 |\n> > 0 | 0 | 0 | 0 |\n> > 1 | 35668.402482 | 35668.402482 | 35668.402482 | 35668.402482\n> > | 0 | 175000 | 310\n> > 1 | 36 | 952 | 919 |\n> > 0 | 0 | 0 | 0\n> > | 0 | 0 |\n> > 0 | 0 | 1119 | 6 | 3624405\n> > (2 rows)\n> >\n>\n> I am not able to properly parse the data but If understand the wal\n> data for non-parallel (1116 | 0 | 3587203) and parallel (1119\n> | 6 | 3624405) case doesn't seem to be the same. Is that\n> right? If so, why? Please ensure that no checkpoint happens for both\n> cases.\n>\n\nI have disabled checkpoint, the results with the checkpoint disabled\nare given below:\n | wal_records | wal_fpi | wal_bytes\nSequential Copy | 1116 | 0 | 3587669\nParallel Copy(1 worker) | 1116 | 0 | 3587669\nParallel Copy(4 worker) | 1121 | 0 | 3587668\nI noticed that for 1 worker wal_records & wal_bytes are same as\nsequential copy, but with different worker count I had noticed that\nthere is difference in wal_records & wal_bytes, I think the difference\nshould be ok because with more than 1 worker the order of records\nprocessed will be different based on which worker picks which records\nto process from input file. 
In the case of sequential copy/1 worker\nthe order in which the records will be processed is always in the same\norder hence wal_bytes are the same.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 14 Oct 2020 18:50:50 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Sat, Oct 3, 2020 at 6:20 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n>\n> Hello Vignesh,\n>\n> I've done some basic benchmarking on the v4 version of the patches (but\n> AFAIKC the v5 should perform about the same), and some initial review.\n>\n> For the benchmarking, I used the lineitem table from TPC-H - for 75GB\n> data set, this largest table is about 64GB once loaded, with another\n> 54GB in 5 indexes. This is on a server with 32 cores, 64GB of RAM and\n> NVME storage.\n>\n> The COPY duration with varying number of workers (specified using the\n> parallel COPY option) looks like this:\n>\n> workers duration\n> ---------------------\n> 0 1366\n> 1 1255\n> 2 704\n> 3 526\n> 4 434\n> 5 385\n> 6 347\n> 7 322\n> 8 327\n>\n> So this seems to work pretty well - initially we get almost linear\n> speedup, then it slows down (likely due to contention for locks, I/O\n> etc.). Not bad.\n\nThanks for testing with different workers & posting the results.\n\n> I've only done a quick review, but overall the patch looks in fairly\n> good shape.\n>\n> 1) I don't quite understand why we need INCREMENTPROCESSED and\n> RETURNPROCESSED, considering it just does ++ or return. It just\n> obfuscated the code, I think.\n>\n\nI have removed the macros.\n\n> 2) I find it somewhat strange that BeginParallelCopy can just decide not\n> to do parallel copy after all. Why not to do this decisions in the\n> caller? 
Or maybe it's fine this way, not sure.\n>\n\nI have moved the check IsParallelCopyAllowed to the caller.\n\n> 3) AFAIK we don't modify typedefs.list in patches, so these changes\n> should be removed.\n>\n\nI had seen that in many of the commits typedefs.list is getting changed,\nalso it helps in running pgindent. So I'm retaining this change.\n\n> 4) IsTriggerFunctionParallelSafe actually checks all triggers, not just\n> one, so the comment needs minor rewording.\n>\n\nModified the comments.\n\nThanks for the comments & sharing the test results Tomas, These changes are\nfixed in one of my earlier mail [1] that I sent.\n\n[1]\nhttps://www.postgresql.org/message-id/CALDaNm1n1xW43neXSGs%3Dc7zt-mj%2BJHHbubWBVDYT9NfCoF8TuQ%40mail.gmail.com\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Sat, Oct 3, 2020 at 6:20 AM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:>> Hello Vignesh,>> I've done some basic benchmarking on the v4 version of the patches (but> AFAIKC the v5 should perform about the same), and some initial review.>> For the benchmarking, I used the lineitem table from TPC-H - for 75GB> data set, this largest table is about 64GB once loaded, with another> 54GB in 5 indexes. This is on a server with 32 cores, 64GB of RAM and> NVME storage.>> The COPY duration with varying number of workers (specified using the> parallel COPY option) looks like this:>>       workers    duration>      --------------------->             0        1366>             1        1255>             2         704>             3         526>             4         434>             5         385>             6         347>             7         322>             8         327>> So this seems to work pretty well - initially we get almost linear> speedup, then it slows down (likely due to contention for locks, I/O> etc.). 
Not bad.Thanks for testing with different workers & posting the results.> I've only done a quick review, but overall the patch looks in fairly> good shape.>> 1) I don't quite understand why we need INCREMENTPROCESSED and> RETURNPROCESSED, considering it just does ++ or return. It just> obfuscated the code, I think.>I have removed the macros.> 2) I find it somewhat strange that BeginParallelCopy can just decide not> to do parallel copy after all. Why not to do this decisions in the> caller? Or maybe it's fine this way, not sure.>I have moved the check IsParallelCopyAllowed to the caller.> 3) AFAIK we don't modify typedefs.list in patches, so these changes> should be removed.>I had seen that in many of the commits typedefs.list is getting changed, also it helps in running pgindent. So I'm retaining this change.> 4) IsTriggerFunctionParallelSafe actually checks all triggers, not just> one, so the comment needs minor rewording.>Modified the comments.Thanks for the comments & sharing the test results Tomas, These changes are fixed in one of my earlier mail [1] that I sent.[1] https://www.postgresql.org/message-id/CALDaNm1n1xW43neXSGs%3Dc7zt-mj%2BJHHbubWBVDYT9NfCoF8TuQ%40mail.gmail.comRegards,VigneshEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 14 Oct 2020 18:59:48 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Oct 8, 2020 at 8:43 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Thu, Oct 8, 2020 at 5:44 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> > Attached v6 patch with the fixes.\n> >\n>\n> Hi Vignesh,\n>\n> I noticed a couple of issues when scanning the code in the following\npatch:\n>\n> v6-0003-Allow-copy-from-command-to-process-data-from-file.patch\n>\n> In the following code, it will put a junk uint16 value into *destptr\n> (and thus may well cause a crash) on a Big Endian architecture\n> (Solaris Sparc, s390x, etc.):\n> You're storing a 
(uint16) string length in a uint32 and then pulling\n> out the lower two bytes of the uint32 and copying them into the\n> location pointed to by destptr.\n>\n>\n> static void\n> +CopyStringToSharedMemory(CopyState cstate, char *srcPtr, char *destptr,\n> + uint32 *copiedsize)\n> +{\n> + uint32 len = srcPtr ? strlen(srcPtr) + 1 : 0;\n> +\n> + memcpy(destptr, (uint16 *) &len, sizeof(uint16));\n> + *copiedsize += sizeof(uint16);\n> + if (len)\n> + {\n> + memcpy(destptr + sizeof(uint16), srcPtr, len);\n> + *copiedsize += len;\n> + }\n> +}\n>\n> I suggest you change the code to:\n>\n> uint16 len = srcPtr ? (uint16)strlen(srcPtr) + 1 : 0;\n> memcpy(destptr, &len, sizeof(uint16));\n>\n> [I assume string length here can't ever exceed (65535 - 1), right?]\n>\n> Looking a bit deeper into this, I'm wondering if in fact your\n> EstimateStringSize() and EstimateNodeSize() functions should be using\n> BUFFERALIGN() for EACH stored string/node (rather than just calling\n> shm_toc_estimate_chunk() once at the end, after the length of packed\n> strings and nodes has been estimated), to ensure alignment of start of\n> each string/node. Other Postgres code appears to be aligning each\n> stored chunk using shm_toc_estimate_chunk(). See the definition of\n> that macro and its current usages.\n>\n\nI'm not handling this, this is similar to how it is handled in other places.\n\n> Then you could safely use:\n>\n> uint16 len = srcPtr ? 
(uint16)strlen(srcPtr) + 1 : 0;\n> *(uint16 *)destptr = len;\n> *copiedsize += sizeof(uint16);\n> if (len)\n> {\n> memcpy(destptr + sizeof(uint16), srcPtr, len);\n> *copiedsize += len;\n> }\n>\n> and in the CopyStringFromSharedMemory() function, then could safely use:\n>\n> len = *(uint16 *)srcPtr;\n>\n> The compiler may be smart enough to optimize-away the memcpy() in this\n> case anyway, but there are issues in doing this for architectures that\n> take a performance hit for unaligned access, or don't support\n> unaligned access.\n\nChanged it to uint32, so that there are no issues in case the length exceeds\n65535 & also to avoid problems in Big Endian architecture.\n\n> Also, in CopyXXXXFromSharedMemory() functions, you should use palloc()\n> instead of palloc0(), as you're filling the entire palloc'd buffer\n> anyway, so no need to ask for additional MemSet() of all buffer bytes\n> to 0 prior to memcpy().\n>\n\nI have changed palloc0 to palloc.\n\nThanks Greg for reviewing & providing your comments. 
These changes are\nfixed in one of my earlier mail [1] that I sent.\n[1]\nhttps://www.postgresql.org/message-id/CALDaNm1n1xW43neXSGs%3Dc7zt-mj%2BJHHbubWBVDYT9NfCoF8TuQ%40mail.gmail.com\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 14 Oct 2020 19:11:57 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Oct 14, 2020 at 6:51 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, Oct 9, 2020 at 11:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I am not able to properly parse the data but If understand the wal\n> > data for non-parallel (1116 | 0 | 3587203) and parallel (1119\n> > | 6 | 3624405) case doesn't seem to be the same. Is that\n> > right? If so, why? Please ensure that no checkpoint happens for both\n> > cases.\n> >\n>\n> I have disabled checkpoint, the results with the checkpoint disabled\n> are given below:\n> | wal_records | wal_fpi | wal_bytes\n> Sequential Copy | 1116 | 0 | 3587669\n> Parallel Copy(1 worker) | 1116 | 0 | 3587669\n> Parallel Copy(4 worker) | 1121 | 0 | 3587668\n> I noticed that for 1 worker wal_records & wal_bytes are same as\n> sequential copy, but with different worker count I had noticed that\n> there is difference in wal_records & wal_bytes, I think the difference\n> should be ok because with more than 1 worker the order of records\n> processed will be different based on which worker picks which records\n> to process from input file. In the case of sequential copy/1 worker\n> the order in which the records will be processed is always in the same\n> order hence wal_bytes are the same.\n>\n\nAre all records of the same size in your test? If so, then why the\norder should matter? Also, even the number of wal_records has\nincreased but wal_bytes are not increased, rather it is one-byte less.\nCan we identify what is going on here? I don't intend to say that it\nis a problem but we should know the reason clearly.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 15 Oct 2020 14:40:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi Vignesh,\r\n\r\nAfter having a look over the patch,\r\nI have some suggestions for \r\n0003-Allow-copy-from-command-to-process-data-from-file.patch.\r\n\r\n1.\r\n\r\n+static uint32\r\n+EstimateCstateSize(ParallelContext *pcxt, CopyState cstate, List *attnamelist,\r\n+\t\t\t\t char **whereClauseStr, char **rangeTableStr,\r\n+\t\t\t\t char **attnameListStr, char **notnullListStr,\r\n+\t\t\t\t char **nullListStr, char **convertListStr)\r\n+{\r\n+\tuint32\t\tstrsize = MAXALIGN(sizeof(SerializedParallelCopyState));\r\n+\r\n+\tstrsize += EstimateStringSize(cstate->null_print);\r\n+\tstrsize += EstimateStringSize(cstate->delim);\r\n+\tstrsize += EstimateStringSize(cstate->quote);\r\n+\tstrsize += EstimateStringSize(cstate->escape);\r\n\r\n\r\nIt use function EstimateStringSize to get the strlen of null_print, delim, quote and escape.\r\nBut the length of null_print seems has been stored in null_print_len.\r\nAnd delim/quote/escape must be 1 byte, so I think call strlen again seems unnecessary.\r\n\r\nHow about \" strsize += sizeof(uint32) + cstate->null_print_len + 1\"\r\n\r\n2.\r\n+\tstrsize += EstimateNodeSize(cstate->whereClause, whereClauseStr);\r\n\r\n+\tcopiedsize += CopyStringToSharedMemory(cstate, whereClauseStr,\r\n+\t\t\t\t\t\t\t\t\t\t shmptr + copiedsize);\r\n\r\nSome string length is counted for two times.\r\nThe ' whereClauseStr ' has call strlen in EstimateNodeSize once and call strlen in CopyStringToSharedMemory again.\r\nI don't know wheather it's worth to refacor the code to avoid duplicate strlen . what do you think ?\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\r\n\r\n\r\n\r\n\r\n\n\n", "msg_date": "Sun, 18 Oct 2020 02:17:07 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel copy" }, { "msg_contents": "On Sun, Oct 18, 2020 at 7:47 AM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>\n> Hi Vignesh,\n>\n> After having a look over the patch,\n> I have some suggestions for\n> 0003-Allow-copy-from-command-to-process-data-from-file.patch.\n>\n> 1.\n>\n> +static uint32\n> +EstimateCstateSize(ParallelContext *pcxt, CopyState cstate, List *attnamelist,\n> + char **whereClauseStr, char **rangeTableStr,\n> + char **attnameListStr, char **notnullListStr,\n> + char **nullListStr, char **convertListStr)\n> +{\n> + uint32 strsize = MAXALIGN(sizeof(SerializedParallelCopyState));\n> +\n> + strsize += EstimateStringSize(cstate->null_print);\n> + strsize += EstimateStringSize(cstate->delim);\n> + strsize += EstimateStringSize(cstate->quote);\n> + strsize += EstimateStringSize(cstate->escape);\n>\n>\n> It use function EstimateStringSize to get the strlen of null_print, delim, quote and escape.\n> But the length of null_print seems has been stored in null_print_len.\n> And delim/quote/escape must be 1 byte, so I think call strlen again seems unnecessary.\n>\n> How about \" strsize += sizeof(uint32) + cstate->null_print_len + 1\"\n>\n\n+1. This seems like a good suggestion but add comments for\ndelim/quote/escape to indicate that we are considering one-byte for\neach. I think this will obviate the need of function\nEstimateStringSize. Another thing in this regard is that we normally\nuse add_size function to compute the size but I don't see that being\nused in this and nearby computation. That helps us to detect overflow\nof addition if any.\n\nEstimateCstateSize()\n{\n..\n+\n+ strsize++;\n..\n}\n\nWhy do we need this additional one-byte increment? 
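To make the serialization and estimation points in this sub-thread concrete, here is a standalone sketch (illustrative only; the names and layout are assumptions, not the patch code): each string is stored as a uint32 length that includes the terminating NUL, a zero length encodes a NULL pointer, and the size estimate uses an overflow-checked addition in the spirit of the backend's add_size().

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Overflow-checked addition, analogous in intent to add_size(). */
static uint32_t
add_size32(uint32_t a, uint32_t b)
{
    uint32_t    r = a + b;

    if (r < a)
        abort();                /* overflow detected */
    return r;
}

/* Bytes needed to store one (possibly NULL) string. */
static uint32_t
estimate_string(const char *s)
{
    uint32_t    len = s ? (uint32_t) strlen(s) + 1 : 0;

    return add_size32((uint32_t) sizeof(uint32_t), len);
}

/* Write: uint32 length (0 means NULL) then the bytes; returns bytes copied. */
static uint32_t
store_string(const char *s, char *dest)
{
    uint32_t    len = s ? (uint32_t) strlen(s) + 1 : 0;

    memcpy(dest, &len, sizeof(uint32_t));   /* memcpy avoids unaligned stores */
    if (len)
        memcpy(dest + sizeof(uint32_t), s, len);
    return (uint32_t) sizeof(uint32_t) + len;
}

/* Read back; *out is NULL when a zero length was stored. */
static uint32_t
restore_string(const char *src, const char **out)
{
    uint32_t    len;

    memcpy(&len, src, sizeof(uint32_t));
    *out = len ? src + sizeof(uint32_t) : NULL;
    return (uint32_t) sizeof(uint32_t) + len;
}
```

Using memcpy() for the length word sidesteps the endianness and alignment hazards raised earlier in the thread, and the uint32 width removes the 65535-byte ceiling of the original uint16 scheme.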
Please ensure that no checkpoint happens for both\n> > cases.\n> >\n>\n> I have disabled checkpoint, the results with the checkpoint disabled\n> are given below:\n> | wal_records | wal_fpi | wal_bytes\n> Sequential Copy | 1116 | 0 | 3587669\n> Parallel Copy(1 worker) | 1116 | 0 | 3587669\n> Parallel Copy(4 worker) | 1121 | 0 | 3587668\n> I noticed that for 1 worker wal_records & wal_bytes are same as\n> sequential copy, but with different worker count I had noticed that\n> there is difference in wal_records & wal_bytes, I think the difference\n> should be ok because with more than 1 worker the order of records\n> processed will be different based on which worker picks which records\n> to process from input file. In the case of sequential copy/1 worker\n> the order in which the records will be processed is always in the same\n> order hence wal_bytes are the same.\n>\n\nAre all records of the same size in your test? If so, then why the\norder should matter? Also, even the number of wal_records has\nincreased but wal_bytes are not increased, rather it is one-byte less.\nCan we identify what is going on here? 
I don't intend to say that it\nis a problem but we should know the reason clearly.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 15 Oct 2020 14:40:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi Vignesh,\r\n\r\nAfter having a look over the patch,\r\nI have some suggestions for \r\n0003-Allow-copy-from-command-to-process-data-from-file.patch.\r\n\r\n1.\r\n\r\n+static uint32\r\n+EstimateCstateSize(ParallelContext *pcxt, CopyState cstate, List *attnamelist,\r\n+\t\t\t\t char **whereClauseStr, char **rangeTableStr,\r\n+\t\t\t\t char **attnameListStr, char **notnullListStr,\r\n+\t\t\t\t char **nullListStr, char **convertListStr)\r\n+{\r\n+\tuint32\t\tstrsize = MAXALIGN(sizeof(SerializedParallelCopyState));\r\n+\r\n+\tstrsize += EstimateStringSize(cstate->null_print);\r\n+\tstrsize += EstimateStringSize(cstate->delim);\r\n+\tstrsize += EstimateStringSize(cstate->quote);\r\n+\tstrsize += EstimateStringSize(cstate->escape);\r\n\r\n\r\nIt uses the function EstimateStringSize to get the strlen of null_print, delim, quote and escape.\r\nBut the length of null_print seems to have been stored in null_print_len.\r\nAnd delim/quote/escape must be 1 byte, so calling strlen again seems unnecessary.\r\n\r\nHow about \" strsize += sizeof(uint32) + cstate->null_print_len + 1\"\r\n\r\n2.\r\n+\tstrsize += EstimateNodeSize(cstate->whereClause, whereClauseStr);\r\n\r\n+\tcopiedsize += CopyStringToSharedMemory(cstate, whereClauseStr,\r\n+\t\t\t\t\t\t\t\t\t\t shmptr + copiedsize);\r\n\r\nSome string lengths are counted twice.\r\nThe 'whereClauseStr' has strlen called on it once in EstimateNodeSize and again in CopyStringToSharedMemory.\r\nI don't know whether it's worth refactoring the code to avoid the duplicate strlen.
what do you think ?\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\r\n\r\n\r\n\r\n\r\n\n\n", "msg_date": "Sun, 18 Oct 2020 02:17:07 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel copy" }, { "msg_contents": "On Sun, Oct 18, 2020 at 7:47 AM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>\n> Hi Vignesh,\n>\n> After having a look over the patch,\n> I have some suggestions for\n> 0003-Allow-copy-from-command-to-process-data-from-file.patch.\n>\n> 1.\n>\n> +static uint32\n> +EstimateCstateSize(ParallelContext *pcxt, CopyState cstate, List *attnamelist,\n> + char **whereClauseStr, char **rangeTableStr,\n> + char **attnameListStr, char **notnullListStr,\n> + char **nullListStr, char **convertListStr)\n> +{\n> + uint32 strsize = MAXALIGN(sizeof(SerializedParallelCopyState));\n> +\n> + strsize += EstimateStringSize(cstate->null_print);\n> + strsize += EstimateStringSize(cstate->delim);\n> + strsize += EstimateStringSize(cstate->quote);\n> + strsize += EstimateStringSize(cstate->escape);\n>\n>\n> It use function EstimateStringSize to get the strlen of null_print, delim, quote and escape.\n> But the length of null_print seems has been stored in null_print_len.\n> And delim/quote/escape must be 1 byte, so I think call strlen again seems unnecessary.\n>\n> How about \" strsize += sizeof(uint32) + cstate->null_print_len + 1\"\n>\n\n+1. This seems like a good suggestion but add comments for\ndelim/quote/escape to indicate that we are considering one-byte for\neach. I think this will obviate the need of function\nEstimateStringSize. Another thing in this regard is that we normally\nuse add_size function to compute the size but I don't see that being\nused in this and nearby computation. That helps us to detect overflow\nof addition if any.\n\nEstimateCstateSize()\n{\n..\n+\n+ strsize++;\n..\n}\n\nWhy do we need this additional one-byte increment? 
Does it make sense\nto add a small comment for the same?\n\n> 2.\n> + strsize += EstimateNodeSize(cstate->whereClause, whereClauseStr);\n>\n> + copiedsize += CopyStringToSharedMemory(cstate, whereClauseStr,\n> + shmptr + copiedsize);\n>\n> Some string length is counted for two times.\n> The ' whereClauseStr ' has call strlen in EstimateNodeSize once and call strlen in CopyStringToSharedMemory again.\n> I don't know wheather it's worth to refacor the code to avoid duplicate strlen . what do you think ?\n>\n\nIt doesn't seem worth to me. We probably need to use additional\nvariables to save those lengths. I think it will add more\ncode/complexity than we will save. See EstimateParamListSpace and\nSerializeParamList where we get the typeLen each time, that way code\nlooks neat to me and we are don't going to save much by not following\na similar thing here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 19 Oct 2020 14:41:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Oct 15, 2020 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 14, 2020 at 6:51 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Fri, Oct 9, 2020 at 11:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I am not able to properly parse the data but If understand the wal\n> > > data for non-parallel (1116 | 0 | 3587203) and parallel (1119\n> > > | 6 | 3624405) case doesn't seem to be the same. Is that\n> > > right? If so, why? 
Please ensure that no checkpoint happens for both\n> > > cases.\n> > >\n> >\n> > I have disabled checkpoint, the results with the checkpoint disabled\n> > are given below:\n> > | wal_records | wal_fpi | wal_bytes\n> > Sequential Copy | 1116 | 0 | 3587669\n> > Parallel Copy(1 worker) | 1116 | 0 | 3587669\n> > Parallel Copy(4 worker) | 1121 | 0 | 3587668\n> > I noticed that for 1 worker wal_records & wal_bytes are same as\n> > sequential copy, but with different worker count I had noticed that\n> > there is difference in wal_records & wal_bytes, I think the difference\n> > should be ok because with more than 1 worker the order of records\n> > processed will be different based on which worker picks which records\n> > to process from input file. In the case of sequential copy/1 worker\n> > the order in which the records will be processed is always in the same\n> > order hence wal_bytes are the same.\n> >\n>\n> Are all records of the same size in your test? If so, then why the\n> order should matter? Also, even the number of wal_records has\n> increased but wal_bytes are not increased, rather it is one-byte less.\n> Can we identify what is going on here? I don't intend to say that it\n> is a problem but we should know the reason clearly.\n\nThe earlier run that I executed was with varying record size. The\nbelow results are by modifying the records to keep it of same size:\n | wal_records | wal_fpi\n| wal_bytes\nSequential Copy | 1307 | 0 | 4198526\nParallel Copy(1 worker) | 1307 | 0 | 4198526\nParallel Copy(2 worker) | 1308 | 0 | 4198836\nParallel Copy(4 worker) | 1307 | 0 | 4199147\nParallel Copy(8 worker) | 1312 | 0 | 4199735\nParallel Copy(16 worker) | 1313 | 0 | 4200311\n\nStill I noticed that there is some difference in wal_records &\nwal_bytes. 
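As a cross-check on these counts, the multi-insert batching behind them can be modelled with a small sketch. This is only an illustration, not PostgreSQL code: it assumes a fixed capacity of 185 tuples per heap page and 1000-tuple multi-insert batches (the numbers observed in this workload), counts one MULTI_INSERT WAL record per page touched per batch, and `wal_records_for` is a made-up helper name.

```python
# Toy model of heap_multi_insert() WAL-record counts during COPY.
# Assumptions (workload-specific): 185 tuples fit on one heap page,
# and tuples are flushed in multi-insert batches of 1000.
TUPLES_PER_PAGE = 185
BATCH = 1000

def wal_records_for(total_tuples, batch=BATCH):
    """One MULTI_INSERT WAL record per page touched per batch."""
    records = 0
    free_on_page = 0            # tuple slots left on the last used page
    remaining = total_tuples
    while remaining > 0:
        chunk = min(batch, remaining)
        remaining -= chunk
        if free_on_page:
            # Top up the partially filled page left by the previous
            # batch: a plain MULTI_INSERT record into an existing page.
            used = min(chunk, free_on_page)
            records += 1
            free_on_page -= used
            chunk -= used
        # Fresh pages opened by this batch: MULTI_INSERT+INIT records.
        full, tail = divmod(chunk, TUPLES_PER_PAGE)
        records += full + (1 if tail else 0)
        if tail:
            free_on_page = TUPLES_PER_PAGE - tail
    return records

# One 1000-tuple batch -> 5 full pages of 185 plus a 75-tuple tail.
print(wal_records_for(1000))                       # -> 6
# Splitting rows across workers can cost an extra record whenever one
# page's worth of tuples ends up divided between two workers:
print(wal_records_for(100))                        # -> 1
print(wal_records_for(50) + wal_records_for(50))   # -> 2
```

In this model an even split often yields the same total, since each worker's batches line up on page boundaries the same way, which is consistent with the counts above differing only by a handful of records rather than growing with worker count.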
I feel the difference in wal_records & wal_bytes is because\nof the following reasons:\nEach worker prepares 1000 tuples and then tries to do\nheap_multi_insert for 1000 tuples, In our case approximately 185\ntuples is stored in 1 page, 925 tuples are stored in 5 WAL records and\nthe remaining 75 tuples are stored in next WAL record. The wal dump is\nlike below:\nrmgr: Heap2 len (rec/tot): 3750/ 3750, tx: 510, lsn:\n0/0160EC80, prev 0/0160DDB0, desc: MULTI_INSERT+INIT 185 tuples flags\n0x00, blkref #0: rel 1663/13751/16384 blk 0\nrmgr: Heap2 len (rec/tot): 3750/ 3750, tx: 510, lsn:\n0/0160FB28, prev 0/0160EC80, desc: MULTI_INSERT+INIT 185 tuples flags\n0x00, blkref #0: rel 1663/13751/16384 blk 1\nrmgr: Heap2 len (rec/tot): 3750/ 3750, tx: 510, lsn:\n0/016109E8, prev 0/0160FB28, desc: MULTI_INSERT+INIT 185 tuples flags\n0x00, blkref #0: rel 1663/13751/16384 blk 2\nrmgr: Heap2 len (rec/tot): 3750/ 3750, tx: 510, lsn:\n0/01611890, prev 0/016109E8, desc: MULTI_INSERT+INIT 185 tuples flags\n0x00, blkref #0: rel 1663/13751/16384 blk 3\nrmgr: Heap2 len (rec/tot): 3750/ 3750, tx: 510, lsn:\n0/01612750, prev 0/01611890, desc: MULTI_INSERT+INIT 185 tuples flags\n0x00, blkref #0: rel 1663/13751/16384 blk 4\nrmgr: Heap2 len (rec/tot): 1550/ 1550, tx: 510, lsn:\n0/016135F8, prev 0/01612750, desc: MULTI_INSERT+INIT 75 tuples flags\n0x02, blkref #0: rel 1663/13751/16384 blk 5\n\nAfter the 1st 1000 tuples are inserted and when the worker tries to\ninsert another 1000 tuples, it will use the last page which had free\nspace to insert where we can insert 110 more tuples:\nrmgr: Heap2 len (rec/tot): 2470/ 2470, tx: 510, lsn:\n0/01613C08, prev 0/016135F8, desc: MULTI_INSERT 110 tuples flags 0x00,\nblkref #0: rel 1663/13751/16384 blk 5\nrmgr: Heap2 len (rec/tot): 3750/ 3750, tx: 510, lsn:\n0/016145C8, prev 0/01613C08, desc: MULTI_INSERT+INIT 185 tuples flags\n0x00, blkref #0: rel 1663/13751/16384 blk 6\nrmgr: Heap2 len (rec/tot): 3750/ 3750, tx: 510, lsn:\n0/01615470, prev 0/016145C8, 
desc: MULTI_INSERT+INIT 185 tuples flags\n0x00, blkref #0: rel 1663/13751/16384 blk 7\nrmgr: Heap2 len (rec/tot): 3750/ 3750, tx: 510, lsn:\n0/01616330, prev 0/01615470, desc: MULTI_INSERT+INIT 185 tuples flags\n0x00, blkref #0: rel 1663/13751/16384 blk 8\nrmgr: Heap2 len (rec/tot): 3750/ 3750, tx: 510, lsn:\n0/016171D8, prev 0/01616330, desc: MULTI_INSERT+INIT 185 tuples flags\n0x00, blkref #0: rel 1663/13751/16384 blk 9\nrmgr: Heap2 len (rec/tot): 3050/ 3050, tx: 510, lsn:\n0/01618098, prev 0/016171D8, desc: MULTI_INSERT+INIT 150 tuples flags\n0x02, blkref #0: rel 1663/13751/16384 blk 10\n\nThis behavior will be the same for sequential copy and copy with 1\nworker as the sequence of insert & the pages used to insert is in same\norder. There 2 reasons together result in the varying wal_size &\nwal_records with multiple worker: 1) When more than 1 worker is\ninvolved the sequence in which the pages that will be selected is not\nguaranteed, the MULTI_INSERT tuple count varies &\nMULTI_INSERT/MULTI_INSERT+INIT description varies. 2) wal_records will\nincrease with more number of workers because when the tuples are split\nacross the workers, one of the worker will have few more WAL record\nbecause the last heap_multi_insert gets split across the workers and\ngenerates new wal records like:\nrmgr: Heap2 len (rec/tot): 600/ 600, tx: 510, lsn:\n0/019F8B08, prev 0/019F7C48, desc: MULTI_INSERT 25 tuples flags 0x00,\nblkref #0: rel 1663/13751/16384 blk 1065\n\nAttached the tar of wal file dump which was used for analysis.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 19 Oct 2020 18:05:53 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Oct 9, 2020 at 2:52 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Sep 29, 2020 at 6:30 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n> >\n> > 2. 
Do we have tests for toast tables? I think if you implement the\n> > previous point some existing tests might cover it but I feel we should\n> > have at least one or two tests for the same.\n> >\n> Toast table use case 1: 10000 tuples, 9.6GB data, 3 indexes 2 on integer\ncolumns, 1 on text column(not the toast column), csv file, each row is >\n1320KB:\n> (222.767, 0, 1X), (134.171, 1, 1.66X), (93.749, 2, 2.38X), (93.672, 4,\n2.38X), (94.827, 8, 2.35X), (93.766, 16, 2.37X), (98.153, 20, 2.27X),\n(122.721, 30, 1.81X)\n>\n> Toast table use case 2: 100000 tuples, 96GB data, 3 indexes 2 on integer\ncolumns, 1 on text column(not the toast column), csv file, each row is >\n1320KB:\n> (2255.032, 0, 1X), (1358.628, 1, 1.66X), (901.170, 2, 2.5X), (912.743, 4,\n2.47X), (988.718, 8, 2.28X), (938.000, 16, 2.4X), (997.556, 20, 2.26X),\n(1000.586, 30, 2.25X)\n>\n> Toast table use case3: 10000 tuples, 9.6GB, no indexes, binary file, each\nrow is > 1320KB:\n> (136.983, 0, 1X), (136.418, 1, 1X), (81.896, 2, 1.66X), (62.929, 4,\n2.16X), (52.311, 8, 2.6X), (40.032, 16, 3.49X), (44.097, 20, 3.09X),\n(62.310, 30, 2.18X)\n>\n> In the case of a Toast table, we could achieve upto 2.5X for csv files,\nand 3.5X for binary files. We are analyzing this point and will post an\nupdate on our findings soon.\n>\n\nI analyzed the above point of getting only upto 2.5X performance\nimprovement for csv files with a toast table with 3 indexers - 2 on integer\ncolumns and 1 on text column(not the toast column). Reason is that workers\nare fast enough to do the work and they are waiting for the leader to fill\nin the data blocks and in this case the leader is able to serve the workers\nat its maximum possible speed. Hence most of the time the workers are\nwaiting not doing any beneficial work.\n\nHaving observed the above point, I tried to make workers perform more work\nto avoid waiting time. For this, I added a gist index on the toasted text\ncolumn. 
The use and results are as follows.\n\nToast table use case4: 10000 tuples, 9.6GB, 4 indexes - 2 on integer\ncolumns, 1 on non-toasted text column and 1 gist index on toasted text\ncolumn, csv file, each row is ~ 12.2KB:\n\n(1322.839, 0, 1X), (1261.176, 1, 1.05X), (632.296, 2, 2.09X), (321.941, 4,\n4.11X), (181.796, 8, 7.27X), *(105.750, 16, 12.51X)*, (107.099, 20,\n12.35X), (123.262, 30, 10.73X)\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 20 Oct 2020 15:25:14 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, Oct 19, 2020 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Oct 18, 2020 at 7:47 AM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> >\n> > Hi Vignesh,\n> >\n> > After having a look over the patch,\n> > I have some suggestions for\n> > 0003-Allow-copy-from-command-to-process-data-from-file.patch.\n> >\n> > 1.\n> >\n> > +static uint32\n> > +EstimateCstateSize(ParallelContext *pcxt, CopyState cstate, List *attnamelist,\n> > + char **whereClauseStr, char **rangeTableStr,\n> > + char **attnameListStr, char **notnullListStr,\n> > + char **nullListStr, char **convertListStr)\n> > +{\n> > + uint32
strsize = MAXALIGN(sizeof(SerializedParallelCopyState));\n> > +\n> > + strsize += EstimateStringSize(cstate->null_print);\n> > + strsize += EstimateStringSize(cstate->delim);\n> > + strsize += EstimateStringSize(cstate->quote);\n> > + strsize += EstimateStringSize(cstate->escape);\n> >\n> >\n> > It use function EstimateStringSize to get the strlen of null_print, delim, quote and escape.\n> > But the length of null_print seems has been stored in null_print_len.\n> > And delim/quote/escape must be 1 byte, so I think call strlen again seems unnecessary.\n> >\n> > How about \" strsize += sizeof(uint32) + cstate->null_print_len + 1\"\n> >\n>\n> +1. This seems like a good suggestion but add comments for\n> delim/quote/escape to indicate that we are considering one-byte for\n> each. I think this will obviate the need of function\n> EstimateStringSize. Another thing in this regard is that we normally\n> use add_size function to compute the size but I don't see that being\n> used in this and nearby computation. That helps us to detect overflow\n> of addition if any.\n>\n> EstimateCstateSize()\n> {\n> ..\n> +\n> + strsize++;\n> ..\n> }\n>\n> Why do we need this additional one-byte increment? 
Does it make sense\n> to add a small comment for the same?\n>\n\nChanged it to handle null_print, delim, quote & escape accordingly in\nthe attached patch, the one byte increment is not required, I have\nremoved it.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 21 Oct 2020 12:07:55 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Oct 8, 2020 at 11:15 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> I'm summarizing the pending open points so that I don't miss anything:\n> 1) Performance test on latest patch set.\n\nIt is tested and results are shared by bharath at [1]\n\n> 2) Testing points suggested.\n\nTests are added as suggested and details shared by bharath at [1]\n\n> 3) Support of parallel copy for COPY_OLD_FE.\n\nIt is handled as part of v8 patch shared at [2]\n\n> 4) Worker has to hop through all the processed chunks before getting\n> the chunk which it can process.\n\nOpen\n\n> 5) Handling of Tomas's comments.\n\nI have fixed and updated the fix details as part of [3]\n\n> 6) Handling of Greg's comments.\n\nI have fixed and updated the fix details as part of [4]\n\nExcept for \"4) Worker has to hop through all the processed chunks before\ngetting the chunk which it can process\", all open tasks are handled. 
I will\nwork on this and provide an update shortly.\n\n[1]\nhttps://www.postgresql.org/message-id/CALj2ACWeQVd-xoQZHGT01_33St4xPoZQibWz46o7jW1PE3XOqQ%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CALDaNm2UcmCMozcbKL8B7az9oYd9hZ+fNDcZHSSiiQJ4v-xN0Q@mail.gmail.com\n[3]\nhttps://www.postgresql.org/message-id/CALDaNm0_zUa9%2BS%3DpwCz3Yp43SY3r9bnO4v-9ucXUujEE%3D0Sd7g%40mail.gmail.com\n[4]\nhttps://www.postgresql.org/message-id/CALDaNm31pGG%2BL9N4HbM0mO4iuceih4mJ5s87jEwOPaFLpmDKyQ%40mail.gmail.com\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 21 Oct 2020 13:59:17 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi Vignesh,\n\nI took a look at the v8 patch set. Here are some comments:\n\n1. PopulateCommonCstateInfo() -- can we use PopulateCommonCStateInfo()\nor PopulateCopyStateInfo()? And also EstimateCstateSize() --\nEstimateCStateSize(), PopulateCstateCatalogInfo() --\nPopulateCStateCatalogInfo()?\n\n2. Instead of mentioning numbers like 1024, 64K, 10240 in the\ncomments, can we represent them in terms of macros?\n/* It can hold 1024 blocks of 64K data in DSM to be processed by the worker. */\n#define MAX_BLOCKS_COUNT 1024\n/*\n * It can hold upto 10240 record information for worker to process. RINGSIZE\n\n3. How about\n\"\nEach worker at once will pick the WORKER_CHUNK_COUNT records from the\nDSM data blocks and store them in it's local memory.\nThis is to make workers not contend much while getting record\ninformation from the DSM. Read RINGSIZE comments before\n changing this value.\n\"\ninstead of\n/*\n * Each worker will be allocated WORKER_CHUNK_COUNT of records from DSM data\n * block to process to avoid lock contention. Read RINGSIZE comments before\n * changing this value.\n */\n\n4. How about one line gap before and after for comments: \"Leader\nshould operate in the following order:\" and \"Worker should operate in\nthe following order:\"\n\n5.
Can we move RAW_BUF_BYTES macro definition to the beginning of the\ncopy.h where all the macro are defined?\n\n6. I don't think we need the change in toast_internals.c with the\ntemporary hack Assert(!(IsParallelWorker() && !currentCommandIdUsed));\nin GetCurrentCommandId()\n\n7. I think\n /* Can't perform copy in parallel */\n if (parallel_workers <= 0)\n return NULL;\ncan be\n /* Can't perform copy in parallel */\n if (parallel_workers == 0)\n return NULL;\nas parallel_workers can never be < 0 since we enter BeginParallelCopy\nonly if cstate->nworkers > 0 and also we are not allowed to have\nnegative values for max_worker_processes.\n\n8. Do we want to pfree(cstate->pcdata) in case we failed to start any\nparallel workers, we would have allocated a good\n else\n {\n /*\n * Reset nworkers to -1 here. This is useful in cases where user\n * specifies parallel workers, but, no worker is picked up, so go\n * back to non parallel mode value of nworkers.\n */\n cstate->nworkers = -1;\n *processed = CopyFrom(cstate); /* copy from file to database */\n }\n\n9. Instead of calling CopyStringToSharedMemory() for each string\nvariable, can't we just create a linked list of all the strings that\nneed to be copied into shm and call CopyStringToSharedMemory() only\nonce? We could avoid 5 function calls?\n\n10. Similar to above comment: can we fill all the required\ncstate->variables inside the function CopyNodeFromSharedMemory() and\ncall it only once? In each worker we could save overhead of 5 function\ncalls.\n\n11. Looks like CopyStringFromSharedMemory() and\nCopyNodeFromSharedMemory() do almost the same things except\nstringToNode() and pfree(destptr);. Can we have a generic function\nCopyFromSharedMemory() or something else and handle with flag \"bool\nisnode\" to differentiate the two use cases?\n\n12. Can we move below check to the end in IsParallelCopyAllowed()?\n /* Check parallel safety of the trigger functions. 
*/\n if (cstate->rel->trigdesc != NULL &&\n !CheckRelTrigFunParallelSafety(cstate->rel->trigdesc))\n return false;\n\n13. CacheLineInfo(): Instead of goto empty_data_line_update; how about\nhaving this directly inside the if block as it's being used only once?\n\n14. GetWorkerLine(): How about avoiding goto statements and replacing\nthe common code with a always static inline function or a macro?\n\n15. UpdateSharedLineInfo(): Below line is misaligned.\n lineInfo->first_block = blk_pos;\n lineInfo->start_offset = offset;\n\n16. ParallelCopyFrom(): Do we need CHECK_FOR_INTERRUPTS(); at the\nstart of for (;;)?\n\n17. Remove extra lines after #define IsHeaderLine()\n(cstate->header_line && cstate->cur_lineno == 1) in copy.h\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 21 Oct 2020 15:18:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Oct 21, 2020 at 3:19 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n>\n> 9. Instead of calling CopyStringToSharedMemory() for each string\n> variable, can't we just create a linked list of all the strings that\n> need to be copied into shm and call CopyStringToSharedMemory() only\n> once? We could avoid 5 function calls?\n>\n\nIf we want to avoid different function calls then can't we just store\nall these strings in a local structure and use it? That might improve\nthe other parts of code as well where we are using these as individual\nparameters.\n\n> 10. Similar to above comment: can we fill all the required\n> cstate->variables inside the function CopyNodeFromSharedMemory() and\n> call it only once? 
In each worker we could save overhead of 5 function\n> calls.\n>\n\nYeah, that makes sense.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Oct 2020 15:51:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Oct 21, 2020 at 3:18 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> 17. Remove extra lines after #define IsHeaderLine()\n> (cstate->header_line && cstate->cur_lineno == 1) in copy.h\n>\n\n I missed one comment:\n\n 18. I think we need to treat the number of parallel workers as an\ninteger similar to the parallel option in vacuum.\n\npostgres=# copy t1 from stdin with(parallel '1'); <<<<< - we\nshould not allow this.\nEnter data to be copied followed by a newline.\n\npostgres=# vacuum (parallel '1') t1;\nERROR: parallel requires an integer value\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 21 Oct 2020 16:20:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "I had a brief look at this patch. Important work! A couple of first \nimpressions:\n\n1. The split between patches \n0002-Framework-for-leader-worker-in-parallel-copy.patch and \n0003-Allow-copy-from-command-to-process-data-from-file.patch is quite \nartificial. All the stuff introduced in the first is unused until the \nsecond patch is applied. The first patch introduces a forward \ndeclaration for ParallelCopyData(), but the function only comes in the \nsecond patch. The comments in the first patch talk about \nLINE_LEADER_POPULATING and LINE_LEADER_POPULATED, but the enum only \ncomes in the second patch. I think these have to be merged into one. If you \nwant to split it somehow, I'd suggest having a separate patch just to \nmove CopyStateData from copy.c to copy.h.
The subsequent patch would \nthen be easier to read as you could see more easily what's being added \nto CopyStateData. Actually I think it would be better to have a new \nheader file, copy_internal.h, to hold CopyStateData and the other \nstructs, and keep copy.h as it is.\n\n2. This desperately needs some kind of a high-level overview of how it \nworks. What is a leader, what is a worker? Which process does each step \nof COPY processing, like reading from the file/socket, splitting the \ninput into lines, handling escapes, calling input functions, and \nupdating the heap and indexes? What data structures are used for the \ncommunication? How does is the work synchronized between the processes? \nThere are comments on those individual aspects scattered in the patch, \nbut if you're not already familiar with it, you don't know where to \nstart. There's some of that in the commit message, but it needs to be \nsomewhere in the source code, maybe in a long comment at the top of \ncopyparallel.c.\n\n3. I'm surprised there's a separate ParallelCopyLineBoundary struct for \nevery input line. Doesn't that incur a lot of synchronization overhead? \nI haven't done any testing, this is just my gut feeling, but I assumed \nyou'd work in batches of, say, 100 or 1000 lines each.\n\n- Heikki\n\n\n", "msg_date": "Fri, 23 Oct 2020 11:31:09 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi Vignesh,\n\nThanks for the updated patches. Here are some more comments that I can\nfind after reviewing your latest patches:\n\n+/*\n+ * This structure helps in storing the common data from CopyStateData that are\n+ * required by the workers. 
This information will then be allocated and stored\n+ * into the DSM for the worker to retrieve and copy it to CopyStateData.\n+ */\n+typedef struct SerializedParallelCopyState\n+{\n+ /* low-level state data */\n+ CopyDest copy_dest; /* type of copy source/destination */\n+ int file_encoding; /* file or remote side's character encoding */\n+ bool need_transcoding; /* file encoding diff from server? */\n+ bool encoding_embeds_ascii; /* ASCII can be non-first byte? */\n+\n...\n...\n+\n+ /* Working state for COPY FROM */\n+ AttrNumber num_defaults;\n+ Oid relid;\n+} SerializedParallelCopyState;\n\nCan the above structure not be part of the CopyStateData structure? I\nam just asking this question because all the fields present in the\nabove structure are also present in the CopyStateData structure. So,\nincluding it in the CopyStateData structure will reduce the code\nduplication and will also make CopyStateData a bit shorter.\n\n--\n\n+ pcxt = BeginParallelCopy(cstate->nworkers, cstate, stmt->attlist,\n+ relid);\n\nDo we need to pass cstate->nworkers and relid to BeginParallelCopy()\nfunction when we are already passing cstate structure, using which\nboth of these information can be retrieved ?\n\n--\n\n+/* DSM keys for parallel copy. */\n+#define PARALLEL_COPY_KEY_SHARED_INFO 1\n+#define PARALLEL_COPY_KEY_CSTATE 2\n+#define PARALLEL_COPY_WAL_USAGE 3\n+#define PARALLEL_COPY_BUFFER_USAGE 4\n\nDSM key names do not appear to be consistent. For shared info and\ncstate structures, the key name is prefixed with \"PARALLEL_COPY_KEY\",\nbut for WalUsage and BufferUsage structures, it is prefixed with\n\"PARALLEL_COPY\". I think it would be better to make them consistent.\n\n--\n\n if (resultRelInfo->ri_TrigDesc != NULL &&\n (resultRelInfo->ri_TrigDesc->trig_insert_before_row ||\n resultRelInfo->ri_TrigDesc->trig_insert_instead_row))\n {\n /*\n * Can't support multi-inserts when there are any BEFORE/INSTEAD OF\n * triggers on the table. 
Such triggers might query the table we're\n * inserting into and act differently if the tuples that have already\n * been processed and prepared for insertion are not there.\n */\n insertMethod = CIM_SINGLE;\n }\n else if (proute != NULL && resultRelInfo->ri_TrigDesc != NULL &&\n resultRelInfo->ri_TrigDesc->trig_insert_new_table)\n {\n /*\n * For partitioned tables we can't support multi-inserts when there\n * are any statement level insert triggers. It might be possible to\n * allow partitioned tables with such triggers in the future, but for\n * now, CopyMultiInsertInfoFlush expects that any before row insert\n * and statement level insert triggers are on the same relation.\n */\n insertMethod = CIM_SINGLE;\n }\n else if (resultRelInfo->ri_FdwRoutine != NULL ||\n cstate->volatile_defexprs)\n {\n...\n...\n\nI think, if possible, all these if-else checks in CopyFrom() can be\nmoved to a single function which can probably be named as\nIdentifyCopyInsertMethod() and this function can be called in\nIsParallelCopyAllowed(). This will ensure that in case of Parallel\nCopy when the leader has performed all these checks, the worker won't\ndo it again. I also feel that it will make the code look a bit\ncleaner.\n\n--\n\n+void\n+ParallelCopyMain(dsm_segment *seg, shm_toc *toc)\n+{\n...\n...\n+ InstrEndParallelQuery(&bufferusage[ParallelWorkerNumber],\n+ &walusage[ParallelWorkerNumber]);\n+\n+ MemoryContextSwitchTo(oldcontext);\n+ pfree(cstate);\n+ return;\n+}\n\nIt seems like you also need to delete the memory context\n(cstate->copycontext) here.\n\n--\n\n+void\n+ExecBeforeStmtTrigger(CopyState cstate)\n+{\n+ EState *estate = CreateExecutorState();\n+ ResultRelInfo *resultRelInfo;\n\nThis function has a lot of comments which have been copied as it is\nfrom the CopyFrom function, I think it would be good to remove those\ncomments from here and mention that this code changes done in this\nfunction has been taken from the CopyFrom function. 
If any queries\npeople may refer to the CopyFrom function. This will again avoid the\nunnecessary code in the patch.\n\n--\n\nAs Heikki rightly pointed out in his previous email, we need some high\nlevel description of how Parallel Copy works somewhere in\ncopyparallel.c file. For reference, please see how a brief description\nabout parallel vacuum has been added in the vacuumlazy.c file.\n\n * Lazy vacuum supports parallel execution with parallel worker processes. In\n * a parallel vacuum, we perform both index vacuum and index cleanup with\n * parallel worker processes. Individual indexes are processed by one vacuum\n...\n...\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\nOn Wed, Oct 21, 2020 at 12:08 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Oct 19, 2020 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sun, Oct 18, 2020 at 7:47 AM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> > >\n> > > Hi Vignesh,\n> > >\n> > > After having a look over the patch,\n> > > I have some suggestions for\n> > > 0003-Allow-copy-from-command-to-process-data-from-file.patch.\n> > >\n> > > 1.\n> > >\n> > > +static uint32\n> > > +EstimateCstateSize(ParallelContext *pcxt, CopyState cstate, List *attnamelist,\n> > > + char **whereClauseStr, char **rangeTableStr,\n> > > + char **attnameListStr, char **notnullListStr,\n> > > + char **nullListStr, char **convertListStr)\n> > > +{\n> > > + uint32 strsize = MAXALIGN(sizeof(SerializedParallelCopyState));\n> > > +\n> > > + strsize += EstimateStringSize(cstate->null_print);\n> > > + strsize += EstimateStringSize(cstate->delim);\n> > > + strsize += EstimateStringSize(cstate->quote);\n> > > + strsize += EstimateStringSize(cstate->escape);\n> > >\n> > >\n> > > It use function EstimateStringSize to get the strlen of null_print, delim, quote and escape.\n> > > But the length of null_print seems has been stored in null_print_len.\n> > > And delim/quote/escape must be 1 byte, so I 
think call strlen again seems unnecessary.\n> > >\n> > > How about \" strsize += sizeof(uint32) + cstate->null_print_len + 1\"\n> > >\n> >\n> > +1. This seems like a good suggestion but add comments for\n> > delim/quote/escape to indicate that we are considering one-byte for\n> > each. I think this will obviate the need of function\n> > EstimateStringSize. Another thing in this regard is that we normally\n> > use add_size function to compute the size but I don't see that being\n> > used in this and nearby computation. That helps us to detect overflow\n> > of addition if any.\n> >\n> > EstimateCstateSize()\n> > {\n> > ..\n> > +\n> > + strsize++;\n> > ..\n> > }\n> >\n> > Why do we need this additional one-byte increment? Does it make sense\n> > to add a small comment for the same?\n> >\n>\n> Changed it to handle null_print, delim, quote & escape accordingly in\n> the attached patch, the one byte increment is not required, I have\n> removed it.\n>\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Oct 2020 17:42:49 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Oct 23, 2020 at 5:42 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi Vignesh,\n>\n> Thanks for the updated patches. Here are some more comments that I can\n> find after reviewing your latest patches:\n>\n> +/*\n> + * This structure helps in storing the common data from CopyStateData that are\n> + * required by the workers. This information will then be allocated and stored\n> + * into the DSM for the worker to retrieve and copy it to CopyStateData.\n> + */\n> +typedef struct SerializedParallelCopyState\n> +{\n> + /* low-level state data */\n> + CopyDest copy_dest; /* type of copy source/destination */\n> + int file_encoding; /* file or remote side's character encoding */\n> + bool need_transcoding; /* file encoding diff from server? 
*/\n> + bool encoding_embeds_ascii; /* ASCII can be non-first byte? */\n> +\n> ...\n> ...\n> +\n> + /* Working state for COPY FROM */\n> + AttrNumber num_defaults;\n> + Oid relid;\n> +} SerializedParallelCopyState;\n>\n> Can the above structure not be part of the CopyStateData structure? I\n> am just asking this question because all the fields present in the\n> above structure are also present in the CopyStateData structure. So,\n> including it in the CopyStateData structure will reduce the code\n> duplication and will also make CopyStateData a bit shorter.\n>\n> --\n>\n> + pcxt = BeginParallelCopy(cstate->nworkers, cstate, stmt->attlist,\n> + relid);\n>\n> Do we need to pass cstate->nworkers and relid to BeginParallelCopy()\n> function when we are already passing cstate structure, using which\n> both of these information can be retrieved ?\n>\n> --\n>\n> +/* DSM keys for parallel copy. */\n> +#define PARALLEL_COPY_KEY_SHARED_INFO 1\n> +#define PARALLEL_COPY_KEY_CSTATE 2\n> +#define PARALLEL_COPY_WAL_USAGE 3\n> +#define PARALLEL_COPY_BUFFER_USAGE 4\n>\n> DSM key names do not appear to be consistent. For shared info and\n> cstate structures, the key name is prefixed with \"PARALLEL_COPY_KEY\",\n> but for WalUsage and BufferUsage structures, it is prefixed with\n> \"PARALLEL_COPY\". I think it would be better to make them consistent.\n>\n> --\n>\n> if (resultRelInfo->ri_TrigDesc != NULL &&\n> (resultRelInfo->ri_TrigDesc->trig_insert_before_row ||\n> resultRelInfo->ri_TrigDesc->trig_insert_instead_row))\n> {\n> /*\n> * Can't support multi-inserts when there are any BEFORE/INSTEAD OF\n> * triggers on the table. 
Such triggers might query the table we're\n> * inserting into and act differently if the tuples that have already\n> * been processed and prepared for insertion are not there.\n> */\n> insertMethod = CIM_SINGLE;\n> }\n> else if (proute != NULL && resultRelInfo->ri_TrigDesc != NULL &&\n> resultRelInfo->ri_TrigDesc->trig_insert_new_table)\n> {\n> /*\n> * For partitioned tables we can't support multi-inserts when there\n> * are any statement level insert triggers. It might be possible to\n> * allow partitioned tables with such triggers in the future, but for\n> * now, CopyMultiInsertInfoFlush expects that any before row insert\n> * and statement level insert triggers are on the same relation.\n> */\n> insertMethod = CIM_SINGLE;\n> }\n> else if (resultRelInfo->ri_FdwRoutine != NULL ||\n> cstate->volatile_defexprs)\n> {\n> ...\n> ...\n>\n> I think, if possible, all these if-else checks in CopyFrom() can be\n> moved to a single function which can probably be named as\n> IdentifyCopyInsertMethod() and this function can be called in\n> IsParallelCopyAllowed(). This will ensure that in case of Parallel\n> Copy when the leader has performed all these checks, the worker won't\n> do it again. I also feel that it will make the code look a bit\n> cleaner.\n>\n\nJust rewriting above comment to make it a bit more clear:\n\nI think, if possible, all these if-else checks in CopyFrom() should be\nmoved to a separate function which can probably be named as\nIdentifyCopyInsertMethod() and this function called from\nIsParallelCopyAllowed() and CopyFrom() functions. It will only be\ncalled from CopyFrom() when IsParallelCopy() returns false. This will\nensure that in case of Parallel Copy if the leader has performed all\nthese checks, the worker won't do it again. 
I also feel that having a\nseparate function containing all these checks will make the code look\na bit cleaner.\n\n> --\n>\n> +void\n> +ParallelCopyMain(dsm_segment *seg, shm_toc *toc)\n> +{\n> ...\n> ...\n> + InstrEndParallelQuery(&bufferusage[ParallelWorkerNumber],\n> + &walusage[ParallelWorkerNumber]);\n> +\n> + MemoryContextSwitchTo(oldcontext);\n> + pfree(cstate);\n> + return;\n> +}\n>\n> It seems like you also need to delete the memory context\n> (cstate->copycontext) here.\n>\n> --\n>\n> +void\n> +ExecBeforeStmtTrigger(CopyState cstate)\n> +{\n> + EState *estate = CreateExecutorState();\n> + ResultRelInfo *resultRelInfo;\n>\n> This function has a lot of comments which have been copied as it is\n> from the CopyFrom function, I think it would be good to remove those\n> comments from here and mention that this code changes done in this\n> function has been taken from the CopyFrom function. If any queries\n> people may refer to the CopyFrom function. This will again avoid the\n> unnecessary code in the patch.\n>\n> --\n>\n> As Heikki rightly pointed out in his previous email, we need some high\n> level description of how Parallel Copy works somewhere in\n> copyparallel.c file. For reference, please see how a brief description\n> about parallel vacuum has been added in the vacuumlazy.c file.\n>\n> * Lazy vacuum supports parallel execution with parallel worker processes. In\n> * a parallel vacuum, we perform both index vacuum and index cleanup with\n> * parallel worker processes. 
Individual indexes are processed by one vacuum\n> ...\n> ...\n>\n> --\n> With Regards,\n> Ashutosh Sharma\n> EnterpriseDB:http://www.enterprisedb.com\n>\n>\n> On Wed, Oct 21, 2020 at 12:08 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, Oct 19, 2020 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sun, Oct 18, 2020 at 7:47 AM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> > > >\n> > > > Hi Vignesh,\n> > > >\n> > > > After having a look over the patch,\n> > > > I have some suggestions for\n> > > > 0003-Allow-copy-from-command-to-process-data-from-file.patch.\n> > > >\n> > > > 1.\n> > > >\n> > > > +static uint32\n> > > > +EstimateCstateSize(ParallelContext *pcxt, CopyState cstate, List *attnamelist,\n> > > > + char **whereClauseStr, char **rangeTableStr,\n> > > > + char **attnameListStr, char **notnullListStr,\n> > > > + char **nullListStr, char **convertListStr)\n> > > > +{\n> > > > + uint32 strsize = MAXALIGN(sizeof(SerializedParallelCopyState));\n> > > > +\n> > > > + strsize += EstimateStringSize(cstate->null_print);\n> > > > + strsize += EstimateStringSize(cstate->delim);\n> > > > + strsize += EstimateStringSize(cstate->quote);\n> > > > + strsize += EstimateStringSize(cstate->escape);\n> > > >\n> > > >\n> > > > It use function EstimateStringSize to get the strlen of null_print, delim, quote and escape.\n> > > > But the length of null_print seems has been stored in null_print_len.\n> > > > And delim/quote/escape must be 1 byte, so I think call strlen again seems unnecessary.\n> > > >\n> > > > How about \" strsize += sizeof(uint32) + cstate->null_print_len + 1\"\n> > > >\n> > >\n> > > +1. This seems like a good suggestion but add comments for\n> > > delim/quote/escape to indicate that we are considering one-byte for\n> > > each. I think this will obviate the need of function\n> > > EstimateStringSize. 
Another thing in this regard is that we normally\n> > > use add_size function to compute the size but I don't see that being\n> > > used in this and nearby computation. That helps us to detect overflow\n> > > of addition if any.\n> > >\n> > > EstimateCstateSize()\n> > > {\n> > > ..\n> > > +\n> > > + strsize++;\n> > > ..\n> > > }\n> > >\n> > > Why do we need this additional one-byte increment? Does it make sense\n> > > to add a small comment for the same?\n> > >\n> >\n> > Changed it to handle null_print, delim, quote & escape accordingly in\n> > the attached patch, the one byte increment is not required, I have\n> > removed it.\n> >\n> > Regards,\n> > Vignesh\n> > EnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Oct 2020 18:58:04 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Thanks for the comments, please find my thoughts below.\nOn Wed, Oct 21, 2020 at 3:19 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi Vignesh,\n>\n> I took a look at the v8 patch set. Here are some comments:\n>\n> 1. PopulateCommonCstateInfo() -- can we use PopulateCommonCStateInfo()\n> or PopulateCopyStateInfo()? And also EstimateCstateSize() --\n> EstimateCStateSize(), PopulateCstateCatalogInfo() --\n> PopulateCStateCatalogInfo()?\n>\n\nChanged as suggested.\n\n> 2. Instead of mentioning numbers like 1024, 64K, 10240 in the\n> comments, can we represent them in terms of macros?\n> /* It can hold 1024 blocks of 64K data in DSM to be processed by the worker. */\n> #define MAX_BLOCKS_COUNT 1024\n> /*\n> * It can hold upto 10240 record information for worker to process. RINGSIZE\n>\n\nChanged as suggested.\n\n> 3. How about\n> \"\n> Each worker at once will pick the WORKER_CHUNK_COUNT records from the\n> DSM data blocks and store them in it's local memory.\n> This is to make workers not contend much while getting record\n> information from the DSM. 
Read RINGSIZE comments before\n> changing this value.\n> \"\n> instead of\n> /*\n> * Each worker will be allocated WORKER_CHUNK_COUNT of records from DSM data\n> * block to process to avoid lock contention. Read RINGSIZE comments before\n> * changing this value.\n> */\n>\n\nRephrased it.\n\n> 4. How about one line gap before and after for comments: \"Leader\n> should operate in the following order:\" and \"Worker should operate in\n> the following order:\"\n>\n\nChanged it.\n\n> 5. Can we move RAW_BUF_BYTES macro definition to the beginning of the\n> copy.h where all the macro are defined?\n>\n\nChange was done as part of another commit & we are using as it is. I\npreferred it to be as it is.\n\n> 6. I don't think we need the change in toast_internals.c with the\n> temporary hack Assert(!(IsParallelWorker() && !currentCommandIdUsed));\n> in GetCurrentCommandId()\n>\n\nModified it.\n\n> 7. I think\n> /* Can't perform copy in parallel */\n> if (parallel_workers <= 0)\n> return NULL;\n> can be\n> /* Can't perform copy in parallel */\n> if (parallel_workers == 0)\n> return NULL;\n> as parallel_workers can never be < 0 since we enter BeginParallelCopy\n> only if cstate->nworkers > 0 and also we are not allowed to have\n> negative values for max_worker_processes.\n>\n\nModified it.\n\n> 8. Do we want to pfree(cstate->pcdata) in case we failed to start any\n> parallel workers, we would have allocated a good\n> else\n> {\n> /*\n> * Reset nworkers to -1 here. This is useful in cases where user\n> * specifies parallel workers, but, no worker is picked up, so go\n> * back to non parallel mode value of nworkers.\n> */\n> cstate->nworkers = -1;\n> *processed = CopyFrom(cstate); /* copy from file to database */\n> }\n>\n\nAdded pfree.\n\n> 9. Instead of calling CopyStringToSharedMemory() for each string\n> variable, can't we just create a linked list of all the strings that\n> need to be copied into shm and call CopyStringToSharedMemory() only\n> once? 
We could avoid 5 function calls?\n>\n\nI feel keeping it this way makes the code more readable, and also this\nis not in a performance intensive tight loop. I'm retaining the\nchange as is unless we feel this will make an impact.\n\n> 10. Similar to above comment: can we fill all the required\n> cstate->variables inside the function CopyNodeFromSharedMemory() and\n> call it only once? In each worker we could save overhead of 5 function\n> calls.\n>\n\nsame as above.\n\n> 11. Looks like CopyStringFromSharedMemory() and\n> CopyNodeFromSharedMemory() do almost the same things except\n> stringToNode() and pfree(destptr);. Can we have a generic function\n> CopyFromSharedMemory() or something else and handle with flag \"bool\n> isnode\" to differentiate the two use cases?\n>\n\nRemoved CopyStringFromSharedMemory & used CopyNodeFromSharedMemory\nappropriately. CopyNodeFromSharedMemory is renamed to\nRestoreNodeFromSharedMemory keep the name consistent.\n\n> 12. Can we move below check to the end in IsParallelCopyAllowed()?\n> /* Check parallel safety of the trigger functions. */\n> if (cstate->rel->trigdesc != NULL &&\n> !CheckRelTrigFunParallelSafety(cstate->rel->trigdesc))\n> return false;\n>\n\nModified.\n\n> 13. CacheLineInfo(): Instead of goto empty_data_line_update; how about\n> having this directly inside the if block as it's being used only once?\n>\n\nHave removed the goto by using a macro.\n\n> 14. GetWorkerLine(): How about avoiding goto statements and replacing\n> the common code with a always static inline function or a macro?\n>\n\nHave removed the goto by using a macro.\n\n> 15. UpdateSharedLineInfo(): Below line is misaligned.\n> lineInfo->first_block = blk_pos;\n> lineInfo->start_offset = offset;\n>\n\nChanged it.\n\n> 16. ParallelCopyFrom(): Do we need CHECK_FOR_INTERRUPTS(); at the\n> start of for (;;)?\n>\n\nAdded it.\n\n> 17. 
Remove extra lines after #define IsHeaderLine()\n> (cstate->header_line && cstate->cur_lineno == 1) in copy.h\n>\n\nModified it.\n\nAttached v9 patches have the fixes for the above comments.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 27 Oct 2020 19:06:15 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Oct 21, 2020 at 3:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 21, 2020 at 3:19 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> >\n> > 9. Instead of calling CopyStringToSharedMemory() for each string\n> > variable, can't we just create a linked list of all the strings that\n> > need to be copied into shm and call CopyStringToSharedMemory() only\n> > once? We could avoid 5 function calls?\n> >\n>\n> If we want to avoid different function calls then can't we just store\n> all these strings in a local structure and use it? That might improve\n> the other parts of code as well where we are using these as individual\n> parameters.\n>\n\nI have made one structure SerializedListToStrCState to store all the\nvariables. The rest of the common variables is directly copied from &\ninto cstate.\n\n> > 10. Similar to above comment: can we fill all the required\n> > cstate->variables inside the function CopyNodeFromSharedMemory() and\n> > call it only once? In each worker we could save overhead of 5 function\n> > calls.\n> >\n>\n> Yeah, that makes sense.\n>\n\nI feel keeping it this way makes the code more readable, and also this\nis not in a performance intensive tight loop. 
I'm retaining the\nchange as is unless we feel this will make an impact.\n\nThis is addressed in v9 patch shared at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm1cAONkFDN6K72DSiRpgqNGvwxQL7TjEiHZ58opnp9VoA@mail.gmail.com\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Oct 2020 20:52:07 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Oct 21, 2020 at 4:20 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Oct 21, 2020 at 3:18 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > 17. Remove extra lines after #define IsHeaderLine()\n> > (cstate->header_line && cstate->cur_lineno == 1) in copy.h\n> >\n>\n> I missed one comment:\n>\n> 18. I think we need to treat the number of parallel workers as an\n> integer similar to the parallel option in vacuum.\n>\n> postgres=# copy t1 from stdin with(parallel '1'); <<<<< - we\n> should not allow this.\n> Enter data to be copied followed by a newline.\n>\n> postgres=# vacuum (parallel '1') t1;\n> ERROR: parallel requires an integer value\n>\n\nI have made the behavior the same as vacuum.\nThis is addressed in v9 patch shared at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm1cAONkFDN6K72DSiRpgqNGvwxQL7TjEiHZ58opnp9VoA@mail.gmail.com\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Oct 2020 20:53:39 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Thanks Heikki for reviewing and providing your comments. Please find\nmy thoughts below.\n\nOn Fri, Oct 23, 2020 at 2:01 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> I had a brief look at at this patch. Important work! A couple of first\n> impressions:\n>\n> 1. 
The split between patches\n> 0002-Framework-for-leader-worker-in-parallel-copy.patch and\n> 0003-Allow-copy-from-command-to-process-data-from-file.patch is quite\n> artificial. All the stuff introduced in the first is unused until the\n> second patch is applied. The first patch introduces a forward\n> declaration for ParallelCopyData(), but the function only comes in the\n> second patch. The comments in the first patch talk about\n> LINE_LEADER_POPULATING and LINE_LEADER_POPULATED, but the enum only\n> comes in the second patch. I think these have to merged into one. If you\n> want to split it somehow, I'd suggest having a separate patch just to\n> move CopyStateData from copy.c to copy.h. The subsequent patch would\n> then be easier to read as you could see more easily what's being added\n> to CopyStateData. Actually I think it would be better to have a new\n> header file, copy_internal.h, to hold CopyStateData and the other\n> structs, and keep copy.h as it is.\n>\n\nI have merged 0002 & 0003 patch, I have moved few things like creation\nof copy_internal.h, moving of CopyStateData from copy.c into\ncopy_internal.h into 0001 patch.\n\n> 2. This desperately needs some kind of a high-level overview of how it\n> works. What is a leader, what is a worker? Which process does each step\n> of COPY processing, like reading from the file/socket, splitting the\n> input into lines, handling escapes, calling input functions, and\n> updating the heap and indexes? What data structures are used for the\n> communication? How does is the work synchronized between the processes?\n> There are comments on those individual aspects scattered in the patch,\n> but if you're not already familiar with it, you don't know where to\n> start. There's some of that in the commit message, but it needs to be\n> somewhere in the source code, maybe in a long comment at the top of\n> copyparallel.c.\n>\n\nAdded it in copyparallel.c\n\n> 3. 
I'm surprised there's a separate ParallelCopyLineBoundary struct for\n> every input line. Doesn't that incur a lot of synchronization overhead?\n> I haven't done any testing, this is just my gut feeling, but I assumed\n> you'd work in batches of, say, 100 or 1000 lines each.\n>\n\nData read from the file will be stored in DSM which is of size 64k *\n1024. Leader will parse and identify the line boundary like which line\nstarts from which data block, what is the starting offset in the data\nblock, what is the line size, this information will be present in\nParallelCopyLineBoundary. Like you said, each worker processes\nWORKER_CHUNK_COUNT 64 lines at a time. Performance test results run\nfor parallel copy are available at [1]. This is addressed in v9 patch\nshared at [2].\n\n[1] https://www.postgresql.org/message-id/CALj2ACWeQVd-xoQZHGT01_33St4xPoZQibWz46o7jW1PE3XOqQ%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CALDaNm1cAONkFDN6K72DSiRpgqNGvwxQL7TjEiHZ58opnp9VoA@mail.gmail.com\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Oct 2020 20:56:41 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Thanks Ashutosh for reviewing and providing your comments.\n\nOn Fri, Oct 23, 2020 at 5:43 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi Vignesh,\n>\n> Thanks for the updated patches. Here are some more comments that I can\n> find after reviewing your latest patches:\n>\n> +/*\n> + * This structure helps in storing the common data from CopyStateData that are\n> + * required by the workers. 
This information will then be allocated and stored\n> + * into the DSM for the worker to retrieve and copy it to CopyStateData.\n> + */\n> +typedef struct SerializedParallelCopyState\n> +{\n> + /* low-level state data */\n> + CopyDest copy_dest; /* type of copy source/destination */\n> + int file_encoding; /* file or remote side's character encoding */\n> + bool need_transcoding; /* file encoding diff from server? */\n> + bool encoding_embeds_ascii; /* ASCII can be non-first byte? */\n> +\n> ...\n> ...\n> +\n> + /* Working state for COPY FROM */\n> + AttrNumber num_defaults;\n> + Oid relid;\n> +} SerializedParallelCopyState;\n>\n> Can the above structure not be part of the CopyStateData structure? I\n> am just asking this question because all the fields present in the\n> above structure are also present in the CopyStateData structure. So,\n> including it in the CopyStateData structure will reduce the code\n> duplication and will also make CopyStateData a bit shorter.\n>\n\nI have removed the common members from the structure, now there are no\ncommon members between CopyStateData & the new structure. I'm using\nCopyStateData to copy to/from directly in the new patch.\n\n> --\n>\n> + pcxt = BeginParallelCopy(cstate->nworkers, cstate, stmt->attlist,\n> + relid);\n>\n> Do we need to pass cstate->nworkers and relid to BeginParallelCopy()\n> function when we are already passing cstate structure, using which\n> both of these information can be retrieved ?\n>\n\nnworkers need not be passed as you have suggested but relid need to be\npassed as we will be setting it to pcdata, modified nworkers as\nsuggested.\n\n> --\n>\n> +/* DSM keys for parallel copy. */\n> +#define PARALLEL_COPY_KEY_SHARED_INFO 1\n> +#define PARALLEL_COPY_KEY_CSTATE 2\n> +#define PARALLEL_COPY_WAL_USAGE 3\n> +#define PARALLEL_COPY_BUFFER_USAGE 4\n>\n> DSM key names do not appear to be consistent. 
For shared info and\n> cstate structures, the key name is prefixed with \"PARALLEL_COPY_KEY\",\n> but for WalUsage and BufferUsage structures, it is prefixed with\n> \"PARALLEL_COPY\". I think it would be better to make them consistent.\n>\n\nModified as suggested\n\n> --\n>\n> if (resultRelInfo->ri_TrigDesc != NULL &&\n> (resultRelInfo->ri_TrigDesc->trig_insert_before_row ||\n> resultRelInfo->ri_TrigDesc->trig_insert_instead_row))\n> {\n> /*\n> * Can't support multi-inserts when there are any BEFORE/INSTEAD OF\n> * triggers on the table. Such triggers might query the table we're\n> * inserting into and act differently if the tuples that have already\n> * been processed and prepared for insertion are not there.\n> */\n> insertMethod = CIM_SINGLE;\n> }\n> else if (proute != NULL && resultRelInfo->ri_TrigDesc != NULL &&\n> resultRelInfo->ri_TrigDesc->trig_insert_new_table)\n> {\n> /*\n> * For partitioned tables we can't support multi-inserts when there\n> * are any statement level insert triggers. It might be possible to\n> * allow partitioned tables with such triggers in the future, but for\n> * now, CopyMultiInsertInfoFlush expects that any before row insert\n> * and statement level insert triggers are on the same relation.\n> */\n> insertMethod = CIM_SINGLE;\n> }\n> else if (resultRelInfo->ri_FdwRoutine != NULL ||\n> cstate->volatile_defexprs)\n> {\n> ...\n> ...\n>\n> I think, if possible, all these if-else checks in CopyFrom() can be\n> moved to a single function which can probably be named as\n> IdentifyCopyInsertMethod() and this function can be called in\n> IsParallelCopyAllowed(). This will ensure that in case of Parallel\n> Copy when the leader has performed all these checks, the worker won't\n> do it again. I also feel that it will make the code look a bit\n> cleaner.\n>\n\nIn the recent patch posted we have changed it to simplify the check\nfor parallel copy, it is not an exact match. 
I feel this comment is\nnot applicable on the latest patch\n\n> --\n>\n> +void\n> +ParallelCopyMain(dsm_segment *seg, shm_toc *toc)\n> +{\n> ...\n> ...\n> + InstrEndParallelQuery(&bufferusage[ParallelWorkerNumber],\n> + &walusage[ParallelWorkerNumber]);\n> +\n> + MemoryContextSwitchTo(oldcontext);\n> + pfree(cstate);\n> + return;\n> +}\n>\n> It seems like you also need to delete the memory context\n> (cstate->copycontext) here.\n>\n\nAdded it.\n\n> --\n>\n> +void\n> +ExecBeforeStmtTrigger(CopyState cstate)\n> +{\n> + EState *estate = CreateExecutorState();\n> + ResultRelInfo *resultRelInfo;\n>\n> This function has a lot of comments which have been copied as it is\n> from the CopyFrom function, I think it would be good to remove those\n> comments from here and mention that this code changes done in this\n> function has been taken from the CopyFrom function. If any queries\n> people may refer to the CopyFrom function. This will again avoid the\n> unnecessary code in the patch.\n>\n\nChanged as suggested.\n\n> --\n>\n> As Heikki rightly pointed out in his previous email, we need some high\n> level description of how Parallel Copy works somewhere in\n> copyparallel.c file. For reference, please see how a brief description\n> about parallel vacuum has been added in the vacuumlazy.c file.\n>\n> * Lazy vacuum supports parallel execution with parallel worker processes. In\n> * a parallel vacuum, we perform both index vacuum and index cleanup with\n> * parallel worker processes. 
Individual indexes are processed by one vacuum\n> ...\n\nAdded it in copyparallel.c\n\nThis is addressed in v9 patch shared at [1].\n[1] - https://www.postgresql.org/message-id/CALDaNm1cAONkFDN6K72DSiRpgqNGvwxQL7TjEiHZ58opnp9VoA@mail.gmail.com\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Oct 2020 21:03:22 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Oct 23, 2020 at 6:58 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> >\n> > I think, if possible, all these if-else checks in CopyFrom() can be\n> > moved to a single function which can probably be named as\n> > IdentifyCopyInsertMethod() and this function can be called in\n> > IsParallelCopyAllowed(). This will ensure that in case of Parallel\n> > Copy when the leader has performed all these checks, the worker won't\n> > do it again. I also feel that it will make the code look a bit\n> > cleaner.\n> >\n>\n> Just rewriting above comment to make it a bit more clear:\n>\n> I think, if possible, all these if-else checks in CopyFrom() should be\n> moved to a separate function which can probably be named as\n> IdentifyCopyInsertMethod() and this function called from\n> IsParallelCopyAllowed() and CopyFrom() functions. It will only be\n> called from CopyFrom() when IsParallelCopy() returns false. This will\n> ensure that in case of Parallel Copy if the leader has performed all\n> these checks, the worker won't do it again. I also feel that having a\n> separate function containing all these checks will make the code look\n> a bit cleaner.\n>\n\nIn the recent patch posted we have changed it to simplify the check\nfor parallel copy, it is not an exact match. 
I feel this comment is\nnot applicable on the latest patch\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Oct 2020 21:07:02 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi \r\n\r\nI found some issue in v9-0002\r\n\r\n1.\r\n+\r\n+\telog(DEBUG1, \"[Worker] Processing - line position:%d, block:%d, unprocessed lines:%d, offset:%d, line size:%d\",\r\n+\t\t write_pos, lineInfo->first_block,\r\n+\t\t pg_atomic_read_u32(&data_blk_ptr->unprocessed_line_parts),\r\n+\t\t offset, pg_atomic_read_u32(&lineInfo->line_size));\r\n+\r\n\r\nwrite_pos or other variable to be printed here are type of uint32, I think it'better to use '%u' in elog msg.\r\n\r\n2.\r\n+\t\t * line_size will be set. Read the line_size again to be sure if it is\r\n+\t\t * completed or partial block.\r\n+\t\t */\r\n+\t\tdataSize = pg_atomic_read_u32(&lineInfo->line_size);\r\n+\t\tif (dataSize)\r\n\r\nIt use dataSize( type int ) to get uint32 which seems a little dangerous.\r\nIs it better to define dataSize uint32 here? \r\n\r\n3.\r\nSince function with 'Cstate' in name has been changed to 'CState'\r\nI think we can change function PopulateCommonCstateInfo as well.\r\n\r\n4.\r\n+\tif (pcdata->worker_line_buf_count)\r\n\r\nI think some check like the above can be 'if (xxx > 0)', which seems easier to understand.\r\n\r\n\r\nBest regards,\r\nhouzj\r\n\n\n", "msg_date": "Wed, 28 Oct 2020 12:06:34 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel copy" }, { "msg_contents": "On Tue, Oct 27, 2020 at 7:06 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n[latest version]\n\nI think the parallel-safety checks in this patch\n(v9-0002-Allow-copy-from-command-to-process-data-from-file) are\nincomplete and wrong. 
See below comments.\n1.\n+static pg_attribute_always_inline bool\n+CheckExprParallelSafety(CopyState cstate)\n+{\n+ if (contain_volatile_functions(cstate->whereClause))\n+ {\n+ if (max_parallel_hazard((Query *) cstate->whereClause) != PROPARALLEL_SAFE)\n+ return false;\n+ }\n\nI don't understand the above check. Why do we only need to check where\nclause for parallel-safety when it contains volatile functions? It\nshould be checked otherwise as well, no? The similar comment applies\nto other checks in this function. Also, I don't think there is a need\nto make this function inline.\n\n2.\n+/*\n+ * IsParallelCopyAllowed\n+ *\n+ * Check if parallel copy can be allowed.\n+ */\n+bool\n+IsParallelCopyAllowed(CopyState cstate)\n{\n..\n+ * When there are BEFORE/AFTER/INSTEAD OF row triggers on the table. We do\n+ * not allow parallelism in such cases because such triggers might query\n+ * the table we are inserting into and act differently if the tuples that\n+ * have already been processed and prepared for insertion are not there.\n+ * Now, if we allow parallelism with such triggers the behaviour would\n+ * depend on if the parallel worker has already inserted or not that\n+ * particular tuples.\n+ */\n+ if (cstate->rel->trigdesc != NULL &&\n+ (cstate->rel->trigdesc->trig_insert_after_statement ||\n+ cstate->rel->trigdesc->trig_insert_new_table ||\n+ cstate->rel->trigdesc->trig_insert_before_row ||\n+ cstate->rel->trigdesc->trig_insert_after_row ||\n+ cstate->rel->trigdesc->trig_insert_instead_row))\n+ return false;\n..\n\nWhy do we need to disable parallelism for before/after row triggers\nunless they have parallel-unsafe functions? I see a few lines down in\nthis function you are checking parallel-safety of trigger functions,\nwhat is the use of the same if you are already disabling parallelism\nwith the above check.\n\n3. What about if the index on table has expressions that are\nparallel-unsafe? 
What is your strategy to check parallel-safety for\npartitioned tables?\n\nI suggest checking Greg's patch for parallel-safety of Inserts [1]. I\nthink you will find that most of those checks are required here as\nwell and see how we can use that patch (at least what is common). I\nfeel the first patch should be just to have parallel-safety checks and\nwe can test that by trying to enable Copy with force_parallel_mode. We\ncan build the rest of the patch atop of it or in other words, let's\nmove all parallel-safety work into a separate patch.\n\nFew assorted comments:\n========================\n1.\n+/*\n+ * ESTIMATE_NODE_SIZE - Estimate the size required for node type in shared\n+ * memory.\n+ */\n+#define ESTIMATE_NODE_SIZE(list, listStr, strsize) \\\n+{ \\\n+ uint32 estsize = sizeof(uint32); \\\n+ if ((List *)list != NIL) \\\n+ { \\\n+ listStr = nodeToString(list); \\\n+ estsize += strlen(listStr) + 1; \\\n+ } \\\n+ \\\n+ strsize = add_size(strsize, estsize); \\\n+}\n\nThis can be probably a function instead of a macro.\n\n2.\n+/*\n+ * ESTIMATE_1BYTE_STR_SIZE - Estimate the size required for 1Byte strings in\n+ * shared memory.\n+ */\n+#define ESTIMATE_1BYTE_STR_SIZE(src, strsize) \\\n+{ \\\n+ strsize = add_size(strsize, sizeof(uint8)); \\\n+ strsize = add_size(strsize, (src) ? 1 : 0); \\\n+}\n\nThis could be an inline function.\n\n3.\n+/*\n+ * SERIALIZE_1BYTE_STR - Copy 1Byte strings to shared memory.\n+ */\n+#define SERIALIZE_1BYTE_STR(dest, src, copiedsize) \\\n+{ \\\n+ uint8 len = (src) ? 1 : 0; \\\n+ memcpy(dest + copiedsize, (uint8 *) &len, sizeof(uint8)); \\\n+ copiedsize += sizeof(uint8); \\\n+ if (src) \\\n+ dest[copiedsize++] = src[0]; \\\n+}\n\nSimilarly, this could be a function. I think keeping such things as\nmacros in-between code makes it difficult to read. Please see if you\ncan make these and similar macros as functions unless they are doing\nfew memory instructions. 
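For instance, ESTIMATE_1BYTE_STR_SIZE could be turned into a small inline function along these lines (just a sketch of the shape, not proposed patch code — it uses local stand-ins for Size/add_size so it compiles on its own, and it returns the new total instead of assigning through a macro argument):

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Stand-ins for PostgreSQL's Size and add_size(), only so that this
 * sketch is self-contained; the real add_size() also checks for
 * overflow.
 */
typedef size_t Size;

static Size
add_size(Size s1, Size s2)
{
    return s1 + s2;
}

/*
 * Sketch of ESTIMATE_1BYTE_STR_SIZE as an inline function: account for
 * one length byte, plus one data byte if the string is present.
 */
static inline Size
estimate_1byte_str_size(const char *src, Size strsize)
{
    strsize = add_size(strsize, sizeof(uint8_t));        /* length byte */
    strsize = add_size(strsize, (src != NULL) ? 1 : 0);  /* the character */
    return strsize;
}
```

The serialize counterparts could be handled the same way.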
Having functions makes it easier to debug the\ncode as well.\n\n[1] - https://www.postgresql.org/message-id/CAJcOf-cgfjj0NfYPrNFGmQJxsnNW102LTXbzqxQJuziar1EKfQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 29 Oct 2020 11:45:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On 27/10/2020 15:36, vignesh C wrote:\n> Attached v9 patches have the fixes for the above comments.\n\nI did some testing:\n\n/tmp/longdata.pl:\n--------\n#!/usr/bin/perl\n#\n# Generate three rows:\n# foo\n# longdatalongdatalongdata...\n# bar\n#\n# The length of the middle row is given as command line arg.\n#\n\nmy $bytes = $ARGV[0];\n\nprint \"foo\\n\";\nfor(my $i = 0; $i < $bytes; $i+=8){\n print \"longdata\";\n}\nprint \"\\n\";\nprint \"bar\\n\";\n--------\n\npostgres=# copy longdata from program 'perl /tmp/longdata.pl 100000000' \nwith (parallel 2);\n\nThis gets stuck forever (or at least I didn't have the patience to wait \nit finish). Both worker processes are consuming 100% of CPU.\n\n- Heikki\n\n\n", "msg_date": "Thu, 29 Oct 2020 10:50:44 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On 27/10/2020 15:36, vignesh C wrote:\n>> Attached v9 patches have the fixes for the above comments.\n\n>I did some testing:\n\nI did some testing as well and have a cosmetic remark:\n\npostgres=# copy t1 from '/var/tmp/aa.txt' with (parallel 1000000000);\nERROR: value 1000000000 out of bounds for option \"parallel\"\nDETAIL: Valid values are between \"1\" and \"1024\".\npostgres=# copy t1 from '/var/tmp/aa.txt' with (parallel 100000000000);\nERROR: parallel requires an integer value\npostgres=# \n\nWouldn't it make more sense to only have one error message? 
The first one seems to be the better message.\n\nRegards\nDaniel\n\n", "msg_date": "Thu, 29 Oct 2020 08:56:39 +0000", "msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Oct 29, 2020 at 11:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 27, 2020 at 7:06 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> [latest version]\n>\n> I think the parallel-safety checks in this patch\n> (v9-0002-Allow-copy-from-command-to-process-data-from-file) are\n> incomplete and wrong.\n>\n\nOne more point, I have noticed that some time back [1], I have given\none suggestion related to the way workers process the set of lines\n(aka chunk). I think you can try by increasing the chunk size to say\n100, 500, 1000 and use some shared counter to remember the number of\nchunks processed.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1L-Xgw1zZEbGePmhBBWmEmLFL6rCaiOMDPnq2GNMVz-sg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 29 Oct 2020 14:54:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On 27/10/2020 15:36, vignesh C wrote:\n> Attached v9 patches have the fixes for the above comments.\n\nI find this design to be very complicated. Why does the line-boundary \ninformation need to be in shared memory? I think this would be much \nsimpler if each worker grabbed a fixed-size block of raw data, and \nprocessed that.\n\nIn your patch, the leader process scans the input to find out where one \nline ends and another begins, and because of that decision, the leader \nneeds to make the line boundaries available in shared memory, for the \nworker processes. If we moved that responsibility to the worker \nprocesses, you wouldn't need to keep the line boundaries in shared \nmemory. 
A worker would only need to pass enough state to the next worker \nto tell it where to start scanning the next block.\n\nWhether the leader process finds the EOLs or the worker processes, it's \npretty clear that it needs to be done ASAP, for a chunk at a time, \nbecause that cannot be done in parallel. I think some refactoring in \nCopyReadLine() and friends would be in order. It probably would be \nfaster, or at least not slower, to find all the EOLs in a block in one \ntight loop, even when parallel copy is not used.\n\n- Heikki\n\n\n", "msg_date": "Fri, 30 Oct 2020 18:36:38 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On 30/10/2020 18:36, Heikki Linnakangas wrote:\n> I find this design to be very complicated. Why does the line-boundary\n> information need to be in shared memory? I think this would be much\n> simpler if each worker grabbed a fixed-size block of raw data, and\n> processed that.\n> \n> In your patch, the leader process scans the input to find out where one\n> line ends and another begins, and because of that decision, the leader\n> needs to make the line boundaries available in shared memory, for the\n> worker processes. If we moved that responsibility to the worker\n> processes, you wouldn't need to keep the line boundaries in shared\n> memory. A worker would only need to pass enough state to the next worker\n> to tell it where to start scanning the next block.\n\nHere's a high-level sketch of how I'm imagining this to work:\n\nThe shared memory structure consists of a queue of blocks, arranged as a \nring buffer. 
Each block is of fixed size, and contains 64 kB of data, \nand a few fields for coordination:\n\ntypedef struct\n{\n /* Current state of the block */\n pg_atomic_uint32 state;\n\n /* starting offset of first line within the block */\n int startpos;\n\n char data[64 kB];\n} ParallelCopyDataBlock;\n\nWhere state is one of:\n\nenum {\n FREE, /* buffer is empty */\n FILLED, /* leader has filled the buffer with raw data */\n READY, /* start pos has been filled in, but no worker process \nhas claimed the block yet */\n PROCESSING, /* worker has claimed the block, and is processing it */\n}\n\nState changes FREE -> FILLED -> READY -> PROCESSING -> FREE. As the COPY \nprogresses, the ring of blocks will always look something like this:\n\nblk 0 startpos 0: PROCESSING [worker 1]\nblk 1 startpos 12: PROCESSING [worker 2]\nblk 2 startpos 10: READY\nblk 3 starptos -: FILLED\nblk 4 startpos -: FILLED\nblk 5 starptos -: FILLED\nblk 6 startpos -: FREE\nblk 7 startpos -: FREE\n\nTypically, each worker process is busy processing a block. After the \nblocks being processed, there is one block in READY state, and after \nthat, blocks in FILLED state.\n\nLeader process:\n\nThe leader process is simple. It picks the next FREE buffer, fills it \nwith raw data from the file, and marks it as FILLED. If no buffers are \nFREE, wait.\n\nWorker process:\n\n1. Claim next READY block from queue, by changing its state to\n PROCESSING. If the next block is not READY yet, wait until it is.\n\n2. Start scanning the block from 'startpos', finding end-of-line\n markers. (in CSV mode, need to track when we're in-quotes).\n\n3. When you reach the end of the block, if the last line continues to\n next block, wait for the next block to become FILLED. Peek into the\n next block, and copy the remaining part of the split line to a local\n buffer, and set the 'startpos' on the next block to point to the end\n of the split line. Mark the next block as READY.\n\n4. 
Process all the lines in the block, call input functions, insert\n rows.\n\n5. Mark the block as DONE.\n\nIn this design, you don't need to keep line boundaries in shared memory, \nbecause each worker process is responsible for finding the line \nboundaries of its own block.\n\nThere's a point of serialization here, in that the next block cannot be \nprocessed, until the worker working on the previous block has finished \nscanning the EOLs, and set the starting position on the next block, \nputting it in READY state. That's not very different from your patch, \nwhere you had a similar point of serialization because the leader \nscanned the EOLs, but I think the coordination between processes is \nsimpler here.\n\n- Heikki\n\n\n", "msg_date": "Fri, 30 Oct 2020 18:41:41 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On 30/10/2020 18:36, Heikki Linnakangas wrote:\n> Whether the leader process finds the EOLs or the worker processes, it's\n> pretty clear that it needs to be done ASAP, for a chunk at a time,\n> because that cannot be done in parallel. I think some refactoring in\n> CopyReadLine() and friends would be in order. It probably would be\n> faster, or at least not slower, to find all the EOLs in a block in one\n> tight loop, even when parallel copy is not used.\n\nSomething like the attached. It passes the regression tests, but it's \nquite incomplete. It's missing handing of \"\\.\" as end-of-file marker, \nand I haven't tested encoding conversions at all, for starters. Quick \ntesting suggests that this a little bit faster than the current code, \nbut the difference is small; I had to use a \"WHERE false\" option to \nreally see the difference.\n\nThe crucial thing here is that there's a new function, ParseLinesText(), \nto find all end-of-line characters in a buffer in one go. 
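In outline, the tight loop amounts to something like this (a simplified illustration only, not the code in the attached patch — it ignores \r\n, backslash escapes, doubled quotes, and the "\." marker):

```c
#include <stdbool.h>

/*
 * Simplified sketch of a one-pass end-of-line scanner: record the
 * offset just past each newline in 'offsets' and return how many were
 * found.  In CSV mode, a newline inside a double-quoted field is not a
 * line ending, so track the in-quote state.  A real implementation
 * must also handle \r\n, escape characters, doubled quotes, and "\.".
 */
static int
scan_line_endings(const char *buf, int len, bool csv_mode,
                  int *offsets, int max_offsets)
{
    bool        in_quote = false;
    int         nlines = 0;

    for (int i = 0; i < len && nlines < max_offsets; i++)
    {
        char        c = buf[i];

        if (csv_mode && c == '"')
            in_quote = !in_quote;
        else if (c == '\n' && !in_quote)
            offsets[nlines++] = i + 1;  /* start of the next line */
    }
    return nlines;
}

/* Convenience wrapper: just count the line endings in a buffer. */
static int
count_line_endings(const char *buf, int len, bool csv_mode)
{
    int         offsets[64];

    return scan_line_endings(buf, len, csv_mode, offsets, 64);
}
```

The point is that all the boundary positions for a buffer fall out of one cache-friendly pass over the data.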
In this patch, \nit's used against 'raw_buf', but with parallel copy, you could point it \nat a block in shared memory instead.\n\n- Heikki", "msg_date": "Fri, 30 Oct 2020 18:52:37 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi,\n\nI've done a bit more testing today, and I think the parsing is busted in\nsome way. Consider this:\n\n test=# create extension random;\n CREATE EXTENSION\n \n test=# create table t (a text);\n CREATE TABLE\n \n test=# insert into t select random_string(random_int(10, 256*1024)) from generate_series(1,10000);\n INSERT 0 10000\n \n test=# copy t to '/mnt/data/t.csv';\n COPY 10000\n \n test=# truncate t;\n TRUNCATE TABLE\n \n test=# copy t from '/mnt/data/t.csv';\n COPY 10000\n \n test=# truncate t;\n TRUNCATE TABLE\n \n test=# copy t from '/mnt/data/t.csv' with (parallel 2);\n ERROR: invalid byte sequence for encoding \"UTF8\": 0x00\n CONTEXT: COPY t, line 485: \"m&\\nh%_a\"%r]>qtCl:Q5ltvF~;2oS6@HB>F>og,bD$Lw'nZY\\tYl#BH\\t{(j~ryoZ08\"SGU~.}8CcTRk1\\ts$@U3szCC+U1U3i@P...\"\n parallel worker\n\n\nThe functions come from an extension I use to generate random data, I've\npushed it to github [1]. The random_string() generates a random string\nwith ASCII characters, symbols and a couple special characters (\\r\\n\\t).\nThe intent was to try loading data where a fields may span multiple 64kB\nblocks and may contain newlines etc.\n\nThe non-parallel copy works fine, the parallel one fails. 
I haven't\ninvestigated the details, but I guess it gets confused about where a\nstring starts/end, or something like that.\n\n\n[1] https://github.com/tvondra/random\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 30 Oct 2020 21:37:30 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Oct 30, 2020 at 06:41:41PM +0200, Heikki Linnakangas wrote:\n>On 30/10/2020 18:36, Heikki Linnakangas wrote:\n>>I find this design to be very complicated. Why does the line-boundary\n>>information need to be in shared memory? I think this would be much\n>>simpler if each worker grabbed a fixed-size block of raw data, and\n>>processed that.\n>>\n>>In your patch, the leader process scans the input to find out where one\n>>line ends and another begins, and because of that decision, the leader\n>>needs to make the line boundaries available in shared memory, for the\n>>worker processes. If we moved that responsibility to the worker\n>>processes, you wouldn't need to keep the line boundaries in shared\n>>memory. A worker would only need to pass enough state to the next worker\n>>to tell it where to start scanning the next block.\n>\n>Here's a high-level sketch of how I'm imagining this to work:\n>\n>The shared memory structure consists of a queue of blocks, arranged as \n>a ring buffer. 
Each block is of fixed size, and contains 64 kB of \n>data, and a few fields for coordination:\n>\n>typedef struct\n>{\n> /* Current state of the block */\n> pg_atomic_uint32 state;\n>\n> /* starting offset of first line within the block */\n> int startpos;\n>\n> char data[64 kB];\n>} ParallelCopyDataBlock;\n>\n>Where state is one of:\n>\n>enum {\n> FREE, /* buffer is empty */\n> FILLED, /* leader has filled the buffer with raw data */\n> READY, /* start pos has been filled in, but no worker process \n>has claimed the block yet */\n> PROCESSING, /* worker has claimed the block, and is processing it */\n>}\n>\n>State changes FREE -> FILLED -> READY -> PROCESSING -> FREE. As the \n>COPY progresses, the ring of blocks will always look something like \n>this:\n>\n>blk 0 startpos 0: PROCESSING [worker 1]\n>blk 1 startpos 12: PROCESSING [worker 2]\n>blk 2 startpos 10: READY\n>blk 3 starptos -: FILLED\n>blk 4 startpos -: FILLED\n>blk 5 starptos -: FILLED\n>blk 6 startpos -: FREE\n>blk 7 startpos -: FREE\n>\n>Typically, each worker process is busy processing a block. After the \n>blocks being processed, there is one block in READY state, and after \n>that, blocks in FILLED state.\n>\n>Leader process:\n>\n>The leader process is simple. It picks the next FREE buffer, fills it \n>with raw data from the file, and marks it as FILLED. If no buffers are \n>FREE, wait.\n>\n>Worker process:\n>\n>1. Claim next READY block from queue, by changing its state to\n> PROCESSING. If the next block is not READY yet, wait until it is.\n>\n>2. Start scanning the block from 'startpos', finding end-of-line\n> markers. (in CSV mode, need to track when we're in-quotes).\n>\n>3. When you reach the end of the block, if the last line continues to\n> next block, wait for the next block to become FILLED. Peek into the\n> next block, and copy the remaining part of the split line to a local\n> buffer, and set the 'startpos' on the next block to point to the end\n> of the split line. 
Mark the next block as READY.\n>\n>4. Process all the lines in the block, call input functions, insert\n> rows.\n>\n>5. Mark the block as DONE.\n>\n>In this design, you don't need to keep line boundaries in shared \n>memory, because each worker process is responsible for finding the \n>line boundaries of its own block.\n>\n>There's a point of serialization here, in that the next block cannot \n>be processed, until the worker working on the previous block has \n>finished scanning the EOLs, and set the starting position on the next \n>block, putting it in READY state. That's not very different from your \n>patch, where you had a similar point of serialization because the \n>leader scanned the EOLs, but I think the coordination between \n>processes is simpler here.\n>\n\nI agree this design looks simpler. I'm a bit worried about serializing\nthe parsing like this, though. It's true the current approach (where the\nfirst phase of parsing happens in the leader) has a similar issue, but I\nthink it would be easier to improve that in that design.\n\nMy plan was to parallelize the parsing roughly like this:\n\n1) split the input buffer into smaller chunks\n\n2) let workers scan the buffers and record positions of interesting\ncharacters (delimiters, quotes, ...) and pass it back to the leader\n\n3) use the information to actually parse the input data (we only need to\nlook at the interesting characters, skipping large parts of data)\n\n4) pass the parsed chunks to workers, just like in the current patch\n\n\nBut maybe something like that would be possible even with the approach\nyou propose - we could have a special parse phase for processing each\nbuffer, where any worker could look for the special characters, record\nthe positions in a bitmap next to the buffer. 
So the whole sequence of\nstates would look something like this:\n\n EMPTY\n FILLED\n PARSED\n READY\n PROCESSING\n\nOf course, the question is whether parsing really is sufficiently\nexpensive for this to be worth it.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Fri, 30 Oct 2020 21:56:00 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On 30/10/2020 22:56, Tomas Vondra wrote:\n> I agree this design looks simpler. I'm a bit worried about serializing\n> the parsing like this, though. It's true the current approach (where the\n> first phase of parsing happens in the leader) has a similar issue, but I\n> think it would be easier to improve that in that design.\n> \n> My plan was to parallelize the parsing roughly like this:\n> \n> 1) split the input buffer into smaller chunks\n> \n> 2) let workers scan the buffers and record positions of interesting\n> characters (delimiters, quotes, ...) and pass it back to the leader\n> \n> 3) use the information to actually parse the input data (we only need to\n> look at the interesting characters, skipping large parts of data)\n> \n> 4) pass the parsed chunks to workers, just like in the current patch\n> \n> \n> But maybe something like that would be possible even with the approach\n> you propose - we could have a special parse phase for processing each\n> buffer, where any worker could look for the special characters, record\n> the positions in a bitmap next to the buffer. So the whole sequence of\n> states would look something like this:\n> \n> EMPTY\n> FILLED\n> PARSED\n> READY\n> PROCESSING\n\nI think it's even simpler than that. 
You don't need to communicate the \n\"interesting positions\" between processes, if the same worker takes care \nof the chunk through all states from FILLED to DONE.\n\nYou can build the bitmap of interesting positions immediately in FILLED \nstate, independently of all previous blocks. Once you've built the \nbitmap, you need to wait for the information on where the first line \nstarts, but presumably finding the interesting positions is the \nexpensive part.\n\n> Of course, the question is whether parsing really is sufficiently\n> expensive for this to be worth it.\n\nYeah, I don't think it's worth it. Splitting the lines is pretty fast, I \nthink we have many years to come before that becomes a bottleneck. But \nif it turns out I'm wrong and we need to implement that, the path is \npretty straightforward.\n\n- Heikki\n\n\n", "msg_date": "Sat, 31 Oct 2020 00:09:32 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Sat, Oct 31, 2020 at 12:09:32AM +0200, Heikki Linnakangas wrote:\n>On 30/10/2020 22:56, Tomas Vondra wrote:\n>>I agree this design looks simpler. I'm a bit worried about serializing\n>>the parsing like this, though. It's true the current approach (where the\n>>first phase of parsing happens in the leader) has a similar issue, but I\n>>think it would be easier to improve that in that design.\n>>\n>>My plan was to parallelize the parsing roughly like this:\n>>\n>>1) split the input buffer into smaller chunks\n>>\n>>2) let workers scan the buffers and record positions of interesting\n>>characters (delimiters, quotes, ...) 
and pass it back to the leader\n>>\n>>3) use the information to actually parse the input data (we only need to\n>>look at the interesting characters, skipping large parts of data)\n>>\n>>4) pass the parsed chunks to workers, just like in the current patch\n>>\n>>\n>>But maybe something like that would be possible even with the approach\n>>you propose - we could have a special parse phase for processing each\n>>buffer, where any worker could look for the special characters, record\n>>the positions in a bitmap next to the buffer. So the whole sequence of\n>>states would look something like this:\n>>\n>> EMPTY\n>> FILLED\n>> PARSED\n>> READY\n>> PROCESSING\n>\n>I think it's even simpler than that. You don't need to communicate the \n>\"interesting positions\" between processes, if the same worker takes \n>care of the chunk through all states from FILLED to DONE.\n>\n>You can build the bitmap of interesting positions immediately in \n>FILLED state, independently of all previous blocks. Once you've built \n>the bitmap, you need to wait for the information on where the first \n>line starts, but presumably finding the interesting positions is the \n>expensive part.\n>\n\nI don't think it's that simple. For example, the previous block may\ncontain a very long value (say, 1MB), so a bunch of blocks have to be\nprocessed by the same worker. That probably makes the state transitions\na bit, and it also means the bitmap would need to be passed to the\nworker that actually processes the block. Or we might just ignore this,\non the grounds that it's not a very common situation.\n\n\n>>Of course, the question is whether parsing really is sufficiently\n>>expensive for this to be worth it.\n>\n>Yeah, I don't think it's worth it. Splitting the lines is pretty fast, \n>I think we have many years to come before that becomes a bottleneck. \n>But if it turns out I'm wrong and we need to implement that, the path \n>is pretty straightforward.\n>\n\nOK. 
I agree the parsing is relatively cheap, and I don't recall seeing\nCSV parsing as a bottleneck in production. I suspect that's might be\nsimply because we're hitting other bottlenecks first, though.\n\nregard\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sat, 31 Oct 2020 14:39:38 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Oct 30, 2020 at 10:11 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> Leader process:\n>\n> The leader process is simple. It picks the next FREE buffer, fills it\n> with raw data from the file, and marks it as FILLED. If no buffers are\n> FREE, wait.\n>\n> Worker process:\n>\n> 1. Claim next READY block from queue, by changing its state to\n> PROCESSING. If the next block is not READY yet, wait until it is.\n>\n> 2. Start scanning the block from 'startpos', finding end-of-line\n> markers. (in CSV mode, need to track when we're in-quotes).\n>\n> 3. When you reach the end of the block, if the last line continues to\n> next block, wait for the next block to become FILLED. Peek into the\n> next block, and copy the remaining part of the split line to a local\n> buffer, and set the 'startpos' on the next block to point to the end\n> of the split line. Mark the next block as READY.\n>\n> 4. Process all the lines in the block, call input functions, insert\n> rows.\n>\n> 5. Mark the block as DONE.\n>\n> In this design, you don't need to keep line boundaries in shared memory,\n> because each worker process is responsible for finding the line\n> boundaries of its own block.\n>\n> There's a point of serialization here, in that the next block cannot be\n> processed, until the worker working on the previous block has finished\n> scanning the EOLs, and set the starting position on the next block,\n> putting it in READY state. 
That's not very different from your patch,\n> where you had a similar point of serialization because the leader\n> scanned the EOLs,\n>\n\nBut in the design (single producer multiple consumer) used by the\npatch the worker doesn't need to wait till the complete block is\nprocessed, it can start processing the lines already found. This will\nalso allow workers to start much earlier to process the data as it\ndoesn't need to wait for all the offsets corresponding to 64K block\nready. However, in the design where each worker is processing the 64K\nblock, it can lead to much longer waits. I think this will impact the\nCopy STDIN case more where in most cases (200-300 bytes tuples) we\nreceive line-by-line from client and find the line-endings by leader.\nIf the leader doesn't find the line-endings the workers need to wait\ntill the leader fill the entire 64K chunk, OTOH, with current approach\nthe worker can start as soon as leader is able to populate some\nminimum number of line-endings\n\nThe other point is that the leader backend won't be used completely as\nit is only doing a very small part (primarily reading the file) of the\noverall work.\n\nWe have discussed both these approaches (a) single producer multiple\nconsumer, and (b) all workers doing the processing as you are saying\nin the beginning and concluded that (a) is better, see some of the\nrelevant emails [1][2][3].\n\n[1] - https://www.postgresql.org/message-id/20200413201633.cki4nsptynq7blhg%40alap3.anarazel.de\n[2] - https://www.postgresql.org/message-id/20200415181913.4gjqcnuzxfzbbzxa%40alap3.anarazel.de\n[3] - https://www.postgresql.org/message-id/78C0107E-62F2-4F76-BFD8-34C73B716944%40anarazel.de\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 2 Nov 2020 11:44:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On 02/11/2020 08:14, Amit Kapila wrote:\n> On Fri, Oct 30, 2020 at 10:11 PM Heikki 
Linnakangas <hlinnaka@iki.fi> wrote:\n>>\n>> Leader process:\n>>\n>> The leader process is simple. It picks the next FREE buffer, fills it\n>> with raw data from the file, and marks it as FILLED. If no buffers are\n>> FREE, wait.\n>>\n>> Worker process:\n>>\n>> 1. Claim next READY block from queue, by changing its state to\n>> PROCESSING. If the next block is not READY yet, wait until it is.\n>>\n>> 2. Start scanning the block from 'startpos', finding end-of-line\n>> markers. (in CSV mode, need to track when we're in-quotes).\n>>\n>> 3. When you reach the end of the block, if the last line continues to\n>> next block, wait for the next block to become FILLED. Peek into the\n>> next block, and copy the remaining part of the split line to a local\n>> buffer, and set the 'startpos' on the next block to point to the end\n>> of the split line. Mark the next block as READY.\n>>\n>> 4. Process all the lines in the block, call input functions, insert\n>> rows.\n>>\n>> 5. Mark the block as DONE.\n>>\n>> In this design, you don't need to keep line boundaries in shared memory,\n>> because each worker process is responsible for finding the line\n>> boundaries of its own block.\n>>\n>> There's a point of serialization here, in that the next block cannot be\n>> processed, until the worker working on the previous block has finished\n>> scanning the EOLs, and set the starting position on the next block,\n>> putting it in READY state. That's not very different from your patch,\n>> where you had a similar point of serialization because the leader\n>> scanned the EOLs,\n> \n> But in the design (single producer multiple consumer) used by the\n> patch the worker doesn't need to wait till the complete block is\n> processed, it can start processing the lines already found. This will\n> also allow workers to start much earlier to process the data as it\n> doesn't need to wait for all the offsets corresponding to 64K block\n> ready. 
However, in the design where each worker is processing the 64K\n> block, it can lead to much longer waits. I think this will impact the\n> Copy STDIN case more where in most cases (200-300 bytes tuples) we\n> receive line-by-line from client and find the line-endings by leader.\n> If the leader doesn't find the line-endings the workers need to wait\n> till the leader fill the entire 64K chunk, OTOH, with current approach\n> the worker can start as soon as leader is able to populate some\n> minimum number of line-endings\n\nYou can use a smaller block size. However, the point of parallel copy is \nto maximize bandwidth. If the workers ever have to sit idle, it means \nthat the bottleneck is in receiving data from the client, i.e. the \nbackend is fast enough, and you can't make the overall COPY finish any \nfaster no matter how you do it.\n\n> The other point is that the leader backend won't be used completely as\n> it is only doing a very small part (primarily reading the file) of the\n> overall work.\n\nAn idle process doesn't cost anything. If you have free CPU resources, \nuse more workers.\n\n> We have discussed both these approaches (a) single producer multiple\n> consumer, and (b) all workers doing the processing as you are saying\n> in the beginning and concluded that (a) is better, see some of the\n> relevant emails [1][2][3].\n> \n> [1] - https://www.postgresql.org/message-id/20200413201633.cki4nsptynq7blhg%40alap3.anarazel.de\n> [2] - https://www.postgresql.org/message-id/20200415181913.4gjqcnuzxfzbbzxa%40alap3.anarazel.de\n> [3] - https://www.postgresql.org/message-id/78C0107E-62F2-4F76-BFD8-34C73B716944%40anarazel.de\n\nSorry I'm late to the party. I don't think the design I proposed was \ndiscussed in that threads. The alternative that's discussed in that \nthread seems to be something much more fine-grained, where processes \nclaim individual lines. 
I'm not sure though, I didn't fully understand \nthe alternative designs.\n\nI want to throw out one more idea. It's an interim step, not the final \nsolution we want, but a useful step in getting there:\n\nHave the leader process scan the input for line-endings. Split the input \ndata into blocks of slightly under 64 kB in size, so that a line never \ncrosses a block. Put the blocks in shared memory.\n\nA worker process claims a block from shared memory, processes it from \nbeginning to end. It *also* has to parse the input to split it into lines.\n\nIn this design, the line-splitting is done twice. That's clearly not \noptimal, and we want to avoid that in the final patch, but I think it \nwould be a useful milestone. After that patch is done, write another \npatch to either a) implement the design I sketched, where blocks are \nfixed-size and a worker notifies the next worker on where the first line \nin the next block begins, or b) have the leader process report the \nline-ending positions in shared memory, so that workers don't need to \nscan them again.\n\nEven if we apply the patches together, I think splitting them like that \nwould make for easier review.\n\n- Heikki\n\n\n", "msg_date": "Mon, 2 Nov 2020 09:10:09 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On 02/11/2020 09:10, Heikki Linnakangas wrote:\n> On 02/11/2020 08:14, Amit Kapila wrote:\n>> We have discussed both these approaches (a) single producer multiple\n>> consumer, and (b) all workers doing the processing as you are saying\n>> in the beginning and concluded that (a) is better, see some of the\n>> relevant emails [1][2][3].\n>>\n>> [1] - https://www.postgresql.org/message-id/20200413201633.cki4nsptynq7blhg%40alap3.anarazel.de\n>> [2] - https://www.postgresql.org/message-id/20200415181913.4gjqcnuzxfzbbzxa%40alap3.anarazel.de\n>> [3] - 
https://www.postgresql.org/message-id/78C0107E-62F2-4F76-BFD8-34C73B716944%40anarazel.de\n> \n> Sorry I'm late to the party. I don't think the design I proposed was\n> discussed in that threads. The alternative that's discussed in that\n> thread seems to be something much more fine-grained, where processes\n> claim individual lines. I'm not sure though, I didn't fully understand\n> the alternative designs.\n\nI read the thread more carefully, and I think Robert had basically the \nright idea here \n(https://www.postgresql.org/message-id/CA%2BTgmoZMU4az9MmdJtg04pjRa0wmWQtmoMxttdxNrupYJNcR3w%40mail.gmail.com):\n\n> I really think we don't want a single worker in charge of finding\n> tuple boundaries for everybody. That adds a lot of unnecessary\n> inter-process communication and synchronization. Each process should\n> just get the next tuple starting after where the last one ended, and\n> then advance the end pointer so that the next process can do the same\n> thing. [...]\n\nAnd here \n(https://www.postgresql.org/message-id/CA%2BTgmoZw%2BF3y%2BoaxEsHEZBxdL1x1KAJ7pRMNgCqX0WjmjGNLrA%40mail.gmail.com):\n\n> On Thu, Apr 9, 2020 at 2:55 PM Andres Freund\n<andres(at)anarazel(dot)de> wrote:\n>> I'm fairly certain that we do *not* want to distribute input data\n>> between processes on a single tuple basis. Probably not even below\n>> a few\nhundred kb. If there's any sort of natural clustering in the loaded data\n- extremely common, think timestamps - splitting on a granular basis\nwill make indexing much more expensive. And have a lot more contention.\n> \n> That's a fair point. I think the solution ought to be that once any\n> process starts finding line endings, it continues until it's grabbed\n> at least a certain amount of data for itself. Then it stops and lets\n> some other process grab a chunk of data.\nYes! That's pretty close to the design I sketched. 
I imagined that the \nleader would divide the input into 64 kB blocks, and each block would \nhave few metadata fields, notably the starting position of the first \nline in the block. I think Robert envisioned having a single \"next \nstarting position\" field in shared memory. That works too, and is even \nsimpler, so +1 for that.\n\nFor some reason, the discussion took a different turn from there, to \ndiscuss how the line-endings (called \"chunks\" in the discussion) should \nbe represented in shared memory. But none of that is necessary with \nRobert's design.\n\n- Heikki\n\n\n", "msg_date": "Mon, 2 Nov 2020 09:50:58 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, Nov 2, 2020 at 12:40 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 02/11/2020 08:14, Amit Kapila wrote:\n> > On Fri, Oct 30, 2020 at 10:11 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >>\n> >> In this design, you don't need to keep line boundaries in shared memory,\n> >> because each worker process is responsible for finding the line\n> >> boundaries of its own block.\n> >>\n> >> There's a point of serialization here, in that the next block cannot be\n> >> processed, until the worker working on the previous block has finished\n> >> scanning the EOLs, and set the starting position on the next block,\n> >> putting it in READY state. That's not very different from your patch,\n> >> where you had a similar point of serialization because the leader\n> >> scanned the EOLs,\n> >\n> > But in the design (single producer multiple consumer) used by the\n> > patch the worker doesn't need to wait till the complete block is\n> > processed, it can start processing the lines already found. This will\n> > also allow workers to start much earlier to process the data as it\n> > doesn't need to wait for all the offsets corresponding to 64K block\n> > ready. 
However, in the design where each worker is processing the 64K\n> > block, it can lead to much longer waits. I think this will impact the\n> > Copy STDIN case more where in most cases (200-300 bytes tuples) we\n> > receive line-by-line from client and find the line-endings by leader.\n> > If the leader doesn't find the line-endings the workers need to wait\n> > till the leader fill the entire 64K chunk, OTOH, with current approach\n> > the worker can start as soon as leader is able to populate some\n> > minimum number of line-endings\n>\n> You can use a smaller block size.\n>\n\nSure, but the same problem can happen if the last line in that block\nis too long and we need to peek into the next block. And then there\ncould be cases where a single line could be greater than 64K.\n\n> However, the point of parallel copy is\n> to maximize bandwidth.\n>\n\nOkay, but this first-phase (finding the line boundaries) can anyway be\nnot done in parallel and we have seen in some of the initial\nbenchmarking that this initial phase is a small part of work\nespecially when the table has indexes, constraints, etc. So, I think\nit won't matter much if this splitting is done in a single process or\nmultiple processes.\n\n> If the workers ever have to sit idle, it means\n> that the bottleneck is in receiving data from the client, i.e. the\n> backend is fast enough, and you can't make the overall COPY finish any\n> faster no matter how you do it.\n>\n> > The other point is that the leader backend won't be used completely as\n> > it is only doing a very small part (primarily reading the file) of the\n> > overall work.\n>\n> An idle process doesn't cost anything. 
If you have free CPU resources,\n> use more workers.\n>\n> > We have discussed both these approaches (a) single producer multiple\n> > consumer, and (b) all workers doing the processing as you are saying\n> > in the beginning and concluded that (a) is better, see some of the\n> > relevant emails [1][2][3].\n> >\n> > [1] - https://www.postgresql.org/message-id/20200413201633.cki4nsptynq7blhg%40alap3.anarazel.de\n> > [2] - https://www.postgresql.org/message-id/20200415181913.4gjqcnuzxfzbbzxa%40alap3.anarazel.de\n> > [3] - https://www.postgresql.org/message-id/78C0107E-62F2-4F76-BFD8-34C73B716944%40anarazel.de\n>\n> Sorry I'm late to the party. I don't think the design I proposed was\n> discussed in that threads.\n>\n\nI think something close to that is discussed as you have noticed in\nyour next email but IIRC, because many people (Andres, Ants, myself\nand author) favoured the current approach (single reader and multiple\nconsumers) we decided to go with that. I feel this patch is very much\nin the POC stage due to which the code doesn't look good and as we\nmove forward we need to see what is the better way to improve it,\nmaybe one of the ways is to split it as you are suggesting so that it\ncan be easier to review. I think the other important thing which this\npatch has not addressed properly is the parallel-safety checks as\npointed by me earlier. There are two things to solve there (a) the\nlower-level code (like heap_* APIs, CommandCounterIncrement, xact.c\nAPIs, etc.) 
have checks which doesn't allow any writes, we need to see\nwhich of those we can open now (or do some additional work to prevent\nfrom those checks) after some of the work done for parallel-writes in\nPG-13[1][2], and (b) in which all cases we can parallel-writes\n(parallel copy) is allowed, for example need to identify whether table\nor one of its partitions has any constraint/expression which is\nparallel-unsafe.\n\n[1] 85f6b49 Allow relation extension lock to conflict among parallel\ngroup members\n[2] 3ba59cc Allow page lock to conflict among parallel group members\n\n>\n> I want to throw out one more idea. It's an interim step, not the final\n> solution we want, but a useful step in getting there:\n>\n> Have the leader process scan the input for line-endings. Split the input\n> data into blocks of slightly under 64 kB in size, so that a line never\n> crosses a block. Put the blocks in shared memory.\n>\n> A worker process claims a block from shared memory, processes it from\n> beginning to end. It *also* has to parse the input to split it into lines.\n>\n> In this design, the line-splitting is done twice. That's clearly not\n> optimal, and we want to avoid that in the final patch, but I think it\n> would be a useful milestone. 
After that patch is done, write another\n> patch to either a) implement the design I sketched, where blocks are\n> fixed-size and a worker notifies the next worker on where the first line\n> in next block begins, or b) have the leader process report the\n> line-ending positions in shared memory, so that workers don't need to\n> scan them again.\n>\n> Even if we apply the patches together, I think splitting them like that\n> would make for easier review.\n>\n\nI think this is worth exploring especially if it makes the patch\neasier to review.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 3 Nov 2020 14:29:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On 03/11/2020 10:59, Amit Kapila wrote:\n> On Mon, Nov 2, 2020 at 12:40 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> However, the point of parallel copy is to maximize bandwidth.\n> \n> Okay, but this first-phase (finding the line boundaries) can anyway\n> be not done in parallel and we have seen in some of the initial \n> benchmarking that this initial phase is a small part of work \n> especially when the table has indexes, constraints, etc. So, I think \n> it won't matter much if this splitting is done in a single process\n> or multiple processes.\nRight, it won't matter performance-wise. That's not my point. The \ndifference is in the complexity. If you don't store the line boundaries \nin shared memory, you get away with much simpler shared memory structures.\n\n> I think something close to that is discussed as you have noticed in\n> your next email but IIRC, because many people (Andres, Ants, myself\n> and author) favoured the current approach (single reader and multiple\n> consumers) we decided to go with that. 
I feel this patch is very much\n> in the POC stage due to which the code doesn't look good and as we\n> move forward we need to see what is the better way to improve it,\n> maybe one of the ways is to split it as you are suggesting so that it\n> can be easier to review.\n\nSure. I think the roadmap here is:\n\n1. Split copy.c [1]. Not strictly necessary, but I think it'd make this \nnice to review and work with.\n\n2. Refactor CopyReadLine(), so that finding the line-endings and the \nrest of the line-parsing are separated into separate functions.\n\n3. Implement parallel copy.\n\n> I think the other important thing which this\n> patch has not addressed properly is the parallel-safety checks as\n> pointed by me earlier. There are two things to solve there (a) the\n> lower-level code (like heap_* APIs, CommandCounterIncrement, xact.c\n> APIs, etc.) have checks which doesn't allow any writes, we need to see\n> which of those we can open now (or do some additional work to prevent\n> from those checks) after some of the work done for parallel-writes in\n> PG-13[1][2], and (b) in which all cases we can parallel-writes\n> (parallel copy) is allowed, for example need to identify whether table\n> or one of its partitions has any constraint/expression which is\n> parallel-unsafe.\n\nAgreed, that needs to be solved. 
I haven't given it any thought myself.\n\n- Heikki\n\n[1] \nhttps://www.postgresql.org/message-id/8e15b560-f387-7acc-ac90-763986617bfb%40iki.fi\n\n\n", "msg_date": "Tue, 3 Nov 2020 14:35:32 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi\r\n\r\n> \r\n> my $bytes = $ARGV[0];\r\n> for(my $i = 0; $i < $bytes; $i+=8){\r\n> print \"longdata\";\r\n> }\r\n> print \"\\n\";\r\n> --------\r\n> \r\n> postgres=# copy longdata from program 'perl /tmp/longdata.pl 100000000'\r\n> with (parallel 2);\r\n> \r\n> This gets stuck forever (or at least I didn't have the patience to wait\r\n> it finish). Both worker processes are consuming 100% of CPU.\r\n\r\nI had a look over this problem.\r\n\r\nthe ParallelCopyDataBlock has size limit:\r\n\tuint8\t\tskip_bytes;\r\n\tchar\t\tdata[DATA_BLOCK_SIZE];\t/* data read from file */\r\n\r\nIt seems the input line is so long that the leader process run out of the Shared memory among parallel copy workers.\r\nAnd the leader process keep waiting free block.\r\n\r\nFor the worker process, it wait util line_state becomes LINE_LEADER_POPULATED,\r\nBut leader process won't set the line_state unless it read the whole line.\r\n\r\nSo it stuck forever.\r\nMay be we should reconsider about this situation.\r\n\r\nThe stack is as follows:\r\n\r\nLeader stack:\r\n#3 0x000000000075f7a1 in WaitLatch (latch=<optimized out>, wakeEvents=wakeEvents@entry=41, timeout=timeout@entry=1, wait_event_info=wait_event_info@entry=150994945) at latch.c:411\r\n#4 0x00000000005a9245 in WaitGetFreeCopyBlock (pcshared_info=pcshared_info@entry=0x7f26d2ed3580) at copyparallel.c:1546\r\n#5 0x00000000005a98ce in SetRawBufForLoad (cstate=cstate@entry=0x2978a88, line_size=67108864, copy_buf_len=copy_buf_len@entry=65536, raw_buf_ptr=raw_buf_ptr@entry=65536, \r\n copy_raw_buf=copy_raw_buf@entry=0x7fff4cdc0e18) at copyparallel.c:1572\r\n#6 0x00000000005a1963 in CopyReadLineText 
(cstate=cstate@entry=0x2978a88) at copy.c:4058\r\n#7 0x00000000005a4e76 in CopyReadLine (cstate=cstate@entry=0x2978a88) at copy.c:3863\r\n\r\nWorker stack:\r\n#0 GetLinePosition (cstate=cstate@entry=0x29e1f28) at copyparallel.c:1474\r\n#1 0x00000000005a8aa4 in CacheLineInfo (cstate=cstate@entry=0x29e1f28, buff_count=buff_count@entry=0) at copyparallel.c:711\r\n#2 0x00000000005a8e46 in GetWorkerLine (cstate=cstate@entry=0x29e1f28) at copyparallel.c:885\r\n#3 0x00000000005a4f2e in NextCopyFromRawFields (cstate=cstate@entry=0x29e1f28, fields=fields@entry=0x7fff4cdc0b48, nfields=nfields@entry=0x7fff4cdc0b44) at copy.c:3615\r\n#4 0x00000000005a50af in NextCopyFrom (cstate=cstate@entry=0x29e1f28, econtext=econtext@entry=0x2a358d8, values=0x2a42068, nulls=0x2a42070) at copy.c:3696\r\n#5 0x00000000005a5b90 in CopyFrom (cstate=cstate@entry=0x29e1f28) at copy.c:2985\r\n\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\n\n", "msg_date": "Thu, 5 Nov 2020 13:02:52 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel copy" }, { "msg_contents": "On Thu, Nov 5, 2020 at 6:33 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>\n> Hi\n>\n> >\n> > my $bytes = $ARGV[0];\n> > for(my $i = 0; $i < $bytes; $i+=8){\n> > print \"longdata\";\n> > }\n> > print \"\\n\";\n> > --------\n> >\n> > postgres=# copy longdata from program 'perl /tmp/longdata.pl 100000000'\n> > with (parallel 2);\n> >\n> > This gets stuck forever (or at least I didn't have the patience to wait\n> > it finish). 
Both worker processes are consuming 100% of CPU.\n>\n> I had a look over this problem.\n>\n> the ParallelCopyDataBlock has size limit:\n> uint8 skip_bytes;\n> char data[DATA_BLOCK_SIZE]; /* data read from file */\n>\n> It seems the input line is so long that the leader process run out of the Shared memory among parallel copy workers.\n> And the leader process keep waiting free block.\n>\n> For the worker process, it wait util line_state becomes LINE_LEADER_POPULATED,\n> But leader process won't set the line_state unless it read the whole line.\n>\n> So it stuck forever.\n> May be we should reconsider about this situation.\n>\n> The stack is as follows:\n>\n> Leader stack:\n> #3 0x000000000075f7a1 in WaitLatch (latch=<optimized out>, wakeEvents=wakeEvents@entry=41, timeout=timeout@entry=1, wait_event_info=wait_event_info@entry=150994945) at latch.c:411\n> #4 0x00000000005a9245 in WaitGetFreeCopyBlock (pcshared_info=pcshared_info@entry=0x7f26d2ed3580) at copyparallel.c:1546\n> #5 0x00000000005a98ce in SetRawBufForLoad (cstate=cstate@entry=0x2978a88, line_size=67108864, copy_buf_len=copy_buf_len@entry=65536, raw_buf_ptr=raw_buf_ptr@entry=65536,\n> copy_raw_buf=copy_raw_buf@entry=0x7fff4cdc0e18) at copyparallel.c:1572\n> #6 0x00000000005a1963 in CopyReadLineText (cstate=cstate@entry=0x2978a88) at copy.c:4058\n> #7 0x00000000005a4e76 in CopyReadLine (cstate=cstate@entry=0x2978a88) at copy.c:3863\n>\n> Worker stack:\n> #0 GetLinePosition (cstate=cstate@entry=0x29e1f28) at copyparallel.c:1474\n> #1 0x00000000005a8aa4 in CacheLineInfo (cstate=cstate@entry=0x29e1f28, buff_count=buff_count@entry=0) at copyparallel.c:711\n> #2 0x00000000005a8e46 in GetWorkerLine (cstate=cstate@entry=0x29e1f28) at copyparallel.c:885\n> #3 0x00000000005a4f2e in NextCopyFromRawFields (cstate=cstate@entry=0x29e1f28, fields=fields@entry=0x7fff4cdc0b48, nfields=nfields@entry=0x7fff4cdc0b44) at copy.c:3615\n> #4 0x00000000005a50af in NextCopyFrom (cstate=cstate@entry=0x29e1f28, 
econtext=econtext@entry=0x2a358d8, values=0x2a42068, nulls=0x2a42070) at copy.c:3696\n> #5 0x00000000005a5b90 in CopyFrom (cstate=cstate@entry=0x29e1f28) at copy.c:2985\n>\n\nThanks for providing your thoughts. I have analyzed this issue and I'm\nworking on the fix for this, I will be posting a patch for this\nshortly.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 7 Nov 2020 19:01:34 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Nov 3, 2020 at 2:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 2, 2020 at 12:40 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >\n> > On 02/11/2020 08:14, Amit Kapila wrote:\n> > > On Fri, Oct 30, 2020 at 10:11 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > >>\n> > >> In this design, you don't need to keep line boundaries in shared memory,\n> > >> because each worker process is responsible for finding the line\n> > >> boundaries of its own block.\n> > >>\n> > >> There's a point of serialization here, in that the next block cannot be\n> > >> processed, until the worker working on the previous block has finished\n> > >> scanning the EOLs, and set the starting position on the next block,\n> > >> putting it in READY state. That's not very different from your patch,\n> > >> where you had a similar point of serialization because the leader\n> > >> scanned the EOLs,\n> > >\n> > > But in the design (single producer multiple consumer) used by the\n> > > patch the worker doesn't need to wait till the complete block is\n> > > processed, it can start processing the lines already found. This will\n> > > also allow workers to start much earlier to process the data as it\n> > > doesn't need to wait for all the offsets corresponding to 64K block\n> > > ready. However, in the design where each worker is processing the 64K\n> > > block, it can lead to much longer waits. 
I think this will impact the\n> > > Copy STDIN case more where in most cases (200-300 bytes tuples) we\n> > > receive line-by-line from client and find the line-endings by leader.\n> > > If the leader doesn't find the line-endings the workers need to wait\n> > > till the leader fill the entire 64K chunk, OTOH, with current approach\n> > > the worker can start as soon as leader is able to populate some\n> > > minimum number of line-endings\n> >\n> > You can use a smaller block size.\n> >\n>\n> Sure, but the same problem can happen if the last line in that block\n> is too long and we need to peek into the next block. And then there\n> could be cases where a single line could be greater than 64K.\n>\n> > However, the point of parallel copy is\n> > to maximize bandwidth.\n> >\n>\n> Okay, but this first-phase (finding the line boundaries) can anyway be\n> not done in parallel and we have seen in some of the initial\n> benchmarking that this initial phase is a small part of work\n> especially when the table has indexes, constraints, etc. So, I think\n> it won't matter much if this splitting is done in a single process or\n> multiple processes.\n>\n> > If the workers ever have to sit idle, it means\n> > that the bottleneck is in receiving data from the client, i.e. the\n> > backend is fast enough, and you can't make the overall COPY finish any\n> > faster no matter how you do it.\n> >\n> > > The other point is that the leader backend won't be used completely as\n> > > it is only doing a very small part (primarily reading the file) of the\n> > > overall work.\n> >\n> > An idle process doesn't cost anything. 
If you have free CPU resources,\n> > use more workers.\n> >\n> > > We have discussed both these approaches (a) single producer multiple\n> > > consumer, and (b) all workers doing the processing as you are saying\n> > > in the beginning and concluded that (a) is better, see some of the\n> > > relevant emails [1][2][3].\n> > >\n> > > [1] - https://www.postgresql.org/message-id/20200413201633.cki4nsptynq7blhg%40alap3.anarazel.de\n> > > [2] - https://www.postgresql.org/message-id/20200415181913.4gjqcnuzxfzbbzxa%40alap3.anarazel.de\n> > > [3] - https://www.postgresql.org/message-id/78C0107E-62F2-4F76-BFD8-34C73B716944%40anarazel.de\n> >\n> > Sorry I'm late to the party. I don't think the design I proposed was\n> > discussed in that threads.\n> >\n>\n> I think something close to that is discussed as you have noticed in\n> your next email but IIRC, because many people (Andres, Ants, myself\n> and author) favoured the current approach (single reader and multiple\n> consumers) we decided to go with that. I feel this patch is very much\n> in the POC stage due to which the code doesn't look good and as we\n> move forward we need to see what is the better way to improve it,\n> maybe one of the ways is to split it as you are suggesting so that it\n> can be easier to review. I think the other important thing which this\n> patch has not addressed properly is the parallel-safety checks as\n> pointed by me earlier. There are two things to solve there (a) the\n> lower-level code (like heap_* APIs, CommandCounterIncrement, xact.c\n> APIs, etc.) 
have checks which doesn't allow any writes, we need to see\n> which of those we can open now (or do some additional work to prevent\n> from those checks) after some of the work done for parallel-writes in\n> PG-13[1][2], and (b) in which all cases we can parallel-writes\n> (parallel copy) is allowed, for example need to identify whether table\n> or one of its partitions has any constraint/expression which is\n> parallel-unsafe.\n>\n\nI have worked to provide a patch for the parallel safety checks. It\nchecks if parallely copy can be performed, Parallel copy cannot be\nperformed for the following a) If relation is temporary table b) If\nrelation is foreign table c) If relation has non parallel safe index\nexpressions d) If relation has triggers present whose type is of non\nbefore statement trigger type e) If relation has check constraint\nwhich are not parallel safe f) If relation has partition and any\npartition has the above type. This patch has the checks for it. This\npatch will be used by parallel copy implementation.\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 10 Nov 2020 19:12:46 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Nov 10, 2020 at 7:12 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Nov 3, 2020 at 2:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> I have worked to provide a patch for the parallel safety checks. It\n> checks if parallely copy can be performed, Parallel copy cannot be\n> performed for the following a) If relation is temporary table b) If\n> relation is foreign table c) If relation has non parallel safe index\n> expressions d) If relation has triggers present whose type is of non\n> before statement trigger type e) If relation has check constraint\n> which are not parallel safe f) If relation has partition and any\n> partition has the above type. 
This patch has the checks for it. This\n> patch will be used by parallel copy implementation.\n>\n\nHow did you ensure that this is sufficient? For parallel-insert's\npatch we have enabled parallel-mode for Inserts and ran the tests with\nforce_parallel_mode to see if we are not missing anything. Also, it\nseems there are many common things here w.r.t parallel-insert patch,\nis it possible to prepare this atop that patch or do you have any\nreason to keep this separate?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 10 Nov 2020 19:27:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Nov 10, 2020 at 7:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 10, 2020 at 7:12 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Nov 3, 2020 at 2:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > I have worked to provide a patch for the parallel safety checks. It\n> > checks if parallely copy can be performed, Parallel copy cannot be\n> > performed for the following a) If relation is temporary table b) If\n> > relation is foreign table c) If relation has non parallel safe index\n> > expressions d) If relation has triggers present whose type is of non\n> > before statement trigger type e) If relation has check constraint\n> > which are not parallel safe f) If relation has partition and any\n> > partition has the above type. This patch has the checks for it. This\n> > patch will be used by parallel copy implementation.\n> >\n>\n> How did you ensure that this is sufficient? For parallel-insert's\n> patch we have enabled parallel-mode for Inserts and ran the tests with\n> force_parallel_mode to see if we are not missing anything. 
Also, it\n> seems there are many common things here w.r.t parallel-insert patch,\n> is it possible to prepare this atop that patch or do you have any\n> reason to keep this separate?\n>\n\nI have done similar testing for copy too, I had set force_parallel\nmode to regress, hardcoded in the code to pick parallel workers for\ncopy operation and ran make installcheck-world to verify. Many checks\nin this patch are common between both patches, but I was not sure how\nto handle it as both the projects are in-progress and are being\nupdated based on the reviewer's opinion. How to handle this?\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 11 Nov 2020 22:42:24 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Nov 11, 2020 at 10:42 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Nov 10, 2020 at 7:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Nov 10, 2020 at 7:12 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 3, 2020 at 2:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > >\n> > > I have worked to provide a patch for the parallel safety checks. It\n> > > checks if parallely copy can be performed, Parallel copy cannot be\n> > > performed for the following a) If relation is temporary table b) If\n> > > relation is foreign table c) If relation has non parallel safe index\n> > > expressions d) If relation has triggers present whose type is of non\n> > > before statement trigger type e) If relation has check constraint\n> > > which are not parallel safe f) If relation has partition and any\n> > > partition has the above type. This patch has the checks for it. This\n> > > patch will be used by parallel copy implementation.\n> > >\n> >\n> > How did you ensure that this is sufficient? 
For parallel-insert's\n> > patch we have enabled parallel-mode for Inserts and ran the tests with\n> > force_parallel_mode to see if we are not missing anything. Also, it\n> > seems there are many common things here w.r.t parallel-insert patch,\n> > is it possible to prepare this atop that patch or do you have any\n> > reason to keep this separate?\n> >\n>\n> I have done similar testing for copy too, I had set force_parallel\n> mode to regress, hardcoded in the code to pick parallel workers for\n> copy operation and ran make installcheck-world to verify. Many checks\n> in this patch are common between both patches, but I was not sure how\n> to handle it as both the projects are in-progress and are being\n> updated based on the reviewer's opinion. How to handle this?\n> Thoughts?\n>\n\nI have not studied the differences in detail but if it is possible to\nprepare it on top of that patch then there shouldn't be a problem. To\navoid confusion if you want you can always either post the latest\nversion of that patch with your patch or point to it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 13 Nov 2020 14:26:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Oct 29, 2020 at 2:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> 4) Worker has to hop through all the processed chunks before getting\n> the chunk which it can process.\n>\n> One more point, I have noticed that some time back [1], I have given\n> one suggestion related to the way workers process the set of lines\n> (aka chunk). I think you can try by increasing the chunk size to say\n> 100, 500, 1000 and use some shared counter to remember the number of\n> chunks processed.\n>\n\nHi, I did some analysis on using spinlock protected worker write position\ni.e. 
each worker acquires spinlock on a shared write position to choose the\nnext available chunk vs each worker hops to get the next available chunk\nposition:\n\nUse Case: 10mn rows, 5.6GB data, 2 indexes on integer columns, 1 index on\ntext column, results are of the form (no of workers, total exec time in\nsec, index insertion time in sec, worker write pos get time in sec, buffer\ncontention event count):\n\nWith spinlock:\n(1,1126.443,1060.067,0.478,*0*), (2,669.343,630.769,0.306,*26*),\n(4,346.297,326.950,0.161,*89*), (8,209.600,196.417,0.088,*291*),\n(16,166.113,157.086,0.065,*1468*), (20,173.884,166.013,0.067,*2700*),\n(30,173.087,1166.565,0.0065,*5346*)\nWithout spinlock:\n(1,1119.695,1054.586,0.496,*0*), (2,645.733,608.313,1.5,*8*),\n(4,340.620,320.344,1.6,*58*), (8,203.985,189.644,1.3,*222*),\n(16,142.997,133.045,1,*813*), (20,132.621,122.527,1.1,*1215*),\n(30,135.737,126.716,1.5,*2901*)\n\nWith spinlock each worker is getting the required write position quickly\nand proceeding further till the index insertion(which is becoming a single\npoint of contention) where we observed more buffer lock contention. Reason\nis that all the workers are reaching the index insertion point at the\nsimilar time.\n\nWithout spinlock, each worker is spending some time in hopping to get the\nwrite position, by the time the other workers are inserting into the\nindexes. So basically, all the workers are not reaching the index insertion\npoint at the same time and hence less buffer lock contention.\n\nThe same behaviour(explained above) is observed with different worker chunk\ncount(default 64, 128, 512 and 1024) i.e. the number of tuples each worker\ncaches into its local memory before inserting into table.\n\nIn summary: with spinlock, it looks like we are able to avoid workers\nwaiting to get the next chunk, which also means that we are not creating\nany contention point inside the parallel copy code. However this is causing\nanother choking point i.e. 
index insertion if indexes are available on the\ntable, which is out of scope of parallel copy code. We think that it would\nbe good to use spinlock-protected worker write position or an atomic\nvariable for worker write position(as it performs equal to spinlock or\nlittle better in some platforms). Thoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Wed, 18 Nov 2020 11:39:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Oct 29, 2020 at 11:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 27, 2020 at 7:06 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> [latest version]\n>\n> I think the parallel-safety checks in this patch\n> (v9-0002-Allow-copy-from-command-to-process-data-from-file) are\n> incomplete and wrong. 
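An aside on the spinlock-versus-hopping comparison above: the two ways a worker can claim the next available chunk can be modelled outside PostgreSQL. The sketch below is a hypothetical standalone stand-in (none of these names exist in the patch), contrasting a spinlock-style claim of a shared write position with a single atomic fetch-add:

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical standalone model, not the parallel-copy code itself. */
typedef struct ChunkCounter
{
    atomic_flag  lock;              /* stands in for a spinlock */
    unsigned int next_chunk;        /* protected by lock */
    atomic_uint  next_chunk_atomic; /* lock-free alternative */
} ChunkCounter;

/* Spinlock-style claim: serialize briefly on the shared write position. */
static unsigned int
claim_next_chunk_locked(ChunkCounter *c)
{
    unsigned int mine;

    while (atomic_flag_test_and_set(&c->lock))
        ;                           /* spin until the flag is released */
    mine = c->next_chunk++;
    atomic_flag_clear(&c->lock);
    return mine;
}

/* Atomic claim: one fetch-add, no spinning and no hopping over chunks. */
static unsigned int
claim_next_chunk_atomic(ChunkCounter *c)
{
    return atomic_fetch_add(&c->next_chunk_atomic, 1);
}
```

Either way each chunk index is handed out exactly once, which is what removes the need for a worker to hop through already-processed chunks.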
See below comments.\n> 1.\n> +static pg_attribute_always_inline bool\n> +CheckExprParallelSafety(CopyState cstate)\n> +{\n> + if (contain_volatile_functions(cstate->whereClause))\n> + {\n> + if (max_parallel_hazard((Query *) cstate->whereClause) != PROPARALLEL_SAFE)\n> + return false;\n> + }\n>\n> I don't understand the above check. Why do we only need to check where\n> clause for parallel-safety when it contains volatile functions? It\n> should be checked otherwise as well, no? The similar comment applies\n> to other checks in this function. Also, I don't think there is a need\n> to make this function inline.\n>\n\nI felt we should check if where clause is parallel safe and also check\nif it does not contain volatile function, this is to avoid cases where\nexpressions may query the table we're inserting into. Modified it\naccordingly.\n\n> 2.\n> +/*\n> + * IsParallelCopyAllowed\n> + *\n> + * Check if parallel copy can be allowed.\n> + */\n> +bool\n> +IsParallelCopyAllowed(CopyState cstate)\n> {\n> ..\n> + * When there are BEFORE/AFTER/INSTEAD OF row triggers on the table. We do\n> + * not allow parallelism in such cases because such triggers might query\n> + * the table we are inserting into and act differently if the tuples that\n> + * have already been processed and prepared for insertion are not there.\n> + * Now, if we allow parallelism with such triggers the behaviour would\n> + * depend on if the parallel worker has already inserted or not that\n> + * particular tuples.\n> + */\n> + if (cstate->rel->trigdesc != NULL &&\n> + (cstate->rel->trigdesc->trig_insert_after_statement ||\n> + cstate->rel->trigdesc->trig_insert_new_table ||\n> + cstate->rel->trigdesc->trig_insert_before_row ||\n> + cstate->rel->trigdesc->trig_insert_after_row ||\n> + cstate->rel->trigdesc->trig_insert_instead_row))\n> + return false;\n> ..\n>\n> Why do we need to disable parallelism for before/after row triggers\n> unless they have parallel-unsafe functions? 
I see a few lines down in\n> this function you are checking parallel-safety of trigger functions,\n> what is the use of the same if you are already disabling parallelism\n> with the above check.\n>\n\nCurrently only before statement trigger is supported, rest of the\ntriggers are not supported, comments for the same is mentioned atop of\nthe checks. Removed the parallel safe check which was not required.\n\n> 3. What about if the index on table has expressions that are\n> parallel-unsafe? What is your strategy to check parallel-safety for\n> partitioned tables?\n>\n> I suggest checking Greg's patch for parallel-safety of Inserts [1]. I\n> think you will find that most of those checks are required here as\n> well and see how we can use that patch (at least what is common). I\n> feel the first patch should be just to have parallel-safety checks and\n> we can test that by trying to enable Copy with force_parallel_mode. We\n> can build the rest of the patch atop of it or in other words, let's\n> move all parallel-safety work into a separate patch.\n>\n\nI have made this as a separate patch as of now. I will work on to see\nif I can use Greg's changes as it is or if require I will provide few\nreview comments on top of Greg's patch so that it is usable for\nparallel copy too and later post a separate patch with the changes on\ntop of it. 
I will retain it as a separate patch till that time.\n\n> Few assorted comments:\n> ========================\n> 1.\n> +/*\n> + * ESTIMATE_NODE_SIZE - Estimate the size required for node type in shared\n> + * memory.\n> + */\n> +#define ESTIMATE_NODE_SIZE(list, listStr, strsize) \\\n> +{ \\\n> + uint32 estsize = sizeof(uint32); \\\n> + if ((List *)list != NIL) \\\n> + { \\\n> + listStr = nodeToString(list); \\\n> + estsize += strlen(listStr) + 1; \\\n> + } \\\n> + \\\n> + strsize = add_size(strsize, estsize); \\\n> +}\n>\n> This can be probably a function instead of a macro.\n>\n\nChanged it to a function.\n\n> 2.\n> +/*\n> + * ESTIMATE_1BYTE_STR_SIZE - Estimate the size required for 1Byte strings in\n> + * shared memory.\n> + */\n> +#define ESTIMATE_1BYTE_STR_SIZE(src, strsize) \\\n> +{ \\\n> + strsize = add_size(strsize, sizeof(uint8)); \\\n> + strsize = add_size(strsize, (src) ? 1 : 0); \\\n> +}\n>\n> This could be an inline function.\n>\n\nChanged it to an inline function.\n\n> 3.\n> +/*\n> + * SERIALIZE_1BYTE_STR - Copy 1Byte strings to shared memory.\n> + */\n> +#define SERIALIZE_1BYTE_STR(dest, src, copiedsize) \\\n> +{ \\\n> + uint8 len = (src) ? 1 : 0; \\\n> + memcpy(dest + copiedsize, (uint8 *) &len, sizeof(uint8)); \\\n> + copiedsize += sizeof(uint8); \\\n> + if (src) \\\n> + dest[copiedsize++] = src[0]; \\\n> +}\n>\n> Similarly, this could be a function. I think keeping such things as\n> macros in-between code makes it difficult to read. Please see if you\n> can make these and similar macros as functions unless they are doing\n> few memory instructions. 
Having functions makes it easier to debug the\n> code as well.\n>\n\nChanged it to a function.\n\nAttached v10 patch has the fixes for the same.\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 18 Nov 2020 15:26:09 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Oct 29, 2020 at 2:20 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 27/10/2020 15:36, vignesh C wrote:\n> > Attached v9 patches have the fixes for the above comments.\n>\n> I did some testing:\n>\n> /tmp/longdata.pl:\n> --------\n> #!/usr/bin/perl\n> #\n> # Generate three rows:\n> # foo\n> # longdatalongdatalongdata...\n> # bar\n> #\n> # The length of the middle row is given as command line arg.\n> #\n>\n> my $bytes = $ARGV[0];\n>\n> print \"foo\\n\";\n> for(my $i = 0; $i < $bytes; $i+=8){\n> print \"longdata\";\n> }\n> print \"\\n\";\n> print \"bar\\n\";\n> --------\n>\n> postgres=# copy longdata from program 'perl /tmp/longdata.pl 100000000'\n> with (parallel 2);\n>\n> This gets stuck forever (or at least I didn't have the patience to wait\n> it finish). 
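Returning briefly to the macro-to-function conversion agreed above: a standalone rendering of the quoted SERIALIZE_1BYTE_STR macro as a function could look like the sketch below. The types are hypothetical simplifications (plain `uint8_t`/`uint32_t`, not the in-tree shared-memory code); the layout is the same: one length byte (0 or 1) followed by the single character, if any.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint8_t uint8;  /* hypothetical stand-in for the PostgreSQL typedef */

/*
 * Function form of the SERIALIZE_1BYTE_STR macro: easier to step through
 * in a debugger and no hidden multiple evaluation of its arguments.
 * Standalone sketch only, not the patch's serialization code.
 */
static void
serialize_1byte_str(char *dest, const char *src, uint32_t *copiedsize)
{
    uint8 len = (src != NULL) ? 1 : 0;

    memcpy(dest + *copiedsize, &len, sizeof(uint8));
    *copiedsize += sizeof(uint8);
    if (src != NULL)
        dest[(*copiedsize)++] = src[0];
}
```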
Both worker processes are consuming 100% of CPU.\n>\n\nThanks for identifying this issue, this issue is fixed in v10 patch posted\nat [1]\n[1]\nhttps://www.postgresql.org/message-id/CALDaNm05FnA-ePvYV_t2%2BWE_tXJymbfPwnm%2Bkc9y1iMkR%2BNbUg%40mail.gmail.com\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Wed, 18 Nov 2020 15:28:48 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Wed, Oct 28, 2020 at 5:36 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com>\nwrote:\n>\n> Hi\n>\n> I found some issue in v9-0002\n>\n> 1.\n> +\n> +       elog(DEBUG1, \"[Worker] Processing - line position:%d, block:%d,\nunprocessed lines:%d, offset:%d, line size:%d\",\n> +                write_pos, lineInfo->first_block,\n> +\n pg_atomic_read_u32(&data_blk_ptr->unprocessed_line_parts),\n> +                offset, pg_atomic_read_u32(&lineInfo->line_size));\n> +\n>\n> write_pos or other variable to be printed here are type of uint32, I\nthink it'better to use '%u' in elog msg.\n>\n\nModified it.\n\n> 2.\n> +                * line_size will be set. 
Read the line_size again to be\nsure if it is\n> +                * completed or partial block.\n> +                */\n> +               dataSize = pg_atomic_read_u32(&lineInfo->line_size);\n> +               if (dataSize)\n>\n> It use dataSize( type int ) to get uint32 which seems a little dangerous.\n> Is it better to define dataSize uint32 here?\n>\n\nModified it.\n\n> 3.\n> Since function with  'Cstate' in name has been changed to 'CState'\n> I think we can change function PopulateCommonCstateInfo as well.\n>\n\nModified it.\n\n> 4.\n> +       if (pcdata->worker_line_buf_count)\n>\n> I think some check like the above can be 'if (xxx > 0)', which seems\neasier to understand.\n\nModified it.\n\nThanks for the comments, these issues are fixed in v10 patch posted at [1]\n[1]\nhttps://www.postgresql.org/message-id/CALDaNm05FnA-ePvYV_t2%2BWE_tXJymbfPwnm%2Bkc9y1iMkR%2BNbUg%40mail.gmail.com\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Wed, 18 Nov 2020 15:31:10 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Thu, Oct 29, 2020 at 2:26 PM Daniel Westermann (DWE)\n<daniel.westermann@dbi-services.com> wrote:\n>\n> On 27/10/2020 15:36, vignesh C wrote:\n> >> Attached v9 patches have the fixes for the above comments.\n>\n> >I did some testing:\n>\n> I did some testing as well and have a cosmetic remark:\n>\n> postgres=# copy t1 from '/var/tmp/aa.txt' with (parallel 1000000000);\n> ERROR: value 1000000000 out of bounds for option \"parallel\"\n> DETAIL: Valid values are between \"1\" and \"1024\".\n> postgres=# copy t1 from '/var/tmp/aa.txt' with (parallel 100000000000);\n> ERROR: parallel requires an integer value\n> postgres=#\n>\n> Wouldn't it make more sense to only have one error message? 
The first one seems to be the better message.\n>\n\nI had seen similar behavior in other places too:\npostgres=# vacuum (parallel 1000000000) t1;\nERROR: parallel vacuum degree must be between 0 and 1024\nLINE 1: vacuum (parallel 1000000000) t1;\n ^\npostgres=# vacuum (parallel 100000000000) t1;\nERROR: parallel requires an integer value\n\nI'm not sure if we should fix this.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 18 Nov 2020 15:37:39 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Fri, Nov 13, 2020 at 2:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Nov 11, 2020 at 10:42 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Nov 10, 2020 at 7:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 10, 2020 at 7:12 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > On Tue, Nov 3, 2020 at 2:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > >\n> > > > I have worked to provide a patch for the parallel safety checks. It\n> > > > checks if parallely copy can be performed, Parallel copy cannot be\n> > > > performed for the following a) If relation is temporary table b) If\n> > > > relation is foreign table c) If relation has non parallel safe index\n> > > > expressions d) If relation has triggers present whose type is of non\n> > > > before statement trigger type e) If relation has check constraint\n> > > > which are not parallel safe f) If relation has partition and any\n> > > > partition has the above type. This patch has the checks for it. This\n> > > > patch will be used by parallel copy implementation.\n> > > >\n> > >\n> > > How did you ensure that this is sufficient? For parallel-insert's\n> > > patch we have enabled parallel-mode for Inserts and ran the tests with\n> > > force_parallel_mode to see if we are not missing anything. 
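On the earlier point about the two different messages for out-of-range values: the split arises because the value must first fit into an integer at all before the range check can run. A hypothetical sketch (not the actual defGetInt32()/copy.c code path) of parsing into a wider type first, so that one range check covers both the 1000000000 and the 100000000000 cases:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Hypothetical helper: parse into long long first, then apply a single
 * min/max check, so an over-wide literal hits the same bounds path as a
 * merely out-of-range one. Illustration only, not PostgreSQL code.
 */
static bool
parse_parallel_degree(const char *value, int min, int max, int *degree)
{
    char      *end;
    long long  v;

    errno = 0;
    v = strtoll(value, &end, 10);
    if (errno != 0 || end == value || *end != '\0')
        return false;           /* not an integer at all */
    if (v < min || v > max)
        return false;           /* one shared out-of-bounds path */
    *degree = (int) v;
    return true;
}
```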
Also, it\n> > > seems there are many common things here w.r.t parallel-insert patch,\n> > > is it possible to prepare this atop that patch or do you have any\n> > > reason to keep this separate?\n> > >\n> >\n> > I have done similar testing for copy too, I had set force_parallel\n> > mode to regress, hardcoded in the code to pick parallel workers for\n> > copy operation and ran make installcheck-world to verify. Many checks\n> > in this patch are common between both patches, but I was not sure how\n> > to handle it as both the projects are in-progress and are being\n> > updated based on the reviewer's opinion. How to handle this?\n> > Thoughts?\n> >\n>\n> I have not studied the differences in detail but if it is possible to\n> prepare it on top of that patch then there shouldn't be a problem. To\n> avoid confusion if you want you can always either post the latest\n> version of that patch with your patch or point to it.\n>\n\nI have made this as a separate patch as of now. I will work on to see\nif I can use Greg's changes as it is or if required I will provide a\nfew review comments on top of Greg's patch so that it is usable for\nparallel copy too and later post a separate patch with the changes on\ntop of it. I will retain it as a separate patch till that time.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 18 Nov 2020 15:40:45 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Sat, Oct 31, 2020 at 2:07 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n>\n> Hi,\n>\n> I've done a bit more testing today, and I think the parsing is busted in\n> some way. 
Consider this:\n>\n> test=# create extension random;\n> CREATE EXTENSION\n>\n> test=# create table t (a text);\n> CREATE TABLE\n>\n> test=# insert into t select random_string(random_int(10, 256*1024))\nfrom generate_series(1,10000);\n> INSERT 0 10000\n>\n> test=# copy t to '/mnt/data/t.csv';\n> COPY 10000\n>\n> test=# truncate t;\n> TRUNCATE TABLE\n>\n> test=# copy t from '/mnt/data/t.csv';\n> COPY 10000\n>\n> test=# truncate t;\n> TRUNCATE TABLE\n>\n> test=# copy t from '/mnt/data/t.csv' with (parallel 2);\n> ERROR: invalid byte sequence for encoding \"UTF8\": 0x00\n> CONTEXT: COPY t, line 485: \"m&\\nh%_a\"%r]>qtCl:Q5ltvF~;2oS6@HB\n>F>og,bD$Lw'nZY\\tYl#BH\\t{(j~ryoZ08\"SGU~.}8CcTRk1\\ts$@U3szCC+U1U3i@P...\"\n> parallel worker\n>\n>\n> The functions come from an extension I use to generate random data, I've\n> pushed it to github [1]. The random_string() generates a random string\n> with ASCII characters, symbols and a couple special characters (\\r\\n\\t).\n> The intent was to try loading data where a fields may span multiple 64kB\n> blocks and may contain newlines etc.\n>\n> The non-parallel copy works fine, the parallel one fails. I haven't\n> investigated the details, but I guess it gets confused about where a\n> string starts/end, or something like that.\n>\n\nThanks for identifying this issue, this issue is fixed in v10 patch posted\nat [1]\n[1]\nhttps://www.postgresql.org/message-id/CALDaNm05FnA-ePvYV_t2%2BWE_tXJymbfPwnm%2Bkc9y1iMkR%2BNbUg%40mail.gmail.com\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Wed, 18 Nov 2020 15:42:43 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Sat, Nov 7, 2020 at 7:01 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Nov 5, 2020 at 6:33 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com>\nwrote:\n> >\n> > Hi\n> >\n> > >\n> > > my $bytes = $ARGV[0];\n> > > for(my $i = 0; $i < $bytes; $i+=8){\n> > > print \"longdata\";\n> > > }\n> > > print \"\\n\";\n> > > --------\n> > >\n> > > postgres=# copy longdata from program 'perl /tmp/longdata.pl\n100000000'\n> > > with (parallel 2);\n> > >\n> > > This gets stuck forever (or at least I didn't have the patience to\nwait\n> > > it finish). 
Both worker processes are consuming 100% of CPU.\n> >\n> > I had a look over this problem.\n> >\n> > the ParallelCopyDataBlock has size limit:\n> > uint8 skip_bytes;\n> > char data[DATA_BLOCK_SIZE]; /* data read from file\n*/\n> >\n> > It seems the input line is so long that the leader process run out of\nthe Shared memory among parallel copy workers.\n> > And the leader process keep waiting free block.\n> >\n> > For the worker process, it wait util line_state becomes\nLINE_LEADER_POPULATED,\n> > But leader process won't set the line_state unless it read the whole\nline.\n> >\n> > So it stuck forever.\n> > May be we should reconsider about this situation.\n> >\n> > The stack is as follows:\n> >\n> > Leader stack:\n> > #3 0x000000000075f7a1 in WaitLatch (latch=<optimized out>,\nwakeEvents=wakeEvents@entry=41, timeout=timeout@entry=1,\nwait_event_info=wait_event_info@entry=150994945) at latch.c:411\n> > #4 0x00000000005a9245 in WaitGetFreeCopyBlock\n(pcshared_info=pcshared_info@entry=0x7f26d2ed3580) at copyparallel.c:1546\n> > #5 0x00000000005a98ce in SetRawBufForLoad (cstate=cstate@entry=0x2978a88,\nline_size=67108864, copy_buf_len=copy_buf_len@entry=65536,\nraw_buf_ptr=raw_buf_ptr@entry=65536,\n> > copy_raw_buf=copy_raw_buf@entry=0x7fff4cdc0e18) at\ncopyparallel.c:1572\n> > #6 0x00000000005a1963 in CopyReadLineText (cstate=cstate@entry=0x2978a88)\nat copy.c:4058\n> > #7 0x00000000005a4e76 in CopyReadLine (cstate=cstate@entry=0x2978a88)\nat copy.c:3863\n> >\n> > Worker stack:\n> > #0 GetLinePosition (cstate=cstate@entry=0x29e1f28) at\ncopyparallel.c:1474\n> > #1 0x00000000005a8aa4 in CacheLineInfo (cstate=cstate@entry=0x29e1f28,\nbuff_count=buff_count@entry=0) at copyparallel.c:711\n> > #2 0x00000000005a8e46 in GetWorkerLine (cstate=cstate@entry=0x29e1f28)\nat copyparallel.c:885\n> > #3 0x00000000005a4f2e in NextCopyFromRawFields (cstate=cstate@entry=0x29e1f28,\nfields=fields@entry=0x7fff4cdc0b48, nfields=nfields@entry=0x7fff4cdc0b44)\nat copy.c:3615\n> > #4 
0x00000000005a50af in NextCopyFrom (cstate=cstate@entry=0x29e1f28,\necontext=econtext@entry=0x2a358d8, values=0x2a42068, nulls=0x2a42070) at\ncopy.c:3696\n> > #5 0x00000000005a5b90 in CopyFrom (cstate=cstate@entry=0x29e1f28) at\ncopy.c:2985\n> >\n>\n> Thanks for providing your thoughts. I have analyzed this issue and I'm\n> working on the fix for this, I will be posting a patch for this\n> shortly.\n>\n\nI have fixed and provided a patch for this at [1]\n[1]\nhttps://www.postgresql.org/message-id/CALDaNm05FnA-ePvYV_t2%2BWE_tXJymbfPwnm%2Bkc9y1iMkR%2BNbUg%40mail.gmail.com\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Wed, 18 Nov 2020 15:44:33 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi Vignesh,\r\n\r\nI took a look at the v10 patch set. Here are some comments:\r\n\r\n1. \r\n+/*\r\n+ * CheckExprParallelSafety\r\n+ *\r\n+ * Determine if where cluase and default expressions are parallel safe & do not\r\n+ * have volatile expressions, return true if condition satisfies else return\r\n+ * false.\r\n+ */\r\n\r\n'cluase' seems a typo.\r\n\r\n\r\n2.\r\n+\t\t\t/*\r\n+\t\t\t * Make sure that no worker has consumed this element, if this\r\n+\t\t\t * line is spread across multiple data blocks, worker would have\r\n+\t\t\t * started processing, no need to change the state to\r\n+\t\t\t * LINE_LEADER_POPULATING in this case.\r\n+\t\t\t */\r\n+\t\t\t(void) pg_atomic_compare_exchange_u32(&lineInfo->line_state,\r\n+\t\t\t\t\t\t\t\t\t\t\t\t &current_line_state,\r\n+\t\t\t\t\t\t\t\t\t\t\t\t LINE_LEADER_POPULATED);\r\nAbout the commect\r\n\r\n+\t\t\t * started processing, no need to change the state to\r\n+\t\t\t * LINE_LEADER_POPULATING in this case.\r\n\r\nDoes it means no need to change the state to LINE_LEADER_POPULATED ' here?\r\n\r\n\r\n3.\r\n+ * 3) only one worker should choose one line for processing, this is handled by\r\n+ * using pg_atomic_compare_exchange_u32, worker will change the state to\r\n+ * 
LINE_WORKER_PROCESSING only if line_state is LINE_LEADER_POPULATED.\r\n\r\nIn the latest patch, it will set the state to LINE_WORKER_PROCESSING if line_state is LINE_LEADER_POPULATED or LINE_LEADER_POPULATING.\r\nSo the comment here seems wrong.\r\n\r\n\r\n4.\r\nA suggestion for CacheLineInfo.\r\n\r\nIt uses appendBinaryStringXXX to store the line in memory.\r\nappendBinaryStringXXX will double the str memory when there is not enough space.\r\n\r\nHow about calling enlargeStringInfo in advance, if we already know the whole line size?\r\nIt can avoid some memory waste and may improve a little performance.\r\n\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\r\n\n\n", "msg_date": "Thu, 19 Nov 2020 11:16:42 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel copy" }, { "msg_contents": "Thanks for the comments.\n> I took a look at the v10 patch set. Here are some comments:\n>\n> 1.\n> +/*\n> + * CheckExprParallelSafety\n> + *\n> + * Determine if where cluase and default expressions are parallel safe & do not\n> + * have volatile expressions, return true if condition satisfies else return\n> + * false.\n> + */\n>\n> 'cluase' seems a typo.\n>\n\nchanged.\n\n> 2.\n> + /*\n> + * Make sure that no worker has consumed this element, if this\n> + * line is spread across multiple data blocks, worker would have\n> + * started processing, no need to change the state to\n> + * LINE_LEADER_POPULATING in this case.\n> + */\n> + (void) pg_atomic_compare_exchange_u32(&lineInfo->line_state,\n> + &current_line_state,\n> + LINE_LEADER_POPULATED);\n> About the comment\n>\n> + * started processing, no need to change the state to\n> + * LINE_LEADER_POPULATING in this case.\n>\n> Does it mean no need to change the state to LINE_LEADER_POPULATED ' here?\n>\n>\n\nYes it is LINE_LEADER_POPULATED, changed accordingly.\n\n> 3.\n> + * 3) only one worker should choose one line for processing, this is handled by\n> + * using 
pg_atomic_compare_exchange_u32, worker will change the state to\n> + * LINE_WORKER_PROCESSING only if line_state is LINE_LEADER_POPULATED.\n>\n> In the latest patch, it will set the state to LINE_WORKER_PROCESSING if line_state is LINE_LEADER_POPULATED or LINE_LEADER_POPULATING.\n> So The comment here seems wrong.\n>\n\nUpdated the comments.\n\n> 4.\n> A suggestion for CacheLineInfo.\n>\n> It use appendBinaryStringXXX to store the line in memory.\n> appendBinaryStringXXX will double the str memory when there is no enough spaces.\n>\n> How about call enlargeStringInfo in advance, if we already know the whole line size?\n> It can avoid some memory waste and may impove a little performance.\n>\n\nHere we will not know the size beforehand, in some cases we will start\nprocessing the data when current block is populated and keep\nprocessing block by block, we will come to know of the size at the\nend. We cannot use enlargeStringInfo because of this.\n\nAttached v11 patch has the fix for this, it also includes the changes\nto rebase on top of head.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 7 Dec 2020 14:02:22 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "> > 4.\r\n> > A suggestion for CacheLineInfo.\r\n> >\r\n> > It use appendBinaryStringXXX to store the line in memory.\r\n> > appendBinaryStringXXX will double the str memory when there is no enough\r\n> spaces.\r\n> >\r\n> > How about call enlargeStringInfo in advance, if we already know the whole\r\n> line size?\r\n> > It can avoid some memory waste and may impove a little performance.\r\n> >\r\n> \r\n> Here we will not know the size beforehand, in some cases we will start\r\n> processing the data when current block is populated and keep processing\r\n> block by block, we will come to know of the size at the end. 
We cannot use\r\n> enlargeStringInfo because of this.\r\n> \r\n> Attached v11 patch has the fix for this, it also includes the changes to\r\n> rebase on top of head.\r\n\r\nThanks for the explanation.\r\n\r\nI think there are still chances we can know the size.\r\n\r\n+\t\t * line_size will be set. Read the line_size again to be sure if it is\r\n+\t\t * completed or partial block.\r\n+\t\t */\r\n+\t\tdataSize = pg_atomic_read_u32(&lineInfo->line_size);\r\n+\t\tif (dataSize != -1)\r\n+\t\t{\r\n\r\nIf I am not wrong, this seems the branch that processes the populated block.\r\nI think we can check the copiedSize here, if copiedSize == 0, that means\r\ndataSize is the size of the whole line and in this case we can do the enlarge.\r\n\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\r\n\n\n", "msg_date": "Mon, 7 Dec 2020 09:30:26 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel copy" }, { "msg_contents": "On Mon, Dec 7, 2020 at 3:00 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>\n> > Attached v11 patch has the fix for this, it also includes the changes to\n> > rebase on top of head.\n>\n> Thanks for the explanation.\n>\n> I think there are still chances we can know the size.\n>\n> + * line_size will be set. 
Read the line_size again to be sure if it is\n> + * completed or partial block.\n> + */\n> + dataSize = pg_atomic_read_u32(&lineInfo->line_size);\n> + if (dataSize != -1)\n> + {\n>\n> If I am not wrong, this seems the branch that processes the populated block.\n> I think we can check the copiedSize here, if copiedSize == 0, that means\n> dataSize is the size of the whole line and in this case we can do the enlarge.\n>\n>\n\nYes this optimization can be done, I will handle this in the next patch set.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Dec 2020 16:11:16 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "Hi\r\n\r\n> Yes this optimization can be done, I will handle this in the next patch\r\n> set.\r\n> \r\n\r\nI have a suggestion for the parallel safety-check.\r\n\r\nAs designed, The leader does not participate in the insertion of data.\r\nIf User use (PARALLEL 1), there is only one worker process which will do the insertion.\r\n\r\nIMO, we can skip some of the safety-check in this case, because the safety-check is to limit parallel insert.\r\n(except temporary table or ...)\r\n\r\nSo, how about checking (PARALLEL 1) separately ?\r\nAlthough it looks a bit complicated, but (PARALLEL 1) do have a good performance improvement.\r\n\r\nBest regards,\r\nhouzj\r\n\n\n", "msg_date": "Wed, 23 Dec 2020 09:35:25 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel copy" }, { "msg_contents": "On Wed, Dec 23, 2020 at 3:05 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n>\n> Hi\n>\n> > Yes this optimization can be done, I will handle this in the next patch\n> > set.\n> >\n>\n> I have a suggestion for the parallel safety-check.\n>\n> As designed, The leader does not participate in the insertion of data.\n> If User use (PARALLEL 1), there is only one worker process which will do 
the insertion.\n>\n> IMO, we can skip some of the safety-check in this case, becase the safety-check is to limit parallel insert.\n> (except temporary table or ...)\n>\n> So, how about checking (PARALLEL 1) separately ?\n> Although it looks a bit complicated, But (PARALLEL 1) do have a good performance improvement.\n>\n\nThanks for the comments Hou Zhijie, I will run a few tests with 1\nworker and try to include this in the next patch set.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 26 Dec 2020 21:18:13 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Tue, Nov 3, 2020 at 2:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 2, 2020 at 12:40 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >\n> > On 02/11/2020 08:14, Amit Kapila wrote:\n> > > On Fri, Oct 30, 2020 at 10:11 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > >>\n> > >> In this design, you don't need to keep line boundaries in shared memory,\n> > >> because each worker process is responsible for finding the line\n> > >> boundaries of its own block.\n> > >>\n> > >> There's a point of serialization here, in that the next block cannot be\n> > >> processed, until the worker working on the previous block has finished\n> > >> scanning the EOLs, and set the starting position on the next block,\n> > >> putting it in READY state. That's not very different from your patch,\n> > >> where you had a similar point of serialization because the leader\n> > >> scanned the EOLs,\n> > >\n> > > But in the design (single producer multiple consumer) used by the\n> > > patch the worker doesn't need to wait till the complete block is\n> > > processed, it can start processing the lines already found. This will\n> > > also allow workers to start much earlier to process the data as it\n> > > doesn't need to wait for all the offsets corresponding to 64K block\n> > > ready. 
However, in the design where each worker is processing the 64K\n> > > block, it can lead to much longer waits. I think this will impact the\n> > > Copy STDIN case more where in most cases (200-300 bytes tuples) we\n> > > receive line-by-line from client and find the line-endings by leader.\n> > > If the leader doesn't find the line-endings the workers need to wait\n> > > till the leader fill the entire 64K chunk, OTOH, with current approach\n> > > the worker can start as soon as leader is able to populate some\n> > > minimum number of line-endings\n> >\n> > You can use a smaller block size.\n> >\n>\n> Sure, but the same problem can happen if the last line in that block\n> is too long and we need to peek into the next block. And then there\n> could be cases where a single line could be greater than 64K.\n>\n> > However, the point of parallel copy is\n> > to maximize bandwidth.\n> >\n>\n> Okay, but this first-phase (finding the line boundaries) can anyway be\n> not done in parallel and we have seen in some of the initial\n> benchmarking that this initial phase is a small part of work\n> especially when the table has indexes, constraints, etc. So, I think\n> it won't matter much if this splitting is done in a single process or\n> multiple processes.\n>\n\nI wrote a patch to compare the performance of the current\nimplementation leader identifying the line bound design vs the workers\nidentifying the line boundary. 
The results of the same is given below:\nThe below data can be read as parallel copy time taken in seconds\nbased on the leader identifying the line boundary design, parallel\ncopy time taken in seconds based on the workers identifying the line\nboundary design, workers.\n\nUse case 1 - 10million rows, 5.2GB data,3 indexes on integer columns:\n(211.206, 632.583, 1), (165.402, 360.152, 2), (137.608, 219.623, 4),\n(128.003, 206.851, 8), (114.518, 177.790, 16), (109.257, 170.058, 20),\n(102.050, 158.376, 30)\n\nUse case 2 - 10million rows, 5.2GB data,2 indexes on integer columns,\n1 index on text column, csv file:\n(1212.356, 1602.118, 1), (707.191, 849.105, 2), (369.620, 441.068, 4),\n(221.359, 252.775, 8), (167.152, 180.207, 16), (168.804, 181.986, 20),\n(172.320, 194.875, 30)\n\nUse case 3 - 10million rows, 5.2GB data without index:\n(96.317, 437.453, 1), (70.730, 240.517, 2), (64.436, 197.604, 4),\n(67.186, 175.630, 8), (76.561, 156.015, 16), (81.025, 150.687, 20),\n(86.578, 148.481, 30)\n\nUse case 4 - 10000 records, 9.6GB, toast data:\n(147.076, 276.323, 1), (101.610, 141.893, 2), (100.703, 134.096, 4),\n(112.583, 134.765, 8), (101.898, 135.789, 16), (109.258, 135.625, 20),\n(109.219, 136.144, 30)\n\nAttached is a patch that was used for the same. The patch is written\non top of the parallel copy patch.\nThe design Amit, Andres & myself voted for that is the leader\nidentifying the line bound design and sharing it in shared memory is\nperforming better.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 28 Dec 2020 15:14:43 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" }, { "msg_contents": "On Mon, Dec 28, 2020 at 3:14 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Attached is a patch that was used for the same. 
The patch is written\n> on top of the parallel copy patch.\n> The design Amit, Andres & myself voted for that is the leader\n> identifying the line bound design and sharing it in shared memory is\n> performing better.\n\nHi Hackers, I see following are some of the problem with parallel copy feature:\n\n1) Leader identifying the line/tuple boundaries from the file, letting\nthe workers pick, insert parallelly vs leader reading the file and\nletting workers identify line/tuple boundaries, insert\n2) Determining parallel safety of partitioned tables\n3) Bulk extension of relation while inserting i.e. adding more than\none extra blocks to the relation in RelationAddExtraBlocks\n\nPlease let me know if I'm missing anything.\n\nFor (1) - from Vignesh's experiments above, it shows that the \" leader\nidentifying the line/tuple boundaries from the file, letting the\nworkers pick, insert parallelly\" fares better.\nFor (2) - while it's being discussed in another thread (I'm not sure\nwhat's the status of that thread), how about we take this feature\nwithout the support for partitioned tables i.e. parallel copy is\ndisabled for partitioned tables? Once the other discussion gets to a\nlogical end, we can come back and enable parallel copy for partitioned\ntables.\nFor (3) - we need a way to extend or add new blocks fastly - fallocate\nmight help here, not sure who's working on it, others can comment\nbetter here.\n\nCan we take the \"parallel copy\" feature forward of course with some\nrestrictions in place?\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 7 Mar 2022 13:26:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel copy" } ]
[ { "msg_contents": "[ Starting a new thread about this, since the old one about GUC reporting\nis only marginally related to this point ... if it were more so, maybe I'd\nhave found it when I went looking for it yesterday ]\n\nRobert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Nov 5, 2019 at 10:02 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> There's a reason the SQL standard defines SET SESSION AUTHORIZATION but\n>> no RESET SESSION AUTHORIZATION: once you enter a security context, you\n>> cannot escape it. ISTM that essentially we broke feature F321 \"User\n>> authorization\" by adding RESET into the mix. (I think RESET ROLE breaks\n>> the spirit of feature T331 too.) The SQL:2016 standard describes how\n>> this is supposed to work in Foundation \"4.40.1.1 SQL-session\n>> authorization identifiers\" (same section is numbered 4.35.1.1 in\n>> SQL:2011), and ISTM we made a huge mess of it.\n>> \n>> I don't see how to fix it, though. If we were to adopt the standard's\n>> mechanism, we'd probably break tons of existing code.\n\n> It wouldn't be difficult to introduce a new protocol-level option that\n> prohibits RESET SESSION AUTHORIZATION; and it would also be possible\n> to introduce a new protocol message that has the same effect as RESET\n> SESSION AUTHORIZATION. If you do those two things, then it's possible\n> to create a sandbox which the end client cannot escape but which the\n> pooler can escape easily.\n\nI went looking into the SQL standard to see just what it says about this,\nand I'm darned if I see anything supporting Alvaro's argument. I do not\nhave SQL:2016 at hand, but in SQL:2011 what I see is that section 4.35.1.1\ndescribes a stack of authorization identifiers and/or roles that controls\nthe currently-applicable privileges. It says\n\n Let E be an externally-invoked procedure, SQL-invoked routine,\n triggered action, prepared statement, or directly executed\n statement. 
When E is invoked, a copy of the top cell is pushed onto\n the authorization stack. If the invocation of E is to be under\n definer's rights, then the contents of the top cell are replaced with\n the authorization identifier of the owner of E. On completion of the\n execution of E, the top cell is removed.\n ...\n The <set session user identifier statement> changes the value of the\n current user identifier and of the SQL- session user identifier. The\n <set role statement> changes the value of the current role name.\n ...\n The term current authorization identifier denotes an authorization\n identifier in the top cell of the authorization stack.\n\nThere is nothing anywhere in 4.35 that constrains the allowable\ntransitions of authorization identifiers. The only thing I can find on\nthat point is in the General Rules of 19.2 <set session user identifier\nstatement> (a/k/a SET SESSION AUTHORIZATION), which says:\n\n 4) If V is not equal to the current value of the SQL-session user\n identifier of the current SQL-session context, then the restrictions\n on the permissible values for V are implementation-defined.\n\n 5) If the current user identifier and the current role name are\n restricted from setting the user identifier to V, then an exception\n condition is raised: invalid authorization specification.\n\nSo as far as I can see, restrictions on what SET SESSION AUTHORIZATION\ncan set the authorization ID to are implementation-defined, full stop.\nThere might be considerable value in the semantics Alvaro suggests,\nbut I think arguing that the spec requires 'em is just wrong.\n\nOn the other hand, the restrictions on SET ROLE in 19.3 are much less\nsquishy:\n\n 3) If <role specification> contains a <value specification>, then:\n\n c) If no role authorization descriptor exists that indicates that\n the role identified by V has been granted to either the current\n user identifier or to PUBLIC, then an exception condition is\n raised: invalid role specification.\n\n 
d) The SQL-session role name and the current role name are set to\n V.\n\n 4) If NONE is specified, then the current role name is removed.\n\nAs best I can tell, we actually are entirely compliant with that, modulo\nthe fact that we don't think of the current state as an <auth ID, role>\npair. What you can SET ROLE to is determined by your authorization\nidentifier, not your current role, and so doing a SET ROLE doesn't change\nwhat you can SET ROLE to later. The argument that \"RESET ROLE\" is somehow\ninvalid seems a little silly when \"SET ROLE NONE\" does the same thing.\n\nWhat I'm now thinking is that we shouldn't mess with the behavior of\nSET ROLE, as I mused about doing yesterday in [1]. It's spec-compliant,\nor close enough, so let's leave it be. On the other hand, changing the\nbehavior of SET SESSION AUTHORIZATION is not constrained by spec\ncompliance concerns, only backwards compatibility. We could address the\npg_dump concerns I had in [1] by tweaking what SET SESSION AUTHORIZATION\ncan do and then adjusting pg_dump to swap its usage of SET SESSION\nAUTHORIZATION (do that just once, in response to --role) and SET ROLE\n(do that per-object, to establish ownership).\n\nThe only thing stopping us from addressing Alvaro's concern is backwards\ncompatibility. Perhaps a reasonable solution that preserves that is\nto add an option to the command, say\n\n\tSET SESSION AUTHORIZATION foo PERMANENT;\n\nwhich would check that you're allowed to become foo and then establish\nthat as the logged-in userid, with no going back being possible (unless\nof course foo has privilege enough to do so). 
A protocol-level message\nto set session auth could also be possible, of course.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/11496.1581634533%40sss.pgh.pa.us\n\n\n", "msg_date": "Fri, 14 Feb 2020 16:01:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Standards compliance of SET ROLE / SET SESSION AUTHORIZATION" }, { "msg_contents": "On 2/14/20 4:01 PM, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> It wouldn't be difficult to introduce a new protocol-level option that\n>> prohibits RESET SESSION AUTHORIZATION; and it would also be possible\n>> to introduce a new protocol message that has the same effect as RESET\n>> SESSION AUTHORIZATION. If you do those two things, then it's possible\n>> to create a sandbox which the end client cannot escape but which the\n>> pooler can escape easily.\n> ...\n> \tSET SESSION AUTHORIZATION foo PERMANENT;\n> ... A protocol-level message\n> to set session auth could also be possible, of course.\n\nI'll once again whimper softly and perhaps ineffectually that an\nSQL-exposed equivalent like\n\n SET SESSION AUTHORIZATION foo WITH RESET COOKIE 'lkjhikuhoihkihlj';\n\nwould seem to suit the same purpose, with the advantage of being\nimmediately usable by any kind of front- or middle-end code the\ninstant there is a server version that supports it, without having\nto wait for something new at the protocol level to trickle through\nto n different driver implementations.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 14 Feb 2020 16:19:57 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Standards compliance of SET ROLE / SET SESSION AUTHORIZATION" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 2/14/20 4:01 PM, Tom Lane wrote:\n>> ... 
A protocol-level message\n>> to set session auth could also be possible, of course.\n\n> I'll once again whimper softly and perhaps ineffectually that an\n> SQL-exposed equivalent like\n\n> SET SESSION AUTHORIZATION foo WITH RESET COOKIE 'lkjhikuhoihkihlj';\n\n> would seem to suit the same purpose, with the advantage of being\n> immediately usable by any kind of front- or middle-end code the\n> instant there is a server version that supports it, without having\n> to wait for something new at the protocol level to trickle through\n> to n different driver implementations.\n\nYeah, I'm not that thrilled with the idea of a protocol message\nthat's not equivalent to any SQL-level functionality, either.\n\nBut the immediate point here is that I think we could get away with\nplaying around with SET SESSION AUTHORIZATION's semantics. Or,\nseeing that that's just syntactic sugar for \"SET session_authorization\",\nwe could invent some new GUCs that allow control over this, rather\nthan new syntax.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Feb 2020 16:35:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Standards compliance of SET ROLE / SET SESSION AUTHORIZATION" }, { "msg_contents": "I wrote:\n> What I'm now thinking is that we shouldn't mess with the behavior of\n> SET ROLE, as I mused about doing yesterday in [1]. It's spec-compliant,\n> or close enough, so let's leave it be. On the other hand, changing the\n> behavior of SET SESSION AUTHORIZATION is not constrained by spec\n> compliance concerns, only backwards compatibility. 
We could address the\n> pg_dump concerns I had in [1] by tweaking what SET SESSION AUTHORIZATION\n> can do and then adjusting pg_dump to swap its usage of SET SESSION\n> AUTHORIZATION (do that just once, in response to --role) and SET ROLE\n> (do that per-object, to establish ownership).\n\nConcretely, I propose the following semantics:\n\n* SET SESSION AUTHORIZATION is allowed if your original login role\nis a member of the target role. If successful, it resets the role\nto \"NONE\", ie session authorization and effective role both become\nthe stated role.\n\n* SET ROLE is allowed if your session authorization is a member\nof the target role. If successful, it sets the effective role to\nthe target role. SET ROLE NONE resets effective role to the\ncurrent session authorization.\n\nThis is the same behavior we have now for SET ROLE. The difference\nfor SET SESSION AUTHORIZATION is that currently that requires your\nlogin role to be superuser or equal to the target role, so the\nabove is a strictly weaker check.\n\nThe reason this is interesting is that currently, if you log in\nas somebody who isn't superuser but is allowed to become superuser\n(ie, has been granted a superuser role), you're not allowed to\nSET SESSION AUTHORIZATION to the superuser, only SET ROLE to it.\nAnd that in turn means that you can't necessarily SET ROLE to any\nrandom other userid, which is a weird restriction that breaks\nthe \"pg_restore --role\" use-case for this whole thing [1].\n\nI suppose it could be argued that that's a bug in the interpretation\nof role membership: arguably, if you're a member of some superuser\nrole, that ought to give you membership in anything else. IOW, a\nsuperuser's implicit membership in every role isn't transitive,\nand maybe it should be. But I'm not sure that I want to change that;\nit feels like doing so might have surprising side-effects.\n\nNote that AFAICS, this is just as spec-compliant as our current\nbehavior. 
The spec only constrains what SET ROLE does.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/11496.1581634533%40sss.pgh.pa.us\n\n\n", "msg_date": "Fri, 14 Feb 2020 18:43:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Standards compliance of SET ROLE / SET SESSION AUTHORIZATION" }, { "msg_contents": "On 02/14/20 18:43, Tom Lane wrote:\n\n> I suppose it could be argued that that's a bug in the interpretation\n> of role membership: arguably, if you're a member of some superuser\n> role, that ought to give you membership in anything else. IOW, a\n> superuser's implicit membership in every role isn't transitive,\n> and maybe it should be. But I'm not sure that I want to change that;\n> it feels like doing so might have surprising side-effects.\n\nI have a tendency to create roles like postgres_assumable or\ndba_assumable, which are themselves members of the indicated\nroles, but without rolinherit, and then grant those to my own\nrole. That way in my day to day faffing about, I don't get to\nmake superuser-powered mistakes, but I can 'set role postgres'\nwhen needed.\n\nWould it make sense for a proposed transitive superuser-membership-\nin-everything also to stop at a role without rolinherit? Clearly\nit would just add one extra step to 'set role anybody', but sometimes\none extra step inspires a useful extra moment of thought.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 14 Feb 2020 19:40:06 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Standards compliance of SET ROLE / SET SESSION AUTHORIZATION" }, { "msg_contents": "On Sat, 15 Feb 2020 at 05:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Chapman Flack <chap@anastigmatix.net> writes:\n> > On 2/14/20 4:01 PM, Tom Lane wrote:\n> >> ... 
A protocol-level message\n> >> to set session auth could also be possible, of course.\n>\n> > I'll once again whimper softly and perhaps ineffectually that an\n> > SQL-exposed equivalent like\n>\n> > SET SESSION AUTHORIZATION foo WITH RESET COOKIE 'lkjhikuhoihkihlj';\n>\n> > would seem to suit the same purpose, with the advantage of being\n> > immediately usable by any kind of front- or middle-end code the\n> > instant there is a server version that supports it, without having\n> > to wait for something new at the protocol level to trickle through\n> > to n different driver implementations.\n>\n> Yeah, I'm not that thrilled with the idea of a protocol message\n> that's not equivalent to any SQL-level functionality, either.\n>\n> But the immediate point here is that I think we could get away with\n> playing around with SET SESSION AUTHORIZATION's semantics. Or,\n> seeing that that's just syntactic sugar for \"SET session_authorization\",\n> we could invent some new GUCs that allow control over this, rather\n> than new syntax.\n\nBased on the argument given here I tend to agree. And I've advocated\nstrongly for this in the past because poolers really need it.\n\nMy main issue with using SET SESSION AUTHORIZATION is that it requires\nthe pooler-user to be a superuser and gives the pooler total trust to\nbecome any and all roles on the Pg instance. That's a significant\ndownside, as it'd be preferable for the pooler to have no way to\nbecome superuser and to confine its role access.\n\nSET ROLE on the other hand offers a nice way to constrain the\navailable roles that a session user can ever attain. But as noted\nabove, has standards compliance constraints.\n\nBecause S-S-A isn't currently allowed as non-superuser, we can extend\nwithout breaking BC since we're free to define totally new semantics\nfor non-superuser invocation of S-S-A. 
So long as we don't restrict\nthe currently-allowed S-S-A to self anyway.\n\nI think the truly ideal semantics are somewhere between S-S-A and SET\nROLE, and rely on the separation of *authorization* from\n*authentication*, something Pg doesn't offer much of at the moment.\n\nI suggest something like:\n\n* A new GRANT ROLE AUTHORIZATION FOR <<role>> TO <<grantee>>. This\ngrants the right for a non-superuser <<grantee>> to SET SESSION\nAUTHORIZATION to <<role>>, much like our GRANT <<role>> TO <<grantee>>\nworks for granting SET ROLE and inheritance. But granting SESSION\nAUTHORIZATION would not allow SET ROLE and would not inherit rights,\nit'd be a separate catalog with separate membership query functions\netc.\n* (Some more detail is needed to handle granting, and granting to,\nroles that have member-roles, since we'd want to control ).\n* SET SESSION AUTHORIZATION is extended to allow a non-superuser to\nS-S-A to any role it been granted appropriate rights for.\n* Pooler *authenticates* as a non-superuser pooler user, establishing\na normal session as the pooler login user.\n* Pooler authenticates clients using appropriate pooler-defined\nmethods then does a protocol-level SET SESSION AUTHORIZATION to the\nclient's authenticated role. 
If a non-empty reset cookie is provided\nin the S-S-A protocol message then a matching reset cookie must be\nsent in any subsequent S-S-A or R-S-A messages or queries, otherwise\nthey fail with permission-denied.\n* Pooler proxies client access to session like usual, with no need to\nspecially filter.\n* When the client releases the session, pooler does a protocol-level\nRESET SESSION AUTHORIZATION to the pooler user, supplying the reset\ncookie it gave at S-S-A time.\n\n\n\n>\n> regards, tom lane\n>\n>\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n", "msg_date": "Mon, 17 Feb 2020 12:22:39 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Standards compliance of SET ROLE / SET SESSION AUTHORIZATION" } ]
[ { "msg_contents": "In general, the variable LN_S is 'ln -s', however, LN_S changes to 'cp\n-pR' if configure finds the file system does not support symbolic\nlinks.\n\nI was playing with 'ln' for some other reason where I symbolic linked\nit to '/bin/false'. That revealed the failure for\nsrc/backend/Makefile. Greping for 'ln -s' revealed three places it's\nused. Attaching the patch to fix the same.", "msg_date": "Fri, 14 Feb 2020 16:30:05 -0800", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": true, "msg_subject": "Use LN_S instead of \"ln -s\" in Makefile" }, { "msg_contents": "Ashwin Agrawal <aagrawal@pivotal.io> writes:\n> In general, the variable LN_S is 'ln -s', however, LN_S changes to 'cp\n> -pR' if configure finds the file system does not support symbolic\n> links.\n> I was playing with 'ln' for some other reason where I symbolic linked\n> it to '/bin/false'. That revealed the failure for\n> src/backend/Makefile. Greping for 'ln -s' revealed three places it's\n> used. Attaching the patch to fix the same.\n\nHm. So, these oversights are certainly bugs in narrow terms. However,\nthey've all been like that for a *long* time --- the three instances\nyou found date to 2005, 2006, and 2008 according to \"git blame\".\nThe complete lack of complaints shows that nobody cares. So I think\na fairly strong case could be made for going the other way, ie\ns/$(LN_S)/ln -s/g and get rid of the configure-time cycles wasted in\nsetting up that variable. Otherwise I fear somebody will \"break\"\nit again soon, and it will stay \"broken\" for another 15 years till\nsomeone happens to notice. 
We have better things to do than spend\nour time maintaining such nonfunctional differences.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Feb 2020 19:57:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use LN_S instead of \"ln -s\" in Makefile" }, { "msg_contents": "On Fri, Feb 14, 2020 at 4:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ashwin Agrawal <aagrawal@pivotal.io> writes:\n> > In general, the variable LN_S is 'ln -s', however, LN_S changes to 'cp\n> > -pR' if configure finds the file system does not support symbolic\n> > links.\n> > I was playing with 'ln' for some other reason where I symbolic linked\n> > it to '/bin/false'. That revealed the failure for\n> > src/backend/Makefile. Greping for 'ln -s' revealed three places it's\n> > used. Attaching the patch to fix the same.\n>\n> Hm. So, these oversights are certainly bugs in narrow terms. However,\n> they've all been like that for a *long* time --- the three instances\n> you found date to 2005, 2006, and 2008 according to \"git blame\".\n> The complete lack of complaints shows that nobody cares. So I think\n> a fairly strong case could be made for going the other way, ie\n> s/$(LN_S)/ln -s/g and get rid of the configure-time cycles wasted in\n> setting up that variable.\n>\n\nI accede to it.\n\n", "msg_date": "Fri, 14 Feb 2020 17:29:01 -0800", "msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Use LN_S instead of \"ln -s\" in Makefile" }, { "msg_contents": "Ashwin Agrawal <aagrawal@pivotal.io> writes:\n> On Fri, Feb 14, 2020 at 4:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hm. So, these oversights are certainly bugs in narrow terms. However,\n>> they've all been like that for a *long* time --- the three instances\n>> you found date to 2005, 2006, and 2008 according to \"git blame\".\n>> The complete lack of complaints shows that nobody cares. So I think\n>> a fairly strong case could be made for going the other way, ie\n>> s/$(LN_S)/ln -s/g and get rid of the configure-time cycles wasted in\n>> setting up that variable.\n\n> I accede to it.\n\nOh ... 2005 was just the last time anybody touched that particular\nline in backend/Makefile. Further digging shows that we've been\ninstalling the postmaster -> postgres symlink with raw \"ln -s\"\nclear back to the Postgres95 virgin sources. I didn't bother to\nchase down the oldest instances of the other two cases.\n\n(Man, \"git blame\" is such a great tool for software archaeology.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Feb 2020 20:46:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use LN_S instead of \"ln -s\" in Makefile" }, { "msg_contents": "On 2/15/20 1:57 AM, Tom Lane wrote:\n> Hm. So, these oversights are certainly bugs in narrow terms. 
However,\n> they've all been like that for a *long* time --- the three instances\n> you found date to 2005, 2006, and 2008 according to \"git blame\".\n> The complete lack of complaints shows that nobody cares. So I think\n> a fairly strong case could be made for going the other way, ie\n> s/$(LN_S)/ln -s/g and get rid of the configure-time cycles wasted in\n> setting up that variable.\n\nHere is a patch which does that.\n\nAndreas", "msg_date": "Sat, 15 Feb 2020 15:58:22 +0100", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: Use LN_S instead of \"ln -s\" in Makefile" }, { "msg_contents": "Andreas Karlsson <andreas@proxel.se> writes:\n> On 2/15/20 1:57 AM, Tom Lane wrote:\n>> Hm. So, these oversights are certainly bugs in narrow terms. However,\n>> they've all been like that for a *long* time --- the three instances\n>> you found date to 2005, 2006, and 2008 according to \"git blame\".\n>> The complete lack of complaints shows that nobody cares. So I think\n>> a fairly strong case could be made for going the other way, ie\n>> s/$(LN_S)/ln -s/g and get rid of the configure-time cycles wasted in\n>> setting up that variable.\n\n> Here is a patch which does that.\n\nI was just about to push that when I noticed something that gave me\npause: the \"ln -s\" in backend/Makefile is wrapped in \n\tifneq ($(PORTNAME), win32)\nThis means there's one popular platform where we *don't* know for\nsure that people aren't building in environments that don't support\n\"ln -s\". (The other two direct uses that Ashwin found are in test\ncode that a non-developer person very likely would never exercise,\nso I don't think they prove much.)\n\nI'm still on balance inclined to push this. 
We have no buildfarm\nanimals exercising the case (they all report \"ln -s\" as supported,\neven the Windows animals), and these days I think most people who\nare building for Windows use the MSVC scripts not the makefiles.\n\nMoreover, $(LN_S) is a horribly error-prone macro, because of the\nfact that \"ln -s\" and \"cp\" don't have the same semantics for the\nsource argument. The Autoconf manual says\n\n If you make a link in a directory other than the current\n directory, its meaning depends on whether `ln' or `ln -s' is used.\n To safely create links using `$(LN_S)', either find out which\n form is used and adjust the arguments, or always invoke `ln' in\n the directory where the link is to be created.\n\n In other words, it does not work to do:\n $(LN_S) foo /x/bar\n\n Instead, do:\n\n (cd /x && $(LN_S) foo bar)\n\nSo Ashwin's original patch would, far from fixing the code for\nsymlink-less systems, just have caused them to fail in a different way.\nI could do without having that sort of gotcha in our build system,\nespecially if the set of people it would help is so close to empty,\nand most especially when we have no testing that would catch mistakes.\n\nNonetheless, it looks like the current makefiles do work, for moderate\nvalues of \"work\", on non-symlink Windows. If we apply this patch\nthen they won't.\n\nAn alternative we could consider is to go back to Ashwin's patch,\nafter fixing it to use the \"cd && ln\" approach. I noticed though\nwhile chasing through the git history that that approach was in place\nthere originally and was specifically rejected in commit ccca61b5f.\nThat commit is quite old enough to drink, so maybe the underlying\nconcern no longer applies --- certainly we're using \"cd && ln\"\nelsewhere. 
But this seems like another point in favor of the whole\nbusiness being too complex/error-prone to want to support.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 15 Feb 2020 13:29:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use LN_S instead of \"ln -s\" in Makefile" } ]
[ { "msg_contents": "Execute this:\n\nselect jsonb_pretty(jsonb_build_object(\n 'a'::varchar, 1.7::numeric,\n 'b'::varchar, 'dog'::varchar,\n 'c'::varchar, true::boolean\n ))\n\nIt produces the result that I expect:\n\n { +\n \"a\": 1.7, +\n \"b\": \"dog\",+\n \"c\": true +\n }\n\nNotice that the numeric, text, and boolean primitive values are properly rendered with the text value double-quoted and the numeric and boolean values unquoted.\n\nNow execute this supposed functional equivalent:\n\nselect jsonb_pretty(jsonb_object(\n '{a, 17, b, \"dog\", c, true}'::varchar[]\n ))\n\nIt is meant to be a nice alternative when you want to build an object (rather than an array) because the syntax is less verbose.\n\nHowever, it gets the wrong answer, thus:\n\n { +\n \"a\": \"17\", +\n \"b\": \"dog\",+\n \"c\": \"true\"+\n }\n\nNow, the numeric value and the boolean value are double-quoted—in other words, they have been implicitly converted to JSON primitive text values.\n\nDo you agree that this is a bug?\n\nNotice that I see this behavior in vanilla PostgreSQL 11.2 and in YugabyteDB Version 2.0.11.0. See this blogpost:\n\n“Distributed PostgreSQL on a Google Spanner Architecture—Query Layer”\nhttps://blog.yugabyte.com/distributed-postgresql-on-a-google-spanner-architecture-query-layer/\n\nYugabyteDB uses the PostgreSQL source code for its SQL upper half.\n\nRegards, Bryn Llewellyn, Yugabyte\n\n\n\n", "msg_date": "Fri, 14 Feb 2020 18:21:54 -0800", "msg_from": "Bryn Llewellyn <bryn@yugabyte.com>", "msg_from_op": true, "msg_subject": "jsonb_object() seems to be buggy. jsonb_build_object() is good." 
}, { "msg_contents": "On 15/02/2020 03:21, Bryn Llewellyn wrote:\n> Now execute this supposed functional equivalent:\n> \n> select jsonb_pretty(jsonb_object(\n> '{a, 17, b, \"dog\", c, true}'::varchar[]\n> ))\n> \n> It is meant to be a nice alternative when you want to build an object (rather than an array) because the syntax is less verbose.\n> \n> However, it gets the wrong answer, thus:\n> \n> { +\n> \"a\": \"17\", +\n> \"b\": \"dog\",+\n> \"c\": \"true\"+\n> }\n> \n> Now, the numeric value and the boolean value are double-quoted—in other words, they have been implicitly converted to JSON primitive text values.\n\nThey haven't been implicitly converted, you gave an array of varchars.\nHow should it know that you don't want texts?\n\n> Do you agree that this is a bug?\nNo.\n-- \nVik Fearing\n\n\n", "msg_date": "Sat, 15 Feb 2020 03:28:07 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: jsonb_object() seems to be buggy. jsonb_build_object() is good." }, { "msg_contents": "This:\n\nselect jsonb_pretty(jsonb_build_object(\n 'a'::varchar, 1.7::numeric,\n 'b'::varchar, 'dog'::varchar,\n 'c'::varchar, true::boolean\n ))\n\nallows me to express what I want. That’s a good thing. Are you saying that this:\n\nselect jsonb_pretty(jsonb_object(\n '{a, 17, b, \"dog\", c, true}'::varchar[]\n ))\n\nsimply lacks that power of expression and that every item in the array is assumed to be intended to end up as a JSON text primitive value? In other words, do the double quotes around \"dog\" have no effect? 
That would be a bad thing—and it would limit the usefulness of the jsonb_object() function.\n\nThe doc (“Builds a JSON object out of a text array.”) is simply too terse to inform an answer to this question.\n\nOn 14-Feb-2020, at 18:28, Vik Fearing <vik@postgresfriends.org> wrote:\n\nOn 15/02/2020 03:21, Bryn Llewellyn wrote:\n> Now execute this supposed functional equivalent:\n> \n> select jsonb_pretty(jsonb_object(\n> '{a, 17, b, \"dog\", c, true}'::varchar[]\n> ))\n> \n> It is meant to be a nice alternative when you want to build an object (rather than an array) because the syntax is less verbose.\n> \n> However, it gets the wrong answer, thus:\n> \n> { +\n> \"a\": \"17\", +\n> \"b\": \"dog\",+\n> \"c\": \"true\"+\n> }\n> \n> Now, the numeric value and the boolean value are double-quoted—in other words, they have been implicitly converted to JSON primitive text values.\n\nThey haven't been implicitly converted, you gave an array of varchars.\nHow should it know that you don't want texts?\n\n> Do you agree that this is a bug?\nNo.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Fri, 14 Feb 2020 19:07:15 -0800", "msg_from": "Bryn Llewellyn <bryn@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: jsonb_object() seems to be buggy. jsonb_build_object() is good." }, { "msg_contents": "On 15/02/2020 04:07, Bryn Llewellyn wrote:\n> This:\n> \n> select jsonb_pretty(jsonb_build_object(\n> 'a'::varchar, 1.7::numeric,\n> 'b'::varchar, 'dog'::varchar,\n> 'c'::varchar, true::boolean\n> ))\n> \n> allows me to express what I want. That’s a good thing. Are you saying that this:\n> \n> select jsonb_pretty(jsonb_object(\n> '{a, 17, b, \"dog\", c, true}'::varchar[]\n> ))\n> \n> simply lacks that power of expression and that every item in the array is assumed to be intended to end up as a JSON text primitive value? 
In other words, do the double quotes around \"dog\" have no effect?\n\nThat is correct.\n\n> That would be a bad thing—and it would limit the usefulness of the jsonb_object() function.\n\nUse the long form if you need to mix datatypes.\n-- \nVik Fearing\n\n\n", "msg_date": "Sat, 15 Feb 2020 04:12:43 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: jsonb_object() seems to be buggy. jsonb_build_object() is good." }, { "msg_contents": "On Friday, February 14, 2020, Bryn Llewellyn <bryn@yugabyte.com> wrote:\n>\n> The doc (“Builds a JSON object out of a text array.”) is simply too terse\n> to inform an answer to this question.\n>\n\nIt does presume knowledge but it precisely defines the outcome:\n\nPostgreSQL arrays are typed and all members are of the same type. A text\narray’s members are all text.\n\nGiven the above knowledge the fact that the resultant json object contains\nexclusively text keys and text values directly follows.\n\nDavid J.", "msg_date": "Fri, 14 Feb 2020 20:19:40 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_object() seems to be buggy. jsonb_build_object() is good." }, { "msg_contents": "On Friday, February 14, 2020, Bryn Llewellyn <bryn@yugabyte.com> wrote:\n>\n>\n> select jsonb_pretty(jsonb_object(\n>     '{a, 17, b, \"dog\", c, true}'::varchar[]\n>     ))\n>\n> In other words, do the double quotes around \"dog\" have no effect? 
That\n> would be a bad thing—and it would limit the usefulness of the\n> jsonb_object() function.\n>\n\nThe double quotes serve a specific purpose, to allow values containing\ncommas to be treated as a single value (see syntax details for the exact\nrules) in the resulting array of text values. The fact you don’t have to\nquote the other strings is a convenience behavior of the feature.\n\nDavid J.", "msg_date": "Fri, 14 Feb 2020 20:24:58 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_object() seems to be buggy. jsonb_build_object() is good." }, { "msg_contents": "Thank you both, Vik, and David, for being so quick to respond. All is clear now. It seems to me that the price (giving up the ability to say explicitly what primitive JSON values you want) is too great to pay for the benefit (being able to build the semantic equivalent of a variadic list of actual arguments as text).\n\nSo I wrote my own wrapper for jsonb_build_array() and jsonb_build_object():\n\ncreate function my_jsonb_build(\n  kind in varchar,\n  variadic_elements in varchar)\n  returns jsonb\n  immutable\n  language plpgsql\nas $body$\ndeclare\n  stmt varchar :=\n    case kind\n     when 'array' then\n       'select jsonb_build_array('||variadic_elements||')'\n     when 'object' then\n       'select jsonb_build_object('||variadic_elements||')'\n    end;\n  j jsonb;\nbegin\n  execute stmt into j;\n  return j;\nend;\n$body$;\n\ncreate type t1 as(a int, b varchar);\n\n———————————————————————————————————\n— Test it.\n\nselect jsonb_pretty(my_jsonb_build(\n  'array',\n  $$\n    17::integer, 'dog'::varchar, true::boolean\n  $$));\n\nselect jsonb_pretty(my_jsonb_build(\n  'array',\n  $$\n    17::integer,\n    'dog'::varchar,\n    true::boolean,\n    (17::int, 'dog'::varchar)::t1\n  $$));\n\nselect jsonb_pretty(my_jsonb_build(\n  'object',\n  $$\n    'a'::varchar,  17::integer,\n    'b'::varchar,  'dog'::varchar,\n    'c'::varchar,  true::boolean\n  $$));\n\nIt produces the result that I want. And I’m prepared to pay the price of using $$ to avoid doubling up interior single quotes.\n\nOn 14-Feb-2020, at 19:24, David G. Johnston <david.g.johnston@gmail.com> wrote:\n\nOn Friday, February 14, 2020, Bryn Llewellyn <bryn@yugabyte.com <mailto:bryn@yugabyte.com>> wrote:\n\nselect jsonb_pretty(jsonb_object(\n    '{a, 17, b, \"dog\", c, true}'::varchar[]\n  ))\n\nIn other words, do the double quotes around \"dog\" have no effect? That would be a bad thing—and it would limit the usefulness of the jsonb_object() function.\n\nThe double quotes serve a specific purpose, to allow values containing commas to be treated as a single value (see syntax details for the exact rules) in the resulting array of text values. 
The fact you don’t have to quote the other strings is a convenience behavior of the feature.\n\nDavid J.\n\n\n", "msg_date": "Fri, 14 Feb 2020 21:06:02 -0800", "msg_from": "Bryn Llewellyn <bryn@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: jsonb_object() seems to be buggy. jsonb_build_object() is good." }, { "msg_contents": "\nOn 2/15/20 12:06 AM, Bryn Llewellyn wrote:\n> Thank you both, Vik, and David, for bing so quick to respond. All is\n> clear now. It seems to me that the price (giving up the ability to say\n> explicitly what primitive JSON values you want) is too great to pay\n> for the benefit (being able to build the semantic equivalent of a\n> variadic list of actual arguments as text.\n>\n> So I wrote my own wrapper for jsonb_build_array()\n> and jsonb_build_object():\n>\n> create function my_jsonb_build(\n>   kind in varchar,\n>   variadic_elements in varchar)\n>   returns jsonb\n>   immutable\n>   language plpgsql\n> as $body$\n> declare\n>   stmt varchar :=\n>     case kind\n>      when 'array' then\n>        'select jsonb_build_array('||variadic_elements||')'\n>      when 'object' then\n>        'select jsonb_build_object('||variadic_elements||')'\n>     end;\n>   j jsonb;\n> begin\n>   execute stmt into j;\n>   return j;\n> end;\n> $body$;\n>\n\nPlease don't top-post on PostgreSQL lists.  See\n<http://idallen.com/topposting.html>\n\n\nThe function above has many deficiencies, including lack of error\nchecking and use of 'execute' which will significantly affect\nperformance. Still, if it works for you, that's your affair.\n\n\nThese functions were written to accommodate PostgreSQL limitations. We\ndon't have a heterogenous array type. So json_object() will return an\nobject where all the values are strings, even if they look like numbers,\nbooleans etc. 
So json_object() will return an\nobject where all the values are strings, even if they look like numbers,\nbooleans etc. And indeed, this is shown in the documented examples.\njsonb_build_object and jsonb_build_array overcome that issue, but there\nthe PostgreSQL limitation is that you can't pass in an actual array as\nthe variadic element, again because we don't have heterogenous arrays.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sun, 16 Feb 2020 01:27:27 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_object() seems to be buggy. jsonb_build_object() is good." }, { "msg_contents": "Bryn Llewellyn wrote:\n\n> ...I wrote my own wrapper for jsonb_build_array()\n> and jsonb_build_object():\n> \n> create function my_jsonb_build(\n> kind in varchar,\n> variadic_elements in varchar)\n> returns jsonb\n> immutable\n> language plpgsql\n> as $body$\n> declare\n> stmt varchar :=\n> case kind\n> when 'array' then\n> 'select jsonb_build_array('||variadic_elements||')'\n> when 'object' then\n> 'select jsonb_build_object('||variadic_elements||')'\n> end;\n> j jsonb;\n> begin\n> execute stmt into j;\n> return j;\n> end;\n> $body$;\n> \n\nAndrew replied\n\nPlease don't top-post on PostgreSQL lists. See\n<http://idallen.com/topposting.html>\n\nThe function above has many deficiencies, including lack of error\nchecking and use of 'execute' which will significantly affect\nperformance. Still, if it works for you, that's your affair.\n\nThese functions were written to accommodate PostgreSQL limitations. We\ndon't have a heterogenous array type. So json_object() will return an\nobject where all the values are strings, even if they look like numbers,\nbooleans etc. 
And indeed, this is shown in the documented examples.\njsonb_build_object and jsonb_build_array overcome that issue, but there\nthe PostgreSQL limitation is that you can't pass in an actual array as\nthe variadic element, again because we don't have heterogenous arrays.\n\nBryn replies:\n\nAh… I didn’t know about the bottom-posting rule.\n\nOf course I didn’t show error handling. Doing so would have increased the source text size and made it harder to appreciate the point.\n\nI used dynamic SQL because I was modeling the use case where on-the-fly analysis determines what JSON object or array must be built—i.e. the number of components and the datatype of each. It’s nice that jsonb_build_object() and jsonb_build_array() accommodate this dynamic need by being variadic. But I can’t see a way to wrote the invocation using only static code.\n\nWhat am I missing?\n\n", "msg_date": "Sun, 16 Feb 2020 10:40:59 -0800", "msg_from": "Bryn Llewellyn <bryn@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: jsonb_object() seems to be buggy. jsonb_build_object() is good." }, { "msg_contents": "\nOn 2/16/20 1:40 PM, Bryn Llewellyn wrote:\n>\n> Andrew replied\n>\n> The function above has many deficiencies, including lack of error\n> checking and use of 'execute' which will significantly affect\n> performance. Still, if it works for you, that's your affair.\n>\n> These functions were written to accommodate PostgreSQL limitations. We\n> don't have a heterogenous array type. So json_object() will return an\n> object where all the values are strings, even if they look like numbers,\n> booleans etc. And indeed, this is shown in the documented examples.\n> jsonb_build_object and jsonb_build_array overcome that issue, but there\n> the PostgreSQL limitation is that you can't pass in an actual array as\n> the variadic element, again because we don't have heterogenous arrays.\n>\n> Bryn replies:\n>\n>\n> Of course I didn’t show error handling. 
Doing so would have increased the source text size and made it harder to appreciate the point.\n>\n> I used dynamic SQL because I was modeling the use case where on-the-fly analysis determines what JSON object or array must be built—i.e. the number of components and the datatype of each. It’s nice that jsonb_build_object() and jsonb_build_array() accommodate this dynamic need by being variadic. But I can’t see a way to wrote the invocation using only static code.\n>\n> What am I missing?\n\n\n\nProbably not much, These functions work best from application code which\nbuilds up the query. But if you do that and then call a function which\nin turn calls execute you get a double whammy of interpreter overhead.\nI'm also not a fan of functions that in effect take bits of SQL text and\ninterpolate them into a query in plpgsql, like your query does.\n\n\njson_object() is meant to be an analog of the hstore() function that\ntakes one or two text arrays and return an hstore. Of course, it doesn't\nhave the issue you complained about, since all values in an hstore are\nstrings.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sun, 16 Feb 2020 16:27:13 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_object() seems to be buggy. jsonb_build_object() is good." }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n\nBryn Llewellyn wrote:\n> \n> Andrew replied\n> \n> The function above has many deficiencies, including lack of error\n> checking and use of 'execute' which will significantly affect\n> performance. Still, if it works for you, that's your affair.\n> \n> These functions were written to accommodate PostgreSQL limitations. We\n> don't have a heterogenous array type. 
So json_object() will return an\n> object where all the values are strings, even if they look like numbers,\n> booleans etc. And indeed, this is shown in the documented examples.\n> jsonb_build_object and jsonb_build_array overcome that issue, but there\n> the PostgreSQL limitation is that you can't pass in an actual array as\n> the variadic element, again because we don't have heterogenous arrays.\n> \n> Bryn replies:\n> \n> \n> Of course I didn’t show error handling. Doing so would have increased the source text size and made it harder to appreciate the point.\n> \n> I used dynamic SQL because I was modeling the use case where on-the-fly analysis determines what JSON object or array must be built—i.e. the number of components and the datatype of each. It’s nice that jsonb_build_object() and jsonb_build_array() accommodate this dynamic need by being variadic. But I can’t see a way to wrote the invocation using only static code.\n> \n> What am I missing?\n\n\n\nProbably not much, These functions work best from application code which\nbuilds up the query. But if you do that and then call a function which\nin turn calls execute you get a double whammy of interpreter overhead.\nI'm also not a fan of functions that in effect take bits of SQL text and\ninterpolate them into a query in plpgsql, like your query does.\n\n\njson_object() is meant to be an analog of the hstore() function that\ntakes one or two text arrays and return an hstore. Of course, it doesn't\nhave the issue you complained about, since all values in an hstore are\nstrings.\n\nBryn replied:\n\nWe don’t yet support the hstore() function in YugabyteDB. So, meanwhile, I see no alternative to the approach that I illustrated—whatever that implies for doing things of which you’re not a fan. That’s why I asked “ What am I missing?”. 
But your “ Probably not much” seems, then, to force my hand.\n\nB.t.w., you earlier said “The double quotes [around “dog”] serve a specific purpose, to allow values containing commas to be treated as a single value (see syntax details for the exact rules) in the resulting array of text values.” But this test shows that they are not needed for that purpose:\n\nselect jsonb_pretty(jsonb_object(\n '{a, 17, b, dog house, c, true}'::varchar[]\n ))\n\nThis is the result:\n\n { +\n \"a\": \"17\", +\n \"b\": \"dog house\",+\n \"c\": \"true\" +\n }\n\nThe commas are sufficient separators.\n\nIt seems to me, therefore, that writing the double quotes gives the wrong message: they make it look like you are indeed specifying a text value rather than a numeric or integer value. But we know that the double quotes do *not* achieve this.\n\n\n\n\n\n", "msg_date": "Sun, 16 Feb 2020 16:25:20 -0800", "msg_from": "Bryn Llewellyn <bryn@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: jsonb_object() seems to be buggy. jsonb_build_object() is good." }, { "msg_contents": "\nOn 2/16/20 7:25 PM, Bryn Llewellyn wrote:\n>\n> B.t.w., you earlier said “The double quotes [around “dog”] serve a specific purpose, to allow values containing commas to be treated as a single value (see syntax details for the exact rules) in the resulting array of text values.” But this test shows that they are not needed for that purpose:\n\n\nI didn't say that. Someone else did.\n\n\n>\n> select jsonb_pretty(jsonb_object(\n> '{a, 17, b, dog house, c, true}'::varchar[]\n> ))\n>\n> This is the result:\n>\n> { +\n> \"a\": \"17\", +\n> \"b\": \"dog house\",+\n> \"c\": \"true\" +\n> }\n>\n> The commas are sufficient separators.\n>\n> It seems to me, therefore, that writing the double quotes gives the wrong message: they make it look like you are indeed specifying a text value rather than a numeric or integer value. 
But we know that the double quotes do *not* achieve this.\n>\n\n\nNo, you haven't understood what they said. If the field value contains a\ncomma it needs to be quoted. But none of the fields in your example do.\nIf your field were \"dog,house\" instead of \"dog house\" it would need to\nbe quoted. This had nothing to do with json, BTW, it's simply from the\nrules for array literals.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sun, 16 Feb 2020 19:40:12 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_object() seems to be buggy. jsonb_build_object() is good." }, { "msg_contents": "On 16-Feb-2020, at 16:40, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n\nOn 2/16/20 7:25 PM, Bryn Llewellyn wrote:\n> \n> B.t.w., you earlier said “The double quotes [around “dog”] serve a specific purpose, to allow values containing commas to be treated as a single value (see syntax details for the exact rules) in the resulting array of text values.” But this test shows that they are not needed for that purpose:\n\n\nI didn't say that. Someone else did.\n\n\n> \n> select jsonb_pretty(jsonb_object(\n> '{a, 17, b, dog house, c, true}'::varchar[]\n> ))\n> \n> This is the result:\n> \n> { +\n> \"a\": \"17\", +\n> \"b\": \"dog house\",+\n> \"c\": \"true\" +\n> }\n> \n> The commas are sufficient separators.\n> \n> It seems to me, therefore, that writing the double quotes gives the wrong message: they make it look like you are indeed specifying a text value rather than a numeric or integer value. But we know that the double quotes do *not* achieve this.\n> \n\n\nNo, you haven't understood what they said. If the field value contains a\ncomma it needs to be quoted. But none of the fields in your example do.\nIf your field were \"dog,house\" instead of \"dog house\" it would need to\nbe quoted. 
This had nothing to do with json, BTW, it's simply from the\nrules for array literals.\n\nBryn replied:\n\nGot it! Thanks for helping me out, Andrew.\n\n", "msg_date": "Sun, 16 Feb 2020 19:05:26 -0800", "msg_from": "Bryn Llewellyn <bryn@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: jsonb_object() seems to be buggy. jsonb_build_object() is good." } ]
[ { "msg_contents": "Hi,\n\nFeel free to send out the email blast.\n\nThere are a number of other channels. postgres slack, postgres mailing\nlists, @PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>, twitter\nwith Postgres tag\n\nCheers,\n\n\nDave Cramer\n\n\nOn Sat, 15 Feb 2020 at 19:44, 'Meetup Messages' via Meetup <\nmeetup@postgresql.us> wrote:\n\n> ~~~ Respond by replying directly to this email ~~~\n> [image: Meetup]\n> <http://meet.meetup.com/ls/click?upn=yBf4llw5PeaY7leriFwBBkipzLsJ7uXZdea5ZSOL1NIB7ZcW8mxKXM9a8Vncw2bQwXghq9sJ-2FROf-2FehyGhp543sb3pzFAhk0LPjJc1hcWdG7G60fVcUyYlajlYb5vUXnnCivrSi2Y4jZlqmS0jAD4Q-3D-3DBkZW_acu8CQBeIyL8F4cSRPf7fOIGjbqg5QN-2BFCPlkXeWD3cq-2FiRfVI-2B0m9cr6ymyJp0QIX0RMmrn4X9zfsaH9mF-2FuvejGtDASocoMg0UKbCojQSs3iK0Yzp5wFp2H5IZ2u7W-2FFE6hZHBWstP3DaFS1sE-2BV-2BeBy7xsKX9Fqy2FNVOS3tDXsLDiWvbCo6Ad4aWY1ilxCQUaaI8XraVN3EBErOJI7R33cTHlZw8V5S4YsndtlndeQI7VlXhkCQUxRG9Nk4r>\n> Priscilla Ip\n> <http://meet.meetup.com/ls/click?upn=yBf4llw5PeaY7leriFwBBkipzLsJ7uXZdea5ZSOL1NKYEGA-2Bm2q7cVmGDfxg27ILJ86qs7pcrIvCXDRbPuDEy17I-2FBHQML-2B-2F9GrPAwddQjc5DxE-2BrKCJJhzeZ56wVtXVZOc8zRKEuCdGZRpVxCjMSKema1pWHyU6KRKe-2FvHyjZFKQGR710fiTNBntVzw9qXECbjsHl7M71KRVCnrFN18Jw-3D-3D-eyI_acu8CQBeIyL8F4cSRPf7fOIGjbqg5QN-2BFCPlkXeWD3cq-2FiRfVI-2B0m9cr6ymyJp0QmA7rOSKryc4ov983774QF1spsq0sUbTc45u6SeMfFViPpeTYe8VJx-2FnP8p44iBDlAsa70OaKnMG1VSHQjSZeCteVbW0QqZ8WsmgkdBDbGHVGFXSlYd1pzHrza-2FzrEoDmloaCgyDkl4pXDznCkwVCrFk5n4Q1-2B9q0-2Fa-2BUwhtT4sfimafJ66NaiTw4w3kj0QgF>\n> Hi, my name is Priscilla and I am a McGill student in my final year of\n> electrical engineering. As a part of my capstone design project, I am\n> conducting a user study on the impact of collaborative features in database\n> tools along with my partner and the SoftwareREBELs research group at McGill\n> University. 
We are looking for participants for the study and were hoping\n> to send out an email blast to your group members if that is possible.\n>\n> Please see the details below and feel free to contact me with any\n> questions. Thank you for your consideration and I look forward to hearing\n> from you.\n>\n> The study will evaluate the impact of collaborative features on\n> database-related tasks but participants with all levels of experience with\n> databases are welcome! Participation involves a 45 minute in-person session\n> on McGill campus where you’ll complete simple database-related tasks. Those\n> attending the session will be entered in a raffle for $20 Amazon gift cards\n> where the odds of winning are about 1 in 5.\n> Sign up for the study here:\n> https://docs.google.com/forms/d/e/1FAIpQLScDaa8J...\n> <http://meet.meetup.com/ls/click?upn=yBf4llw5PeaY7leriFwBBkpoecazZDF4VVCvSvRUvE5C5WRqvh02wWOkGH294kxwEyE-2FiVOyNqtWFCBu6ucO-2BF1teqE2w8Bh-2BaYY-2BQElwf2A0u1vJBkKSScLEyBuEEpWvx-2FFJ0ejcFY2euKCbcDZpg-3D-3DUKj1_acu8CQBeIyL8F4cSRPf7fOIGjbqg5QN-2BFCPlkXeWD3cq-2FiRfVI-2B0m9cr6ymyJp0Q9eozqwJVNbLyHC-2FVlepdIKLCfKlvQiUjPEgJB4o2ltd8-2FJN1VgBv-2FbbhCEA5-2F1v4W-2Bxzen-2FJuIrpAhQ-2FFkOWKKVaa5sO6nBqqP-2B4-2B5mWjR9qbzN6bur2X3Elno3ps05l0dDEGQ014E85aijimtI8uaLRKU2jjJoZmHGOO-2Fha9fovaaoPXaxd0fR2pCRK1gWG>\n> ­\n> February 15, 2020 4:43 PM\n> Reply directly to this email or respond on Meetup.com\n> <http://meet.meetup.com/ls/click?upn=yBf4llw5PeaY7leriFwBBkipzLsJ7uXZdea5ZSOL1NKYEGA-2Bm2q7cVmGDfxg27ILJ86qs7pcrIvCXDRbPuDEy17I-2FBHQML-2B-2F9GrPAwddQjc5DxE-2BrKCJJhzeZ56wVtXVZOc8zRKEuCdGZRpVxCjMSKema1pWHyU6KRKe-2FvHyjZFKQGR710fiTNBntVzw9qXECbjsHl7M71KRVCnrFN18Jw-3D-3DFc4D_acu8CQBeIyL8F4cSRPf7fOIGjbqg5QN-2BFCPlkXeWD3cq-2FiRfVI-2B0m9cr6ymyJp0Qw9Q1VCT6JGS7q-2BBXLpMgL5v6oenVhlzSjIkrMaKKbXgzLboDxPULttLFzEd0GZ-2BFFt5lMVAV-2FZE-2BFG6AIPAKU5MMBnWQvGgmpDYuyrmI-2Fv7eAeZ9TR9C1VVIVWeFv2x5DEROx4VYdk5MLwOOo5Mi1PPjt6oOjPL71SuZQtlOpBfEVswpiHlddfjhx1BLqHv2>\n>\n> You've received this notification because 
Priscilla Ip\n> <http://meet.meetup.com/ls/click?upn=yBf4llw5PeaY7leriFwBBkipzLsJ7uXZdea5ZSOL1NI00kOswBWpvBpWFrG-2BcrnUr0nWC9lrRsW020oTy2d9nvLglOxWtkB012ZkD-2BY8M-2FQ7g1-2BPwlhfyL0oJ6lfwRYNTPYGI-2FXMTGB8fqKYsY2qnzt4EX0lhEBLa2eguwFU-2F5E-3DbfSf_acu8CQBeIyL8F4cSRPf7fOIGjbqg5QN-2BFCPlkXeWD3cq-2FiRfVI-2B0m9cr6ymyJp0Qb4-2FtkOnaW5JiXgxczLDArcNG-2BNW7GKGEzX7y-2BY-2FONMezABzY1gVXEVs4AjlS0jVL0pFKQOrGMpb7wjp0mE6HxEca3wmiR6V5Ufo5zK0eAHawPybMmWrP81eEMSli-2F7ymbni4D0O-2F7r3gFxqray2q0a8XNMJpK-2BWPp14-2FQqUHDKJ7-2FcUtnzbYWFaV0OvQ66a7>\n> has contacted you on Meetup.\n>\n> Never miss a last-minute change. Get the app.\n> [image: iPhone App Store]\n> <http://meet.meetup.com/ls/click?upn=yBf4llw5PeaY7leriFwBBkipzLsJ7uXZdea5ZSOL1NI54YRGSbjWUPfAQ-2BcKGEqaxK4csVqoiXcf3tnhqfHL-2Fs8WwuZ2xf70MJ1x4B1rUietAaiCeZ3U01X5ibPp-2Bsgh3JmFKV1PXB4yOGR9-2BbiGaolJNUePFW5EKXOaE8CMJbqbEZA915Y61rOaMipL8NjPS0DewTmwIVnjnkmsw6WL95D-2FCKw4fVbNyQnF90NoXzskAWwXxEPoV-2Bl06CD74g7Dy-2B0Wv4p1K4LqhIgWHy4FaSMw4Ss-2FC6rB1F5ArjeEC-2B544s0MbK70Y3o2A8lOTKH3WLi7_acu8CQBeIyL8F4cSRPf7fOIGjbqg5QN-2BFCPlkXeWD3cq-2FiRfVI-2B0m9cr6ymyJp0QV33Z9Z3zb-2FCkWtbTN25cffbFHulfSUDXwAYVC6LMRh8VvTMjwvors4AzKiVpncoO4NvnWSOPu8G71gtbTHR6YZ01Uy7wXudc8EdqiQQkCeYsxc-2BFc3Az9fw7qKB99onxThs2-2FvX3AKjLvo3RT-2FdzMhuwkVablNv-2BdrQHVM5v0ZKP-2FdVCs1wqUAaVFkAQ9MyZ> [image:\n> Google Play]\n> <http://meet.meetup.com/ls/click?upn=yBf4llw5PeaY7leriFwBBkipzLsJ7uXZdea5ZSOL1NI54YRGSbjWUPfAQ-2BcKGEqaxK4csVqoiXcf3tnhqfHL-2Fs8WwuZ2xf70MJ1x4B1rUietAaiCeZ3U01X5ibPp-2Bsgh3JmFKV1PXB4yOGR9-2BbiGaolJNUePFW5EKXOaE8CMJbr-2B484BG9sdfdrqD-2BzdFWyuUoiAtDUtAGQMsTXhWpsmDt-2B-2B6kqvFnASMYWcBcCczxKr-2BmSndeROV5RiWy0ISAzE2bAtMYVMue7jq1nOEbI1DLj5lLoX1fFnWdwUULEyVtImCOebFmRnIpraP0QNxPhdDhkH_acu8CQBeIyL8F4cSRPf7fOIGjbqg5QN-2BFCPlkXeWD3cq-2FiRfVI-2B0m9cr6ymyJp0QSrsZnvuNiCExJnpwNgcGkX2VfUiTCBkDi8yaQQ5xn4M0rfEXv45IytXi6RSDgqsv-2Bm8Iof5ltLUf5-2FZrueIkpEXHYPaXVm18GEKQinReVqwr5-2BsJ4enb9k0sc4BzH9DVgDyyAQAf65gSmh7l1LuWjBX8ZDS5Rs-2FmAVpxVH02wprcyLtW1UzcIUMtyANuF6-2FQ>\n>\n> You're getting 
this message because your Meetup account is connected to\n> this email address.\n>\n> Unsubscribe\n> <http://meet.meetup.com/ls/click?upn=yBf4llw5PeaY7leriFwBBkipzLsJ7uXZdea5ZSOL1NItXTEN57u-2FeV-2BqFW-2FA9-2FkO5DkZn5YEJapUGjvbyyXj-2F-2B6viW-2BDV4ab15gQbQOHtNxTDKLpFyGnbpIJPI-2Fm9nyn-2Fd0VBBI7M9JieswGDCalVG9qH4KXxMFJK6e3xJU04YhNVwDHpbmPC-2Fx-2FiuRMfMq-2B0X-2BZCj1usZ0TDUleKJzDM8ORmh91RT4QXkY080wnl6VCu4z5f9c-2FcjqQFtQoWsDU-2FxZY0gpL7nb4gwScbBnsqYnUJZMxDko92SSnxfuL1Jv5fOQ0CHYY9-2FZsJXI0JhaaUSqldGC20rm1fGFUV1aSYQ-3D-3DRuQb_acu8CQBeIyL8F4cSRPf7fOIGjbqg5QN-2BFCPlkXeWD3cq-2FiRfVI-2B0m9cr6ymyJp0QASzPFXfVeta-2BPy8QMNow9mJ5c9kX-2Bgy-2BpiLzmJoXYnfflkA4dHFRaFpR760pYi69zvbK-2BSEgns-2FBUcDr49ed-2BZWqQacQpURKA-2Bn2-2BiFEySrSLXwoRFZp-2BmqhgCknPGYqvIrg5TelkTRDWuaZBqwKllQEzrhIRbuWVoHgC2LJGcld-2FAtfyClZoJOaPL7jQaej>\n> from this type of email. Manage your settings\n> <http://meet.meetup.com/ls/click?upn=yBf4llw5PeaY7leriFwBBkipzLsJ7uXZdea5ZSOL1NIJAevnuDf059y-2BTe3QgibPQsuH45LFEc9eyCI-2B2FT7JaJ70JQ-2BVHbj42ELSZra40wdtDdeRkUWwbwxGcKD61CrnH8CYL7pjCjPmp5SdQtMnyeY7OScr5-2F-2BjNHr50nG5ZE-3DjNQ2_acu8CQBeIyL8F4cSRPf7fOIGjbqg5QN-2BFCPlkXeWD3cq-2FiRfVI-2B0m9cr6ymyJp0QA23v-2FjzVwNDvdNgYv5vau1FJTdiX4b915HXM9gBx4ZrYHdQFYjiyEPwBm1LmQ2UYosWDiF5Or5LdjoMa-2BMo1WVRC7-2F40mTY-2Fg1KhZ-2FjAgdrpaAxrteLIVOUwEbX8lku6ePNTe2N3nw1lXCHCUuApJBJauipLLZq7J5n2guJN-2BwCLc3Vt-2F64YjsE63fDeIf0i>\n> for all types of email updates.\n>\n> Meetup will always send you information about: your account, security,\n> privacy & policies, and payments. 
Read our Privacy Policy\n> <http://meet.meetup.com/ls/click?upn=yBf4llw5PeaY7leriFwBBkipzLsJ7uXZdea5ZSOL1NLl0q1ZNkd0dteGF4LFNQ3KD-2B0NedeAk0sFJIwElUgsd09ofktC-2BsW3wvCfUFf90NutT7PQRnKcxd0Wim8xqEU4nAIxFj-2BTj2PbMjvK5udNQcdjOrwg-2FngbSacIAlwOPEc-3DkA0T_acu8CQBeIyL8F4cSRPf7fOIGjbqg5QN-2BFCPlkXeWD3cq-2FiRfVI-2B0m9cr6ymyJp0Q-2BcK7XOo8-2FVQALKaafm3ePZvEjlybw5QmbvswE5cK4DlzwJmhxqswy0UkmyFyvd3bCNxwY7we8gXc5NBKVUpZTkrQ9OBIrTPFQulvLyOpnhXqbpUdVYL340izuLPjS7moVq-2FjMThSSnAZrAEsFrTZKdFh7aL87aKulUTJq9twhoT0Q3jWIyEGngEzR6mD5VU6>\n>\n> Report this message.\n> <http://meet.meetup.com/ls/click?upn=yBf4llw5PeaY7leriFwBBkipzLsJ7uXZdea5ZSOL1NKce-2Fx0Rzw0h8HQ9EbJADv6Y1gb9c9ejQEnhmvDD1pT4vryFW7rt7TOs3DFmPVA2SwHuMQ8deF9tZA0f-2B7lJNSG5eDXidk-2Bv-2B93SUjoFODtYsOyVs3vTM-2BaMY1WfSHST4e092jeNpBSygy6sUXtTbXRjiZgRiwWC-2F-2B4QmOR6UxY7Q-3D-3DVcw__acu8CQBeIyL8F4cSRPf7fOIGjbqg5QN-2BFCPlkXeWD3cq-2FiRfVI-2B0m9cr6ymyJp0QeTxMQOfIUgzj6BRozCw6jbd4Nmkm7SF2-2FscSACVkqXmSl-2FbFc6ixCnYsIu4zAdZk-2FA-2FAyW5Dtu7yM0kW11m3v34QxjKKKdk6NWUuboy5D5xA-2BhPYif4izGinL621rOFCLAGkBQsiTKLOf9eyfBDzyJ8CqxLvp27x3z5fSw3MSML8mwHw0y8OBkKq-2FYW6ImdX>\n>\n> Block message sender.\n> <http://meet.meetup.com/ls/click?upn=yBf4llw5PeaY7leriFwBBkipzLsJ7uXZdea5ZSOL1NI00kOswBWpvBpWFrG-2BcrnUr0nWC9lrRsW020oTy2d9nvLglOxWtkB012ZkD-2BY8M-2FQ7g1-2BPwlhfyL0oJ6lfwRYNTPYGI-2FXMTGB8fqKYsY2qnzt4EX0lhEBLa2eguwFU-2F5E-3DAeZa_acu8CQBeIyL8F4cSRPf7fOIGjbqg5QN-2BFCPlkXeWD3cq-2FiRfVI-2B0m9cr6ymyJp0QF3cy5r34n-2FPeV0vQCK28j0JFHPEP3JaWG6p5FgSfTBnBQ9rreJQOwudrnkYkk1dSBJw375y-2B7JqvJEIuGErSYTHSB9Wse-2Bs5sTWASziXmVU1ZRw5eJSRv3dl2Xv22Fy4ZgTRpK5TWXF8-2Be-2F-2FDatWxIyjpyX179w9I6-2FaIVvA0QUhh-2BNwKWGw6vDBRBUIKRSd>\n>\n> Visit your account page\n> 
<http://meet.meetup.com/ls/click?upn=yBf4llw5PeaY7leriFwBBkipzLsJ7uXZdea5ZSOL1NLdcln7iFGl-2FUtvFzP4qCekGLn8g-2FtLHmSn0hBR-2F1suYLS34Pj6GGKAj0xZhzdM0Hpy4Xr4w8biq6zZk35h2UPfmFwzJMzh9D2qad9YprIfgXz-2BbsgXrf3TgJlfZn0nrmg-3DU0QJ_acu8CQBeIyL8F4cSRPf7fOIGjbqg5QN-2BFCPlkXeWD3cq-2FiRfVI-2B0m9cr6ymyJp0QDwTGGwhiJRpRP4S66d8-2FmEXuKS6UgLJiY0oOZItu5AxBl6N2IGWIU7v-2BeTMdnp6XIzX7uT3ZPsYVLhMCByGdS-2BSdDidTTM8-2FhDZG7LUv8aoRPW-2FITxl-2BNZtUdbZC1W98pJ6e7G8vBvTs7ptjSbo6EFZD7OLlQHhR3VgJacBq9f5MdgvRMdboQ40qojxcGNoN>\n> to change your contact details, privacy settings, and other settings.\n>\n> Meetup, Inc.\n> <http://meet.meetup.com/ls/click?upn=yBf4llw5PeaY7leriFwBBkipzLsJ7uXZdea5ZSOL1NIB7ZcW8mxKXM9a8Vncw2bQwXghq9sJ-2FROf-2FehyGhp543sb3pzFAhk0LPjJc1hcWdG7G60fVcUyYlajlYb5vUXnp-2BiaJujphbh8vHRkKJ2hIA-3D-3D-d2m_acu8CQBeIyL8F4cSRPf7fOIGjbqg5QN-2BFCPlkXeWD3cq-2FiRfVI-2B0m9cr6ymyJp0QzopjUVweezf1Geg-2BaCWXa1w-2FXPLYZzzN-2F6N09WFaMDkoVrDS5q-2Bu-2FO-2B9Mx4iRW2jp5J4YvuJ0KZkhO-2BTHfVQKx-2BqPH0zPFMd-2B9kJJKrWYz2h0t9iJz7X-2F-2F-2BkHG8e-2FTq4sGkXxq32aq00G3GnYNmG4CpQTCMVTlP1m48iK76cdONH15tAk1-2BaCyZMCdt-2FE-2BfJ>,\n> POB 4668 #37895 New York NY USA 10163. Meetup is a wholly owned subsidiary\n> of WeWork Companies Inc.\n>\n\nHi,Feel free to send out the email blast.There are a number of other channels. postgres slack, postgres mailing lists,  @PostgreSQL Hackers, twitter with Postgres tagCheers,Dave CramerOn Sat, 15 Feb 2020 at 19:44, 'Meetup Messages' via Meetup <meetup@postgresql.us> wrote:\n\n~~~ Respond by replying directly to this email ~~~\n\n\n\n\n\nPriscilla Ip\nHi, my name is Priscilla and I am a McGill student in my final year of electrical engineering. As a part of my capstone design project, I am conducting a user study on the impact of collaborative features in database tools along with my partner and the SoftwareREBELs research group at McGill University. We are looking for participants for the study and were hoping to send out an email blast to your group members if that is possible. 
Please see the details below and feel free to contact me with any questions. Thank you for your consideration and I look forward to hearing from you. The study will evaluate the impact of collaborative features on database-related tasks but participants with all levels of experience with databases are welcome! Participation involves a 45 minute in-person session on McGill campus where you’ll complete simple database-related tasks. Those attending the session will be entered in a raffle for $20 Amazon gift cards where the odds of winning are about 1 in 5. Sign up for the study here: https://docs.google.com/forms/d/e/1FAIpQLScDaa8J...­\n\n\t\t\t\t\tFebruary 15, 2020 4:43 PM\n\t\t\t\t\n\n\n\n\t\t\t\t\t\t\tReply directly to this email or respond on Meetup.com\n\n\n\n\n\n\n\n\t\t\t\t\t\t\t\t\tYou've received this notification because Priscilla Ip has contacted you on Meetup.\n\t\t\t\t\t\t\n\n\nNever miss a last-minute change. Get the app.\n\n\n\n\n\n\n\n\n\n\n\n\t\t\t\t\tYou're getting this message because your Meetup account is connected to this email address.\n\t\t\t\t\n\nUnsubscribe from this type of email.\n\t\t\t\t\t\tManage your settings for all types of email updates.\n\t\t\t\t\t\n\n\t\t\t\t\tMeetup will always send you information about: your account, security, privacy & policies, and payments. Read our Privacy Policy\n\n\nReport this message.\n\n\nBlock message sender.\n\n\n\t\t\t\t\tVisit your account page to change your contact details, privacy settings, and other settings.\n\t\t\t\t\n\nMeetup, Inc., POB 4668 #37895 New York NY USA 10163. Meetup is a wholly owned subsidiary of WeWork Companies Inc.", "msg_date": "Sat, 15 Feb 2020 19:50:40 -0500", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: New messages from Priscilla Ip" } ]
[ { "msg_contents": "Hi\n\nwhen I do some profiling of plpgsql, usually I surprised how significant\noverhead has expression execution. Any calculations are very slow.\n\nThis is not typical example of plpgsql, but it shows cleanly where is a\noverhead\n\nCREATE OR REPLACE FUNCTION public.foo()\n RETURNS void\n LANGUAGE plpgsql\n IMMUTABLE\nAS $function$\ndeclare i bigint = 0;\nbegin\n while i < 100000000\n loop\n i := i + 1;\n end loop;\nend;\n$function$\n\nProfile of development version\n\n 10,04% plpgsql.so [.] exec_eval_simple_expr\n 9,17% postgres [.] AcquireExecutorLocks\n 7,01% postgres [.] ExecInterpExpr\n 5,86% postgres [.]\nOverrideSearchPathMatchesCurrent\n 4,71% postgres [.] GetCachedPlan\n 4,14% postgres [.] AcquirePlannerLocks\n 3,72% postgres [.] RevalidateCachedQuery\n 3,56% postgres [.] MemoryContextReset\n 3,43% plpgsql.so [.] plpgsql_param_eval_var\n 3,33% postgres [.] SPI_plan_get_cached_plan\n 3,28% plpgsql.so [.] exec_stmt\n 3,18% postgres [.] ReleaseCachedPlan\n 2,92% postgres [.] ResourceArrayRemove\n 2,81% plpgsql.so [.] exec_assign_value\n 2,74% plpgsql.so [.] exec_cast_value\n 2,70% plpgsql.so [.] exec_eval_expr\n 1,96% postgres [.] recomputeNamespacePath\n 1,90% plpgsql.so [.] exec_eval_boolean\n 1,82% plpgsql.so [.] exec_eval_cleanup\n 1,72% postgres [.] ScanQueryForLocks\n 1,68% postgres [.] CheckCachedPlan\n 1,49% postgres [.] ResourceArrayAdd\n 1,48% plpgsql.so [.] exec_assign_expr\n 1,42% postgres [.]\nResourceOwnerForgetPlanCacheRef\n 1,24% plpgsql.so [.] exec_stmts\n 1,23% plpgsql.so [.] exec_stmt_while\n 1,03% plpgsql.so [.] assign_simple_var\n 0,73% postgres [.] int84lt\n 0,62% postgres [.]\nResourceOwnerEnlargePlanCacheRefs\n 0,54% postgres [.] int84pl\n 0,49% plpgsql.so [.] setup_param_list\n 0,45% postgres [.] ResourceArrayEnlarge\n 0,44% postgres [.] choose_custom_plan\n 0,39% postgres [.]\nResourceOwnerRememberPlanCacheRef\n 0,30% plpgsql.so [.] exec_stmt_assign\n 0,26% postgres [.] 
GetUserId\n 0,22% plpgsql.so [.]\nSPI_plan_get_cached_plan@plt\n\nand profile of PostgreSQL 8.2\n\n 13,63% plpgsql.so [.] exec_eval_simple_expr\n 9,72% postgres [.] AllocSetAlloc\n 7,84% postgres [.]\nExecMakeFunctionResultNoSets\n 6,20% plpgsql.so [.] exec_assign_value\n 5,46% postgres [.] AllocSetReset\n 4,79% postgres [.] ExecEvalParam\n 4,53% plpgsql.so [.] exec_eval_datum\n 4,40% postgres [.] MemoryContextAlloc\n 3,51% plpgsql.so [.] exec_stmt\n 3,01% plpgsql.so [.] exec_eval_expr\n 2,76% postgres [.] int84pl\n 2,11% plpgsql.so [.] exec_eval_cleanup\n 1,77% postgres [.] datumCopy\n 1,76% postgres [.] MemoryContextReset\n 1,75% libc-2.30.so [.] __sigsetjmp\n 1,64% postgres [.] int84lt\n 1,47% postgres [.] pfree\n 1,43% plpgsql.so [.] exec_simple_cast_value\n 1,36% plpgsql.so [.] MemoryContextReset@plt\n 1,28% plpgsql.so [.] exec_stmt_while\n 1,25% plpgsql.so [.] exec_assign_expr\n 1,22% postgres [.] check_stack_depth\n 1,09% plpgsql.so [.] exec_eval_boolean\n 1,06% postgres [.] AllocSetFree\n 0,99% plpgsql.so [.] free_var\n 0,93% plpgsql.so [.] exec_cast_value\n 0,93% plpgsql.so [.] exec_stmts\n 0,78% libc-2.30.so [.]\n__memmove_sse2_unaligned_erms\n 0,72% postgres [.] datumGetSize\n 0,62% postgres [.] Int64GetDatum\n 0,51% libc-2.30.so [.] __sigjmp_save\n 0,49% postgres [.] ExecEvalConst\n 0,41% plpgsql.so [.] exec_stmt_assign\n 0,28% postgres [.] SPI_pop\n 0,26% plpgsql.so [.] MemoryContextAlloc@plt\n 0,25% postgres [.] SPI_push\n 0,25% plpgsql.so [.] SPI_push@plt\n 0,24% plpgsql.so [.] __sigsetjmp@plt\n 0,23% plpgsql.so [.] SPI_pop@plt\n 0,19% libc-2.30.so [.]\n__memset_sse2_unaligned_erms\n 0,14% libc-2.30.so [.] memcpy@GLIBC_2.2.5\n 0,13% postgres [.] memcpy@plt\n\nIs interesting so overhead of plan cache about 15%\n\nThe execution needs 32 sec on Postgres13 and 27sec on Postgres8.2\n\nRegards\n\nPavel", "msg_date": "Sun, 16 Feb 2020 15:12:25 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "plan cache overhead on plpgsql expression" }, { "msg_contents": "ne 16. 2.
2020 v 15:12 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> when I do some profiling of plpgsql, usually I surprised how significant\n> overhead has expression execution. Any calculations are very slow.\n>\n> This is not typical example of plpgsql, but it shows cleanly where is a\n> overhead\n>\n> CREATE OR REPLACE FUNCTION public.foo()\n> RETURNS void\n> LANGUAGE plpgsql\n> IMMUTABLE\n> AS $function$\n> declare i bigint = 0;\n> begin\n> while i < 100000000\n> loop\n> i := i + 1;\n> end loop;\n> end;\n> $function$\n>\n>\n> Is interesting so overhead of plan cache about 15%\n>\n> The execution needs 32 sec on Postgres13 and 27sec on Postgres8.2\n>\n\nOn same computer same example in Perl needs only 7 sec.\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>", "msg_date": "Sun, 16 Feb 2020 17:00:35 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Hi,\n\nOn Sun, Feb 16, 2020 at 11:13 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> when I do some profiling of plpgsql, usually I surprised how significant\noverhead has expression execution. 
Any calculations are very slow.\n>\n> This is not typical example of plpgsql, but it shows cleanly where is a overhead\n>\n> CREATE OR REPLACE FUNCTION public.foo()\n> RETURNS void\n> LANGUAGE plpgsql\n> IMMUTABLE\n> AS $function$\n> declare i bigint = 0;\n> begin\n> while i < 100000000\n> loop\n> i := i + 1;\n> end loop;\n> end;\n> $function$\n>\n> Profile of development version\n>\n> 10,04% plpgsql.so [.] exec_eval_simple_expr\n> 9,17% postgres [.] AcquireExecutorLocks\n> 7,01% postgres [.] ExecInterpExpr\n> 5,86% postgres [.] OverrideSearchPathMatchesCurrent\n> 4,71% postgres [.] GetCachedPlan\n> 4,14% postgres [.] AcquirePlannerLocks\n> 3,72% postgres [.] RevalidateCachedQuery\n> 3,56% postgres [.] MemoryContextReset\n> 3,43% plpgsql.so [.] plpgsql_param_eval_var\n\nI was thinking about this overhead many months back and had even\nwritten a patch to avoid going to the planner for \"simple\"\nexpressions, which can be handled by the executor. Here is what the\nperformance looks like:\n\nHEAD:\n\nlatency: 31979.393 ms\n\n 18.32% postgres postgres [.] ExecInterpExpr\n 11.37% postgres plpgsql.so [.] exec_eval_expr\n 8.58% postgres plpgsql.so [.] plpgsql_param_eval_var\n 8.31% postgres plpgsql.so [.] exec_stmt\n 6.44% postgres postgres [.] GetCachedPlan\n 5.47% postgres postgres [.] AcquireExecutorLocks\n 5.30% postgres postgres [.] RevalidateCachedQuery\n 4.79% postgres plpgsql.so [.] exec_assign_value\n 4.41% postgres postgres [.] SPI_plan_get_cached_plan\n 4.36% postgres postgres [.] MemoryContextReset\n 4.22% postgres postgres [.] ReleaseCachedPlan\n 4.03% postgres postgres [.] OverrideSearchPathMatchesCurrent\n 2.63% postgres plpgsql.so [.] exec_assign_expr\n 2.11% postgres postgres [.] int84lt\n 1.95% postgres postgres [.] ResourceOwnerForgetPlanCacheRef\n 1.71% postgres postgres [.] int84pl\n 1.57% postgres postgres [.] ResourceOwnerRememberPlanCacheRef\n 1.38% postgres postgres [.] recomputeNamespacePath\n 1.35% postgres postgres [.] 
ScanQueryForLocks\n 1.24% postgres plpgsql.so [.] exec_cast_value\n 0.38% postgres postgres [.] ResourceOwnerEnlargePlanCacheRefs\n 0.05% postgres [kernel.kallsyms] [k] __do_softirq\n 0.03% postgres postgres [.] GetUserId\n\nPatched:\n\nlatency: 21011.871 ms\n\n 28.26% postgres postgres [.] ExecInterpExpr\n 12.26% postgres plpgsql.so [.] plpgsql_param_eval_var\n 12.02% postgres plpgsql.so [.] exec_stmt\n 11.10% postgres plpgsql.so [.] exec_eval_expr\n 10.05% postgres postgres [.] SPI_plan_is_valid\n 7.09% postgres postgres [.] MemoryContextReset\n 6.65% postgres plpgsql.so [.] exec_assign_value\n 3.53% postgres plpgsql.so [.] exec_assign_expr\n 2.91% postgres postgres [.] int84lt\n 2.61% postgres postgres [.] int84pl\n 2.42% postgres plpgsql.so [.] exec_cast_value\n 0.86% postgres postgres [.] CachedPlanIsValid\n 0.16% postgres plpgsql.so [.] SPI_plan_is_valid@plt\n 0.05% postgres [kernel.kallsyms] [k] __do_softirq\n 0.03% postgres [kernel.kallsyms] [k] finish_task_switch\n\nI didn't send the patch, because it didn't handle the cases where a\nsimple expression consists of an inline-able function(s) in it, which\nare better handled by a full-fledged planner call backed up by the\nplan cache. If we don't do that then every evaluation of such\n\"simple\" expression needs to invoke the planner. 
For example:\n\nConsider this inline-able SQL function:\n\ncreate or replace function sql_incr(a bigint)\nreturns int\nimmutable language sql as $$\nselect a+1;\n$$;\n\nThen this revised body of your function foo():\n\nCREATE OR REPLACE FUNCTION public.foo()\n RETURNS int\n LANGUAGE plpgsql\n IMMUTABLE\nAS $function$\ndeclare i bigint = 0;\nbegin\n while i < 1000000\n loop\n i := sql_incr(i);\n end loop; return i;\nend;\n$function$\n;\n\nWith HEAD `select foo()` finishes in 786 ms, whereas with the patch,\nit takes 5102 ms.\n\nI think the patch might be good idea to reduce the time to compute\nsimple expressions in plpgsql, if we can address the above issue.\n\nThanks,\nAmit", "msg_date": "Tue, 18 Feb 2020 14:03:13 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "út 18. 2. 2020 v 6:03 odesílatel Amit Langote <amitlangote09@gmail.com>\nnapsal:\n\n> Hi,\n>\n> On Sun, Feb 16, 2020 at 11:13 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > when I do some profiling of plpgsql, usually I surprised how significant\n> overhead has expression execution. Any calculations are very slow.\n> >\n> > This is not typical example of plpgsql, but it shows cleanly where is a\n> overhead\n> >\n> > CREATE OR REPLACE FUNCTION public.foo()\n> > RETURNS void\n> > LANGUAGE plpgsql\n> > IMMUTABLE\n> > AS $function$\n> > declare i bigint = 0;\n> > begin\n> > while i < 100000000\n> > loop\n> > i := i + 1;\n> > end loop;\n> > end;\n> > $function$\n> >\n> > Profile of development version\n> >\n> > 10,04% plpgsql.so [.] exec_eval_simple_expr\n> > 9,17% postgres [.] AcquireExecutorLocks\n> > 7,01% postgres [.] ExecInterpExpr\n> > 5,86% postgres [.]\n> OverrideSearchPathMatchesCurrent\n> > 4,71% postgres [.] GetCachedPlan\n> > 4,14% postgres [.] AcquirePlannerLocks\n> > 3,72% postgres [.] RevalidateCachedQuery\n> > 3,56% postgres [.] 
MemoryContextReset\n> > 3,43% plpgsql.so [.] plpgsql_param_eval_var\n>\n> I was thinking about this overhead many months back and had even\n> written a patch to avoid going to the planner for \"simple\"\n> expressions, which can be handled by the executor. Here is what the\n> performance looks like:\n>\n> HEAD:\n>\n> latency: 31979.393 ms\n>\n> 18.32% postgres postgres [.] ExecInterpExpr\n> 11.37% postgres plpgsql.so [.] exec_eval_expr\n> 8.58% postgres plpgsql.so [.] plpgsql_param_eval_var\n> 8.31% postgres plpgsql.so [.] exec_stmt\n> 6.44% postgres postgres [.] GetCachedPlan\n> 5.47% postgres postgres [.] AcquireExecutorLocks\n> 5.30% postgres postgres [.] RevalidateCachedQuery\n> 4.79% postgres plpgsql.so [.] exec_assign_value\n> 4.41% postgres postgres [.] SPI_plan_get_cached_plan\n> 4.36% postgres postgres [.] MemoryContextReset\n> 4.22% postgres postgres [.] ReleaseCachedPlan\n> 4.03% postgres postgres [.]\n> OverrideSearchPathMatchesCurrent\n> 2.63% postgres plpgsql.so [.] exec_assign_expr\n> 2.11% postgres postgres [.] int84lt\n> 1.95% postgres postgres [.]\n> ResourceOwnerForgetPlanCacheRef\n> 1.71% postgres postgres [.] int84pl\n> 1.57% postgres postgres [.]\n> ResourceOwnerRememberPlanCacheRef\n> 1.38% postgres postgres [.] recomputeNamespacePath\n> 1.35% postgres postgres [.] ScanQueryForLocks\n> 1.24% postgres plpgsql.so [.] exec_cast_value\n> 0.38% postgres postgres [.]\n> ResourceOwnerEnlargePlanCacheRefs\n> 0.05% postgres [kernel.kallsyms] [k] __do_softirq\n> 0.03% postgres postgres [.] GetUserId\n>\n> Patched:\n>\n> latency: 21011.871 ms\n>\n> 28.26% postgres postgres [.] ExecInterpExpr\n> 12.26% postgres plpgsql.so [.] plpgsql_param_eval_var\n> 12.02% postgres plpgsql.so [.] exec_stmt\n> 11.10% postgres plpgsql.so [.] exec_eval_expr\n> 10.05% postgres postgres [.] SPI_plan_is_valid\n> 7.09% postgres postgres [.] MemoryContextReset\n> 6.65% postgres plpgsql.so [.] exec_assign_value\n> 3.53% postgres plpgsql.so [.] 
exec_assign_expr\n> 2.91% postgres postgres [.] int84lt\n> 2.61% postgres postgres [.] int84pl\n> 2.42% postgres plpgsql.so [.] exec_cast_value\n> 0.86% postgres postgres [.] CachedPlanIsValid\n> 0.16% postgres plpgsql.so [.] SPI_plan_is_valid@plt\n> 0.05% postgres [kernel.kallsyms] [k] __do_softirq\n> 0.03% postgres [kernel.kallsyms] [k] finish_task_switch\n>\n> I didn't send the patch, because it didn't handle the cases where a\n> simple expression consists of an inline-able function(s) in it, which\n> are better handled by a full-fledged planner call backed up by the\n> plan cache. If we don't do that then every evaluation of such\n> \"simple\" expression needs to invoke the planner. For example:\n>\n> Consider this inline-able SQL function:\n>\n> create or replace function sql_incr(a bigint)\n> returns int\n> immutable language sql as $$\n> select a+1;\n> $$;\n>\n> Then this revised body of your function foo():\n>\n> CREATE OR REPLACE FUNCTION public.foo()\n> RETURNS int\n> LANGUAGE plpgsql\n> IMMUTABLE\n> AS $function$\n> declare i bigint = 0;\n> begin\n> while i < 1000000\n> loop\n> i := sql_incr(i);\n> end loop; return i;\n> end;\n> $function$\n> ;\n>\n> With HEAD `select foo()` finishes in 786 ms, whereas with the patch,\n> it takes 5102 ms.\n>\n> I think the patch might be good idea to reduce the time to compute\n> simple expressions in plpgsql, if we can address the above issue.\n>\n\nYour patch is very interesting - minimally it returns performance before\n8.2. The mentioned issue can be fixed if we disallow SQL functions in this\nfast execution.\n\nI am worried about too low percent if this fundament methods.\n\n 2.91% postgres postgres [.] int84lt\n 2.61% postgres postgres [.] int84pl\n\nPerl\n\n 18,20% libperl.so.5.30.1 [.] Perl_pp_add\n 17,61% libperl.so.5.30.1 [.] 
Perl_pp_lt\n\nSo can be nice if we increase percent overhead over 10%, maybe more.\n\nMaybe we can check if expression has only builtin immutable functions, and\nif it, then we can reuse expression state\n\nMore, if I understand well, the function is running under snapshot, so\nthere is not possibility to plan invalidation inside function. So some\nchecks should not be repeated.\n\nPavel\n\n\n> Thanks,\n> Amit\n>", "msg_date": "Tue, 18 Feb 2020 06:55:31 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "On Tue, Feb 18, 2020 at 2:56 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> út 18. 2. 2020 v 6:03 odesílatel Amit Langote <amitlangote09@gmail.com> napsal:\n>> I didn't send the patch, because it didn't handle the cases where a\n>> simple expression consists of an inline-able function(s) in it, which\n>> are better handled by a full-fledged planner call backed up by the\n>> plan cache. If we don't do that then every evaluation of such\n>> \"simple\" expression needs to invoke the planner. For example:\n>>\n>> Consider this inline-able SQL function:\n>>\n>> create or replace function sql_incr(a bigint)\n>> returns int\n>> immutable language sql as $$\n>> select a+1;\n>> $$;\n>>\n>> Then this revised body of your function foo():\n>>\n>> CREATE OR REPLACE FUNCTION public.foo()\n>> RETURNS int\n>> LANGUAGE plpgsql\n>> IMMUTABLE\n>> AS $function$\n>> declare i bigint = 0;\n>> begin\n>> while i < 1000000\n>> loop\n>> i := sql_incr(i);\n>> end loop; return i;\n>> end;\n>> $function$\n>> ;\n>>\n>> With HEAD `select foo()` finishes in 786 ms, whereas with the patch,\n>> it takes 5102 ms.\n>>\n>> I think the patch might be good idea to reduce the time to compute\n>> simple expressions in plpgsql, if we can address the above issue.\n>\n>\n> Your patch is very interesting - minimally it returns performance before 8.2. 
The mentioned issue can be fixed if we disallow SQL functions in this fast execution.\n\nI updated the patch to do that.\n\nWith the new patch, `select foo()`, with inline-able sql_incr() in it,\nruns in 679 ms.\n\nWithout any inline-able function, it runs in 330 ms, whereas with\nHEAD, it takes 590 ms.\n\nThanks,\nAmit", "msg_date": "Tue, 18 Feb 2020 18:56:23 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "On Tue, Feb 18, 2020 at 6:56 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Feb 18, 2020 at 2:56 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > út 18. 2. 2020 v 6:03 odesílatel Amit Langote <amitlangote09@gmail.com> napsal:\n> >> I didn't send the patch, because it didn't handle the cases where a\n> >> simple expression consists of an inline-able function(s) in it, which\n> >> are better handled by a full-fledged planner call backed up by the\n> >> plan cache. If we don't do that then every evaluation of such\n> >> \"simple\" expression needs to invoke the planner. 
For example:\n> >>\n> >> Consider this inline-able SQL function:\n> >>\n> >> create or replace function sql_incr(a bigint)\n> >> returns int\n> >> immutable language sql as $$\n> >> select a+1;\n> >> $$;\n> >>\n> >> Then this revised body of your function foo():\n> >>\n> >> CREATE OR REPLACE FUNCTION public.foo()\n> >> RETURNS int\n> >> LANGUAGE plpgsql\n> >> IMMUTABLE\n> >> AS $function$\n> >> declare i bigint = 0;\n> >> begin\n> >> while i < 1000000\n> >> loop\n> >> i := sql_incr(i);\n> >> end loop; return i;\n> >> end;\n> >> $function$\n> >> ;\n> >>\n> >> With HEAD `select foo()` finishes in 786 ms, whereas with the patch,\n> >> it takes 5102 ms.\n> >>\n> >> I think the patch might be good idea to reduce the time to compute\n> >> simple expressions in plpgsql, if we can address the above issue.\n> >\n> >\n> > Your patch is very interesting - minimally it returns performance before 8.2. The mentioned issue can be fixed if we disallow SQL functions in this fast execution.\n>\n> I updated the patch to do that.\n>\n> With the new patch, `select foo()`, with inline-able sql_incr() in it,\n> runs in 679 ms.\n>\n> Without any inline-able function, it runs in 330 ms, whereas with\n> HEAD, it takes 590 ms.\n\nI polished it a bit.\n\nThanks,\nAmit", "msg_date": "Wed, 19 Feb 2020 01:08:45 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "út 18. 2. 2020 v 17:08 odesílatel Amit Langote <amitlangote09@gmail.com>\nnapsal:\n\n> On Tue, Feb 18, 2020 at 6:56 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > On Tue, Feb 18, 2020 at 2:56 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > > út 18. 2. 
2020 v 6:03 odesílatel Amit Langote <amitlangote09@gmail.com>\n> napsal:\n> > >> I didn't send the patch, because it didn't handle the cases where a\n> > >> simple expression consists of an inline-able function(s) in it, which\n> > >> are better handled by a full-fledged planner call backed up by the\n> > >> plan cache. If we don't do that then every evaluation of such\n> > >> \"simple\" expression needs to invoke the planner. For example:\n> > >>\n> > >> Consider this inline-able SQL function:\n> > >>\n> > >> create or replace function sql_incr(a bigint)\n> > >> returns int\n> > >> immutable language sql as $$\n> > >> select a+1;\n> > >> $$;\n> > >>\n> > >> Then this revised body of your function foo():\n> > >>\n> > >> CREATE OR REPLACE FUNCTION public.foo()\n> > >> RETURNS int\n> > >> LANGUAGE plpgsql\n> > >> IMMUTABLE\n> > >> AS $function$\n> > >> declare i bigint = 0;\n> > >> begin\n> > >> while i < 1000000\n> > >> loop\n> > >> i := sql_incr(i);\n> > >> end loop; return i;\n> > >> end;\n> > >> $function$\n> > >> ;\n> > >>\n> > >> With HEAD `select foo()` finishes in 786 ms, whereas with the patch,\n> > >> it takes 5102 ms.\n> > >>\n> > >> I think the patch might be good idea to reduce the time to compute\n> > >> simple expressions in plpgsql, if we can address the above issue.\n> > >\n> > >\n> > > Your patch is very interesting - minimally it returns performance\n> before 8.2. The mentioned issue can be fixed if we disallow SQL functions\n> in this fast execution.\n> >\n> > I updated the patch to do that.\n> >\n> > With the new patch, `select foo()`, with inline-able sql_incr() in it,\n> > runs in 679 ms.\n> >\n> > Without any inline-able function, it runs in 330 ms, whereas with\n> > HEAD, it takes 590 ms.\n>\n> I polished it a bit.\n>\n\nthe performance looks very interesting - on my comp the execution time of\n100000000 iterations was decreased from 34 sec to 15 sec,\n\nSo it is interesting speedup\n\nPavel\n\n\n\n> Thanks,\n> Amit\n>", "msg_date": "Wed, 19 Feb 2020 07:30:13 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "st 19. 2. 2020 v 7:30 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> út 18. 2. 2020 v 17:08 odesílatel Amit Langote <amitlangote09@gmail.com>\n> napsal:\n>\n>> On Tue, Feb 18, 2020 at 6:56 PM Amit Langote <amitlangote09@gmail.com>\n>> wrote:\n>> > On Tue, Feb 18, 2020 at 2:56 PM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>> > > út 18. 2. 2020 v 6:03 odesílatel Amit Langote <\n>> amitlangote09@gmail.com> napsal:\n>> > >> I didn't send the patch, because it didn't handle the cases where a\n>> > >> simple expression consists of an inline-able function(s) in it, which\n>> > >> are better handled by a full-fledged planner call backed up by the\n>> > >> plan cache. If we don't do that then every evaluation of such\n>> > >> \"simple\" expression needs to invoke the planner. 
For example:\n>> > >>\n>> > >> Consider this inline-able SQL function:\n>> > >>\n>> > >> create or replace function sql_incr(a bigint)\n>> > >> returns int\n>> > >> immutable language sql as $$\n>> > >> select a+1;\n>> > >> $$;\n>> > >>\n>> > >> Then this revised body of your function foo():\n>> > >>\n>> > >> CREATE OR REPLACE FUNCTION public.foo()\n>> > >> RETURNS int\n>> > >> LANGUAGE plpgsql\n>> > >> IMMUTABLE\n>> > >> AS $function$\n>> > >> declare i bigint = 0;\n>> > >> begin\n>> > >> while i < 1000000\n>> > >> loop\n>> > >> i := sql_incr(i);\n>> > >> end loop; return i;\n>> > >> end;\n>> > >> $function$\n>> > >> ;\n>> > >>\n>> > >> With HEAD `select foo()` finishes in 786 ms, whereas with the patch,\n>> > >> it takes 5102 ms.\n>> > >>\n>> > >> I think the patch might be good idea to reduce the time to compute\n>> > >> simple expressions in plpgsql, if we can address the above issue.\n>> > >\n>> > >\n>> > > Your patch is very interesting - minimally it returns performance\n>> before 8.2. The mentioned issue can be fixed if we disallow SQL functions\n>> in this fast execution.\n>> >\n>> > I updated the patch to do that.\n>> >\n>> > With the new patch, `select foo()`, with inline-able sql_incr() in it,\n>> > runs in 679 ms.\n>> >\n>> > Without any inline-able function, it runs in 330 ms, whereas with\n>> > HEAD, it takes 590 ms.\n>>\n>> I polished it a bit.\n>>\n>\n> the performance looks very interesting - on my comp the execution time of\n> 100000000 iterations was decreased from 34 sec to 15 sec,\n>\n> So it is interesting speedup\n>\n\nbut regress tests fails\n\n\n\n> Pavel\n>\n>\n>\n>> Thanks,\n>> Amit\n>>\n>", "msg_date": "Wed, 19 Feb 2020 07:37:26 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "On Wed, Feb 19, 2020 at 3:38 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> st 19. 2. 
2020 v 7:30 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n>> út 18. 2. 2020 v 17:08 odesílatel Amit Langote <amitlangote09@gmail.com> napsal:\n>>> > I updated the patch to do that.\n>>> >\n>>> > With the new patch, `select foo()`, with inline-able sql_incr() in it,\n>>> > runs in 679 ms.\n>>> >\n>>> > Without any inline-able function, it runs in 330 ms, whereas with\n>>> > HEAD, it takes 590 ms.\n>>>\n>>> I polished it a bit.\n>>\n>>\n>> the performance looks very interesting - on my comp the execution time of 100000000 iterations was decreased from 34 sec to 15 sec,\n>>\n>> So it is interesting speedup\n>\n> but regress tests fails\n\nOops, I failed to check src/pl/plpgsql tests.\n\nFixed in the attached.\n\nThanks,\nAmit", "msg_date": "Wed, 19 Feb 2020 15:56:45 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "On Wed, Feb 19, 2020 at 3:56 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Feb 19, 2020 at 3:38 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > st 19. 2. 2020 v 7:30 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n> >> út 18. 2. 
2020 v 17:08 odesílatel Amit Langote <amitlangote09@gmail.com> napsal:\n> >>> > I updated the patch to do that.\n> >>> >\n> >>> > With the new patch, `select foo()`, with inline-able sql_incr() in it,\n> >>> > runs in 679 ms.\n> >>> >\n> >>> > Without any inline-able function, it runs in 330 ms, whereas with\n> >>> > HEAD, it takes 590 ms.\n> >>>\n> >>> I polished it a bit.\n> >>\n> >>\n> >> the performance looks very interesting - on my comp the execution time of 100000000 iterations was decreased from 34 sec to 15 sec,\n> >>\n> >> So it is interesting speedup\n> >\n> > but regress tests fails\n>\n> Oops, I failed to check src/pl/plpgsql tests.\n>\n> Fixed in the attached.\n\nAdded a regression test based on examples discussed here too.\n\nThanks,\nAmit", "msg_date": "Wed, 19 Feb 2020 16:08:59 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "st 19. 2. 2020 v 8:09 odesílatel Amit Langote <amitlangote09@gmail.com>\nnapsal:\n\n> On Wed, Feb 19, 2020 at 3:56 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > On Wed, Feb 19, 2020 at 3:38 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > > st 19. 2. 2020 v 7:30 odesílatel Pavel Stehule <\n> pavel.stehule@gmail.com> napsal:\n> > >> út 18. 2. 
2020 v 17:08 odesílatel Amit Langote <\n> amitlangote09@gmail.com> napsal:\n> > >>> > I updated the patch to do that.\n> > >>> >\n> > >>> > With the new patch, `select foo()`, with inline-able sql_incr() in\n> it,\n> > >>> > runs in 679 ms.\n> > >>> >\n> > >>> > Without any inline-able function, it runs in 330 ms, whereas with\n> > >>> > HEAD, it takes 590 ms.\n> > >>>\n> > >>> I polished it a bit.\n> > >>\n> > >>\n> > >> the performance looks very interesting - on my comp the execution\n> time of 100000000 iterations was decreased from 34 sec to 15 sec,\n> > >>\n> > >> So it is interesting speedup\n> > >\n> > > but regress tests fails\n> >\n> > Oops, I failed to check src/pl/plpgsql tests.\n> >\n> > Fixed in the attached.\n>\n> Added a regression test based on examples discussed here too.\n>\n\nIt is working without problems\n\nI think this patch is very interesting for Postgres 13\n\nRegards\n\nPavel\n\n>\n> Thanks,\n> Amit\n>", "msg_date": "Thu, 20 Feb 2020 20:15:48 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "čt 20. 2. 2020 v 20:15 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> st 19. 2. 2020 v 8:09 odesílatel Amit Langote <amitlangote09@gmail.com>\n> napsal:\n>\n>> On Wed, Feb 19, 2020 at 3:56 PM Amit Langote <amitlangote09@gmail.com>\n>> wrote:\n>> > On Wed, Feb 19, 2020 at 3:38 PM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>> > > st 19. 2. 2020 v 7:30 odesílatel Pavel Stehule <\n>> pavel.stehule@gmail.com> napsal:\n>> > >> út 18. 2. 
2020 v 17:08 odesílatel Amit Langote <\n>> amitlangote09@gmail.com> napsal:\n>> > >>> > I updated the patch to do that.\n>> > >>> >\n>> > >>> > With the new patch, `select foo()`, with inline-able sql_incr()\n>> in it,\n>> > >>> > runs in 679 ms.\n>> > >>> >\n>> > >>> > Without any inline-able function, it runs in 330 ms, whereas with\n>> > >>> > HEAD, it takes 590 ms.\n>> > >>>\n>> > >>> I polished it a bit.\n>> > >>\n>> > >>\n>> > >> the performance looks very interesting - on my comp the execution\n>> time of 100000000 iterations was decreased from 34 sec to 15 sec,\n>> > >>\n>> > >> So it is interesting speedup\n>> > >\n>> > > but regress tests fails\n>> >\n>> > Oops, I failed to check src/pl/plpgsql tests.\n>> >\n>> > Fixed in the attached.\n>>\n>> Added a regression test based on examples discussed here too.\n>>\n>\n> It is working without problems\n>\n> I think this patch is very interesting for Postgres 13\n>\n\nI checked a performance of this patch again and I think so there is not too\nmuch space for another optimization - maybe JIT can help.\n\nThere is relative high overhead of call of strict functions - the params\nare repeatedly tested against NULL.\n\nRegards\n\nPavel\n\n\n\n> Regards\n>\n> Pavel\n>\n>>\n>> Thanks,\n>> Amit\n>>\n>\n\nčt 20. 2. 2020 v 20:15 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:st 19. 2. 2020 v 8:09 odesílatel Amit Langote <amitlangote09@gmail.com> napsal:On Wed, Feb 19, 2020 at 3:56 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Feb 19, 2020 at 3:38 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > st 19. 2. 2020 v 7:30 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n> >> út 18. 2. 
2020 v 17:08 odesílatel Amit Langote <amitlangote09@gmail.com> napsal:\n> >>> > I updated the patch to do that.\n> >>> >\n> >>> > With the new patch, `select foo()`, with inline-able sql_incr() in it,\n> >>> > runs in 679 ms.\n> >>> >\n> >>> > Without any inline-able function, it runs in 330 ms, whereas with\n> >>> > HEAD, it takes 590 ms.\n> >>>\n> >>> I polished it a bit.\n> >>\n> >>\n> >> the performance looks very interesting - on my comp the execution time of  100000000 iterations was decreased from 34 sec to 15 sec,\n> >>\n> >> So it is interesting speedup\n> >\n> > but regress tests fails\n>\n> Oops, I failed to check src/pl/plpgsql tests.\n>\n> Fixed in the attached.\n\nAdded a regression test based on examples discussed here too.It is working without problemsI think this patch is very interesting for Postgres 13I checked a performance of this patch again and I think so there is not too much space for another optimization - maybe JIT can help.There is relative high overhead of call of strict functions - the params are repeatedly tested against NULL. RegardsPavelRegardsPavel\n\nThanks,\nAmit", "msg_date": "Mon, 24 Feb 2020 18:47:17 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "po 24. 2. 2020 v 18:47 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> čt 20. 2. 2020 v 20:15 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>>\n>>\n>> st 19. 2. 2020 v 8:09 odesílatel Amit Langote <amitlangote09@gmail.com>\n>> napsal:\n>>\n>>> On Wed, Feb 19, 2020 at 3:56 PM Amit Langote <amitlangote09@gmail.com>\n>>> wrote:\n>>> > On Wed, Feb 19, 2020 at 3:38 PM Pavel Stehule <pavel.stehule@gmail.com>\n>>> wrote:\n>>> > > st 19. 2. 2020 v 7:30 odesílatel Pavel Stehule <\n>>> pavel.stehule@gmail.com> napsal:\n>>> > >> út 18. 2. 
2020 v 17:08 odesílatel Amit Langote <\n>>> amitlangote09@gmail.com> napsal:\n>>> > >>> > I updated the patch to do that.\n>>> > >>> >\n>>> > >>> > With the new patch, `select foo()`, with inline-able sql_incr()\n>>> in it,\n>>> > >>> > runs in 679 ms.\n>>> > >>> >\n>>> > >>> > Without any inline-able function, it runs in 330 ms, whereas with\n>>> > >>> > HEAD, it takes 590 ms.\n>>> > >>>\n>>> > >>> I polished it a bit.\n>>> > >>\n>>> > >>\n>>> > >> the performance looks very interesting - on my comp the execution\n>>> time of 100000000 iterations was decreased from 34 sec to 15 sec,\n>>> > >>\n>>> > >> So it is interesting speedup\n>>> > >\n>>> > > but regress tests fails\n>>> >\n>>> > Oops, I failed to check src/pl/plpgsql tests.\n>>> >\n>>> > Fixed in the attached.\n>>>\n>>> Added a regression test based on examples discussed here too.\n>>>\n>>\n>> It is working without problems\n>>\n>> I think this patch is very interesting for Postgres 13\n>>\n>\n> I checked a performance of this patch again and I think so there is not\n> too much space for another optimization - maybe JIT can help.\n>\n> There is relative high overhead of call of strict functions - the params\n> are repeatedly tested against NULL.\n>\n\nBut I found one issue - I don't know if this issue is related to your patch\nor plpgsql_check.\n\nplpgsql_check try to clean after it was executed - it cleans all plans. But\nsome pointers on simple expressions are broken after catched exceptions.\n\nexpr->plan = 0x80. Is interesting, so other fields of this expressions are\ncorrect.\n\n\n\n\n\n> Regards\n>\n> Pavel\n>\n>\n>\n>> Regards\n>>\n>> Pavel\n>>\n>>>\n>>> Thanks,\n>>> Amit\n>>>\n>>\n\npo 24. 2. 2020 v 18:47 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:čt 20. 2. 2020 v 20:15 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:st 19. 2. 
2020 v 8:09 odesílatel Amit Langote <amitlangote09@gmail.com> napsal:On Wed, Feb 19, 2020 at 3:56 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Feb 19, 2020 at 3:38 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > st 19. 2. 2020 v 7:30 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n> >> út 18. 2. 2020 v 17:08 odesílatel Amit Langote <amitlangote09@gmail.com> napsal:\n> >>> > I updated the patch to do that.\n> >>> >\n> >>> > With the new patch, `select foo()`, with inline-able sql_incr() in it,\n> >>> > runs in 679 ms.\n> >>> >\n> >>> > Without any inline-able function, it runs in 330 ms, whereas with\n> >>> > HEAD, it takes 590 ms.\n> >>>\n> >>> I polished it a bit.\n> >>\n> >>\n> >> the performance looks very interesting - on my comp the execution time of  100000000 iterations was decreased from 34 sec to 15 sec,\n> >>\n> >> So it is interesting speedup\n> >\n> > but regress tests fails\n>\n> Oops, I failed to check src/pl/plpgsql tests.\n>\n> Fixed in the attached.\n\nAdded a regression test based on examples discussed here too.It is working without problemsI think this patch is very interesting for Postgres 13I checked a performance of this patch again and I think so there is not too much space for another optimization - maybe JIT can help.There is relative high overhead of call of strict functions - the params are repeatedly tested against NULL. But I found one issue - I don't know if this issue is related to your patch or plpgsql_check.plpgsql_check try to clean after it was executed - it cleans all plans. But some pointers on simple expressions are broken after catched exceptions.expr->plan = 0x80. Is interesting, so other fields of this expressions are correct.RegardsPavelRegardsPavel\n\nThanks,\nAmit", "msg_date": "Mon, 24 Feb 2020 18:56:55 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "po 24. 2. 
2020 v 18:56 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> po 24. 2. 2020 v 18:47 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>>\n>>\n>> čt 20. 2. 2020 v 20:15 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n>> napsal:\n>>\n>>>\n>>>\n>>> st 19. 2. 2020 v 8:09 odesílatel Amit Langote <amitlangote09@gmail.com>\n>>> napsal:\n>>>\n>>>> On Wed, Feb 19, 2020 at 3:56 PM Amit Langote <amitlangote09@gmail.com>\n>>>> wrote:\n>>>> > On Wed, Feb 19, 2020 at 3:38 PM Pavel Stehule <\n>>>> pavel.stehule@gmail.com> wrote:\n>>>> > > st 19. 2. 2020 v 7:30 odesílatel Pavel Stehule <\n>>>> pavel.stehule@gmail.com> napsal:\n>>>> > >> út 18. 2. 2020 v 17:08 odesílatel Amit Langote <\n>>>> amitlangote09@gmail.com> napsal:\n>>>> > >>> > I updated the patch to do that.\n>>>> > >>> >\n>>>> > >>> > With the new patch, `select foo()`, with inline-able sql_incr()\n>>>> in it,\n>>>> > >>> > runs in 679 ms.\n>>>> > >>> >\n>>>> > >>> > Without any inline-able function, it runs in 330 ms, whereas\n>>>> with\n>>>> > >>> > HEAD, it takes 590 ms.\n>>>> > >>>\n>>>> > >>> I polished it a bit.\n>>>> > >>\n>>>> > >>\n>>>> > >> the performance looks very interesting - on my comp the execution\n>>>> time of 100000000 iterations was decreased from 34 sec to 15 sec,\n>>>> > >>\n>>>> > >> So it is interesting speedup\n>>>> > >\n>>>> > > but regress tests fails\n>>>> >\n>>>> > Oops, I failed to check src/pl/plpgsql tests.\n>>>> >\n>>>> > Fixed in the attached.\n>>>>\n>>>> Added a regression test based on examples discussed here too.\n>>>>\n>>>\n>>> It is working without problems\n>>>\n>>> I think this patch is very interesting for Postgres 13\n>>>\n>>\n>> I checked a performance of this patch again and I think so there is not\n>> too much space for another optimization - maybe JIT can help.\n>>\n>> There is relative high overhead of call of strict functions - the params\n>> are repeatedly tested against NULL.\n>>\n>\n> But I found one issue - I don't know if 
this issue is related to your\n> patch or plpgsql_check.\n>\n> plpgsql_check try to clean after it was executed - it cleans all plans.\n> But some pointers on simple expressions are broken after catched exceptions.\n>\n> expr->plan = 0x80. Is interesting, so other fields of this expressions are\n> correct.\n>\n\nI am not sure, but after patching the SPI_prepare_params the current memory\ncontext is some short memory context.\n\nCan SPI_prepare_params change current memory context? It did before. But\nafter patching different memory context is active.\n\nRegards\n\nPavel\n\n>\n>> Regards\n>>\n>> Pavel\n>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>>> Thanks,\n>>>> Amit\n>>>>\n>>>\n", "msg_date": "Mon, 24 Feb 2020 20:27:51 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Hi\n\nI added this patch to a commitfest\n\nhttps://commitfest.postgresql.org/27/2467/\n\nIt is very interesting speedup and it is in good direction to JIT\nexpressions\n\nPavel\n", "msg_date": "Tue, 25 Feb 2020 08:16:09 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Hi Pavel,\n\nOn Tue, Feb 25, 2020 at 4:16 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> Hi\n>\n> I added this patch to a commitfest\n>\n> https://commitfest.postgresql.org/27/2467/\n>\n> It is very interesting speedup and it is in good direction to JIT expressions\n\nThank you. I was planning to do that myself.\n\nI will take a look at your other comments in a day or two.\n\nThanks,\nAmit\n\n\n", "msg_date": "Tue, 25 Feb 2020 17:42:22 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Hi Amit,\n\nOn 2/25/20 3:42 AM, Amit Langote wrote:\n> On Tue, Feb 25, 2020 at 4:16 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> I added this patch to a commitfest\n>>\n>> https://commitfest.postgresql.org/27/2467/\n>>\n>> It is very interesting speedup and it is in good direction to JIT expressions\n> \n> Thank you.
I was planning to do that myself.\n> \n> I will take a look at your other comments in a day or two.\n\nDo you know when you'll have chance to look at Pavel's comments?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Tue, 17 Mar 2020 07:53:08 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Hi David,\n\nOn Tue, Mar 17, 2020 at 8:53 PM David Steele <david@pgmasters.net> wrote:\n>\n> Hi Amit,\n>\n> On 2/25/20 3:42 AM, Amit Langote wrote:\n> > On Tue, Feb 25, 2020 at 4:16 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> >> I added this patch to a commitfest\n> >>\n> >> https://commitfest.postgresql.org/27/2467/\n> >>\n> >> It is very interesting speedup and it is in good direction to JIT expressions\n> >\n> > Thank you. I was planning to do that myself.\n> >\n> > I will take a look at your other comments in a day or two.\n>\n> Do you know when you'll have chance to look at Pavel's comments?\n\nSorry, I had forgotten about this. I will try to post an update by Thursday.\n\n-- \nThank you,\nAmit\n\n\n", "msg_date": "Tue, 17 Mar 2020 22:32:11 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Hi Pavel,\n\nSorry it took me a while to look at this.\n\nOn Tue, Feb 25, 2020 at 4:28 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> po 24. 2. 2020 v 18:56 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:\n>> But I found one issue - I don't know if this issue is related to your patch or plpgsql_check.\n>>\n>> plpgsql_check try to clean after it was executed - it cleans all plans. But some pointers on simple expressions are broken after catched exceptions.\n>>\n>> expr->plan = 0x80. 
Is interesting, so other fields of this expressions are correct.\n>\n> I am not sure, but after patching the SPI_prepare_params the current memory context is some short memory context.\n>\n> Can SPI_prepare_params change current memory context? It did before. But after patching different memory context is active.\n\nI haven't been able to see the behavior you reported.  Could you let\nme know what unexpected memory context you see in the problematic\ncase?\n\n--\nThank you,\nAmit\n\n\n", "msg_date": "Thu, 19 Mar 2020 18:47:17 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "čt 19. 3. 2020 v 10:47 odesílatel Amit Langote <amitlangote09@gmail.com>\nnapsal:\n\n> Hi Pavel,\n>\n> Sorry it took me a while to look at this.\n>\n> On Tue, Feb 25, 2020 at 4:28 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > po 24. 2. 2020 v 18:56 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n> >> But I found one issue - I don't know if this issue is related to your\n> patch or plpgsql_check.\n> >>\n> >> plpgsql_check try to clean after it was executed - it cleans all plans.\n> But some pointers on simple expressions are broken after catched exceptions.\n> >>\n> >> expr->plan = 0x80. Is interesting, so other fields of this expressions\n> are correct.\n> >\n> > I am not sure, but after patching the SPI_prepare_params the current\n> memory context is some short memory context.\n> >\n> > Can SPI_prepare_params change current memory context? It did before. But\n> after patching different memory context is active.\n>\n> I haven't been able to see the behavior you reported. Could you let\n> me know what unexpected memory context you see in the problematic\n> case?\n>\n\nHow I can detect it? Are there some steps for debugging memory context?\n\nPavel\n\n>\n> --\n> Thank you,\n> Amit\n>\n", "msg_date": "Thu, 19 Mar 2020 12:19:10 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "čt 19. 3. 2020 v 10:47 odesílatel Amit Langote <amitlangote09@gmail.com>\nnapsal:\n\n> Hi Pavel,\n>\n> Sorry it took me a while to look at this.\n>\n> On Tue, Feb 25, 2020 at 4:28 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > po 24. 2. 2020 v 18:56 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n> >> But I found one issue - I don't know if this issue is related to your\n> patch or plpgsql_check.\n> >>\n> >> plpgsql_check try to clean after it was executed - it cleans all plans.\n> But some pointers on simple expressions are broken after catched exceptions.\n> >>\n> >> expr->plan = 0x80.
Is interesting, so other fields of this expressions\n> are correct.\n> >\n> > I am not sure, but after patching the SPI_prepare_params the current\n> memory context is some short memory context.\n> >\n> > Can SPI_prepare_params change current memory context? It did before. But\n> after patching different memory context is active.\n>\n> I haven't been able to see the behavior you reported. Could you let\n> me know what unexpected memory context you see in the problematic\n> case?\n>\n\nThere was a problem with plpgsql_check after I applied this patch. It\ncrashed differently on own regress tests.\n\nBut I cannot to reproduce this issue now. Probably there was more issues\nthan one on my build environment.\n\nSo my questions and notes about a change of MemoryContext after patching\nare messy. Sorry for noise.\n\nRegards\n\nPavel\n\n>\n> --\n> Thank you,\n> Amit\n>\n", "msg_date": "Fri, 20 Mar 2020 10:46:50 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Hi\n\nI did another test\n\nI use a pi estimation algorithm and it is little bit more realistic than\njust almost empty cycle body - still probably nobody will calculate pi in\nplpgsql.\n\nCREATE OR REPLACE FUNCTION pi_est(n int)\nRETURNS numeric AS $$\nDECLARE\n accum double precision DEFAULT 1.0;\n c1 double precision DEFAULT 2.0;\n c2 double precision DEFAULT 1.0;\n v constant double precision DEFAULT 2.0;\nBEGIN\n FOR i IN 1..n\n LOOP\n accum := accum * ((c1 * c1) / (c2 * (c2 + v)));\n c1 := c1 + v;\n c2 := c2 + v;\n END LOOP;\n RETURN accum * v;\nEND;\n$$ LANGUAGE plpgsql;\n\nFor this code the patch increased speed for 10000000 iterations from 6.3\nsec to 4.7 ..
it is speedup about 25%\n\nThe best performance (28%) is with code\n\nCREATE OR REPLACE FUNCTION pi_est_2(n int)\nRETURNS numeric AS $$\nDECLARE\n accum double precision DEFAULT 1.0;\n c1 double precision DEFAULT 2.0;\n c2 double precision DEFAULT 1.0;\nBEGIN\n FOR i IN 1..n\n LOOP\n accum := accum * ((c1 * c1) / (c2 * (c2 + double precision '2.0')));\n c1 := c1 + double precision '2.0';\n c2 := c2 + double precision '2.0';\n END LOOP;\n RETURN accum * double precision '2.0';\nEND;\n$$ LANGUAGE plpgsql;\n\nUnfortunately for unoptimized code the performance is worse (it is about\n55% slower)\n\nCREATE OR REPLACE FUNCTION pi_est_1(n int)\nRETURNS numeric AS $$\nDECLARE\n accum double precision DEFAULT 1.0;\n c1 double precision DEFAULT 2.0;\n c2 double precision DEFAULT 1.0;\nBEGIN\n FOR i IN 1..n\n LOOP\n accum := accum * ((c1 * c1) / (c2 * (c2 + 2.0)));\n c1 := c1 + 2.0;\n c2 := c2 + 2.0;\n END LOOP;\n RETURN accum * 2.0;\nEND;\n$$ LANGUAGE plpgsql;\n\nsame performance (bad) is for explicit casting\n\nCREATE OR REPLACE FUNCTION pi_est_3(n int)\nRETURNS numeric AS $$\nDECLARE\n accum double precision DEFAULT 1.0;\n c1 double precision DEFAULT 2.0;\n c2 double precision DEFAULT 1.0;\nBEGIN\n FOR i IN 1..n\n LOOP\n accum := accum * ((c1 * c1) / (c2 * (c2 + 2.0::double precision)));\n c1 := c1 + 2.0::double precision;\n c2 := c2 + 2.0::double precision;\n END LOOP;\n RETURN accum * double precision '2.0';\nEND;\n$$ LANGUAGE plpgsql;\n\nThere is relative high overhead of cast from numeric init_var_from_num.\n\nOn master (without patching) the speed all double precision variants is\nalmost same.\n\nThis example can be reduced\n\nCREATE OR REPLACE FUNCTION public.fx(integer)\n RETURNS double precision\n LANGUAGE plpgsql\nAS $function$\nDECLARE\n result double precision DEFAULT 1.0;\nBEGIN\n FOR i IN 1..$1\n LOOP\n result := result * 1.000001::double precision;\n END LOOP;\n RETURN result;\nEND;\n$function$\n\nCREATE OR REPLACE FUNCTION public.fx_1(integer)\n RETURNS 
double precision\n LANGUAGE plpgsql\nAS $function$\nDECLARE\n result double precision DEFAULT 1.0;\nBEGIN\n FOR i IN 1..$1\n LOOP\n result := result * 1.000001;\n END LOOP;\n RETURN result;\nEND;\n$function$\n\nCREATE OR REPLACE FUNCTION public.fx_2(integer)\n RETURNS double precision\n LANGUAGE plpgsql\nAS $function$\nDECLARE\n result double precision DEFAULT 1.0;\nBEGIN\n FOR i IN 1..$1\n LOOP\n result := result * double precision '1.000001';\n END LOOP;\n RETURN result;\nEND;\n$function$\n\nPatched select fx(1000000) .. 400ms, fx_1 .. 400ms, fx_2 .. 126ms\nMaster fx(1000000) .. 180ms, fx_1 180 ms, fx_2 .. 180ms\n\nSo the patch has a problem with constant casting - unfortunately the mix of\ndouble precision variables and numeric constants is pretty often in\nPostgres.\n\nRegards\n\nPavel", "msg_date": "Sat, 21 Mar 2020 06:08:49 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> So the patch has a problem with constant casting - unfortunately the mix of\n> double precision variables and numeric constants is pretty often in\n> Postgres.\n\nYeah. I believe the cause of that is that the patch thinks it can skip\npassing an inline-function-free simple expression through the planner.\nThat's flat out wrong. Quite aside from failing to perform\nconstant-folding (which is presumably the cause of the slowdown you\nspotted), that means that we miss performing such non-optional\ntransformations as rearranging named-function-argument notation into\npositional order. I didn't bother to test that but I'm sure it can be\nshown to lead to crashes.\n\nNow that I've looked at the patch I don't like it one bit from a\nstructural standpoint either. 
It's basically trying to make an end\nrun around the plancache, which is not going to be maintainable even\nif it correctly accounted for everything the plancache does today.\nWhich it doesn't. Two big problems are:\n\n* It doesn't account for the possibility of search_path changes\naffecting the interpretation of an expression.\n\n* It assumes that the *only* things that a simple plan could get\ninvalidated for are functions that were inlined. This isn't the\ncase --- a counterexample is that removal of no-op CoerceToDomain\nnodes requires the plan to be invalidated if the domain's constraints\nchange. And there are likely to be more such issues in future.\n\n\nSo while there's clearly something worth pursuing here, I do not like\nanything about the way it was done. I think that the right way to\nthink about this problem is \"how can the plan cache provide a fast\npath for checking validity of simple-expression plans?\". And when you\nthink about it that way, there's a pretty obvious answer: if the plan\ninvolves no table references, there's not going to be any locks that\nhave to be taken before we can check the is_valid flag. So we can\nhave a fast path that skips AcquirePlannerLocks and\nAcquireExecutorLocks, which are a big part of the problem, and we can\nalso bypass some of the other random checks that GetCachedPlan has to\nmake, like whether RLS affects the plan.\n\nAnother chunk of the issue is the constant acquisition and release of\nreference counts on the plan. 
We can't really skip that (I suspect\nthere are additional bugs in Amit's patch arising from trying to do so).\nHowever, plpgsql already has mechanisms for paying simple-expression\nsetup costs once per transaction rather than once per expression use.\nSo we can set up a simple-expression ResourceOwner managed much like\nthe simple-expression EState, and have it hold a refcount on the\nCachedPlan for each simple expression, and pay that overhead just once\nper transaction.\n\nSo I worked on those ideas for awhile, and came up with the attached\npatchset:\n\n0001 adds some regression tests in this area (Amit's patch fails the\ntests concerning search_path changes).\n\n0002 does what's suggested above. I also did a little bit of marginal\nmicro-tuning in exec_eval_simple_expr() itself.\n\n0003 improves the biggest remaining cost of validity rechecking,\nwhich is verifying that the search_path is the same as it was when\nthe plan was cached.\n\nI haven't done any serious performance testing on this, but it gives\ncirca 2X speedup on Pavel's original example, which is at least\nfairly close to the results that Amit's patch got there. And it\nmakes this last batch of test cases faster not slower, too.\n\nWith this patch, perf shows the hotspots on Pavel's original example\nas being\n\n+ 19.24% 19.17% 46470 postmaster plpgsql.so [.] exec_eval_expr\n+ 15.19% 15.15% 36720 postmaster plpgsql.so [.] plpgsql_param_eval_var\n+ 14.98% 14.94% 36213 postmaster postgres [.] ExecInterpExpr\n+ 6.32% 6.30% 15262 postmaster plpgsql.so [.] exec_stmt\n+ 6.08% 6.06% 14681 postmaster plpgsql.so [.] 
exec_assign_value\n\nMaybe there's more that could be done to knock fat out of\nexec_eval_expr and/or plpgsql_param_eval_var, but at least\nthe plan cache isn't the bottleneck anymore.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 21 Mar 2020 14:24:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "so 21. 3. 2020 v 19:24 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > So the patch has a problem with constant casting - unfortunately the mix\n> of\n> > double precision variables and numeric constants is pretty often in\n> > Postgres.\n>\n> Yeah. I believe the cause of that is that the patch thinks it can skip\n> passing an inline-function-free simple expression through the planner.\n> That's flat out wrong. Quite aside from failing to perform\n> constant-folding (which is presumably the cause of the slowdown you\n> spotted), that means that we miss performing such non-optional\n> transformations as rearranging named-function-argument notation into\n> positional order. I didn't bother to test that but I'm sure it can be\n> shown to lead to crashes.\n>\n> Now that I've looked at the patch I don't like it one bit from a\n> structural standpoint either. It's basically trying to make an end\n> run around the plancache, which is not going to be maintainable even\n> if it correctly accounted for everything the plancache does today.\n> Which it doesn't. Two big problems are:\n>\n> * It doesn't account for the possibility of search_path changes\n> affecting the interpretation of an expression.\n>\n> * It assumes that the *only* things that a simple plan could get\n> invalidated for are functions that were inlined. This isn't the\n> case --- a counterexample is that removal of no-op CoerceToDomain\n> nodes requires the plan to be invalidated if the domain's constraints\n> change. 
And there are likely to be more such issues in future.\n>\n>\n> So while there's clearly something worth pursuing here, I do not like\n> anything about the way it was done. I think that the right way to\n> think about this problem is \"how can the plan cache provide a fast\n> path for checking validity of simple-expression plans?\". And when you\n> think about it that way, there's a pretty obvious answer: if the plan\n> involves no table references, there's not going to be any locks that\n> have to be taken before we can check the is_valid flag. So we can\n> have a fast path that skips AcquirePlannerLocks and\n> AcquireExecutorLocks, which are a big part of the problem, and we can\n> also bypass some of the other random checks that GetCachedPlan has to\n> make, like whether RLS affects the plan.\n>\n> Another chunk of the issue is the constant acquisition and release of\n> reference counts on the plan. We can't really skip that (I suspect\n> there are additional bugs in Amit's patch arising from trying to do so).\n> However, plpgsql already has mechanisms for paying simple-expression\n> setup costs once per transaction rather than once per expression use.\n> So we can set up a simple-expression ResourceOwner managed much like\n> the simple-expression EState, and have it hold a refcount on the\n> CachedPlan for each simple expression, and pay that overhead just once\n> per transaction.\n>\n> So I worked on those ideas for awhile, and came up with the attached\n> patchset:\n>\n> 0001 adds some regression tests in this area (Amit's patch fails the\n> tests concerning search_path changes).\n>\n> 0002 does what's suggested above. 
I also did a little bit of marginal\n> micro-tuning in exec_eval_simple_expr() itself.\n>\n> 0003 improves the biggest remaining cost of validity rechecking,\n> which is verifying that the search_path is the same as it was when\n> the plan was cached.\n>\n> I haven't done any serious performance testing on this, but it gives\n> circa 2X speedup on Pavel's original example, which is at least\n> fairly close to the results that Amit's patch got there. And it\n> makes this last batch of test cases faster not slower, too.\n>\n> With this patch, perf shows the hotspots on Pavel's original example\n> as being\n>\n> + 19.24% 19.17% 46470 postmaster plpgsql.so\n> [.] exec_eval_expr\n> + 15.19% 15.15% 36720 postmaster plpgsql.so\n> [.] plpgsql_param_eval_var\n> + 14.98% 14.94% 36213 postmaster postgres\n> [.] ExecInterpExpr\n> + 6.32% 6.30% 15262 postmaster plpgsql.so\n> [.] exec_stmt\n> + 6.08% 6.06% 14681 postmaster plpgsql.so\n> [.] exec_assign_value\n>\n> Maybe there's more that could be done to knock fat out of\n> exec_eval_expr and/or plpgsql_param_eval_var, but at least\n> the plan cache isn't the bottleneck anymore.\n>\n\nI tested Tom's patches, and I can confirm these results.\n\nIt doesn't break tests (and all tests plpgsql_check tests passed without\nproblems).\n\nThe high overhead of ExecInterpExpr is related to prepare fcinfo, and\nchecking nulls arguments because all functions are strict\nplpgsql_param_eval_var, looks like expensive is var = (PLpgSQL_var *)\nestate->datums[dno] and *op->resvalue = var->value;\n\nIt looks great.\n\nPavel\n\n>\n> regards, tom lane\n>\n", "msg_date": "Sat, 21 Mar 2020 21:29:21 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "so 21. 3. 2020 v 21:29 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> so 21. 3. 2020 v 19:24 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> > So the patch has a problem with constant casting - unfortunately the\n>> mix of\n>> > double precision variables and numeric constants is pretty often in\n>> > Postgres.\n>>\n>> Yeah. I believe the cause of that is that the patch thinks it can skip\n>> passing an inline-function-free simple expression through the planner.\n>> That's flat out wrong. Quite aside from failing to perform\n>> constant-folding (which is presumably the cause of the slowdown you\n>> spotted), that means that we miss performing such non-optional\n>> transformations as rearranging named-function-argument notation into\n>> positional order.
I didn't bother to test that but I'm sure it can be\n>> shown to lead to crashes.\n>>\n>> Now that I've looked at the patch I don't like it one bit from a\n>> structural standpoint either. It's basically trying to make an end\n>> run around the plancache, which is not going to be maintainable even\n>> if it correctly accounted for everything the plancache does today.\n>> Which it doesn't. Two big problems are:\n>>\n>> * It doesn't account for the possibility of search_path changes\n>> affecting the interpretation of an expression.\n>>\n>> * It assumes that the *only* things that a simple plan could get\n>> invalidated for are functions that were inlined. This isn't the\n>> case --- a counterexample is that removal of no-op CoerceToDomain\n>> nodes requires the plan to be invalidated if the domain's constraints\n>> change. And there are likely to be more such issues in future.\n>>\n>>\n>> So while there's clearly something worth pursuing here, I do not like\n>> anything about the way it was done. I think that the right way to\n>> think about this problem is \"how can the plan cache provide a fast\n>> path for checking validity of simple-expression plans?\". And when you\n>> think about it that way, there's a pretty obvious answer: if the plan\n>> involves no table references, there's not going to be any locks that\n>> have to be taken before we can check the is_valid flag. So we can\n>> have a fast path that skips AcquirePlannerLocks and\n>> AcquireExecutorLocks, which are a big part of the problem, and we can\n>> also bypass some of the other random checks that GetCachedPlan has to\n>> make, like whether RLS affects the plan.\n>>\n>> Another chunk of the issue is the constant acquisition and release of\n>> reference counts on the plan. 
We can't really skip that (I suspect\n>> there are additional bugs in Amit's patch arising from trying to do so).\n>> However, plpgsql already has mechanisms for paying simple-expression\n>> setup costs once per transaction rather than once per expression use.\n>> So we can set up a simple-expression ResourceOwner managed much like\n>> the simple-expression EState, and have it hold a refcount on the\n>> CachedPlan for each simple expression, and pay that overhead just once\n>> per transaction.\n>>\n>> So I worked on those ideas for awhile, and came up with the attached\n>> patchset:\n>>\n>> 0001 adds some regression tests in this area (Amit's patch fails the\n>> tests concerning search_path changes).\n>>\n>> 0002 does what's suggested above. I also did a little bit of marginal\n>> micro-tuning in exec_eval_simple_expr() itself.\n>>\n>> 0003 improves the biggest remaining cost of validity rechecking,\n>> which is verifying that the search_path is the same as it was when\n>> the plan was cached.\n>>\n>> I haven't done any serious performance testing on this, but it gives\n>> circa 2X speedup on Pavel's original example, which is at least\n>> fairly close to the results that Amit's patch got there. And it\n>> makes this last batch of test cases faster not slower, too.\n>>\n>> With this patch, perf shows the hotspots on Pavel's original example\n>> as being\n>>\n>> + 19.24% 19.17% 46470 postmaster plpgsql.so\n>> [.] exec_eval_expr\n>> + 15.19% 15.15% 36720 postmaster plpgsql.so\n>> [.] plpgsql_param_eval_var\n>> + 14.98% 14.94% 36213 postmaster postgres\n>> [.] ExecInterpExpr\n>> + 6.32% 6.30% 15262 postmaster plpgsql.so\n>> [.] exec_stmt\n>> + 6.08% 6.06% 14681 postmaster plpgsql.so\n>> [.] 
exec_assign_value\n>>\n>> Maybe there's more that could be done to knock fat out of\n>> exec_eval_expr and/or plpgsql_param_eval_var, but at least\n>> the plan cache isn't the bottleneck anymore.\n>>\n>\n> I tested Tom's patches, and I can confirm these results.\n>\n> It doesn't break tests (and all tests plpgsql_check tests passed without\n> problems).\n>\n> The high overhead of ExecInterpExpr is related to prepare fcinfo, and\n> checking nulls arguments because all functions are strict\n> plpgsql_param_eval_var, looks like expensive is var = (PLpgSQL_var *)\n> estate->datums[dno] and *op->resvalue = var->value;\n>\n\nI rechecked Tom's patch, and all tests passed, and there is a stable positive\nperformance impact of about 30% in the tested pi estimation example.\n\nAfter this patch, the code is only 3x slower than in Lua (originally\nit was 5x) and 1/3 slower than in Python (but Python calculates with higher\nprecision).\n\nI think this speed is about the maximum possible (for now) - after patching,\nthe remaining slowness is attributable to nullable types and related operations.\nProbably it can be reduced too. The variables can be marked as NOT NULL,\nand if all variables are NOT NULL, then we don't need to repeat the check of\nnull arguments of strict functions.\n\nI'll mark this patch as ready for committers.\n\nThank you\n\nPavel\n\n\n> It looks great.\n>\n> Pavel\n>\n>\n>\n>>\n>>\n>>                         regards, tom lane\n>>\n>>\n", "msg_date": "Wed, 25 Mar 2020 20:14:14 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I'll mark this patch as ready for committers.\n\nThanks for reviewing! 
Amit, do you have any thoughts on this?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Mar 2020 15:44:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "On Sat, Mar 21, 2020 at 2:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> With this patch, perf shows the hotspots on Pavel's original example\n> as being\n>\n> + 19.24% 19.17% 46470 postmaster plpgsql.so [.] exec_eval_expr\n> + 15.19% 15.15% 36720 postmaster plpgsql.so [.] plpgsql_param_eval_var\n> + 14.98% 14.94% 36213 postmaster postgres [.] ExecInterpExpr\n> + 6.32% 6.30% 15262 postmaster plpgsql.so [.] exec_stmt\n> + 6.08% 6.06% 14681 postmaster plpgsql.so [.] exec_assign_value\n\nThat's pretty sweet. As you say, there's probably some way to\neliminate some of the non-plancache overhead, but it's still a big\nimprovement.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 25 Mar 2020 16:32:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Hi,\n\nOn 2020-03-21 14:24:05 -0400, Tom Lane wrote:\n> So while there's clearly something worth pursuing here, I do not like\n> anything about the way it was done. I think that the right way to\n> think about this problem is \"how can the plan cache provide a fast\n> path for checking validity of simple-expression plans?\". And when you\n> think about it that way, there's a pretty obvious answer: if the plan\n> involves no table references, there's not going to be any locks that\n> have to be taken before we can check the is_valid flag. 
So we can\n> have a fast path that skips AcquirePlannerLocks and\n> AcquireExecutorLocks, which are a big part of the problem, and we can\n> also bypass some of the other random checks that GetCachedPlan has to\n> make, like whether RLS affects the plan.\n\nThat makes sense to me.\n\nI wonder if it'd make sense to store the locks needed for\nAcquirePlannerLocks/AcquireExecutorLocks in a better form. Not really\ninstead of your optimization, but to also address simple statements that\ndo reference a relation. If we stored all the locks for a plansource in\nan array, it should get cheaper - and automatically implement the fast\npath of skipping AcquirePlannerLocks/AcquireExecutorLocks when there's\nno relations.\n\n\n> Another chunk of the issue is the constant acquisition and release of\n> reference counts on the plan. We can't really skip that (I suspect\n> there are additional bugs in Amit's patch arising from trying to do so).\n> However, plpgsql already has mechanisms for paying simple-expression\n> setup costs once per transaction rather than once per expression use.\n> So we can set up a simple-expression ResourceOwner managed much like\n> the simple-expression EState, and have it hold a refcount on the\n> CachedPlan for each simple expression, and pay that overhead just once\n> per transaction.\n\n\n> I haven't done any serious performance testing on this, but it gives\n> circa 2X speedup on Pavel's original example, which is at least\n> fairly close to the results that Amit's patch got there. And it\n> makes this last batch of test cases faster not slower, too.\n> \n> With this patch, perf shows the hotspots on Pavel's original example\n> as being\n> \n> + 19.24% 19.17% 46470 postmaster plpgsql.so [.] exec_eval_expr\n> + 15.19% 15.15% 36720 postmaster plpgsql.so [.] plpgsql_param_eval_var\n> + 14.98% 14.94% 36213 postmaster postgres [.] ExecInterpExpr\n> + 6.32% 6.30% 15262 postmaster plpgsql.so [.] exec_stmt\n> + 6.08% 6.06% 14681 postmaster plpgsql.so [.] 
exec_assign_value\n> \n> Maybe there's more that could be done to knock fat out of\n> exec_eval_expr and/or plpgsql_param_eval_var, but at least\n> the plan cache isn't the bottleneck anymore.\n\nNice!\n\n\n> diff --git a/src/backend/utils/cache/plancache.c b/src/backend/utils/cache/plancache.c\n> index dbae18d..8e27b03 100644\n> --- a/src/backend/utils/cache/plancache.c\n> +++ b/src/backend/utils/cache/plancache.c\n> @@ -1278,6 +1278,160 @@ ReleaseCachedPlan(CachedPlan *plan, bool useResOwner)\n> }\n> \n> /*\n> + * CachedPlanAllowsSimpleValidityCheck: can we use CachedPlanIsSimplyValid?\n> + *\n> + * This function, together with CachedPlanIsSimplyValid, provides a fast path\n> + * for revalidating \"simple\" generic plans. The core requirement to be simple\n> + * is that the plan must not require taking any locks, which translates to\n> + * not touching any tables; this happens to match up well with an important\n> + * use-case in PL/pgSQL. This function tests whether that's true, along\n> + * with checking some other corner cases that we'd rather not bother with\n> + * handling in the fast path. (Note that it's still possible for such a plan\n> + * to be invalidated, for example due to a change in a function that was\n> + * inlined into the plan.)\n> + *\n> + * This must only be called on known-valid generic plans (eg, ones just\n> + * returned by GetCachedPlan). If it returns true, the caller may re-use\n> + * the cached plan as long as CachedPlanIsSimplyValid returns true; that\n> + * check is much cheaper than the full revalidation done by GetCachedPlan.\n> + * Nonetheless, no required checks are omitted.\n> + */\n> +bool\n> +CachedPlanAllowsSimpleValidityCheck(CachedPlanSource *plansource,\n> +\t\t\t\t\t\t\t\t\tCachedPlan *plan)\n> +{\n> +\tListCell *lc;\n\nWould it make sense to instead compute this as we go when building a\nvalid CachedPlanSource? 
If we make it a property of a is_valid\nCachedPlanSource, we can assert that the plan is safe for use in\nCachedPlanIsSimplyValid().\n\nAnd perhaps also optimize the normal checks in RevalidateCachedQuery()\nfor cases not going through the \"simple\" path. We could not use the\noptimizations around refcounts for those, but it still seems like it\ncould be useful? And less separate infrastructure is good too.\n\n\n\n> +/*\n> + * CachedPlanIsSimplyValid: quick check for plan still being valid\n> + *\n> + * This function must not be used unless CachedPlanAllowsSimpleValidityCheck\n> + * previously said it was OK.\n> + *\n> + * If the plan is valid, and \"owner\" is not NULL, record a refcount on\n> + * the plan in that resowner before returning. It is caller's responsibility\n> + * to be sure that a refcount is held on any plan that's being actively used.\n> + *\n> + * The code here is unconditionally safe as long as the only use of this\n> + * CachedPlanSource is in connection with the particular CachedPlan pointer\n> + * that's passed in. If the plansource were being used for other purposes,\n> + * it's possible that its generic plan could be invalidated and regenerated\n> + * while the current caller wasn't looking, and then there could be a chance\n> + * collision of address between this caller's now-stale plan pointer and the\n> + * actual address of the new generic plan. For current uses, that scenario\n> + * can't happen; but with a plansource shared across multiple uses, it'd be\n> + * advisable to also save plan->generation and verify that that still matches.\n\nThat's mighty subtle :/\n\n\n> \t/*\n> +\t * Likewise for the simple-expression resource owner. (Note: it'd be\n> +\t * safer to create this as a child of TopTransactionResourceOwner; but\n> +\t * right now that causes issues in transaction-spanning procedures, so\n> +\t * make it standalone.)\n> +\t */\n\nHm. 
I'm quite unfamiliar with this area of the code - so I'm likely just\nmissing something: Given that you're using a post xact cleanup hook to\nrelease the resowner, I'm not quite sure I understand this comment. The\nXACT_EVENT_ABORT/COMMIT callbacks are called before\nTopTransactionResourceOwner is released, no?\n\n> void\n> plpgsql_xact_cb(XactEvent event, void *arg)\n> {\n> \t/*\n> \t * If we are doing a clean transaction shutdown, free the EState (so that\n> -\t * any remaining resources will be released correctly). In an abort, we\n> +\t * any remaining resources will be released correctly). In an abort, we\n> \t * expect the regular abort recovery procedures to release everything of\n> -\t * interest.\n> +\t * interest. The resowner has to be explicitly released in both cases,\n> +\t * though, since it's not a child of TopTransactionResourceOwner.\n> \t */\n> \tif (event == XACT_EVENT_COMMIT || event == XACT_EVENT_PREPARE)\n> \t{\n> @@ -8288,11 +8413,17 @@ plpgsql_xact_cb(XactEvent event, void *arg)\n> \t\tif (shared_simple_eval_estate)\n> \t\t\tFreeExecutorState(shared_simple_eval_estate);\n> \t\tshared_simple_eval_estate = NULL;\n> +\t\tif (shared_simple_eval_resowner)\n> +\t\t\tplpgsql_free_simple_resowner(shared_simple_eval_resowner);\n> +\t\tshared_simple_eval_resowner = NULL;\n> \t}\n> \telse if (event == XACT_EVENT_ABORT)\n> \t{\n> \t\tsimple_econtext_stack = NULL;\n> \t\tshared_simple_eval_estate = NULL;\n> +\t\tif (shared_simple_eval_resowner)\n> +\t\t\tplpgsql_free_simple_resowner(shared_simple_eval_resowner);\n> +\t\tshared_simple_eval_resowner = NULL;\n> \t}\n> }\n\n\n\n> +void\n> +plpgsql_free_simple_resowner(ResourceOwner simple_eval_resowner)\n> +{\n> +\t/*\n> +\t * At this writing, the only thing that could actually get released is\n> +\t * plancache refcounts; but we may as well do the full release protocol.\n\nHm, any chance that the multiple resowner calls here could show up in a\nprofile? 
Probably not?\n\n\n> +\t * We pass isCommit = false even when committing, to suppress\n> +\t * resource-leakage gripes, since we aren't bothering to release the\n> +\t * refcounts one-by-one.\n> +\t */\n\nThat's a bit icky...\n\n\n\n> * OverrideSearchPathMatchesCurrent - does path match current setting?\n> + *\n> + * This is tested over and over in some common code paths, and in the typical\n> + * scenario where the active search path seldom changes, it'll always succeed.\n> + * We make that case fast by keeping a generation counter that is advanced\n> + * whenever the active search path changes.\n> */\n\nCould it be worth optimizing the path generation logic so that a\npush/pop of an override path restores the old generation? That way we\ncould likely avoid the overhead even for cases where some functions\nspecify their own search path?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 Mar 2020 14:18:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I wonder if it'd make sense to store the locks needed for\n> AcquirePlannerLocks/AcquireExecutorLocks in a better form.\n\nPerhaps, but I'm not sure that either of those functions represent\nmaterial overhead in cases besides this one.\n\n> Would it make sense to instead compute this as we go when building a\n> valid CachedPlanSource? If we make it a property of a is_valid\n> CachedPlanSource, we can assert that the plan is safe for use in\n> CachedPlanIsSimplyValid().\n\nI'm inclined to think not, because it'd just be overhead for other\nusers of cached plans.\n\n> That's mighty subtle :/\n\nYeah :-(. I don't like it that much, but I don't see an easy way to\ndo better, given the way that plpgsql manages its simple expressions.\n\n>> /*\n>> +\t * Likewise for the simple-expression resource owner. 
(Note: it'd be\n>> +\t * safer to create this as a child of TopTransactionResourceOwner; but\n>> +\t * right now that causes issues in transaction-spanning procedures, so\n>> +\t * make it standalone.)\n>> +\t */\n\n> Hm. I'm quite unfamiliar with this area of the code - so I'm likely just\n> missing something: Given that you're using a post xact cleanup hook to\n> release the resowner, I'm not quite sure I understand this comment. The\n> XACT_EVENT_ABORT/COMMIT callbacks are called before\n> TopTransactionResourceOwner is released, no?\n\nThe comment is there because the regression tests fall over if you try\nto do it the other way :-(. The failure I saw was specific to a\ntransaction being done in a DO block, and maybe we could handle that\ndifferently from the case for a normal procedure; but it seemed better\nto me to make them the same.\n\nThere's a separate question lurking under there, which is whether the\nexisting management of the simple-expression EState is right at all\nfor transaction-spanning DO blocks; frankly it smells a bit fishy to\nme. But looking into that did not seem in-scope for this patch.\n\n>> +void\n>> +plpgsql_free_simple_resowner(ResourceOwner simple_eval_resowner)\n>> +{\n>> +\t/*\n>> +\t * At this writing, the only thing that could actually get released is\n>> +\t * plancache refcounts; but we may as well do the full release protocol.\n\n> Hm, any chance that the multiple resowner calls here could show up in a\n> profile? Probably not?\n\nDoubt it. On the other hand, as the code stands it's certain that the\nresowner contains nothing but plancache pins (while I was writing the\npatch it wasn't entirely clear that that would hold). So we could\ndrop the two unnecessary calls. 
There are assertions in\nResourceOwnerDelete that would fire if we somehow missed releasing\nanything, so it doesn't seem like much of a maintenance hazard.\n\n>> +\t * We pass isCommit = false even when committing, to suppress\n>> +\t * resource-leakage gripes, since we aren't bothering to release the\n>> +\t * refcounts one-by-one.\n>> +\t */\n\n> That's a bit icky...\n\nAgreed, and it's not like our practice elsewhere. I thought about adding\na data structure that would track the set of held plancache pins outside\nthe resowner, but concluded that that'd just be pointless duplicative\noverhead.\n\n> Could it be worth optimizing the path generation logic so that a\n> push/pop of an override path restores the old generation?\n\n(1) Not given the existing set of uses of the push/pop capability, which\nso far as I can see is *only* CREATE SCHEMA. It's not involved in any\nother manipulations of the search path. And (2) as this is written, it's\ntotally unsafe for the generation counter ever to back up; that risks\nfalse match detections later.\n\nI appreciate the review!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Mar 2020 17:51:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Hi,\n\nOn 2020-03-25 17:51:50 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I wonder if it'd make sense to store the locks needed for\n> > AcquirePlannerLocks/AcquireExecutorLocks in a better form.\n> \n> Perhaps, but I'm not sure that either of those functions represent\n> material overhead in cases besides this one.\n\nFor pgbench -M prepared -S GetCachedPlan() and its children are 2.36% of\nthe time. 1.75% of the total is RevalidateCachedQuery(). 1.13% of that\nin turn is LockAcquireExtended.\n\nThat's not huge, but also not nothing. And this includes client\nroundtrips. 
So I assume it'd show up larger when executing actual\nqueries in a function, or when pipelining (which e.g. pgjdbc has on by\ndefault).\n\nIf I do simple lookups from pgbench_accounts in a loop in plpgsql,\nGetCachedPlan() is 4.43% and the LockAcquireExtended()'s called from\nwithin are 1.46%.\n\nSo it's plausible that making this a more generic optimization would be\nworthwhile.\n\n\n> > Would it make sense to instead compute this as we go when building a\n> > valid CachedPlanSource? If we make it a property of a is_valid\n> > CachedPlanSource, we can assert that the plan is safe for use in\n> > CachedPlanIsSimplyValid().\n> \n> I'm inclined to think not, because it'd just be overhead for other\n> users of cached plans.\n\nEven if we make RevalidateCachedQuery take advantage of the simpler\ntests when possible? While there's plenty of cases where it'd not be\napplicable, it seems likely that those wouldn't notice the small\nslowdown either.\n\n\n\n> >> /*\n> >> +\t * Likewise for the simple-expression resource owner. (Note: it'd be\n> >> +\t * safer to create this as a child of TopTransactionResourceOwner; but\n> >> +\t * right now that causes issues in transaction-spanning procedures, so\n> >> +\t * make it standalone.)\n> >> +\t */\n> \n> > Hm. I'm quite unfamiliar with this area of the code - so I'm likely just\n> > missing something: Given that you're using a post xact cleanup hook to\n> > release the resowner, I'm not quite sure I understand this comment. The\n> > XACT_EVENT_ABORT/COMMIT callbacks are called before\n> > TopTransactionResourceOwner is released, no?\n> \n> The comment is there because the regression tests fall over if you try\n> to do it the other way :-(. The failure I saw was specific to a\n> transaction being done in a DO block, and maybe we could handle that\n> differently from the case for a normal procedure; but it seemed better\n> to me to make them the same.\n\nI'm still confused as to why it actually fixes the issue. 
I feel we should\nat least understand what's going on before committing.\n\n\n> >> +void\n> >> +plpgsql_free_simple_resowner(ResourceOwner simple_eval_resowner)\n> >> +{\n> >> +\t/*\n> >> +\t * At this writing, the only thing that could actually get released is\n> >> +\t * plancache refcounts; but we may as well do the full release protocol.\n> \n> > Hm, any chance that the multiple resowner calls here could show up in a\n> > profile? Probably not?\n> \n> Doubt it. On the other hand, as the code stands it's certain that the\n> resowner contains nothing but plancache pins (while I was writing the\n> patch it wasn't entirely clear that that would hold). So we could\n> drop the two unnecessary calls. 
New\ngenerations would have to come from a separate 'next generation'\ncounter.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 Mar 2020 15:49:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-25 17:51:50 -0400, Tom Lane wrote:\n>> Perhaps, but I'm not sure that either of those functions represent\n>> material overhead in cases besides this one.\n\n> That's not huge, but also not nothing.\n\nI see. So maybe worth the trouble --- but still, seems like material for\na separate patch.\n\n>>> Would it make sense to instead compute this as we go when building a\n>>> valid CachedPlanSource?\n\n>> I'm inclined to think not, because it'd just be overhead for other\n>> users of cached plans.\n\n> Even if we make RevalidateCachedQuery take advantage of the simpler\n> tests when possible?\n\nI'm not convinced that any real optimization is practical once you\nallow tables in the query. You then have to check the RLS-active\nflags in some form, and the existing tests are not *that* expensive\nas long as the answer is \"no\". At best I think you might be reducing\ntwo or three simple tests to one.\n\nAlso, the reason why this is interesting at all for plpgsql simple\nexpressions is that the cost of these checks, simple as they are,\nis a noticeable fraction of the total time to do a simple expression.\nThat's not going to be the case for queries involving table access.\n\n>> The comment is there because the regression tests fall over if you try\n>> to do it the other way :-(. The failure I saw was specific to a\n>> transaction being done in a DO block, and maybe we could handle that\n>> differently from the case for a normal procedure; but it seemed better\n>> to me to make them the same.\n\n> I'm still confused as to why it actually fixes the issue. 
Feel we should\n> at least understand what's going on before commtting.\n\nI do understand the issue. If you make the simple-resowner a child\nof TopTransactionResourceOwner, it vanishes at commit --- but\nplpgsql_inline_handler has still got a pointer to it, which it'll try\nto free afterwards, if the commit was inside the DO block.\n\nWhat's not entirely clear to me is why this in exec_stmt_commit\n\n@@ -4825,6 +4845,7 @@ exec_stmt_commit(PLpgSQL_execstate *estate, PLpgSQL_stmt_commit *stmt)\n \t}\n \n \testate->simple_eval_estate = NULL;\n+\testate->simple_eval_resowner = NULL;\n \tplpgsql_create_econtext(estate);\n \n \treturn PLPGSQL_RC_OK;\n\nis okay --- it avoids having a dangling pointer, sure, but if we're inside\na DO block won't plpgsql_create_econtext create a simple_eval_estate (and,\nnow, simple_eval_resowner) with the wrong properties? But that's a\npre-existing question, and maybe Peter got it right and there's no\nproblem.\n\n>> Doubt it. On the other hand, as the code stands it's certain that the\n>> resowner contains nothing but plancache pins (while I was writing the\n>> patch it wasn't entirely clear that that would hold). So we could\n>> drop the two unnecessary calls. 
There are assertions in\n>> ResourceOwnerDelete that would fire if we somehow missed releasing\n>> anything, so it doesn't seem like much of a maintenance hazard.\n\n> One could even argue that that's a nice crosscheck: Due to the later\n> release it'd not actually be correct to just add \"arbitrary\" things to\n> that resowner.\n\nOK, I'll change that.\n\n>> (1) Not given the existing set of uses of the push/pop capability, which\n>> so far as I can see is *only* CREATE SCHEMA.\n\n> I do recall that there were issues with SET search_path in functions\n> causing noticable slowdowns...\n\nYeah, possibly that could be improved, but that seems outside the scope of\nthis patch.\n\n>> (2) as this is written, it's totally unsafe for the generation counter\n>> ever to back up; that risks false match detections later.\n\n> I was just thinking of backing up the 'active generation' state. New\n> generations would have to come from a separate 'next generation'\n> counter.\n\nOh, I see. Yeah, that could work, but there's no point until we have\npush/pop calls that are actually interesting for performance.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Mar 2020 19:15:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Hi,\n\nOn 2020-03-25 19:15:28 -0400, Tom Lane wrote:\n> >> The comment is there because the regression tests fall over if you try\n> >> to do it the other way :-(. The failure I saw was specific to a\n> >> transaction being done in a DO block, and maybe we could handle that\n> >> differently from the case for a normal procedure; but it seemed better\n> >> to me to make them the same.\n> \n> > I'm still confused as to why it actually fixes the issue. Feel we should\n> > at least understand what's going on before commtting.\n> \n> I do understand the issue. 
If you make the simple-resowner a child\n> of TopTransactionResourceOwner, it vanishes at commit --- but\n> plpgsql_inline_handler has still got a pointer to it, which it'll try\n> to free afterwards, if the commit was inside the DO block.\n\nI was confused why it fixes that, because:\n\n> void\n> plpgsql_xact_cb(XactEvent event, void *arg)\n> {\n> \t/*\n> \t * If we are doing a clean transaction shutdown, free the EState (so that\n> -\t * any remaining resources will be released correctly). In an abort, we\n> +\t * any remaining resources will be released correctly). In an abort, we\n> \t * expect the regular abort recovery procedures to release everything of\n> -\t * interest.\n> +\t * interest. The resowner has to be explicitly released in both cases,\n> +\t * though, since it's not a child of TopTransactionResourceOwner.\n> \t */\n> \tif (event == XACT_EVENT_COMMIT || event == XACT_EVENT_PREPARE)\n> \t{\n> @@ -8288,11 +8413,17 @@ plpgsql_xact_cb(XactEvent event, void *arg)\n> \t\tif (shared_simple_eval_estate)\n> \t\t\tFreeExecutorState(shared_simple_eval_estate);\n> \t\tshared_simple_eval_estate = NULL;\n> +\t\tif (shared_simple_eval_resowner)\n> +\t\t\tplpgsql_free_simple_resowner(shared_simple_eval_resowner);\n> +\t\tshared_simple_eval_resowner = NULL;\n> \t}\n> \telse if (event == XACT_EVENT_ABORT)\n> \t{\n> \t\tsimple_econtext_stack = NULL;\n> \t\tshared_simple_eval_estate = NULL;\n> +\t\tif (shared_simple_eval_resowner)\n> +\t\t\tplpgsql_free_simple_resowner(shared_simple_eval_resowner);\n> +\t\tshared_simple_eval_resowner = NULL;\n> \t}\n> }\n\nwill lead to shared_simple_eval_resowner being deleted before\nTopTransactionResourceOwner is deleted:\n\nstatic void\nCommitTransaction(void)\n...\n\tCallXactCallbacks(is_parallel_worker ? 
XACT_EVENT_PARALLEL_COMMIT\n\t\t\t\t\t : XACT_EVENT_COMMIT);\n\n\tResourceOwnerRelease(TopTransactionResourceOwner,\n\t\t\t\t\t\t RESOURCE_RELEASE_BEFORE_LOCKS,\n\t\t\t\t\t\t true, true);\n\nWhat I missed is that the inline handler will not use\nshared_simple_eval_resowner, but instead use the function-local\nsimple_eval_resowner. Which I had not realized before.\n\n\nI'm still confused by the comment I was reacting to - the code\nexplicitly is about creating the *shared* resowner:\n\n> +\t * Likewise for the simple-expression resource owner. (Note: it'd be\n> +\t * safer to create this as a child of TopTransactionResourceOwner; but\n> +\t * right now that causes issues in transaction-spanning procedures, so\n> +\t * make it standalone.)\n> +\t */\n> +\tif (estate->simple_eval_resowner == NULL)\n> +\t{\n> +\t\tif (shared_simple_eval_resowner == NULL)\n> +\t\t\tshared_simple_eval_resowner =\n> +\t\t\t\tResourceOwnerCreate(NULL, \"PL/pgSQL simple expressions\");\n> +\t\testate->simple_eval_resowner = shared_simple_eval_resowner;\n> +\t}\n\nwhich, afaict, will always be deleted before TopTransactionResourceOwner\ngoes away?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 Mar 2020 16:41:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I'm still confused by the comment I was reacting to - the code\n> explicitly is about creating the *shared* resowner:\n\nRight, this is because of the choice I mentioned earlier about creating\nthis resowner the same way as the one for the inline case. I guess the\ncomments could go into more detail. 
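(For readers following along, here is a minimal sketch of the sort of transaction-spanning DO block that makes the parentage choice tricky; the table-free expressions below are invented for illustration, they are not taken from the patch's regression tests:)

```sql
-- Hypothetical illustration of a DO block that commits mid-execution.
-- Before COMMIT, simple expressions run under the DO block's private
-- simple-eval EState/resowner; exec_stmt_commit then unlinks that state
-- and switches to a newly created shared EState/resowner, and
-- plpgsql_inline_handler cleans up the original private state on exit.
DO $$
DECLARE
    x int;
BEGIN
    x := 1 + 1;   -- evaluated under the DO block's private state
    COMMIT;       -- private state is unlinked here
    x := x + 1;   -- evaluated under the newly created shared state
END
$$;
```

The question being debated is which resource owner should parent the plancache pins taken while evaluating those simple expressions, and when that owner is released relative to CommitTransaction().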
Or we could make the parentage\ndifferent for the two cases, but I don't like that much.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Mar 2020 19:50:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "On Thu, Mar 26, 2020 at 4:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I'll mark this patch as ready for committers.\n>\n> Thanks for reviewing! Amit, do you have any thoughts on this?\n\nThanks for picking this up. Test cases added by your patch really\nshow why the plancache and the planner must not be skipped, something\nI totally failed to grasp.\n\nI can't really see any problem with your patch, but mainly due to my\nunfamiliarity with some of the more complicated things it touches,\nlike resowner stuff.\n\nOne thing -- I don't get the division between\nCachedPlanAllowsSimpleValidityCheck() and CachedPlanIsSimplyValid().\nMaybe I am missing something, but could there not be just one\nfunction, possibly using whether expr_simple_expr is set or not to\nskip or do, resp., the checks that the former does?\n\n--\nThank you,\n\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 26 Mar 2020 19:56:32 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-25 17:51:50 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> Hm, any chance that the multiple resowner calls here could show up in a\n>>> profile? Probably not?\n\n>> Doubt it. On the other hand, as the code stands it's certain that the\n>> resowner contains nothing but plancache pins (while I was writing the\n>> patch it wasn't entirely clear that that would hold). So we could\n>> drop the two unnecessary calls. 
There are assertions in\n>> ResourceOwnerDelete that would fire if we somehow missed releasing\n>> anything, so it doesn't seem like much of a maintenance hazard.\n\n> One could even argue that that's a nice crosscheck: Due to the later\n> release it'd not actually be correct to just add \"arbitrary\" things to\n> that resowner.\n\nI had a thought about a possibly-cleaner way to do this. We could invent\na resowner function, say ResourceOwnerReleaseAllPlanCacheRefs, that\nexplicitly releases all plancache pins it knows about. So plpgsql\nwould not call the regular ResourceOwnerRelease entry point at all,\nbut just call that and then ResourceOwnerDelete, again relying on the\nassertions therein to catch anything not released.\n\nThis would be slightly more code but it'd perhaps make it clearer\nwhat's going on, without the cost of a duplicative data structure.\nPerhaps in future there'd be use for similar calls for other resource\ntypes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Mar 2020 10:02:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> One thing -- I don't get the division between\n> CachedPlanAllowsSimpleValidityCheck() and CachedPlanIsSimplyValid().\n> Maybe I am missing something, but could there not be just one\n> function, possibly using whether expr_simple_expr is set or not to\n> skip or do, resp., the checks that the former does?\n\nWell, we don't want to do the initial checks over again every time;\nwe want the is-valid test to be as simple and fast as we can make it.\nI suppose we could have one function with a boolean flag saying \"this is a\nrecheck\", but I don't find that idea to be any better than the way it is.\n\nAlso, although the existing structure in plpgsql always calls\nCachedPlanIsSimplyValid immediately after a successful call 
to\nCachedPlanAllowsSimpleValidityCheck, I don't think that's necessarily\ngoing to be true for other potential users of the functions.\nSo merging the functions would reduce flexibility.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Mar 2020 13:39:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "I wrote:\n> I had a thought about a possibly-cleaner way to do this. We could invent\n> a resowner function, say ResourceOwnerReleaseAllPlanCacheRefs, that\n> explicitly releases all plancache pins it knows about. So plpgsql\n> would not call the regular ResourceOwnerRelease entry point at all,\n> but just call that and then ResourceOwnerDelete, again relying on the\n> assertions therein to catch anything not released.\n\nHere's a version that does it like that. This does seem marginally\nnicer than the other way. I have a feeling that at some point we'll\nwant to expose resowners' contents more generally, but I'm not quite\nsure what the use-cases will be, so I don't want to design that now.\n\nAlso, I studied the question of DO blocks' eval_estate + resowner\nmore carefully, and eventually concluded that the way it's being\ndone is okay --- it doesn't leak memory, as I'd first suspected.\nBut it's surely underdocumented, so I added some comments about it.\nI also concluded as part of that study that it's probably best if\nwe *do* make the resowner parentage different in the two cases\nafter all. So this has the \"shared\" resowner as a child of\nTopTransactionResourceOwner after all (which means we don't need\nto delete it explicitly), while a DO block's private resowner is\nstandalone (so it needs an explicit deletion).\n\nTesting that reminded me of the other regression test failure I'd seen\nwhen I first tried to do it: select_parallel.sql shows a WARNING about\na plancache leak in a parallel worker process. 
When I looked into the\nreason for that, it turned out that some cowboy has split\nXACT_EVENT_COMMIT into XACT_EVENT_COMMIT and\nXACT_EVENT_PARALLEL_COMMIT (where the latter is used in parallel\nworkers) without bothering to fix the collateral damage to plpgsql.\nSo plpgsql_xact_cb isn't doing any cleanup in parallel workers, and\nhasn't been for a couple of releases. The bad effects of that are\nprobably limited given that the worker process will exit after\ncommitting, but I still think that that part of this patch is a bug\nfix that needs to be back-patched. (Just looking at what\nFreeExecutorState does, I wonder whether jit_release_context has any\nside-effects that are visible outside the process? But I bet I can\nmake a test case that shows issues even without JIT, based on the\nfailure to call ExprContext shutdown callbacks.)\n\nAnyway, I think this is committable now.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 26 Mar 2020 14:37:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Hi,\n\nOn 2020-03-26 14:37:59 -0400, Tom Lane wrote:\n> I wrote:\n> > I had a thought about a possibly-cleaner way to do this. We could invent\n> > a resowner function, say ResourceOwnerReleaseAllPlanCacheRefs, that\n> > explicitly releases all plancache pins it knows about. So plpgsql\n> > would not call the regular ResourceOwnerRelease entry point at all,\n> > but just call that and then ResourceOwnerDelete, again relying on the\n> > assertions therein to catch anything not released.\n> \n> Here's a version that does it like that. This does seem marginally\n> nicer than the other way. 
I have a feeling that at some point we'll\n> want to expose resowners' contents more generally, but I'm not quite\n> sure what the use-cases will be, so I don't want to design that now.\n\nYea, agreed with all of what you said in that paragraph.\n\n\n> Testing that reminded me of the other regression test failure I'd seen\n> when I first tried to do it: select_parallel.sql shows a WARNING about\n> a plancache leak in a parallel worker process. When I looked into the\n> reason for that, it turned out that some cowboy has split\n> XACT_EVENT_COMMIT into XACT_EVENT_COMMIT and\n> XACT_EVENT_PARALLEL_COMMIT (where the latter is used in parallel\n> workers) without bothering to fix the collateral damage to plpgsql.\n> So plpgsql_xact_cb isn't doing any cleanup in parallel workers, and\n> hasn't been for a couple of releases.\n\nUgh.\n\n\n> The bad effects of that are probably limited given that the worker\n> process will exit after committing, but I still think that that part\n> of this patch is a bug fix that needs to be back-patched.\n\nUgh. Lucky that we don't register many things inside those resowners.\n\n\n> (Just\n> looking at what FreeExecutorState does, I wonder whether\n> jit_release_context has any side-effects that are visible outside the\n> process? But I bet I can make a test case that shows issues even\n> without JIT, based on the failure to call ExprContext shutdown\n> callbacks.)\n\nJIT doesn't currently have side-effects outside of the process. I really\nwant to add caching support, which'd presumably have problems due to\nthis, but it's not there yet... This could lead to leaking a fair bit of\nmemory over time otherwise.\n\n\n\n> /*\n> + * CachedPlanAllowsSimpleValidityCheck: can we use CachedPlanIsSimplyValid?\n> + *\n> + * This function, together with CachedPlanIsSimplyValid, provides a fast path\n> + * for revalidating \"simple\" generic plans. 
The core requirement to be simple\n> + * is that the plan must not require taking any locks, which translates to\n> + * not touching any tables; this happens to match up well with an important\n> + * use-case in PL/pgSQL.\n\nHm - is there currently sufficient guarantee that we absorb sinval\nmessages? Would still matter for types, functions, etc?\n\n\n> /*\n> + * ResourceOwnerReleaseAllPlanCacheRefs\n> + *\t\tRelease the plancache references (only) held by this owner.\n> + *\n> + * We might eventually add similar functions for other resource types,\n> + * but for now, only this is needed.\n> + */\n> +void\n> +ResourceOwnerReleaseAllPlanCacheRefs(ResourceOwner owner)\n> +{\n> +\tResourceOwner save;\n> +\tDatum\t\tfoundres;\n> +\n> +\tsave = CurrentResourceOwner;\n> +\tCurrentResourceOwner = owner;\n> +\twhile (ResourceArrayGetAny(&(owner->planrefarr), &foundres))\n> +\t{\n> +\t\tCachedPlan *res = (CachedPlan *) DatumGetPointer(foundres);\n> +\n> +\t\tReleaseCachedPlan(res, true);\n> +\t}\n> +\tCurrentResourceOwner = save;\n> +}\n\nWhile it'd do a small bit of unnecessary work, I do wonder if it'd be\nbetter to use this code in ResourceOwnerReleaseInternal().\n\n\n> --- a/src/pl/plpgsql/src/pl_exec.c\n> +++ b/src/pl/plpgsql/src/pl_exec.c\n> @@ -84,6 +84,13 @@ typedef struct\n> * has its own simple-expression EState, which is cleaned up at exit from\n> * plpgsql_inline_handler(). DO blocks still use the simple_econtext_stack,\n> * though, so that subxact abort cleanup does the right thing.\n> + *\n> + * (However, if a DO block executes COMMIT or ROLLBACK, then exec_stmt_commit\n> + * or exec_stmt_rollback will unlink it from the DO's simple-expression EState\n> + * and create a new shared EState that will be used thenceforth. The original\n> + * EState will be cleaned up when we get back to plpgsql_inline_handler. 
This\n> + * is a bit ugly, but it isn't worth doing better, since scenarios like this\n> + * can't result in indefinite accumulation of state trees.)\n> */\n> typedef struct SimpleEcontextStackEntry\n> {\n> @@ -96,6 +103,15 @@ static EState *shared_simple_eval_estate = NULL;\n> static SimpleEcontextStackEntry *simple_econtext_stack = NULL;\n> \n> /*\n> + * In addition to the shared simple-eval EState, we have a shared resource\n> + * owner that holds refcounts on the CachedPlans for any \"simple\" expressions\n> + * we have evaluated in the current transaction. This allows us to avoid\n> + * continually grabbing and releasing a plan refcount when a simple expression\n> + * is used over and over.\n> + */\n> +static ResourceOwner shared_simple_eval_resowner = NULL;\n\nPerhaps add a reference to the new (appreciated, btw) DO comment above?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 26 Mar 2020 11:49:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-26 14:37:59 -0400, Tom Lane wrote:\n>> + * This function, together with CachedPlanIsSimplyValid, provides a fast path\n>> + * for revalidating \"simple\" generic plans. The core requirement to be simple\n>> + * is that the plan must not require taking any locks, which translates to\n>> + * not touching any tables; this happens to match up well with an important\n>> + * use-case in PL/pgSQL.\n\n> Hm - is there currently sufficient guarantee that we absorb sinval\n> messages? Would still matter for types, functions, etc?\n\nThere are potentially issues of that sort throughout the backend, not\njust here, since we don't have any locking on types or functions.\nI don't think it's this patch's job to address that. 
In practice\nI think we've thought about it and concluded that the cost/benefit\nof introducing such locks just isn't promising:\n\n* Generally speaking you can't do anything very interesting to a type\nanyway, at least not with supported DDL. The worst-case situation that\ncould materialize AFAIK is possibly evaluating slightly-stale constraints\nfor a domain. (The typcache does have sinval invalidation for those\nconstraints, but I don't recall offhand how much we guarantee about\nhow quickly we'll notice updates.)\n\n* For functions, you might execute a somewhat stale version of the\nfunction body. The bad effects there are pretty limited since a function\nis defined by just one catalog row, unlike tables, so you can't see a\nself-inconsistent version of it.\n\nThe amount of lock overhead that it would take to remove those edge\ncases seems slightly staggering, so I doubt we'd ever do it.\n\n> While it'd do a small bit of unnecessary work, I do wonder if it'd be\n> better to use this code in ResourceOwnerReleaseInternal().\n\nWhen and if we refactor to expose this sort of thing more generally,\nit might be worth doing it like that. I don't think it helps much\nright now.\n\n> Perhaps add a reference to the new (appreciated, btw) DO comment above?\n\nCan do.\n\nAgain, thanks for reviewing!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Mar 2020 15:05:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-26 14:37:59 -0400, Tom Lane wrote:\n>> Testing that reminded me of the other regression test failure I'd seen\n>> when I first tried to do it: select_parallel.sql shows a WARNING about\n>> a plancache leak in a parallel worker process. 
When I looked into the\n>> reason for that, it turned out that some cowboy has split\n>> XACT_EVENT_COMMIT into XACT_EVENT_COMMIT and\n>> XACT_EVENT_PARALLEL_COMMIT (where the latter is used in parallel\n>> workers) without bothering to fix the collateral damage to plpgsql.\n>> So plpgsql_xact_cb isn't doing any cleanup in parallel workers, and\n>> hasn't been for a couple of releases.\n>> The bad effects of that are probably limited given that the worker\n>> process will exit after committing, but I still think that that part\n>> of this patch is a bug fix that needs to be back-patched.\n\n> Ugh. Lucky that we don't register many things inside those resowners.\n\nYeah. I spent some time trying to produce a failure this way, and\nconcluded that it's pretty hard because most of the relevant callbacks\nwill be run during ExprContext shutdown, which is done during plpgsql\nfunction exit. In a non-transaction-abort situation, the simple EState\nshouldn't have any live ExprContexts left at commit. I did find a case\nwhere a memory context callback attached to the EState's query context\ndoesn't get run when expected ... but it still gets run later, when the\nwhole memory context tree is destroyed. So I can't demonstrate any\nuser-visible misbehavior in the core code. But it still seems like a\nprudent idea to back-patch a fix, in case I missed something or there is\nsome extension that's pushing the boundaries further. 
It's definitely\nnot very cool that we're leaving behind a dangling static pointer to an\nEState that won't exist once TopTransactionMemoryContext is gone.\n\nI'll back-patch relevant parts of those comments about DO block\nmanagement, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Mar 2020 16:59:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" }, { "msg_contents": "I wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n>> One thing -- I don't get the division between\n>> CachedPlanAllowsSimpleValidityCheck() and CachedPlanIsSimplyValid().\n>> Maybe I am missing something, but could there not be just one\n>> function, possibly using whether expr_simple_expr is set or not to\n>> skip or do, resp., the checks that the former does?\n\n> Well, we don't want to do the initial checks over again every time;\n> we want the is-valid test to be as simple and fast as we can make it.\n> I suppose we could have one function with a boolean flag saying \"this is a\n> recheck\", but I don't find that idea to be any better than the way it is.\n\nSo after looking at the buildfarm results, I think you were on to\nsomething. The initial and recheck conditions actually have to be\na bit different, and the reason is that immediately after GetCachedPlan\nhas produced a plan, it's possible for plansource->is_valid to be false\neven though the derived plan is marked valid. (In the buildfarm, this\nis happening because of CLOBBER_CACHE_ALWAYS or equivalent cache flushes;\nin the real world it'd probably require sinval queue overflow to happen\nwhile building the plan.)\n\nWhat we want in this situation is to go ahead and use the derived plan,\nand then rebuild next time; that's what the pre-existing code did and\nit's really the only reasonable answer. 
It might seem better to go\nback and try to rebuild the plan right away, but that'd be an infinite\nloop in a CLOBBER_CACHE_ALWAYS build. Also, if we fail to use the\nderived plan at all, that amounts to disabling the \"simple expression\"\noptimization as a result of a chance sinval overflow. That's bad from\na performance standpoint and it will also cause regression test output\nchanges (since, as you previously discovered, the simple-expression\npath produces different CONTEXT messages for error cases --- maybe we\nshould change that, but I don't want to be forced into it).\n\nThe existing code structure can't support doing it like that, so we have\nto refactor to make the initial check and the recheck be separate code.\nWorking on a patch for that now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Mar 2020 14:01:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: plan cache overhead on plpgsql expression" } ]
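To round off this thread, the workload that motivates the simple-expression fast path can be sketched as follows (a hypothetical example, not one of the committed regression tests):

```sql
-- Sketch: each evaluation of "s := s + i" is a plpgsql "simple"
-- expression. With the patch discussed above, its CachedPlan stays
-- pinned by the shared simple-eval resource owner for the whole
-- transaction, and validity is rechecked via the fast
-- CachedPlanIsSimplyValid() path instead of going through
-- GetCachedPlan()/ReleaseCachedPlan() on every loop iteration.
CREATE FUNCTION tight_loop(n int) RETURNS int
LANGUAGE plpgsql AS $$
DECLARE
    s int := 0;
BEGIN
    FOR i IN 1 .. n LOOP
        s := s + i;
    END LOOP;
    RETURN s;
END
$$;

SELECT tight_loop(1000000);
```

In such a function the per-iteration plancache checks are a noticeable fraction of the total evaluation cost, which is why the thread focuses on making the recheck as cheap as possible.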
[ { "msg_contents": "Forking old, long thread:\nhttps://www.postgresql.org/message-id/36712441546604286%40sas1-890ba5c2334a.qloud-c.yandex.net\nOn Fri, Jan 04, 2019 at 03:18:06PM +0300, Sergei Kornilov wrote:\n> About reindex invalid indexes - i found one good question in archives [1]: how about toast indexes?\n> I check it now, i am able drop invalid toast index, but i can not drop redundant valid index.\n> Reproduce:\n> session 1: begin; select from test_toast ... for update;\n> session 2: reindex table CONCURRENTLY test_toast ;\n> session 2: interrupt by ctrl+C\n> session 1: commit\n> session 2: reindex table test_toast ;\n> and now we have two toast indexes. DROP INDEX is able to remove only invalid ones. Valid index gives \"ERROR: permission denied: \"pg_toast_16426_index_ccnew\" is a system catalog\"\n> [1]: https://www.postgresql.org/message-id/CAB7nPqT%2B6igqbUb59y04NEgHoBeUGYteuUr89AKnLTFNdB8Hyw%40mail.gmail.com\n\nIt looks like this was never addressed.\n\nI noticed a ccnew toast index sitting around since October - what do I do with it?\n\nts=# DROP INDEX pg_toast.pg_toast_463881620_index_ccnew;\nERROR: permission denied: \"pg_toast_463881620_index_ccnew\" is a system catalog\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 16 Feb 2020 13:08:35 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "reindex concurrently and two toast indexes" }, { "msg_contents": "On Sun, Feb 16, 2020 at 01:08:35PM -0600, Justin Pryzby wrote:\n> Forking old, long thread:\n> https://www.postgresql.org/message-id/36712441546604286%40sas1-890ba5c2334a.qloud-c.yandex.net\n> On Fri, Jan 04, 2019 at 03:18:06PM +0300, Sergei Kornilov wrote:\n>> About reindex invalid indexes - i found one good question in archives [1]: how about toast indexes?\n>> I check it now, i am able drop invalid toast index, but i can not drop redundant valid index.\n>> Reproduce:\n>> session 1: begin; select from test_toast ... 
for update;\n>> session 2: reindex table CONCURRENTLY test_toast ;\n>> session 2: interrupt by ctrl+C\n>> session 1: commit\n>> session 2: reindex table test_toast ;\n>> and now we have two toast indexes. DROP INDEX is able to remove\n>> only invalid ones. Valid index gives \"ERROR: permission denied:\n>> \"pg_toast_16426_index_ccnew\" is a system catalog\" \n>> [1]: https://www.postgresql.org/message-id/CAB7nPqT%2B6igqbUb59y04NEgHoBeUGYteuUr89AKnLTFNdB8Hyw%40mail.gmail.com\n> \n> It looks like this was never addressed.\n\nOn HEAD, this exact scenario leads to the presence of an old toast\nindex pg_toast.pg_toast_*_index_ccold, causing the index to be skipped\non a follow-up concurrent reindex:\n=# reindex table CONCURRENTLY test_toast ;\nWARNING: XX002: cannot reindex invalid index\n\"pg_toast.pg_toast_16385_index_ccold\" concurrently, skipping\nLOCATION: ReindexRelationConcurrently, indexcmds.c:2863\nREINDEX\n\nAnd this toast index can be dropped while it remains invalid:\n=# drop index pg_toast.pg_toast_16385_index_ccold;\nDROP INDEX\n\nI recall testing that stuff for all the interrupts which could be\ntriggered and in this case, this waits at step 5 within\nWaitForLockersMultiple(). Now, in your case you take an extra step\nwith a plain REINDEX, which forces a rebuild of the invalid toast\nindex, making it per se valid, and not droppable.\n\nHmm. 
There could be an argument here for skipping invalid toast\nindexes within reindex_index(), because we are sure about having at\nleast one valid toast index at anytime, and these are not concerned\nwith CIC.\n\nAny thoughts?\n--\nMichael", "msg_date": "Tue, 18 Feb 2020 14:29:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Tue, Feb 18, 2020 at 6:30 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Feb 16, 2020 at 01:08:35PM -0600, Justin Pryzby wrote:\n> > Forking old, long thread:\n> > https://www.postgresql.org/message-id/36712441546604286%40sas1-890ba5c2334a.qloud-c.yandex.net\n> > On Fri, Jan 04, 2019 at 03:18:06PM +0300, Sergei Kornilov wrote:\n> >> About reindex invalid indexes - i found one good question in archives [1]: how about toast indexes?\n> >> I check it now, i am able drop invalid toast index, but i can not drop reduntant valid index.\n> >> Reproduce:\n> >> session 1: begin; select from test_toast ... for update;\n> >> session 2: reindex table CONCURRENTLY test_toast ;\n> >> session 2: interrupt by ctrl+C\n> >> session 1: commit\n> >> session 2: reindex table test_toast ;\n> >> and now we have two toast indexes. DROP INDEX is able to remove\n> >> only invalid ones. 
Valid index gives \"ERROR: permission denied:\n> >> \"pg_toast_16426_index_ccnew\" is a system catalog\"\n> >> [1]: https://www.postgresql.org/message-id/CAB7nPqT%2B6igqbUb59y04NEgHoBeUGYteuUr89AKnLTFNdB8Hyw%40mail.gmail.com\n> >\n> > It looks like this was never addressed.\n>\n> On HEAD, this exact scenario leads to the presence of an old toast\n> index pg_toast.pg_toast_*_index_ccold, causing the index to be skipped\n> on a follow-up concurrent reindex:\n> =# reindex table CONCURRENTLY test_toast ;\n> WARNING: XX002: cannot reindex invalid index\n> \"pg_toast.pg_toast_16385_index_ccold\" concurrently, skipping\n> LOCATION: ReindexRelationConcurrently, indexcmds.c:2863\n> REINDEX\n>\n> And this toast index can be dropped while it remains invalid:\n> =# drop index pg_toast.pg_toast_16385_index_ccold;\n> DROP INDEX\n>\n> I recall testing that stuff for all the interrupts which could be\n> triggered and in this case, this waits at step 5 within\n> WaitForLockersMultiple(). Now, in your case you take an extra step\n> with a plain REINDEX, which forces a rebuild of the invalid toast\n> index, making it per se valid, and not droppable.\n>\n> Hmm. There could be an argument here for skipping invalid toast\n> indexes within reindex_index(), because we are sure about having at\n> least one valid toast index at anytime, and these are not concerned\n> with CIC.\n\nOr even automatically drop any invalid index on toast relation in\nreindex_relation, since those can't be due to a failed CIC?\n\n\n", "msg_date": "Tue, 18 Feb 2020 07:06:25 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Tue, Feb 18, 2020 at 07:06:25AM +0100, Julien Rouhaud wrote:\n> On Tue, Feb 18, 2020 at 6:30 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Hmm. 
There could be an argument here for skipping invalid toast\n>> indexes within reindex_index(), because we are sure about having at\n>> least one valid toast index at anytime, and these are not concerned\n>> with CIC.\n> \n> Or even automatically drop any invalid index on toast relation in\n> reindex_relation, since those can't be due to a failed CIC?\n\nNo, I don't like much outsmarting REINDEX with more index drops than\nit needs to do. And this would not take care of the case with REINDEX\nINDEX done directly on a toast index.\n--\nMichael", "msg_date": "Tue, 18 Feb 2020 15:19:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Tue, Feb 18, 2020 at 7:19 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Feb 18, 2020 at 07:06:25AM +0100, Julien Rouhaud wrote:\n> > On Tue, Feb 18, 2020 at 6:30 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> Hmm. There could be an argument here for skipping invalid toast\n> >> indexes within reindex_index(), because we are sure about having at\n> >> least one valid toast index at anytime, and these are not concerned\n> >> with CIC.\n> >\n> > Or even automatically drop any invalid index on toast relation in\n> > reindex_relation, since those can't be due to a failed CIC?\n>\n> No, I don't like much outsmarting REINDEX with more index drops than\n> it needs to do. And this would not take care of the case with REINDEX\n> INDEX done directly on a toast index.\n\nWell, we could still do both but I get the objection. 
Then skipping\ninvalid toast indexes in reindex_relation looks like the best fix.\n\n\n", "msg_date": "Tue, 18 Feb 2020 07:39:49 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Tue, Feb 18, 2020 at 07:39:49AM +0100, Julien Rouhaud wrote:\n> On Tue, Feb 18, 2020 at 7:19 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Tue, Feb 18, 2020 at 07:06:25AM +0100, Julien Rouhaud wrote:\n> > > On Tue, Feb 18, 2020 at 6:30 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > >> Hmm. There could be an argument here for skipping invalid toast\n> > >> indexes within reindex_index(), because we are sure about having at\n> > >> least one valid toast index at anytime, and these are not concerned\n> > >> with CIC.\n> > >\n> > > Or even automatically drop any invalid index on toast relation in\n> > > reindex_relation, since those can't be due to a failed CIC?\n> >\n> > No, I don't like much outsmarting REINDEX with more index drops than\n> > it needs to do. And this would not take care of the case with REINDEX\n> > INDEX done directly on a toast index.\n>\n> Well, we could still do both but I get the objection. Then skipping\n> invalid toast indexes in reindex_relation looks like the best fix.\n\nPFA a patch to fix the problem using this approach.\n\nI also added isolation tester regression tests. The failure is simulated using\na pg_cancel_backend() on top of pg_stat_activity, using filters on a\nspecifically set application name and the query text to avoid any unwanted\ninteraction. I also added a 1s locking delay, to ensure that even slow/CCA\nmachines can consistently reproduce the failure. 
Maybe that's not enough, or\nmaybe testing this scenario is not worth the extra time.", "msg_date": "Sat, 22 Feb 2020 08:09:24 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Tue, Feb 18, 2020 at 02:29:33PM +0900, Michael Paquier wrote:\n> On Sun, Feb 16, 2020 at 01:08:35PM -0600, Justin Pryzby wrote:\n> > Forking old, long thread:\n> > https://www.postgresql.org/message-id/36712441546604286%40sas1-890ba5c2334a.qloud-c.yandex.net\n> > On Fri, Jan 04, 2019 at 03:18:06PM +0300, Sergei Kornilov wrote:\n> >> About reindex invalid indexes - i found one good question in archives [1]: how about toast indexes?\n> >> I check it now, i am able to drop invalid toast index, but i can not drop redundant valid index.\n> >> Reproduce:\n> >> session 1: begin; select from test_toast ... for update;\n> >> session 2: reindex table CONCURRENTLY test_toast ;\n> >> session 2: interrupt by ctrl+C\n> >> session 1: commit\n> >> session 2: reindex table test_toast ;\n> >> and now we have two toast indexes. DROP INDEX is able to remove\n> >> only invalid ones. 
Valid index gives \"ERROR: permission denied:\n> >> \"pg_toast_16426_index_ccnew\" is a system catalog\" \n> >> [1]: https://www.postgresql.org/message-id/CAB7nPqT%2B6igqbUb59y04NEgHoBeUGYteuUr89AKnLTFNdB8Hyw%40mail.gmail.com\n> > \n> > It looks like this was never addressed.\n> \n> On HEAD, this exact scenario leads to the presence of an old toast\n> index pg_toast.pg_toast_*_index_ccold, causing the index to be skipped\n> on a follow-up concurrent reindex:\n> =# reindex table CONCURRENTLY test_toast ;\n> WARNING: XX002: cannot reindex invalid index\n> \"pg_toast.pg_toast_16385_index_ccold\" concurrently, skipping\n> LOCATION: ReindexRelationConcurrently, indexcmds.c:2863\n> REINDEX\n> \n> And this toast index can be dropped while it remains invalid:\n> =# drop index pg_toast.pg_toast_16385_index_ccold;\n> DROP INDEX\n> \n> I recall testing that stuff for all the interrupts which could be\n> triggered and in this case, this waits at step 5 within\n> WaitForLockersMultiple(). Now, in your case you take an extra step\n> with a plain REINDEX, which forces a rebuild of the invalid toast\n> index, making it per se valid, and not droppable.\n> \n> Hmm. 
There could be an argument here for skipping invalid toast\n> indexes within reindex_index(), because we are sure about having at\n> least one valid toast index at anytime, and these are not concerned\n> with CIC.\n\nJulien sent a patch for that, but here are my ideas (which you are free to\nreject):\n\nCould you require an AEL for that case, or something which will preclude\nreindex table test_toast from working ?\n\nCould you use atomic updates to ensure that exactly one index in an {old,new}\npair is invalid at any given time ?\n\nCould you make the new (invalid) toast index not visible to other transactions?\n\n-- \nJustin Pryzby\n\n\n", "msg_date": "Sat, 22 Feb 2020 05:13:19 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Sat, Feb 22, 2020 at 08:09:24AM +0100, Julien Rouhaud wrote:\n> On Tue, Feb 18, 2020 at 07:39:49AM +0100, Julien Rouhaud wrote:\n> > On Tue, Feb 18, 2020 at 7:19 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Tue, Feb 18, 2020 at 07:06:25AM +0100, Julien Rouhaud wrote:\n> > > > On Tue, Feb 18, 2020 at 6:30 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > >> Hmm. There could be an argument here for skipping invalid toast\n> > > >> indexes within reindex_index(), because we are sure about having at\n> > > >> least one valid toast index at anytime, and these are not concerned\n> > > >> with CIC.\n>\n> PFA a patch to fix the problem using this approach.\n>\n> I also added isolation tester regression tests. The failure is simulated using\n> a pg_cancel_backend() on top of pg_stat_activity, using filters on a\n> specifically set application name and the query text to avoid any unwanted\n> interaction. I also added a 1s locking delay, to ensure that even slow/CCA\n> machines can consistently reproduce the failure. 
Maybe that's not enough, or\n> maybe testing this scenario is not worth the extra time.\n\nSorry, I just realized that I forgot to commit the last changes before sending\nthe patch, so here's the correct v2.", "msg_date": "Sat, 22 Feb 2020 16:06:57 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Sat, Feb 22, 2020 at 04:06:57PM +0100, Julien Rouhaud wrote:\n> Sorry, I just realized that I forgot to commit the last changes before sending\n> the patch, so here's the correct v2.\n\nThanks for the patch.\n\n> +\tif (skipit)\n> +\t{\n> +\t\tereport(NOTICE,\n> +\t\t\t (errmsg(\"skipping invalid index \\\"%s.%s\\\"\",\n> +\t\t\t\t get_namespace_name(get_rel_namespace(indexOid)),\n> +\t\t\t\t get_rel_name(indexOid))));\n\nReindexRelationConcurrently() issues a WARNING when bumping on an\ninvalid index, shouldn't the same log level be used?\n\nEven with this patch, it is possible to reindex an invalid toast index\nwith REINDEX INDEX (with and without CONCURRENTLY), which is the\nproblem I mentioned upthread (Er, actually only for the non-concurrent\ncase as told about reindex_index). 
Shouldn't both cases be prevented\nas well with an ERROR?\n--\nMichael", "msg_date": "Thu, 27 Feb 2020 16:32:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Thu, Feb 27, 2020 at 04:32:11PM +0900, Michael Paquier wrote:\n> On Sat, Feb 22, 2020 at 04:06:57PM +0100, Julien Rouhaud wrote:\n> > Sorry, I just realized that I forgot to commit the last changes before sending\n> > the patch, so here's the correct v2.\n>\n> Thanks for the patch.\n>\n> > +\tif (skipit)\n> > +\t{\n> > +\t\tereport(NOTICE,\n> > +\t\t\t (errmsg(\"skipping invalid index \\\"%s.%s\\\"\",\n> > +\t\t\t\t get_namespace_name(get_rel_namespace(indexOid)),\n> > +\t\t\t\t get_rel_name(indexOid))));\n>\n> ReindexRelationConcurrently() issues a WARNING when bumping on an\n> invalid index, shouldn't the same log level be used?\n\nFor ReindexRelationConcurrently, the index is skipped because the feature isn't\nsupported, thus a warning. In this case that would work, it's just that we\ndon't want to process such indexes, so I used a notice instead.\n\nI'm not opposed to use a warning instead if you prefer. What errcode should be\nused though, ERRCODE_WARNING? ERRCODE_FEATURE_NOT_SUPPORTED doesn't feel\nright.\n\n> Even with this patch, it is possible to reindex an invalid toast index\n> with REINDEX INDEX (with and without CONCURRENTLY), which is the\n> problem I mentioned upthread (Er, actually only for the non-concurrent\n> case as told about reindex_index). 
Shouldn't both cases be prevented\n> as well with an ERROR?\n\nAh indeed, sorry I missed that.\n\nWhile looking at it, I see that invalid indexes seem to leaked when the table\nis dropped, with no way to get rid of them:\n\ns1:\nCREATE TABLE t1(val text);\nCREATE INDEX ON t1 (val);\nBEGIN;\nSELECT * FROM t1 FOR UPDATE;\n\ns2:\nREINDEX TABLE CONCURRENTLY t1;\n[stucked and canceled]\nSELECT indexrelid::regclass, indrelid::regclass FROM pg_index WHERE NOT indisvalid;\n indexrelid | indrelid\n-------------------------------------+-------------------------\n t1_val_idx_ccold | t1\n pg_toast.pg_toast_16385_index_ccold | pg_toast.pg_toast_16385\n(2 rows)\n\ns1:\nROLLBACK;\nDROP TABLE t1;\n\nSELECT indexrelid::regclass, indrelid::regclass FROM pg_index WHERE NOT indisvalid;\n indexrelid | indrelid\n-------------------------------------+----------\n t1_val_idx_ccold | 16385\n pg_toast.pg_toast_16385_index_ccold | 16388\n(2 rows)\n\nREINDEX INDEX t1_val_idx_ccold;\nERROR: XX000: could not open relation with OID 16385\nLOCATION: relation_open, relation.c:62\n\nDROP INDEX t1_val_idx_ccold;\nERROR: XX000: could not open relation with OID 16385\nLOCATION: relation_open, relation.c:62\n\nREINDEX INDEX pg_toast.pg_toast_16385_index_ccold;\nERROR: XX000: could not open relation with OID 16388\nLOCATION: relation_open, relation.c:62\n\nDROP INDEX pg_toast.pg_toast_16385_index_ccold;\nERROR: XX000: could not open relation with OID 16388\nLOCATION: relation_open, relation.c:62\n\nREINDEX DATABASE rjuju;\nREINDEX\n\nSELECT indexrelid::regclass, indrelid::regclass FROM pg_index WHERE NOT indisvalid;\n indexrelid | indrelid\n-------------------------------------+----------\n t1_val_idx_ccold | 16385\n pg_toast.pg_toast_16385_index_ccold | 16388\n(2 rows)\n\nShouldn't DROP TABLE be fixed to also drop invalid indexes?\n\n\n", "msg_date": "Thu, 27 Feb 2020 09:07:35 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and 
two toast indexes" }, { "msg_contents": "On Thu, Feb 27, 2020 at 09:07:35AM +0100, Julien Rouhaud wrote:\n> While looking at it, I see that invalid indexes seem to be leaked when the table\n> is dropped, with no way to get rid of them:\n>\n> Shouldn't DROP TABLE be fixed to also drop invalid indexes?\n\nHmm. The problem here is that I think that we don't have the correct\ninterface to handle the dependency switching between the old\nand new indexes from the start, and 68ac9cf made things better in some\naspects (like non-cancellation and old index drop) but not in others\n(like yours, or even a column drop). changeDependenciesOf/On() have\nbeen added especially for REINDEX CONCURRENTLY, but they are not\nactually able to handle the case we want them to handle: do a switch\nfor both relations within the same scan. It is possible to use three\ntimes the existing routines with a couple of CCIs in-between and what\nI would call a fake placeholder OID to switch all the records cleanly,\nbut it would be actually cleaner to do a single scan of pg_depend and\nswitch the dependencies of both objects at once.\n\nAttached is a draft patch to take care of that problem for HEAD. It\nstill needs a lot of polishing (variable names are not actually old\nor new anymore, etc.) but that's enough to show the idea. 
If a version\n> reaches PG12, we would need to keep around the past routines to avoid\n> an ABI breakage, even if I doubt there are callers of it, but who\n> knows..\n\nOr actually, a more simple solution is to abuse of the two existing\nroutines so as the dependency switch is done the other way around,\nfrom the new index to the old one. That would visibly work because\nthere is no CCI between each scan, and that's faster because the scan\nof pg_depend is done only on the entries in need of an update. I'll\nlook at that again tomorrow, it is late here and I may be missing\nsomething obvious.\n--\nMichael", "msg_date": "Tue, 3 Mar 2020 18:25:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Tue, Mar 03, 2020 at 06:25:51PM +0900, Michael Paquier wrote:\n> Or actually, a more simple solution is to abuse of the two existing\n> routines so as the dependency switch is done the other way around,\n> from the new index to the old one. That would visibly work because\n> there is no CCI between each scan, and that's faster because the scan\n> of pg_depend is done only on the entries in need of an update. I'll\n> look at that again tomorrow, it is late here and I may be missing\n> something obvious.\n\nIt was a good inspiration. I have been torturing this patch today and\nplayed with it by injecting elog(ERROR) calls in the middle of reindex\nconcurrently for all the phases, and checked manually the handling of\nentries in pg_depend for the new and old indexes, and these correctly\nmap. So this is taking care of your problem. Attached is an updated\npatch with an updated comment about the dependency of this code with\nCCIs. 
I'd like to go fix this issue first.\n--\nMichael", "msg_date": "Wed, 4 Mar 2020 14:15:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Wed, Mar 4, 2020 at 6:15 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Mar 03, 2020 at 06:25:51PM +0900, Michael Paquier wrote:\n> > Or actually, a more simple solution is to abuse of the two existing\n> > routines so as the dependency switch is done the other way around,\n> > from the new index to the old one. That would visibly work because\n> > there is no CCI between each scan, and that's faster because the scan\n> > of pg_depend is done only on the entries in need of an update. I'll\n> > look at that again tomorrow, it is late here and I may be missing\n> > something obvious.\n>\n> It was a good inspiration. I have been torturing this patch today and\n> played with it by injecting elog(ERROR) calls in the middle of reindex\n> concurrently for all the phases, and checked manually the handling of\n> entries in pg_depend for the new and old indexes, and these correctly\n> map. So this is taking care of your problem. Attached is an updated\n> patch with an updated comment about the dependency of this code with\n> CCIs. I'd like to go fix this issue first.\n\nThanks for the patch! I started to look at it during the weekend, but\nI got interrupted and unfortunately didn't have time to look at it\nsince.\n\nThe fix looks good to me. I also tried multiple failure scenarios and\nit's unsurprisingly working just fine. Should we add some regression\ntests for that? 
I guess most of it could be borrowed from the patch\nto fix the toast index issue I sent last week.\n\n\n", "msg_date": "Wed, 4 Mar 2020 09:21:45 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Wed, Mar 04, 2020 at 09:21:45AM +0100, Julien Rouhaud wrote:\n> Thanks for the patch! I started to look at it during the weekend, but\n> I got interrupted and unfortunately didn't had time to look at it\n> since.\n\nNo problem, thanks for looking at it. I have looked at it again this\nmorning, and applied it.\n\n> The fix looks good to me. I also tried multiple failure scenario and\n> it's unsurprisingly working just fine. Should we add some regression\n> tests for that? I guess most of it could be borrowed from the patch\n> to fix the toast index issue I sent last week.\n\nI have doubts when it comes to use a strategy based on\npg_cancel_backend() and a match of application_name (see for example\n5ad72ce but I cannot find the associated thread). I think that we\ncould design something more robust here and usable by all tests, with\ntwo things coming into my mind: \n- A new meta-command for isolation tests to be able to cancel a\nsession with PQcancel().\n- Fault injection in the backend.\nFor the case of this thread, the cancellation command would be a better\nmatch.\n--\nMichael", "msg_date": "Thu, 5 Mar 2020 12:53:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Thu, Mar 05, 2020 at 12:53:54PM +0900, Michael Paquier wrote:\n> On Wed, Mar 04, 2020 at 09:21:45AM +0100, Julien Rouhaud wrote:\n> > Thanks for the patch! I started to look at it during the weekend, but\n> > I got interrupted and unfortunately didn't had time to look at it\n> > since.\n>\n> No problem, thanks for looking at it. 
I have looked at it again this\n> morning, and applied it.\n>\n> > The fix looks good to me. I also tried multiple failure scenario and\n> > it's unsurprisingly working just fine. Should we add some regression\n> > tests for that? I guess most of it could be borrowed from the patch\n> > to fix the toast index issue I sent last week.\n>\n> I have doubts when it comes to use a strategy based on\n> pg_cancel_backend() and a match of application_name (see for example\n> 5ad72ce but I cannot find the associated thread). I think that we\n> could design something more robust here and usable by all tests, with\n> two things coming into my mind:\n> - A new meta-command for isolation tests to be able to cancel a\n> session with PQcancel().\n> - Fault injection in the backend.\n> For the case of this thread, the cancellation command would be a better\n> match.\n\nI agree that the approach wasn't quite robust. I'll try to look at adding a\nnew command for isolationtester, but that's probably not something we want to\nput in pg13?\n\nHere's a v3 that takes address the various comments you previously noted, and\nfor which I also removed the regression tests.\n\nNote that while looking at it, I noticed another bug in RIC:\n\n# create table t1(id integer, val text); create index on t1(val);\nCREATE TABLE\n\nCREATE INDEX\n\n# reindex table concurrently t1;\n^CCancel request sent\nERROR: 57014: canceling statement due to user request\nLOCATION: ProcessInterrupts, postgres.c:3171\n\n# select indexrelid::regclass, indrelid::regclass, indexrelid, indrelid from pg_index where not indisvalid;\n indexrelid | indrelid | indexrelid | indrelid\n-------------------------------------+-------------------------+------------+----------\n t1_val_idx_ccold | t1 | 16401 | 16395\n pg_toast.pg_toast_16395_index_ccold | pg_toast.pg_toast_16395 | 16400 | 16398\n(2 rows)\n\n\n# reindex table concurrently t1;\nWARNING: 0A000: cannot reindex invalid index \"public.t1_val_idx_ccold\" concurrently, 
skipping\nLOCATION: ReindexRelationConcurrently, indexcmds.c:2821\nWARNING: XX002: cannot reindex invalid index \"pg_toast.pg_toast_16395_index_ccold\" concurrently, skipping\nLOCATION: ReindexRelationConcurrently, indexcmds.c:2867\nREINDEX\n\n# reindex index concurrently t1_val_idx_ccold;\nREINDEX\n\nThat case is also fixed in this patch.", "msg_date": "Thu, 5 Mar 2020 17:57:07 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Thu, Mar 05, 2020 at 05:57:07PM +0100, Julien Rouhaud wrote:\n> I agree that the approach wasn't quite robust. I'll try to look at adding a\n> new command for isolationtester, but that's probably not something we want to\n> put in pg13?\n\nYes, that's too late.\n\n> Note that while looking at it, I noticed another bug in RIC:\n>\n> [...]\n>\n> # reindex table concurrently t1;\n> WARNING: 0A000: cannot reindex invalid index \"public.t1_val_idx_ccold\" concurrently, skipping\n> LOCATION: ReindexRelationConcurrently, indexcmds.c:2821\n> WARNING: XX002: cannot reindex invalid index \"pg_toast.pg_toast_16395_index_ccold\" concurrently, skipping\n> LOCATION: ReindexRelationConcurrently, indexcmds.c:2867\n> REINDEX \n> # reindex index concurrently t1_val_idx_ccold;\n> REINDEX\n> \n> That case is also fixed in this patch.\n\nThis choice is intentional. The idea about bypassing invalid indexes\nfor table-level REINDEX is that this would lead to a bloat in the\nnumber of relations to handle if multiple runs are failing, leading\nto more and more invalid indexes to handle each time. 
Allowing a\nsingle invalid non-toast index to be reindexed with CONCURRENTLY can\nbe helpful in some cases, like for example a CIC for a unique index\nthat failed and was invalid, where the relation already defined can be\nreused.\n--\nMichael", "msg_date": "Fri, 6 Mar 2020 10:38:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Fri, Mar 06, 2020 at 10:38:44AM +0900, Michael Paquier wrote:\n> On Thu, Mar 05, 2020 at 05:57:07PM +0100, Julien Rouhaud wrote:\n> > I agree that the approach wasn't quite robust. I'll try to look at adding a\n> > new command for isolationtester, but that's probably not something we want to\n> > put in pg13?\n>\n> Yes, that's too late.\n>\n> > Note that while looking at it, I noticed another bug in RIC:\n> >\n> > [...]\n> >\n> > # reindex table concurrently t1;\n> > WARNING: 0A000: cannot reindex invalid index \"public.t1_val_idx_ccold\" concurrently, skipping\n> > LOCATION: ReindexRelationConcurrently, indexcmds.c:2821\n> > WARNING: XX002: cannot reindex invalid index \"pg_toast.pg_toast_16395_index_ccold\" concurrently, skipping\n> > LOCATION: ReindexRelationConcurrently, indexcmds.c:2867\n> > REINDEX\n> > # reindex index concurrently t1_val_idx_ccold;\n> > REINDEX\n> >\n> > That case is also fixed in this patch.\n>\n> This choice is intentional. The idea about bypassing invalid indexes\n> for table-level REINDEX is that this would lead to a bloat in the\n> number of relations to handling if multiple runs are failing, leading\n> to more and more invalid indexes to handle each time. Allowing a\n> single invalid non-toast index to be reindexed with CONCURRENTLY can\n> be helpful in some cases, like for example a CIC for a unique index\n> that failed and was invalid, where the relation already defined can be\n> reused.\n\nAh I see, thanks for the clarification. 
I guess there's room for improvement\nin the comments about that, since the ERRCODE_FEATURE_NOT_SUPPORTED usage is\nquite misleading there.\n\nv4 attached, which doesn't prevent a REINDEX INDEX CONCURRENTLY on any invalid\nnon-TOAST index anymore.", "msg_date": "Fri, 6 Mar 2020 13:36:48 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Thu, Mar 05, 2020 at 12:53:54PM +0900, Michael Paquier wrote:\n> On Wed, Mar 04, 2020 at 09:21:45AM +0100, Julien Rouhaud wrote:\n>\n> > Should we add some regression\n> > tests for that? I guess most of it could be borrowed from the patch\n> > to fix the toast index issue I sent last week.\n>\n> I have doubts when it comes to use a strategy based on\n> pg_cancel_backend() and a match of application_name (see for example\n> 5ad72ce but I cannot find the associated thread). I think that we\n> could design something more robust here and usable by all tests, with\n> two things coming into my mind:\n> - A new meta-command for isolation tests to be able to cancel a\n> session with PQcancel().\n> - Fault injection in the backend.\n> For the case of this thread, the cancellation command would be a better\n> match.\n\nHere's a patch to add an optional \"timeout val\" clause to isolationtester's\nstep definition. When used, isolationtester will actively wait on the query\nrather than continuing with the permutation next step, and will issue a cancel\nonce the defined timeout is reached. 
I also added as a POC the previous\nregression tests for invalid TOAST indexes, updated to use this new\ninfrastructure (which won't pass as long as the original bug for invalid TOAST\nindexes isn't fixed).\n\nI'll park that in the next commitfest, with a v14 target version.", "msg_date": "Fri, 6 Mar 2020 14:15:47 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Add an optional timeout clause to isolationtester step." }, { "msg_contents": "On Fri, Mar 06, 2020 at 02:15:47PM +0100, Julien Rouhaud wrote:\n> Here's a patch to add an optional \"timeout val\" clause to isolationtester's\n> step definition. When used, isolationtester will actively wait on the query\n> rather than continuing with the permutation next step, and will issue a cancel\n> once the defined timeout is reached. I also added as a POC the previous\n> regression tests for invalid TOAST indexes, updated to use this new\n> infrastructure (which won't pass as long as the original bug for invalid TOAST\n> indexes isn't fixed).\n\nOne problem with this approach is that it does not address the stability\nof the test on very slow machines, and there are some of them in the\nbuildfarm. Taking your patch, I can make the test fail by applying\nthe following sleep because the query would be cancelled before some\nof the indexes are marked as invalid:\n--- a/src/backend/commands/indexcmds.c\n+++ b/src/backend/commands/indexcmds.c\n@@ -3046,6 +3046,8 @@ ReindexRelationConcurrently(Oid relationOid, int\noptions)\n CommitTransactionCommand();\n StartTransactionCommand();\n\n+ pg_usleep(100000L * 10); /* 10s */\n+\n /*\n * Phase 2 of REINDEX CONCURRENTLY\n\nAnother problem is that on faster machines the test is slow because of\nthe timeout used. 
What are your thoughts about having a\ncancel meta-command instead?\n--\nMichael", "msg_date": "Sat, 7 Mar 2020 10:41:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "On Sat, Mar 07, 2020 at 10:41:42AM +0900, Michael Paquier wrote:\n> On Fri, Mar 06, 2020 at 02:15:47PM +0100, Julien Rouhaud wrote:\n> > Here's a patch to add an optional \"timeout val\" clause to isolationtester's\n> > step definition. When used, isolationtester will actively wait on the query\n> > rather than continuing with the permutation next step, and will issue a cancel\n> > once the defined timeout is reached. I also added as a POC the previous\n> > regression tests for invalid TOAST indexes, updated to use this new\n> > infrastructure (which won't pass as long as the original bug for invalid TOAST\n> > indexes isn't fixed).\n>\n> One problem with this approach is that it does not address the stability\n> of the test on very slow machines, and there are some of them in the\n> buildfarm. Taking your patch, I can make the test fail by applying\n> the following sleep because the query would be cancelled before some\n> of the indexes are marked as invalid:\n> --- a/src/backend/commands/indexcmds.c\n> +++ b/src/backend/commands/indexcmds.c\n> @@ -3046,6 +3046,8 @@ ReindexRelationConcurrently(Oid relationOid, int\n> options)\n> CommitTransactionCommand();\n> StartTransactionCommand();\n>\n> + pg_usleep(100000L * 10); /* 10s */\n> +\n> /*\n> * Phase 2 of REINDEX CONCURRENTLY\n>\n> Another problem is that on faster machines the test is slow because of\n> the timeout used. What are your thoughts about having a\n> cancel meta-command instead?\n\nLooking at timeouts.spec and e.g. 
a7921f71a3c, it seems that we already chose\nto fix this problem by having a timeout long enough to satisfy the slower\nbuildfarm members, even when running on fast machines, so I assumed that the\nsame approach could be used here.\n\nI agree that the 1s timeout I used is maybe too low, but that's easy enough to\nchange. Another point is that it's possible to have a close behavior without\nthis patch by using a statement_timeout (the active wait does change things\nthough), but the spec files would be more verbose.\n\n\n", "msg_date": "Sat, 7 Mar 2020 07:16:15 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sat, Mar 07, 2020 at 10:41:42AM +0900, Michael Paquier wrote:\n>> On Fri, Mar 06, 2020 at 02:15:47PM +0100, Julien Rouhaud wrote:\n>>> Here's a patch to add an optional \"timeout val\" clause to isolationtester's\n>>> step definition. When used, isolationtester will actively wait on the query\n>>> rather than continuing with the permutation next step, and will issue a cancel\n>>> once the defined timeout is reached.\n\n>> One problem with this approach is that it does not address the stability\n>> of the test on very slow machines, and there are some of them in the\n>> buildfarm.\n\n> Looking at timeouts.spec and e.g. a7921f71a3c, it seems that we already chose\n> to fix this problem by having a timeout long enough to satisfy the slower\n> buildfarm members, even when running on fast machines, so I assumed that the\n> same approach could be used here.\n\nThe arbitrarily-set timeouts that exist in some of the isolation tests\nare horrid kluges that have caused us lots of headaches in the past\nand no doubt will again in the future. 
Aside from occasionally failing\nwhen a machine is particularly overloaded, they cause the tests to\ntake far longer than necessary on decently-fast machines. So ideally\nwe'd get rid of those entirely in favor of some more-dynamic approach.\nAdmittedly, I have no proposal for what that would be. But adding yet\nmore ways to set a (guaranteed-to-be-wrong) timeout seems like the\nwrong direction to be going in. What's the actual need that you're\ntrying to deal with?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 07 Mar 2020 10:46:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "On Sat, Mar 07, 2020 at 10:46:34AM -0500, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Sat, Mar 07, 2020 at 10:41:42AM +0900, Michael Paquier wrote:\n> >> On Fri, Mar 06, 2020 at 02:15:47PM +0100, Julien Rouhaud wrote:\n> >>> Here's a patch to add an optional \"timeout val\" clause to isolationtester's\n> >>> step definition. When used, isolationtester will actively wait on the query\n> >>> rather than continuing with the permutation next step, and will issue a cancel\n> >>> once the defined timeout is reached.\n>\n> >> One problem with this approach is that it does address the stability\n> >> of the test on very slow machines, and there are some of them in the\n> >> buildfarm.\n>\n> > Looking at timeouts.spec and e.g. a7921f71a3c, it seems that we already chose\n> > to fix this problem by having a timeout long enough to statisfy the slower\n> > buildfarm members, even when running on fast machines, so I assumed that the\n> > same approach could be used here.\n>\n> The arbitrarily-set timeouts that exist in some of the isolation tests\n> are horrid kluges that have caused us lots of headaches in the past\n> and no doubt will again in the future. 
Aside from occasionally failing\n> when a machine is particularly overloaded, they cause the tests to\n> take far longer than necessary on decently-fast machines.\n\nYeah, I have no doubt that it has been a pain, and this patch is clearly not a\nbullet-proof solution.\n\n> So ideally\n> we'd get rid of those entirely in favor of some more-dynamic approach.\n> Admittedly, I have no proposal for what that would be.\n\nThe fault injection framework that was previously discussed would cover most of\nthe use cases I can think of, but that's a way bigger project.\n\n> But adding yet\n> more ways to set a (guaranteed-to-be-wrong) timeout seems like the\n> wrong direction to be going in.\n\nFair enough, I'll mark the patch as rejected then.\n\n> What's the actual need that you're trying to deal with?\n\nTesting the correct behavior of non-trivial commands, such as CIC/reindex\nconcurrently, that fail during the execution.\n\n\n", "msg_date": "Sat, 7 Mar 2020 21:53:28 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." 
}, { "msg_contents": "On Sat, Mar 07, 2020 at 04:09:31PM -0500, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Sat, Mar 07, 2020 at 10:46:34AM -0500, Tom Lane wrote:\n> >> What's the actual need that you're trying to deal with?\n>\n> > Testing the correct behavior of non trivial commands, such as CIC/reindex\n> > concurrently, that fails during the execution.\n>\n> Hmm ... don't see how a timeout helps with that?\n\nFor reindex concurrently, a SELECT FOR UPDATE on a different connection can\nensure that the reindex will be stuck at some point, so canceling the command\nafter a long enough timeout reproduces the original faulty behavior.\n\n\n", "msg_date": "Sat, 7 Mar 2020 22:17:09 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sat, Mar 07, 2020 at 04:09:31PM -0500, Tom Lane wrote:\n>> Julien Rouhaud <rjuju123@gmail.com> writes:\n>>> On Sat, Mar 07, 2020 at 10:46:34AM -0500, Tom Lane wrote:\n>>>> What's the actual need that you're trying to deal with?\n\n>>> Testing the correct behavior of non trivial commands, such as CIC/reindex\n>>> concurrently, that fails during the execution.\n\n>> Hmm ... don't see how a timeout helps with that?\n\n> For reindex concurrently, a SELECT FOR UPDATE on a different connection can\n> ensure that the reindex will be stuck at some point, so canceling the command\n> after a long enough timeout reproduces the original faulty behavior.\n\nHmm, seems like a pretty arbitrary (and slow) way to test that. 
I'd\nenvision testing that by setting up a case with an expression index\nwhere the expression is designed to fail at some point partway through\nthe build -- say, with a divide-by-zero triggered by one of the tuples\nto be indexed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 07 Mar 2020 16:23:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "On Sat, Mar 07, 2020 at 04:23:58PM -0500, Tom Lane wrote:\n> Hmm, seems like a pretty arbitrary (and slow) way to test that. I'd\n> envision testing that by setting up a case with an expression index\n> where the expression is designed to fail at some point partway through\n> the build -- say, with a divide-by-zero triggered by one of the tuples\n> to be indexed.\n\nI am not sure about that; I think it's very tricky to get an invalid index\n_ccold after the swap phase with what the existing test facility\nprovides, because the new index is already built at the point where\nthe dependencies are switched so you cannot rely on a failure when\nbuilding the index.  Note also that some tests of CREATE INDEX\nCONCURRENTLY rely on the uniqueness to create invalid index entries\n(division by zero is fine as well).  And, actually, if you rely on\nthat, you can get invalid _ccnew entries easily:\ncreate table aa (a int);\ninsert into aa values (1),(1);\ncreate unique index concurrently aai on aa (a);\nreindex index concurrently aai;\n=# \\d aa\n Table \"public.aa\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\nIndexes:\n \"aai\" UNIQUE, btree (a) INVALID\n \"aai_ccnew\" UNIQUE, btree (a) INVALID\n\nThat's before the dependency swapping is done though... 
With a fault\ninjection facility, it would be possible to test the stability of\nthe operation by enforcing, for example, failures after the start of\neach inner transaction of REINDEX CONCURRENTLY.\n--\nMichael", "msg_date": "Sun, 8 Mar 2020 12:44:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "On Fri, Mar 06, 2020 at 01:36:48PM +0100, Julien Rouhaud wrote:\n> Ah I see, thanks for the clarification.  I guess there's room for improvement\n> in the comments about that, since the ERRCODE_FEATURE_NOT_SUPPORTED usage is\n> quite misleading there.\n> \n> v4 attached, which doesn't prevent a REINDEX INDEX CONCURRENTLY on any invalid\n> non-TOAST index anymore.\n\nThanks.  The position of the error check in reindex_relation() is\ncorrect, but as it opens a relation for the cache lookup let's invent\na new routine in lsyscache.c to grab pg_index.indisvalid.  It is\npossible to make use of this routine with all the other checks:\n- WARNING for REINDEX TABLE (non-concurrent)\n- ERROR for REINDEX INDEX (non-concurrent)\n- ERROR for REINDEX INDEX CONCURRENTLY\n(There is already a WARNING for REINDEX TABLE CONCURRENTLY.)\n\nI did not find the addition of an error check in ReindexIndex()\nconsistent with the existing practice to check the state of the\nrelation reindexed in reindex_index() (for the non-concurrent case)\nand ReindexRelationConcurrently() (for the concurrent case). 
Okay,\nthis leads to the introduction of two new ERROR messages related to\ninvalid toast indexes for the concurrent and the non-concurrent cases\nwhen using REINDEX INDEX instead of one, but having two messages leads\nto something much more consistent with the rest, and all checks remain\ncentralized in the same routines.\n\nFor the index-level operation, issuing a WARNING is not consistent\nwith the existing practice to use an ERROR, which is more adapted as\nthe operation is done on a single index at a time. \n\nFor the check in reindex_relation, it is more consistent to check the\nnamespace of the index instead of the parent relation IMO (the\nprevious patch used \"rel\", which refers to the parent table). This\nhas in practice no consequence though.\n\nIt would have been nice to test this stuff. However, this requires an\ninvalid toast index and we cannot create that except by canceling a\nconcurrent reindex, leading us back to the upthread discussion about\nisolation tests, timeouts and fault injection :/\n\nAny opinions?\n--\nMichael", "msg_date": "Mon, 9 Mar 2020 14:52:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Mon, Mar 09, 2020 at 02:52:31PM +0900, Michael Paquier wrote:\n> On Fri, Mar 06, 2020 at 01:36:48PM +0100, Julien Rouhaud wrote:\n> >\n> > v4 attached, which doesn't prevent a REINDEX INDEX CONCURRENTLY on any invalid\n> > non-TOAST index anymore.\n>\n> Thanks. The position of the error check in reindex_relation() is\n> correct, but as it opens a relation for the cache lookup let's invent\n> a new routine in lsyscache.c to grab pg_index.indisvalid. 
It is\n> possible to make use of this routine with all the other checks:\n> - WARNING for REINDEX TABLE (non-concurrent)\n> - ERROR for REINDEX INDEX (non-concurrent)\n> - ERROR for REINDEX INDEX CONCURRENTLY\n> (There is already a WARNING for REINDEX TABLE CONCURRENTLY.)\n>\n> I did not find the addition of an error check in ReindexIndex()\n> consistent with the existing practice to check the state of the\n> relation reindexed in reindex_index() (for the non-concurrent case)\n> and ReindexRelationConcurrently() (for the concurrent case). Okay,\n> this leads to the introduction of two new ERROR messages related to\n> invalid toast indexes for the concurrent and the non-concurrent cases\n> when using REINDEX INDEX instead of one, but having two messages leads\n> to something much more consistent with the rest, and all checks remain\n> centralized in the same routines.\n\nI wanted to go this way at first but hesitated and finally chose to add fewer\nchecks, so I'm fine with this approach, and patch looks good to me.\n\n> For the index-level operation, issuing a WARNING is not consistent\n> with the existing practice to use an ERROR, which is more adapted as\n> the operation is done on a single index at a time.\n\nAgreed.\n\n> For the check in reindex_relation, it is more consistent to check the\n> namespace of the index instead of the parent relation IMO (the\n> previous patch used \"rel\", which refers to the parent table).  This\n> has in practice no consequence though.\n\nOops yes.\n\n\n> It would have been nice to test this stuff. 
However, this requires an\n> invalid toast index and we cannot create that except by canceling a\n> concurrent reindex, leading us back to the upthread discussion about\n> isolation tests, timeouts and fault injection :/\n\nYes, unfortunately I don't see an acceptable way to add tests for that without\nsome kind of fault injection, so this will have to wait :(\n\n\n", "msg_date": "Mon, 9 Mar 2020 08:04:27 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Sat, Mar 07, 2020 at 10:46:34AM -0500, Tom Lane wrote:\n> The arbitrarily-set timeouts that exist in some of the isolation tests\n> are horrid kluges that have caused us lots of headaches in the past\n> and no doubt will again in the future. Aside from occasionally failing\n> when a machine is particularly overloaded, they cause the tests to\n> take far longer than necessary on decently-fast machines. So ideally\n> we'd get rid of those entirely in favor of some more-dynamic approach.\n> Admittedly, I have no proposal for what that would be. But adding yet\n> more ways to set a (guaranteed-to-be-wrong) timeout seems like the\n> wrong direction to be going in. What's the actual need that you're\n> trying to deal with?\n\nAs a matter of fact, the buildfarm member petalura just reported a\nfailure with the isolation test \"timeouts\", the machine being\nextremely slow:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-08%2011%3A20%3A05\n\ntest timeouts ... FAILED 60330 ms\n[...]\n-step update: DELETE FROM accounts WHERE accountid = 'checking'; <waiting ...>\n-step update: <... 
completed>\n+step update: DELETE FROM accounts WHERE accountid = 'checking';\n ERROR: canceling statement due to statement timeout\n--\nMichael", "msg_date": "Mon, 9 Mar 2020 16:47:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "On Mon, Mar 09, 2020 at 04:47:27PM +0900, Michael Paquier wrote:\n> On Sat, Mar 07, 2020 at 10:46:34AM -0500, Tom Lane wrote:\n> > The arbitrarily-set timeouts that exist in some of the isolation tests\n> > are horrid kluges that have caused us lots of headaches in the past\n> > and no doubt will again in the future. Aside from occasionally failing\n> > when a machine is particularly overloaded, they cause the tests to\n> > take far longer than necessary on decently-fast machines. So ideally\n> > we'd get rid of those entirely in favor of some more-dynamic approach.\n> > Admittedly, I have no proposal for what that would be. But adding yet\n> > more ways to set a (guaranteed-to-be-wrong) timeout seems like the\n> > wrong direction to be going in. What's the actual need that you're\n> > trying to deal with?\n>\n> As a matter of fact, the buildfarm member petalura just reported a\n> failure with the isolation test \"timeouts\", the machine being\n> extremely slow:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-08%2011%3A20%3A05\n>\n> test timeouts ... FAILED 60330 ms\n> [...]\n> -step update: DELETE FROM accounts WHERE accountid = 'checking'; <waiting ...>\n> -step update: <... completed>\n> +step update: DELETE FROM accounts WHERE accountid = 'checking';\n> ERROR: canceling statement due to statement timeout\n\nIndeed. 
I guess we could add some kind of environment variable facility in\nisolationtester to let slow machine owners put a way bigger timeout without\nmaking the test super slow for everyone else, but that seems overkill for just\none test, and given the other thread about deploying the REL_11 build-farm client,\nthat wouldn't be an immediate fix either.\n\n\n", "msg_date": "Mon, 9 Mar 2020 09:39:59 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "Hi,\n\nOn 2020-03-07 22:17:09 +0100, Julien Rouhaud wrote:\n> For reindex concurrently, a SELECT FOR UPDATE on a different connection can\n> ensure that the reindex will be stuck at some point, so canceling the command\n> after a long enough timeout reproduces the original faulty behavior.\n\nThat kind of thing can already be done using statement_timeout or\nlock_timeout, no?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Mar 2020 15:15:58 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "On Mon, Mar 09, 2020 at 03:15:58PM -0700, Andres Freund wrote:\n> On 2020-03-07 22:17:09 +0100, Julien Rouhaud wrote:\n>> For reindex concurrently, a SELECT FOR UPDATE on a different connection can\n>> ensure that the reindex will be stuck at some point, so canceling the command\n>> after a long enough timeout reproduces the original faulty behavior.\n> \n> That kind of thing can already be done using statement_timeout or\n> lock_timeout, no?\n\nYep, still that's not something I would recommend to commit in the\ntree as that's a double-edged sword as you already know. 
For slower\nmachines, you need a statement_timeout large enough so as you make\nsure that the state you want the query to wait for is reached, which\nhas a cost on all other faster machines as it makes the tests slower.\n--\nMichael", "msg_date": "Tue, 10 Mar 2020 11:14:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Mar 09, 2020 at 03:15:58PM -0700, Andres Freund wrote:\n>> That kind of thing can already be done using statement_timeout or\n>> lock_timeout, no?\n\n> Yep, still that's not something I would recommend to commit in the\n> tree as that's a double-edged sword as you already know. For slower\n> machines, you need a statement_timeout large enough so as you make\n> sure that the state you want the query to wait for is reached, which\n> has a cost on all other faster machines as it makes the tests slower.\n\nIt strikes me to wonder whether we could improve matters by teaching\nisolationtester to watch for particular values in a connected backend's\npg_stat_activity.wait_event_type/wait_event columns. Those columns\ndidn't exist when isolationtester was designed, IIRC, so it's not\nsurprising that they're not used in the current design. But we could\nuse them perhaps to detect that a backend has arrived at some state\nthat's not a heavyweight-lock-wait state.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Mar 2020 22:32:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "On Mon, Mar 09, 2020 at 10:32:27PM -0400, Tom Lane wrote:\n> It strikes me to wonder whether we could improve matters by teaching\n> isolationtester to watch for particular values in a connected backend's\n> pg_stat_activity.wait_event_type/wait_event columns. 
Those columns\n> didn't exist when isolationtester was designed, IIRC, so it's not\n> surprising that they're not used in the current design. But we could\n> use them perhaps to detect that a backend has arrived at some state\n> that's not a heavyweight-lock-wait state.\n\nInteresting idea. So that would be basically an equivalent of\nPostgresNode::poll_query_until but for the isolation tester? In short\nwe gain a meta-command that runs a SELECT query that waits until the\nquery defined in the command returns true. The polling interval may\nbe tricky to set though. If set too low it would consume resources\nfor nothing, and if set too large it would make the tests using this\nmeta-command slower than they actually need to be. Perhaps something\nlike 100ms may be fine..\n--\nMichael", "msg_date": "Tue, 10 Mar 2020 11:55:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "On Mon, Mar 09, 2020 at 08:04:27AM +0100, Julien Rouhaud wrote:\n> On Mon, Mar 09, 2020 at 02:52:31PM +0900, Michael Paquier wrote:\n>> For the index-level operation, issuing a WARNING is not consistent\n>> with the existing practice to use an ERROR, which is more adapted as\n>> the operation is done on a single index at a time.\n> \n> Agreed.\n\nThanks for checking the patch.\n\n>> It would have been nice to test this stuff. 
However, this requires an\n>> invalid toast index and we cannot create that except by canceling a\n>> concurrent reindex, leading us back to the upthread discussion about\n>> isolation tests, timeouts and fault injection :/\n> \n> Yes, unfortunately I don't see an acceptable way to add tests for that without\n> some kind of fault injection, so this will have to wait :(\n\nLet's discuss that separately.\n\nI have also been reviewing the isolation test you have added upthread\nabout the dependency handling of invalid indexes, and one thing that\nwe cannot really do is attempting to do a reindex at index or\ntable-level with invalid toast indexes as this leads to unstable ERROR\nor WARNING messages.  But at least one thing we can do is to extend\nthe query you sent directly so that it exposes the toast relation name\nfiltered with regexp_replace().  This gives us a stable output, and\nthis way the test makes sure that the query cancellation happened\nafter the dependencies are swapped, and not at build or validation\ntime (indisvalid got appended to the end of the output): \n+pg_toast.pg_toast_<oid>_index_ccoldf\n+pg_toast.pg_toast_<oid>_indext\n\nPlease feel free to see the attached for reference, that's not\nsomething for commit in upstream, but I am going to keep that around\nin my own plugin tree.\n--\nMichael", "msg_date": "Tue, 10 Mar 2020 12:09:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Mar 09, 2020 at 10:32:27PM -0400, Tom Lane wrote:\n>> It strikes me to wonder whether we could improve matters by teaching\n>> isolationtester to watch for particular values in a connected backend's\n>> pg_stat_activity.wait_event_type/wait_event columns.  Those columns\n>> didn't exist when isolationtester was designed, IIRC, so it's not\n>> surprising that they're not used in the current design. 
But we could\n>> use them perhaps to detect that a backend has arrived at some state\n>> that's not a heavyweight-lock-wait state.\n\n> Interesting idea. So that would be basically an equivalent of\n> PostgresNode::poll_query_until but for the isolation tester?\n\nNo, more like the existing isolationtester wait query, which watches\nfor something being blocked on a heavyweight lock. Right now, that\none depends on a bespoke function pg_isolation_test_session_is_blocked(),\nbut it used to be a query on pg_stat_activity/pg_locks.\n\n> In short\n> we gain a meta-command that runs a SELECT query that waits until the\n> query defined in the command returns true. The polling interval may\n> be tricky to set though.\n\nI think it'd be just the same as the polling interval for the existing\nwait query. We'd have to have some way to mark a script step to say\nwhat to check to decide that it's blocked ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Mar 2020 00:09:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "On Tue, Mar 10, 2020 at 12:09:42PM +0900, Michael Paquier wrote:\n> On Mon, Mar 09, 2020 at 08:04:27AM +0100, Julien Rouhaud wrote:\n>> Agreed.\n> \n> Thanks for checking the patch.\n\nAnd applied as 61d7c7b. Regarding the isolation tests, let's\nbrainstorm on what we can do, but I am afraid that it is too late for\n13. 
\n--\nMichael", "msg_date": "Tue, 10 Mar 2020 17:01:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: reindex concurrently and two toast indexes" }, { "msg_contents": "On Tue, Mar 10, 2020 at 12:09:12AM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Mon, Mar 09, 2020 at 10:32:27PM -0400, Tom Lane wrote:\n> >> It strikes me to wonder whether we could improve matters by teaching\n> >> isolationtester to watch for particular values in a connected backend's\n> >> pg_stat_activity.wait_event_type/wait_event columns. Those columns\n> >> didn't exist when isolationtester was designed, IIRC, so it's not\n> >> surprising that they're not used in the current design. But we could\n> >> use them perhaps to detect that a backend has arrived at some state\n> >> that's not a heavyweight-lock-wait state.\n>\n> > Interesting idea. So that would be basically an equivalent of\n> > PostgresNode::poll_query_until but for the isolation tester?\n>\n> No, more like the existing isolationtester wait query, which watches\n> for something being blocked on a heavyweight lock. Right now, that\n> one depends on a bespoke function pg_isolation_test_session_is_blocked(),\n> but it used to be a query on pg_stat_activity/pg_locks.\n\nAh interesting indeed!\n\n> > In short\n> > we gain a meta-command that runs a SELECT query that waits until the\n> > query defined in the command returns true. The polling interval may\n> > be tricky to set though.\n>\n> I think it'd be just the same as the polling interval for the existing\n> wait query. We'd have to have some way to mark a script step to say\n> what to check to decide that it's blocked ...\n\nSo basically we could just change pg_isolation_test_session_is_blocked() to\nalso return the wait_event_type and wait_event, and adding something like\n\nstep \"<name>\" { SQL } [ cancel on \"<wait_event_type>\" \"<wait_event>\" ]\n\nto the step definition should be enough. 
I'm attaching a POC patch for that.\nOn my laptop, the full test now completes in about 400ms.\n\nFTR the REINDEX TABLE CONCURRENTLY case is eventually locked on a virtualxid,\nI'm not sure if that could lead to too early cancellation.", "msg_date": "Tue, 10 Mar 2020 14:53:36 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "On Tue, Mar 10, 2020 at 02:53:36PM +0100, Julien Rouhaud wrote:\n> So basically we could just change pg_isolation_test_session_is_blocked() to\n> also return the wait_event_type and wait_event, and adding something like\n\nHmm.  I think that Tom has in mind the reasons behind 511540d here.\n\n> step \"<name>\" { SQL } [ cancel on \"<wait_event_type>\" \"<wait_event>\" ]\n> \n> to the step definition should be enough.  I'm attaching a POC patch for that.\n> On my laptop, the full test now completes in about 400ms.\n\nNot much of a fan of that per the lack of flexibility, but we have a\nsingle function to avoid a huge performance impact when using\nCLOBBER_CACHE_ALWAYS, so we cannot really use a SQL-based logic\neither...\n\n> FTR the REINDEX TABLE CONCURRENTLY case is eventually locked on a virtualxid,\n> I'm not sure if that could lead to too early cancellation.\n\nWaitForLockersMultiple() is called three times in this case, but your\ntest case is waiting on a lock to be released for the old index which\nREINDEX CONCURRENTLY would like to drop at the beginning of step 5, so\nthis should work reliably here.\n\n> +\tTupleDescInitEntry(tupdesc, (AttrNumber) 3, \"wait_even\",\n> +\t\t\t\t\t   TEXTOID, -1, 0);\nGuess who is missing a 't' here.\n\npg_isolation_test_session_is_blocked() is not documented and it is\nonly used internally in the isolation test suite, so breaking its\ncompatibility should be fine in practice.. 
Now you are actually\nchanging it so as we get a more complex state of the blocked\nsession, so I think that we should use a different function name, and\na different function. Like pg_isolation_test_session_state?\n--\nMichael", "msg_date": "Wed, 11 Mar 2020 13:10:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Mar 10, 2020 at 02:53:36PM +0100, Julien Rouhaud wrote:\n>> So basically we could just change pg_isolation_test_session_is_blocked() to\n>> also return the wait_event_type and wait_event, and adding something like\n\n> Hmm. I think that Tom has in mind the reasons behind 511540d here.\n\nYeah, that history suggests that we need to be very protective of the\nperformance of the wait-checking query, especially in CLOBBER_CACHE_ALWAYS\nbuilds. That being the case, I'm hesitant to consider changing the test\nfunction to return a tuple. That'll add quite a lot of overhead due to\nthe cache lookups involved, or so my gut says.\n\nI'm also finding the proposed semantics (issue a cancel if wait state X\nis reached) to be odd and special-purpose. I was envisioning something\nmore like \"if wait state X is reached, consider the session to be blocked,\nthe same as if it had reached a heavyweight-lock wait\". Then\nisolationtester would move on to issue another step, which is where\nI'd envision putting the cancel for that particular test usage.\n\nSo that idea leads to thinking that the wait-state specification is an\ninput to pg_isolation_test_session_is_blocked, not an output. 
We could\nre-use Julien's ideas about the isolation spec syntax by making it be,\nroughly,\n\nstep \"<name>\" { <SQL> } [ blocked if \"<wait_event_type>\" \"<wait_event>\" ]\n\nand then those items would need to be passed as parameters of the prepared\nquery.\n\nOr maybe we should use two different prepared queries depending on whether\nthere's a BLOCKED IF spec. We probably don't need lock-wait detection\nif we're expecting a wait-state-based block, so maybe we should invent a\nseparate backend function \"is this process waiting with this type of wait\nstate\" and use that to check the state of a step that has this type of\nannotation.\n\nJust eyeing the proposed test case, I'm wondering whether this will\nactually be sufficiently fine-grained. It seems like \"REINDEX has\nreached a wait on a virtual XID\" is not really all that specific;\nit could match on other situations, such as blocking on a concurrent\ntuple update. Maybe it's okay given the restrictive context that\nwe don't expect anything to be happening that the isolation test\ndidn't ask for.\n\nI'd like to see an attempt to rewrite some of the existing\ntimeout-dependent test cases to use this facility instead of\nlong timeouts. If we could get rid of the timeouts in the\ndeadlock tests, that'd go a long way towards showing that this\nidea is actually any good.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Mar 2020 16:33:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." 
}, { "msg_contents": "On 2020-Mar-11, Tom Lane wrote:\n\n> We could re-use Julien's ideas about the isolation spec syntax by\n> making it be, roughly,\n> \n> step \"<name>\" { <SQL> } [ blocked if \"<wait_event_type>\" \"<wait_event>\" ]\n> \n> and then those items would need to be passed as parameters of the prepared\n> query.\n\nI think for test readability's sake, it'd be better to put the BLOCKED\nIF clause ahead of the SQL, so you can write it in the same line and let\nthe SQL flow to the next one:\n\nSTEP \"long_select\" BLOCKED IF \"lwlock\" \"ClogControlLock\"\n  { select foo from pg_class where ... some more long clauses ... }\n\notherwise I think a step would require more lines to write.\n\n> I'd like to see an attempt to rewrite some of the existing\n> timeout-dependent test cases to use this facility instead of\n> long timeouts.  If we could get rid of the timeouts in the\n> deadlock tests, that'd go a long way towards showing that this\n> idea is actually any good.\n\n+1.  Those long timeouts are annoying enough that infrastructure to make\na run shorter in normal circumstances might be sufficient justification\nfor this patch ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 11 Mar 2020 17:52:54 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." 
}, { "msg_contents": "On Wed, Mar 11, 2020 at 05:52:54PM -0300, Alvaro Herrera wrote:\n> On 2020-Mar-11, Tom Lane wrote:\n>> We could re-use Julien's ideas about the isolation spec syntax by\n>> making it be, roughly,\n>> \n>> step \"<name>\" { <SQL> } [ blocked if \"<wait_event_type>\" \"<wait_event>\" ]\n>> \n>> and then those items would need to be passed as parameters of the prepared\n>> query.\n> \n> I think for test readability's sake, it'd be better to put the BLOCKED\n> IF clause ahead of the SQL, so you can write it in the same line and let\n> the SQL flow to the next one:\n> \n> STEP \"long_select\" BLOCKED IF \"lwlock\" \"ClogControlLock\"\n> { select foo from pg_class where ... some more long clauses ... }\n> \n> otherwise I think a step would require more lines to write.\n\nI prefer this version.\n\n>> I'd like to see an attempt to rewrite some of the existing\n>> timeout-dependent test cases to use this facility instead of\n>> long timeouts. If we could get rid of the timeouts in the\n>> deadlock tests, that'd go a long way towards showing that this\n>> idea is actually any good.\n> \n> +1. Those long timeouts are annoying enough that infrastructure to make\n> a run shorter in normal circumstances might be sufficient justification\n> for this patch ...\n\n+1. A patch does not seem to be that complicated. Now isn't it too\nlate for v13?\n--\nMichael", "msg_date": "Thu, 12 Mar 2020 16:49:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> +1. A patch does not seem to be that complicated. Now isn't it too\n> late for v13?\n\nI think we've generally given new tests more slack than new features so\nfar as schedule goes. 
If the patch ends up being complicated/invasive,\nI might vote to hold it for v14, but let's see it first.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Mar 2020 09:48:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "On Wed, Mar 11, 2020 at 05:52:54PM -0300, Alvaro Herrera wrote:\n> On 2020-Mar-11, Tom Lane wrote:\n>\n> > We could re-use Julien's ideas about the isolation spec syntax by\n> > making it be, roughly,\n> >\n> > step \"<name>\" { <SQL> } [ blocked if \"<wait_event_type>\" \"<wait_event>\" ]\n> >\n> > and then those items would need to be passed as parameters of the prepared\n> > query.\n>\n> I think for test readability's sake, it'd be better to put the BLOCKED\n> IF clause ahead of the SQL, so you can write it in the same line and let\n> the SQL flow to the next one:\n>\n> STEP \"long_select\" BLOCKED IF \"lwlock\" \"ClogControlLock\"\n> { select foo from pg_class where ... some more long clauses ... }\n>\n> otherwise I think a step would require more lines to write.\n>\n> > I'd like to see an attempt to rewrite some of the existing\n> > timeout-dependent test cases to use this facility instead of\n> > long timeouts. If we could get rid of the timeouts in the\n> > deadlock tests, that'd go a long way towards showing that this\n> > idea is actually any good.\n>\n> +1. Those long timeouts are annoying enough that infrastructure to make\n> a run shorter in normal circumstances might be sufficient justification\n> for this patch ...\n\n\nI'm not familiar with those test so I'm probably missing something, but looks\nlike all isolation tests that setup a timeout are doing so to test server side\nfeatures (deadlock detection, statement and lock timeout). 
I'm not sure how\nadding a client-side facility to detect locks earlier is going to help reduce\nthe server side timeouts?\n\nFor the REINDEX CONCURRENTLY failure test, the problem that needs to be solved\nisn't detecting that the command is blocked as it's already getting blocked on\na heavyweight lock, but being able to reliably cancel a specific query as early\nas possible, which AFAICS isn't possible with current isolation tester:\n\n- either we reliably cancel the query using a statement timeout, but we'll make\n  it slow for everyone\n- or we send a blind pg_cancel_backend() hoping that we don't catch\n  anything else (and also make it slower than required to make sure that it's\n  not canceled too early)\n\nSo we would actually only need something like this to make it work:\n\nstep \"<name>\" [ CANCEL IF BLOCKED ] { <SQL> }\n\n\n", "msg_date": "Fri, 13 Mar 2020 10:04:50 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Wed, Mar 11, 2020 at 05:52:54PM -0300, Alvaro Herrera wrote:\n>> On 2020-Mar-11, Tom Lane wrote:\n>>> I'd like to see an attempt to rewrite some of the existing\n>>> timeout-dependent test cases to use this facility instead of\n>>> long timeouts.\n\n>> +1.  Those long timeouts are annoying enough that infrastructure to make\n>> a run shorter in normal circumstances might be sufficient justification\n>> for this patch ...\n\n> I'm not familiar with those test so I'm probably missing something, but looks\n> like all isolation tests that setup a timeout are doing so to test server side\n> features (deadlock detection, statement and lock timeout). 
I'm not sure how\n> adding a client-side facility to detect locks earlier is going to help reduce\n> the server side timeouts?\n\nThe point is that those timeouts have to be set long enough for even a\nvery slow machine to reach a desired state before the timeout happens;\non faster machines the test is just uselessly sleeping for a long time,\nbecause of the fixed timeout.  My thought was that maybe the tests could\nbe recast as \"watch for session to reach $expected_state and then do\nthe next thing\", allowing them to be automatically adaptive to the\nmachine's speed.  This might require some rather subtle test redesign\nand/or addition of more infrastructure (to allow recognition of the\ndesired state and/or taking an appropriate next action).  I'm prepared\nto believe that not much can be done about timeouts.spec in particular,\nbut it seems to me that the long delays in the deadlock tests are not\ninherent in what we need to test.\n\n> For the REINDEX CONCURRENTLY failure test, the problem that needs to be solved\n> isn't detecting that the command is blocked as it's already getting blocked on\n> a heavyweight lock, but being able to reliably cancel a specific query as early\n> as possible, which AFAICS isn't possible with current isolation tester:\n\nRight, it's the same thing of needing to wait till the backend has reached\na particular state before you do the next thing.\n\n> So we would actually only need something like this to make it work:\n> step \"<name>\" [ CANCEL IF BLOCKED ] { <SQL> }\n\nI continue to resist the idea of hard-wiring this feature to query cancel\nas the action-to-take.  That will more or less guarantee that it's not\ngood for anything but this one test case.  I think that the feature\nshould have the behavior of \"treat this step as blocked once it's reached\nstate X\", and then you make the next step in the permutation be one that\nissues a query cancel. 
(Possibly, using pg_stat_activity and\npg_cancel_backend for that will be painful enough that we'd want to\ninvent separate script syntax that says \"send a cancel to session X\".\nBut that's a separate discussion.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Mar 2020 10:12:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "On Fri, Mar 13, 2020 at 10:12:20AM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n>\n> > I'm not familiar with those test so I'm probably missing something, but looks\n> > like all isolation tests that setup a timeout are doing so to test server side\n> > features (deadlock detection, statement and lock timeout). I'm not sure how\n> > adding a client-side facility to detect locks earlier is going to help reducing\n> > the server side timeouts?\n>\n> The point is that those timeouts have to be set long enough for even a\n> very slow machine to reach a desired state before the timeout happens;\n> on faster machines the test is just uselessly sleeping for a long time,\n> because of the fixed timeout. My thought was that maybe the tests could\n> be recast as \"watch for session to reach $expected_state and then do\n> the next thing\", allowing them to be automatically adaptive to the\n> machine's speed. This might require some rather subtle test redesign\n> and/or addition of more infrastructure (to allow recognition of the\n> desired state and/or taking an appropriate next action). I'm prepared\n> to believe that not much can be done about timeouts.spec in particular,\n> but it seems to me that the long delays in the deadlock tests are not\n> inherent in what we need to test.\n\n\nAh I see. 
I'll try to see if that could help the deadlock tests, but for sure\nsuch a feature would allow us to get rid of the two pg_sleep(5) in\ntuplelock-update.\n\nIt seems that for all the possibly interesting cases, what we want to wait on\nis a heavyweight lock, which is already what isolationtester detects.  Maybe\nwe could simply implement something like\n\nstep \"<name>\" [ WAIT UNTIL BLOCKED ] { <SQL> }\n\nwithout any change to the blocking detection function?\n\n\n> > For the REINDEX CONCURRENTLY failure test, the problem that needs to be solved\n> > isn't detecting that the command is blocked as it's already getting blocked on\n> > a heavyweight lock, but being able to reliably cancel a specific query as early\n> > as possible, which AFAICS isn't possible with current isolation tester:\n>\n> Right, it's the same thing of needing to wait till the backend has reached\n> a particular state before you do the next thing.\n>\n> > So we would actually only need something like this to make it work:\n> > step \"<name>\" [ CANCEL IF BLOCKED ] { <SQL> }\n>\n> I continue to resist the idea of hard-wiring this feature to query cancel\n> as the action-to-take.  That will more or less guarantee that it's not\n> good for anything but this one test case.  I think that the feature\n> should have the behavior of \"treat this step as blocked once it's reached\n> state X\", and then you make the next step in the permutation be one that\n> issues a query cancel.  (Possibly, using pg_stat_activity and\n> pg_cancel_backend for that will be painful enough that we'd want to\n> invent separate script syntax that says \"send a cancel to session X\".\n> But that's a separate discussion.)\n\n\nI agree. 
A new step option to kill a session rather than executing sql would\ngo perfectly with the above new active-wait-for-blocking-state feature.\n\n\n", "msg_date": "Fri, 13 Mar 2020 17:25:20 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> It seems that for all the possibly interesting cases, what we want to wait on\n> is an heavyweight lock, which is already what isolationtester detects. Maybe\n> we could simply implement something like\n\n> step \"<name>\" [ WAIT UNTIL BLOCKED ] { <SQL> }\n\n> without any change to the blocking detection function?\n\nUm, isn't that the existing built-in behavior?\n\nI could actually imagine some uses for the reverse option, *don't* wait\nfor it to become blocked but just immediately continue with issuing\nthe next step.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Mar 2020 12:58:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add an optional timeout clause to isolationtester step." } ]
[ { "msg_contents": "Hello,\n\n1. I was told that M$ SQLServer provides huge performance deltas over \nPostgreSQL when dealing with index-unaligned queries :\ncreate index i on t (a,b, c);\nselect * from t where b=... and c=...;\nColumnar storage has been tried by various companies, CitusData, \nEnterpriseDB, 2ndQuadrant, Fujitsu Enterprise Postgres. It has been \ndiscussed quite a lot, last thread that I was able to find being in \n2017, \nhttps://www.postgresql.org/message-id/CAJrrPGfaC7WC9NK6PTTy6YN-NN%2BhCy8xOLAh2doYhVg5d6HsAA%40mail.gmail.com \nwhere Fujitsu's patch made it quite far.\nWhat is the status on such a storage manager extension interface ?\n\n2. What do you think of adding a new syntax : 'from t join t2 using \n(fk_constraint)' ? And further graph algorithms to make automatic joins ?\nBoth 'natural join' and 'using (column_name)' are useless when the \ncolumns are not the same in source and destination.\nPlus it is often the case that the fk_constraints are over numerous \ncolumns, even though this is usually advised against. But when this case \nhappens there will be a significant writing speedup.\nI have been bothered by this to the point that I developed a \ngraphical-query-builder plugin for pgModeler,\nhttps://github.com/maxzor/plugins/tree/master/graphicalquerybuilder#automatic-join-mode \n,\nbut I believe such a syntax would be much better in the core!\n\n3. What is the status of making the internal parser of PostgreSQL less \ncoupled to the core, and easier to cherry-pick from outside?\nIt would be great to incorporate it into companion projects : pgAdmin4, \npgModeler, pgFormatter...\n\nBR, Maxime Chambonnet", "msg_date": "Sun, 16 Feb 2020 22:38:29 +0100", "msg_from": "maxzor <maxzor@maxzor.eu>", "msg_from_op": true, "msg_subject": "1 Status of vertical clustered index - 2 Join using (fk_constraint)\n suggestion - 3 Status of pgsql's parser autonomization" }, { "msg_contents": "On Sun, Feb 16, 2020 at 10:38:29PM +0100, maxzor wrote:\n>Hello,\n>\n>1. I was told that M$ SQLServer provides huge performance deltas over \n>PostgreSQL when dealing with index-unaligned queries :\n>create index i on t (a,b, c);\n>select * from t where b=... 
and c=...;\n\nPerhaps index-only scans might help here, but that generally does not\nwork for \"SELECT *\" queries.\n\n>Columnar storage has been tried by various companies, CitusData, \n>EnterpriseDB, 2ndQuadrant, Fujitsu Enterprise Postgres. It has been \n>discussed quite a lot, last thread that I was able to find being in \n>2017, https://www.postgresql.org/message-id/CAJrrPGfaC7WC9NK6PTTy6YN-NN%2BhCy8xOLAh2doYhVg5d6HsAA%40mail.gmail.com \n>where Fujitsu's patch made it quite far.\n>What is the status on such a storage manager extension interface ?\n>\n\nI think you're looking for threads about zheap and (especially)\nzedstore. Those are two \"storage manager\" implementations various people\nare currently working on. Neither of those is likely to make it into\npg13, though :-(\n\n>2. What do you think of adding a new syntax : 'from t join t2 using \n>(fk_constraint)' ? And further graph algorithms to make automatic \n>joins ?\n>Both 'natural join' and 'using (column_name)' are useless when the \n>columns are not the same in source and destination.\n>Plus it is often the case that the fk_constraints are over numerous \n>columns, even though this is usually advised against. But when this \n>case happens there will be a significant writing speedup.\n\nI'm not really sure what's the point / benefit here. Initially it seemed\nyou simply propose a syntax saying \"do a join using the columns in the\nFK constraint\" but it's unclear to me how this implies any writing\nspeedup? \n\n>I have been bothered by this to the point that I developed a \n>graphical-query-builder plugin for pgModeler,\n>https://github.com/maxzor/plugins/tree/master/graphicalquerybuilder#automatic-join-mode \n>,\n>but I believe such a syntax would be much better in the core!\n>\n\nHm, maybe.\n\n>3. 
What is the status of making the internal parser of PostgreSQL less \n>coupled to the core, and easier to cherry-pick from outside?\n>It would be great to incorporate it into companion projects : \n>pgAdmin4, pgModeler, pgFormatter...\n>\n\nI have no idea what you mean by \"less coupled\" here. What are the\nrequirements / use cases you're thinking about?\n\n\nFWIW I think it's a pretty bad idea to post questions about three very\ndifferent topics into a single pgsql-hackers thread. That'll just lead\nto a lot of confusion.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 17 Feb 2020 02:40:46 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: 1 Status of vertical clustered index - 2 Join using\n (fk_constraint) suggestion - 3 Status of pgsql's parser autonomization" }, { "msg_contents": "\n> ...\nThank you, I will look into it!\n> I'm not really sure what's the point / benefit here. Initially it seemed\n> you simply propose a syntax saying \"do a join using the columns in the\n> FK constraint\" but it's unclear to me how this implies any writing\n> speedup?\nThis is exactly what I mean. If you know the fk_constraint (usually \nthere are simple patterns) you are all set, or else... you could use a \nfunction fk(t, t2) to look up pg_constraint, or even better / more bloat, \nhave psql do autocompletion for you? Corner case: multiple fks between t \nand t2.\n'from t join t2 using(fk(t,t2))'\n> I have no idea what you mean by \"less coupled\" here. What are the\n> requirements / use cases you're thinking about?\nA lot of external tools do query parsing or validation, I wish they \ncould use the official parser as a dependency. 
AFAIK it is currently not \nthe case and everyone is re-implementing its subpar parser.\n> FWIW I think it's pretty bad idea to post questions about three very\n> different topics into a single pgsql-hackers thread. That'll just lead\n> to a lot of confusion.\n\nRight... I figured as a newcomer I would not spam the mailing list.\nBest regards\n\n\n\n", "msg_date": "Mon, 17 Feb 2020 02:56:52 +0100", "msg_from": "maxzor <maxzor@maxzor.eu>", "msg_from_op": true, "msg_subject": "Re: 1 Status of vertical clustered index - 2 Join using\n (fk_constraint) suggestion - 3 Status of pgsql's parser autonomization" }, { "msg_contents": "> 3. What is the status of making the internal parser of PostgreSQL less\ncoupled to the core, and easier to cherry-pick from outside?\n\nimho:\nOne of the current solutions is: https://github.com/lfittl/libpg_query C\nlibrary\n\n\"C library for accessing the PostgreSQL parser outside of the server.\n\nThis library uses the actual PostgreSQL server source to parse SQL queries\nand return the internal PostgreSQL parse tree.Note that this is mostly\nintended as a base library for\n\n- pg_query <https://github.com/lfittl/pg_query> (Ruby),\n\n- pg_query.go <https://github.com/lfittl/pg_query.go> (Go),\n\n- pg-query-parser <https://github.com/zhm/pg-query-parser> (Node),\n\n- psqlparse <https://github.com/alculquicondor/psqlparse> (Python) and\n\n- pglast <https://pypi.org/project/pglast/> (Python 3).\"\n\n\"\n\nBest,\n Imre\n\n\n\nmaxzor <maxzor@maxzor.eu> ezt írta (időpont: 2020. febr. 16., V, 22:38):\n\n> Hello,\n>\n> 1. I was told that M$ SQLServer provides huge performance deltas over\n> PostgreSQL when dealing with index-unaligned queries :\n> create index i on t (a,b, c);\n> select * from t where b=... and c=...;\n> Columnar storage has been tried by various companies, CitusData,\n> EnterpriseDB, 2ndQuadrant, Fujitsu Enterprise Postgres. 
It has been\n> discussed quite a lot, last thread that I was able to find being in 2017,\n> https://www.postgresql.org/message-id/CAJrrPGfaC7WC9NK6PTTy6YN-NN%2BhCy8xOLAh2doYhVg5d6HsAA%40mail.gmail.com\n> where Fujitsu's patch made it quite far.\n> What is the status on such a storage manager extension interface ?\n>\n> 2. What do you think of adding a new syntax : 'from t join t2 using\n> (fk_constraint)' ? And further graph algorithms to make automatic joins ?\n> Both 'natural join' and 'using (column_name)' are useless when the\n> columns are not the same in source and destination.\n> Plus it is often the case that the fk_constraints are over numerous\n> columns, even though this is usually advised against. But when this case\n> happens there will be a significant writing speedup.\n> I have been bothered by this to the point that I developed a\n> graphical-query-builder plugin for pgModeler,\n>\n> https://github.com/maxzor/plugins/tree/master/graphicalquerybuilder#automatic-join-mode\n> ,\n> but I believe such a syntax would be much better in the core!\n>\n> 3. What is the status of making the internal parser of PostgreSQL less\n> coupled to the core, and easier to cherry-pick from outside?\n> It would be great to incorporate it into companion projects : pgAdmin4,\n> pgModeler, pgFormatter...\n>\n> BR, Maxime Chambonnet\n>", "msg_date": "Mon, 17 Feb 2020 03:16:42 +0100", "msg_from": "Imre Samu <pella.samu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 1 Status of vertical clustered index - 2 Join using\n (fk_constraint) suggestion - 3 Status of pgsql's parser autonomization" } ]
[ { "msg_contents": "Dear Hackers.\n\nIn current PostgreSQL, TOAST data and some WAL data (page\ndata) are compressed using an LZ-based algorithm.\nTwo strategies (PGLZ_strategy_default,\nPGLZ_strategy_always) are provided by default,\nand only PGLZ_strategy_default (the default values) is used in the PostgreSQL core.\n\nWith the PGLZ_strategy_default values:\n1. the compressed data must be at least 25% smaller than the original\n2. the original data must be between 32 bytes and INT_MAX bytes\n3. compression must start succeeding within the first 1 kB of data\n(additions to the dictionary being reused)\nPGLZ compression will only succeed if these conditions are met.\n\nHowever, some users may want data to be compressed under slightly\nlooser conditions.\nSo how about making the compression strategy flexible rather than a fixed\nsetting, to allow the user to configure it?\n\nPerhaps the configurable values are\nmin_input_size\nmin_comp_rate\nfirst_success_by\nI want to add these as GUCs so that the minimum compressible\ndata size and compression rate can be set.\n\nThe compression strategy is applied only when compressing.\nDecompression does not use the strategy, so old compressed data is not\naffected by the new patch.\n\nWhat do you think of this proposal?\nIf there is any agreement on this proposal, I want to write patches.\n\n\n", "msg_date": "Mon, 17 Feb 2020 11:04:47 +0900", "msg_from": "\"Moon, Insung\" <tsukiwamoon.pgsql@gmail.com>", "msg_from_op": true, "msg_subject": "Flexible pglz_strategy values and delete const." }, { "msg_contents": "On Mon, Feb 17, 2020 at 11:04:47AM +0900, Moon, Insung wrote:\n> The compression strategy is applied only when compressing.\n> Decompression does not use the strategy, so old compressed data is not\n> affected by the new patch.\n\nThis may be a difficult question, but do you have examples of\nworkloads which could benefit from having such reloptions? 
It is not\nthe kind of option that is easy to tune; still, a POC should be\nsimple enough to implement to show your point.\n--\nMichael", "msg_date": "Tue, 18 Feb 2020 13:20:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Flexible pglz_strategy values and delete const." }, { "msg_contents": "Hello.\n\nOn Tue, Feb 18, 2020 at 1:20 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Feb 17, 2020 at 11:04:47AM +0900, Moon, Insung wrote:\n> > The compression strategy is applied only when compressing.\n> > Decompression does not use the strategy, so old compressed data is not\n> > affected by the new patch.\n>\n> This may be a difficult question, but do you have examples of\n> workloads which could benefit from having such reloptions? It is not\n> the kind of option that is easy to tune; still, a POC should be\n> simple enough to implement to show your point.\n\nThank you for your response.\nIn fact, I do not have a very specific workload,\nbut as a simple test workload, I created TOAST data\nusing my PoC to check whether compression was successful.\n\nWith the default pglz_strategy ...\n=# create table foo (i text);\n=# insert INTO foo values (random_string_simple (10000));\n# random_string_simple is a function that generates random digits (as text)\nfrom 0 to 9.\nresult = Compression failure [10000]\n# This is the log I added for when compression fails.\n\nWith the changed pglz_strategy in my PoC ...\n(Here I loosened the min_comp_rate strategy from 25 to 10)\n=# Insert INTO foo values (random_string_simple (10000));\nresult = Compression Success [10000] [7631]\nCompression was successful, and the data was reduced from 10000 bytes to\n7631 bytes (about 24%), including the TOAST header.\n\nIn other words, such cases can occur, and instead of compressing with\na fixed strategy,\nI want to give a choice to users who need a different compression strategy.\nOf course, compression may cause performance degradation, but 
some\nusers may want to minimize disk usage.\nAnd in the case of FPW, the WAL size will often be smaller, which may be\nadvantageous in a streaming replication (SR) environment.\n\nI will send my POC for testing and discussion as soon as possible.\nOf course, test modules and documentation modifications are not included.\n\nBest regards.\nMoon.\n\n> --\n> Michael\n\n\n", "msg_date": "Tue, 18 Feb 2020 14:45:02 +0900", "msg_from": "\"Moon, Insung\" <tsukiwamoon.pgsql@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Flexible pglz_strategy values and delete const." } ]
[ { "msg_contents": "Hi,\n\nI propose this small fix for 27.4. Progress Reporting:\n\n- all of its partitions are also recursively analyzed as also mentioned on\n+ all of its partitions are also recursively analyzed as also mentioned in\n <xref linkend=\"sql-analyze\"/>.\n\nNote the last word: \"in\" sounds more correct.\n\nThanks,\nAmit", "msg_date": "Mon, 17 Feb 2020 15:55:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "tiny documentation fix" }, { "msg_contents": "On Mon, Feb 17, 2020 at 03:55:46PM +0900, Amit Langote wrote:\n> I propose this small fix for 27.4. Progress Reporting:\n> \n> - all of its partitions are also recursively analyzed as also mentioned on\n> + all of its partitions are also recursively analyzed as also mentioned in\n> <xref linkend=\"sql-analyze\"/>.\n> \n> Note the last word: \"in\" sounds more correct.\n\nWhat you are suggesting sounds much better to me than the original.\nDo others have comments or objections?\n--\nMichael", "msg_date": "Mon, 17 Feb 2020 18:42:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: tiny documentation fix" }, { "msg_contents": "On Mon, Feb 17, 2020 at 10:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Feb 17, 2020 at 03:55:46PM +0900, Amit Langote wrote:\n> > I propose this small fix for 27.4. 
Progress Reporting:\n> >\n> > - all of its partitions are also recursively analyzed as also mentioned on\n> > + all of its partitions are also recursively analyzed as also mentioned in\n> > <xref linkend=\"sql-analyze\"/>.\n> >\n> > Note the last word: \"in\" sounds more correct.\n>\n> What you are suggesting sounds much better to me than the original.\n> Do others have comments or objections?\n\n+1 with Amit's suggestion.\n\n\n", "msg_date": "Mon, 17 Feb 2020 12:55:12 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: tiny documentation fix" }, { "msg_contents": "> On 17 Feb 2020, at 10:42, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Feb 17, 2020 at 03:55:46PM +0900, Amit Langote wrote:\n>> I propose this small fix for 27.4. Progress Reporting:\n>> \n>> - all of its partitions are also recursively analyzed as also mentioned on\n>> + all of its partitions are also recursively analyzed as also mentioned in\n>> <xref linkend=\"sql-analyze\"/>.\n>> \n>> Note the last word: \"in\" sounds more correct.\n> \n> What you are suggesting sounds much better to me than the original.\n> Do others have comments or objections?\n\nIn my understanding, the difference comes from how the link is interpreted, is\nthe mention \"on a webpage\" or \"in a section\". Personally I prefer 'in' as it\nworks for the PDF docs as well as the web docs. 
In doc/src/sgml/mvcc.sgml\nthere is a similar instance where we've used \"in <xref ..\":\n\n  \"As mentioned in <xref linkend=\"xact-serializable\"/>, Serializable\n   transactions are just Repeatable Read transactions which add\"\n\nChanging as per the patch makes these consistent, so +1 on doing that.\n\ncheers ./daniel\n\n", "msg_date": "Mon, 17 Feb 2020 13:06:21 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: tiny documentation fix" }, { "msg_contents": "On Mon, Feb 17, 2020 at 01:06:21PM +0100, Daniel Gustafsson wrote:\n> Changing as per the patch makes these consistent, so +1 on doing that.\n\nThanks, applied.\n--\nMichael", "msg_date": "Tue, 18 Feb 2020 10:55:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: tiny documentation fix" }, { "msg_contents": "On Tue, Feb 18, 2020 at 10:56 Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Feb 17, 2020 at 01:06:21PM +0100, Daniel Gustafsson wrote:\n> > Changing as per the patch makes these consistent, so +1 on doing that.\n>\n> Thanks, applied.\n\n\nThank you.\n\n- Amit", "msg_date": "Tue, 18 Feb 2020 12:17:22 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: tiny documentation fix" } ]
[ { "msg_contents": "Hi all,\n\nWhile testing recovery, I realized that it's possible for a relation\nfile to be orphaned (probably) forever if postgres crashes while\nexecuting a transaction that creates a new relation and loads data.\n\nAfter postgres crashes, the relation is recovered during crash\nrecovery and the transaction is regarded as aborted, but the relation\nfile still exists in the database cluster. There seems to be no way for\npostgres to know that that relation is now garbage and can be deleted. The\nentry for that relation in pg_class exists but is never detected even by\nautovacuum because autovacuum's snapshot doesn't include it. In\naddition, that relfilenode number is never reused because the file\nalready exists. Therefore I think that such an orphaned relation file is\nleft forever. The orphaned relation can be large if we are loading\nlarge data or creating a large materialized view.\n\nIs this a bug? Has this ever been discussed?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 17 Feb 2020 19:19:27 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Orphaned relation files after crash recovery" }, { "msg_contents": "On Mon, Feb 17, 2020 at 11:20 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> Is this a bug? Has this ever been discussed?\n\nHi Sawada-san,\n\nThere was this thread:\n\nhttps://www.postgresql.org/message-id/flat/CAEepm%3D0ULqYgM2aFeOnrx6YrtBg3xUdxALoyCG%2BXpssKqmezug%40mail.gmail.com\n\nWe chose that as a sort of simple test case to demonstrate the\nmachinery for performing any kind of clean-up work when a\ntransaction aborts, even if there is a crash restart in between.
It's\ngoing to take a little while to come back to that due to some on-going\nredesign work on the undo proposal.\n\n\n", "msg_date": "Tue, 18 Feb 2020 00:04:54 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Orphaned relation files after crash recovery" }, { "msg_contents": "On Mon, 17 Feb 2020 at 20:05, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Mon, Feb 17, 2020 at 11:20 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > Is this a bug? Has this ever been discussed?\n>\n> Hi Sawada-san,\n>\n> There was this thread:\n>\n> https://www.postgresql.org/message-id/flat/CAEepm%3D0ULqYgM2aFeOnrx6YrtBg3xUdxALoyCG%2BXpssKqmezug%40mail.gmail.com\n>\n> We chose that as a sort of simple test case to demonstrate the\n> machinery for performing any kind of clean-up work when a\n> transaction aborts, even if there is a crash restart in between. It's\n> going to take a little while to come back to that due to some on-going\n> redesign work on the undo proposal.\n\nThank you! That's a good use case for undo logging. I'm looking forward\nto the new patch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 17 Feb 2020 20:23:15 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Orphaned relation files after crash recovery" } ]
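For what it's worth, the orphaned files Sawada-san describes can at least be hunted down by hand: list the files in the database directory and compare each relfilenode against `pg_class`. A minimal sketch of the comparison step (the helper name and the simplified filename pattern are illustrative assumptions, not code from the thread; mapped relations, `pg_filenode.map`, and tablespaces are ignored):

```python
import re

# Heap files are named after their relfilenode; "_fsm"/"_vm"/"_init" forks
# and ".1", ".2", ... segment files belong to the same relation.
MAIN_FORK = re.compile(r"^(\d+)(?:_(?:fsm|vm|init))?(?:\.\d+)?$")

def suspected_orphans(dir_entries, catalog_relfilenodes):
    """Return files whose relfilenode has no matching pg_class row."""
    orphans = []
    for name in dir_entries:
        m = MAIN_FORK.match(name)
        if m and int(m.group(1)) not in catalog_relfilenodes:
            orphans.append(name)
    return sorted(orphans)
```

Here `catalog_relfilenodes` would come from something like `SELECT relfilenode FROM pg_class`; a hit is only a suspect, since a transaction still in progress legitimately owns files that are not yet visible in anyone else's snapshot.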
[ { "msg_contents": "When applying the recent \"the the\" comment fixup to downstream docs I happened\nto notice other cases of duplicated words in comments. The attached trivial\ndiff removes the few that I came across (the last one was perhaps correct but\nif so seemed strange to a non-native speaker).\n\ncheers ./daniel", "msg_date": "Mon, 17 Feb 2020 15:36:10 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Duplicate words in comments" }, { "msg_contents": "On 17/02/2020 15:36, Daniel Gustafsson wrote:\n> When applying the recent \"the the\" comment fixup to downstream docs I happened\n> to notice other cases of duplicated words in comments. The attached trivial\n> diff removes the few that I came across (the last one was perhaps correct but\n> if so seemed strange to a non-native speaker).\n\nThese changes look good to me.\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 17 Feb 2020 22:57:33 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Duplicate words in comments" }, { "msg_contents": "On Mon, Feb 17, 2020 at 10:57:33PM +0100, Vik Fearing wrote:\n> These changes look good to me.\n\nApplied. The indentation in nodeAgg.c had a nit.\n--\nMichael", "msg_date": "Tue, 18 Feb 2020 12:25:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Duplicate words in comments" } ]
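The class of typo fixed in this thread is easy to hunt for mechanically; a rough sketch of such a scan (illustrative only, not the tooling actually used for the commit):

```python
import re

# A word immediately followed by itself ("the the", "when when").
# The backreference is also case-insensitive under re.IGNORECASE,
# so "The the" matches as well.
DUP_WORD = re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE)

def duplicated_words(text):
    """Return repeated words found in text, in order of appearance."""
    return [m.group(1) for m in DUP_WORD.finditer(text)]
```

As the thread notes for its last hunk, not every doubled word is wrong ("had had", "that that"), so a scan like this produces candidates for human review rather than automatic fixes.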
[ { "msg_contents": "We've had multiple previous discussions of $SUBJECT (eg [1][2]),\nwithout any resolution of what to do exactly. Thinking about this\nsome more, I had an idea that I don't think has been discussed.\nTo wit:\n\n1. On platforms where Python 2.x is still supported, recommend that\npackagers continue to build both plpython2 and plpython3, same as now.\n\n2. On platforms where Python 2.x is no longer supported, transparently\nmap plpythonu and plpython2u to plpython3u. \"Transparent\" meaning that\ndump/reload or pg_upgrade of existing plpythonu/plpython2u functions\nwill work, but when you run them, what you actually get is Python 3.x.\n\nFor existing functions that don't use any obsolete Python syntax\n(which one would hope is a pretty large percentage), this is a\nzero-effort conversion for users. If a function does use obsolete\nconstructs, it will get a parse failure when executed, and the user\nwill have to update it to Python 3 syntax. I propose that we make\nthat case reasonably painless by providing the conversion script\nI posted in [3] (or another one if somebody's got a better one),\nbundled as a separately-installable extension.\n\nA possible gotcha in this approach is if there are any python 2/3\nincompatibilities that would not manifest as syntax errors or\nobvious runtime errors, but would allow old code to execute and\nsilently do the wrong thing. One would hope that the Python crowd\nweren't dumb enough to do that, but I don't know whether it's true.\nIf there are nasty cases like that, maybe what we have to do is allow\nplpythonu/plpython2u functions to be dumped and reloaded into a\npython-3-only install, but refuse to execute them until they've\nbeen converted.\n\nIn either case, to allow dump/reload or pg_upgrade to work without\nugly hacks, what we need to do is provide a stub version of\nplpython2.so. (The extension definitions that sit on top of it\nthen don't need to change.) 
The stub would either redirect calls\nto plpython3.so if we prefer that approach, or throw errors if we\nprefer that approach. I envision adding a configure option that\nenables build and install of this stub library while doing a\nplpython3 build; packagers not planning to build a \"real\" plpython2\nshould ask for the stub instead.\n\nThe end result given the first approach is that \"plpythonu\" and\n\"plpython2u\" and \"plpython3u\" all work and mean the same thing.\nOver some long time period we might want to deprecate and remove\nthe \"plpython2u\" alias, but there would be no hurry about it.\n\nThe work involved in making this happen seems fairly minimal, and\npractical to get done in time for PG 13. Perhaps there'd even be\na case for back-patching it, though I'm not going to advocate for\nthat here.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/5351890.TdMePpdHBD%40nb.usersys.redhat.com\n[2] https://www.postgresql.org/message-id/flat/CAKmB1PGDAy9mXxSTqUchYEi4iJAA6NKVj4P5BtAzvQ9wSDUwJw%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/11546.1566584867%40sss.pgh.pa.us\n\n\n", "msg_date": "Mon, 17 Feb 2020 11:49:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Resolving the python 2 -> python 3 mess" }, { "msg_contents": "Hi Tom,\n\nI really like the \"stub .so\" idea, but feel pretty uncomfortable for the\n\"transparent\" upgrade. Response inlined.\n\nOn Mon, Feb 17, 2020 at 8:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> 2. On platforms where Python 2.x is no longer supported, transparently\n> map plpythonu and plpython2u to plpython3u. 
\"Transparent\" meaning that\n> dump/reload or pg_upgrade of existing plpythonu/plpython2u functions\n> will work, but when you run them, what you actually get is Python 3.x.\n\nIt's fair enough that plpythonu changes its meaning, people who really\nwant the stability should explicitly use plpython2u.\n\n>\n> For existing functions that don't use any obsolete Python syntax\n> (which one would hope is a pretty large percentage), this is a\n> zero-effort conversion for users. If a function does use obsolete\n> constructs, it will get a parse failure when executed, and the user\n> will have to update it to Python 3 syntax. I propose that we make\n> that case reasonably painless by providing the conversion script\n> I posted in [3] (or another one if somebody's got a better one),\n> bundled as a separately-installable extension.\n>\n> A possible gotcha in this approach is if there are any python 2/3\n> incompatibilities that would not manifest as syntax errors or\n> obvious runtime errors, but would allow old code to execute and\n> silently do the wrong thing. One would hope that the Python crowd\n> weren't dumb enough to do that, but I don't know whether it's true.\n> If there are nasty cases like that, maybe what we have to do is allow\n> plpythonu/plpython2u functions to be dumped and reloaded into a\n> python-3-only install, but refuse to execute them until they've\n> been converted.\n\n\"True division\", one of the very first (2011, awww) few breaking changes\nintroduced in Python 3 [1], comes to mind. 
While it's not the worst of the\nincompatibilities between Python 2 and 3, it's bad enough to give pause\nto the notion that a successful parsing implies successful conversion.\n\n[1] https://www.python.org/dev/peps/pep-0238/\n\nCheers,\nJesse\n\n\n", "msg_date": "Mon, 17 Feb 2020 14:57:59 -0800", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "Jesse Zhang <sbjesse@gmail.com> writes:\n> I really like the \"stub .so\" idea, but feel pretty uncomfortable for the\n> \"transparent\" upgrade. Response inlined.\n\nFair enough, but ...\n\n>> 2. On platforms where Python 2.x is no longer supported, transparently\n>> map plpythonu and plpython2u to plpython3u. \"Transparent\" meaning that\n>> dump/reload or pg_upgrade of existing plpythonu/plpython2u functions\n>> will work, but when you run them, what you actually get is Python 3.x.\n\n> It's fair enough that plpythonu changes its meaning, people who really\n> want the stability should explicitly use plpython2u.\n\nYeah, but then what do you want to do with functions declared plpython2u?\nHave them fail even if they'd work fine under Python 3? Doesn't really\nseem like that's helping anyone.\n\n>> A possible gotcha in this approach is if there are any python 2/3\n>> incompatibilities that would not manifest as syntax errors or\n>> obvious runtime errors, but would allow old code to execute and\n>> silently do the wrong thing.\n\n> \"True division\", one of the very first (2011, awww) few breaking changes\n> introduced in Python 3 [1], comes to mind. While it's not the worst of the\n> incompatibilities between Python 2 and 3, it's bad enough to give pause\n> to the notion that a successful parsing implies successful conversion.\n\nHm. I agree that's kind of nasty, because 2to3 doesn't fix it AFAICT\n(and, likely, there is no way to do so that doesn't include solving\nthe halting problem).
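The hazard under discussion is easy to see concretely: the expression below is valid source text under both major versions, so no syntax-level check can flag it, yet Python 3 evaluates it differently than Python 2 did (an illustrative sketch, not code from the thread):

```python
# Python 2 floor-divided two ints (5 / 2 == 2); under PEP 238
# "true division", Python 3 always returns a float here.
assert 5 / 2 == 2.5
assert isinstance(5 / 2, float)

# The spelling that means the same thing under both versions:
assert 5 // 2 == 2
assert -7 // 2 == -4   # floor division rounds toward negative infinity
```

Whether `/` should become `//` depends on the runtime types of the operands, which is why a purely syntactic converter cannot rewrite it reliably.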
However, it's not clear to me why forcing users\nto do a conversion is going to help them any with that, precisely\nbecause the automated conversion won't fix it. They're going to have\nto find such issues the hard way whenever they move to Python 3, no\nmatter what we do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 Feb 2020 18:57:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": ">\n> A possible gotcha in this approach is if there are any python 2/3\n> incompatibilities that would not manifest as syntax errors or\n> obvious runtime errors, but would allow old code to execute and\n> silently do the wrong thing. One would hope that the Python crowd\n> weren't dumb enough to do that, but I don't know whether it's true.\n> If there are nasty cases like that, maybe what we have to do is allow\n> plpythonu/plpython2u functions to be dumped and reloaded into a\n> python-3-only install, but refuse to execute them until they've\n> been converted.\n>\n\nUnfortunately, I think there are cases like that. The shift to Unicode as\nthe default string means that some functions that used to return a `str`\nnow return a `bytes` (I know of this in the hashlib and base64 modules, but\nprobably also in URL request data and others), and to use a `bytes` in\nstring manipulation you have to first explicitly convert it to some string\nencoding. So things like a function that wraps around a python crypto\nlibrary would be the exact places where those was-str-now-bytes functions\nwould be used.\n", "msg_date": "Mon, 17 Feb 2020 20:25:41 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n>> A possible gotcha in this approach is if there are any python 2/3\n>> incompatibilities that would not manifest as syntax errors or\n>> obvious runtime errors, but would allow old code to execute and\n>> silently do the wrong thing.\n\n> Unfortunately, I think there are cases like that. The shift to Unicode as\n> the default string means that some functions that used to return a `str`\n> now return a `bytes` (I know of this in the hashlib and base64 modules, but\n> probably also in URL request data and others), and to use a `bytes` in\n> string manipulation you have to first explicitly convert it to some string\n> encoding.
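Corey's str-versus-bytes point can be seen directly under Python 3, where results that mixed freely with ordinary strings on Python 2 now need an explicit decode (a small sketch for illustration, not code from the thread):

```python
import base64
import hashlib

digest = hashlib.sha256(b"some payload").digest()
assert isinstance(digest, bytes)      # raw digest is bytes on Python 3

encoded = base64.b64encode(digest)
assert isinstance(encoded, bytes)     # this returned str on Python 2

# Using the result in str context now requires an explicit decode:
token = "sha256:" + encoded.decode("ascii")
assert token.startswith("sha256:")

# hexdigest(), by contrast, still returns str on both versions.
assert isinstance(hashlib.sha256(b"x").hexdigest(), str)
```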
So things like a function that wraps around a python crypto\n> library would be the exact places where those was-str-now-bytes functions\n> would be used.\n\nSo, as with Jesse's example, what I'm wondering is whether or not 2to3\nwill fix that for you (or even flag it). The basic difference between\nthe two alternatives I suggested is whether we force people to put their\npython function through that converter before we'll even try to run it.\nSubtleties that 2to3 doesn't catch seem like non-reasons to insist on\napplying it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Feb 2020 10:59:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": ">\n> So, as with Jesse's example, what I'm wondering is whether or not 2to3\n> will fix that for you (or even flag it). The basic difference between\n> the two alternatives I suggested is whether we force people to put their\n> python function through that converter before we'll even try to run it.\n> Subtleties that 2to3 doesn't catch seem like non-reasons to insist on\n> applying it.\n>\n\nThe 2018 vintage of 2to3 didn't catch it.\n\nIt's not firsthand knowledge, but I just watched a nearby team have some\nproduction issues where one library couldn't fetch b'http://foo.org' so I'm\nguessing 2to3 still doesn't catch those things, or they stopped using it.\n", "msg_date": "Tue, 18 Feb 2020 14:37:10 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "After thinking about this awhile longer, I'm starting to believe\nwe should do some of each. That is, the stub replacement for\nplpython2.so should redirect \"plpythonu\" functions to plpython3.so,\nbut throw errors for \"plpython2u\" functions. This is not because\nof any technical difference between plpythonu and plpython2u ---\nup to now, there wasn't any --- but because it seems like users\nwould be expecting that if they've read what we have said in\n\nhttps://www.postgresql.org/docs/current/plpython-python23.html\n\nAdmittedly, what it says there is that plpythonu might become\nPython 3 in some \"distant\" future release, not next year.\nBut at least there's a direct line between that documentation\nand this behavior.\n\nSo attached is a pair of draft patches that do it like that.\n0001 creates an extension with two conversion functions, based\non the script I showed in the other thread. Almost independently\nof that, 0002 provides code to generate a stub version of\nplpython2.so that behaves as stated above. 0002 is incomplete,\nbecause I haven't looked into what is needed in the MSVC build\nscripts.
Maybe we could create some regression tests, too.\nBut I think these are potentially committable with those additions,\nif people approve of this approach.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 18 Feb 2020 23:39:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "On 2020-02-19 05:39, Tom Lane wrote:\n> After thinking about this awhile longer, I'm starting to believe\n> we should do some of each. That is, the stub replacement for\n> plpython2.so should redirect \"plpythonu\" functions to plpython3.so,\n> but throw errors for \"plpython2u\" functions.\n\nI'm not sure these complications are worth it. They don't match \nanything that is done in other Python 2/3 porting schemes. I think \nthere should just be an option \"plpython is: {2|3|don't build it at \nall}\". Then packagers can match this to what their plan for \n/usr/bin/python* is -- which appears to be different everywhere.\n\nYour scheme appears to center around the assumption that people will \nwant to port their functions at the same time as not building plpython2u \nanymore. This would defeat testing functions before and after in the \nsame installation. 
I think the decisions of what plpythonu points to \nand which variants are built at all should be separate.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 19 Feb 2020 20:42:36 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Your scheme appears to center around the assumption that people will \n> want to port their functions at the same time as not building plpython2u \n> anymore.\n\nNot really; use of the proposed porting infrastructure is the same whether\nplpython2u still works or not. You end up with functions that are labeled\nplpython3u, so what bare \"plpythonu\" means is not a factor.\n\nIt is true that as this patch is written, switching of plpythonu to\npoint at Python 3 rather than 2 is coupled to disabling plpython2u.\nIf we'd have gotten this done a year or two ago, I'd have made it more\ncomplex to allow more separation there. But events have passed us by:\nthe info we are getting from packagers is that Python 2 is getting\ndropped *this year*, not in some distant future. So I think that allowing\nthe plpythonu redefinition to be separate is no longer of any great value,\nand not worth extra complication for. 
People are just going to be\nshipping v13 with both things changed in any case.\n\nIf we wanted to do something to help people port their functions in\nadvance of the big changeover, the thing to do would be to back-patch\nthe proposed convert_python3 extension into existing branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Feb 2020 15:00:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "Here's an updated pair of patches that attempt to fix the MSVC\nscripts (pretty blindly) and provide a very simple regression test.\nI'm not too sure whether the regression test will really prove\nworkable or not: for starters, it'll fail if \"2to3\" isn't available\nin the PATH. Perhaps there's reason to object to even trying to\ntest that, on security grounds.\n\nI set up the MSVC scripts to default to building the stub extension.\nI don't know if we really want to commit it that way, but the idea\nfor the moment is to try to get the cfbot to test it on Windows.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 25 Feb 2020 14:57:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "I wrote:\n> Here's an updated pair of patches that attempt to fix the MSVC\n> scripts (pretty blindly) and provide a very simple regression test.\n\nA little *too* blindly, evidently. 
Try again ...\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 25 Feb 2020 15:48:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "I wrote:\n> I set up the MSVC scripts to default to building the stub extension.\n> I don't know if we really want to commit it that way, but the idea\n> for the moment is to try to get the cfbot to test it on Windows.\n\nNo joy there --- now that I look closer, it seems the cfbot doesn't\nbuild any of the external-language PLs on Windows. I'll have to\nwait for some reviewer to try it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Feb 2020 17:06:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "\nOn 2/25/20 5:06 PM, Tom Lane wrote:\n> I wrote:\n>> I set up the MSVC scripts to default to building the stub extension.\n>> I don't know if we really want to commit it that way, but the idea\n>> for the moment is to try to get the cfbot to test it on Windows.\n> No joy there --- now that I look closer, it seems the cfbot doesn't\n> build any of the external-language PLs on Windows. I'll have to\n> wait for some reviewer to try it.\n>\n> \t\t\t\n\n\n\nWhat are the requirements for testing? 
bowerbird builds with python 2.7,\nalthough I guess I should really try to upgrade it  3.x.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 25 Feb 2020 18:12:30 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 2/25/20 5:06 PM, Tom Lane wrote:\n>> No joy there --- now that I look closer, it seems the cfbot doesn't\n>> build any of the external-language PLs on Windows. I'll have to\n>> wait for some reviewer to try it.\n\n> What are the requirements for testing? bowerbird builds with python 2.7,\n> although I guess I should really try to upgrade it  3.x.\n\nHas to be python 3, unfortunately; the patch has no effect on a\npython 2 build.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 25 Feb 2020 19:08:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "\nOn 2/25/20 7:08 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 2/25/20 5:06 PM, Tom Lane wrote:\n>>> No joy there --- now that I look closer, it seems the cfbot doesn't\n>>> build any of the external-language PLs on Windows. I'll have to\n>>> wait for some reviewer to try it.\n>> What are the requirements for testing? 
bowerbird builds with python 2.7,\n>> although I guess I should really try to upgrade it  3.x.\n> Has to be python 3, unfortunately; the patch has no effect on a\n> python 2 build.\n>\n> \t\t\t\n\n\n\nYeah, I have python3 working on drongo, I'll test there.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 25 Feb 2020 20:24:21 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "\nOn 2/25/20 8:24 PM, Andrew Dunstan wrote:\n> On 2/25/20 7:08 PM, Tom Lane wrote:\n>> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>> On 2/25/20 5:06 PM, Tom Lane wrote:\n>>>> No joy there --- now that I look closer, it seems the cfbot doesn't\n>>>> build any of the external-language PLs on Windows. I'll have to\n>>>> wait for some reviewer to try it.\n>>> What are the requirements for testing? 
bowerbird builds with python 2.7,\n>>> although I guess I should really try to upgrade it  3.x.\n>> Has to be python 3, unfortunately; the patch has no effect on a\n>> python 2 build.\n>>\n>> \t\t\t\n>\n>\n> Yeah, I have python3 working on drongo, I'll test there.\n>\n\nIt's almost there, you need to add something like this to Mkvcbuild.pm:\n\n\n    if ($solution->{options}->{python2_stub})\n    {\n        my $plpython2_stub =\n          $solution->AddProject('plpython2', 'dll', 'PLs',\n'src/pl/stub_plpython2');\n        $plpython2_stub->AddReference($postgres);\n    }\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 26 Feb 2020 02:47:29 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "\nOn 2/26/20 2:47 AM, Andrew Dunstan wrote:\n> On 2/25/20 8:24 PM, Andrew Dunstan wrote:\n>> On 2/25/20 7:08 PM, Tom Lane wrote:\n>>> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>>> On 2/25/20 5:06 PM, Tom Lane wrote:\n>>>>> No joy there --- now that I look closer, it seems the cfbot doesn't\n>>>>> build any of the external-language PLs on Windows. I'll have to\n>>>>> wait for some reviewer to try it.\n>>>> What are the requirements for testing? 
bowerbird builds with python 2.7,\n>>>> although I guess I should really try to upgrade it  3.x.\n>>> Has to be python 3, unfortunately; the patch has no effect on a\n>>> python 2 build.\n>>>\n>>> \t\t\t\n>>\n>> Yeah, I have python3 working on drongo, I'll test there.\n>>\n> It's almost there, you need to add something like this to Mkvcbuild.pm:\n>\n>\n>     if ($solution->{options}->{python2_stub})\n>     {\n>         my $plpython2_stub =\n>           $solution->AddProject('plpython2', 'dll', 'PLs',\n> 'src/pl/stub_plpython2');\n>         $plpython2_stub->AddReference($postgres);\n>     }\n\n\n\n\n\nHowever, when it get to testing contrib it complains like this:\n\n\n============================================================\nChecking hstore_plpython\nC:/prog/bf/root/HEAD/pgsql/Release/pg_regress/pg_regress\n--bindir=C:/prog/bf/root/HEAD/pgsql/Release/psql\n--dbname=contrib_regression --load-ex\ntension=hstore --load-extension=plpythonu\n--load-extension=hstore_plpythonu hstore_plpython\n(using postmaster on localhost, default port)\n============== dropping database \"contrib_regression\" ==============\nDROP DATABASE\n============== creating database \"contrib_regression\" ==============\nCREATE DATABASE\nALTER DATABASE\n============== installing hstore                      ==============\nCREATE EXTENSION\n============== installing plpythonu                   ==============\nCREATE EXTENSION\n============== installing hstore_plpythonu            ==============\nERROR:  could not access file \"$libdir/hstore_plpython2\": No such file\nor directory\ncommand failed: \"C:/prog/bf/root/HEAD/pgsql/Release/psql/psql\" -X -c\n\"CREATE EXTENSION IF NOT EXISTS \\\"hstore_plpythonu\\\"\" \"contrib_regression\"\n\n\nSo there's a bit more work to do.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \n\nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 26 Feb 2020 03:17:21 -0500", "msg_from": 
"Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "On 2/26/20 3:17 AM, Andrew Dunstan wrote:\n> On 2/26/20 2:47 AM, Andrew Dunstan wrote:\n>> On 2/25/20 8:24 PM, Andrew Dunstan wrote:\n>>> On 2/25/20 7:08 PM, Tom Lane wrote:\n>>>> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>>>> On 2/25/20 5:06 PM, Tom Lane wrote:\n>>>>>> No joy there --- now that I look closer, it seems the cfbot doesn't\n>>>>>> build any of the external-language PLs on Windows. I'll have to\n>>>>>> wait for some reviewer to try it.\n>>>>> What are the requirements for testing? bowerbird builds with python 2.7,\n>>>>> although I guess I should really try to upgrade it  3.x.\n>>>> Has to be python 3, unfortunately; the patch has no effect on a\n>>>> python 2 build.\n>>>>\n>>>> \t\t\t\n>>> Yeah, I have python3 working on drongo, I'll test there.\n>>>\n>> It's almost there, you need to add something like this to Mkvcbuild.pm:\n>>\n>>\n>>     if ($solution->{options}->{python2_stub})\n>>     {\n>>         my $plpython2_stub =\n>>           $solution->AddProject('plpython2', 'dll', 'PLs',\n>> 'src/pl/stub_plpython2');\n>>         $plpython2_stub->AddReference($postgres);\n>>     }\n>\n>\n>\n>\n> However, when it get to testing contrib it complains like this:\n>\n>\n> ============================================================\n> Checking hstore_plpython\n> C:/prog/bf/root/HEAD/pgsql/Release/pg_regress/pg_regress\n> --bindir=C:/prog/bf/root/HEAD/pgsql/Release/psql\n> --dbname=contrib_regression --load-ex\n> tension=hstore --load-extension=plpythonu\n> --load-extension=hstore_plpythonu hstore_plpython\n> (using postmaster on localhost, default port)\n> ============== dropping database \"contrib_regression\" ==============\n> DROP DATABASE\n> ============== creating database \"contrib_regression\" ==============\n> CREATE DATABASE\n> ALTER DATABASE\n> ============== installing hstore     
                 ==============\n> CREATE EXTENSION\n> ============== installing plpythonu                   ==============\n> CREATE EXTENSION\n> ============== installing hstore_plpythonu            ==============\n> ERROR:  could not access file \"$libdir/hstore_plpython2\": No such file\n> or directory\n> command failed: \"C:/prog/bf/root/HEAD/pgsql/Release/psql/psql\" -X -c\n> \"CREATE EXTENSION IF NOT EXISTS \\\"hstore_plpythonu\\\"\" \"contrib_regression\"\n>\n>\n> So there's a bit more work to do.\n>\n>\n\n\nThis seems to fix it.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 26 Feb 2020 06:15:58 -0500", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> This seems to fix it.\n\nOK, so we need that *and* the AddProject addition you mentioned?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Feb 2020 10:03:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "On Thu, Feb 27, 2020 at 1:33 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > This seems to fix it.\n>\n> OK, so we need that *and* the AddProject addition you mentioned?\n>\n>\n\nYes, the first one builds it, the second one fixes the tests to run correctly.\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 27 Feb 2020 06:34:52 +1030", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "Andrew Dunstan 
<andrew.dunstan@2ndquadrant.com> writes:\n> On Thu, Feb 27, 2020 at 1:33 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> OK, so we need that *and* the AddProject addition you mentioned?\n\n> Yes, the first one builds it, the second one fixes the tests to run correctly.\n\nThanks, here's a patchset incorporating those fixes. Otherwise\nsame as before.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 27 Feb 2020 16:11:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "On 17.02.2020 17:49, Tom Lane wrote:\n> We've had multiple previous discussions of $SUBJECT (eg [1][2]),\n> without any resolution of what to do exactly. Thinking about this\n> some more, I had an idea that I don't think has been discussed.\n> To wit:\n> \n> 1. On platforms where Python 2.x is still supported, recommend that\n> packagers continue to build both plpython2 and plpython3, same as now.\n> \n\nIs there some documentation on how to build both?\nThe INSTALL gives no hint.\n\nAnd how to build for multiple 3.x versions?\n\nCurrently for the Cygwin package I am building only 2.x, and it is clearly\nnot a good situation.\n\nRegards\nMarco\n\n\n\n\n\n\n", "msg_date": "Thu, 26 Mar 2020 06:46:05 +0100", "msg_from": "Marco Atzeri <marco.atzeri@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "On 2020-03-26 06:46, Marco Atzeri wrote:\n> On 17.02.2020 17:49, Tom Lane wrote:\n>> We've had multiple previous discussions of $SUBJECT (eg [1][2]),\n>> without any resolution of what to do exactly. Thinking about this\n>> some more, I had an idea that I don't think has been discussed.\n>> To wit:\n>>\n>> 1. 
On platforms where Python 2.x is still supported, recommend that\n>> packagers continue to build both plpython2 and plpython3, same as now.\n>>\n> \n> Is there some documentation on how to build both?\n\nYou have to configure and build the sources twice with different PYTHON \nsettings. It depends on your packaging system how to best arrange that.\n\n> And how to build for multiple 3.x versions?\n\nThat is not supported.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 26 Mar 2020 10:14:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "Marco Atzeri <marco.atzeri@gmail.com> writes:\n> On 17.02.2020 17:49, Tom Lane wrote:\n>> 1. On platforms where Python 2.x is still supported, recommend that\n>> packagers continue to build both plpython2 and plpython3, same as now.\n\n> Is there some documentation on how to build both?\n> The INSTALL gives no hint.\n\nIt's explained in the plpython documentation: basically you have to\nconfigure and build the source tree twice (although I think the\nsecond time you can just cd into src/pl/plpython and build/install\nonly that much).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Mar 2020 10:26:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "I wrote:\n> [ a couple patches ]\n\nPing? I wish somebody would review this. 
I'm not wedded to any\nof the details, but it would be an embarrassment for us to ship v13\nwithout any response to the fact that Python 2 is EOL.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Apr 2020 09:19:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "On Wed, Feb 19, 2020 at 08:42:36PM +0100, Peter Eisentraut wrote:\n> I think there should just\n> be an option \"plpython is: {2|3|don't build it at all}\". Then packagers can\n> match this to what their plan for /usr/bin/python* is -- which appears to be\n> different everywhere.\n\nToday, we do not give packagers this sort of discretion over SQL-level\nbehavior. We formerly had --disable-integer-datetimes, but I view that as the\nlesser of two evils, the greater of which would have been to force dump/reload\nof terabyte-scale clusters. We should continue to follow today's principle,\nwhich entails not inviting packagers to align the nature of \"LANGUAGE\nplpythonu\" with the nature of /usr/bin/python. Users wouldn't benefit from\nalignment. Moreover, it has long been conventional for the host to be a VM\ndedicated to PostgreSQL, in which case users of the DBMS aren't users of its\nhost. I don't have an opinion on which version \"LANGUAGE plpythonu\" should\ndenote in PostgreSQL v13, but the version should be constant across v13\nconfigurations. 
(If some builds make it unavailable or make it an error-only\nstub, that is no problem.)\n\nI reviewed all code:\n\nOn Thu, Feb 27, 2020 at 04:11:05PM -0500, Tom Lane wrote:\n> --- a/configure.in\n> +++ b/configure.in\n> @@ -766,6 +766,9 @@ PGAC_ARG_BOOL(with, python, no, [build Python modules (PL/Python)])\n> AC_MSG_RESULT([$with_python])\n> AC_SUBST(with_python)\n> \n> +PGAC_ARG_BOOL(with, python2-stub, no, [build Python 2 compatibility stub])\n> +AC_SUBST(with_python2_stub)\n> +\n> #\n> # GSSAPI\n> #\n> @@ -1042,6 +1045,12 @@ fi\n> if test \"$with_python\" = yes; then\n> PGAC_PATH_PYTHON\n> PGAC_CHECK_PYTHON_EMBED_SETUP\n> + # Disable building Python 2 stub if primary version isn't Python 3\n> + if test \"$python_majorversion\" -lt 3; then\n> + with_python2_stub=no\n> + fi\n\nStandard PostgreSQL practice would be to AC_MSG_ERROR in response to the\ninfeasible option, not to ignore the option.\n\n> --- /dev/null\n> +++ b/src/pl/plpython/sql/plpython_stub.sql\n\n> +call convert_python3_all();\n\nBuilding with PYTHON=python3 --with-python2-stub, this fails on my RHEL 7.8\ninstallation. I had installed python34-tools, which provides /usr/bin/2to3-3.\nI had not installed python-tools (v2.7.5), which provides /usr/bin/2to3.\nMaking a 2to3 -> 2to3-3 symlink let the test pass. Requiring such a step to\npass tests may or may not be fine; what do you think? 
(I am attaching\nregression.diffs; to my knowledge, it's not interesting.)\n\n> --- a/src/tools/msvc/config_default.pl\n> +++ b/src/tools/msvc/config_default.pl\n> @@ -16,6 +16,7 @@ our $config = {\n> \ttcl => undef, # --with-tcl=<path>\n> \tperl => undef, # --with-perl=<path>\n> \tpython => undef, # --with-python=<path>\n> +\tpython2_stub => 1, # --with-python2-stub (ignored unless Python is v3)\n\nThis default should not depend on whether one uses the MSVC build system or\nuses the GNU make build system.\n\n> --- /dev/null\n> +++ b/src/pl/plpython/convert_python3--1.0.sql\n\n> +create procedure convert_python3_all(tool text default '2to3',\n> + options text default '')\n> +language plpython3u as $$\n> +import re, subprocess, tempfile\n> +\n> +# pattern to extract just the function header from pg_get_functiondef result\n> +aspat = re.compile(\"^(.*?\\nAS )\", re.DOTALL)\n\nThis fails on:\n\ncreate function convert1(\"\nAS \" int) returns int\nAS $$return 123l$$\nlanguage\nplpython2u\nimmutable;\n\nThat's not up to project standard, but I'm proceeding to ignore this since the\nsubject is an untrusted language and ~nobody uses such argument names.", "msg_date": "Thu, 28 May 2020 01:03:44 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Wed, Feb 19, 2020 at 08:42:36PM +0100, Peter Eisentraut wrote:\n>> I think there should just\n>> be an option \"plpython is: {2|3|don't build it at all}\". Then packagers can\n>> match this to what their plan for /usr/bin/python* is -- which appears to be\n>> different everywhere.\n\n> Today, we do not give packagers this sort of discretion over SQL-level\n> behavior. We formerly had --disable-integer-datetimes, but I view that as the\n> lesser of two evils, the greater of which would have been to force dump/reload\n> of terabyte-scale clusters. 
We should continue to follow today's principle,\n> which entails not inviting packagers to align the nature of \"LANGUAGE\n> plpythonu\" with the nature of /usr/bin/python.\n\nFWIW, I've abandoned this patch. We've concluded (by default, at least)\nthat nothing is getting done in v13, and by the time v14 is out it will\nbe too late to have any useful effect. I expect that the situation\non-the-ground by 2021 will be that packagers build with PYTHON=python3\nand package whatever they get from that. That means (1) plpythonu won't\nexist anymore and (2) users will be left to their own devices to convert\nexisting plpython code. Now, (1) corresponds to not providing any\n/usr/bin/python executable, only python3 -- and that is a really common\nchoice for distros to make, AFAIK, so I don't feel too awful about it.\nI find (2) less than ideal, but there's evidently not enough interest\nin doing anything about it. There's certainly going to be no point in\nshipping a solution for (2) if we fail to do so before v14; people\nwill already have done the work by hand.\n\nWe should, however, consider updating the plpython docs to reflect\ncurrent reality. Notably, the existing wording in section 45.1\nsuggests that we'll eventually redefine \"plpythonu\" as Python 3,\nand it seems to me that that's not going to happen.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 May 2020 10:44:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Resolving the python 2 -> python 3 mess" } ]
[ { "msg_contents": "Dear Hackers\n\nI've been working on an extension and using SPI to execute some queries. \nI am in a situation where I have the option to issue multiple queries \nconcurrently, ideally under the same snapshot and transaction. In short, I \nam achieving this by creating multiple dynamic background workers, each \none of them executing a query at the same time using \nSPI_execute(sql_string, ...). To be more precise, sometimes I am also \nopting to issue a 'CREATE TABLE AS <sql_query>' command, an SPI utility \ncommand.\n\nI was however wondering whether I can indeed achieve concurrency in this \nway. My initial results are not showing much difference compared to a \nnon-concurrent implementation. If there would be a large lock somewhere \nin SPI implementation obviously this can be counterintuitive. What would \nbe the precautions I would need to consider when working with SPI in \nthis manner?\n\nThanks,\nTom", "msg_date": "Mon, 17 Feb 2020 21:24:43 +0100", "msg_from": "Tom Mercha <mercha_t@hotmail.com>", "msg_from_op": true, "msg_subject": "SPI Concurrency Precautions?" }, { "msg_contents": "On 17/02/2020 21:24, Tom Mercha wrote:\n> Dear Hackers\n> \n> I've been working on an extension and using SPI to execute some queries. \n> I am in a situation where I have the option to issue multiple queries \n> concurrently, ideally under the same snapshot and transaction. In short, I \n> am achieving this by creating multiple dynamic background workers, each \n> one of them executing a query at the same time using \n> SPI_execute(sql_string, ...). To be more precise, sometimes I am also \n> opting to issue a 'CREATE TABLE AS <sql_query>' command, an SPI utility \n> command.\n> \n> I was however wondering whether I can indeed achieve concurrency in this \n> way. My initial results are not showing much difference compared to a \n> non-concurrent implementation. 
If there would be a large lock somewhere \n> in SPI implementation obviously this can be counterintuitive. What would \n> be the precautions I would need to consider when working with SPI in \n> this manner?\n> \n> Thanks,\n> Tom\n\nDear Hackers\n\nI have run some tests to try and better highlight my issue as I am still \nstruggling a lot with it.\n\nI have 4 'CREATE TABLE AS' statements of this nature: \"CREATE TABLE \n<different_tbl_name> AS <same_query>\". This means that I have different \ntable names for the same query.\n\nI am spawning a number of dynamic background workers to execute each \nstatement. When I spawn 4 workers on a quad-core machine, the resulting \nexecution time per statement is {0.158s, 0.216s, 0.399s, 0.396s}. \nHowever, when I have just one worker, the results are {0.151s, 0.141s, \n0.146s, 0.136s}.\n\nThe way I am executing my statements is through SPI in each worker \n(using a PG extension) as follows:\n SPI_connect();\n SPI_exec(queryString, 0);\n SPI_finish();\nIn both test cases, SPI_connect/finish are executed 4 times.\n\nWhat I expect is that with 4 workers, each statement will take approx \n0.15s to execute since they are independent from each other. This would \nresult in approx a 4x speedup. Despite seeing concurrency, I am seeing \nthat each individual statement will take longer to execute. I am \nstruggling to understand this behavior; what this suggests to me is that \nthere is a lock somewhere which completely defeats my purpose.\n\nI was wondering how I could execute my CREATE TABLE statements in a \nparallel fashion given that they are independent from each other. If the \nlock is the problem, what steps could I take to relax it? I would \ngreatly appreciate any guidance or insights on this topic.\n\nThanks,\nTom\n\n\n", "msg_date": "Sat, 22 Feb 2020 01:20:06 +0100", "msg_from": "Tom Mercha <mercha_t@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: SPI Concurrency Precautions? 
Problems with Parallel Execution of\n Multiple CREATE TABLE statements" }, { "msg_contents": "On Sat, Feb 22, 2020 at 5:50 AM Tom Mercha <mercha_t@hotmail.com> wrote:\n> I am spawning a number of dynamic background workers to execute each\n> statement. When I spawn 4 workers on a quad-core machine, the resulting\n> execution time per statement is {0.158s, 0.216s, 0.399s, 0.396s}.\n> However, when I have just one worker, the results are {0.151s, 0.141s,\n> 0.146s, 0.136s}.\n>\n> The way I am executing my statements is through SPI in each worker\n> (using a PG extension) as follows:\n> SPI_connect();\n> SPI_exec(queryString, 0);\n> SPI_finish();\n> In both test cases, SPI_connect/finish are executed 4 times.\n>\n> What I expect is that with 4 workers, each statement will take approx\n> 0.15s to execute since they are independent from each other. This would\n> result in approx a 4x speedup. Despite seeing concurrency, I am seeing\n> that each individual statement will take longer to execute. I am\n> struggling to understand this behavior; what this suggests to me is that\n> there is a lock somewhere which completely defeats my purpose.\n>\n> I was wondering how I could execute my CREATE TABLE statements in a\n> parallel fashion given that they are independent from each other. If the\n> lock is the problem, what steps could I take to relax it? I would\n> greatly appreciate any guidance or insights on this topic.\n\nWell, I'm not altogether sure that your expectations are realistic.\nRarely do things parallelize perfectly. In a case like this, some time\nis probably being spent doing disk I/O. When multiple processes do CPU\nwork at the same time, you should be able to see near-linear speedup,\nbut when multiple processes do disk I/O at the same time, you may see\nno speedup at all, or even a slowdown, because of the way that disks\nwork. 
This seems especially likely given how short the queries are and\nthe fact that they create a new table, which involves an fsync()\noperation.\n\nIt's possible that if you run a query to select the wait events from\npg_stat_activity, maybe using psql's \\watch with a fractional value,\nyou might be able to see something about what those queries are\nactually spending time on. It's also possible that you might get more\ninteresting results if you have things that run for longer than a few\nhundred milliseconds. But in general I would question the assumption\nthat this ought to scale well.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 25 Feb 2020 16:04:14 +0530", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SPI Concurrency Precautions? Problems with Parallel Execution of\n Multiple CREATE TABLE statements" } ]
[ { "msg_contents": "As mentioned in https://postgr.es/m/20191231194759.GA24692@alvherre.pgsql\nI propose to add a new column to pg_trigger, which allows us to remove a\npg_depend scan when cloning triggers when adding/attaching partitions.\n(It's not that I think the scan is a performance problem, but rather\nthat notionally we try not to depend on pg_depend contents for this kind\nof semantic derivation.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W", "msg_date": "Mon, 17 Feb 2020 18:56:41 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "pg_trigger.tgparentid" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> As mentioned in https://postgr.es/m/20191231194759.GA24692@alvherre.pgsql\n> I propose to add a new column to pg_trigger, which allows us to remove a\n> pg_depend scan when cloning triggers when adding/attaching partitions.\n> (It's not that I think the scan is a performance problem, but rather\n> that notionally we try not to depend on pg_depend contents for this kind\n> of semantic derivation.)\n\nIt'd be nice if the term \"parent trigger\" were defined somewhere in\nthis. 
Seems all right otherwise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 Feb 2020 16:59:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_trigger.tgparentid" }, { "msg_contents": "On Tue, Feb 18, 2020 at 6:56 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> As mentioned in https://postgr.es/m/20191231194759.GA24692@alvherre.pgsql\n> I propose to add a new column to pg_trigger, which allows us to remove a\n> pg_depend scan when cloning triggers when adding/attaching partitions.\n> (It's not that I think the scan is a performance problem, but rather\n> than notionally we try not to depend on pg_depend contents for this kind\n> of semantic derivation.)\n\n@@ -16541,7 +16493,7 @@ CloneRowTriggersToPartition(Relation parent,\nRelation partition)\n *\n * However, if our parent is a partitioned relation, there might be\n\nThis is existing text, but should really be:\n\nHowever, if our parent is a *partition* itself, there might be\n\n(Sorry, I forgot to report this when the bug-fix went in couple months ago.)\n\n * internal triggers that need cloning. In that case, we must skip\n- * clone it if the trigger on parent depends on another trigger.\n+ * cloning it if the trigger on parent depends on another trigger.\n\n2nd sentence seems unclear to me. Does the following say what needs\nto be said here:\n\n * However, if our parent is a partition itself, there might be\n * internal triggers that need cloning. 
For example, triggers on the\n * parent that were in turn cloned from its own parent are marked\n * internal, which too must be cloned to the partition.\n\nThanks,\nAmit\n\n\n", "msg_date": "Tue, 18 Feb 2020 13:11:06 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_trigger.tgparentid" }, { "msg_contents": "On Tue, Feb 18, 2020 at 1:11 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Feb 18, 2020 at 6:56 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> @@ -16541,7 +16493,7 @@ CloneRowTriggersToPartition(Relation parent,\n> Relation partition)\n> *\n> * However, if our parent is a partitioned relation, there might be\n>\n> This is existing text, but should really be:\n>\n> However, if our parent is a *partition* itself, there might be\n>\n> (Sorry, I forgot to report this when the bug-fix went in couple months ago.)\n>\n> * internal triggers that need cloning. In that case, we must skip\n> - * clone it if the trigger on parent depends on another trigger.\n> + * cloning it if the trigger on parent depends on another trigger.\n>\n> 2nd sentence seems unclear to me. Does the following say what needs\n> to be said here:\n>\n> * However, if our parent is a partition itself, there might be\n> * internal triggers that need cloning. For example, triggers on the\n> * parent that were in turn cloned from its own parent are marked\n> * internal, which too must be cloned to the partition.\n\nOr:\n\n * However, if our parent is a partition itself, there might be\n * internal triggers that must not be skipped. 
For example, triggers\n * on the parent that were in turn cloned from its own parent are\n * marked internal, which must be cloned to the partition.\n\nThanks,\nAmit\n\n\n", "msg_date": "Wed, 19 Feb 2020 11:52:23 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_trigger.tgparentid" }, { "msg_contents": "On 2020-Feb-19, Amit Langote wrote:\n\n> Or:\n> \n> * However, if our parent is a partition itself, there might be\n> * internal triggers that must not be skipped. For example, triggers\n> * on the parent that were in turn cloned from its own parent are\n> * marked internal, which must be cloned to the partition.\n\nThanks for pointing this out -- I agree it needed rewording. I slightly\nadjusted your text like this:\n\n\t\t * Internal triggers require careful examination. Ideally, we don't\n\t\t * clone them. However, if our parent is itself a partition, there\n\t\t * might be internal triggers that must not be skipped; for example,\n\t\t * triggers on our parent that are in turn clones from its parent (our\n\t\t * grandparent) are marked internal, yet they are to be cloned.\n\nIs this okay for you?\n\nTom Lane wrote:\n\n> It'd be nice if the term \"parent trigger\" were defined somewhere in\n> this. Seems all right otherwise.\n\nFair point. 
I propose to patch catalog.sgml like this\n\n <entry>\n Parent trigger that this trigger is cloned from, zero if not a clone;\n this happens when partitions are created or attached to a partitioned\n table.\n </entry>\n\nIt's perhaps not great to have to explain the parentage concept in the\ncatalog docs, so I'm going to go over the other documentation pages\n(trigger.sgml and ref/create_trigger.sgml) to see whether they need any\npatching; it's possible that we neglected to update them properly when\nthe partitioning-related commits went in.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 24 Feb 2020 15:58:50 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_trigger.tgparentid" }, { "msg_contents": "Hi Alvaro,\n\nOn Tue, Feb 25, 2020 at 3:58 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Feb-19, Amit Langote wrote:\n>\n> > Or:\n> >\n> > * However, if our parent is a partition itself, there might be\n> > * internal triggers that must not be skipped. For example, triggers\n> > * on the parent that were in turn cloned from its own parent are\n> > * marked internal, which must be cloned to the partition.\n>\n> Thanks for pointing this out -- I agree it needed rewording. I slightly\n> adjusted your text like this:\n>\n> * Internal triggers require careful examination. Ideally, we don't\n> * clone them. However, if our parent is itself a partition, there\n> * might be internal triggers that must not be skipped; for example,\n> * triggers on our parent that are in turn clones from its parent (our\n> * grandparent) are marked internal, yet they are to be cloned.\n>\n> Is this okay for you?\n\nThanks. 
Your revised text looks good, except there is a typo:\n\nin turn clones -> in turn cloned\n\nThanks,\nAmit\n\n\n", "msg_date": "Tue, 25 Feb 2020 10:26:19 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_trigger.tgparentid" }, { "msg_contents": "On 2020-Feb-25, Amit Langote wrote:\n\n> On Tue, Feb 25, 2020 at 3:58 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > Thanks for pointing this out -- I agree it needed rewording. I slightly\n> > adjusted your text like this:\n> >\n> > * Internal triggers require careful examination. Ideally, we don't\n> > * clone them. However, if our parent is itself a partition, there\n> > * might be internal triggers that must not be skipped; for example,\n> > * triggers on our parent that are in turn clones from its parent (our\n> > * grandparent) are marked internal, yet they are to be cloned.\n> >\n> > Is this okay for you?\n> \n> Thanks. Your revised text looks good, except there is a typo:\n> \n> in turn clones -> in turn cloned\n\nActually, that was on purpose ... (I also changed \"were\" to \"are\" to match.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 25 Feb 2020 11:00:55 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_trigger.tgparentid" }, { "msg_contents": "On Tue, Feb 25, 2020 at 11:01 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> On 2020-Feb-25, Amit Langote wrote:\n> > On Tue, Feb 25, 2020 at 3:58 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > > Thanks for pointing this out -- I agree it needed rewording. I slightly\n> > > adjusted your text like this:\n> > >\n> > > * Internal triggers require careful examination. Ideally, we don't\n> > > * clone them. 
However, if our parent is itself a partition, there\n> > > * might be internal triggers that must not be skipped; for example,\n> > > * triggers on our parent that are in turn clones from its parent (our\n> > > * grandparent) are marked internal, yet they are to be cloned.\n> > >\n> > > Is this okay for you?\n> >\n> > Thanks. Your revised text looks good, except there is a typo:\n> >\n> > in turn clones -> in turn cloned\n>\n> Actually, that was on purpose ... (I also changed \"were\" to \"are\" to match.)\n\nAh, got it.\n\nThanks,\nAmit\n\n\n", "msg_date": "Tue, 25 Feb 2020 23:57:55 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_trigger.tgparentid" }, { "msg_contents": "Thanks both -- pushed. I also changed regress/sql/triggers to leave\ntables around that have a non-zero tgparentid. This ensures that the\npg_upgrade test sees such objects, as well as findoidjoins.\n\nI refrained from doing the findoidjoins dance itself, though; I got a\nlarge number of false positives that I think are caused by some pg12-era\nhacking.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 27 Feb 2020 13:26:26 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg_trigger.tgparentid" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Thanks both -- pushed. I also changed regress/sql/triggers to leave\n> tables around that have a non-zero tgparentid. This ensures that the\n> pg_upgrade test sees such objects, as well as findoidjoins.\n\n> I refrained from doing the findoidjoins dance itself, though; I got a\n> large number of false positives that I think are caused by some pg12-era\n> hacking.\n\nGenerally I try to update findoidjoins once per release cycle, after\nfeature freeze. I don't think it's worth messing with it more often\nthan that. 
But thanks for making sure there'll be data for it to find.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Feb 2020 11:32:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_trigger.tgparentid" } ]
[ { "msg_contents": "ALTER ... DEPENDS ON EXTENSION (dependencies of type 'x' on an\nextension) was found to have a few problems. One was fixed as\nCVE-2020-1720. Other issues:\n\n* pg_dump does not reproduce database state correctly.\n The attached 0000 fixes it.\n\n* More than one 'x' dependencies are allowed for the same object on the\n same extension. That's useless and polluting, so should be prevented.\n\n* There's no way to remove an 'x' dependency.\n\nI'll send patches for the other two issues as replies later. (I\ndiscovered an issue in my 0001, for the second one, just as I was\nsending.)\n\n-- \nÁlvaro Herrera", "msg_date": "Mon, 17 Feb 2020 19:53:33 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "more ALTER .. DEPENDS ON EXTENSION fixes" }, { "msg_contents": "On 2020-Feb-17, Alvaro Herrera wrote:\n\n> * More than one 'x' dependencies are allowed for the same object on the\n> same extension. That's useless and polluting, so should be prevented.\n> \n> * There's no way to remove an 'x' dependency.\n\nHere are these two patches. There's an \"if (true)\" in 0002 which is a\nlittle weird -- that's there just to avoid reindenting those lines in\n0003.\n\nIn principle, I would think that these are all backpatchable bugfixes.\nMaybe 0002 could pass as not backpatchable since it disallows a command\nthat works today. OTOH the feature is rarely used, so maybe a backpatch\nis not welcome anyhow.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 17 Feb 2020 20:56:03 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: more ALTER .. 
DEPENDS ON EXTENSION fixes" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nTested the pg_dump patch for dumping \"ALTER .. DEPENDS ON EXTENSION\" in case of indexes, functions, triggers etc. The \"ALTER .. DEPENDS ON EXTENSION\" is included in the dump. However in some cases I am not sure why \"ALTER INDEX.....DEPENDS ON EXTENSION\" is repeated several times in the dump?", "msg_date": "Fri, 28 Feb 2020 07:49:23 +0000", "msg_from": "ahsan hadi <ahsan.hadi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: more ALTER .. DEPENDS ON EXTENSION fixes" }, { "msg_contents": "On 2020-Feb-28, ahsan hadi wrote:\n\n\n> Tested the pg_dump patch for dumping \"ALTER .. DEPENDS ON EXTENSION\" in case of indexes, functions, triggers etc. The \"ALTER .. DEPENDS ON EXTENSION\" is included in the dump. However in some cases I am not sure why \"ALTER INDEX.....DEPENDS ON EXTENSION\" is repeated several times in the dump?\n\nHi, thanks for testing.\n\nAre the repeated commands for the same index, same extension? Did you\napply the same command multiple times before running pg_dump?\n\nThere was an off-list complaint that if you repeat the ALTER .. DEPENDS\nfor the same object on the same extension, then the same dependency is\nregistered multiple times. (You can search pg_depend for \"deptype = 'x'\"\nto see that). I suppose that would lead to the line being output\nmultiple times by pg_dump, also. Is that what you did?\n\nIf so: Patch 0002 is supposed to fix that problem, by raising an error\nif the dependency is already registered ... though it occurs to me now\nthat it would be more in line with custom to make the command a silent\nno-op. In fact, doing that would cause old dumps (generated with\ndatabases containing duplicated entries) to correctly restore a single\nentry, without error. 
Therefore my inclination now is to change 0002\nthat way and push and backpatch it ahead of 0001.\n\nI realize just now that I have failed to verify what happens with\npartitioned indexes.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 28 Feb 2020 10:44:51 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: more ALTER .. DEPENDS ON EXTENSION fixes" }, { "msg_contents": "On Sat, Feb 29, 2020 at 2:38 AM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2020-Feb-28, ahsan hadi wrote:\n>\n>\n> > Tested the pg_dump patch for dumping \"ALTER .. DEPENDS ON EXTENSION\" in\n> case of indexes, functions, triggers etc. The \"ALTER .. DEPENDS ON\n> EXTENSION\" is included in the dump. However in some case not sure why\n> \"ALTER INDEX.....DEPENDS ON EXTENSION\" is repeated several times in the\n> dump?\n>\n> Hi, thanks for testing.\n>\n> Are the repeated commands for the same index, same extension?\n\n\nYes same index and same extension...\n\n\n> Did you\n> apply the same command multiple times before running pg_dump?\n>\n\nYes but in some cases I applied the command once and it appeared multiple\ntimes in the dump..\n\n\n>\n> There was an off-list complaint that if you repeat the ALTER .. DEPENDS\n> for the same object on the same extension, then the same dependency is\n> registered multiple times. (You can search pg_depend for \"deptype = 'x'\"\n> to see that). I suppose that would lead to the line being output\n> multiple times by pg_dump, also. Is that what you did?\n>\n\nI checked out pg_depend for \"deptype='x'\" the same dependency is registered\nmultiple times...\n\n>\n> If so: Patch 0002 is supposed to fix that problem, by raising an error\n> if the dependency is already registered ... though it occurs to me now\n> that it would be more in line with custom to make the command a silent\n> no-op. 
In fact, doing that would cause old dumps (generated with\n> databases containing duplicated entries) to correctly restore a single\n> entry, without error. Therefore my inclination now is to change 0002\n> that way and push and backpatch it ahead of 0001.\n>\n\nMakes sense, will also try our Patch 0002.\n\n>\n> I realize just now that I have failed to verify what happens with\n> partitioned indexes.\n>\n\nYes I also missed this one..\n\n\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nEMAIL: mailto: ahsan.hadi@highgo.ca", "msg_date": "Mon, 2 Mar 2020 12:45:13 +0500", "msg_from": "Ahsan Hadi <ahsan.hadi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: more ALTER .. DEPENDS ON EXTENSION fixes" }, 
{ "msg_contents": "On Mon, Mar 2, 2020 at 12:45 PM Ahsan Hadi <ahsan.hadi@gmail.com> wrote:\n\n>\n>\n> On Sat, Feb 29, 2020 at 2:38 AM Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n>\n>> On 2020-Feb-28, ahsan hadi wrote:\n>>\n>>\n>> > Tested the pg_dump patch for dumping \"ALTER .. DEPENDS ON EXTENSION\" in\n>> case of indexes, functions, triggers etc. The \"ALTER .. DEPENDS ON\n>> EXTENSION\" is included in the dump. 
However in some case not sure why\n>> \"ALTER INDEX.....DEPENDS ON EXTENSION\" is repeated several times in the\n>> dump?\n>>\n>> Hi, thanks for testing.\n>>\n>> Are the repeated commands for the same index, same extension?\n>\n>\n> Yes same index and same extension...\n>\n\nYou cannot do that after applying all the patches.\n\n\n>\n>\n>> Did you\n>> apply the same command multiple times before running pg_dump?\n>>\n>\n> Yes but in some cases I applied the command once and it appeared multiple\n> times in the dump..\n>\n\nNot for me, it works for me.\n\n\n>\n>\n>>\n>> There was an off-list complaint that if you repeat the ALTER .. DEPENDS\n>> for the same object on the same extension, then the same dependency is\n>> registered multiple times. (You can search pg_depend for \"deptype = 'x'\"\n>> to see that). I suppose that would lead to the line being output\n>> multiple times by pg_dump, also. Is that what you did?\n>>\n>\n> I checked out pg_depend for \"deptype='x'\" the same dependency is\n> registered multiple times...\n>\n>>\n>> If so: Patch 0002 is supposed to fix that problem, by raising an error\n>> if the dependency is already registered ... though it occurs to me now\n>> that it would be more in line with custom to make the command a silent\n>> no-op. In fact, doing that would cause old dumps (generated with\n>> databases containing duplicated entries) to correctly restore a single\n>> entry, without error. 
Therefore my inclination now is to change 0002\n>> that way and push and backpatch it ahead of 0001.\n>>\n>\n> Makes sense, will also try our Patch 0002.\n>\n>>\n>> I realize just now that I have failed to verify what happens with\n>> partitioned indexes.\n>>\n>\n> Yes I also missed this one..\n>\n\nIt works for partitioned indexes.\n\n\nIs this intentional that there is no error when removing a non-existing\ndependency?", "msg_date": "Thu, 5 Mar 2020 23:20:43 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: more ALTER .. DEPENDS ON EXTENSION fixes" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nIt works for me", "msg_date": "Thu, 05 Mar 2020 18:23:12 +0000", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: more ALTER .. DEPENDS ON EXTENSION fixes" }, 
{ "msg_contents": "On 2020-Mar-05, Ibrar Ahmed wrote:\n\n> Is this intentional that there is no error when removing a non-existing\n> dependency?\n\nHmm, I think we can do nothing silently if nothing is called for.\nSo, yes, that seems to be the way it should work.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Mar 2020 15:38:35 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: more ALTER .. 
DEPENDS ON EXTENSION fixes" }, { "msg_contents": "On Thu, Mar 5, 2020 at 11:38 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2020-Mar-05, Ibrar Ahmed wrote:\n>\n> > Is this intentional that there is no error when removing a non-existing\n> > dependency?\n>\n> Hmm, I think we can do nothing silently if nothing is called for.\n> So, yes, that seems to be the way it should work.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nI think we need a tab-completion patch too for this.\n\n-- \nIbrar Ahmed", "msg_date": "Fri, 6 Mar 2020 00:05:00 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: more ALTER .. DEPENDS ON EXTENSION fixes" }, { "msg_contents": "On 2020-Mar-05, Alvaro Herrera wrote:\n\n> On 2020-Mar-05, Ibrar Ahmed wrote:\n> \n> > Is this intentional that there is no error when removing a non-existing\n> > dependency?\n> \n> Hmm, I think we can do nothing silently if nothing is called for.\n> So, yes, that seems to be the way it should work.\n\nI pushed 0002 to all branches (9.6+), after modifying it to silently do\nnothing instead of throwing an error when the dependency exists -- same\nwe discussed here, for the other form of the command.\nI just noticed that I failed to credit Ahsan Hadi as reviewer for this\npatch :-(\n\nThanks for reviewing. I'll see about 0001 next, also backpatched to\n9.6.\n\nI'm still not sure whether to apply 0003 (+ your tab-completion patch,\nthanks for it) to backbranches or just to master. 
It seems legitimate\nto see it as a feature addition, but OTOH the overall feature is not\ncomplete without it ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 11 Mar 2020 11:14:12 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: more ALTER .. DEPENDS ON EXTENSION fixes" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I'm still not sure whether to apply 0003 (+ your tab-completion patch,\n> thanks for it) to backbranches or just to master. It seems legitimate\n> to see it as a feature addition, but OTOH the overall feature is not\n> complete without it ...\n\n0003 is the command addition to allow removing such a dependency,\nright? Given the lack of field demand I see no reason to risk\nadding it to the back branches.\n\nBTW, I did not like the syntax too much. \"NO DEPENDS ON EXTENSION\"\ndoesn't seem like good English. \"NOT DEPENDS ON EXTENSION\" is hardly\nany better. The real problem with both is that an ALTER action should\nbe, well, an action. A grammar stickler would say that it should be\n\"ALTER thing DROP DEPENDENCY ON EXTENSION ext\", but perhaps we could\nget away with \"ALTER thing DROP DEPENDS ON EXTENSION ext\" to avoid\nadding a new keyword. By that logic the original command should have\nbeen \"ALTER thing ADD DEPENDS ON EXTENSION ext\", but I suppose it's\ntoo late for that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Mar 2020 10:30:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: more ALTER .. DEPENDS ON EXTENSION fixes" }, { "msg_contents": "Thanks for the reviews; I pushed 0001 now, again to all branches since\n9.6. Because of the previous commit, the fact that multiple statements\nare emitted is not important anymore: the server will only restore the\nfirst one, and silently ignore subsequent ones. 
And once you're using a\nsystem in that state, naturally only one will be emitted by pg_dump in\nall cases.\n\nWhat remains on this CF item is the new feature to remove an existing\ndependency. As Tom says, given the little use this feature gets it\ndoesn't sound worth the destabilization risk in back branches, so I'm\ngoing to push it only to master -- but not yet.\n\nThanks,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 11 Mar 2020 17:06:48 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: more ALTER .. DEPENDS ON EXTENSION fixes" }, { "msg_contents": "On 2020-Mar-11, Tom Lane wrote:\n\n> > thanks for it) to backbranches or just to master. It seems legitimate\n> > to see it as a feature addition, but OTOH the overall feature is not\n> > complete without it ...\n> \n> 0003 is the command addition to allow removing such a dependency,\n> right? Given the lack of field demand I see no reason to risk\n> adding it to the back branches.\n\nYeah, okay.\n\nI hereby request permission to push this patch past the feature freeze\ndate; it's a very small one that completes an existing feature (rather\nthan a complete new feature in itself), and it's not intrusive nor\nlikely to break anything.\n\n> BTW, I did not like the syntax too much. \"NO DEPENDS ON EXTENSION\"\n> doesn't seem like good English. \"NOT DEPENDS ON EXTENSION\" is hardly\n> any better. The real problem with both is that an ALTER action should\n> be, well, an action. A grammar stickler would say that it should be\n> \"ALTER thing DROP DEPENDENCY ON EXTENSION ext\", but perhaps we could\n> get away with \"ALTER thing DROP DEPENDS ON EXTENSION ext\" to avoid\n> adding a new keyword. 
By that logic the original command should have\n> been \"ALTER thing ADD DEPENDS ON EXTENSION ext\", but I suppose it's\n> too late for that.\n\nI will be submitting a version with these changes shortly.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Apr 2020 14:38:37 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: more ALTER .. DEPENDS ON EXTENSION fixes" }, { "msg_contents": "On 2020-Mar-11, Tom Lane wrote:\n\n> BTW, I did not like the syntax too much. \"NO DEPENDS ON EXTENSION\"\n> doesn't seem like good English. \"NOT DEPENDS ON EXTENSION\" is hardly\n> any better. The real problem with both is that an ALTER action should\n> be, well, an action. A grammar stickler would say that it should be\n> \"ALTER thing DROP DEPENDENCY ON EXTENSION ext\", but perhaps we could\n> get away with \"ALTER thing DROP DEPENDS ON EXTENSION ext\" to avoid\n> adding a new keyword. By that logic the original command should have\n> been \"ALTER thing ADD DEPENDS ON EXTENSION ext\", but I suppose it's\n> too late for that.\n\nThe problem with DROP DEPENDS is alter_table_cmd, which already defines\n\"DROP opt_column ColId\", so there's a reduce/reduce conflict for the\nALTER INDEX and ALTER MATERIALIZED VIEW forms because \"depends\" could be\na column name. (It works fine for ALTER FUNCTION/ROUTINE/PROCEDURE/TRIGGER\nbecause there's no command that tries to define a conflicting DROP form\nfor these.)\n\nIt works if I change DEPENDS to be type_func_name_keyword (currently\nunreserved_keyword), but I bet we won't like that.\n\n(DEPENDENCY is not a keyword of any kind, so DROP DEPENDENCY require us\nmaking it one of high reservedness, which I suspect we don't like\neither).\n\nIt would also work to use a different keyword in the DROP position;\nmaybe REMOVE. But that's not a keyword currently.\n\nHow about ALTER .. 
REVOKE DEPENDS or DELETE DEPENDS? Bison is okay\nwith either of those forms.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Apr 2020 19:30:19 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: more ALTER .. DEPENDS ON EXTENSION fixes" }, { "msg_contents": "As promised, here's a rebased version of this patch -- edits pending per\ndiscussion to decide the grammar to use.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 9 Apr 2020 19:49:07 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: more ALTER .. DEPENDS ON EXTENSION fixes" }, { "msg_contents": "I pushed this (to pg13 only) using the originally proposed \"NO DEPENDS\"\nsyntax. It's trivial to change to REVOKE DEPENDS or REMOVE DEPENDS if\nwe decide to do that.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 20 Apr 2020 13:54:51 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: more ALTER .. DEPENDS ON EXTENSION fixes" }, { "msg_contents": "On 2020-Mar-06, Ibrar Ahmed wrote:\n\n> I think we need a tab-completion patch too for this.\n\nThanks, I pushed this.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 20 Apr 2020 13:55:21 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: more ALTER .. DEPENDS ON EXTENSION fixes" } ]
[ { "msg_contents": "Hi,\n\nRecoveryWalAll and RecoveryWalStream wait events are documented as follows.\n\n RecoveryWalAll\n Waiting for WAL from any kind of source (local, archive or stream) at recovery.\n\n RecoveryWalStream\n Waiting for WAL from a stream at recovery.\n\nBut as far as I read the code, RecoveryWalAll is reported only when waiting\nfor WAL from a stream. So the current description looks incorrect. What's\ndescribed now for RecoveryWalStream seems rather fit to RecoveryWalAll.\nI'd like to change the description of RecoveryWalAll to \"Waiting for WAL\n from a stream at recovery\".\n\nRegarding RecoveryWalStream, as far as I read the code, while this event is\nbeing reported, the startup process is waiting for next trial to retrieve\nWAL data when WAL data is not available from any sources, based on\nwal_retrieve_retry_interval. So this current description looks also\nincorrect. I'd like to change it to \"Waiting when WAL data is not available\n from any kind of sources (local, archive or stream) before trying again\n to retrieve WAL data\".\n\nThought?\n\nAlso the current names of these wait events sound confusing. I think\nthat RecoveryWalAll should be changed to RecoveryWalStream.\nRecoveryWalStream should be RecoveryRetrieveRetryInterval or\nsomething.\n\nAnother problem is that the current wait event types of them also look\nstrange. Currently the type of them is Activity, but IMO it's better to\nuse IPC for RecoveryWalAll because it's waiting for walreceiver to\nreceive new WAL. Also it's better to use Timeout for RecoveryWalStream\nbecause it's waiting depending on wal_retrieve_retry_interval.\n\nThe changes of wait event types and names would break the compatibility\nof wait events in pg_stat_activity. So this change should not be applied\nto the back branches, but it's ok to apply in the master. 
Right?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Tue, 18 Feb 2020 12:25:51 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "RecoveryWalAll and RecoveryWalStream wait events" }, { "msg_contents": "Hello.\n\nAt Tue, 18 Feb 2020 12:25:51 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Hi,\n> \n> RecoveryWalAll and RecoveryWalStream wait events are documented as\n> follows.\n> \n> RecoveryWalAll\n> Waiting for WAL from any kind of source (local, archive or stream) at\n> recovery.\n> \n> RecoveryWalStream\n> Waiting for WAL from a stream at recovery.\n> \n> But as far as I read the code, RecoveryWalAll is reported only when\n> waiting\n> for WAL from a stream. So the current description looks\n> incorrect. What's\n> described now for RecoveryWalStream seems rather fit to\n> RecoveryWalAll.\n> I'd like to change the description of RecoveryWalAll to \"Waiting for\n> WAL\n> from a stream at recovery\".\n\nGood catch!\n\n> Regarding RecoveryWalStream, as far as I read the code, while this\n> event is\n> being reported, the startup process is waiting for next trial to\n> retrieve\n> WAL data when WAL data is not available from any sources, based on\n> wal_retrieve_retry_interval. So this current description looks also\n> incorrect. I'd like to change it to \"Waiting when WAL data is not\n> available\n> from any kind of sources (local, archive or stream) before trying\n> again\n> to retrieve WAL data\".\n> \n> Thought?\n\nI agree that the corrected description sound correct in meaning. The\nlatter seems a bit lengthy, though.\n\n> Also the current names of these wait events sound confusing. 
I think\n> that RecoveryWalAll should be changed to RecoveryWalStream.\n> RecoveryWalStream should be RecoveryRetrieveRetryInterval or\n> something.\n\nI agree to the former, I think RecoveryWalInterval works well enough.\n\n> Another problem is that the current wait event types of them also look\n> strange. Currently the type of them is Activity, but IMO it's better\n> to\n> use IPC for RecoveryWalAll because it's waiting for walreceiver to\n> receive new WAL. Also it's better to use Timeout for RecoveryWalStream\n> because it's waiting depending on wal_retrieve_retry_interval.\n\nDo you mean condition variable by the \"IPC\"? But the WaitLatch waits\nnot only for new WAL but also for trigger, SIGHUP, shutdown and\nwalreceiver events other than new WAL. I'm not sure that condition\nvariable fits for the purpose.\n\n> The changes of wait event types and names would break the\n> compatibility\n> of wait events in pg_stat_activity. So this change should not be\n> applied\n> to the back branches, but it's ok to apply in the master. Right?\n\nFWIW, It seems right.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 18 Feb 2020 14:20:16 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RecoveryWalAll and RecoveryWalStream wait events" }, { "msg_contents": "On 2020/02/18 14:20, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> At Tue, 18 Feb 2020 12:25:51 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> Hi,\n>>\n>> RecoveryWalAll and RecoveryWalStream wait events are documented as\n>> follows.\n>>\n>> RecoveryWalAll\n>> Waiting for WAL from any kind of source (local, archive or stream) at\n>> recovery.\n>>\n>> RecoveryWalStream\n>> Waiting for WAL from a stream at recovery.\n>>\n>> But as far as I read the code, RecoveryWalAll is reported only when\n>> waiting\n>> for WAL from a stream. So the current description looks\n>> incorrect. 
What's\n>> described now for RecoveryWalStream seems rather fit to\n>> RecoveryWalAll.\n>> I'd like to change the description of RecoveryWalAll to \"Waiting for\n>> WAL\n>> from a stream at recovery\".\n> \n> Good catch!\n> \n>> Regarding RecoveryWalStream, as far as I read the code, while this\n>> event is\n>> being reported, the startup process is waiting for next trial to\n>> retrieve\n>> WAL data when WAL data is not available from any sources, based on\n>> wal_retrieve_retry_interval. So this current description looks also\n>> incorrect. I'd like to change it to \"Waiting when WAL data is not\n>> available\n>> from any kind of sources (local, archive or stream) before trying\n>> again\n>> to retrieve WAL data\".\n>>\n>> Thought?\n> \n> I agree that the corrected description sound correct in meaning. The\n> latter seems a bit lengthy, though.\n\nYeah, so better idea?\n\nAnyway, attached is the patch (fix_recovery_wait_event_doc_v1.patch)\nthat fixes the descriptions of those wait events. This should be\nback-patched to v9.5.\n\n>> Also the current names of these wait events sound confusing. I think\n>> that RecoveryWalAll should be changed to RecoveryWalStream.\n>> RecoveryWalStream should be RecoveryRetrieveRetryInterval or\n>> something.\n> \n> I agree to the former, I think RecoveryWalInterval works well enough.\n\nRecoveryWalInterval sounds confusing to me...\n\nAttached is the patch (improve_recovery_wait_event_for_master_v1.patch) that\nchanges the names and types of wait events. This patch uses\nRecoveryRetrieveRetryInterval, but if there is better name,\nI will adopt that.\n\nNote that this patch needs to be applied after\nfix_recovery_wait_event_doc_v1.patch is applied.\n\n>> Another problem is that the current wait event types of them also look\n>> strange. Currently the type of them is Activity, but IMO it's better\n>> to\n>> use IPC for RecoveryWalAll because it's waiting for walreceiver to\n>> receive new WAL. 
Also it's better to use Timeout for RecoveryWalStream\n>> because it's waiting depending on wal_retrieve_retry_interval.\n> \n> Do you mean condition variable by the \"IPC\"? But the WaitLatch waits\n> not only for new WAL but also for trigger, SIGHUP, shutdown and\n> walreceiver events other than new WAL. I'm not sure that condition\n> variable fits for the purpose.\n\nOK, I didn't change the type of RecoveryWalStream to IPC, in the patch.\nIts type is still Activity.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters", "msg_date": "Wed, 19 Feb 2020 21:45:36 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RecoveryWalAll and RecoveryWalStream wait events" }, 
{ "msg_contents": "On 2020/02/19 21:46 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>> I agree to the former, I think RecoveryWalInterval works well enough.\n>RecoveryWalInterval sounds confusing to me...\n\nIMHO as a user, I prefer RecoveryRetrieveRetryInterval because\nit's easy to understand this wait_event is related to the\nparameter 'wal_retrieve_retry_interval'.\n\nAlso from the point of balance, the explanation of\nRecoveryRetrieveRetryInterval is lengthy, but I\nsometimes feel explanations of wait_events in the\nmanual are so simple that it's hard to understand\nwell.\n\n\n> Waiting when WAL data is not available from any kind of sources\n> (local, archive or stream) before trying again to retrieve WAL data,\n\nI think 'local' means pg_wal here, but the comment on\nWaitForWALToBecomeAvailable() says checking pg_wal in\nstandby mode is 'not documented', so I'm a little bit worried\nthat users may be confused.\n\nRegards,\n--\nTorikoshi Atsushi", "msg_date": "Sun, 15 Mar 2020 00:06:12 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RecoveryWalAll and RecoveryWalStream wait events" }, 
{ "msg_contents": "\n\nOn 2020/03/15 0:06, Atsushi Torikoshi wrote:\n> On 2020/02/19 21:46 Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>>:\n> >> I agree to the former, I think RecoveryWalInterval works well enough.\n> >RecoveryWalInterval sounds confusing to me...\n> \n> IMHO as a user, I prefer RecoveryRetrieveRetryInterval because\n> it's easy to understand this wait_event is related to the\n> parameter 'wal_retrieve_retry_interval'.\n> \n> Also from the point of balance, the explanation of\n> RecoveryRetrieveRetryInterval is lengthy, but I\n> sometimes feel explanations of wait_events in the\n> manual are so simple that it's hard to understand\n> well.\n\n+1 to document them more. 
It's not easy task, though..\n\n> >    Waiting when WAL data is not available from any kind of sources\n> >    (local, archive or stream) before trying again to retrieve WAL data,\n> \n> I think 'local' means pg_wal here, but the comment on\n> WaitForWALToBecomeAvailable() says checking pg_wal in\n> standby mode is 'not documented', so I'm a little bit worried\n> that users may be confused.\n\nThis logic seems to be documented in high-availability.sgml.\nBut, anyway, you think that \"pg_wal\" should be used instead of \"local\" here?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Tue, 17 Mar 2020 11:55:43 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RecoveryWalAll and RecoveryWalStream wait events" }, { "msg_contents": "On Tue, Mar 17, 2020 at 11:55 AM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n> > > Waiting when WAL data is not available from any kind of sources\n> > > (local, archive or stream) before trying again to retrieve WAL\n> data,\n> >\n> > I think 'local' means pg_wal here, but the comment on\n> > WaitForWALToBecomeAvailable() says checking pg_wal in\n> > standby mode is 'not documented', so I'm a little bit worried\n> > that users may be confused.\n>\n> This logic seems to be documented in high-availability.sgml.\n\n\nThanks! 
I didn't notice it.\nI think you mean the below sentence.\n\n> The standby server will also attempt to restore any WAL found in the\nstandby cluster's pg_wal directory.\n\nIt seems the comment on WaitForWALToBecomeAvailable()\ndoes not go along with the high-availability.sgml, do we need\nmodification on the comment on the function?\nOr do I misunderstand something?\n\nBut, anyway, you think that \"pg_wal\" should be used instead\n\nof \"local\" here?\n\n\nI don't have special opinion here.\nIt might be better because high-availability.sgml does not use\n\"local\" but \"pg_wal\" for the explanation, but I also feel it's\nobvious in this context.\n\n\nRegards,\n\n--\nTorikoshi Atsushi", "msg_date": "Wed, 18 Mar 2020 17:56:38 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RecoveryWalAll and RecoveryWalStream wait events" }, { "msg_contents": "\n\nOn 2020/03/18 17:56, Atsushi Torikoshi wrote:\n> \n> \n> On Tue, Mar 17, 2020 at 11:55 AM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> \n> >  >    Waiting when WAL data is not available from any kind of sources\n> >  >    (local, archive or stream) before trying again to retrieve WAL data,\n> >\n> > I think 'local' means pg_wal here, but the comment on\n> > WaitForWALToBecomeAvailable() says checking pg_wal in\n> > standby mode is 'not documented', so I'm a little bit worried\n> > that users may be confused.\n> \n> This logic seems to be documented in high-availability.sgml.\n\nI meant the following part in the doc.\n\n---------------------\nAt startup, the standby begins by restoring all WAL available in the archive\nlocation, calling restore_command. 
Once it reaches the end of WAL available\nthere and restore_command fails, it tries to restore any WAL available in the\npg_wal directory. If that fails, and streaming replication has been configured,\nthe standby tries to connect to the primary server and start streaming WAL from\nthe last valid record found in archive or pg_wal. If that fails or streaming\nreplication is not configured, or if the connection is later disconnected,\nthe standby goes back to step 1 and tries to restore the file from the archive\nagain. This loop of retries from the archive, pg_wal, and via streaming\nreplication goes on until the server is stopped or failover is triggered by a\ntrigger file.\n---------------------\n\n> It seems the comment on WaitForWALToBecomeAvailable()\n> does not go along with the high-availability.sgml, do we need\n> modification on the comment on the function?\n\nNo, I think for now. But you'd like to improve the docs?\n\n> But, anyway, you think that \"pg_wal\" should be used instead \n> \n> of \"local\" here?\n> \n> \n> I don't have special opinion here.\n> It might be better because high-availability.sgml does not use\n> \"local\" but \"pg_wal\" for the explanation,  but I also feel it's\n> obvious in this context.\n\nOk, I changed that from \"local\" to \"pg_wal\" in the patch for\nthe master. 
Attached is the updated version of the patch.\nIf you're OK with this, I'd like to commit two patches that I posted\nin this thread.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters", "msg_date": "Wed, 18 Mar 2020 18:59:51 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RecoveryWalAll and RecoveryWalStream wait events" }, { "msg_contents": "On Wed, Mar 18, 2020 at 6:59 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n> I meant the following part in the doc.\n>\n> ---------------------\n> At startup, the standby begins by restoring all WAL available in the\n> archive\n> location, calling restore_command. Once it reaches the end of WAL available\n> there and restore_command fails, it tries to restore any WAL available in\n> the\n> pg_wal directory. If that fails, and streaming replication has been\n> configured,\n> the standby tries to connect to the primary server and start streaming WAL\n> from\n> the last valid record found in archive or pg_wal. If that fails or\n> streaming\n> replication is not configured, or if the connection is later disconnected,\n> the standby goes back to step 1 and tries to restore the file from the\n> archive\n> again. This loop of retries from the archive, pg_wal, and via streaming\n> replication goes on until the server is stopped or failover is triggered\n> by a\n> trigger file.\n> ---------------------\n>\n>\nThanks!\n\n\n> > It seems the comment on WaitForWALToBecomeAvailable()\n> > does not go along with the high-availability.sgml, do we need\n> > modification on the comment on the function?\n>\n> No, I think for now. 
But you'd like to improve the docs?\n>\n\nI'll do it.\n\n\n> >     But, anyway, you think that \"pg_wal\" should be used instead\n> >\n> >     of \"local\" here?\n> >\n> >\n> > I don't have special opinion here.\n> > It might be better because high-availability.sgml does not use\n> > \"local\" but \"pg_wal\" for the explanation, but I also feel it's\n> > obvious in this context.\n>\n> Ok, I changed that from \"local\" to \"pg_wal\" in the patch for\n> the master. Attached is the updated version of the patch.\n> If you're OK with this, I'd like to commit two patches that I posted\n> in this thread.\n\n\n Thanks for your modification and it looks good to me.\n\nRegards,\n\n--\nTorikoshi Atsushi", "msg_date": "Wed, 18 Mar 2020 22:37:21 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: RecoveryWalAll and RecoveryWalStream wait events" }, { "msg_contents": "\n\nOn 2020/03/18 22:37, Atsushi Torikoshi wrote:\n> \n> On Wed, Mar 18, 2020 at 6:59 PM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> \n> \n>     I meant the following part in the doc.\n> \n>     ---------------------\n>     At startup, the standby begins by restoring all WAL available in the archive\n>     location, calling restore_command. Once it reaches the end of WAL available\n>     there and restore_command fails, it tries to restore any WAL available in the\n>     pg_wal directory. If that fails, and streaming replication has been configured,\n>     the standby tries to connect to the primary server and start streaming WAL from\n>     the last valid record found in archive or pg_wal. If that fails or streaming\n>     replication is not configured, or if the connection is later disconnected,\n>     the standby goes back to step 1 and tries to restore the file from the archive\n>     again. 
This loop of retries from the archive, pg_wal, and via streaming\n> replication goes on until the server is stopped or failover is triggered by a\n> trigger file.\n> ---------------------\n> \n> \n> Thanks!\n> \n> > It seems the comment on WaitForWALToBecomeAvailable()\n> > does not go along with the high-availability.sgml, do we need\n> > modification on the comment on the function?\n> \n> No, I think for now. But you'd like to improve the docs?\n> \n> \n> I'll do it.\n> \n> >     But, anyway, you think that \"pg_wal\" should be used instead\n> >\n> >     of \"local\" here?\n> >\n> >\n> > I don't have special opinion here.\n> > It might be better because high-availability.sgml does not use\n> > \"local\" but \"pg_wal\" for the explanation,  but I also feel it's\n> > obvious in this context.\n> \n> Ok, I changed that from \"local\" to \"pg_wal\" in the patch for\n> the master. Attached is the updated version of the patch.\n> If you're OK with this, I'd like to commit two patches that I posted\n> in this thread.\n> \n> \n>  Thanks for your modification and it looks good to me.\n\nPushed! Thanks a lot!\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 19 Mar 2020 15:34:49 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: RecoveryWalAll and RecoveryWalStream wait events" } ]
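The retry behavior quoted from high-availability.sgml in the thread above (try the archive, then pg_wal, then streaming, then go back to step 1) can be sketched as a toy state machine. This is an illustrative Python simulation only — the function and constant names below are invented for this sketch and are not PostgreSQL code:

```python
# Toy model of the standby's WAL-source retry loop described in the
# quoted documentation: archive -> pg_wal -> stream -> back to archive.
ARCHIVE, PG_WAL, STREAM = "archive", "pg_wal", "stream"

def next_source(current, streaming_configured=True):
    """Return the WAL source to try after `current` has failed."""
    if current == ARCHIVE:
        return PG_WAL
    if current == PG_WAL:
        return STREAM if streaming_configured else ARCHIVE
    return ARCHIVE  # streaming failed or disconnected: back to step 1

def retry_sequence(start, failures, streaming_configured=True):
    """List the sources tried for a given number of consecutive failures."""
    tried = [start]
    for _ in range(failures):
        tried.append(next_source(tried[-1], streaming_configured))
    return tried
```

As in the documentation, the loop only terminates from the outside (server stop or failover); the sketch just enumerates the order in which sources are attempted.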
[ { "msg_contents": "Hello.\n\nI found it quite annoying that it stops with complaining as \"unused\ndefines\" during repeated execution of build.pl. The subroutine\nGenerateConfigHeader prepares %defines_copy before checking the\nnewness of $config_header and even if it decides not to generate new\none, the following code makes sure if the %defines_copy is empty, then\nof course it fails with the message.\n\nThe attached fixes that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 18 Feb 2020 16:05:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "False failure during repeated windows build." }, { "msg_contents": "On Tue, Feb 18, 2020 at 8:06 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n>\n> The attached fixes that.\n\n\nAfter commit 9573384 this patch no longer applies, but with a trivial\nrebase it fixes the issue.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Fri, 21 Feb 2020 14:02:40 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: False failure during repeated windows build." }, { "msg_contents": "At Fri, 21 Feb 2020 14:02:40 +0100, Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> wrote in \n> After commit 9573384 this patch no longer applies, but with a trivial\n> rebase it fixes the issue.\n\nThanks! This is the rebased version. 
I'll register this to the next CF.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 25 Feb 2020 10:14:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: False failure during repeated windows build." }, { "msg_contents": "On Tue, Feb 25, 2020 at 10:14:10AM +0900, Kyotaro Horiguchi wrote:\n> At Fri, 21 Feb 2020 14:02:40 +0100, Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> wrote in \n> > After commit 9573384 this patch no longer applies, but with a trivial\n> > rebase it fixes the issue.\n> \n> Thanks! This is the rebased version. I'll register this to the next CF.\n\nThat's annoying, and you are right. So, committed.\n--\nMichael", "msg_date": "Tue, 25 Feb 2020 14:02:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: False failure during repeated windows build." }, { "msg_contents": "At Tue, 25 Feb 2020 14:02:04 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Feb 25, 2020 at 10:14:10AM +0900, Kyotaro Horiguchi wrote:\n> > At Fri, 21 Feb 2020 14:02:40 +0100, Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> wrote in \n> > > After commit 9573384 this patch no longer applies, but with a trivial\n> > > rebase it fixes the issue.\n> > \n> > Thanks! This is the rebased version. I'll register this to the next CF.\n> \n> That's annoying, and you are right. So, committed.\n\nThank you for committing.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 25 Feb 2020 16:00:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: False failure during repeated windows build." } ]
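The control-flow bug this thread fixes — GenerateConfigHeader consuming %defines_copy and then complaining about "unused defines" even on the path where it had decided the header was already up to date — comes down to running a validation step on a code path that skips regeneration. A hypothetical Python analogue of the corrected flow (the real script is Perl; all names here are illustrative, not the actual build.pl code):

```python
def generate_config_header(defines, header_mtime, template_mtime, known_keys):
    """Regenerate the header only when the template is newer, and validate
    leftover defines only on the path that actually regenerates."""
    if header_mtime is not None and header_mtime >= template_mtime:
        return "up-to-date"          # skip: must NOT check for unused defines here
    defines_copy = dict(defines)     # consumed while emitting the header
    for key in known_keys:
        defines_copy.pop(key, None)
    if defines_copy:                 # only reached when we really regenerated
        raise ValueError("unused defines: %s" % sorted(defines_copy))
    return "regenerated"
```

The buggy version performed the "unused defines" check unconditionally, so a second run against an already-generated header failed spuriously — which is the repeated-build failure reported above.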
[ { "msg_contents": "Hi all,\n\nWhen recovery conflicts happen on the streaming replication standby,\nthe wait event of startup process is null when\nmax_standby_streaming_delay = 0 (to be exact, when the limit time\ncalculated by max_standby_streaming_delay is behind the last WAL data\nreceipt time). Moreover the process title of waiting startup\nprocess looks odd in the case of lock conflicts.\n\n1. When max_standby_streaming_delay > 0 and the startup process\nconflicts with a lock,\n\n* wait event\n backend_type | wait_event_type | wait_event\n--------------+-----------------+------------\n startup | Lock | relation\n(1 row)\n\n* ps\n42513 ?? Ss 0:00.05 postgres: startup recovering\n000000010000000000000003 waiting\n\nLooks good.\n\n2. When max_standby_streaming_delay > 0 and the startup process\nconflicts with a snapshot,\n\n* wait event\n backend_type | wait_event_type | wait_event\n--------------+-----------------+------------\n startup | |\n(1 row)\n\n* ps\n44299 ?? Ss 0:00.05 postgres: startup recovering\n000000010000000000000003 waiting\n\nwait_event_type and wait_event are null in spite of waiting for\nconflict resolution.\n\n3. When max_standby_streaming_delay = 0 and the startup process\nconflicts with a lock,\n\n* wait event\n backend_type | wait_event_type | wait_event\n--------------+-----------------+------------\n startup | |\n(1 row)\n\n* ps\n46510 ?? Ss 0:00.05 postgres: startup recovering\n000000010000000000000003 waiting waiting\n\nwait_event_type and wait_event are null and the process title is\nwrong; \"waiting\" appears twice.\n\nThe cause of the first problem, wait_event_type and wait_event are not\nset, is that WaitExceedsMaxStandbyDelay which is called by\nResolveRecoveryConflictWithVirtualXIDs waits for other transactions\nusing pg_usleep rather than WaitLatch. 
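The cause named here — sleeping with pg_usleep(), which never advertises a wait event, instead of WaitLatch(), which takes a wait-event argument — can be pictured with a toy model. The dictionary below is only a stand-in for pgstat wait-event reporting; the function names are invented for this illustration and are not backend code:

```python
# What a concurrent pg_stat_activity reader would observe for the
# startup process during each style of wait (toy simulation).
observed = {"wait_event": None}

def sleep_like_pg_usleep(history):
    # pg_usleep()-style wait: nothing is reported, so the reader
    # sees wait_event = NULL for the whole wait.
    history.append(observed["wait_event"])

def wait_like_waitlatch(history, wait_event):
    # WaitLatch()-style wait: advertise the wait event for the
    # duration of the wait, then clear it afterwards.
    observed["wait_event"] = wait_event
    history.append(observed["wait_event"])
    observed["wait_event"] = None

seen_during_usleep = []
sleep_like_pg_usleep(seen_during_usleep)

seen_during_waitlatch = []
wait_like_waitlatch(seen_during_waitlatch, "RecoveryConflictSnapshot")
```

This is why switching the retry sleep to WaitLatch() and having the callers pass wait-event information makes the startup process's waits visible in monitoring views.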
I think we can change it so\nthat it uses WaitLatch and those caller passes wait event information.\n\nFor the second problem, wrong process title, the cause is also\nrelevant with ResolveRecoveryConflictWithVirtualXIDs; in case of lock\nconflicts we add \"waiting\" to the process title in WaitOnLock but we\nadd it again in ResolveRecoveryConflictWithVirtualXIDs. I think we can\nhave WaitOnLock not set process title in recovery case.\n\nThis problem exists on 12, 11 and 10. I'll submit the patch.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 18 Feb 2020 17:58:13 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Some problems of recovery conflict wait events" }, { "msg_contents": "On Tue, 18 Feb 2020 at 17:58, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> Hi all,\n>\n> When recovery conflicts happen on the streaming replication standby,\n> the wait event of startup process is null when\n> max_standby_streaming_delay = 0 (to be exact, when the limit time\n> calculated by max_standby_streaming_delay is behind the last WAL data\n> receipt time is behind). Moreover the process title of waiting startup\n> process looks odd in the case of lock conflicts.\n>\n> 1. When max_standby_streaming_delay > 0 and the startup process\n> conflicts with a lock,\n>\n> * wait event\n> backend_type | wait_event_type | wait_event\n> --------------+-----------------+------------\n> startup | Lock | relation\n> (1 row)\n>\n> * ps\n> 42513 ?? Ss 0:00.05 postgres: startup recovering\n> 000000010000000000000003 waiting\n>\n> Looks good.\n>\n> 2. When max_standby_streaming_delay > 0 and the startup process\n> conflicts with a snapshot,\n>\n> * wait event\n> backend_type | wait_event_type | wait_event\n> --------------+-----------------+------------\n> startup | |\n> (1 row)\n>\n> * ps\n> 44299 ?? 
Ss 0:00.05 postgres: startup recovering\n> 000000010000000000000003 waiting\n>\n> wait_event_type and wait_event are null in spite of waiting for\n> conflict resolution.\n>\n> 3. When max_standby_streaming_delay > 0 and the startup process\n> conflicts with a lock,\n>\n> * wait event\n> backend_type | wait_event_type | wait_event\n> --------------+-----------------+------------\n> startup | |\n> (1 row)\n>\n> * ps\n> 46510 ?? Ss 0:00.05 postgres: startup recovering\n> 000000010000000000000003 waiting waiting\n>\n> wait_event_type and wait_event are null and the process title is\n> wrong; \"waiting\" appears twice.\n>\n> The cause of the first problem, wait_event_type and wait_event are not\n> set, is that WaitExceedsMaxStandbyDelay which is called by\n> ResolveRecoveryConflictWithVirtualXIDs waits for other transactions\n> using pg_usleep rather than WaitLatch. I think we can change it so\n> that it uses WaitLatch and those caller passes wait event information.\n>\n> For the second problem, wrong process title, the cause is also\n> relevant with ResolveRecoveryConflictWithVirtualXIDs; in case of lock\n> conflicts we add \"waiting\" to the process title in WaitOnLock but we\n> add it again in ResolveRecoveryConflictWithVirtualXIDs. I think we can\n> have WaitOnLock not set process title in recovery case.\n>\n> This problem exists on 12, 11 and 10. I'll submit the patch.\n>\n\nI've attached patches that fix the above two issues.\n\n0001 patch fixes the first problem. Currently there are 5 types of\nrecovery conflict resolution: snapshot, tablespace, lock, database and\nbuffer pin, and we set wait events to only 2 events out of 5: lock\n(only when doing ProcWaitForSignal) and buffer pin. Therefore, users\ncannot know that the startup process is waiting or not, and what\nwaiting for. This patch sets wait events to more 3 events: snapshot,\ntablespace and lock. 
For wait events of those 3 events, I thought that\nwe can create a new more appropriate wait event type, say\nRecoveryConflict, and set it for them. However, considering\nback-patching to existing versions, adding new wait event type would\nnot be acceptable. So this patch sets existing wait events such as\nPG_WAIT_LOCK to those 3 places and doesn't set a wait event for\nconflict resolution on dropping database because there is not an\nappropriate existing one. I'll start a separate thread about\nimprovement on wait events of recovery conflict resolution for PG13 if\nnecessary.\n\n0002 patch fixes the second problem. With this patch, the process\ntitle is updated properly in all recovery conflict resolution cases.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 26 Feb 2020 16:19:09 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On Wed, 26 Feb 2020 at 16:19, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 18 Feb 2020 at 17:58, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > Hi all,\n> >\n> > When recovery conflicts happen on the streaming replication standby,\n> > the wait event of startup process is null when\n> > max_standby_streaming_delay = 0 (to be exact, when the limit time\n> > calculated by max_standby_streaming_delay is behind the last WAL data\n> > receipt time is behind). Moreover the process title of waiting startup\n> > process looks odd in the case of lock conflicts.\n> >\n> > 1. When max_standby_streaming_delay > 0 and the startup process\n> > conflicts with a lock,\n> >\n> > * wait event\n> > backend_type | wait_event_type | wait_event\n> > --------------+-----------------+------------\n> > startup | Lock | relation\n> > (1 row)\n> >\n> > * ps\n> > 42513 ?? 
Ss 0:00.05 postgres: startup recovering\n> > 000000010000000000000003 waiting\n> >\n> > Looks good.\n> >\n> > 2. When max_standby_streaming_delay > 0 and the startup process\n> > conflicts with a snapshot,\n> >\n> > * wait event\n> > backend_type | wait_event_type | wait_event\n> > --------------+-----------------+------------\n> > startup | |\n> > (1 row)\n> >\n> > * ps\n> > 44299 ?? Ss 0:00.05 postgres: startup recovering\n> > 000000010000000000000003 waiting\n> >\n> > wait_event_type and wait_event are null in spite of waiting for\n> > conflict resolution.\n> >\n> > 3. When max_standby_streaming_delay > 0 and the startup process\n> > conflicts with a lock,\n> >\n> > * wait event\n> > backend_type | wait_event_type | wait_event\n> > --------------+-----------------+------------\n> > startup | |\n> > (1 row)\n> >\n> > * ps\n> > 46510 ?? Ss 0:00.05 postgres: startup recovering\n> > 000000010000000000000003 waiting waiting\n> >\n> > wait_event_type and wait_event are null and the process title is\n> > wrong; \"waiting\" appears twice.\n> >\n> > The cause of the first problem, wait_event_type and wait_event are not\n> > set, is that WaitExceedsMaxStandbyDelay which is called by\n> > ResolveRecoveryConflictWithVirtualXIDs waits for other transactions\n> > using pg_usleep rather than WaitLatch. I think we can change it so\n> > that it uses WaitLatch and those caller passes wait event information.\n> >\n> > For the second problem, wrong process title, the cause is also\n> > relevant with ResolveRecoveryConflictWithVirtualXIDs; in case of lock\n> > conflicts we add \"waiting\" to the process title in WaitOnLock but we\n> > add it again in ResolveRecoveryConflictWithVirtualXIDs. I think we can\n> > have WaitOnLock not set process title in recovery case.\n> >\n> > This problem exists on 12, 11 and 10. I'll submit the patch.\n> >\n>\n> I've attached patches that fix the above two issues.\n>\n> 0001 patch fixes the first problem. 
Currently there are 5 types of\n> recovery conflict resolution: snapshot, tablespace, lock, database and\n> buffer pin, and we set wait events to only 2 events out of 5: lock\n> (only when doing ProcWaitForSignal) and buffer pin. Therefore, users\n> cannot know that the startup process is waiting or not, and what\n> waiting for. This patch sets wait events to more 3 events: snapshot,\n> tablespace and lock. For wait events of those 3 events, I thought that\n> we can create a new more appropriate wait event type, say\n> RecoveryConflict, and set it for them. However, considering\n> back-patching to existing versions, adding new wait event type would\n> not be acceptable. So this patch sets existing wait events such as\n> PG_WAIT_LOCK to those 3 places and doesn't not set a wait event for\n> conflict resolution on dropping database because there is not an\n> appropriate existing one. I'll start a separate thread about\n> improvement on wait events of recovery conflict resolution for PG13 if\n> necessary.\n\nAttached a patch improves wait events of recovery conflict resolution.\nIt's for PG13. I added new RecoveryConflict wait_event_type and some\nwait_event. 
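A rough picture of what such a RecoveryConflict wait-event class could expose to monitoring follows. The exact identifiers are whatever the posted patch defines — the names below are hypothetical stand-ins for illustration only:

```python
# Hypothetical mapping from a recovery-conflict resolution path to the
# (wait_event_type, wait_event) pair a pg_stat_activity reader would see.
CONFLICT_WAIT_EVENTS = {
    "snapshot":   ("RecoveryConflict", "RecoveryConflictSnapshot"),
    "tablespace": ("RecoveryConflict", "RecoveryConflictTablespace"),
    "lock":       ("RecoveryConflict", "RecoveryConflictLock"),
}

def wait_event_for(conflict):
    """Return the wait-event pair for a conflict kind, or (None, None)
    when no dedicated event exists (e.g. dropping a database)."""
    return CONFLICT_WAIT_EVENTS.get(conflict, (None, None))
```

Compared with reusing PG_WAIT_LOCK, a dedicated class lets users tell a genuine lock wait apart from a recovery-conflict wait at a glance.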
This patch can be applied on top of two patches I already\nproposed.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CA%2Bfd4k63ukOtdNx2f-fUZ2vuB3RgE%3DPo%2BxSnpmcPJbKqsJMtiA%40mail.gmail.com\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 29 Feb 2020 12:36:30 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "\n\nOn 2020/02/29 12:36, Masahiko Sawada wrote:\n> On Wed, 26 Feb 2020 at 16:19, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Tue, 18 Feb 2020 at 17:58, Masahiko Sawada\n>> <masahiko.sawada@2ndquadrant.com> wrote:\n>>>\n>>> Hi all,\n>>>\n>>> When recovery conflicts happen on the streaming replication standby,\n>>> the wait event of startup process is null when\n>>> max_standby_streaming_delay = 0 (to be exact, when the limit time\n>>> calculated by max_standby_streaming_delay is behind the last WAL data\n>>> receipt time is behind). Moreover the process title of waiting startup\n>>> process looks odd in the case of lock conflicts.\n>>>\n>>> 1. When max_standby_streaming_delay > 0 and the startup process\n>>> conflicts with a lock,\n>>>\n>>> * wait event\n>>> backend_type | wait_event_type | wait_event\n>>> --------------+-----------------+------------\n>>> startup | Lock | relation\n>>> (1 row)\n>>>\n>>> * ps\n>>> 42513 ?? Ss 0:00.05 postgres: startup recovering\n>>> 000000010000000000000003 waiting\n>>>\n>>> Looks good.\n>>>\n>>> 2. When max_standby_streaming_delay > 0 and the startup process\n>>> conflicts with a snapshot,\n>>>\n>>> * wait event\n>>> backend_type | wait_event_type | wait_event\n>>> --------------+-----------------+------------\n>>> startup | |\n>>> (1 row)\n>>>\n>>> * ps\n>>> 44299 ?? 
Ss 0:00.05 postgres: startup recovering\n>>> 000000010000000000000003 waiting\n>>>\n>>> wait_event_type and wait_event are null in spite of waiting for\n>>> conflict resolution.\n>>>\n>>> 3. When max_standby_streaming_delay > 0 and the startup process\n>>> conflicts with a lock,\n>>>\n>>> * wait event\n>>> backend_type | wait_event_type | wait_event\n>>> --------------+-----------------+------------\n>>> startup | |\n>>> (1 row)\n>>>\n>>> * ps\n>>> 46510 ?? Ss 0:00.05 postgres: startup recovering\n>>> 000000010000000000000003 waiting waiting\n>>>\n>>> wait_event_type and wait_event are null and the process title is\n>>> wrong; \"waiting\" appears twice.\n>>>\n>>> The cause of the first problem, wait_event_type and wait_event are not\n>>> set, is that WaitExceedsMaxStandbyDelay which is called by\n>>> ResolveRecoveryConflictWithVirtualXIDs waits for other transactions\n>>> using pg_usleep rather than WaitLatch. I think we can change it so\n>>> that it uses WaitLatch and those caller passes wait event information.\n>>>\n>>> For the second problem, wrong process title, the cause is also\n>>> relevant with ResolveRecoveryConflictWithVirtualXIDs; in case of lock\n>>> conflicts we add \"waiting\" to the process title in WaitOnLock but we\n>>> add it again in ResolveRecoveryConflictWithVirtualXIDs. I think we can\n>>> have WaitOnLock not set process title in recovery case.\n>>>\n>>> This problem exists on 12, 11 and 10. I'll submit the patch.\n>>>\n>>\n>> I've attached patches that fix the above two issues.\n>>\n>> 0001 patch fixes the first problem. Currently there are 5 types of\n>> recovery conflict resolution: snapshot, tablespace, lock, database and\n>> buffer pin, and we set wait events to only 2 events out of 5: lock\n>> (only when doing ProcWaitForSignal) and buffer pin.\n\n+1 to add those new wait events in the master. But adding them sounds like\nnew feature rather than bug fix. 
So ISTM that it's not be back-patchable...\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Wed, 4 Mar 2020 11:04:00 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On Wed, 4 Mar 2020 at 11:04, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/02/29 12:36, Masahiko Sawada wrote:\n> > On Wed, 26 Feb 2020 at 16:19, Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> >>\n> >> On Tue, 18 Feb 2020 at 17:58, Masahiko Sawada\n> >> <masahiko.sawada@2ndquadrant.com> wrote:\n> >>>\n> >>> Hi all,\n> >>>\n> >>> When recovery conflicts happen on the streaming replication standby,\n> >>> the wait event of startup process is null when\n> >>> max_standby_streaming_delay = 0 (to be exact, when the limit time\n> >>> calculated by max_standby_streaming_delay is behind the last WAL data\n> >>> receipt time is behind). Moreover the process title of waiting startup\n> >>> process looks odd in the case of lock conflicts.\n> >>>\n> >>> 1. When max_standby_streaming_delay > 0 and the startup process\n> >>> conflicts with a lock,\n> >>>\n> >>> * wait event\n> >>> backend_type | wait_event_type | wait_event\n> >>> --------------+-----------------+------------\n> >>> startup | Lock | relation\n> >>> (1 row)\n> >>>\n> >>> * ps\n> >>> 42513 ?? Ss 0:00.05 postgres: startup recovering\n> >>> 000000010000000000000003 waiting\n> >>>\n> >>> Looks good.\n> >>>\n> >>> 2. When max_standby_streaming_delay > 0 and the startup process\n> >>> conflicts with a snapshot,\n> >>>\n> >>> * wait event\n> >>> backend_type | wait_event_type | wait_event\n> >>> --------------+-----------------+------------\n> >>> startup | |\n> >>> (1 row)\n> >>>\n> >>> * ps\n> >>> 44299 ?? 
Ss 0:00.05 postgres: startup recovering\n> >>> 000000010000000000000003 waiting\n> >>>\n> >>> wait_event_type and wait_event are null in spite of waiting for\n> >>> conflict resolution.\n> >>>\n> >>> 3. When max_standby_streaming_delay > 0 and the startup process\n> >>> conflicts with a lock,\n> >>>\n> >>> * wait event\n> >>> backend_type | wait_event_type | wait_event\n> >>> --------------+-----------------+------------\n> >>> startup | |\n> >>> (1 row)\n> >>>\n> >>> * ps\n> >>> 46510 ?? Ss 0:00.05 postgres: startup recovering\n> >>> 000000010000000000000003 waiting waiting\n> >>>\n> >>> wait_event_type and wait_event are null and the process title is\n> >>> wrong; \"waiting\" appears twice.\n> >>>\n> >>> The cause of the first problem, wait_event_type and wait_event are not\n> >>> set, is that WaitExceedsMaxStandbyDelay which is called by\n> >>> ResolveRecoveryConflictWithVirtualXIDs waits for other transactions\n> >>> using pg_usleep rather than WaitLatch. I think we can change it so\n> >>> that it uses WaitLatch and those caller passes wait event information.\n> >>>\n> >>> For the second problem, wrong process title, the cause is also\n> >>> relevant with ResolveRecoveryConflictWithVirtualXIDs; in case of lock\n> >>> conflicts we add \"waiting\" to the process title in WaitOnLock but we\n> >>> add it again in ResolveRecoveryConflictWithVirtualXIDs. I think we can\n> >>> have WaitOnLock not set process title in recovery case.\n> >>>\n> >>> This problem exists on 12, 11 and 10. I'll submit the patch.\n> >>>\n> >>\n> >> I've attached patches that fix the above two issues.\n> >>\n> >> 0001 patch fixes the first problem. Currently there are 5 types of\n> >> recovery conflict resolution: snapshot, tablespace, lock, database and\n> >> buffer pin, and we set wait events to only 2 events out of 5: lock\n> >> (only when doing ProcWaitForSignal) and buffer pin.\n>\n> +1 to add those new wait events in the master. 
But adding them sounds like\n> new feature rather than bug fix. So ISTM that it's not be back-patchable...\n>\n\nYeah, so 0001 patch sets existing wait events to recovery conflict\nresolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\nto the recovery conflict on a snapshot. 0003 patch improves these wait\nevents by adding the new type of wait event such as\nWAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\nis the fix for existing versions and 0003 patch is an improvement for\nonly PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n\nRegards,\n\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 4 Mar 2020 13:13:19 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n> Yeah, so 0001 patch sets existing wait events to recovery conflict\n> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n> to the recovery conflict on a snapshot. 0003 patch improves these wait\n> events by adding the new type of wait event such as\n> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n> is the fix for existing versions and 0003 patch is an improvement for\n> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n\nI got my eyes on this patch set. 
The full patch set is in my opinion\njust a set of improvements, and not bug fixes, so I would refrain from\nback-patching.\n--\nMichael", "msg_date": "Wed, 4 Mar 2020 13:27:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "\n\nOn 2020/03/04 13:27, Michael Paquier wrote:\n> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n>> events by adding the new type of wait event such as\n>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n>> is the fix for existing versions and 0003 patch is an improvement for\n>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n\nYes, it looks like an improvement rather than a bug fix.\n\n> I got my eyes on this patch set. The full patch set is in my opinion\n> just a set of improvements, and not bug fixes, so I would refrain from\n> back-backpatching.\n\nI think that the issue (i.e., \"waiting\" is reported twice needlessly\nin PS display) that 0002 patch tries to fix is a bug. 
So it should be\nfixed even in the back branches.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Wed, 4 Mar 2020 13:48:12 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/04 13:27, Michael Paquier wrote:\n> > On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n> >> Yeah, so 0001 patch sets existing wait events to recovery conflict\n> >> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n> >> to the recovery conflict on a snapshot. 0003 patch improves these wait\n> >> events by adding the new type of wait event such as\n> >> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n> >> is the fix for existing versions and 0003 patch is an improvement for\n> >> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n>\n> Yes, it looks like a improvement rather than bug fix.\n>\n\nOkay, understood.\n\n> > I got my eyes on this patch set. The full patch set is in my opinion\n> > just a set of improvements, and not bug fixes, so I would refrain from\n> > back-backpatching.\n>\n> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n> in PS display) that 0002 patch tries to fix is a bug. So it should be\n> fixed even in the back branches.\n\nSo we need only two patches: one fixes the process title issue and another\nimproves the wait events. 
I've attached updated patches.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 4 Mar 2020 14:31:47 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "\n\nOn 2020/03/04 14:31, Masahiko Sawada wrote:\n> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/04 13:27, Michael Paquier wrote:\n>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n>>>> events by adding the new type of wait event such as\n>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n>>>> is the fix for existing versions and 0003 patch is an improvement for\n>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n>>\n>> Yes, it looks like a improvement rather than bug fix.\n>>\n> \n> Okay, understand.\n> \n>>> I got my eyes on this patch set. The full patch set is in my opinion\n>>> just a set of improvements, and not bug fixes, so I would refrain from\n>>> back-backpatching.\n>>\n>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n>> fixed even in the back branches.\n> \n> So we need only two patches: one fixes process title issue and another\n> improve wait event. I've attached updated patches.\n\nThanks for updating the patches! 
I started reading 0001 patch.\n\n-\t\t\t/*\n-\t\t\t * Report via ps if we have been waiting for more than 500 msec\n-\t\t\t * (should that be configurable?)\n-\t\t\t */\n-\t\t\tif (update_process_title && new_status == NULL &&\n-\t\t\t\tTimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n-\t\t\t\t\t\t\t\t\t\t 500))\n\nThe patch changes ResolveRecoveryConflictWithSnapshot() and\nResolveRecoveryConflictWithTablespace() so that they always add\n\"waiting\" into the PS display, whether wait is really necessary or not.\nBut isn't it better to display \"waiting\" in PS basically when wait is\nnecessary, like originally ResolveRecoveryConflictWithVirtualXIDs()\ndoes as the above?\n\n ResolveRecoveryConflictWithDatabase(Oid dbid)\n {\n+\tchar\t\t*new_status = NULL;\n+\n+\t/* Report via ps we are waiting */\n+\tnew_status = set_process_title_waiting();\n\nIn ResolveRecoveryConflictWithDatabase(), there seems no need to\ndisplay \"waiting\" in PS because no wait occurs when recovery conflict\nwith database happens.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Wed, 4 Mar 2020 15:21:00 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On Wed, 4 Mar 2020 at 15:21, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/04 14:31, Masahiko Sawada wrote:\n> > On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/03/04 13:27, Michael Paquier wrote:\n> >>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n> >>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n> >>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n> >>>> to the recovery conflict on a snapshot. 
0003 patch improves these wait\n> >>>> events by adding the new type of wait event such as\n> >>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n> >>>> is the fix for existing versions and 0003 patch is an improvement for\n> >>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n> >>\n> >> Yes, it looks like a improvement rather than bug fix.\n> >>\n> >\n> > Okay, understand.\n> >\n> >>> I got my eyes on this patch set. The full patch set is in my opinion\n> >>> just a set of improvements, and not bug fixes, so I would refrain from\n> >>> back-backpatching.\n> >>\n> >> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n> >> in PS display) that 0002 patch tries to fix is a bug. So it should be\n> >> fixed even in the back branches.\n> >\n> > So we need only two patches: one fixes process title issue and another\n> > improve wait event. I've attached updated patches.\n>\n> Thanks for updating the patches! I started reading 0001 patch.\n\nThank you for reviewing this patch.\n\n>\n> - /*\n> - * Report via ps if we have been waiting for more than 500 msec\n> - * (should that be configurable?)\n> - */\n> - if (update_process_title && new_status == NULL &&\n> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n> - 500))\n>\n> The patch changes ResolveRecoveryConflictWithSnapshot() and\n> ResolveRecoveryConflictWithTablespace() so that they always add\n> \"waiting\" into the PS display, whether wait is really necessary or not.\n> But isn't it better to display \"waiting\" in PS basically when wait is\n> necessary, like originally ResolveRecoveryConflictWithVirtualXIDs()\n> does as the above?\n\nYou're right. 
Will fix it.\n\n>\n> ResolveRecoveryConflictWithDatabase(Oid dbid)\n> {\n> + char *new_status = NULL;\n> +\n> + /* Report via ps we are waiting */\n> + new_status = set_process_title_waiting();\n>\n> In ResolveRecoveryConflictWithDatabase(), there seems no need to\n> display \"waiting\" in PS because no wait occurs when recovery conflict\n> with database happens.\n\nIsn't the startup process waiting for other backend to terminate?\n\nRegards\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 5 Mar 2020 16:58:47 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "\n\nOn 2020/03/05 16:58, Masahiko Sawada wrote:\n> On Wed, 4 Mar 2020 at 15:21, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n>>>>>> events by adding the new type of wait event such as\n>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n>>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n>>>>\n>>>> Yes, it looks like a improvement rather than bug fix.\n>>>>\n>>>\n>>> Okay, understand.\n>>>\n>>>>> I got my eyes on this patch set. 
The full patch set is in my opinion\n>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n>>>>> back-backpatching.\n>>>>\n>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n>>>> fixed even in the back branches.\n>>>\n>>> So we need only two patches: one fixes process title issue and another\n>>> improve wait event. I've attached updated patches.\n>>\n>> Thanks for updating the patches! I started reading 0001 patch.\n> \n> Thank you for reviewing this patch.\n> \n>>\n>> - /*\n>> - * Report via ps if we have been waiting for more than 500 msec\n>> - * (should that be configurable?)\n>> - */\n>> - if (update_process_title && new_status == NULL &&\n>> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n>> - 500))\n>>\n>> The patch changes ResolveRecoveryConflictWithSnapshot() and\n>> ResolveRecoveryConflictWithTablespace() so that they always add\n>> \"waiting\" into the PS display, whether wait is really necessary or not.\n>> But isn't it better to display \"waiting\" in PS basically when wait is\n>> necessary, like originally ResolveRecoveryConflictWithVirtualXIDs()\n>> does as the above?\n> \n> You're right. Will fix it.\n> \n>>\n>> ResolveRecoveryConflictWithDatabase(Oid dbid)\n>> {\n>> + char *new_status = NULL;\n>> +\n>> + /* Report via ps we are waiting */\n>> + new_status = set_process_title_waiting();\n>>\n>> In ResolveRecoveryConflictWithDatabase(), there seems no need to\n>> display \"waiting\" in PS because no wait occurs when recovery conflict\n>> with database happens.\n> \n> Isn't the startup process waiting for other backend to terminate?\n\nYeah, you're right. I agree that \"waiting\" should be reported in this case.\n\nCurrently ResolveRecoveryConflictWithLock() and\nResolveRecoveryConflictWithBufferPin() don't call\nResolveRecoveryConflictWithVirtualXIDs and don't report \"waiting\"\nin PS display. 
You changed them so that they report \"waiting\". I agree\nto have this change. But this change is an improvement rather than\na bug fix, i.e., we should apply this change only in v13?\n\nOf course, the other part in the patch, i.e., fixing the issue that\n\"waiting\" is doubly reported, should be back-patched, I think,\nthough.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 5 Mar 2020 20:16:11 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On Thu, 5 Mar 2020 at 20:16, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/05 16:58, Masahiko Sawada wrote:\n> > On Wed, 4 Mar 2020 at 15:21, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/03/04 14:31, Masahiko Sawada wrote:\n> >>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>\n> >>>>\n> >>>>\n> >>>> On 2020/03/04 13:27, Michael Paquier wrote:\n> >>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n> >>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n> >>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n> >>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n> >>>>>> events by adding the new type of wait event such as\n> >>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n> >>>>>> is the fix for existing versions and 0003 patch is an improvement for\n> >>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n> >>>>\n> >>>> Yes, it looks like a improvement rather than bug fix.\n> >>>>\n> >>>\n> >>> Okay, understand.\n> >>>\n> >>>>> I got my eyes on this patch set. 
The full patch set is in my opinion\n> >>>>> just a set of improvements, and not bug fixes, so I would refrain from\n> >>>>> back-backpatching.\n> >>>>\n> >>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n> >>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n> >>>> fixed even in the back branches.\n> >>>\n> >>> So we need only two patches: one fixes process title issue and another\n> >>> improve wait event. I've attached updated patches.\n> >>\n> >> Thanks for updating the patches! I started reading 0001 patch.\n> >\n> > Thank you for reviewing this patch.\n> >\n> >>\n> >> - /*\n> >> - * Report via ps if we have been waiting for more than 500 msec\n> >> - * (should that be configurable?)\n> >> - */\n> >> - if (update_process_title && new_status == NULL &&\n> >> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n> >> - 500))\n> >>\n> >> The patch changes ResolveRecoveryConflictWithSnapshot() and\n> >> ResolveRecoveryConflictWithTablespace() so that they always add\n> >> \"waiting\" into the PS display, whether wait is really necessary or not.\n> >> But isn't it better to display \"waiting\" in PS basically when wait is\n> >> necessary, like originally ResolveRecoveryConflictWithVirtualXIDs()\n> >> does as the above?\n> >\n> > You're right. Will fix it.\n> >\n> >>\n> >> ResolveRecoveryConflictWithDatabase(Oid dbid)\n> >> {\n> >> + char *new_status = NULL;\n> >> +\n> >> + /* Report via ps we are waiting */\n> >> + new_status = set_process_title_waiting();\n> >>\n> >> In ResolveRecoveryConflictWithDatabase(), there seems no need to\n> >> display \"waiting\" in PS because no wait occurs when recovery conflict\n> >> with database happens.\n> >\n> > Isn't the startup process waiting for other backend to terminate?\n>\n> Yeah, you're right. 
I agree that \"waiting\" should be reported in this case.\n>\n> Currently ResolveRecoveryConflictWithLock() and\n> ResolveRecoveryConflictWithBufferPin() don't call\n> ResolveRecoveryConflictWithVirtualXIDs and don't report \"waiting\"\n> in PS display. You changed them so that they report \"waiting\". I agree\n> to have this change. But this change is an improvement rather than\n> a bug fix, i.e., we should apply this change only in v13?\n>\n\nDid you mean ResolveRecoveryConflictWithDatabase and\nResolveRecoveryConflictWithBufferPin? In the current code as far as I\nresearched there are two cases where we don't add \"waiting\" and one\ncase where we doubly add \"waiting\".\n\nResolveRecoveryConflictWithDatabase and\nResolveRecoveryConflictWithBufferPin don't update the ps title.\nAlthough the path where GetCurrentTimestamp() >= ltime is false in\nResolveRecoveryConflictWithLock also doesn't update the ps title, it's\nalready updated in WaitOnLock. On the other hand, the path where\nGetCurrentTimestamp() >= ltime is true in\nResolveRecoveryConflictWithLock updates the ps title but it's wrong as\nI reported.\n\nI've split the patch into two patches: 0001 patch fixes the issue\nabout doubly updating ps title, 0002 patch makes the recovery conflict\nresolution on database and buffer pin update the ps title.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 8 Mar 2020 13:52:30 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On 2020/03/08 13:52, Masahiko Sawada wrote:\n> On Thu, 5 Mar 2020 at 20:16, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/05 16:58, Masahiko Sawada wrote:\n>>> On Wed, 4 Mar 2020 at 15:21, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2020/03/04 14:31, 
Masahiko Sawada wrote:\n>>>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n>>>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n>>>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n>>>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n>>>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n>>>>>>>> events by adding the new type of wait event such as\n>>>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n>>>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n>>>>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n>>>>>>\n>>>>>> Yes, it looks like a improvement rather than bug fix.\n>>>>>>\n>>>>>\n>>>>> Okay, understand.\n>>>>>\n>>>>>>> I got my eyes on this patch set. The full patch set is in my opinion\n>>>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n>>>>>>> back-backpatching.\n>>>>>>\n>>>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n>>>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n>>>>>> fixed even in the back branches.\n>>>>>\n>>>>> So we need only two patches: one fixes process title issue and another\n>>>>> improve wait event. I've attached updated patches.\n>>>>\n>>>> Thanks for updating the patches! 
I started reading 0001 patch.\n>>>\n>>> Thank you for reviewing this patch.\n>>>\n>>>>\n>>>> - /*\n>>>> - * Report via ps if we have been waiting for more than 500 msec\n>>>> - * (should that be configurable?)\n>>>> - */\n>>>> - if (update_process_title && new_status == NULL &&\n>>>> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n>>>> - 500))\n>>>>\n>>>> The patch changes ResolveRecoveryConflictWithSnapshot() and\n>>>> ResolveRecoveryConflictWithTablespace() so that they always add\n>>>> \"waiting\" into the PS display, whether wait is really necessary or not.\n>>>> But isn't it better to display \"waiting\" in PS basically when wait is\n>>>> necessary, like originally ResolveRecoveryConflictWithVirtualXIDs()\n>>>> does as the above?\n>>>\n>>> You're right. Will fix it.\n>>>\n>>>>\n>>>> ResolveRecoveryConflictWithDatabase(Oid dbid)\n>>>> {\n>>>> + char *new_status = NULL;\n>>>> +\n>>>> + /* Report via ps we are waiting */\n>>>> + new_status = set_process_title_waiting();\n>>>>\n>>>> In ResolveRecoveryConflictWithDatabase(), there seems no need to\n>>>> display \"waiting\" in PS because no wait occurs when recovery conflict\n>>>> with database happens.\n>>>\n>>> Isn't the startup process waiting for other backend to terminate?\n>>\n>> Yeah, you're right. I agree that \"waiting\" should be reported in this case.\n>>\n>> Currently ResolveRecoveryConflictWithLock() and\n>> ResolveRecoveryConflictWithBufferPin() don't call\n>> ResolveRecoveryConflictWithVirtualXIDs and don't report \"waiting\"\n>> in PS display. You changed them so that they report \"waiting\". I agree\n>> to have this change. But this change is an improvement rather than\n>> a bug fix, i.e., we should apply this change only in v13?\n>>\n> \n> Did you mean ResolveRecoveryConflictWithDatabase and\n> ResolveRecoveryConflictWithBufferPin?\n\nYes! 
Sorry for my typo.\n\n> In the current code as far as I\n> researched there are two cases where we don't add \"waiting\" and one\n> case where we doubly add \"waiting\".\n> \n> ResolveRecoveryConflictWithDatabase and\n> ResolveRecoveryConflictWithBufferPin don't update the ps title.\n> Although the path where GetCurrentTimestamp() >= ltime is false in\n> ResolveRecoveryConflictWithLock also doesn't update the ps title, it's\n> already updated in WaitOnLock. On the other hand, the path where\n> GetCurrentTimestamp() >= ltime is true in\n> ResolveRecoveryConflictWithLock updates the ps title but it's wrong as\n> I reported.\n> \n> I've split the patch into two patches: 0001 patch fixes the issue\n> about doubly updating ps title, 0002 patch makes the recovery conflict\n> resolution on database and buffer pin update the ps title.\n\nThanks for splitting the patches. I think that 0001 patch can be back-patched.\n\n-\t\t\t/*\n-\t\t\t * Report via ps if we have been waiting for more than 500 msec\n-\t\t\t * (should that be configurable?)\n-\t\t\t */\n-\t\t\tif (update_process_title && new_status == NULL &&\n-\t\t\t\tTimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n-\t\t\t\t\t\t\t\t\t\t 500))\n\nOriginally, \"waiting\" is reported in PS if we've been waiting for more than\n500 msec, as the above does. But you got rid of those codes in the patch.\nDid you confirm that it's safe to do that? If not, isn't it better to apply\nthe attached patch? 
The attached patch makes\nResolveRecoveryConflictWithVirtualXIDs() report \"waiting\" as it does now,\nand allows its caller to choose whether to report that.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters", "msg_date": "Mon, 9 Mar 2020 13:24:23 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On Mon, 9 Mar 2020 at 13:24, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/08 13:52, Masahiko Sawada wrote:\n> > On Thu, 5 Mar 2020 at 20:16, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/03/05 16:58, Masahiko Sawada wrote:\n> >>> On Wed, 4 Mar 2020 at 15:21, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>\n> >>>>\n> >>>>\n> >>>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n> >>>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>\n> >>>>>>\n> >>>>>>\n> >>>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n> >>>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n> >>>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n> >>>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n> >>>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n> >>>>>>>> events by adding the new type of wait event such as\n> >>>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n> >>>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n> >>>>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n> >>>>>>\n> >>>>>> Yes, it looks like a improvement rather than bug fix.\n> >>>>>>\n> >>>>>\n> >>>>> Okay, understand.\n> >>>>>\n> >>>>>>> I got my eyes on this patch set. 
The full patch set is in my opinion\n> >>>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n> >>>>>>> back-backpatching.\n> >>>>>>\n> >>>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n> >>>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n> >>>>>> fixed even in the back branches.\n> >>>>>\n> >>>>> So we need only two patches: one fixes process title issue and another\n> >>>>> improve wait event. I've attached updated patches.\n> >>>>\n> >>>> Thanks for updating the patches! I started reading 0001 patch.\n> >>>\n> >>> Thank you for reviewing this patch.\n> >>>\n> >>>>\n> >>>> - /*\n> >>>> - * Report via ps if we have been waiting for more than 500 msec\n> >>>> - * (should that be configurable?)\n> >>>> - */\n> >>>> - if (update_process_title && new_status == NULL &&\n> >>>> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n> >>>> - 500))\n> >>>>\n> >>>> The patch changes ResolveRecoveryConflictWithSnapshot() and\n> >>>> ResolveRecoveryConflictWithTablespace() so that they always add\n> >>>> \"waiting\" into the PS display, whether wait is really necessary or not.\n> >>>> But isn't it better to display \"waiting\" in PS basically when wait is\n> >>>> necessary, like originally ResolveRecoveryConflictWithVirtualXIDs()\n> >>>> does as the above?\n> >>>\n> >>> You're right. Will fix it.\n> >>>\n> >>>>\n> >>>> ResolveRecoveryConflictWithDatabase(Oid dbid)\n> >>>> {\n> >>>> + char *new_status = NULL;\n> >>>> +\n> >>>> + /* Report via ps we are waiting */\n> >>>> + new_status = set_process_title_waiting();\n> >>>>\n> >>>> In ResolveRecoveryConflictWithDatabase(), there seems no need to\n> >>>> display \"waiting\" in PS because no wait occurs when recovery conflict\n> >>>> with database happens.\n> >>>\n> >>> Isn't the startup process waiting for other backend to terminate?\n> >>\n> >> Yeah, you're right. 
I agree that \"waiting\" should be reported in this case.\n> >>\n> >> Currently ResolveRecoveryConflictWithLock() and\n> >> ResolveRecoveryConflictWithBufferPin() don't call\n> >> ResolveRecoveryConflictWithVirtualXIDs and don't report \"waiting\"\n> >> in PS display. You changed them so that they report \"waiting\". I agree\n> >> to have this change. But this change is an improvement rather than\n> >> a bug fix, i.e., we should apply this change only in v13?\n> >>\n> >\n> > Did you mean ResolveRecoveryConflictWithDatabase and\n> > ResolveRecoveryConflictWithBufferPin?\n>\n> Yes! Sorry for my typo.\n>\n> > In the current code as far as I\n> > researched there are two cases where we don't add \"waiting\" and one\n> > case where we doubly add \"waiting\".\n> >\n> > ResolveRecoveryConflictWithDatabase and\n> > ResolveRecoveryConflictWithBufferPin don't update the ps title.\n> > Although the path where GetCurrentTimestamp() >= ltime is false in\n> > ResolveRecoveryConflictWithLock also doesn't update the ps title, it's\n> > already updated in WaitOnLock. On the other hand, the path where\n> > GetCurrentTimestamp() >= ltime is true in\n> > ResolveRecoveryConflictWithLock updates the ps title but it's wrong as\n> > I reported.\n> >\n> > I've split the patch into two patches: 0001 patch fixes the issue\n> > about doubly updating ps title, 0002 patch makes the recovery conflict\n> > resolution on database and buffer pin update the ps title.\n>\n> Thanks for splitting the patches. I think that 0001 patch can be back-patched.\n>\n> - /*\n> - * Report via ps if we have been waiting for more than 500 msec\n> - * (should that be configurable?)\n> - */\n> - if (update_process_title && new_status == NULL &&\n> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n> - 500))\n>\n> Originally, \"waiting\" is reported in PS if we've been waiting for more than\n> 500 msec, as the above does. 
But you got rid of those codes in the patch.\n> Did you confirm that it's safe to do that? If not, isn't it better to apply\n> the attached patch?\n\nIn WaitOnLock() we update the ps title regardless of waiting time. So\nI thought we can change it to make these behavior consistent. But\nconsidering back-patch, your patch looks better than mine.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 9 Mar 2020 14:10:57 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "\n\nOn 2020/03/09 14:10, Masahiko Sawada wrote:\n> On Mon, 9 Mar 2020 at 13:24, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/08 13:52, Masahiko Sawada wrote:\n>>> On Thu, 5 Mar 2020 at 20:16, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2020/03/05 16:58, Masahiko Sawada wrote:\n>>>>> On Wed, 4 Mar 2020 at 15:21, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n>>>>>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n>>>>>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n>>>>>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n>>>>>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n>>>>>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n>>>>>>>>>> events by adding the new type of wait event such as\n>>>>>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n>>>>>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n>>>>>>>>>> only PG13. 
Did you mean even 0001 patch doesn't fit for back-patching?\n>>>>>>>>\n>>>>>>>> Yes, it looks like a improvement rather than bug fix.\n>>>>>>>>\n>>>>>>>\n>>>>>>> Okay, understand.\n>>>>>>>\n>>>>>>>>> I got my eyes on this patch set. The full patch set is in my opinion\n>>>>>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n>>>>>>>>> back-backpatching.\n>>>>>>>>\n>>>>>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n>>>>>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n>>>>>>>> fixed even in the back branches.\n>>>>>>>\n>>>>>>> So we need only two patches: one fixes process title issue and another\n>>>>>>> improve wait event. I've attached updated patches.\n>>>>>>\n>>>>>> Thanks for updating the patches! I started reading 0001 patch.\n>>>>>\n>>>>> Thank you for reviewing this patch.\n>>>>>\n>>>>>>\n>>>>>> - /*\n>>>>>> - * Report via ps if we have been waiting for more than 500 msec\n>>>>>> - * (should that be configurable?)\n>>>>>> - */\n>>>>>> - if (update_process_title && new_status == NULL &&\n>>>>>> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n>>>>>> - 500))\n>>>>>>\n>>>>>> The patch changes ResolveRecoveryConflictWithSnapshot() and\n>>>>>> ResolveRecoveryConflictWithTablespace() so that they always add\n>>>>>> \"waiting\" into the PS display, whether wait is really necessary or not.\n>>>>>> But isn't it better to display \"waiting\" in PS basically when wait is\n>>>>>> necessary, like originally ResolveRecoveryConflictWithVirtualXIDs()\n>>>>>> does as the above?\n>>>>>\n>>>>> You're right. 
Will fix it.\n>>>>>\n>>>>>>\n>>>>>> ResolveRecoveryConflictWithDatabase(Oid dbid)\n>>>>>> {\n>>>>>> + char *new_status = NULL;\n>>>>>> +\n>>>>>> + /* Report via ps we are waiting */\n>>>>>> + new_status = set_process_title_waiting();\n>>>>>>\n>>>>>> In ResolveRecoveryConflictWithDatabase(), there seems no need to\n>>>>>> display \"waiting\" in PS because no wait occurs when recovery conflict\n>>>>>> with database happens.\n>>>>>\n>>>>> Isn't the startup process waiting for other backend to terminate?\n>>>>\n>>>> Yeah, you're right. I agree that \"waiting\" should be reported in this case.\n>>>>\n>>>> Currently ResolveRecoveryConflictWithLock() and\n>>>> ResolveRecoveryConflictWithBufferPin() don't call\n>>>> ResolveRecoveryConflictWithVirtualXIDs and don't report \"waiting\"\n>>>> in PS display. You changed them so that they report \"waiting\". I agree\n>>>> to have this change. But this change is an improvement rather than\n>>>> a bug fix, i.e., we should apply this change only in v13?\n>>>>\n>>>\n>>> Did you mean ResolveRecoveryConflictWithDatabase and\n>>> ResolveRecoveryConflictWithBufferPin?\n>>\n>> Yes! Sorry for my typo.\n>>\n>>> In the current code as far as I\n>>> researched there are two cases where we don't add \"waiting\" and one\n>>> case where we doubly add \"waiting\".\n>>>\n>>> ResolveRecoveryConflictWithDatabase and\n>>> ResolveRecoveryConflictWithBufferPin don't update the ps title.\n>>> Although the path where GetCurrentTimestamp() >= ltime is false in\n>>> ResolveRecoveryConflictWithLock also doesn't update the ps title, it's\n>>> already updated in WaitOnLock. 
On the other hand, the path where\n>>> GetCurrentTimestamp() >= ltime is true in\n>>> ResolveRecoveryConflictWithLock updates the ps title but it's wrong as\n>>> I reported.\n>>>\n>>> I've split the patch into two patches: 0001 patch fixes the issue\n>>> about doubly updating ps title, 0002 patch makes the recovery conflict\n>>> resolution on database and buffer pin update the ps title.\n>>\n>> Thanks for splitting the patches. I think that 0001 patch can be back-patched.\n>>\n>> - /*\n>> - * Report via ps if we have been waiting for more than 500 msec\n>> - * (should that be configurable?)\n>> - */\n>> - if (update_process_title && new_status == NULL &&\n>> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n>> - 500))\n>>\n>> Originally, \"waiting\" is reported in PS if we've been waiting for more than\n>> 500 msec, as the above does. But you got rid of those codes in the patch.\n>> Did you confirm that it's safe to do that? If not, isn't it better to apply\n>> the attached patch?\n> \n> In WaitOnLock() we update the ps title regardless of waiting time. So\n> I thought we can change it to make these behavior consistent. 
But\n> considering back-patch, your patch looks better than mine.\n\nYeah, so I pushed the 0001 patch at first!\nI will review the other patches later.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Tue, 10 Mar 2020 00:57:52 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On Tue, 10 Mar 2020 at 00:57, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/09 14:10, Masahiko Sawada wrote:\n> > On Mon, 9 Mar 2020 at 13:24, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/03/08 13:52, Masahiko Sawada wrote:\n> >>> On Thu, 5 Mar 2020 at 20:16, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>\n> >>>>\n> >>>>\n> >>>> On 2020/03/05 16:58, Masahiko Sawada wrote:\n> >>>>> On Wed, 4 Mar 2020 at 15:21, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>\n> >>>>>>\n> >>>>>>\n> >>>>>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n> >>>>>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>\n> >>>>>>>>\n> >>>>>>>>\n> >>>>>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n> >>>>>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n> >>>>>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n> >>>>>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n> >>>>>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n> >>>>>>>>>> events by adding the new type of wait event such as\n> >>>>>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n> >>>>>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n> >>>>>>>>>> only PG13. 
Did you mean even 0001 patch doesn't fit for back-patching?\n> >>>>>>>>\n> >>>>>>>> Yes, it looks like a improvement rather than bug fix.\n> >>>>>>>>\n> >>>>>>>\n> >>>>>>> Okay, understand.\n> >>>>>>>\n> >>>>>>>>> I got my eyes on this patch set. The full patch set is in my opinion\n> >>>>>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n> >>>>>>>>> back-backpatching.\n> >>>>>>>>\n> >>>>>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n> >>>>>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n> >>>>>>>> fixed even in the back branches.\n> >>>>>>>\n> >>>>>>> So we need only two patches: one fixes process title issue and another\n> >>>>>>> improve wait event. I've attached updated patches.\n> >>>>>>\n> >>>>>> Thanks for updating the patches! I started reading 0001 patch.\n> >>>>>\n> >>>>> Thank you for reviewing this patch.\n> >>>>>\n> >>>>>>\n> >>>>>> - /*\n> >>>>>> - * Report via ps if we have been waiting for more than 500 msec\n> >>>>>> - * (should that be configurable?)\n> >>>>>> - */\n> >>>>>> - if (update_process_title && new_status == NULL &&\n> >>>>>> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n> >>>>>> - 500))\n> >>>>>>\n> >>>>>> The patch changes ResolveRecoveryConflictWithSnapshot() and\n> >>>>>> ResolveRecoveryConflictWithTablespace() so that they always add\n> >>>>>> \"waiting\" into the PS display, whether wait is really necessary or not.\n> >>>>>> But isn't it better to display \"waiting\" in PS basically when wait is\n> >>>>>> necessary, like originally ResolveRecoveryConflictWithVirtualXIDs()\n> >>>>>> does as the above?\n> >>>>>\n> >>>>> You're right. 
Will fix it.\n> >>>>>\n> >>>>>>\n> >>>>>> ResolveRecoveryConflictWithDatabase(Oid dbid)\n> >>>>>> {\n> >>>>>> + char *new_status = NULL;\n> >>>>>> +\n> >>>>>> + /* Report via ps we are waiting */\n> >>>>>> + new_status = set_process_title_waiting();\n> >>>>>>\n> >>>>>> In ResolveRecoveryConflictWithDatabase(), there seems no need to\n> >>>>>> display \"waiting\" in PS because no wait occurs when recovery conflict\n> >>>>>> with database happens.\n> >>>>>\n> >>>>> Isn't the startup process waiting for other backend to terminate?\n> >>>>\n> >>>> Yeah, you're right. I agree that \"waiting\" should be reported in this case.\n> >>>>\n> >>>> Currently ResolveRecoveryConflictWithLock() and\n> >>>> ResolveRecoveryConflictWithBufferPin() don't call\n> >>>> ResolveRecoveryConflictWithVirtualXIDs and don't report \"waiting\"\n> >>>> in PS display. You changed them so that they report \"waiting\". I agree\n> >>>> to have this change. But this change is an improvement rather than\n> >>>> a bug fix, i.e., we should apply this change only in v13?\n> >>>>\n> >>>\n> >>> Did you mean ResolveRecoveryConflictWithDatabase and\n> >>> ResolveRecoveryConflictWithBufferPin?\n> >>\n> >> Yes! Sorry for my typo.\n> >>\n> >>> In the current code as far as I\n> >>> researched there are two cases where we don't add \"waiting\" and one\n> >>> case where we doubly add \"waiting\".\n> >>>\n> >>> ResolveRecoveryConflictWithDatabase and\n> >>> ResolveRecoveryConflictWithBufferPin don't update the ps title.\n> >>> Although the path where GetCurrentTimestamp() >= ltime is false in\n> >>> ResolveRecoveryConflictWithLock also doesn't update the ps title, it's\n> >>> already updated in WaitOnLock. 
On the other hand, the path where\n> >>> GetCurrentTimestamp() >= ltime is true in\n> >>> ResolveRecoveryConflictWithLock updates the ps title but it's wrong as\n> >>> I reported.\n> >>>\n> >>> I've split the patch into two patches: 0001 patch fixes the issue\n> >>> about doubly updating ps title, 0002 patch makes the recovery conflict\n> >>> resolution on database and buffer pin update the ps title.\n> >>\n> >> Thanks for splitting the patches. I think that 0001 patch can be back-patched.\n> >>\n> >> - /*\n> >> - * Report via ps if we have been waiting for more than 500 msec\n> >> - * (should that be configurable?)\n> >> - */\n> >> - if (update_process_title && new_status == NULL &&\n> >> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n> >> - 500))\n> >>\n> >> Originally, \"waiting\" is reported in PS if we've been waiting for more than\n> >> 500 msec, as the above does. But you got rid of those codes in the patch.\n> >> Did you confirm that it's safe to do that? If not, isn't it better to apply\n> >> the attached patch?\n> >\n> > In WaitOnLock() we update the ps title regardless of waiting time. So\n> > I thought we can change it to make these behavior consistent. 
But\n> > considering back-patch, your patch looks better than mine.\n>\n> Yeah, so I pushed the 0001 patch at first!\n> I will review the other patches later.\n\nThank you!\n\nFor 0002 patch which makes ResolveRecoveryConflictWithDatabase and\nResolveRecoveryConflictWithBufferPin update the ps title, I think\nthese are better to wait for 5ms before updating the ps title like\nResolveRecoveryConflictWithVirtualXIDs, for consistency among recovery\nconflict resolution functions, but what do you think?\n\nRegards,\n\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 10 Mar 2020 13:54:18 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "\n\nOn 2020/03/10 13:54, Masahiko Sawada wrote:\n> On Tue, 10 Mar 2020 at 00:57, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/09 14:10, Masahiko Sawada wrote:\n>>> On Mon, 9 Mar 2020 at 13:24, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2020/03/08 13:52, Masahiko Sawada wrote:\n>>>>> On Thu, 5 Mar 2020 at 20:16, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On 2020/03/05 16:58, Masahiko Sawada wrote:\n>>>>>>> On Wed, 4 Mar 2020 at 15:21, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n>>>>>>>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n>>>>>>>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n>>>>>>>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n>>>>>>>>>>>> resolution. 
For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n>>>>>>>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n>>>>>>>>>>>> events by adding the new type of wait event such as\n>>>>>>>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n>>>>>>>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n>>>>>>>>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n>>>>>>>>>>\n>>>>>>>>>> Yes, it looks like a improvement rather than bug fix.\n>>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> Okay, understand.\n>>>>>>>>>\n>>>>>>>>>>> I got my eyes on this patch set. The full patch set is in my opinion\n>>>>>>>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n>>>>>>>>>>> back-backpatching.\n>>>>>>>>>>\n>>>>>>>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n>>>>>>>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n>>>>>>>>>> fixed even in the back branches.\n>>>>>>>>>\n>>>>>>>>> So we need only two patches: one fixes process title issue and another\n>>>>>>>>> improve wait event. I've attached updated patches.\n>>>>>>>>\n>>>>>>>> Thanks for updating the patches! 
I started reading 0001 patch.\n>>>>>>>\n>>>>>>> Thank you for reviewing this patch.\n>>>>>>>\n>>>>>>>>\n>>>>>>>> - /*\n>>>>>>>> - * Report via ps if we have been waiting for more than 500 msec\n>>>>>>>> - * (should that be configurable?)\n>>>>>>>> - */\n>>>>>>>> - if (update_process_title && new_status == NULL &&\n>>>>>>>> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n>>>>>>>> - 500))\n>>>>>>>>\n>>>>>>>> The patch changes ResolveRecoveryConflictWithSnapshot() and\n>>>>>>>> ResolveRecoveryConflictWithTablespace() so that they always add\n>>>>>>>> \"waiting\" into the PS display, whether wait is really necessary or not.\n>>>>>>>> But isn't it better to display \"waiting\" in PS basically when wait is\n>>>>>>>> necessary, like originally ResolveRecoveryConflictWithVirtualXIDs()\n>>>>>>>> does as the above?\n>>>>>>>\n>>>>>>> You're right. Will fix it.\n>>>>>>>\n>>>>>>>>\n>>>>>>>> ResolveRecoveryConflictWithDatabase(Oid dbid)\n>>>>>>>> {\n>>>>>>>> + char *new_status = NULL;\n>>>>>>>> +\n>>>>>>>> + /* Report via ps we are waiting */\n>>>>>>>> + new_status = set_process_title_waiting();\n>>>>>>>>\n>>>>>>>> In ResolveRecoveryConflictWithDatabase(), there seems no need to\n>>>>>>>> display \"waiting\" in PS because no wait occurs when recovery conflict\n>>>>>>>> with database happens.\n>>>>>>>\n>>>>>>> Isn't the startup process waiting for other backend to terminate?\n>>>>>>\n>>>>>> Yeah, you're right. I agree that \"waiting\" should be reported in this case.\n>>>>>>\n>>>>>> Currently ResolveRecoveryConflictWithLock() and\n>>>>>> ResolveRecoveryConflictWithBufferPin() don't call\n>>>>>> ResolveRecoveryConflictWithVirtualXIDs and don't report \"waiting\"\n>>>>>> in PS display. You changed them so that they report \"waiting\". I agree\n>>>>>> to have this change. 
But this change is an improvement rather than\n>>>>>> a bug fix, i.e., we should apply this change only in v13?\n>>>>>>\n>>>>>\n>>>>> Did you mean ResolveRecoveryConflictWithDatabase and\n>>>>> ResolveRecoveryConflictWithBufferPin?\n>>>>\n>>>> Yes! Sorry for my typo.\n>>>>\n>>>>> In the current code as far as I\n>>>>> researched there are two cases where we don't add \"waiting\" and one\n>>>>> case where we doubly add \"waiting\".\n>>>>>\n>>>>> ResolveRecoveryConflictWithDatabase and\n>>>>> ResolveRecoveryConflictWithBufferPin don't update the ps title.\n>>>>> Although the path where GetCurrentTimestamp() >= ltime is false in\n>>>>> ResolveRecoveryConflictWithLock also doesn't update the ps title, it's\n>>>>> already updated in WaitOnLock. On the other hand, the path where\n>>>>> GetCurrentTimestamp() >= ltime is true in\n>>>>> ResolveRecoveryConflictWithLock updates the ps title but it's wrong as\n>>>>> I reported.\n>>>>>\n>>>>> I've split the patch into two patches: 0001 patch fixes the issue\n>>>>> about doubly updating ps title, 0002 patch makes the recovery conflict\n>>>>> resolution on database and buffer pin update the ps title.\n>>>>\n>>>> Thanks for splitting the patches. I think that 0001 patch can be back-patched.\n>>>>\n>>>> - /*\n>>>> - * Report via ps if we have been waiting for more than 500 msec\n>>>> - * (should that be configurable?)\n>>>> - */\n>>>> - if (update_process_title && new_status == NULL &&\n>>>> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n>>>> - 500))\n>>>>\n>>>> Originally, \"waiting\" is reported in PS if we've been waiting for more than\n>>>> 500 msec, as the above does. But you got rid of those codes in the patch.\n>>>> Did you confirm that it's safe to do that? If not, isn't it better to apply\n>>>> the attached patch?\n>>>\n>>> In WaitOnLock() we update the ps title regardless of waiting time. So\n>>> I thought we can change it to make these behavior consistent. 
But\n>>> considering back-patch, your patch looks better than mine.\n>>\n>> Yeah, so I pushed the 0001 patch at first!\n>> I will review the other patches later.\n> \n> Thank you!\n> \n> For 0002 patch which makes ResolveRecoveryConflictWithDatabase and\n> ResolveRecoveryConflictWithBufferPin update the ps title, I think\n> these are better to wait for 500ms before updating the ps title like\n> ResolveRecoveryConflictWithVirtualXIDs, for consistency among recovery\n> conflict resolution functions, but what do you think?\n\nMaybe yes.\n\nAs another idea, for consistency, we can change all\nResolveRecoveryConflictWithXXX() so that they don't wait\nat all before reporting \"waiting\". But if we don't wait at all,\n\"waiting\" can be reported even when we can immediately\ncancel or terminate the conflicting transactions (e.g., in\ncase of max_standby_streaming_delay=0). To avoid this\nissue, I think it's better to wait for 500ms.\n\nThe 0002 patch changes ResolveRecoveryConflictWithBufferPin()\nso that it updates PS every time. But this seems not good\nbecause the update can happen very frequently. 
Thought?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Wed, 11 Mar 2020 16:41:58 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On Wed, 11 Mar 2020 at 16:42, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/10 13:54, Masahiko Sawada wrote:\n> > On Tue, 10 Mar 2020 at 00:57, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/03/09 14:10, Masahiko Sawada wrote:\n> >>> On Mon, 9 Mar 2020 at 13:24, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>\n> >>>>\n> >>>>\n> >>>> On 2020/03/08 13:52, Masahiko Sawada wrote:\n> >>>>> On Thu, 5 Mar 2020 at 20:16, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>\n> >>>>>>\n> >>>>>>\n> >>>>>> On 2020/03/05 16:58, Masahiko Sawada wrote:\n> >>>>>>> On Wed, 4 Mar 2020 at 15:21, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>\n> >>>>>>>>\n> >>>>>>>>\n> >>>>>>>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n> >>>>>>>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>>>\n> >>>>>>>>>>\n> >>>>>>>>>>\n> >>>>>>>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n> >>>>>>>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n> >>>>>>>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n> >>>>>>>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n> >>>>>>>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n> >>>>>>>>>>>> events by adding the new type of wait event such as\n> >>>>>>>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n> >>>>>>>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n> >>>>>>>>>>>> only PG13. 
Did you mean even 0001 patch doesn't fit for back-patching?\n> >>>>>>>>>>\n> >>>>>>>>>> Yes, it looks like a improvement rather than bug fix.\n> >>>>>>>>>>\n> >>>>>>>>>\n> >>>>>>>>> Okay, understand.\n> >>>>>>>>>\n> >>>>>>>>>>> I got my eyes on this patch set. The full patch set is in my opinion\n> >>>>>>>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n> >>>>>>>>>>> back-backpatching.\n> >>>>>>>>>>\n> >>>>>>>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n> >>>>>>>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n> >>>>>>>>>> fixed even in the back branches.\n> >>>>>>>>>\n> >>>>>>>>> So we need only two patches: one fixes process title issue and another\n> >>>>>>>>> improve wait event. I've attached updated patches.\n> >>>>>>>>\n> >>>>>>>> Thanks for updating the patches! I started reading 0001 patch.\n> >>>>>>>\n> >>>>>>> Thank you for reviewing this patch.\n> >>>>>>>\n> >>>>>>>>\n> >>>>>>>> - /*\n> >>>>>>>> - * Report via ps if we have been waiting for more than 500 msec\n> >>>>>>>> - * (should that be configurable?)\n> >>>>>>>> - */\n> >>>>>>>> - if (update_process_title && new_status == NULL &&\n> >>>>>>>> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n> >>>>>>>> - 500))\n> >>>>>>>>\n> >>>>>>>> The patch changes ResolveRecoveryConflictWithSnapshot() and\n> >>>>>>>> ResolveRecoveryConflictWithTablespace() so that they always add\n> >>>>>>>> \"waiting\" into the PS display, whether wait is really necessary or not.\n> >>>>>>>> But isn't it better to display \"waiting\" in PS basically when wait is\n> >>>>>>>> necessary, like originally ResolveRecoveryConflictWithVirtualXIDs()\n> >>>>>>>> does as the above?\n> >>>>>>>\n> >>>>>>> You're right. 
Will fix it.\n> >>>>>>>\n> >>>>>>>>\n> >>>>>>>> ResolveRecoveryConflictWithDatabase(Oid dbid)\n> >>>>>>>> {\n> >>>>>>>> + char *new_status = NULL;\n> >>>>>>>> +\n> >>>>>>>> + /* Report via ps we are waiting */\n> >>>>>>>> + new_status = set_process_title_waiting();\n> >>>>>>>>\n> >>>>>>>> In ResolveRecoveryConflictWithDatabase(), there seems no need to\n> >>>>>>>> display \"waiting\" in PS because no wait occurs when recovery conflict\n> >>>>>>>> with database happens.\n> >>>>>>>\n> >>>>>>> Isn't the startup process waiting for other backend to terminate?\n> >>>>>>\n> >>>>>> Yeah, you're right. I agree that \"waiting\" should be reported in this case.\n> >>>>>>\n> >>>>>> Currently ResolveRecoveryConflictWithLock() and\n> >>>>>> ResolveRecoveryConflictWithBufferPin() don't call\n> >>>>>> ResolveRecoveryConflictWithVirtualXIDs and don't report \"waiting\"\n> >>>>>> in PS display. You changed them so that they report \"waiting\". I agree\n> >>>>>> to have this change. But this change is an improvement rather than\n> >>>>>> a bug fix, i.e., we should apply this change only in v13?\n> >>>>>>\n> >>>>>\n> >>>>> Did you mean ResolveRecoveryConflictWithDatabase and\n> >>>>> ResolveRecoveryConflictWithBufferPin?\n> >>>>\n> >>>> Yes! Sorry for my typo.\n> >>>>\n> >>>>> In the current code as far as I\n> >>>>> researched there are two cases where we don't add \"waiting\" and one\n> >>>>> case where we doubly add \"waiting\".\n> >>>>>\n> >>>>> ResolveRecoveryConflictWithDatabase and\n> >>>>> ResolveRecoveryConflictWithBufferPin don't update the ps title.\n> >>>>> Although the path where GetCurrentTimestamp() >= ltime is false in\n> >>>>> ResolveRecoveryConflictWithLock also doesn't update the ps title, it's\n> >>>>> already updated in WaitOnLock. 
On the other hand, the path where\n> >>>>> GetCurrentTimestamp() >= ltime is true in\n> >>>>> ResolveRecoveryConflictWithLock updates the ps title but it's wrong as\n> >>>>> I reported.\n> >>>>>\n> >>>>> I've split the patch into two patches: 0001 patch fixes the issue\n> >>>>> about doubly updating ps title, 0002 patch makes the recovery conflict\n> >>>>> resolution on database and buffer pin update the ps title.\n> >>>>\n> >>>> Thanks for splitting the patches. I think that 0001 patch can be back-patched.\n> >>>>\n> >>>> - /*\n> >>>> - * Report via ps if we have been waiting for more than 500 msec\n> >>>> - * (should that be configurable?)\n> >>>> - */\n> >>>> - if (update_process_title && new_status == NULL &&\n> >>>> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n> >>>> - 500))\n> >>>>\n> >>>> Originally, \"waiting\" is reported in PS if we've been waiting for more than\n> >>>> 500 msec, as the above does. But you got rid of those codes in the patch.\n> >>>> Did you confirm that it's safe to do that? If not, isn't it better to apply\n> >>>> the attached patch?\n> >>>\n> >>> In WaitOnLock() we update the ps title regardless of waiting time. So\n> >>> I thought we can change it to make these behavior consistent. But\n> >>> considering back-patch, your patch looks better than mine.\n> >>\n> >> Yeah, so I pushed the 0001 patch at first!\n> >> I will review the other patches later.\n> >\n> > Thank you!\n> >\n> > For 0002 patch which makes ResolveRecoveryConflictWithDatabase and\n> > ResolveRecoveryConflictWithBufferPin update the ps title, I think\n> > these are better to wait for 5ms before updating the ps title like\n> > ResolveRecoveryConflictWithVirtualXIDs, for consistency among recovery\n> > conflict resolution functions, but what do you think?\n>\n> Maybe yes.\n>\n> As another idea, for consistency, we can change all\n> ResolveRecoveryConflictWithXXX() so that they don't wait\n> at all before reporting \"waiting\". 
But if we don't do that,\n> \"waiting\" can be reported even when we can immediately\n> cancel or terminate the conflicting transactions (e.g., in\n> case of max_standby_streaming_delay=0). To avoid this\n> issue, I think it's better to wait for 500ms.\n\nAgreed.\n\n>\n> The 0002 patch changes ResolveRecoveryConflictWithBufferPin()\n> so that it updates PS every time. But this seems not good\n> because the update can happen very frequently. Thought?\n\nAgreed. In the updated version patch, I update the process title in\nLockBufferForCleanup() only once when we've been waiting for more than\n500 ms. This change also affects the primary server that is waiting\nfor buffer cleanup lock. I think it would not be bad but it's\ndifferent behaviour from LockBuffer().\n\nI've attached the updated version patch. Please review it.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 12 Mar 2020 15:12:44 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "\n\nOn 2020/03/05 20:16, Fujii Masao wrote:\n> \n> \n> On 2020/03/05 16:58, Masahiko Sawada wrote:\n>> On Wed, 4 Mar 2020 at 15:21, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>>\n>>>\n>>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n>>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>\n>>>>>\n>>>>>\n>>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n>>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n>>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n>>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n>>>>>>> to the recovery conflict on a snapshot. 
0003 patch improves these wait\n>>>>>>> events by adding the new type of wait event such as\n>>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n>>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n>>>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n>>>>>\n>>>>> Yes, it looks like a improvement rather than bug fix.\n>>>>>\n>>>>\n>>>> Okay, understand.\n>>>>\n>>>>>> I got my eyes on this patch set.  The full patch set is in my opinion\n>>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n>>>>>> back-backpatching.\n>>>>>\n>>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n>>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n>>>>> fixed even in the back branches.\n>>>>\n>>>> So we need only two patches: one fixes process title issue and another\n>>>> improve wait event. I've attached updated patches.\n>>>\n>>> Thanks for updating the patches! I started reading 0001 patch.\n>>\n>> Thank you for reviewing this patch.\n>>\n>>>\n>>> -                       /*\n>>> -                        * Report via ps if we have been waiting for more than 500 msec\n>>> -                        * (should that be configurable?)\n>>> -                        */\n>>> -                       if (update_process_title && new_status == NULL &&\n>>> -                               TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n>>> -                                                                                  500))\n>>>\n>>> The patch changes ResolveRecoveryConflictWithSnapshot() and\n>>> ResolveRecoveryConflictWithTablespace() so that they always add\n>>> \"waiting\" into the PS display, whether wait is really necessary or not.\n>>> But isn't it better to display \"waiting\" in PS basically when wait is\n>>> necessary, like originally ResolveRecoveryConflictWithVirtualXIDs()\n>>> does as the above?\n>>\n>> You're right. 
Will fix it.\n>>\n>>>\n>>>    ResolveRecoveryConflictWithDatabase(Oid dbid)\n>>>    {\n>>> +       char            *new_status = NULL;\n>>> +\n>>> +       /* Report via ps we are waiting */\n>>> +       new_status = set_process_title_waiting();\n>>>\n>>> In  ResolveRecoveryConflictWithDatabase(), there seems no need to\n>>> display \"waiting\" in PS because no wait occurs when recovery conflict\n>>> with database happens.\n>>\n>> Isn't the startup process waiting for other backend to terminate?\n> \n> Yeah, you're right. I agree that \"waiting\" should be reported in this case.\n\nOn second thought, in recovery conflict case, \"waiting\" should be reported\nwhile waiting for the specified delay (e.g., by max_standby_streaming_delay)\nuntil the conflict is resolved. So IMO reporting \"waiting\" in the case of\nrecovery conflict with buffer pin, snapshot, lock and tablespace seems valid,\nbecause they are user-visible \"expected\" wait time.\n\nHowever, in the case of recovery conflict with database, the recovery\nbasically doesn't wait at all and just terminates the conflicting sessions\nimmediately. Then the recovery waits for all those sessions to be terminated,\nbut that wait time is basically small and should not be the user-visible.\nIf that wait time becomes very long because of unresponsive backend, ISTM\nthat LOG or WARNING should be logged instead of reporting something in\nPS display. I'm not sure if that logging is really necessary now, though.\nTherefore, I'm thinking that \"waiting\" doesn't need to be reported in the case\nof recovery conflict with database. 
Thought?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Tue, 24 Mar 2020 17:04:30 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On Tue, 24 Mar 2020 at 17:04, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/05 20:16, Fujii Masao wrote:\n> >\n> >\n> > On 2020/03/05 16:58, Masahiko Sawada wrote:\n> >> On Wed, 4 Mar 2020 at 15:21, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>\n> >>>\n> >>>\n> >>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n> >>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>\n> >>>>>\n> >>>>>\n> >>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n> >>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n> >>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n> >>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n> >>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n> >>>>>>> events by adding the new type of wait event such as\n> >>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n> >>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n> >>>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n> >>>>>\n> >>>>> Yes, it looks like a improvement rather than bug fix.\n> >>>>>\n> >>>>\n> >>>> Okay, understand.\n> >>>>\n> >>>>>> I got my eyes on this patch set. The full patch set is in my opinion\n> >>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n> >>>>>> back-backpatching.\n> >>>>>\n> >>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n> >>>>> in PS display) that 0002 patch tries to fix is a bug. 
So it should be\n> >>>>> fixed even in the back branches.\n> >>>>\n> >>>> So we need only two patches: one fixes process title issue and another\n> >>>> improve wait event. I've attached updated patches.\n> >>>\n> >>> Thanks for updating the patches! I started reading 0001 patch.\n> >>\n> >> Thank you for reviewing this patch.\n> >>\n> >>>\n> >>> - /*\n> >>> - * Report via ps if we have been waiting for more than 500 msec\n> >>> - * (should that be configurable?)\n> >>> - */\n> >>> - if (update_process_title && new_status == NULL &&\n> >>> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n> >>> - 500))\n> >>>\n> >>> The patch changes ResolveRecoveryConflictWithSnapshot() and\n> >>> ResolveRecoveryConflictWithTablespace() so that they always add\n> >>> \"waiting\" into the PS display, whether wait is really necessary or not.\n> >>> But isn't it better to display \"waiting\" in PS basically when wait is\n> >>> necessary, like originally ResolveRecoveryConflictWithVirtualXIDs()\n> >>> does as the above?\n> >>\n> >> You're right. Will fix it.\n> >>\n> >>>\n> >>> ResolveRecoveryConflictWithDatabase(Oid dbid)\n> >>> {\n> >>> + char *new_status = NULL;\n> >>> +\n> >>> + /* Report via ps we are waiting */\n> >>> + new_status = set_process_title_waiting();\n> >>>\n> >>> In ResolveRecoveryConflictWithDatabase(), there seems no need to\n> >>> display \"waiting\" in PS because no wait occurs when recovery conflict\n> >>> with database happens.\n> >>\n> >> Isn't the startup process waiting for other backend to terminate?\n> >\n> > Yeah, you're right. I agree that \"waiting\" should be reported in this case.\n>\n> On second thought, in recovery conflict case, \"waiting\" should be reported\n> while waiting for the specified delay (e.g., by max_standby_streaming_delay)\n> until the conflict is resolved. 
So IMO reporting \"waiting\" in the case of\n> recovery conflict with buffer pin, snapshot, lock and tablespace seems valid,\n> because they are user-visible \"expected\" wait time.\n>\n> However, in the case of recovery conflict with database, the recovery\n> basically doesn't wait at all and just terminates the conflicting sessions\n> immediately. Then the recovery waits for all those sessions to be terminated,\n> but that wait time is basically small and should not be the user-visible.\n> If that wait time becomes very long because of unresponsive backend, ISTM\n> that LOG or WARNING should be logged instead of reporting something in\n> PS display. I'm not sure if that logging is really necessary now, though.\n> Therefore, I'm thinking that \"waiting\" doesn't need to be reported in the case\n> of recovery conflict with database. Thought?\n\nFair enough. The longer wait time of conflicts with database is not\nuser-expected behaviour so logging would be better. I'd like to just\ndrop the change around ResolveRecoveryConflictWithDatabase() from the\npatch. 
Maybe logging LOG or WARNING for recovery conflict on database\nwould be a separate patch and need more discussion.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 26 Mar 2020 14:33:43 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On 2020/03/26 14:33, Masahiko Sawada wrote:\n> On Tue, 24 Mar 2020 at 17:04, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/05 20:16, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2020/03/05 16:58, Masahiko Sawada wrote:\n>>>> On Wed, 4 Mar 2020 at 15:21, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>\n>>>>>\n>>>>>\n>>>>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n>>>>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n>>>>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n>>>>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n>>>>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n>>>>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n>>>>>>>>> events by adding the new type of wait event such as\n>>>>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n>>>>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n>>>>>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n>>>>>>>\n>>>>>>> Yes, it looks like a improvement rather than bug fix.\n>>>>>>>\n>>>>>>\n>>>>>> Okay, understand.\n>>>>>>\n>>>>>>>> I got my eyes on this patch set. 
The full patch set is in my opinion\n>>>>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n>>>>>>>> back-backpatching.\n>>>>>>>\n>>>>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n>>>>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n>>>>>>> fixed even in the back branches.\n>>>>>>\n>>>>>> So we need only two patches: one fixes process title issue and another\n>>>>>> improve wait event. I've attached updated patches.\n>>>>>\n>>>>> Thanks for updating the patches! I started reading 0001 patch.\n>>>>\n>>>> Thank you for reviewing this patch.\n>>>>\n>>>>>\n>>>>> - /*\n>>>>> - * Report via ps if we have been waiting for more than 500 msec\n>>>>> - * (should that be configurable?)\n>>>>> - */\n>>>>> - if (update_process_title && new_status == NULL &&\n>>>>> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n>>>>> - 500))\n>>>>>\n>>>>> The patch changes ResolveRecoveryConflictWithSnapshot() and\n>>>>> ResolveRecoveryConflictWithTablespace() so that they always add\n>>>>> \"waiting\" into the PS display, whether wait is really necessary or not.\n>>>>> But isn't it better to display \"waiting\" in PS basically when wait is\n>>>>> necessary, like originally ResolveRecoveryConflictWithVirtualXIDs()\n>>>>> does as the above?\n>>>>\n>>>> You're right. Will fix it.\n>>>>\n>>>>>\n>>>>> ResolveRecoveryConflictWithDatabase(Oid dbid)\n>>>>> {\n>>>>> + char *new_status = NULL;\n>>>>> +\n>>>>> + /* Report via ps we are waiting */\n>>>>> + new_status = set_process_title_waiting();\n>>>>>\n>>>>> In ResolveRecoveryConflictWithDatabase(), there seems no need to\n>>>>> display \"waiting\" in PS because no wait occurs when recovery conflict\n>>>>> with database happens.\n>>>>\n>>>> Isn't the startup process waiting for other backend to terminate?\n>>>\n>>> Yeah, you're right. 
I agree that \"waiting\" should be reported in this case.\n>>\n>> On second thought, in recovery conflict case, \"waiting\" should be reported\n>> while waiting for the specified delay (e.g., by max_standby_streaming_delay)\n>> until the conflict is resolved. So IMO reporting \"waiting\" in the case of\n>> recovery conflict with buffer pin, snapshot, lock and tablespace seems valid,\n>> because they are user-visible \"expected\" wait time.\n>>\n>> However, in the case of recovery conflict with database, the recovery\n>> basically doesn't wait at all and just terminates the conflicting sessions\n>> immediately. Then the recovery waits for all those sessions to be terminated,\n>> but that wait time is basically small and should not be the user-visible.\n>> If that wait time becomes very long because of unresponsive backend, ISTM\n>> that LOG or WARNING should be logged instead of reporting something in\n>> PS display. I'm not sure if that logging is really necessary now, though.\n>> Therefore, I'm thinking that \"waiting\" doesn't need to be reported in the case\n>> of recovery conflict with database. Thought?\n> \n> Fair enough. The longer wait time of conflicts with database is not\n> user-expected behaviour so logging would be better. 
I'd like to just\n> drop the change around ResolveRecoveryConflictWithDatabase() from the\n> patch.\n\nMakes sense.\n\n+ if (update_process_title)\n+ waitStart = GetCurrentTimestamp();\n\nSince LockBufferForCleanup() can be called very frequently,\nI don't think that it's a good thing to call GetCurrentTimestamp()\nevery time LockBufferForCleanup() is called.\n\n+ /* Report via ps if we have been waiting for more than 500 msec */\n+ if (update_process_title && new_status == NULL &&\n+ TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n+ 500))\n\nDo we really want to see \"waiting\" in PS even in non hot standby mode?\n\nIf max_standby_streaming_delay and deadlock_timeout are set to large values,\nResolveRecoveryConflictWithBufferPin() can wait for a long time, e.g.,\nmore than 500ms. In that case, I'm afraid that the \"report if we've been\nwaiting for more than 500ms\" logic doesn't work.\n\nSo I'm now thinking that it's better to report \"waiting\" immediately before\nResolveRecoveryConflictWithBufferPin(). Of course, we can still use the\n\"report if we've been waiting for more than 500ms\" logic by teaching 500ms\nto ResolveRecoveryConflictWithBufferPin() as the minimum wait time.\nBut this looks like overkill. Thought?\n\nBased on the above comments, I updated the patch. Attached. Right now\nthe patch looks very simple. 
Could you review this patch?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters", "msg_date": "Fri, 27 Mar 2020 10:32:39 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On Fri, 27 Mar 2020 at 10:32, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/26 14:33, Masahiko Sawada wrote:\n> > On Tue, 24 Mar 2020 at 17:04, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/03/05 20:16, Fujii Masao wrote:\n> >>>\n> >>>\n> >>> On 2020/03/05 16:58, Masahiko Sawada wrote:\n> >>>> On Wed, 4 Mar 2020 at 15:21, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>\n> >>>>>\n> >>>>>\n> >>>>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n> >>>>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>\n> >>>>>>>\n> >>>>>>>\n> >>>>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n> >>>>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n> >>>>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n> >>>>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n> >>>>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n> >>>>>>>>> events by adding the new type of wait event such as\n> >>>>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n> >>>>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n> >>>>>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n> >>>>>>>\n> >>>>>>> Yes, it looks like a improvement rather than bug fix.\n> >>>>>>>\n> >>>>>>\n> >>>>>> Okay, understand.\n> >>>>>>\n> >>>>>>>> I got my eyes on this patch set. 
The full patch set is in my opinion\n> >>>>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n> >>>>>>>> back-backpatching.\n> >>>>>>>\n> >>>>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n> >>>>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n> >>>>>>> fixed even in the back branches.\n> >>>>>>\n> >>>>>> So we need only two patches: one fixes process title issue and another\n> >>>>>> improve wait event. I've attached updated patches.\n> >>>>>\n> >>>>> Thanks for updating the patches! I started reading 0001 patch.\n> >>>>\n> >>>> Thank you for reviewing this patch.\n> >>>>\n> >>>>>\n> >>>>> - /*\n> >>>>> - * Report via ps if we have been waiting for more than 500 msec\n> >>>>> - * (should that be configurable?)\n> >>>>> - */\n> >>>>> - if (update_process_title && new_status == NULL &&\n> >>>>> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n> >>>>> - 500))\n> >>>>>\n> >>>>> The patch changes ResolveRecoveryConflictWithSnapshot() and\n> >>>>> ResolveRecoveryConflictWithTablespace() so that they always add\n> >>>>> \"waiting\" into the PS display, whether wait is really necessary or not.\n> >>>>> But isn't it better to display \"waiting\" in PS basically when wait is\n> >>>>> necessary, like originally ResolveRecoveryConflictWithVirtualXIDs()\n> >>>>> does as the above?\n> >>>>\n> >>>> You're right. Will fix it.\n> >>>>\n> >>>>>\n> >>>>> ResolveRecoveryConflictWithDatabase(Oid dbid)\n> >>>>> {\n> >>>>> + char *new_status = NULL;\n> >>>>> +\n> >>>>> + /* Report via ps we are waiting */\n> >>>>> + new_status = set_process_title_waiting();\n> >>>>>\n> >>>>> In ResolveRecoveryConflictWithDatabase(), there seems no need to\n> >>>>> display \"waiting\" in PS because no wait occurs when recovery conflict\n> >>>>> with database happens.\n> >>>>\n> >>>> Isn't the startup process waiting for other backend to terminate?\n> >>>\n> >>> Yeah, you're right. 
I agree that \"waiting\" should be reported in this case.\n> >>\n> >> On second thought, in recovery conflict case, \"waiting\" should be reported\n> >> while waiting for the specified delay (e.g., by max_standby_streaming_delay)\n> >> until the conflict is resolved. So IMO reporting \"waiting\" in the case of\n> >> recovery conflict with buffer pin, snapshot, lock and tablespace seems valid,\n> >> because they are user-visible \"expected\" wait time.\n> >>\n> >> However, in the case of recovery conflict with database, the recovery\n> >> basically doesn't wait at all and just terminates the conflicting sessions\n> >> immediately. Then the recovery waits for all those sessions to be terminated,\n> >> but that wait time is basically small and should not be the user-visible.\n> >> If that wait time becomes very long because of unresponsive backend, ISTM\n> >> that LOG or WARNING should be logged instead of reporting something in\n> >> PS display. I'm not sure if that logging is really necessary now, though.\n> >> Therefore, I'm thinking that \"waiting\" doesn't need to be reported in the case\n> >> of recovery conflict with database. Thought?\n> >\n> > Fair enough. The longer wait time of conflicts with database is not\n> > user-expected behaviour so logging would be better. 
I'd like to just\n> > drop the change around ResolveRecoveryConflictWithDatabase() from the\n> > patch.\n>\n> Make sense.\n>\n> + if (update_process_title)\n> + waitStart = GetCurrentTimestamp();\n>\n> Since LockBufferForCleanup() can be called very frequently,\n> I don't think that it's good thing to call GetCurrentTimestamp()\n> every time when LockBufferForCleanup() is called.\n>\n> + /* Report via ps if we have been waiting for more than 500 msec */\n> + if (update_process_title && new_status == NULL &&\n> + TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n> + 500))\n>\n> Do we really want to see \"waiting\" in PS even in non hot standby mode?\n>\n> If max_standby_streaming_delay and deadlock_timeout are set to large value,\n> ResolveRecoveryConflictWithBufferPin() can wait for a long time, e.g.,\n> more than 500ms. In that case, I'm afraid that \"report if we've been\n> waiting for more than 500ms\" logic doesn't work.\n>\n> So I'm now thinking that it's better to report \"waiting\" immdiately before\n> ResolveRecoveryConflictWithBufferPin(). Of course, we can still use\n> \"report if we've been waiting for more than 500ms\" logic by teaching 500ms\n> to ResolveRecoveryConflictWithBufferPin() as the minimum wait time.\n> But this looks overkill. Thought?\n>\n> Based on the above comments, I updated the patch. Attached. Right now\n> the patch looks very simple. Could you review this patch?\n\nThank you for the patch. I agree with you for all the points. 
Your\npatch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 27 Mar 2020 15:39:48 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "\n\nOn 2020/03/27 15:39, Masahiko Sawada wrote:\n> On Fri, 27 Mar 2020 at 10:32, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/26 14:33, Masahiko Sawada wrote:\n>>> On Tue, 24 Mar 2020 at 17:04, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2020/03/05 20:16, Fujii Masao wrote:\n>>>>>\n>>>>>\n>>>>> On 2020/03/05 16:58, Masahiko Sawada wrote:\n>>>>>> On Wed, 4 Mar 2020 at 15:21, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n>>>>>>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n>>>>>>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n>>>>>>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n>>>>>>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n>>>>>>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n>>>>>>>>>>> events by adding the new type of wait event such as\n>>>>>>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n>>>>>>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n>>>>>>>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n>>>>>>>>>\n>>>>>>>>> Yes, it looks like a improvement rather than bug fix.\n>>>>>>>>>\n>>>>>>>>\n>>>>>>>> Okay, understand.\n>>>>>>>>\n>>>>>>>>>> I got my eyes on this patch set. 
The full patch set is in my opinion\n>>>>>>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n>>>>>>>>>> back-backpatching.\n>>>>>>>>>\n>>>>>>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n>>>>>>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n>>>>>>>>> fixed even in the back branches.\n>>>>>>>>\n>>>>>>>> So we need only two patches: one fixes process title issue and another\n>>>>>>>> improve wait event. I've attached updated patches.\n>>>>>>>\n>>>>>>> Thanks for updating the patches! I started reading 0001 patch.\n>>>>>>\n>>>>>> Thank you for reviewing this patch.\n>>>>>>\n>>>>>>>\n>>>>>>> - /*\n>>>>>>> - * Report via ps if we have been waiting for more than 500 msec\n>>>>>>> - * (should that be configurable?)\n>>>>>>> - */\n>>>>>>> - if (update_process_title && new_status == NULL &&\n>>>>>>> - TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n>>>>>>> - 500))\n>>>>>>>\n>>>>>>> The patch changes ResolveRecoveryConflictWithSnapshot() and\n>>>>>>> ResolveRecoveryConflictWithTablespace() so that they always add\n>>>>>>> \"waiting\" into the PS display, whether wait is really necessary or not.\n>>>>>>> But isn't it better to display \"waiting\" in PS basically when wait is\n>>>>>>> necessary, like originally ResolveRecoveryConflictWithVirtualXIDs()\n>>>>>>> does as the above?\n>>>>>>\n>>>>>> You're right. Will fix it.\n>>>>>>\n>>>>>>>\n>>>>>>> ResolveRecoveryConflictWithDatabase(Oid dbid)\n>>>>>>> {\n>>>>>>> + char *new_status = NULL;\n>>>>>>> +\n>>>>>>> + /* Report via ps we are waiting */\n>>>>>>> + new_status = set_process_title_waiting();\n>>>>>>>\n>>>>>>> In ResolveRecoveryConflictWithDatabase(), there seems no need to\n>>>>>>> display \"waiting\" in PS because no wait occurs when recovery conflict\n>>>>>>> with database happens.\n>>>>>>\n>>>>>> Isn't the startup process waiting for other backend to terminate?\n>>>>>\n>>>>> Yeah, you're right. 
I agree that \"waiting\" should be reported in this case.\n>>>>\n>>>> On second thought, in recovery conflict case, \"waiting\" should be reported\n>>>> while waiting for the specified delay (e.g., by max_standby_streaming_delay)\n>>>> until the conflict is resolved. So IMO reporting \"waiting\" in the case of\n>>>> recovery conflict with buffer pin, snapshot, lock and tablespace seems valid,\n>>>> because they are user-visible \"expected\" wait time.\n>>>>\n>>>> However, in the case of recovery conflict with database, the recovery\n>>>> basically doesn't wait at all and just terminates the conflicting sessions\n>>>> immediately. Then the recovery waits for all those sessions to be terminated,\n>>>> but that wait time is basically small and should not be the user-visible.\n>>>> If that wait time becomes very long because of unresponsive backend, ISTM\n>>>> that LOG or WARNING should be logged instead of reporting something in\n>>>> PS display. I'm not sure if that logging is really necessary now, though.\n>>>> Therefore, I'm thinking that \"waiting\" doesn't need to be reported in the case\n>>>> of recovery conflict with database. Thought?\n>>>\n>>> Fair enough. The longer wait time of conflicts with database is not\n>>> user-expected behaviour so logging would be better. 
I'd like to just\n>>> drop the change around ResolveRecoveryConflictWithDatabase() from the\n>>> patch.\n>>\n>> Make sense.\n>>\n>> + if (update_process_title)\n>> + waitStart = GetCurrentTimestamp();\n>>\n>> Since LockBufferForCleanup() can be called very frequently,\n>> I don't think that it's good thing to call GetCurrentTimestamp()\n>> every time when LockBufferForCleanup() is called.\n>>\n>> + /* Report via ps if we have been waiting for more than 500 msec */\n>> + if (update_process_title && new_status == NULL &&\n>> + TimestampDifferenceExceeds(waitStart, GetCurrentTimestamp(),\n>> + 500))\n>>\n>> Do we really want to see \"waiting\" in PS even in non hot standby mode?\n>>\n>> If max_standby_streaming_delay and deadlock_timeout are set to large value,\n>> ResolveRecoveryConflictWithBufferPin() can wait for a long time, e.g.,\n>> more than 500ms. In that case, I'm afraid that \"report if we've been\n>> waiting for more than 500ms\" logic doesn't work.\n>>\n>> So I'm now thinking that it's better to report \"waiting\" immdiately before\n>> ResolveRecoveryConflictWithBufferPin(). Of course, we can still use\n>> \"report if we've been waiting for more than 500ms\" logic by teaching 500ms\n>> to ResolveRecoveryConflictWithBufferPin() as the minimum wait time.\n>> But this looks overkill. Thought?\n>>\n>> Based on the above comments, I updated the patch. Attached. Right now\n>> the patch looks very simple. Could you review this patch?\n> \n> Thank you for the patch. I agree with you for all the points. Your\n> patch looks good to me.\n\nThanks for the review! 
Barring any objections, I will commit the latest patch.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Fri, 27 Mar 2020 16:35:57 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "\n\nOn 2020/03/04 14:31, Masahiko Sawada wrote:\n> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/04 13:27, Michael Paquier wrote:\n>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n>>>> events by adding the new type of wait event such as\n>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n>>>> is the fix for existing versions and 0003 patch is an improvement for\n>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n>>\n>> Yes, it looks like a improvement rather than bug fix.\n>>\n> \n> Okay, understand.\n> \n>>> I got my eyes on this patch set. The full patch set is in my opinion\n>>> just a set of improvements, and not bug fixes, so I would refrain from\n>>> back-backpatching.\n>>\n>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n>> fixed even in the back branches.\n> \n> So we need only two patches: one fixes process title issue and another\n> improve wait event. 
I've attached updated patches.\n\nI started reading v2-0002-Improve-wait-events-for-recovery-conflict-resolut.patch.\n\n-\tProcWaitForSignal(PG_WAIT_BUFFER_PIN);\n+\tProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_BUFFER_PIN);\n\nCurrently the wait event indicating the wait for buffer pin has already\nbeen reported. But the above change in the patch changes the name of\nwait event for buffer pin only in the startup process. Is this really useful?\nIsn't the existing wait event for buffer pin enough?\n\n-\t/* Wait to be signaled by the release of the Relation Lock */\n-\tProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n+\t\t/* Wait to be signaled by the release of the Relation Lock */\n+\t\tProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_LOCK);\n\nSame as above. Isn't the existing wait event enough?\n\n-\t/*\n-\t * Progressively increase the sleep times, but not to more than 1s, since\n-\t * pg_usleep isn't interruptible on some platforms.\n-\t */\n-\tstandbyWait_us *= 2;\n-\tif (standbyWait_us > 1000000)\n-\t\tstandbyWait_us = 1000000;\n+\tWaitLatch(MyLatch,\n+\t\t\t WL_LATCH_SET | WL_POSTMASTER_DEATH | WL_TIMEOUT,\n+\t\t\t STANDBY_WAIT_MS,\n+\t\t\t wait_event_info);\n+\tResetLatch(MyLatch);\n\nResetLatch() should be called before WaitLatch()?\n\nCould you tell me why you dropped the \"increase-sleep-times\" logic?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Fri, 27 Mar 2020 17:54:47 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On Fri, 27 Mar 2020 at 17:54, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/04 14:31, Masahiko Sawada wrote:\n> > On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/03/04 13:27, Michael Paquier wrote:\n> >>> On Wed, 
Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n> >>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n> >>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n> >>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n> >>>> events by adding the new type of wait event such as\n> >>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n> >>>> is the fix for existing versions and 0003 patch is an improvement for\n> >>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n> >>\n> >> Yes, it looks like a improvement rather than bug fix.\n> >>\n> >\n> > Okay, understand.\n> >\n> >>> I got my eyes on this patch set. The full patch set is in my opinion\n> >>> just a set of improvements, and not bug fixes, so I would refrain from\n> >>> back-backpatching.\n> >>\n> >> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n> >> in PS display) that 0002 patch tries to fix is a bug. So it should be\n> >> fixed even in the back branches.\n> >\n> > So we need only two patches: one fixes process title issue and another\n> > improve wait event. I've attached updated patches.\n>\n> I started reading v2-0002-Improve-wait-events-for-recovery-conflict-resolut.patch.\n>\n> - ProcWaitForSignal(PG_WAIT_BUFFER_PIN);\n> + ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_BUFFER_PIN);\n>\n> Currently the wait event indicating the wait for buffer pin has already\n> been reported. But the above change in the patch changes the name of\n> wait event for buffer pin only in the startup process. Is this really useful?\n> Isn't the existing wait event for buffer pin enough?\n>\n> - /* Wait to be signaled by the release of the Relation Lock */\n> - ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n> + /* Wait to be signaled by the release of the Relation Lock */\n> + ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_LOCK);\n>\n> Same as above. 
Isn't the existing wait event enough?\n\nYeah, we can use the existing wait events for buffer pin and lock.\n\n>\n> - /*\n> - * Progressively increase the sleep times, but not to more than 1s, since\n> - * pg_usleep isn't interruptible on some platforms.\n> - */\n> - standbyWait_us *= 2;\n> - if (standbyWait_us > 1000000)\n> - standbyWait_us = 1000000;\n> + WaitLatch(MyLatch,\n> + WL_LATCH_SET | WL_POSTMASTER_DEATH | WL_TIMEOUT,\n> + STANDBY_WAIT_MS,\n> + wait_event_info);\n> + ResetLatch(MyLatch);\n>\n> ResetLatch() should be called before WaitLatch()?\n\nFixed.\n\n>\n> Could you tell me why you dropped the \"increase-sleep-times\" logic?\n\nI thought we could remove it because WaitLatch() is interruptible, but my\nobservation was not correct. The waiting startup process is not\nnecessarily woken up by a signal. I think it's still better to not wait\nmore than 1 sec even if it's an interruptible wait.\n\nAttached patch fixes the above and introduces only two wait events of\nconflict resolution: snapshot and tablespace. 
I also removed the wait\nevent of conflict resolution of database since it's unlikely to become\na user-visible and a long sleep as we discussed before.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 30 Mar 2020 20:10:01 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "\n\nOn 2020/03/30 20:10, Masahiko Sawada wrote:\n> On Fri, 27 Mar 2020 at 17:54, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n>>>>>> events by adding the new type of wait event such as\n>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n>>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n>>>>\n>>>> Yes, it looks like a improvement rather than bug fix.\n>>>>\n>>>\n>>> Okay, understand.\n>>>\n>>>>> I got my eyes on this patch set. The full patch set is in my opinion\n>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n>>>>> back-backpatching.\n>>>>\n>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n>>>> in PS display) that 0002 patch tries to fix is a bug. 
So it should be\n>>>> fixed even in the back branches.\n>>>\n>>> So we need only two patches: one fixes process title issue and another\n>>> improve wait event. I've attached updated patches.\n>>\n>> I started reading v2-0002-Improve-wait-events-for-recovery-conflict-resolut.patch.\n>>\n>> - ProcWaitForSignal(PG_WAIT_BUFFER_PIN);\n>> + ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_BUFFER_PIN);\n>>\n>> Currently the wait event indicating the wait for buffer pin has already\n>> been reported. But the above change in the patch changes the name of\n>> wait event for buffer pin only in the startup process. Is this really useful?\n>> Isn't the existing wait event for buffer pin enough?\n>>\n>> - /* Wait to be signaled by the release of the Relation Lock */\n>> - ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n>> + /* Wait to be signaled by the release of the Relation Lock */\n>> + ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_LOCK);\n>>\n>> Same as above. Isn't the existing wait event enough?\n> \n> Yeah, we can use the existing wait events for buffer pin and lock.\n> \n>>\n>> - /*\n>> - * Progressively increase the sleep times, but not to more than 1s, since\n>> - * pg_usleep isn't interruptible on some platforms.\n>> - */\n>> - standbyWait_us *= 2;\n>> - if (standbyWait_us > 1000000)\n>> - standbyWait_us = 1000000;\n>> + WaitLatch(MyLatch,\n>> + WL_LATCH_SET | WL_POSTMASTER_DEATH | WL_TIMEOUT,\n>> + STANDBY_WAIT_MS,\n>> + wait_event_info);\n>> + ResetLatch(MyLatch);\n>>\n>> ResetLatch() should be called before WaitLatch()?\n> \n> Fixed.\n> \n>>\n>> Could you tell me why you dropped the \"increase-sleep-times\" logic?\n> \n> I thought we can remove it because WaitLatch is interruptible but my\n> observation was not correct. The waiting startup process is not\n> necessarily woken up by signal. 
I think it's still better to not wait\n> more than 1 sec even if it's an interruptible wait.\n\nSo we don't need to use WaitLatch() there, i.e., it's ok to keep using\npg_usleep()?\n\n> Attached patch fixes the above and introduces only two wait events of\n> conflict resolution: snapshot and tablespace.\n\nMany thanks for updating the patch!\n\n-\t/* Wait to be signaled by the release of the Relation Lock */\n-\tProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n+\t\t/* Wait to be signaled by the release of the Relation Lock */\n+\t\tProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n+\t}\n\nIs this change really valid? What happens if the latch is set during\nResolveRecoveryConflictWithVirtualXIDs()?\nResolveRecoveryConflictWithVirtualXIDs() can return after the latch\nis set but before WaitLatch() in WaitExceedsMaxStandbyDelay() is reached.\n\n+\t\tdefault:\n+\t\t\tevent_name = \"unknown wait event\";\n+\t\t\tbreak;\n\nSeems this default case should be removed. Please see other\npgstat_get_wait_xxx() function, so there is no such code.\n\n> I also removed the wait\n> event of conflict resolution of database since it's unlikely to become\n> a user-visible and a long sleep as we discussed before.\n\nIs it worth defining new wait event type RecoveryConflict only for\nthose two events? 
ISTM that it's ok to use IPC event type here.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 1 Apr 2020 22:32:35 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On Wed, 1 Apr 2020 at 22:32, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/30 20:10, Masahiko Sawada wrote:\n> > On Fri, 27 Mar 2020 at 17:54, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/03/04 14:31, Masahiko Sawada wrote:\n> >>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>\n> >>>>\n> >>>>\n> >>>> On 2020/03/04 13:27, Michael Paquier wrote:\n> >>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n> >>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n> >>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n> >>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n> >>>>>> events by adding the new type of wait event such as\n> >>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n> >>>>>> is the fix for existing versions and 0003 patch is an improvement for\n> >>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n> >>>>\n> >>>> Yes, it looks like a improvement rather than bug fix.\n> >>>>\n> >>>\n> >>> Okay, understand.\n> >>>\n> >>>>> I got my eyes on this patch set. The full patch set is in my opinion\n> >>>>> just a set of improvements, and not bug fixes, so I would refrain from\n> >>>>> back-backpatching.\n> >>>>\n> >>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n> >>>> in PS display) that 0002 patch tries to fix is a bug. 
So it should be\n> >>>> fixed even in the back branches.\n> >>>\n> >>> So we need only two patches: one fixes process title issue and another\n> >>> improve wait event. I've attached updated patches.\n> >>\n> >> I started reading v2-0002-Improve-wait-events-for-recovery-conflict-resolut.patch.\n> >>\n> >> - ProcWaitForSignal(PG_WAIT_BUFFER_PIN);\n> >> + ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_BUFFER_PIN);\n> >>\n> >> Currently the wait event indicating the wait for buffer pin has already\n> >> been reported. But the above change in the patch changes the name of\n> >> wait event for buffer pin only in the startup process. Is this really useful?\n> >> Isn't the existing wait event for buffer pin enough?\n> >>\n> >> - /* Wait to be signaled by the release of the Relation Lock */\n> >> - ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n> >> + /* Wait to be signaled by the release of the Relation Lock */\n> >> + ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_LOCK);\n> >>\n> >> Same as above. Isn't the existing wait event enough?\n> >\n> > Yeah, we can use the existing wait events for buffer pin and lock.\n> >\n> >>\n> >> - /*\n> >> - * Progressively increase the sleep times, but not to more than 1s, since\n> >> - * pg_usleep isn't interruptible on some platforms.\n> >> - */\n> >> - standbyWait_us *= 2;\n> >> - if (standbyWait_us > 1000000)\n> >> - standbyWait_us = 1000000;\n> >> + WaitLatch(MyLatch,\n> >> + WL_LATCH_SET | WL_POSTMASTER_DEATH | WL_TIMEOUT,\n> >> + STANDBY_WAIT_MS,\n> >> + wait_event_info);\n> >> + ResetLatch(MyLatch);\n> >>\n> >> ResetLatch() should be called before WaitLatch()?\n> >\n> > Fixed.\n> >\n> >>\n> >> Could you tell me why you dropped the \"increase-sleep-times\" logic?\n> >\n> > I thought we can remove it because WaitLatch is interruptible but my\n> > observation was not correct. The waiting startup process is not\n> > necessarily woken up by signal. 
I think it's still better to not wait\n> > more than 1 sec even if it's an interruptible wait.\n>\n> So we don't need to use WaitLatch() there, i.e., it's ok to keep using\n> pg_usleep()?\n>\n> > Attached patch fixes the above and introduces only two wait events of\n> > conflict resolution: snapshot and tablespace.\n>\n> Many thanks for updating the patch!\n>\n> - /* Wait to be signaled by the release of the Relation Lock */\n> - ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n> + /* Wait to be signaled by the release of the Relation Lock */\n> + ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n> + }\n>\n> Is this change really valid? What happens if the latch is set during\n> ResolveRecoveryConflictWithVirtualXIDs()?\n> ResolveRecoveryConflictWithVirtualXIDs() can return after the latch\n> is set but before WaitLatch() in WaitExceedsMaxStandbyDelay() is reached.\n\nThank you for reviewing the patch!\n\nYou're right. It's better to keep using pg_usleep() and set the wait\nevent by pgstat_report_wait_start().\n\n>\n> + default:\n> + event_name = \"unknown wait event\";\n> + break;\n>\n> Seems this default case should be removed. Please see other\n> pgstat_get_wait_xxx() function, so there is no such code.\n>\n> > I also removed the wait\n> > event of conflict resolution of database since it's unlikely to become\n> > a user-visible and a long sleep as we discussed before.\n>\n> Is it worth defining new wait event type RecoveryConflict only for\n> those two events? 
ISTM that it's ok to use IPC event type here.\n>\n\nI dropped a new wait even type and added them to IPC wait event type.\n\nI've attached the new version patch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 2 Apr 2020 14:25:18 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "\n\nOn 2020/04/02 14:25, Masahiko Sawada wrote:\n> On Wed, 1 Apr 2020 at 22:32, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/03/30 20:10, Masahiko Sawada wrote:\n>>> On Fri, 27 Mar 2020 at 17:54, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n>>>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n>>>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n>>>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n>>>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n>>>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n>>>>>>>> events by adding the new type of wait event such as\n>>>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n>>>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n>>>>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n>>>>>>\n>>>>>> Yes, it looks like a improvement rather than bug fix.\n>>>>>>\n>>>>>\n>>>>> Okay, understand.\n>>>>>\n>>>>>>> I got my eyes on this patch set. 
The full patch set is in my opinion\n>>>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n>>>>>>> back-backpatching.\n>>>>>>\n>>>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n>>>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n>>>>>> fixed even in the back branches.\n>>>>>\n>>>>> So we need only two patches: one fixes process title issue and another\n>>>>> improve wait event. I've attached updated patches.\n>>>>\n>>>> I started reading v2-0002-Improve-wait-events-for-recovery-conflict-resolut.patch.\n>>>>\n>>>> - ProcWaitForSignal(PG_WAIT_BUFFER_PIN);\n>>>> + ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_BUFFER_PIN);\n>>>>\n>>>> Currently the wait event indicating the wait for buffer pin has already\n>>>> been reported. But the above change in the patch changes the name of\n>>>> wait event for buffer pin only in the startup process. Is this really useful?\n>>>> Isn't the existing wait event for buffer pin enough?\n>>>>\n>>>> - /* Wait to be signaled by the release of the Relation Lock */\n>>>> - ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n>>>> + /* Wait to be signaled by the release of the Relation Lock */\n>>>> + ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_LOCK);\n>>>>\n>>>> Same as above. 
Isn't the existing wait event enough?\n>>>\n>>> Yeah, we can use the existing wait events for buffer pin and lock.\n>>>\n>>>>\n>>>> - /*\n>>>> - * Progressively increase the sleep times, but not to more than 1s, since\n>>>> - * pg_usleep isn't interruptible on some platforms.\n>>>> - */\n>>>> - standbyWait_us *= 2;\n>>>> - if (standbyWait_us > 1000000)\n>>>> - standbyWait_us = 1000000;\n>>>> + WaitLatch(MyLatch,\n>>>> + WL_LATCH_SET | WL_POSTMASTER_DEATH | WL_TIMEOUT,\n>>>> + STANDBY_WAIT_MS,\n>>>> + wait_event_info);\n>>>> + ResetLatch(MyLatch);\n>>>>\n>>>> ResetLatch() should be called before WaitLatch()?\n>>>\n>>> Fixed.\n>>>\n>>>>\n>>>> Could you tell me why you dropped the \"increase-sleep-times\" logic?\n>>>\n>>> I thought we can remove it because WaitLatch is interruptible but my\n>>> observation was not correct. The waiting startup process is not\n>>> necessarily woken up by signal. I think it's still better to not wait\n>>> more than 1 sec even if it's an interruptible wait.\n>>\n>> So we don't need to use WaitLatch() there, i.e., it's ok to keep using\n>> pg_usleep()?\n>>\n>>> Attached patch fixes the above and introduces only two wait events of\n>>> conflict resolution: snapshot and tablespace.\n>>\n>> Many thanks for updating the patch!\n>>\n>> - /* Wait to be signaled by the release of the Relation Lock */\n>> - ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n>> + /* Wait to be signaled by the release of the Relation Lock */\n>> + ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n>> + }\n>>\n>> Is this change really valid? What happens if the latch is set during\n>> ResolveRecoveryConflictWithVirtualXIDs()?\n>> ResolveRecoveryConflictWithVirtualXIDs() can return after the latch\n>> is set but before WaitLatch() in WaitExceedsMaxStandbyDelay() is reached.\n> \n> Thank you for reviewing the patch!\n> \n> You're right. 
It's better to keep using pg_usleep() and set the wait\n> event by pgstat_report_wait_start().\n> \n>>\n>> + default:\n>> + event_name = \"unknown wait event\";\n>> + break;\n>>\n>> Seems this default case should be removed. Please see other\n>> pgstat_get_wait_xxx() function, so there is no such code.\n>>\n>>> I also removed the wait\n>>> event of conflict resolution of database since it's unlikely to become\n>>> a user-visible and a long sleep as we discussed before.\n>>\n>> Is it worth defining new wait event type RecoveryConflict only for\n>> those two events? ISTM that it's ok to use IPC event type here.\n>>\n> \n> I dropped a new wait even type and added them to IPC wait event type.\n> \n> I've attached the new version patch.\n\nThanks for updating the patch! The patch looks good to me except\nthe following mior things.\n\n+ <row>\n+ <entry><literal>RecoveryConflictSnapshot</literal></entry>\n+ <entry>Waiting for recovery conflict resolution on a physical cleanup.</entry>\n+ </row>\n+ <row>\n+ <entry><literal>RecoveryConflictTablespace</literal></entry>\n+ <entry>Waiting for recovery conflict resolution on dropping tablespace.</entry>\n+ </row>\n\nYou need to increment the value of \"morerows\" in\n\"<entry morerows=\"38\"><literal>IPC</literal></entry>\".\n\nThe descriptions of those two events should be placed in alphabetical order\nfor event name. 
That is, they should be placed above RecoveryPause.\n\n\"vacuum cleanup\" is better than \"physical cleanup\"?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 2 Apr 2020 15:34:17 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On Thu, 2 Apr 2020 at 15:34, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/04/02 14:25, Masahiko Sawada wrote:\n> > On Wed, 1 Apr 2020 at 22:32, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2020/03/30 20:10, Masahiko Sawada wrote:\n> >>> On Fri, 27 Mar 2020 at 17:54, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>\n> >>>>\n> >>>>\n> >>>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n> >>>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>\n> >>>>>>\n> >>>>>>\n> >>>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n> >>>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n> >>>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n> >>>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n> >>>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n> >>>>>>>> events by adding the new type of wait event such as\n> >>>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n> >>>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n> >>>>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n> >>>>>>\n> >>>>>> Yes, it looks like a improvement rather than bug fix.\n> >>>>>>\n> >>>>>\n> >>>>> Okay, understand.\n> >>>>>\n> >>>>>>> I got my eyes on this patch set. 
The full patch set is in my opinion\n> >>>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n> >>>>>>> back-backpatching.\n> >>>>>>\n> >>>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n> >>>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n> >>>>>> fixed even in the back branches.\n> >>>>>\n> >>>>> So we need only two patches: one fixes process title issue and another\n> >>>>> improve wait event. I've attached updated patches.\n> >>>>\n> >>>> I started reading v2-0002-Improve-wait-events-for-recovery-conflict-resolut.patch.\n> >>>>\n> >>>> - ProcWaitForSignal(PG_WAIT_BUFFER_PIN);\n> >>>> + ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_BUFFER_PIN);\n> >>>>\n> >>>> Currently the wait event indicating the wait for buffer pin has already\n> >>>> been reported. But the above change in the patch changes the name of\n> >>>> wait event for buffer pin only in the startup process. Is this really useful?\n> >>>> Isn't the existing wait event for buffer pin enough?\n> >>>>\n> >>>> - /* Wait to be signaled by the release of the Relation Lock */\n> >>>> - ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n> >>>> + /* Wait to be signaled by the release of the Relation Lock */\n> >>>> + ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_LOCK);\n> >>>>\n> >>>> Same as above. 
Isn't the existing wait event enough?\n> >>>\n> >>> Yeah, we can use the existing wait events for buffer pin and lock.\n> >>>\n> >>>>\n> >>>> - /*\n> >>>> - * Progressively increase the sleep times, but not to more than 1s, since\n> >>>> - * pg_usleep isn't interruptible on some platforms.\n> >>>> - */\n> >>>> - standbyWait_us *= 2;\n> >>>> - if (standbyWait_us > 1000000)\n> >>>> - standbyWait_us = 1000000;\n> >>>> + WaitLatch(MyLatch,\n> >>>> + WL_LATCH_SET | WL_POSTMASTER_DEATH | WL_TIMEOUT,\n> >>>> + STANDBY_WAIT_MS,\n> >>>> + wait_event_info);\n> >>>> + ResetLatch(MyLatch);\n> >>>>\n> >>>> ResetLatch() should be called before WaitLatch()?\n> >>>\n> >>> Fixed.\n> >>>\n> >>>>\n> >>>> Could you tell me why you dropped the \"increase-sleep-times\" logic?\n> >>>\n> >>> I thought we can remove it because WaitLatch is interruptible but my\n> >>> observation was not correct. The waiting startup process is not\n> >>> necessarily woken up by signal. I think it's still better to not wait\n> >>> more than 1 sec even if it's an interruptible wait.\n> >>\n> >> So we don't need to use WaitLatch() there, i.e., it's ok to keep using\n> >> pg_usleep()?\n> >>\n> >>> Attached patch fixes the above and introduces only two wait events of\n> >>> conflict resolution: snapshot and tablespace.\n> >>\n> >> Many thanks for updating the patch!\n> >>\n> >> - /* Wait to be signaled by the release of the Relation Lock */\n> >> - ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n> >> + /* Wait to be signaled by the release of the Relation Lock */\n> >> + ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n> >> + }\n> >>\n> >> Is this change really valid? What happens if the latch is set during\n> >> ResolveRecoveryConflictWithVirtualXIDs()?\n> >> ResolveRecoveryConflictWithVirtualXIDs() can return after the latch\n> >> is set but before WaitLatch() in WaitExceedsMaxStandbyDelay() is reached.\n> >\n> > Thank you for reviewing the patch!\n> >\n> > You're right. 
It's better to keep using pg_usleep() and set the wait\n> > event by pgstat_report_wait_start().\n> >\n> >>\n> >> + default:\n> >> + event_name = \"unknown wait event\";\n> >> + break;\n> >>\n> >> Seems this default case should be removed. Please see other\n> >> pgstat_get_wait_xxx() function, so there is no such code.\n> >>\n> >>> I also removed the wait\n> >>> event of conflict resolution of database since it's unlikely to become\n> >>> a user-visible and a long sleep as we discussed before.\n> >>\n> >> Is it worth defining new wait event type RecoveryConflict only for\n> >> those two events? ISTM that it's ok to use IPC event type here.\n> >>\n> >\n> > I dropped a new wait even type and added them to IPC wait event type.\n> >\n> > I've attached the new version patch.\n>\n> Thanks for updating the patch! The patch looks good to me except\n> the following mior things.\n>\n> + <row>\n> + <entry><literal>RecoveryConflictSnapshot</literal></entry>\n> + <entry>Waiting for recovery conflict resolution on a physical cleanup.</entry>\n> + </row>\n> + <row>\n> + <entry><literal>RecoveryConflictTablespace</literal></entry>\n> + <entry>Waiting for recovery conflict resolution on dropping tablespace.</entry>\n> + </row>\n>\n> You need to increment the value of \"morerows\" in\n> \"<entry morerows=\"38\"><literal>IPC</literal></entry>\".\n>\n> The descriptions of those two events should be placed in alphabetical order\n> for event name. 
That is, they should be placed above RecoveryPause.\n>\n> \"vacuum cleanup\" is better than \"physical cleanup\"?\n\nAgreed.\n\nI've attached the updated version patch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 2 Apr 2020 15:54:16 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "\n\nOn 2020/04/02 15:54, Masahiko Sawada wrote:\n> On Thu, 2 Apr 2020 at 15:34, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/04/02 14:25, Masahiko Sawada wrote:\n>>> On Wed, 1 Apr 2020 at 22:32, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2020/03/30 20:10, Masahiko Sawada wrote:\n>>>>> On Fri, 27 Mar 2020 at 17:54, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n>>>>>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>\n>>>>>>>>\n>>>>>>>>\n>>>>>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n>>>>>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n>>>>>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n>>>>>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n>>>>>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n>>>>>>>>>> events by adding the new type of wait event such as\n>>>>>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n>>>>>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n>>>>>>>>>> only PG13. 
Did you mean even 0001 patch doesn't fit for back-patching?\n>>>>>>>>\n>>>>>>>> Yes, it looks like a improvement rather than bug fix.\n>>>>>>>>\n>>>>>>>\n>>>>>>> Okay, understand.\n>>>>>>>\n>>>>>>>>> I got my eyes on this patch set. The full patch set is in my opinion\n>>>>>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n>>>>>>>>> back-backpatching.\n>>>>>>>>\n>>>>>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n>>>>>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n>>>>>>>> fixed even in the back branches.\n>>>>>>>\n>>>>>>> So we need only two patches: one fixes process title issue and another\n>>>>>>> improve wait event. I've attached updated patches.\n>>>>>>\n>>>>>> I started reading v2-0002-Improve-wait-events-for-recovery-conflict-resolut.patch.\n>>>>>>\n>>>>>> - ProcWaitForSignal(PG_WAIT_BUFFER_PIN);\n>>>>>> + ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_BUFFER_PIN);\n>>>>>>\n>>>>>> Currently the wait event indicating the wait for buffer pin has already\n>>>>>> been reported. But the above change in the patch changes the name of\n>>>>>> wait event for buffer pin only in the startup process. Is this really useful?\n>>>>>> Isn't the existing wait event for buffer pin enough?\n>>>>>>\n>>>>>> - /* Wait to be signaled by the release of the Relation Lock */\n>>>>>> - ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n>>>>>> + /* Wait to be signaled by the release of the Relation Lock */\n>>>>>> + ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_LOCK);\n>>>>>>\n>>>>>> Same as above. 
Isn't the existing wait event enough?\n>>>>>\n>>>>> Yeah, we can use the existing wait events for buffer pin and lock.\n>>>>>\n>>>>>>\n>>>>>> - /*\n>>>>>> - * Progressively increase the sleep times, but not to more than 1s, since\n>>>>>> - * pg_usleep isn't interruptible on some platforms.\n>>>>>> - */\n>>>>>> - standbyWait_us *= 2;\n>>>>>> - if (standbyWait_us > 1000000)\n>>>>>> - standbyWait_us = 1000000;\n>>>>>> + WaitLatch(MyLatch,\n>>>>>> + WL_LATCH_SET | WL_POSTMASTER_DEATH | WL_TIMEOUT,\n>>>>>> + STANDBY_WAIT_MS,\n>>>>>> + wait_event_info);\n>>>>>> + ResetLatch(MyLatch);\n>>>>>>\n>>>>>> ResetLatch() should be called before WaitLatch()?\n>>>>>\n>>>>> Fixed.\n>>>>>\n>>>>>>\n>>>>>> Could you tell me why you dropped the \"increase-sleep-times\" logic?\n>>>>>\n>>>>> I thought we can remove it because WaitLatch is interruptible but my\n>>>>> observation was not correct. The waiting startup process is not\n>>>>> necessarily woken up by signal. I think it's still better to not wait\n>>>>> more than 1 sec even if it's an interruptible wait.\n>>>>\n>>>> So we don't need to use WaitLatch() there, i.e., it's ok to keep using\n>>>> pg_usleep()?\n>>>>\n>>>>> Attached patch fixes the above and introduces only two wait events of\n>>>>> conflict resolution: snapshot and tablespace.\n>>>>\n>>>> Many thanks for updating the patch!\n>>>>\n>>>> - /* Wait to be signaled by the release of the Relation Lock */\n>>>> - ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n>>>> + /* Wait to be signaled by the release of the Relation Lock */\n>>>> + ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n>>>> + }\n>>>>\n>>>> Is this change really valid? What happens if the latch is set during\n>>>> ResolveRecoveryConflictWithVirtualXIDs()?\n>>>> ResolveRecoveryConflictWithVirtualXIDs() can return after the latch\n>>>> is set but before WaitLatch() in WaitExceedsMaxStandbyDelay() is reached.\n>>>\n>>> Thank you for reviewing the patch!\n>>>\n>>> You're right. 
It's better to keep using pg_usleep() and set the wait\n>>> event by pgstat_report_wait_start().\n>>>\n>>>>\n>>>> + default:\n>>>> + event_name = \"unknown wait event\";\n>>>> + break;\n>>>>\n>>>> Seems this default case should be removed. Please see other\n>>>> pgstat_get_wait_xxx() function, so there is no such code.\n>>>>\n>>>>> I also removed the wait\n>>>>> event of conflict resolution of database since it's unlikely to become\n>>>>> a user-visible and a long sleep as we discussed before.\n>>>>\n>>>> Is it worth defining new wait event type RecoveryConflict only for\n>>>> those two events? ISTM that it's ok to use IPC event type here.\n>>>>\n>>>\n>>> I dropped a new wait even type and added them to IPC wait event type.\n>>>\n>>> I've attached the new version patch.\n>>\n>> Thanks for updating the patch! The patch looks good to me except\n>> the following mior things.\n>>\n>> + <row>\n>> + <entry><literal>RecoveryConflictSnapshot</literal></entry>\n>> + <entry>Waiting for recovery conflict resolution on a physical cleanup.</entry>\n>> + </row>\n>> + <row>\n>> + <entry><literal>RecoveryConflictTablespace</literal></entry>\n>> + <entry>Waiting for recovery conflict resolution on dropping tablespace.</entry>\n>> + </row>\n>>\n>> You need to increment the value of \"morerows\" in\n>> \"<entry morerows=\"38\"><literal>IPC</literal></entry>\".\n>>\n>> The descriptions of those two events should be placed in alphabetical order\n>> for event name. That is, they should be placed above RecoveryPause.\n>>\n>> \"vacuum cleanup\" is better than \"physical cleanup\"?\n> \n> Agreed.\n> \n> I've attached the updated version patch.\n\nThanks! Looks good to me. 
Barring any objection, I will commit this patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 2 Apr 2020 16:12:11 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "\n\nOn 2020/04/02 16:12, Fujii Masao wrote:\n> \n> \n> On 2020/04/02 15:54, Masahiko Sawada wrote:\n>> On Thu, 2 Apr 2020 at 15:34, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>>\n>>>\n>>> On 2020/04/02 14:25, Masahiko Sawada wrote:\n>>>> On Wed, 1 Apr 2020 at 22:32, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>\n>>>>>\n>>>>>\n>>>>> On 2020/03/30 20:10, Masahiko Sawada wrote:\n>>>>>> On Fri, 27 Mar 2020 at 17:54, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n>>>>>>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n>>>>>>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n>>>>>>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n>>>>>>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n>>>>>>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n>>>>>>>>>>> events by adding the new type of wait event such as\n>>>>>>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. Therefore 0001 (and 0002) patch\n>>>>>>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n>>>>>>>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n>>>>>>>>>\n>>>>>>>>> Yes, it looks like a improvement rather than bug fix.\n>>>>>>>>>\n>>>>>>>>\n>>>>>>>> Okay, understand.\n>>>>>>>>\n>>>>>>>>>> I got my eyes on this patch set.  
The full patch set is in my opinion\n>>>>>>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n>>>>>>>>>> back-backpatching.\n>>>>>>>>>\n>>>>>>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n>>>>>>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n>>>>>>>>> fixed even in the back branches.\n>>>>>>>>\n>>>>>>>> So we need only two patches: one fixes process title issue and another\n>>>>>>>> improve wait event. I've attached updated patches.\n>>>>>>>\n>>>>>>> I started reading v2-0002-Improve-wait-events-for-recovery-conflict-resolut.patch.\n>>>>>>>\n>>>>>>> -       ProcWaitForSignal(PG_WAIT_BUFFER_PIN);\n>>>>>>> +       ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_BUFFER_PIN);\n>>>>>>>\n>>>>>>> Currently the wait event indicating the wait for buffer pin has already\n>>>>>>> been reported. But the above change in the patch changes the name of\n>>>>>>> wait event for buffer pin only in the startup process. Is this really useful?\n>>>>>>> Isn't the existing wait event for buffer pin enough?\n>>>>>>>\n>>>>>>> -       /* Wait to be signaled by the release of the Relation Lock */\n>>>>>>> -       ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n>>>>>>> +               /* Wait to be signaled by the release of the Relation Lock */\n>>>>>>> +               ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_LOCK);\n>>>>>>>\n>>>>>>> Same as above. 
Isn't the existing wait event enough?\n>>>>>>\n>>>>>> Yeah, we can use the existing wait events for buffer pin and lock.\n>>>>>>\n>>>>>>>\n>>>>>>> -       /*\n>>>>>>> -        * Progressively increase the sleep times, but not to more than 1s, since\n>>>>>>> -        * pg_usleep isn't interruptible on some platforms.\n>>>>>>> -        */\n>>>>>>> -       standbyWait_us *= 2;\n>>>>>>> -       if (standbyWait_us > 1000000)\n>>>>>>> -               standbyWait_us = 1000000;\n>>>>>>> +       WaitLatch(MyLatch,\n>>>>>>> +                         WL_LATCH_SET | WL_POSTMASTER_DEATH | WL_TIMEOUT,\n>>>>>>> +                         STANDBY_WAIT_MS,\n>>>>>>> +                         wait_event_info);\n>>>>>>> +       ResetLatch(MyLatch);\n>>>>>>>\n>>>>>>> ResetLatch() should be called before WaitLatch()?\n>>>>>>\n>>>>>> Fixed.\n>>>>>>\n>>>>>>>\n>>>>>>> Could you tell me why you dropped the \"increase-sleep-times\" logic?\n>>>>>>\n>>>>>> I thought we can remove it because WaitLatch is interruptible but my\n>>>>>> observation was not correct. The waiting startup process is not\n>>>>>> necessarily woken up by signal. I think it's still better to not wait\n>>>>>> more than 1 sec even if it's an interruptible wait.\n>>>>>\n>>>>> So we don't need to use WaitLatch() there, i.e., it's ok to keep using\n>>>>> pg_usleep()?\n>>>>>\n>>>>>> Attached patch fixes the above and introduces only two wait events of\n>>>>>> conflict resolution: snapshot and tablespace.\n>>>>>\n>>>>> Many thanks for updating the patch!\n>>>>>\n>>>>> -       /* Wait to be signaled by the release of the Relation Lock */\n>>>>> -       ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n>>>>> +               /* Wait to be signaled by the release of the Relation Lock */\n>>>>> +               ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n>>>>> +       }\n>>>>>\n>>>>> Is this change really valid? 
What happens if the latch is set during\n>>>>> ResolveRecoveryConflictWithVirtualXIDs()?\n>>>>> ResolveRecoveryConflictWithVirtualXIDs() can return after the latch\n>>>>> is set but before WaitLatch() in WaitExceedsMaxStandbyDelay() is reached.\n>>>>\n>>>> Thank you for reviewing the patch!\n>>>>\n>>>> You're right. It's better to keep using pg_usleep() and set the wait\n>>>> event by pgstat_report_wait_start().\n>>>>\n>>>>>\n>>>>> +               default:\n>>>>> +                       event_name = \"unknown wait event\";\n>>>>> +                       break;\n>>>>>\n>>>>> Seems this default case should be removed. Please see other\n>>>>> pgstat_get_wait_xxx() function, so there is no such code.\n>>>>>\n>>>>>> I also removed the wait\n>>>>>> event of conflict resolution of database since it's unlikely to become\n>>>>>> a user-visible and a long sleep as we discussed before.\n>>>>>\n>>>>> Is it worth defining new wait event type RecoveryConflict only for\n>>>>> those two events? ISTM that it's ok to use IPC event type here.\n>>>>>\n>>>>\n>>>> I dropped a new wait event type and added them to IPC wait event type.\n>>>>\n>>>> I've attached the new version patch.\n>>>\n>>> Thanks for updating the patch! The patch looks good to me except\n>>> the following minor things.\n>>>\n>>> +        <row>\n>>> +         <entry><literal>RecoveryConflictSnapshot</literal></entry>\n>>> +         <entry>Waiting for recovery conflict resolution on a physical cleanup.</entry>\n>>> +        </row>\n>>> +        <row>\n>>> +         <entry><literal>RecoveryConflictTablespace</literal></entry>\n>>> +         <entry>Waiting for recovery conflict resolution on dropping tablespace.</entry>\n>>> +        </row>\n>>>\n>>> You need to increment the value of \"morerows\" in\n>>> \"<entry morerows=\"38\"><literal>IPC</literal></entry>\".\n>>>\n>>> The descriptions of those two events should be placed in alphabetical order\n>>> for event name. 
That is, they should be placed above RecoveryPause.\n>>>\n>>> \"vacuum cleanup\" is better than \"physical cleanup\"?\n>>\n>> Agreed.\n>>\n>> I've attached the updated version patch.\n> \n> Thanks! Looks good to me. Barring any objection, I will commit this patch.\n\nPushed! Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 3 Apr 2020 12:28:03 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Some problems of recovery conflict wait events" }, { "msg_contents": "On Fri, 3 Apr 2020 at 12:28, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/04/02 16:12, Fujii Masao wrote:\n> >\n> >\n> > On 2020/04/02 15:54, Masahiko Sawada wrote:\n> >> On Thu, 2 Apr 2020 at 15:34, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>\n> >>>\n> >>>\n> >>> On 2020/04/02 14:25, Masahiko Sawada wrote:\n> >>>> On Wed, 1 Apr 2020 at 22:32, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>\n> >>>>>\n> >>>>>\n> >>>>> On 2020/03/30 20:10, Masahiko Sawada wrote:\n> >>>>>> On Fri, 27 Mar 2020 at 17:54, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>\n> >>>>>>>\n> >>>>>>>\n> >>>>>>> On 2020/03/04 14:31, Masahiko Sawada wrote:\n> >>>>>>>> On Wed, 4 Mar 2020 at 13:48, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>>>>>>\n> >>>>>>>>>\n> >>>>>>>>>\n> >>>>>>>>> On 2020/03/04 13:27, Michael Paquier wrote:\n> >>>>>>>>>> On Wed, Mar 04, 2020 at 01:13:19PM +0900, Masahiko Sawada wrote:\n> >>>>>>>>>>> Yeah, so 0001 patch sets existing wait events to recovery conflict\n> >>>>>>>>>>> resolution. For instance, it sets (PG_WAIT_LOCK | LOCKTAG_TRANSACTION)\n> >>>>>>>>>>> to the recovery conflict on a snapshot. 0003 patch improves these wait\n> >>>>>>>>>>> events by adding the new type of wait event such as\n> >>>>>>>>>>> WAIT_EVENT_RECOVERY_CONFLICT_SNAPSHOT. 
Therefore 0001 (and 0002) patch\n> >>>>>>>>>>> is the fix for existing versions and 0003 patch is an improvement for\n> >>>>>>>>>>> only PG13. Did you mean even 0001 patch doesn't fit for back-patching?\n> >>>>>>>>>\n> >>>>>>>>> Yes, it looks like an improvement rather than a bug fix.\n> >>>>>>>>>\n> >>>>>>>>\n> >>>>>>>> Okay, understood.\n> >>>>>>>>\n> >>>>>>>>>> I got my eyes on this patch set. The full patch set is in my opinion\n> >>>>>>>>>> just a set of improvements, and not bug fixes, so I would refrain from\n> >>>>>>>>>> back-patching.\n> >>>>>>>>>\n> >>>>>>>>> I think that the issue (i.e., \"waiting\" is reported twice needlessly\n> >>>>>>>>> in PS display) that 0002 patch tries to fix is a bug. So it should be\n> >>>>>>>>> fixed even in the back branches.\n> >>>>>>>>\n> >>>>>>>> So we need only two patches: one fixes the process title issue and another\n> >>>>>>>> improves wait events. I've attached updated patches.\n> >>>>>>>\n> >>>>>>> I started reading v2-0002-Improve-wait-events-for-recovery-conflict-resolut.patch.\n> >>>>>>>\n> >>>>>>> - ProcWaitForSignal(PG_WAIT_BUFFER_PIN);\n> >>>>>>> + ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_BUFFER_PIN);\n> >>>>>>>\n> >>>>>>> Currently the wait event indicating the wait for buffer pin has already\n> >>>>>>> been reported. But the above change in the patch changes the name of\n> >>>>>>> wait event for buffer pin only in the startup process. Is this really useful?\n> >>>>>>> Isn't the existing wait event for buffer pin enough?\n> >>>>>>>\n> >>>>>>> - /* Wait to be signaled by the release of the Relation Lock */\n> >>>>>>> - ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n> >>>>>>> + /* Wait to be signaled by the release of the Relation Lock */\n> >>>>>>> + ProcWaitForSignal(WAIT_EVENT_RECOVERY_CONFLICT_LOCK);\n> >>>>>>>\n> >>>>>>> Same as above. 
Isn't the existing wait event enough?\n> >>>>>>\n> >>>>>> Yeah, we can use the existing wait events for buffer pin and lock.\n> >>>>>>\n> >>>>>>>\n> >>>>>>> - /*\n> >>>>>>> - * Progressively increase the sleep times, but not to more than 1s, since\n> >>>>>>> - * pg_usleep isn't interruptible on some platforms.\n> >>>>>>> - */\n> >>>>>>> - standbyWait_us *= 2;\n> >>>>>>> - if (standbyWait_us > 1000000)\n> >>>>>>> - standbyWait_us = 1000000;\n> >>>>>>> + WaitLatch(MyLatch,\n> >>>>>>> + WL_LATCH_SET | WL_POSTMASTER_DEATH | WL_TIMEOUT,\n> >>>>>>> + STANDBY_WAIT_MS,\n> >>>>>>> + wait_event_info);\n> >>>>>>> + ResetLatch(MyLatch);\n> >>>>>>>\n> >>>>>>> ResetLatch() should be called before WaitLatch()?\n> >>>>>>\n> >>>>>> Fixed.\n> >>>>>>\n> >>>>>>>\n> >>>>>>> Could you tell me why you dropped the \"increase-sleep-times\" logic?\n> >>>>>>\n> >>>>>> I thought we can remove it because WaitLatch is interruptible but my\n> >>>>>> observation was not correct. The waiting startup process is not\n> >>>>>> necessarily woken up by signal. I think it's still better to not wait\n> >>>>>> more than 1 sec even if it's an interruptible wait.\n> >>>>>\n> >>>>> So we don't need to use WaitLatch() there, i.e., it's ok to keep using\n> >>>>> pg_usleep()?\n> >>>>>\n> >>>>>> Attached patch fixes the above and introduces only two wait events of\n> >>>>>> conflict resolution: snapshot and tablespace.\n> >>>>>\n> >>>>> Many thanks for updating the patch!\n> >>>>>\n> >>>>> - /* Wait to be signaled by the release of the Relation Lock */\n> >>>>> - ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n> >>>>> + /* Wait to be signaled by the release of the Relation Lock */\n> >>>>> + ProcWaitForSignal(PG_WAIT_LOCK | locktag.locktag_type);\n> >>>>> + }\n> >>>>>\n> >>>>> Is this change really valid? 
What happens if the latch is set during\n> >>>>> ResolveRecoveryConflictWithVirtualXIDs()?\n> >>>>> ResolveRecoveryConflictWithVirtualXIDs() can return after the latch\n> >>>>> is set but before WaitLatch() in WaitExceedsMaxStandbyDelay() is reached.\n> >>>>\n> >>>> Thank you for reviewing the patch!\n> >>>>\n> >>>> You're right. It's better to keep using pg_usleep() and set the wait\n> >>>> event by pgstat_report_wait_start().\n> >>>>\n> >>>>>\n> >>>>> + default:\n> >>>>> + event_name = \"unknown wait event\";\n> >>>>> + break;\n> >>>>>\n> >>>>> Seems this default case should be removed. Please see other\n> >>>>> pgstat_get_wait_xxx() function, so there is no such code.\n> >>>>>\n> >>>>>> I also removed the wait\n> >>>>>> event of conflict resolution of database since it's unlikely to become\n> >>>>>> a user-visible and a long sleep as we discussed before.\n> >>>>>\n> >>>>> Is it worth defining new wait event type RecoveryConflict only for\n> >>>>> those two events? ISTM that it's ok to use IPC event type here.\n> >>>>>\n> >>>>\n> >>>> I dropped a new wait event type and added them to IPC wait event type.\n> >>>>\n> >>>> I've attached the new version patch.\n> >>>\n> >>> Thanks for updating the patch! The patch looks good to me except\n> >>> the following minor things.\n> >>>\n> >>> + <row>\n> >>> + <entry><literal>RecoveryConflictSnapshot</literal></entry>\n> >>> + <entry>Waiting for recovery conflict resolution on a physical cleanup.</entry>\n> >>> + </row>\n> >>> + <row>\n> >>> + <entry><literal>RecoveryConflictTablespace</literal></entry>\n> >>> + <entry>Waiting for recovery conflict resolution on dropping tablespace.</entry>\n> >>> + </row>\n> >>>\n> >>> You need to increment the value of \"morerows\" in\n> >>> \"<entry morerows=\"38\"><literal>IPC</literal></entry>\".\n> >>>\n> >>> The descriptions of those two events should be placed in alphabetical order\n> >>> for event name. 
That is, they should be placed above RecoveryPause.\n> >>>\n> >>> \"vacuum cleanup\" is better than \"physical cleanup\"?\n> >>\n> >> Agreed.\n> >>\n> >> I've attached the updated version patch.\n> >\n> > Thanks! Looks good to me. Barring any objection, I will commit this patch.\n>\n> Pushed! Thanks!\n\nThank you so much!\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 3 Apr 2020 14:00:42 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Some problems of recovery conflict wait events" } ]
[ { "msg_contents": "After upgrading Postgres from v9.6.9 to v9.6.11, the DB is running into out-of-memory issues; no workload has changed before and after the upgrade. \n\nspec: RAM 16 GB, 4 vCore\nIs there any reported bug like this, or any suggestion on how to fix this issue? I appreciate the response! \n\nI can see the error logs below, and because of this the database is going into recovery mode more often:\n\n2020-02-17 22:34:32 UTC::@:[20467]:LOG: server process (PID 32731) was terminated by signal 9: Killed\n2020-02-17 22:34:32 UTC::@:[20467]:DETAIL: Failed process was running: select info_starttime,info_starttimel,info_conversationid,info_status,classification_type,intentname,confidencescore,versions::text,messageid from salesdb.liveperson.intents where info_status='CLOSE' AND ( 1=1 ) AND ( 1=1)\n2020-02-17 22:34:32 UTC::@:[20467]:LOG: terminating any other active server processes\n2020-02-17 22:34:32 UTC:(34548):bi_user@salesdb:[19522]:WARNING: terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(34548):bi_user@salesdb:[19522]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(34548):bi_user@salesdb:[19522]:HINT: In a moment you should be able to reconnect to the database and repeat your command.\n2020-02-17 22:34:32 UTC:(43864):devops_user@salesdb:[30919]:WARNING: terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43864):devops_user@salesdb:[30919]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43864):devops_user@salesdb:[30919]:HINT: In a moment you should be able to reconnect to the database and repeat your command.\n2020-02-17 22:34:32 
UTC:(44484):devops_user@salesdb:[32330]:WARNING: terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44484):devops_user@salesdb:[32330]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(44484):devops_user@salesdb:[32330]:HINT: In a moment you should be able to reconnect to the database and repeat your command.\n2020-02-17 22:34:32 UTC::@:[20467]:LOG: archiver process (PID 30799) exited with exit code 1\n[... the same WARNING/DETAIL/HINT triplet repeats for dozens of other backends (devops_user, bi_user, digitaladmin) ...]\n2020-02-17 22:34:32 UTC:(43168):devops_user@salesdb:[30868]:WARNING: terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43168):devops_user@salesdb:[30868]:DETAIL: The postmaster has 
commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally and possiblycorrupted shared memory.\n2020-02-17 22:34:32 UTC:(43168):devops_user@salesdb:[30868]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(54890):devops_user@salesdb:[13522]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(54890):devops_user@salesdb:[13522]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(54890):devops_user@salesdb:[13522]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(62791):devops_user@salesdb:[27137]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(62791):devops_user@salesdb:[27137]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(62791):devops_user@salesdb:[27137]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43674):devops_user@salesdb:[30878]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43674):devops_user@salesdb:[30878]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43674):devops_user@salesdb:[30878]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37830):devops_user@salesdb:[5264]:WARNING:terminating 
connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37830):devops_user@salesdb:[5264]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37830):devops_user@salesdb:[5264]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(41912):devops_user@salesdb:[2897]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(41912):devops_user@salesdb:[2897]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(41912):devops_user@salesdb:[2897]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(52296):devops_user@salesdb:[5263]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(52296):devops_user@salesdb:[5263]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(52296):devops_user@salesdb:[5263]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(52946):devops_user@salesdb:[7072]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(52946):devops_user@salesdb:[7072]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(52946):devops_user@salesdb:[7072]:HINT:In a moment you should be able to 
reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43668):devops_user@salesdb:[30875]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43668):devops_user@salesdb:[30875]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43668):devops_user@salesdb:[30875]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(32947):devops_user@salesdb:[13716]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(32947):devops_user@salesdb:[13716]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(32947):devops_user@salesdb:[13716]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(51368):devops_user@salesdb:[1953]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(51368):devops_user@salesdb:[1953]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(51368):devops_user@salesdb:[1953]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37832):devops_user@salesdb:[5265]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37832):devops_user@salesdb:[5265]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally 
andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37832):devops_user@salesdb:[5265]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(60830):devops_user@salesdb:[30872]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(60830):devops_user@salesdb:[30872]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(60830):devops_user@salesdb:[30872]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(54696):digitaladmin@postgres:[18544]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(54696):digitaladmin@postgres:[18544]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(54696):digitaladmin@postgres:[18544]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(44374):devops_user@salesdb:[32218]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44374):devops_user@salesdb:[32218]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(44374):devops_user@salesdb:[32218]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(56706):devops_user@salesdb:[14435]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 
UTC:(56706):devops_user@salesdb:[14435]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(56706):devops_user@salesdb:[14435]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(41914):devops_user@salesdb:[2898]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(41914):devops_user@salesdb:[2898]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(41914):devops_user@salesdb:[2898]:HINT:In a moment you should be able to reconnect to the database and repeat your command.\n2020-02-17 22:34:32 UTC:(52950):devops_user@salesdb:[7075]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(52950):devops_user@salesdb:[7075]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(52950):devops_user@salesdb:[7075]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(56707):devops_user@salesdb:[14436]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(56707):devops_user@salesdb:[14436]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(56707):devops_user@salesdb:[14436]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 
UTC:(34946):devops_user@salesdb:[30879]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(34946):devops_user@salesdb:[30879]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(34946):devops_user@salesdb:[30879]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(56734):devops_user@salesdb:[7295]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(56734):devops_user@salesdb:[7295]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(56734):devops_user@salesdb:[7295]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(38758):devops_user@salesdb:[7297]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(38758):devops_user@salesdb:[7297]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(38758):devops_user@salesdb:[7297]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(38760):devops_user@salesdb:[7298]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(38760):devops_user@salesdb:[7298]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 
UTC:(38760):devops_user@salesdb:[7298]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(52944):devops_user@salesdb:[7073]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(52944):devops_user@salesdb:[7073]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(52944):devops_user@salesdb:[7073]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(52945):devops_user@salesdb:[7074]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(52945):devops_user@salesdb:[7074]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(52945):devops_user@salesdb:[7074]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(42962):devops_user@salesdb:[30864]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(42962):devops_user@salesdb:[30864]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(42962):devops_user@salesdb:[30864]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(60828):devops_user@salesdb:[30871]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(60828):devops_user@salesdb:[30871]:DETAIL:The postmaster has commanded this server process to roll back the 
currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(60828):devops_user@salesdb:[30871]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37834):devops_user@salesdb:[5266]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37834):devops_user@salesdb:[5266]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37834):devops_user@salesdb:[5266]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(58438):digitaladmin@salesdb:[12366]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(58438):digitaladmin@salesdb:[12366]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(58438):digitaladmin@salesdb:[12366]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43666):devops_user@salesdb:[30874]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43666):devops_user@salesdb:[30874]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43666):devops_user@salesdb:[30874]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(42960):devops_user@salesdb:[30863]:WARNING:terminating connection because of crash of another server 
process\n2020-02-17 22:34:32 UTC:(42960):devops_user@salesdb:[30863]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(42960):devops_user@salesdb:[30863]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(60826):devops_user@salesdb:[30870]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(60826):devops_user@salesdb:[30870]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(60826):devops_user@salesdb:[30870]:HINT:In a moment you should be able to reconnect to the database and repeat your command.\n2020-02-17 22:34:32 UTC:(34940):devops_user@salesdb:[30861]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(34940):devops_user@salesdb:[30861]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(34940):devops_user@salesdb:[30861]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(56732):devops_user@salesdb:[7296]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(56732):devops_user@salesdb:[7296]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(56732):devops_user@salesdb:[7296]:HINT:In a moment you should be able to reconnect to the database and repeat 
yourcommand.\n2020-02-17 22:34:32 UTC:(43162):devops_user@salesdb:[30804]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43162):devops_user@salesdb:[30804]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43162):devops_user@salesdb:[30804]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(42954):devops_user@salesdb:[30806]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(42954):devops_user@salesdb:[30806]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(42954):devops_user@salesdb:[30806]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37308):devops_user@salesdb:[30862]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37308):devops_user@salesdb:[30862]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37308):devops_user@salesdb:[30862]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43184):devops_user@salesdb:[30880]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43184):devops_user@salesdb:[30880]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared 
memory.\n2020-02-17 22:34:32 UTC:(43184):devops_user@salesdb:[30880]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37306):devops_user@salesdb:[30860]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37306):devops_user@salesdb:[30860]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37306):devops_user@salesdb:[30860]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37296):devops_user@salesdb:[30810]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37296):devops_user@salesdb:[30810]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37296):devops_user@salesdb:[30810]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(54590):devops_user@salesdb:[30832]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(54590):devops_user@salesdb:[30832]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(54590):devops_user@salesdb:[30832]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37302):devops_user@salesdb:[30859]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37302):devops_user@salesdb:[30859]:DETAIL:The postmaster has commanded this 
server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37302):devops_user@salesdb:[30859]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43642):devops_user@salesdb:[30836]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43642):devops_user@salesdb:[30836]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43642):devops_user@salesdb:[30836]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43660):devops_user@salesdb:[30873]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:33 UTC:(43660):devops_user@salesdb:[30873]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:33 UTC:(43660):devops_user@salesdb:[30873]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(42966):devops_user@salesdb:[30869]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:33 UTC:(42966):devops_user@salesdb:[30869]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:33 UTC:(42966):devops_user@salesdb:[30869]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(60818):devops_user@salesdb:[30831]:WARNING:terminating connection because 
of crash of another server process\n2020-02-17 22:34:33 UTC:(60818):devops_user@salesdb:[30831]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:33 UTC:(60818):devops_user@salesdb:[30831]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:33 UTC::@:[20467]:LOG: allserver processes terminated; reinitializing\n2020-02-17 22:34:33 UTC::@:[19633]:LOG: databasesystem was interrupted; last known up at 2020-02-17 22:33:33 UTC\n2020-02-17 22:34:33 UTC::@:[19633]:LOG: databasesystem was not properly shut down; automatic recovery in progress\n2020-02-17 22:34:33 UTC::@:[19633]:LOG: redostarts at 15B0/D5FCA110\n2020-02-17 22:34:34 UTC:(54556):digitaladmin@salesdb:[19637]:FATAL:the database system is in recovery mode\n2020-02-17 22:34:34 UTC:(54557):digitaladmin@salesdb:[19639]:FATAL:the database system is in recovery mode\n2020-02-17 22:34:34 UTC:(58713):devops_user@salesdb:[19638]:FATAL:the database system is in recovery mode\n2020-02-17 22:34:34 UTC:(58714):devops_user@salesdb:[19644]:FATAL:the database system is in recovery mode\n2020-02-17 22:34:35 UTC::@:[19633]:LOG: invalidrecord length at 15B0/E4C32288: wanted 24, got 0\n2020-02-17 22:34:35 UTC::@:[19633]:LOG: redodone at 15B0/E4C32260\n2020-02-17 22:34:35 UTC::@:[19633]:LOG: lastcompleted transaction was at log time 2020-02-17 22:34:31.864309+00\n2020-02-17 22:34:35 UTC::@:[19633]:LOG:checkpoint starting: end-of-recovery immediate\n\n \n\n\n \nThank you.\n\n\n\n\nafter upgrade Postgres to v9.6.11 from v9.6.9 DB running out of memory issues no world load has changed before and after upgrade. spec: RAM 16gb,4vCoreAny bug reported like this or suggestions on how to fix this issue? I appreciate the response..!! 
I could see the error logs below, and due to this the database is going into recovery mode more often:

2020-02-17 22:34:32 UTC::@:[20467]:LOG: server process (PID 32731) was terminated by signal 9: Killed
2020-02-17 22:34:32 UTC::@:[20467]:DETAIL: Failed process was running: select info_starttime,info_starttimel,info_conversationid,info_status,classification_type,intentname,confidencescore,versions::text,messageid from salesdb.liveperson.intents where info_status='CLOSE' AND ( 1=1 ) AND ( 1=1 )
2020-02-17 22:34:32 UTC::@:[20467]:LOG: terminating any other active server processes
2020-02-17 22:34:32 UTC:(34548):bi_user@salesdb:[19522]:WARNING: terminating connection because of crash of another server process
2020-02-17 22:34:32 UTC:(34548):bi_user@salesdb:[19522]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-02-17 22:34:32 UTC:(34548):bi_user@salesdb:[19522]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
[snip -- the same WARNING/DETAIL/HINT triplet for several devops_user backends]
2020-02-17 22:34:32 UTC::@:[20467]:LOG: archiver process (PID 30799) exited with exit code 1
[snip -- more of the same triplets]
2020-02-17 22:34:32 UTC:(47162):devops_user@salesdb:[4288]:WARNING: terminating connection because of crash of another server process
2020-02-17 22:34:32 UTC:(47162):devops_user@salesdb:[4288]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared
memory.\n2020-02-17 22:34:32 UTC:(47162):devops_user@salesdb:[4288]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(46806):devops_user@salesdb:[32316]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(46806):devops_user@salesdb:[32316]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(46806):devops_user@salesdb:[32316]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(43862):devops_user@salesdb:[30918]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43862):devops_user@salesdb:[30918]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43862):devops_user@salesdb:[30918]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(47594):devops_user@salesdb:[32313]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(47594):devops_user@salesdb:[32313]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(47594):devops_user@salesdb:[32313]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC::@:[30798]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC::@:[30798]:DETAIL: The\npostmaster has commanded this server 
process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC::@:[30798]:HINT: In a\nmoment you should be able to reconnect to the database and repeat your command.\n2020-02-17 22:34:32 UTC:(37388):devops_user@salesdb:[32319]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37388):devops_user@salesdb:[32319]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37388):devops_user@salesdb:[32319]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(48224):devops_user@salesdb:[1227]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(48224):devops_user@salesdb:[1227]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(48224):devops_user@salesdb:[1227]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(33476):devops_user@salesdb:[10445]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(33476):devops_user@salesdb:[10445]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(33476):devops_user@salesdb:[10445]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(44376):devops_user@salesdb:[32217]:WARNING:\nterminating 
connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44376):devops_user@salesdb:[32217]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(44376):devops_user@salesdb:[32217]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(57433):digitaladmin@salesdb:[1420]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(57433):digitaladmin@salesdb:[1420]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(57433):digitaladmin@salesdb:[1420]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(43950):devops_user@salesdb:[31217]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43950):devops_user@salesdb:[31217]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43950):devops_user@salesdb:[31217]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(43877):devops_user@salesdb:[30963]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43877):devops_user@salesdb:[30963]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 
UTC:(43877):devops_user@salesdb:[30963]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(37836):devops_user@salesdb:[5267]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37836):devops_user@salesdb:[5267]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37836):devops_user@salesdb:[5267]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(43330):devops_user@salesdb:[32324]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43330):devops_user@salesdb:[32324]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43330):devops_user@salesdb:[32324]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(48226):devops_user@salesdb:[1226]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(48226):devops_user@salesdb:[1226]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(48226):devops_user@salesdb:[1226]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(47592):devops_user@salesdb:[32314]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(47592):devops_user@salesdb:[32314]:DETAIL:\nThe postmaster has 
commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(47592):devops_user@salesdb:[32314]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(54594):devops_user@salesdb:[30867]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(54594):devops_user@salesdb:[30867]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(54594):devops_user@salesdb:[30867]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(32946):devops_user@salesdb:[13717]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(32946):devops_user@salesdb:[13717]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(32946):devops_user@salesdb:[13717]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(43326):devops_user@salesdb:[32323]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43326):devops_user@salesdb:[32323]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43326):devops_user@salesdb:[32323]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 
UTC:(46808):devops_user@salesdb:[32315]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(46808):devops_user@salesdb:[32315]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(46808):devops_user@salesdb:[32315]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(54893):devops_user@salesdb:[13524]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(54893):devops_user@salesdb:[13524]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(54893):devops_user@salesdb:[13524]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(46812):devops_user@salesdb:[32318]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(46812):devops_user@salesdb:[32318]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(46812):devops_user@salesdb:[32318]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(62744):devops_user@salesdb:[26990]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(62744):devops_user@salesdb:[26990]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted 
shared memory.\n2020-02-17 22:34:32 UTC:(62744):devops_user@salesdb:[26990]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(37392):devops_user@salesdb:[32320]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37392):devops_user@salesdb:[32320]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37392):devops_user@salesdb:[32320]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(57834):devops_user@salesdb:[24582]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(57834):devops_user@salesdb:[24582]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(57834):devops_user@salesdb:[24582]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(43324):devops_user@salesdb:[32326]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43324):devops_user@salesdb:[32326]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43324):devops_user@salesdb:[32326]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(46810):devops_user@salesdb:[32317]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 
UTC:(46810):devops_user@salesdb:[32317]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and possibly\ncorrupted shared memory.\n2020-02-17 22:34:32 UTC:(46810):devops_user@salesdb:[32317]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(44372):devops_user@salesdb:[32216]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44372):devops_user@salesdb:[32216]:DETAIL:\nThe postmaster has commanded this server process to roll back the current transaction\nand exit, because another server process exited abnormally and possibly\ncorrupted shared memory.\n2020-02-17 22:34:32 UTC:(44372):devops_user@salesdb:[32216]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(43670):devops_user@salesdb:[30876]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43670):devops_user@salesdb:[30876]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43670):devops_user@salesdb:[30876]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(44486):devops_user@salesdb:[32329]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44486):devops_user@salesdb:[32329]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(44486):devops_user@salesdb:[32329]:HINT:\nIn a moment you should be able to reconnect to the database 
and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(37390):devops_user@salesdb:[32322]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37390):devops_user@salesdb:[32322]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37390):devops_user@salesdb:[32322]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:10.65.152.155(58906):bi_user@salesdb:[17003]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32\nUTC:10.65.152.155(58906):bi_user@salesdb:[17003]:DETAIL: The postmaster has\ncommanded this server process to roll back the current transaction and exit,\nbecause another server process exited abnormally and possibly corrupted shared\nmemory.\n2020-02-17 22:34:32\nUTC:10.65.152.155(58906):bi_user@salesdb:[17003]:HINT: In a moment you should\nbe able to reconnect to the database and repeat your command.\n2020-02-17 22:34:32 UTC:(43174):devops_user@salesdb:[30877]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43174):devops_user@salesdb:[30877]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43174):devops_user@salesdb:[30877]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(44480):devops_user@salesdb:[32327]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44480):devops_user@salesdb:[32327]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, 
because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(44480):devops_user@salesdb:[32327]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(37394):devops_user@salesdb:[32321]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37394):devops_user@salesdb:[32321]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37394):devops_user@salesdb:[32321]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(52378):devops_user@salesdb:[32215]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(52378):devops_user@salesdb:[32215]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(52378):devops_user@salesdb:[32215]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(43328):devops_user@salesdb:[32325]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43328):devops_user@salesdb:[32325]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43328):devops_user@salesdb:[32325]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(60894):devops_user@salesdb:[10444]:WARNING:\nterminating connection because of crash of 
another server process\n2020-02-17 22:34:32 UTC:(60894):devops_user@salesdb:[10444]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(60894):devops_user@salesdb:[10444]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(54892):devops_user@salesdb:[13523]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(54892):devops_user@salesdb:[13523]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(54892):devops_user@salesdb:[13523]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(43652):devops_user@salesdb:[30865]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43652):devops_user@salesdb:[30865]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43652):devops_user@salesdb:[30865]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(44370):devops_user@salesdb:[32214]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44370):devops_user@salesdb:[32214]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(44370):devops_user@salesdb:[32214]:HINT:\nIn a moment you 
should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(43168):devops_user@salesdb:[30868]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43168):devops_user@salesdb:[30868]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and possibly\ncorrupted shared memory.\n2020-02-17 22:34:32 UTC:(43168):devops_user@salesdb:[30868]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(54890):devops_user@salesdb:[13522]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(54890):devops_user@salesdb:[13522]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(54890):devops_user@salesdb:[13522]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(62791):devops_user@salesdb:[27137]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(62791):devops_user@salesdb:[27137]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(62791):devops_user@salesdb:[27137]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(43674):devops_user@salesdb:[30878]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43674):devops_user@salesdb:[30878]:DETAIL:\nThe postmaster has commanded this server process to roll back the 
current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43674):devops_user@salesdb:[30878]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(37830):devops_user@salesdb:[5264]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37830):devops_user@salesdb:[5264]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37830):devops_user@salesdb:[5264]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(41912):devops_user@salesdb:[2897]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(41912):devops_user@salesdb:[2897]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(41912):devops_user@salesdb:[2897]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(52296):devops_user@salesdb:[5263]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(52296):devops_user@salesdb:[5263]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(52296):devops_user@salesdb:[5263]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(52946):devops_user@salesdb:[7072]:WARNING:\nterminating connection 
because of crash of another server process\n2020-02-17 22:34:32 UTC:(52946):devops_user@salesdb:[7072]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(52946):devops_user@salesdb:[7072]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(43668):devops_user@salesdb:[30875]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43668):devops_user@salesdb:[30875]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43668):devops_user@salesdb:[30875]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(32947):devops_user@salesdb:[13716]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(32947):devops_user@salesdb:[13716]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(32947):devops_user@salesdb:[13716]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(51368):devops_user@salesdb:[1953]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(51368):devops_user@salesdb:[1953]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 
UTC:(51368):devops_user@salesdb:[1953]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n[...]\n2020-02-17 22:34:33 UTC::@:[20467]:LOG: all\nserver processes terminated; reinitializing\n2020-02-17 22:34:33 UTC::@:[19633]:LOG: database\nsystem was interrupted; last known up at 2020-02-17 22:33:33 UTC\n2020-02-17 22:34:33 UTC::@:[19633]:LOG: database\nsystem was not properly shut down; automatic recovery in progress\n2020-02-17 22:34:33 UTC::@:[19633]:LOG: redo\nstarts at 15B0/D5FCA110\n2020-02-17 22:34:34 UTC:(54556):digitaladmin@salesdb:[19637]:FATAL:\nthe database system is in recovery mode\n2020-02-17 22:34:34 UTC:(54557):digitaladmin@salesdb:[19639]:FATAL:\nthe database system is in recovery mode\n2020-02-17 22:34:34 UTC:(58713):devops_user@salesdb:[19638]:FATAL:\nthe 
database system is in recovery mode\n2020-02-17 22:34:34 UTC:(58714):devops_user@salesdb:[19644]:FATAL:\nthe database system is in recovery mode\n2020-02-17 22:34:35 UTC::@:[19633]:LOG: invalid\nrecord length at 15B0/E4C32288: wanted 24, got 0\n2020-02-17 22:34:35 UTC::@:[19633]:LOG: redo\ndone at 15B0/E4C32260\n2020-02-17 22:34:35 UTC::@:[19633]:LOG: last\ncompleted transaction was at log time 2020-02-17 22:34:31.864309+00\n2020-02-17 22:34:35 UTC::@:[19633]:LOG:\ncheckpoint starting: end-of-recovery immediate\n\nThank you.", "msg_date": "Tue, 18 Feb 2020 17:46:28 +0000 (UTC)", "msg_from": "Nagaraj Raj <nagaraj.sf@yahoo.com>", "msg_from_op": true, "msg_subject": "DB running out of memory issues after upgrade" }, { "msg_contents": "On Tue, Feb 18, 2020 at 05:46:28PM +0000, Nagaraj Raj wrote:\n>after upgrade Postgres to v9.6.11 from v9.6.9 DB running out of memory issues no world load has changed before and after upgrade.\n>\n>spec: RAM 16gb,4vCore\n>Any bug reported like this or suggestions on how to fix this issue? I appreciate the response..!!\n>\n\nThis bug report (in fact, we don't know if it's a bug, but OK) is\nwoefully incomplete :-(\n\nThe server log is mostly useless, unfortunately - it just says a bunch\nof processes were killed (by OOM killer, most likely) so the server has\nto restart. It tells us nothing about why the backends consumed so much\nmemory etc.\n\nWhat would help us is knowing how much memory was the backend (killed by\nOOM) consuming, which should be in dmesg.\n\nAnd then MemoryContextStats output - you need to connect to a backend\nconsuming a lot of memory using gdb (before it gets killed) and do\n\n (gdb) p MemoryContextStats(TopMemoryContext)\n (gdb) q\n\nand show us the output printed into server log. If it's a backend\nrunning a query, it'd help knowing the execution plan.\n\nIt would also help knowing the non-default configuration, i.e. 
stuff\ntweaked in postgresql.conf.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Tue, 18 Feb 2020 18:58:57 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> This bug report (in fact, we don't know if it's a bug, but OK) is\n> woefully incomplete :-(\n\nAlso, cross-posting to ten(!) different mailing lists, most of which are\noff-topic for this, is incredibly rude.\n\nPlease read\n\nhttps://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\nand try to follow its suggestions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Feb 2020 13:07:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "Below are the same configurations in .conf file before and after upgrade\nshow max_connections; = 1743\nshow shared_buffers = \"4057840kB\"\nshow effective_cache_size = \"8115688kB\"\nshow maintenance_work_mem = \"259MB\"\nshow checkpoint_completion_target = \"0.9\"\nshow wal_buffers = \"16MB\"\nshow default_statistics_target = \"100\"\nshow random_page_cost = \"1.1\"\nshow effective_io_concurrency = \"200\"\nshow work_mem = \"4MB\"\nshow min_wal_size = \"256MB\"\nshow max_wal_size = \"2GB\"\nshow max_worker_processes = \"8\"\nshow max_parallel_workers_per_gather = \"2\"\n\nhere is some sys logs,\n2020-02-16 21:01:17 UTC [-]The database process was killed by the OS due to excessive memory consumption. \n2020-02-16 13:41:16 UTC [-]The database process was killed by the OS due to excessive memory consumption. 
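Putting those settings next to the 16 GB instance size makes the problem visible — here is a rough, illustrative budget sketch (the ~5 MB per-backend overhead figure is an assumption, and a single query can allocate work_mem more than once, so the real worst case is higher):

```python
# Back-of-the-envelope memory budget from the settings above (kB units).
# Illustrative only: the per-backend overhead is an assumed round number,
# and each sort/hash node in a query may claim its own work_mem allocation.

KB_PER_MB = 1024

shared_buffers_kb = 4057840            # shared_buffers = 4057840kB
max_connections = 1743                 # max_connections = 1743
work_mem_kb = 4 * KB_PER_MB            # work_mem = 4MB
ram_kb = 16 * 1024 * 1024              # 16 GB instance

# Every allowed connection running one work_mem-sized operation at once:
worst_case_kb = shared_buffers_kb + max_connections * work_mem_kb
print(round(worst_case_kb / 1024 / 1024, 1))   # ~10.7 GiB before overhead

# Add an assumed ~5 MB of per-backend overhead and the budget no longer fits:
with_overhead_kb = worst_case_kb + max_connections * 5 * KB_PER_MB
print(with_overhead_kb > ram_kb)               # exceeds the 16 GiB of RAM
```

That arithmetic is why a connection pooler in front of a much smaller max_connections is the usual fix for this class of OOM.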
\n\nI identified one simple select which consuming more memory and here is the query plan,\n\n\"Result  (cost=0.00..94891854.11 rows=3160784900 width=288)\"\n\"  ->  Append  (cost=0.00..47480080.61 rows=3160784900 width=288)\"\n\"        ->  Seq Scan on msghist  (cost=0.00..15682777.12 rows=3129490000 width=288)\"\n\"              Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)\"\n\"        ->  Seq Scan on msghist msghist_1  (cost=0.00..189454.50 rows=31294900 width=288)\"\n\"              Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)\"\n\nThanks,\n\nOn Tuesday, February 18, 2020, 09:59:37 AM PST, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote: \n \n On Tue, Feb 18, 2020 at 05:46:28PM +0000, Nagaraj Raj wrote:\n>after upgrade Postgres to v9.6.11 from v9.6.9 DB running out of memory issues no world load has changed before and after upgrade. \n>\n>spec: RAM 16gb,4vCore\n>Any bug reported like this or suggestions on how to fix this issue? I appreciate the response..!! \n>\n\nThis bug report (in fact, we don't know if it's a bug, but OK) is\nwoefully incomplete :-(\n\nThe server log is mostly useless, unfortunately - it just says a bunch\nof processes were killed (by OOM killer, most likely) so the server has\nto restart. It tells us nothing about why the backends consumed so much\nmemory etc.\n\nWhat would help us is knowing how much memory was the backend (killed by\nOOM) consuming, which should be in dmesg.\n\nAnd then MemoryContextStats output - you need to connect to a backend\nconsuming a lot of memory using gdb (before it gets killed) and do\n\n  (gdb) p MemoryContextStats(TopMemoryContext)\n  (gdb) q\n\nand show us the output printed into server log. If it's a backend\nrunning a query, it'd help knowing the execution plan.\n\nIt would also help knowing the non-default configuration, i.e. 
stuff\ntweaked in postgresql.conf.\n\nregards\n\n-- \nTomas Vondra                  http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Tue, 18 Feb 2020 18:10:08 +0000 (UTC)", "msg_from": "Nagaraj Raj <nagaraj.sf@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > This bug report (in fact, we don't know if it's a bug, but OK) is\n> > woefully incomplete :-(\n> \n> Also, cross-posting to ten(!) 
different mailing lists, most of which are\n> off-topic for this, is incredibly rude.\n\nNot to mention a couple -owner aliases that hit moderators directly..\n\nI continue to feel that we should disallow this kind of cross-posting in\nthe list management software.\n\nThanks,\n\nStephen", "msg_date": "Tue, 18 Feb 2020 13:33:33 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "On Tue, Feb 18, 2020 at 12:10 PM Nagaraj Raj <nagaraj.sf@yahoo.com> wrote:\n>\n> Below are the same configurations ins .conf file before and after updagrade\n>\n> show max_connections; = 1743\n> show shared_buffers = \"4057840kB\"\n> show effective_cache_size = \"8115688kB\"\n> show maintenance_work_mem = \"259MB\"\n> show checkpoint_completion_target = \"0.9\"\n> show wal_buffers = \"16MB\"\n> show default_statistics_target = \"100\"\n> show random_page_cost = \"1.1\"\n> show effective_io_concurrency =\" 200\"\n> show work_mem = \"4MB\"\n> show min_wal_size = \"256MB\"\n> show max_wal_size = \"2GB\"\n> show max_worker_processes = \"8\"\n> show max_parallel_workers_per_gather = \"2\"\n\nThis smells like oom killer for sure. how did you resolve some of\nthese values. 
In particular max_connections and effective_cache_size.\n How much memory is in this server?\n\nmerlin\n\n\n", "msg_date": "Tue, 18 Feb 2020 12:38:35 -0600", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "Please don't cross post to different lists.\n\n Pgsql-general <pgsql-general@postgresql.org>,\n PgAdmin Support <pgadmin-support@postgresql.org>,\n PostgreSQL Hackers <pgsql-hackers@postgresql.org>,\n \"pgsql-hackers-owner@postgresql.org\" <pgsql-hackers-owner@postgresql.org>,\n Postgres Performance List <pgsql-performance@postgresql.org>,\n Pg Bugs <pgsql-bugs@postgresql.org>,\n Pgsql-admin <pgsql-admin@postgresql.org>,\n Pgadmin-hackers <pgadmin-hackers@postgresql.org>,\n PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>,\n Pgsql-pkg-yum <pgsql-pkg-yum@postgresql.org>\n\n\nOn Tue, Feb 18, 2020 at 05:46:28PM +0000, Nagaraj Raj wrote:\n> after upgrade Postgres to v9.6.11 from v9.6.9 DB running out of memory issues no world load has changed before and after upgrade.\n> \n> spec: RAM 16gb,4vCore\n\nOn Tue, Feb 18, 2020 at 06:10:08PM +0000, Nagaraj Raj wrote:\n> Below are the same configurations ins .conf file before and after updagrade\n> show max_connections; = 1743\n> show shared_buffers = \"4057840kB\"\n> show work_mem = \"4MB\"\n> show maintenance_work_mem = \"259MB\"\n\n> Any bug reported like this or suggestions on how to fix this issue? I appreciate the response..!!\n> \n> I could see below error logs and due to this reason database more often going into recovery mode,\n\nWhat do you mean \"more often\" ? 
Did the crash/OOM happen before the upgrade, too ?\n\n> 2020-02-17 22:34:32 UTC::@:[20467]:LOG: server process (PID32731) was terminated by signal 9: Killed\n> 2020-02-17 22:34:32 UTC::@:[20467]:DETAIL:Failed process was running: select info_starttime,info_starttimel,info_conversationid,info_status,classification_type,intentname,confidencescore,versions::text,messageid from salesdb.liveperson.intents where info_status='CLOSE' AND ( 1=1 ) AND ( 1=1)\n\nThat process is the one which was killed (in this case) but maybe not the\nprocess responsible for using lots of *private* RAM. Is\nsalesdb.liveperson.intents a view ? What is the query plan for that query ?\n(Run it with \"explain\").\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions#EXPLAIN_.28ANALYZE.2C_BUFFERS.29.2C_not_just_EXPLAIN\nhttps://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\nOn Tue, Feb 18, 2020 at 06:10:08PM +0000, Nagaraj Raj wrote:\n> I identified one simple select which consuming more memory and here is the query plan,\n> \n> \"Result  (cost=0.00..94891854.11 rows=3160784900 width=288)\"\n> \"  ->  Append  (cost=0.00..47480080.61 rows=3160784900 width=288)\"\n> \"        ->  Seq Scan on msghist  (cost=0.00..15682777.12 rows=3129490000 width=288)\"\n> \"              Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)\"\n> \"        ->  Seq Scan on msghist msghist_1  (cost=0.00..189454.50 rows=31294900 width=288)\"\n> \"              Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)\"\n\nThis is almost certainly unrelated. 
It looks like that query did a seq scan\nand accessed a large number of tuples (and pages from \"shared_buffers\"), which\nthe OS then shows as part of that processes memory, even though *shared*\nbuffers are not specific to that one process.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 18 Feb 2020 12:40:37 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "On Tue, Feb 18, 2020 at 12:40 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> This is almost certainly unrelated. It looks like that query did a seq scan\n> and accessed a large number of tuples (and pages from \"shared_buffers\"), which\n> the OS then shows as part of that processes memory, even though *shared*\n> buffers are not specific to that one process.\n\nYeah. This server looks highly overprovisioned, I'm in particularly\nsuspicious of the high max_connections setting. To fetch this out\nI'd be tracking connections in the database, both idle and not idle,\ncontinuously. 
The solution is most likely to install a connection\npooler such as pgbouncer.\n\nmerlin\n\n\n", "msg_date": "Tue, 18 Feb 2020 12:49:50 -0600", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "Hi Merlin,\nIts configured high value for max_conn, but active and idle session have never crossed the count 50.\nDB Size: 20 GB\nTable size: 30MB\nRAM: 16GB\nvC: 4\n\nyes, its view earlier I posted and here is there query planner for new actual view,\n\"Append  (cost=0.00..47979735.57 rows=3194327000 width=288)\"\n\"  ->  Seq Scan on msghist  (cost=0.00..15847101.30 rows=3162700000 width=288)\"\n\"  ->  Seq Scan on msghist msghist_1  (cost=0.00..189364.27 rows=31627000 width=288)\"\n\nThanks,\nRj\n\nOn Tuesday, February 18, 2020, 10:51:02 AM PST, Merlin Moncure <mmoncure@gmail.com> wrote: \n \n On Tue, Feb 18, 2020 at 12:40 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> This is almost certainly unrelated.  It looks like that query did a seq scan\n> and accessed a large number of tuples (and pages from \"shared_buffers\"), which\n> the OS then shows as part of that processes memory, even though *shared*\n> buffers are not specific to that one process.\n\nYeah.  This server looks highly overprovisioned, I'm in particularly\nsuspicious of the high max_connections setting.  To fetch this out\nI'd be tracking connections in the database, both idle and not idle,\ncontinuously.  
The solution is most likely to install a connection\npooler such as pgbouncer.\n\nmerlin", "msg_date": "Tue, 18 Feb 2020 19:10:20 +0000 (UTC)", "msg_from": "Nagaraj Raj <nagaraj.sf@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "On 2020-02-18 18:10:08 +0000, Nagaraj Raj wrote:\n> Below are the same configurations ins .conf file before and after updagrade\n> \n> show max_connections; = 1743\n[...]\n> show work_mem = \"4MB\"\n\nThis is an interesting combination: So you expect a large number of\nconnections but each one should use very little RAM?\n\n[...]\n\n> here is some sys logs,\n> \n> 2020-02-16 21:01:17 UTC [-]The database process was killed by the OS\n> due to excessive memory consumption. \n> 2020-02-16 13:41:16 UTC [-]The database process was killed by the OS\n> due to excessive memory consumption. \n\nThe oom-killer produces a huge block of messages which you can find with\ndmesg or in your syslog. It looks something like this:\n\nFeb 19 19:06:53 akran kernel: [3026711.344817] platzangst invoked oom-killer: gfp_mask=0x15080c0(GFP_KERNEL_ACCOUNT|__GFP_ZERO), nodemask=(null), order=1, oom_score_adj=0\nFeb 19 19:06:53 akran kernel: [3026711.344819] platzangst cpuset=/ mems_allowed=0-1\nFeb 19 19:06:53 akran kernel: [3026711.344825] CPU: 7 PID: 2012 Comm: platzangst Tainted: G OE 4.15.0-74-generic #84-Ubuntu\nFeb 19 19:06:53 akran kernel: [3026711.344826] Hardware name: Dell Inc. 
PowerEdge R630/02C2CP, BIOS 2.1.7 06/16/2016\nFeb 19 19:06:53 akran kernel: [3026711.344827] Call Trace:\nFeb 19 19:06:53 akran kernel: [3026711.344835] dump_stack+0x6d/0x8e\nFeb 19 19:06:53 akran kernel: [3026711.344839] dump_header+0x71/0x285\n...\nFeb 19 19:06:53 akran kernel: [3026711.344893] RIP: 0033:0x7f292d076b1c\nFeb 19 19:06:53 akran kernel: [3026711.344894] RSP: 002b:00007fff187ef240 EFLAGS: 00000246 ORIG_RAX: 0000000000000038\nFeb 19 19:06:53 akran kernel: [3026711.344895] RAX: ffffffffffffffda RBX: 00007fff187ef240 RCX: 00007f292d076b1c\nFeb 19 19:06:53 akran kernel: [3026711.344896] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000001200011\nFeb 19 19:06:53 akran kernel: [3026711.344897] RBP: 00007fff187ef2b0 R08: 00007f292d596740 R09: 00000000009d43a0\nFeb 19 19:06:53 akran kernel: [3026711.344897] R10: 00007f292d596a10 R11: 0000000000000246 R12: 0000000000000000\nFeb 19 19:06:53 akran kernel: [3026711.344898] R13: 0000000000000020 R14: 0000000000000000 R15: 0000000000000000\nFeb 19 19:06:53 akran kernel: [3026711.344899] Mem-Info:\nFeb 19 19:06:53 akran kernel: [3026711.344905] active_anon:14862589 inactive_anon:1133875 isolated_anon:0\nFeb 19 19:06:53 akran kernel: [3026711.344905] active_file:467 inactive_file:371 isolated_file:0\nFeb 19 19:06:53 akran kernel: [3026711.344905] unevictable:0 dirty:3 writeback:0 unstable:0\n...\nFeb 19 19:06:53 akran kernel: [3026711.344985] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name\nFeb 19 19:06:53 akran kernel: [3026711.344997] [ 823] 0 823 44909 0 106496 121 0 lvmetad\nFeb 19 19:06:53 akran kernel: [3026711.344999] [ 1354] 0 1354 11901 3 135168 112 0 rpcbind\nFeb 19 19:06:53 akran kernel: [3026711.345000] [ 1485] 0 1485 69911 99 180224 159 0 accounts-daemon\n...\nFeb 19 19:06:53 akran kernel: [3026711.345345] Out of memory: Kill process 25591 (postgres) score 697 or sacrifice child\nFeb 19 19:06:53 akran kernel: [3026711.346563] Killed process 25591 (postgres) 
total-vm:71116948kB, anon-rss:52727552kB, file-rss:0kB, shmem-rss:3023196kB\n\nThe most interesting lines are usually the last two: In this case they\ntell us that the process killed was a postgres process and it occupied\nabout 71 GB of virtual memory at that time. That was clearly the right\nchoice since the machine has only 64 GB of RAM. Sometimes it is less\nclear and then you might want to scroll through the (usually long) list\nof processes to see if there are other processes which need suspicious\namounts of RAM or maybe if there are just more of them than you would\nexpect.\n\n\n> I identified one simple select which consuming more memory and here is the\n> query plan,\n> \n> \n> \n> \"Result (cost=0.00..94891854.11 rows=3160784900 width=288)\"\n> \" -> Append (cost=0.00..47480080.61 rows=3160784900 width=288)\"\n> \" -> Seq Scan on msghist (cost=0.00..15682777.12 rows=3129490000 width\n> =288)\"\n> \" Filter: (((data -> 'info'::text) ->> 'status'::text) =\n> 'CLOSE'::text)\"\n> \" -> Seq Scan on msghist msghist_1 (cost=0.00..189454.50 rows=31294900\n> width=288)\"\n> \" Filter: (((data -> 'info'::text) ->> 'status'::text) =\n> 'CLOSE'::text)\"\n\nSo: How much memory does that use? It produces a huge number of rows\n(more than 3 billion) but it doesn't do much with them, so I wouldn't\nexpect the postgres process itself to use much memory. Are you sure its\nthe postgres process and not the application which uses a lot of memory?\n\n hp\n\n-- \n _ | Peter J. Holzer | Story must make more sense than reality.\n|_|_) | |\n| | | hjp@hjp.at | -- Charles Stross, \"Creative writing\n__/ | http://www.hjp.at/ | challenge!\"", "msg_date": "Sun, 23 Feb 2020 11:19:28 +0100", "msg_from": "\"Peter J. 
Holzer\" <hjp-pgsql@hjp.at>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "On Tue, Feb 18, 2020 at 1:10 PM Nagaraj Raj <nagaraj.sf@yahoo.com> wrote:\n>\n> Hi Merlin,\n>\n> Its configured high value for max_conn, but active and idle session have never crossed the count 50.\n>\n> DB Size: 20 GB\n> Table size: 30MB\n> RAM: 16GB\n> vC: 4\n>\n>\n> yes, its view earlier I posted and here is there query planner for new actual view,\n>\n> \"Append (cost=0.00..47979735.57 rows=3194327000 width=288)\"\n> \" -> Seq Scan on msghist (cost=0.00..15847101.30 rows=3162700000 width=288)\"\n> \" -> Seq Scan on msghist msghist_1 (cost=0.00..189364.27 rows=31627000 width=288)\"\n\n\nDatabase size of 20GB is not believable; you have table with 3Bil\nrows, this ought to be 60GB+ mill+ all by itself. How did you get\n20GB figure?\n\n\nmerlin\n\n\n", "msg_date": "Mon, 24 Feb 2020 10:34:03 -0600", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" } ]
[ { "msg_contents": "Hello,\n\nI noticed that variables in PL/Python are not released at the end of a procedure.\nIs this the expected behavior?\n\nSee this example below:\nhttps://github.com/heterodb/pg-strom/blob/master/test/input/arrow_python.source#L53\n\nThis PL/Python function maps a GPU buffer as a cupy.ndarray object via\ncupy_strom.ipc_import()\nat L59, storing it in X. I expected X to be\nreleased at the end of the\nprocedure invocation, but that does not happen.\n\nThe object X internally holds an IpcMemory instance:\n https://github.com/heterodb/pg-strom/blob/master/python/cupy_strom.pyx\nIt has a destructor routine that unmaps the above GPU buffer using the CUDA API:\n https://github.com/heterodb/pg-strom/blob/master/python/cupy_ipcmem.c#L242\nBecause of a restriction of the CUDA API, we cannot map the same GPU buffer twice\nin the same process space, so the second invocation of\nthe PL/Python\nprocedure in the same session failed.\nL103 explicitly resets X (X=0) to invoke the destructor manually.\n\nI wonder whether this is expected behavior or an oversight.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Wed, 19 Feb 2020 10:12:19 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": true, "msg_subject": "PL/Python - lifetime of variables?" }, { "msg_contents": "On Wed, 19 Feb 2020 at 09:12, Kohei KaiGai <kaigai@heterodb.com> wrote:\n\n> I noticed that variables in PL/Python are not released at the end of a procedure.\n\nPL/Python vars are freed when the interpreter instance is freed and\ntheir refcounts reach zero.\n\nI believe we use one subinterpreter for the lifetime of the backend\nsession. 
It might be worth checking whether we do an eager refcount\ncheck and sweep when a procedure finishes.\n\nBut in general, I suggest that relying on object\nfinalizers/destructors to accomplish side effects visible outside the\nprocedure is bad development practice. Instead, use a \"with\" block, or\na try/finally block, and do explicit cleanup for external resources.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n", "msg_date": "Wed, 19 Feb 2020 09:39:16 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PL/Python - lifetime of variables?" } ]
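Craig's suggestion — explicit cleanup via a "with" block rather than relying on destructor timing — can be sketched as follows. The class here is a hypothetical stand-in for a resource like the IPC-mapped GPU buffer discussed above (which must be unmapped exactly once per process); it is not the actual cupy_strom API:

```python
# Hedged sketch: wrap an external resource in a context manager so its
# release happens when the procedure body exits, independent of
# interpreter refcounting or destructor timing.

class MappedBuffer:
    """Hypothetical stand-in for a resource needing explicit release."""
    def __init__(self, handle):
        self.handle = handle
        self.closed = False

    def close(self):
        if not self.closed:
            # real code would call the unmap/release API exactly once here
            self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()   # runs even if the body raises
        return False   # do not swallow exceptions

def procedure_body(handle):
    # pattern for the body of a PL/Python function
    with MappedBuffer(handle) as buf:
        pass           # ... use buf.handle here ...
    return buf         # returned only so the sketch can inspect state

buf = procedure_body("ipc-handle-0")
print(buf.closed)  # True: released at block exit, not at garbage collection
```

With this pattern a second invocation in the same session can safely re-map the buffer, because the previous mapping is guaranteed to have been released already — without resorting to the manual X=0 reset mentioned in the first message.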